A Simple Query, Interesting Consequences

October 15, 2021

The balance between effective law-enforcement tools and civil liberties is, of course, a tricky one. Forbes discusses the thorny issue of keyword warrants in “Exclusive: Government Secretly Orders Google to Identify Anyone Who Searched a Sexual Assault Victim’s Name, Address and Telephone Number.” The use of this specific warrant was inadvertently, and temporarily, unsealed by the Justice Department in September; Forbes was able to review the documents before they were sealed again. The write-up gives some relevant details of the Wisconsin case. In essence, investigators asked Google for the account information and IP addresses of anyone who had searched for the victim’s name, two spellings of her mother’s name, her address, and her phone number on 16 specific days. Before this, we’re told, only two other keyword warrants had been made public. Writer Thomas Brewster emphasizes:

“While Google deals with thousands of such orders every year, the keyword warrant is one of the more contentious. In many cases, the government will already have a specific Google account that they want information on and have proof it’s linked to a crime. But search term orders are effectively fishing expeditions, hoping to ensnare possible suspects whose identities the government does not know. It’s not dissimilar to so-called geofence warrants, where investigators ask Google to provide information on anyone within the location of a crime scene at a given time. … The latest case shows Google is continuing to comply with such controversial requests, despite concerns over their legality and the potential to implicate innocent people who happened to search for the relevant terms.”
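Mechanically, a keyword warrant amounts to a reverse lookup over a query log: investigators start from terms and dates rather than from a known account. A minimal sketch of that reverse direction, with every record, field name, and value invented for illustration:

```python
from datetime import date

# Hypothetical query-log records: (account_id, ip_address, query, day).
# This schema and all values are invented for the sketch.
QUERY_LOG = [
    ("acct-001", "203.0.113.5", "jane doe 123 elm st", date(2019, 12, 1)),
    ("acct-002", "198.51.100.7", "pizza near me", date(2019, 12, 1)),
    ("acct-003", "192.0.2.9", "jane doe phone number", date(2019, 12, 3)),
]

def keyword_warrant(log, terms, days):
    """Return the (account, ip) of everyone whose query contained any
    target term on any of the specified days -- terms-to-identities,
    the 'fishing expedition' direction that makes such warrants contentious."""
    hits = []
    for account, ip, query, day in log:
        if day in days and any(t in query for t in terms):
            hits.append((account, ip))
    return hits

suspects = keyword_warrant(
    QUERY_LOG,
    terms=["jane doe"],
    days={date(2019, 12, 1), date(2019, 12, 3)},
)
print(suspects)  # everyone who searched the name is swept in, suspect or not
```

The point of the toy: the filter has no notion of intent, so a reporter, a relative, or a curious neighbor who ran the same search on the same day lands in the same result set.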

In this particular case, the warrant’s narrow scope probably prevented that from happening. Still, even the most carefully worded requests set precedent. And others have been broad enough to impugn the merely curious, as with these orders made to Google, Microsoft, and Yahoo during the investigation into 2018’s serial bombings in Austin. Those warrants called for the account information and IP addresses of anyone searching for certain addresses and terms like “low explosives” and “pipe bomb.” As the ACLU’s Jennifer Granick observes:

“Trawling through Google’s search history database enables police to identify people merely based on what they might have been thinking about, for whatever reason, at some point in the past. This is a virtual dragnet through the public’s interests, beliefs, opinions, values and friendships, akin to mind reading powered by the Google time machine.”

As Granick sees it, keyword warrants not only breach the Fourth Amendment’s protection against unreasonable searches, they also threaten the freedom of speech guaranteed by the First Amendment: Google users may hesitate to look up information if their search histories could be handed over to the government at any moment. It does not help, she notes, that this is all going down in secret. See the article for more information.

Cynthia Murrell, October 15, 2021

Glean: Another Enterprise Search Solution

October 12, 2021

Enterprise search features are interesting, but users tend to accept problems like unfindable content and sluggish indexing as unavoidable. A former Google engineering director recognized the problem and founded a startup to address it; the Forbes article “Glean Emerges from Stealth With $55 Million To Bring Search To The Enterprise” tells the story.

Arvind Jain cofounded the cloud data management company Rubrik, where he constantly had trouble locating information. Rubrik is now worth $3.7 billion, but Jain left and formed the new startup Glean with Google veterans Piyush Prahladka, Tony Gentilcore, and T.R. Vishwanath. The team has developed a robust enterprise search application that works across the many applications a company uses. Glean has raised $55 million in funding.

Other companies like Algolia and Elastic have addressed the same enterprise search problem, but they focused on search boxes for consumer-facing Web sites rather than on search for employees. With more enterprise systems shifting to the cloud and SaaS, Glean pitches its search product as an invaluable tool. Innovations in deep learning also make Glean’s search more intuitive and customizable for each user:

“On the user side, Glean’s software analyzes the wording of a search query—for example, it understands that “quarterly goals” or “Q1 areas of focus” are asking the same thing—and shows all the results that correspond to it, whether they are located in Salesforce, Slack or another of the many applications that a company uses. The results are personalized based on the user’s job. Using deep learning, Glean can differentiate personas, such as a salesperson from an engineer, and tailor recommendations based on the colleagues that a user interacts with most frequently.”
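The behavior described in the quote — synonym-aware query matching plus persona-based ranking — can be caricatured in a few lines. This is a toy sketch with invented data, not Glean’s actual implementation:

```python
# Toy synonym table standing in for learned query equivalence
# ("quarterly goals" ~ "Q1 areas of focus"); invented for illustration.
SYNONYMS = {
    "quarterly goals": {"quarterly goals", "q1 areas of focus"},
}

# Documents from different apps, each tagged with the personas it suits.
DOCS = [
    {"title": "Q1 areas of focus", "app": "Salesforce", "personas": {"sales"}},
    {"title": "Q1 areas of focus", "app": "Jira", "personas": {"engineer"}},
    {"title": "Lunch menu", "app": "Slack", "personas": {"sales", "engineer"}},
]

def search(query, persona):
    """Match the query or any learned synonym, then rank docs tagged
    for the searcher's persona ahead of the rest."""
    variants = SYNONYMS.get(query.lower(), {query.lower()})
    matches = [d for d in DOCS if d["title"].lower() in variants]
    return sorted(matches, key=lambda d: persona in d["personas"], reverse=True)

results = search("quarterly goals", persona="sales")
print([d["app"] for d in results])  # the Salesforce doc ranks first for sales
```

In a real system the synonym table and persona tags would be learned models rather than hand-written dictionaries, but the ranking logic — same matches, different order per user — is the idea Forbes describes.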

Will Glean crack the enterprise search code? Interesting question to which the answer is not yet known.

Whitney Grace, October 12, 2021

Who Is Ready to Get Back to the Office?

October 4, 2021

The pandemic has had many workers asking “hey, who needs an office?” Maybe most of us, according to the write-up, “How Work from Home Has Changed and Became Less Desirable in Last 18 Months” posted at Illumination. Cloud software engineer Amrit Pal Singh writes:

“Work from home was something we all wanted before the pandemic changed everything. It saved us time, no need to commute to work, and more time with the family. Or at least we used to think that way. I must say, I used to love working from home occasionally in the pre-pandemic era. Traveling to work was a pain and I used to spend a lot of time on the road. Not to forget the interrupts, tea breaks, and meetings you need to attend at work. I used to feel these activities take up a lot of time. The pandemic changed it all. In the beginning, it felt like I could work from home all my life. But a few months later I want to go to work at least 2–3 times a week.”

What changed Singh’s mind? Being stuck at home, mainly. There is the expectation that since he is there he can both work and perform household chores each day. He also shares space with a child attending school virtually—as many remote workers know, this makes for a distracting environment. Then there is the loss of work-life balance; when both work and personal time occur in the same space, they tend to blend together and spur monotony. An increase in unnecessary meetings takes away from actually getting work done, but at the same time Singh misses speaking with his coworkers face-to-face. He concludes:

“I am not saying WFH is bad. In my opinion, a hybrid approach is the best where you go to work 2–3 days a week and do WFH the rest of the week. I started going to a nearby cafe to get some time alone. I have written this article in a cafe :)”

Is such a hybrid approach the balance we need?

Cynthia Murrell, October 4, 2021

Big Tech Responds to AI Concerns

October 4, 2021

We cannot decide whether this news represents a PR move or simply three red herrings. Reuters declares, “Money, Mimicry and Mind Control: Big Tech Slams Ethics Brakes on AI.” The article gives examples of Google, Microsoft, and IBM hitting pause on certain AI projects over ethical concerns. Reporters Paresh Dave and Jeffrey Dastin write:

“In September last year, Google’s (GOOGL.O) cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to. It turned down the client’s idea after weeks of internal discussions, deeming the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender. Since early last year, Google has also blocked new AI features analyzing emotions, fearing cultural insensitivity, while Microsoft (MSFT.O) restricted software mimicking voices and IBM (IBM.N) rejected a client request for an advanced facial-recognition system. All these technologies were curbed by panels of executives or other leaders, according to interviews with AI ethics chiefs at the three U.S. technology giants.”

See the write-up for more details on each of these projects and the concerns around how they might be biased or misused. These suspensions sound very responsible of the companies, but they may be more strategic than conscientious. Is big tech really ready to put integrity over profits? Some legislators believe regulations are the only way to ensure ethical AI. The article tells us:

“The EU’s Artificial Intelligence Act, on track to be passed next year, would bar real-time face recognition in public spaces and require tech companies to vet high-risk applications, such as those used in hiring, credit scoring and law enforcement. U.S. Congressman Bill Foster, who has held hearings on how algorithms carry forward discrimination in financial services and housing, said new laws to govern AI would ensure an even field for vendors.”

Perhaps, though lawmakers in general are famously far from tech-savvy. Will they find advisors to help them craft truly helpful legislation, or will the industry dupe them into being its pawns? Perhaps Watson could tell us.

Cynthia Murrell, October 4, 2021

Elastic: Differentiation and Wagon Circling

September 22, 2021

Elastic expects two recent acquisitions to beef up its security in the cloud. Betakit reports, “Cybersecurity Startup Cmd to Be Acquired by Enterprise Search Firm Elastic.” This deal is on the heels of the company’s announcement that it snapped up authorization policy management platform build.security. Writer Josh Scott tells us:

“Cmd was founded in 2016 by CSO Jake King, former security operations lead at Hootsuite, and Milun Tesovic, general partner at Expa. The startup offers a runtime security platform for cloud workloads and Linux assets, providing infrastructure detection and response capabilities to global brands, financial institutions, and software companies. Cmd’s offering observes real-time session activity and allows Linux administrators and developers to take immediate remediation action. … Following the close of the deal, Elastic plans to work with Cmd to integrate Cmd’s cloud native data collection capabilities directly into the company’s Elastic Agent product, and Cmd’s user experience and workflows into Kibana, Elastic’s data visualization offering.”

Citing an article from TechCrunch, Scott notes that Cmd’s employees will be moving to Elastic, with King and CEO Santosh Krishnan slipping into executive roles. Elastic says current customers of both firms will benefit from the integration and specifically promises its existing clients will soon receive Cmd’s cloud security capabilities. Built around open source software, Elastic began as Elasticsearch Inc. in 2012, simplified its name in 2015, and went public in 2018. The company is based in Mountain View, California, and maintains offices around the world.

Cynthia Murrell, September 22, 2021

Useless Search Results? Thank Advertising

September 17, 2021

We thought this was obvious. The Conversation declares, “Google’s ‘Pay-Per-Click’ Ad Model Makes it Harder to Find What You’re Looking For.” Writers Mohiuddin Ahmed and Paul Haskell-Dowland begin by pointing out that “to google” has literally become synonymous with searching online, whatever the platform. Indeed, Google has handily dominated the online search business, burying some competitors and leaving the rest in the dust. Not coincidentally, the company also rules the web browser and online advertising markets. As our dear readers know, Google is facing pushback from competition and antitrust regulators in assorted countries. However, this article addresses the impact on search results themselves. The authors report:

“More than 80% of Alphabet’s revenue comes from Google advertising. At the same time, around 85% of the world’s search engine activity goes through Google. Clearly there is significant commercial advantage in selling advertising while at the same time controlling the results of most web searches undertaken around the globe. This can be seen clearly in search results. Studies have shown internet users are less and less prepared to scroll down the page or spend less time on content below the ‘fold’ (the limit of content on your screen). This makes the space at the top of the search results more and more valuable. In the example below, you might have to scroll three screens down before you find actual search results rather than paid promotions. While Google (and indeed many users) might argue that the results are still helpful and save time, it’s clear the design of the page and the prominence given to paid adverts will influence behavior. All of this is reinforced by the use of a pay-per-click advertising model which is founded on enticing users to click on adverts.”
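The fold effect the authors describe is easy to model: if users rarely examine results below the fold, top slots capture a wildly disproportionate share of clicks. The examination probabilities below are invented for illustration; real position-bias click models are fit to logged behavior:

```python
# Hypothetical probabilities that a user even looks at each result
# position; values drop sharply below the "fold". Invented numbers.
EXAMINE_PROB = [0.95, 0.70, 0.50, 0.15, 0.08, 0.04]

def expected_clicks(relevance, impressions=1000):
    """Expected clicks per slot under a simple examination model:
    clicks = impressions x P(examine position) x P(click if examined)."""
    return [round(impressions * p * relevance) for p in EXAMINE_PROB]

clicks = expected_clicks(relevance=0.5)
top_three = sum(clicks[:3])   # above-the-fold slots
rest = sum(clicks[3:])        # below-the-fold slots
print(clicks, top_three, rest)
```

Even with identical relevance everywhere, the three above-the-fold slots collect several times the clicks of everything beneath them — which is exactly why selling those slots as ads, pay-per-click, is so lucrative.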

We are reminded Google-owned YouTube is another important source of information for billions of users, and it is perhaps the leading platform for online ads. In fact, these ads now intrude on videos at a truly annoying rate. Unless one pays for a Premium subscription, of course. Ahmed and Haskell-Dowland remind us alternatives to Google Search exist, with the usual emphasis on privacy-centric DuckDuckGo. They conclude by pointing out other influential areas in which Google plays a lead role: AI, healthcare, autonomous vehicles, cloud computing, computing devices, and the Internet of Things. Is Google poised to take over the world? Why not?

Cynthia Murrell, September 17, 2021

Lucky India. Google Wants to Help

September 16, 2021

Google seeks to clear up a misunderstanding. Odisha’s OrissaPost reports, “Google Says Firmly Sees Itself as Partner to India’s Financial Ecosystem.” At issue is Google Pay and its Spot platform. It sounds like some reports about its partnerships with banks may have given the impression Google is trying to supplant or undermine existing financial institutions in India. We learn:

“The company emphasized that in every geography where Google Pay is present, its stance is consistently one of partnering with the existing financial services and banking systems to help scale and enable frictionless delivery of financial products and services and contribute to the goal of financial inclusion. In a blogpost, Google India said there have been a few instances where these offerings have been reported as ‘Google Pay’s offerings’, which fuels misinterpretation. ‘To be clear, we have always looked at our role firmly as a partner to the existing financial ecosystem that brings unique skill sets and offerings to drive further adoption of digital payments in the country,’ it said. … The internet major also noted that its Spot platform works as an additional discovery channel for many businesses to build and offer new experiences to users to drive adoption of their services. The use cases span across ticket purchase, food ordering, paying for essential services like utility bills, shopping and getting access to various financial products.”

See the write-up or Google India’s blog post for more specific details. The company emphasizes that bringing partners onto the Google Pay platform connects them to customers around India who would otherwise be unable to access their services, helping to “level social inequalities.” Aw Google, always looking out for the little guy, aren’t you?

Cynthia Murrell, September 16, 2021

Silicon Valley: Fraud or Fake Is an Incorrect Characterization

September 10, 2021

I read “Elizabeth Holmes: Has the Theranos Scandal Changed Silicon Valley?” The write up contains a passage I found interesting; to wit:

In Silicon Valley, hyping up your product – over-promising – isn’t unusual…

Marketing is more important than the technology sold by the cash hype artists. Notice that I don’t use the word “entrepreneur,” “innovator,” “programmer,” or the new moniker “AIOps” (that’s artificial intelligence operations).

The Theranos story went wrong because the company never delivered a “good enough” method. The fact that Theranos could not cook up a marginally better way of testing blood is less interesting than the fact that it had plenty of money; the failure came in what I call the transition from PowerPoint to “good enough.”

Why not pull a me-too and change the packaging? Why not license a method from Eastern Europe or Thailand and rebrand it? Why not white label a system known to work, offer a discount, and convince the almost clueless Walgreen’s-type operation that the Zirconia was dug out of a hole in a far-off country?

Each of these methods has been used to allow an exit strategy with honor and not a career-ending Tesla-like electric battery fire which burns for days.

The write up explains:

Particularly at an early stage, when a start-up is in its infancy, investors are often looking at people and ideas rather than substantive technology anyway. General wisdom holds that the technology will come with the right concept – and the right people to make it work. Ms Holmes was brilliant at selling that dream, exercising a very Silicon Valley practice: ‘fake it until you make it’. Her problem was she couldn’t make it work.

The transgression, in my opinion, was a failure to use a me-too model. That points to what I call a denial of reality.

Here are some examples of not-so-good solutions that delivered disappointing products or services to users yet flourished. How many of these have entered your personal ionosphere?

  1. Proprietary app stores which offer mobile software which is malware? The purpose of the proprietary app store is to prevent malfeasance, right?
  2. Operating systems which cannot provide security? My newsfeed is stuffed full of breaches, intrusions, phishing scams, and cloud vulnerabilities. How about that Microsoft Exchange and Azure security or the booming business of NSO Group-types of surveillance functionality?
  3. Self-driving vehicles anyone? Sorry, not for me.
  4. Smart software which is tuned to deliver irrelevant advertising despite a service’s access to browser history, user location, and email? If I see one more ad for Grammarly or Ke Chava when I watch a Thomas Gast French Foreign Legion video in German, I may have a stroke. (Smart software is great, isn’t it? Just like ad-supported Web search results!)
  5. Palantir-type systems are the business intelligence solutions for everyone with a question and deep pockets.

The article is interesting, but it sidesteps the principal reason why Theranos has become a touchstone for some people. The primum movens from my vantage point is:

There are no meaningful consequences: For the funders. For the educational institutions. For the “innovators.”

The people who get hurt are not part of the technology club. Maybe Ms. Holmes, the “face” of Theranos, will go to jail, be slapped with a digital scarlet A, and end up begging in Berkeley?

I can’t predict the future, but I can visualize a Michael Milken-type or Kevin Mitnick-type of phoenixing after walking out of jail.

Theranos is a consequence of the have-and-have-not technology social construct. Technology is a tool. Ms. Holmes cut off her finger in woodworking class. That’s sort of embarrassing. Repurposing is so darned obvious and easy.

More adept pioneers have done the marketing thing and made a me-too approach to innovation work. But it does not matter. This year has been a good one for start-ups. Get your digital currency. Embrace AIOps. Lease a self-driving vehicle. Use TikTok. No problem.

Stephen E Arnold, September 10, 2021

More AI Bias? Seems Possible

September 10, 2021

Freddie Mac and Fannie Mae are stuck in the past—the mid-1990s, to be specific, when the Classic FICO loan-approval software was developed. Since those two quasi-government groups basically set the rules for the mortgage industry, their reluctance to change is bad news for many would-be home buyers and their families. The Markup examines “The Secret Bias Hidden in Mortgage-Approval Algorithms.” Reporters Emmanuel Martinez and Lauren Kirchner reveal what their organization’s research has uncovered:

“An investigation by The Markup has found that lenders in 2019 were more likely to deny home loans to people of color than to white people with similar financial characteristics — even when we controlled for newly available financial factors the mortgage industry for years has said would explain racial disparities in lending. Holding 17 different factors steady in a complex statistical analysis of more than two million conventional mortgage applications for home purchases, we found that lenders were 40 percent more likely to turn down Latino applicants for loans, 50 percent more likely to deny Asian/Pacific Islander applicants, and 70 percent more likely to deny Native American applicants than similar White applicants. Lenders were 80 percent more likely to reject Black applicants than similar White applicants. These are national rates. In every case, the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.”

Algorithmic bias is a known and devastating problem in several crucial arenas, but recent years have seen efforts to mitigate it with better data sets and tweaked machine-learning processes. Advocates as well as professionals in the mortgage and housing industries have been entreating Fannie and Freddie to update their algorithm since 2014. Several viable alternatives have been developed but the Federal Housing Finance Agency, which oversees those entities, continues to drag its heels. No big deal, insists the mortgage industry—bias is just an illusion caused by incomplete data, representatives wheedle. The Markup’s research indicates otherwise. We learn:

“The industry had criticized previous similar analyses for not including financial factors they said would explain disparities in lending rates but were not public at the time: debts as a percentage of income, how much of the property’s assessed worth the person is asking to borrow, and the applicant’s credit score. The first two are now public in the Home Mortgage Disclosure Act data. Including these financial data points in our analysis not only failed to eliminate racial disparities in loan denials, it highlighted new, devastating ones.”
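The Markup’s approach — comparing denial rates across groups while holding financial factors constant — can be illustrated with a toy stratified comparison. The actual study ran a complex statistical analysis over two million applications; everything below, from the data to the single “credit tier” control, is invented and drastically simplified:

```python
# Synthetic applications: (group, credit_tier, denied). Invented data.
APPS = [
    ("black", "high", True), ("black", "high", False), ("black", "high", True),
    ("white", "high", False), ("white", "high", False), ("white", "high", True),
    ("black", "low", True), ("black", "low", True),
    ("white", "low", True), ("white", "low", False),
]

def denial_rate(apps, group, tier):
    """Share of applications denied within one group and credit tier."""
    outcomes = [denied for g, t, denied in apps if g == group and t == tier]
    return sum(outcomes) / len(outcomes)

def adjusted_disparity(apps, tiers=("high", "low")):
    """Compare denial rates within each tier (the 'control'), then
    average the ratios: a value above 1.0 means Black applicants were
    denied more often than similarly situated White applicants."""
    ratios = [
        denial_rate(apps, "black", t) / denial_rate(apps, "white", t)
        for t in tiers
    ]
    return sum(ratios) / len(ratios)

print(adjusted_disparity(APPS))  # > 1.0: disparity survives the control
```

The industry argument was that adding financial controls would drive this kind of ratio to 1.0; The Markup reports that with the newly public factors included, it did not.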

For example, researchers found high-earning Black applicants with less debt get rejected more often than white applicants with similar income but more debt. See the article for more industry excuses and the authors’ responses, as well some specifics on mechanisms of systemic racism and how location affects results. There are laws on the books that should make such discrimination a thing of the past, but they are difficult to enforce. An outdated algorithm shrouded in secrecy makes it even more so. The Federal Housing Finance Agency has been studying its AI’s bias and considering alternatives for five years now. When will it finally make a change? Families are waiting.

Cynthia Murrell, September 10, 2021

Techno-Psych: Perception, Remembering a First Date, and Money

September 9, 2021

Navigate to “Investor Memory of Past Performance Is Positively Biased and Predicts Overconfidence.” Download the PDF of the complete technical paper at this link. What will you find? Scientific verification of a truism; specifically, people remember good times and embellish those memories with sprinkles.

The write up explains:

First, we find that investors’ memories for past performance are positively biased. They tend to recall returns as better than achieved and are more likely to recall winners than losers. No published paper has shown these effects with investors. Second, we find that these positive memory biases are associated with overconfidence and trading frequency. Third, we validated a new methodology for reducing overconfidence and trading frequency by exposing investors to their past returns.
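Measured simply, the memory bias the authors describe is the gap between recalled and achieved returns. A sketch with invented numbers (the paper’s method is survey-based and more elaborate):

```python
# (actual_return, recalled_return) pairs for hypothetical investors, in %.
# All values invented for illustration.
RETURNS = [(2.0, 4.5), (-3.0, -1.0), (5.0, 5.5), (-1.0, 2.0)]

def memory_bias(pairs):
    """Mean of recalled minus actual return: a positive value means
    returns are remembered as rosier than they were."""
    return sum(recalled - actual for actual, recalled in pairs) / len(pairs)

bias = memory_bias(RETURNS)
print(bias)  # positive: every investor here remembers doing better
```

The paper’s third finding suggests the remedy is equally simple: show investors the actual column, and overconfidence and trading frequency both decline.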

The issue at hand is investors who know they are financial poobahs. Mix this distortion of reality with technology and what does one get? My answer to this question is, “NFTs for burned Banksy art.”

The best line in the academic study, in my view, is:

Overconfidence is hazardous to your wealth.

Who knew? My answer is the 2004 paper called “Overconfidence and the Big Five.” I also think of my 89-year-old great-grandmother, who told me when I was 13, “Don’t be overconfident.”

I wonder if the Facebook artificial intelligence wizards were a bit too overconfident in the company’s smart software. There was, if I recall, a question about metatagging a human as a gorilla.

Stephen E Arnold, September 9, 2021
