Google Economics: The Cost of Bard Versus Staplers

April 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Does anyone remember the good old days at the Google? Tony Bennett performing in the cafeteria. What about those car washes? How about the entry security system, defeated by doors propped open with credit card receipts from Fred’s Place? Those were the days.

I read “Google to Cut Down on Employee Laptops, Services and Staplers for Multi-Year Savings.” The article explains:

Google said it’s cutting back on fitness classes, staplers, tape and the frequency of laptop replacements for employees. One of the company’s important objectives for 2023 is to “deliver durable savings through improved velocity and efficiency,” Porat said in the email. “All PAs and Functions are working toward this,” she said, referring to product areas. OKR stands for objectives and key results.

Yes, OKR. I wonder if the Sundar and Prabhakar comedy act will incorporate staplers into their next presentation.

And what about the $100 billion the Google “lost” after its quantum supremacy smart software screwed up in Paris? Let’s convert that to staplers, shall we? Today (April 4, 2023), I can purchase one office stapler from Amazon (Google’s fellow traveler in trashing relevance with advertisements) for $10.98. I liked the Bostitch Office Heavy Duty device, which is Amazon’s number one best seller (according to Amazon marketing).

The write up pointed out:

Staplers and tape are no longer being provided to print stations companywide as “part of a cost effectiveness initiative,” according to a separate, internal facilities directive viewed by CNBC.

To recoup that $100 billion, Google will have to not purchase 9,107,468,123.86 staplers. I want to retain the 0.86 because one must be attentive to small numbers (unlike some of the fancy math in the Snorkel world). Google, I have heard, has about 100,000 “employees,” but it is never clear which are “real” employees, contractors, interns, or mysterious partners. Thus each of these individuals will be responsible for NOT losing or breaking roughly 91,075 staplers.
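For those who like to check the arithmetic, here is a minimal sketch in Python using the figures cited above: the $10.98 stapler, the roughly 100,000 headcount, and the $100 billion Paris stumble.

  # Back-of-the-envelope stapler math, using the figures cited in this post.
  market_cap_loss = 100_000_000_000  # the roughly $100 billion Paris demo stumble
  stapler_price = 10.98              # Bostitch Office Heavy Duty, per Amazon
  headcount = 100_000                # rough count of Google "employees"

  staplers_forgone = market_cap_loss / stapler_price
  per_person = staplers_forgone / headcount

  print(f"Staplers not purchased: {staplers_forgone:,.2f}")  # 9,107,468,123.86
  print(f"Staplers per person: {per_person:,.2f}")           # 91,074.68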

I know the idea of rationing staplers is like burning Joan of Arc: the point is not the opportunity to warm a croissant; it is the symbolism of the event.

Google in 2023 knows how to keep me in stitches. Sorry, staples. And the cost of Bard? As the real Bard said:

Poor and content is rich, and rich enough,
But riches fineless is as poor as winter
To him that ever fears he shall be poor. (Othello, III.iii)

Stephen E Arnold, April 4, 2023

Researchers Break New Ground with a Turkey Baster and Zoom

April 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I do not recall much about my pre-school days. I do recall dropping off my two children at their pre-schools at different times. My recollections are fuzzy: horrible finger paintings carried to the automobile and, several times a month, mashed pieces of cake. I recall quite a bit of laughing, shouting, and jabbering about classmates whom I did not know. Truth be told, I did not want to meet these progeny of highly educated, upwardly mobile parents who wore clothes with exposed logos and drove Volvo station wagons. The idea of interviewing pre-kindergarten children struck me as a waste of time and an opportunity to get chocolate Baskin-Robbins cake smeared on my suit. (I am a dinobaby, remember. Dress for success. White shirt. Conservative tie. Yada yada.)

I thought (briefly, very briefly) about the essay in Science Daily titled “Preschoolers Prefer to Learn from a Competent Robot Than an Incompetent Human.” The “real news” article reported, without one hint of sarcasm, irony, or skepticism:

We can see that by age five, children are choosing to learn from a competent teacher over someone who is more familiar to them — even if the competent teacher is a robot…

Okay. How were these data gathered? I absolutely loved the use of Zoom, a turkey baster, and nonsense terms like “fep.”

Fascinating. First, the idea of using Zoom and a turkey baster would never have roamed across this dinobaby’s mind. Second, consider the intuitive leap by the researchers: pre-schoolers who finger-paint would prefer to undertake this deeply intellectual task with a robot, not a human. The human, from my experience, is necessary to prevent the delightful sprouts from eating the paint. Third, I wonder if the research team’s first-year statistics professor explained the concept of a valid sample.

One thing is clear from the research. Teachers, your days are numbered unless you participate in the Singularity with Ray Kurzweil or are part of the school systems’ administrative group riding the nepotism bus.

“Fep.” A good word to describe certain types of research.

Stephen E Arnold, April 4, 2023

Ready, Fire, Aim: Google and File Limits

April 4, 2023

Google is quite accomplished when it comes to ingesting money from its customers. These are individuals and organizations “important” to the company, which operates in self-described quantum supremacy mode. In a few other corporate functions, the company is less polished.

One example is described in “Google Drive Does a Surprise Rollout of File Limits, Locking Out Some Users.” The subtitle of the article is:

The new file limit means you can’t actually use the storage you buy from Google.

If the information in the write up is correct, it appears that Google is collecting money and not delivering the service marketed to some of its customers. An analogy: I pay a yearly fee for a storage unit. When I arrive to park my bicycle for the winter, my unit is locked, there is no staff to open it, and there is no way to access what’s inside. I am not sure I would be happy.

The article points out:

The 5 million total file cap isn’t documented anywhere, and remember, it has been two months since this rolled out. It’s not listed on the Google One or Google Workspace plan pages, and we haven’t seen any support documents about it. Google also doesn’t have any tools to see if you’re getting close to this file limit—there’s no count of files anywhere.

If this statement is accurate, then Google is selling and collecting money for one thing and delivering another to some customers. In my view, Google has hit upon a brilliant solution to the problem of coping with the increasing burden of the ill-advised promotion of “free” and “low cost” storage cooked up by long-gone Googlers. Yep, those teenagers making cookies without mom supervising do create a mess.
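Since the article says there is no built-in way to see how close an account is to the cap, a curious user could approximate a count with the Google Drive v3 API. A minimal sketch, not Google’s tooling: it assumes the google-api-python-client package is installed and that creds already holds valid OAuth credentials for the account.

  # Count items created by this account via the Drive v3 API (a sketch).
  from googleapiclient.discovery import build

  service = build("drive", "v3", credentials=creds)  # creds: your OAuth credentials

  count = 0
  page_token = None
  while True:
      response = service.files().list(
          pageSize=1000,                      # maximum page size for files.list
          fields="nextPageToken, files(id)",  # request only ids to keep pages small
          pageToken=page_token,
      ).execute()
      count += len(response.get("files", []))
      page_token = response.get("nextPageToken")
      if page_token is None:
          break

  print(f"Items found: {count:,} (reported cap: 5,000,000)")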

The article includes a superb example of Google speak, a form of language known to please legal professionals adjudicating different issues in which Google finds itself tangled; to wit:

A Google spokesperson confirmed to Ars that the file limit isn’t a bug, calling the 5 million file cap “a safeguard to prevent misuse of our system in a way that might impact the stability and safety of the system.” The company clarified that the limit applies to “how many items one user can create in any Drive,” not a total cap for all files in a drive. For individual users, that’s not a distinction that matters, but it could matter if you share storage with several accounts. Google added, “This limit does not impact the vast majority of our users’ ability to use their Google storage,” and, “In practice, the number of impacted users here is vanishingly small.”

From my vantage point in rural Kentucky, I think the opaque and chaotic approach to file limits is a useful example of what I call “high school science club management methods.” Those folks, as I recall from my own days as a science club member, just know better, don’t check with anyone in administration, and offer non-explanations.

In fact, we are assured, the number of users affected by this teeny-bopper professionalism is vanishingly small. Isn’t that the direction in which Google’s image, brand, and trust factor are heading? Toward the vanishingly small? Let’s ask ChatGPT, shall we: “Why does Google engage in ready, fire, aim antics?”

Stephen E Arnold, April 4, 2023

Thomson Reuters, Where Is Your Large Language Model?

April 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have to give the lovable Bloomberg a pat on the back. Not only did the company explain its large language model for finance, but the end notes to the research paper are fascinating as well. One cited document has 124 authors. Why am I mentioning the end notes? The essay is 65 pages in length, and the notes consume 25 pages. Even more interesting is that the “research” apparently involved nVidia and everyone’s favorite online bookstore, Amazon, with its Web services. No Google. No Microsoft. No Facebook. Just Bloomberg and the tenure-track researcher’s best friend: the end notes.

The article with a big end… note, that is… presents this title: “BloombergGPT: A Large Language Model for Finance.” I would have titled the document, with its chunky equations, “A Big Headache for Thomson Reuters,” but I know most people are not “into” the terminal rivalry, the analytics rivalry, Thomson Reuters’ fancy dancing with Palantir Technologies, or the “friendly” competition in which the two firms have engaged for decades.

The smart software score appears to be: Bloomberg 1, Thomson Reuters zippo. (Am I incorrect? Of course, but this beefy effort, the mind-boggling end notes, and the presence of Johns Hopkins make it clear that Thomson Reuters has some marketing to do.) What Microsoft Bing has done to the Google may be exactly what Bloomberg wants to do to Thomson Reuters: make money on the next big thing and marginalize a competitor. Bloomberg obviously wants more than the aging terminal business and the fame achieved on free TV’s Bloomberg TV channels.

What is the Bloomberg LLM, or large language model? Here’s what the paper asserts. Please keep in mind that essays stuffed with mathy stuff and researchy data are often non-reproducible. Heck, even the president of Stanford University took short cuts. Plus, more than half of the research results my team has tried to reproduce end up in Nowheresville, which is not far from my home in rural Kentucky:

we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks.

My interpretation of this quotation is:

  1. Lots of data
  2. Big model
  3. Informed financial decisions
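For the record, the token arithmetic in the quoted passage is easy to sanity-check; a quick sketch using the counts as stated:

  # Sanity-check the BloombergGPT training mix (token counts as quoted above).
  domain_tokens = 363e9   # Bloomberg financial data
  general_tokens = 345e9  # general purpose datasets

  total = domain_tokens + general_tokens
  print(f"Total training tokens: {total / 1e9:.0f}B")     # 708B
  print(f"Financial share: {domain_tokens / total:.1%}")  # 51.3%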

“Informed financial decisions” means to me that a crazed broker will give this Bloomberg thing a whirl in the hope of getting a huge bonus, a corner office which is never visited, and fame at the New York Athletic Club.

Will this happen? Who knows.

What I do know is that Thomson Reuters’ executives in London, New York, and Toronto are doing some humanoid-centric deep thinking about Bloomberg. And that may be what Bloomberg really wants, because Bloomberg may be ahead. Imagine that: Bloomberg ahead of the “trust” outfit.

Stephen E Arnold, April 3, 2023

The Scramblers of Mountain View: The Google AI Team

April 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I don’t know about you, but if I were a Googler (which I am not), I would pay attention to Google wizard and former Alta Vista wizard Jeff Dean. This individual, I have heard, was involved in the dust-up about Timnit Gebru’s stochastic parrot paper. (I love that metaphor. A parrot.) Dr. Dean has allegedly invested in the smart search outfit Perplexity. I found this interesting because it sends a faint signal from the bowels of Googzilla. Bet hedging? An admission that Google’s AI is lacking? A need for Dr. Dean to prepare to find his future elsewhere?

Why am I mentioning a Googler betting cash on one of the many Silicon Valley-type outfits chasing the ChatGPT pot of gold? I read “Google Bard Is Switching to a More Capable Language Model, CEO Confirms.” The write up explains:

Bard will soon be moving from its current LaMDA-based model to larger-scale PaLM datasets in the coming days… When asked how he felt about responses to Bard’s release, Pichai commented: “We clearly have more capable models. Pretty soon, maybe as this goes live, we will be upgrading Bard to some of our more capable PaLM models, so which will bring more capabilities, be it in reasoning, coding.”

That’s a hoot. I want to add the statement “Pichai claims not to be worried about how fast Google’s AI develops compared to its competitors.” That’s a great line for the Sundar and Prabhakar Comedy Show. Isn’t Google in Code Red mode? Why? Not to worry. Isn’t Google losing the PR and marketing battle to the Devils from Redmond? Why? Not to worry. Hasn’t Google summoned Messrs. Brin and Page to the den of Googzilla to help out with AI? Why? Not to worry.

Then a Googler invests in Perplexity. Right. Soon. Moving. More capable.

Net net: Dr. Dean’s investment may be more significant than the Code Red silliness.

Stephen E Arnold, April 3, 2023
