Digital Addiction Game Plan: Get Those Kiddies When Young

April 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I enjoy research which provides roadmaps for confused digital Hummer drivers. The Hummer weighs more than four tons and costs about the same as one GMLRS rocket. Digital weapons are more effective and less expensive. One does give up a bit of shock and awe, however. Life is full of trade-offs.

The information in “Teens on Screens: Life Online for Children and Young Adults Revealed” is interesting. The analytics wizards have figured out how to hook a young person on zippy new media. I noted this insight:

Children are gravitating to ‘dramatic’ online videos which appear designed to maximize stimulation but require minimal effort and focus…

How does one craft a magnetic video?

Gossip, conflict, controversy, extreme challenges and high stakes – often involving large sums of money – are recurring themes. ‘Commentary’ and ‘reaction’ video formats, particularly those stirring up rivalry between influencers while encouraging viewers to pick sides, were also appealing to participants. These videos, popularized by the likes of Mr Beast, Infinite and JackSucksAtStuff, are often short-form, with a distinct, stimulating, editing style, designed to create maximum dramatic effect. This involves heavy use of choppy, ‘jump-cut’ edits, rapidly changing camera angles, special effects, animations and fast-paced speech.

One interesting item in the article’s summary of the research concerned “split screening.” The term means that one watches more than one short-form video at the same time. (As a dinobaby, I have to work hard to get one thing done. Two things simultaneously? Ho ho ho.)

What can an enterprising person interested in weaponizing information do? Here are some ideas:

  • Undermine certain values
  • Present shaped information
  • Take time from less exciting pursuits like homework and reading books
  • Provide self-esteem-building experiences

Who cares? Advertisers, those hostile to the interests of the US, groomers, and probably several other cohorts.

I have to stop now. I need to watch multiple TikToks.

Stephen E Arnold, April 6, 2023

Google: Traffic in King’s Cross? Not So Hot

April 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I saw a picture of a sign held by a Googler (maybe a Xoogler or a Xoogler-to-be?) with the message:

Google layoffs. Hostile. Unnecessary. Brutal. Unfair.

Another Google PR/HR moment upon which the management team can surf… or drown? (One must consider different outcomes, mustn’t one?)

I did a small bit of online sleuthing and discovered what may be a “real” news story about the traffic hassles in King’s Cross this morning (April 4, 2023). “Unite Google Workers Strike Outside London HQ over Alleged Appalling Treatment” reports:

Google workers have been reduced to tears by fears of being made redundant, a union representative told a London rally… Others clutched placards with messages such as “Being evil is not a strategy” and “R.I.P Google culture 1998 – 2023”.

Google’s wizardly management team allegedly said:

Google said it has been “constructively engaging and listening to employees”.

I want to highlight a quite spectacular statement, which — for all I know — could have been generated by Google’s smart software which has allegedly been infused with some ChatGPT goodness:

It [the union for aspiring Xooglers] also alleges that employees with disabilities are being told to get a doctor’s note if they want a colleague to attend their meetings and “even then, union representation is still prohibited”.

Let me put this in context. Google is dealing with what I call the Stapler Affair. Plus, it continues to struggle against the stream of marketing goodness flowing from Redmond, seat of the new online advertising pretender to Google’s throne. The company continues to flail at assorted legal eagles bringing good tidings of great joy to lawyers billing for the cornucopia of lawsuits aimed at the Google.

My goodness. Now Google has created a bit of ill will for London sidewalk, bus, and roadway users. Does this sound like a desirable outcome? Maybe for Google senior management, but not for those trying to be happy at King’s Cross.

Stephen E Arnold, April 6, 2023

Google, Does Quantum Supremacy Imply That Former Staff Grouse in Public?

April 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am not sure if this story is spot on. I am writing about “Report: A Google AI Researcher Resigned after Learning Google’s Bard Uses Data from ChatGPT.” I am skeptical because today is All Fools’ Day. Being careful is sometimes a useful policy. An exception might be when a certain online advertising company is losing bigly to the marketing tactics of [a] Microsoft, the AI in Word and Azure Security outfit, [b] OpenAI and its little language model that could, and [c] Midjourney, which just rolled out its own camera with a chip called Bionzicle. (Is this perhaps pronounced “bio-cycle” like a washing machine cycle or “bion zickle” like a bio pickle? I go with the pickle sound; it seems appropriate.)

The cited article reports as actual factual real news:

ChatGPT AI is often accused of leveraging “stolen” data from websites and artists to build its AI models, but this is the first time another AI firm has been accused of stealing from ChatGPT.  ChatGPT is powering Bing Chat search features, owing to an exclusive contract between Microsoft and OpenAI. It’s something of a major coup, given that Bing leap-frogged long-time search powerhouse Google in adding AI to its setup first, leading to a dip in Google’s share price.

This is im port’ANT as the word is pronounced on a certain podcast.

More interesting to me is that recycled Silicon Valley-type real news verifies this remarkable assertion as the knowledge output of a PROM’ inANT researcher, allegedly named Jacob Devlin. Mr. Devlin has found his future at – wait for it – OpenAI. Wasn’t OpenAI the company that wanted to do good and save the planet and then discovered Microsoft backing, thirsty trapped AI investors, and the American way of wealth?

Net net: I wish I could say “April Fools’,” but I can’t. I have an unsubstantiated hunch that Google’s governance relies on the whims of high school science club members arguing about what pizza topping to order after winning the local math competition. Did the team cheat? My goodness, no. The team has an ethical compass modeled on the triangulations of William McCloundy, a.k.a. I.O.U. O’Brien, the fellow who sold the Brooklyn Bridge in the early 20th century.

Stephen E Arnold, April 5, 2023

Gotcha, Googzilla: Bing Channels GoTo, Overture, and Yahoo with Smart Software

April 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “That Was Fast! Microsoft Slips Ads into AI-Powered Bing Chat.” Not exactly a surprise, is it? No, nope. Microsoft now understands that offering those who want to put a message in front of eyeballs generates money. Google is the poster child of Madison Avenue on steroids.

The write up says:

We are also exploring additional capabilities for publishers including our more than 7,500 Microsoft Start partner brands. We recently met with some of our partners to begin exploring ideas and to get feedback on how we can continue to distribute content in a way that is meaningful in traffic and revenue for our partners.

Just 7,500? Why not more? Do you think Microsoft will follow the Google playbook, just enhanced with the catnip of smart software? If you respond, “yes,” you are on the monetization supersonic jet. Buckle up.

Here are my predictions based on what little I know about Google’s “legacy”:

  1. Money talks; therefore, the ad filtering system will be compromised by those with the means to get ads into the “system”. (Do you believe that software and human filtering systems are perfect? I have a bridge to sell you.)
  2. The content will be warped by ads. This is the gravity principle: Get too close to big money and the good intentions get sucked into the advertisers’ universe. Maybe it is roses and Pepsi Cola in the black hole, but I know it will not contain good intentions with mustard.
  3. The notion of a balanced output, objectivity, or content selected by a smart algorithm will be fiddled. How do I know? I would point to the importance of payoffs in 1950s rock and roll radio and the advertising business. How about a week on a yacht? Okay, I will send details. No strings, of course.
  4. And guard rails? Yep, keep content that makes advertisers — particularly big advertisers — happy. Block or suppress content that makes advertisers — particularly big advertisers — unhappy.

Do I have other predictions? Oh, yes. Why not formulate your own ideas after reading “BingBang: AAD Misconfiguration Led to Bing.com Results Manipulation and Account Takeover.” Bingo!

Net net: Microsoft has an opportunity to become the new Google. What could go wrong?

Stephen E Arnold, April 5, 2023

Google Economics: The Cost of Bard Versus Staplers

April 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Does anyone remember the good old days at the Google? Tony Bennett performing in the cafeteria. What about those car washes? How about the entry security system which was beset with doors propped open with credit card receipts from Fred’s Place? Those were the days.

I read “Google to Cut Down on Employee Laptops, Services and Staplers for Multi-Year Savings.” The article explains:

Google said it’s cutting back on fitness classes, staplers, tape and the frequency of laptop replacements for employees. One of the company’s important objectives for 2023 is to “deliver durable savings through improved velocity and efficiency.” Porat said in the email. “All PAs and Functions are working toward this,” she said, referring to product areas. OKR stands for objectives and key results.

Yes, OKR. I wonder if the Sundar and Prabhakar comedy act will incorporate staplers into their next presentation.

And what about the $100 billion the Google “lost” after its quantum supremacy smart software screwed up in Paris? Let’s convert that to staplers, shall we? Today (April 4, 2023), I can purchase one office stapler from Amazon (Google’s fellow traveler in trashing relevance with advertisements) for $10.98. I liked the Bostitch Office Heavy Duty device, which is Amazon’s number one best seller (according to Amazon marketing).

The write up pointed out:

Staplers and tape are no longer being provided to print stations companywide as “part of a cost effectiveness initiative,” according to a separate, internal facilities directive viewed by CNBC.

To recoup that $100 billion, Google will have to not purchase 9,107,468,123.86 staplers. I want to retain the 0.86 because one must be attentive to small numbers (unlike some of the fancy math in the Snorkel world). Google, I have heard, has about 100,000 “employees”, but it is never clear which are “real” employees, contractors, interns, or mysterious partners. Thus each of these individuals will be responsible for NOT losing or breaking about 91,075 staplers per year.
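For those keeping score at home, the stapler arithmetic is easy to check. A minimal sketch, taking the $100 billion figure and the $10.98 Amazon price at face value (the headcount is the write-up’s rough estimate, not an audited number):

```python
# Back-of-the-envelope stapler economics. All inputs come from the
# write-up above; none are audited figures.
market_cap_dip = 100_000_000_000  # dollars "lost" after the Paris demo
stapler_price = 10.98             # Bostitch Office Heavy Duty, per Amazon
headcount = 100_000               # rough count of Google "employees"

staplers_forgone = market_cap_dip / stapler_price
per_employee = staplers_forgone / headcount

print(f"Staplers not purchased: {staplers_forgone:,.2f}")  # ~9.1 billion
print(f"Per 'employee': {per_employee:,.2f}")              # ~91,074.68
```

At roughly 91,000 staplers per head, the rationing program has some distance to cover.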

I know the idea of rationing staplers is like burning Joan of Arc. It’s not an opportunity to warm a croissant; it is the symbolism of the event.

Google in 2023 knows how to keep me in stitches. Sorry, staples. And the cost of Bard? As the real Bard said:

Poor and content is rich and rich enough,
But riches fineless is as poor as winter
To him that ever fears he shall be poor. (Othello, III.iii)

Stephen E Arnold, April 4, 2023

Researchers Break New Ground with a Turkey Baster and Zoom

April 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I do not recall much about my pre-school days. I do recall dropping off at different times my two children at their pre-schools. My recollections are fuzzy. I recall horrible finger paintings carried to the automobile and, several times a month, mashed pieces of cake. I recall quite a bit of laughing, shouting, and jabbering about classmates whom I did not know. Truth be told, I did not want to meet these progeny of highly educated, upwardly mobile parents who wore clothes with exposed logos and drove Volvo station wagons. I did not want to meet the classmates. The idea of interviewing pre-kindergarten children struck me as a waste of time and an opportunity to get chocolate Baskin-Robbins cake smeared on my suit. (I am a dinobaby, remember. Dress for success. White shirt. Conservative tie. Yada yada.)

I thought (briefly, very briefly) about the essay in Science Daily titled “Preschoolers Prefer to Learn from a Competent Robot Than an Incompetent Human.” The “real news” article reported without one hint of sarcastic ironical skepticism:

We can see that by age five, children are choosing to learn from a competent teacher over someone who is more familiar to them — even if the competent teacher is a robot…

Okay. How were these data gathered? I absolutely loved the use of Zoom, a turkey baster, and nonsense terms like “fep.”

Fascinating. First, the idea of using Zoom and a turkey baster would never have roamed across this dinobaby’s mind. Second, there is the intuitive leap by the researchers that pre-schoolers who finger-paint would like to undertake this deeply intellectual task with a robot, not a human. The human, from my experience, is necessary to prevent the delightful sprouts from eating the paint. Third, I wonder if the research team’s first-year statistics professor explained the concept of a valid sample.

One thing is clear from the research. Teachers, your days are numbered unless you participate in the Singularity with Ray Kurzweil or are part of the school systems’ administrative group riding the nepotism bus.

“Fep.” A good word to describe certain types of research.

Stephen E Arnold, April 4, 2023

Ready, Fire, Aim: Google and File Limits

April 4, 2023

Google is quite accomplished when the firm is required to ingest money from its customers. These are individuals and organizations “important” to the company which operates in self-described quantum supremacy mode. In a few other corporate functions, the company is less polished.

One example is described in “Google Drive Does a Surprise Rollout of File Limits, Locking Out Some Users.” The subtitle of the article is:

The new file limit means you can’t actually use the storage you buy from Google.

If the information in the write up is correct, it appears that Google is collecting money and not delivering the service marketed to some of its customers. A corollary: I pay a yearly fee for a storage unit. When I arrive to park my bicycle for the winter, my unit is locked, there is no staff to let me open it, and there is no way to access what’s in the storage unit. I am not sure I would be happy.

The article points out:

The 5 million total file cap isn’t documented anywhere, and remember, it has been two months since this rolled out. It’s not listed on the Google One or Google Workspace plan pages, and we haven’t seen any support documents about it. Google also doesn’t have any tools to see if you’re getting close to this file limit—there’s no count of files anywhere.

If this statement is accurate, then Google is selling and collecting money for one thing and delivering another to some customers. In my view, I think Google has hit upon a brilliant solution to a problem of coping with the increasing burden of its ill-advised promotion of “free” and “low cost” storage cooked up by long-gone Googlers. Yep, those teenagers making cookies without mom supervising do create a mess.

The article includes a superb example of Google speak, a form of language known to please legal professionals adjudicating different issues in which Google finds itself tangled; to wit:

A Google spokesperson confirmed to Ars that the file limit isn’t a bug, calling the 5 million file cap “a safeguard to prevent misuse of our system in a way that might impact the stability and safety of the system.” The company clarified that the limit applies to “how many items one user can create in any Drive,” not a total cap for all files in a drive. For individual users, that’s not a distinction that matters, but it could matter if you share storage with several accounts. Google added, “This limit does not impact the vast majority of our users’ ability to use their Google storage.” and “In practice, the number of impacted users here is vanishingly small.”

From my vantage point in rural Kentucky, I think the opaque and chaotic approach to file limits is a useful example of what I call “high school science club management methods.” Those folks, as I recall as a high school science club member myself, just know better, don’t check with anyone in administration, and offer non-explanations.

In fact, the “vanishingly small” number of users affected by this teeny bopper professionalism is vanishingly small. Isn’t that the direction in which Google’s image, brand, and trust factor are heading? Toward the vanishingly small? Let’s ask ChatGPT, shall we: “Why does Google engage in Ready, fire, aim antics?”

Stephen E Arnold, April 4, 2023

Thomson Reuters, Where Is Your Large Language Model?

April 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have to give the lovable Bloomberg a pat on the back. Not only did the company explain its large language model for finance; the end notes to the research paper are fascinating. One cited document has 124 authors. Why am I mentioning the end notes? The essay is 65 pages in length, and the notes consume 25 pages. Even more interesting is that the “research” apparently involved Nvidia and everyone’s favorite online bookstore, Amazon, with its Web services. No Google. No Microsoft. No Facebook. Just Bloomberg and the tenure-track researcher’s best friend: the end notes.

The article with a big end (note, that is) presents this title: “BloombergGPT: A Large Language Model for Finance.” I would have titled the document with its chunky equations “A Big Headache for Thomson Reuters,” but I know most people are not “into” the terminal rivalry, the analytics rivalry, Thomson Reuters’ Fancy Dancing with Palantir Technologies, nor the “friendly” competition in which the two firms have engaged for decades.

Smart software score appears to be: Bloomberg 1, Thomson Reuters zippo. (Am I incorrect? Of course I could be, but this beefy effort, the mind-boggling end notes, and the presence of Johns Hopkins make it clear that Thomson Reuters has some marketing to do.) What Microsoft Bing has done to the Google may be exactly what Bloomberg wants to do to Thomson Reuters: make money on the next big thing and marginalize a competitor. Bloomberg obviously wants more than the ageing terminal business and the fame achieved on free TV’s Bloomberg TV channels.

What is the Bloomberg LLM or large language model? Here’s what the paper asserts. Please keep in mind that essays stuffed with mathy stuff and researchy data are often non-reproducible. Heck, even the president of Stanford University took shortcuts. Plus, more than half of the research results my team has tried to reproduce end up in Nowheresville, which is not far from my home in rural Kentucky:

we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks.

My interpretation of this quotation is:

  1. Lots of data
  2. Big model
  3. Informed financial decisions

“Informed financial decisions” means to me that a crazed broker will give this Bloomberg thing a whirl in the hope of getting a huge bonus, a corner office which is never visited, and fame at the New York Athletic Club.

Will this happen? Who knows.

What I do know is that Thomson Reuters’ executives in London, New York, and Toronto are doing some humanoid-centric deep thinking about Bloomberg. And that may be what Bloomberg really wants because Bloomberg may be ahead. Imagine that: Bloomberg ahead of the “trust” outfit.

Stephen E Arnold, April 3, 2023

The Scramblers of Mountain View: The Google AI Team

April 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I don’t know about you, but if I were a Googler (which I am not), I would pay attention to Google wizard and former AltaVista wizard Jeff Dean. This individual was, I have heard, involved in the dust-up about Timnit Gebru’s stochastic parrot paper. (I love that metaphor. A parrot.) Dr. Dean has allegedly invested in the smart search outfit Perplexity. I found this interesting because it sends a faint signal from the bowels of Googzilla. Bet hedging? Admission that Google’s AI is lacking? A need for Dr. Dean to prepare to find his future elsewhere?

Why am I mentioning a Googler betting cash on one of the many Silicon Valley type outfits chasing the ChatGPT pot of gold? I read “Google Bard Is Switching to a More Capable Language Model, CEO Confirms.” The write up explains:

Bard will soon be moving from its current LaMDA-based model to larger-scale PaLM datasets in the coming days… When asked how he felt about responses to Bard’s release, Pichai commented: “We clearly have more capable models. Pretty soon, maybe as this goes live, we will be upgrading Bard to some of our more capable PaLM models, so which will bring more capabilities, be it in reasoning, coding.”

That’s a hoot. I want to add the statement “Pichai claims not to be worried about how fast Google’s AI develops compared to its competitors.” That’s a great line for the Sundar and Prabhakar Comedy Show. Isn’t Google in Code Red mode? Why? Not to worry. Isn’t Google losing the PR and marketing battle to the Devils from Redmond? Why? Not to worry. Hasn’t Google summoned Messrs. Brin and Page to the den of Googzilla to help out with AI? Why? Not to worry.

Then a Googler invests in Perplexity. Right. Soon. Moving. More capable.

Net net: Dr. Dean’s investment may be more significant than the Code Red silliness.

Stephen E Arnold, April 3, 2023

Laws, Rules, Regulations for Semantic AI (No, I Do Not Know What Semantic AI Means)

March 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am not going to dispute the wisdom and insight in the Microsoft essay “Consider the Future of This Decidedly Semantic AI.” The author is Xoogler Sam Schillace, CVP or corporate vice president and now a bigly wizard at the world’s pre-eminent secure software firm. However, I am not sure to what the “this” refers. Let’s assume that it is the Bing thing and not the Google thing although some plumbing may be influenced by Googzilla’s open source contributions to “this.” How would you like to disambiguate that statement, Mr. Bing?

The essay sets forth some guidelines or bright, white lines in the lingo of the New Age search and retrieval fun house. The “Laws” number nine. I want to note some interesting word choices. The reason for my focus on these terms is that, taken as a group, they reveal more than I first thought.

Here are the terms I circled in True Blue (a Microsoft color selected for the blue screen of death):

  • Intent. Rules 1 and 3. The user’s intent at first glance. However, what if the intent is the hard wiring of a certain direction into the workflow of the smart software? Intent in separate parts of a model can and will have a significant impact on how the model arrives at certain decisions. Isn’t that a thumb on the scale?
  • Leverage. Rule 2. Okay, some type of Archimedes’ truism about moving the world, I think. Upon rereading the sentence in which the word is used, I think it means that old-school baloney like precision and recall are not going to move anything. The “this” world has no use for delivering on-point information using outmoded methods like string matching or Boolean statements. Plus, the old-school methods are too expensive, slow, and dorky.
  • Right. Rule 3. Don’t you love it when an expert explains that a “right” way to solve a problem exists? Why then did I have to suffer through calculus classes in which expressions had to be solved in different ways to get the “right” answer? Yeah, who is in charge here? Isn’t it wonderful to be a sophomore in high school again?
  • Brittle. Rule 4. Yep, peanut brittle or an old-school light bulb. Easily broken, cut fingers, and maybe blinded? Avoid brittleness by “not hard coding anything.” Is that why Microsoft software is so darned stable? How about those email vulnerabilities in the new smart Outlook?
  • Lack. Rule 5. Am I correct in interpreting the use of the word “lack” as a blanket statement that the “this” is just not very good? I do love the reference to GIGO; that is, garbage in, garbage out. What if that garbage is generated by Bard, the digital phantasm of ethical behavior?
  • Uncertainty. Rule 6. Hello, welcome to the wonderful world of statistical Fancy Dancing. Is that “answer” right? Sure, if it matches the “intent” of the developer and the smart software helping that individual. I love it when smart software is recursive and learns from errors, at least known errors.
  • Protocol. Rule 7. A protocol, according to the smart search system You.com, is:

In computer networking, a protocol refers to a set of rules and guidelines that define a standard way of communicating data over a network. It specifies the format and sequence of messages that are exchanged between the different devices on the network, as well as the actions that are taken when errors occur or when certain events happen.

Yep, more rules and a standard, something universal. I think I get what Microsoft’s agenda has as a starred item: The operating system for smart software in business, the government, and education.

  • Hard. Rule 8. Yes, Microsoft is doing intense, difficult work. The task is to live up to the marketing unleashed at the World Economic Forum. Whew. Time for a break.
  • Pareidolia. Rule 9. The word means something along these lines: some people see things that aren’t there. Hello, Blake Lemoine, please. Oh, he’s on a date with a smart avatar. Okay, please tell him I called. Also, some people may see in the actions of their French bulldog a certain human quality.

If we step back and view these words in the context of the Microsoft view of semantic AI, can we see an unintentional glimpse into the inner workings of the company’s smart software? I think so. Do you see a shadowy figure eager to dominate while saying, “Ah, shucks, we’re working hard at an uncertain task. Our intent is to leverage what we can to make money”? I do.

Stephen E Arnold, March 31, 2023
