The Google: Is Thinking Clearly a Core Competency at the Company?

March 16, 2023

Editor’s Note: This short write up is the work of a real, semi-alive dinobaby, not smart software.

The essay “The Nightmare of AI-Powered Gmail Has Arrived” caught my attention. The main point of the article is that Google is busy putting smart software in a number of its services. I noted this paragraph:

Google is retrofitting its product line with AI. Last month, it demonstrated its take on a chatty version of its search engine. Yesterday, it shared more details about AI-assisted Gmail and Google Docs. In Gmail, there are tools that will attempt to compose entire emails or edit them for tone as well as tools for ingesting and summarizing long threads.

Nope. Not interested.

image

The image of three managers with their hair on fire was generated by https://scribblediffusion.com/. My hunch is that a copyright troll will claim the image as their clients’ original work. I am sticking with the smart software as the artist.

I underlined this statement as well:

Most interesting are the ways in which these features seem to be in conflict with one another.

What’s up?

  1. A Code Red at Google and suggestions from senior management to get in gear with smart software
  2. Big boy Microsoft continued to out-market the Google (not too tough to do in my opinion)
  3. The ChatGPT juggernaut continued to operate like a large electro-magnet, pulling users away from outfits that had previously accrued significant experience with large language models.

The write up makes one point in my opinion. Google’s wizards are not able to think clearly. As the article concludes:

For example, in offices already burdened by inefficient communication and processes, it’s easy to see how reducing the cost of creating content might produce weird consequences and externalities. Tim can now send four times as many emails as he used to. Does he have four times as much to say?

Net net: Wow, the Google. The many and possibly overlapping smart services remind me of the outputs from a high school science club struggling to get as many Science Fair projects as possible done in the final days before the judging starts. Wow, the Google.

Stephen E Arnold, March 16, 2023

Ethical AI: Let Us Not Take Our Eye Off the Money Ball, Shall We?

March 15, 2023

What full-time job includes an automatic ejection seat?

Flying an F-35? Yes.

Working on ethical and responsible smart software? Yes. A super duper ejection module too.

I wonder if Google’s enabling of the stochastic parrot conference and the Dr. Timnit Gebru incident made an impression on Microsoft. Hmmm. If the information in “Microsoft Just Laid Off One of Its Responsible AI Teams” is accurate, Microsoft’s management has either [a] internalized the Google approach or [b] missed the memorandum describing downstream effects of deprecating “responsible AI.”

image

The image above was output by craiyon.com. True, one of the Beyond Search researchers added the evil red eye and the pile of cash. We think the evil eye and the money illustrate where ethical behavior ranks among the priorities of some senior executives.

The write up by two of the Land of Bank Crashes favorites reports:

Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company … The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design …

The article is about 1,500 words, and I suggest you work through the essay/news/chest thumper.

Several observations:

  1. The objective is control, not ethical control. Just control.
  2. Smart software knows how to string together words, not what the words connote.
  3. MBAs with incentive plans view ethics as an interesting concept but one with the appeal of calculating their bonuses on an Amiga computer.

Net net: What exactly is the news about a big tech company trimming its ethics professionals? I thought it was standard operating procedure.

PS: I admire the begging-for-a-sign-up pleas as well. Classy for some “real news” write ups. Ejection seat activated.

Stephen E Arnold, March 15, 2023

Google: Poked Painfully in Its Snout

March 15, 2023

The essay “Why Didn’t DeepMind Build GPT3?” identifies three reasons for Google getting poked in its snout. According to the author, the reasons were [a] no specific problem to solve, [b] less academic hoo haa at OpenAI, and [c] less perceived risk. My personal view is that Googlers’ intelligence is directed at understanding their navels, not jumping that familiar Silicon Valley chasm. (Microsoft marketers spotted an opportunity and grabbed it. Boom. Score one for the Softies.)

image

Google’s management team reacting to ChatGPT’s marketing success. The art was created via https://scribblediffusion.com/, which owns the creative juices required to fabricate this interesting depiction of Google caught in a moment of management decision making.

These reasons make sense to me. I would suggest that several other Google characteristics played a role, probably bit parts, but roles nevertheless.

Since 2006, Google fragmented; that is, the idea of Google providing great benefit as an heir to the world of IBM and Microsoft gave Google senior managers a droit du seigneur. However, the revenue for the company came from the less elevated world of online advertising. Thus, there was a disconnect after the fraught early years, the legal battle prior to the IPO, and the development of the mostly automated systems to make sure Google captured revenue in the buying and selling and brokering of online advertising. After 2006, the split between what Google management believed it had created and the reality of the business was institutionalized. Google and its smart software were perceived as the one right way. Period. That way was a weird blend of groupthink and elite academic methods.

Also, Google failed to bring direction and focus to its products. I no longer remember how many messaging services Google offered. I cannot keep track of the company’s different and increasingly oblique investment arms. I have given up trying to recall the many new product and service incubators the company launched. I do remember that Google wanted to solve death. That, I believe, proved to be a difficult problem as if Loon balloons, digital games, and dealing with revenue challengers like Amazon and Facebook were no big deal. The fragmentation struck me as similar to the colored particles tossed during Holi, just with a more negative environmental effect. Googlers were vision impaired when it came to seeing what priorities to set.

Plus, from my point of view Google professionals lacked the ability to focus beyond getting more money, influence, and access to the senior managers. In short, Google demonstrated the inability to manage its people and the company. The last few years have been characterized by employee issues and other legal swamps. The management method has reminded me of my high school science club. Every member was a top student. Every member believed their view was correct. Every member believed that the traditional methods of teaching were stupid, boring, and irrelevant. The problem was that instead of chasing money and closeness to the “senior managers,” my high school science club was chasing validation and manifestation of superiority. That was baloney, of course, but what do 16-year-olds actually understand? Google’s management is similar to my high school science club.

Are there other factors? Sure, and these include a wildly fluctuating moral compass, confusing personal objectives with ethical objectives, and giving in to base instincts (baby making in the legal department, heroin on a yacht with a specialized contractor, and March Madness fun in Las Vegas).

Who will chronicle these Google gaffes? Perhaps someone will input a text string into ChatGPT to get the information many have either ignored, forgotten, or did not understand.

Stephen E Arnold, March 15, 2023

Synthetic Data: Yes, They Are a Thing

March 13, 2023

“Real” data — that is, data generated by humans — are expensive to capture, normalize, and manipulate. But those “real” data are important. Unfortunately, some companies have sucked up real data and integrated those items into products and services. Now regulators are awakening from a decades-long slumber and taking a look into the actions of certain data companies. More importantly, a few big data outfits are aware of [a] the costs and [b] the risks of real data.

Enter synthetic data.

If you are unfamiliar with the idea, navigate to “What is Synthetic Data? The Good, the Bad, and the Ugly.” The article states:

The privacy engineering community can help practitioners and stakeholders identify the use cases where synthetic data can be used safely, perhaps even in a semi-automated way. At the very least, the research community can provide actionable guidelines to understand the distributions, types of data, tasks, etc. where we could achieve reasonable privacy-utility tradeoffs via synthetic data produced by generative models.

Helpful, correct?
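To make the idea concrete, here is a deliberately minimal sketch of what “synthetic data produced by generative models” can mean in practice. The columns and numbers are invented for illustration; real synthetic data factories use far richer generative models (GANs, diffusion models, LLMs) and layer on privacy controls.

```python
# Minimal sketch, not any vendor's product: fit a very simple generative model
# (independent Gaussians per column) to a tiny "real" table, then sample
# synthetic records that mimic the marginal statistics without copying rows.
import numpy as np

rng = np.random.default_rng(42)

# Pretend "real" data: columns are age and annual spend (made-up values).
real = np.array([
    [34, 1200.0],
    [45, 2300.0],
    [29,  800.0],
    [52, 3100.0],
    [41, 1950.0],
])

# "Train" the generative model: per-column mean and standard deviation.
mu = real.mean(axis=0)
sigma = real.std(axis=0)

# Sample 1,000 synthetic records from the fitted distributions.
synthetic = rng.normal(loc=mu, scale=sigma, size=(1000, real.shape[1]))

print("real means:     ", mu)
print("synthetic means:", synthetic.mean(axis=0))
```

Even this toy version shows the appeal: once the model is fitted, synthetic records cost essentially nothing to produce, which is exactly why the factory economics matter.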

The article does not point out two things which I find of interest.

First, the amount of money a company can earn by operating efficient synthetic data factories is likely to be substantial. Like other digital products, the upside can be profitable and give the “owner” of the synthetic data market an IBM-type of old-school lock-in.

Second, synthetic data can be weaponized, either via intentional data poisoning or via algorithm shaping.

I just wanted to point out that a useful essay does not explore what may be two important attributes of synthetic data. Will regulators rise to the occasion? Unlikely.

Stephen E Arnold, March 13, 2023

DarkTrace: A Cyber Security Star Makes an Analyst Bayes at the Moon

March 10, 2023

DarkTrace is a cyber security firm which uses the Reverend Thomas Bayes’s math to thwart bad actors. “Fresh Clouds for Darktrace as New York Hedge Fund Claims Concerns Borne Out” states:

Quintessential Capital Management, which previously expressed its “fear that sales, margins, and growth rates may be overstated,” today said: “Darktrace’s recent financial results are consistent with our thesis: growth, new customers, cash generation and profits are all shrinking fast.”

Bayes works for some types of predictive applications. I think the disconnect between the technical methods of DarkTrace and the skeptical hedge fund may be related to the distance between what smart software can do and what marketers say the smart software does. In that space are perched investors, stakeholders, employees, and customers.
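For readers who want the Bayes part made tangible, here is a toy sketch of the kind of posterior update Bayesian methods perform. It is not DarkTrace’s implementation, and every probability below is invented for illustration.

```python
# Toy Bayes update: how likely is an email to be malicious, given that it
# contains a suspicious link? All numbers are invented for illustration.
p_malicious = 0.02                 # prior: 2% of email is malicious
p_link_given_malicious = 0.70      # chance of a suspicious link if malicious
p_link_given_benign = 0.05         # chance of a suspicious link if benign

# Total probability of observing a suspicious link (law of total probability).
p_link = (p_link_given_malicious * p_malicious
          + p_link_given_benign * (1 - p_malicious))

# Bayes' rule: posterior probability of "malicious" given the observation.
posterior = p_link_given_malicious * p_malicious / p_link
print(f"P(malicious | suspicious link) = {posterior:.2%}")  # roughly 22%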

What has caused a market downturn? The article suggests it may be a consequence of ChatGPT. Here’s a statement I noted:

The cybersecurity business said ChatGPT “may have helped increase the sophistication of phishing emails, enabling adversaries to create more targeted, personalized, and ultimately, successful attacks.” Darktrace has found that while the number of email attacks across its own customer base remained steady since ChatGPT’s release, those that rely on tricking victims into clicking malicious links have declined, while linguistic complexity, including text volume, punctuation, and sentence length among others, have increased, the firm said.

Is this a case of DarkTrace’s smart software being outfoxed by smarter software? I still believe the marketers bear the responsibility. Knowing exactly how DarkTrace works and the specific results the system can deliver is important. Marketers rarely share my bias. Now the claims of the collateral writers are insufficiently robust to blunt the skepticism of the tweeting analysts at Quintessential Capital Management.

Stephen E Arnold, March 10, 2023

Bing Begins, Dear Sundar and Prabhakar

March 9, 2023

Note: Not written by an artificial intelligence wonder system. The essay is the work of a certified dinobaby, a near-80-year-old fossil. The Purple Prose parts are made-up comments by me, the dinobaby, to help improve the meaning behind the words.

I think the World War 2 Dear John letter has been updated. Today’s version begins:

Dear Sundar and Prabhakar…

“The New Bing and Edge – Progress from Our First Month” by Yusuf Mehdi explains that Bing has fallen in love with marketing. The old “we are so like one another, Sundar and Prabhakar” is now:

“The magnetic Ms. OpenAI introduced me to her young son, ChatGPT. I am now going steady with that large language model. What a block of data! And I hope, Sundar and Prabhakar, we can still be friends. We can still chat, maybe at the high school reunion? Everyone will be there. Everyone. Timnit Gebru, Jerome Pesenti, Yan Lecun, Emily Bender, and you two, of course.”

The write up does not explicitly say these words. Here’s the actual verbiage from the marketing outfit also engaged in unpatchable security issues:

It’s hard to believe it’s been just over a month since we released the new AI-powered Bing and Edge to the world as your copilot for the web.  In that time, we have heard your feedback, learned a lot, and shipped a number of improvements.  We are delighted by the virtuous cycle of feedback and iteration that is driving strong Bing improvements and usage. 

A couple of questions: Is the word virtuous related to the word virgin? Pure, chaste, unsullied, and not corrupted by … advertising? Has it been a mere 30 days since Sundar and Prabhakar entered the world of Code Red? Were they surprised that their Paris comedy act drove attendees to Le Bar Bing? Is the copilot for the Web ready to strafe the digital world with Bing blasts?

Let’s look at what the love letter reports:

  • A million new users. What has the Google pulled in with its change in the curse word policy for YouTube?
  • More searches on Le Bing than before the tryst with ChatGPT. Will Google address relevance ranking of bogus ads for a Thai restaurant favored by a certain humanoid influencer?
  • A mobile app. Sundar and Prabhakar, what’s happening with your mobile push? Hasn’t revenue from the Play store declined in the last year? Declined? Yep. As in down, down, down.

Is Bing a wonder working relevance engine? No way.

Is Bing going to dominate my world of search and retrieval? For the answer, just call 1 800 YOU WISH, please.

Is Bing winning the marketing battle for smarter search? Oh, yeah.

Well, Sundar and Prabhakar, don’t let that Code Red flashing light disturb your sleep. Love and kisses, Yusuf Mehdi. PS: The high school reunion is coming up. Maybe we can ChatGPT?

Stephen E Arnold, March 9, 2023

Publishers Face Another Existential Threat Beyond Their Own Management Decisions

March 7, 2023

Existential threat, existential threat. I hear that from many executives. The principal existential threat is a company’s own management decisions. Short-term, context-free, and uninformed deciders miss the boat, the train, and the bus to organic revenue growth. If I read a news story, I learn about another senior executive playing fast and loose with rules, regulations, and ethical guidelines.

Today I read the clickbait infused headline: “Big Media Is Gearing Up for Battle with Google and Microsoft over AI Chatbots Using Their Articles for Training: We Are Actively Considering Our Options.” (The headline seems to be pandering to the Google, does it not?)

What is an existential threat? Here is a whack at a definition from Dictionary.com, a super duper source:

An existential threat is a threat to something’s very existence—when the continued being of something is at stake or in danger. It is used to describe threats to actual living things as well as to nonliving things, such as a country or an ideology.

I think the phrase has been extended to cover an action or process which could erode the revenues of a publisher.

The write up cited is, of course, behind a paywall. No existential threat for Business Insider … yet. I learned:

It’s a moment some publishers consider the most disruptive change they’ve seen to their industry since the dawn of the internet — and the threat is no less than existential. The worry is that if people can get thorough answers to their questions through these bots, they won’t need to visit content sites anymore, undermining media’s entire revenue model, which has already been battered by digital upheaval.

But here’s the paragraph that caught my attention. Remember that Rupert Murdoch and Fox News are in the midst of a conversation about dissemination of knowingly incorrect information. Remember that the New York Times is discussing in a positive manner its coverage of some individuals’ efforts to shift from male to female and other possible combinations. Yep, Rupert and the Gray Lady.

“AI is a new frontier with great opportunity, but it can’t replace the trust, independence, and integrity of quality journalism,” said Danielle Coffey, EVP and general counsel of the News/Media Alliance, a publisher trade organization whose members include The New York Times and Wall Street Journal publisher News Corp. “Without compensation, we lose the humanity that journalists bring to telling a story.”

The issue was the loss of advertising revenue. Nope, that money is not coming back. Now the issue is loss of a reason to buy a subscription to “real news” publications. Nope, those readers are unlikely to come back.

Why? How about convenience?

I subscribe to dead tree newspapers. If the paper edition arrives, it could be torn, wet, or folded incorrectly because maintenance of the paper feed rollers is just an annoyance when someone wants to get a coffee.

What’s the fix? The desired fix is the termination with extreme prejudice of the evil Googzilla and its fellow travellers: Amazon, Apple, Facebook, Microsoft, and probably a few others on publishers’ dart boards.

A few observations:

  • AI is not something new. Publishers have, as far as I know, been mostly on the sidelines in the AI refinement efforts over the last 50 years. Yes, that’s half a century.
  • The publishers want money. The “content” produced is simply a worm on a fish hook. Existential threat to revenue, yes. Death of publishers? Meh.
  • The costs of litigation with an outfit like Google are likely to make the CFOs of the publishing companies going after Googzilla and its fellow travellers unhappy. Why? The EU and the US government have not had a stellar track record of getting these digital outfits to return phone calls, let alone play by the rules.
  • Which outfits can pay the legal fees longer: Google and Microsoft or a group of publishers who seem to want Google traffic and whatever ad revenue can be had?

Net net: How about less existential threat talk and more use of plain English like “We want cash for content use”? I would ask why the publishers and their trade associations have not been in the vanguard of AI development. The focus seems to be on replacing humanoids with software to reduce costs. Søren Kierkegaard would be amused in my opinion.

Stephen E Arnold, March 7, 2023

SEO Fuels Smart Software

March 6, 2023

I read “Must Read: The 100 Most Cited Papers in 2022.” The principal finding is that Google-linked entities wrote most of the “important” papers. If one thinks back to Eugene Garfield’s citation analysis work, a frequently cited paper is either really good, or it is an intellectual punching bag. Getting published is often not enough to prove that an academic is smart. Getting cited is the path to glory, tenure, and possibly an advantage when chasing grants.

Here’s a passage which explains the fact that Google is “important”:

Google is consistently the strongest player followed by Meta, Microsoft, UC Berkeley, DeepMind and Stanford.

Keep in mind that Google and DeepMind are components of Alphabet.

Why’s this important?

  1. There is big, big money in selling/licensing models and data sets down the road
  2. Integrating technology into other people’s applications is a step toward vendor lock in and surveillance of one sort or another
  3. Analyzing information about the users of a technology provides a useful source of signals about [a] what to buy or invest in, [b] copy, or [c] acquire

If the data in this “100 Most Cited” article are accurate, or at least close enough for horseshoes, Google and OpenAI may be playing a clever game not unlike what the Cambridge Analytica crowd did.

Implications? Absolutely. I will talk about a few in my National Cyber Crime Conference lecture about OSINT Blindspots. (Yep, my old term has new life in the smart software memesphere.)

Stephen E Arnold, March 6, 2023

Bard Is More Than You and I Know

March 6, 2023

I have to hand it to the real news outfit CNBC; the reporters have a way of getting interesting information about the innards of Googzilla. A case in point is “Google Execs Tell Employees in Testy All Hands Meeting That Bard A.I. Isn’t Just about Search.” Who knew? I thought Google was about online advertising and increasing revenue. Therefore, my dinobaby mind says, Bard is part of the Google; it follows that Bard is about advertising and maybe – just maybe – will have an impact on search. Nope.

I learned from CNBC:

In an all-hands meeting on Thursday (March 2, 2023), executives answered questions from Dory, the company’s internal forum, with most of the top-rated issues related to the priorities around Bard… [emphasis added]

Gee, I wonder why?

The write up pointed out:

employees criticized leadership, most notably CEO Sundar Pichai, for the way it handled the announcement of Bard

Oh, the Code Red, the Paris three-star which delivered a froid McDo. (Goodness, I almost typed “faux”.)

CNBC’s article added:

Staffers called Google’s initial public presentation [in Paris] “rushed,” “botched” and “un-Googley.”

Yeah, maybe faux is the better word, but I like the metaphor of a half cooked corporatized burger as well.

And the guru of Google Search, Prabhakar Raghavan, stepped out of the spotlight. A Googler named Jack Krawczyk, the product lead for Bard, filled in for the crowd favorite from Verity and Yahoo. Mr. Krawczyk included in his stand-up routine one-liners like this:

Bard is not search.

Mr. Krawczyk must have concluded that his audience was filled with IQ 100 types from assorted countries with lousy educational systems. I thought Googlers were exceptional. Surely Googlers could figure out what Bard could do. (Perhaps that is the reason for the employees’ interest in smart software.)

Mr. Krawczyk quipped:

“It’s an experiment that’s a collaborative AI service that we talked about … The magic that we’re finding in using the product is really around being this creative companion to helping you be the sparkplug for imagination, explore your curiosity, etc.”

CNBC pointed out that Mr. Krawczyk suggested the Google had “built a new feature for internal use called ‘Search It.’” That phrase reminded me of universal search which, of course, means separate queries for Google News, Google Scholar, Google Maps, et al. Yeah, universal search was a snappy marketing phrase, but search has been a quite fragmented, relevance-blind information retrieval system.

The high-value question, in my opinion, is: Will “Search It” have the same cachet as ChatGPT?

Microsoft seems to be an effective marketer of to-be smart applications and services. Google, on the other hand, hopes I remember MUM or Ernie (not the cartoon character).

Google, the Code Red outfit, is paddling its very expensive canoe with what appears to be desperation.

Net net: Google has not solved death and I am not sure the company will resolve the Microsoft / ChatGPT mindshare juggernaut. Here’s my contribution to the script of the next Sundar and Prabhakar Comedy Show: “We used to think Google was indecisive. But now we’re not so sure.”

Stephen E Arnold, March 6, 2023

OpenAI: Googzilla Gets Its Tail Set on Fire

March 2, 2023

Remember those? High school locker rooms. The crack of a wet towel. A howl. And then a squeal like the crappy brakes on the DC Metro. Ah, memories. What happens if one tries to set Googzilla’s tail on fire? I think we are going to find out.

image

The image of a small entity (OpenAI) holding a blazing flame to the rear end of a large dinosaur (maybe Google’s Tyrannosaurus Rex before extinction). Ouch. Let’s see how Googzilla dances to the new smash hit “Code Red or Dead.” The refrain is, “Dance, dinosaur, dance.” The art was created by Scribble Diffusion. I assume the image is registered with whatever government agency is responsible for intellectual property.

“OpenAI Opens ChatGPT Floodgates with Dirt-Cheap API” reports:

After a limited trial OpenAI has unleashed its ChatGPT and Whisper models on developers, who can now integrate chatbot interaction and speech-to-text conversion into their own applications through API calls.…
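For context, “integrate chatbot interaction … through API calls” boils down to something like the minimal sketch below, written against the openai Python package as it existed in early 2023 (v0.27.x). The prompt is made up, and an API key is assumed.

```python
# Minimal sketch of a ChatGPT API call circa early 2023 (openai v0.27.x).
# The prompt is invented for illustration; an OPENAI_API_KEY is required.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Summarize why cheap APIs attract developers."},
    ],
)

# The reply text lives inside the first choice's message.
print(response["choices"][0]["message"]["content"])
```

A few lines like these, billed at a fraction of a cent per thousand tokens, are exactly the floodgates the headline describes.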

I think the OpenAI smart software is like the cake my mother baked when I was 10 years old. I think the phrase “half baked” captures the culinary marvel she produced. We were living in Brazil at the time, and I know that my mother and the Brazilian street vendors had trouble communicating when it came to ingredients. Oh, well. I survived.

I think Googzilla will survive. The company is in Code Red mode because of Microsoft’s slick marketing move. The Sundar and Prabhakar Comedy Show has not regained top billing on the search marketing vaudeville circuit. Now the OpenAI crowd is whipping up a frenzy of innovation among the true believers in the money making potential of ChatGPT.

“Ouch,” says Googzilla. “What’s that funny smell? Yikes. My tail is on fire. Code Redder. Code Redder.”

The article contains information suggesting that OpenAI cannot make money on discount API access to a service which is not without costs. Note this statement, please:

Max Woolf, a data scientist, in an online post, observes that the API pricing is extraordinarily low. “I have no idea how OpenAI can make money on this,” he said. “This has to be a loss-leader to lock out competitors before they even get off the ground.”

Who cares? The point is marketing, not making money. Remember. I ate the half baked cake, loved it and burned the experience into my memory. Yep, Code Redder. Dance, dinosaur, dance.

Stephen E Arnold, March 2, 2023
