Google: Smart Software Confusion

June 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I cannot understand. Not only am I old; I am a dinobaby. Furthermore, I am like one of William James’s straw men: easy to knock down or set on fire. Bear with me this morning.

I read “Google Skeptical of AI: Google Doesn’t Trust Its Own AI Chatbots, Asks Employees Not to Use Bard.” The write up asserts as “real” information:

It seems that Google doesn’t trust any AI chatbot, including its own Bard AI bot. In an update to its security measures, Alphabet Inc., Google’s parent company has asked its employees to keep sensitive data away from public AI chatbots, including their own Bard AI.

The go-to word for the Google in the last few weeks is “trust.” The quote points out that Google doesn’t “trust” its own smart software. Does this mean that Google does not “trust” that which it created and is making available to its “users”?


MidJourney, an interesting but possibly insecure and secret-filled smart software system, generated this image of Googzilla as a gatekeeper. Are gatekeepers in place to make money, control who does what, and record the comings and goings of people, data, and content objects?

As I said, I am a dinobaby, and I think I am dumb. I don’t follow the circular reasoning; for example:

Google is worried that human reviewers may have access to the chat logs that these chatbots generate. AI developers often use this data to train their LLMs more, which poses a risk of data leaks.

Now the ante has gone up. The issue is one of Google protecting itself from its own software. Furthermore, if the statement is accurate, I take the words to mean that Google’s Mandiant-infused, super duper security trooper cannot protect Google from itself.

Can my interpretation be correct? I hope not.

Then I read “This Google Leader Says ML Infrastructure Is Conduit to Company’s AI Success.” The “this” refers to an entity called Nadav Eiron, a Stanford PhD and Googley wizard. The headline’s “is conduit” baffles me because I thought “conduit” was a noun that required an article. That goes to support my contention that I am a dumb humanoid.

Now let’s look at the text of this write up about Google’s smart software. I noted this passage:

The journey from a great idea to a great product is very, very long and complicated. It’s especially complicated and expensive when it’s not one product but like 25, or however many were announced that Google I/O. And with the complexity that comes with doing all that in a way that’s scalable, responsible, sustainable and maintainable.

I recall someone telling me when I worked at a Fancy Dan blue chip consulting firm, “Stephen, two objectives are zero objectives.” Obviously Google is orders of magnitude more capable than the bozos at the consulting company. Google can do 25 objectives. Impressive.

I noted this statement:

we created the OpenXLA [an open-source ML compiler ecosystem co-developed by AI/ML industry leaders to compile and optimize models from all leading ML frameworks] because the interface into the compiler in the middle is something that would benefit everybody if it’s commoditized and standardized.

I think this means that Google wants to be the gatekeeper or man in the middle.
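What does a “compiler in the middle” look like in practice? Here is a minimal sketch, and a hedged one: JAX is one of the frameworks that already lowers Python functions through XLA, and the toy model below is my illustration, not anything Google or Mr. Eiron demonstrated.

```python
import jax
import jax.numpy as jnp

@jax.jit  # trace the Python function and hand it to XLA, the compiler in the middle
def predict(weights, inputs):
    return jnp.tanh(inputs @ weights)

weights = jnp.ones((8, 2))
inputs = jnp.ones((4, 8))

# The first call compiles via XLA; later calls reuse the compiled program.
print(predict(weights, inputs))
```

Whoever standardizes that hand-off owns the toll booth between every framework above it and every chip below it.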

Now let’s consider the first article cited. Google does not want its employees to put sensitive data into smart software, including its own, because the software cannot be trusted.

Is it logical to conclude that Google and its partners should use software which is not trusted? Should Google and its partners not use smart software because it is not secure? Given these constraints, how does Google make advances in smart software?

My perception is:

  1. Google is not sure what to do
  2. Google wants to position its untrusted and insecure software as the industry standard
  3. Google wants to preserve its position in a workflow to maximize its profit and influence in markets

You may not agree. But when articles present messages which are alarming and clearly focused on market control, I turn up my skepticism knob. By the way, the headline should be “Google’s Nadav Eiron Says Machine Learning Infrastructure Is a Conduit to Facilitate Google’s Control of Smart Software.”

Stephen E Arnold, June 19, 2023

The Value of AI and the De-Valuing of Humanoids: Long Lines for Food Stamps Ahead?

June 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

AI, AI, AI-Yai-Ai. That could be a country western lyric. Maybe it is? I am not a fan of Grand Old Opry-type entertainment. I do enjoy what I call “Dark AI humor.” If the flow of amusing crAIziness continues, could it become a staple of comedy shows on Tubi or Pluto?

How many people live (theoretically) in the United States? The answer, according to an unimpeachable source, is 336,713,783. I love the precision of smart search software.

Consider the factoid in “300 Million Jobs Will Be Replaced, Diminished by Artificial Intelligence, Report Warns.” If we assume the population of the US is 337 million (sorry You.com), this works out to a trivial 37 million people who will have been promoted by smart software to the “Get Paycheck” social class. I may be overstating the “Paycheck Class,” but this is AI land, so numbers are fuzzified because you know… probability.

The write up points out:

Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work.

Disruption rocks on.

Now consider the information in “People Who Work with AI Are More Likely to Be Lonely, Suffer from Insomnia and Booze after Work, Study Finds.” The write up asserts:

Analysis revealed employees who interacted more frequently with AI systems were more likely to experience loneliness, insomnia and increased after-work alcohol consumption. But they also found these employees were more likely to offer to help their coworkers – a response that may be triggered by the need for social contact, the team said. Other experiments in the US, Indonesia and Malaysia, involving property management companies and a tech company, yielded similar results.

Let’s assume both articles contain actual factual information. Imagine managing a group of individuals in the top tier: still employed, but lonely, sleepless, and drinking after work. Now think about those in the lower tier, the ones smart software has eased out of their jobs. Any fancy management ideas? I have none.

Exciting for sure.

Stephen E Arnold, June 16, 2023

Newsflash: Common Sense Illuminates Friendly Fish for Phishers

June 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Here’s a quick insider threat and phishing victim test: [a] Are you a really friendly, fraternity or sorority social officer type of gregarious humanoid? [b] Are you a person who says “Yes” to almost any suggestion a friend or stranger makes to you? [c] Are you curious about emails offering big bucks, free prizes, or great deals on avocado slicers?

If you resonated with a, b, or c, researchers have some news for you.

“Younger, More Extroverted, and More Agreeable Individuals Are More Vulnerable to Email Phishing Scams” reports:

… the older you are, the less susceptible you are to phishing scams. In addition, highly extroverted and agreeable people are more susceptible to this style of cyber attack. This research holds the potential to provide valuable guidance for future cybersecurity training, considering the specific knowledge and skills required to address age and personality differences.

The research summary continues:

The results of the current study support the idea that people with poor self-control and impulsive tendencies are more likely to misclassify phishing emails as legitimate. Interestingly, impulsive individuals also tend to be less confident in their classifications, suggesting they are somewhat aware of their vulnerability.

It is good to be an old, irascible, skeptical dinobaby after all.

Stephen E Arnold, June 16, 2023

AI and Non-State Actors

June 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“AI Weapons Need a Safe Back Door for Human Control” contains a couple of interesting statements.

The first is a quote from Hugh Durrant-Whyte, director of the Centre for Translational Data Science at the University of Sydney. He allegedly said:

China is investing arguably twice as much as everyone else put together. We need to recognize that it genuinely has gone to town. If you look at the payments, if you look at the number of publications, if you look at the companies that are involved, it is quite significant. And yet, it’s important to point out that the US is still dominant in this area.

For me, the important point is the investment gap. Perhaps the US should be more aggressive in identifying and funding promising smart software companies?

The second statement which caught my attention was:

James Black, assistant director of defense and security research group RAND Europe, warned that non-state actors could lead in the proliferation of AI-enhanced weapons systems. “A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware…

Several observations:

  1. Smart software ups the ante in modern warfare, intelligence, and law enforcement activities
  2. The smart software technology has been released into the wild. As a result, bad actors have access to advanced tools
  3. The investment gap is important, but the need for skilled smart software engineers, mathematicians, and support personnel is critical in the US. University research departments are, in my opinion, less and less productive. The concentration of research in the hands of a few large publicly traded companies suggests that military, intelligence, and law enforcement priorities will be ignored.

Net net: China, personnel, and institutional biases require attention from senior officials. These issues are not Twitter-scale fooling around. More is at stake. Urgent action is needed, which may be uncomfortable for fans of TikTok and expensive dinners in Washington, DC.

Stephen E Arnold, June 16, 2023

Is Smart Software Above Navel Gazing: Nope, and It Does Not Care

June 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Synthetic data. Statistical smoothing. Recursive methods. When we presented our lecture “OSINT Blindspots” at the 2023 National Cyber Crime Conference, the audience perked up. The terms might have been familiar, but our framing caught the more than 100 investigators’ attention. The problem my son (Erik) and I described was butt simple: Faked data will derail a prosecution if an expert witness explains that machine-generated output may be wrong.

We provided some examples. We described a respected executive who obfuscates his “real” business behind a red-herring business. We profiled how publicity about a fervid Christian adherence to God’s precepts overshadowed a Ponzi scheme. We explained how an American living in Eastern Europe openly flouts social norms in order to distract authorities from an encrypted email business set up to allow easy, seamless communication for interesting people. And we included more examples.


An executive at a big time artificial intelligence firm looks over his domain and asks himself, “How long will it take for the boobs and boobettes to figure out that our smart software is wonky?” The illustration was spit out by the clever bits and bytes at MidJourney.

What’s the point in this blog post? Who cares besides analysts, lawyers, and investigators who have to winnow facts which are verifiable from shadow or ghost information activities?

It turns out that a handful of academics seem to have an interest in information manipulation. Their angle of vision is broader than my team’s. We focus on enforcement; the academics focus on tenure or getting grants. That’s okay. Different points of view lead to interesting conclusions.

Consider this academic and probably tough to figure out illustration from “The Curse of Recursion: Training on Generated Data Makes Models Forget”:

[Figure from “The Curse of Recursion: Training on Generated Data Makes Models Forget” omitted]

A less turgid summary of the researchers’ findings appears at this location.

The main idea is that gee-whiz methods like Snorkel and small language models have an interesting “feature”: they forget. As these models ingest fake data, they drift, get lost, or go off the rails. Synthetic cloth, unlike natural cotton, looks like shirt material. But on a hot day, those super duper modern fabrics can cause a person to perspire and emit unusual odors.

The authors introduce and explain “model collapse.” I am no academic. My interpretation of the glorious academic prose is that the numerical recipes, systems, and methods don’t work like the nifty demonstrations. In fact, over time, the models degrade. The hapless humanoids who are dependent on these systems lack the means to figure out what’s on point and what’s incorrect. The danger, obviously, is that clueless and lazy users of smart software make more mistakes in judgment than they otherwise would.
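For the hapless humanoids, a dinobaby-grade toy may make the idea concrete. This sketch is mine, not the authors’ experiment: a “model” that memorizes the empirical distribution and generates by resampling it, with each generation trained only on the previous generation’s output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data drawn from the real distribution.
data = rng.normal(size=1_000)

for gen in range(1, 31):
    # A crude "model": memorize the training set, then "generate"
    # by sampling from it. The next model trains on that output.
    data = rng.choice(data, size=1_000, replace=True)
    if gen % 5 == 0:
        print(f"generation {gen:2d}: {len(np.unique(data)):4d} distinct values remain")
```

Distinct values can only dwindle, and the extreme values, once lost, never come back. That is the toy version of the tails disappearing and the model mis-perceiving the underlying task.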

The paper includes fancy mathematics and more charts which do not exactly deliver on the promise that a picture is worth a thousand words. Let me highlight one statement from the journal article:

Our evaluation suggests a “first mover advantage” when it comes to training models such as LLMs. In our work we demonstrate that training on samples from another generative model can induce a distribution shift, which over time causes Model Collapse. This in turn causes the model to mis-perceive the underlying learning task. To make sure that learning is sustained over a long time period, one needs to make sure that access to the original data source is preserved and that additional data not generated by LLMs remain available over time. The need to distinguish data generated by LLMs from other data raises questions around the provenance of content that is crawled from the Internet: it is unclear how content generated by LLMs can be tracked at scale. One option is community-wide coordination to ensure that different parties involved in LLM creation and deployment share the information needed to resolve questions of provenance. Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that was crawled from the Internet prior to the mass adoption of the technology, or direct access to data generated by humans at scale.

Bang on.

What the academics do not point out are some “real world” business issues:

  1. Solving this problem costs money; the point of synthetic and machine-generated data is to reduce costs. Cost reduction wins.
  2. Furthermore, fixing up models takes time, and delays are not part of the game plan for companies eager to keep indexes fresh and dominate a market which Accenture pegs as worth trillions of dollars. (See this wild and crazy number.)
  3. Fiddling around to improve existing models is secondary to capturing the hearts and minds of those eager to worship a few big outfits’ approach to smart software. No one wants to see the problem because that takes mental effort. Those inside one of the firms vying to own information framing don’t want to be the nail that sticks up. Not only do the nails get pounded down, they are forced to leave the platform. I call this the Dr. Timnit Gebru effect.

Net net: Good paper. Nothing substantive will change in the short or near term.

Stephen E Arnold, June 15, 2023

Two Creatures from the Future Confront a Difficult Puzzle

June 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I was interested in a suggestion a colleague made to me at lunch. “Check out the new printed World Book encyclopedia.”

I replied, “A new one? Printed? Doesn’t information change quickly today?”

My lunch colleague said, “That’s what I have heard.”

I offered, “Who wants printed, hard-to-change content objects? Where’s the fun in sneaky or sockpuppet edits? Do you really want to go back to non-fluid information?”

My hungry debate opponent said, “What? Do you mean misinformation is good?”

I said, “It’s a digital world. Get with the program.”

Navigate to World Book.com and check out the 10-page sample about dinosaurs. When I scanned the entry, there was no information about dinobabies. I was disappointed because the dinosaur segment is bittersweet for these reasons:

  1. The printed encyclopedia is a dinosaur of sorts, and an expensive one to produce and print at that
  2. As a dinobaby, I was expecting an IBM logo or maybe an illustration of a just-RIF’ed IBM worker talking with her attorney about age discrimination
  3. Those who want to fill a bookshelf can buy books at a secondhand bookstore or connect with a zippy home designer to make the shelf tasteful. I think there is even wallpaper printed to look like books on a shelf as an alternative.


Two aliens are trying to figure out what a single volume of a World Book encyclopedia contains. I assume the creatures are holding volume 6, “I,” the one with information about the Internet. The image comes from the creative bits at MidJourney.

Let me dip into my past. Ah, you are not interested? Tough. Here we go down memory lane:

In 1953 or 1954, my father had an opportunity to work in Brazil. Off our family went. One of the must-haves was a set of World Book encyclopedias. The covers were brown; the pictures were mostly black and white; and the information was, according to my parents, accurate.

The schools in Campinas, Brazil, at that time used one language. Portuguese. No teacher spoke English. Therefore, after failing every class except mathematics, my parents decided to get me a tutor. The course work was provided by something called Calvert in Baltimore, Maryland. My teacher would explain the lesson, watch me read, ask me a couple of questions, and bail out after an hour or two. That lasted about as long as my stint in the Campinas school near our house. My tutor found himself on the business end of a snake. The snake lived; the tutor died.

My father — a practical accountant — concluded that I should read the World Book encyclopedia. Every volume. I think there were about 20 plus a couple of annual supplements. My mother monitored my progress and made me write summaries of the “interesting” articles. I recall that interesting or not, I did one summary a day and kept my parents happy.

I hate World Books. I was in the fourth or fifth grade. Campinas had great weather. There were many things to do. Watch the tarantulas congregate in our garage. Monitor the vultures circling my mother when she sunbathed on our deck. Kick a soccer ball when the students got out of school. (I always played. I sucked, but I had a leather, size five ball. Prior to our moving to the neighborhood, the kids my age played soccer with a rock wrapped in rags. The ball was my passport to an abuse-free stint in rural Brazil.)

But a big chunk of my time was gobbled by the yawning white maw of a World Book.

When we returned to the US, I entered the seventh grade. No one at the public school in Illinois asked about my classes in Brazil. I just showed up in Miss Soape’s classroom and did the assignments. I do know one thing for sure: I was the only student in my class who did not have to read the assigned work. Reading the World Book granted me a free ride through grade school, high school, and the first couple of years at college.

Do I recommend that grade school kids read the World Book cover to cover?

No, I don’t. I had no choice. I had no teacher. I had no radio because the electricity was on only a few hours a day. There was no TV because there were no broadcasts in Campinas. There was no English language anything. Thus, the World Book, which I hate, was the only game in town.

Will I buy the print edition of the 2023 World Book? Not a chance.

Will other people? My hunch is that sales will be a slog outside of library acquisitions and a few interior decorators trying to add color to a client’s bookshelf.

I may be a dinobaby, but I have figured out how to look up information online.

The book thing: I think many young people will be as baffled by an encyclopedia as the two aliens in the illustration.

By the way, the full set is about $1,200. A cheap smartphone can be had for about $250. What will kids use to look up information? If you said the printed encyclopedia, you are a rare bird. If you move to a remote spot on earth, you will definitely want to lug a set with you. Starlink can be expensive.

Stephen E Arnold, June 14, 2023

Can You Create Better with AI? Sure, Even If You Are Picasso or a TikTok Star

June 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Do we need to worry about how generative AI will change the world? Yes, but no more than we had to fear automation, the printing press, horseless carriages, and the Internet. The current technology revolution is analogous to the industrial revolutions and technological advances of past centuries. University of Chicago history professor Ada Palmer is aware of humanity’s cyclical relationship with technology, and she discusses it in her Microsoft Unlocked piece: “We Are An Information Revolution Species.”

Palmer explains that the human species has been living in an information revolution for twenty generations. She provides historical examples of how people bemoan such changes. The changes arguably remove the “art” from tasks. These tasks, however, are simplified so that humans can create more, which also frees up humanity’s time to conquer harder problems. Changes in technology spur a democratization of information. They also mean that jobs change, so humans need to adapt their skills for continual survival.

Palmer says that AI is just another tool as humanity progresses. She asserts that the bigger problems are outdated systems that no longer serve the current society. While technology has evolved, so has humanity:

“This revolution will be faster, but we have something the Gutenberg generations lacked: we understand social safety nets. We know we need them, how to make them. We have centuries of examples of how to handle information revolutions well or badly. We know the cup is already leaking, the actor and the artist already struggling as the megacorp grows rich. Policy is everything. We know we can do this well or badly. The only sure road to real life dystopia is if we convince ourselves dystopia is unavoidable, and fail to try for something better.”

AI does need a social safety net so it does not transform into a sentient computer hell-bent on world domination. Palmer should point out that humans learn from their imaginations too. Star Trek or 2001: A Space Odyssey anyone?

A digital Sistine Chapel from a savant in Cairo, Illinois. Oh, right, Cairo, Illinois, is gone. But nevertheless…

Whitney Grace, June 15, 2023

Is This for Interns, Contractors, and Others Whom You Trust?

June 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Not too far from where my office is located, an esteemed health care institution is in its second month of a slight glitch. The word in Harrod’s Creek is that the security methods in use at a major hospital were — how shall I frame this — a bit like the 2022-2023 University of Kentucky basketball team’s defense. In Harrod’s Creek lingo, this statement would translate to standard English as “them ‘Cats did truly suck.”


A young temporary worker looks at her boss. She says, “Yes, I plugged a USB drive into this computer because I need to move your PowerPoint to a different machine to complete the presentation.” The boss says, “Okay, you can use the desktop in my office. I have to go to a cyber security meeting. See you after lunch. Text me if you need a password to something.” The illustration for this hypothetical conversation emerged from the fountain of innovation known as MidJourney.

The chatter about assorted Federal agencies’ cyber personnel meeting with the institution’s own cyber experts is flitting around. When multiple Federal entities park their unobtrusive and sometimes large black SUVs close to the main entrance, someone is likely to notice.

This short blog post, however, is not about the lame duck cyber security at the health care facility. (I would add an anecdote about an experience I had in 2022. I showed up for a check-up at a unit of the health care facility. Upon arriving, I pronounced my date of birth and my name. The professional on duty said, “We have an appointment for your wife and we have her medical records.” Well, that was a trivial administrative error: wrong patient, confidential information shipped to another facility, and zero idea how that could happen. I made the appointment myself and provided the required information. That’s great computer systems and super duper security in my book.)

The question at hand, however, is: “How can a profitable, marketing-oriented, big-time-in-its-own-mind health care outfit suffer a catastrophic security breach?”

I shall point you to one possible pathway: Temporary workers, interns, and contractors. I will not mention other types of insiders.

Please point your browser to Hak5.org and read about the USB Rubber Ducky. With a starting price of US$80, this USB stick can accomplish some interesting actions. The marketing collateral explains:

Computers trust humans. Humans use keyboards. Hence the universal spec — HID, or Human Interface Device. A keyboard presents itself as a HID, and in turn it’s inherently trusted as human by the computer. The USB Rubber Ducky — which looks like an innocent flash drive to humans — abuses this trust to deliver powerful payloads, injecting keystrokes at superhuman speeds.

With the USB Rubber Ducky, one can:

  • Install backdoors
  • Covertly exfiltrate documents
  • Capture credentials
  • Execute compound actions

Plus, if there is a USB port, the Rubber Ducky will work.
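Note that the “superhuman speeds” in the marketing copy are also the device’s tell. Here is a minimal defensive sketch; the 30-millisecond threshold and the event format are my assumptions, not any real endpoint product’s API:

```python
from statistics import median

HUMAN_FLOOR_MS = 30  # assumption: sustained inter-key gaps under ~30 ms are not human typing

def looks_like_injection(timestamps_ms: list[float]) -> bool:
    """Flag a keystroke stream arriving at superhuman speed."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return bool(gaps) and median(gaps) < HUMAN_FLOOR_MS

burst = [0, 4, 9, 13, 18, 22, 27, 31]        # Ducky-style payload, a few ms per keystroke
typing = [0, 140, 310, 420, 610, 750, 930]   # plausible human typing

print(looks_like_injection(burst))   # True
print(looks_like_injection(typing))  # False
```

A patient bad actor can simply slow the payload down, of course; timing checks raise the bar, they do not close the port.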

I mention this device because it may not be too difficult for a bad actor to find ways into certain types of super duper cyber secure networks. Plus, temporary workers and even interns welcome a coffee in an organization’s cafeteria or a nearby coffee shop. Kick in a donut and a smile, and someone may plug the drive in for free!

Stephen E Arnold, June 14, 2023

Smart Software: The Dream of Big Money Raining for Decades

June 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The illustration — from the crafty zeros and ones at MidJourney — depicts a young computer scientist reveling in the cash generated from his AI-infused innovation.


For a budding wizard, the idea of cash falling around the humanoid is invigorating. It is called a “coder’s high” or Silicon Valley fever. There is no known cure, even when FTX-type implosions doom a fellow traveler to months of litigation and some hard time among individuals typically not in an advanced math program.

Where does the cyclone of cash originate?

I would submit that articles like “Generative AI Revenue Is Set to Reach US$1.3 Trillion in 2032” are like catnip to a typical feline living amidst the cubes at a Google-type company or in the apartment of a significant other adjacent to a blue chip university in the US.

Here’s the chart that makes it easy to see the slope of the growth:

[Chart omitted: generative AI revenue projected to reach US$1.3 trillion by 2032]

I want to point out that this confection is the result of the mid tier outfit IDC and the fascinating Bloomberg terminal. Therefore, I assume that it is rock solid, based on in-depth primary research and deep analysis by third-party consultants. I do, however, reserve the right to think that the chart could have been produced by an intern eager to hit the gym and grab a sushi special before the good stuff was gone.
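Readers can check the slope themselves. The arithmetic below assumes a 2022 base of roughly US$40 billion, the figure I associate with the Bloomberg write-ups; treat that base as my assumption, not a verified number.

```python
base_2022 = 40e9       # assumed 2022 generative AI revenue, USD (illustrative)
target_2032 = 1.3e12   # the headline 2032 figure, USD
years = 10

cagr = (target_2032 / base_2022) ** (1 / years) - 1
print(f"implied growth rate: {cagr:.1%} per year")  # roughly 42%
```

About 42 percent a year, every year, for a decade. Spreadsheet fever is a real affliction.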

Will generative AI hit the $1.3 trillion target in nine years? In the hospital for recovering victims of spreadsheet fever, the coder’s high might slow recovery. But many believe — indeed, fervently hope — to experience the realities of William James’s mystics in his Varieties of Religious Experience.

My goodness, the vision of money from Generative AI is infectious. So regulate mysticism? Erect guard rails to prevent those with a coder’s high from driving off the Information Superhighway?

Get real.

Stephen E Arnold, June 12, 2023

Can One Be Accurate, Responsible, and Trusted If One Plagiarizes?

June 14, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_t[1]Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Now that AI is such a hot topic, tech companies cannot afford to hold back due to small flaws, like a tendency to spit out incorrect information. One behemoth seems to have found a quick fix for that particular wrinkle: simple plagiarism. Eager to incorporate AI into its flagship Search platform, Google recently released a beta version to select users. Forbes contributor Matt Novak was among the lucky few and shares his observations in “Google’s New AI-Powered Search Is a Beautiful Plagiarism Machine.”

The author takes us through his query and results on storing live oysters in the fridge, complete with screenshots of the Googlebot’s response. (Short answer: you can for a few days if you cover them with a damp towel.) He highlights passages that were lifted from websites, some with and some without tiny tweaks. To be fair, Google does link to its source pages alongside the pilfered passages. But why click through when you’ve already gotten what you came for? Novak writes:

“There are positive and negative things about this new Google Search experience. If you followed Google’s advice, you’d probably be just fine storing your oysters in the fridge, which is to say you won’t get sick. But, again, the reason Google’s advice is accurate brings us immediately to the negative: It’s just copying from websites and giving people no incentive to actually visit those websites.

Why does any of this matter? Because Google Search is easily the biggest driver of traffic for the vast majority of online publishers, whether it’s major newspapers or small independent blogs. And this change to Google’s most important product has the potential to devastate their already dwindling coffers. … Online publishers rely on people clicking on their stories. It’s how they generate revenue, whether that’s in the sale of subscriptions or the sale of those eyeballs to advertisers. But it’s not clear that this new form of Google Search will drive the same kind of traffic that it did over the past two decades.”

Ironically, Google’s AI may shoot itself in the foot by reducing traffic to informative websites: it needs their content to answer queries. Quite the conundrum it has made for itself.

Cynthia Murrell, June 14, 2023
