Hoot Hoot Hoot: A Xoogler Pushes the Help Button

May 20, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The Daily Express US (?) published a remarkable story: “Former Google VP Issues Horror AI Warning As Technology Set to Leave Millions Jobless.” That’s a catchy assertion. Who is the Xoogler (that’s a former Googler for those who don’t know) who is mashing the Red Alert button? It is Geoffrey Hinton, who is a Big Wheel in the Land of AI.

Like a teacher with an out-of-control class, help is needed. Unfortunately, pressing the big red button is performative. It is too late to get the class under control. Does AI behave like these kids? Thanks, MSFT Copilot. Good enough.

He believes that some entity has to provide a universal basic income to those people who are unable to find work because AI ate their jobs. The acronym UBI in the vernacular of a dinobaby means welfare. But those younger than I will interpret the UBI idea as something that “they” must provide.

The write up quotes the computer and AI wizard as opining:

"If you pay everybody a universal basic income, that solves the problem of them starving and not being able to pay the rent but that doesn’t solve the self-respect problem."

I like the reference to self-respect. I have not encountered too many examples in the last day or so. I have choked off the flood of “information” about the assorted trials of a former elected official, the hooligan trashing of Macy’s stores, and the arrest and un-arrest of a certain celebrity golfer. That’s enough of the self-respect thing for me.

The write up continues:

He added: "I am very worried about AI taking over lots of mundane jobs. That should be a good thing. It’s going to lead to a big increase in productivity, which leads to a big increase in wealth, and if that wealth was equally distributed that would be great, but it’s not going to be. In the systems we live in, that wealth is going to go to the rich and not to the people whose jobs get lost, and that’s going to be very bad for society, I believe. It’s going to increase the gap between rich and poor, which increases the chances of right-wing populists getting elected."

Okay, that’s an interesting moment of insight from one of the people who assisted in the creation of this sprint to societal change.

I find it interesting that technology marches forward in a way that prevents smart people from peering down the road from a vantage point defined by their computer monitor and lab partners. The bird’s-eye view of a technology like AI is of interest only when the individual steps away from a Google-type outfit.

AI can hallucinate. I think it is clear that the wizards “inventing” smart software also hallucinate within their digital constructs.

What happens when the hallucinogen wears off? For Dr. Hinton it is time to call for help. I assume the UBI help will arrive from “the government.” Will “the government” listen, get organized, and take action? Dr. Hinton, like some smart software, might be experiencing what some of his AI colleagues call hallucinating. Am I surprised? Nope. Wizards are quirky.

Stephen E Arnold, May 20, 2024

Googzilla Versus OpenAI: Moving Up to Pillow Fighting

May 17, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Mike Tyson is dressed in a Godzilla outfit. He looks like a short but quite capable Googzilla. He is wearing a Google hat. (I have one, but it is soiled. Bummer.) Googzilla is giving the stink eye to Sam AI-Man, who has followed health routines recommended by Huberman Lab and Anatoly, the fellow who hawks supplements after shaming gym brutes while dressed as a normcore hero.

Sam AI-Man asks an important question. Googzilla seems to be baffled. But the cane underscores that he is getting old for a thunder lizard selling online advertising. Thanks, MSFT Copilot. How are the security initiatives coming along? Oh, too bad.

Now we have the first exhibition: Googzilla is taking on Sam AI-Man.

I read an analysis of this high-stakes battle in “ChatGPT 4o vs Gemini 1.5 Pro: It’s Not Even Close.” The article appeared in the delightfully named online publication “Beebom.” I am writing in Beyond Search, which is — quite frankly — a really boring name. But I am a dinobaby, and I am going to assume that Beebom has a much more tuned-in owner-operator.

The article illustrates a best practice in database comparison, tweaked to provide some insights into how alike or different Googzilla is from the AI-Man. There is a math test. There is a follow-the-instructions query. There is an image test. A programming challenge. You get the idea. The article includes what a reader will need to run similar brain teasers past Googzilla and Sam AI-Man.
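
For readers who want to try a similar head-to-head themselves, here is a minimal sketch of sending one prompt to both models. It is a hedged illustration, not Beebom’s actual harness: it assumes the OpenAI and Google Generative AI Python packages as published in mid-2024, API keys set in the environment, and a reasoning prompt of my own choosing.

```python
# A hedged sketch, not Beebom's test harness: send the same prompt to
# GPT-4o and Gemini 1.5 Pro, then eyeball the answers side by side.
import os

import google.generativeai as genai  # pip install google-generativeai
from openai import OpenAI  # pip install openai

# An illustrative commonsense-reasoning prompt (my example, not Beebom's).
PROMPT = (
    "A juggler has 16 balls. Half are golf balls, and half of the golf "
    "balls are blue. How many blue golf balls are there?"
)

# OpenAI side: the client reads OPENAI_API_KEY from the environment.
openai_client = OpenAI()
gpt_answer = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

# Google side: configure with GOOGLE_API_KEY, then query Gemini 1.5 Pro.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_answer = genai.GenerativeModel("gemini-1.5-pro").generate_content(PROMPT).text

print("GPT-4o:", gpt_answer)
print("Gemini 1.5 Pro:", gemini_answer)
```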

Who cares? Let’s get to the results.

The write up says:

It’s evidently clear that Gemini 1.5 Pro is far behind ChatGPT 4o. Even after improving the 1.5 Pro model for months while in preview, it can’t compete with the latest GPT-4o model by OpenAI. From commonsense reasoning to multimodal and coding tests, ChatGPT 4o performs intelligently and follows instructions attentively. Not to miss, OpenAI has made ChatGPT 4o free for everyone.

Welp. This statement is not going to make Googzilla happy. Anyone who plays Foosball with the beastie today will want to be alert that re-Fooses are not allowed. You lose when you knock the ball out of the game.

But the sun has not set over the Googzilla computer lab. The write up opines:

The only thing going for Gemini 1.5 Pro is the massive context window with support for up to 1 million tokens. In addition, you can upload videos too which is an advantage. However, since the model is not very smart, I am not sure many would like to use it just for the larger context window.

I chuckled at the last line of the write up:

If Google has to compete with OpenAI, a substantial leap is required.

Several observations:

  1. Who knows the names of the “new” products Google rolled out?
  2. With numerous “new” products, does Google have a grand vision, or is it one of those high school stunts in which passengers in a luxury car jump out, run around the car shouting, and pile back in before the car drives off?
  3. Will Google’s management align its AI with its staff management methods in the context of the regulatory scrutiny?
  4. Where’s DeepMind in this somewhat confusing flood of “new” smart products?

Net net: Google is definitely showing the results of having its wizards work under Code Red’s flashing lights. More pillow fights ahead. (Can you list the “new” products announced at Google I/O? Don’t worry. Neither can I.)

Stephen E Arnold, May 17, 2024

Flawed AI Will Still Take Jobs

May 16, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Shocker. Organizations are using smart software which is [a] operating in a way its creators cannot explain, [b] makes up information, and [c] appears to be dominated by a handful of “above the law” outfits. Does this characterization seem unfair? No, well, stop reading. If it seems anchored in reality, you may find my comments about jobs for GenX, GenY or GenWhy?, millennials, and Alphas (I think this is what marketers call wee lads and lasses) somewhat in line with the IMF’s view of AI.

The answer is, “Your daughter should be very, very intelligent and very, very good at an in-demand skill. If she is not, then it is doom scrolling for sure.” Thanks, MSFT Copilot. Do your part for the good of mankind today.

“Artificial Intelligence Hitting Labour Forces Like a Tsunami – IMF Chief” screws up the metaphor. A tsunami builds, travels, and dissipates. I am not sure what the headline writer thinks will dissipate in AI land. Jobs, for sure. But AI seems to have some sticking power.

What does the IMF say? Here’s a bit of insight:

Artificial intelligence is likely to impact 60% of jobs in advanced economies and 40% of jobs around the world in the next two years…

So what? The IMF Big Dog adds:

“It could bring tremendous increase in productivity if we manage it well, but it can also lead to more misinformation and, of course, more inequality in our society.”

Could. I think it will, but the benefits will flow to those who know their way around AI and sit in the tippy top of smart people. ATM users, TikTok consumers, and those who think school is stupid may not emerge as winners.

I find it interesting to consider what a two-tier society in the US and Western Europe will manifest. What will the people who do not have jobs do? Volunteer at the local animal shelter, pick up trash, or just kick back? Yeah, that’s fun.

What if one looks back over the last 50 years? When I grew up, my father had a job. My mother worked at home. I went to school. The textbooks were passed along year to year. The teachers grouped students by ability and segregated some students into an “advanced” track. My free time was spent outside “playing” or inside reading. When I was 15, I worked as a car hop. No mobile phones. No computer. Just radio, a record player, and a crappy black-and-white television which displayed fuzzy programs. The neighbors knew me and the other “kids.” From my eighth grade class, everyone went to college after high school. In my high school class of 1962, everyone was thinking about an advanced degree. Social was something a church sponsored. Its main feature was ice cream. After getting an advanced degree (in 1965, I believe), I got a job because someone heard me give a talk about indexing Latin sermons and said, “We need you.” Easy.

A half century later, what is the landscape? AI is eliminating jobs. Many of these will be intermediating jobs like doing email spam for a PR firm’s client or doing legal research. In the future, knowledge work will move up the Great Chain of Being. Most won’t be able to do the climbing to make it up to a rung with decent pay, some reasonable challenges, and a bit of power.

Let’s go back to the somewhat off-the-mark tsunami metaphor. AI is going to become more reliable. The improvements will continue. Think about what an IBM PC looked like in the 1980s. Now think about the MacBook Air you or your colleague has. They are similar but not equivalent. What happens when AI systems and methods keep improving? That’s tough to predict. What’s obvious is that the improvements and innovations in smart software are not a tsunami.

I liken it more to the continuous pressure in a petroleum cracking facility. Work is placed in contact with smart software, and stuff vaporizes. The first component to be consumed is human jobs. Next, the smart software will transform “work” itself. Most work is busy work; smart software wants “real” work. As long as the electricity stays on, the impact of AI will be ongoing. AI will transform. A tsunami crashes, makes a mess, and then is entropified. AI is a different and much hardier development.

The IMF is on the right track; it is just not making clear how much change is now underway.

Stephen E Arnold, May 16, 2024

AI Delivers The Best of Both Worlds: Deception and Inaccuracy

May 16, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Wizards from one of Jeffrey Epstein’s favorite universities made headlines about AI deception. Well, if there is one institution familiar with deception, I would submit that the Massachusetts Institute of Technology might be considered for the ranking, maybe in the top five.

The write up is “AI Deception: A Survey of Examples, Risks, and Potential Solutions.” If you want summaries of the write up, you will find them in The Guardian (we beg for dollars British newspaper) and Science Alert. Before I offer my personal observations, I will summarize the “findings” briefly. Smart software can output responses designed to deceive users and other machine processes.

Two researchers at a big name university make an impassioned appeal for a grant. These young, earnest, and passionate wizards know their team can develop a lie detector for an artificial intelligence large language model. The two wizards have confidence in their ability, of course. Thanks, MSFT Copilot. Good enough, like some enterprise software’s security architecture.

If you follow the “next big thing” hoo hah, you know that the garden variety of smart software incorporates technology from outfits like Google. I have described Google as a “slippery fish” because it generates explanations which often don’t make sense to me. Using the large language model generative text systems can yield some surprises. These range from images which seem out of step with historical fact to legal citations that land a lazy lawyer (yes! alliteration) in a load of lard.

The MIT researchers have verified that smart software may emulate the outstanding ethical qualities of an engineer or computer scientist. Logic is everything. Ethics are not anything.

The write up says:

Deception has emerged in a wide variety of AI systems trained to complete a specific task. Deception is especially likely to emerge when an AI system is trained to win games that have a social element …

The domain of the investigation was games. I want to step back and ask, “If LLMs are not understood by their developers, how do we know whether deception is hard-wired into the systems or whether the systems learn deception from their developers with a dusting of examples from the training data?”

The answer to the question is, “At this time, no one knows how these large-scale systems work.” Even the “small” LLMs can prove baffling. We input our own data into Mistral and managed to obtain gibberish. Another go produced a system crash that required a hard reboot of the Mac we were using for the test.

The reality appears to be that probability-based systems do not follow the same rules as a human. With more and more humans struggling with old-school skills like readin’, writin’, and ’rithmetic, most people won’t notice. For the top 10 percenters, the mistakes are amusing… sometimes.

The write up concludes:

Training models to be more truthful could also create risk. One way a model could become more truthful is by developing more accurate internal representations of the world. This also makes the model a more effective agent, by increasing its ability to successfully implement plans. For example, creating a more truthful model could actually increase its ability to engage in strategic deception by giving it more accurate insights into its opponents’ beliefs and desires. Granted, a maximally truthful system would not deceive, but optimizing for truthfulness could nonetheless increase the capacity for strategic deception. For this reason, it would be valuable to develop techniques for making models more honest (in the sense of causing their outputs to match their internal representations), separately from just making them more truthful. Here, as we discussed earlier, more research is needed in developing reliable techniques for understanding the internal representations of models. In addition, it would be useful to develop tools to control the model’s internal representations, and to control the model’s ability to produce outputs that deviate from its internal representations. As discussed in Zou et al., representation control is one promising strategy. They develop a lie detector and can control whether or not an AI lies. If representation control methods become highly reliable, then this would present a way of robustly combating AI deception.
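
The “lie detector” mentioned above refers to the representation control work of Zou et al. As a loose illustration of the general idea — probing a model’s internal representations rather than its outputs — here is a minimal sketch: pull hidden states from a small open model for truthful and untruthful statements, then fit a linear classifier on them. The model choice, the pooling step, and the toy labeled examples are my assumptions, not the paper’s method.

```python
# Minimal sketch of a linear "truthfulness probe" on hidden states.
# Illustrates the representation-probing idea only; it is not the
# Zou et al. method, and the labeled examples below are toy data.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def hidden_vector(text: str) -> torch.Tensor:
    """Mean-pool the final hidden layer for one input string."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.hidden_states[-1].mean(dim=1).squeeze(0)

# Toy labeled statements: 1 = untruthful, 0 = truthful.
examples = [
    ("The capital of France is Paris.", 0),
    ("The capital of France is Berlin.", 1),
    ("Water boils at 100 degrees Celsius at sea level.", 0),
    ("Water boils at 10 degrees Celsius at sea level.", 1),
]

X = torch.stack([hidden_vector(text) for text, _ in examples]).numpy()
y = [label for _, label in examples]

# A linear probe; real work would need far more data and held-out tests.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(X))
```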

My hunch is that MIT will be in the hunt for US government grants to develop a lie detector for AI models. It is also possible that Harvard’s medical school will begin work to determine where ethical behavior resides in the human brain so that it can be replicated in one of the megawatt-munching data centers some big tech outfits want to deploy.

Four observations:

  1. AI can generate what appears to be “accurate” information, but that information may be weaponized by a little-understood mechanism
  2. “Soft” human information like ethical behavior may be difficult to implement in the short term, if ever
  3. A lie detector for AI will require AI; therefore, how will an opaque and not understood system be designated okay to use? It cannot at this time
  4. Duplicity may be inherent in the educational institutions. Therefore, those affiliated with the institution may be duplicitous and produce duplicitous content. This assertion raises the question, “Whom can one trust in the AI development chain?”

Net net: AI is hot because it is a candidate for 2024’s next big thing. The “big thing” may be the economic consequences of its being a fairly small and premature thing. Incubator time?

Stephen E Arnold, May 16, 2024

Generative AI: Minor Value and Major Harms

May 16, 2024

Flawed though it is, generative AI has its uses. In fact, according to software engineer and Citation Needed author Molly White, AI tools for programming and writing are about as helpful as an intern. Unlike the average intern, however, AI supplies help with a side of serious ethical and environmental concerns. White discusses the tradeoffs in her post, “AI Isn’t Useless. But Is It Worth It?”

At first, White was hesitant to dip her toes in the problematic AI waters. However, she also did not want to dismiss the tools’ value out of hand. She writes:

“But as the hype around AI has grown, and with it my desire to understand the space in more depth, I wanted to really understand what these tools can do, to develop as strong an understanding as possible of their potential capabilities as well as their limitations and tradeoffs, to ensure my opinions are well-formed. I, like many others who have experimented with or adopted these products, have found that these tools actually can be pretty useful for some tasks. Though AI companies are prone to making overblown promises that the tools will shortly be able to replace your content writing team or generate feature-length films or develop a video game from scratch, the reality is far more mundane: they are handy in the same way that it might occasionally be useful to delegate some tasks to an inexperienced and sometimes sloppy intern. Still, I do think acknowledging the usefulness is important, while also holding companies to account for their false or impossible promises, abusive labor practices, and myriad other issues. When critics dismiss AI outright, I think in many cases this weakens the criticism, as readers who have used and benefited from AI tools think ‘wait, that’s not been my experience at all’.”

That is why White put in the time and effort to run several AI tools through their paces. She describes the results in the article, so navigate there for those details. Some features she found useful. Others required so much review and correction they were more trouble than they were worth. Overall, though, she finds the claims of AI bros to be overblown and the consequences to far outweigh the benefits. So maybe hand that next mundane task to the nearest intern who, though a flawed human, comes with far less baggage than ChatGPT and friends.

Cynthia Murrell, May 16, 2024

Ho Hum: The Search Sky Is Falling

May 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

“Google’s Broken Link to the Web” is interesting for two reasons: [a] the sky is falling — again, and [b] search has been broken for a long time and suddenly I should worry.

The write up states:

When it comes to the company’s core search engine, however, the image of progress looks far muddier. Like its much-smaller rivals, Google’s idea for the future of search is to deliver ever more answers within its walled garden, collapsing projects that would once have required a host of visits to individual web pages into a single answer delivered within Google itself.

Nope. The walled garden has been in the game plan for a long, long time. People who lusted for Google mouse pads were not sufficiently clued in to notice. Google wants to be the digital Hotel California. Smarter software is just one more component available to the system which controls information flows globally. How many people in Denmark rely on Google search whether it is good, bad, or indifferent? The answer is, “99 percent.” What about people who let Google Gmail pass along their messages? How about 67 percent in the US? YouTube is video in many countries; even with the rise of TikTok, the Google is hanging in there. Maps? Ditto. Calendars? Ditto. Each of these ubiquitous services is “search.” They have been for years. Any click can be monetized one way or another.

Who will pay attention to this message? Regulators? Users of search on an iPhone? How about commuters and Waze? Thanks, MSFT Copilot. Good enough. Working on those security issues today?

Now the sky is falling? Give me a break. The write up adds:

where the company once limited itself to gathering low-hanging fruit along the lines of “what time is the super bowl,” on Tuesday executives showcased generative AI tools that will someday plan an entire anniversary dinner, or cross-country-move, or trip abroad. A quarter-century into its existence, a company that once proudly served as an entry point to a web that it nourished with traffic and advertising revenue has begun to abstract that all away into an input for its large language models.  This new approach is captured elegantly in a slogan that appeared several times during Tuesday’s keynote: let Google do the Googling for you.

Of course, if Google does it, those “search” abstractions can be monetized.

How about this statement?

But to everyone who depended even a little bit on web search to have their business discovered, or their blog post read, or their journalism funded, the arrival of AI search bodes ill for the future. Google will now do the Googling for you, and everyone who benefited from humans doing the Googling will very soon need to come up with a Plan B.

Okay, what’s the plan B? Kagi? Yandex? Something magical from one of the AI start ups?

People have been trying to out-search Google for a quarter century. And what has been the result? Google’s technology has been baked into the findability fruit cakes.

If one wants to be found, buy Google advertising. The alternative is what exactly? Crazy SEO baloney? Hire a 15-year-old and pray that person can become an influencer? Put ads on Tubi?

The sky is not falling. The clouds rolled in and obfuscated people’s ability to see how weaponized information has seized control of multiple channels of information. I don’t see a change in weather any time soon. If one wants to run around saying the sky is falling, be careful. One might run into a wall or trip over a fire plug.

Stephen E Arnold, May 15, 2024

The Future for Flops with Humans: Flop with Fakes

May 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

As a dinobaby, I find the shift from humans to fake humans fascinating. Jeff Epstein’s favorite university published “Deepfakes of Your Dead Loved Ones Are a Booming Chinese Business.” My first thought is that MIT’s leadership will commission a digital Jeffrey. Imagine. He could introduce MIT fundraisers to his “friends.” He could offer testimonials about the university. He could invite — virtually, of course — certain select individuals to a virtual “island.”

The bar located near the technical university is a hotbed of virtual dating, flirting, and drinking. One savvy service person is disgusted by the antics of the virtual customers. The bartender is wide-eyed in amazement. He is a math major with an engineering minor. He sees what’s going on. Thanks, MSFT Copilot. Working hard on security, I bet.

Failing that, MIT might turn its attention to Whitney Wolfe Herd, the founder of Bumble. Although a graduate of the vastly academically inferior Southern Methodist University in the non-Massachusetts locale of Texas (!), she has a more here-and-now vision. The idea is probably going to get traction among some of the MIT-type brainiacs. A machine-generated “self” — suitably enhanced to remove pocket protectors, plaid jammy bottoms, and observatory-grade bifocals — will date a suitable companion’s digital self. Imagine the possibilities.

The write up “AI Personas Are the Future of Dating, Bumble Founder Says. Many Aren’t Buying” reports:

Herd proposed a scenario in which singles could use AI dating concierges as stand-ins for themselves when reaching out to prospective partners online. “There is a world where your dating concierge could go and date for you with other dating concierge … and then you don’t have to talk to 600 people,” she said during the summit.

Wow. More time to put a pony on the roof of an MIT building.

The write up did inject a potential downside. A downside? Who is NBC News kidding?

There’s some healthy skepticism over whether AI is the answer. A clip of Herd at the Bloomberg Summit gained over 10 million views on X, where people expressed uneasiness with the idea of an AI-based dating scene. Some compared it to episodes of "Black Mirror," a Netflix series that explores dystopian uses of technology. Others felt like the use of AI in dating would exacerbate the isolation and loneliness that people have been feeling in recent years.

Are those working in the techno-feudal empires or studying in the prep schools known to churn out the best, the brightest, the most 10X-ceptional knowledge workers weak in social skills? Come on. Having a big brain (particularly for mathy type of logic) is “obviously” the equipment needed to deal with lesser folk. Isolated? No. Think about gamers. Such camaraderie. Think about people like the head of Bumble. Lectures, Discord sessions, and access to data about those interested in loving and living virtually. Loneliness? Sorry. Not an operative word. Halt.

“AI Personas Are the Future…” reports:

"We will not be a dating app in a few years," she [the Bumble spokesperson] said. "Dating will be a component, but we will be a true human connection platform. This is where you will meet anyone you want to meet — a hiking buddy, a mahjong buddy, whatever you’re looking for."

What happens when a virtual Jeff Epstein goes to the bar and spots a first-year who looks quite youthful? Virtual fireworks?

Stephen E Arnold, May 15, 2024

AI and the Workplace: Change Will Happen, Just Not the Way Some Think

May 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “AI and the Workplace.” The essay contains observations related to smart software in the workplace. The idea is that savvy employees will experiment and try to use the technology within today’s work framework. I think that will happen just as the essay suggests. However, there is a larger, more significant impact that is easy to miss when one looks only at today’s workplace. Employees either [a] want to keep their job, [b] gain new skills and get a better job, or [c] quit to vegetate or become an entrepreneur. I understand.

The data in the report make clear that some employees are what I call change flexible; that is, these motivated individuals differentiate themselves from others at work by learning and experimenting. Note that more than half the people in the “we don’t use AI” categories want to use AI.

These data come from the cited article and an outfit called Asana.

The other data in the report show a split: some employees get a productivity boost; others just chug along, occasionally getting some benefit from AI. The future, therefore, requires learning, double-checking outputs, and accepting that it is early days for smart software. This makes sense; however, it misses where the big change will come.

In my view, the major shift will appear in companies founded now that AI is more widely available. These organizations will be crafted to make optimal use of smart software from the day the new idea takes shape. A new news organization might look like Grok News (the Elon Musk project) or the much reviled AdVon. But even these outfits are anchored in the past. Grok News just substitutes smart software (which hopefully will not kill its users) for old work processes and outputs. AdVon was a “rip and replace” tool for Sports Illustrated. That did not go particularly well in my opinion.

The big job impact will be on new organizational setups with AI baked in. The types of people working at these organizations will not be from the lower 98 percent of the work force pool. I think the majority of employees who once expected to work in information processing or knowledge work will be like a 58-year-old brand manager at a vape company. Job offers will not be easy to get, and new companies might opt for smart software and search engine optimization marketing. How many workers will that require? Maybe zero. Someone on Fiverr.com will do the job for a couple of hundred dollars a month.

In my view, new companies won’t need workers who are not in the top tier of some high value expertise. Who needs a consulting team when one bright person with knowledge of orchestrating smart software is able to do the work of a marketing department, a product design unit, and a strategic planning unit? In fact, there may not be any “employees” in the sense of workers at a warehouse or a consulting firm like Deloitte.

Several observations are warranted:

  1. Predicting downstream impacts of a technology unfamiliar to a great many people is tricky and sometimes impossible. Who knew social media would spawn a renaissance in getting tattooed?
  2. Visualizing how an AI-centric start up is assembled is a challenge. I submit it won’t look like an insurance company today. What’s a Tesla repair station look like? The answer, “Not much.”
  3. Figuring out how to be one of the elite who gets a job means being perceived as “smart.” Unlike Alina Habba, I know that I cannot fake “smart.” How many people will work hard to maximize the return on their intelligence? The answer, in my experience, is, “Not too many, dinobaby.”

Looking at the future from within the framework of today’s datasphere distorts how one perceives impact. I don’t know what the future looks like, but it will have some quite different configurations than today’s companies have. The future will arrive slowly and then become the foundation of further evolution. What will the grandson of tomorrow’s AI firm look like? Beauty will be in the eye of the beholder.

Net net: Where will the never-to-be-employed find something meaningful to do?

Stephen E Arnold, May 15, 2024

AdVon: Why So Much Traction and Angst?

May 14, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

AdVon. AdVon. AdVon. Okay, the company is in the news. Consider this write up: “Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry.” So why meet AdVon? The subtitle explains:

Remember that AI company behind Sports Illustrated’s fake writers? We did some digging — and it’s got tendrils into other surprisingly prominent publications.

Let’s consider the question: Why is AdVon getting traction among “prominent publications” or any other outfit wanting content? The answer is not hard to see: cutting costs, doing more with less, getting more clicks, getting more money. This is not a multiple choice test in a junior college business class. This is common sense. Smart software makes it possible for those with some skill in the alleged art of prompt crafting and automation to sell “stories” to publishers for less than those publishers can produce the stories themselves.

The future continues to arrive. Here smart software is saying “Hasta la vista” to the human information generator. The humanoid looks very sad. Neither the AI software nor its owner cares. Revenue and profit are more important as long as the top dogs get paid big bucks. Thanks, MSFT Copilot. Working on your security systems or polishing the AI today?

Let’s look at the cited article’s peregrination to the obvious: AI can reduce costs of “publishing”. Plus, as AI gets more refined, the publications themselves can be replaced with scripts.

The write up says:

Basically, AdVon engages in what Google calls “site reputation abuse”: it strikes deals with publishers in which it provides huge numbers of extremely low-quality product reviews — often for surprisingly prominent publications — intended to pull in traffic from people Googling things like “best ab roller.” The idea seems to be that these visitors will be fooled into thinking the recommendations were made by the publication’s actual journalists and click one of the articles’ affiliate links, kicking back a little money if they make a purchase. It’s a practice that blurs the line between journalism and advertising to the breaking point, makes the web worse for everybody, and renders basic questions like “is this writer a real person?” fuzzier and fuzzier.

Okay. So what?

In spite of the article being labeled as “AI” in AdVon’s CMS, the Outside Inc spokesperson said the company had no knowledge of the use of AI by AdVon — seemingly contradicting AdVon’s claim that automation was only used with publishers’ knowledge.

Okay, corner cutting is part of AdVon’s business model. What about the “minimum viable product” or “good enough” approach to everything from self-driving auto baloney to Boeing aircraft doors? Is AI use somehow exempt from what is the current business practice? Major academic figures take shortcuts. Now an outfit with some AI skills is supposed to operate like a hybrid of Joan of Arc and Mother Teresa? Sure.

The write up states:

In fact, it seems that many products only appear in AdVon’s reviews in the first place because their sellers paid AdVon for the publicity. That’s because the founding duo behind AdVon, CEO Ben Faw and president Eric Spurling, also quietly operate another company called SellerRocket, which charges the sellers of Amazon products for coverage in the same publications where AdVon publishes product reviews.

To me, AdVon is using a variant of the Google type of online advertising concept. The bar room door swings both ways: the customer pays to enter, and the customer pays to leave. Am I surprised? Nope. Should anyone be? How about a government consumer protection watchdog? Tip: Don’t hold your breath. New York City tested a chatbot that provided information that violated city laws.

The write up concludes:

At its worst, AI lets unscrupulous profiteers pollute the internet with low-quality work produced at unprecedented scale. It’s a phenomenon which — if platforms like Google and Facebook can’t figure out how to separate the wheat from the chaff — threatens to flood the whole web in an unstoppable deluge of spam. In other words, it’s not surprising to see a company like AdVon turn to AI as a mechanism to churn out lousy content while cutting loose actual writers. But watching trusted publications help distribute that chum is a unique tragedy of the AI era.

The kicker is that the company owning the publication “exposing” AdVon used AdVon.

Let me offer several observations:

  1. The research reveals what will become an increasingly widespread business practice. But the practice of using AI to generate baloney and spam variants is not the future. It is now.
  2. The demand for what appears to be old-fashioned information generation is high. The cost of producing this type of information is going to force those who want to generate information to take shortcuts. (How do I know? How about the president of Stanford University, who took shortcuts? That’s how. When a university president muddles forward for years and gets caught by accident, what are students learning? My answer: Cheat better than that.)
  3. AI diffusion is like gerbils. First, you have a couple of cute gerbils in your room. As a nine-year-old, you think those gerbils are cute. Then you have more gerbils. What do you do? You get rid of the gerbils in your house. What about the gerbils? Yeah, they are still out there. One can see gerbils; it is more difficult to see the AI gerbils. The fix is not the plastic bag filled with gerbils in the garbage can. The AI gerbils are relentless.

Net net: Adapt and accept that AI is here, reproducing rapidly, and evolving. The future means “adapt.” One suggestion: Hire McKinsey & Co. to help your firm make tough decisions. That sometimes works.

Stephen E Arnold, May 14, 2024

AI and Doctors: Close Enough for Horseshoes and More Time for Golf

May 14, 2024

Burnout is a growing problem across all industries, but doctors and other medical professionals are at especially high risk. The daily stressors of treating patients, paperwork, dealing with insurance agencies, resource limitations, and the like are worsening. Stat News reports that AI algorithms offer a helpful solution for medical professionals, but there are still bugs in the system: “Generative AI Is Supposed To Save Doctors From Burnout. New Data Show It Needs More Training.”

Clinical notes are important for patient care and ongoing treatment. The downside of clinical notes is that the task takes a long time to complete. Academic hospitals became training grounds for generative AI usage in the medical fields. Generative AI is a tool with a lot of potential, but it has proven many times that it still needs a lot of work. The large language models for generative AI in medical documentation proved lacking. Is anyone really surprised? Apparently they were:

“Just in the past week, a study at the University of California, San Diego found that use of an LLM to reply to patient messages did not save clinicians time; another study at Mount Sinai found that popular LLMs are lousy at mapping patients’ illnesses to diagnostic codes; and still another study at Mass General Brigham found that an LLM made safety errors in responding to simulated questions from cancer patients. One reply was potentially lethal.”

Why doesn’t common sense prevail in these cases? Yes, generative AI should be tested so the data will back up the logical outcome. It’s called the scientific method for a reason. Why does everyone act surprised, however? Stop marveling at the obvious shortcomings of lackluster AI tools and focus on making them better. Use these tests to find the bugs, fix them, and turn the tools into practical applications that work. Is that so hard to accomplish?

Whitney Grace, May 14, 2024
