Smart Productivity Software Means Pink Slip Flood

October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Ready for some excitement, you under 50s?

Soon, workers may be spared the pain of training their replacements. Consciously, anyway. Wired reports, “Your Boss’s Spyware Could Train AI to Replace You.” Researcher Carl Frey’s landmark 2013 prediction that AI could threaten half of US jobs has not yet come to pass. Now that current tools like ChatGPT have proven (so far) less accurate and self-sufficient than advertised, some workers are breathing a sigh of relief. Not so fast, warns journalist Thor Benson. It is the increasingly pervasive “productivity” (aka monitoring) software we need to be concerned about. Benson writes:

“Enter corporate spyware, invasive monitoring apps that allow bosses to keep close tabs on everything their employees are doing—collecting reams of data that could come into play here in interesting ways. Corporations, which are monitoring their employees on a large scale, are now having workers utilize AI tools more frequently, and many questions remain regarding how the many AI tools that are currently being developed are being trained. Put all of this together and there’s the potential that companies could use data they’ve harvested from workers—by monitoring them and having them interact with AI that can learn from them—to develop new AI programs that could actually replace them. If your boss can figure out exactly how you do your job, and an AI program is learning from the data you’re producing, then eventually your boss might be able to just have the program do the job instead.”

Even at companies that do not use spyware, employees may unwittingly train their AI replacements simply by generating data as part of their work. To make matters worse, because it gets neither salary nor benefits, an algorithm need not exceed or even match a human’s performance to land the job.

So what can we do? We could retrain workers but, as MIT economics professor David Autor notes, that is not one of the US’s strong suits. Or we could take a cue from the Industrial Revolution: Frey points to Britain’s Poor Laws, which gave financial relief to workers whose jobs became obsolete back then. Hmm, we wonder: How would a similar measure fare in the current US Congress?

Cynthia Murrell, October 9, 2023

Cognitive Blind Spot 2: Bandwagon Surfing or Do What May Be Fashionable

October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Humans are into trends. The NFL and Taylor Swift appear to be a trend. A sporting money machine and a popular music money machine. Jersey sales increase. Ms. Swift’s music sales go up. New eyeballs track a certain football player. The question is, “Who is exploiting whom?”

Which bandwagon are you riding? Thank you, MidJourney. Gloom seems to be part of your DNA.

Think about large language models and smart software. A similar dynamic may exist. Late in 2022, the natural language interface became the next big thing. Students and bad actors figured out that using a ChatGPT-type service could expedite certain activities. Students could produce 500-word essays in less than a minute. Bad actors could generate snippets of code in seconds. In short, many people were hopping on the LLM bandwagon decorated with smart software logos.

Now a bandwagon powered by healthy skepticism may be heading toward main street. Wired Magazine published a short essay titled “Chatbot Hallucinations Are Poisoning Web Search.” The foundational assumption is that Web search was better before ChatGPT-type incursions. I am not sure that idea is valid, but for the purposes of illustrating bandwagon surfing, it will pass unchallenged. Wired’s main point is that as AI-generated content proliferates, the results delivered by Google and a couple of other, vastly less popular search engines will deteriorate. I think this is a way to assert that lousy LLM output will make Web search worse. “Hallucination” is jargon for made-up or just incorrect information.

Consider this essay “Evaluating LLMs Is a Minefield.” The essay and slide deck are the work of two AI wizards. The main idea is that figuring out whether a particular LLM or a ChatGPT-type service is right, wrong, less wrong, more right, biased, or a digital representation of a 23-year-old art history major working in a public relations firm is difficult.

I am not going to take the side of either referenced article. The point is that the hyperbolic excitement about “smart software” seems to be giving way to LLM criticism. From software for Everyman, the services are becoming tools for improving productivity.

To sum up, the original bandwagon has been pushed out of the parade by a new bandwagon filled with poobahs explaining that smart software, LLM, et al are making the murky, mysterious Web worse.

The question becomes, “Are you jumping on the bandwagon with the banner that says ‘LLMs are really bad,’ or are you sticking with the rah-rah crowd?” The point is that information at one point was good. Now information is less good. Imagine how difficult it will be to determine what’s right or wrong, biased or unbiased, or acceptable or unacceptable.

Who wants to do the work to determine provenance or answer questions about accuracy? Not many people. That, rather than lousy Web search, may be more important to some professionals. But that does not solve the problem of the time and resources required to deal with accuracy and other issues.

So which bandwagon are you riding? The NFL or Taylor Swift? Maybe the tension between the two?

Stephen E Arnold, October 6, 2023

Is Google Setting a Trap for Its AI Competition?

October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The litigation about the use of Web content to train smart generative software is ramping up. Outfits like OpenAI, Microsoft, and Amazon and its new best friend will be snagged in the US legal system.

But what big outfit will be ready to offer those hungry to use smart software without legal risk? The answer is the Google.

How is this going to work?

Simple. Google is beavering away with its synthetic data. Some real data are used to train sophisticated stacks of numerical recipes. The idea is that these algorithms will be “good enough”; thus, the need for “real” information is obviated. And Google has another trick up its sleeve. The company has coveys of coders working on trimmed-down systems and methods. The idea is that using less information will produce more and better results than the crazy idea of indexing content from wherever in real time. The small data can be licensed while the competitors are spending their days with lawyers.

How do I know this? I don’t, but Google is providing tantalizing clues in marketing collateral like “Researchers from the University of Washington and Google have Developed Distilling Step-by-Step Technology to Train a Dedicated Small Machine Learning Model with Less Data.” The author is a student who provides sources for the information about the “less is more” approach to smart software training.
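
For readers curious about what “distilling step-by-step” might look like in practice, here is a minimal, hypothetical sketch: a small student model is trained on two teacher-supplied signals at once, the task label and a rationale, which is how the approach tries to reach useful accuracy with less data. The toy model, the random data, and the equal loss weighting below are illustrative assumptions, not the configuration described in the Google paper.

```python
import torch
import torch.nn as nn

# Toy student with a shared trunk and two heads: one predicts the task
# label, the other predicts a (stand-in) rationale token from the teacher.
class TinyStudent(nn.Module):
    def __init__(self, vocab=1000, dim=64, seq_len=8, n_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.trunk = nn.Linear(seq_len * dim, dim)
        self.label_head = nn.Linear(dim, n_labels)      # task prediction
        self.rationale_head = nn.Linear(dim, vocab)     # rationale prediction

    def forward(self, tokens):
        h = torch.relu(self.trunk(self.embed(tokens).flatten(1)))
        return self.label_head(h), self.rationale_head(h)

student = TinyStudent()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake "teacher-annotated" batch: inputs, gold labels, and one rationale
# token per example standing in for a teacher-generated explanation.
tokens = torch.randint(0, 1000, (16, 8))
labels = torch.randint(0, 2, (16,))
rationales = torch.randint(0, 1000, (16,))

for _ in range(5):
    label_logits, rationale_logits = student(tokens)
    # Multi-task objective: learn the label and the rationale together.
    loss = loss_fn(label_logits, labels) + loss_fn(rationale_logits, rationales)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```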

And, may the Googlers sing her praises, she cites Google technical papers. In fact, one of the papers is described by the fledgling Googler as “groundbreaking.” Okay.

What’s really being broken is the approach of some of Google’s most formidable competition.

When will the Google spring its trap? It won’t. But as the competitors get stuck in legal mud, the Google will be an increasingly attractive alternative.

The last line of the Google marketing piece says:

Check out the Paper and Google AI Article. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

Get that young marketer a Google mouse pad.

Stephen E Arnold, October 6, 2023

The Google and Its AI Peers Guzzle Water. Yep, Guzzle

October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Much has been written about generative AI’s capabilities and its potential ramifications for business and society. Less has been stated about its environmental impact. The AP highlights this facet of the current craze in its article, “Artificial Intelligence Technology Behind ChatGPT Was Built in Iowa—With a Lot of Water.” Iowa? Who knew? Turns out, there is good reason to base machine learning operations, especially the training, in such a chilly environment. Reporters Matt O’Brien and Hannah Fingerhut write:

“Building a large language model requires analyzing patterns across a huge trove of human-written text. All of that computing takes a lot of electricity and generates a lot of heat. To keep it cool on hot days, data centers need to pump in water — often to a cooling tower outside its warehouse-sized buildings. In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research.”

During the same period, Google’s water usage surged by 20%, according to the company. Notably, Google was strategic about where it guzzled this precious resource: it kept usage steady in Oregon, where there was already criticism about its water usage. But its consumption doubled outside Las Vegas, famously one of the nation’s hottest and driest regions. Des Moines, Iowa, on the other hand, is a much cooler and wetter locale. We learn:

“In some ways, West Des Moines is a relatively efficient place to train a powerful AI system, especially compared to Microsoft’s data centers in Arizona that consume far more water for the same computing demand. … For much of the year, Iowa’s weather is cool enough for Microsoft to use outside air to keep the supercomputer running properly and vent heat out of the building. Only when the temperature exceeds 29.3 degrees Celsius (about 85 degrees Fahrenheit) does it withdraw water, the company has said in a public disclosure.”

Though merely a trickle compared to what the same work would take in Arizona, that summer usage is still a lot of water. Microsoft’s Iowa data centers swilled about 11.5 million gallons in July of 2022, the month just before GPT-4 graduated training. Naturally, both Microsoft and Google insist they are researching ways to use less water. It would be nice if environmental protection were more than an afterthought.

The write-up introduces us to Shaolei Ren, a researcher at the University of California, Riverside. His team is working to calculate the environmental impact of generative AI enthusiasm. Their paper is due later this year, but they estimate ChatGPT swigs more than 16 ounces of water for every five to 50 prompts, depending on the servers’ location and the season. Will big tech find a way to curb AI’s thirst before it drinks us dry?
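
For those who like the math spelled out, here is a quick back-of-the-envelope conversion of that estimate into per-prompt terms. The 16-ounce figure is the researchers’ published estimate; the division below is just illustration.

```python
# Rough per-prompt water use implied by the "16 ounces per 5 to 50 prompts"
# estimate. The range reflects server location and season, per the article.
OUNCES_PER_BATCH = 16
for prompts in (5, 50):
    print(f"{prompts} prompts: about {OUNCES_PER_BATCH / prompts:.2f} ounces per prompt")
# 5 prompts:  about 3.20 ounces per prompt
# 50 prompts: about 0.32 ounces per prompt
```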

Cynthia Murrell, October 6, 2023

A Pivotal Moment in Management Consulting

October 4, 2023

The practice of selling “management consulting” has undergone a handful of tectonic shifts since Edwin Booz convinced Sears, the “department” store outfit, to hire him. (Yes, I am aware I am cherry picking, but this is a blog post, not a for-fee report.)

The first was the ability of a consultant to move around quickly. Trains and Chicago became synonymous with management razzle dazzle. The center of gravity shifted to New York City because consulting thrives where there are big companies. The second was the institutionalization of the MBA as a certification of a 23-year-old’s expertise. The third was the “invention” of former consultants for hire. The innovator in this business was Gerson Lehrman Group, but there are many imitators who hire former blue-chip types and resell them without the fee baggage of the McKinsey & Co. type outfits. And now the fourth earthquake is rattling carpetland and the windows in corner offices (even if these offices are in an expensive home in Wyoming).

A centaur and a cyborg working on a client report. Thanks, MidJourney. Nice hair style on the cyborg.

Now we have the era of smart software or what I prefer to call the era of hyperbole about semi-smart, semi-automated systems which output “information.” I noted this write-up from the estimable Harvard University. Yes, this is the outfit that appointed an expert in ethics to head up its ethics department. The same ethics expert allegedly made up data for peer-reviewed publications. Yep, that Harvard University.

“Navigating the Jagged Technological Frontier” is an essay crafted by the D^3 faculty. None of this single-author stuff in an institution where fabrication of research is a stand-up comic joke. “What’s the most terrifying word for a Harvard ethicist?” Give up? “Ethics.” Ho ho ho.

What are the highlights from this esteemed group of researchers, thinkers, and analysts? I quote:

  • For tasks within the AI frontier, ChatGPT-4 significantly increased performance, boosting speed by over 25%, human-rated performance by over 40%, and task completion by over 12%.
  • The study introduces the concept of a “jagged technological frontier,” where AI excels in some tasks but falls short in others.
  • Two distinct patterns of AI use emerged: “Centaurs,” who divided and delegated tasks between themselves and the AI, and “Cyborgs,” who integrated their workflow with the AI.

Translation: We need fewer MBAs and old-timers who are not able to maximize billability with smart or semi-smart software. Keep in mind that some consultants view clients with disdain. If these folks were smart, they would not be relying on 20-somethings to bail them out and provide “wisdom.”

This dinobaby is glad he is old.

Stephen E Arnold, October 4, 2023

A Complement to Bogus Amazon Product Reviews?

October 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The Authors Guild and over 10,000 of its members have been asking Amazon to do something about AI-written books on its platform for months. Now, the AP reports, “Amazon to Require Some Authors to Disclose the Use of AI Material.” Writer Hillel Italie tells us:

“The Authors Guild praised the new regulations, which were posted Wednesday, as a ‘welcome first step’ toward deterring the proliferation of computer-generated books on the online retailer’s site. Many writers feared computer-generated books could crowd out traditional works and would be unfair to consumers who didn’t know they were buying AI content.”

Legitimate concerns. But how much good will the new requirements do, really? Amazon now requires those submitting works to its e-book program to disclose any AI-generated content. But we wonder how that is supposed to help since that information is not, as of this writing, publicly disclosed. We learn:

“A passage posted this week on Amazon’s content guideline page said, ‘We define AI-generated content as text, images, or translations created by an AI-based tool.’ Amazon is differentiating between AI-assisted content, which authors do not need to disclose, and AI-generated work. But the decision’s initial impact may be limited because Amazon will not be publicly identifying books with AI, a policy that a company spokesperson said it may revise. Guild CEO Mary Rasenberger said that her organization has been in discussions with Amazon about AI material since early this year. ‘Amazon never opposed requiring disclosure but just said they had to think it through, and we kept nudging them. We think and hope they will eventually require public disclosure when a work is AI-generated,’ she told The Associated Press on Friday.”

Perhaps. But even if Ms. Rasenberger’s gracious optimism is warranted, the requirement only applies to Amazon’s e-book program. What about the rest of the texts sold through the platform? Or, for that matter, through Amazon-owned Goodreads? Perhaps it is old-fashioned, but I for one would like to know whether a book was written by a human or by software before I buy.

Cynthia Murrell, October 4, 2023

Teens, Are You Bing-ing Yet?

October 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The online advertising outfit and all-time champion of redacting documents has innovated again. “Google Expands Its Generative AI Search Experience to Teens Expected to Interact With a Chatbot—Is It Safe?” reports:

Google is opening its generative AI search experience to teenagers aged 13 to 17 in the United States with a Google Account. This expansion allows every teen to participate in Search Labs and engage with AI technology conversationally.

What will those teens do with smart software interested in conversational interactions? As a dinobaby, the memories of my teen experiences are fuzzy. I do recall writing reports for some of my classmates. If I were a teenybopper with access to generative outputs, I would probably use that system to crank out for-fee writings. On the other hand, those classmates would just use the system themselves. Who wants to write about Lincoln’s night at the theater or how eager people from Asia built railroads?

The article notes:

Google is implementing an update to enhance the AI model’s ability to identify false or offensive premise queries, ensuring more accurate and higher-quality responses. The company is also actively developing solutions to enable large language models to self-assess their initial responses on sensitive subjects and rewrite them based on quality and safety criteria.

That’s helpful. Imagine training future Google advertising consumers to depend on the Google for truth. Redactions included, of course.

Stephen E Arnold, October 3, 2023

Who Will Ultimately Control AI?

September 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In the Marvel comics universe, there is a being on Earth’s moon called The Watcher. He observes humanity and is not supposed to interfere with their affairs. Marvel’s The Watcher brings to mind the old adage, “Who watches the watcher?” While there is an endless amount of comic book lore to answer that question, the current controversial discussion surrounding AI regulations and who will watch AI does not. Time delves into the conversation about, “The Heated Debate Over Who Should Control Access To AI.”

In May 2023, the CEOs of three AI companies (OpenAI, Google DeepMind, and Anthropic) signed a letter that stated AI could be harmful to humanity and as dangerous as nuclear weapons or a pandemic. AI experts and leaders are calling for restrictions on specific AI models to prevent bad actors from using them to spread disinformation, launch cyberattacks, make bioweapons, and cause other harm.

Not all of the experts and leaders agree, including the folks at Meta. US Senators Josh Hawley and Richard Blumenthal, Ranking Member and Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and Law, don’t like that Meta is sharing powerful AI models.

“The disagreement between Meta and the Senators is just the beginning of a debate over who gets to control access to AI, the outcome of which will have wide-reaching implications. On one side, many prominent AI companies and members of the national security community, concerned by risks posed by powerful AI systems and possibly motivated by commercial incentives, are pushing for limits on who can build and access the most powerful AI systems. On the other, is an unlikely coalition of Meta, and many progressives, libertarians, and old-school liberals, who are fighting for what they say is an open, transparent approach to AI development.”

OpenAI published a paper titled “Frontier Model Regulation,” written by researchers and academics from OpenAI, DeepMind, and Google, with tips about how to control AI. Developing safety standards and requiring regulators to have visibility are no-brainers. Other ideas, such as requiring AI developers to acquire a license to train and deploy powerful AI models, caused arguments. Licensing would be a good idea in the future but is not great for today’s world.

Meta releases its AI models via open source, with paid licenses for its more robust models. Meta’s CEO did say something idiotic:

“Meta’s leadership is also not convinced that powerful AI systems could pose existential risks. Mark Zuckerberg, co-founder and CEO of Meta, has said that he doesn’t understand the AI doomsday scenarios, and that those who drum up these scenarios are “pretty irresponsible.” Yann LeCun, Turing Award winner and chief AI scientist at Meta, has said that fears over extreme AI risks are ‘preposterously stupid.’”

The remainder of the article delves into how regulations limit innovation, how surveillance would be Orwellian in nature, and how bad-actor countries wouldn’t follow the rules. It’s once again the same old arguments repackaged with an AI sticker.

Who will control AI? Gee, maybe the same outfits controlling information and software right this minute?

Whitney Grace, September 27, 2023

Getty and Its Licensed Smart Software Art

September 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid. (Yep, the dinobaby is back from France. Thanks to those who made the trip professionally and personally enjoyable.)

The illustration shows a very, very happy image rights troll. The cloud of uncertainty from AI-generated images has passed. Now the rights software bots, controlled by cheerful copyright trolls, can scour the Web for unauthorized image use. Forget the humanoids. The action will be from tireless AI generators and equally robust bots designed to charge a fee for the image created by zeros and ones. Yes!

A quite joyful copyright troll displays his killer moves. Thanks, MidJourney. The gradient descent continues, right into the legal eagles’ nests.

“Getty Made an AI Generator That Only Trained on Its Licensed Images” reports:

Generative AI by Getty Images (yes, it’s an unwieldy name) is trained only on the vast Getty Images library, including premium content, giving users full copyright indemnification. This means anyone using the tool and publishing the image it created commercially will be legally protected, promises Getty. Getty worked with Nvidia to use its Edify model, available on Nvidia’s generative AI model library Picasso.

This is exciting. Will the images include a tough-to-discern watermark? Will the images include a license plate, a social security number, or just a nifty string of harmless digits?

The article does reveal the money angle:

The company said any photos created with the tool will not be included in the Getty Images and iStock content libraries. Getty will pay creators if it uses their AI-generated image to train the current and future versions of the model. It will share revenues generated from the tool, “allocating both a pro rata share in respect of every file and a share based on traditional licensing revenue.”

Who will be happy? Getty, the trolls, or the designers who have a way to be more productive with a helping hand from the Getty robot? I think the world will be happier because monetization, smart software, and lawyers are a business model with legs… or claws.

Stephen E Arnold, September 26, 2023

Microsoft Claims to Bring Human Reasoning to AI with New Algorithm

September 20, 2023

Has Microsoft found the key to melding the strengths of AI reasoning and human cognition? Decrypt declares, “Microsoft Infuses AI with Human-Like Reasoning Via an ‘Algorithm of Thoughts’.” Not only does the Algorithm of Thoughts (AoT for short) come to better conclusions, it also saves energy by streamlining the process, Microsoft promises. Writer Jose Antonio Lanz explains:

“The AoT method addresses the limitations of current in-context learning techniques like the ‘Chain-of-Thought’ (CoT) approach. CoT sometimes provides incorrect intermediate steps, whereas AoT guides the model using algorithmic examples for more reliable results. AoT draws inspiration from both humans and machines to improve the performance of a generative AI model. While humans excel in intuitive cognition, algorithms are known for their organized, exhaustive exploration. The research paper says that the Algorithm of Thoughts seeks to ‘fuse these dual facets to augment reasoning capabilities within LLMs.’ Microsoft says this hybrid technique enables the model to overcome human working memory limitations, allowing more comprehensive analysis of ideas. Unlike CoT’s linear reasoning or the ‘Tree of Thoughts’ (ToT) technique, AoT permits flexible contemplation of different options for sub-problems, maintaining efficacy with minimal prompting. It also rivals external tree-search tools, efficiently balancing costs and computations. Overall, AoT represents a shift from supervised learning to integrating the search process itself. With refinements to prompt engineering, researchers believe this approach can enable models to solve complex real-world problems efficiently while also reducing their carbon impact.”

Wowza! Lanz expects Microsoft to incorporate AoT into its GPT-4 and other advanced AI systems. (Microsoft has partnered with OpenAI and invested billions into ChatGPT; it has an exclusive license to integrate ChatGPT into its products.) Does this development bring AI a little closer to humanity? What is next?
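
To make the contrast concrete, here is a rough, hypothetical sketch of how an Algorithm-of-Thoughts-style prompt might differ from a plain Chain-of-Thought prompt: the in-context example walks the model through an explicit try-and-backtrack search rather than one linear reasoning trace. The wording and structure are assumptions for illustration, not Microsoft’s published prompt format.

```python
# One in-context example each for CoT-style and AoT-style prompting on the
# "make 24" puzzle. The AoT example shows exploration and backtracking.
COT_EXAMPLE = (
    "Q: Combine 4, 6, 8, 2 (each once) to make 24.\n"
    "A: 4 * 8 = 32; 32 - 6 = 26; 26 - 2 = 24. Answer: 4 * 8 - 6 - 2.\n"
)

AOT_EXAMPLE = (
    "Q: Combine 4, 6, 8, 2 (each once) to make 24.\n"
    "A: Try 4 + 6 = 10 -> need 14 from 8 and 2: 8 * 2 = 16 (no), 8 + 2 = 10 (no). Backtrack.\n"
    "   Try 6 * 8 = 48 -> reduce with 4 and 2: 48 / 4 = 12, 12 * 2 = 24 (yes).\n"
    "   Answer: 6 * 8 / 4 * 2.\n"
)

def build_prompt(example: str, question: str) -> str:
    """Prepend one worked example (CoT or AoT style) to a new question."""
    return f"{example}\nQ: {question}\nA:"

print(build_prompt(AOT_EXAMPLE, "Combine 3, 5, 7, 9 (each once) to make 24."))
```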

Cynthia Murrell, September 20, 2023
