AI Automation Has a Benefit … for Some

September 26, 2024

Humanity’s progress runs parallel to advancing technology. As technology advances, parts of human society and culture are rendered obsolete and replaced with new things. Job automation is a huge part of this; past examples are the Industrial Revolution and the adoption of computers. AI algorithms are set to make another part of the labor force defunct, but the BBC claims that might be beneficial to workers: “Klarna: AI Lets Us Cut Thousands Of Jobs - But Pay More.”

Klarna is a fintech company that provides online financial services and is described as a “buy now, pay later” company. Klarna plans to use AI to automate the work of the majority of its workforce. The company’s leaders have already canned 1,200 employees, and they plan to fire another 2,000 as AI marketing and customer service are implemented. That leaves Klarna with a grand total of 1,800 employees, who will be paid more.

Klarna’s CEO Sebastian Siemiatkowski is putting a positive spin on cutting jobs by saying the remaining employees will receive larger salaries. While Siemiatkowski sees the benefits of AI, he does warn about AI’s downside and advises governments to prepare for it. He said:

“ ‘I think politicians already today should consider whether there are other alternatives of how they could support people that may be effective,’ he told the Today programme, on BBC Radio 4.

He said it was “too simplistic” to simply say new jobs would be created in the future.

‘I mean, maybe you can become an influencer, but it’s hard to do so if you are 55-years-old,’ he said.”

The International Monetary Fund (IMF) predicts that AI will affect 40% of all jobs and will likely worsen “overall inequality.” As Klarna reduces its staff, the company will rely on what is called “natural attrition”: a hiring freeze under which departing workers are simply not replaced. The remaining workforce will have bigger workloads. Siemiatkowski claims AI will eventually reduce those workloads.

Will that really happen? Maybe?

Will the remaining workers receive a pay raise or will that money go straight to the leaders’ pockets? Probably.

Whitney Grace, September 26, 2024

Amazon Has a Better Idea about Catching Up with Other AI Outfits

September 25, 2024

AWS Program to Bolster 80 AI Startups from Around the World

Can boosting a roster of little-known startups help AWS catch up with Google’s and Microsoft’s AI successes? Amazon must hope so. It just tapped 80 companies from around the world to receive substantial support in its AWS Global Generative AI Accelerator program. Each firm will receive up to $1 million in AWS credits, expert mentorship, and a slot at the AWS re:Invent conference in December.

India’s CXOtoday is particularly proud of the seven recipients from that country. It boasts, “AWS Selects Seven Generative AI Startups from India for Global AWS Generative AI Accelerator.” We learn:

“The selected Indian startups— Convrse, House of Models, Neural Garage, Orbo.ai, Phot.ai, Unscript AI, and Zocket, are among the 80 companies selected by AWS worldwide for their innovative use of AI and their global growth ambitions. The Indian cohort also represents the highest number of startups selected from a country in the Asia-Pacific region for the AWS Global Generative AI Accelerator program.”

The post offers this stat as evidence India is now an AI hotspot. It also supplies some more details about the Amazon program:

“Selected startups will gain access to AWS compute, storage, and database technologies, as well as AWS Trainium and AWS Inferentia2, energy-efficient AI chips that offer high performance at the lowest cost. The credits can also be used on Amazon SageMaker, a fully managed service that helps companies build and train their own foundation models (FMs), as well as to access models and tools to easily and securely build generative AI applications through Amazon Bedrock. The 10-week program matches participants with both business and technical mentors based on their industry, and chosen startups will receive up to US$1 million each in AWS credits to help them build, train, test, and launch their generative AI solutions. Participants will also have access to technology and technical sessions from program presenting partner NVIDIA.”
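
For readers wondering what the startups will actually do with those credits, here is a minimal sketch of invoking a foundation model through Amazon Bedrock with boto3. The region, model ID, and prompt are our illustrative choices, not anything from the announcement.

```python
import json

import boto3

# A minimal sketch of calling a foundation model through Amazon Bedrock.
# Region and model ID are illustrative; check which models your account can access.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical model choice
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [
            {"role": "user", "content": "Draft a one-paragraph product pitch."}
        ],
    }),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

Every call like that burns down the AWS credits as tokens flow, which is, of course, the point of the program.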

See the write-up to learn more about each of the Indian startups selected, or check out the full roster here.

The question is, “Will this help Amazon, which is struggling while Facebook, Google, and Microsoft look like the leaders in the AI derby?”

Cynthia Murrell, September 25, 2024

Open Source Dox Chaos: An Opportunity for AI

September 24, 2024

It is a problem as old as the concept of open source itself. ZDNet laments, “Linux and Open-Source Documentation Is a Mess: Here’s the Solution.” We won’t leave you in suspense. Writer Steven Vaughan-Nichols’ solution is the obvious one—pay people to write and organize good documentation. Less obvious is who will foot the bill. Generous donors? Governments? Corporations with their own agendas? That question is left unanswered.

But there is no doubt: open-source documentation, when it exists at all, is almost universally bad. Vaughan-Nichols recounts:

“When I was a wet-behind-the-ears Unix user and programmer, the go-to response to any tech question was RTFM, which stands for ‘Read the F… Fine Manual.’ Unfortunately, this hasn’t changed for the Linux and open-source software generations. It’s high time we addressed this issue and brought about positive change. The manuals and almost all the documentation are often outdated, sometimes nearly impossible to read, and sometimes, they don’t even exist.”

Not only are the manuals that have been cobbled together outdated and hard to read, they are often so disorganized that it is hard to find what one is looking for. Even when it is there. Somewhere. The post emphasizes:

“It doesn’t help any that kernel documentation consists of ‘thousands of individual documents’ written in isolation rather than a coherent body of documentation. While efforts have been made to organize documents into books for specific readers, the overall documentation still lacks a unified structure. Steve Rostedt, a Google software engineer and Linux kernel developer, would agree. At last year’s Linux Plumbers conference, he said, ‘when he runs into bugs, he can’t find documents describing how things work.’ If someone as senior as Rostedt has trouble, how much luck do you think a novice programmer will have trying to find an answer to a difficult question?”

This problem is no secret in the open-source community. Many feel so strongly about it they spend hours of unpaid time working to address it. Until they just cannot take it anymore. It is easy to get burned out when one is barely making a dent and no one appreciates the effort. At least, not enough to pay for it.

Here at Beyond Search we have a question: Why can’t Microsoft’s vaunted Copilot tackle this information problem? Maybe Copilot cannot do the job?

Cynthia Murrell, September 24, 2024

Microsoft Explains Who Is at Fault If Copilot Smart Software Does Dumb Things

September 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Those Windows Central experts have delivered a doozy of a write up. “Microsoft Says OpenAI’s ChatGPT Isn’t Better than Copilot; You Just Aren’t Using It Right, But Copilot Academy Is Here to Help” explains:

Avid AI users often boast about ChatGPT’s advanced user experience and capabilities compared to Microsoft’s Copilot AI offering, although both chatbots are based on OpenAI’s technology. Earlier this year, a report disclosed that the top complaint about Copilot AI at Microsoft is that “it doesn’t seem to work as well as ChatGPT.”

I think I understand. Microsoft uses OpenAI technology, other smart software, and home-brew code to deliver Copilot in apps, the browser, and Azure services. However, users have reported that Copilot does not work as well as ChatGPT. That’s interesting. Hallucination-capable software polished by the Microsoft engineering legions is allegedly inferior to the original ChatGPT.


Enthusiastic young car owners replace individual parts. But the old car remains an old, rusty vehicle. Thanks, MSFT Copilot. Good enough. No, I don’t want to attend a class to learn how to use you.

Who is responsible? The answer certainly surprised me. Here’s what the Windows Central wizards offer:

A Microsoft employee indicated that the quality of Copilot’s response depends on how you present your prompt or query. At the time, the tech giant leveraged curated videos to help users improve their prompt engineering skills. And now, Microsoft is scaling things a notch higher with Copilot Academy. As you might have guessed, Copilot Academy is a program designed to help businesses learn the best practices when interacting and leveraging the tool’s capabilities.

I think this means that the user is at fault, not Microsoft’s refactored version of OpenAI’s smart software. The fix is for the user to learn how to write prompts. Microsoft is not responsible. But OpenAI’s implementation of ChatGPT is perceived as better, and training to use ChatGPT is left to third parties. I hope I am close to the pin on this summary. OpenAI just puts Strawberries in front of hungry users and lets them gobble up ChatGPT output. Microsoft fixes up ChatGPT, and users are allegedly not happy. Therefore, Microsoft puts the burden on the user to learn how to interact with the Microsoft version of ChatGPT.
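
For those who have not attended a prompt academy, the idea reduces to something like the sketch below. It is our illustration, not Microsoft’s curriculum, and it uses OpenAI’s Python client rather than Copilot; the model name and prompts are made up for the example.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# The lazy prompt the academies warn against: no role, no context, no format.
lazy = [{"role": "user", "content": "Write about our sales."}]

# An "engineered" prompt: role, audience, structure, and constraints spelled out.
engineered = [
    {"role": "system", "content": "You are a financial analyst. Be concise."},
    {
        "role": "user",
        "content": (
            "Summarize Q3 widget sales for an executive audience. "
            "Use three bullet points and flag any figure that fell more than 10%."
        ),
    },
]

for messages in (lazy, engineered):
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content, "\n---")
```

The second prompt reliably gets a tighter answer out of any of these chatbots. Whether that difference justifies an academy is another question.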

I thought smart software was intended to make work easier and more efficient. Why do I have to go to school to learn Copilot when I can just pound text or a chunk of data into ChatGPT, click a button, and get an output? Not even a Palantir boot camp will lure me to the service. Sorry, pal.

My hypothesis is that Microsoft is a couple of steps away from creating something designed for regular users. In its effort to “improve” ChatGPT, Microsoft makes the experience of using Copilot more miserable for the user. I think Microsoft’s own engineering practices act like a stuck brake on an old Lada. The vehicle has problems, so installing a new master cylinder does not improve the automobile.

Crazy thinking: That’s what the write up suggests to me.

Stephen E Arnold, September 23, 2024

DAIS: A New Attempt to Make AI Play Nicely with Humans

September 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

How about a decentralized artificial intelligence “association”? One has been set up by Michael Casey, the former chief content officer at Coindesk. (Coindesk reports about the bright, sunny world of crypto currency and related topics.) I learned about this society in — you guessed it — Coindesk’s online information service. The article “Decentralized AI Society Launched to Fight Tech Giants Who ‘Own the Regulators’” is interesting. I like the idea that “tech giants” own the regulators. This is an observation with which Apple and Google might not agree. Both “tech giants” have been facing some unfavorable regulatory decisions. If these regulators are “owned,” I think the “tech giants” need to exercise their leadership skills to make the annoying regulators go away. One resigned in the EU this week, but as Shakespeare said of lawyers, let’s drown them. So far the “tech giants” have been bumbling along, growing bigger as a result of feasting on data and amplifying allegedly monopolistic behaviors which just seem to pop up, rules or no rules.


Two experts look at what emerged from a Petri dish of technological goodies. Quite a surprise I assume. Thanks, MSFT Copilot. Good enough.

The write up reports:

Industry leaders have launched a non-profit organization called the Decentralized AI Society (DAIS), dedicated to tackling the probability of the monopolization of the artificial intelligence (AI) industry.

What is the DAIS outfit setting out to do? Here’s what Coindesk reports and this is a quote of the bullets from the write up:

Bringing capital to the decentralized AI world in what has already become an arms race for resources like graphical processing units (GPUs) and the data centers that compute together.

Shaping policy to craft AI regulations.

Education and promotion of decentralized AI.

Engineering to create new algorithms for learning models in a distributed way.

These are interesting targets. I want to point out that “decentralization” is the opposite of what the “tech giants” have already put in place; that is, concentration of money, talent, and infrastructure. Even old dogs like Oracle are now hopping on the centralization bandwagon. Newcomers, too, want to get as many cattle as possible into the killing chute before the glamor of AI begins to lose some sparkles.

Several observations:

  1. DAIS has some crypto roots. These may become positive or negative. Right now regulators are interested in crypto, as are other enforcement entities.
  2. One of the Arnold Laws of Online is that centralization, consolidation, and concentration are emergent behaviors for online products and services. Countering this “law” and its “emergent” functionality is going to take more than conferences, a Web site, and some “logical” ideas which any “rational” person would heartily endorse. But emergent is tough to stop based on my experience.
  3. Singapore has become a hot spot for certain financial and technical activities. The problem is that nation-states may not want to be inhibited in their AI ambitions. Some may find the notion of “education” a problem as well because curricula must conform to pre-defined frameworks. Distributed is not a pre-defined anything; it is the opposite of controlled and, therefore, likely to be a bit of a problem.

Net net: Interesting idea. But Amazon, Google, Facebook, Microsoft, and some other outfits may talk about “distributed” while really meaning: the technological notion is okay, but we want as much of the money as we can get.

Stephen E Arnold, September 20, 2024

YouTube Is Bringing More AI To Its Platform

September 20, 2024

AI-generated videos have already swarmed onto YouTube. These videos range from fake Disney movie trailers to inappropriate content that slipped past the flagging system. YouTube creators are already upset that their videos are being overlooked by the algorithm, yet some creators are being hired for an AI project. Digital Trends explains more: “More AI May Be Coming To YouTube In A Big Way.”

Gemini AI is currently in beta testing across YouTube. It is described as a tool for YouTubers to brainstorm video ideas, including titles, topics, and thumbnails. Only a select few YouTubers are testing Gemini AI, and they will share their feedback. The AI tool will eventually live underneath the platform’s analytics menu, under the research tab. The tool could actually be helpful:

“This marks Google’s second foray into including AI assistance in YouTube users’ creative processes. In May, the company launched a content inspiration tool on YouTube Studio that provides tips and suggestions for future clip topics based on viewer trends. For most any given topic, the AI will highlight related videos you’ve already published, provide tips on themes to use, and generate a script outline for you to follow.”

The YouTubers are experimenting with both Gemini AI and the content inspiration tool. They’re doing A/B testing, and their experiences will shape how AI is used on the video platform. YouTube does acknowledge that AI is a transformative creative tool, but viewers want to know whether what they’re watching is real or fake. Is anyone imagining an AI warning or rating system?

Whitney Grace, September 20, 2024

Happy AI News: Job Losses? Nope, Not a Thing

September 19, 2024

This essay is the work of a dumb humanoid. No smart software required.

I read “AI May Not Steal Many Jobs after All. It May Just Make Workers More Efficient.” Immediately two points jumped out at me. The AP (the publisher of the “real” news story) is hedging with the weasel word “may” and the hedgy phrase “after all.” Why is this important? The “real” news industry is interested in smart software to reduce costs and generate more “real” news more quickly. The days of “real” reporters disappearing for hours to confirm a story with a source are now dismissed as fiddling around. The costs of doing anything without a gusher of money pumping 24×7 are daunting. The word “efficient” sits in the headline as a digital harridan stakeholder. Who wants that?


The manager of a global news operation reports that under his watch, he has achieved peak efficiency. Thanks, MSFT Copilot. Will this work for production software development? Good enough is the new benchmark, right?

The story itself strikes me as a bit of content marketing which says, “Hey, everyone can use AI to become more efficient.” The subtext is, “Hey, don’t worry. No software robot or agentic thingy will reduce staff. Probably.”

The AP is a litigious outfit, so I will quote sparingly even though I worked at a newspaper which “participated” in the business process of the entity. Here’s one sentence from the “real” news write up:

Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.

Yep, just like the steam engine and the Internet.

When technologies emerge, most go away or become componentized or dematerialized. When one of those hot technologies fails to produce revenues, quite predictable outcomes result. Executives get fired. VC firms do fancy dancing. IRS professionals squint at tax returns.

So far AI has been a “big guys win, sort of, because they have bundles of cash” and “little outfits lose control of their costs” affair. Here’s my take:

  1. Human-generated news is expensive, and if smart software can do a good enough job, that software will be deployed. The test will run in real time. If the software fails, the company may sell itself, pivot, or run a garage sale.
  2. When “good enough” is the benchmark, staff will be replaced with smart software. Some of the whiz kids in AI like the buzzword “agentic.” Okay, agentic systems will replace humans with good enough smart software. That will happen. Excellence is not the goal. Money saving is.
  3. Over time, the ideas of the current transformer-based AI systems will be enriched by other numerical procedures, and maybe, just maybe, some novel methods will provide “smart software” with more capabilities. Right now, most smart software just finds a path through already-known information. No output is new, just close to what the system’s math concludes is on point. The next generation of smart software seems to be in the future. How far? It’s anyone’s guess.

My hunch is that Amazon Audible will suggest that humans will not lose their jobs. However, the company is allegedly going to replace human voices with “audibles” generated by smart software. (For more about this displacement of humans, check out the Bloomberg story.)

Net net: The “real” news story prepares the field for planting writing software in an organization. It says, “Customers will benefit, and more jobs will be produced.” Great assertions. I think AI will be disruptive and in unpredictable ways. Why not come out and say, “If the agentic software is good enough, we will fire people”? Answer: Being upfront is not something those who are not dinobabies do.

Stephen E Arnold, September 19, 2024

Smart Software: More Novel and Exciting Than a Mere Human

September 17, 2024

This essay is the work of a dumb humanoid. No smart software required.

Idea people: What a quaint notion. Why pay for expensive blue-chip consultants or wonder youth from fancy universities? Just use smart software to generate new, novel, unique ideas. Does that sound over the top? Not according to  “AIs Generate More Novel and Exciting Research Ideas Than Human Experts.” Wow, I forgot exciting. AI outputs can be exciting to the few humans left to examine the outputs.

The write up says:

Recent breakthroughs in large language models (LLMs) have excited researchers about the potential to revolutionize scientific discovery, with models like ChatGPT and Anthropic’s Claude showing an ability to autonomously generate and validate new research ideas. This, of course, was one of the many things most people assumed AIs could never take over from humans; the ability to generate new knowledge and make new scientific discoveries, as opposed to stitching together existing knowledge from their training data.

Aside from having no job and embracing couch surfing or returning to one’s parental domicile, what are the implications of this bold statement? It means that smart software is better, faster, and cheaper at producing novel and “exciting” research ideas. There is even a chart to prove that the study’s findings are allegedly reproducible. The graph has whisker lines too. I am a believer… sort of.

Then there is the magic of a Bonferroni correction, which allegedly copes with data from multiple dependent or independent statistical tests performed in one meta-calculation. Does it work? Sure, a fancy average is usually close enough for horseshoes, I have heard.
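
For the record, the correction itself is simple arithmetic. Here is a minimal sketch with made-up p-values (ours, not the study’s): with m tests, each p-value is compared against a threshold of alpha divided by m, which caps the chance of any false positive across the whole family of tests at alpha.

```python
# A toy Bonferroni correction: with m tests, compare each p-value to alpha / m
# instead of alpha, capping the family-wise error rate at alpha.
alpha = 0.05
p_values = [0.004, 0.020, 0.030, 0.009]  # made-up p-values for four tests
m = len(p_values)

for p in p_values:
    verdict = "significant" if p < alpha / m else "not significant"
    print(f"p={p:.3f} vs threshold {alpha / m:.4f}: {verdict}")
```

The cost of that safety is power: run enough tests and the threshold becomes so strict that real effects stop clearing the bar. That is one reason whisker lines and corrected averages deserve a skeptical squint.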


Just keep in mind that human judgments are tossed into the results. That adds some of that delightful subjective spice. The proof of the “novelty” creation process, according to the write up, comes from Google. The article says:

…we can’t understate AI’s potential to radically accelerate progress in certain areas – as evidenced by Deepmind’s GNoME system, which knocked off about 800 years’ worth of materials discovery in a matter of months, and spat out recipes for about 380,000 new inorganic crystals that could have revolutionary potential in all sorts of areas.  This is the fastest-developing technology humanity has ever seen; it’s reasonable to expect that many of its flaws will be patched up and painted over within the next few years. Many AI researchers believe we’re approaching general superintelligence – the point at which generalist AIs will overtake expert knowledge in more or less all fields.


Flaws? Hallucinations? Hey, not to worry. These will be resolved as the AI sector moves with the arrow of technology. Too bad if some humanoids are pierced by the arrow and die on the shoulder of the uncaring Information Superhighway. What about those who say AI will not take jobs? Have those people talked with an accountant responsible for cost control?

Stephen E Arnold, September 17, 2024

Trust AI? Obvious to Those Who Do Not Want to Think Too Much

September 16, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Who wants to evaluate information? The answer: Not too many people. In my lectures, I show a diagram of the six processes an analyst or investigator should execute. The reality is that several of the processes are difficult, which means time and money are required to complete them in a thorough manner. Who has time? The answer: Not too many people or organizations.

What’s the solution? The Engineer’s article “Study Shows Alarming Level of Trust in AI for Life and Death Decisions” reports:

A US study that simulated life and death decisions has shown that humans place excessive trust in artificial intelligence when guiding their choices.

Interesting. Perhaps China is the poster child for putting “trust” in smart software hooked up to nuclear weapons? Fortune reported on September 10, 2024, that China has refused to sign an agreement to ban smart software from controlling nuclear weapons.


Yep, I trust AI, don’t you? Thanks, MSFT Copilot. I trusted you to do a great job. What did you deliver? A good enough cartoon.

The study reported in The Engineer might be of interest to some in China. Specifically, the write up stated:

Despite being informed of the fallibility of the AI systems in the study, two-thirds of subjects allowed their decisions to be influenced by the AI. The work was conducted by scientists at the University of California, Merced.

Are these results on point? My experience suggests they are: not only do people accept the outputs of a computer as “correct,” but many people, when shown facts that contradict the computer output, defend the computer as more reliable and accurate.

I am not quite such a goose. Machines and software generate errors. The systems have for decades. But I think the reason is that the humans with whom I have interacted pursue convenience. Verifying, analyzing, and thinking are hot processes. Humans want to kick back in cool, low humidity environments and pursue the least effort path in many situations.

The illusion of computer accuracy allows people to skip reviewing their Visa statement and doubting the validity of an output displayed in a spreadsheet. The fact that the smart software hallucinates is ignored. I hear “I know when the system needs checking.” Yeah, sure you do.

Those involved in preparing the study are quoted as saying:

“Our project was about high-risk decisions made under uncertainty when the AI is unreliable,” said Holbrook. “We should have a healthy skepticism about AI, especially in life-or-death decisions. We see AI doing extraordinary things and we think that because it’s amazing in this domain, it will be amazing in another. We can’t assume that. These are still devices with limited abilities.”

These folks are not going to be hired to advise the Chinese government I surmise.

Stephen E Arnold, September 16, 2024

Need Help, Students? AI Is Here

September 13, 2024

Here is a resource for, well, for those who would cheat maybe? The site Pisi.ee shares information on a course called “How to Use AI to Write a Research Paper.” Hosted by Fikper.com, the course is designed for “high school, college, and university students who are eager to improve their research and writing skills through the use of artificial intelligence.” Research, right. Wink, wink. The course description specifies:

“Whether you’re a high school student tackling your first research project, a college student refining your academic skills, or a university scholar pursuing advanced studies, understanding how to leverage AI can significantly enhance your efficiency and effectiveness. This course offers a comprehensive guide to integrating AI tools into your research process, providing you with the knowledge and skills to excel. Many students struggle with the task of conducting research and writing about it. Identifying a research problem, creating clear questions, looking for other literature, and keeping your academic integrity are a challenge, especially with all the information available. This course addresses these challenges head-on, providing step-by-step guidance and practical exercises that lead you through the research process. What sets this course apart from others is its practical, hands-on approach combined with a strong emphasis on academic integrity.”

A strong emphasis on integrity, you say? Well, that is different, then. All the tools one may need to generate, er, research papers are covered:

“Tools like Zotero, Mendeley, Grammarly, Hemingway App, IBM Watson, Google Scholar, Turnitin, Copyscape, EndNote, and QuillBot can be used at different stages of the research process. Our goal is to give you a toolkit of resources that you can choose to apply, making your research and writing tasks more efficient and effective.”

Yep, just what aspiring students need to gain that “competitive edge,” as the description puts it. With integrity, of course.

Cynthia Murrell, September 13, 2024
