India: AI, We Go This Way, Then We Go That Way

April 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In early March 2024, India said it would require that all AI-related projects still in development receive governmental approval before release to the public. India’s Ministry of Electronics and Information Technology stated it wanted to notify the public of AI technology’s fallibility and unreliability. The intent was to tag all AI technology with a “consent popup” that informed users of potential errors and defects. The ministry also wanted to mark potentially harmful AI content, such as deepfakes, with a label or unique identifier.

The Register explains that it didn’t take long for the South Asian country to rescind the plan: “India Quickly Unwinds Requirement For Government Approval Of AIs.” The ministry issued an update that removed the requirement for government approval, but it did add more obligations to label potentially harmful content:

"Among the new requirements for Indian AI operations are labelling deepfakes, preventing bias in models, and informing users of models’ limitations. AI shops are also to avoid production and sharing of illegal content, and must inform users of consequences that could flow from using AI to create illegal material.”

Minister of State for Entrepreneurship, Skill Development, Electronics, and Technology Rajeev Chandrasekhar provided context for the government’s initial approval plan. He explained it was intended only for big technology companies; smaller companies and startups would not have needed the approval. Chandrasekhar is recognized for his support of India’s burgeoning technology industry.

Whitney Grace, April 3, 2024

Google AI Has a New Competitive Angle: AI Is a Bit of Problem for Everyone Except Us, Of Course

April 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Google has not recovered from the MSFT Davos PR coup. The online advertising company with a wonderful approach to management promptly did a road show in Paris which displayed incorrect data. Next the company declared a Code Red emergency (whatever that means in an ad outfit). Then the Googley folk reorganized by laterally arabesque-ing Dr. Jeff Dean somewhere and putting smart software in the hands of the DeepMind survivors. Okay, now we are into Phase 2 of the quantumly supreme company’s push into smart software.

An unknown person at Speakers’ Corner in Hyde Park is explaining to the enthralled passersby that “AI is like cryptocurrency.” Is there a face in the crowd that looks like the powerhouse behind FTX? Good enough, MSFT Copilot.

A good example of this PR tactic appears in “Google DeepMind Co-Founder Voices Concerns Over AI Hype: ‘We’re Talking About All Sorts Of Things That Are Just Not Real’.” Some additional color similar to that of sour grapes appears in “Google’s DeepMind CEO Says the Massive Funds Flowing into AI Bring with It Loads of Hype and a Fair Share of Grifting.”

The main idea in these write ups is that the Top Dog at DeepMind and possible candidate to take over the online ad outfit is not talking about ruining the life of a Go player or folding proteins. Nope. The new message, as I understand it, is that AI is just not that great. Here’s an example of the new PR push:

The fervor amongst investors for AI, Hassabis told the Financial Times, reminded him of “other hyped-up areas” like crypto. “Some of that has now spilled over into AI, which I think is a bit unfortunate,” Hassabis told the outlet. “And it clouds the science and the research, which is phenomenal.”

Yes, crypto. Digital currency is associated with stellar professionals like Sam Bankman-Fried and those engaged in illegal activities. (I will be talking about some of those illegal activities at the US National Cyber Crime Conference in a few weeks.)

So what’s the PR angle? Here’s my take on the message from the CEO in waiting:

  1. The message allows Google and its numerous supporters to say, “We think AI is like crypto but maybe worse.”
  2. Google can suggest, “Our AI is not so good, but that’s because we are working overtime to avoid the crypto-curse which is inherent in outfits engaged in shoving AI down your throat.”
  3. Googlers keep a cool head (gardons la tête froide) unlike the possibly criminal outfits cheerleading for the wonders of artificial intelligence.

Will the approach work? In my opinion, yes, it will add a joke to the Sundar and Prabhakar Comedy Act. No, I don’t think it will alter the scurrying in the world of entrepreneurs, investment firms, and “real” Silicon Valley journalists, poohbahs, and pundits.

Stephen E Arnold, April 2, 2024

AI and Job Wage Friction

April 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read again “The Jobs Being Replaced by AI – An Analysis of 5M Freelancing Jobs,” published in February 2024 by Bloomberg (the outfit interested in fiddled firmware on motherboards). The main idea in the report is that AI boosted a number of freelance jobs. What are the jobs where AI has not (as yet) added friction to the money-making process? Here’s the list of jobs NOT impeded by smart software:

  • Accounting
  • Backend development
  • Graphics design
  • Market research
  • Sales
  • Video editing and production
  • Web design
  • Web development

Other sources suggest that “Accounting” may be targeted by an AI-powered efficiency expert. I want to watch how this profession navigates the smart software in what is often a repetitive series of eye-glazing steps.

Thanks, MSFT Copilot. How are you doing with your reorganization? Running smoothly? Yeah. Smoothly.

Now to the meat of the report: What professions or jobs were the MOST affected by AI? From the cited write up, these are:

  • Customer service (the exciting, long-suffering discipline of chatbots)
  • Social media marketing
  • Translation
  • Writing

The write up includes another telling chunk of data. AI has apparently had an impact on the amount of money some customers were willing to pay freelancers or gig workers. The jobs finding greater billing friction are:

  • Backend development
  • Market research
  • Sales
  • Translation
  • Video editing and production
  • Web development
  • Writing

The article contains quite a bit of related information. Please, consult the original for a number of almost unreadable graphics and tabular data. I do want to offer several observations:

  1. One consequence of AI, if the data in this report are close enough for horseshoes, is that smart software drives down what customers will pay for a wide range of human-centric services. You don’t lose your job; you just get a taste of Victorian sweatshop management thinking.
  2. Once smart software is perceived as reasonably capable of good enough translation, it is embraced. My view is that translation services are likely to be a harbinger of how AI will affect other jobs. AI does not have to be great; it just has to be perceived as okay. Then. Bang. Hasta la vista, human translators, except for certain specialized functions.
  3. Data like the information in the Bloomberg article provide a handy road map for AI developers. The jobs least affected by AI become targets for entrepreneurs who find that low-hanging fruit like translation has been picked. (Accountants, I surmise, should not relax too much.)

Net net: The wage suppression angle and the pattern of incremental, then rapid, adoption of AI are important ideas to consider when analyzing the economic ripples of AI.

Stephen E Arnold, April 1, 2024

AI and Stupid Users: A Glimpse of What Is to Come

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

When smart software does not deliver, who is responsible? I don’t have a dog in the AI fight. I am thinking about deployment of smart software in professional environments. When the outputs are wonky or do not deliver the bang of a competing system, what is the customer supposed to do? Is the vendor responsible? Is the customer responsible? Is the person who tried to validate the outputs guilty of putting a finger on the scale of a system whose developers cannot explain exactly how an output was determined? Viewed from one angle, this is the Achilles’ heel of artificial intelligence. Viewed from another angle, determining responsibility is an issue which, in my opinion, will be decided by legal processes. In the meantime, a system’s not working can have significant consequences. How about those automated systems on aircraft which dive suddenly or vessels which jam a ship channel?

I read a write up which provides a peek at what large outfits pushing smart software will do when challenged about quality, accuracy, or other subjective factors related to AI-imbued systems. Let’s take a quick look at “Customers Complain That Copilot Isn’t As Good as ChatGPT, Microsoft Blames Misunderstanding and Misuse.”

The main idea in the write up strikes me as:

Microsoft is doing absolutely everything it can to force people into using its Copilot AI tools, whether they want to or not. According to a new report, several customers have reported a problem: it doesn’t perform as well as ChatGPT. But Microsoft believes the issue lies with people who aren’t using Copilot correctly or don’t understand the differences between the two products.

Yep, the user is the problem. I can imagine the adjudicator (illustrated as a mother) listening to a large company’s sales professional and a professional certified developer arguing about how the customer went off the rails. Is the original programmer the problem? Is the new manager in charge of AI responsible? Is it the user or users?

Illustration by MSFT Copilot. Good enough, MSFT.

The write up continues:

One complaint that has repeatedly been raised by customers is that Copilot doesn’t compare to ChatGPT. Microsoft says this is because customers don’t understand the differences between the two products: Copilot for Microsoft 365 is built on the Azure OpenAI model, combining OpenAI’s large language models with user data in the Microsoft Graph and the Microsoft 365 apps. Microsoft says this means its tools have more restrictions than ChatGPT, including only temporarily accessing internal data before deleting it after each query.

Here’s another snippet from the cited article:

In addition to blaming customers’ apparent ignorance, Microsoft employees say many users are just bad at writing prompts. “If you don’t ask the right question, it will still do its best to give you the right answer and it can assume things,” one worker said. “It’s a copilot, not an autopilot. You have to work with it,” they added, which sounds like a slogan Microsoft should adopt in its marketing for Copilot. The employee added that Microsoft has hired partner BrainStorm, which offers training for Microsoft 365, to help create instructional videos to help customers create better Copilot prompts.

I will be interested in watching how these “blame games” unfold.

Stephen E Arnold, March 29, 2024

How to Fool a Dinobaby Online

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Marketers, take note. Forget about gaming the soon-to-be-on-life-support Google Web search. Embrace fakery. And who, you may ask, will teach me? The answer is The Daily Beast. To begin your life-changing journey, navigate to “Facebook Is Filled With AI-Generated Garbage—and Older Adults Are Being Tricked.”

Two government regulators wonder where the deepfakes have gone. Thanks, MSFT Copilot. Keep on updating, please.

The write up explains:

So far, the few experiments to analyze seniors’ AI perception seem to align with the Facebook phenomenon…. The team found that the older participants were more likely to believe that AI-generated images were made by humans.

Okay, that’s step one: Identify your target market.

What’s next? The write up points out:

scammers have wielded increasingly sophisticated generative AI tools to go after older adults. They can use deepfake audio and images sourced from social media to pretend to be a grandchild calling from jail for bail money, or even falsify a relative’s appearance on a video call.

That’s step two: Weave in a family or social tug on the heartstrings.

Then what? The article helpfully notes:

As of last week, there are more than 50 bills across 30 states aimed to clamp down on deepfake risks. And since the beginning of 2024, Congress has introduced a flurry of bills to address deepfakes.

Yep, the flag has been dropped. The race with few or no rules is underway. But what about government rules and regulations? Yeah, those will be chugging around after the race cars have disappeared from view.

Thanks for the guidelines.

Stephen E Arnold, March 29, 2024

AI and Jobs: Underestimating Perhaps?

March 28, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I am interested in the impact of smart software on jobs. I spotted “1.5M UK Jobs Now at Risk from AI, Report Finds.” But the snappier assertion appears in the subtitle to the write up:

The number could rise to 7.9M in the future

The UK has about 68 million people (maybe more, maybe fewer, but close enough). The estimate of 7.9 million jobs at risk translates to nearly eight million people out of work. Now these types of “future impact” estimates are diaphanous. But the message seems clear. Despite the nascent stage of smart software’s development, the number one use may be dumping humans and learning to love software. Will the software make today’s systems work more efficiently? In my experience, computerizing processes does very little to improve the outputs. Some tasks are completed quickly. However, get the process wrong, and one has a darned interesting project for a blue-chip consulting firm.

The smart software is alone in an empty office building. Does the smart software look lonely or unhappy? Thanks, MSFT Copilot. Good enough illustration.

The write up notes:

Back-office, entry-level, and part-time jobs are the ones mostly exposed, with employees on medium and low wages being at the greatest risk.

If this statement is accurate, life will be exciting for parents whose progeny camp out in the family room or who turn to other, possibly less socially acceptable, methods of generating cash. Crime comes to my mind, but you may see volunteers working to pick up trash in lovely Plymouth or Blackpool.

The write up notes:

Experts have argued that AI can be a force for good in the labor market — as long as it goes hand in hand with rebuilding workforce skills.

Academics, wizards, elected officials, and consultants can find the silver lining in the cloud that spawned the tornado.

Several observations, if I may:

  1. The acceleration of tools to add AI to processes is evident in the continuous stream of “new” projects appearing in GitHub, Product Watch, and AI newsletters. The availability of tools means that applications will flow into job-reducing opportunities; that is, outfits which will pay cash to cut payroll.
  2. AI functions are now being embedded in mobile devices. Smart software will be a crutch and most users will not realize that their own skills are being transformed. Welcoming AI is an important first step in using AI to replace an expensive, unreliable humanoid.
  3. The floundering of government and non-governmental organizations is amusing to watch. Each day documents about managing the AI “risk” appear in my feedreader. Yet zero meaningful action is taking place as certain large companies work to consolidate their control of essential and mostly proprietary technologies and know-how.

Net net: The job loss estimate is interesting. My hunch is that it underestimates the impact of smart software on traditional work. This is good for smart software and possibly not so good for humanoids.

Stephen E Arnold, March 28, 2024

Backpressure: A Bit of a Problem in Enterprise Search in 2024

March 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have noticed numerous references to search and retrieval in the last few months. Most of these articles and podcasts focus on making an organization’s data accessible. That’s the same old story told since the days of STAIRS III and other dinobaby artifacts. The gist of the flow of search-related articles is that information is locked up or silo-ized. Using a combination of “artificial intelligence,” “open source” software, and powerful computing resources — problem solved.

image

A modern enterprise search content processing system struggles to keep pace with the changes to already processed content (the deltas) and the flow of new content in a wide range of file types and formats. Thanks, MSFT Copilot. You have learned from your experience with Fast Search & Transfer file indexing, it seems.

The 2019 essay “Backpressure Explained — The Resisted Flow of Data Through Software” is pertinent in 2024. The essay, written by Jay Phelps, states:

The purpose of software is to take input data and turn it into some desired output data. That output data might be JSON from an API, it might be HTML for a webpage, or the pixels displayed on your monitor. Backpressure is when the progress of turning that input to output is resisted in some way. In most cases that resistance is computational speed — trouble computing the output as fast as the input comes in — so that’s by far the easiest way to look at it.

Mr. Phelps identifies several types of backpressure. These are:

  1. More info to be processed than a system can handle
  2. Reading and writing file speeds are not up to the demand for reading and writing
  3. Communication “pipes” between and among servers are too small, slow, or unstable
  4. A group of hardware and software components cannot move data where it is needed fast enough.

I have simplified his more elegantly expressed points. Please, consult the original 2019 document for the information I have hip hopped over.
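
To make the first and most common type concrete, here is a minimal sketch in Python. It is my own illustration, not Mr. Phelps’ code: a bounded queue between a fast producer and a slow consumer makes the resistance visible, because the producer is forced to wait whenever the buffer fills.

    import queue
    import threading
    import time

    # A minimal sketch of backpressure: a bounded queue forces the
    # producer to slow down when the consumer cannot keep up.
    work = queue.Queue(maxsize=10)  # small buffer, so resistance appears early

    def producer():
        for i in range(100):
            # put() blocks when the queue is full; that blocking is
            # the "resisted flow" on the input side.
            work.put(f"document-{i}")
        work.put(None)  # sentinel: no more input

    def consumer():
        while True:
            item = work.get()
            if item is None:
                break
            time.sleep(0.05)  # simulate slow processing of one item

    threading.Thread(target=producer).start()
    consumer()

Real systems swap the blocking for buffering, dropping, or throttling, but the trade-off is the same: something, somewhere, has to absorb the difference between input rate and output rate.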

My point is that in the chatter about enterprise search and retrieval, there are a number of situations (use cases to those non-dinobabies) which create some interesting issues. Let me highlight these and then wrap up this short essay.

In an enterprise, the following situations exist and are often ignored or dismissed as irrelevant. When people pooh-pooh my observations, it is clear to me that these people have [a] never been subject to a legal discovery process associated with enterprise search fraud and [b] are entitled whiz kids who don’t do too much in the quite dirty, messy, “real” world. (I do like the variety in T-shirts and lumberjack shirts, however.)

First, in an enterprise, content changes. These “deltas” are a giant problem. None of the systems I have examined, tested, installed, or advised about has a procedure to identify, in anything close to real time, a change made to a PowerPoint which was presented to a client and then converted into an email confirming a deal, price, or technical feature. In fact, no one may know until the president’s laptop is examined by an investigator who discovers the “forgotten” information. Even more exciting, the opposing legal team’s review of a laptop dump as part of a discovery process “finds” the sequence of messages and connects the dots. Exciting, right? But “deltas” pose another problem: these modified content objects proliferate like gerbils. One can talk about information governance, but it is just that — talk, meaningless jabber.
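
For the non-dinobabies who want something concrete, here is a hypothetical Python sketch (the file names and helper are my inventions) of the minimum bookkeeping needed just to notice a delta. Note what it omits: it says nothing about finding the copy of the changed object that escaped into an email.

    import hashlib
    import time

    # Hypothetical sketch: keep a content hash per object so a change
    # (a "delta") can be detected the next time the object is seen.
    index = {}  # path -> (content hash, timestamp, author)

    def record(path, content, author):
        digest = hashlib.sha256(content).hexdigest()
        previous = index.get(path)
        index[path] = (digest, time.time(), author)
        # True means the object changed since the system last saw it.
        return previous is not None and previous[0] != digest

    record("deal.pptx", b"price: $1.0M", "president")
    print(record("deal.pptx", b"price: $1.2M", "president"))  # True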

Second, the content which an employee needs to answer a business question in a timely manner can reside on an employee’s laptop or mobile phone, in a digital notebook, in a Vimeo video or one of those nifty “private” YouTube videos, behind the locked doors and specialized security systems loved by some pharma companies’ research units, in a Word document in something other than English, etc. Now the content is changed. The enterprise search fast talkers ignore identifying and indexing these documents with metadata that pinpoints the time of the change and who made it. Is this important? Some contract issues require this level of information access. Who asks for this stuff? How about a COTR for a billion-dollar government contract?

Third, I have heard and read that modern enterprise search systems “use,” “apply,” or “operate within” industry standard authentication systems. Sure they do, within very narrowly defined situations. If the authorization system does not work, then quite problematic things happen. Examples range from an employee who fails to find the information needed and makes a really bad decision to an employee who goes on an Easter egg hunt which may or may not work; if the egg found is good enough, then that is what gets used. What happens? Bad things can happen. Have you ridden in an old Pinto? Access control is a tough problem, and it costs money to solve. Enterprise search solutions, even the whiz bang cloud-centric distributed systems, implement something, which is often not the “right” thing.
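
For illustration only, here is a hypothetical sketch of query-time security trimming (the document names and groups are invented). The hard part of access control is everything these few lines leave out: group membership, nested groups, and permissions that change after a document has been indexed.

    # Hypothetical sketch of query-time (late binding) security trimming:
    # each search hit is checked against an access control list before
    # it is shown to the user.
    acl = {
        "q3-pricing.docx": {"sales", "executives"},
        "lab-notes.docx": {"research"},
    }

    def trim(results, user_groups):
        # Hide any hit the user is not entitled to see. A missing ACL
        # entry defaults to hidden, the safe choice when data is stale.
        return [doc for doc in results if acl.get(doc, set()) & user_groups]

    print(trim(["q3-pricing.docx", "lab-notes.docx"], {"sales"}))
    # -> ['q3-pricing.docx']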

Fourth, and I am going to stop here, there is the problem of end-to-end encrypted messaging systems. If you think employees do not use these, I suggest you do a bit of Easter egg hunting. What about the content in those systems? You can tell me, “Our company does not use these.” I say, “Fine. I am a dinobaby, and I don’t have time to talk with you because you are so much more informed than I am.”

Why did I romp through this rather unpleasant issue in enterprise search and retrieval? The answer is, “Enterprise search remains a problematic concept.” I believe there is some litigation underway about how the problem of search can morph into the fantasy of a huge business because “we have a solution.”

Sorry. Not yet. Marketing and closing deals are different from solving findability issues in an enterprise.

Stephen E Arnold, March 27, 2024

A Single, Glittering Google Gem for 27 March 2024

March 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

So many choices. But one gem outshines the others. Google’s search generative experience is generating publicity. The old chestnut may be true. Any publicity is good publicity. I would add a footnote. Any publicity about Google’s flawed smart software is probably good for Microsoft and other AI competitors. Google definitely looks as though it has some behaviors that are — how shall I phrase it? — questionable. No, maybe, ill-considered. No, let’s go with bungling. That word has a nice ring to it. Bungling.

I learned about this gem in “Google’s New AI Search Results Promotes Sites Pushing Malware, Scams.” The write up asserts:

Google’s new AI-powered ‘Search Generative Experience’ algorithms recommend scam sites that redirect visitors to unwanted Chrome extensions, fake iPhone giveaways, browser spam subscriptions, and tech support scams.

The technique which gets the user from the quantumly supreme Google to the bad actor goodies is redirects. Some user notification functions pump even more inducements toward the befuddled user. (See, bungling and befuddled. Alliteration.)

Why do users fall for these bad actor gift traps? It seems that Google SGE conversational recommendations sound so darned wonderful that Google users just believe the GOOG cares about the information it presents to those who “trust” the company.

The write up points out that the DeepMinded Google provided this information about the bumbling SGE:

"We continue to update our advanced spam-fighting systems to keep spam out of Search, and we utilize these anti-spam protections to safeguard SGE," Google told BleepingComputer. "We’ve taken action under our policies to remove the examples shared, which were showing up for uncommon queries."

Isn’t that reassuring? I wonder if the anecdote about this most recent demonstration of the Google’s wizardry will become part of the Sundar & Prabhakar Comedy Act.

This is a gem. It combines Google’s management process, word salad frippery, and smart software into one delightful bouquet. There you have it: Bungling, befuddled, bumbling, and bouquet. I am adding blundering. I do like butterfingered, however.

Stephen E Arnold, March 27, 2024

IBM and AI: A Spur to Other Ageing Companies?

March 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I love IBM. Well, I used to. Years ago I had three IBM PC 704 servers. Each was equipped with its own expansion SCSI storage device. My love disappeared as we worked daily to keep the estimable ServeRAID software in tip-top shape. For those unfamiliar with the thrill of ServeRAID, “tip-top” means preventing the outstanding code from trashing data.

IBM is a winner. Thanks, MSFT Copilot. How are those server vulnerabilities today?

I was, therefore, not surprised to read “IBM Stock Nears an All-Time High—And It May Have Something to Do with its CEO Replacing As Many Workers with AI As Possible.” Instead of creating the first and best example of dinobaby substitution, Big Blue is now using smart software to reduce headcount. The write up says:

[IBM] used AI to reduce the number of employees working on relatively manual HR-related work to about 50 from 700 previously, which allowed them to focus on other things, he [Big Dog at IBM] wrote in an April commentary piece for Fortune. And in its January fourth quarter earnings, the company said it would cut costs in 2024 by $3 billion, up from $2 billion previously, in part by laying off thousands of workers—some of which it later chalked up to AI influence.

Is this development important? Yep. Here are the reasons:

  1. Despite its interesting track record in smart software, IBM has figured out it can add sizzle to the ageing giant by using smart software to reduce costs. Forget that cancer-curing stuff. Go with straight humanoid replacement.
  2. The company has significant influence. Some Gen Y and Gen Z wizards don’t think about IBM. That’s fine, but banks, government agencies, Fortune 1000 firms, and family fund management firms do. What IBM does influences these bright entities’ thinking.
  3. The targeted workers are what one might call “expendable.” That’s a great way to motivate some of Big Blue’s war horses.

Net net: The future of AI is coming into focus for some outfits who may have a touch of arthritis.

Stephen E Arnold, March 27, 2024

Xoogler Predicts the Future: China Bad, Xoogler Good

March 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Did you know China, when viewed from the vantage point of a former Google executive, is bad? That is a stunning comment. Google tried valiantly to convert China into a money stream. That worked until it didn’t. Now a former Googler (Xoogler, in some circles) has changed his tune.

Thanks, MSFT Copilot. Working on security, I presume?

“Eric Schmidt’s China Alarm” includes some interesting observations, none of which address Google’s attempt to build a China-acceptable search engine. Oh, well. Anyone can forget minor initiatives like that. Let’s look at a couple of comments from the article:

How about this comment about responding to China:

"We have to do whatever it takes."

I wonder if Mr. Schmidt has been watching Dr. Strangelove on YouTube. Someone might pull that viewing history to clarify “whatever it takes.”

Another comment I found interesting is:

China has already become a peer of the U.S. and has a clear plan for how it wants to dominate critical fields, from semiconductors to AI, and clean energy to biotech.

That’s interesting. My thought is that the “clear plan” seems to embrace education; that is, producing more engineers than some other countries, leveraging open source technology, and erecting interesting barriers to prevent US companies from selling some products in the Middle Kingdom. How long has this “clear plan” been chugging along? I spotted portions of the plan in Wuhan in 2007. But I guess now it’s a more significant issue after decades of being front and center.

I noted this comment about artificial intelligence:

Schmidt also said Europe’s proposals on regulating artificial intelligence "need to be re-done," and in general says he is opposed to regulating AI and other advances to solve problems that have yet to appear.

The idea is an interesting one. The UN and numerous NGOs and governmental entities around the world are trying to regulate, tame, direct, or ameliorate the impact of smart software. How’s that going? My answer is, “Nowhere fast.”

The article makes clear that Mr. Schmidt is not just a Xoogler; he is a global statesperson. But in the back of my mind, once a Googler, always a Googler.

Stephen E Arnold, March 26, 2024
