AI and Efficiency: What Is the Cost of Change?

November 18, 2024

No smart software. Just a dumb dinobaby. Oh, the art? Yeah, MidJourney.

Companies are embracing smart software. One question which gets, from my point of view, little attention is, “What is the cost of changing an AI system a year or two down the road?” The focus at this time is getting some AI up and running so an organization can “learn” whether AI works or not. A parallel development is taking place among software vendors selling enterprise and industry-centric specialized software. Examples range from a brand-new AI-powered accounting system to Microsoft “sticking” AI into the ASCII editor Notepad.


Thanks, MidJourney. Good enough.

Let’s tally the costs an organization faces 24 months after flipping the switch in, for example, a hospital chain which uses smart software to convert a physician’s spoken comments about a patient into data which can be analyzed to provide insight into evidence-based treatment for the hospital’s constituencies.

Here are some costs for staff, consultants, and lawyers (a rough tally sketch follows the list):

  1. Paying for the time required to figure out which outputs are on the money and which are not good or just awful (dead patients, for example)
  2. The time required to figure out if the present vendor can fix up the problem or a new vendor’s system must be deployed
  3. Going through the smart software recompete or rebid process
  4. Getting the system up and running
  5. The cost of retraining staff
  6. Chasing down dependencies like other third party software for the essential “billing process”
  7. Optimizing the changed or alternative system.
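
To see how these line items compound, one can tally them. Here is a minimal sketch in Python; every dollar figure is a hypothetical placeholder for illustration, not an estimate from this essay:

```python
# Back-of-envelope tally of 24-month AI switching costs.
# All figures are hypothetical placeholders for illustration only.
switching_costs = {
    "audit: separate good output from bad": 250_000,
    "evaluate incumbent fix vs. replacement": 75_000,
    "recompete / rebid process": 120_000,
    "deploy the replacement system": 400_000,
    "retrain staff": 180_000,
    "chase third-party dependencies (e.g., billing)": 90_000,
    "optimize the changed system": 150_000,
}

total = sum(switching_costs.values())
print(f"Hypothetical 24-month switching cost: ${total:,}")
# Prints: Hypothetical 24-month switching cost: $1,265,000
```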

The enthusiasm for smart software makes talking about these future costs fade a little.

I read “AI Makes Tech Debt More Expensive,” and I want to quote one passage from the pretty good essay:

In essence, the goal should be to unblock your AI tools as much as possible. One reliable way to do this is to spend time breaking your system down into cohesive and coherent modules, each interacting through an explicit interface. A useful heuristic for evaluating a set of modules is to use them to explain your core features and data flows in natural language. You should be able to concisely describe current and planned functionality. You might also want to set up visibility and enforcement to make progress toward your desired architecture. A modern development team should work to maintain and evolve a system of well-defined modules which robustly model the needs of their domain. Day-to-day feature work should then be done on top of this foundation with maximum leverage from generative AI tooling.
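
To make the quoted advice concrete, here is a minimal sketch of a “cohesive module interacting through an explicit interface,” assuming a Python codebase. The billing example echoes the hospital scenario above; the names and methods are invented for illustration, not taken from the essay:

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical illustration of an explicit module boundary.

@dataclass
class Invoice:
    patient_id: str
    amount_cents: int

class BillingModule(Protocol):
    """The explicit interface: callers (and AI coding tools) see
    only these methods, never the module's internals."""
    def create_invoice(self, patient_id: str, amount_cents: int) -> Invoice: ...
    def void_invoice(self, invoice: Invoice) -> None: ...

class SimpleBilling:
    """One concrete implementation hidden behind the interface."""
    def __init__(self) -> None:
        self._ledger: list[Invoice] = []

    def create_invoice(self, patient_id: str, amount_cents: int) -> Invoice:
        invoice = Invoice(patient_id, amount_cents)
        self._ledger.append(invoice)
        return invoice

    def void_invoice(self, invoice: Invoice) -> None:
        self._ledger.remove(invoice)
```

The point of the explicit interface is leverage: a generative coding tool can be pointed at the small Protocol rather than the whole tangled codebase, which is the “unblocking” the essay describes.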

Will organizations make this shift? Will the hyperbolic AI marketers acknowledge the future costs of pasting smart software on existing software like circus posters on crumbling walls?

Nope.

Those two year costs will be interesting for the bean counters when those kicked cans end up in their workspaces.

Stephen E Arnold, November 18, 2024

Let Them Eat Cake or Unplug: The AI Big Tech Bro Effect

November 7, 2024

I spotted a news item which will zip right by some people. The “real” news outfit owned by the lovable Jeff Bezos published “As Data Centers for AI Strain the Power Grid, Bills Rise for Everyday Customers.” The write up tries to explain that AI costs for electric power are being passed along to regular folks. Most of these electricity-dependent people do not take home paychecks with tens of millions of dollars the way the Nadella-, Zuckerberg-, or Pichai-type breadwinners do. Heck, these AI poohbahs think about buying modular nuclear power plants. (I want to point out that these do not exist and may not for many years.)

The article is not going to thrill the professionals who are experts on utility demand and pricing. Those folks know that the smart software poohbahs have royally screwed up some weekends and vacations for the foreseeable future.

The WaPo article (presumably blessed by St. Jeffrey) says:

The facilities’ extraordinary demand for electricity to power and cool computers inside can drive up the price local utilities pay for energy and require significant improvements to electric grid transmission systems. As a result, costs have already begun going up for customers — or are about to in the near future, according to utility planning documents and energy industry analysts. Some regulators are concerned that the tech companies aren’t paying their fair share, while leaving customers from homeowners to small businesses on the hook.

Okay, typical “real” journospeak. “Costs have already begun going up for customers.” Hey, no kidding. The big AI parade began with the January 2023 announcement that the Softies were going whole hog on AI. The lovable Google immediately flipped into alert mode. I can visualize flashing yellow LEDs and faux red stop lights blinking in the gray corridors in Shoreline Drive facilities if there are people in those offices again. Yeah, ghostly blinking.

The write up points out, rather unsurprisingly:

The tech firms and several of the power companies serving them strongly deny they are burdening others. They say higher utility bills are paying for overdue improvements to the power grid that benefit all customers.

Who wants PEPCO and VEPCO to kill their service? Actually, no one. Imagine life in NoVa, DC, and the ever lovely Maryland without power. Yikes.

From my point of view, informed by some exposure to the utility sector at a nuclear consulting firm and then at a blue chip consulting outfit, here’s the scoop.

The demand planning done with rigor by US utilities took a hit each time the Big Dogs of AI brought more specialized, power-hungry servers online and — here’s the killer, folks — left them on. The way power consumption used to work is that during the day, consumer usage would fall and business/industry usage would rise. The power-hogging steel industry was a 24×7 outfit. But over the last 40 years, manufacturing has wound down and consumer demand crept upwards. The curves had to be plotted and the demand projected, but, in general, life was not too crazy for the US power generation industry. Sure, there were the costs associated with decommissioning “old” nuclear plants and expanding new non-nuclear facilities with expensive but mandated environmental gewgaws, gadgets, and gizmos plugged in to save the snail darters and the frogs.

Since January 2023, demand has been curving upwards. Power generation outfits don’t want to miss out on revenue. Therefore, some utilities have worked out what I would call sweetheart deals for electricity for AI-centric data centers. Some of these puppies suck more power in a day than a dying city located in Flyover Country in Illinois.

Plus, these data centers are not enough. Each quarter the big AI dogs explain that more billions will be pumped into AI data centers. Keep in mind: These puppies run 24×7. The AI wolves have worked out discount rates.

What do the US power utilities do? First, the models have to be reworked. Second, the relationships to trade, buy, or “borrow” power have to be refined. Third, capacity has to be added. Fourth, the utility rate people create a consumer pricing graph which may look like this:


Guess who will pay? Yep, consumers.

The red line is the projected post-AI power demand from the AI big dogs. For comparison, the blue line shows the demand curve before Microsoft ignited the AI wars. The purple line shows what is happening and will continue to happen to consumer electricity costs, the monthly bill for Bob and Mary Normcore.
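
Since the original chart did not survive the page, here is a minimal sketch of the shape being described, with invented index values (the “m-” magenta line stands in for purple); only the divergence of the curves matters:

```python
import matplotlib.pyplot as plt

years = list(range(2020, 2027))
# Invented illustrative values (2020 = 100); only the shapes matter.
pre_ai_demand  = [100, 102, 104, 106, 108, 110, 112]   # blue: gentle pre-AI growth
post_ai_demand = [100, 102, 110, 125, 145, 170, 200]   # red: post-January-2023 surge
consumer_bill  = [100, 101, 103, 110, 122, 138, 158]   # purple: costs passed along

plt.plot(years, pre_ai_demand, "b-", label="Demand, pre-AI trend")
plt.plot(years, post_ai_demand, "r-", label="Demand, post-AI projection")
plt.plot(years, consumer_bill, "m-", label="Consumer bill (indexed)")
plt.xlabel("Year")
plt.ylabel("Index (2020 = 100)")
plt.legend()
plt.title("Who pays for AI data center demand?")
plt.show()
```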

The graph shows that the cost will be passed to consumers. Why? The sweetheart deals to get the Big Dog power generation contracts mean guaranteed cash flow and a hurdle for a low-ball competitor to lumber over. Utilities and power generation outfits are not the Neon Deions of American business.

There will be hand waving by regulators. Some city government types will argue, “We need the data centers.” Podcasts and posts on social media will sprout like weeds in an untended field.

Net net: Bob and Mary Normcore may have to decide between food and electricity. AI is wonderful, right?

Stephen E Arnold, November 7, 2024

Dreaming about Enterprise Search: Hope Springs Eternal…

November 6, 2024

The post is the work of a humanoid who happens to be a dinobaby. GenX, Y, and Z, read at your own risk. If art is included, smart software produces these banal images.

Enterprise search is back, baby. The marketing lingo is very year 2003, however. The jargon has been updated, but the story is the same: We can make an organization’s information accessible. Instead of Autonomy’s Neurolinguistic Programming, we have AI. Instead of “just text,” we have video content processed. Instead of filters, we have access to cloud-stored data.


An executive knows he can crack the problem of finding information instantly. The problem is doing it so that the time and cost of data clean-up do not exceed the price of the Empire State Building. Thanks, Stable Diffusion. Good enough.

A good example of the current approach to selling the utility of an enterprise search and retrieval system is the article / interview in Betanews called “How AI Is Set to Democratize Information.” I want to be upfront. I am mostly aligned with the analysis of information and knowledge presented by Taichi Sakaiya. His The Knowledge Value Revolution or a History of the Future has been a useful work for me since the early 1990s. I was in Osaka, Japan, lecturing at the Kansai Institute of Technology when I learned of this book from my gracious hosts and the Managing Director of Kinokuniya (my sponsor). Devaluing knowledge by regressing to the fat part of a Gaussian distribution is not something about which I am excited.

However, the senior manager of Pyron (Raleigh, North Carolina), an AI-powered information retrieval company, finds the concept in line with what his firm’s technology provides to its customers.  The article includes this statement:

The concept of AI as a ‘knowledge cloud’ is directly tied to information access and organizational intelligence. It’s essentially an interconnected network of systems of records forming a centralized repository of insights and lessons learned, accessible to individuals and organizations.

The benefit is, according to the Pyron executive:

By breaking down barriers to knowledge, the AI knowledge cloud could eliminate the need for specialized expertise to interpret complex information, providing instant access to a wide range of topics and fields.

The article introduces a fresh spin on the problems of information in organizations:

Knowledge friction is a pervasive issue in modern enterprises, stemming from the lack of an accessible and unified source of information. Historically, organizations have never had a singular repository for all their knowledge and data, akin to libraries in academic or civic communities. Instead, enterprise knowledge is scattered across numerous platforms and systems — each managed by different vendors, operating in silos.

Pyron opened its doors in 2017. After seven years, the company is presenting a vision of what access to enterprise information could, would, and probably should do.

The reality, based on my experience, is different. I am not talking about Pyron now. I am discussing the re-emergence of enterprise search as the killer application for bolting artificial intelligence to information retrieval. If you are in love with AI systems from oligopolists, you may want to stop scanning this blog post. I do not want to be responsible for a stroke or an esophageal spasm. Here we go:

  1. Silos of information are an emergent phenomenon. Knowledge has value. Few want to make their information available without some value returning to them. Therefore, one can talk about breaking silos and democratization, but those silos will be erected and protected: secret skunk works, mislabeled projects, and knowledge nuggets squirreled away for a winter’s day. In the case of Senator Everett Dirksen, hoarded information was used to get certain items prioritized. That’s why there is a building named after him.
  2. The “value” of information or knowledge depends on another person’s need. A database which contains the antidote to save a child from a household poisoning costs money to access. Why? Desperate people will pay. The “information wants to be free” idea is not one that makes sense to those with information and the knowledge to derive value from what another finds inscrutable. I am not sure that “democratizing information” meshes smoothly with my view.
  3. Enterprise search, with or without AI, hits cost and time problems which have dogged the field for more than 50 years. SMART failed, STAIRS III failed, and the hundreds of followers have failed. Content is messy. The idea that one can process text, spreadsheets, Word files, and email is one thing. Doing it without skipping wonky files or paying the time and cost of repurposing data remains difficult. Chemical companies deal with formulae; nuclear engineering firms deal with records management and mathematics; and consulting companies deal with highly paid people who lock up their information on a personal laptop. Without these little puddles of information, the “answer” or the “search output” will not be just a hallucination. The answer may be dead wrong. (A sketch of how the skipping happens appears after this list.)
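
Here is a minimal sketch of why wonky files vanish from an index, assuming a simple Python ingestion loop; the parser table and file types are invented for illustration:

```python
from pathlib import Path

# Hypothetical ingestion loop. Any file type without a parser, and any
# file that trips its parser, silently drops out of the index.
PARSERS = {
    ".txt": lambda p: p.read_text(errors="strict"),
    ".csv": lambda p: p.read_text(errors="strict"),
    # .docx, .xlsx, chemical formulae, CAD drawings, ... need real
    # parsers; anything not listed here is skipped.
}

def ingest(root: str) -> tuple[list[str], list[str]]:
    indexed, skipped = [], []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        parser = PARSERS.get(path.suffix.lower())
        if parser is None:
            skipped.append(str(path))   # no parser: a silent gap
            continue
        try:
            indexed.append(parser(path))
        except (UnicodeDecodeError, OSError):
            skipped.append(str(path))   # wonky encoding: also skipped
    return indexed, skipped
```

Every entry in skipped is a little puddle of information the system will happily answer questions without.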

I understand the need to whip up jargon like “democratize information,” “knowledge friction,” and “RAG frameworks.” The problem is that, despite the words, delivering accurate, verifiable, timely, on-point search results in response to a query is a difficult problem.

Maybe one of the monopolies will crack the problem. But most of the output is a glimpse of what may be coming in the future. When will the future arrive? Probably when the next PR or marketing write up about search appears. As I have said numerous times, I find it more difficult to locate the information I need than at any time in my more than half a century in online information retrieval.

What’s easy is recycling marketing literature from companies who were far better at describing a “to be” system, not a “here and now” system.

Stephen E Arnold, November 6, 2024

Twenty Five Percent of How Much, Google?

November 6, 2024

The post is the work of a humanoid who happens to be a dinobaby. GenX, Y, and Z, read at your own risk. If art is included, smart software produces these banal images.

I read the encomia to Google’s quarterly report. In a nutshell, everything is coming up roses, even the hyperbole. One news hook which has snagged some “real” news professionals is that “more than a quarter of new code at Google is generated by AI.” The exclamation point is implicit. Google’s AI PR is different from some other firms’; for example, Samsung blames its financial performance disappointments on some AI. Winners and losers in a game in which some think the oligopolies are automatic winners.


An AI believer sees the future which is arriving “soon, real soon.” Thanks, You.com. Good enough because I don’t have the energy to work around your guard rails.

The question is, “How much code and technical debt does Google have after a quarter century of its court-described monopolistic behavior?” Oh, that number is unknown. How many current Google engineers fool around with that legacy code? Oh, that number is unknown and probably for very good reasons. The old crowd of wizards has been hit with retirement, cashing in and cashing out, and “leadership” nervous about fiddling with some processes that are “good enough.” But 25 years? No worries.

The big news is that 25 percent of “new” code is written by smart software and then checked by the current and wizardly professionals. How much “new” code has been written each year for the last three years? What percentage of the total Google code base is “new” in the years between 2021 and 2024? My hunch is that “new” is relative. I also surmise that smart software doing 25 percent of the work is one of those PR and Wall Street-targeted assertions specifically designed to make the Google stock go up. And it worked.

However, I noted this Washington Post article: “Meet the Super Users Who Tap AI to Get Ahead at Work,” a mostly rah-rah AI “real” news piece which ran coincident with Google’s AI-spinning quarterly report. Buried in that write up is one interesting comment:

Adoption of AI at work is still relatively nascent. About 67 percent of workers say they never use AI for their jobs compared to 4 percent who say they use it daily, according to a recent survey by Gallup.

One can interpret this as saying, “Imagine the growth that is coming from reduced costs. Get rid of most coders and just use Google’s and other firms’ smart programming tools.”

Another interpretation is, “The actual use is much less robust than the AI hyperbole machine suggests.”

Which is it?

Several observations:

  1. Many people want AI to pump some life into the economic fuel tank. By golly, AI is going to be the next big thing. I agree, but I think the Gallup data indicate that the go-go view is like looking at a field of corn from a crop duster zipping along at 1,000 feet. The perspective from the airplane is different from that of the person walking amidst the stalks.
  2. The Google-type assertion about how much machine-generated code is in the Google mix sounds good, but where are the data? Google, aren’t you data driven? So where’s the backup data for the 25 percent assertion?
  3. Smart software seems to be something that is expensive and requires dreams of small nuclear reactors next to a data center adjacent to a hospital. Yeah, maybe once the impact statements, the nuclear waste, and the skilled worker issues have been addressed. Soon, as measured in environmental impact statement time, which is different from quarterly report time.

Net net: Google desperately wants to be the winner in smart software. The company is suggesting that if it were broken apart by crazed government officials, smart software would die. Insert the exclamation mark. Maybe two or three. That’s unlikely. The blurring of “as is” with “to be” is interesting and misleading.

Stephen E Arnold, November 6, 2024

How to Cut Podcasts Costs and Hassles: A UK Example

November 5, 2024

Using AI to replicate a particular human is a fraught topic. Of paramount concern is the relentless issue of deepfakes. There are also legal issues of control over one’s likeness, of course, and concerns the technology could put humans out of work. It is against this backdrop, the BBC reports, that “Michael Parkinson’s Son Defends New AI Podcast.” The new podcast uses AI to recreate the late British talk show host, who will soon interview (human) guests. Son Mike acknowledges the concerns, but insists this project is different. Writer Steven McIntosh explains:

“Mike Parkinson said Deep Fusion’s co-creators Ben Field and Jamie Anderson ‘are 100% very ethical in their approach towards it, they are very aware of the legal and ethical issues, and they will not try to pass this off as real’. Recalling how the podcast was developed, Parkinson said: ‘Before he died, we [my father and I] talked about doing a podcast, and unfortunately he passed away before it came true, which is where Deep Fusion came in. ‘I came to them and said, ‘if we wanted to do this podcast with my father talking about his archive, is it possible?’, and they said ‘it’s more than possible, we think we can do something more’. He added his father ‘would have been fascinated’ by the project, although noted the broadcaster himself was a ‘technophobe’. Discussing the new AI version of his father, Parkinson said: ‘It’s extraordinary what they’ve achieved, because I didn’t really think it was going to be as accurate as that.’”

So they have the family’s buy-in, and they are making it very clear the host is remade with algorithms. The show is called “Virtually Parkinson,” after all. But there is still that replacing human talent with AI thing. Deep Fusion’s Anderson notes that, since Parkinson is deceased, he is in no danger of losing work. However, McIntosh counters, any guest that appears on this show may give one fewer interview to a show hosted by a different, living person. Good point.

One thing noteworthy about Deep Fusion’s AI on this project is its ability to not just put words in Parkinson’s mouth, but to predict how he would have actually responded. Assuming that function is accurate, we have a request: Please bring back the objective reporting of Walter Cronkite. This world sorely needs it.

Cynthia Murrell, November 5, 2024

Apple: Challenges Little and Bigly

October 28, 2024

Another post from a dinobaby. No smart software required except for the illustration.

At lunch yesterday (October 23, 2024), one of the people in the group had a text message with a long string of data. That person wanted to move the data from the text message into an email. The idea was to copy a bit of ASCII, put it in an email, and send the data to his office email account. Simple? He fiddled but could not get the iPhone to do the job. He showed me the sequence, and when he went through the highlighting, the curly arrow, and the tap to copy, he was following the procedure. But when he switched to email and long pressed, the text was not available. A couple of people tried to make this sequence of tapping and long pressing work. Someone handed the phone to me. I fooled around with it, asked the person to restart the phone, and went through the process. It took two tries, but I got the snip of ASCII to appear in the email message. Yep, that’s the Apple iPhone. Everyone loves the way it works, except when it does not. The frustration the iPhone owner demonstrated illustrates the “good enough” approach to many functions in Apple’s and other firms’ software.


Will the normal course of events swamp this big time executive? Thanks, You.com. You were not creative, but you were good enough.

Why mention this?

Apple is a curious company. The firm has been a darling of its core fans, investors, and the MBA crowd. I have noted two actions related to Apple which suggest that the company may have a sleek exterior but the interior is different. Let’s look at these two recent developments.

The first item concerns what appears to be untoward behavior by Apple and those really good folks at Goldman Sachs. The Apple credit card operation received a judgment showing that $89 million was due. The issue appears to be fumbling the ball with customers. For a well managed company, how does this happen? My view is that getting cute was not appreciated by some government authorities. A tiny mistake? Yes. The fine is miniscule compared to the revenue represented by the outstanding enterprises paying the fine. With small fines, have the Apple and Goldman Sachs professionals learned a lesson? Yes: get out of the credit card game. Other than that, I surmise that neither of the companies will veer from their game plans.

The second item is, from my point of view, a bit more interesting than credit cuteness. Apple, if the news report in the Washington Times is close to the truth, is getting very comfortable with China. The basic idea is that Apple wants to invest in China. Is China the best friend forever of the US? I thought some American outfits were somewhat cautious with regard to their support of that nation state. Well, that caution does not appear to apply to Apple.

With the weird software, the credit card judgment, and the China love fest, we have three examples of a company operating in what I would describe as a fog of pragmatism. The copy-paste issue makes clear that simplicity and attention to a common task on a widely used device are not important. The message for the iPhone is, “Figure out our way. Don’t even think about a meaningful, user-centric change. Just upgrade and get the vapor of smart software.”

The message from the credit card judgment is, “Hey, we will do what we want. If there is a problem, send us a bill. We will continue to do what we want.” That shows me that Apple buys into the behavior pattern which makes Silicon Valley behavior the gold standard in management excellence.

My interpretation of the China-Apple BFF activity is that the policy of the US government is of little interest. Apple, like other large technology outfits, is effectively operating as a nation state. The company will do what it wants and let lawyers and PR people make the activity palatable.

I find it amusing that Apple appears to be reducing orders for its next big iPhone release. The market may be reaching a saturation point or the economic conditions in certain markets make lower cost devices more appealing. My own view is that the AI vapor spewed by Apple and other US companies is dissipating. Another utility function which does not work in a reliable way may not be enough.

Why not make copy-paste more usable, or is that a challenge beneath your vast aspirations?

Stephen E Arnold, October 28, 2024

Meta, Politics, and Money

October 24, 2024

Meta and its flagship product, Facebook, make money from advertising. Targeted advertising using Meta’s personalization algorithm is profitable, and political views seem to turn on the money spigot. Remember the January 6 riots or how Russia allegedly influenced the 2016 presidential election? Some of the reasons those happened were tied to targeted advertising through social media like Facebook.

Gizmodo reviews how much Meta generates from political advertising in “How Meta Brings In Millions Off Political Violence.” The Markup and CalMatters tracked how much money Meta made from merchandise advertising after the July assassination attempt on Trump. The total runs between $593,000 and $813,000. The number may understate the actual money:

“If you count all of the political ads mentioning Israel since the attack through the last week of September, organizations and individuals paid Meta between $14.8 and $22.1 million dollars for ads seen between 1.5 billion and 1.7 billion times on Meta’s platforms. Meta made much less for ads mentioning Israel during the same period the year before: between $2.4 and $4 million dollars for ads that were seen between 373 million and 445 million times.  At the high end of Meta’s estimates, this was a 450 percent increase in Israel-related ad dollars for the company. (In our analysis, we converted foreign currency purchases to current U.S. dollars.)”
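
The quoted “450 percent increase” can be sanity-checked against the high ends of the ranges in the passage; a quick back-of-envelope in Python:

```python
# Check the quoted "450 percent increase" using the high ends of the
# ranges given in the passage.
after_high = 22.1e6    # Israel-mentioning ad revenue, high estimate
before_high = 4.0e6    # same period a year earlier, high estimate

increase_pct = (after_high - before_high) / before_high * 100
print(f"{increase_pct:.1f}% increase")   # 452.5%, roughly the quoted 450 percent
```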

The organizations that funded those ads were supporters of Palestine or Israel. Meta doesn’t care who pays for ads. Meta spokesperson Tracy Clayton said that ads go through a review process to determine if they adhere to community standards. She also said that advertisers don’t run their ads during times of strife, because they don’t want their goods and services associated with violence.

That’s not what the evidence shows. The Markup and CalMatters researched the ads’ subject matter after the July assassination attempt. While the ads didn’t violate Meta’s guidelines, they did relate to the event. There were ads for gun holsters and merchandise about the shooting. It was a business opportunity, and people ran with it, with Meta holding the finish line ribbon.

Meta really has an interesting ethical framework.

Whitney Grace, October 24, 2024

Money and Open Source: Unpleasant Taste?

October 23, 2024

Open-source veteran and blogger Armin Ronacher ponders “The Inevitability of Mixing Open Source and Money.” It is lovely when developers work on open-source projects for free out of the goodness of their hearts. However, the truth is these folks can only afford to spend so much time working for free. (A major reason open source documentation is a mess, by the way.)

For his part, Ronacher helped launch Sentry’s Open Source Pledge. That initiative asks companies to pledge funding to open source projects they actively use. It is particularly focused on small projects, like xz, that have a tougher time attracting funds than the big names. He acknowledges the perils of mixing open source and money, as described by Ruby on Rails creator David Heinemeier Hansson. But he insists the blend is already baked in. He considers:

“At face value, this suggests that Open Source and money shouldn’t mix, and that the absence of monetary rewards fosters a unique creative process. There’s certainly truth to this, but in reality, Open Source and money often mix quickly. If you look under the cover of many successful Open Source projects you will find companies with their own commercial interests supporting them (eg: Linux via contributors), companies outright leading projects they are also commercializing (eg: MariaDB, redis) or companies funding Open Source projects primarily for marketing / up-sell purposes (uv, next.js, pydantic, …). Even when money doesn’t directly fund an Open Source project, others may still profit from it, yet often those are not the original creators. These dynamics create stresses and moral dilemmas.”

For example, the WordPress and WP Engine conflict Hansson has written about. The tension can also cause personal stress. Ronacher shares doubts that have plagued him: to monetize or not to monetize? Would a certain project have taken off had he poured his own money into it? He has watched colleagues wrestle with similar questions that affected their health and careers. See his post for more on those issues. The write-up concludes:

“I firmly believe that the current state of Open Source and money is inadequate, and we should strive for a better one. Will the Pledge help? I hope for some projects, but WordPress has shown that we need to drive forward that conversation of money and Open Source regardless of the size of the project.”

Clearly, further discussion is warranted. New ideas from open-source enthusiasts are also needed. Can a balance be found?

Cynthia Murrell, October 23, 2024

AI: The Key to Academic Fame and Fortune

October 17, 2024

Just a humanoid processing information related to online services and information access.

Why would professors use smart software to “help” them with their scholarly papers? The question may have been answered. The Phys.org article “Analysis of Approximately 75 Million Publications Finds Those Employing AI Are More Likely to Be a ‘Hit Paper’” reports:

A new Northwestern University study analyzing 74.6 million publications, 7.1 million patents and 4.2 million university course syllabi finds papers that employ AI exhibit a “citation impact premium.” However, the benefits of AI do not extend equitably to women and minority researchers, and, as AI plays more important roles in accelerating science, it may exacerbate existing disparities in science, with implications for building a diverse, equitable and inclusive research workforce.

Years ago some universities had an “honor code.” I think the University of Virginia was one of those dinosaurs. Today professors are using smart software to help them crank out academic hits.

The write up continues by quoting a couple of the study’s authors (presumably without using smart software) as saying:

“These advances raise the possibility that, as AI continues to improve in accuracy, robustness and reach, it may bring even more meaningful benefits to science, propelling scientific progress across a wide range of research areas while significantly augmenting researchers’ innovation capabilities…”

What are the payoffs for the professors, who probably take a dim view of their own children using AI to make life easier, faster, and smoother? Let’s look at a handful of payoffs my team and I discussed:

  1. More money in the form of pay raises
  2. Better shot at grants for research
  3. Fame at conferences
  4. Groupies. I know it is hard to imagine but it happens. A lot.
  5. Awards
  6. Better committee assignments
  7. Consulting work.

When one considers the benefits from babes to bucks, the chit chat about doing better research is of little interest to professors who see virtue in smart software.

The president of Stanford cheated. The head of the Harvard Ethics department appears to have done it. The professors in the study sample did it. The conclusion: Smart software use is normative behavior.

Stephen E Arnold, October 17, 2024

Happy AI News: Job Losses? Nope, Not a Thing

September 19, 2024

This essay is the work of a dumb humanoid. No smart software required.

I read “AI May Not Steal Many Jobs after All. It May Just Make Workers More Efficient.” Immediately two points jumped out at me. The AP (the publisher of the “real” news story) is hedging with the weasel word “may” and the hedgy phrase “after all.” Why is this important? The “real” news industry is interested in smart software to reduce costs and generate more “real” news more quickly. The days of “real” reporters disappearing for hours to confirm with a source are often associated with fiddling around. The costs of doing anything without a gusher of money pumping 24×7 are daunting. The word “efficient” sits in the headline like a digital harridan stakeholder. Who wants that?


The manager of a global news operation reports that under his watch, he has achieved peak efficiency. Thanks, MSFT Copilot. Will this work for production software development? Good enough is the new benchmark, right?

The story itself strikes me as a bit of content marketing which says, “Hey, everyone can use AI to become more efficient.” The subtext is, “Hey, don’t worry. No software robot or agentic thingy will reduce staff. Probably.”

The AP is a litigious outfit, even though I worked at a newspaper which “participated” in the business process of the entity. Here’s one sentence from the “real” news write up:

Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.

Yep, just like the steam engine and the Internet.

When technologies emerge, most go away or become componentized or dematerialized. When one of those hot technologies fails to produce revenues, quite predictable outcomes result. Executives get fired. VC firms do fancy dancing. IRS professionals squint at tax returns.

So far AI has been a “big guys win, sort of, because they have bundles of cash” and “little outfits lose control of their costs” story. Here’s my take:

  1. Human-generated news is expensive and if smart software can do a good enough job, that software will be deployed. The test will be real time. If the software fails, the company may sell itself, pivot, or run a garage sale.
  2. When “good enough” is the benchmark, staff will be replaced with smart software. Some of the whiz kids in AI like the buzzword “agentic.” Okay, agentic systems will replace humans with good enough smart software. That will happen. Excellence is not the goal. Money saving is.
  3. Over time, the ideas of the current transformer-based AI systems will be enriched by other numerical procedures, and maybe — just maybe — some novel methods will provide “smart software” with more capabilities. Right now, most smart software just finds a path through already-known information. No output is new, just close to what the system’s math concludes is on point. The next generation of smart software seems to be in the future. How far? It’s anyone’s guess.

My hunch is that Amazon Audible will suggest that humans will not lose their jobs. However, the company is allegedly going to replace human voices with “audibles” generated by smart software. (For more about this displacement of humans, check out the Bloomberg story.)

Net net: The “real” news story prepares the field for planting writing software in an organization. It says, “Customers will benefit, and more jobs will be produced.” Great assertions. I think AI will be disruptive and in unpredictable ways. Why not come out and say, “If the agentic software is good enough, we will fire people”? Answer: Being upfront is not something those who are not dinobabies do.

Stephen E Arnold, September 19, 2024
