AI Proofing Tools in Higher Education Limbo

March 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Where is the line between AI-assisted plagiarism and a mere proofreading tool? That is something universities really should have decided by now. Those that have not risk appearing hypocritical and unjust. For example, the University of North Georgia (UNG) specifically recommends students use Grammarly to help proofread their papers. And yet, as News Nation reports in “Student Fights AI Cheating Allegations for Using Grammarly,” a student at that very school has been accused of cheating for doing exactly that.

The trouble began when Marley Stevens’ professor ran her paper through plagiarism-detection software Turnitin, which flagged it for an AI violation. Apparently that (ironically) AI-powered tool did not know Grammarly was on the university’s “nice” list. But surely the charge of cheating was reversed once human administrators got involved, right? Nope. Writer Damita Memezes tells us:

“‘I’m on probation until February 16 of next year. And this started when he sent me the email. It was October. I didn’t think that now in March of 2024, that this would still be a big thing that was going on,’ Stevens said. Despite Grammarly being recommended on the University of North Georgia’s website, Stevens found herself embroiled in battle to clear her name. The tool, briefly removed from the school’s website, later resurfaced, adding to the confusion surrounding its acceptable usage despite the software’s utilization of generative AI. ‘I have a teacher this semester who told me in an email like “yes use Grammarly. It’s a great tool.” And they advertise it,’ Stevens said. … Despite Stevens’ appeal and subsequent GoFundMe campaign to rectify the situation, her options seem limited. The university’s stance, citing the absence of suspension or expulsion, has left her in a bureaucratic bind.”

Grammarly’s Jenny Maxwell defends the tool and emphasizes her company’s transparency around its generative components. She suggests colleges and universities update their assessment methods to address evolving tech like Grammarly. For good measure, we would add Microsoft Word’s Copilot and Google Chrome’s "help me write" feature. Shouldn’t schools be training students in the responsible use of today’s technology? According to UNG, yes. And also, no.

This means that if you use Word and its smart software, you may be a cheater. No need to wait until you go to work at a blue chip consulting firm. You are working on your basic consulting skills.

Cynthia Murrell, March 26, 2024

Can Ma Bell Boogie?

March 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

AT&T provides numerous communication and information services to the US government and companies. People see the blue and white trucks with the obligatory orange cones and think nothing about their presence. Decades after Judge Greene rained on the AT&T monopoly parade, the company has regained some of its market chutzpah. The old-line Bell heads knew that would happen. One reason is the simple fact that communications services have a tendency to pool; that is, online, for instance, wants to be a monopoly. Like water, online and communication services seek the lowest level. One can grouse about a leaking basement, but one is complaining about a basic fact. Complain away, but the water pools. Similarly, AT&T benefits from and knows how to make the best of this pooling, consolidating, and collecting reality.

I do miss the “old” AT&T. Say what you will about today’s destabilizing communications environment, just don’t forget that the pre-Judge Greene world produced useful innovations, provided hardware that worked, and made it possible for some government functions to work much better than those operations perform today.


Thanks, MSFT Copilot. It seems you understand ageing companies which struggle in the midst of the cyber whippersnappers.

But what’s happened?

In February 2024, AT&T experienced an outage. The redundant, fail-safe, state-of-the-art infrastructure failed. “AT&T Cellular Service Restored after Daylong Outage; Cause Still Unknown” reported:

AT&T said late Thursday [February 22, 2024] that based on an initial review, the outage was “caused by the application and execution of an incorrect process used as we were expanding our network, not a cyber attack.” The company will continue to assess the outage.

What do we publicly know, a month on, about this remarkable event? Not much. I am not going to speculate about how a single misstep can knock out AT&T, but the event raises questions about AT&T’s procedures, its security, and, yes, its technical competence. The AT&T Ashburn data center is an interesting cluster of facilities. Could it be “knocked offline”? My concern is that the answer to this question is, “You bet your bippy it could.”

A second interesting event surfaced as well. AT&T suffered a mysterious breach which appears to have compromised data about millions of “customers.” And, as TechCrunch notes, “AT&T Won’t Say How Its Customers’ Data Spilled Online.” Here’s a statement from the report about the breach:

When reached for comment, AT&T spokesperson Stephen Stokes told TechCrunch in a statement: “We have no indications of a compromise of our systems. We determined in 2021 that the information offered on this online forum did not appear to have come from our systems. This appears to be the same dataset that has been recycled several times on this forum.”

Leaked data are no big deal, apparently, and the incident remains unexplained. The AT&T system went down essentially in one fell swoop. Plus there is no explanation which resonates with my understanding of the Bell “way.”

Some questions:

  1. What has AT&T accomplished by its lack of public transparency?
  2. Has the company lost its ability to manage a large, dynamic system due to cost cutting?
  3. Is a lack of training and perhaps capable staff undermining what I think of as “mission critical capabilities” for business and government entities?
  4. What are US regulatory authorities doing to address what is, in my opinion, a threat to the economy of the US and the country’s national security?

Couple the AT&T events with emerging technology like artificial intelligence, and a question arises: Will the company make appropriate decisions, or will it create vulnerabilities typically associated with a dominant software company?

Not a positive setup in my opinion. Ma Bell, are you too old and fat to boogie?

Stephen E Arnold, March 25, 2024

Getting Old in the Age of AI? Yeah, Too Bad

March 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting essay called “Gen X Has Had to Learn or Die: Mid-Career Workers Are Facing Ageism in the Job Market.” The title assumes that the reader knows the difference between Gen X, Gen Y, Gen Z, and whatever other demographic slices marketers and “social” scientists cook up. I recognize one time slice: Dinobabies like me and a category I have labeled “Other.”


Two Gen X dinobabies find themselves out of sync with the younger reptiles’ version of Burning Man. Thanks, MSFT Copilot. Close enough.

The write up is, I think, the work product of a person who realizes that the stranger in a photograph is the younger version of today’s self. “How can that be?” the author of the essay asks. “In my Gen X, Y, or Z mind I am the same. I am exactly the way I was when I was younger.” The write up states:

Gen Xers, largely defined as people in the 44-to-59 age group, are struggling to get jobs.

The write up quotes an expert, Christina Matz, associate professor at the Boston College School of Social Work, and director of the Center on Aging and Work. I believe this individual has a job for now. The essay quotes her observation:

older workers are sometimes perceived as “doddering but dear”. Matz says, “They’re labelled as slower and set in their ways, well-meaning on one hand and incompetent on the other. People of a certain age are considered out-of-touch, and not seen as progressive and innovative.”

I like to think of myself as doddering. I am not sure anyone, regardless of age, will label me “dear.”

But back to the BBC’s essay. I read:

We’re all getting older.

Now that’s an insight!

I noted that the acronym “AI” appears once in the essay. One source is quoted as offering:

… we had to learn the internet, then Web 2.0, and now AI. Gen X has had to learn or die,

Hmmm. Learn or die.

Several observations:

  1. The write up does not tackle the characteristic of work that strikes me as important; namely, unless one is in the Top Tier of people in a particular discipline, jobs will be hard to find. Artificial intelligence will elevate those just below the “must hire” level and allow organizations to replace what once was called “the organization man” with software.
  2. The write up skips over the discovery that the ability to use a mobile phone does not confer intellectual super powers. The kryptonite for those hunting for a “job” is that their “package” does not have “value” to an organization seeking full time equivalents. People slap a price tag on themselves and, like people running a yard sale, realize that no one will pay very much for that stack of old time post cards grandma collected.
  3. The notion of entitlement does not appear in the write up. In my experience, a number of people believe that a company or other type of entity “owes them a living.” Those accustomed to receiving “Also Participated” trophies and “easy” A’s have found themselves on the wrong side of paradise.

My hunch is that these “ageism” write ups are reactions to the gradual adoption of ever more capable “smart” software. I am not sure the author agrees with me. I am asserting that the examples and comments in the write up are a reaction to the existential threat of AI, bots, and embedded machine intelligence finding their way into “systems” today. Does the author see it that way? Probably not.

Now let’s think about the “learn” plank of the essay. A person can learn, adapt, and thrive, right? My personal view is that this is a shibboleth. Oh, oh.

Stephen E Arnold, March 25, 2024

The University of Illinois: Unintentional Irony

March 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I admit it. I was in the PhD program at the University of Illinois at Champaign-Urbana (aka Chambana). There was nothing like watching a storm build from the upper floors of the now departed FAR. I spotted a university news release titled “Americans Struggle to Distinguish Factual Claims from Opinions Amid Partisan Bias.” From my point of view, the paper presents research saying that half of those in the sample cannot distinguish truth from fiction. That’s a fact easily verified by visiting a local chain store, purchasing a product, and asking the clerk to provide the change in a specific way; for example, “May I have two fives and five dimes, please?” Putting data behind personal experience is a time-honored chore in the groves of academe.


Discerning people can determine “real” from “original fakes.” Well, only half the people can, it seems. The problem is defining what’s true and what’s false. Thanks, MSFT Copilot. Keep working on your security. Those breaches are “real.” Half the time is close enough for horseshoes.

Here’s a quote from the write up I noted:

“How can you have productive discourse about issues if you’re not only disagreeing on a basic set of facts, but you’re also disagreeing on the more fundamental nature of what a fact itself is?” — Matthew Mettler, a U. of I. graduate student and co-author of the study with Jeffery J. Mondak, a professor of political science and the James M. Benson Chair in Public Issues and Civic Leadership at Illinois.

The news release about Mettler’s and Mondak’s research contains this statement:

But what we found is that, even before we get to the stage of labeling something misinformation, people often have trouble discerning the difference between statements of fact and opinion…. “What we’re showing here is that people have trouble distinguishing factual claims from opinion, and if we don’t have this shared sense of reality, then standard journalistic fact-checking – which is more curative than preventative – is not going to be a productive way of defanging misinformation,” Mondak said. “How can you have productive discourse about issues if you’re not only disagreeing on a basic set of facts, but you’re also disagreeing on the more fundamental nature of what a fact itself is?”

But the research suggests that highly educated people cannot differentiate made up data from non-weaponized information. What struck me is that Harvard’s Misinformation Review published this U of I research that provides a road map to fooling peers and publishers. Harvard University, like Stanford University, has found that certain big-time scholars violate academic protocols.

I am delighted that the U of I research is getting published. My concern is that the Misinformation Review’s editors may not find my laughing at their journal to their liking. Harvard illustrates that academic transgressions cannot be identified by half of those exposed to the confections of up-market academics.

Should Messrs. Mettler and Mondak have published their research in another journal? That’s a good question, but I am no longer convinced that professional publications have more credibility than the outputs of a content farm. Such is the erosion of once-valued norms. Another peril of thumb typing is present.

Stephen E Arnold, March 22, 2024

Software Failure: Why Problems Abound and Multiply Like Gerbils

March 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Why Software Projects Fail” after a lunch at which crappy software and lousy products were a source of amusement. The door fell off what?

What’s interesting about the article is that it contains a number of statements which resonated with me. I recommend the piece, but I want to highlight several statements from it. These do a good job of explaining why small and large projects go off the rails. Within the last 12 months I witnessed one project get tangled in solving a problem that existed 15 years ago. Today, not so much. The team crafted the equivalent of a Greek Corinthian helmet from the 8th century BCE. Another project, infused with AI and a vision of providing a “new” approach to security, wobbled between and among a telecommunications approach, an email approach, and an SMS approach with bells and whistles only a science fiction fan would appreciate. Both of these examples obtained funding; neither set out to build a clown car. What happened? That’s where “Why Software Projects Fail” becomes relevant.


Thanks, MSFT Copilot. You have that MVP idea nailed with the recent Windows 11 update, don’t you? Good enough, I suppose.

Let’s look at three passages from the essay, shall we?

Belief in One’s Abilities or I Got an Also-Participated Ribbon in Middle School

Here’s the statement from the essay:

One of the things that I’ve noticed is that developers often underestimate not just the complexity of tasks, but there’s a general overconfidence in their abilities, not limited to programming:

  1. Overconfidence in their coding skills.
  2. Overconfidence in learning new technologies.
  3. Overconfidence in our abstractions.
  4. Overconfidence in external dependencies, e.g., third-party services or some open-source library.

My comment: Spot on. Those ribbons built confidence, but they mean nothing.

Open Source Is Great Unless It Has Been Screwed Up, Become a Malware Delivery Vehicle, or Just Does Not Work

Here’s the statement from the essay:

… anything you do not directly control is a risk of hidden complexity. The assumption that third-party services, libraries, packages, or APIs will work as expected without bugs is a common oversight.

My view is that “complexity” is kicked around as if everyone held a shared understanding of the term. There are quite different types of complexity. For software, there is the complexity of a simple process created in Assembler but essentially impenetrable to a 20-something from a whiz-bang computer science school. There is the complexity of software built over time by attention deficit driven people who do not communicate, coordinate, or care what others are doing, will do, or have done. Toss in the complexity of indifferent, uninformed, or uninterested “management,” and you get an exciting environment in which to “fix up” software. The cherry on top of this confection is that quite a bit of software is assumed to be good. Ho ho ho.
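The third-party point deserves a concrete illustration. Below is a minimal sketch, in Python, of calling an external service without assuming it behaves; the endpoint, the payload schema, and the retry budget are hypothetical, not taken from the essay:

```python
import time

import requests  # third-party library; itself one of those external dependencies

def fetch_quote(symbol: str, retries: int = 3, timeout: float = 2.0) -> float:
    """Call a hypothetical third-party pricing service without assuming it works."""
    url = f"https://api.example.com/quote/{symbol}"  # placeholder endpoint
    for attempt in range(1, retries + 1):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()  # treat HTTP 4xx/5xx as failure, not data
            payload = response.json()    # may raise ValueError on malformed JSON
            return payload["price"]      # may raise KeyError if the schema drifts
        except (requests.RequestException, ValueError, KeyError) as exc:
            if attempt == retries:
                raise RuntimeError(f"quote service unusable for {symbol}") from exc
            time.sleep(2 ** attempt)     # back off before trying again
```

Timeouts, status checks, schema checks, and a retry budget are exactly the hidden complexity the author flags; skipping any one of them is how the overconfidence shows up in production.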

The Real World: It Exists and Permeates

I liked this statement:

Technology that seemed straightforward refuses to cooperate, external competitors launch similar ideas, key partners back out, and internal business stakeholders focus more on the projects that include AI in their name. Things slow down, and as months turn into years, enthusiasm wanes. Then the snowball continues — key members leave, and new people join, each departure a slight shift in direction. New tech lead steps in, eager to leave their mark, steering the project further from its original course. At this point, nobody knows where the project is headed, and nobody wants to admit the project has failed. It’s a tough spot, especially when everyone’s playing it safe, avoiding the embarrassment or penalties of admitting failure.

What are the signals that trouble looms? A fumbled ball at the Google or the Apple car that isn’t can be blinking lights. Staff who go rogue on social media or find an ambulance-chasing law firm can catch some individual’s attention.

The write up contains other helpful observations. Will people take heed? Are you kidding me? Excellence costs money and requires informed judgment and expertise. Who has time for this with AI calendars, the demands of TikTok and Instagram, and hitting the local coffee shop?

Stephen E Arnold, March 19, 2024

Old Code, New Code: Can You Make It Work Again… Sort Of?

March 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Even hippy dippy super slick AI start ups have a technical debt problem. It is, in my opinion, no different from the “costs” imposed on outfits like JPMorgan Chase or (heaven help us) AMTRAK. Software which mostly works is subject to two environmental problems. First, the people who wrote the code or made it work that last time catastrophe struck (hello, AT&T, how are those pushed updates working for you now?) move on, quit, or whatever. Second, the technical options for remediating the problem are evolving (how are those security hot fixes working out, Microsoft?).

image

The helpful father asks a question the aspiring engineer cannot answer. Thus it was when the wizard was a child, and thus it is when the wizard is working on a modern engineering project. Buildings tip; aircraft lose doors and wheels. Software updates kill computers. Self-driving cars cannot drive themselves. Thanks, MSFT Copilot. Did you get your model airplane to fly when you were a wee lad? I think I know the answer.

I thought about this problem of the cost of remediating, fixing, redoing, or upgrading code, or whatever term fast-talking sales engineers use in their Zooms and PowerPoints, as I read “The High-Risk Refactoring.” The write up does a good job of explaining in a gentle way what happens when suits authorize making old code like new again. (The suits do not know the agonies of the original developers, but why should “history” intrude on a whiz bang GenX or GenY management type?)

The article says:

it’s highly important to ensure the system works the same way after the swap with the new code. In that regard, immediately spotting when something breaks throughout the whole refactoring process is very helpful. No one wants to find that out in production.

No kidding.
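The write up does not prescribe a specific technique for spotting breakage, but a characterization (golden master) test is one way to get that immediate signal: run the old code and the new code side by side on the same inputs and fail loudly on any divergence. A minimal sketch in Python, with hypothetical legacy_total and refactored_total functions standing in for the real thing:

```python
import random

def legacy_total(orders):
    """Stand-in for the old, trusted implementation (hypothetical)."""
    total = 0.0
    for order in orders:
        total += order["qty"] * order["unit_price"]
    return round(total, 2)

def refactored_total(orders):
    """Stand-in for the shiny replacement (hypothetical)."""
    return round(sum(o["qty"] * o["unit_price"] for o in orders), 2)

def test_refactor_preserves_behavior():
    """Characterization test: new code must match old code, bug for bug."""
    rng = random.Random(42)  # fixed seed so any failure is reproducible
    for _ in range(1_000):
        orders = [
            {"qty": rng.randint(1, 9), "unit_price": rng.uniform(0.5, 99.0)}
            for _ in range(rng.randint(0, 20))
        ]
        assert refactored_total(orders) == legacy_total(orders), orders

if __name__ == "__main__":
    test_refactor_preserves_behavior()
    print("old and new implementations agree on 1,000 random inputs")
```

Nobody wants to find the divergence in production; a test like this finds it at the desk.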

In most cases, there are insufficient skilled people and money to create a new or revamped system, get it up and running in parallel for an appropriate period of time, identify the problems, remediate them, and then make the cut over. People buy cars this way, but that’s not how most organizations, regardless of size, “do” software. Okay, the “take your car in, buy a new one, and drive off” approach will not work in today’s business environment.

The write up focuses on what most organizations do; that is, write or fix new code and stick it into a system. There may or may not be resources for a staging server, but the result is the same. The old software has been “fixed,” the documentation is “sort of written,” and people move on to other work or, in the case of consulting engineering firms, just get replaced by a new, higher margin professional.

The write up takes a different approach and concludes with four suggestions or questions to ask. I quote:

“Refactor if things are getting too complicated, but stop if you can’t prove it works.

Accompany new features with refactoring for areas you foresee to be subject to a change, but copy-pasting is ok until patterns arise.

Be proactive in finding new ways to ensure refactoring predictability, but be conservative about the assumption QA will find all the bugs.

Move business logic out of busy components, but be brave enough to keep the legacy code intact if the only argument is “this code looks wrong”.

These are useful points. I would like to suggest some bright white lines for those who have to tackle an IRS-mainframe- or AT&T-billing system type of challenge as well as tweaking an artificial intelligence solution to respond to those wonky multi-ethnic images Google generated in order to allow the Sundar & Prabhakar Comedy Team to smile sheepishly and apologize again for lousy software.

Are you ready? Let’s go:

  1. Fixes add to the complexity of the code base. As time goes stumbling forward, the complexity of the software becomes greater. The cost of making sure the fix works and does not create exciting dependency behavior goes up. Thus, small fixes “cost” more, and these costs are tough to control.
  2. The safest fixes are “wrappers”; that is, no one in his or her right mind wants to change software written in 1978 for a machine no longer in production by the manufacturer. Therefore, new software is written to interact in a “safe” way with the original software. The new code “fixes up” the problem without screwing up what grandpa programmer wrote almost half a century ago. The problem is that “wrappers” tend to slow stuff down. (A sketch of the wrapper idea appears after this list.) The fix is to say one will optimize the system while one looks for a new project or job.
  3. The software used for “fixing” a problem is becoming the equivalent of repairing an aircraft component with Dawn laundry detergent. The “fix” is cheap, easy to use, and good enough. The software equivalent of this Dawn solution is that it will not stand the test of time. Instead of code crafted in good old COBOL or Assembler, we have some Fancy Dan tools which may fall out of favor in a matter of months, not decades.
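For readers who never met grandpa programmer’s handiwork, here is a minimal sketch of the wrapper idea in Python. The legacy routine, its fixed-width record, and the field layout are hypothetical stand-ins, not code from any real billing system:

```python
from dataclasses import dataclass

def legacy_rate_lookup(acct: str, flags: int) -> str:
    """Hypothetical stand-in for a 1978-era routine nobody dares touch.

    Returns a fixed-width record: 10-char account, 1-char plan code,
    then a 6-digit rate expressed in tenths of a cent.
    """
    return f"{acct:<10}A{123456:06d}"

@dataclass
class Rate:
    account: str
    plan: str
    dollars_per_minute: float

def modern_rate_lookup(account: str) -> Rate:
    """Wrapper: new code talks to the old code instead of replacing it.

    All knowledge of the fixed-width record lives here, so callers never
    see the 1978 calling convention. The cost is an extra hop and a parse
    on every call, which is why wrappers tend to slow stuff down.
    """
    record = legacy_rate_lookup(account, flags=0)
    return Rate(
        account=record[0:10].strip(),
        plan=record[10],
        dollars_per_minute=int(record[11:17]) / 1000.0,
    )

if __name__ == "__main__":
    print(modern_rate_lookup("KY4025551"))
```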

Many projects promise better, faster, and cheaper. The reminder “Pick two” is helpful.

Net net: Fixing up lousy or flawed software is going to increase risks and costs. The question asked by bean counters is, “How much?” The answer is, “No one knows until the project is done … if ever.”

Stephen E Arnold, March 18, 2024

Humans Wanted: Do Not Leave Information Curation to AI

March 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Remember RSS feeds? Before social media took over the Internet, they were the way we got updates from sources we followed. It may be time to dust off the RSS, for it is part of blogger Joan Westenberg’s plan to bring a human touch back to the Web. We learn of her suggestions in, “Curation Is the Last Best Hope of Intelligent Discourse.”

Westenberg argues human judgement is essential in a world dominated by AI-generated content of dubious quality and veracity. Generative AI is simply not up to the task. Not now, perhaps not ever. Fortunately, a remedy is already being pursued, and Westenberg implores us all to join in. She writes:

“Across the Fediverse and beyond, respected voices are leveraging platforms like Mastodon and their websites to share personally vetted links, analysis, and creations following the POSSE model – Publish on your Own Site, Syndicate Elsewhere. By passing high-quality, human-centric content through their own lens of discernment before syndicating it to social networks, these curators create islands of sanity amidst oceans of machine-generated content of questionable provenance. Their followers, in turn, further syndicate these nuggets of insight across the social web, providing an alternative to centralised, algorithmically boosted feeds. This distributed, decentralised model follows the architecture of the web itself – networks within networks, sites linking out to others based on trust and perceived authority. It’s a rethinking of information democracy around engaged participation and critical thinking from readers, not just content generation alone from so-called ‘influencers’ boosted by profit-driven behemoths. We are all responsible for carefully stewarding our attention and the content we amplify via shares and recommendations. With more voices comes more noise – but also more opportunity to find signals of truth if we empower discernment. This POSSE model interfaces beautifully with RSS, enabling subscribers to follow websites, blogs and podcasts they trust via open standard feeds completely uncensored by any central platform.”
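For anyone whose RSS muscles have atrophied, the subscriber side of this POSSE-plus-RSS arrangement amounts to a few lines of code. A minimal sketch in Python, assuming the third-party feedparser package; the feed URLs are placeholders, and a human still has to do the actual discernment:

```python
import feedparser  # pip install feedparser

# Feeds chosen by a human curator, not an engagement algorithm.
# Placeholder URLs for illustration only.
TRUSTED_FEEDS = [
    "https://example.com/curator-one/feed.xml",
    "https://example.org/curator-two/rss",
]

def latest_links(feeds, per_feed=5):
    """Pull the newest entries from each trusted feed, no central platform involved."""
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries[:per_feed]:
            yield entry.get("title", "(untitled)"), entry.get("link", "")

if __name__ == "__main__":
    for title, link in latest_links(TRUSTED_FEEDS):
        print(f"{title}\n  {link}")
```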

But is AI all bad? No, Westenberg admits, the technology can be harnessed for good. She points to Anthropic’s Constitutional AI as an example: it was designed to preserve existing texts instead of overwriting them with automated content. It is also possible, she notes, to develop AI systems that assist human curators instead of competing with them. But we suspect we cannot rely on companies that profit from the proliferation of shoddy AI content to supply such systems. Who will?

Cynthia Murrell, March 15, 2024

Microsoft and Security: A Rerun with the Same Worn-Out Script

March 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Marvel cinematic universe has spawned two dozen sequels. Microsoft’s security circus is moving up fast in the reprise business. Unfortunately there is no super hero who comes to the rescue of the giant American firm. The villains in these big screen stunners are a bit like those in the James Bond films. Microsoft seems to prefer to wrestle with the allegedly Russian Cozy Bear, or at least to convert a cartoon animal into the personification of evil.

image

Thanks, MSFT, you have nailed security theater and reruns of the same tired story.

What’s interesting about these security blockbusters is that each follows a Hollywood style “you’ve seen this before, nudge nudge” approach to the entertainment. The sequence begins with a belated announcement that Microsoft security has been breached. The evil bad actors have stolen data, corrupted software, and by brute force foiled the norm cores in Microsoft World. Then come announcements about fixes that the Microsoft customer must implement, along with admonitions to keep that MSFT software updated and warnings about using “old” computers, etc. etc.

“Russian Hackers Accessed Microsoft Source Code” is the equivalent of a New York Times film review. The write up reports:

In January, Microsoft disclosed that Russian hackers had breached the company’s systems and managed to read emails belonging to senior executives. Now, the company has revealed that the breach was worse than initially understood and that the Russian hackers accessed Microsoft source code. Friday’s revelation — made in a blog post and a filing with the Securities and Exchange Commission — is the latest in a string of breaches affecting the company that have raised major questions in Washington about Microsoft’s security posture.

Well, that’s harsh. No mention of the estimable alleged monopoly’s releasing the information on March 7, 2024. I am capturing my thoughts on March 8, 2024. But with college basketball moving toward tournament time, who cares? I am not really sure any more. And Washington? Does the name evoke a person, a committee, a committee consisting of the heads of security committees, someone in the White House, an “expert” at the suddenly famous National Bureau of Standards, or absolutely no one?

The write up asserts:

The company is concerned, however, that “Midnight Blizzard is attempting to use secrets of different types it has found,” including in emails between customers and Microsoft. “As we discover them in our exfiltrated email, we have been and are reaching out to these customers to assist them in taking mitigating measures,” the company said in its blog post. The company describes the incident as an example of “what has become more broadly an unprecedented global threat landscape, especially in terms of sophisticated nation-state attacks.” In response, the company has said it is increasing the resources and attention devoted to securing its systems.

Microsoft is “reaching out.” I can reach for a donut, but I do not grasp it and gobble it down. “Reach” is not the same as fixing the problems Microsoft caused.

Several observations:

  1. Microsoft is an alleged monopoly, and it is allowing its digital trains to set fire to the fields, homes, and businesses which have to use its tracks. Isn’t it time for purposeful action from the US government agencies with direct responsibility for cyber security and appropriate business conduct?
  2. Can Microsoft remediate its problems? My answer is, “No.” Vulnerabilities are engineered in because no one has the time, energy, or interest to chase down problems and fix them. There is an ageing programmer named Steve Gibson. His approach to software is the exact opposite of Microsoft’s. Mr. Gibson will never be a trillion dollar operation, but his software works. Perhaps Microsoft should consider adopting some of Mr. Gibson’s methods.
  3. Customers have to take a close look at the security breaches endlessly reported by cyber security companies. Some outfits’ software is on the list most of the time. Other companies’ software is an infrequent visitor to these breach parties. Is it time for customers to be looking for an alternative to what Microsoft provides?

Net net: A new security release will be coming to the computer near you. Don’t fail to miss it.

Stephen E Arnold, March 12, 2024


An Allocation Society or a Knowledge Value System? Pick One, Please!

February 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I get random inquiries, usually from LinkedIn, asking me about books I would recommend to a younger person trying to [a] create a brand and make oodles of money, [b] generate sales immediately from unsolicited emails to strangers, or [c] make a somewhat limp-wristed attempt to sell me something. I typically recommend a book I learned about when I was giving lectures at the Kansai Institute of Technology and a couple of outfits in Tokyo. The book is The Knowledge-Value Revolution, written by a former Japanese government professional named Taichi Sakaiya. The subtitle of the book is “A History of the Future.”

So what?

I read an essay titled “The Knowledge Economy Is Over. Welcome to the Allocation Economy.” The thesis of this essay is that Sakaiya’s description of the future is pretty much wacko. Here’s a passage from the essay about the allocation economy:

Summarizing used to be a skill I needed to have, and a valuable one at that. But before it had been mostly invisible, bundled into an amorphous set of tasks that I’d called “intelligence”—things that only I and other humans could do. But now that I can use ChatGPT for summarizing, I’ve carved that task out of my skill set and handed it over to AI. Now, my intelligence has learned to be the thing that directs or edits summarizing, rather than doing the summarizing myself.
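Mechanically, the hand-off the author describes is a few lines of code. A minimal sketch using the OpenAI Python client; the model name is a placeholder, and the prompt wording is ours, not the essay’s:

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

client = OpenAI()

def summarize(text: str, max_words: int = 100) -> str:
    """Delegate the summarizing; the human's role shrinks to directing and editing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model would do
        messages=[
            {"role": "system", "content": f"Summarize the user's text in at most {max_words} words."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

The human who once produced the summary now reviews this function’s output. That is the entire “allocation” job description.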


A world class knowledge surfer now wins gold medals for his ability to surf on the output of smart robots and pervasive machines. Thanks, Google ImageFX. Not funny but good enough, which is the mark of a champion today, isn’t it?

For me, the message is that people want summaries. This individual was a summarizer and, hence, a knowledge worker. With the smart software doing the summarizing, the knowledge worker is kaput. The solution is for the knowledge worker to move up conceptually. The jump is a meta-play. Debaters learn quickly that when an argument is going nowhere, the trick that can deliver a win is to pop up a level. The shift from poverty to a discussion about the dysfunction of a city board of advisors is a trick used in places like San Francisco. It does not matter that the problem of homelessness is not a city government issue. Tents and bench dwellers are the exhaust from a series of larger systems. None can do much about the problem. Therefore, nothing gets done. But for a novice debater unfamiliar with popping up a level or a meta-play, the loss is baffling.

The essay putting Sakaiya in the dumpster is not convincing, and it certainly is not going to win a debate between the knowledge value revolution and the allocation economy. The reason strikes me as a failure to see that smart software, the present and future dislocations of knowledge workers, and the brave words about becoming a director or editor are evidence that Sakaiya was correct. He wrote in 1985:

If the type of organization typical of industrial society could be said to resemble a symphony orchestra, the organizations typical of the knowledge-value society would be more like the line-up of a jazz band.

The author of the allocation economy essay does not realize that individuals with expertise are playing a piano or a guitar. Of those who do play, only a tiny fraction (one percent of the top 10 percent, perhaps?) will be able to support themselves. Of those elite individuals, how many Taylor Swifts are making the record companies and motion picture impresarios look really stupid? Two, five, whatever. The point is that the knowledge-value revolution transforms much more than “attention” or “allocation.” Sakaiya, in my opinion, is operating at a sophisticated meta-level. Renaming the plight of people who do menial mental labor does not change a painful fact: Knowledge value means those who have high-value knowledge are going to earn a living. I am not sure what the newly unemployed technology workers, the administrative facilitators, or the cut-loose “real” journalists are going to do to live as their parents did in the good old days.

The allocation essay offers:

AI is cheap enough that tomorrow, everyone will have the chance to be a manager—and that will significantly increase the creative potential of every human being. It will be on our society as a whole to make sure that, with the incredible new tools at our disposal, we bring the rest of the economy along for the ride.

How many jazz musicians can ride on a particular market sector propelled by smart software? How many individuals will enjoy personal and financial success in the AI allocation-centric world? Remember, please, that there are about eight billion people in the world. How many Duke Ellingtons and Dave Brubecks were there?

The knowledge value revolution means that the majority of individuals will be excluded from nine to five jobs, significant financial success, and meaningful impact on social institutions. I am not arguing for everyone to become a surfer on smart software, but if that happens, the future is going to be more like the one Sakaiya outlined, not an allocation-centric operation, in my opinion.

Stephen E Arnold, February 20, 2024

Is AI Another VisiCalc Moment?

February 14, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The easy-to-spot orange newspaper ran a quite interesting “essay” called “What the Birth of the Spreadsheet Can Teach Us about Generative AI.” Let me cut to the point when the fox is killed. AI is likely to be a job creator. AI has arrived at “the right time.” The benefits of smart software are obvious to a growing number of people. An entrepreneur will figure out a way to sell an AI gizmo that is easy to use, fast, and good enough.

In general, I agree. There is one point that the estimable orange newspaper chose not to include. The VisiCalc innovation converted old-fashioned ledger paper into software which could eliminate manual grunt work to some degree. The poster child of the next technology boom seems tailor-made to facilitate surveillance, weapons, and development of novel bio-agents.


AI is going to surprise some people more than others. Thanks, MSFT Copilot Bing thing. Not good but I gave up with the prompts to get a cartoon because you want to do illustrations. Sigh.

I know that spreadsheets are used by defense contractors, but the link between a spreadsheet and an AI-powered drone equipped with octanitrocubane variants is less direct. Sure, spreadsheets arrived in numerous use cases, some obvious, some not. But the capabilities for enabling a range of weapons systems strike me as far more obvious.

The Financial Times’s essay states:

Looking at the way spreadsheets are used today certainly suggests a warning. They are endlessly misused by people who are not accountants and are not using the careful error-checking protocols built into accountancy for centuries. Famous economists using Excel simply failed to select the right cells for analysis. An investment bank used the wrong formula in a risk calculation, accidentally doubling the level of allowable risk-taking. Biologists have been typing the names of genes, only to have Excel autocorrect those names into dates. When a tool is ubiquitous, and convenient, we kludge our way through without really understanding what the tool is doing or why. And that, as a parallel for generative AI, is alarmingly on the nose.
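The wrong-cells blunder is easy to reproduce in any tool, not just Excel. A toy sketch in Python with invented numbers: the calculation looks plausible and silently ignores a quarter of the data, which is the class of error described above:

```python
# Twenty hypothetical growth figures; values invented for illustration.
growth = [2.1, 3.4, 0.8, 1.9, 2.7, 3.0, 1.1, 2.2, 2.9, 0.5,
          1.4, 2.0, 3.3, 1.7, 2.6, -0.2, 1.0, 2.4, 0.9, 1.8]

mean_all = sum(growth) / len(growth)  # every row: the intended analysis
mean_bad = sum(growth[:15]) / 15      # "wrong cells": the last five rows silently dropped

print(f"all 20 rows: {mean_all:.2f}")         # 1.88
print(f"first 15 rows only: {mean_bad:.2f}")  # 2.11, and nothing warned anybody
```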

Smart software, however, is not a new thing. One can participate in quasi-religious disputes about whether AI is 20, 30, 40, or more years old. What’s interesting to me is that after chugging along like a mule cart on the Information Superhighway, AI is everywhere. Old-school British newspapers liken it to the spreadsheet. Entrepreneurs spend big bucks on Product Hunt roll outs. Owners of mobile devices can locate “pizza near me” without having to type, speak, or express an interest in a cardiologist’s favorite snack.

AI strikes me as a different breed of technology cat. Here are my reasons:

  1. Serious AI takes serious money.
  2. Big AI is going to be a cloud-linked service which invites consolidation just like those hundreds of US railroads became the glorious two player system we have today: One for freight and one for passengers who love trains more than flying or driving.
  3. AI systems are going to have to find a way to survive and thrive without becoming victims of content inbreeding and bizarre outputs fueled by synthetic data. VisiCalc spawned spreadsheet fever in humans from the outset. The difference is that AI does its work largely without humanoids.

Net net: The spreadsheet looks like a convenient metaphor. But metaphors are not the reality. Reality can surprise in interesting ways.

Stephen E Arnold, February 14, 2024
