Getting Old in the Age of AI? Yeah, Too Bad
March 25, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an interesting essay called “‘Gen X Has Had to Learn or Die’: Mid-Career Workers Are Facing Ageism in the Job Market.” The title assumes that the reader knows the difference between Gen X, Gen Y, Gen Z, and whatever other demographic slices marketers and “social” scientists cook up. I recognize one time slice: Dinobabies like me and a category I have labeled “Other.”
Two Gen X dinobabies find themselves out of sync with the younger reptiles’ version of Burning Man. Thanks, MSFT Copilot. Close enough.
The write up is, I think, the work product of a person who realizes that the stranger in a photograph is the younger version of today’s self. “How can that be?” the author of the essay asks. “In my Gen X, Y, or Z mind I am the same. I am exactly the way I was when I was younger.” The write up states:
Gen Xers, largely defined as people in the 44-to-59 age group, are struggling to get jobs.
The write up quotes an expert, Christina Matz, associate professor at the Boston College School of Social Work, and director of the Center on Aging and Work. I believe this individual has a job for now. The essay quotes her observation:
older workers are sometimes perceived as “doddering but dear”. Matz says, “They’re labelled as slower and set in their ways, well-meaning on one hand and incompetent on the other. People of a certain age are considered out-of-touch, and not seen as progressive and innovative.”
I like to think of myself as doddering. I am not sure anyone, regardless of age, will label me “dear.”
But back to the BBC’s essay. I read:
We’re all getting older.
Now that’s an insight!
I noted that the acronym “AI” appears once in the essay. One source is quoted as offering:
… we had to learn the internet, then Web 2.0, and now AI. Gen X has had to learn or die,
Hmmm. Learn or die.
Several observations:
- The write up does not tackle the characteristic of work that strikes me as important; namely, if one is in the Top Tier of people in a particular discipline, jobs will be hard to find. Artificial intelligence will elevate those just below the “must hire” level and allow organizations to replace what once was called “the organization man” with software.
- The discovery that being able to use a mobile phone does not confer intellectual super powers. The kryptonite to those hunting for a “job” is that their “package” does not have “value” to an organization seeking full time equivalents. People slap a price tag on themselves and, like people running a yard sale, realize that no one will pay very much for that stack of old time post cards grandma collected.
- The notion of entitlement does not appear in the write up. In my experience, a number of people believe that a company or other type of entity “owes them a living.” Those accustomed to receiving “Also Participated” trophies and “easy” A’s have found themselves on the wrong side of paradise.
My hunch is that these “ageism” write ups are reactions to the gradual adoption of ever more capable “smart” software. I am not sure if the author agrees with me; probably not. I am asserting that the examples and comments in the write up are a reaction to the existential threat of AI, bots, and embedded machine intelligence finding their way into “systems” today.
Now let’s think about the “learn” plank of the essay. A person can learn, adapt, and thrive, right? My personal view is that this is a shibboleth. Oh, oh.
Stephen E Arnold, March 25, 2024
The University of Illinois: Unintentional Irony
March 22, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I admit it. I was in the PhD program at the University of Illinois at Champaign-Urbana (aka Chambana). There was nothing like watching a storm build from the upper floors of the now departed FAR. I spotted a university news release titled “Americans Struggle to Distinguish Factual Claims from Opinions Amid Partisan Bias.” From my point of view, the paper presents research that says that half of those in the sample cannot distinguish truth from fiction. That’s a fact easily verified by visiting a local chain store, purchasing a product, and asking the clerk to provide the change in a specific way; for example, “May I have two fives and five dimes, please?” Putting data behind personal experience is a time-honored chore in the groves of academe.
Discerning people can determine “real” from “original fakes.” Well, only half the people can, it seems. The problem is defining what’s true and what’s false. Thanks, MSFT Copilot. Keep working on your security. Those breaches are “real.” Half the time is close enough for horseshoes.
Here’s a quote from the write up I noted:
“How can you have productive discourse about issues if you’re not only disagreeing on a basic set of facts, but you’re also disagreeing on the more fundamental nature of what a fact itself is?” — Matthew Mettler, a U. of I. graduate student and co-author of the study with Jeffery J. Mondak, a professor of political science and the James M. Benson Chair in Public Issues and Civic Leadership at Illinois.
The news release about Mettler’s and Mondak’s research contains this statement:
But what we found is that, even before we get to the stage of labeling something misinformation, people often have trouble discerning the difference between statements of fact and opinion…. “What we’re showing here is that people have trouble distinguishing factual claims from opinion, and if we don’t have this shared sense of reality, then standard journalistic fact-checking – which is more curative than preventative – is not going to be a productive way of defanging misinformation,” Mondak said. “How can you have productive discourse about issues if you’re not only disagreeing on a basic set of facts, but you’re also disagreeing on the more fundamental nature of what a fact itself is?”
But the research suggests that highly educated people cannot differentiate made up data from non-weaponized information. What struck me is that Harvard’s Misinformation Review published this U of I research that provides a road map to fooling peers and publishers. Harvard University, like Stanford University, has found that certain big-time scholars violate academic protocols.
I am delighted that the U of I research is getting published. My concern is that the Misinformation Review may not find my laughing at it to its liking. Harvard illustrates that academic transgressions cannot be identified by half of those exposed to the confections of up-market academics.
Should Messrs Mettler and Mondak have published their research in another journal? That’s a good question, but I am no longer convinced that professional publications have more credibility than the outputs of a content farm. Such is the erosion of once-valued norms. Another peril of thumb typing is present.
Stephen E Arnold, March 22, 2024
Software Failure: Why Problems Abound and Multiply Like Gerbils
March 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Why Software Projects Fail” after a lunch at which crappy software and lousy products were a source of amusement. The door fell off what?
What’s interesting about the article is that it contains a number of statements which resonated with me. I recommend the article, but I want to highlight several statements from the essay. These do a good job of explaining why small and large projects go off the rails. Within the last 12 months I witnessed one project get tangled in solving a problem that existed 15 years ago. Today not so much. The team crafted the equivalent of a Greek Corinthian helmet from the 8th century BCE. Another project, infused with AI and a vision of providing a “new” approach to security, wobbled between and among a telecommunications approach, an email approach, and an SMS approach with bells and whistles only a science fiction fan would appreciate. Both of these examples obtained funding; neither set out to build a clown car. What happened? That’s where “Why Software Projects Fail” becomes relevant.
Thanks, MSFT Copilot. You have that MVP idea nailed with the recent Windows 11 update, don’t you? Good enough, I suppose.
Let’s look at three passages from the essay, shall we?
Belief in One’s Abilities or I Got an Also-Participated Ribbon in Middle School
Here’s the statement from the essay:
One of the things that I’ve noticed is that developers often underestimate not just the complexity of tasks, but there’s a general overconfidence in their abilities, not limited by programming:
- Overconfidence in their coding skills.
- Overconfidence in learning new technologies.
- Overconfidence in our abstractions.
- Overconfidence in external dependencies, e.g., third-party services or some open-source library.
My comment: Spot on. Those ribbons built confidence, but they mean nothing.
Open Source Is Great Unless It Has Been Screwed Up, Become a Malware Delivery Vehicle, or Just Does Not Work
Here’s the statement from the essay:
… anything you do not directly control is a risk of hidden complexity. The assumption that third-party services, libraries, packages, or APIs will work as expected without bugs is a common oversight.
My view is that “complexity” is kicked around as if everyone held a shared understanding of the term. There are quite different types of complexity. For software, there is the complexity of a simple process created in Assembler but essentially impenetrable to a 20-something from a whiz-bang computer science school. There is the complexity of software built over time by attention deficit driven people who do not communicate, coordinate, or care what others are doing, will do, or have done. Toss in the complexity of indifferent, uninformed, or uninterested “management,” and you get an exciting environment in which to “fix up” software. The cherry on top of this confection is that quite a bit of software is assumed to be good. Ho ho ho.
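The essay’s point about external dependencies invites a small illustration. Below is a minimal sketch in Python of treating a third-party service as a risk rather than a given: a timeout, a retry, and a fallback. The endpoint, parameters, and fallback value are hypothetical, not anything from the cited essay.

    import time
    import requests

    def fetch_exchange_rate(currency: str) -> float:
        # Assume nothing: the third-party service may be slow, down, or wrong.
        for attempt in range(3):
            try:
                resp = requests.get(
                    "https://api.example.com/rates",  # hypothetical service
                    params={"currency": currency},
                    timeout=2,  # never wait forever on code you do not control
                )
                resp.raise_for_status()
                return float(resp.json()["rate"])
            except (requests.RequestException, KeyError, ValueError):
                time.sleep(2 ** attempt)  # simple backoff before the retry
        return 1.0  # documented fallback instead of a crash

Good enough? Perhaps, but at least the risk is written down where the next developer can see it.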
The Real World: It Exists and Permeates
I liked this statement:
Technology that seemed straightforward refuses to cooperate, external competitors launch similar ideas, key partners back out, and internal business stakeholders focus more on the projects that include AI in their name. Things slow down, and as months turn into years, enthusiasm wanes. Then the snowball continues — key members leave, and new people join, each departure a slight shift in direction. New tech lead steps in, eager to leave their mark, steering the project further from its original course. At this point, nobody knows where the project is headed, and nobody wants to admit the project has failed. It’s a tough spot, especially when everyone’s playing it safe, avoiding the embarrassment or penalties of admitting failure.
What are the signals that trouble looms? A fumbled ball at the Google or the Apple car that isn’t can be blinking lights. Staff who go rogue on social media or find an ambulance-chasing law firm can catch some individual’s attention.
The write up contains other helpful observations. Will people take heed? Are you kidding me? Excellence costs money and requires informed judgment and expertise. Who has time for this with AI calendars, the demands of TikTok and Instagram, and hitting the local coffee shop?
Stephen E Arnold, March 19, 2024
Old Code, New Code: Can You Make It Work Again… Sort Of?
March 18, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Even hippy dippy super slick AI start ups have a technical debt problem. It is, in my opinion, no different from the “costs” imposed on outfits like JPMorgan Chase or (heaven help us) AMTRAK. Software which mostly works is subject to two environmental problems. First, the people who wrote the code or made it work the last time catastrophe struck (hello, AT&T, how are those pushed updates working for you now?) move on, quit, or whatever. Second, the technical options for remediating the problem are evolving (how are those security hot fixes working out, Microsoft?).
The helpful father asks a question the aspiring engineer cannot answer. Thus it was when the wizard was a child, and it is when the wizard is working on a modern engineering project. Buildings tip; aircraft lose doors and wheels. Software updates kill computers. Self-driving cars cannot drive themselves. Thanks, MSFT Copilot. Did you get your model airplane to fly when you were a wee lad? I think I know the answer.
I thought about this problem of the cost of code remediating, fixing, redoing, upgrading, or whatever term fast-talking sales engineers use in their Zooms and PowerPoints as I read “The High-Risk Refactoring.” The write up does a good job of explaining in a gentle way what happens when suits authorize making old code like new again. (The suits do not know the agonies of the original developers, but why should “history” intrude on a whiz bang GenX or GenY management type?)
The article says:
it’s highly important to ensure the system works the same way after the swap with the new code. In that regard, immediately spotting when something breaks throughout the whole refactoring process is very helpful. No one wants to find that out in production.
No kidding.
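One way to spot a break immediately is a characterization (golden master) test: the old code is the oracle, and the refactored code must match it. Here is a minimal sketch in Python; the module names, functions, and recorded-input file are hypothetical, not taken from the cited write up.

    import json

    from billing_legacy import compute_total         # hypothetical: the old code
    from billing_refactored import compute_total_v2  # hypothetical: the new code

    def test_refactored_code_matches_legacy_behavior():
        # Inputs captured from production traffic (assumed to exist on disk).
        with open("recorded_inputs.json") as f:
            cases = json.load(f)
        for case in cases:
            # The legacy implementation is the oracle; any difference is a
            # regression caught here, not in production.
            assert compute_total_v2(**case) == compute_total(**case)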
In most cases, there are insufficient skilled people and money to create a new or revamped system, get it up and running in parallel for an appropriate period of time, identify the problems, remediate them, and then make the cut over. People buy cars this way, but that’s not how most organizations, regardless of size, “do” software. Okay, the take-your-car-in, buy-a-new-one, and drive-off approach will not work in today’s business environment.
The write up focuses on what most organizations do; that is, write or fix new code and stick it into a system. There may or may not be resources for a staging server, but the result is the same. The old software has been “fixed,” the documentation is “sort of written,” and people move on to other work or, in the case of consulting engineering firms, just get replaced by a new, higher margin professional.
The write up takes a different approach and concludes with four suggestions or questions to ask. I quote:
- Refactor if things are getting too complicated, but stop if can’t prove it works.
- Accompany new features with refactoring for areas you foresee to be subject to a change, but copy-pasting is ok until patterns arise.
- Be proactive in finding new ways to ensure refactoring predictability, but be conservative about the assumption QA will find all the bugs.
- Move business logic out of busy components, but be brave enough to keep the legacy code intact if the only argument is “this code looks wrong”.
These are useful points. I would like to suggest some bright white lines for those who have to tackle an IRS-mainframe- or AT&T-billing system type of challenge as well as tweaking an artificial intelligence solution to respond to those wonky multi-ethnic images Google generated in order to allow the Sundar & Prabhakar Comedy Team to smile sheepishly and apologize again for lousy software.
Are you ready? Let’s go:
- Fixes add to the complexity of the code base. As time goes stumbling forward, the complexity of the software becomes greater. The cost of making sure the fix works and does not create exciting dependency behavior goes up. Thus, small fixes “cost” more, and these costs are tough to control.
- The safest fixes are “wrappers”; that is, no one in his or her right mind wants to change software written in 1978 for a machine no longer in production by the manufacturer. Therefore, new software is written to interact in a “safe” way with the original software; a sketch of the idea appears after this list. The new code “fixes up” the problem without screwing up what grandpa programmer wrote almost half a century ago. The problem is that “wrappers” tend to slow stuff down. The fix is to say one will optimize the system while one looks for a new project or job.
- The software used for “fixing” a problem is becoming the equivalent of repairing an aircraft component with Dawn laundry detergent. The “fix” is cheap, easy to use, and good enough. The software equivalent of this Dawn solution is that it will not stand the test of time. Instead of code crafted in good old COBOL or Assembler, we have some Fancy Dan tools which may fall out of favor in a matter of months, not decades.
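For the curious, here is a minimal sketch in Python of the wrapper idea from the second point above. The legacy module and its quirks are hypothetical stand-ins for whatever grandpa programmer actually wrote.

    from decimal import Decimal

    import legacy_payroll  # assumed bridge to the 1978 routine; never edited

    def net_pay(employee_id: str, hours: float) -> Decimal:
        # Modern interface; the original routine stays untouched.
        # The old routine wants a padded ID and fixed-point cents (assumed quirks).
        raw_cents = legacy_payroll.CALCPAY(employee_id.rjust(8), int(hours * 100))
        # Translate the legacy output for modern callers instead of changing it.
        return Decimal(raw_cents) / 100

Every call pays the toll of that translation layer, which is exactly the slowdown the second point mentions.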
Many projects result in better, faster, and cheaper. The reminder “Pick two” is helpful.
Net net: Fixing up lousy or flawed software is going to increase risks and costs. The question asked by bean counters is, “How much?” The answer is, “No one knows until the project is done … if ever.”
Stephen E Arnold, March 18, 2024
Humans Wanted: Do Not Leave Information Curation to AI
March 15, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Remember RSS feeds? Before social media took over the Internet, they were the way we got updates from sources we followed. It may be time to dust off the RSS, for it is part of blogger Joan Westenberg’s plan to bring a human touch back to the Web. We learn of her suggestions in, “Curation Is the Last Best Hope of Intelligent Discourse.”
Westenberg argues human judgement is essential in a world dominated by AI-generated content of dubious quality and veracity. Generative AI is simply not up to the task. Not now, perhaps not ever. Fortunately, a remedy is already being pursued, and Westenberg implores us all to join in. She writes:
“Across the Fediverse and beyond, respected voices are leveraging platforms like Mastodon and their websites to share personally vetted links, analysis, and creations following the POSSE model – Publish on your Own Site, Syndicate Elsewhere. By passing high-quality, human-centric content through their own lens of discernment before syndicating it to social networks, these curators create islands of sanity amidst oceans of machine-generated content of questionable provenance. Their followers, in turn, further syndicate these nuggets of insight across the social web, providing an alternative to centralised, algorithmically boosted feeds. This distributed, decentralised model follows the architecture of the web itself – networks within networks, sites linking out to others based on trust and perceived authority. It’s a rethinking of information democracy around engaged participation and critical thinking from readers, not just content generation alone from so-called ‘influencers’ boosted by profit-driven behemoths. We are all responsible for carefully stewarding our attention and the content we amplify via shares and recommendations. With more voices comes more noise – but also more opportunity to find signals of truth if we empower discernment. This POSSE model interfaces beautifully with RSS, enabling subscribers to follow websites, blogs and podcasts they trust via open standard feeds completely uncensored by any central platform.”
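For anyone who has forgotten how little machinery RSS requires, here is a minimal sketch using the Python feedparser library. The feed URLs are placeholders for sources a human curator actually vets.

    import feedparser

    # Hypothetical feeds standing in for a curator's personally vetted sources.
    feeds = [
        "https://example.org/curator-one/rss.xml",
        "https://example.net/curator-two/feed",
    ]

    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries[:5]:
            # No platform algorithm in the loop: just the curator's own picks.
            print(entry.title, "->", entry.link)

No central platform, no boosted feed, no login widget. Just feeds.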
But is AI all bad? No, Westenberg admits, the technology can be harnessed for good. She points to Anthropic‘s Constitutional AI as an example: it was designed to preserve existing texts instead of overwriting them with automated content. It is also possible, she notes, to develop AI systems that assist human curators instead of competing with them. But we suspect we cannot rely on companies that profit from the proliferation of shoddy AI content to supply such systems. Who will?
Cynthia Murrell, March 15, 2024
Microsoft and Security: A Rerun with the Same Worn-Out Script
March 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The Marvel cinematic universe has spawned two dozen sequels. Microsoft’s security circus features are moving up fast in the reprise business. Unfortunately, there is no super hero who comes to the rescue of the giant American firm. The villains in these big screen stunners are a bit like those in the James Bond films. Microsoft seems to prefer to wrestle with the allegedly Russian Cozy Bear or at least convert a cartoon animal into the personification of evil.
Thanks, MSFT, you have nailed security theater and reruns of the same tired story.
What’s interesting about these security blockbusters is that each follows a Hollywood style “you’ve seen this before nudge nudge” approach to the entertainment. The sequence is a belated announcement that Microsoft security has been breached. The evil bad actors have stolen data, corrupted software, and by brute force foiled the norm cores in Microsoft World. Then come announcements about fixes that the Microsoft customer must implement, along with admonitions to keep that MSFT software updated and warnings about using “old” computers, etc. etc.
“Russian Hackers Accessed Microsoft Source Code” is the equivalent of a New York Times film review. The write up reports:
In January, Microsoft disclosed that Russian hackers had breached the company’s systems and managed to read emails belonging to senior executives. Now, the company has revealed that the breach was worse than initially understood and that the Russian hackers accessed Microsoft source code. Friday’s revelation — made in a blog post and a filing with the Securities and Exchange Commission — is the latest in a string of breaches affecting the company that have raised major questions in Washington about Microsoft’s security posture.
Well, that’s harsh. No mention of the estimable alleged monopoly’s releasing the information on March 7, 2024. I am capturing my thoughts on March 8, 2024. But with college basketball moving toward tournament time, who cares? I am not really sure any more. And Washington? Does the name evoke a person, a committee, a committee consisting of the heads of security committees, someone in the White House, an “expert” at the suddenly famous National Bureau of Standards, or absolutely no one?
The write up asserts:
The company is concerned, however, that “Midnight Blizzard is attempting to use secrets of different types it has found,” including in emails between customers and Microsoft. “As we discover them in our exfiltrated email, we have been and are reaching out to these customers to assist them in taking mitigating measures,” the company said in its blog post. The company describes the incident as an example of “what has become more broadly an unprecedented global threat landscape, especially in terms of sophisticated nation-state attacks.” In response, the company has said it is increasing the resources and attention devoted to securing its systems.
Microsoft is “reaching out.” I can reach for a donut, but I do not grasp it and gobble it down. “Reach” is not the same as fixing the problems Microsoft caused.
Several observations:
- Microsoft is an alleged monopoly, and it is allowing its digital trains to set fire to the fields, homes, and businesses which have to use its tracks. Isn’t it time for purposeful action from the US government agencies with direct responsibility for cyber security and appropriate business conduct?
- Can Microsoft remediate its problems? My answer is, “No.” Vulnerabilities are engineered in because no one has the time, energy, or interest to chase down problems and fix them. There is an ageing programmer named Steve Gibson. His approach to software is the exact opposite of Microsoft’s. Mr. Gibson will never be a trillion dollar operation, but his software works. Perhaps Microsoft should consider adopting some of Mr. Gibson’s methods.
- Customers have to take a close look at the security breaches endlessly reported by cyber security companies. Some outfits’ software is on the list most of the time. Other companies’ software is an infrequent visitor to these breach parties. Is it time for customers to be looking for an alternative to what Microsoft provides?
Net net: A new security release will be coming to the computer near you. Don’t fail to miss it.
Stephen E Arnold, March 12, 2024
An Allocation Society or a Knowledge Value System? Pick One, Please!
February 20, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I get random inquiries, usually from LinkedIn, asking me about books I would recommend to a younger person trying to [a] create a brand and make oodles of money, [b] generate sales immediately from their unsolicited emails to strangers, and [c] make a somewhat limp-wristed attempt to sell me something. I typically recommend a book I learned about when I was giving lectures at the Kansai Institute of Technology and a couple of outfits in Tokyo. The book is the Knowledge Value Revolution, written by a former Japanese government professional named Taichi Sakaiya. The subtitle to the book is “A History of the Future.”
So what?
I read an essay titled “The Knowledge Economy Is Over. Welcome to the Allocation Economy.” The thesis of this essay is that Sakaiya’s description of the future is pretty much wacko. Here’s a passage from the essay about the allocation economy:
Summarizing used to be a skill I needed to have, and a valuable one at that. But before it had been mostly invisible, bundled into an amorphous set of tasks that I’d called “intelligence”—things that only I and other humans could do. But now that I can use ChatGPT for summarizing, I’ve carved that task out of my skill set and handed it over to AI. Now, my intelligence has learned to be the thing that directs or edits summarizing, rather than doing the summarizing myself.
A world class knowledge surfer now wins gold medals for his ability to surf on the output of smart robots and pervasive machines. Thanks, Google ImageFX. Not funny but good enough, which is the mark of a champion today, isn’t it?
For me, the message is that people want summaries. This individual was a summarizer and, hence, a knowledge worker. With the smart software doing the summarizing, the knowledge worker is kaput. The solution is for the knowledge worker to move up conceptually. The jump is a meta-play. Debaters learn quickly that when an argument is going nowhere, the trick that can deliver a win is to pop up a level. The shift from poverty to a discussion about the dysfunction of a city board of advisors is a trick used in places like San Francisco. It does not matter that the problem of the mess is not a city government issue. Tents and bench dwellers are the exhaust from a series of larger systems. None can do much about the problem. Therefore, nothing gets done. But for a novice debater unfamiliar with popping up a level or a meta-play, the loss is baffling.
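As for the summarizing task itself, the “carving out” the essay describes amounts to a few lines of glue code. Here is a minimal sketch, assuming the OpenAI Python client; the model name and prompt are illustrative only.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def summarize(text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": "Summarize the text in three sentences."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

What remains for the human “director” is deciding what goes in and judging whether what comes out is any good.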
The essay putting Sakaiya in the dumpster is not convincing, and it certainly is not going to win a debate between the knowledge value revolution and the allocation economy. The reason strikes me as a failure to see that smart software, the present and future dislocations of knowledge workers, and the brave words about becoming a director or editor are evidence that Sakaiya was correct. He wrote in 1985:
If the type of organization typical of industrial society could be said to resemble a symphony orchestra, the organizations typical of the knowledge-value society would be more like the line-up of a jazz band.
The author of the allocation economy does not realize that individuals with expertise are playing a piano or a guitar. Of those who do play, only a tiny fraction (a one percent of the top 10 percent perhaps?) will be able to support themselves. Of those elite individuals, how many Taylor Swifts are making the record companies and motion picture impresarios look really stupid? Two, five, whatever. The point is that the knowledge-value revolution transforms much more than “attention” or “allocation.” Sakaiya, in my opinion, is operating at a sophisticated meta-level. Renaming the plight of people who do menial mental labor does not change a painful fact: Knowledge value means those who have high-value knowledge are going to earn a living. I am not sure what the newly unemployed technology workers, the administrative facilitators, or the cut-loose “real” journalists are going to do to live as their parents did in the good old days.
The allocation essay offers:
AI is cheap enough that tomorrow, everyone will have the chance to be a manager—and that will significantly increase the creative potential of every human being. It will be on our society as a whole to make sure that, with the incredible new tools at our disposal, we bring the rest of the economy along for the ride.
How many jazz musicians can ride on a particular market sector propelled by smart software? How many individuals will enjoy personal and financial success in the AI allocation-centric world? Remember, please, that there are about eight billion people in the world. How many Duke Ellingtons and Dave Brubecks were there?
The knowledge value revolution means that the majority of individuals will be excluded from nine to five jobs, significant financial success, and meaningful impact on social institutions. I am not for everyone becoming a surfer on smart software, but if that happens, the future is going to be more like the one Sakaiya outlined, not an allocation-centric operation in my opinion.
Stephen E Arnold, February 20, 2024
Is AI Another VisiCalc Moment?
February 14, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The easy-to-spot orange newspaper ran a quite interesting “essay” called “What the Birth of the Spreadsheet Can Teach Us about Generative AI.” Let me cut to the point when the fox is killed. AI is likely to be a job creator. AI has arrived at “the right time.” The benefits of smart software are obvious to a growing number of people. An entrepreneur will figure out a way to sell an AI gizmo that is easy to use, fast, and good enough.
In general, I agree. There is one point that the estimable orange newspaper chose not to include. The VisiCalc innovation converted old-fashioned ledger paper into software which could eliminate manual grunt work to some degree. The poster child of the next technology boom seems tailor-made to facilitate surveillance, weapons, and development of novel bio-agents.
AI is going to surprise some people more than others. Thanks, MSFT Copilot Bing thing. Not good but I gave up with the prompts to get a cartoon because you want to do illustrations. Sigh.
I know that spreadsheets are used by defense contractors, but the link between a spreadsheet and an AI-powered drone equipped with octanitrocubane variants is less direct. Sure, spreadsheets arrived in numerous use cases, some obvious, some not. But the capabilities for enabling a range of weapons systems strike me as far more obvious.
The Financial Times’s essay states:
Looking at the way spreadsheets are used today certainly suggests a warning. They are endlessly misused by people who are not accountants and are not using the careful error-checking protocols built into accountancy for centuries. Famous economists using Excel simply failed to select the right cells for analysis. An investment bank used the wrong formula in a risk calculation, accidentally doubling the level of allowable risk-taking. Biologists have been typing the names of genes, only to have Excel autocorrect those names into dates. When a tool is ubiquitous, and convenient, we kludge our way through without really understanding what the tool is doing or why. And that, as a parallel for generative AI, is alarmingly on the nose.
Smart software, however, is not a new thing. One can participate in quasi-religious disputes about whether AI is 20, 30, 40, or more years old. What’s interesting to me is that after chugging along like a mule cart on the Information Superhighway, AI is everywhere. Old-school British newspapers liken it to the spreadsheet. Entrepreneurs spend big bucks on Product Hunt roll outs. Owners of mobile devices can locate “pizza near me” without having to type, speak, or express an interest in a cardiologist’s favorite snack.
AI strikes me as a different breed of technology cat. Here are my reasons:
- Serious AI takes serious money.
- Big AI is going to be a cloud-linked service which invites consolidation just like those hundreds of US railroads became the glorious two player system we have today: One for freight and one for passengers who love trains more than flying or driving.
- AI systems are going to have to find a way to survive and thrive without becoming victims of content inbreeding and bizarre outputs fueled by synthetic data. VisiCalc spawned spreadsheet fever in humans from the outset. The difference is that AI does its work largely without humanoids.
Net net: The spreadsheet looks like a convenient metaphor. But metaphors are not the reality. Reality can surprise in interesting ways.
Stephen E Arnold, February 14, 2024
School Technology: Making Up Performance Data for Years
February 9, 2024
This essay is the work of a dumb dinobaby. No smart software required.
What is the “make up data” trend? Why is it plaguing educational institutions? From Harvard to Stanford, those who are entrusted with shaping young-in-spirit minds are putting ethical behavior in the trash can. I think I know, but let’s look at allegations of another “synthetic” information event. For context: in the UK there is a government agency called the Office for Standards in Education, Children’s Services and Skills. The agency is called OFSTED. Now let’s go to the “real” news story.
A possible scene outside of a prestigious academic institution when regulations about data become enforceable… give it a decade or two. Thanks, MidJourney. Two tries and a good enough illustration.
“Ofsted Inspectors Make Up Evidence about a School’s Performance When IT Fails” reports:
Ofsted inspectors have been forced to “make up” evidence because the computer system they use to record inspections sometimes crashes, wiping all the data…
Quite a combo: Information technology and inventing data.
The article adds:
…inspectors have to replace those notes from memory without telling the school.
Will the method work for postal investigations? Sure. Can it be extended to other activities? What about data pertinent to the UK government initiatives for smart software?
Stephen E Arnold, February 9, 2024
Alternative Channels, Superstar Writers, and Content Filtering
February 7, 2024
This essay is the work of a dumb dinobaby. No smart software required.
In this post-Twitter world, a duel of influencers is playing out in the blogosphere. At issue: Substack’s alleged Nazi problem. The kerfuffle began with a piece in The Atlantic by Jonathan M. Katz, but has evolved into a debate between Platformer’s Casey Newton and Jesse Singal of Singal-Minded. Both those blogs are hosted by Substack.
To get up to speed on the controversy, see the original Atlantic article. Newton wrote a couple of posts about Substack’s responses, detailing Platformer’s involvement. In “Substack Says It Will Remove Nazi Publications from the Platform,” he writes:
“Substack is removing some publications that express support for Nazis, the company said today. The company said this did not represent a reversal of its previous stance, but rather the result of reconsidering how it interprets its existing policies. As part of the move, the company is also terminating the accounts of several publications that endorse Nazi ideology and that Platformer flagged to the company for review last week.”
How many publications did Platformer flag, and how many of those did Substack remove? Were they significant publications, and did they really violate the rules? These are the burning questions Singal sought to answer. He shares his account in, “Platformer’s Reporting on Substack’s Supposed ‘Nazi Problem’ Is Shoddy and Misleading.” But first, he specifies his own perspective on Katz’ Atlantic article:
“In my view, this whole thing is little more than a moral panic. Moreover, Katz cut certain corners to obscure the fact that to the extent there are Nazis on Substack at all, it appears they have almost no following or influence, and make almost no money. In one case, for example, Katz falsely claimed that a white nationalist was making a comfortable living writing on Substack, but even the most cursory bit of research would have revealed that that is completely false.”
Singal says he plans a detailed article supporting that assertion, but first he must pick apart Platformer’s position. Readers are treated to details from an email exchange between the bloggers and reasons Singal feels Newton’s responses are inadequate. One can navigate to that post for those details if one wants to get into the weeds. As of this writing, Newton has not published a response to Singal’s diatribe. Were we better off when such duels took place 280 characters at a time?
One positive about newspapers: An established editorial process kept superstars grounded in reality. Now entitlement, more than content, seems to be in the driver’s seat.
Cynthia Murrell, February 7, 2024