AI Will Doom You to Poverty Unless You Do AI to Make Money
January 23, 2025
Prepared by a still-alive dinobaby.
I enjoy reading snippets of the AI doomsayers. Some spent too much time worrying about the power of Joe Stalin’s approach to governing. Others just watched the Terminator series instead of playing touch football. A few “invented” AI by cobbling together incremental improvements in statistical procedures lashed to ever-more-capable computing infrastructures. A couple of these folks know that Nostradamus became a brand and want to emulate that predictive master.
I read “Godfather of AI Explains How Scary AI Will Increase the Wealth Gap and Make Society Worse.” That is a snappy title. Whoever wrote it crafted the idea of an explainer to fear. Plus, the clickbait explains that homelessness is for you too. Finally, it presents a trope popular among the elder care set. (Remember, please, that I am a dinobaby myself.) Prod a group of senior citizens to a dinner and you will hear, “Everything is broken.” Also, “I am glad I am old.” Then there is the ever-popular, “Those tattoos! The checkout clerks cannot make change! I don’t understand commercials!” I like to ask, “How many wars are going on now? Quick.”
Two robots plan a day trip to see the street people in Key West. Thanks, You.com. I asked for a cartoon; I get a photorealistic image. I asked for a coffee shop; I get a weird carnival setting. Good enough. (That’s why I am not too worried.)
Is society worse than it ever was? Probably not. I have had an opportunity to visit a number of countries, go to college, work with intelligent (for the most part) people, and read books whilst sitting on the executive mailing tube. Human behavior has been consistent for a long time. Indigenous people did not go to Wegmans or Whole Paycheck. Some herded animals toward a cliff. Others harvested the food and raw materials from the dead bison at the bottom of the cliff. There were no unskilled change makers at this food delivery location.
The write up says:
One of the major voices expressing these concerns is the ‘Godfather of AI’ himself Geoffrey Hinton, who is viewed as a leading figure in the deep learning community and has played a major role in the development of artificial neural networks. Hinton previously worked for Google on their deep learning AI research team ‘Google Brain’ before resigning in 2023 over what he expresses as the ‘risks’ of artificial intelligence technology.
My hunch is that, like me, Geoffrey Hinton “worked at” Google for a good reason — money. Having departed from the land of volleyball and weird empty office buildings, he is now in the doom business. His vision is that there will be more poverty. There’s some poverty in Soweto and the other townships in South Africa. The slums of Rio are no Palm Springs. Rural China is interesting as well. Doesn’t everyone want to run a business from the area in front of a wooden structure adjacent to an empty highway to nowhere? Sounds like there is some poverty around, doesn’t it?
The write up reports:
“We’re talking about having a huge increase in productivity. So there’s going to be more goods and services for everybody, so everybody ought to be better off, but actually it’s going to be the other way around. “It’s because we live in a capitalist society, and so what’s going to happen is this huge increase in productivity is going to make much more money for the big companies and the rich, and it’s going to increase the gap between the rich and the people who lose their jobs.”
The fix is to get rid of capitalism. The alternative? Kumbaya or a better version of those fun dudes Marx, Lenin, and Mao. I stayed in the “last” fancy hotel the USSR built in Tallinn, Estonia. News flash: The hotels near LaGuardia are quite a bit more luxurious.
The godfather then evokes the robot that wanted to kill a rebel. You remember this character. He said, “I’ll be back.” Of course, you will. Hollywood does not do originals.
The write up says:
Hinton’s worries don’t just stop at the wealth imbalance caused by AI too, as he details his worries about where AI will stop following investment from big companies in an interview with CBC News: “There’s all the normal things that everybody knows about, but there’s another threat that’s rather different from those, which is if we produce things that are more intelligent than us, how do we know we can keep control?” This is a conundrum that has circulated the development of robots and AI for years and years, but it’s seeming to be an increasingly relevant proposition that we might have to tackle sooner rather than later.
Yep, doom. The fix is to become an AI wizard, work at a Google-type outfit, cash out, and predict doom. It is a solid career plan. Trust me.
Stephen E Arnold, January 23, 2025
Teenie Boppers and Smart Software: Yep, Just Have Money
January 23, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I scanned the research summary “About a Quarter of U.S. Teens Have Used ChatGPT for Schoolwork – Double the Share in 2023.” Like other Pew data, the summary contained numerous numbers. I was not sufficiently motivated to dig into the methodology to find out how the sample was assembled nor how Pew prompted the mobile-addicted youth to provide presumably truthful answers to direct questions. But why nitpick? We are at the onset of an interesting year which will include forthcoming announcements about how algorithms are agentic and able to fuel massive revenue streams for those in the know.
Students doing their homework while their parents play polo. Thanks, MSFT Copilot. Good enough. I do like the croquet mallets and volleyball too. But children from well-to-do families have such items in abundance.
Let’s go to the videotape, as the late and colorful Warner Wolf once said to his legion of Washington, DC, fans.
One of the highlights of the summary was this finding:
Teens who are most familiar with ChatGPT are more likely to use it for their schoolwork. Some 56% of teens who say they’ve heard a lot about it report using it for schoolwork. This share drops to 18% among those who’ve only heard a little about it.
Not surprisingly, the future leaders of America embrace shortcuts. The question is, “How quickly will awareness reach 99 percent and usage nose above 75 percent?” My guesstimate is pretty quickly. Convenience and more time to play with mobile phones will drive the adoption. Who in America does not like convenience?
Another finding catching my eye was:
Teens from households with higher annual incomes are most likely to say they’ve heard about ChatGPT. The shares who say this include 84% of teens in households with incomes of $75,000 or more say they’ve heard at least a little about ChatGPT.
I found this interesting because it appears to suggest that if a student comes from a home where money does not seem to be a huge problem, the industrious teens are definitely aware of smart software. And when it comes to using the digital handmaiden, Pew apparently finds nothing. There is no data point relating richer progeny with greater use. Instead we learned:
Teens who are most familiar with the chatbot are also more likely to say using it for schoolwork is OK. For instance, 79% of those who have heard a lot about ChatGPT say it’s acceptable to use for researching new topics. This compares with 61% of those who have heard only a little about it.
My thought is that more wealthy families are more likely to have teens who know about smart software. I would hypothesize that wealthy parents will pay for the more sophisticated smart software and smile benignly as the future intelligentsia stride confidently to ever brighter futures. Those without the money will get the opportunity to watch their classmates have more time for mobile phone scrolling, unboxing Amazon deliveries, and grabbing burgers at Five Guys.
I am not sure that the link between wealth and access to learning experiences is a random, one-off occurrence. If I am correct, the Pew data suggest that smart software is not reinforcing democracy. It seems to be making a digital Middle Ages more and more probable. But why think about what a dinobaby hypothesizes? It is tough to scroll zippy mobile phones with old paws and yellowing claws.
Stephen E Arnold, January 23, 2025
Yo, MSFT-Types, Listen Up
January 23, 2025
Developers concerned about security should check out “Seven Types of Security Issues in Software Design” at InsBug. The article does leave out a few points we would have included: using Microsoft software, for example, or paying for cybersecurity solutions that don’t work as licensees believe. And don’t forget engineering for expediency and cost savings rather than security. Nevertheless, the post makes some good points. It begins:
“Software is gradually defining everything, and its forms are becoming increasingly diverse. Software is no longer limited to the applications or apps we see on computers or smartphones. It is now an integral part of hardware devices and many unseen areas, such as cars, televisions, airplanes, warehouses, cash registers, and more. Besides sensors and other electronic components, the actions and data of hardware often rely on software, whether in small amounts of code or in hidden or visible forms. Regardless of the type of software, the development process inevitably encounters bugs that need to be identified and fixed. While major bugs are often detected and resolved before release or deployment by developers or testers, security vulnerabilities don’t always receive the same attention.”
Sad but true. The seven categories include: Misunderstanding of Security Protection Technologies; Component Integration and Hidden Security Designs; Ignoring Security in System Design; Security Risks from Poor Exception Handling; Discontinuous or Inconsistent Trust Relationships; Over-Reliance on Single-Point Security Measures; and Insufficient Assessment of Scenarios or Environments. See the write-up for details on each point. We note a common thread—a lack of foresight. The post concludes:
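To make one of those categories concrete, here is a minimal sketch of the “Security Risks from Poor Exception Handling” issue. All function names, data, and messages below are invented for illustration; they are not from the InsBug post.

```python
# Hypothetical sketch of the "Security Risks from Poor Exception Handling"
# category. All names, data, and messages here are invented for illustration.

def login_leaky(username: str, users: dict) -> str:
    """Anti-pattern: the raw exception detail escapes to the caller."""
    try:
        return f"token-for-{users[username]}"
    except KeyError as err:
        # Echoing the exception confirms to an attacker which usernames exist.
        return f"error: no such user {err}"

def login_safe(username: str, users: dict) -> str:
    """Better: catch narrowly and return one generic failure message."""
    try:
        return f"token-for-{users[username]}"
    except KeyError:
        # Same message for every failure, so probing reveals nothing.
        return "error: invalid credentials"

users = {"alice": 42}
print(login_leaky("mallory", users))  # leaks the probed username
print(login_safe("mallory", users))   # generic failure, nothing leaked
```

The leaky version turns a routine error path into an account-enumeration oracle, which is exactly the kind of flaw that survives functional testing because the code “works.”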
“To minimize security risks and vulnerabilities in software design and development, one must possess solid technical expertise and a robust background in security offense and defense. Developing secure software is akin to crafting fine art — it requires meticulous thought, constant consideration of potential threats, and thoughtful design solutions. This makes upfront security design critically important.”
Security should not be an afterthought. What a refreshing perspective.
Cynthia Murrell, January 23, 2025
AI: Yes, Intellectual Work Will Succumb, Just Sooner Rather Than Later
January 22, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Has AI innovation stalled? Nope. “It’s Getting Harder to Measure Just How Good AI Is Getting” explains:
OpenAI’s end-of-year series of releases included their latest large language model (LLM), o3. o3 does not exactly put the lie to claims that the scaling laws that used to define AI progress don’t work quite that well anymore going forward, but it definitively puts the lie to the claim that AI progress is hitting a wall.
Okay, that proves that AI is hitting the gym and getting pumped.
However, the write up veers into an unexpected calcified space:
The problem is that AIs have been improving so fast that they keep making benchmarks worthless. Once an AI performs well enough on a benchmark we say the benchmark is “saturated,” meaning it’s no longer usefully distinguishing how capable the AIs are, because all of them get near-perfect scores.
What is wrong with the lack of benchmarks? Nothing. Smart software is probabilistic. How accurate is the weather forecast? Ask a wonk at the National Weather Service and you get quite optimistic answers. Ask a child whose birthday party at the park was rained out on a day Willie the Weatherman said it would be sunny, and you get a different answer.
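For what it is worth, the “saturation” idea in the quoted passage can be shown with a toy calculation; every score below is invented:

```python
# Toy illustration of benchmark "saturation." When every model scores near
# the ceiling, the benchmark stops separating them. All scores are invented.

def spread(scores) -> float:
    """Gap between the best and worst score on one benchmark."""
    return max(scores) - min(scores)

fresh_benchmark = [0.58, 0.71, 0.85]       # still discriminates between models
saturated_benchmark = [0.97, 0.98, 0.99]   # everyone is near perfect

print(round(spread(fresh_benchmark), 2))      # 0.27
print(round(spread(saturated_benchmark), 2))  # 0.02
```

A spread near zero means the test no longer tells you which model is more capable, only that all of them have outgrown the test.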
Okay, forget measurements. Here’s what the write up says will happen, and the prediction sounds really rock solid just like Willie the Weatherman:
The way AI is going to truly change our world is by automating an enormous amount of intellectual work that was once done by humans…. Like it or not (and I don’t really like it, myself; I don’t think that this world-changing transition is being handled responsibly at all) none of the three are hitting a wall, and any one of the three would be sufficient to lastingly change the world we live in.
Follow the argument? I must admit that jumping from “AI is getting good,” to “we cannot measure good,” to “humans will be replaced because AI can do intellectual work” is quite a journey. Perhaps I am missing something, but:
- Just because people outside of research labs have smart software that seems to work like a smart person, what about those hallucinatory outputs? Yep, today’s models make stuff up because probability dictates the output.
- Use cases for smart software doing “intellectual work” are where in the write up? They aren’t there, because Vox doesn’t have any that are comfortable for journalists and writers who can be replaced by the SEO AI’s advertised on Telegram search engine optimization channels or by marketers writing for Forbes Magazine. That’s right. Excellent use cases are smart software killing jobs once held by fresh MBAs or newly minted CFAs. Why? Cheaper, and as long as the models are “good enough” to turn a profit, let ‘em rip. Yahoooo.
- Smart software is created by humans, and humans shape what it does, how it is deployed, and care not a whit about the knock-on effects. Technology operates in the hands of humans. Humans are deeply flawed entities. Mother Teresas are outnumbered by street gangs in Reno, Nevada, based on my personal observations of that fine city.
Net net: Vox, which can and will be replaced by a cheaper and good-enough alternative, doesn’t want to raise that issue. Instead, Vox wanders around the real subject. That subject is that as those who drive AI figure out how to use what’s available and good enough, certain types of work will be pushed into the black boxes of smart software. Could smart software have written this essay? Yes. Could it have done a better job? Publications like the supremely weird Buzzfeed and some consultants I know sure like “good enough.” As long as it is cheap, AI is a winner.
Stephen E Arnold, January 22, 2025
Microsoft and Its Me-Too Interface for Bing Search
January 22, 2025
Bing will never be Google, but Microsoft wants its search engine to dominate queries. Microsoft Bing has a small percentage of Internet searches, and in a bid to gain more traction it has copied Google’s user interface (UI). Windows Latest spills the tea on the UI copying: “Microsoft Bing Is Trying To Spoof Google UI When People Search Google.com.”
Google’s UI is very distinctive with its minimalist approach. The only items on the Google UI are the query box and menus along the top and bottom of the page. Microsoft Edge is Microsoft’s Web browser, and it is programmed to use Bing. In a sneaky (and genius) move, when Edge users type Google into the Bing search box they are taken to a UI that is strangely Google-esque. Microsoft is trying this new UI to lower the Bing bounce rate, the share of users who leave.
Is it an effective tactic?
“But you might wonder how effective this idea would be. Well, if you’re a tech-savvy person, you’ll probably realize what’s going on, then scroll and open Google from the link. However, this move could keep people on Bing if they just want to use a search engine. Google is the number one search engine, and there’s a large number of users who are just looking for a search engine, but they think the search engine is Google. In their mind, the two are the same. That’s because Google has become a synonym for search engines, just like Chrome is for browsers. A lot of users don’t really care what search engine they’re using, so Microsoft’s new practice, which might appear stupid to some of you, is likely very effective.”
For unobservant users and/or those who don’t care, it will work. Microsoft is also tugging on heartstrings with another tactic:
“On top of it, there’s also an interesting message underneath the Google-like search box that says “every search brings you closer to a free donation. Choose from over 2 million nonprofits.” This might also convince some people to keep using Bing.”
What a generous and genius interface innovation. We’re not sure this is the interface everyone sees, but we love the me-too approach from user-centric big tech outfits.
Whitney Grace, January 22, 2025
And 2024, a Not-So-Wonderful Year
January 22, 2025
Every year has tech failures; some of them join the zeitgeist as cultural phenomena like Windows Vista, Windows Me, Apple’s Pippin game console, chatbots, etc. PC Mag runs down the flops in “Yikes: Breaking Down the 10 Biggest Tech Fails of 2024.” The list starts with Intel’s horrible year, with its booted CEO and poor chip performance. It follows up with the Salt Typhoon hack that proved (not that we didn’t already know it from TikTok) China is spying on every US citizen, with a focus on bigwigs.
National Public Data lost 272 million Social Security numbers to a hacker. That was a great day in summer for the hacker, but the summer travel season became a nightmare when a faulty CrowdStrike kernel update grounded over 2,700 flights and practically locked down the US borders. Microsoft’s Recall, an AI search tool that took snapshots of user activity that could be recalled later, was a concern. What if passwords and other sensitive information were recorded?
The fabulous Internet Archive was hacked and taken down by a bad actor to protest the Israel-Gaza conflict. It makes us worry about preserving Internet and other important media history. Rabbit and Humane released AI-powered hardware that was supposed to be a hands-free way to use a digital assistant, but they failed. JuiceBox ended software support on its EV car chargers, while Scarlett Johansson’s voice was stolen by OpenAI for its Voice Mode feature. She sued.
The worst of the worst is this:
“Days after he announced plans to acquire Twitter in 2022, Elon Musk argued that the platform needed to be “politically neutral” in order for it to “deserve public trust.” This approach, he said, “effectively means upsetting the far right and the far left equally.” In March 2024, he also pledged to not donate to either US presidential candidate, but by July, he’d changed his tune dramatically, swapping neutrality for MAGA hats. “If we want to preserve freedom and a meritocracy in America, then Trump must win,” Musk tweeted in September. He seized the @America X handle to promote Trump, donated millions to his campaign, shared doctored and misleading clips of VP Kamala Harris, and is now working closely with the president-elect on an effort to cut government spending, which is most certainly a conflict of interest given his government contracts. Some have even suggested that he become Speaker of the House since you don’t have to be a member of Congress to hold that position. The shift sent many X users to alternatives like Bluesky, Threads, and Mastodon in the days after the US election.”
Let’s assume PC Mag is on the money. Will the influence of the Leonardo da Vinci of modern times make everything better? Absolutely. I mean the last SpaceX rocket almost worked. No Tesla has exploded in my neighborhood this week. Perfect.
Whitney Grace, January 22, 2025
Why Ghost Jobs? Answer: Intelligence
January 21, 2025
Prepared by a still-alive dinobaby.
A couple of years ago, an intelware outfit’s US “president” contacted me. He was curious about the law enforcement and intelligence markets’ appetite for repackaged Maltego, some analytics, and an interface with some Palantir-type bells and whistles. I explained that I charged money to talk because, as a former blue-chip consultant, billing is in my blood. I don’t have platelets. I have Shrinky-dink invoices. Add some work, and these Shrinky-dinks blow up to big juicy invoices. He disconnected.
A few weeks later, he sent me an email. He wanted to pick up our conversation because the other people he thought knew something about selling software to the US government did not understand that his company emerged from a spy shop. I was familiar with the issues: a non-US company, ties to a high-power intelligence operation, an inability to explain whether the code was secure, and the charming attitude of many intelligence professionals who go from A to B without much thought about some social conventions.
The fellow wanted to know how one could obtain information about a competitor; specifically, what was the pricing spectrum. It is too bad the owner of the company dumped the startup and headed to the golf course. If that call came to me today, I would point him at this article: “1 in 5 Online Job Postings Are Either Fake or Never Filled, Study Finds.” Gizmodo has explained one reason why there are so many bogus jobs offering big bogus salaries and promising big bogus benefits.
The answer is obvious when viewed from my vantage point in rural Kentucky. The objective is to get a pile of résumés, then filter through them looking for people who might have some experience (current or past) at a company of interest to the job advertiser. What? Isn’t that illegal? I don’t know, but the trick has been used for a long, long time. Headhunting is a tricky business, and it is easy for someone to post a job opening and gather information from individuals who want to earn money.
What’s the write up say?
The Wall Street Journal cites internal data from the hiring platform Greenhouse that shows one in five online job postings—or between 18% and 22% of jobs advertised—are either fake or never filled. That data was culled from Greenhouse’s proprietary information, which the company can access because it sells automated software that helps employers fill out job postings. The “ghost job” phenomenon has been growing for some time—much to the vexation of job-seekers.
Okay, snappy. Ghost jobs. But the number seems low to me.
The article fails to note the intelligence angle, however. It concludes:
The plague of such phantom positions has led some platforms to treat job postings in very much the same way that other online content gets treated: as either A) verified or B) potential misinformation. Both Greenhouse and LinkedIn now supply a job verification service, the Journal writes, which allows users to know whether a position is legit or not. “It’s kind of a horror show,” Jon Stross, Greenhouse’s president and co-founder, told the Journal. “The job market has become more soul-crushing than ever.”
I think a handful of observations may be warranted:
- Somehow the education of job seekers has ignored the importance of sanitizing the résumé so no information is provided to an unknown entity from whom there is likely to be zero response. Folks, this is data collection. Volume is good.
- Interviews are easier than ever. Fire up Zoom and hit the record button. The content of the interview can be reviewed and analyzed for tasty little info-nuggets.
- The process is cheap, easy, and safe. Getting some information can be quite tricky. Post an advertisement on a service and wait. Some podcasts brag about how many responses their help wanted ads generate in as little as a few hours. As I said, cheap, easy, and safe.
What can a person do to avoid this type of intelligence gathering activity? Sorry. I have some useful operational information, but those little platelet-sized invoices are just so eager to escape this dinobaby’s body. What’s amazing is that this ploy is news, just as it was to the intelware person who was struggling to figure out some basics about selling to the government. Recycling open source software and pretending that it was an “innovation” was more important than trying to hire a former US government procurement officer based in the DC area with a minimum of 10 years in software procurement. We have a situation where professional intelligence officers, job seekers, and big-time journalists have the same level of understanding about how to obtain high-value information quickly and easily. Amazing what a dinobaby knows, isn’t it?
Stephen E Arnold, January 21, 2025
Sonus, What Is That Painful Sound I Hear?
January 21, 2025
Sonos CEO Swap: Are Tech Products Only As Good As Their Apps?
Lawyers and accountants leading tech firms, please, take note: The apps customers use to manage your products actually matter. Engadget reports, “Sonos CEO Patrick Spence Falls on his Sword After Horrible App Launch.” Reporter Lawrence Bonk writes:
“Sonos CEO Patrick Spence is stepping down from the company after eight years on the job, according to reporting by Bloomberg. This follows last year’s disastrous app launch, in which a redesign was missing core features and was broken in nearly every major way. The company has tasked Tom Conrad to steer the ship as interim CEO. Conrad is a current member of the Sonos board, but was a co-founder of Pandora, VP at Snap and product chief at, wait for it, the short-lived video streaming platform Quibi. He also reportedly has a Sonos tattoo. The board has hired a firm to find a new long-term leader.”
Conrad told employees that “we” let people down with the terrible app. And no wonder. Bonk explains:
“The decision to swap leadership comes after months of turmoil at the company. It rolled out a mobile app back in May that was absolutely rife with bugs and missing key features like alarms and sleep timers. Some customers even complained that entire speaker systems would no longer work after updating to the new app. It was a whole thing.”
Indeed. And despite efforts to rekindle customer trust, the company is paying the price of its blunder. Its stock price has fallen about 13 percent, revenue tanked 16 percent in the fiscal fourth quarter, and it has laid off more than 100 workers since August. The firm’s chief product officer is also leaving. Will the CEO swap help Sonos recover? As he takes the helm, Conrad vows a return to basics. At the same time, he wants to expand Sonos’ products. Interesting combination. Meanwhile, the search continues for a more permanent replacement.
Cynthia Murrell, January 21, 2025
AWS and AI: Aw, Of Course
January 21, 2025
Matt Garman Interview Reveals AWS Perspective on AI
It should be no surprise that AWS is going all in on artificial intelligence. Will Amazon become an AI winner? Sure, if it keeps those managing the company’s third-party reseller program away from AWS. Nilay Patel, The Verge‘s Editor-in-Chief, interviewed AWS head Matt Garman. He explains “Why CEO Matt Garman Is Willing to Bet AWS on AI.” Patel writes:
“Matt has a really interesting perspective for that kind of conversation since he’s been at AWS for 20 years — he started at Amazon as an intern and was AWS’s original product manager. He’s now the third CEO in just five years, and I really wanted to understand his broad view of both AWS and where it sits inside an industry that he had a pivotal role in creating. … Matt’s perspective on AI as a technology and a business is refreshingly distinct from his peers, including those more incentivized to hype up the capabilities of AI models and chatbots. I really pushed Matt about Sam Altman’s claim that we’re close to AGI and on the precipice of machines that can do tasks any human could do. I also wanted to know when any of this is going to start returning — or even justifying — the tens of billions of dollars of investments going into it. His answers on both subjects were pretty candid, and it’s clear Matt and Amazon are far more focused on how AI technology turns into real products and services that customers want to use and less about what Matt calls ‘puffery in the press.'”
What a noble stance within a sea of AI hype. The interview touches on topics like AWS’ domination of streaming delivery, its partnerships with telco companies, and problems of scale as AWS continues to balloon. Garman also compares the shift to AI to the shift from typewriters to computers. See the write-up for more of their conversation.
Cynthia Murrell, January 21, 2025
AI Doom: Really Smart Software Is Coming So Start Being Afraid, People
January 20, 2025
Prepared by a still-alive dinobaby.
The essay “Prophecies of the Flood” gathers several comments about software that thinks and decides without any humans fiddling around. The “flood” metaphor evokes the streams of money about which money people fantasize. The word “flood” also evokes the Hebrew Bible’s divinely initiated cataclysm intended to cleanse the Earth of widespread wickedness. Plus, one cannot overlook the image of small towns in North Carolina inundated in mud and debris from a very bad storm.
When the AI flood strikes as a form of divine retribution, will the modern ark be filled with humans? Nope. The survivors will be those smart agents infused with even smarter software. Tough luck, humanoids. Thanks, OpenAI, I knew you could deliver art that is good enough.
To sum up: A flood is bad news, people.
The essay states:
the researchers and engineers inside AI labs appear genuinely convinced they’re witnessing the emergence of something unprecedented. Their certainty alone wouldn’t matter – except that increasingly public benchmarks and demonstrations are beginning to hint at why they might believe we’re approaching a fundamental shift in AI capabilities. The water, as it were, seems to be rising faster than expected.
The signs of darkness, according to the essay, include:
- Rising water in the generally predictable technology stream in the park populated with ducks
- Agents that “do” something for the human user or another smart software system. To humans with MBAs, art history degrees, and programming skills honed at a boot camp, the smart software is magical. Merlin wears a gray T-shirt, sneakers, and faded denims
- Nifty art output in the form of images and — gasp! — videos.
The essay concludes:
The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.
Let’s assume that I buy this analysis and agree with the notion “prepare now.” How realistic is it that the United Nations, a couple of super powers, or a motivated individual can have an impact? Gentle reader, doom sells. Examples include The Big Short: Inside the Doomsday Machine, The Shifts and Shocks: What We’ve Learned – and Have Still to Learn – from the Financial Crisis, and Too Big to Fail: How Wall Street and Washington Fought to Save the Financial System from Crisis – and Themselves, and others, many others.
Have these dissections of problems had a material effect on regulators, elected officials, or the people in the bank down the street from your residence? Answer: Nope.
Several observations:
- Technology doom works because innovations have positive and negative impacts. No one is exactly sure what the knock-on effects will be; that uncertainty is part of what makes technology exciting. Therefore, doom comes along with the good parts.
- Taking a contrary point of view creates opportunities to engage with those who want to hear something different. Insecurity is a powerful sales tool.
- Sending messages about future impacts pulls clicks. Clicks are important.
Net net: The AI revolution is a trope. Never mind that, after decades of researchers’ work, a “revolution” has arrived. Lionel Messi allegedly said, “It took me 17 years to become an overnight success.” (Mr. Messi is a highly regarded professional soccer player.)
Will the ill-defined technology kill humans? Answer: Who knows. Will humans using ill-defined technology like smart software kill humans? Answer: Absolutely. Can “anyone” or “anything” take an action to prevent AI technology from rippling through society? Answer: Nope.
Stephen E Arnold, January 20, 2025