Dumb Smart Software? This Is News?

January 31, 2025

A blog post written by a real and still-alive dinobaby. If there is art, there is AI in my workflow.

The prescient “real” journalists at the Guardian have a new insight: When algorithms are involved, humans get the old shaftola. I assume that Weapons of Math Destruction was not on some folks’ reading list. (O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016). That book did a reasonably good job of explaining how smart software’s math can create some excitement for mere humans. It also offered anecdotes about Amazon’s management of its team of hard-working delivery professionals, who shifted into survival tricks worthy of the wily Dane who creates Survival Russia videos for YouTube.

(Yep, he took his kids to search for graves near a gulag.) “It’s a Nightmare: Couriers Mystified by the Algorithms That Control Their Jobs” explains that smart software raises some questions. The “real” journalist explains:

This week gig workers, trade unions and human rights groups launched a campaign for greater openness from Uber Eats, Just Eat and Deliveroo about the logic underpinning opaque algorithms that determine what work they do and what they are paid. The couriers wonder why someone who has only just logged on gets a gig while others waiting longer are overlooked. Why, when the restaurant is busy and crying out for couriers, does the app say there are none available?
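The platforms keep the logic secret, so no one outside can answer the couriers. A toy scorer, however, makes the mystery less mysterious. Everything below is invented for illustration: the fields, the weights, the whole scheme. It simply shows how a dispatcher that favors proximity and acceptance history will routinely pass over the courier who has waited longest.

```python
# Hypothetical dispatch scorer. None of this reflects Uber Eats,
# Just Eat, or Deliveroo; fields and weights are invented to show
# how waiting time can be nearly irrelevant to who gets the gig.
from dataclasses import dataclass

@dataclass
class Courier:
    name: str
    minutes_waiting: float
    km_from_restaurant: float
    acceptance_rate: float  # share of recent offers accepted

def offer_score(c: Courier) -> float:
    """Higher score wins the job. Note how little waiting matters."""
    return (
        -2.0 * c.km_from_restaurant   # proximity dominates
        + 3.0 * c.acceptance_rate     # "reliable" accepters favored
        + 0.05 * c.minutes_waiting    # queue position nearly ignored
    )

couriers = [
    Courier("waited an hour", minutes_waiting=60, km_from_restaurant=3.0, acceptance_rate=0.6),
    Courier("just logged on", minutes_waiting=1, km_from_restaurant=0.4, acceptance_rate=0.95),
]
print(max(couriers, key=offer_score).name)  # prints "just logged on"
```

A rule like this would also explain the “no couriers available” puzzle: anyone scoring below some threshold simply never receives the offer, no matter how long he or she has been idling outside the restaurant.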

Confusing? To some, but to the senior managers of the organizations shifting to smart software, the cost savings are a big deal. Imagine. In Britain, a senior manager can spend a week or two in Nice, maybe Monaco? The write up reports:

The app companies say they do have rider support staffed by people and some information about the algorithms is available on their websites and when drivers are initially “onboarded”.

Of course the “app companies” say positive things. The issue is that management embraces smart software. A third-party firm is retained to advise the lawyers and accountants and possibly send one presentable information technology person to a briefing. The options are considered, and another third-party firm is retained to integrate the smart software. That third party retains a probably unpresentable IT person who can lash up some smart software to the baling-wire-and-spit enterprise software system. Bingo! The algorithms perform their magic. Oh, whom does one blame for a flawed solution? I don’t know. Just call in the lawyers.

The article explains the impact on a worker who delivers for people who cannot walk to a restaurant or the grocery:

“Every worker should understand the basis on which they are paid,” Farrar [a delivery professional] said. “But you’re being gamed into deciding whether to accept a job or not. Will I get a better offer? It’s like gambling and it’s very distressing and stressful for people. You are completely in a vacuum about how best to do the job and because people often don’t understand how decisions are being made about their work, it encourages conspiracies.”

To whom should Mr. Farrar and others shafted by math complain? Perhaps the Guardian newspaper, which is slightly less popular than TikTok or X.com, Facebook or Red Book, or BlueSky or YouTube. My suggestion would be for the Guardian to use these channels and beg for pounds or dollars like other valiant social media professionals. The person doing deliveries might want to explore working for Amazon deliveries and avail himself of Survival Russia videos when on his generous Amazon breaks. And what about the people who call a restaurant and specify at-home delivery? I would recommend getting out of that comfy lounge chair and walking to the restaurant in person. While you wait for your lovingly crafted meal at the Indian takeaway, you can read Weapons of Math Destruction.

Stephen E Arnold, January 31, 2025

AI Innovation: Writing Checks Is the Google Solution

January 30, 2025

A blog post from an authentic dinobaby. He’s old; he’s in the sticks; and he is deeply skeptical.

Wow. First, Jeff Dean gets the lateral arabesque. Then the Google shifts its smart software to the “I am a star” outfit DeepMind in the UK. Now, the cuddly Google has, according to Analytics India, pulled a fast one on the wizards laboring at selling advertising: another surprise. “Google Invests $1 Bn in Anthropic” reports:

This new investment is separate from the company’s earlier reported funding round of nearly $2 billion earlier this month, led by Lightspeed Venture Partners, to bump the company’s valuation to about $60 billion. In 2023, Google had invested $300 million in Anthropic, acquiring a 10% stake in the company. In November last, Amazon led Anthropic’s $4 billion fundraising effort, raising its overall funding to $8 billion for the company.

I thought Google was quantumly supreme. I thought Google reinvented protein stuff. I thought Google could do podcasts and fix up a person’s Gmail. I obviously was wildly off the mark. Perhaps Google’s “leadership” has taken time from writing scripts for the Sundar & Prabhakar Comedy Tour and had an epiphany. Did the sketch go like this:

Prabhakar: Did you see the slide deck for my last talk about artificial intelligence?

Sundar: Yes, I thought it was so so. Your final slide was a hoot. Did you think it up?

Prabhakar: No, I think little. I asked Anthropic Claude for a snappy joke. It worked.

Sundar: Did Jeff Dean help? Did Demis Hassabis contribute?

Prabhakar: No, just Claude Sonnet. He likes me, Sundar.

Sundar: The secret of life is honesty, fair dealing, and Code Yellow!

Prabhakar: I think Google intelligence may be a contradiction in terms. May I requisition another billion for Anthropic?

Sundar: Yes, we need to care about posterity. Otherwise, our posterity will be defined by a YouTube ad.

Prabhakar: We don’t want to take it in the posterity, do we?

Sundar: Well….

Anthropic allegedly will release a “virtual collaborator.” Google wants that, right, Jeff and Demis? Are there anti-trust concerns? Are there potential conflicts of interest? Are there fears about revenues?

Of course not.

Will someone turn off those darned flashing red and yellow lights! Innovation is tough with the sirens, the lights, the quantumly supremeness of Googleness.

Stephen E Arnold, January 30, 2025

How Does Smart Software Interpret a School Test?

January 29, 2025

A blog post from an authentic dinobaby. He’s old; he’s in the sticks; and he is deeply skeptical.

I spotted an article titled “‘Is This Question Easy or Difficult to You?’: This LSAT Reading Comprehension Question Is Breaking Brains.” Click bait? Absolutely.

Here’s the text to figure out:

Physical education should teach people to pursue healthy, active lifestyles as they grow older. But the focus on competitive sports in most schools causes most of the less competitive students to turn away from sports. Having learned to think of themselves as unathletic, they do not exercise enough to stay healthy.

Imagine you are sitting in a hot, crowded examination room. No one wants to be there. You have to choose one of the following solutions.

[a] Physical education should include noncompetitive activities.

[b] Competition causes most students to turn away from sports.

[c] People who are talented at competitive physical endeavors exercise regularly.

[d] The mental aspects of exercise are as important as the physical ones.

[e] Children should be taught the dangers of a sedentary lifestyle.

Okay, what did you select?

Well, the “correct” answer is [a], Physical education should include noncompetitive activities.

Now how did some of the LLMs or smart software do?

ChatGPT o1 settled on [a].

Claude Sonnet 3.5 spit out a page of text but did conclude that the correct answer was [a].

Gemini 1.5 Pro concluded that [a] was correct.

Llama 3.2 90B output two sentences and the correct answer, [a].
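Replicating this sort of bake-off takes little effort. Here is a minimal sketch using OpenAI’s Python SDK; the other vendors’ SDKs follow the same pattern. The model name is illustrative, and you would swap in each system you want to test:

```python
# Minimal sketch: pose the LSAT item to a model and print its pick.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

QUESTION = """Physical education should teach people to pursue healthy,
active lifestyles as they grow older. But the focus on competitive sports
in most schools causes most of the less competitive students to turn away
from sports. Having learned to think of themselves as unathletic, they do
not exercise enough to stay healthy.

Which answer, [a] through [e], follows from the passage? Reply with the
letter only.
[a] Physical education should include noncompetitive activities.
[b] Competition causes most students to turn away from sports.
[c] People who are talented at competitive physical endeavors exercise regularly.
[d] The mental aspects of exercise are as important as the physical ones.
[e] Children should be taught the dangers of a sedentary lifestyle."""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute the model under test
    messages=[{"role": "user", "content": QUESTION}],
)
print(response.choices[0].message.content)
```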

Will students use large language models for school work, tests, and real life?

Yep. Will students question or doubt the outputs? Nope.

Are the LLMs “good enough”?

Yep.

Stephen E Arnold, January 29, 2025

The Joust of the Month: Microsoft Versus Salesforce

January 29, 2025

These folks don’t seem to see eye to eye: Windows Central tells us, “Microsoft Claps Back at Salesforce—Claims ‘100,000 Organizations’ Had Used Copilot Studio to Create AI Agents by October 2024.” Microsoft’s assertion is in response to jabs from Salesforce CEO Marc Benioff, who declares, “Microsoft has disappointed everybody with how they’ve approached this AI world.” To support this allegation, Benioff points to lines from a recent MarketWatch post. A post which, coincidentally, also lauds his company’s success with AI agents. The smug CEO also insists he is receiving complaints about his giant competitor’s AI tools. Writer Kevin Okemwa elaborates:

“Benioff has shared interesting consumer feedback about Copilot’s user experience, claiming customers aren’t finding themselves transformed while leveraging the tool’s capabilities. He added that customers barely use the tool, ‘and that’s when they don’t have a ChatGPT license or something like that in front of them.’ Last year, Salesforce’s CEO claimed Microsoft’s AI efforts are a ‘tremendous disservice’ to the industry while referring to Copilot as the new Microsoft Clippy because it reportedly doesn’t work or deliver value. As the AI agent race becomes more fierce, Microsoft has seemingly positioned itself in a unique position to compete on a level playing field with key players like Salesforce Agentforce, especially after launching autonomous agents and integrating them into Copilot Studio. Microsoft claims over 100,000 organizations had used Copilot Studio to create agents by October 2024. However, Benioff claimed Microsoft’s Copilot agents illustrated panic mode, majorly due to the stiff competition in the category.”

One notable example, writes Okemwa, is Zuckerberg’s vision of replacing Meta’s software engineers with AI agents. Oh, goodie. This anti-human stance may have inspired Benioff, who is second-guessing plans to hire live software engineers in 2025. At least Microsoft still appears to be interested in hiring people. For now. Will that antiquated attitude hold the firm back, supporting Benioff’s accusations?

Mount your steeds. Fight!

Cynthia Murrell, January 29, 2025

"Real" Entities or Sock Puppets? A New Solution Can Help Analysts and Investigators

January 28, 2025

Bitext’s NAMER (shorthand for "named entity recognition") can deliver precise entity tagging across dozens of languages.

Graphs — knowledge graphs and social graphs — have moved into the mainstream since Leonhard Euler laid the foundation for graph theory in the 18th century.

With graphs, analysts can take advantage of smart software’s ability to perform Named Entity Recognition (NER), event extraction, and relationship mapping.

The problem is that humans change their names (handles, monikers, or aliases) for many reasons: Public embarrassment, a criminal record, a change in marital status, etc.

Bitext’s NER solution, NAMER, is specifically designed to meet the evolving needs of knowledge graph companies, offering exceptional features that tackle industry challenges.

Consider a person disgraced by involvement in a scheme to defraud investors in an artificial intelligence start-up. The US Department of Justice published the name of a key actor in this scheme. (Source: https://www.justice.gov/usao-ndca/pr/founder-and-former-ceo-san-francisco-technology-company-and-attorney-indicted-years). The individual was identified by the court as Valerie Lau Beckman. The official court documents used the name "Lau" to reference her involvement in a multi-million dollar scam.

However, in order to correctly identify her in social media, in subsequent news stories, and in possible public summaries on a LinkedIn-type service, generic smart software is not enough.

That’s the role of a specialized software solution. Here’s what NAMER delivers.

The system identifies and classifies entities (e.g., people, organizations, locations) in unstructured data. The system accurately links data across different sources of content. The NAMER technology can tag and link significant events (transactions, announcements) to maintain temporal relevance; for example, when Ms. Lau Beckman is discharged from the criminal process. NAMER can connect entities like Ms. Lau or Ms. Beckman to other individuals with whom she works or interacts and track her "names" as they appear in content streams.
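Bitext has not published NAMER’s interfaces, so here is only a generic sketch of the pattern the paragraph above describes: recognize PERSON entities, then link each surface form to a canonical identity via an alias table. It uses the open source spaCy library; the alias table and helper function are invented for illustration:

```python
# Generic NER-plus-alias-linking sketch (not Bitext's NAMER API).
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Invented alias table: canonical identity -> known surface forms.
ALIASES = {
    "Valerie Lau Beckman": {"valerie lau beckman", "lau beckman", "ms. lau", "lau"},
}

def link_person_mentions(text: str):
    """Tag PERSON entities, then map each mention to a canonical name."""
    doc = nlp(text)
    results = []
    for ent in doc.ents:
        if ent.label_ != "PERSON":
            continue
        mention = ent.text.lower()
        canonical = next(
            (name for name, forms in ALIASES.items() if mention in forms),
            None,  # unknown person; a production system scores candidates
        )
        results.append((ent.text, canonical))
    return results

text = "Court filings refer to Lau, while LinkedIn lists Valerie Lau Beckman."
print(link_person_mentions(text))
```

A production system layers context features (employer, location, co-mentions) on top of the lookup so that a bare "Lau" does not collide with every other Lau in the content stream.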

The licensee specifies the languages NAMER is to process, either in a knowledge base or prior to content processing via a large language model.

Access to the proprietary NAMER technology is via a local SDK, which is essential for certain types of entity analysis. NAMER can also be integrated into another system or provided as a "white label service" to enhance an intelligence system with NAMER’s unique functions. For certain use cases, the developer provides direct access to the source code of the system.

For an organization or investigative team interested in keeping data about Lau Beckman at the highest level of precision, Bitext’s NAMER is an essential service.

Stephen E Arnold, January 28, 2025

What Do DeepSeek, a Genius Girl, and Temu Have in Common? Quite a Lot

January 28, 2025

A write up from a still-living dinobaby.

The Techmeme for January 28, 2025, was mostly Deepseek territory. The China-linked AI model has roiled the murky waters of the US smart software fishing hole. A big, juicy AI creature has been pulled from the lake, and it is drawing a crowd. Here’s a small portion of the datasphere thrashing on January 28, 2025, at 0700 am US Eastern time:

[Screenshot: Techmeme headlines, January 28, 2025]

I have worked through a number of articles about this open source software. I noted its back story about a venture firm’s skunk works tackling AI. Armed with relatively primitive tools due to the US restriction of certain computer components, the small team figured out how to deliver results comparable to the benchmarks published about US smart software systems.


Genius girl uses basic and cheap tools to repair an old generator. Americans buy a new generator from Harbor Freight. Genius girl repairs the old generator, proving the benefits of a better way or a shining path. Image from the YouTube outfit which does work the American way.

The story is torn from the same playbook which produces YouTube “real life” stories like “The genius girl helps the boss to repair the diesel generator, full of power!” You can view the one-hour propaganda film at this link. Here’s a short synopsis, and I want you to note the theme of the presentation:

  1. Young-appearing female works outside
  2. She uses primitive tools
  3. She takes apart a complex machine
  4. She repairs it
  5. The machine is better than a new machine.

The videos are interesting. The message has not been deconstructed. My interpretation is:

  1. Hard working female tackles tough problem
  2. Using ingenuity and hard work she cracks the code
  3. The machine works
  4. Why buy a new one? Use what you have and overcome obstacles.

This is not the “Go west, young man” or private equity approach to cracking an important problem. It is political and cultural with a dash of Hoisin technical sauce. The video presents a message like that of “plum blossom boxing.” It looks interesting but packs a wallop.

Here’s a point that has not been getting much attention: the AI probe is designed to direct a flow of energy at the most delicate and vulnerable part of the pumped-up US artificial intelligence “next big thing” and its technology “bros.”

What is that? The answer is cost. The method has been refined by Shein and Temu by poking at Amazon. Here’s how the “genius girl” uses ingenuity.

  1. Technical papers are published
  2. Open source software is released
  3. Basic information about using what’s available is released
  4. Cost information is released.

The result is that a Chinese AI app surges to the top of downloads on US mobile stores. This is a first. Not even the TikTok service achieved this standing so quickly. The US speculators dump AI stocks. Techmeme becomes the news service for Chinese innovation.

I see this as an effective tactic for demonstrating the value of the “genius girl” approach to solving problems. And where did Chinese government leadership watch the AI balloon lose some internal pressure? How about Colombia, a three-hour plane flight from the capital of Central and South America? (That’s Miami, in the event my reference was too oblique.)

In business, cheaper and good enough are very potent advantages. The Deepseek AI play is indeed about a new twist to today’s best method of having software perform in a way that most call “smart.” But the Deepseek play is another “genius girl” play from the Middle Kingdom.

How can the US replicate the “genius girl” or the small venture firm which came up with a better idea? That’s going to be tough. While the genius girl was repairing the generator, the US AI sector was seeking more money to build giant data centers to hold thousands of exotic computing tools. Instead of repairing, the US smart software aficionados were planning on modular nuclear reactors to make the next generation of smart software like the tail fins on a 1959 pink Cadillac.

Deepseek and the “genius girl” are not about technology. Deepseek is a manifestation of the Shein and Temu method: fast cycle, cheap, and good enough. The result is an arm-flapping response from the American way of AI. Oh, does the genius girl phone home? Does she censor what she says and does?

Stephen E Arnold, January 28, 2025

China Smart, US Dumb: Some AI Readings in English

January 28, 2025

A blog post from an authentic dinobaby. He’s old; he’s in the sticks; and he is deeply skeptical.

I read a short post on Y Combinator’s Hacker News this morning (January 23, 2025). The original article is titled “Deepseek and the Effects of GPU Export Controls.” If you are interested in the poli sci approach to smart software, dive in. However, in the couple of dozen comments on Hacker News to the post, a contributor allegedly named LHL posted some useful links. I have pulled these from the comments and displayed them for your competitive intelligence large language model. On the other hand, you can read them because you are interested in what’s shaking in the Lin-gang Free Trade Zone in the Middle Kingdom:

Deepseek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

Deepseek-V3 Technical Report

Deepseek Coder V2: Breaking the Barrier of Closed Source Models in Code Intelligence

Deepseek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model

Deepseek LLM: Scaling Open-Source Language Models with Longtermism

GitHub Deepseek AI

Hugging Face Deepseek AI

First, a thanks to the poster LHL. The search string links timed out, so you may already be part of the HN herd who is looking at the generated bibliography.

Second, several observations:

  1. China has lots of people. There are numerous highly skilled mathematicians, Monte Carlo and gradient descent wonks, and darned good engineers. One should not assume that wizardry ends with big valuations and tie-ups among Oracle, OpenAI, and the savvy funder of Banjo, an intelware outfit of some repute.
  2. Computing resource constraints translate into one outcome: ingenuity. Example: Howard Flank, one of my team members, received the Information Industry Association Award decades ago for cramming a searchable index of the Library of Congress’ holdings onto the limited hardware of the day. Remember those wonderful machines of the early 1980s? Yeah, Howard did wonders with limited resources. The Chinese professionals can too and have. (Note to US government committee members: Keep Howard and similar engineering whiz kids in mind when thinking about how curtailing computer resources will stop innovation.)
  3. Deepseek’s methods are likely to find their way into some US wrapper products presented as groundbreaking AI. Nope. These innovations are enabled by an open source technology. Now what happens if an outfit like Telegram or one of the many cyber gangs which Microsoft’s Brad Smith references picks up the same open source code? Yeah. Innovation of a type that is not salubrious.
  4. The authors of the papers are important. Should these folks be cross correlated with other information about grants, academic affiliations with US institutions, and conference attendance?

In case anyone is curious, from my dinobaby point of view, the most important paper in the bunch is the one about a “mixture of experts.”
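For readers who have not tangled with the idea, “mixture of experts” means a learned gate routes each token to a small subset of specialist sub-networks, so only a fraction of the model’s parameters fire per token. A toy sketch follows; the dimensions and the top-k choice are illustrative, not Deepseek’s actual configuration:

```python
# Toy mixture-of-experts routing: a gate picks top_k experts per token.
# Dimensions and weights are illustrative, not Deepseek's real values.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

W_gate = rng.normal(size=(d_model, n_experts))               # gating weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route token vector x to its top_k experts and mix their outputs."""
    logits = x @ W_gate
    chosen = np.argsort(logits)[-top_k:]                     # best-scoring experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                                 # softmax over the chosen k
    # Only top_k of n_experts actually run; that sparsity is the cost savings.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (8,), same shape as the input
```

The cheap-and-good-enough angle falls out of the routing: a big total parameter count, a small per-token compute bill.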

Stephen E Arnold, January 28, 2025

How to Make Software Smart Like Humans

January 27, 2025

Artificial intelligence algorithms are still only as smart as they’re programmed. In other words, they’re still software, sometimes stupid pieces of software. Most AI systems are large language models (LLMs) trained on datasets that lack the human magic to make them “think” like a smart 14-year-old. That could change, says Science Daily, based on research from Linköping University: “Machine Psychology: A Bridge To General AI?”

Robert Johansson of Linköping University asserted in his dissertation that psychological learning models combined with AI could be the key to making machines smart like humans. Johansson developed the concept of Machine Psychology and explains that, unlike many people, he’s not afraid of an AI future. Artificial General Intelligence (AGI) has many positives and negatives. The technology must be carefully created, but AGI could counter many destructive societal developments.

Johansson suggests that AI developers should follow the principle-led path. He means that, through his research, he’s identified important psychological learning principles that could explain intelligence, and they could be implemented in machines. He’s used a logic system called the Non-Axiomatic Reasoning System (NARS), which is purposely designed to operate with incomplete data and limited computational power, in real time. This provides the flexibility to handle problems that arise in reality.

NARS works on limited information like a human:

“The combination of NARS and learning psychology principles constitutes an interdisciplinary approach that Robert Johansson calls Machine Psychology, a concept he was the first to coin but more actors have now started to use, including Google DeepMind. The idea is that artificial intelligence should learn from different experiences during its lifetime and then apply what it has learned to many different situations, just as humans begin to do as early as the age of 18 months — something no other animal can do.”
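For the curious, the “non-axiomatic” part is concrete. In Pei Wang’s published NAL formulas, a statement carries a frequency and a confidence computed from limited evidence, and new evidence revises the value rather than proving anything final. Here is a toy sketch of the revision rule; the robin scenario is invented for illustration:

```python
# Toy sketch of NAL truth-value revision (evidential horizon k = 1).
# The formulas follow Pei Wang's published Non-Axiomatic Logic;
# the robin scenario is invented for illustration.
K = 1.0  # evidential horizon

def revise(f1: float, c1: float, f2: float, c2: float):
    """Merge two independent judgments about the same statement."""
    w1 = K * c1 / (1 - c1)        # confidence -> amount of evidence
    w2 = K * c2 / (1 - c2)
    w = w1 + w2
    f = (f1 * w1 + f2 * w2) / w   # evidence-weighted frequency
    c = w / (w + K)               # more evidence, higher confidence, never 1.0
    return f, c

# "Robins fly": 9 of 10 sightings positive, then 4 of 5 positive.
print(revise(0.9, 10 / 11, 0.8, 5 / 6))  # ~ (0.867, 0.938)
```

Notice that confidence grows with evidence but never reaches 1.0, which is the whole point: the system commits to nothing as an axiom.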

Johansson said that it is possible machines could be as smart as humans within five years. It is a plan, but do computers have the correct infrastructure to handle that type of intelligence? Do humans have the smarts to handle smarter software?

Whitney Grace, January 27, 2025

AI Will Doom You to Poverty Unless You Do AI to Make Money

January 23, 2025

Prepared by a still-alive dinobaby.

I enjoy reading snippets of the AI doomsayers. Some spent too much time worrying about the power of Joe Stalin’s approach to governing. Others just watched the Terminator series instead of playing touch football. A few “invented” AI by cobbling together incremental improvements in statistical procedures lashed to ever-more-capable computing infrastructures. A couple of these folks know that Nostradamus became a brand and want to emulate that predictive master.

I read “Godfather of AI Explains How Scary AI Will Increase the Wealth Gap and Make Society Worse.” That is a snappy title. Whoever wrote it crafted the idea of an explainer to fear. Plus, the click bait explains that homelessness is for you too. Finally, it presents a trope popular among the elder care set. (Remember, please, that I am a dinobaby myself.) Prod a group of senior citizens at a dinner and you will hear, “Everything is broken.” Also, “I am glad I am old.” Then there is the ever popular, “Those tattoos! The check out clerks cannot make change! I don’t understand commercials!” I like to ask, “How many wars are going on now? Quick.”


Two robots plan a day trip to see the street people in Key West. Thanks, You.com. I asked for a cartoon; I got a photorealistic image. I asked for a coffee shop; I got a weird carnival setting. Good enough. (That’s why I am not too worried.)

Is society worse than it ever was? Probably not. I have had an opportunity to visit a number of countries, go to college, work with intelligent (for the most part) people, and read books whilst sitting on the executive mailing tube. Human behavior has been consistent for a long time. Indigenous people did not go to Wegman’s or Whole Paycheck. Some herded animals toward a cliff. Others harvested the food and raw materials from the dead bison at the bottom of the cliff. There were no unskilled change makers at this food delivery location.

The write up says:

One of the major voices expressing these concerns is the ‘Godfather of AI’ himself Geoffrey Hinton, who is viewed as a leading figure in the deep learning community and has played a major role in the development of artificial neural networks. Hinton previously worked for Google on their deep learning AI research team ‘Google Brain’ before resigning in 2023 over what he expresses as the ‘risks’ of artificial intelligence technology.

My hunch is that, like me, Geoffrey Hinton “worked at” Google for a good reason: money. Having departed from the land of volleyball and weird empty office buildings, he is in the doom business. His vision is that there will be more poverty. There’s some poverty in Soweto and the other townships in South Africa. The slums of Rio are no Palm Springs. Rural China is interesting as well. Doesn’t everyone want to run a business from the area in front of a wooden structure adjacent to an empty highway to nowhere? Sounds like there is some poverty around, doesn’t it?

The write up reports:

“We’re talking about having a huge increase in productivity. So there’s going to be more goods and services for everybody, so everybody ought to be better off, but actually it’s going to be the other way around. “It’s because we live in a capitalist society, and so what’s going to happen is this huge increase in productivity is going to make much more money for the big companies and the rich, and it’s going to increase the gap between the rich and the people who lose their jobs.”

The fix is to get rid of capitalism. The alternative? Kumbaya or a better version of those fun dudes Marx, Lenin, and Mao. I stayed in the “last” fancy hotel the USSR built in Tallinn, Estonia. News flash: The hotels near LaGuardia are quite a bit more luxurious.

The godfather then evokes the robot that wanted to kill a rebel. You remember this character. He said, “I’ll be back.” Of course, you will. Hollywood does not do originals.

The write up says:

Hinton’s worries don’t just stop at the wealth imbalance caused by AI too, as he details his worries about where AI will stop following investment from big companies in an interview with CBC News: “There’s all the normal things that everybody knows about, but there’s another threat that’s rather different from those, which is if we produce things that are more intelligent than us, how do we know we can keep control?” This is a conundrum that has circulated the development of robots and AI for years and years, but it’s seeming to be an increasingly relevant proposition that we might have to tackle sooner rather than later.

Yep, doom. The fix is to become an AI wizard, work at a Google-type outfit, cash out, and predict doom. It is a solid career plan. Trust me.

Stephen E Arnold, January 23, 2025

Teenie Boppers and Smart Software: Yep, Just Have Money

January 23, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I scanned the research summary “About a Quarter of U.S. Teens Have Used ChatGPT for Schoolwork – Double the Share in 2023.” Like other Pew data, the summary contained numerous numbers. I was not sufficiently motivated to dig into the methodology to find out how the sample was assembled nor how Pew got the mobile-addicted youth to provide presumably truthful answers to direct questions. But why nitpick? We are at the onset of an interesting year which will include forthcoming announcements about how algorithms are agentic and able to fuel massive revenue streams for those in the know.


Students doing their homework while their parents play polo. Thanks, MSFT Copilot. Good enough. I do like the croquet mallets and volleyball too. But children from well-to-do families have such items in abundance.

Let’s go to the videotape, as the late and colorful Warner Wolf once said to his legion of Washington, DC, fans.

One of the highlights of the summary was this finding:

Teens who are most familiar with ChatGPT are more likely to use it for their schoolwork. Some 56% of teens who say they’ve heard a lot about it report using it for schoolwork. This share drops to 18% among those who’ve only heard a little about it.

Not surprisingly, the future leaders of America embrace shortcuts. The question is, “How quickly will awareness reach 99 percent and usage nose above 75 percent?” My guesstimate is pretty quickly. Convenience and more time to play with mobile phones will drive the adoption. Who in America does not like convenience?

Another finding catching my eye was:

Teens from households with higher annual incomes are most likely to say they’ve heard about ChatGPT. For instance, 84% of teens in households with incomes of $75,000 or more say they’ve heard at least a little about ChatGPT.

I found this interesting because it appears to suggest that if a student comes from a home where money does not seem to be a huge problem, the industrious teens are definitely aware of smart software. And when it comes to using the digital handmaiden, Pew finds apparently nothing. There is no data point relating richer progeny to greater use. Instead we learned:

Teens who are most familiar with the chatbot are also more likely to say using it for schoolwork is OK. For instance, 79% of those who have heard a lot about ChatGPT say it’s acceptable to use for researching new topics. This compares with 61% of those who have heard only a little about it.

My thought is that more wealthy families are more likely to have teens who know about smart software. I would hypothesize that wealthy parents will pay for the more sophisticated smart software and smile benignly as the future intelligentsia stride confidently to ever brighter futures. Those without the money will get the opportunity to watch their classmates have more time for mobile phone scrolling, unboxing Amazon deliveries, and grabbing burgers at Five Guys.

I am not sure that the link between wealth and access to learning experiences is a random, one-off occurrence. If I am correct, the Pew data suggest that smart software is not reinforcing democracy. It seems to be making a digital Middle Ages more and more probable. But why think about what a dinobaby hypothesizes? It is tough to scroll zippy mobile phones with old paws and yellowing claws.

Stephen E Arnold, January 23, 2025

