Facebook: Always Giving Families a Boost
March 21, 2025
What parent has not erred on the side of panic? We learn of one mom who turned to Facebook in the search for her teenage daughter, who "vanished" for ten days without explanation. The daughter had last been seen leaving her workplace with a man who, the daughter later revealed, is her boyfriend. The Rakyat Post of Malaysia reports, "Mom’s Missing Teen Alert Backfires: ‘Stop Embarrassing Me, I’m Fine!’" To be fair, it can be hard to distinguish between a kidnapping and a digital cold shoulder. Writer Fernando Fong explains:
"CCTV footage from what’s believed to be the company dormitory showed Pei Ting leaving with a man around 2 PM on the 18th, carrying her bags and luggage. Since then, she has refused to answer calls or reply to WhatsApp messages, leading her mother to worry that someone might be controlling her phone. The mother said neither her elder daughter nor the employer had seen this man."
Such a scenario would alarm many a parent. The post continues:
"Desperate and frantic, the mother turned to social media as her last hope, only to be stunned when her daughter emerged from the digital shadows – not with remorse or understanding, but with embarrassment and indignation at her mother’s public display of concern."
Oops. In the comments of her mother’s worried post, the daughter identified the mystery man as her boyfriend. She also painted a picture of family conflict. Ahh, dirty laundry heaped in the virtual public square. Social media has certainly posed a novel type of challenge for parents.
Cynthia Murrell, March 21, 2025
Why Worry about TikTok?
March 21, 2025
We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.
I hope this news item from WCCF Tech is wildly incorrect. I have a nagging thought that it might be on the money. “Deepseek’s Chatbot Was Being Used By Pentagon Employees For At Least Two Days Before The Service Was Pulled from the Network; Early Version Has Been Downloaded Since Fall 2024” is the headline I noted. I find this interesting.
The short article reports:
A more worrying discovery is that Deepseek mentions that it stores data on servers in China, possibly presenting a security risk when Pentagon employees started playing around with the chatbot.
And adds:
… employees were using the service for two days before this discovery was made, prompting swift action. Whether the Pentagon workers have been reprimanded for their recent act, they might want to exercise caution because Deepseek’s privacy policy clearly mentions that it stores user data on its Chinese servers.
Several observations:
- This is a nifty example of an insider threat. I thought cyber security services blocked this type of to and fro between government computers and public servers.
- The reaction time is either months (fall of 2024 onward) or 48 hours. My hunch is that the months-long usage of an early version of the Chinese service is the more accurate measure.
- Which “manager” is responsible? Sorting out which vendors’ software did not catch this and which individual’s unit dropped the ball will be interesting and probably unproductive. Is it in any authorized vendor’s interest to say, “Yeah, our system doesn’t look for phoning home to China, but it will be in the next update if your license is paid up for that service”? Will a US government professional say, “Our bad”?
Net net: We have snow removal services that don’t remove snow. We have aircraft crashing in sight of government facilities. And we have Chinese smart software running on US government systems connected to the public Internet. Interesting.
Stephen E Arnold, March 21, 2025
Good News for AI Software Engineers. Others, Not So Much
March 20, 2025
Another dinobaby blog post. No AI involved which could be good or bad depending on one’s point of view.
Spring is on the way in rural Kentucky. Will new jobs sprout like the dogwoods? Judging from the last local business event I attended, the answer is, “Yeah, maybe not so much.”
But there is a bright spot in the AI space. “ChatGPT and Other AI Startups Drive Software Engineer Demand” says:
AI technology has created many promising new opportunities for software engineers in recent years.
That certainly appears to apply to the workers in the smart software powerhouses and the outfits racing to achieve greater efficiency via AI. (Does “efficiency” translate to non-AI specialist job reductions?)
Back to the good news. The article asserts:
Many sectors have embraced digital transformation as a means of improving efficiency, enhancing customer experience, and staying competitive. Industries like manufacturing, agriculture, and even construction are now leveraging technologies like the Internet of Things (IoT), artificial intelligence (AI), and machine learning. Software engineers are pivotal in developing, implementing, and maintaining these technologies, allowing companies to streamline operations and harness data analytics for informed decision-making. Smart farming is just one example that has emerged as a significant trend where software engineers design applications that optimize crop yields through data analysis, weather forecasting, and resource management.
Yep, the efficiency word again. Let’s not dwell on the secondary job losses, shall we? This is a good news blog post.
The essay continues:
The COVID-19 pandemic drastically accelerated the shift towards remote work. Remote, global collaboration has opened up exciting opportunities for most professionals, but software engineers are a major driving factor of that availability in any industry. As a result, companies are now hiring engineers from anywhere in the world. Now, businesses are actively seeking tech-savvy individuals to help them leverage new technologies in their fields. The ability to work remotely has expanded the horizons of what’s possible in business and global communications, making software engineering an appealing path for professionals all over the map.
I liked the “hiring engineers from anywhere in the world.” That poses some upsides like cost savings for US AI staff. That creates a downside because a remote worker might also be a bad actor laboring to exfiltrate high value data from the clueless hiring process.
Also, the Covid reference, although a bit dated, reminds people that the return to work movement is a way to winnow staff. I assume the AI engineer will not be terminated, but that may not hold for those unlucky enough to land in the crosshairs of DOGE and McKinsey-type consultants.
As I said, this is a good news write up. Is it accurate? No comment. What about efficiency? Sure, fewer humans means lower costs. What about engineers who cannot or will not learn AI? Yeah, well.
Stephen E Arnold, March 20, 2025
AI: Apple Intelligence or Apple Ineptness?
March 20, 2025
Another dinobaby blog post. No AI involved which could be good or bad depending on one’s point of view.
I read a very polite essay with some almost unreadable graphs. “Apple Innovation and Execution” says:
People have been claiming that Apple has forgotten how to innovate since the early 1980s, or longer – it’s a standing joke in talking about the company. But it’s also a question.
Yes, it is a question. Slap on your Apple goggles and look at the world from the fanboy perspective. AI is not a thing. Siri is a bit wonky. The endless requests to log in to use Facetime and other Apple services are, from an objective point of view, a bit stupid. The annual iPhone refresh. Er, yeah, now what are the functional differences again? The Apple car? Er, yeah.
Is that an innovation worm? Is that a bad apple? One possibility is that the innovation worm is quite happy making an exit and looking for a better orchard. Thanks, You.com “Creative.” Good enough.
The write up says:
And ‘Apple Intelligence’ certainly isn’t going to drive a ‘super-cycle’ of iPhone upgrades any time soon. Indeed, a better iPhone feature by itself was never going to drive fundamentally different growth for Apple
So why do something which makes the company look stupid?
And what about this passage?
And the failure of Siri 2 is by far the most dramatic instance of a growing trend for Apple to launch stuff late. The software release cycle used to be a metronome: announcement at WWDC in the summer, OS release in September with everything you’d seen. There were plenty of delays and failed projects under the hood, and centres of notorious dysfunction (Apple Music, say), and Apple has always had a tendency to appear to forget about products for years (most Apple Watch faces don’t support the key new feature in the new Apple Watch) but public promise were always kept. Now that seems to be slipping. Is this a symptom of a Vista-like drift into systemically poor execution?
Some innovation worms are probably gnawing away inside the Apple. Apple’s AI. Easy to talk about. Tough to convert marketing baloney crafted by art history majors into software of value to users in my opinion.
Stephen E Arnold, March 20, 2025
Bankman-Fried and Cooled
March 20, 2025
We are not surprised a certain tech bro still has not learned to play by the rules, even in prison. Mediaite reports, "Unauthorized Tucker Carlson Interview Lands Sam Bankman-Fried in Solitary Confinement." Reporter Kipp Jones tells us:
"FTX founder Sam Bankman-Fried was reportedly placed in solitary confinement on Thursday following a video interview with Tucker Carlson that was not approved by corrections officials. The 33-year-old crypto billionaire-turned-inmate spoke to Carlson about a wide range of topics for an interview posted on X. Bankman-Fried and the former Fox News host discussed everything from prescription drug abuse to political contributions. According to The New York Times, prison officials became aware of the interview and put the crypto fraudster in the hole."
What riveting insights were worth that risk? Apparently he has made friends with Diddy, and he passes the time playing chess. That’s nice. He also holds no animosity toward prison staff, he said, though of course "no one wants to be in prison." Perhaps during his stint in solitary, Bankman-Fried will reflect on how he can stay out when he is released in 11 to 24 years.
Cynthia Murrell, March 20, 2025
AI Checks Professors’ Work: Who Is Hallucinating?
March 19, 2025
This blog post is the work of a humanoid dino baby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you?
I read an amusing write up in Nature Magazine, a publication which does not often veer into MAD Magazine territory. The write up “AI Tools Are Spotting Errors in Research Papers: Inside a Growing Movement” has a wild subtitle as well: “Study that hyped the toxicity of black plastic utensils inspires projects that use large language models to check papers.”
Some have found that outputs from large language models often make up information. I have included references in my writings to Google’s cheese errors and lawyers submitting court documents with fabricated legal references. The main point of this Nature article is that presumably rock solid smart software will check the work of college professors, pals in the research industry, and precocious doctoral students laboring for love and not much money.
Interesting but will hallucinating smart software find mistakes in the work of people like the former president of Stanford University and Harvard’s former ethics star? Well, sure, peers and co-authors cannot be counted on to do work and present it without a bit of Photoshop magic or data recycling.
The article reports that there are two efforts underway to get those wily professors to run their “work” or science fiction through systems developed by Black Spatula and YesNoError. The Black Spatula Project emerged from tweaked research that said, “Your black kitchen spatula will kill you.” YesNoError is similar but with a crypto twist. Yep, crypto.
Nature adds:
Both the Black Spatula Project and YesNoError use large language models (LLMs) to spot a range of errors in papers, including ones of fact as well as in calculations, methodology and referencing.
Assertions and claims are good. Black Spatula markets with the assurance its system “is wrong about an error around 10 percent of the time.” The YesNoError crypto wizards “quantified the false positives in only around 100 mathematical errors.” Ah, sure, low error rates.
I loved the last paragraph of the MAD inspired effort and report:
these efforts could reveal some uncomfortable truths. “Let’s say somebody actually made a really good one of these… in some fields, I think it would be like turning on the light in a room full of cockroaches…”
Hallucinating smart software. Professors who make stuff up. Nature Magazine channeling important developments in research. Hey, has Nature Magazine ever reported bogus research? Has Nature Magazine run its stories through these systems?
Good question. Might be a good idea.
Stephen E Arnold, March 19, 2025
An Econ Paper Designed to Make Most People Complacent about AI
March 19, 2025
Yep, another dinobaby original.
I zipped through — and I mean zipped — a 60 page working paper called “Artificial Intelligence and the Labor Market.” I have to be upfront. I detested economics, and I still do. I used to take notes when Econ Talk was actually discussing economics. My notes were points that struck me as wildly unjustifiable. That podcast has changed. My view of economics has not. At 80 years of age, do you believe that I will adopt a different analytical stance? Wow, I hope not. You may have to take care of your parents some day and learn that certain types of discourse do not compute.
This paper has multiple authors. In my experience, the more authors, the more complicated the language. Here’s an example:
“Labor demand decreases in the average exposure of workers’ tasks to AI technologies; second, holding the average exposure constant, labor demand increases in the dispersion of task exposures to AI, as workers shift effort to tasks that are not displaced by AI.”
The idea is that the impact of smart software will not affect workers equally. As AI gets better at jobs humans do, humans will learn more and get a better job or integrate AI into their work. In some jobs, the humans are going to be out of luck. The good news is that these people can take other jobs or maybe start their own business.
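For readers who prefer symbols to economist prose, the quoted claim can be sketched loosely as follows. The notation here is mine, not the paper's: labor demand as a function of a worker's average task exposure to AI and the dispersion of those exposures.

```latex
% Loose sketch of the quoted result (my notation, not the paper's):
% L        = labor demand for a worker's occupation
% \bar{e}  = average exposure of the worker's tasks to AI
% \sigma_e = dispersion of task exposures across the worker's tasks
L = L(\bar{e}, \sigma_e), \qquad
\frac{\partial L}{\partial \bar{e}} < 0, \qquad
\left. \frac{\partial L}{\partial \sigma_e} \right|_{\bar{e}\ \text{fixed}} > 0
```

In words: raising average exposure lowers demand for the worker, but holding that average fixed, more unevenness across tasks raises demand, because the worker can shift effort toward the tasks AI does not displace.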
The problem with the document I reviewed is that there are several fundamental “facts of life” that make the paper look a bit wobbly.
First, the minute it is cheaper for smart software to do a job that a human does, the human gets terminated. Software does not require touchy feely interactions, vacations, pay raises, and health care. Software can work as long as the plumbing is working. Humans sleep, which is not productive from an employer’s point of view.
Second, government policies won’t work. Why? Government bureaucracies are reactive. By the time a policy arrives, the trend or the smart software revolution is already off to the races. One cannot put spilled radioactive waste back into its containment vessel quickly, easily, or cheaply. How’s that Fukushima remediation going?
Third, the reskilling idea is baloney. Most people are not skilled in reskilling themselves. Lifelong learning is not a core capability of most people. Sure, in theory anyone can learn. The problem is that most people are happy planning a vacation, doom scrolling, or watching TikTok-type videos. Figuring out how to make use of smart software capabilities is not as popular as watching the Super Bowl.
Net net: The AI services are getting better. That means that most people will be faced with a re-employment challenge. I don’t think LinkedIn posts will do the job.
Stephen E Arnold, March 19, 2025
AI: Meh.
March 19, 2025
It seems consumers can see right through the AI hype. TechRadar reports, “New Survey Suggests the Vast Majority of iPhone and Samsung Galaxy Users Find AI Useless—and I’m Not Surprised.” Both iPhones and Samsung Galaxy smartphones have been pushing AI onto their users. But, according to a recent survey, 73% of iPhone users and 87% of Galaxy users respond to the innovations with a resounding “meh.” Even more would refuse to pay for continued access to the AI tools. Furthermore, very few would switch platforms to get better AI features: 16.8% of iPhone users and 9.7% of Galaxy users. In fact, notes writer Jamie Richards, fewer than half of users report even trying the AI features. He writes:
“I have some theories about what could be driving this apathy. The first centers on ethical concerns about AI. It’s no secret that AI is an environmental catastrophe in motion, consuming massive amounts of water and emitting huge levels of CO2, so greener folks may opt to give it a miss. There’s also the issue of AI and human creativity – TechRadar’s Editorial Associate Rowan Davies recently wrote of a nascent ‘cultural genocide‘ as a result of generative AI, which I think is a compelling reason to avoid it. … Ultimately, though, I think AI just isn’t interesting to the everyday person. Even as someone who’s making a career of being excited about phones, I’ve yet to see an AI feature announced that doesn’t look like a chore to use or an overbearing generative tool. I don’t use any AI features day-to-day, and as such I don’t expect much more excitement from the general public.”
No, neither do we. If only investors would catch on. The research was performed by phone-reselling marketplace SellCell, which surveyed over 2,000 smartphone users.
Cynthia Murrell, March 19, 2025
What Sells Books? Publicity, Sizzle, and Mouth-Watering Titbits
March 18, 2025
Editor note: This post was written on March 13, 2025. Availability of the articles and the book cited may change when this appears in Mr. Arnold’s public blog.
I have heard that books are making a comeback. In rural Kentucky, where I labor in an underground nook, books are good for getting a fire started. The closest bookstore is filled with toys and odd stuff one places on a desk. I am rarely motivated to read a whatchamacallit like a book. I must admit that I read one of those emergence books from a geezer named Stuart A. Kauffman at the Santa Fe Institute, and it was pretty good. Not much in the jazzy world of social media but it was a good use of my time.
I now have another book I want to read. I think it is a slice of reality TV encapsulated in a form of communication less popular than TikTok- or Telegram Messenger-type media. The bundle of information is called Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism. Many pundits have grabbed the story of a dispute between everyone’s favorite social media company and an authoress named Sarah Wynn-Williams.
There is nothing like some good old legal action, a former employee, and a very defensive company.
The main idea is that a memoir published on March 11, 2025, and available via Amazon at https://shorturl.at/Q077l is not supposed to be sold. Like any good dinobaby who actually read a dead tree thing this year, I bought the book. I have no idea if it has been delivered to my Kindle. I know one thing. Good old Amazon will be able to reach out and kill that puppy when the news reaches the equally sensitive leadership at that outstanding online service.
A festive group ready to cook dinner over a small fire of burning books. Thanks, You.com. Good enough.
According to The Verge, CNBC, and the Emergency International Arbitral Tribunal, an arbitrator (Nicholas Gowen) decided that the book has to be put in the information freezer. According to the Economic Times:
… violated her contract… In addition to halting book promotions and sales, Wynn-Williams must refrain from engaging in or ‘amplifying any further disparaging, critical or otherwise detrimental comments’… She also must retract all previous disparaging comments ‘to the extent within her control.’
My favorite green poohbah publication The Verge offered:
…it’s unclear how much authority the arbitrator has to do so.
Such a bold statement: It’s unclear, we say.
The Verge added:
In the decision, the arbitrator said Wynn-Williams must stop making disparaging remarks against Meta and its employees and, to the extent that she can control, cease further promoting the book, further publishing the book, and further repetition of previous disparaging remarks. The decision also says she must retract disparaging remarks from where they have appeared.
Now I have written a number of books and monographs. These have been published by outfits no longer in business. I had a publisher in Scandinavia. I had a publisher in the UK. I had a publisher in the United States. A couple of these actually made revenue and one of them snagged a positive review in a British newspaper.
But in all honesty, no one really cared about my Google, search and retrieval, and electronic publishing work.
Why?
I did not have a giant company chasing me to the Emergency International Arbitral Tribunal and making headlines for the prestigious outfit CNBC.
Well, in my opinion Sarah Wynn-Williams has hit a book publicity home run. Imagine, non-readers like me buying a book about a firm to which I pay very little attention. Instead of writing about the Zuckbook, I am finishing a book (gasp!) about Telegram Messenger and that sporty baby maker Pavel Durov. Will his “core” engineering team chase me down? I wish. Sarah Wynn-Williams is in the news.
Will Ms. Wynn-Williams “win” a guest spot on the Joe Rogan podcast or possibly the MeidasTouch network? I assume that her publisher, agent, and she have their fingers crossed. I heard somewhere that any publicity is good publicity.
I hope Mr. Beast picks up this story. Imagine what he would do with forced arbitration and possibly a million dollar payoff for the PR firm that can top the publicity that Meta has apparently delivered to Ms. Wynn-Williams.
Net net: Win, Wynn!
Stephen E Arnold, March 18, 2025
A Swelling Wave: Internet Shutdowns in Africa
March 18, 2025
Another dinobaby blog post. No AI involved which could be good or bad depending on one’s point of view.
How does a government deal with information it does not like, want, or believe? The question is a pragmatic one. Not long ago, Russia suggested to Telegram that it cut the flow of Messenger content to Chechnya. Telegram has been somewhat more responsive to government requests since Pavel Durov’s detainment in France, but it dragged its digital feet. The fix? The Kremlin worked with service providers to kill off the content flow or at least as much of it as was possible. Similar methods have been used in other semi-enlightened countries.
“Internet Shutdowns at Record High in Africa As Access Weaponised” reports:
A report released by the internet rights group Access Now and #KeepItOn, a coalition of hundreds of civil society organisations worldwide, found there were 21 shutdowns in 15 African countries, surpassing the existing record of 19 shutdowns in 2020 and 2021.
There are workarounds, but some of these are expensive and impractical for the people in Comoros, Guinea-Bissau, Mauritius, Burundi, Ethiopia, Equatorial Guinea, and Kenya. I am not sure the list is complete, but the idea of killing Internet access seems to be an accepted response in some countries.
Several observations:
- Recent announcements about Google making explicit its access to users’ browser histories provide a rich and actionable pool of information. Will these types of data be used to pinpoint a dissident or a problematic individual? In my visits to Africa, including the thrilling Zimbabwe, I would suggest that the answer could be, “Absolutely.”
- Online is now pervasive, and due to a lack of meaningful regulation, the idea of going online and sharing information is a negative. In the late 1980s, I gave a lecture for ASIS at Rutgers University. I pointed out that flows of information work like silica grit in a sand blasting device to remove rust in an autobody shop. I can say from personal experience that no one knew what I was talking about. In 40 years, people and governments have figured out how online flows erode structures and social conventions.
- The trend of shutdowns is now in the playbook of outfits around the world. Commercial companies can play the game of killing a service too. Certain large US high technology companies have made it clear that their service would summarily be blocked if certain countries did not play ball the US way.
As a dinobaby who has worked in online for decades, I find it interesting that the pigeons are coming home to roost. A failure years ago to recognize and establish rules and regulation for online is the same as having those lovable birds loose in the halls of government. What do pigeons produce? Yep, that’s right. A mess, a potentially deadly one too.
Stephen E Arnold, March 18, 2025