A Decade after WeChat a Marketer Touts OpenAI as the Everything App
June 10, 2025
Just a dinobaby and no AI: How horrible an approach?
Lester thinks OpenAI will become the Internet. Okay, Lester, are you on the everything app bandwagon? That buggy rolled in China and became one of the little engines that could for social scoring. “How ChatGPT Could Replace the Internet As We Know It” provides quite a bit about Lester. Zipping past the winner prose, I noted this passage:
In fact, according to Khyati Hooda of Keywords Everywhere, ChatGPT handles 54% of queries without using traditional search engines. This alarming stat indicates a shift in how users seek information. As the adoption grows and ChatGPT cements itself as the single source of information, the internet as we know it becomes kinda pointless.
One question: Where does the information originate? From intercepted mobile communications, from nifty listening devices like smart TVs, or from WeChat-style methods? The jump from the Internet to an everything app is a nifty way to state that everything is reducible to bits. Get the bits, get the “information.”
Lester says:
Basically, ChatGPT is cutting out the middleman, but what’s even scarier is that it’s working. ChatGPT reached 1 million users in just 5 days and has 400 million weekly active users as of early 2025, making it the fastest-growing consumer app in history. The platform receives over 5.19 billion visits per month, ranking as the 8th most visited website in the world.
He explains:
What started as a chatbot has become a platform where people book travel, plan meals, write emails, create schedules, and even do homework. Surveys show that around 80% of ChatGPT users leverage it for professional tasks such as drafting emails, creating reports, and generating marketing content. This marks a fundamental shift in how we engage with the internet, where more everyday tasks move from web browsing to a prompt.
How likely is this shift, Lester? Lester responds in a ZDNet-type way:
I wouldn’t be surprised if ChatGPT added a super agent that does tasks autonomously by December of this year. Amazed? Sure. But surprised? Nah. It’s not hard to imagine a near future where ChatGPT doesn’t just replace the internet but OpenAI becomes the foundation for future companies, in the same way that roads became the foundation for civilization.
Lester interprets the shift as mostly good news. Jobs will be created. There are a few minor problems; for instance, retraining and changing business models. Otherwise, Lester does not see too many new problems. In fact, he makes his message clear:
If you stand still, never evolve, never improve your skills, and never push yourself to be better, life will decimate you like a gorilla vs 100 men.
But what if the gorilla is Google? And that Google creature has friends like Microsoft and others. A super human like Elon Musk or Pavel Durov might jump into the fray against the men, presumably from OpenAI.
Convergence and collapsing to an “everything” app is logical. However, humans are not logical. Plus, smart software has several limitations. These include cost, energy requirements, access to information, pushback from humans who cannot be or do not want to be “retrained,” and making stuff up (you know, hallucinations like gluing cheese on pizza).
Net net: Old school search is now wearing a new furry suit, but WeChat and Telegram are existing “everything” apps. Mr. Musk and Sam AI-Man know or sense there is a future in co-opting the idea, bolting on smart software, and hitting the marketing start button. However, envisioning and pulling off are two different things. China allegedly helped WeChat think about its role; Telegram’s founder visited Russia dozens of times prior to his arrest in France. What nation state will husband a Western European or American “everything” app?
Mr. Musk has a city in Texas. Perhaps that’s why he has participated in a shadow dance with Telegram?
Lester, you have identified the “everything” app. Good work. Don’t forget WeChat débuted in 2011. Telegram rolled out in 2013. Now, more than a decade later, the “everything” app is the next big thing. Okay. But who is the “we” in the essay’s title? It is not this dinobaby.
Stephen E Arnold, June 10, 2025
Will the EU Use an AI Agent to Automate Fines?
June 10, 2025
Just a dinobaby and no AI: How horrible an approach?
Apple, at least to date, has not demonstrated adeptness in lashing smart software to its super secure and really user friendly system. How many times do I have to dismiss “log in to iCloud” and “log in to Facetime”? How frequently will Siri wander in dataspace? How often do I have to dismiss “two factor authentication” for the old iPad I use to read Kindle books? How often? The answer is, “As many times as the European Union will fine the company for failure to follow its rules, guidelines, laws, and special directives.”
I read “EU Ruling: Apple’s App Store Still in Violation of DMA, 30 Days to Comply” and I really don’t know what Apple has blown off. I vaguely recall that the company ignored a court order in the US. However, the EU is not the US, and the EU can make life quite miserable for the company, its employees residing in the EU, and its contractors with primary offices in member countries. The tools can be trivial: A bit of friction at international airports. The machinery can also become quite Byzantine when financial or certification activities are involved, which can be quite entertaining to an observer.
The write up says:
Following its initial €500 million fine in April, the European Commission is now giving Apple 30 days to fully align its App Store rules with the Digital Markets Act (DMA). If it fails to comply, the EU says it will start imposing “periodic penalty payments” until Apple [follows the rules]…
For me, the operative word is “periodic.” I think it means a phenomenon that repeats at regular intervals of time. Okay, a fine like the most recent €500 million would just recur in a heartbeat fashion. One example would be every month. After one year, the fines total €6,000,000,000. What happens if the EU gets frisky after a bottle of French burgundy from a very good year? The fine could be levied for each day in a calendar year and amount to €182,500,000,000 or one hundred eighty-two and a half billion euros. Even for a high flier like Apple and its pilot Tim Apple, stakeholders might suggest, “Just obey the law, please.”
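To keep the zeros straight, here is a back-of-the-envelope sketch in Python. Only the €500 million figure comes from the EU’s April fine; the cadences are hypothetical, my own what-if arithmetic.

```python
# What-if totals for a repeating 500 million euro penalty.
# Only the 500M figure comes from the EU's April fine; the cadences are hypothetical.
FINE_EUR = 500_000_000

periods_per_year = {
    "monthly": 12,
    "daily": 365,
    "per minute": 365 * 24 * 60,  # 525,600 minutes in a non-leap year
}

for cadence, periods in periods_per_year.items():
    print(f"{cadence:>10}: EUR {FINE_EUR * periods:,} per year")
```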
I wonder if the EU might consider using Telegram bots to automate the periodic fines. The system developed by France’s favorite citizen Pavel Durov is robust, easily extensible, and essentially free. The “FineApple_bot” could fire on a schedule and message Tim Apple, his Board of Directors, the other “leadership” of Apple, and assorted news outlets. The free service operates quickly enough for most users, but by paying a nominal monthly fee, the FineApple_bot could issue 1,000 instructions a second. But that’s probably overkill unless the EU decides to fine Apple by the minute. In case you were wondering, the annual fine would then be in the neighborhood of €262,800,000,000,000 (or two hundred sixty-two trillion eight hundred billion euros).
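For what it is worth, the bot part is trivial. Below is a minimal sketch, assuming a hypothetical bot token, channel, and cadence; it uses Telegram’s standard Bot API sendMessage endpoint, nothing bespoke from Mr. Durov.

```python
import time
import requests

BOT_TOKEN = "123456:EXAMPLE-TOKEN"   # hypothetical token from @BotFather
CHAT_ID = "@fineapple_notices"       # hypothetical channel for Tim Apple et al.
FINE_EUR = 500_000_000
PERIOD_SECONDS = 30 * 24 * 60 * 60   # assume a roughly monthly cadence

def send_fine_notice(total_so_far: int) -> None:
    # Standard Telegram Bot API call: POST /bot<token>/sendMessage
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={
            "chat_id": CHAT_ID,
            "text": f"Periodic DMA penalty issued. Running total: EUR {total_so_far:,}",
        },
        timeout=10,
    )

total = 0
while True:
    total += FINE_EUR
    send_fine_notice(total)
    time.sleep(PERIOD_SECONDS)
```

Point the channel at the right inboxes and the EU’s clerical overhead drops to roughly zero.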
My hunch is that despite Apple’s cavalier approach to court orders, some less intransigent professional in the core of Apple would find a way to resolve the problem. But I personally quite like the Telegram bot approach.
Stephen E Arnold, June 10, 2025
Google Places a Big Bet, and It May Not Pay Off
June 10, 2025
Just a dinobaby and no AI: How horrible an approach?
Each day brings more AI news. I have a video playing in the background called “The AI Math That Left Number Theorists Speechless.” That word “speechless” does not apply because the interlocutor and the math whiz are chatty Cathies. The video runs a little less than two hours. Speechless? No, when it comes to smart software some people become verbose and excited. I like to be verbose. I don’t like to get excited about artificial intelligence. I am a dinobaby, remember?
I clicked on the first item in my trusty Overflight service and this write up greeted me: “Google Is Burying the Web Alive.” How does one “bury” a digital service? I assumed or inferred that the idea is that the alleged multi-monopoly Google was going to create another monopoly for itself anchored in AI.
The write up says:
[AI Overviews are] Google’s “most powerful AI search, with more advanced reasoning and multimodality, and the ability to go deeper through follow-up questions and helpful links to the web,” the company says, “breaking down your question into subtopics and issuing a multitude of queries simultaneously on your behalf.” It’s available to everyone. It’s a lot like using AI-first chatbots that have search functions, like those from OpenAI, Anthropic, and Perplexity, and Google says it’s destined for greater things than a small tab. “As we get feedback, we’ll graduate many features and capabilities from AI Mode right into the core Search experience,” the company says.
Let’s slow down the buggy. A completely new product or service has some baggage on board. Remember “New Coke”? Quite a few people liked “old Coke.” The company figured it out, innovated, and finally just started buying beverage outfits that were pulling in new customers. Then there is the old chestnut by the buggy stand which says, “Most start-ups fail.” Finally, there is the shadow of impatient stakeholders. Fail to keep those numbers up, and consequences manifest themselves.
The write up gallops forward:
From the very first use, however, AI Mode crystallized something about Google’s priorities and in particular its relationship to the web from which the company has drawn, and returned, many hundreds of billions of dollars of value. AI Overviews demoted links, quite literally pushing content from the web down on the page, and summarizing its contents for digestion without clicking…
Those clicks make Google’s money flow. It does not matter if the user clicks to view a YouTube short or clicks to view a Web page about a vacation rental. Clicks equal revenue. Fewer clicks may translate to less revenue. If this is true, then what happens?
The write up suggests an answer: The good old Web is marginalized. Kaput. Dead as a doornail:
(Of course, Google is already working on ads for both Overviews and AI Mode.) In its drive to embrace AI, Google is further concealing the raw material that fuels it, demoting links as it continues to ingest them for abstraction. Google may still retain plenty of attention to monetize and perhaps keep even more of it for itself, now that it doesn’t need to send people elsewhere; in the process, however, it really is starving the web that supplies it with data on which to train and from which to draw up-to-date details. (Or, one might say, putting it out of its misery.)
As a dinobaby, I quite like the old Web. Again we have a giant company doing something “new” and “different.” How will those bold innovations work out? That’s the $64 question (a rigged game show, my mother told me).
The article concludes:
In any case, the signals from Google — despite its unconvincing suggestions to the contrary — are clear: It’ll do anything to win the AI race. If that means burying the web, then so be it.
Whoa, Nellie!
Let’s think about what the Google is allegedly doing. First, the Google is spending money to index the “Web.” My team tells me that Google is indexing less thoroughly than it was 10 years ago. Google indexes where the traffic is, and quite a bit of that traffic is to Google itself. The losers have been grousing about a lack of traffic for years. I have worked with a consumer Web site since 1993, and the traffic cratered about seven years ago. Why? Google selected sites to boost because of the link between advertiser appetite and clicks. The owner of this consumer Web site cooked up a bit of jargon for what Google was doing; he called it “steering.” The idea is that Google shaped its crawls and “relevance” in order to maximize revenue from known big ad spenders.
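To make the “steering” idea concrete, here is a purely illustrative toy model in Python. It is not Google’s ranking code; the formula and weights are invented. It simply shows how a relevance score could be tilted toward pages tied to big ad spenders.

```python
# Toy illustration of "steering": tilt a relevance score toward big ad spenders.
# The scoring formula and weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float         # 0..1: how well the page matches the query
    advertiser_spend: float  # hypothetical ad spend tied to the page's owner

AD_WEIGHT = 1e-6  # invented knob: how hard spend tilts the score

def steered_score(page: Page) -> float:
    # A "neutral" engine would rank by relevance alone.
    return page.relevance + AD_WEIGHT * page.advertiser_spend

pages = [
    Page("small-vacation-rental.example", relevance=0.92, advertiser_spend=0),
    Page("big-travel-brand.example", relevance=0.75, advertiser_spend=500_000),
]

for p in sorted(pages, key=steered_score, reverse=True):
    print(f"{p.url}: score {steered_score(p):.2f}")
```

With the knob turned up, the less relevant but ad-heavy page wins the top slot, which is the consumer Web site owner’s complaint in a nutshell.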
Google is not burying anything. The company is selecting to maximize financial benefits. My experience suggests that when Google strays too far from what stakeholders want, the company will be whipped until it gets the horses under control. Second, the AI revolution poses a significant challenge for a number of reasons. Among these is the users’ desire for the information equivalent of a “dumb” mobile phone. The cacophony of digital information is too much and creates a “why bother” attitude. Google wants to respond in the hope that it can come up with a product or service that produces as much money as the old Yahoo Overture GoTo model. Hope, however, is not reality.
As a dinobaby, I think Google has a reasonably good chance of stratifying its “users.” Some will pay. Some will consume the ad-sponsored AI output. Some will find a way to get the restaurant address surrounded by advertisements.
What about AI?
I am not sure that anyone knows. Both Google and Microsoft have to find a way to produce significant and sustainable revenue from the large language model method which has come to be synonymous with smart software. The costs are massive. The use cases usually focus on firing people for cost savings until the AI doesn’t work. Then the AI supporters just hire people again. That’s the Klarna call to think clearly again.
Net net: The Google is making a big bet that it can increase its revenues with smart software. How probable is it that the “new” Google will turn out like the “New Coke”? How much of the AI hype is just a company talking into the void? The hype may be the inverse of reality. Something will be buried, and it may not be the “Web.”
Stephen E Arnold, June 10, 2025
A 30-Page Explanation from Tim Apple: AI Is Not Too Good
June 9, 2025
I suppose I should use smart software. But, no, I prefer the inept, flawed, humanoid way. Go figure. Then say to yourself, “He’s a dinobaby.”
Gary Marcus, like other experts, is putting Apple through an old-fashioned peeler. You can get his insights in “A Knock Out Blow for LLMs.” I have a different angle on the Apple LLM explainer. Here we go:
Many years ago I had some minor role to play in the commercial online database sector. One of our products seemed to be reasonably good at summarizing business and technical journal articles, academic flights of fancy, and some just straight out crazy write ups from Harvard Business Review-type publications.
I read a 30-page “research” paper authored by what appear to be some of the “aw, shucks” folks at Apple. The write up is located on Apple’s content delivery network, of course. No run-of-the-mill service is up to those high Apple standards of Tim and his orchard keepers. The paper is authored by Parshin Shojaee (who is identified as an intern who made an equal contribution to the write up), Imam Mirzadeh (Apple), Keivan Alizadeh (Apple), Maxwell Horton (Apple), Samy Bengio (Apple), and Mehrdad Farajtabar (Apple). Okay, this seems to be a very academic lineup with an intern who was making an “equal contribution” along with the high-powered horticulturists laboring on the write up.
The title is interesting: “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” In a nutshell, the paper tries to make clear that current large language models deliver inconsistent results and cannot reason in a reliable manner. When I read this title, my mind conjured up an image of AI products and services delivering on-point outputs to iPhone users. That was the “illusion” of a large, ageing company trying to keep pace with technology and applications from its competitors, the upstarts, and the nation-states doing interesting things with the admittedly-flawed large language models. But those outside the Apple orchard have been delivering something.
Here is my reaction to this document and its easy-to-read pastel charts, like the one on page 30.
One of my addled professors told me, “Also, end on a strong point. Be clear, concise, and issue a call to action.” Apple obviously believes that these charts deliver exactly what my professor told me.
I interpreted the paper differently; to wit:
- Apple announced “Apple intelligence” and then failed to ship what had been previously announced for what, a year or more?
- Siri still sucks from my point of view
- Apple reorganized its smart software team in a significant way. Why? See items 1 and 2.
- Apple runs the risk of having its many iPhone users just skip “Apple intelligence” and maybe not upgrade due to the dalliance with China, the tariff issue, and the reality of assuming that what worked in the past will be just super duper in the future.
Sorry, gardeners. A 30-page paper is not going to change reality. Apple is a big outfit. It seems to be struggling. No Apple car. An increasingly wonky browser. An approach to “settings” almost as bad as Microsoft’s. And much, much more. Coming soon will be a new iOS numbering system and more!
That’s what happens when interns contribute as much as full-time equivalents and employees. The result is a paper. Okay, good enough.
But, sorry, Tim Apple: Papers, pastel charts, and complaining about smart software will not change a failure to match marketing with what users can access.
Stephen E Arnold, June 9, 2025
Is Google Headed for the Big Computer Room in the Sky? Actually Yes It Is
June 9, 2025
Just a dinobaby and no AI: How horrible an approach?
As a freshman in college in 1962, I had seen computers like the clunky IBMs at Keystone Steel & Wire Co., where my father worked as some sort of numbers guy, a bean counter, I guessed. “Look but don’t touch,” he said, not even glancing up from his desk with two adding machines, pencils, and ledgers. I looked.
Once I convinced a professor of poetry to hire me to help him index Latin sermons, I was hooked. Next up were Digital Equipment machines. At Halliburton Nuclear a fellow named Bill Montano listened to my chatter about searching text. Then I bopped into a big blue-chip consulting firm, and there were computing machines in the different offices I visited. When I ended up at the database company in the early 1980s, I had my own Wang in my closet. There you go. A file-cabinet-sized gizmo, weird hums, and connections to terminals in my little space and to other people who could “touch” its overheated heart. Then the Internet moved from the research world into the mainstream. Zoom. Things were changing.
Computer companies arrived, surged, and faded. Then personal computer companies arrived, surged, and faded. The cadence of the computer industry was easy to dance to. As Carmen Giménez used to say on American Bandstand in 1959, “I like the beat and it is easy to dance to.” I have been tapping along and doing a little jig in the computer (online) sector for many years, around 60 I think.
I read “Google As You Know It Is Slowly Dying.” Okay, another tech outfit moving through its life cycle. Break out your copy of Elisabeth Kübler-Ross’s On Death and Dying. Jump to the Acceptance section, read it, and move on. But, no. It is time for one more “real news” write up to explain that Googzilla is heading toward its elder care facility. This is not news. If it is, fire up your Burroughs B5500 and do your inventory update.
The essay presents the obvious as “new.” The Vox write up says:
Google is dominant enough that two federal judges recently ruled that it’s operating as an illegal monopoly, and the company is currently waiting to see if it will be broken up.
From my point of view, this is an important development. Furthermore, it has nothing to do with the smart software approach to search. After two decades of doing exactly what it wanted, Google — like Apple and Meta — is in the spotlight. Those spotlights are solar powered and likely to remain on for the foreseeable future. That’s news.
In this spotlight are companies providing a “new” way to search. Since search is required to do most things online, the Google has to figure out how to respond in an intelligent way to two — count ‘em — big problems: Government actions and upstarts using Google’s own Transformer innovation.
The intersection of regulatory action and the appearance of an alternative to “search as you know it” is the same old story, just jazzed up with smart software, efficiency, the next big thing, Sky Net, and more. The write up says:
The government might not be the biggest threat to Google dominance, however. AI has been chipping away at the foundation of the web in the past couple of years, as people have increasingly turned to tools like ChatGPT and Perplexity to find information online.
My view is that it is the intersection, not the things themselves, that has created the end-of-the-line sign for the Google bullet train. Google will try to do what it has done since Backrub: Appropriate ideas like the Yahoo, Overture, and GoTo advertising methods, create a bar in which patrons (advertisers and users) pay to go in and out, and let its whiz kids, who just know so much more about the digital world, treat everyone else as a bunch of dorks. No more.
Google’s legacy is the road map for other companies lucky or skilled enough to replicate the approach. Consequently, the Google is in Code Red, announcing so many “new” products and services I certainly can’t keep them straight, and serving up a combination of hallucinatory output and irrelevant search results. The combination is problematic as the regulators close in.
The write up concludes with this statement:
In the chaotic, early days of the web, Google got popular by simplifying the intimidating task of finding things online, as the Washington Post’s Geoffrey A. Fowler points out. Its supremacy in this new AI-powered future is far less certain. Maybe another startup will come along and simplify things this time around, so you can have a user-friendly bot explain things to you, book travel for you, and make movies for you.
I disagree. Google became popular because it indexed Web sites, used some Clever ideas, and implemented processes that produced pages usually related to the user’s query. Over time, wrapper software provided Google with a way to optimize its revenue. Innovation eluded the company. In the social media “space,” Google bumbled Orkut and then continued to bumble until it pretty much gave up on killing Facebook. In the Microsoft “space,” Google created its own office suite and rolled out its cloud service. These have not had a significant impact in the enterprise market, where the river of money flows to Microsoft and whatever it calls its allegedly monopolistic-inclined services. There are other examples of outright failure.
Now the Google is just spewing smart software products. This reminds me of a person who, shortly before dying, sees bright lights and watches the past flash before them. Then the person dies. My view is that Google is having something like one of those near-death experiences. The person survives but knows exactly what death is.
Believe me, Google knows that the annoying competitors are more popular; to wit, Sam AI-Man and his ChatGPT, his vision for the “everything” app, and his rather clever deal with Telegram. And, to wit, Microsoft and its deals with most smart software companies, its software lock-in across the US Federal government, its boot camp deal with Palantir Technologies, and its mind-boggling array of ways to get access to word processing software.
Google has not proven it can deal with the confluence of regulators demanding money and lesser entities serving up products and services that capture headlines. Code Red and dozens of “new” products each infused with Gemini or whatever the name of the smart software is today is not a solution that returns Google to its glory days.
The patient is going through tough times. Googzilla may survive, but search is going to remain about finding on-point information. LLMs are a current approach that people like. By themselves, they will not kill Google or allow it to survive. Google is caught between the reality of meaningful regulatory action and innovators who are more agile.
Googzilla is old and spends some time looking for suitable elder care facilities.
Stephen E Arnold, June 9, 2025
Education in Angst: AI, AI, AI
June 9, 2025
Just a dinobaby and no AI: How horrible an approach?
Bing Crosby went through a phase in which ai, ai, ai was the groaner’s fingerprint. Now, it is educated adults worrying about smart software. AI, AI, AI. “An Existential Crisis: Can Universities Survive ChatGPT?” asks the question. The sub-title is pure cubic zirconia:
Students are using AI to cheat and professors are struggling to keep up. If an AI can do all the research and writing, what is the point of a degree?
I can answer this question. The purpose of a college degree is, in order of importance, [1] to get certified as having been accepted to and participated in a university’s activities, [2] to have fun, including but not limited to drinking, sex, and intramural sports, and [3] to meet friends who are likely to get high paying jobs, start companies, or become powerful political figures. Notice that I did not list reading, writing, and arithmetic. A small percentage of college attendees will be motivated, show up for class, do homework, and possibly discover something of reasonable importance. The others? These will be mobile phone users, adepts with smart software, and equipped with sufficient funds to drink beer and go on spring break trips.
The cited article presents this statement:
Research by the student accommodation company Yugo reveals that 43 per cent of UK [United Kingdom] university students are using AI to proofread academic work, 33 per cent use it to help with essay structure and 31 per cent use it to simplify information. Only 2 per cent of the 2,255 students said they used it to cheat on coursework.
I thought the Yugo was a quite terrible automobile, but by reading this essay, I learned that the name “Yugo” also refers to a student accommodation company. (When it comes to auto names, I quite like “No Va” or no go in Spanish. No, I did not consult ChatGPT for this translation.)
The write up says:
Universities are somewhat belatedly scrambling to draw up new codes of conduct and clarifying how AI can be used depending on the course, module and assessment.
Since when did academic institutions respond with alacrity to a fresh technical service? I would suggest that the answer to this question is, “Never.”
The “existential crisis” lingo appears to come from the non-AI-powered former vice chancellor of the University of Buckingham (Buckinghamshire, England), located near the River Great Ouse. (No, I did not need smart software to know the name of this somewhat modest “river.”)
What is an existential crisis? I have to dredge up a recollection of Dr. Francis Chivers’ lecture on the topic in the 1960s. I think she suggested something along these lines: A person is distressed about something: life, its purpose, or his or her identity.
A university is not a person and, therefore, to my dinobaby mind, not able to have an existential crisis. More appropriately, those whose livelihood depends on universities for money, employment, a peer group, social standing, or just feeling like scholarship has delivered esteem, are in crisis. The university is a collection of buildings and may have some quantum “feeling” but most structures are fairly reticent to offer opinions about what happens within their walls.
I quibble. The worriers about traditional education should worry. One of those “move fast, break things” moments has arrived to ruin the sleep of those collecting paychecks from a university. Some may worry that their side gig may be put into financial squalor. Okay, worry away.
What’s the fix, according to the cited essay? Ride out the storm, adapt, and go to meetings.
I want to offer a handful of observations:
- Higher education has been taking karate chops since Silicon Valley started hiring high school students and suggesting they don’t need to attend college. Examples of what can happen include Bill Gates and Mark Zuckerberg. “Be like them” is a siren song for some bright sparks.
- University professionals have been making up stuff for their research papers for years. Smart software has made this easier. Peer review by pals became a type of search engine optimization in the 1980s. How do I know this? Gene Garfield told me in 1981 or 1983. (He was the person who pioneered link analysis in sci-tech, peer-reviewed papers and is, therefore, one of the individuals who enabled PageRank.)
- Universities in the United States have been in the financial services business for years. Examples range from student loans to accepting funds for “academic research.” Certain schools have substantial income from these activities which do not directly transfer to high quality instruction. I myself was a “research fellow.” I got paid to do “work” for professors who converted my effort into consulting gigs. Did I mind? I had zero clue that I was a serf. I thought I was working on a PhD.* Plus, I taught a couple of classes if you could call what I did “teaching.” Did the students know I was clueless? Nah, they just wanted a passing grade and to get out of my 4 pm Friday class so they could drink beer.
Smart software snaps in quite nicely to the current college and university work flow. A useful instructional program will emerge. However, I think only schools with big reputations and winning sports teams will be the beacons of learning in the future. Smart software has arrived, and it is not going to die quickly even if it hallucinates, costs money, and generates baloney.
Net net: Change is not coming. Change has arrived.
——————–
* Note: I did not finish my PhD. I went to work at Halliburton’s nuclear unit. Why? Answer: Money. Should I have turned in my dissertation? Nah, it was about Chaucer, and I was working on kinetic weapons. Definitely more interesting to a 23-year-old.
Stephen E Arnold, June 9, 2025
Jobs for Humanoids: AI Output Checker Like a Digital Grocery Clerk
June 9, 2025
George at the Throwable Substack believes humans will forever have a place in software development. In the post, “What’s Next for Software,” the blogger argues that code maintenance will always rely on human judgement. This, he imagines, will balance out the code-creation jobs lost to AI. After all, humans will always be held liable for snafus. He writes:
“While engineers won’t be as responsible for producing code, they will be ultimately responsible for what that code does. A VP or CEO can blame an AI all they want when the system is down, but if the AI can’t solve the problem, it can’t solve the problem. And I don’t expect firing the AI will be very cathartic.”
Maybe not. But do executives value catharsis over saving money? We think they will find a way to cope. Perhaps a season pass to the opera. The post continues:
“It’s hard to imagine a future where humans aren’t the last line of defense for maintenance, debugging, incident response, etc. Paired with the above—that they’re vastly outnumbered by the quantity of services and features and more divorced from the code that’s running than ever before—being that last line of defense is a tall order.”
So tall it can never be assigned to AI? Do not bet on it. In a fast-moving, cost-driven environment, software will act more quickly. Each human layer will be replaced as technology improves. Sticking one’s head in the sand is not the way to prepare for that eventuality.
Cynthia Murrell, June 6, 2025
Who Knew? Remote Workers Are Happier Than Cube Laborers
June 6, 2025
To some of us, these findings come as no surprise. The Farmingdale Observer reports, “Scientists Have Been Studying Remote Work for Four Years and Have Reached a Very Clear Conclusion: ‘Working from Home Makes Us Happier’.” Nestled in our own environment, no commuting, comfy clothes—what’s not to like? In case anyone remains unconvinced, researchers at the University of South Australia spent four years studying the effects of working from home. Writer Bob Rubila tells us:
“An Australian study, conducted over four years and starting before the pandemic, has come up with some enlightening conclusions about the impact of working from home. The researchers are unequivocal: this flexibility significantly improves the well-being and happiness of employees, transforming our relationship with work. … Their study, which was unique in that it began before the health crisis, tracked changes in the well-being of Australian workers over a four-year period, offering a unique perspective on the long-term effects of teleworking. The conclusions of this large-scale research highlight that, despite the sometimes contradictory data inherent in the complexity of the subject, offering employees the flexibility to choose to work from home has significant benefits for their physical and mental health.”
Specifically, researchers note remote workers get more sleep, eat better, and have more time for leisure and family activities. The study also contradicts the common fear that working from home means lower productivity. Quite the opposite, it found. As for concerns over losing in-person contact with colleagues, we learn:
“Concerns remain about the impact on team cohesion, social ties at work, and promotion opportunities. Although the connection between colleagues is more difficult to reproduce at a distance, the study tempers these fears by emphasizing the stability, and even improvement, in performance.”
That is a bit of a hedge. On balance, though, remote work seems to be a net positive. An important caveat: The findings are considerably less rosy if working from home was imposed by, say, a pandemic lock-down. Though not all jobs lend themselves to remote work, the researchers assert flexibility is key. The more one’s work situation is tailored to one’s needs and lifestyle, the happier and more productive one will be.
Cynthia Murrell, June 6, 2025
YouTube Reveals the Popularity Winners
June 6, 2025
No AI, just a dinobaby and his itty bitty computer.
Another big technology outfit reports what is popular on its own distribution system. The trusted outfit knows that it controls the information flow for many Googlers. Google pulls the strings.
When I read “Weekly Top Podcast Shows,” I asked myself, “Are these data audited?” And, “Do these data match up to what Google actually pays the people who make these programs?”
I was not the only person asking questions about the much loved, alleged monopoly. The estimable New York Times wondered about some programs missing from the Top 100 videos (podcasts) on Google’s YouTube. Mediaite pointed out:
The rankings, based on U.S. watch time, will update every Wednesday and exclude shorts, clips and any content not tagged as a podcast by creators.
My reaction to the listing is that Google wants to make darned sure that it controls the information flow about what is getting views on its platform. Presumably some non-dinobaby will compare the popularity listings to other lists, possibly Apple’s misfiring list. Maybe an enthusiast will scrape the “popular” listings on the independent podcast players? Perhaps a research firm will figure out how to capture views like the now archaic logs favored decades ago by certain research firms.
Several observations:
- Google owns the platform. Google controls the data. Google controls what’s left up and what’s taken down. Google is not known for making its click data just a click away. Therefore, the listing is an example of information control and shaping.
- Advertisers, take note. Now you can purchase air time on the programs that matter.
- Creators who become dependent on YouTube for revenue are slowly being herded into the 21st century’s version of the Hollywood business model from the 1940s. A failure to conform means that the money stream could be reduced or just cut off. That will keep the sheep together in my opinion.
- As search morphs, Google is putting on its thinking cap in order to find ways to keep that revenue stream healthy and hopefully growing.
But I trust Google, don’t you? Joe Rogan does.
Stephen E Arnold, June 6, 2025
AI: The Ultimate Intelligence Phaser. Zap. You Are Now Dumber Than Before the Zap
June 6, 2025
We need smart, genuine, and kind people so we can retain the positive aspects of humanity and move forward to a better future. It might be hard to connect the previous statement with a YouTube math channel, but it won’t be after you read BoingBoing’s story: “Popular Math YouTuber 3Blue1Brown Victimized By Malicious And Stupid AI Bots.”
We know that AI bots have consumed YouTube and are battling for domination of not only the video sharing platform, but all social media. Unfortunately, these automated bots flagged a respected mathematics channel, 3Blue1Brown, which makes awesome math animations and explanations. The 3Blue1Brown team makes math easier to understand for the rest of us dunderheads. 3Blue1Brown was hit with a strike. Grant Sanderson, the channel’s creator, said:
“I learned yesterday the video I made in 2017 explaining how Bitcoin works was taken down, and my channel received a copyright strike (despite it being 100% my own content). The request seems to have been issued by a company chainpatrol, on behalf of Arbitrum, whose website says they "makes use of advanced LLM scanning" for "Brand Protection for Leading Web3 Companies" I could be wrong, but it sounds like there’s a decent chance this means some bot managed to convince YouTube’s bots that some re-upload of that video (of which there has been an incessant onslaught) was the original, and successfully issue the takedown and copyright strike request. It’s naturally a little worrying that it should be possible to use these tools to issue fake takedown requests, considering that it only takes 3 to delete an entire channel.”
Can we do a collective EEP?!
ChainPatrol.io is a notorious YouTube AI tool that patrols the platform. It “trolls” channels that make original content and hits them with “guilty until proven innocent” tags. It’s known for doing the opposite of this:
“ChainPatrol.io, the company whose system initiated the takedown, claims its "threat detection system makes use of advanced LLM scanning, image recognition, and proprietary models to detect brand impersonation and malicious actors targeting your organization.”
ChainPatrol.io responded with a generic answer:
“Hello! This was a false positive in our systems at @ChainPatrol. We are retracting the takedown request, and will conduct a full post-mortem to ensure this does not happen again. We have been combatting a huge volume of fake YouTube videos that are attempting to steal user funds. Unfortunately, in our mission to protect users from scams, false positives (very) occasionally slip through. We are actively working to reduce how often this happens, because it’s never our intent to flag legitimate videos. We’re very sorry about this! Will keep you posted on the takedown retraction.”
Helpful. Meanwhile Grant Sanderson and his fans have given ChainPatrol.io a digital cold shoulder.
Whitney Grace, June 6, 2025