AI Forces Stack Exchange to Try a Rebranding Play
June 19, 2025
Stack Exchange is a popular question and answer Web site. Devclass reports it will soon be rebranding: “Stack Overflow Seeks Rebrand As Traffic Continues To Plummet – Which Is Bad News For Developers.”
According to Stack Overflow’s data explorer, the number of questions and answers posted in April 2025 is down 64% from April 2024 and down 90% from 2020. The company will need to rebrand because AI is changing how users learn, build, and resolve problems. Some users don’t think a rebrand is necessary, but Stack Exchange thinks differently:
“Nevertheless, community SVP Philippe Beaudette and marketing SVP Eric Martin stated that the company’s “brand identity” is causing “daily confusion, inconsistency, and inefficiency both inside and outside the business.”
Among other things, Beaudette and Martin feel that Stack Overflow, dedicated to developer Q&A, is too prominent and that “most decisions are developer-focused, often alienating the wider network.”
CEO Prashanth Chandrasekar wants the company’s focus to expand beyond a question and answer platform to include community and career pillars. The company needs to do a lot to maintain its relevancy, but Stack Overflow is still important to AI:
“The company’s search for a new direction though confirms that the fast-disappearing developer engagement with Stack Overflow poses an existential challenge to the organization. Those who have found the site unfriendly or too ready to close carefully-worded questions as duplicate or off-topic may not be sad; but it is also true that the service has delivered high value to developers over many years. Although AI may seem to provide a better replacement, some proportion of those AI answers will be based on the human-curated information posted by the community to Stack Overflow. The decline in traffic is not good news for developers, nor for the AI which is replacing it.”
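The arithmetic behind those traffic percentages is simple percent change. Here is a minimal sketch in Python; the monthly counts are hypothetical, chosen only to reproduce the article’s 64% and 90% figures, since the real totals live in Stack Exchange’s public data explorer:

```python
def percent_decline(old_count: int, new_count: int) -> float:
    """Percentage drop from an earlier monthly post count to a later one."""
    return (old_count - new_count) / old_count * 100

# Hypothetical monthly question-and-answer totals (illustrative only);
# the actual figures come from Stack Exchange's public data explorer.
april_2020 = 400_000
april_2024 = 111_000
april_2025 = 40_000

print(f"April 2024 -> April 2025: {percent_decline(april_2024, april_2025):.0f}% drop")  # ~64%
print(f"April 2020 -> April 2025: {percent_decline(april_2020, april_2025):.0f}% drop")  # 90%
```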
Stack Overflow is an important information fount, but the human side of it is its most important resource. Why not let gentle OpenAI suggest some options?
Whitney Grace, June 19, 2025
Brin: The Balloons Do Not Have Pull. It Is AI Now
June 18, 2025
It seems the nitty-gritty of artificial intelligence has lured Sergey Brin back onto the Google campus. Though he stepped away from day-to-day operations in 2019, reports eWeek, “Google’s Co-Founder in Office ‘Pretty Much Every Day’ to Work on AI.” Writer Fiona Jackson tells us:
“Google co-founder Sergey Brin made an unannounced appearance on stage at the I/O conference on Tuesday, stating that he’s in the company’s office ‘pretty much every day now’ to work on Gemini. In a chat with DeepMind CEO Demis Hassabis, he claimed this is because artificial intelligence is something that naturally interests him. ‘I tend to be pretty deep in the technical details,’ Brin said, according to Business Insider. ‘And that’s a luxury I really enjoy, fortunately, because guys like Demis are minding the shop. And that’s just where my scientific interest is.’”
We love Brin’s work ethic. Highlights include borrowing Yahoo online ad ideas, the CLEVER patent, and using product promotions as a way to satisfy some primitive human desires. The executive also believes in 60-hour work weeks—at least for employees. Jackson notes Brin is also known for the downfall of Google Glass. Though that spiffy product faced privacy concerns and an unenthusiastic public, Brin recently blamed his ignorance of electronic supply chains for the failure. Great. Welcome back. But what about the big balloon thing?
Cynthia Murrell, June 18, 2025
AI Can Do Code, Right?
June 18, 2025
Developer Jj at Blogmobly deftly rants against AI code assistants in “The Copilot Delusion.” Jj admits tools like GitHub Copilot and Claude Codex are good at some things, but those tasks are mere starting points for skillful humans to edit or expand upon. Or they should be. Instead, firms turn to bots more than they should in the name of speed. But AI gets its information from random blog posts and comment sections. Those are nowhere near the reasoning and skills of an experienced human coder. What good are briskly generated lines of code if they do not solve the problem?
Read the whole post for the strong argument for proficient humans and against overreliance on bots. These paragraphs stuck out to us:
“The real horror isn’t that AI will take our jobs. It’s that it will entice people who never wanted the job to begin with. People who don’t care for quality. It’ll remove the already tiny barrier to entry that at-least required people to try and comprehend control flow. Vampires with SaaS dreams and Web3 in their LinkedIn bio. Empty husks who see the terminal not as a frontier, but as a shovel for digging up VC money. They’ll drool over their GitHub Copilot like it’s the holy spirit of productivity, pumping out React CRUD like it’s oxygen. They’ll fork VS Code yet again, just to sell the same dream to a similarly deluded kid.”
Also:
“And what’s worse, we’ll normalize this mediocrity. Cement it in tooling. Turn it into a best practice. We’ll enshrine this current bloated, sluggish, over-abstracted hellscape as the pinnacle of software. The idea that building something lean and wild and precise, or even squeezing every last drop of performance out of a system, will sound like folklore. If that happens? If the last real programmers are drowned in a sea of button-clicking career-chasers – then I pity the smart outsider kids to come after me. Defer your thinking to the bot, and we all rot.”
Eloquently put: Good enough is now excellence.
Cynthia Murrell, June 18, 2025
Baby Naming: The AI Use Case
June 17, 2025
This is one way to use smart software, we suppose. Writer Amanda Smith at CNet reveals, “I Used AI to Brainstorm Baby Names. I Liked 1 Out of Every 10 Suggestions It Had.” While we are happy Smith and her wife are closing in on the perfect name, we are intrigued by something else this process reveals: the information ChatGPT gathers about users and the conclusions it draws. Smith writes:
“It’s best to log in to the chatbot, rather than just using the main search, so that ChatGPT can connect to its past history about you (ChatGPT shared some revealing insights about me when I asked it recently). This way, it’s not starting from scratch, and knows what kind of things you like.”
Not creepy at all. According to Smith’s screenshot, the chatbot prefaced its list with this observation:
“Based on what I know about you – a thoughtful, well-traveled, creative soul with a love for stories, global perspective, poetic yet grounded sensibilities, and a desire to raise a child who feels deeply wanted and connected—here are 10 boy baby names you might like.”
How insightful. When Smith asked where it got the idea to suggest one specific name, ChatGPT elaborated:
“You didn’t mention Cassian explicitly before – but the name fits your overall sensibility so well that I pulled it from a blend of cues you’ve given me over time. Here’s the thinking behind it:
- Your tone and taste lean literary, poetic, and soulful – you love Franzen, Yanagihara, and coming-of-age stories with depth.
- Your personal values (creative, worldly, introspective, emotionally intelligent) along with names that feel storied and thoughtful.
- Names like Atlas and Finn [names Smith had told the bot she liked] signal you like modern, adventurous, slightly offbeat names with mythology, travel, or narrative resonance.
- Your interest in cultural storytelling and spirituality made me think of names that carry historical or philosophical layers – Cassian being both the name of an early Christian monk and a name with a lyrical, almost cinematic feel.
Basically, Cassian hits that sweet spot between intellectual and emotive, rare but not strange – a name I imagined someone like you might already love.”
We are not sure whether Smith is an “Andor” fan, but we suspect more than a few of her son’s peers will be dubbed Cassian. Just an observation.
The above is quite the profile ChatGPT has built based on one user’s history of interactions. Granted, the bot’s list was not perfect. Some of the names were just too “out there,” Smith reports. The AI also seemed to think she’d prefer names from literature, which is not quite right. But those are minor errors, ones the bot may have already corrected. While Smith and others seem comfortable with this level of familiarity, some might find it alarming. One can imagine a few uses for such a dossier that are less benign than baby name suggestions.
Cynthia Murrell, June 17, 2025
Professor Marcus, You Missed One Point about the Apple Reasoning Paper
June 16, 2025
An opinion essay written by a dinobaby who did not rely on smart software but for the so-so cartoon.
The intern-fueled Apple academic paper titled “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity” has caused a stir. An interesting analysis of the responses to this tour de force is “Seven Replies to the Viral Apple Reasoning Paper – and Why They Fall Short.” Professor Gary Marcus in his analysis identifies categories of reactions to the Apple document.
In my opinion, these are, and I paraphrase with abandon:
- Humans struggle with complex problems; software does too
- Smart software needs lots of computation, so it delivers a good enough output that doesn’t cost too much
- The paper includes an intern’s work because recycling and cheap labor are useful to busy people
- Bigger models are better because that’s what people do in Texas
- Systems can solve some types of problems and fail at others
- The paper offers limited examples because examples require real effort
- The paper tells a reader what is already known: Smart software can be problematic because it is probabilistic, not intelligent.
I look at the Apple paper from a different point of view.
For more than a year, Apple’s challenge has been to make smart software, with its current limitations, work reasonably well. Apple’s innovation in smart software has been the somewhat flawed Siri (sort of long in the tooth) and the formulation of a snappy slogan, “Apple Intelligence.”
This individual is holding a “cover your a**” document. Thanks, You.com. Good enough given your constraints, guard rails, and internal scripts.
The job of a commercial enterprise is to create something useful and reasonably clever to pull users to a product. Apple failed. Other companies have rolled out products making use of smart software as it currently is. One company with a reasonably good product is OpenAI, with ChatGPT. Another is Perplexity.
Apple is not in this part of the smart software game. Apple has failed to use “as is” software in a way that adds some zing to the firm’s existing products. Apple has failed, just as it failed with the weird goggles, its push into streaming video, and the innovations for the “new” iPhone. Changing case colors and altering an interface to look sort of like Microsoft’s see-through approach are not game changers. Labeling software by the year of release does not make me want to upgrade.
What is missing from the analysis of the really important paper that says, “Hey, this smart software has big problems. The whole house of LLM cards is wobbling in the wind”?
The answer is, “The paper is a marketing play.” The best way to explain why Apple has not rolled out AI is to argue that the current technology is terrible. Therefore, the message runs, “We need more time to figure out how to do AI well with crappy tools and methods not invented at Apple.”
I see the paper as pure marketing. The timing of the paper’s release is marketing. The weird colors of the charts are marketing. The hype about the paper itself is marketing.
Anyone who has used some of the smart software tools knows one thing: The systems make up stuff. Everyone wants the “next big thing.” I think some of the LLM capabilities can be quite useful. In the coming months and years, smart software will enable useful functions beyond giving students a painless way to cheat, consultants a quick way to appear smart in a very short time, and entrepreneurs a way to vibe code their way into a job.
Apple has had one job: Find a way to use the available technology to deliver something novel and useful to its customers. It has failed. The academic paper is a “cover your a**” memo more suitable for a scared 35-year-old middle manager in an advertising agency. Keep in mind that I am no professor. I am a dinobaby. In my world, an “F” is an “F.” Apple’s viral paper is an excuse for not delivering something useful with Apple Intelligence. The company has delivered an illustration of why there is no Apple smart TV or Apple smart vehicle.
The paper is marketing, and it is just okay marketing.
Stephen E Arnold, June 16, 2025
Googley: A Dip Below Good Enough
June 16, 2025
A dinobaby without AI wrote this. Terrible, isn’t it? I did use smart software for the good enough cartoon. See, this dinobaby is adapting.
I was in Washington, DC, from June 9 to 11, 2025. My tracking of important news about the online advertising outfit was disrupted. I have been trying to catch up with new product mist, AI razzle dazzle, and faint signals of importance. The first little beep I noticed appeared in “Google’s Voluntary Buyouts Lead its Internal Restructuring Efforts.” “Ah, ha,” I thought. After decades of recruiting the smartest people in the world, the Google is dumping full time equivalents. Is this a move to become more efficient? Google has indicated that it is into “efficiency”; therefore, has the Google redefined the term? Had Google figured out that the change to tax regulations about research investments sparked a re-think? Is Google so much more advanced than other firms that its leadership can jettison staff who choose to bail with a gentle smile and an enthusiastic wave of leadership’s hand?
The homeowner evidences a surge in blood pressure. The handyman explains that the new door has been installed in a “good enough” manner. If it works for service labor, it may work for Google-type outfits too. Thanks, Sam AI-Man. Your ChatGPT came through with a good enough cartoon. (Oh, don’t kill too many dolphins, snail darters, and lady bugs today, please.)
Then I read “Google Cloud Outage Brings Down a Lot of the Internet.” Enticed by the rock solid metrics for the concept of “a lot,” I noticed this statement:
Large swaths of the internet went down on Thursday (June 12, 2025), affecting a range of services, from global cloud platform Cloudflare to popular apps like Spotify. It appears that a Google Cloud outage is at the root of these other service disruptions.
What? Google, the failover champion par excellence, went down. Will the issue be blamed on a faulty upgrade? Will a single engineer be identified and then given an opportunity to find his or her future elsewhere? Will Google be able to figure out what happened?
What are the little beeps my system continuously receives about the Google?
- Wikipedia gets fewer clicks than OpenAI’s ChatGPT? Where’s the Google AI in this? Answer: Reorganizing, buying out staff, and experiencing outages.
- Google rolls out more Gemini functions for Android devices. Where’s the stability and service availability for these innovations? Answer: I cannot look up the answer. Google is down.
- Where’s the revenue from online advertising as traditional Web search presents some thunderclouds? Answer: Well, that is a good question. Maybe revenues from Waymo, a deal with Databricks, or a bump in Pixel phone sales?
My view is that the little beeps may become self-amplifying. The magic of the online advertising model seems to be fading like the allure of Disneyland. When imagineering becomes imitation, more than marketing fairy dust may be required.
But what’s evident from the tiny beeps is that Google is now operating in “good enough” mode. Will it be enough to replace the Yahoo-GoTo-Overture pay-to-play approach to traffic?
Maybe Waymo is the dark horse when the vehicles are not combustible?
Stephen E Arnold, June 16, 2025
OpenA Ive: The Sam Altman Jive is Alive
June 16, 2025
Look out, Tim Apple, your pie may be burning. According to Fox News, “OpenAI’s $6.5B New Acquisition Signals Apple’s Biggest AI Crisis Yet.” The purchase in question? AI device startup io, founded by former Apple designer Jony Ive. “Cyber Guy” Kurt Knutsson tells us:
“This isn’t your typical business deal. It’s a collaboration between Sam Altman, who leads OpenAI, and the designer responsible for some of Apple’s most iconic products, including the iPhone and Apple Watch. Together, they want to create a new generation of AI-powered devices that could completely change how we use technology.”
Altman is trusting Ive and his team of (largely) Apple expats to propel OpenAI to the forefront of a “screenless” tech future. What, exactly, that will look like remains a mystery—a tantalizing one, if the pair has their way. Knutsson writes:
“While Altman and Ive are keeping most details secret, they have hinted at a family of AI devices that focus on seamless, intuitive interaction rather than screens. They want to create something that understands your context, adapts to your needs and helps you connect and create in new ways, all without requiring you to stare at a display. The device won’t be a phone or a pair of glasses but something entirely new that fits into your life as naturally as a MacBook or iPhone once did.”
Contrast this ambitious vision with Apple’s current position as a relative laggard in the AI field. Unlike Google, which tried to overtake Apple on the same AI racetrack, OpenAI plans to blaze a new path. Not only is the firm working on a new paradigm, it hopes to rapidly overtake the market when it gets there. We learn:
“OpenAI’s ambition is huge. In fact, they want to ship 100 million units faster than any company has ever done with a new product, which shows just how big their vision is.”
Will Apple be able to weather this innovation assault? Will tech devices soon ditch the display? Will OpenAI buy Google Chrome? Will soft AI carry the day? So many questions.
Cynthia Murrell, June 16, 2025
Just Cheat Your Way Through Life: Hey, It Is 2025. Get with It, Loser
June 13, 2025
Just a dinobaby and no AI: How horrible an approach?
I am a dinobaby. I lived in Campinas, Brazil. The power was on and off most days of the week. Mostly off, though. My family in the 1950s was one of the few American units in that town. My father planned for my education. I attended the local school for a few weeks. Then the director sent me home. The school was not set up for non-Portuguese speakers. There were a few missionaries in Campinas, and one of them became my Calvert Course tutor. He went to visit a smaller town, tangled with a snake, and died. That meant that I had to read the World Books my father bought as a replacement for the years of schooling I missed.
Bummer. No ChatGPT. Not much of anything except reading the turgid prose of the World Books and answering questions my mother and father presented for the section I read that day. “What was the capital of Tasmania?” I answered, “Hobart.” I guess that meant I passed. So it went for several years.
What would I have done if I had a laptop, electricity, and an Internet connection? I can tell you straight away that I would have let the smart software do my homework. Skip the reading. Let ChatGPT, You.com, Venice.ai, or some similar system do the work. I had a leather soccer ball (football), and the locals let me play even though I sucked.
When I read “AI Cheating Is So Out of Hand In America’s Schools That the Blue Books Are Coming Back,” I immediately sat down and wrote this blog post. I don’t need smart software, thank you. I have access to it and other magical computer software. I actually like doing research, analysis, and critical thinking. I am happy when someone tells me I am wrong, uninformed, or off base. I take note, remember the input, and try not to make the same mistake again.
But the reality of today is that smart software is like the World Books my parents made me read: memorize the facts and answer questions based on whatever baloney those volumes contained. AI is here; education has changed; and most students are not going to turn their backs on smart software, speed, and the elimination of what is for most people the painful process of learning.
People are not stupid. Most just stop learning anything they don’t absolutely have to master. Now why learn anything? Whip out the smart phone, punch the icon for smart software, and let the system do the thinking.
The write up says:
… as AI tears through America’s elite educational system, lobotomizing tomorrow’s young leaders as it goes, could it be that blue books have been refashioned from a villain of the pre-AI age to a hero for our algorithmically-poisoned times? More and more, it seems like they’re the dark knight that America’s illiterate masses needs. The Journal notes that Roaring Spring Paper Products, the family-owned paper company that produces a majority of the blue books that are sold on college campuses, admits that the new AI era has ironically been good for its business.
Nifty. Lobotomize: I wonder if the author of the article knows exactly how unpredictable the procedure was and probably still is in some remote part of the modern world. Will using LLMs make people stupider? No, what makes people stupider is lacking the ability, the motivation, and the curiosity required to learn. Doom scrolling is popular because young people are learning to follow trends, absorb video techniques, and learn how to “do” their fingernails. These may be more important than my knowing that the longest snake known when the World Books were published was over 20 feet long, specifically, the reticulated python. (Thank goodness, the snake lived in Indonesia, not Brazil.)
The write up says:
Indeed, if the return of pen and paper is a promising sign, America’s educators aren’t out of the woods yet—not even close. A recent survey found that 89% of college students had admitted to using ChatGPT to complete a homework assignment. AI-detection tools designed to spot cheating also routinely fail. Increasingly, America’s youth seem to view their educations as a high-stakes video game to be algorithmically juked. In short, more drastic measures (like the formulation of new laws and regulations around AI use) may need to be taken if the onset of America’s aggressive stupidification is to be halted.
My personal view is that a cultural shift has taken place. People don’t want to “work.” Families are no longer nuclear; they are not one mother, one father, and 2.4 children and maybe a dog, probably a boxer or a Labrador. Students no longer grab a book; they only have two hands, and both are required to operate a mobile phone or a laptop. Teachers are no longer authority figures; they are viewed as problems, particularly by upper middle class and wealthy parents (or parent, as the case may be).
The blue book thing is mildly interesting, but I am not sure blue books are a solution. Students cannot read or write cursive; they print. This means that answers will be shorter, maybe like social media posts. If a student has a knack for art, icons may be included next to an insightful brief statement. A happy face signals the completion of the test. I would, if I were 13, draw a star and a calligraphic “A” on the front of my blue book.
What type of world will this educational milieu deliver? To be honest, I am glad I am old and will die before I have to experience too much of the LLM world.
Stephen E Arnold, June 13, 2025
Another Vote for the Everything App
June 13, 2025
Just a dinobaby and no AI: How horrible an approach?
An online information service named 9 to 5 Mac published an essay / interview summary titled “Nothing CEO says Apple No Longer Creative; Smartphone Future Is a Single App.” The write up focuses on the “inventor / coordinator” of the OnePlus mobile devices and the Nothing Phone. The key point of the write up is the idea that at some point in the future, one will have a mobile device and a single app, the everything app.
The article quotes a statement Carl Pei (the head of the Nothing Phone) made to another publication; to wit:
I believe that in the future, the entire phone will only have one app—and that will be the OS. The OS will know its user well and will be optimized for that person […] The next step after data-driven personalization, in my opinion, is automation. That is, the system knows you, knows who you are, and knows what you want. For example, the system knows your situation, time, place, and schedule, and it suggests what you should do. Right now, you have to go through a step-by-step process of figuring out for yourself what you want to do, then unlocking your smartphone and going through it step by step. In the future, your phone will suggest what you want to do and then do it automatically for you. So it will be agentic and automated and proactive.
This type of device will arrive in seven to 10 years.
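What might that proactive, agentic loop look like? A toy sketch in Python follows; every field and rule here is hypothetical, a cartoon of Pei’s idea rather than anyone’s shipping product:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Context:
    """Hypothetical snapshot of what the 'agentic OS' knows about its user."""
    now: datetime
    location: str
    next_event: str
    next_event_time: datetime

def suggest(ctx: Context) -> str:
    """Offer a proactive suggestion instead of waiting for the user to ask."""
    gap = ctx.next_event_time - ctx.now
    if gap < timedelta(minutes=30):
        return f"Leave {ctx.location} now to make '{ctx.next_event}' on time."
    if ctx.now.hour >= 22:
        return "It is late; tomorrow's schedule starts early. Wind down."
    return f"You have {gap} free before '{ctx.next_event}'."

ctx = Context(now=datetime(2025, 6, 13, 9, 40), location="home",
              next_event="dentist", next_event_time=datetime(2025, 6, 13, 10, 0))
print(suggest(ctx))  # Leave home now to make 'dentist' on time.
```

The point of the sketch: the user issues no query. The system watches context and volunteers the next step, which is exactly the surveillance-friendly property that makes the everything app attractive to some governments.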
For me, the notion of an everything app or a super app began in 2010, but I am not sure who first mentioned the phrase to me. I know that WeChat, the Chinese everything app, became available in 2011. The Chinese government was aware at some point that an “everything” app would make surveillance, social scoring, and filtering much easier. The “let many approved flowers bloom” approach of the Apple and Google online app stores was inefficient. One app was more direct, and I think the A to B approach to tracking and blocking online activity makes sense to many in the Middle Kingdom. The trade off of convenience for a Really Big Brother was okay with citizens of China. Go along and get along may have informed the uptake of WeChat.
Now the everything app seems like a sure bet. The unknown is which outstanding technology firm will prevail. The candidates are WeChat, Telegram, X.com, Sam Altman’s new venture, or a surprise player. Will other apps (the not everything apps from restaurant menus to car washes) survive? Sure. But if Sam AI-Man succeeds with his Ive smart device and his stated goal of buying the Chrome browser from the Google comes to pass, the winner may be a CEO who was fired by his board, came back, and cleaned out those who did not jump on the AI-Man’s bandwagon.
That’s an interesting thought. It is Friday the 13th, Google. You too Microsoft. And Apple. How could I have forgotten Tim Cook and his team of AI adepts?
Stephen E Arnold, June 13, 2025
Musk, Grok, and Banning: Another Burning Tesla?
June 12, 2025
Just a dinobaby and no AI: How horrible an approach?
“Elon Musk’s Grok Chatbot Banned by a Quarter of European Firms” reports:
A quarter of European organizations have banned Elon Musk’s generative AI chatbot Grok, according to new research from cybersecurity firm Netskope.
I find this interesting because my own experiences with Grok have been underwhelming. My first query to Grok was, “Can you present only Twitter content?” The answer was a bunch of jabber which meant, “Nope.” Subsequent queries were less than stellar, and I moved it out of my rotation for potentially useful AI tools. Did the sample crafted by Netskope have a similar experience?
The write up says:
Grok has been under the spotlight recently for a string of blunders. They include spreading false claims about a “white genocide” in South Africa and raising doubts about Holocaust facts. Such mishaps have raised concerns about Grok’s security and privacy controls. The report said the chatbot is frequently blocked in favor of “more secure or better-aligned alternatives.”
I did not feel comfortable with Grok because of content exclusion or what I like to call willful or unintentional coverage voids. The easiest way to remove or weaponize content in the commercial database world is to exclude it. When a person searches a for-fee database, the editorial policy for that service should make clear what’s in and what’s out. Filtering out is the easiest way to marginalize a concept, push down a particular entity, or shape an information stream.
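To see why exclusion is so effective, here is a minimal sketch; the blocklist and the records are hypothetical, and real commercial systems are far more subtle:

```python
# Hypothetical illustration of a coverage void: a record excluded at
# indexing time never appears in any result set, so the searcher gets
# no signal that it ever existed.
records = [
    "Vendor A ships a security patch",
    "Vendor B criticized for a major outage",
    "Vendor B ships a new feature",
]

editorial_blocklist = {"vendor b"}  # the policy the user never sees

def build_index(docs, blocklist):
    """Keep only records that mention no blocked term."""
    return [d for d in docs if not any(term in d.lower() for term in blocklist)]

searchable = build_index(records, editorial_blocklist)
print(searchable)  # ['Vendor A ships a security patch'] -- Vendor B is simply gone
```

Nothing is ranked down or flagged; the excluded material simply never enters the index. That is the coverage void.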
The cited write up suggests that Grok is including certain content to give it credence, traction, and visibility. Assuming that an electronic information source is comprehensive is a very risky approach to assembling data.
The write up adds another consideration to the use of smart software, which — like it or not — is becoming the new way to become informed or knowledgeable. The information may be shallow, but relying on weaponized information or on systems that spy on the user presents new challenges.
The write up reports:
Stable Diffusion, UK-based Stability AI’s image generator, is the most blocked AI app in Europe, barred by 41% of organizations. The app was often flagged because of concerns around privacy or licensing issues, the report found.
How concerned should users of Grok or any other smart software be? Worries about Grok may be an extension of fear of a burning Tesla or the face of the Grok enterprise. In reality, smart software fosters the illusion of completeness, objectivity, and freshness of the information presented. Users are eager to use a tool that seems to make life easier and them appear more informed.
The risks of reliance on Grok or any other smart software include:
- The output is incomplete
- The output is weaponized or shaped by intentional actions or by factors beyond the developers’ control
- The output is simply wrong, made up, or hallucinated
- Users act as though shallow knowledge is sufficient for a decision.
The alleged fact that 25 percent of the Netskope sample have taken steps to marginalize Grok is interesting. That may be a positive step based on my tests of the system. However, I am concerned that the others in the sample are embracing a technology which appears to be delivering the equivalent of a sugar rush after a gym workout.
Smart software is being applied in novel ways in many situations. However, what are the demonstrable benefits other than the rather enthusiastic embrace of systems and methods known to output errors? The rejection of Grok is one interesting factoid if true. But against the blind acceptance of smart software, Grok’s down check may be little more than a person stepping away from a burning Tesla. The broader picture is that the buildings near the immolating vehicle are likely to catch on fire.
Stephen E Arnold, June 12, 2025