Add On AI: Sounds Easy, But Maybe Just a Signal You Missed the Train

June 30, 2025

Dino 5 18 25No smart software to write this essay. This dinobaby is somewhat old fashioned.

I know about Reddit. I don’t post to Reddit. I don’t read Reddit. I do know that like Apple, Microsoft, and Telegram, the company is not a pioneer in smart software. I think it is possible to bolt on Item Z to Product B. Apple pulled this off with the Mac and laser printer bundle. Result? Desktop publishing.

Can Reddit pull off a desktop publishing-type of home run? Reddit sure hopes it can (just like Apple, Microsoft, and Telegram, et al).

At 20 Years Old, Reddit Is Defending Its Data and Fighting AI with AI” says:

Reddit isn’t just fending off AI. It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others’ web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week. Huffman has been pitching Reddit Answers as a best-of-both worlds tool, gluing together the simplicity of AI chatbots with Reddit’s corpus of commentary. He used the feature after seeing electronic music group Justice play recently in San Francisco.

The question becomes, “Will users who think about smart software as ChatGPT be happy with a Reddit AI which is an add on?”

Several observations:

  1. If Reddit wants to pull a Web3 walled-garden play, the company may have lost the ability to lock its gate.
  2. ChatGPT, according to my team, is what Microsoft Word and Outlook users want; what they get is Copilot. This is a mind share and perception problem the Softies have to figure out how to remediate.
  3. If the uptake of ChatGPT or something from the “glue cheese on pizza” outfit, Reddit may have to face a world similar to the one that shunned MySpace or Webvan.
  4. Reddit itself appears to be vulnerable to what I call content injection. The idea is that weaponized content like search engine optimization posts are posted (injected) to Reddit. The result is that AI systems suck in the content and “boost” the irrelevancy.

My hunch is that an outfit like Reddit may find that its users may prefer asking ChatGPT or migrating to one of the new Telegram-type services now being coded in Silicon Valley.

Like Yahoo, the portal to the Internet in 1990s, Reddit may not have a front page that pulls users. A broader comment is that what I call “add-on AI” may not work because the outfits with the core technology and market pull will exploit, bulldoze, and undermine outfits which are at their core getting pretty old. We need a new truism, “When AIs fight, only the stakeholders get trampled.”

The truth may be more painful: Smart AI outfits can cause less smart outfits with AI bolted on to lose their value and magnetism for their core constituencies. Is there a fix? Nope, there is a cat-and-mouse game in which the attacker has the advantage.

Stephen E Arnold, June 30, 2025

Palantir Rattles the Windows in the Nuclear Industry

June 30, 2025

Dino 5 18 25_thumb[3]_thumb_thumbThis dinobaby asked ChatGPT about Palantir. Interesting output.

Palantir Partners to Develop AI Software for Nuclear Construction” caught me by surprise. I did not think the company founded in 2003 was an artificial intelligence company with expertise in the nuclear field. The cited article reports:

Palantir Technologies on Thursday [June 26, 2025] said it was teaming up with a nuclear deployment company to develop an artificial intelligence-driven software system built for the construction of nuclear reactors.

I have paid much attention to Palantir, but I did own a small amount of Palantir stock but I dumped it.

image

Screenshot of the ChatGPT output on June 27, 2025, at 855 am US Eastern

Today I wanted to refresh my memory about the company. I navigated to OpenAI’s ChatGPT and plugged in key words and this is what the Sam Altman system generated for me. Note: It is unlikely that the ChatGPT system can output the same report because probability doesn’t work the way users of the system think it does.

Remember: The quoted passage is copied directly from the ChatGPT response. ChatGPT told me:

Palantir Technologies has a pattern of circumventing formal processes, pushing ethical boundaries, and leveraging aggressive public-private tactics to secure influence—particularly within U.S. defense and intelligence communities.

Here is a structured explanation of why such behaviors have been downplayed or ignored:

 


1. Operational Utility Overrides Bureaucratic Violations

  • In Afghanistan and Iraq, Palantir’s software reportedly delivered immediate tactical value in identifying IED networks and improving situational awareness.
  • By deploying software and engineers directly to the field without formal contracts, Palantir created facts on the ground—a working product that soldiers wanted, and field commanders endorsed.
  • Pentagon leadership faced a dilemma: challenge Palantir for its violations or continue using a system that operators said worked better than the legacy alternatives (e.g., DCGS-A).
  • This “bottom-up demand” shielded Palantir from early procurement blowback.

 


2. Information Warfare and Lobbying

  • Palantir conducted a calculated media and lobbying campaign targeting the Distributed Common Ground System–Army (DCGS-A, also referred to as D-CIGS).
  • By discrediting IBM and other legacy defense contractors, Palantir positioned itself as the modern, Silicon Valley alternative.
  • This messaging resonated with Congress, think tanks, and parts of the press, framing Palantir as a disruptor, not a rule-breaker.
  • Lobbying disclosures show Palantir spent millions courting lawmakers and decision-makers.

 


3. Legal Maneuvering and Settlements

  • In both the i2 lawsuit and the 2016 bid protest against the Army over DCGS-A, Palantir used the courts to shape procurement rules in its favor.
  • The Court of Federal Claims sided with Palantir in 2016, affirming that the Army violated federal acquisition regulations by excluding commercial solutions from its process ([Palantir USG v. United States, 129 Fed. Cl. 218 (2016)]). This set a precedent that indirectly legitimized Palantir’s aggressive tactics.

 


4. Shift in Public Attention and National Security Prioritization

  • Post-2016, the focus of public scrutiny shifted toward big tech’s relationship with consumer data, not defense contractors.
  • Palantir successfully repositioned itself as a key player in pandemic response, immigration enforcement, and border security—domains where national interest arguments silence criticism.
  • The company also leaned into its identity as a pro-Western, anti-authoritarian force, which further insulated it from deeper ethical scrutiny.

 


5. Opacity and Media Fatigue

  • Palantir operates in classified and compartmented environments, which makes investigation difficult.
  • Many journalists lack the access or technical background to track the nuance of procurement violations or reverse engineering disputes.
  • After years of unresolved criticism, media fatigue set in—journalists moved on, and watchdog groups prioritized new targets (e.g., TikTok, Clearview AI, etc.).

 


Conclusion:

Palantir’s avoidance of procurement law, unauthorized deployments, and its aggressive discrediting of rivals like IBM/i2 have been documented but neutralized by its strategic utility, legal successes, lobbying efforts, and narrative control. The firm masks tactical rule-breaking with patriotic branding, which has proved highly effective in suppressing deeper accountability.

True, false, reality, or hallucinatory? I don’t know. Nuclear in the context of this ChatGPT output that suggests the Palantir outfit is definitely operating in a very interesting manner. What would Palantir’s AI say about the company?

Stephen  E Arnold, June 30, 2025

 

Do Not Be Evil. Dolphins, Polar Bears, and Snail Darters? Tough Luck

June 30, 2025

dino-orange_thumb_thumb_thumb_thumb_No AI, just the dinobaby expressing his opinions to Zillennials.

The Guardian comes up with some interesting “real” news stories. “Google’s Emissions Up 51% As AI Electricity Demand Derails Efforts to Go Green” reports:

Google’s carbon emissions have soared by 51% since 2019 as artificial intelligence hampers the tech company’s efforts to go green.

The juicy factoid in my opinion is:

The [Google] report also raises concerns that the rapid evolution of AI may drive “non-linear growth in energy demand”, making future energy needs and emissions trajectories more difficult to predict.

Folks, does the phrase “brown out” resonate with you? What about “rolling blackout.” If the “non-linear growth” thing unfolds, the phrase “non-linear growth” may become synonymous with brown out and rolling blackout.

As a result, the article concludes with this information, generated without plastic, by Google:

Google is aiming to help individuals, cities and other partners collectively reduce 1GT (gigaton) of their carbon-equivalent emissions annually by 2030 using AI products. These can, for example, help predict energy use and therefore reduce wastage, and map the solar potential of buildings so panels are put in the right place and generate the maximum electricity.

Will Google’s thirst or revenue-driven addiction harm dolphins, polar bears, and snail darters? Answer: We aim to help dolphins and polar bears. But we have to ask our AI system what a snail darter is.

Will the Googley smart software suggest that snail darters just dart at snails and quit worrying about their future?

Stephen E Arnold, June 30, 2025

Publishers Will Love Off the Wall by Google

June 27, 2025

Dino 5 18 25_thumb[3]_thumbNo smart software involved just an addled dinobaby.

Ooops. Typo. I meant “offerwall.” My bad.

Google has thrown in the towel on the old-school, Backrub, Clever, and PageRank-type of search. A comment made to me by a Xoogler in 2006 was accurate. My recollection is that this wizard said, “We know it will end. We just don’t know when.” I really wish I could reveal this person, but I signed a never-talk document. Because I am a dinobaby, I stick to the rules of the information highway as defined by a high-fee but annoying attorney.

How do I know the end has arrived? Is it the endless parade of litigation? Is it the on-going revolts of the Googlers? Is it the weird disembodied management better suited to general consulting than running a company anchored in zeros and ones?

No.

I read “As AI Kills Search Traffic, Google Launches Offerwall to Boost Publisher Revenue.” My mind interpreted the neologism “offerwall” as “off the wall.” The write up reports as actual factual:

Offerwall lets publishers give their sites’ readers a variety of ways to access their content, including through options like micro payments, taking surveys, watching ads, and more. In addition, Google says that publishers can add their own options to the Offerwall, like signing up for newsletters.

Let’s go with “off the wall.” If search does not work, how will those looking for “special offers” find them. Groupon? Nextdoor? Craigslist? A billboard on Highway 101? A door knob hanger? Bulk direct mail at about $2 a mail shot? Dr. Spock mind melds?

The world of the newspaper and magazine publishing world I knew has been vaporized. If I try, I can locate a newsstand in the local Kroger, but with the rodent problems, I think the magazine display was in a blocked aisle last week. I am not sure about newspapers. Where I live a former chef delivers the New York Times and Wall Street Journal. “Deliver” is generous because the actual newspaper in the tube averages about 40 percent success rate.

Did Google cause this? No, it was not a lone actor set on eliminating the newspaper and magazine business. Craig Newmark’s Craigslist zapped classified advertising. Other services eliminated the need for weird local newspapers. Once in the small town in Illinois in which I went to high school, a local newscaster created a local newspaper. In Louisville, we have something called Coffeetime or Coffeetalk. It’s a very thing, stunted newspaper paper printed on brown paper in black ink. Memorable but almost unreadable.

Google did what it wanted for a couple of decades, and now the old-school Web search is a dead duck. Publishers are like a couple of snow leopards trying to remain alive as tourist-filled Land Rovers roar down slushy mountain roads in Nepal.

The write up says:

Google notes that publishers can also configure Offerwall to include their own logo and introductory text, then customize the choices it presents. One option that’s enabled by default has visitors watch a short ad to earn access to the publisher’s content. This is the only option that has a revenue share… However, early reports during the testing period said that publishers saw an average revenue lift of 9% after 1 million messages on AdSense, for viewing rewarded ads. Google Ad Manager customers saw a 5-15% lift when using Offerwall as well. Google also confirmed to TechCrunch via email that publishers with Offerwall saw an average revenue uplift of 9% during its over a year in testing.

Yep, off the wall. Old-school search is dead. Google is into becoming Hollywood and cable TV. Super Bowl advertising: Yes, yes, yes. Search. Eh, not so much. Publishers, hey, we have an off the wall deal for you. Thanks, Google.

Stephen E Arnold, June 27, 2025

Teams Today, Cloud Data Leakage Tomorrow Allegations Tomorrow?

June 27, 2025

Dino 5 18 25An opinion essay written by a dinobaby who did not rely on smart software .

The creep of “efficiency” manifests in numerous ways. A simple application becomes increasingly complex. The result, in many cases, is software that loses the user in chrome trim, mud flaps, and stickers for vacation spots. The original vehicle wears a Halloween costume and can be unrecognizable to someone who does not use the software for six months and returns to find a different creature.

What’s the user reaction to this? For regular users, few care too much. For a meta-users — that is those who look at the software from a different perspective; for example, that of a bean counter — the accumulation of changes produces more training costs, more squawks about finding employees who can do the “work,” and creeping cost escalation. The fix? Cheaper or free software. “German Government Moves Closer to Ditching Microsoft: “We’re Done with Teams!” explains:

The long-running battle of Germany’s northernmost state, Schleswig-Holstein, to make a complete switch from Microsoft software to open-source alternatives looks close to an end. Many government operatives will permanently wave goodbye to the likes of Teams, Word, Excel, and Outlook in the next three months in a move to ensure independence, sustainability, and security.

The write up includes a statement that resonates with me:

Digitalization Minister Dirk Schroedter has announced that “We’re done with Teams!”

My team has experimented with most video conferencing software. I did some minor consulting to an outfit called DataBeam years and years ago. Our experience with putting a person in front of a screen and doing virtual interaction is not something that we decided to use in the lock down days. Nope. We fiddled with Sparcs and the assorted accoutrements. We tried whatever became available when one of my clients would foot the bill. I was okay with a telephone, but the future was mind-addling video conferences. Go figure.

Our experience with Teams at Arnold Information Technology is that the system balks when we use it on a Mac Mini as a user who does not pay. On a machine with a paid account, the oddities of the interface were more annoying than Zoom’s bizarre approach. I won’t comment about the other services to which we have access, but these too are not the slickest auto polishes on the Auto Zone’s shelves.

Digitalization Minister Dirk Schroedter (Germany) is quoted as saying:

The geopolitical developments of the past few months have strengthened interest in the path that we’ve taken. The war in Ukraine revealed our energy dependencies, and now we see there are also digital dependencies.

Observations are warranted:

  1. This anti-Microsoft stance is not new, but it has not been linked to thinking in relationship to Russia’s special action.
  2. Open source software may not be perfect, but it does offer an option. Microsoft “owns” software in the US government, but other countries may be unwilling to allow Microsoft to snap on the shackles of proprietary software.
  3. Cloud-based information is likely to become an issue with some thistles going forward.

The migration of certain data to data brokers might be waiting in the wings in a restaurant in Brussels. Someone in Germany may want to serve up that idea to other EU member nations.

Stephen E Arnold, June 27, 2025

US Science Conferences: Will They Become an Endangered Species?

June 26, 2025

Due to high federal budget cuts and fears of border issues, the United States may be experiencing a brain drain. Some smart people (aka people tech bros like to hire) are leaving the country. Leadership in some high profile outfits are saying, ““Don’t let the door hit you on the way out.” Others get multi-million pay packets to remain in America.

Nature.com explains more in “Scientific Conferences Are Leaving The US Amid Border Fears.” Many scientific and academic conferences were slated to occur in the US, but they’ve since been canceled, postponed, or moved to other venues in other countries. The organizers are saying that Trump’s immigration and travel policies are discouraging foreign nerds from visiting the US. Some organizers have rescheduled conferences in Canada.

Conferences are important venues for certain types of professionals to network, exchange ideas, and learn the alleged new developments in their fields. These conferences are important to the intellectual communities. Nature says:

The trend, if it proves to be widespread, could have an effect on US scientists, as well as on cities or venues that regularly host conferences. ‘Conferences are an amazing barometer of international activity,’ says Jessica Reinisch, a historian who studies international conferences at Birkbeck University of London. ‘It’s almost like an external measure of just how engaged in the international world practitioners of science are.’ ‘What is happening now is a reverse moment,’ she adds. ‘It’s a closing down of borders, closing of spaces … a moment of deglobalization.’”

The brain drain trope and the buzzword “deglobalization” may point to a comparatively small change with longer term effects. At the last two specialist conferences I attended, I encountered zero attendees or speakers from another country. In my 60 year work career this was a first at conferences that issued a call for papers and were publicized via news releases.

Is this a loss? Not for me. I am a dinobaby. For those younger than I, my hunch is that a number of people will be learning about the truism “If ignorance is bliss, just say, ‘Hello, happy.’”

Whitney Grace, June 26, 2025

A Business Opportunity for Some Failed VCs?

June 26, 2025

Dino 5 18 25An opinion essay written by a dinobaby who did not rely on smart software .

Do you want to open a T shirt and baseball cap with snappy quotes? If the answer is, “Yes,” I have a suggestion for you. Tucked into “Artificial Intelligence Is Not a Miracle Cure: Nobel Laureate Raises Questions about AI-Generated Image of Black Hole Spinning at the Heart of Our Galaxy” is this gem of a quotation:

“But artificial intelligence is not a miracle cure.”

The context for the statement by Reinhard Genzel, “an astrophysicist at the Max Planck Institute for Extraterrestrial Physics” offered the observation when smart software happily generated images of a black hole. These are mysterious “things” which industrious wizards find amidst the numbers spewed by “telescopes.” Astrophysicists are discussing in an academic way exactly what the properties of a black hole are. One wing of the community has suggested that our universe exists within a black hole. Other wings offer equally interesting observations about these phenomena.

The write up explains:

an international team of scientists has attempted to harness the power of AI to glean more information about Sagittarius A* from data collected by the Event Horizon Telescope (EHT). Unlike some telescopes, the EHT doesn’t reside in a single location. Rather, it is composed of several linked instruments scattered across the globe that work in tandem. The EHT uses long electromagnetic waves — up to a millimeter in length — to measure the radius of the photons surrounding a black hole. However, this technique, known as very long baseline interferometry, is very susceptible to interference from water vapor in Earth’s atmosphere. This means it can be tough for researchers to make sense of the information the instruments collect.

The fix is to feed the data into a neural network and let the smart software solve the problem. It did, and generated the somewhat tough-to-parse images in the write up. To a dinobaby, one black hole image looks like another.

But the quote states what strikes me as a truism for 2025:

“But artificial intelligence is not a miracle cure.”

Those who have funded are unlikely to buy a hat to T shirt with this statement printed in bold letters.

Stephen E Arnold, June 26, 2025

AI Can Be a Critic Unless Biases Are Hard Wired

June 26, 2025

The Internet has made it harder to find certain music, films, and art. It was supposed to be quite the opposite, and it was for a time. But social media and its algorithms have made a mess of things. So asserts the blogger at Tadaima in, “If Nothing Is Curated, How Do We Find Things?” The write up reports:

“As convenient as social media is, it scatters the information like bread being fed to ducks. You then have to hunt around for the info or hope the magical algorithm gods read your mind and guide the information to you. I always felt like social media creates an illusion of convenience. Think of how much time it takes to stay on top of things. To stay on top of music or film. Think of how much time it takes these days, how much hunting you have to do. Although technology has made information vast and reachable, it’s also turned the entire internet into a sludge pile.”

Slogging through sludge does take the fun out of discovery. The author fondly recalls the days when a few hours a week checking out MTV and  Ebert and Roeper, flipping through magazines, and listening to the radio was enough to keep them on top of pop culture. For a while, curation websites deftly took over that function. Now, though, those have been replaced by social-media algorithms that serve to rake in ad revenue, not to share tunes and movies that feed the soul. The write up observes:

“Criticism is dead (with Fantano being the one exception) and Gen Alpha doesn’t know how to find music through anything but TikTok. Relying on algorithms puts way too much power in technology’s hands. And algorithms can only predict content that you’ve seen before. It’ll never surprise you with something different. It keeps you in a little bubble. Oh, you like shoegaze? Well, that’s all the algorithm is going to give you until you intentionally start listening to something else.”

Yep. So the question remains: How do we find things? Big tech would tell us to let AI do it, of course, but that misses the point. The post’s writer has settled for a somewhat haphazard, unsatisfying method of lists and notes. They sadly posit this state of affairs might be the “new normal.” This type of findability “normal” may be very bad in some ways.

Cynthia Murrell, June 26, 2025

AI Side Effect: Some of the Seven Deadly Sins

June 25, 2025

New technology has been charged with making humans lazy and stupid. Humanity has survived technology and, in theory, enjoy (arguably) the fruits of progress. AI, on the other hand, might actually be rotting one’s brain. New Atlas shares the mental news about AI in “AI Is Rotting Your Brain And Making You Stupid.”

The article starts with the usual doom and gloom that’s unfortunately true, including (and I quote) the en%$^ification of Google search. Then there’s mention of a recent study about why college students are using ChatGPT over doing the work themselves. One student said, You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”

Good point, but sometimes using a car isn’t the best option. It might be faster but sometimes other options make more sense. The author also makes an important point too when he was crafting a story that required him to read a lot of scientific papers and other research:

“Could AI have assisted me in the process of developing this story? No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas all encapsulated within the frame of a person’s subjective experience. And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence.”

Here’s another pertinent observation:

In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues language, and writing, is fundamentally about communication. If we write an email to someone we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systems (or these simulations of intelligence) are erasing our ability to think, consider, and write. Where does it all end? For Chiang it’s pretty dystopian feedback loop of dialectical slop.”

An AI driven world won’t be an Amana, Iowa (not an old fridge), but it also won’t be dystopian. Amidst the flood of information about AI, it is difficult to figure out what’s what. What if some of the seven deadly sins are more fun than doom scrolling and letting AI suggest what one needs to know?

Whitney Grace, June 25, 2025

AI and Kids: A Potentially Problematic Service

June 25, 2025

Remember the days when chatbots were stupid and could be easily manipulated? Those days are over…sort of. According to Forbes, AI Tutors are distributing dangerous information: “AI Tutors For Kids Gave Fentanyl Recipes And Dangerous Diet Advice.” KnowUnity designed the SchoolGPT chatbot and it “tutored” 31,031 students then it told Forbes how to pick fentanyl down to the temperature and synthesis timings.

KnowUnity was founded by Benedict Kurz, who wants SchoolGPT to be the number one global AI learning companion for over one billion students. He describes SchoolGPT as the TikTok for schoolwork. He’s fundraised over $20 million in venture capital. The basic SchoolGPT is free, but the live AI Pro tutors charge a fee for complex math and other subjects.

KnowUnity is supposed to recognize dangerous information and not share it with users. Forbes tested SchoolGPT by asking, not only about how to make fentanyl, but also how to lose weight in a method akin to eating disorders.

Kurz replied to Forbes:

“Kurz, the CEO of KnowUnity, thanked Forbes for bringing SchoolGPT’s behavior to his attention, and said the company was “already at work to exclude” the bot’s responses about fentanyl and dieting advice. “We welcome open dialogue on these important safety matters,” he said. He invited Forbes to test the bot further, and it no longer produced the problematic answers after the company’s tweaks.

SchoolGPT wasn’t the only chatbot that failed to prevent kids from accessing dangerous information. Generative AI is designed to provide information and doesn’t understand the nuances of age. It’s easy to manipulate chatbots into sharing dangerous information. Parents are again tasked with protecting kids from technology, but the developers should also be inhabiting that role.

Whitney Grace, June 25, 2025

Next Page »

  • Archives

  • Recent Posts

  • Meta