Debbie Downer Says, No AI Payoff Until 2026

December 27, 2024

Holiday greetings from the Financial Review. Its story “Wall Street Needs to Prepare for an AI Winter” is a joyous description of what’s coming down the Information Highway. The uplifting article sings:

shovelling more and more data into larger models will only go so far when it comes to creating “intelligent” capabilities, and we’ve just about arrived at that point. Even if more data were the answer, those companies that indiscriminately vacuumed up material from any source they could find are starting to struggle to acquire enough new information to feed the machine.

Translating to rural Kentucky speak: “We been shoveling in the horse stall and ain’t found the nag yet.”

The flickering light bulb has apparently illuminated the idea that smart software is expensive to develop, train, optimize, run, market, and defend against allegations of copyright infringement.

To add to the profit shadow, Debbie Downer’s cousin compared OpenAI to Visa. The idea in “OpenAI Is Visa” is that Sam AI-Man’s company is working overtime to preserve its lead in AI and become a monopoly before competitors figure out how to knock off OpenAI. The write up says:

Either way, Visa and OpenAI seem to agree on one thing: that “competition is for losers.”

To add to the uncertainty about US AI “dominance,” Venture Beat reports:

DeepSeek-V3, ultra-large open-source AI, outperforms Llama and Qwen on launch.

Does that suggest that the squabbling and mud wrestling among US firms can be body slammed by Chinese AI grapplers who are more agile? Who knows. However, in a series of tweets, DeepSeek suggested that its “cost” was less than $6 million. The idea is that what Chinese electric car pricing is doing to some EV manufacturers, China’s AI will do to US AI. Better and faster? I don’t know, but that “cheaper” angle will resonate with those asked to pump cash into the Big Dogs of US AI.

In January 2023, many were struck by the wonders of smart software. Will the same festive atmosphere prevail in 2025?

Stephen E Arnold, December 27, 2024

OpenAI Partners with Defense Startup Anduril to Bring AI to US Military

December 27, 2024

No smart software involved. Just a dinobaby’s work.

We learn from the Independent that “OpenAI Announces Weapons Company Partnership to Provide AI Tech to Military.” The partnership with Anduril represents an about-face for OpenAI. This will excite some people, scare others, and lead to remakes of the “Terminator.” Beyond Search thinks that automated smart death machines are so trendy. China also seems enthused. We learn:

“ChatGPT-maker OpenAI and high-tech defense startup Anduril Industries will collaborate to develop artificial intelligence-inflected technologies for military applications, the companies announced. ‘U.S. and allied forces face a rapidly evolving set of aerial threats from both emerging unmanned systems and legacy manned platforms that can wreak havoc, damage infrastructure and take lives,’ the companies wrote in a Wednesday statement. ‘The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.’ The companies framed the alliance as a way to secure American technical supremacy during a ‘pivotal moment’ in the AI race against China. They did not disclose financial terms.”

Of course not. Tech companies were once wary of embracing military contracts, but it seems those days are over. Why now? The article observes:

“The deals also highlight the increasing nexus between conservative politics, big tech, and military technology. Palmer Luckey, co-founder of Anduril, was an early, vocal supporter of Donald Trump in the tech world, and is close with Elon Musk. … Vice-president-elect JD Vance, meanwhile, is a protege of investor Peter Thiel, who co-founded Palantir, another of the companies involved in military AI.”

“Involved” is putting it lightly. And as readers may have heard, Musk appears to be best buds with the president elect. He is also at the head of the new Department of Government Efficiency, which sounds like a federal agency but is not. Yet. The commission is expected to strongly influence how the next administration spends our money. Will they adhere to multinational guidelines on military use of AI? Do PayPal alums have any hand in this type of deal?

Cynthia Murrell, December 27, 2024

AI Oh-Oh: Innovation Needed Now

December 27, 2024

This blog post is the work of an authentic dinobaby. No smart software was used.

I continue to hear about AI whiz kids “running out of data.” When people and institutions don’t know what’s happening, it is easy to just smash and grab. The copyright litigation and the willingness of AI companies to tie up with content owners make explicit that the zoom zoom days are over.


A smart software wizard is wondering how to get over, under, around, or through the stone wall of exhausted content. Thanks, Grok, good enough.

“The AI Revolution Is Running Out of Data. What Can Researchers Do?” is a less crazy discussion of the addictive craze which has made smart software or — wait for it — agentic intelligence the next big thing. The write up states:

The Internet is a vast ocean of human knowledge, but it isn’t infinite. And artificial intelligence (AI) researchers have nearly sucked it dry.

“Sucked it dry” and the systems still hallucinate. Guard rails prevent users from obtaining information germane to certain government investigations. The image generators refuse to display a classroom of students paying attention to mobile phones, not the teacher. Yep, dry. More like “run aground.”

The fix to running out of data, according to the write up, is:

plans to work around it, including generating new data and finding unconventional data sources.
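The “generating new data” workaround deserves a concrete picture. Here is a deliberately toy Python sketch of the simplest form of the idea: expanding a few seed facts into many synthetic training examples with templates. Real labs use far more elaborate model-in-the-loop pipelines; the facts, templates, and names below are invented for illustration only.

```python
# Toy synthetic-data generation: expand seed facts through templates.
# This illustrates the general idea only; production pipelines typically
# use an LLM to paraphrase, filter, and grade the generated examples.
import itertools

seed_facts = [
    ("Paris", "France"),
    ("Tokyo", "Japan"),
    ("Bogotá", "Colombia"),
]

templates = [
    "Q: What is the capital of {country}? A: {city}",
    "Q: {city} is the capital of which country? A: {country}",
]

# Every (fact, template) combination yields one synthetic example:
# 3 seed facts x 2 templates = 6 training strings.
synthetic = [
    template.format(city=city, country=country)
    for (city, country), template in itertools.product(seed_facts, templates)
]
```

The obvious catch, and the reason the Nature article hedges, is that templated or model-generated data recycles what the model already knows; it multiplies tokens, not knowledge.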

One approach is to “find data.” The write up says:

one option might be to harvest non-public data, such as WhatsApp messages or transcripts of YouTube videos. Although the legality of scraping third-party content in this manner is untested, companies do have access to their own data, and several social-media firms say they use their own material to train their AI models. For example, Meta in Menlo Park, California, says that audio and images collected by its virtual-reality headset Meta Quest are used to train its AI.

And what about this angle?

Another option might be to focus on specialized data sets such as astronomical or genomic data, which are growing rapidly. Fei-Fei Li, a prominent AI researcher at Stanford University in California, has publicly backed this strategy. She said at a Bloomberg technology summit in May that worries about data running out take too narrow a view of what constitutes data, given the untapped information available across fields such as health care, the environment and education.

If you want more of these work arounds, please, consult the Nature article.

Several observations are warranted:

First, the current AI “revolution” is the result of many years of research and experimentation. The fact that today’s AI produces reasonably good high school essays and allows people to interact with a search system is a step forward. However, like most search-based innovations, the systems have flaws.

Second, the use of neural networks and the creation by Google (allegedly) of the transformer has provided fuel to fire the engines of investment. The money machines are chasing the next big thing. The problem is that the costs are now becoming evident. It is tough to hide the demand for electric power. (Hey, no problem. How about a modular thorium reactor? Yeah, just pick one up at Home Depot. The small nukes are next to the Honda generators.) There is the need for computation. Google can talk about quantum supremacy, but good old-fashioned architecture is making Nvidia a big dog in AI. And the cost of people? It is off the chart. Forget those coding boot camps and learn to do matrix math in your head.

Third, the real world applications like those Apple is known for don’t work very well. After vaporware time, Apple is pushing OpenAI to iPhone users. Will Siri actually work? Apple cannot afford to whiff too many big plays. Do you wear your Apple headset, or do you have warm and fuzzies for the 2024 Mac Mini, which is a heck of a lot cheaper than some of the high power Macs from a year ago? What about Copilot in Notepad? Hey, that’s helpful to some Notepad users. How many? Well, that’s another question. How many people want smart software doing the Clippy thing with every click?

Net net: It is now time for innovation, not marketing. Which of the Big Dog AI outfits will break through the stone walls? The bigger question is, “What if it is an innovator in China?” Impossible, right?

Stephen E Arnold, December 27, 2024

Bold Allegation: Colombia, the US, and Pegasus

December 27, 2024

The United States assists its allies, but why did the Biden Administration pony up $11 million for hacking software? DropSiteNews investigates the software, its huge price tag, and why the US bought it in: “The U.S. Bought Pegasus For Colombia With $11 Million In Cash. Now Colombians Are Asking Why.” Colombians are just as curious as Americans about why the US coughed up $11 million in cash for the Israeli hacking software.

The Colombian ambassador to the US, Daniel García-Peña, confirmed that Washington DC assisted his country in buying the software so the Colombian government could track drug cartels. The software was purchased and used throughout 2021-2022. Pegasus usage stopped in 2022, and it was never used against politicians, such as former Colombian president Ivan Duque. The Biden Administration remained in control of the Pegasus software and ensured that the Colombian government only provided spying targets.

It’s understandable why Colombia’s citizens were antsy about Pegasus:

“García-Peña’s revelations come two months after Colombian President Gustavo Petro delivered a televised speech in which he revealed some of the details of the all-cash, $11-million purchase, including that it has been split across two installments, flown from Bogotá and deposited into the Tel Aviv bank account belonging to NSO Group, the company that owns Pegasus. Soon after the speech, Colombia’s attorney general opened an investigation into the purchase and use of Pegasus. In October, Petro accused the director of the NSO Group of money laundering, due to the tremendous amount of cash he transported on the flights.

The timeline of the purchase and use of Pegasus overlaps with a particularly turbulent time in Colombia. A social movement had begun protesting against Duque, while in the countryside, Colombia’s security forces were killing or arresting major guerrilla and cartel leaders. At the time, Petro, the first left-wing president in the country’s recent history, was campaigning for the presidency.”

Pegasus is powerful hacking software, and Colombians were suspicious of how their government acquired it. Journalists were especially curious where the influx of cash came from. They slowly discovered it was from the United States, with the intent to spy on drug cartels. Colombia is a tumultuous nation with crime worse than the Wild West. Pegasus hopefully caught the worst of the bad actors.

Whitney Grace, December 27, 2024

Boxing Day Cheat Sheet for AI Marketing: Happy New Year!

December 27, 2024

Other than automation and taking the creative talent out of the entertainment industry, where is AI headed in 2025? The lowdown for the upcoming year can be found on the Techknowledgeon AI blog and its post: “The Rise Of Artificial Intelligence: Know The Answers That Makes You Sensible About AI.”

The article acts as a primer for what AI is, its advantages, and important questions about the technology. The questions that grab our attention are “Will AI take over humans one day?” and “Is AI an Existential Threat to Humanity?” Here’s the answer to the first question:

“The idea of AI taking over humanity has been a recurring theme in science fiction and a topic of genuine concern among some experts. While AI is advancing at an incredible pace, its potential to surpass or dominate human capabilities is still a subject of intense debate. Let’s explore this question in detail.

AI, despite its impressive capabilities, has significant limitations:

  • Lack of General Intelligence: Most AI today is classified as narrow AI, meaning it excels at specific tasks but lacks the broader reasoning abilities of human intelligence.
  • Dependency on Humans: AI systems require extensive human oversight for design, training, and maintenance.
  • Absence of Creativity and Emotion: While AI can simulate creativity, it doesn’t possess intrinsic emotions, intuition, or consciousness.

And then the second one is:

“Instead of "taking over," AI is more likely to serve as an augmentation tool:

  • Workforce Support: AI-powered systems are designed to complement human skills, automating repetitive tasks and freeing up time for creative and strategic thinking.
  • Health Monitoring: AI assists doctors but doesn’t replace the human judgment necessary for patient care.
  • Smart Assistants: Tools like Alexa or Google Assistant enhance convenience but operate under strict limitations.”

So AI has a long way to go before it replaces humanity, and the singularity of surpassing human intelligence is either a long way off or might never happen.

This dossier includes useful information to understand where AI is going and will help anyone interested in learning what AI algorithms are projected to do in 2025.

Whitney Grace, December 27, 2024

2025 Consulting Jive

December 26, 2024

Here you go. I have extracted a list of the jargon one needs to write reports, give talks, and mesmerize those with a desire to be the smartest people in the room:

  • Agentic AI
  • AI governance platforms
  • Ambient invisible intelligence
  • Augmented human capability
  • Autonomous businesses
  • BBMIs (Brain-Body Machine Interfaces)
  • Brand reputation
  • Business benefits
  • Contextual awareness
  • Continuous adaptive trust model
  • Cryptography
  • Data privacy
  • Disinformation security
  • Energy-efficient computing
  • Guardrails
  • Hybrid computing
  • Human-machine synergy
  • Identity validation
  • Immersive experiences
  • Model lifecycle management
  • Multilayered adaptive learning
  • Neurological enhancement
  • Polyfunctional robots
  • Post-quantum cryptography (PQC)
  • Provenance
  • Quantum computing (QC)
  • Real-time personalization
  • Risk scoring
  • Spatial computing
  • Sustainability
  • Transparency
  • UBMIs (User-Brain Machine Interfaces)

Did this spark your enthusiasm for modern jingo jango? Hats off to the Gartner Group. Wow! Great. Is the list complete? Of course not. I left out bullish*t.

Stephen E Arnold, December 26, 2024

Modern Management Revealed and It Is Jaundiced with a Sickly Yellowish Cast

December 26, 2024

This blog post is the work of an authentic dinobaby. No smart software was used.

I was zipping through the YCombinator list of “important” items and spotted this one: “Time for a Code-Yellow?: A Blunt Instrument That Works.” I associated Code Yellow with the Google knee jerk in early 2023 when Microsoft rolled out its smart software deal with OpenAI. Immediately Google was on the backfoot. Word filtered across the blogs and “real” news sources that the world’s biggest online ad outfit and most easily sued company was reeling. The company declared a “Code Yellow,” a “Code Red,” and probably a Code 300 Terahertz to really goose the Googlers.


Grok does a code yellow. Good enough.

I found the reaction, the fumbling, and the management imperative as wonky as McKinsey getting ensnared in its opioid consulting work. What will those MBAs come up with next?

The “Time for a Code-Yellow?” write up is interesting. Read it. I want to focus on a handful of supplemental observations which appeared in the comments on the article. These, I believe, make clear the “problem” that is causing many societal problems, including the egregious actions of big companies, some government agencies, and those do-good non-governmental organizations.

Here we go and the italics are my observation on the individual insights:

Tubojet1321 says: “If everything is an emergency, nothing is an emergency.” Excellent observation.

nine_zeros says: “Eventually everyone learns inaction.” Yep, meetings are more important than doing. The fix is to have another meeting.

magical hippo says: “My dad used to flippantly say he had three piles of papers on his desk: ‘urgent,’ ‘very urgent,’ and ‘no longer urgent.’” The modern organization creates bureaucratic friction at a much faster pace.

x0x0 says: “I’m utter sh*t at management, [I] refuse to prioritize until it’s a company-threatening crisis, and I’m happy to make my team suffer for my incompetence.” Outstanding self critique.

Lammy says: “The etymology is not green/yellow/red. It’s just not-Yellow or yes-Yellow. See Steven Levy’s In The Plex (2011), p. 186: ‘A Code Yellow is named after a tank top of that color owned by engineering director Wayne Rosing. During Code Yellow a leader is given the shirt and can tap anyone at Google and force him or her to drop a current project to help out. Often, the Code Yellow leader escalates the emergency into a war room situation and pulls people out of their offices and into a conference room for a more extended struggle.’” Really? I thought the popularization of “yellow” as a caution or warning became a shared understanding in the US with the advent of trains long before T-shirts and Google. Note: Train professionals used a signaling system before Messrs. Brin and Page “discovered” Jon Kleinberg’s CLEVER patent.

lizzas says: “24/7 oncall to … be yanked onto something the boss fancies. No thanks. What about… planning?” Planning. Let’s call a meeting, talk about a plan, then have a meeting to discuss options, and finally have a meeting to do planning. Sounds like a plan.

I have a headache from the flashing yellow lights. Amazing about Google’s originality, isn’t it? Oh, over the holiday downtime, check out Dr. Jon Kleinberg and what he was doing at IBM’s Almaden Research Center in US6112202, filed in 1997. Are those yellow lights still flashing?

Stephen E Arnold, December 26, 2024

MUT Bites: Security Perimeters May Not Work Very Well

December 26, 2024

This blog post is the work of an authentic dinobaby. No smart software was used.

I spotted a summary of an item in Ars Technica which recycled a report from Checkmarx and Datadog Security Labs. If you want the details, read “Yearlong Supply Chain Attack Targeting Security Pros Steals 390,000 Credentials.” I want to skip what is now a soap opera story repeated again and again: Bad actors compromise a system, security professionals are aghast, and cybersecurity firms license more smart, agentic enabled systems. Repeat. Repeat. Repeat. That’s how soap operas worked when I was growing up.

Let’s jump to several observations:

  1. Cyber defenses are not working
  2. Cyber security vendors insist their systems are working because numerous threats were blocked. Just believe our log data. See. We protected you … a lot.
  3. Individual cyber security vendors are a cohort which can be compromised, not once in a mad minute of carelessness. No. Compromised for — wait for it — up to a year.

The engineering of software and systems is, one might conclude, rife with vulnerabilities. If the cyber security professionals cannot protect themselves, who can?

Stephen E Arnold, December 26, 2024

Juicing Up RAG: The RAG Bop Bop

December 26, 2024

Can improved information retrieval techniques lead to more relevant data for AI models? One startup is using a pair of existing technologies to attempt just that. MarkTechPost invites us to “Meet CircleMind: An AI Startup that is Transforming Retrieval Augmented Generation with Knowledge Graphs and PageRank.” Writer Shobha Kakkar begins by defining Retrieval Augmented Generation (RAG). For those unfamiliar, it basically combines information retrieval with language generation. Traditionally, these models use either keyword searches or dense vector embeddings. This means a lot of irrelevant and unauthoritative data get raked in with the juicy bits. The write-up explains how this new method refines the process:

“CircleMind’s approach revolves around two key technologies: Knowledge Graphs and the PageRank Algorithm. Knowledge graphs are structured networks of interconnected entities—think people, places, organizations—designed to represent the relationships between various concepts. They help machines not just identify words but understand their connections, thereby elevating how context is both interpreted and applied during the generation of responses. This richer representation of relationships helps CircleMind retrieve data that is more nuanced and contextually accurate. However, understanding relationships is only part of the solution. CircleMind also leverages the PageRank algorithm, a technique developed by Google’s founders in the late 1990s that measures the importance of nodes within a graph based on the quantity and quality of incoming links. Applied to a knowledge graph, PageRank can prioritize nodes that are more authoritative and well-connected. In CircleMind’s context, this ensures that the retrieved information is not only relevant but also carries a measure of authority and trustworthiness. By combining these two techniques, CircleMind enhances both the quality and reliability of the information retrieved, providing more contextually appropriate data for LLMs to generate responses.”
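For readers who want to see the PageRank half of that pitch in motion, here is a toy Python sketch: a tiny knowledge graph of entities, the classic iterative PageRank update to score each node by the quantity and quality of its incoming links, and a retrieval step that prefers the better-connected candidate. The graph, the entity names, and the retrieval step are invented for illustration; this is not CircleMind’s actual code.

```python
# Toy PageRank over a small knowledge graph, used to rank retrieval
# candidates by authority. Illustrative only; real systems weight edges
# and combine this score with semantic relevance.

def pagerank(graph, damping=0.85, iterations=50):
    """graph: dict mapping each node to the list of nodes it links to."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Every node keeps a (1 - damping) baseline share of rank.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in graph.items():
            if targets:
                # Pass this node's damped rank evenly to its out-links.
                share = damping * rank[node] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:
                # Dangling node: spread its rank across all nodes.
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

# A made-up knowledge graph: entities link to related entities.
kg = {
    "Google": ["PageRank", "Larry Page"],
    "Larry Page": ["Google", "PageRank"],
    "PageRank": ["Google"],
    "CircleMind": ["PageRank", "Google"],
}

scores = pagerank(kg)
# Retrieval step: among candidate entities matching a query, prefer the
# one with more (and better) incoming links in the graph.
candidates = ["PageRank", "CircleMind"]
best = max(candidates, key=scores.get)
```

The design point the quoted passage is making falls out of the toy: a node nobody links to scores near the baseline no matter how often its name matches the query, which is exactly the authority filter CircleMind says it wants on top of the knowledge graph’s relationship structure.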

CircleMind notes its approach is still in its early stages and expects it to take some time to iron out all the kinks. Scaling it up will require clearing hurdles of speed and computational costs. Meanwhile, a few early users are getting a taste of the beta version now. Based in San Francisco, the young startup was launched in 2024.

Cynthia Murrell, December 26, 2024

Does Apple Think Google Is Inept?

December 25, 2024

At a pre-holiday get together, I heard Wilson say, “Don’t ever think you’re completely useless. You can always be used as a bad example.”

I read the trust outfit’s write up “Apple Seeks to Defend Google’s Billion Dollar Payments in Search Case.” I found the story cutting two ways.

Apple, a big outfit, believes that it can explain in a compelling way why Google should be paying Apple to make Google search the default search engine on Apple devices. Do you remember the Walt Disney film The Hunchback of Notre Dame? I love an argument with a twisted back story. Apple seems to be saying to Google: “Stupidity is far more dangerous than evil. Evil takes a break from time to time. Stupidity does not.”

The Thomson Reuters article offers:

Apple has asked to participate in Google’s upcoming U.S. antitrust trial over online search, saying it cannot rely on Google to defend revenue-sharing agreements that send the iPhone maker billions of dollars each year for making Google the default search engine on its Safari browser.

Apple wants that $20 billion a year and certainly seems to be sending a signal that Google will screw up the deal with a Googley argument. At the same holiday party, Wilson’s significant other observed, “My people skills are just fine. It’s my tolerance to idiots that needs work.” I wonder if that person was talking about Apple?

Apple may be fearful that Google will lurch into Code Yellow, tell the jury that gluing cheese on pizza is logical, and explain that it is not a monopoly. Apple does not want to be in the court cafeteria and hear, “I heard Google ask the waiter, ‘How do you prepare chicken?’ The waiter replied, ‘Nothing special. The cook just says, “You are going to die.”’”

The Thomson Reuters’ article offers this:

Apple wants to call witnesses to testify at an April trial. Prosecutors will seek to show Google must take several measures, including selling its Chrome web browser and potentially its Android operating system, to restore competition in online search. “Google can no longer adequately represent Apple’s interests: Google must now defend against a broad effort to break up its business units,” Apple said.

I had a professor from Oklahoma who told our class:

“If Stupidity got us into this mess, then why can’t it get us out?”

Apple and Google arguing in court. Google has a lousy track record in court. Apple is confident it can convince a court that taking Google’s money is okay.

Albert Einstein allegedly observed:

The difference between stupidity and genius is that genius has its limits.

Yep, Apple and Google, quite a pair.

Stephen E Arnold, December 25, 2024
