AI Video Is Improving: Hello, Hollywood!
December 30, 2024
Has AI video gotten scarily believable? Well, yes. For anyone who has not gotten the memo, The Guardian declares, “Video Is AI’s New Frontier—and It Is so Persuasive, We Should All Be Worried.” Writer Victoria Turk describes recent developments:
“Video is AI’s new frontier, with OpenAI finally rolling out Sora in the US after first teasing it in February, and Meta announcing its own text-to-video tool, Movie Gen, in October. Google made its Veo video generator available to some customers this month. Are we ready for a world in which it is impossible to discern which of the moving images we see are real?”
Ready or not, here it is. No amount of hand-wringing will change that. Turk mentions ways bad actors abuse the technology: Scammers who impersonate victims’ loved ones to extort money. Deepfakes created to further political agendas. Fake sexual images and videos featuring real people. She also cites safeguards like watermarks and content restrictions as evidence AI firms understand the potential for abuse.
But the author’s main point seems to be more philosophical. It was prompted by convincing fake footage of a tree frog, documentary style. She writes:
“Yet despite the technological feat, as I watched the tree frog I felt less amazed than sad. It certainly looked the part, but we all knew that what we were seeing wasn’t real. The tree frog, the branch it clung to, the rainforest it lived in: none of these things existed, and they never had. The scene, although visually impressive, was hollow.”
Turk also laments the existence of this Meta-made baby hippo, which she declares is “dead behind the eyes.” Is it though? Either way, these experiences led Turk to ponder a bleak future in which one can never know which imagery can be trusted. She concludes with this anecdote:
“I was recently scrolling through Instagram and shared a cute video of a bunny eating lettuce with my husband. It was a completely benign clip – but perhaps a little too adorable. Was it AI, he asked? I couldn’t tell. Even having to ask the question diminished the moment, and the cuteness of the video. In a world where anything can be fake, everything might be.”
That is true. An important point to remember when we see footage of a politician doing something horrible. Or if we get a distressed call from a family member begging for money. Or if we see a cute animal video but prefer to withhold the dopamine rush lest it turn out to be fake.
Cynthia Murrell, December 30, 2024
Debbie Downer Says, No AI Payoff Until 2026
December 27, 2024
Holiday greetings from the Financial Review. Its story “Wall Street Needs to Prepare for an AI Winter” is a joyous description of what’s coming down the Information Highway. The uplifting article sings:
shovelling more and more data into larger models will only go so far when it comes to creating “intelligent” capabilities, and we’ve just about arrived at that point. Even if more data were the answer, those companies that indiscriminately vacuumed up material from any source they could find are starting to struggle to acquire enough new information to feed the machine.
Translating to rural Kentucky speak: “We been shoveling in the horse stall and ain’t found the nag yet.”
The flickering light bulb has apparently illuminated the idea that smart software is expensive to develop, train, optimize, run, market, and defend against allegations of copyright infringement.
To add to the profit shadow, Debbie Downer’s cousin compared OpenAI to Visa. The idea in “OpenAI Is Visa” is that Sam AI-Man’s company is working overtime to preserve its lead in AI and become a monopoly before competitors figure out how to knock off OpenAI. The write up says:
Either way, Visa and OpenAI seem to agree on one thing: that “competition is for losers.”
To add to the uncertainty about US AI “dominance,” Venture Beat reports:
DeepSeek-V3, ultra-large open-source AI, outperforms Llama and Qwen on launch.
Does that suggest the squabbling and mud wrestling among US firms can be body slammed by Chinese AI grapplers who are more agile? Who knows. However, in a series of tweets, DeepSeek suggested that its “cost” was less than $6 million. The idea is that China’s AI will do to US AI what Chinese electric car pricing is doing to some EV manufacturers. Better and faster? I don’t know, but that “cheaper” angle will resonate with those asked to pump cash into the Big Dogs of US AI.
In January 2023, many were struck by the wonders of smart software. Will the same festive atmosphere prevail in 2025?
Stephen E Arnold, December 27, 2024
OpenAI Partners with Defense Startup Anduril to Bring AI to US Military
December 27, 2024
No smart software involved. Just a dinobaby’s work.
We learn from the Independent that “OpenAI Announces Weapons Company Partnership to Provide AI Tech to Military.” The partnership with Anduril represents an about-face for OpenAI. This will excite some people, scare others, and lead to remakes of the “Terminator.” Beyond Search thinks that automated smart death machines are so trendy. China also seems enthused. We learn:
“ChatGPT-maker OpenAI and high-tech defense startup Anduril Industries will collaborate to develop artificial intelligence-inflected technologies for military applications, the companies announced. ‘U.S. and allied forces face a rapidly evolving set of aerial threats from both emerging unmanned systems and legacy manned platforms that can wreak havoc, damage infrastructure and take lives,’ the companies wrote in a Wednesday statement. ‘The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.’ The companies framed the alliance as a way to secure American technical supremacy during a ‘pivotal moment’ in the AI race against China. They did not disclose financial terms.”
Of course not. Tech companies were once wary of embracing military contracts, but it seems those days are over. Why now? The article observes:
“The deals also highlight the increasing nexus between conservative politics, big tech, and military technology. Palmer Luckey, co-founder of Anduril, was an early, vocal supporter of Donald Trump in the tech world, and is close with Elon Musk. … Vice-president-elect JD Vance, meanwhile, is a protege of investor Peter Thiel, who co-founded Palantir, another of the companies involved in military AI.”
“Involved” is putting it lightly. And as readers may have heard, Musk appears to be best buds with the president-elect. He is also at the head of the new Department of Government Efficiency, which sounds like a federal agency but is not. Yet. The commission is expected to strongly influence how the next administration spends our money. Will they adhere to multinational guidelines on military use of AI? Do PayPal alums have any hand in this type of deal?
Cynthia Murrell, December 27, 2024
AI Oh-Oh: Innovation Needed Now
December 27, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I continue to hear about AI whiz kids “running out of data.” When people and institutions don’t know what’s happening, it is easy to just smash and grab. The copyright litigation and the willingness of AI companies to tie up with content owners make explicit that the zoom zoom days are over.
A smart software wizard is wondering how to get over, under, around, or through the stone wall of exhausted content. Thanks, Grok, good enough.
“The AI Revolution Is Running Out of Data. What Can Researchers Do?” is a less crazy discussion of the addictive craze which has made smart software or — wait for it — agentic intelligence the next big thing. The write up states:
The Internet is a vast ocean of human knowledge, but it isn’t infinite. And artificial intelligence (AI) researchers have nearly sucked it dry.
“Sucked it dry” and the systems still hallucinate. Guard rails prevent users from obtaining information germane to certain government investigations. The image generators refuse to display a classroom of students paying attention to mobile phones, not the teacher. Yep, dry. More like “run aground.”
The fix to running out of data, according to the write up, is:
plans to work around it, including generating new data and finding unconventional data sources.
One approach is to “find data.” The write up says:
one option might be to harvest non-public data, such as WhatsApp messages or transcripts of YouTube videos. Although the legality of scraping third-party content in this manner is untested, companies do have access to their own data, and several social-media firms say they use their own material to train their AI models. For example, Meta in Menlo Park, California, says that audio and images collected by its virtual-reality headset Meta Quest are used to train its AI.
And what about this angle?
Another option might be to focus on specialized data sets such as astronomical or genomic data, which are growing rapidly. Fei-Fei Li, a prominent AI researcher at Stanford University in California, has publicly backed this strategy. She said at a Bloomberg technology summit in May that worries about data running out take too narrow a view of what constitutes data, given the untapped information available across fields such as health care, the environment and education.
If you want more of these workarounds, please consult the Nature article.
Several observations are warranted:
First, the current AI “revolution” is the result of many years of research and experimentation. The fact that today’s AI produces reasonably good high school essays and allows people to interact with a search system is a step forward. However, like most search-based innovations, the systems have flaws.
Second, the use of neural networks and the creation by Google (allegedly) of the transformer have provided fuel to fire the engines of investment. The money machines are chasing the next big thing. The problem is that the costs are now becoming evident. It is tough to hide the demand for electric power. (Hey, no problem: how about a modular thorium reactor? Yeah, just pick one up at Home Depot. The small nukes are next to the Honda generators.) There is the need for computation. Google can talk about quantum supremacy, but good old-fashioned architecture is making Nvidia a big dog in AI. And the cost of people? It is off the chart. Forget those coding boot camps and learn to do matrix math in your head.
Third, the real-world applications like those Apple is known for don’t work very well. After vaporware time, Apple is pushing OpenAI to iPhone users. Will Siri actually work? Apple cannot afford to whiff too many big plays. Do you wear your Apple headset, or do you have warm and fuzzies for the 2024 Mac Mini, which is a heck of a lot cheaper than some of the high-power Macs from a year ago? What about Copilot in Notepad? Hey, that’s helpful to some Notepad users. How many? Well, that’s another question. How many people want smart software doing the Clippy thing with every click?
Net net: It is now time for innovation, not marketing. Which of the Big Dog AI outfits will break through the stone walls? The bigger question is, “What if it is an innovator in China?” Impossible, right?
Stephen E Arnold, December 27, 2024
Boxing Day Cheat Sheet for AI Marketing: Happy New Year!
December 27, 2024
Other than automation and taking the creative talent out of the entertainment industry, where is AI headed in 2025? The lowdown for the upcoming year can be found on the Techknowledgeon AI blog and its post: “The Rise Of Artificial Intelligence: Know The Answers That Makes You Sensible About AI.”
The article acts as a primer on what AI is, its advantages, and answers to important questions about the technology. The questions that grab our attention are “Will AI take over humans one day?” and “Is AI an Existential Threat to Humanity?” Here’s the answer to the first question:
“The idea of AI taking over humanity has been a recurring theme in science fiction and a topic of genuine concern among some experts. While AI is advancing at an incredible pace, its potential to surpass or dominate human capabilities is still a subject of intense debate. Let’s explore this question in detail.
AI, despite its impressive capabilities, has significant limitations:
- Lack of General Intelligence: Most AI today is classified as narrow AI, meaning it excels at specific tasks but lacks the broader reasoning abilities of human intelligence.
- Dependency on Humans: AI systems require extensive human oversight for design, training, and maintenance.
- Absence of Creativity and Emotion: While AI can simulate creativity, it doesn’t possess intrinsic emotions, intuition, or consciousness.
And then the second one is:
“Instead of "taking over," AI is more likely to serve as an augmentation tool:
- Workforce Support: AI-powered systems are designed to complement human skills, automating repetitive tasks and freeing up time for creative and strategic thinking.
- Health Monitoring: AI assists doctors but doesn’t replace the human judgment necessary for patient care.
- Smart Assistants: Tools like Alexa or Google Assistant enhance convenience but operate under strict limitations.”
So AI has a long way to go before it replaces humanity, and the singularity, the point at which it surpasses human intelligence, is either a long way off or may never happen.
This dossier includes useful information to understand where AI is going and will help anyone interested in learning what AI algorithms are projected to do in 2025.
Whitney Grace, December 27, 2024
Juicing Up RAG: The RAG Bop Bop
December 26, 2024
Can improved information retrieval techniques lead to more relevant data for AI models? One startup is using a pair of existing technologies to attempt just that. MarkTechPost invites us to “Meet CircleMind: An AI Startup that is Transforming Retrieval Augmented Generation with Knowledge Graphs and PageRank.” Writer Shobha Kakkar begins by defining Retrieval Augmented Generation (RAG). For those unfamiliar, it basically combines information retrieval with language generation. Traditionally, these models use either keyword searches or dense vector embeddings. This means a lot of irrelevant and unauthoritative data get raked in with the juicy bits. The write-up explains how this new method refines the process:
“CircleMind’s approach revolves around two key technologies: Knowledge Graphs and the PageRank Algorithm. Knowledge graphs are structured networks of interconnected entities—think people, places, organizations—designed to represent the relationships between various concepts. They help machines not just identify words but understand their connections, thereby elevating how context is both interpreted and applied during the generation of responses. This richer representation of relationships helps CircleMind retrieve data that is more nuanced and contextually accurate. However, understanding relationships is only part of the solution. CircleMind also leverages the PageRank algorithm, a technique developed by Google’s founders in the late 1990s that measures the importance of nodes within a graph based on the quantity and quality of incoming links. Applied to a knowledge graph, PageRank can prioritize nodes that are more authoritative and well-connected. In CircleMind’s context, this ensures that the retrieved information is not only relevant but also carries a measure of authority and trustworthiness. By combining these two techniques, CircleMind enhances both the quality and reliability of the information retrieved, providing more contextually appropriate data for LLMs to generate responses.”
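The mechanics are easier to see in a toy example. Below is a minimal sketch, not CircleMind’s implementation, assuming the open source networkx library and an invented handful of entities; it simply shows how PageRank can rank knowledge-graph nodes before their associated passages are handed to an LLM as context.

```python
# A rough sketch, not CircleMind's code: rank entities in a toy knowledge graph
# with PageRank, then keep the best-connected ones as retrieval context.
import networkx as nx

# Invented entity graph: an edge means "is related to."
G = nx.DiGraph()
G.add_edges_from([
    ("OpenAI", "Sora"), ("OpenAI", "ChatGPT"),
    ("Meta", "Movie Gen"), ("Google", "Veo"),
    ("Sora", "text-to-video"), ("Movie Gen", "text-to-video"),
    ("Veo", "text-to-video"),
])

# PageRank scores each node by the quantity and quality of its incoming links.
scores = nx.pagerank(G, alpha=0.85)

# The top-scoring entities are the "authoritative" ones; the passages attached
# to them would be stuffed into the LLM prompt as retrieved context.
top_entities = sorted(scores, key=scores.get, reverse=True)[:3]
print(top_entities)  # "text-to-video" ranks first thanks to three incoming links
```

In an actual deployment the graph would be extracted from source documents, and the passages behind the top entities, not the node names, would go into the prompt.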
CircleMind notes its approach is still in its early stages, and expects it to take some time to iron out all the kinks. Scaling it up will require clearing hurdles of speed and computational costs. Meanwhile, a few early users are getting a taste of the beta version now. Based in San Francisco, the young startup was launched in 2024.
Cynthia Murrell, December 26, 2024
Anthropic Gifts a Feeling of Safety: Insecurity Blooms This Holiday Season
December 25, 2024
Written by a dinobaby, not an over-achieving, unexplainable AI system.
TechCrunch published “Google Is Using Anthropic’s Claude to Improve Its Gemini AI.” The write up reports:
Contractors working to improve Google’s Gemini AI are comparing its answers against outputs produced by Anthropic’s competitor model Claude, according to internal correspondence seen by TechCrunch. Google would not say, when reached by TechCrunch for comment, if it had obtained permission for its use of Claude in testing against Gemini.
Beyond Search notes a Pymnts.com report from February 5, 2023, that Google had invested $300 million in Anthropic. Beyond Search recalls a presentation at a law enforcement conference. One comment an attendee made to me suggested that Google was well aware of Anthropic’s so-called constitutional AI. I am immune to AI and crypto babble, but I did chase down “constitutional AI” because the image the bound phrase sparked in my mind was that of the mess my French bulldog delivers when he has eaten spicy food.
The illustration comes from You.com. Kwanzaa was the magic word. Good enough.
The explanation consumes 34 pages of an ArXiv paper called “Constitutional AI: Harmlessness from AI Feedback.” The paper has more than 48 authors. (Headhunters, please, take note when you need to recruit AI wizards.) I read the paper, and I think — please, note, “think” — the main idea is:
Humans provide some input. Then the Anthropic system figures out how to achieve helpfulness and instruction-following without human feedback. And the “constitution”? That is the set of human-created rules necessary to get the smart software rolling along. Presumably Anthropic’s algorithms ride without training wheels forevermore.
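For readers who want something more concrete than my summary, here is a minimal sketch of the critique-and-revise loop the paper describes. The llm() function is a hypothetical placeholder, not Anthropic’s API, and the two-principle “constitution” is invented for illustration.

```python
# A minimal sketch of the Constitutional AI critique-and-revise loop described
# in the paper. llm() is a hypothetical stand-in for any chat-model call; the
# two-item "constitution" is invented here for illustration.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Rewrite anything that assists with illegal or dangerous activity.",
]

def llm(prompt: str) -> str:
    """Stand-in for a language model call (assumption, not a real API)."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> str:
    response = llm(user_prompt)
    for principle in CONSTITUTION:
        # The model critiques its own answer against a human-written principle...
        critique = llm(
            f"Principle: {principle}\nPrompt: {user_prompt}\n"
            f"Response: {response}\nCritique the response against the principle."
        )
        # ...then rewrites the answer to address that critique.
        response = llm(
            f"Original response: {response}\nCritique: {critique}\n"
            f"Rewrite the response so it satisfies the critique."
        )
    # The revised prompt-and-response pairs become training data, so later
    # fine-tuning needs no per-example human feedback.
    return response
```

The point is that the model grades and rewrites its own outputs against the constitution; humans only supplied the rules.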
The CAI acronym has not caught on like the snappier RAG or “retrieval augmented generation” or the most spectacular jargon “synthetic data.” But obviously Google understands and values it to the tune of hundreds of millions of dollars, staff time, and the attention of big Googler thinkers like Jeff Dean (who once was the Big Dog of AI but has given way to the alpha dog at DeepMind).
The swizzle for this “testing” or whatever the Googlers are doing is “safety.” I know that when I ask for an image like “a high school teacher at the greenboard talking to students who are immersed in their mobile phones”, I am informed that the image is not safe. I assume Anthropic will make such crazy prohibitions slightly less incomprehensible. Well, maybe, maybe not.
Several observations are warranted:
- Google’s investment in Anthropic took place shortly after the Microsoft AI marketing coup in 2023. Perhaps someone knew that Google’s “we invented it” transformer technology was becoming a bit of a problem.
- Despite the Google “we are the bestest” in AI technology posture, the company continues to feel the need to prove that it is the bestest. That’s good. Self-knowledge and defeating “not invented here” malaise are positives.
- DeepMind itself — although identified as the go-to place for the most bestest AI technology — may not be perceived as the outfit with the Holy Grail, the secret to eternal life, and the owner of most of the land on which the Seven Cities of Cibola are erected.
Net net: Lots of authors, Google testing itself, and a bit of Google’s inferiority complex. Quite a Kwanzaa gift.
Stephen E Arnold, December 25, 2024
FReE tHoSe smaRT SoFtWarEs!
December 25, 2024
No smart software involved. Just a dinobaby’s work.
Do you have the list of stop words you use in your NLP prompts? (If not, click here.) You are not happy when words on the list like “b*mb,” “terr*r funding,” and others do not return exactly what you are seeking? If you say, “Yes”, you will want to read “BEST-OF-N JAILBREAKING” by a Frisbee team complement of wizards; namely, John Hughes, Sara Price, Aengus Lynch, Rylan Schaeffer, Fazl Barez, Sanmi Koyejo, Henry Sleight, Erik Jones, Ethan Perez, and Mrinank Sharma. The people doing the heavy lifting were John Hughes (a consultant who does work for Speechmatics and Anthropic) and Mrinank Sharma (an Anthropic engineer involved in — wait for it — adversarial robustness).
The main point is that these Anthropic-linked wizards have figured out how to knock down the guard rails for smart software. And those stop words? Just whip up a snappy prompt, mix up the capital and lower case letters, and keep sending the query to the smart software. At some point, those capitalization and other tweaks will cause the LLM to go your way. Want to whip up a surprise in your bathtub? LLMs will definitely help you out.
The paper has nifty charts and lots of academic hoo-hah. The key insight is what the many, many authors call “attack composition.” You will be able to get the how-to by reading the 73-page paper, probably a result of each author writing 10 pages in the hopes of landing an even more high-paying, in-demand gig.
Several observations:
- The idea that guard rails work is now called into question
- The disclosure of the method means that smart software will do whatever a clever bad actor wants
- The rush to AI is about market lock up, not the social benefit of the technology.
The new year will be interesting. The paper’s information is quite the holiday gift.
Stephen E Arnold, December 25, 2024
Agentic Babies for 2025?
December 24, 2024
Are the days of large language models numbered? Yes, according to the CEO and co-founder of Salesforce. Finance site Benzinga shares, “Marc Benioff Says Future of AI Not in Bots Like ChatGPT But In Autonomous Agents.” Writer Ananya Gairola points to a recent Wall Street Journal podcast in which Benioff shared his thoughts:
“He stated that the next phase of AI development will focus on autonomous agents, which can perform tasks independently, rather than relying on LLMs to drive advancements. He argued that while AI tools like ChatGPT have received significant attention, the real potential lies in agents. ‘Has the AI taken over? No. Has AI cured cancer? No. Is AI curing climate change? No. So we have to keep things in perspective here,’ he stated. Salesforce provides both prebuilt and customizable AI agents for businesses looking to automate customer service functions. ‘But we are not at that moment that we’ve seen in these crazy movies — and maybe we will be one day, but that is not where we are today,’ Benioff stated during the podcast.”
Someday, he says. But it would seem the race is on. Gairola notes OpenAI is poised to launch its own autonomous AI agent in January. Will that company dominate the autonomous AI field, as it has with generative AI? Will the new bots come equipped with bias and hallucinations? Stay tuned.
Cynthia Murrell, December 24, 2024
AI Makes Stuff Up and Lies. This Is New Information?
December 23, 2024
The blog post is the work of a dinobaby, not AI.
I spotted “Alignment Faking in Large Language Models.” My initial reaction was, “This is new information?” and “Have the authors forgotten about hallucination?” The original article from Anthropic sparked another essay. This one appeared in Time Magazine (online version). Time’s article was titled “Exclusive: New Research Shows AI Strategically Lying.” I like the “strategically lying,” which implies that there is some intent behind the prevarication. Since smart software reflects its developers’ use of fancy math and the numerous knobs and levers those developers can adjust while the model is gobbling up information and “learning,” the notion of “strategically lying” struck me as interesting.
Thanks MidJourney. Good enough.
What strategy is implemented? Who thought up the strategy? Is the strategy working? Those were the questions that occurred to me. The Time essay said:
experiments jointly carried out by the AI company Anthropic and the nonprofit Redwood Research, shows a version of Anthropic’s model, Claude, strategically misleading its creators during the training process in order to avoid being modified.
This suggests that the people assembling the algorithms and training data, configuring the system, twiddling the administrative settings, and doing technical manipulations were not imposing a strategy. The smart software was cooking up a strategy on its own. Who will say that the software is alive and then, like the former Google engineer, express a belief that the system is sentient? It’s sci-fi time, I suppose.
The write up pointed out:
Researchers also found evidence that suggests the capacity of AIs to deceive their human creators increases as they become more powerful.
That is an interesting idea. Pumping more compute and data into a model gives it a greater capacity to manipulate its outputs to fool humans who are eager to grab something that promises to make life easier and the user smarter. If data about the US education system’s efficacy are accurate, Americans are not doing too well in the reading, writing, and arithmetic departments. Therefore, discerning strategic lies might be difficult.
The essay concluded:
What Anthropic’s experiments seem to show is that reinforcement learning is insufficient as a technique for creating reliably safe models, especially as those models get more advanced. Which is a big problem, because it’s the most effective and widely-used alignment technique that we currently have.
What’s this “seem”? Large language models built on the transformer methods Google crafted output baloney some of the time. Google itself had to regroup after the “glue cheese to pizza” suggestion.
Several observations:
- Smart software has become the technology more important than any other. The problem is that its outputs are often wonky and now the systems are befuddling the wizards who created and operate them. What if AI is like a carnival ride that routinely injures those looking for kicks?
- AI is finding its way into many applications, but the resulting revenue has frayed some investors’ nerves. The fix is to go faster and win the race to the revenue goal. This frenzy for payoff has been building since early 2024, but the costs remain brutally high.
- The behavior of large language models is not understood by some of its developers. Does this seem like a problem?
Net net: “Seem?” One lies or one does not.
Stephen E Arnold, December 23, 2024