Some AI Wisdom: Is There a T Shirt?

January 14, 2025

Prepared by a still-alive dinobaby.

I was catching up with newsfeeds and busy filtering the human output from the smart software spam-arator. I spotted “The Serious Science of Trolling LLMs,” published in the summer of 2024. The article explains that value can be derived from testing large language models like ChatGPT, Gemini, and others with prompts to force the software to generate something really stupid, off base, incorrect, or goofy. I zipped through the write up and found it interesting. Then I came upon this passage:

the LLM business is to some extent predicated on deception; we are not supposed to know where the magic ends and where cheap tricks begin. The vendors’ hope is that with time, we will reach full human-LLM parity; and until then, it’s OK to fudge it a bit. From this perspective, the viral examples that make it patently clear that the models don’t reason like humans are not just PR annoyances; they are a threat to product strategy.

Several observations:

  1. From my point of view, progress with smart software seems to have slowed. The reason may be that free and low cost services cannot afford to provide the functionality they did before someone figured out the cost per query. The bean counters spoke and “quality” went out the window.
  2. The gap between what the marketers say and what the systems do is getting wider. Sorry, AI wizards, the systems typically fail to produce an output satisfactory for my purposes on the first try. Multiple prompts are required. Again, a cost-cutting move in my opinion.
  3. Made up information or dead wrong information is becoming more evident. My hunch is that the consequence of ingesting content produced by AI is degrading the value of the models originally trained on human generated content. I think this is called garbage in — garbage out.

Net net: Which of the deep pocket people will be the first to step back from smart software built upon systems that consume billions of dollars the way my French bulldog eats doggie treats? The Chinese system Deepseek’s marketing essentially says, “Yo, we built this LLM at a fraction of the cost of the spendthrifts at Google, Microsoft, and OpenAI.” Are the Chinese AI wizards dragging a red herring around the AI forest?

To go back to the Lcamtuf essay, “it’s OK to fudge it a bit.” Nope, it is mandatory to fudge a lot.

Stephen E Arnold, January 14, 2025

AI Defined in an Arts and Crafts Setting No Less

January 13, 2025

Prepared by a still-alive dinobaby.

I was surprised to learn that a design online service (what I call arts and crafts) tackled a topic most online publications skip. The article “What Does AI Really Mean?” tries to define AI or smart software. I remember a somewhat confused and erratic college professor trying to define happiness. Wow, that was a wild and crazy lecture. (I think the person’s name was Dr. Chapman. I tip my ball cap with the SS8 logo on it to him.) The author of this essay is a Googler, so it must be outstanding, furthering the notion of quantum supremacy at Google.

What is AI? The write up says:

I hope this helped you better understand what those terms mean and the processes which encompass the term “AI”.

Okay, “helped you understand better.” What does the essay do to help me understand better? Hang on to your SS8 ball cap. The author briefly defines these buzzwords:

  • Data as coordinates
  • Querying per approximation
  • Language models both large and small
  • Fine “Tunning” (Isn’t that supposed to be tuning?)
  • Enhancing context with information, including grounded generation
  • Embedding.

For me, a list of buzzwords is not a definition. (At least the hapless Dr. Chapman tried to provide concrete examples and references to his own experience with happiness, which as I recall eluded him.)

The “definition” jumps to a section called “Let’s build.” The author concludes the essay with:

I hope this helped you better understand what those terms mean and the processes which encompass the term “AI”. This merely scratches the surface of complexity, though. We still need to talk about AI Agents and how all these approaches intertwine to create richer experiences. Perhaps we can do that in a later article — let me know in the comments if you’d like that!

That’s it. The Googler has, from his point of view, defined AI. As Holden Caulfield in The Catcher in the Rye said:

“I can’t explain what I mean. And even if I could, I’m not sure I’d feel like it.”

Bingo.

Stephen E Arnold, January 13, 2025

Oh, Oh! Silicon Valley Hype Minimizes Risk. Who Knew?

January 10, 2025

This is an official dinobaby post. No smart software involved in this blog post.

I read “Silicon Valley Stifled the AI Doom Movement in 2024.” I must admit I was surprised that one of the cheerleaders for Silicon Valley is disclosing something absolutely no one knew. I mean unregulated monopolies, the “Puff the Magic Dragon” strafing teens, and the vulture capitalists slavering over the corpses of once thriving small and mid sized businesses. Hey, I thought that “progress” myth was real. I thought technology only makes life better. Now I read that “Silicon Valley” wanted only good news about smart software. Keep in mind that this is software which outputs hallucinations, makes decisions about medical care for people, and monitors the clicks and location of everyone with a mobile device or a geotracker.

The write up reminded me that ace entrepreneur / venture professional Marc Andreessen said:

“The era of Artificial Intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it,” said Andreessen in the essay. In his conclusion, Andreessen gave a convenient solution to our AI fears: move fast and break things – basically the same ideology that has defined every other 21st century technology (and their attendant problems). He argued that Big Tech companies and startups should be allowed to build AI as fast and aggressively as possible, with few to no regulatory barriers. This would ensure AI does not fall into the hands of a few powerful companies or governments, and would allow America to compete effectively with China, he said.

What publications touted Mr. Andreessen’s vision? Answer: Lots.

Regulate smart software? Nope. From Connecticut’s effort to the US government, smart software regulation went nowhere. The reasons included, in my opinion:

  1. A chance to make a buck, well, lots of bucks
  2. Opportunities to foist “smart software” plus its inherent ability to make up stuff on corporate sheep
  3. A desire to reinvent “dumb” processes like figuring out how to push buttons to create addiction to online gambling, reducing costs by eliminating inefficient humans, and using stupid weapons.

Where are we now? A pillar of the Silicon Valley media ecosystem writes about the possible manipulation of information to make smart software into a Care Bear. Cuddly. Harmless. Squeezable. Yummy too.

The write up concludes without one hint of the contrast between the AI hype and the viewpoints of people who think that the technology of AI is immature but fumbling forward to stick its baby finger in a wall socket. I noted this concluding statement in the write up:

Calling AI “tremendously safe” and attempts to regulate it “dumb” is something of an oversimplification. For example, Character.AI – a startup a16z has invested in – is currently being sued and investigated over child safety concerns. In one active lawsuit, a 14-year-old Florida boy killed himself after allegedly confiding his suicidal thoughts to a Character.AI chatbot that he had romantic and sexual chats with. This case shows how our society has to prepare for new types of risks around AI that may have sounded ridiculous just a few years ago. There are more bills floating around that address long-term AI risk – including one just introduced at the federal level by Senator Mitt Romney. But now, it seems AI doomers will be fighting an uphill battle in 2025.

But don’t worry. Open source AI provides a level playing field for [a] adversaries of the US, [b] bad actors who use smart software to compromise Swiss cheese systems, and [c] anyone who wants to manipulate people on a grand scale. Will the “Silicon Valley” media give equal time to those who don’t see technology as benign or a net positive? Are you kidding? Oh, aren’t those smart drones with kinetic devices just fantastic?

Stephen E Arnold, January 10, 2025

GitLab Identifies a Sooty Pot and Does Not Offer a Fix

January 9, 2025

This is an official dinobaby post. No smart software involved in this blog post.

GitLab’s Sabrina Farmer is a sharp thinking person. Her “Three Software Development Challenges Slowing AI Progress” articulates an issue often ignored or just unknown. Specifically, according to her:

AI is becoming an increasingly critical component in software development. However, as is the case when implementing any new tool, there are potential growing pains that may make the transition to AI-powered software development more challenging.

Ms. Farmer is being kind and polite. I think she is suggesting that the nest with the AI eggs from the fund-raising golden goose has become untidy. Perhaps, I should use the word “unseemly”?

She points out three challenges which I interpret as the equivalent of one of those famously hard math problems like cracking the Riemann Hypothesis or the Poincaré Conjecture. These are:

  1. AI training. Yeah, marketers write about smart software. But a relatively small number of people fiddle with the knobs and dials on the training methods and the rat’s nests of computational layers that make life easy for an eighth grader writing an essay about Washington’s alleged crossing of the Delaware River whilst standing up in a boat rowed by hearty, cheerful lads. Big demand, lots of pretenders, and very few 10X coders and thinkers are available. AI Marketers? A surplus because math and physics are hard and art history and social science are somewhat less demanding on today’s thumb typers.
  2. Tools, lots of tools. Who has time to keep track of every “new” piece of smart software tooling? I gave up as the hyperbole got underway in early 2023. When my team needs to do something specific, they look / hunt for possibilities. Testing is required because smart software often gets things wrong. Some call this “innovation.” I call it evidence of the proliferation of flawed or cute software. One cannot machine titanium with lousy tools.
  3. Management measurements. Give me a break, Ms. Farmer. Managers are often evidence of the Peter Principle, or they are accountants or lawyers. How can one measure what one does not use, understand, or create? Those chasing smart software are not making spindles for a wooden staircase. The task of creating smart software that has a shot at producing money is neither art nor science. It is a continuous process of seeing what works, fiddling, and fumbling. You want to measure this? Good luck, although blue chip consultants will gladly create a slide deck to show you the ropes and then churn out a spectacular invoice for professional services.

One question: Is GitLab part of the problem or part of the solution?

Stephen E Arnold, January 9, 2025

AI Outfit Pitches Anti Human Message

January 9, 2025

AI startup Artisan thought it could capture attention by telling companies to get rid of human workers and use its software instead. It was right. Gizmodo reports, “AI Firm’s ‘Stop Hiring Humans’ Billboard Campaign Sparks Outrage.” The firm plastered its provocative messaging across San Francisco. Writer Lucas Ropek reports:

“The company, which is backed by startup accelerator Y-Combinator, sells what it calls ‘AI Employees’ or ‘Artisans.’ What the company actually sells is software designed to assist with customer service and sales workflow. The company appears to have done an internal pow-wow and decided that the most effective way to promote its relatively mundane product was to fund an ad campaign heralding the end of the human age. Writing about the ad campaign, local outlet SFGate notes that the posters—which are strewn all over the city—include plugs like the following:

‘Artisans won’t complain about work-life balance’
‘Artisan’s Zoom cameras will never ‘not be working’ today.’
‘Hire Artisans, not humans.’
‘The era of AI employees is here.'”

The write-up points to an interview with SFGate in which CEO Jaspar Carmichael-Jack states the ad campaign was designed to “draw eyes.” Mission accomplished. (And is it just me, or does that name belong in a pirate movie?) Though Ropek acknowledges his part in drawing those eyes, he also takes this chance to vent about AI and big tech in general. He writes:

“It is Carmichael-Jackson’s admission that his billboards are ‘dystopian’—just like the product he’s selling—that gets to the heart of what is so [messed] up about the whole thing. It’s obvious that Silicon Valley’s code monkeys now embrace a fatalistic bent of history towards the Bladerunner-style hellscape their market imperatives are driving us.”

Like Artisan’s billboards, Ropek pulls no punches. Located in San Francisco, Artisan was launched in 2023. Founders hail from the likes of Stanford, Oxford, Meta, and IBM. Will the firm find a way to make its next outreach even more outrageous?

Cynthia Murrell, January 9, 2025

Groundhog Day: Smart Enterprise Search

January 7, 2025

I am a dinobaby. I also wrote the Enterprise Search Report, 1st, 2nd, and 3rd editions. I wrote The New Landscape of Search. I wrote some other books. The publishers are long gone, and I am mostly forgotten in the world of information retrieval. Read this post, and you will learn why. Oh, and no AI helped me out except with an art idea: I used Stable Diffusion for the rat, er, sorry, groundhog day creature.

I think it was 2002 when the owner of a publishing company asked me if I thought there was an interest in profiles of companies offering “enterprise search solutions.” I vaguely remember the person, and I will leave it up to you to locate a copy of the 400 page books I wrote about enterprise search.

The setup for the book was simple. I identified the companies which seemed to bid on government contracts for search, companies providing search and retrieval to organizations, and outfits which had contacted me to pitch their enterprise search systems before they exited stealth mode. By the time the first edition appeared in 2004, the companies in the ESR were flogging their products.


The groundhog effect is a version of the Yogi Berra “Déjà vu all over again” thing. Enterprise search is just out of reach now and maybe forever.

The enterprise search market imploded. It was there and then it wasn’t. Can you describe the features and functions of these enterprise search systems from the “golden age” of information retrieval:

  • Innerprise
  • InQuira
  • iPhrase
  • Lextek Onix
  • MondoSearch
  • Speed of Mind
  • Stratify (formerly Purple Yogi)

The end of enterprise search coincided with large commercial enterprises figuring out that “search” in a complex organization was not one thing. The problem remains today. Lawyers in a Fortune 1000 company want one type of search. Marketers want another “flavor” of search. The accountants want a search that retrieves structured and unstructured data plus images of invoices. Chemists want chemical structure search. Senior managers want absolutely zero search of their personal and privileged data unless it is lawyers dealing with litigation. In short, each unit wants a highly particularized search and each user wants access to his or her data. Access controls are essential, and they are a hassle at a time when the notion of an access control list was like learning to bake bread following a recipe in Egyptian hieroglyphics.

These problems exist today and are complicated by podcasts, video, specialized file types for 3D printing, email, encrypted messaging, unencrypted messaging, and social media. No one has cracked the problem of a senior salesperson who changes a PowerPoint deck to close a deal. Where is that particular PowerPoint? Few know, and the salesperson may have deleted the file changed minutes before the face-to-face pitch. This means that the baloney about “all” the information in an organization being searchable is not just stupid; it is impossible.
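To make the access control point concrete, here is a minimal sketch of the query-time “security trimming” that any credible enterprise search system has to perform. The documents, groups, and users are made up for illustration; this is not any vendor’s actual code.

```python
# A toy sketch of why "one search for everyone" is hard: every query must be
# filtered against an access control list before any result is shown.

from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)  # the document's ACL


@dataclass
class User:
    user_id: str
    groups: set


def search(index: list, user: User, query: str) -> list:
    """Naive keyword search that enforces the ACL at query time."""
    hits = []
    for doc in index:
        # Security trimming: skip anything the user is not cleared to see.
        if not (doc.allowed_groups & user.groups):
            continue
        if query.lower() in doc.text.lower():
            hits.append(doc.doc_id)
    return hits


# Hypothetical corpus: each department's content carries its own ACL.
index = [
    Document("contract-007", "litigation hold for the Acme contract", {"legal"}),
    Document("invoice-123", "scanned invoice, Q3 chemicals order", {"accounting"}),
    Document("pitch-v9", "revised PowerPoint pitch deck for Acme", {"sales"}),
]

lawyer = User("pat", {"legal"})
marketer = User("lee", {"marketing"})

print(search(index, lawyer, "acme"))    # ['contract-007']
print(search(index, marketer, "acme"))  # [] ... same query, different visibility
```

The toy part is easy. The hard part, which the sketch conveniently skips, is keeping those allowed_groups sets synchronized with directories, file shares, and deletions across an entire organization.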

The key events were the legal and financial hassles over Fast Search & Transfer. Microsoft bought the company in 2008 and that was the end of a reasonably capable technology platform and — believe it or not — a genuine alternative to Google Web search. A number of enterprise search companies sold out because the cost of keeping the technology current and actually running a high-grade sales and marketing program spelled financial doom. Examples include Exalead and Vivisimo, among others. Others just went out of business: Delphes (remember that one?). The kiss of death for the type of enterprise search emphasized in the ESR was the acquisition of Autonomy by Hewlett Packard. There was a roll up play underway by OpenText which has redefined itself as a smart software company with Fulcrum and BRS Search under its wing.

What replaced enterprise search when the dust settled in 2011? From my point of view it was Shay Banon’s Elastic search and retrieval system. One might argue that Lucid Works (né Lucid Imagination) was a player. That’s okay. I am, however, going to go with Elastic because it offered a version as open source and a commercial version with options for ongoing engineering support. For the commercial alternatives, I would say that Microsoft became the default provider. I don’t think SharePoint search “worked” very well, but it was available. Google’s Search Appliance appeared and disappeared. There was zero upside for the Google with a product that was “inefficient” at making a big profit for the firm. So, Microsoft it was. For some government agencies, there was Oracle.

Oracle acquired Endeca and focused on that computationally wild system’s ability to power eCommerce sites. Oracle paid about $1 billion for a system which used to be an enterprise search product with consulting baked in. One could buy enterprise search from Oracle and get structured query language search, what Oracle called “secure enterprise search,” and maybe a dollop of Triple Hop and some other search systems the company absorbed before the end of the enterprise search era. IBM talked about search, but the last time I drove by IBM Government Systems in Gaithersburg, Maryland, it, like IBM search, had moved on. Yo, Watson.

Why did I make this dalliance down memory lane the boring introduction to a blog post? The answer is that I read “Are LLMs At Risk Of Going The Way Of Search? Expect A Duopoly.” This is a paywalled article, so you will have to pony up cash or go to a library. Here’s an abstract of the write up:

  1. The evolution of LLMs (Large Language Models) will lead users to prefer one or two dominant models, similar to Google’s dominance in search.

  2. Companies like Google and Meta are well-positioned to dominate generative AI due to their financial resources, massive user bases, and extensive data for training.

  3. Enterprise use cases present a significant opportunity for specialized models.

Therefore, consumer search will become a monopoly or duopoly.

Let’s assume the Forbes analysis is accurate. Here’s what I think will happen:

First, the smart software train will slow and a number of repackagers will use what’s good enough; that is, cheap enough and keeps the client happy. Thus, a “golden age” of smart search will appear with outfits like Google, Meta, Microsoft, and a handful of others operating as utilities. The US government may standardize on Microsoft, but it will be partners who make the system meet the quite particular needs of a government entity.

Second, the trajectory of the “golden age” will end as it did for enterprise search. The costs and shortcomings become known. Years will pass, probably a decade, maybe less, until a “new” approach becomes feasible. The news will diffuse and then a seismic event will occur. For AI, it was the 2023 announcement that Microsoft and OpenAI would change how people used Microsoft products and services. This created the Google catch up and PR push. We are in the midst of this at the start of 2025.

Third, some of the problems associated with enterprise information and an employee’s finding exactly what he or she needs will be solved. However, not “all” of the problems will be solved. Why? The nature of information is that it is a bit like pushing mercury around. The task requires fresh thinking.

To sum up, the problem of search is an excellent illustration of the old Hegelian chestnut of thesis, antithesis, and synthesis. This means the problem of search is unlikely to be “solved.” Humans want answers. Some humans want to verify answers, which means that the data on the salesperson’s laptop must be included. When the detail-oriented human learns that the salesperson’s data are missing, the end of the “search solution” has begun.

The question is “Will one big company dominate?” The answer is, in my opinion, maybe in some use cases. Monopolies seem to be the natural state of social media, online advertising, and certain cloud services. For finding information, I don’t think the smart software will be able to deliver. Examples are likely to include [a] use cases in China and similar countries, [b] big multi-national organizations with information silos, [c] entities involved in two or more classified activities for a government, [d] high-risk legal cases, and [e] activities related to innovation, trade secrets, and patents, among others.

The point is that search and retrieval remains an extraordinarily difficult problem to solve in many situations. LLMs contribute some useful functional options, but by themselves, these approaches are unlikely to avoid the reefs which sank the good ships Autonomy and Fast Search & Transfer, and dozens of others competing in the search space.

Maybe Yogi Berra did not say “Déjà vu all over again.” That’s okay. I will say it. Enterprise search is “Déjà vu all over again.”

Stephen E Arnold, January 7, 2025

Salesforce Surfs Agentic AI and Hopes to Stay on the Long Board

January 7, 2025

This is an official dinobaby post. No smart software involved in this blog post.

I spotted a content marketing play, and I found it amusing. The spin was enough to make my eyes wobble. “Intelligence (AI). Its Stock Is Up 39% in 4 Months, and It Could Soar Even Higher in 2025” appeared in the Motley Fool online investment information service. The headline is standard fare, but the catchphrase in the write up is “the third wave of AI.” What were the other two waves, you may ask? The first wave was machine learning, an approach whose age is measured in decades. The second wave, which garnered the attention of the venture community and outfits like Google, was generative AI. I think of the second wave as the content suck up moment.

So what’s the third wave? Answer: Salesforce. Yep, the guts of the company is a digitized record of sales contacts. The old word for what made a successful sales person valuable was “Rolodex.” But today one may as well talk about a pressing ham.

What makes this content marketing-type article notable is that Salesforce wants to “win” the battle of the enterprise and relegate Microsoft to the bench. What’s interesting is that Salesforce’s innovation is presented this way:

The next wave of AI will build further on generative AI’s capabilities, enabling AI to make decisions and take actions across applications without human intervention. Salesforce (CRM -0.42%) CEO Marc Benioff calls it the “digital workforce.” And his company is leading the growth in this Agentic AI with its new Agentforce product.

Agentic.

What’s Salesforce’s secret sauce? The write up says:

Artificial intelligence algorithms are only as good as the data used to train them. Salesforce has accurate and specific data about each of its enterprise customers that nobody else has. While individual businesses could give other companies access to those data, Salesforce’s ability to quickly and simply integrate client data as well as its own data sets makes it a top choice for customers looking to add AI agents to their “workforce.” During the company’s third-quarter earnings call, Benioff called Salesforce’s data an “unfair advantage,” noting Agentforce agents are more accurate and less hallucinogenic as a result.

To put some focus on the competition, Salesforce targets Microsoft. The write up says:

Benioff also called out what might be Salesforce’s largest competitor in Agentic AI, Microsoft (NASDAQ: MSFT). While Microsoft has a lot of access to enterprise customers thanks to its Office productivity suite and other enterprise software solutions, it doesn’t have as much high-quality data on a business as Salesforce. As a result, Microsoft’s Copilot abilities might not be up to Agentforce in many instances. Benioff points out Microsoft isn’t using Copilot to power its online help desk like Salesforce.

I think it is worth mentioning that Apple’s AI seems to be a tad problematic. Also, those AI laptops are not the pet rock for a New Year’s gift.

What’s the Motley Fool doing for Salesforce besides making the company’s stock into a sure-fire winner for 2025? The rah rah is intense; for example:

But if there’s one thing investors have learned from the last two years of AI innovation, it’s that these things often grow faster than anticipated. That could lead Salesforce to outperform analysts’ expectations over the next few years, as it leads the third wave of artificial intelligence.

Let me offer several observations:

  1. Salesforce sees a marketing opportunity for its “agentic” wrappers or apps. Therefore, put the pedal to the metal and grab mind share and market share. That’s not much different from the company’s attention push.
  2. Salesforce recognizes that Microsoft has some momentum in some very lucrative markets. The prime example is the Microsoft tie up with Palantir. Salesforce does not have that type of hook to generate revenue from US government defense and intelligence budgets.
  3. Salesforce is growing, but so is Oracle. Therefore, Salesforce feels that it could become the cold salami in the middle of a Microsoft and Oracle sandwich.

Net net: Salesforce has to amp up whatever it can before companies that are catching the rising AI cloud wave swamp the Salesforce surf board.

Stephen E Arnold, January 7, 2025

China Smart, US Dumb: The Deepseek Interview

January 6, 2025

This is an official dinobaby post. I used AI to assist me with this essay. In fact, I used the ChatGPT system which seems to be the benchmark against which China’s AI race leader measures itself. This suggests that Deepseek has a bit of a second-place mentality, a bit of jealousy, and possibly a signal of inferiority, doesn’t it?

“Deepseek: The Quiet Giant Leading China’s AI Race” is a good example of what the Middle Kingdom is revealing about smart software. The 5,000 word essay became available as a Happy New Year’s message to the US. Like the girl repairing broken generators without fancy tools, the message is clear to me: 2025 is going to be different.


Here’s an abstract of the “interview” generated by a US smart software system. I would have used Deepseek, but I don’t have access to it. I used the ChatGPT service, which Deepseek has surpassed, to create the paragraph below. To make sure the summary is in line with the ChinaTalk original, read the 5,000 word original and do some comparisons.

Deepseek, a Chinese AI startup, has emerged as an innovator in the AI industry, surpassing OpenAI’s o1 model with its R1 model on reasoning benchmarks. Backed entirely by High-Flyer, a top Chinese quantitative hedge fund, Deepseek focuses on foundational AI research, eschewing commercialization and emphasizing open-source development. The company has disrupted the AI market with breakthroughs like the multi-head latent attention and sparse mixture-of-experts architectures, which significantly reduce inference and computational costs, sparking a price war among Chinese AI developers. Liang Wenfeng, Deepseek CEO, aims to achieve artificial general intelligence through innovation rather than imitation, challenging the common perception that Chinese companies prioritize commercialization over technological breakthroughs. Wenfeng’s background in AI and engineering has fostered a bottom-up, curiosity-driven research culture, enabling the team to develop transformative models. Deepseek Version 2 delivers unparalleled cost efficiency, prompting major tech giants to reduce their API prices. Deepseek’s commitment to innovation extends to its organizational approach, leveraging young, local talent and promoting interdisciplinary collaboration without rigid hierarchies. The company’s open-source ethos and focus on advancing the global AI ecosystem set it apart from other large-model startups. Despite industry skepticism about China’s capacity for original innovation, Deepseek is reshaping the narrative, positioning itself as a catalyst for technological advancement. Liang’s vision highlights the importance of confidence, long-term investment in foundational research, and societal support for hardcore innovation. As Deepseek continues to refine its AGI roadmap, focusing on areas like mathematics, multimodality, and natural language, it exemplifies the transformative potential of prioritizing innovation over short-term profit.

I left the largely unsupported assertions in this summary. I also retained the repeated emphasis on innovation, originality, and local talent. With the aid of smart software, I was able to retain the essence of the content marketing propaganda piece’s 5,000 words.

You may disagree with my viewpoint. That’s okay. Let me annoy you further by offering several observations:

  1. The release of this PR piece coincides with additional information about China’s infiltration of the US telephone network and the directed cyber attack on the US Treasury.
  2. The multi-pronged content marketing / propaganda flow about China’s “local talent” is a major theme of these PR efforts. The messaging runs from the humble, brilliant girl repairing equipment with primitive tools because she is a “genius” to the notion that China’s young “local talent” have gone beyond what the “imported” talent in the US has been able to achieve. One tine of the conceptual pitchfork is that the US is stupid. The other tine is that China just works better, smarter, faster, and cheaper.
  3. The messaging is largely accomplished using free or low cost US developed systems and methods. This is definitely surfing on other people’s knowledge waves.

Net net: Mr. Putin is annoyed that the European Union wants to block Russia-generated messaging about the “special action.” The US is less concerned about China’s propaganda attacks. The New Year will be interesting, but I have lived through enough “interesting times” not to do much more than write blog posts from my outpost in rural Kentucky. What about you, gentle reader? China smart, US dumb: Which is it?

Stephen E Arnold, January 6, 2025

Chinese AI Lab Deepseek Grinds Ahead…Allegedly

December 31, 2024

Is the world’s most innovative AI company a low-profile Chinese startup? ChinaTalk examines “Deepseek: The Quiet Giant Leading China’s AI Race.” The Chinese-tech news site shares an annotated translation of a rare interview with DeepSeek CEO Liang Wenfeng. The journalists note the firm’s latest R1 model just outperformed OpenAI’s o1. In their introduction to the July interview, they write:

“Before Deepseek, CEO Liang Wenfeng’s main venture was High-Flyer, a top 4 Chinese quantitative hedge fund last valued at $8 billion. Deepseek is fully funded by High-Flyer and has no plans to fundraise. It focuses on building foundational technology rather than commercial applications and has committed to open sourcing all of its models. It has also singlehandedly kicked off price wars in China by charging very affordable API rates. Despite this, Deepseek can afford to stay in the scaling game: with access to High-Flyer’s compute clusters, Dylan Patel’s best guess is they have upwards of ‘50k Hopper GPUs,’ orders of magnitude more compute power than the 10k A100s they cop to publicly. Deepseek’s strategy is grounded in their ambition to build AGI. Unlike previous spins on the theme, Deepseek’s mission statement does not mention safety, competition, or stakes for humanity, but only ‘unraveling the mystery of AGI with curiosity’. Accordingly, the lab has been laser-focused on research into potentially game-changing architectural and algorithmic innovations.”

For example, we learn:

“They proposed a novel MLA (multi-head latent attention) architecture that reduces memory usage to 5-13% of the commonly used MHA architecture. Additionally, their original DeepSeekMoESparse structure minimized computational costs, ultimately leading to reduced overall costs.”
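Translating the quote: MLA shrinks the memory needed for attention by compressing keys and values into a smaller latent representation, and the sparse mixture-of-experts design runs only a few “expert” sub-networks per token instead of the whole model. Below is a toy sketch of that top-k routing idea. The sizes, weights, and routing are invented for illustration; this is not DeepSeek’s actual DeepSeekMoE implementation.

```python
# Toy sparse mixture-of-experts routing: only the top-k experts run per token,
# so most of the network's expert compute is skipped. Illustrative only.

import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 16, 8, 2

# Each "expert" is just a small weight matrix in this sketch.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))


def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x to its top-k experts and mix their outputs."""
    logits = x @ router                    # one router score per expert
    top = np.argsort(logits)[-top_k:]      # indices of the k best-scoring experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over winners
    # Only k of the n_experts matrices are ever multiplied: that is the saving.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))


token = rng.normal(size=d_model)
out = moe_forward(token)
print(out.shape)  # (16,), same shape as a dense layer's output
```

The cost argument boils down to the routing step: only top_k of the n_experts weight matrices are multiplied for each token, so the parameter count can grow without a matching increase in per-token compute.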

Those in Silicon Valley are well aware of this “mysterious force from the East,” with several AI head honchos heaping praise on the firm. The interview is split into five parts. The first examines the large-model price war set off by Deepseek’s V2 release. Next, Wenfeng describes how an emphasis on innovation over imitation sets his firm apart but, in part three, notes that more money does not always lead to more innovation. Part four takes a look at the talent behind DeepSeek’s work, and in part five the CEO looks to the future. Interested readers should check out the full interview. Headquartered in Hangzhou, China, the young firm was founded in 2023.

Cynthia Murrell, December 31, 2024

AI Video Is Improving: Hello, Hollywood!

December 30, 2024

Has AI video gotten scarily believable? Well, yes. For anyone who has not gotten the memo, The Guardian declares, “Video Is AI’s New Frontier—and It Is so Persuasive, We Should All Be Worried.” Writer Victoria Turk describes recent developments:

“Video is AI’s new frontier, with OpenAI finally rolling out Sora in the US after first teasing it in February, and Meta announcing its own text-to-video tool, Movie Gen, in October. Google made its Veo video generator available to some customers this month. Are we ready for a world in which it is impossible to discern which of the moving images we see are real?”

Ready or not, here it is. No amount of hand-wringing will change that. Turk mentions ways bad actors abuse the technology: Scammers who impersonate victims’ loved ones to extort money. Deepfakes created to further political agendas. Fake sexual images and videos featuring real people. She also cites safeguards like watermarks and content restrictions as evidence AI firms understand the potential for abuse.

But the author’s main point seems to be more philosophical. It was prompted by convincing fake footage of a tree frog, documentary style. She writes:

“Yet despite the technological feat, as I watched the tree frog I felt less amazed than sad. It certainly looked the part, but we all knew that what we were seeing wasn’t real. The tree frog, the branch it clung to, the rainforest it lived in: none of these things existed, and they never had. The scene, although visually impressive, was hollow.”

Turk also laments the existence of this Meta-made baby hippo, which she declares is “dead behind the eyes.” Is it though? Either way, these experiences led Turk to ponder a bleak future in which one can never know which imagery can be trusted. She concludes with this anecdote:

“I was recently scrolling through Instagram and shared a cute video of a bunny eating lettuce with my husband. It was a completely benign clip – but perhaps a little too adorable. Was it AI, he asked? I couldn’t tell. Even having to ask the question diminished the moment, and the cuteness of the video. In a world where anything can be fake, everything might be.”

That is true. An important point to remember when we see footage of a politician doing something horrible. Or if we get a distressed call from a family member begging for money. Or if we see a cute animal video but prefer to withhold the dopamine rush lest it turn out to be fake.

Cynthia Murrell, December 30, 2024
