LLMs Paired With AI Are Dangerous Propaganda Tools

February 13, 2025

AI chatbots are in their infancy. While they have been tested for a number of years, they are still prone to bias and other devastating mistakes. Big business and other organizations aren’t waiting for the technology to improve. Instead they’re incorporating chatbots and more AI into their infrastructures. Baldur Bjarnason warns about the dangers of AI, especially when it comes to LLMs and censorship:

“Poisoning For Propaganda: Rising Authoritarianism Makes LLMs More Dangerous.”

Large language models (LLMs) are the engines behind AI chatbots. Bjarnason warns that using any LLM, even one run locally, is dangerous.

Why?

LLMs are contained language models built around specific parameters. Those parameters are prone to error because they were set by humans, which is one reason AI algorithms are untrustworthy. They can also be tuned to favor specific opinions; in other words, propaganda machines. Bjarnason warns that LLMs are being used for the lawless takeover of the United States. He also says that corporations, in order to maintain their power, won’t hesitate to remove or alter information in their LLMs if the US government asks them to.

This is another type of censorship:

“The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place. That’s the job it does. You won’t notice when the censorship kicks in… The alternative approach to censorship, fine-tuning the model to return a specific response, is more costly than keyword blocking and more error-prone. And resorting to prompt manipulation or preambles is somewhat easily bypassed but, crucially, you need to know that there is something to bypass (or “jailbreak”) in the first place. A more concerning approach, in my view, is poisoning.”

Corporations paired with governments (it’s not just the United States) are “poisoning” the AI LLMs with propagandized sentiments. It’s a subtle way of transforming perspectives without loud indoctrination campaigns. It is comparable to subliminal messages in commercials or teaching only one viewpoint.

Controls seem unlikely.

Whitney Grace, February 13, 2025

Are These Googlers Flailing? (Yes, the Word Has “AI” in It Too)

February 12, 2025

Is the Byte write up on the money? I don’t know, but I enjoyed it. Navigate to “Google’s Finances Are in Chaos As the Company Flails at Unpopular AI. Is the Momentum of AI Starting to Wane?” I am not sure that AI is in its waning moment. Deepseek has ignited a fire under some outfits. But I am not going to critique the write up. I want to highlight some of its interesting information. Let’s go, as Anatoly the gym Meister says, just with an Eastern European accent.

Here’s the first statement in the article which caught my attention:

Google’s parent company Alphabet failed to hit sales targets, falling 0.1 percent short of Wall Street’s revenue expectations — a fraction of a point that’s seen the company’s stock slide almost eight percent today, in its worst performance since October 2023. It’s also a sign of the times: as the New York Times reports, the whiff was due to slower-than-expected growth of its cloud-computing division, which delivers its AI tools to other businesses.

Okay, 0.1 percent is something, but I would have preferred the “flail” metaphor in that paragraph; it begs for “flog,” “thrash,” and “whip.”


I used Sam AI-Man’s AI software to produce a good enough image of Googlers flailing. Frankly I don’t think Sam AI-Man’s system understands exactly what I wanted, but close enough for horseshoes in today’s world.

I noted this information and circled it. I love Gouda cheese. How can Google screw up cheese after its misstep with glue and cheese on pizza? Yo, Googlers. Check the cheese references.

Is Alphabet’s latest earnings result the canary in the coal mine? Should the AI industry brace for tougher days ahead as investors become increasingly skeptical of what the tech has to offer? Or are investors concerned over OpenAI’s ChatGPT overtaking Google’s search engine? Illustrating the drama, this week Google appears to have retroactively edited the YouTube video of a Super Bowl ad for its core AI model called Gemini, to remove an extremely obvious error the AI made about the popularity of gouda cheese.

Stalin revised history books. Google changes cheese references for its own advertising. But cheese?

The write up concludes with this, mostly from the Guardian, the UK newspaper that watches American high technology:

“Although it’s still well insulated, Google’s advantages in search hinge on its ubiquity and entrenched consumer behavior,” Emarketer senior analyst Evelyn Mitchell-Wolf told The Guardian. This year “could be the year those advantages meaningfully erode as antitrust enforcement and open-source AI models change the game,” she added. “And Cloud’s disappointing results suggest that AI-powered momentum might be beginning to wane just as Google’s closed model strategy is called into question by Deepseek.”

Does this constitute the use of the word “flail”? Sure, but I like “thrash” a lot. And “wane” is good.

Stephen E Arnold, February 12, 2025

A New Spin on Insider Threats: Employees Secretly Use AI At Work

February 12, 2025

We’re afraid of AI replacing our jobs. Employers are blamed for wanting to replace humans with algorithms, but employees are already bringing AI into work. According to the BBC, employees are secretly using AI: “Why Employees Smuggle AI Into Work.” In IT departments across the United Kingdom (and probably the world), knowledge workers are using AI tools without permission from their leads.

Software AG conducted a survey of knowledge workers, and the results showed that half of them used personal AI tools. Knowledge workers are defined as people who primarily work at a desk or a computer. Some use personal tools because their jobs do not provide any, and others said they wanted their own choice of tools.

Many of the workers are also not asking. They’re abiding by the mantra of, “It’s easier to ask forgiveness than permission.”

One worker uses ChatGPT as a mechanized coworker. ChatGPT allows the worker to consume information at faster rates, and it has increased his productivity. His company banned AI tools; he did not know why but assumes it is a control thing.

AI tools also pose security risks because the algorithms learn from user input. The algorithms store information, and that stored information can expose company secrets:

“Companies may be concerned about their trade secrets being exposed by the AI tool’s answers, but Alastair Paterson, CEO and co-founder of Harmonic Security, thinks that’s unlikely. "It’s pretty hard to get the data straight out of these [AI tools]," he says.

However, firms will be concerned about their data being stored in AI services they have no control over, no awareness of, and which may be vulnerable to data breaches.”
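Client-side scrubbing is one way firms try to contain the risk described above. A minimal sketch, assuming hypothetical patterns; a real deployment would lean on a vetted data-loss-prevention library rather than two regexes:

```python
import re

# Hypothetical patterns for illustration only; real DLP tooling covers far more.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely secrets with placeholders before text leaves the network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, token sk-abcdef1234567890abcd"))
```

The design choice is simple: scrub on the way out, because once a prompt reaches a hosted service the company no longer controls where that data lives.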

AI tools are like any new technology: they need to be used, tested, and then regulated. AI can’t replace experience, but it certainly helps get the job done.

Whitney Grace, February 12, 2025

Innovation: Deepseek, Google, OpenAI, and the EU. Legal Eagles Aloft

February 11, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

I have been thinking about the allegations that the Deepseek crowd ripped off US smart software companies. Someone with whom I am not familiar expressed the view that the allegation will be probed. With open source goodness whizzing around, I am not sure how making a distinction between one allegedly open source system and another will work. I am confident the lawyers will figure innovation out because clever mathematical tricks and software optimization are that group of professionals’ core competency.


The basement sale approach to smart software: Professional, organized, and rewarding. Thanks OpenAI. (No, I did not generate this image with the Deepseek tools. I wouldn’t do that to you, Sam AI-Man.)

And thinking of innovation this morning, I found the write up in the Times of India titled “Google Not Happy With This $4.5 Billion Fine, Here’s What the Company Said.” [Editor’s note: The url is a wonky one indeed. If the link does not resolve, please, don’t write me and complain. Copy the article headline and use Bing or Google to locate a valid source. Failing that, just navigate to the Times of India and hunt for the source document there.] Innovation is the focus of the article, and the annoyance — even indignation bubbling beneath the surface of the Google stance — may foreshadow a legal dust up between OpenAI and Deepseek.

So what’s happening?

The Times of India reports with some delicacy:

Google is set to appeal a record €4.3 billion ($4.5 billion) antitrust fine imposed by the European Union seven years ago, a report claimed. Alphabet-owned company has argued that the penalty unfairly punished the company for its innovation in the Android mobile operating system. The appeal, heard by the Court of Justice of the European Union in Luxembourg, comes two years after a lower tribunal upheld the European Commission’s decision, which found Google guilty of using Android to restrict competition. However, the company claimed that its actions benefited consumers and fostered innovation in the mobile market. This new appeal comes after the lower court reduced the fine to 4.1 billion euros ($4.27 billion).

Yes, Google’s business systems and methods foster innovation in the mobile market. The issue is that some legal eagles in the US government view Google as behaving in an anti-competitive way. I recall the chatter about US high technology companies snuffing innovation. Has Google done that with its approach to Android?

The write up reports:

“In this case, the Commission failed to discharge its burden and its responsibility and, relying on multiple errors of law, punished Google for its superior merits, attractiveness and innovation.” Lamadrid justified Google’s agreements that require phone manufacturers to pre-install Google Search, the Chrome browser, and the Google Play app store on their Android devices, while also restricting them from adopting rival Android systems. Meanwhile, EU antitrust regulators argued that these conditions restricted competition.

Innovation seems to go hand in hand with pre-installing certain Google applications. The fact that Google allegedly restricts phone companies from “adopting rival Android systems” is a boost to innovation. Is this Google argument food for thought if Google and its Gemini unit decide to sue OpenAI for its smart software innovation?

One thing is clear. Google sees itself as fostering innovation, and it should not be punished for creating opportunities, employment, and benefits for those in the European Union. On the other hand, the Deepseek innovation is possibly improper because it delivered an innovation US high technology outfits did not deliver.

Adding some Chinese five-flavor spice to the recipe is the fact that the Deepseek innovation seems to be a fungible insight about US smart software embracing Google influenced open source methods. The thought that “innovation” will be determined in legal proceedings is interesting.

Is innovation crafted to preserve a dominant market share unfair? Is innovation which undermines US smart software companies improper? The perception of Google as an innovator, from my vantage, has dwindled. On the other hand, my perception of the Deepseek approach strikes me as unique. I have pointed out that the Deepseek innovation seems to deliver reasonably good results with a lower cost method. This is the Shein-Temu approach to competition. It works. Just ask Amazon.

Maybe the US will slap a huge fine on Deepseek because the company innovated? The EU has decided to ring its cash register because Google allegedly inhibited innovation.

For technologists, the process of innovation is fraught with legal peril. Who benefits? I would suggest that the lawyers are at the head of the line for the upsides of this “innovation” issue.

Stephen E Arnold, February 11, 2025

Men, Are You Loving Those AI Babes with Big Bits?

February 11, 2025

The dating scene has never been easy. It is apparently so difficult to find love these days that men are turning to digital love in the form of AI girlfriends. Vice News shares that “Most Men Would Marry Their AI Girlfriends If It Were Legal” and it is astounding the lengths men will go to for companionship.

EVA AI is a platform that allows people to connect with an AI partner. The platform recently surveyed 2,000 men and discovered that 8 in 10 would consider marrying their AI girlfriends if it were legal. It sounds like something out of the science fiction genre. The survey also found more startling news about men and AI girlfriends:

“Not only that, but 83% of men also believe they could form a deep emotional bond with an AI girlfriend. What’s even scarier is that a whopping 78% of men surveyed said they would consider creating a replica of their ex, and three-quarters would duplicate their current partner to create a “polished” version of them.”

Cale Jones, head of community growth at EVA AI, said that men find AI girlfriends safe and feel free to be their authentic selves. Jones continued that, because AI girlfriends feel safe, men freely share their thoughts, emotions, and desires. Continuing on the safety train of thought, Jones explained that individuals are also exploring their sexual identities without fear.

AI girlfriends and boyfriends are their own brand of creepiness. If the AI copies an ex-girlfriend or boyfriend, a movie star, or even a random person, it creates many psychological and potentially dangerous issues:

“I think what raises the most concern is the ability to replicate another person. That feels exploitative and even dangerous in many ways. I mean, imagine some random dude created an AI girlfriend based on your sister, daughter, or mother…then, picture them beginning to feel possessive over this person, forming actual feelings for the individual but channeling them into the robot. If they were to run into the actual human version of their AI girlfriend in real life, well…who knows what could/would happen? Ever heard of a crime of passion?

Of course, this is just a hypothetical, but it’s the first thing that came to mind. Many people already have issues feeling like they have a right to someone else’s body. Think about the number of celebrities who are harassed by superfans. Is this going to feed that issue even further, making it a problem for everyday people, like classmates, friends, and colleagues?”

Let’s remember that the men surveyed by EVA AI are probably a small sample of “men.” So far.

Whitney Grace, February 10, 2025

A Case for Export Controls in the Wake of Deepseek Kerfuffle

February 11, 2025

Some were shocked by recent revelations of Deepseek’s AI capabilities, including investors. Others had been forewarned about the (allegedly) adept firm. Interesting how social media was used to create the shock and awe that online information services picked up and endlessly repeated. Way to amplify the adversary’s propaganda.

At any rate, this escalating AI arms race is now top-of-mind for many. Could strong export controls give the US an edge? After all, China’s own chip manufacturing is said to lag about five years behind ours. Anthropic CEO Dario Amodei believes they can, as he explains in his post, "On Deepseek and Export Controls."

The AI maestro begins with some groundwork. First, he describes certain ways AI development scales and shifts. He then looks at what makes Deepseek so special—and what does not. See the post for those details, but here is the key point for our discussion: AI developers everywhere require more and more hardware to progress. So far, Chinese and US companies have had access to similar reserves of both funds and chips. However, if we limit the number of chips flowing into China, Chinese firms will eventually hit a proverbial wall. Amodei compares hypothetical futures:

"The question is whether China will also be able to get millions of chips. If they can, we’ll live in a bipolar world, where both the US and China have powerful AI models that will cause extremely rapid advances in science and technology — what I’ve called ‘countries of geniuses in a datacenter‘. A bipolar world would not necessarily be balanced indefinitely. Even if the US and China were at parity in AI systems, it seems likely that China could direct more talent, capital, and focus to military applications of the technology. Combined with its large industrial base and military-strategic advantages, this could help China take a commanding lead on the global stage, not just for AI but for everything."

How ominous. And if we successfully implement and enforce export controls? He continues:

"If China can’t get millions of chips, we’ll (at least temporarily) live in a unipolar world, where only the US and its allies have these models. It’s unclear whether the unipolar world will last, but there’s at least the possibility that, because AI systems can eventually help make even smarter AI systems, a temporary lead could be parlayed into a durable advantage. Thus, in this world, the US and its allies might take a commanding and long-lasting lead on the global stage."

"Might," he says. There is no certainty here. Still, an advantage like this may be worthwhile if it keeps China’s military from outstripping ours. Hindering an Anthropic competitor is just a side effect of this advice, right? Sure, in a peaceful world, international "competition and collaboration make the world a better place." But that is not our reality at the moment.

Amodei hastens to note he thinks the Deepseek folks are fine researchers and curious innovators. It is just that bit about being beholden to their authoritarian government that may be the issue.

Cynthia Murrell, February 11, 2025

Google Goes Googley in Paris Over AI … Again

February 10, 2025

Google does some interesting things in Paris. The City of Light was the scene of a Googler’s demonstration of its AI complete with hallucinations about two years ago. On Monday, February 10, 2025, Google’s “leadership” Sundar Pichai allegedly leaked his speech or shared some memorable comments with journalists. These were reported in AAWSAT.com, an online information service, in the story “AI Is ‘Biggest Shift of Our Lifetimes’, Says Google Boss.”

I like the shift; it reminds me of the word “shifty.”

One of the passages catching my attention was this one, although I am not sure of the accuracy of the version in the cited article. The gist seems on point with Google’s posture during Code Red and its subsequent reorganization of the firm’s smart software unit. The context, however, does not seem to include the impact of Deepseek’s bargain basement approach to AI. Google is into big money for big AI. One wins big in a horse race by plopping big bucks on a favorite nag. Google is making the big bet on AI: about $75 billion in capital expenditures in the next 10 months.

Here’s the quote:

Artificial intelligence (AI) is a "fundamental rewiring of technology" that will act as an "accelerant of human ingenuity." We’re still in the early days of the AI platform shift, and yet we know it will be the biggest of our lifetimes… With AI, we have the chance to democratize access (to a new technology) from the start, and to ensure that the digital divide doesn’t become an AI divide….

The statement exudes confidence. With billions riding on Mr. Pichai’s gambler’s instinct, stakeholders and employees not terminated for cost savings hope he is correct. Those already terminated may be rooting for a different horse.

Google’s head of smart software (sorry, Jeff Dean) allegedly offered this sentiment:

“Material science, mathematics, fusion, there is almost no area of science that won’t benefit from these AI tools," the Nobel chemistry laureate said.

Are categorical statements part of the mental equipment that makes a Nobel prize winner? He did include an “almost,” but I think the hope is that many technical disciplines will reap the fruits of smart software. Some smart software may just reap fruits from what users of smart software put into it.

A statement which I found more remarkable was:

Every generation worries that the new technology will change the lives of the next generation for the worse — and yet it’s almost always the opposite.

Another hedged categorical affirmative: “almost always.” The only issue is that, as Jacques Ellul asserted in The Technological Bluff, technology creates problems which invoke more technology to address the old problems while simultaneously creating new ones. I think Father Ellul was on the beam.

How about this for a concluding statement:

We must not let our own bias for the present get in the way of the future. We have a once-in-a-generation opportunity to improve lives at the scale of AI.

Scale. Isn’t scale the very thing Deepseek demonstrated may be a less than logical approach to smart software? Paris has quite an impact on Google thought processes in my opinion. Did Google miss the Deepseek China foray? Did the company fail to interpret it in the context of wide adoption of AI? On the other hand, maybe if one does not talk about something, one can pretend that something does not exist. Like the Super Bowl ad with misinformation about cheese. Yes, cheese, again.

Stephen E Arnold, February 10, 2025

Microsoft, Deepseek, and OpenAI: An Interesting Mixture Like RDX?

February 10, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

I have successfully installed Deepseek and run some queries. The results seem okay, but most of the large language models we have installed have their strengths and weaknesses. What’s interesting about Deepseek is that it caused a bit of a financial squall when it was publicized during a Chinese dignitary’s visit to Colombia.

A short time after a high flying video card company lost a few bucks, an expert advising the new US administration suggested “there’s substantial evidence that Deepseek used OpenAI’s models to train its own.” This story appeared on X.com via Fox. Another report said that Microsoft was investigating Deepseek. When I checked my newsfeed this morning (January 30, 2025), Slashdot pointed me to this story: “Microsoft Makes Deepseek’s R1 Model Available on Azure AI and GitHub.”

Did Microsoft do a speedy investigation, or is the inclusion of Deepseek in Azure AI and GitHub part of its investigation? Did loading up Deepseek kill everyone’s favorite version of Office on January 29, 2025? Probably not, but there is a lot of action in the AI space at Microsoft Town.

Let’s recap the stuff from the AI chemistry lab. First, we have the fascinating Sam AI-Man. With a deal of note because Oracle is in and Grok is out, OpenAI remains a partner with Microsoft. Second, Microsoft, fresh from bumper revenues, continues to embrace AI and demonstrate that a welcome mat is outside Satya Nadella’s door for AI outfits. Third, who stole what? AI companies have been viewed as information bandits by some outfits. Legal eagles cloud the sunny future of smart software.

What will these chemical elements combine to deliver? Let’s consider a few options.

  1. Like RDX, a go-to compound for some kinetic applications, the elements combust.
  2. The legal eagles effectively grind innovation to a halt due to restrictions on Nvidia, access to US open source software, and getting in the way of the reinvigoration of the USA.
  3. Nothing. That’s right. The status quo chugs along with predictable ups and downs but nothing changes.

Net net: This will be an interesting techno-drama to watch in real time. On the other hand, I may wait until the Slice outfit does a documentary about the dust up, partnerships, and failed bro-love affairs.

Stephen E Arnold, February 10, 2025

What Does One Do When Innovation Falters? Do the Me-Too Bop

February 10, 2025

Another dinobaby commentary. No smart software required.

I found the TechRadar story “In Surprise Move Microsoft Announces Deepseek R1 Is Coming to CoPilot+ PCs – Here’s How to Get It” an excellent example of big tech innovation. The article states:

Microsoft has announced that, following the arrival of Deepseek R1 on Azure AI Foundry, you’ll soon be able to run an NPU-optimized version of Deepseek’s AI on your Copilot+ PC. This feature will roll out first to Qualcomm Snapdragon X machines, followed by Intel Core Ultra 200V laptops, and AMD AI chipsets.

Yep, me too, me too. The write up explains the ways in which one can use Deepseek, and I will leave taking that step to you. (On the other hand, navigate to Hugging Face and download it, or you could zip over to You.com and give it a try.)

The larger issue is not the speed with which Microsoft embraced the me too approach to innovation. For me, the decision illustrates the paucity of technical progress in one of the big technology giants. You know, Microsoft, the originator of Bob and the favorite software company of bad actors who advertise their malware on Telegram.

Several observations:

  1. It doesn’t matter how the Chinese start up nurtured by a venture capital firm got Deepseek to work. The Chinese outfit did it. Bang. The export controls and the myth of trillions of dollars to scale up disappeared. Poof.
  2. No US outfit — with or without US government support — was in the hockey rink when the Chinese team showed up and blasted a goal in the first few minutes of a global game. Buzz. 1 to zip. The question is, “Why not?” and “What’s happened since Microsoft triggered the crazy Code Red or whatever at the Google?” Answer: Burning money quickly.
  3. More pointedly, are the “innovations” in AI touted by Product Hunt and podcasters innovations? What if these are little more than wrappers with some snappy names? Answer: A reminder that technical training and some tactical kung fu can deliver a heck of a punch.
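The “wrappers” point in item 3 can be made concrete. A toy sketch, with an invented `ask_model` stub standing in for any hosted LLM (no real vendor API or product is referenced here):

```python
# Many "AI products" reduce to this shape: a canned prompt template wrapped
# around someone else's model. The model call is stubbed for illustration.
def ask_model(prompt: str) -> str:
    """Stand-in for a hosted LLM call; a real product would hit a vendor API."""
    return f"<model answer to: {prompt}>"

def snappy_summarizer(text: str) -> str:
    """The entire 'innovation': one fixed prompt plus the model call."""
    return ask_model(f"Summarize in one sentence: {text}")

print(snappy_summarizer("Deepseek rattled the AI market."))
```

The sketch shows why such products are easy to clone: all the heavy lifting happens inside the model call, and the wrapper itself is a few lines of prompt plumbing.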

Net net: Deepseek was a tactical foray or probe. The data are in. Microsoft will install Chinese software in its global software empire. That’s interesting, and it underscores the problem of me too. Innovation takes more than raising prices and hiring a PR firm.

Stephen E Arnold, February 10, 2025

Deepseek: Details Surface Amid Soft Numbers

February 7, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

I read “Research exposes Deepseek’s AI Training Cost Is Not $6M, It’s a Staggering $1.3B.” The assertions in the write up are interesting and closer to the actual cost of the Deepseek open source smart software. Let’s take a look at the allegedly accurate and verifiable information. Then I want to point out two costs not included in the estimated cost of Deepseek.

The article explains that the cost of training was closer to $1.3 billion. I am not sure if this estimate is on the money, but a higher cost is certainly understandable based on the money burning activities of outfits like Microsoft, OpenAI, Facebook / Meta, and the Google, among others.

The article says:

In its latest report, SemiAnalysis, an independent research company, has spotlighted Deepseek, a rising player in the AI landscape. The SemiAnalysis challenges some of the prevailing narratives surrounding Deepseek’s costs and compares them to competing technologies in the market. One of the most prominent claims in circulation is that Deepseek V3 incurs a training cost of around $6 million.

One important point is that building and making available for free a smart software system incurs many costs. The consulting firm has narrowed its focus to training costs.

The write up reports:

The $6 million estimate primarily considers GPU pre-training expenses, neglecting the significant investments in research and development, infrastructure, and other essential costs accruing to the company. The report highlights that Deepseek’s total server capital expenditure (CapEx) amounts to an astonishing $1.3 billion. Much of this financial commitment is directed toward operating and maintaining its extensive GPU clusters, the backbone of its computational power.
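The gap between the two figures quoted above is plain arithmetic. A quick sketch using the article’s numbers (illustrative, not audited accounting):

```python
# Figures from the SemiAnalysis discussion quoted above.
gpu_pretraining_estimate = 6_000_000        # the widely circulated "$6M" claim
total_server_capex = 1_300_000_000          # the reported server CapEx figure

multiple = total_server_capex / gpu_pretraining_estimate
share = gpu_pretraining_estimate / total_server_capex

print(f"CapEx is roughly {multiple:.0f}x the headline training number")
print(f"The $6M figure covers about {share:.2%} of the server CapEx")
```

In other words, the headline number describes well under one percent of the reported infrastructure spend, which is the report’s core complaint.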

But “astonishing.” Nope. Sam AI-Man tossed around numbers in the trillions. I am not sure we will ever know how much Amazon, Facebook, Google, and Microsoft — to name four outfits — have spent in the push to win the AI war, get a new monopoly, and control everything from baby cams to zebra protection in South Africa.

I do agree that the low ball number was low, but I think the pitch for this low ball was a tactic designed to see what a Chinese-backed AI product could do to the US financial markets.

There are some costs that neither the SemiAnalysis outfit nor the Interesting Engineering wordsmith considered.

First, if you take a look at the authors of the Deepseek ArXiv papers you will see a lot of names. Most of these individuals are affiliated with Chinese universities. How were these costs handled? My hunch is that the costs were paid by the Chinese government, and the authors of the paper did what was necessary to figure out how to come up with a “do more for less” system. The idea is that China, hampered by US export restrictions, is better at AI than the mythological Silicon Valley. Okay, that’s a good intelligence operation: Test destabilization with a reasonably believable free software gilded with AI sparklies. But the costs? Staff, overhead, and whatever perks go with being a wizard at a Chinese university have to be counted, multiplied by the time required to get the system to (mostly) work, and then included in the statement of accounts. These steps have not been taken, but a company named Complete Analytics should do the work.

Second, what was the cost of the social media campaign that made Deepseek more visible than the head referee of the Kansas City Chiefs and Philadelphia Eagles game? That cost has not been considered. Someone should grind through the posts, count the authors or their handles, and produce an estimate. As far as I know, there is no information about who is a paid promoter of Deepseek.

Third, how much did the electricity cost to get Deepseek to do its tricks? We must not forget the power consumed at the universities, the research labs, and the laptops. Technology Review has some thoughts along this power line.

Finally, what’s the cost of the overhead? I am thinking about the planning time, the lunches, the meetings, and the back and forth needed to get Deepseek on track to coincide with the new president’s push to make China not so great again. We have nothing. We need a firm called SpeculativeAnalytics for this task, or maybe MasterCard can lend a hand?

Net net: The Deepseek operation worked. The recriminations, the allegations, and the explanations will begin. I am not sure they will have as much impact as this “China smart, US dumb” strategy. Plus, that SemiAnalysis name is a hoot.

Stephen E Arnold, February 7, 2025
