Researchers Raise Deepseek Security Concerns

February 25, 2025

What a shock. It seems there are some privacy concerns around Deepseek. We learn from the Boston Herald, “Researchers Link Deepseek’s Blockbuster Chatbot to Chinese Telecom Banned from Doing Business in US.” Byron Tau, formerly of the Wall Street Journal and now with the AP, writes:

“The website of the Chinese artificial intelligence company Deepseek, whose chatbot became the most downloaded app in the United States, has computer code that could send some user login information to a Chinese state-owned telecommunications company that has been barred from operating in the United States, security researchers say. The web login page of Deepseek’s chatbot contains heavily obfuscated computer script that when deciphered shows connections to computer infrastructure owned by China Mobile, a state-owned telecommunications company.”

If this is giving you déjà vu, dear reader, you are not alone. This scenario seems much like the uproar around TikTok and its Chinese parent company ByteDance. But it is actually worse. ByteDance’s direct connection to the Chinese government is, as of yet, merely hypothetical. China Mobile, on the other hand, is known to have direct ties to the Chinese military. We learn:

“The U.S. Federal Communications Commission unanimously denied China Mobile authority to operate in the United States in 2019, citing ‘substantial’ national security concerns about links between the company and the Chinese state. In 2021, the Biden administration also issued sanctions limiting the ability of Americans to invest in China Mobile after the Pentagon linked it to the Chinese military.”

It was Canadian cybersecurity firm Feroot Security that discovered the code. The AP then had the findings verified by two academic cybersecurity experts. Might similar code be found within TikTok? Possibly. But, as the article notes, the information users feed into Deepseek is a bit different from the data TikTok collects:

“Users are increasingly putting sensitive data into generative AI systems — everything from confidential business information to highly personal details about themselves. People are using generative AI systems for spell-checking, research and even highly personal queries and conversations. The data security risks of such technology are magnified when the platform is owned by a geopolitical adversary and could represent an intelligence goldmine for a country, experts warn.”

Interesting. But what about CapCut, the ByteDance video thing?

Cynthia Murrell, February 25, 2025

Content Injection Can Have Unanticipated Consequences

February 24, 2025

The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.

Years ago I gave a lecture to a group of Swedish government specialists affiliated with the Forestry Unit. My topic was how to cause certain common text-processing algorithms to increase the noise in their outputs. The idea was to input certain types of text and numeric data in a specific way. (No, I will not disclose the methods in this free blog post, but if you have a certain profile, perhaps something can be arranged by writing benkent2020 at yahoo dot com. If not, well, that’s life.)

We focused on a handful of methods widely used in what now is called “artificial intelligence.” Keep in mind that most of the procedures are not new. There are some flips and fancy dancing introduced by individual teams, but the math is not invented by TikTok teens.

In my lecture, the forestry professionals wondered if these methods could be used to achieve specific objectives or “ends.” The answer was and remains, “Yes.” The idea is simple. Once the methods are in place, the algorithms chug along; some are brute force, others probabilistic. Either way, content and data injections can be shaped, just like the gizmos required to make kinetic events occur.

The point of this forestry excursion is to make clear that a group of people, operating in a loosely coordinated manner, can create data or content. Those data or content can be weaponized. When ingested by or injected into a content processing flow, the outputs of the larger system can be fiddled: more emphasis here, a little less accuracy there, and an erosion of whatever “accuracy” calculations are used to keep the system within the engineers’ and designers’ parameters. A plebeian way to describe the goal: disinformation or accuracy erosion.
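
To make the mechanism concrete without giving away the methods from that lecture, here is a minimal, hypothetical sketch in Python. The corpus, the labels, and the scikit-learn pipeline are my own assumptions for illustration only; the point is simply that a loosely coordinated batch of injected, mislabeled documents shifts a toy classifier’s behavior on exactly the inputs the injectors care about.

```python
# A minimal sketch (not the lecture's actual method) of how injected content
# can erode a text classifier. Assumes scikit-learn is installed; the
# documents and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Clean, well-labeled training documents (toy forestry-flavored example).
clean_docs = [
    "timber harvest report filed on schedule",
    "routine forestry inspection completed",
    "invoice approved for sawmill equipment",
    "wildfire risk elevated in northern district",
    "illegal logging detected near reserve",
    "unauthorized land clearing reported",
] * 20
clean_labels = [0, 0, 0, 1, 1, 1] * 20  # 0 = routine, 1 = alert

# Injected content: alert-style wording deliberately labeled "routine,"
# so the model learns to play down the very signals it should flag.
poison_docs = ["wildfire risk elevated, nothing to see, routine"] * 40
poison_labels = [0] * 40

test_docs = [
    "wildfire risk elevated in southern district",
    "routine forestry inspection completed",
]
test_labels = [1, 0]

def train_and_score(docs, labels):
    """Fit a simple TF-IDF + logistic regression model and score it on the test set."""
    vec = TfidfVectorizer()
    model = LogisticRegression(max_iter=1000)
    model.fit(vec.fit_transform(docs), labels)
    preds = model.predict(vec.transform(test_docs))
    return accuracy_score(test_labels, preds)

print("accuracy, clean corpus:   ", train_and_score(clean_docs, clean_labels))
print("accuracy, poisoned corpus:",
      train_and_score(clean_docs + poison_docs, clean_labels + poison_labels))
```

Scale the toy corpus up to millions of documents and swap the logistic regression for a transformer, and the same general pressure applies: the system drifts within its tolerance checks until the outputs no longer mean what the designers think they mean.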

I read “Meet the Journalists Training AI Models for Meta and OpenAI.” The write up explains that journalists without jobs or in search of extra income are creating “content” for smart software companies. The idea is that if one just does the Silicon Valley thing and sucks down any and all content, lawyers might come calling. Therefore, paying for “real” information is a better path.

Please read the original article to get a sense of who is doing the writing and what baggage or mindset these people might bring to their work.

If the content is distorted — either intentionally or unintentionally — the impact of these content objects on the larger smart software system might have some interesting consequences. I just wanted to point out that weaponized information can have an impact. Those running smart software and buying content on the assumption that it is just fine might find some interesting consequences in the outputs.

Stephen E Arnold, February 24, 2025

AI Worriers, Play Some Bing Crosby Music

February 24, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

The Guardian newspaper ran an interesting write up about smart software and the inevitability of complaining to stop it in its tracks. “I Met the Godfathers of AI in Paris – Here’s What They Told Me to Really Worry About.” I am not sure what’s being taught in British schools, but the headline features the author, a split infinitive, and the infamous “ending a sentence with a preposition” fillip. Very sporty.

The write up includes quotes from the godfathers:

“It’s not today’s AI we need to worry about, it’s next year’s,” Tegmark told me. “It’s like if you were interviewing me in 1942, and you asked me: ‘Why aren’t people worried about a nuclear arms race?’ Except they think they are in an arms race, but it’s actually a suicide race.”

I am not sure what psychologists call worrying about the future. Bing Crosby took a different approach. He sang, “Don’t worry about tomorrow” and offered:

Why should we cling to some old faded thing
That used to be

Bing looked beyond the present but did not seem unduly worried. The Guardian is a bit more uptight.

The write up says:

The idea that we, on Earth, might lose control of an AGI that then turns on us sounds like science fiction – but is it really so far-fetched considering the exponential growth of AI development? As Bengio [an AI godfather, according to the Guardian] pointed out, some of the most advanced AI models have already attempted to deceive human programmers during testing, both in pursuit of their designated objectives and to escape being deleted or replaced with an update.

I circled this passage:

It seems as if we have a shrinking opportunity to lay down the incentives for companies to create the kind of AI that actually benefits our individual and collective lives: sustainable, inclusive, democracy-compatible, controlled. And beyond regulation, “to make sure there is a culture of participation embedded in AI development in general”, as Eloïse Gabadou, a consultant to the OECD on technology and democracy, put it.

Okay, so what’s the fix? Who implements the fix? Will the fix stop British universities in Manchester, Cambridge, and Oxford among others from teaching about AI or stop researchers from fiddling with snappier methods? Will the Mayor of London shut down the DeepMind outfit?

Nope. I am delighted that some people are talking about smart software. However, in the high tech world in which we live, I want to remind the Guardian, the last train for Chippenham has left the station. Too late, old chap. Learn to play Bing’s song. Chill.

Stephen E Arnold, February 24, 2025

Advice for Programmers: AI-Proof Your Career

February 24, 2025

Software engineer and blogger Sean Goedecke has some career advice for those who, like himself, are at risk of losing their programming jobs to AI. He counsels, "To Avoid Being Replaced by LLMs, Do What They Can’t." Logical enough. But what will these tools be able to do, and when will they be able to do it? That is the $25 million question. Goedecke has suggestions for the medium term, and the long term.

Right now, he advises, engineers should do three things: First, use the tools. They can help you gain an advantage in the field. And also, know-thine-enemy, perhaps? Next, learn how LLMs work so you can transition to the growing field of AI work. If you can’t beat them, join them, we suppose. Finally, climb the ranks posthaste, for those in junior roles will be the first to go. Ah yes, the weak get eaten. It is a multipronged approach.

For the medium term, Goedecke predicts which skills LLMs are likely to master first. Get good at the opposite of that. For example, ill-defined or poorly-scoped problems, solutions that are hard to verify, and projects with huge volumes of code are all very difficult for algorithms. For now.

In the long term, work yourself into a position of responsibility. There are few of those to go around. So, as noted above, start vigorously climbing over your colleagues now. Why? Because executives will always need at least one good human engineer they can trust. The post observes:

"A LLM strong enough to take responsibility – that is, to make commitments and be trusted by management – would have to be much, much more powerful than a strong engineer. Why? Because a LLM has no skin in the game, which means the normal mechanisms of trust can’t apply. Executives trust engineers because they know those engineers will experience unpleasant consequences if they get it wrong. Because the engineer is putting something on the line (e.g. their next bonus, or promotion, or in the extreme case being fired), the executive can believe in the strength of their commitment. A LLM has nothing to put on the line, so trust has to be built purely on their track record, which is harder and takes more time. In the long run, when almost every engineer has been replaced by LLMs, all companies will still have at least one engineer around to babysit the LLMs and to launder their promises and plans into human-legible commitments. Perhaps that engineer will eventually be replaced, if the LLMs are good enough. But they’ll be the last to go."

If you are lucky, it will be time to retire by then. For those young enough that this is unlikely, or for those who do not excel at the rat race, perhaps a career change is in order. What jobs are safe? Sadly, this dino-baby writer does not have the answer to that question.

Cynthia Murrell, February 24, 2025

OpenAI Furthers Great Research

February 21, 2025

Unsatisfied with existing AI cheating solutions? If so, Gizmodo has good news for you: “OpenAI’s ‘Deep Research’ Gives Students a Whole New Way to Cheat on Papers.” Writer Kyle Barr explains:

“OpenAI’s new ‘Deep Research’ tool seems perfectly designed to help students fake their way through a term paper unless asked to cite sources that don’t include Wikipedia. OpenAI’s new feature, built on top of its upcoming o3 model and released on Sunday, resembles one Google introduced late last year with Gemini 2.0. Google’s ‘Deep Research’ is supposed to generate long-form reports over the course of 30 minutes or more, depending on the depth of the requested topic. Boiled down, Google’s and OpenAI’s tools are AI agents capable of performing multiple internet searches while reasoning about the next step to generate a report.”
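
Stripped of branding, the architecture in that passage is a loop: search, read, let the model decide whether it needs another pass, then have it write the report. Here is a rough, hypothetical sketch in Python; the helpers web_search and ask_model are stand-ins I made up, not OpenAI’s or Google’s actual APIs, and a real Deep Research agent is far more elaborate.

```python
# A rough sketch of the "search, reason about the next step, report" loop the
# quote describes. web_search and ask_model are invented placeholders, not
# real OpenAI or Google endpoints; wire in actual search and LLM calls to use it.
def web_search(query: str) -> list[str]:
    """Placeholder: return text snippets for a query."""
    return [f"snippet about {query}"]

def ask_model(prompt: str) -> str:
    """Placeholder: return model text. A real agent would call an LLM API here."""
    if prompt.startswith("Decide"):
        return "SEARCH: a more specific sub-topic"  # pretend the model wants another pass
    return "Long-form report assembled from the collected notes..."

def deep_research(topic: str, max_rounds: int = 5) -> str:
    notes: list[str] = []
    query = topic
    for _ in range(max_rounds):
        notes.extend(web_search(query))             # gather material for this round
        decision = ask_model(
            f"Decide the next step for researching '{topic}'. Notes so far: {notes}"
        )
        if decision.startswith("SEARCH:"):          # the model asks for another search
            query = decision.split(":", 1)[1].strip()
        else:                                       # the model thinks it has enough
            break
    # Final pass: turn the accumulated notes into the report the user asked for.
    return ask_model(f"Write a long-form report on '{topic}' using: {notes}")

if __name__ == "__main__":
    print(deep_research("how AI research agents compile reports"))
```

The progress updates Gizmodo mentions in the side panel correspond, roughly, to the per-round decisions in a loop like this; the Wikipedia habit is simply whatever the search step returns most often.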

Deep Research even functions in a side panel, providing updates on its direction and progress. So helpful! However, the tool is not for those looking to score an A. Like a student rushing to finish a paper the old-fashioned way, Barr notes, it relies heavily on Wikipedia. An example report did include a few trusted sites, like Pew Research, but such reliable sources were in the minority. Besides, the write-up emphasizes:

“Remember, this is just a bot scraping the internet, so it won’t be accessing any non-digitized books or—ostensibly—any content locked behind a paywall. … Because it’s essentially an auto-Googling machine, the AI likely won’t have access to the most up-to-date and large-scale surveys from major analysis firms. … That’s not to say the information was inaccurate, but anybody who generates a report is at the mercy of suspect data and the AI’s interpretation of that data.”

Meh, we suppose that is okay if one just needs a C to get by. But is it worth the $200 per month subscription? I suppose that depends on the student and the parents’ willingness to sign up for services that will make gentle Ben and charming Chrissie smarter. Besides, we are sure more refined versions are in our future.

Cynthia Murrell, February 21, 2025

Gemini, the Couch Potato, Watches YouTube

February 21, 2025

Have you ever told yourself that you have too many YouTube videos to watch? Now you can save time by using Gemini AI to watch them for you. What is Gemini AI? According to Make Use Of in “Gemini Can Now Watch YouTube Videos And Save Hours Of Time,” it is an algorithm that can do exactly that.

Google recently rolled out an update to its Gemini AI that allows users to catch up on YouTube videos without having to actually watch them. The new feature is a marvelous advancement! The new addition to Gemini 2.0 Flash will watch the video and then answer questions about it or provide a summary. Google users can access Gemini through the Gemini site or the smartphone app. It’s also available for free without the Gemini Advanced subscription.

To access the video-watching feature, users must select the 2.0 Flash Thinking Experimental with apps model from the sidebar.

Here’s how the cited article’s author used Gemini:

“… I came across a YouTube video about eight travel tips for Las Vegas. Instead of watching the entire video, I simply asked Gemini, ‘What are the eight travel tips in this video?’ Gemini then processed the video and provided a concise summary of the travel tips. I also had Gemini summarize a video on changing a windshield wiper on a Honda CR-V, a chore I needed to complete. The results were simple and easy to understand, allowing me to glance at my iPhone screen instead of constantly stopping and starting the video during the process. The easiest way to grab a YouTube link is through your web browser or the Share button under the video.”

YouTube videos can be long and boring. Gemini condenses the information into digestible, quick-to-read bits. It’s an awesome tool, but if Gemini watches a video, does it count as a view for advertising? Will Gemini put on a few pounds snacking on Pringles?

Whitney Grace, February 21, 2025

What Do Gamers Know about AI? Nothing, Nothing at All

February 20, 2025

Take-Two Games CEO says, "There’s no such thing" as AI.

Is the head of a major gaming publisher using semantics to downplay the role of generative AI in his industry? PC Gamer reports, "Take-Two CEO Strauss Zelnick Takes a Moment to Remind Us Once Again that ‘There’s No Such Thing’ as Artificial Intelligence." Writer Andy Chalk quotes Strauss Zelnick from a recent GamesIndustry interview:

"Artificial intelligence is an oxymoron, there’s no such thing. Machine learning, machines don’t learn. Those are convenient ways to explain to human beings what looks like magic. The bottom line is that these are digital tools and we’ve used digital tools forever. I have no doubt that what is considered AI today will help make our business more efficient and help us do better work, but it won’t reduce employment. To the contrary, the history of digital technology is that technology increases employment, increases productivity, increases GDP and I think that’s what’s going to happen with AI. I think the videogame business will probably be on the leading, if not bleeding, edge of using AI."

So AI, which does not exist, will actually create jobs instead of eliminating them? The write-up correctly notes the evidence points to the contrary. On the other hand, Zelnick seems clear-eyed on the topic of copyright violations. AI-on-AI violations, anyway. We learn:

"That’s a mess Zelnick seems eager to avoid. ‘In terms of [AI] guardrails, if you mean not infringing on other people’s intellectual property by poaching their LLMs, yeah, we’re not going to do that,’ he said. ‘Moreover, if we did, we couldn’t protect that, we wouldn’t be able to protect our own IP. So of course, we’re mindful of what technology we use to make sure that it respects others’ intellectual property and allows us to protect our own.’"

Perhaps Zelnick is on to something. It is true that generative AI is just another digital tool—albeit one that tends to put humans out of work. But as we know, hype is more important than reality for those chasing instant fame and riches.

Cynthia Murrell, February 20, 2025

Smart Software and Law Firms: Realities Collide

February 19, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

TechCrunch published “Legal Tech Startup Luminance, Backed by the Late Mike Lynch, Raises $75 Million.” Good news for Luminance. Now the company just needs to ring the bell for those putting up the money. The write up says:

Claiming to be capable of highly accurate interrogation of legal issues and contracts, Luminance has raised $75 million in a Series C funding round led by Point72 Private Investments. The round is notable because it’s one of the largest capital raises by a pure-play legal AI company in the U.K. and Europe. The company says it has raised over $115 million in the last 12 months, and $165 million in total.  Luminance was originally developed by Cambridge-based academics Adam Guthrie (founder and chief technical architect) and Dr. Graham Sills (founder and director of AI).

Why is Luminance different? The method is similar to that used by Deepseek. With concerns about the cost of AI, a method which might be less expensive to get up and keep running seems like a good bet.

However, Eudia has raised $105 million with backing from people familiar with Relativity’s legal business. Law dot com suggests that Eudia will streamline legal business processes.

The article “Massive Law Firm Gets Caught Hallucinating Cases” offers an interesting anecdote about a large law firm facing sanctions. What did the big boys and girls at the law firm do? Those hard working Type A professionals cited nine cases to support an argument. There is just one trivial issue perplexing the senior partners: eight of those cases were “nonexistent.” That means made up, invented, and spit out by a nifty black box of probabilities and their methods.

I am no lawyer. I did work as an expert witness and picked up some insight about the thought processes of big time lawyers. My observations may not apply to the esteemed organizations to which I linked in this short essay, but I will assume that I am close enough for horseshoes.

  1. Partners want big pay and juicy bonuses. If AI can help reduce costs and add protein powder to the compensation package, AI is definitely a go-to technology to use.
  2. Lawyers who are very busy all of the billable time (and then some) want to be more efficient. The hyperbole swirling around AI makes it clear that using an AI is a productivity booster. Do lawyers have time to check what the AI system did? Nope. Therefore, hallucination is going to be part of the transformer-based methodologies until something better becomes feasible. (Did someone say, “Quantum computers”?)
  3. The marketers (both directly compensated and the social media remoras) identify a positive. Then that upside is gilded like Tzar Nicholas’ powder room and repeated until it sure seems true.

The reality for the investors is that AI could be a winner. Go for it. The reality for the lawyers is that the time to figure out what’s in bounds and what’s out of bounds is unlikely to be available. Other professionals will discover what the cancer docs did when using the late, great IBM Watson. AI can do some things reasonably well. Other things can have severe consequences.

Stephen E Arnold, February 19, 2025

Speed Up Your Loss of Critical Thinking. Use AI

February 19, 2025

While the human brain isn’t a muscle, its neurology does need to be exercised to maintain plasticity. When a human brain is rigid, it can’t function in a healthy manner. AI is harming brains by making them not think good, says 404 Media: “Microsoft Study Finds AI Makes Human Cognition ‘Atrophied and Unprepared.’” You can read the complete Microsoft research report at this link. (My hunch is that this type of document would have gone the way of Timnit Gebru and the flying stochastic parrot, but that’s just my opinion, Hank, Advait, Lev, Ian, Sean, Dick, and Nick.)

Carnegie Mellon University and Microsoft researchers released a paper which says that the more humans rely on generative AI, the more it can “result in the deterioration of cognitive faculties that ought to be preserved.”

Really? You don’t say! What else does this remind you of? How about watching too much television or playing too many videogames? These passive activities (arguable with videogames) stunt the development of brain gray matter and, in a flight of Mary Shelley rhetoric, make a brain rot! What else did the researchers discover when they studied 319 knowledge workers who self-reported their experiences with generative AI?

“ ‘The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI,’ the researchers wrote. ‘Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving.’”

By the way, we definitely love and absolutely believe data based on self reporting. Think of the mothers who asked their teens, “Where did you go?” The response, “Out.” The mothers ask, “What did you do?” The answer, “Nothing.” Yep, self reporting.

Does this mean generative AI is a bad thing? Yes and no. It’ll stunt the growth of some parts of the brain, but other parts will grow in tandem with the use of new technology. Humans adapt to their environments. As AI becomes more ingrained into society, it will change the way humans think but will only make them sort of dumber [sic]. The paper adds:

“ ‘GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques,’ the researchers wrote. ‘The tool could help develop specific critical thinking skills, such as analyzing arguments, or cross-referencing facts against authoritative sources. This would align with the motivation enhancing approach of positioning AI as a partner in skill development.’”

The key is to not become overly reliant on AI but also to be aware that the tool won’t go away. Oh, when my mother asked me, “What did you do, Whitney?” I responded in the best self reporting manner, “Nothing, mom, nothing at all.”

Whitney Grace, February 19, 2025

Programming: Missing the Message

February 18, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

I read “New Junior Developers Can’t Actually Code.” The write up is interesting. I think an important point in the essay has been either overlooked or sidestepped. The main point of the article in my opinion is:

The foundational knowledge that used to come from struggling through problems is just… missing. We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.

I agree. The approach to creating software has shifted to what I like to describe as a TikTok mindset. The idea is that one can do a quick search and get an answer, preferably in less than 30 seconds. I know there are young people who spend time working through problems. We have one of these 12-year-olds in our family. The problem is that I am not sure how many other 12-year-olds have this baked-in desire to work through problems. From what I see and hear, teachers are concerned that students are in TikTok mode, not in “work through” mode, particularly in class.

The write up says:

Here’s the reality: The acceleration has begun and there’s nothing we can do about it. Open source models are taking over, and we’ll have AGI running in our pockets before we know it. But that doesn’t mean we have to let it make us worse developers. The future isn’t about whether we use AI—it’s about how we use it. And maybe, just maybe, we can find a way to combine the speed of AI with the depth of understanding that we need to learn.

I agree. Now the “however”:

  1. Mistakes with older software may not be easily remediated. I am a dinobaby. Dinobabies drop out or die. The time required to figure out why something isn’t working may not be available. That might be a problem even for a small issue. For something larger, like a large bank, the problem can be a difficult one.
  2. People with modern skills may not know where to look for an answer. The reference materials, the snippets of code, or the knowledge about a specific programming language may not be available. There are many reasons for this “knowledge loss.” Once gone, it will take time and money to get the information, not a TikTok fix.
  3. The software itself may be a hack job. We did a project for Bell Labs at the time of the Judge Green break up. The regional manager running my project asked the people working with me on this minor job whether Alan and Howard (my two mainframe IBM CICS specialists) wrote documentation. Howard said, “Ho ho ho. We just use Assembler and make it work.” The project manager said, “You can’t do that for this project.” Alan said, “How do you propose to get the service you want us to implement to work?” We got the job, and the system, almost 50 years later, is still in service. Okay, young wizard with smart software, fix up our work.

So what? We are reaching a point at which the connection between essential computer science knowledge and actual implementation in large-scale, mission-critical systems is being lost. Maybe AI can do what Alan, Howard, and I did to comply with Judge Green’s order relating to Baby Bell information exchange in the IBM environment.

I am skeptical. That’s a problem with the TikTok approach and smart software. If the model gets it wrong, there may be no fix. TikTok won’t be much help either. (I think Steve Gibson might agree with some of my assertions.) The write up does not flip over the rock. There is some shocking stuff beneath the gray, featureless surface.

Stephen E Arnold, February 18, 2025
