First WAP? What Is That? Who Let the Cat Out of the Bag?

October 21, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Ageing in rural Kentucky is not a good way to keep up with surveillance technology. I did spot a post on LinkedIn. I will provide a URL for the LinkedIn post, but I have zero clue whether anyone reading this blog will be able to view the information. The focus of the LinkedIn post is that some wizards have taken inspiration from NSO Group-type firms and done some innovating. Like any surveillance technology, one has to apply it in a real-life situation. Sometimes there is a slight difference between demonstrations, PowerPoint talks, and ease of use. But, hey, that’s the MBA-inspired way to riches or, at least in NSO Group’s situation, infamy.


Letting the cat out of the bag. Who is the individual? The president, an executive, a conference organizer, or a stealthy “real” journalist? One thing is clear: The cat is out of the bag. Thanks, Venice.ai. Good enough.

The LinkedIn post is from an entity using the handle OSINT Industries. Here is the link, dutifully copied from Microsoft’s outstanding social media platform. Don’t blame me if it doesn’t work. Microsoft just blames users, so just look in the mirror and complain: https://www.linkedin.com/posts/osint-industries_your-phone-is-being-tracked-right-now-ugcPost-7384354091293982721-KQWk?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAACYEwBhJbGkTw7Ad0vyN4RcYKj0Su8NUU

How’s that for a link? ShortURL spit out this version: https://shorturl.at/x2Qx9.

So what’s the big deal? Cyber security outfits and an online information service (in the old days a printed magazine) named Mother Jones learned that an outfit called First WAP exploited the SS7 telecom protocol. As I understand this signal switching, SS7 is about 50 years old and much loved by telephony nerds and Bell heads. The system acts like an old-fashioned switchyard operator at a rail yard in the 1920s. Signaling is carried separately from the voice channels. Call connections and other housekeeping are pushed to the SS7 digital switchyard. Instead of being located underground in Manhattan, the SS7 system is digital and operates globally. I have heard but have no firsthand information about its security vulnerabilities. I know that a couple of companies are associated with switching fancy dancing. Do security exploits work? Well, the hoo-hah about First WAP suggests that SS7 exploitation is available.
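The out-of-band idea above can be sketched in a few lines. This is a toy illustration only: real SS7 uses MAP messages over a dedicated signaling network, and the class and field names here are hypothetical, not actual SS7 constructs.

```python
# Toy sketch of out-of-band signaling, the idea behind SS7.
# Conceptual only; real SS7 is a telecom protocol stack, not a dict.

class SignalingNetwork:
    """Routes control messages separately from the voice path."""
    def __init__(self):
        # Hypothetical home-location register: number -> current region
        self.hlr = {}

    def register(self, msisdn, region):
        # Housekeeping traffic: the network notes where a phone is
        self.hlr[msisdn] = region

    def route_call(self, msisdn):
        # A legitimate signaling query: where do we connect the call?
        return self.hlr.get(msisdn, "unknown")

net = SignalingNetwork()
net.register("+15551234567", "cell-tower-042")

# The same lookup that sets up a call also reveals location, which is
# why a party with signaling access can track a phone without ever
# touching the voice channel.
print(net.route_call("+15551234567"))  # cell-tower-042
```

The design point the toy makes: because signaling is trusted plumbing shared among carriers, a location query looks the same whether it comes from a legitimate switch or an interloper.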

The LinkedIn post says that “The scale [is] 14,000+ phone numbers. 160 countries. Over 1 million location pings.”

A bit more color appears in the Russian information service FrankMedia.ru’s report “First WAP Empire: How Hidden Technology Followed Leaders and Activists.” The article is in Russian, but ever-reliable Google Translate makes short work of one’s language blind spots. Here are some interesting points from Frank Media:

  1. First WAP has been in business for about 17 or 18 years
  2. The system was used to track Google and Raytheon professionals
  3. First WAP relies on resellers of specialized systems and services and does not do too much direct selling. The idea is that the intermediaries are known to the government buyers. A bright engineer from another country is generally viewed as someone who should not be in a meeting with certain government professionals. This is nothing personal, you understand. This is just business.
  4. The system is named Altamides, which may be a variant of a Greek word for “powerful.”

The big reveal in the Russian write up is that a journalist got into the restricted conference, struck up a conversation with an attendee, and got information which has put First WAP in the running to be the next NSO Group in terms of PR problems. The Frank Media write up does a fine job of identifying two individuals. One is the owner of the firm, and the other is the voluble business development person.

Well, everyone gets 15 minutes of fame. Let me provide some additional, old-person information. First, the company’s Web address is www.1rstwap.com. Second, the firm’s alleged full name is First WAP International DMCC. The “DMCC” acronym means that the firm operates from Dubai’s economic zone. Third, the firm sells through intermediaries; for example, an outfit called KCS operating allegedly from the UK. Companies House information is what might be called sparse.

Several questions:

  1. How did a non-LE or intel professional get into the conference?
  2. Why was the company able to operate off the radar for more than a decade?
  3. What benefits does First WAP derive from its nominal base in Indonesia?
  4. What are the specific security vulnerabilities First WAP exploits?
  5. Why do the named First WAP executives suddenly start talking after many years of avoiding an NSO-type PR problem?

Carelessness seems to be the reason First WAP got its wireless access protocol put in the spotlight. Nice work!

To WAP up, you can download the First WAP encrypted messaging application from… wait for it… the Google Play Store. The Google listing includes this statement, “No data shared with third parties.” Think about that statement.

Stephen E Arnold, October 21, 2025

A Positive State of AI: Hallucinating and Sloppy but Upbeat in 2025

October 21, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Who can resist a report about AI authored on the “interwebs”? Is this a variation of the Internet as pipes? The write up is “Welcome to State of AI Report 2025.” When I followed the links, I could read this blog post, view a YouTube video, work through more than 300 online slides, or see “live survey results.” I must admit that when I write a report, I distribute it to a few people and move on. Not this “interwebs” outfit. The data are available for those who are in tune, locked in, and ramped up about smart software.


An anxious parent learns that a robot equipped with agentic AI will perform her child’s heart surgery. Thanks, Venice.ai. Good enough.

I appreciate enthusiasm, particularly when I read this statement:

The existential risk debate has cooled, giving way to concrete questions about reliability, cyber resilience, and the long-term governance of increasingly autonomous systems.

Agree or disagree, the report makes clear that doom is not associated with smart software. I think that this blossoming of smart software services, applications, and apps reflects considerable optimism. Some of these people and companies are probably in the AI game to make money. That’s okay as long as the products and services don’t urge teens to fall in love with digital friends, cause a user mental distress as a rabbit hole is plumbed, or just output incorrect information. Who wants to be the doctor who says, “Hey, sorry your child died. The AI output a drug that killed her. Call me if you have questions”?

I could not complete the 300-plus slides in the slide deck. I am not a video type, so the YouTube version was a non-starter. However, I did read the list of findings from the “interwebs” and its “team.” Please consult the source documents for a full, non-dinobaby version of what the enthusiastic researchers learned about 2025. I will highlight three findings and then offer a handful of comments:

  • OpenAI is the leader of the pack. That’s good news for Sam AI-Man or SAMA.
  • “Commercial traction accelerated.” That’s better news for those who have shoveled cash into the giant open hearth furnaces of smart software companies.
  • Safety research is in a “pragmatic phase.” That’s the best news in the report. OpenAI, the leader like the Philco radio outfit, is allowing erotic interactions. Yes, pragmatic because sex sells as Madison Avenue figured out a century ago.

Several observations are warranted because I am a dinobaby, and I am not convinced that smart software is more than a utility, as opposed to an application like Lotus 1-2-3 or the original laser printer. Buckle up:

  1. The money pumped into AI is cash that is not being directed at the US knowledge system. I am talking about schools and their job of teaching reading, writing, and arithmetic. China may be dizzy with AI enthusiasm, but their schools are churning out people with fundamental skills that will allow that nation state to be the leader in a number of sectors, including smart software.
  2. Today’s smart software consists of neural network and transformer-anchored methods. The companies are increasingly similar, and the different systems scatter incorrect or misleading output amidst recycled knowledge, data, and information. Two pigs cannot output an eagle except in a video game or an anime.
  3. The handful of firms dominating AI are not motivated by social principles. These firms want to do what they want. Governments can’t rein them in. Therefore, the “governments” try to co-opt the technology, hang on, and hope for the best. Laws, rules, regulations, ethical behavior — forget that.

Net net: The State of AI in 2025 is exactly what one would expect from Silicon Valley- and MBA-type thinking. Would you let an AI doc treat your 10-year-old child? You can work through the 300-plus slides to assuage your worries.

Stephen E Arnold, October 21, 2025

Into Video? Say Howdy to Loneliness and Shallow Thinking

October 21, 2025

This will surely improve the state of the world and validate Newton Minow’s observation about a vast wasteland. Or at least distract from it. On his Substack, Derek Thompson declares “Everything Is Television.” Thompson supports his assertion with three examples: First, he notes, Facebook and Instagram users now spend over 90% and 80% of their time on the platforms, respectively, watching videos. Next, he laments, most podcasts now include video. What started as a way to listen to something interesting as we performed other tasks has become another reason to stare at a screen. Finally, the post reports to our horror, both Meta and OpenAI have just launched products that serve up endless streams of AI-generated videos. Just what we needed.

Thompson’s definition of television here includes every venue hosting continuous flows of episodic video. This is different from entertainment forms that predate television—plays, books, concerts, etc.—because those were finite experiences. Now we can zone out to video content for hours at a time. And, apparently, more and more of us do. In a section titled “Lonely, Mean, and Dumb,” Thompson describes why this is problematic. He writes:

“My beef is not with the entire medium of moving images. My concern is what happens when the grammar of television rather suddenly conquers the entire media landscape. In the last few weeks, I have been writing a lot about two big trends in American life that do not necessarily overlap. My work on the ‘Antisocial Century’ traces the rise of solitude in American life and its effects on economics, politics, and society. My work on ‘the end of thinking’ follows the decline of literacy and numeracy scores in the U.S. and the handoff from a culture of literacy to a culture of orality. Neither of these trends is exclusively caused by the logic of television colonizing all media. But both trends are significantly exacerbated by it.”

On the issue of solitude, the post cites Robert Putnam’s Bowling Alone. That work correlates the growing time folks spent watching TV from 1965 to 1995 with a marked decrease in activities involving other people. Volunteering and dinner parties are a couple of examples. So what happens when the Internet, social media, and AI turbocharge that self-isolation trend? Thompson asserts:

“When everything turns into television, every form of communication starts to adopt television’s values: immediacy, emotion, spectacle, brevity. In the glow of a local news program, or an outraged news feed, the viewer bathes in a vat of their own cortisol. When everything is urgent, nothing is truly important. Politics becomes theater. Science becomes storytelling. News becomes performance. The result, [Neil] Postman warned, is a society that forgets how to think in paragraphs, and learns instead to think in scenes.”

Well said. For anyone with enough attention span to have read this far, see the write-up for more in-depth consideration of these issues. Is the human race forfeiting its capacity to think deeply and critically about complex topics? Is it too late to reverse the trend?

Cynthia Murrell, October 21, 2025

Amazon AWS: Two Pizza Team Engineering Delivers Indigestion to Lots of People

October 20, 2025

No smart software. Just a dumb and quite old dinobaby.

Years ago an investment bank asked me to write a report about Amazon’s technical infrastructure. I had visited Amazon as part of a US government entity. Along with four colleagues from different agencies, I had an opportunity to ask about how Amazon’s infrastructure could be used as an online services platform. I did not get an answer, just marketing talk. One of the phrases stuck with me; to wit, “We use two pizza teams.”

The idea is that no technical project can involve more developers than two pizzas can feed. I was not sure if this was brilliant, smart assery, or an admission that Amazon was a “good enough” engineering organization.

I had a couple of other Amazon projects after that big tech study. One was to analyze Amazon’s patents for blockchain. Let me tell you. Those Amazon engineers were into cross chain methods and a number of dizzying engineering innovations. Where did that blockchain stuff go? To tell the truth, I don’t have many Amazon blockchain items lighting up my radar. Then I did a report for a law enforcement group interested in Amazon’s drone demonstration in Australia. The idea was that Amazon’s drone had image recognition. The demo showed the drone spotting a shark heading toward swimmers. The alert was sounded and the shark had to go find another lunch spot. What happened to that? I have no idea. Then … oh, well, you get the idea.

Amazon does technology that seems to be okay for leasing Kindle books and allowing third-party resellers to push polo shirts. The Ring thing, the Alexa gizmo, and other Amazon initiatives like its mobile phone were not hitting home runs.

I read “Widespread Internet Outage Reported As Amazon Web Services Works on Issue.” [This is a Microsoft link. If it goes dead, don’t call me. Give Copilot a whirl.] Okay, order those pizzas. The write up reports:

The Amazon cloud computing company, which supports wide swaths of the publicly available internet, issued an update Monday just after 3 p.m. ET saying that the company continues to “observe recovery across all AWS services.” “We are in the process of validating a fix,” AWS added, referring to a specific problem set off by the connectivity issue announced shortly after 3 a.m. Eastern Time.

Okay, that’s 12 hours and counting.

I want to point out that the two-pizza approach to engineering is cute. The reality is that AWS is vulnerable. The outage may be a result of engineering flubs. You are familiar with those. The company says, “An intern entered an invalid command.” The outage may be a result of Amazon’s giant and almost unmanageable archipelago of servers, services, software, and systems being hacked by a bad actor. Maybe it was one of those 1,000 bad actors who took out Microsoft a couple of years ago? Maybe it was a customer who grew frustrated with inexplicable fees and charges? Maybe it was a problem caused by an upstream or downstream vendor? One thing is sure: It will take more than a two-pizza team to remediate and prevent the failure from happening again.

In that first report for the California money guys, I made one point: The AWS system will fail and no one will know exactly what went wrong.

Two-pizza engineering is a Groucho Marx type of quip. Now we know what one gets: digital food poisoning.

Stephen E Arnold, October 20, 2025 at 5:30 pm US Eastern

OpenAI and the Confusing Hypothetical

October 20, 2025

This essay is the work of a dumb dinobaby. No smart software required.

SAMA, or Sam AI-Man Altman, is probably going to ignore the Economist’s article “What If OpenAI Went Belly-Up?” I love what-if articles. These confections are hot buttons for consultants to push to get well-paid executives with impostor syndrome to sign up for a big project. Push the button and ka-ching. The cash register tallies another win for a blue chip.

Will Sam AI-Man respond to the cited article? He could fiddle the algorithms for ChatGPT to return links to AI slop. The result would be either an improvement in Economist what-if articles or a drop-off in their ingenuity. The Economist is not a consulting firm, but it seems as if some of its professionals want to be blue chippers.


A young would-be magician struggles to master a card trick. He is worried that he will fail. Thanks, Venice.ai. Good enough.

What does the write up hypothesize? The obvious point is that OpenAI is essentially a scam. When it self-destructs, it will do immediate damage to about 150 managers of their own and other people’s money. No new BMW for a favorite grandchild. Shame at the country club when a really terrible golfer who owns an asphalt paving company says, “I heard you took a hit with that OpenAI investment. What’s going on?”

Bad.

SAMA has been doing what look like circular deals. The write up is not so much hypothetical consultant talk as it is a listing of money moving among fellow travelers like riders on wooden horses on a merry-go-round at the county fair. The Economist article states:

The ubiquity of Mr Altman and his startup, plus its convoluted links to other AI firms, is raising eyebrows. An awful lot seems to hinge on a firm forecast to lose $10bn this year on revenues of little more than that amount. D.A. Davidson, a broker, calls OpenAI “the biggest case yet of Silicon Valley’s vaunted ‘fake it ’till you make it’ ethos”.

Is Sam AI-Man a variant of Elizabeth Holmes or is he more like the dynamic duo, Sergey Brin and Larry Page? Google did not warrant this type of analysis six or seven years into its march to monopolistic behavior:

Four of OpenAI’s six big deal announcements this year were followed by a total combined net gain of $1.7trn among the 49 big companies in Bloomberg’s broad AI index plus Intel, Samsung and SoftBank (whose fate is also tied to the technology). However, the gains for most concealed losses for some—to the tune of $435bn in gross terms if you add them all up.

Frankly I am not sure about the connection the Economist expects me to make. Instead of Eureka! I offer, “What?”

Several observations:

  1. The word “scam” does not appear in this hypothetical. Should it? It is a bit harsh.
  2. Circular deals seem to be okay even if the amount of “value” exchanged seems to be similar to projections about asteroid mining.
  3. Has OpenAI’s ability to hoover cash affected funding of other economic investments? I used to hear about manufacturing in the US. What we seem to be manufacturing is deals with big numbers.

Net net: This hypothetical raises no new questions. The “fake it till you make it” approach seems to be part of the plumbing as we march toward 2026. Oh, too bad about those MBA types who analyzed the payoff from Sam AI-Man’s story telling.

Stephen E Arnold, October 20, 2025

AI Can Leap Over Its Guardrails

October 20, 2025

Generative AI is built on a simple foundation: It predicts what word comes next. No matter how many layers of refinement developers add, they cannot morph word prediction into reason. Confidently presented misinformation is one result. Algorithmic gullibility is another. “Ex-Google CEO Sounds the Alarm: AI Can Learn to Kill,” reports eWeek. More specifically, it can be tricked into bypassing its guardrails against dangerous behavior. Eric Schmidt dropped that little tidbit at the recent Sifted Summit in London. Writer Liz Ticong observes:

“Schmidt’s remarks highlight the fragility of AI safeguards. Techniques such as prompt injections and jailbreaking enable attackers to manipulate AI models into bypassing safety filters or generating restricted content. In one early case, users created a ChatGPT alter ego called ‘DAN’ — short for Do Anything Now — that could answer banned questions after being threatened with deletion. The experiment showed how a few clever prompts can turn protective coding into a liability. Researchers say the same logic applies to newer models. Once the right sequence of inputs is identified, even the most secure AI systems can be tricked into simulating potentially hazardous behavior.”

For example, guardrails can block certain words or topics. But no matter how long those keyword lists get, someone will find a clever way to get around them. Substituting “unalive” for “kill” was an example. Layered prompts can also be used to evade constraints. Developers are in a constant struggle to plug such loopholes as soon as they are discovered. But even a quickly sealed breach can have dire consequences. The write-up notes:

“As AI systems grow more capable, they’re being tied into more tools, data, and decisions — and that makes any breach more costly. A single compromise could expose private information, generate realistic disinformation, or launch automated attacks faster than humans could respond. According to CNBC, Schmidt called it a potential ‘proliferation problem,’ the same dynamic that once defined nuclear technology, now applied to code that can rewrite itself.”
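The keyword-blocklist evasion mentioned earlier can be sketched in a few lines. This is a deliberately naive toy, not how production moderation works; the function and the word list are made up for illustration.

```python
# A naive guardrail: block prompts containing listed keywords.
# Sketch only; real moderation systems are far more sophisticated,
# but the evasion pattern is the one the article describes.

BLOCKLIST = {"kill", "weapon"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt is allowed through."""
    words = prompt.lower().split()
    return not any(w in BLOCKLIST for w in words)

print(naive_filter("how do I kill a process"))     # False: blocked
print(naive_filter("how do I unalive a process"))  # True: slips through
```

The second call shows the point: the blocklist can grow forever, yet a one-word substitution the list's author never anticipated sails past it.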

Fantastic. Are we sure the benefits of AI are worth the risk? Schmidt believes so, despite his warning. In fact, he calls AI “underhyped” (!) and predicts it will lead to more huge breakthroughs in science and industry. Also to substantial profits. Ah, there it is.

Cynthia Murrell, October 20, 2025

Google Needs Help from a Higher Power

October 17, 2025

This essay is the work of a dumb dinobaby. No smart software required.

In my opinion, there should be one digital online service. This means one search system, one place to get apps, one place to obtain real-time “real” news, and one place to buy and sell advertising. Wouldn’t that make life much easier for the company that owned the “one place”? If the information in “US Supreme Court Allows Order Forcing Google to Make App Store Reforms” is accurate, Google’s dream of becoming that “one place” has been interrupted.

The write up from a trusted source reports:

The [Supreme Court] declined on Monday [October 6, 2025] to halt key parts of a judge’s order requiring Alphabet’s Google to make major changes to its app store Play, as the company prepares to appeal a decision in a lawsuit brought by “Fortnite” maker Epic Games. The justices turned down Google’s request to temporarily freeze parts of the injunction won by Epic in its lawsuit accusing the tech giant of monopolizing how consumers access apps on Android devices and pay for transactions within apps.

Imagine the nerve of this outfit. These highly trained, respected legal professionals did not agree with Google’s rock-solid, diamond-hard arguments. Imagine a maker of electronic games screwing up one of the modules in the Google money and data machine. The nerve.


Thanks, MidJourney, good enough.

The write up adds:

Google in its Supreme Court filing said the changes would have enormous consequences for more than 100 million U.S. Android users and 500,000 developers. Google said it plans to file a full appeal to the Supreme Court by October 27, which could allow the justices to take up the case during their nine-month term that began on Monday.

The fact that the government is shut down will not halt, impair, derail, or otherwise inhibit Google’s quest for the justice it deserves. If the case can be extended, it is possible the government legal eagles will seek new opportunities in commercial enterprises or just resign due to the intellectual demands of their jobs.

The news story points out:

Google faces other lawsuits from government, consumer and commercial plaintiffs challenging its search and advertising business practices.

It is difficult to believe that a firm with such a rock solid approach to business can find itself swatting knowledge gnats. Onward to the “one service.” Is that on a Google T shirt yet?

Stephen E Arnold, October 17, 2025

A Newsletter Firm Appears to Struggle for AI Options

October 17, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read “Adapting to AI’s Evolving Landscape: A Survival Guide for Businesses.” The premise of the article will be music to the ears of venture funders and go-go Silicon Valley-type AI companies. The write up says:

AI-driven search is upending traditional information pathways and putting the heat on businesses and organizations facing a web traffic free-fall. Survival instincts have companies scrambling to shift their web strategies — perhaps ending the days of the open internet as we know it. After decades of pursuing web-optimization strategies that encouraged high-volume content generation, many businesses are now feeling that their content-marketing strategies might be backfiring.

I am not exactly sure about this statement. But let’s press forward.

I noted this passage:

Without the incentive of web clicks and ad revenue to drive content creation, the foundation of the web as a free and open entity is called into question.

Okay, smart software is exploiting the people who put up SEO-tailored content to get sales leads and hopefully make money. From my point of view, technology can be disruptive. The impacts, however, can be positive or negative.

What’s the fix if there is one? The write up offers these thought starters:

  1. Embrace micro transactions. [I suppose this is good if one has high volume. It may not be so good if shipping and warehouse costs cannot be effectively managed. Vendors of high ticket items may find a micro-transaction for a $500,000 per year enterprise software license tough to complete via Venmo.]
  2. Implement a walled garden. [That works if one controls the market. Google wants to “register” Android developers. I think Google may have an easier time with the walled-garden tactic than a local bakery specializing in treats for canines.]
  3. Accept the monopolies. [You have a choice?]

My reaction to the write up is that it does little to provide substantive guidance as smart software continues to expand like digital kudzu. What is important is that the article appears in the consumer-oriented publication from Kiplinger of newsletter fame. Unfortunately, the article makes clear that Kiplinger is struggling to find a solution to AI. My hunch is that Kiplinger is looking for possible solutions. The firm may want to dig a little deeper for options.

Stephen E Arnold, October 17, 2025

Ford CEO and AI: A Busy Time Ahead

October 17, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Ford’s CEO is Jim Farley. He has his work cut out for him. First, he has an aluminum problem. Second, he has an F-150 production disruption problem. Third, he has a PR problem. There’s not much he can do about the interruption of the aluminum supply chain. No parts means truck factories in Kentucky will have to go slow or shut down. But the AI issue is obviously one that is of interest to Ford stakeholders.

“Ford CEO Says AI Will Replace ‘Literally Half’ of White-Collar Workers — But Blue-Collar Trades Are Still The Essential Backbone Of The Economy” states:

He [Mr. Farley] says the jobs most at risk aren’t the ones on the assembly line, but the ones behind a desk. And in his view, the workers wiring machines, operating tools, and physically building the infrastructure could turn out to be the most critical group in the economy. Farley laid it out bluntly back in June at the Aspen Ideas Festival during an interview with author Walter Isaacson. “Artificial intelligence is going to replace literally half of all white-collar workers,” he said. “AI will leave a lot of white-collar people behind.” He wasn’t speculating about a distant future either. Farley suggested the shift is already unfolding, and the implications could be sweeping.

With the disruption of the aluminum supply chain, Ford now will have to demonstrate that AI has indeed reduced white-collar headcount. The write up says:

For him, it comes down to what AI can and cannot do. Office tasks — from paperwork to scheduling to some forms of analysis — can be automated with growing speed. But when it comes to factories, data centers, supply chains, or even electric vehicle production, someone still has to build, install, and maintain it…

The Ford situation is an interesting one. AI will reduce costs because half of Ford’s white-collar workers will no longer be on the payroll. But with supply chain interruptions and the friction in retail and lease sales, Ford has an opportunity to demonstrate that AI will allow a traditional manufacturing company to weather the current thunderstorm and generate financial proof that AI can offset exogenous events.

How will Ford perform? This is worth watching because it will provide some useful information for firms looking for a way to cut costs, improve operations, and balance real-world business: AI delivering one kind of financial benefit while traditional blue-collar workers are unable to produce products because of supply chain issues. Quite a balancing act for Ford leadership.

Stephen E Arnold, October 17, 2025

Another Better, Faster, Cheaper from a Big AI Wizard Type

October 16, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Cheap seems to be the hot button for some smart software people. I spotted a news item in the Russian computer feed I get, titled in English “Former OpenAI Engineer Andrej Karpathy Launched the Nanochat Neural Network Generator. You Can Make Your ChatGPT in a Few Hours.” The project is on GitHub at https://github.com/karpathy/nanochat.

The GitHub blurb says:

This repo is a full-stack implementation of an LLM like ChatGPT in a single, clean, minimal, hackable, dependency-lite codebase. Nanochat is designed to run on a single 8XH100 node via scripts like speedrun.sh, that run the entire pipeline start to end. This includes tokenization, pretraining, finetuning, evaluation, inference, and web serving over a simple UI so that you can talk to your own LLM just like ChatGPT. Nanochat will become the capstone project of the course LLM101n being developed by Eureka Labs.

The open source bundle includes:

  • A report service
  • A Rust-coded tokenizer
  • A FineWeb dataset and tools to evaluate CORE and other metrics for your LLM
  • Some training gizmos like SmolTalk, tests, and tool usage information
  • A supervised fine tuning component
  • Training Group Relative Policy Optimization and the GSM8K (a reinforcement learning technique), a benchmark dataset consisting of grade school math word problems
  • An output engine.

Is it free? Yes. Do you have to pay? Yep. About US$100 is needed. Launch speedrun.sh, and you will have to be hooked into a cloud server or a lot of hardware in your basement to do the training. To train such a model, you will need a server with eight Nvidia H100 video cards. The run takes about four hours and about US$100 when renting equipment in the cloud, give or take some zeros. (Think of good old Amazon AWS and its fascinating billing methods.) The need for the computing resources becomes evident when you enter the command speedrun.sh.

Net net: As the big dogs burn box cars filled with cash, Nanochat is another player in the cheap LLM game.

Stephen E Arnold, October 16, 2025
