So You Want to Be an AI Millionaire?
January 27, 2025
A blog post from an authentic dinobaby. He’s old; he’s in the sticks; and he is deeply skeptical.
The US Attorney for the Northern District of California issued a remarkable public statement about an AI investor scheme that did not work. The write up states:
A 25-count indictment was unsealed today charging Alexander Charles Beckman, the founder and former CEO of GameOn, Inc., also known as GameOn Technology or ON Platform (“GameOn”), and Valerie Lau Beckman (“Lau”), an attorney who worked on GameOn matters and is married to Beckman, with conspiracy, wire fraud, securities fraud, identity theft, and other offenses. Lau was also charged with obstruction of justice. According to the indictment filed on Jan. 21, 2025, Beckman, 41, and Lau, 38, both of San Francisco, allegedly conspired to defraud GameOn investors, GameOn, and a bank.
I want to point out that this type of fraud is a glimpse of the interesting world of Silicon Valley FOMO, or the fear of missing out. Writing checks based on a PowerPoint deck is a variation of playing roulette, just with real money and no clock-free casino required.
However, in the official statement, there was some fascinating information about the specific method used by the individuals involved in the scam. The public document says:
As alleged in the indictment, Beckman’s statements to GameOn investors often described non-existent revenue, inflated cash balances, and fake and otherwise exaggerated customer relationships. To further the scheme, Beckman allegedly used the names of at least seven real people—including fake emails and signatures—without their permission to distribute false and fraudulent GameOn financial and business information and documents with the intent to defraud GameOn and its investors. Among the individuals whose names Beckman used to commit the fraud scheme was a GameOn CFO, two bank employees, and an employee of a major professional sports league. Beckman also fabricated two GameOn audit reports using the names, signatures, and trademarks of reputable accounting firms, including one of the Big Four accounting firms, to validate false financial statements, and distributed over a dozen fake bank statements for GameOn’s accounts as part of the scheme.
Building a financial crime is hard, detailed work. Here’s the twist described in the US Attorney’s news release:
After changing law firms multiple times, Lau joined a venture capital firm in September 2021. Lau is alleged to have provided Beckman with genuine audit reports that she obtained from her own employer that Beckman then used to create fake audit reports for GameOn. The indictment alleges that Lau personally emailed one of these fake audit reports to a GameOn investor’s representative, knowing it to be fake, to induce further investment into the company. In June 2024, Lau furthered the scheme to defraud by delivering a fake GameOn account statement—one that she knew falsely listed GameOn’s balance at a certain financial institution as over $13 million when the company’s true balance was just $25.93—to a bank branch in San Francisco and asking a bank employee to keep the fake statement in an envelope at the bank for Beckman to pick up later that day. Lau knew that Beckman planned to pick up the fake statement with a GameOn director who represented a major investor on GameOn’s board. Beckman picked up the fake statement with the GameOn director that day.
Several observations:
- Bad actors in this case did a great deal of work. Imagine the benefit of applying those talents to a non-fraudulent activity.
- The FOMO lure generates a pool of suckers for get rich quick schemes.
- The difference between a “real” AI play and one that is little more than a vehicle for big bucks resides on a fine line subject to Heisenberg’s uncertainty principle. Some crazy AI schemes get lucky and become “real” businesses. Everyone is surprised.
The clever work may be rewarded with new career opportunities for those involved.
Stephen E Arnold, January 27, 2025
How to Make Software Smart Like Humans
January 27, 2025
Artificial intelligence algorithms are still only as smart as they’re programmed to be. In other words, they’re still software, sometimes stupid pieces of software. Most AI systems are built on large language models (LLMs) and datasets that lack the human magic to make them “think” like a smart 14-year-old. That could change, says Science Daily, reporting on research from Linköping University: “Machine Psychology: A Bridge To General AI?”
Robert Johansson of Linköping University asserts in his dissertation that psychological learning models combined with AI could be the key to making machines smart like humans. Johansson developed the concept of Machine Psychology and explains that, unlike many people, he is not afraid of an AI future. Artificial General Intelligence (AGI) has many positives and negatives. The technology must be carefully created, but AGI could counter many destructive societal developments.
Johansson suggests that AI developers should follow a principle-led path. Through his research he has identified important psychological learning principles that could explain intelligence, and they could be implemented in machines. He uses a logic system called the Non-Axiomatic Reasoning System (NARS), which is purposely designed to operate with incomplete data and limited computational resources, in real time. This gives it the flexibility to handle problems as they arise in reality.
NARS works on limited information like a human:
“The combination of NARS and learning psychology principles constitutes an interdisciplinary approach that Robert Johansson calls Machine Psychology, a concept he was the first to coin but more actors have now started to use, including Google DeepMind. The idea is that artificial intelligence should learn from different experiences during its lifetime and then apply what it has learned to many different situations, just as humans begin to do as early as the age of 18 months — something no other animal can do.”
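For the curious, the flavor of NARS reasoning can be sketched in a few lines. This is a minimal Python illustration, assuming the (frequency, confidence) truth-value representation and the deduction and revision formulas from Pei Wang’s published Non-Axiomatic Logic; it is a sketch for intuition, not the OpenNARS implementation.

```python
# Toy illustration of NARS-style truth values, following the
# (frequency, confidence) representation and the deduction and
# revision formulas in Pei Wang's published Non-Axiomatic Logic.
from dataclasses import dataclass

K = 1.0  # evidential horizon parameter from Wang's definitions

@dataclass
class Truth:
    f: float  # frequency: proportion of positive evidence, in [0, 1]
    c: float  # confidence: how stable that frequency is, in [0, 1)

def deduction(t1: Truth, t2: Truth) -> Truth:
    """Chain two beliefs (e.g., 'robins are birds' + 'birds fly').
    Confidence shrinks at every step, so conclusions built on partial
    evidence are never treated as certain."""
    f = t1.f * t2.f
    return Truth(f, f * t1.c * t2.c)

def revision(t1: Truth, t2: Truth) -> Truth:
    """Merge two independent pieces of evidence about one statement.
    More evidence raises confidence: the 'learning from lifetime
    experience' behavior the dissertation emphasizes."""
    w1 = K * t1.c / (1 - t1.c)  # convert confidence back to evidence weight
    w2 = K * t2.c / (1 - t2.c)
    w = w1 + w2
    return Truth((w1 * t1.f + w2 * t2.f) / w, w / (w + K))

birds_fly = Truth(0.9, 0.9)
robins_are_birds = Truth(1.0, 0.9)
print(deduction(robins_are_birds, birds_fly))  # lower confidence than either premise
print(revision(birds_fly, Truth(0.8, 0.5)))    # higher confidence than either input
```

Note how the arithmetic bakes in the “incomplete data” design: a chained conclusion is always less certain than its premises, while accumulating evidence is the only way confidence grows.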
Johansson says it is possible machines could be as smart as humans within five years. It is a bold timeline, but do computers have the correct infrastructure to handle that type of intelligence? Do humans have the smarts to handle smarter software?
Whitney Grace, January 27, 2025
How to Garner Attention from X.com: The Guardian Method Seems Infallible
January 24, 2025
Prepared by a still-alive dinobaby.
The Guardian has revealed its secret to getting social media attention from Twitter (now the X). “‘Just the Start’: X’s New AI Software Driving Online Racist Abuse, Experts Warn” makes the process dead simple. Here are the steps:
- Publish a diatribe about the power of social media in general with specific references to the Twitter machine
- Use name calling to add some clickable phrases; for example, “online racism”, “fake images”, and “naked hate”
- Use loaded words to describe images; for example, an athlete “who is black, picking cotton while another shows that same player eating a banana surrounded by monkeys in a forest.”
Bingo. Instantly clickable.
The write up explains:
Callum Hood, the head of research at the Center for Countering Digital Hate (CCDH), said X had become a platform that incentivised and rewarded spreading hate through revenue sharing, and AI imagery made that even easier. “The thing that X has done, to a degree that no other mainstream platform has done, is to offer cash incentives to accounts to do this, so accounts on X are very deliberately posting the most naked hate and disinformation possible.”
This is a recipe for attention and clicks. Will the Guardian be able to convert the magnetism of the method into cash money?
Stephen E Arnold, January 24, 2025
Amazon: Twitch Is Looking a Bit Lame
January 24, 2025
Are those 30-second ads driving away viewers? Are the bans working to alienate creators and their fans? Is Amazon going to innovate in streaming?
These are questions Amazon needs to answer in a way that is novel and actually works.
Twitch is an online streaming platform primarily used by gamers to stream their play sessions and interact with their fanbases. There hasn’t been much news about Twitch in recent months, and that could be due to declining viewership. Tube Filter dives into the details with “Is Twitch Viewership At Its Lowest Point In Four Years?”
The article explains that Twitch logged a total of 1.58 billion watch-time hours in December 2024. This was its lowest month in four years, according to Streams Charts. Twitch, however, did have a small increase in new streamers joining the platform and in the number of channels live at one time. Streams Charts did mention that December is a slow month due to the holiday season. Twitch is dealing with dire financial straits and made users upset when it used AI to make emotes.
Here are some numbers:
“In both October and November 2024, around 89,000 channels on average would be live on Twitch at any one time. In December, that figure pushed up to 92,392. Twitch also saw a bump in the overall number of active channels from 4,490,725 in November to 4,777,395 in December—a 6% increase. [I]t’s important to note that other key metrics for both viewer and streamer activity remain strong,” it wrote in a report about December’s viewership. “A positive takeaway from December was the variety of content on offer. Streamers broadcasted in 43,200 different categories, the highest figure of the year, second only to March.”
Streams Charts notes that all these streamers broadcasted a more diverse range of content than usual.
Twitch is also courting TikTok creators in case the US federal government bans the short video streaming platform. Twitch has offerings that streamers want, but it needs to do more to attract viewers.
Whitney Grace, January 24, 2025
And the Video Game Struggler for 2024 Is… Video Games
January 24, 2025
Yep, 2024 was the worst year for videogames since 1983.
Videogames are still a young medium, but they’re over fifty years old. The gaming industry has seen ups and downs with the first (and still legendary) being the 1983 crash. Arcade games were all the rage back then, but these days consoles and computers have the action. At least, they should.
Wired writes that “2024 Was The Year The Bottom Fell Out Of The Games Industry” for multiple reasons. There were massive layoffs in 2023, with over 10,000 game developers losing their jobs. Some of this was attributed to AI slowly replacing developers. The gaming industry’s job loss in 2024 was forty percent higher than the prior year. Yikes!
DEI (diversity, equity, and inclusion) combined with woke mantras was also blamed for the failure of many games, including Suicide Squad: Kill the Justice League. The phrase “go woke, go broke” echoed throughout the industry as it has in Hollywood, Silicon Valley, and other fields. I noted:
“According to Matthew Ball, an adviser and producer in the games and TV space…the blame for all of this can’t be pinned to a single thing, like capitalism, mismanagement, Covid-19, or even interest rates. It also involves development costs, how studios are staffed, consumers’ spending habits, and game pricing. ‘This storm is so brutal,’ he says, ‘because it is all of these things at once, and none have really alleviated since the layoffs began.’”
Many indie studios were shuttered, and large tech leaders such as Microsoft and Sony shut down parts of their gaming divisions. A chain of events influenced by the hatred of DEI and its associated mindsets is being called a second GamerGate.
The gaming industry will continue through the beginning of 2025 with business as usual. The industry will bounce back, but it will be different from the past.
Whitney Grace, January 24, 2025
AI Will Doom You to Poverty Unless You Do AI to Make Money
January 23, 2025
Prepared by a still-alive dinobaby.
I enjoy reading snippets of the AI doomsayers. Some spent too much time worrying about the power of Joe Stalin’s approach to governing. Others just watched the Terminator series instead of playing touch football. A few “invented” AI by cobbling together incremental improvements in statistical procedures lashed to ever-more-capable computing infrastructures. A couple of these folks know that Nostradamus became a brand and want to emulate that predictive master.
I read “Godfather of AI Explains How Scary AI Will Increase the Wealth Gap and Make Society Worse.” That is a snappy title. Whoever wrote it crafted the idea of an explainer tuned to fear. Plus, the click bait explains that homelessness is for you too. Finally, it presents a trope popular among the elder care set. (Remember, please, that I am a dinobaby myself.) Get a group of senior citizens to a dinner and you will hear, “Everything is broken.” Also, “I am glad I am old.” Then there is the ever popular, “Those tattoos! The check-out clerks cannot make change! I don’t understand commercials!” I like to ask, “How many wars are going on now? Quick.”
Two robots plan a day trip to see the street people in Key West. Thanks, You.com. I asked for a cartoon; I got a photorealistic image. I asked for a coffee shop; I got a weird carnival setting. Good enough. (That’s why I am not too worried.)
Is society worse than it ever was? Probably not. I have had an opportunity to visit a number of countries, go to college, work with intelligent (for the most part) people, and read books whilst sitting on the executive mailing tube. Human behavior has been consistent for a long time. Indigenous people did not go to Wegman’s or Whole Paycheck. Some herded animals toward a cliff. Others harvested the food and raw materials from the dead bison at the bottom of the cliff. There were no unskilled change makers at this food delivery location.
The write up says:
One of the major voices expressing these concerns is the ‘Godfather of AI’ himself Geoffrey Hinton, who is viewed as a leading figure in the deep learning community and has played a major role in the development of artificial neural networks. Hinton previously worked for Google on their deep learning AI research team ‘Google Brain’ before resigning in 2023 over what he expresses as the ‘risks’ of artificial intelligence technology.
My hunch is that, like me, Geoffrey Hinton “worked at” Google for a good reason: money. Having departed from the land of volleyball and weird empty office buildings, he is now in the doom business. His vision is that there will be more poverty. There’s some poverty in Soweto and the other townships in South Africa. The slums of Rio are no Palm Springs. Rural China is interesting as well. Doesn’t everyone want to run a business from the area in front of a wooden structure adjacent to an empty highway to nowhere? Sounds like there is some poverty around, doesn’t it?
The write up reports:
“We’re talking about having a huge increase in productivity. So there’s going to be more goods and services for everybody, so everybody ought to be better off, but actually it’s going to be the other way around. “It’s because we live in a capitalist society, and so what’s going to happen is this huge increase in productivity is going to make much more money for the big companies and the rich, and it’s going to increase the gap between the rich and the people who lose their jobs.”
The fix is to get rid of capitalism. The alternative? Kumbaya or a better version of those fun dudes Marx, Lenin, and Mao. I stayed in the “last” fancy hotel the USSR built in Tallinn, Estonia. News flash: The hotels near LaGuardia are quite a bit more luxurious.
The godfather then evokes the robot that wanted to kill a rebel. You remember this character. He said, “I’ll be back.” Of course, you will. Hollywood does not do originals.
The write up says:
Hinton’s worries don’t just stop at the wealth imbalance caused by AI too, as he details his worries about where AI will stop following investment from big companies in an interview with CBC News: “There’s all the normal things that everybody knows about, but there’s another threat that’s rather different from those, which is if we produce things that are more intelligent than us, how do we know we can keep control?” This is a conundrum that has circulated the development of robots and AI for years and years, but it’s seeming to be an increasingly relevant proposition that we might have to tackle sooner rather than later.
Yep, doom. The fix is to become an AI wizard, work at a Google-type outfit, cash out, and predict doom. It is a solid career plan. Trust me.
Stephen E Arnold, January 23, 2025
Teenie Boppers and Smart Software: Yep, Just Have Money
January 23, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I scanned the research summary “About a Quarter of U.S. Teens Have Used ChatGPT for Schoolwork – Double the Share in 2023.” Like other Pew data, the summary contained numerous numbers. I was not sufficiently motivated to dig into the methodology to find out how the sample was assembled nor how Pew prompted the mobile-addicted youth to provide presumably truthful answers to direct questions. But why nitpick? We are at the onset of an interesting year which will include forthcoming announcements about how algorithms are agentic and able to fuel massive revenue streams for those in the know.
Students doing their homework while their parents play polo. Thanks, MSFT Copilot. Good enough. I do like the croquet mallets and volleyball too. But children from well-to-do families have such items in abundance.
Let’s go to the videotape, as the late and colorful Warner Wolf once said to his legion of Washington, DC, fans.
One of the highlights of the summary was this finding:
Teens who are most familiar with ChatGPT are more likely to use it for their schoolwork. Some 56% of teens who say they’ve heard a lot about it report using it for schoolwork. This share drops to 18% among those who’ve only heard a little about it.
Not surprisingly, the future leaders of America embrace shortcuts. The question is, “How quickly will awareness reach 99 percent and usage nose above 75 percent?” My guesstimate is pretty quickly. Convenience and more time to play with mobile phones will drive the adoption. Who in America does not like convenience?
Another finding catching my eye was:
Teens from households with higher annual incomes are most likely to say they’ve heard about ChatGPT. For instance, 84% of teens in households with incomes of $75,000 or more say they’ve heard at least a little about ChatGPT.
I found this interesting because it appears to suggest that if a student comes from a home where money does not seem to be a huge problem, the industrious teens are definitely aware of smart software. And when it comes to using the digital handmaiden, Pew apparently finds nothing. There is no data point relating richer progeny to greater use. Instead we learned:
Teens who are most familiar with the chatbot are also more likely to say using it for schoolwork is OK. For instance, 79% of those who have heard a lot about ChatGPT say it’s acceptable to use for researching new topics. This compares with 61% of those who have heard only a little about it.
My thought is that more wealthy families are more likely to have teens who know about smart software. I would hypothesize that wealthy parents will pay for the more sophisticated smart software and smile benignly as the future intelligentsia stride confidently to ever brighter futures. Those without the money will get the opportunity to watch their classmates have more time for mobile phone scrolling, unboxing Amazon deliveries, and grabbing burgers at Five Guys.
I am not sure that the link between wealth and access to learning experiences is a random, one-off occurrence. If I am correct, the Pew data suggest that smart software is not reinforcing democracy. It seems to be making a digital Middle Ages more and more probable. But why think about what a dinobaby hypothesizes? It is tough to scroll zippy mobile phones with old paws and yellowing claws.
Stephen E Arnold, January 23, 2025
Yo, MSFT-Types, Listen Up
January 23, 2025
Developers concerned about security should check out “Seven Types of Security Issues in Software Design” at InsBug. The article does leave out a few points we would have included. Using Microsoft software, for example, or paying for cyber security solutions that don’t work as licensees believe. And don’t forget engineering for security rather than expediency and cost savings. Nevertheless, the post makes some good points. It begins:
“Software is gradually defining everything, and its forms are becoming increasingly diverse. Software is no longer limited to the applications or apps we see on computers or smartphones. It is now an integral part of hardware devices and many unseen areas, such as cars, televisions, airplanes, warehouses, cash registers, and more. Besides sensors and other electronic components, the actions and data of hardware often rely on software, whether in small amounts of code or in hidden or visible forms. Regardless of the type of software, the development process inevitably encounters bugs that need to be identified and fixed. While major bugs are often detected and resolved before release or deployment by developers or testers, security vulnerabilities don’t always receive the same attention.”
Sad but true. The seven categories include: Misunderstanding of Security Protection Technologies; Component Integration and Hidden Security Designs; Ignoring Security in System Design; Security Risks from Poor Exception Handling; Discontinuous or Inconsistent Trust Relationships; Over-Reliance on Single-Point Security Measures; and Insufficient Assessment of Scenarios or Environments. See the write-up for details on each point. We note a common thread—a lack of foresight. The post concludes:
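To make one of these categories concrete, here is a minimal Python sketch of the “Security Risks from Poor Exception Handling” item. The function names and the db.check_credentials call are hypothetical stand-ins, not code from the InsBug post.

```python
# Minimal sketch of the "poor exception handling" category.
# db.check_credentials is a hypothetical stand-in for a real backend.
import logging

logger = logging.getLogger(__name__)

def login_leaky(username: str, password: str, db) -> bool:
    """Anti-pattern: the raw exception text (SQL fragments, file paths,
    stack details) goes back to the caller, handing an attacker a map
    of the system's internals."""
    try:
        return db.check_credentials(username, password)
    except Exception as exc:
        raise RuntimeError(f"Login failed: {exc}")  # leaks internals

def login_safe(username: str, password: str, db) -> bool:
    """Better: log the detail server-side, return a generic failure,
    and fail closed (deny access) instead of open."""
    try:
        return db.check_credentials(username, password)
    except Exception:
        logger.exception("credential check failed")  # detail stays internal
        return False
```

The design point is the one the post keeps circling: the secure version required thinking about the failure path up front, not after the incident report.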
“To minimize security risks and vulnerabilities in software design and development, one must possess solid technical expertise and a robust background in security offense and defense. Developing secure software is akin to crafting fine art — it requires meticulous thought, constant consideration of potential threats, and thoughtful design solutions. This makes upfront security design critically important.”
Security should not be an afterthought. What a refreshing perspective.
Cynthia Murrell, January 23, 2025
AI: Yes, Intellectual Work Will Succumb, Just Sooner Rather Than Later
January 22, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Has AI innovation stalled? Nope. “It’s Getting Harder to Measure Just How Good AI Is Getting” explains:
OpenAI’s end-of-year series of releases included their latest large language model (LLM), o3. o3 does not exactly put the lie to claims that the scaling laws that used to define AI progress don’t work quite that well anymore going forward, but it definitively puts the lie to the claim that AI progress is hitting a wall.
Okay, that proves that AI is hitting the gym and getting pumped.
However, the write up veers into an unexpected calcified space:
The problem is that AIs have been improving so fast that they keep making benchmarks worthless. Once an AI performs well enough on a benchmark we say the benchmark is “saturated,” meaning it’s no longer usefully distinguishing how capable the AIs are, because all of them get near-perfect scores.
What is wrong with the lack of benchmarks? Nothing. Smart software is probabilistic. How accurate is the weather forecast? Ask a wonk at the National Weather Service and you get quite optimistic answers. Ask a child whose birthday party at the park was rained out on a day Willie the Weatherman said it would be sunny, and you get a different answer.
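Saturation itself is easy to see with a back-of-the-envelope simulation. In this minimal Python sketch (the accuracies, benchmark size, and trial count are invented for illustration), a genuinely better model ties or loses a noticeable share of head-to-head comparisons once both models sit near the ceiling:

```python
# Toy simulation: once two models both score near a benchmark's ceiling,
# measured scores overlap and the ranking becomes noise.
import random

def measured_score(true_accuracy: float, n_items: int = 200) -> float:
    """One run of an n-item benchmark for a model that answers each
    item correctly with probability true_accuracy."""
    return sum(random.random() < true_accuracy for _ in range(n_items)) / n_items

random.seed(0)
trials = 1_000
# Model B (98%) is genuinely better than model A (97%), but both are
# near the ceiling, so A ties or beats B in a large share of runs.
wrong_order = sum(measured_score(0.97) >= measured_score(0.98) for _ in range(trials))
print(f"weaker model ties or wins {wrong_order / trials:.0%} of runs")
```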
Okay, forget measurements. Here’s what the write up says will happen, and the prediction sounds really rock solid just like Willie the Weatherman:
The way AI is going to truly change our world is by automating an enormous amount of intellectual work that was once done by humans…. Like it or not (and I don’t really like it, myself; I don’t think that this world-changing transition is being handled responsibly at all) none of the three are hitting a wall, and any one of the three would be sufficient to lastingly change the world we live in.
Follow the argument? I must admit that jumping from “AI is getting good,” to “we cannot measure good,” to “humans will be replaced because AI can do intellectual work” is quite a journey. Perhaps I am missing something, but:
- Just because people outside of research labs have smart software that seems to work like a smart person, what about those hallucinatory outputs? Yep, today’s models make stuff up because probability dictates the output.
- Where in the write up are the use cases for smart software doing “intellectual work”? They aren’t there, because Vox doesn’t have any that are comfortable for journalists and writers, who can be replaced by the SEO AIs advertised on Telegram search engine optimization channels or by marketers writing for Forbes Magazine. That’s right. Excellent use cases are smart software killing jobs once held by fresh MBAs or newly minted CFAs. Why? Cheaper, and as long as the models are “good enough” to turn a profit, let ‘em rip. Yahoooo.
- Smart software is created by humans, and humans shape what it does, how it is deployed, and care not a whit about the knock-on effects. Technology operates in the hands of humans. Humans are deeply flawed entities. Mother Theresas are outnumbered by street gangs in Reno, Nevada, based on my personal observations of that fine city.
Net net: Vox, which can and will be replaced by a cheaper and good enough alternative, doesn’t want to raise that issue. Instead, Vox wanders around the real subject. That subject is that, as those who drive AI figure out how to use what’s available and good enough, certain types of work will be pushed into the black boxes of smart software. Could smart software have written this essay? Yes. Could it have done a better job? Publications like the supremely weird Buzzfeed and some consultants I know sure like “good enough.” As long as it is cheap, AI is a winner.
Stephen E Arnold, January 22, 2025
Microsoft and Its Me-Too Interface for Bing Search
January 22, 2025
Bing will never be Google, but Microsoft wants its search engine to dominate queries. Microsoft Bing handles a small percentage of Internet searches, and in a bid to gain more traction it has copied Google’s user interface (UI). Windows Latest spills the tea on the UI copying: “Microsoft Bing Is Trying To Spoof Google UI When People Search Google.com.”
Google’s UI is very distinctive with its minimalist approach. The only items on the Google UI are the query box and menus along the top and bottom of the page. Microsoft Edge is Microsoft’s own Web browser, and it is programmed to use Bing by default. In a sneaky (and genius) move, when Edge users type Google into the Bing search box, they are taken to a UI that is strangely Google-esque. Microsoft is trying this new UI to lower the Bing bounce rate, the share of users who leave.
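Neither Windows Latest nor Microsoft has published the logic, so the following Python sketch is pure speculation about the general pattern: check the query string for a navigational competitor query and swap in a lookalike page template. Every name in it is invented for illustration.

```python
# Speculative sketch of the general pattern: intercept navigational
# queries for a competitor and render a lookalike results page.
# The template names and query list are invented for illustration.
LOOKALIKE_QUERIES = {"google", "google.com", "www.google.com"}

def pick_template(query: str) -> str:
    """Return the page template for a search query."""
    if query.strip().lower() in LOOKALIKE_QUERIES:
        # Sparse page, centered search box above the results: Google-esque.
        return "minimalist_lookalike.html"
    return "standard_results.html"

print(pick_template("Google"))      # minimalist_lookalike.html
print(pick_template("cat videos"))  # standard_results.html
```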
Is it an effective tactic?
“But you might wonder how effective this idea would be. Well, if you’re a tech-savvy person, you’ll probably realize what’s going on, then scroll and open Google from the link. However, this move could keep people on Bing if they just want to use a search engine. Google is the number one search engine, and there’s a large number of users who are just looking for a search engine, but they think the search engine is Google. In their mind, the two are the same. That’s because Google has become a synonym for search engines, just like Chrome is for browsers. A lot of users don’t really care what search engine they’re using, so Microsoft’s new practice, which might appear stupid to some of you, is likely very effective.”
For unobservant users and/or those who don’t care, it will work. Microsoft is also tugging on heartstrings with another tactic:
“On top of it, there’s also an interesting message underneath the Google-like search box that says “every search brings you closer to a free donation. Choose from over 2 million nonprofits.” This might also convince some people to keep using Bing.”
What a generous and genius interface innovation. We’re not sure this is the interface everyone sees, but we love the me-too approach from user-centric big tech outfits.
Whitney Grace, January 22, 2025