Point-and-Click Coding: An eGame Boom Booster
November 22, 2024
TheNextWeb explains “How AI Can Help You Make a Computer Game Without Knowing Anything About Coding.” That’s great—unless one is a coder who makes one’s living on computer games. Writer Daniel Zhou Hao begins with a story about one promising young fellow:
“Take Kyo, an eight-year-old boy in Singapore who developed a simple platform game in just two hours, attracting over 500,000 players. Using nothing but simple instructions in English, Kyo brought his vision to life leveraging the coding app Cursor and also Claude, a general purpose AI. Although his dad is a coder, Kyo didn’t get any help from him to design the game and has no formal coding education himself. He went on to build another game, an animation app, a drawing app and a chatbot, taking about two hours for each. This shows how AI is dramatically lowering the barrier to software development, bridging the gap between creativity and technical skill. Among the range of apps and platforms dedicated to this purpose, others include Google’s AlphaCode 2 and Replit’s Ghostwriter.”
The write-up does not completely leave experienced coders out of the discussion. Hao notes tools like Tabnine and GitHub Copilot act as auto-complete assistants, while Sourcery and DeepCode take the tedium out of code cleanup. For the 70-ish percent of companies that have adopted one or more of these tools, he tells us, the benefits include time savings and more reliable code. Does this mean developers will shift to “higher value tasks,” like creative collaboration and system design, as Hao insists? Or will it just mean firms will lighten their payrolls?
As for building one’s own game, the article lists seven steps. They are akin to basic advice for developing a product, but with an AI-specific twist. For those who want to know how to make one’s AI game addictive, contact benkent2020 at yahoo dot com.
Cynthia Murrell, November 22, 2024
Marketers, Deep Fakes Work
November 14, 2024
Bad actors use AI for pranks all the time, but this could be the first time AI pranked an entire Irish town of its own volition. KXAN reports on the folly: “AI Slop Site Sends Thousands In Ireland To Fake Halloween Parade.” The website MySpiritHalloween.com dubs itself the ultimate resource for all things Halloween. The site relies on AI-generated content, and one of its articles told the entire city of Dublin there would be a parade.
If this were a small Irish village, there would be giggles, and the police would investigate criminal mischief before deciding to stop wasting their time. Dublin, however, is one of the country’s biggest cities, and folks turned out in the thousands to see the Halloween parade. They eventually figured out something was wrong given the absence of barriers, law enforcement, and (most importantly) costumed people on floats!
MySpiritHalloween is owned by Naszir Ali, who was embarrassed by the situation.
“Per Ali’s explanation, his SEO agency creates websites and ranks them on Google. He says the company hired content writers who were in charge of adding and removing events all across the globe as they learned whether or not they were happening. He said the Dublin event went unreported as fake and that the website quickly corrected the listing to show it had been cancelled….

Ali said that his website was built and helped along by the use of AI but that the technology only accounts for 10-20% of the website’s content. He added that, according to him, AI content won’t completely help a website get ranked on Google’s first page and that the reason so many people saw MySpiritHalloween.com was because it was ranked on Google’s first page — due to what he calls “80% involvement” from actual humans.”
Ali claims his website is based in Illinois, but investigations found that it is hosted in Pakistan. Ali’s website is one among millions that use AI-generated content to manipulate Google’s algorithm. Ali is correct that real humans made the parade rise to the top of Google’s search results, but he was responsible for the content.
Media around the globe took this as an opportunity to teach the parade-goers and others about being aware of AI-generated scams. Marketers, launch your AI-infused fakery.
Whitney Grace, November 14, 2024
The Yogi Berra Principle: Déjà Vu All Over Again
November 7, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
I noted two write ups. The articles share what I call a meta concept. The first article is from PC World called “Office Apps Crash on Windows 11 24H2 PCs with CrowdStrike Antivirus.” The actors in this soap opera are the confident Microsoft and the elegant CrowdStrike. The write up says:
The annoying error affects Office applications such as Word or Excel, which crash and become completely unusable after updating to Windows 11 24H2. And this apparently only happens on systems that are running antivirus software by CrowdStrike. (Yes, the very same CrowdStrike that caused the global IT meltdown back in July.)
A patient, skilled teacher explains to the smart software, “You goofed, speedy.” Thanks, Midjourney. Good enough.
The other write up adds some color to the trivial issue. “Windows 11 24H2 Misery Continues, As Microsoft’s Buggy Update Is Now Breaking Printers – Especially on Copilot+ PCs” says:
Neowin reports that there are quite a number of complaints from those with printers who have upgraded to Windows 11 24H2 and are finding their device is no longer working. This is affecting all the best-known printer manufacturers, the likes of Brother, Canon, HP and so forth. The issue is mainly being experienced by those with a Copilot+ PC powered by an Arm processor, as mentioned, and it either completely derails the printer, leaving it non-functional, or breaks certain features. In other cases, Windows 11 users can’t install the printer driver.
Okay, dinobaby, what’s the meta concept thing? Let me offer my ideas about the challenge these two write ups capture:
- Microsoft may have what I call the Boeing disease; that is, the engineering is not so hot. Many things create problems. Finger pointing and fast talking do not solve the problems.
- The entities involved are companies and software which have managed to become punch lines for some humorists. For instance, what software can produce more bricks than a kiln? Answer: A Windows update. Ho ho ho. Funny until one cannot print a Word document for a Type A, drooling MBA.
- Remediating processes don’t remediate. The work process itself generates flawed outputs. Stated another way, like some bank mainframes running 1960s code, fixing is not possible, and there are insufficient people and money to get the repair done.
The meta concept is that the way well-paid workers tackle an engineering project is capable of outputting a product or service with a failure rate approaching 100 percent. How about those Windows updates? Are they the digital equivalent of the Boeing space initiative?
The answer is, “No.” We have instances of processes which cannot produce reliable products and services. The framework itself produces failure. This idea has some interesting implications. If software cannot allow a user to print, what else won’t perform as expected? Maybe AI?
Stephen E Arnold, November 7, 2024
Microsoft 24H2: The Reality Versus Self Awareness
November 4, 2024
Sorry. Written by a dumb humanoid. Art? It is AI, folks. Eighty year old dinobabies cannot draw very well in my experience.
I spotted a short item titled “Microsoft Halts Windows 11 24H2 Update for Many PCs Due to Compatibility Issues.” Today is October 29, 2024. By the time you read this item, you may have a Windows equipped computer humming along on the charmingly named 11 24H2 update. That’s the one with Recall.
Microsoft does not see itself as slightly bedraggled. Those with failed updates do. Thanks, ChatGPT, good enough, but at least you work. MSFT Copilot has been down for six days with a glitch.
Now if you work at the Redmond facility where Google paranoia reigns, you probably have Recall running on your computing device as well as Teams’ assorted surveillance features. That means that when you run a query for “updates,” you may see screens presenting an array of information about non-functioning drivers, printer errors, visits to the wonderfully organized knowledge bases, and possibly images of email from colleagues wanting to take kinetic action against the interns, new hires, and ham-fisted colleagues who rolled out an update which does not update.
The write up offers this helpful advice:
We advise users against manually forcing the update through the Windows 11 Installation Assistant or media creation tool, especially on the system configurations mentioned above. Instead, users should check for updates to the specific software or hardware drivers causing the holds and wait for the blocks to be lifted naturally.
Okay.
Let’s look at this from the point of view of bad actors. These folks know that the “new” Windows with its many nifty new features has some issues. When the Softies cannot get wallpaper to work, one knows that deeper, more subtle issues are not on the wizards’ radar.
Thus, the 24H2 update will be installed on bad actors’ test systems and subjected to tests only a fan of Metasploit and related tools can appreciate. My analogy is that these individuals, some of whom are backed by nation states, will give the update the equivalent of a digital colonoscopy. Sorry, Redmond, no anesthetic this go round.
Why?
Microsoft suggests that security is Job Number One. Obviously when fingerprint security functions don’t work and Windows Hello fails, the bad actor knows that other issues exist. My goodness. Why doesn’t Microsoft just turn its PR and advertising firms loose on Telegram hacking groups and announce, “Take me. I am yours!”
Several observations:
- The update is flawed
- Core functions do not work
- Partners, not Microsoft, are supposed to fix the broken slot machine of operating systems
- Microsoft is, once again, scrambling to do what it should have done correctly before releasing a deeply flawed bundle of software.
Net net: Blaming Google for European woes and pointing fingers at everything and everyone except itself, Microsoft is demonstrating that it cannot do a basic task correctly. The only happy users are the legions of bad actors in the countries Microsoft accuses of making its life difficult. Sorry, Microsoft, you did this. But you could blame Google, of course.
Stephen E Arnold, November 4, 2024
Google Goes Nuclear For Data Centers
October 31, 2024
From the Future-Is-Just-Around-the-Corner Department:
Pollution is blamed on consumers who are told to cut their dependency on plastic and drive less, while mega corporations and tech companies are the biggest polluters in the world. Some of the biggest users of energy are data centers and Google decided to go nuclear to help power them says Engadget: “Google Strikes A Deal With A Nuclear Startup To Power Its AI Data Centers.”
Google is teaming up with Kairos Power to build seven small nuclear reactors in the United States. The reactors will power Google’s AI drive and add 500 megawatts of capacity. The first reactor is expected to be built in 2030, with a plan to finish the rest by 2035. The reactors are called small modular reactors, or SMRs for short.
Google’s deal with Kairos Power would be the first corporate deal to buy nuclear power from SMRs. The small reactors are built inside a factory instead of on site, so their construction cost is lower than that of a full power plant.
“Kairos will need the US Nuclear Regulatory Commission to approve design and construction permits for the plans. The startup has already received approval for a demonstration reactor in Tennessee, with an online date targeted for 2027. The company already builds test units (without nuclear-fuel components) at a development facility in Albuquerque, NM, where it assesses components, systems and its supply chain.
The companies didn’t announce the financial details of the arrangement. Google says the deal’s structure will help to keep costs down and get the energy online sooner.”
These tech companies say they’re green, but now they are contributing more to global warming with their AI data centers and potential nuclear waste. At least nuclear energy is more powerful and doesn’t contribute as much pollution as coal or natural gas, except when the reactors melt down. Amazon is pursuing a similar deal too.
Has Google made the engineering shift from moon shots to environmental impact statements, nuclear waste disposal, document management, and assorted personnel challenges? Sure, of course. Oh, and one trivial question: Is there a commercially available and certified miniature nuclear power plant? Russia may be short on cash. Perhaps someone in that country will sell a propulsion unit from those super reliable nuclear submarines? Google can just repurpose it in a suitable data center. Maybe one in Ashburn, Virginia?
Whitney Grace, October 31, 2024
An Emergent Behavior: The Big Tech DNA Proves It
October 14, 2024
Writer Mike Masnick at TechDirt makes quite the allegation: “Big Tech’s Promise Never to Block Access to Politically Embarrassing Content Apparently Only Applies to Democrats.” He contends:
“It probably will not shock you to find out that big tech’s promises to never again suppress embarrassing leaked content about a political figure came with a catch. Apparently, it only applies when that political figure is a Democrat. If it’s a Republican, then of course the content will be suppressed, and the GOP officials who demanded that big tech never ever again suppress such content will look the other way.”
The basis for Masnick’s charge of hypocrisy lies in a tale of two information leaks. Tech execs and members of Congress responded to each data breach very differently. Recently, representatives from both Meta and Google pledged to Senator Tom Cotton at a Senate Intelligence Committee hearing to never again “suppress” news as they supposedly did in 2020 with the Hunter Biden laptop story. At the time, those platforms were leery of circulating that story until it could be confirmed.
Less than two weeks after that hearing, journalist Ken Klippenstein published the Trump campaign’s internal vetting dossier on JD Vance, a document believed to have been hacked by Iran. That sounds like just the sort of newsworthy, if embarrassing, story that conservatives believe should never be suppressed, right? Not so fast—Trump mega-supporter Elon Musk immediately banned Klippenstein’s X account and blocked all links to his Substack. Similarly, Meta blocked links to the dossier across its platforms. That goes further than the company ever did with the Biden laptop story, the post reminds us. Finally, Google now prohibits users from storing the dossier on Google Drive. See the article for more of Masnick’s reasoning. He concludes:
“Of course, the hypocrisy will stand, because the GOP, which has spent years pointing to the Hunter Biden laptop story as their shining proof of ‘big tech bias’ (even though it was nothing of the sort), will immediately, and without any hint of shame or acknowledgment, insist that of course the Vance dossier must be blocked and it’s ludicrous to think otherwise. And thus, we see the real takeaway from all that working of the refs over the years: embarrassing stuff about Republicans must be suppressed, because it’s doxing or hacking or foreign interference. However, embarrassing stuff about Democrats must be shared, because any attempt to block it is election interference.”
Interesting. But not surprising.
Cynthia Murrell, October 14, 2024
AI: New Atlas Sees AI Headed in a New Direction
October 11, 2024
I like the premise of “AI Begins Its Ominous Split Away from Human Thinking.” Neural nets trained by humans on human information are going in their own direction. Whom do we thank? The neural net researchers? The Googlers who conceived of “the transformer”? The online advertisers who have provided significant sums of money? The “invisible hand” tapping on a virtual keyboard? Maybe quantum entanglement? I don’t know.
I do know that New Atlas’ article states:
AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.
But isn’t that the point? The high school science club types beavering away in the smart software vineyards know the catchphrase:
Boldly go where no man has gone before!
The big outfits able to buy fancy chips and try to restart mothballed nuclear plants have “boldly gone where no man has gone before.” Get in the way of one of these captains of the starship US AI, and you will be terminated, harassed, or forced to quit. If you are not boldly going, you are just not going.
The article says ChatGPT 4 whatever is:
… the first LLM that’s really starting to create that strange, but super-effective AlphaGo-style ‘understanding’ of problem spaces. In the domains where it’s now surpassing Ph.D.-level capabilities and knowledge, it got there essentially by trial and error, by chancing upon the correct answers over millions of self-generated attempts, and by building up its own theories of what’s a useful reasoning step and what’s not.
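The trial-and-error idea in that quote can be made concrete with a toy sketch: generate many random solution attempts, keep only the ones that verify against a known answer, and treat the survivors as “useful reasoning steps.” Everything below is an illustrative stand-in of my own devising, not OpenAI’s actual training method:

```python
import random

# Toy illustration of "chancing upon the correct answers over millions
# of self-generated attempts." A random step-composer stands in for
# model sampling; a known target stands in for an answer checker.
# This is a hypothetical sketch, not OpenAI's o1 training procedure.

def attempt_solution(problem, rng):
    """Randomly compose arithmetic steps; returns (steps, final value)."""
    steps = [rng.choice(["+1", "-1", "*2"]) for _ in range(rng.randint(1, 5))]
    value = problem["start"]
    for step in steps:
        if step == "+1":
            value += 1
        elif step == "-1":
            value -= 1
        else:
            value *= 2
    return steps, value

def collect_verified_attempts(problem, n_attempts=10_000, seed=0):
    """Keep only attempts that reach the known answer. These verified
    paths are what a model could later be trained to prefer."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_attempts):
        steps, value = attempt_solution(problem, rng)
        if value == problem["target"]:
            kept.append(steps)
    return kept

problem = {"start": 3, "target": 8}
good = collect_verified_attempts(problem)
print(len(good) > 0)  # blind trial and error does find valid paths
```

The sketch shows why this approach sidesteps human thinking: nothing in the loop cares how a person would solve the problem, only whether an attempt checks out.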
But, hey, it is pretty clear where AI is going from New Atlas’ perch:
OpenAI’s o1 model might not look like a quantum leap forward, sitting there in GPT’s drab textual clothing, looking like just another invisible terminal typist. But it really is a step-change in the development of AI – and a fleeting glimpse into exactly how these alien machines will eventually overtake humans in every conceivable way.
But if the AI goes its own way, how can a human “conceive” where the software is going?
Doom and fear work for the evening news (or what passes for the evening news). I think there is a cottage industry of AI doomsters working diligently to stop some people from fooling around with smart software. That is not going to work. Plus, the magical “transformer” thing is the culmination of years of prior work. It is simply one more step in the more than 50-year effort to process content.
This “stage” seems to have some utility, but more innovations will come. They have to. I am not sure how one stops people with money hunting for people who can say, “I have the next big thing in AI.”
Sorry, New Atlas, I am not convinced. Plus, I don’t watch movies or buy into most AI wackiness.
Stephen E Arnold, October 11, 2024
Dolma: Another Large Language Model
October 9, 2024
The biggest complaint AI developers have is the lack of variety and diversity in the data used to train large language models (LLMs). According to the Cornell University computer science paper “Dolma: An Open Corpus Of Three Trillion Tokens For Language Model Pretraining Research,” open training corpora do exist.
The paper’s abstract details the difficulties of AI training very succinctly:
“Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often released without accompanying training data or recipes to reproduce them. As a result, it is challenging to conduct and advance scientific research on language modeling, such as understanding how training data impacts model capabilities and limitations.”
Due to this lack of open data, the paper’s team curated their own corpus, called Dolma. Dolma is a three-trillion-token English corpus. It was built from web content, public domain books, social media, encyclopedias, code, scientific papers, and more. The team thoroughly documented every information source so they wouldn’t run into the same problems as other LLM training sets. These problems include stealing copyrighted material and private user data.
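The source-documentation idea can be illustrated with a toy provenance sketch. To be clear, the class and field names below are hypothetical inventions for illustration, not Dolma’s actual schema or toolkit:

```python
from dataclasses import dataclass

# Toy sketch of per-document provenance tracking, in the spirit of
# documenting every source as the Dolma team describes. These field
# names are hypothetical, not the real Dolma schema.

@dataclass
class SourcedDocument:
    text: str
    source: str          # e.g. "web", "public-domain-books", "code"
    license: str         # recorded so copyrighted material can be excluded
    contains_pii: bool = False

def build_corpus(docs):
    """Keep only documents whose provenance passes basic checks,
    and summarize the kept documents by source."""
    allowed_licenses = {"public-domain", "cc-by", "permissive-code"}
    kept = [d for d in docs
            if d.license in allowed_licenses and not d.contains_pii]
    summary = {}
    for d in kept:
        summary[d.source] = summary.get(d.source, 0) + 1
    return kept, summary

docs = [
    SourcedDocument("A public domain novel.", "public-domain-books", "public-domain"),
    SourcedDocument("Scraped blog post.", "web", "all-rights-reserved"),
    SourcedDocument("def add(a, b): return a + b", "code", "permissive-code"),
]
kept, summary = build_corpus(docs)
print(summary)  # {'public-domain-books': 1, 'code': 1}
```

The point of the sketch is simply that recording license and source per document makes the copyright and privacy filtering the team describes mechanical rather than guesswork.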
Dolma’s documentation also includes how it was built, its design principles, and content summaries. The team shares Dolma’s development through analyses and experimental test results. They are thoroughly documenting everything to guarantee that this is the ultimate training corpus and (hopefully) won’t encounter problems other than tech-related ones. Dolma’s toolkit is open source, and the team wants developers to use it. This is a great effort on behalf of Dolma’s creators! They support AI development and data curation, but do it responsibly.
Give them a huge round of applause!
Cynthia Murrell, October 10, 2024
Windows Fruit Loop Code, Oops. Boot Loop Code.
October 8, 2024
Windows Update Produces Boot Loops. Again.
Some Windows 11 users are vigilant about staying on top of the latest updates. Recently, such users paid for their diligence with infinite reboots, freezes, and/or the dreaded blue screen of death. Digitaltrends warns, “Whatever You Do, Don’t Install the Windows 11 September Update.” Writer Judy Sanhz reports:
“The bug here can cause what’s known as a ‘boot loop.’ This is an issue that Windows versions have had for decades, where the PC will boot and restart endlessly with no way for users to interact, forcing a hard shutdown by holding the power button. Boot loops can be incredibly hard to diagnose and even more complicated to fix, so the fact that we know the latest Windows 11 update can trigger the problem already solves half the battle. The Automatic Repair tool is a built-in feature on your PC that automatically detects and fixes any issues that prevent your computer from booting correctly. However, recent Windows updates, including the September update, have introduced problems such as freezing the task manager and others in the Edge browser. If you’re experiencing these issues, our handy PC troubleshooting guide can help.”
So for many, the update hobbled the means to fix it. Wonderful. It may be worthwhile to bookmark that troubleshooting guide. On multiple devices, if possible. Because this is not the first time Microsoft has unleashed this particular aggravation on its users. In fact, the last instance was just this past August. The company has since issued a rollback fix, but one wonders: Why ship a problematic update in the first place? Was it not tested? And is it just us, or does this sound eerily similar to July’s CrowdStrike outage?
(Does the fruit loop experience come with sour grapes?)
Cynthia Murrell, October 8, 2024
Hey, Live to Be a 100 like a Tech Bro
October 8, 2024
If you, gentle reader, are like me, you have taken heart at tales of people around the world living past 100. Well, get ready to tamp down some of that hope. An interview at The Conversation declares, “The Data on Extreme Human Ageing Is Rotten from the Inside Out.” Researcher Saul Justin Newman recently won an Ig Nobel Prize (not to be confused with a Nobel Prize) for his work on data about ageing. When asked about his work, Newman summarizes:
“In general, the claims about how long people are living mostly don’t stack up. I’ve tracked down 80% of the people aged over 110 in the world (the other 20% are from countries you can’t meaningfully analyze). Of those, almost none have a birth certificate. In the US there are over 500 of these people; seven have a birth certificate. Even worse, only about 10% have a death certificate. The epitome of this is blue zones, which are regions where people supposedly reach age 100 at a remarkable rate. For almost 20 years, they have been marketed to the public. They’re the subject of tons of scientific work, a popular Netflix documentary, tons of cookbooks about things like the Mediterranean diet, and so on. Okinawa in Japan is one of these zones. There was a Japanese government review in 2010, which found that 82% of the people aged over 100 in Japan turned out to be dead. The secret to living to 110 was, don’t register your death.”
That is one way to go, we suppose. We learn of other places Newman found bad ageing data. Europe’s “blue zones” of Sardinia in Italy and Ikaria in Greece, for example. There can be several reasons for erroneous data. For example, wars or other disasters that destroyed public records. Or clerical errors that set the wrong birth years in stone. But one of the biggest factors seems to be pension fraud. We learn:
“Regions where people most often reach 100-110 years old are the ones where there’s the most pressure to commit pension fraud, and they also have the worst records. For example, the best place to reach 105 in England is Tower Hamlets. It has more 105-year-olds than all of the rich places in England put together. It’s closely followed by downtown Manchester, Liverpool and Hull. Yet these places have the lowest frequency of 90-year-olds and are rated by the UK as the worst places to be an old person.”
That does seem fishy. Especially since it is clear rich folks generally live longer than poor ones. (And that gap is growing, by the way.) So get those wills notarized, trusts set up, and farewell letters written sooner rather than later. We may not have as much time as we hoped.
Cynthia Murrell, October 8, 2024