China Seeks to Curb Algorithmic Influence and Manipulation
December 5, 2024
Someone is finally taking decisive action against unhealthy recommendation algorithms, AI-driven price optimization, and exploitative gig-work systems. That someone is China. “China Sets Deadline for Big Tech to Clear Algorithm Issues, Close ‘Echo Chambers’,” reports the South China Morning Post. Ah, the efficiency of a repressive regime. Writer Hayley Wong informs us:
“Tech operators in China have been given a deadline to rectify issues with recommendation algorithms, as authorities move to revise cybersecurity regulations in place since 2021. A three-month campaign to address ‘typical issues with algorithms’ on online platforms was launched on Sunday, according to a notice from the Communist Party’s commission for cyberspace affairs, the Ministry of Industry and Information Technology, and other relevant departments. The campaign, which will last until February 14, marks the latest effort to curb the influence of Big Tech companies in shaping online views and opinions through algorithms – the technology behind the recommendation functions of most apps and websites. System providers should avoid recommendation algorithms that create ‘echo chambers’ and induce addiction, allow manipulation of trending items, or exploit gig workers’ rights, the notice said.
They should also crack down on unfair pricing and discounts targeting different demographics, ensure ‘healthy content’ for elderly and children, and impose a robust ‘algorithm review mechanism and data security management system’.”
Tech firms operating within China are also ordered to conduct internal investigations and improve algorithms’ security capabilities by the end of the year. What happens if firms fail? Reeducation? A visit to the death van? Or an opportunity to herd sheep in a really nice area near Xi’an? The brief write-up does not specify.
We think there may be a footnote to the new policy; for instance, “Use algos to advance our policies.”
Cynthia Murrell, December 5, 2024
Listary: A Chinese Alternative to Windows File Explorer
December 5, 2024
For anyone frustrated with Windows’ built-in search function, Lifehacker suggests an alternative. “Listary Is a Fast, Powerful Search Tool for Windows,” declares writer Justin Pot. He tells us:
“Listary is a free app with great indexing that allows you to find any file on your computer in just a couple of keystrokes. Tap the control key twice, start typing, and hit enter when you see what you want. You can also use the tool to launch applications or search the web. … The keyboard shortcut brings up a search window similar to Spotlight on the Mac. There is also a more advanced version of the application which you can bring up by clicking the tray icon for the application. This lets you do things like filter your search by file type or how recently it was created. This view also notably allows you to preview files before opening them, which I appreciate. You’re not limited to searching on your computer—you can also start web searches from here.”
That Web search function is preloaded with a few search engines, like Google, Wikipedia, IMDB, and YouTube, but one can add more platforms. The free version of Listary is for personal use only. The company, Bopsoft, makes its money on the Pro version, which is $20. Just once, not monthly or annually. That version offers network-drive indexing and customization options. Bopsoft appears to be based in Zaozhuang, China.
Cynthia Murrell, December 5, 2024
Legacy Code: Avoid, Fix, or Flee (Two Out of Three Mean Forget It)
December 4, 2024
In his Substack post, “Legacy Schmegacy,” software engineer David Reis offers some pointers on preventing and coping with legacy code. We found this snippet interesting:
“Someone must fix the legacy code, but it doesn’t have to be you. It’s far more honorable to switch projects or companies than to lead a misguided rewrite.”
That’s the spirit: quit and let someone else deal with it. But not everyone is in the position to cut and run. For those actually interested in addressing the problem, Reis has some suggestions. First, though, the post lists factors that can prevent legacy code in the first place:
- “The longer a programmer’s tenure the less code will become legacy, since authors will be around to appreciate and maintain it.
- The more code is well architected, clear and documented the less it will become legacy, since there is a higher chance the author can transfer it to a new owner successfully.
- The more the company uses pair programming, code reviews, and other knowledge transfer techniques, the less code will become legacy, as people other than the author will have knowledge about it.
- The more the company grows junior engineers the less code will become legacy, since the best way to grow juniors is to hand them ownership of components.
- The more a company uses simple standard technologies, the less likely code will become legacy, since knowledge about them will be widespread in the organization. Ironically if you define innovation as adopting new technologies, the more a team innovates the more legacy it will have. Every time it adopts a new technology, either it won’t work, and the attempt will become legacy, or it will succeed, and the old systems will.”
Reis’s number one suggestion to avoid creating legacy code is, “don’t write crappy code.” Noted. Also, stick with tried and true methods unless a shiny new tech is definitely the best option. Perhaps most importantly, coders should teach others in the organization how their code works and leave behind good documentation. So, common sense and best practices. Brilliant!
When confronted with a predecessor’s code, he advises one to “delegacify” it. That is a word he coined to mean: Take time to understand the code and see if it can be improved over time before tossing it out entirely. Or, as noted above, just run away. That can be an option for some.
Cynthia Murrell, December 4, 2024
Point-and-Click Coding: An eGame Boom Booster
November 22, 2024
TheNextWeb explains “How AI Can Help You Make a Computer Game Without Knowing Anything About Coding.” That’s great—unless one is a coder who makes one’s living on computer games. Writer Daniel Zhou Hao begins with a story about one promising young fellow:
“Take Kyo, an eight-year-old boy in Singapore who developed a simple platform game in just two hours, attracting over 500,000 players. Using nothing but simple instructions in English, Kyo brought his vision to life leveraging the coding app Cursor and also Claude, a general purpose AI. Although his dad is a coder, Kyo didn’t get any help from him to design the game and has no formal coding education himself. He went on to build another game, an animation app, a drawing app and a chatbot, taking about two hours for each. This shows how AI is dramatically lowering the barrier to software development, bridging the gap between creativity and technical skill. Among the range of apps and platforms dedicated to this purpose, others include Google’s AlphaCode 2 and Replit’s Ghostwriter.”
The write-up does not completely leave experienced coders out of the discussion. Hao notes tools like Tabnine and GitHub Copilot act as auto-complete assistants, while Sourcery and DeepCode take the tedium out of code cleanup. For the 70-ish percent of companies that have adopted one or more of these tools, he tells us, the benefits include time savings and more reliable code. Does this mean developers will shift to “higher value tasks,” like creative collaboration and system design, as Hao insists? Or will it just mean firms will lighten their payrolls?
As for building one’s own game, the article lists seven steps. They are akin to basic advice for developing a product, but with an AI-specific twist. For those who want to know how to make one’s AI game addictive, contact benkent2020 at yahoo dot com.
Cynthia Murrell, November 22, 2024
Marketers, Deep Fakes Work
November 14, 2024
Bad actors use AI for pranks all the time, but this could be the first time AI pranked an entire Irish town of its own volition. KXAN reports on the folly: “AI Slop Site Sends Thousands In Ireland To Fake Halloween Parade.” The website MySpiritHalloween.com dubs itself the ultimate resource for all things Halloween. The site publishes AI-generated content, and one of its articles told the entire city of Dublin that there would be a parade.
If this were a small Irish village, there would be giggles, and the police would investigate criminal mischief before deciding to stop wasting their time. Dublin, however, is one of the country’s biggest cities, and folks showed up in the thousands to see the Halloween parade. They eventually figured out something was wrong given the absence of barriers, law enforcement, and (most importantly) costumed people on floats!
MySpiritHalloween is owned by Naszir Ali, who was embarrassed by the situation.
“Per Ali’s explanation, his SEO agency creates websites and ranks them on Google. He says the company hired content writers who were in charge of adding and removing events all across the globe as they learned whether or not they were happening. He said the Dublin event went unreported as fake and that the website quickly corrected the listing to show it had been cancelled….
Ali said that his website was built and helped along by the use of AI but that the technology only accounts for 10-20% of the website’s content. He added that, according to him, AI content won’t completely help a website get ranked on Google’s first page and that the reason so many people saw MySpiritHalloween.com was because it was ranked on Google’s first page — due to what he calls “80% involvement” from actual humans.”
Ali claims his website is based in Illinois, but investigations found it is hosted in Pakistan. Ali’s website is one among millions that use AI-generated content to manipulate Google’s algorithm. Ali is correct that real humans did make the parade rise to the top of Google’s search results, but he was responsible for the content.
Media around the globe took this as an opportunity to teach parade-goers and others to be aware of AI-generated scams. Marketers, launch your AI-infused fakery.
Whitney Grace, November 14, 2024
The Yogi Berra Principle: Déjà Vu All Over Again
November 7, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
I noted two write ups. The articles share what I call a meta concept. The first article is from PC World called “Office Apps Crash on Windows 11 24H2 PCs with CrowdStrike Antivirus.” The actors in this soap opera are the confident Microsoft and the elegant CrowdStrike. The write up says:
The annoying error affects Office applications such as Word or Excel, which crash and become completely unusable after updating to Windows 11 24H2. And this apparently only happens on systems that are running antivirus software by CrowdStrike. (Yes, the very same CrowdStrike that caused the global IT meltdown back in July.)
A patient, skilled teacher explains to the smart software, “You goofed, speedy.” Thanks, Midjourney. Good enough.
The other write up adds some color to the trivial issue. “Windows 11 24H2 Misery Continues, As Microsoft’s Buggy Update Is Now Breaking Printers – Especially on Copilot+ PCs” says:
Neowin reports that there are quite a number of complaints from those with printers who have upgraded to Windows 11 24H2 and are finding their device is no longer working. This is affecting all the best-known printer manufacturers, the likes of Brother, Canon, HP and so forth. The issue is mainly being experienced by those with a Copilot+ PC powered by an Arm processor, as mentioned, and it either completely derails the printer, leaving it non-functional, or breaks certain features. In other cases, Windows 11 users can’t install the printer driver.
Okay, dinobaby, what’s the meta concept thing? Let me offer my ideas about the challenge these two write ups capture:
- Microsoft may have what I call the Boeing disease; that is, the engineering is not so hot. Many things create problems. Finger pointing and fast talking do not solve the problems.
- The entities involved are companies and software which have managed to become punch lines for some humorists. For instance, what software can produce more bricks than a kiln? Answer: A Windows update. Ho ho ho. Funny until one cannot print a Word document for a Type A, drooling MBA.
- Remediating processes don’t remediate. The work process itself generates flawed outputs. Stated another way, like some bank mainframes and 1960s code, fixing is not possible, and there are insufficient people and money to get the repair done.
The meta concept is that the way well-paid workers tackle an engineering project is capable of outputting a product or service with a failure rate approaching 100 percent. How about those Windows updates? Are they the digital equivalent of the Boeing space initiative?
The answer is, “No.” We have instances of processes which cannot produce reliable products and services. The framework itself produces failure. This idea has some interesting implications. If software cannot allow a user to print, what else won’t perform as expected? Maybe AI?
Stephen E Arnold, November 7, 2024
Microsoft 24H2: The Reality Versus Self Awareness
November 4, 2024
Sorry. Written by a dumb humanoid. Art? It is AI, folks. Eighty year old dinobabies cannot draw very well in my experience.
I spotted a short item titled “Microsoft Halts Windows 11 24H2 Update for Many PCs Due to Compatibility Issues.” Today is October 29, 2024. By the time you read this item, you may have a Windows equipped computer humming along on the charmingly named 11 24H2 update. That’s the one with Recall.
Microsoft does not see itself as slightly bedraggled. Those with failed updates do. Thanks, ChatGPT, good enough, but at least you work. MSFT Copilot has been down for six days with a glitch.
Now if you work at the Redmond facility where Google paranoia reigns, you probably have Recall running on your computing device as well as Teams’ assorted surveillance features. That means that when you run a query for “updates”, you may see screens presenting an array of information about non functioning drivers, printer errors, visits to the wonderfully organized knowledge bases, and possibly images of email from colleagues wanting to take kinetic action about the interns, new hires, and ham fisted colleagues who rolled out an update which does not update.
The write up offers this helpful advice:
We advise users against manually forcing the update through the Windows 11 Installation Assistant or media creation tool, especially on the system configurations mentioned above. Instead, users should check for updates to the specific software or hardware drivers causing the holds and wait for the blocks to be lifted naturally.
Okay.
Let’s look at this from the point of view of bad actors. These folks know that the “new” Windows with its many nifty new features has some issues. When the Softies cannot get wallpaper to work, one knows that deeper, more subtle issues are not on the wizards’ radar.
Thus, the 24H2 update will be installed on bad actors’ test systems and subjected to tests only a fan of Metasploit and related tools can appreciate. My analogy is that these individuals, some of whom are backed by nation states, will give the update the equivalent of a digital colonoscopy. Sorry, Redmond, no anesthetic this go round.
Why?
Microsoft suggests that security is Job Number One. Obviously, when fingerprint security functions don’t work and Windows Hello fails, the bad actor knows that other issues exist. My goodness. Why doesn’t Microsoft just turn its PR and advertising firms loose on Telegram hacking groups and announce, “Take me. I am yours!”
Several observations:
- The update is flawed
- Core functions do not work
- Partners, not Microsoft, are supposed to fix the broken slot machine of operating systems
- Microsoft is, once again, scrambling to do what it should have done correctly before releasing a deeply flawed bundle of software.
Net net: Blaming Google for European woes and pointing fingers at everything and everyone except itself, Microsoft is demonstrating that it cannot do a basic task correctly. The only users who are happy are those legions of bad actors in the countries Microsoft accuses of making its life difficult. Sorry, Microsoft, you did this, but you could blame Google, of course.
Stephen E Arnold, November 4, 2024
Google Goes Nuclear For Data Centers
October 31, 2024
From the Future-Is-Just-Around-the-Corner Department:
Pollution is blamed on consumers, who are told to cut their dependency on plastic and drive less, while mega corporations and tech companies are the biggest polluters in the world. Some of the biggest users of energy are data centers, and Google has decided to go nuclear to help power them, says Engadget: “Google Strikes A Deal With A Nuclear Startup To Power Its AI Data Centers.”
Google is teaming up with Kairos Power to build seven small nuclear reactors in the United States. The reactors will power Google’s AI drive, adding 500 megawatts of capacity. The first reactor is expected to be built by 2030, with the rest planned to follow by 2035. The reactors are called small modular reactors, or SMRs for short.
Google’s deal with Kairos Power would be the first corporate agreement to buy nuclear power from SMRs. The small reactors are built inside a factory instead of on site, so their construction costs are lower than a full power plant’s.
“Kairos will need the US Nuclear Regulatory Commission to approve design and construction permits for the plans. The startup has already received approval for a demonstration reactor in Tennessee, with an online date targeted for 2027. The company already builds test units (without nuclear-fuel components) at a development facility in Albuquerque, NM, where it assesses components, systems and its supply chain.
The companies didn’t announce the financial details of the arrangement. Google says the deal’s structure will help to keep costs down and get the energy online sooner.”
These tech companies say they’re green, but now they are contributing more to global warming with their AI data centers and potential nuclear waste. At least nuclear energy is more powerful and doesn’t contribute as much pollution as coal or natural gas, except when the reactors melt down. Amazon is doing a deal like this too.
Has Google made the engineering shift from moon shots to environmental impact statements, nuclear waste disposal, document management, assorted personnel challenges? Sure, of course. Oh, and one trivial question: Is there a commercially available and certified miniature nuclear power plant? Russia may be short on cash. Perhaps someone in that country will sell a propulsion unit from those super reliable nuclear submarines? Google can just repurpose it in a suitable data center. Maybe one in Ashburn, Virginia?
Whitney Grace, October 31, 2024
An Emergent Behavior: The Big Tech DNA Proves It
October 14, 2024
Writer Mike Masnick at TechDirt makes quite the allegation: “Big Tech’s Promise Never to Block Access to Politically Embarrassing Content Apparently Only Applies to Democrats.” He contends:
“It probably will not shock you to find out that big tech’s promises to never again suppress embarrassing leaked content about a political figure came with a catch. Apparently, it only applies when that political figure is a Democrat. If it’s a Republican, then of course the content will be suppressed, and the GOP officials who demanded that big tech never ever again suppress such content will look the other way.”
The basis for Masnick’s charge of hypocrisy lies in a tale of two information leaks. Tech execs and members of Congress responded to each data breach very differently. Recently, representatives from both Meta and Google pledged to Senator Tom Cotton at a Senate Intelligence Committee hearing never again to “suppress” news as they supposedly did in 2020 with the Hunter Biden laptop story. At the time, those platforms were leery of circulating that story until it could be confirmed.
Less than two weeks after that hearing, journalist Ken Klippenstein published the Trump campaign’s internal vetting dossier on JD Vance, a document believed to have been hacked by Iran. That sounds like just the sort of newsworthy, if embarrassing, story that conservatives believe should never be suppressed, right? Not so fast—Trump mega-supporter Elon Musk immediately banned Klippenstein’s X account and blocked all links to his Substack. Similarly, Meta blocked links to the dossier across its platforms. That goes further than the company ever did with the Biden laptop story, the post reminds us. Finally, Google now prohibits users from storing the dossier on Google Drive. See the article for more of Masnick’s reasoning. He concludes:
“Of course, the hypocrisy will stand, because the GOP, which has spent years pointing to the Hunter Biden laptop story as their shining proof of ‘big tech bias’ (even though it was nothing of the sort), will immediately, and without any hint of shame or acknowledgment, insist that of course the Vance dossier must be blocked and it’s ludicrous to think otherwise. And thus, we see the real takeaway from all that working of the refs over the years: embarrassing stuff about Republicans must be suppressed, because it’s doxing or hacking or foreign interference. However, embarrassing stuff about Democrats must be shared, because any attempt to block it is election interference.”
Interesting. But not surprising.
Cynthia Murrell, October 14, 2024
AI: New Atlas Sees AI Headed in a New Direction
October 11, 2024
I like the premise of “AI Begins Its Ominous Split Away from Human Thinking.” Neural nets trained by humans on human information are going in their own direction. Whom do we thank? The neural net researchers? The Googlers who conceived of “the transformer”? The online advertisers who have provided significant sums of money? The “invisible hand” tapping on a virtual keyboard? Maybe quantum entanglement? I don’t know.
I do know that New Atlas’ article states:
AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.
But isn’t that the point? The high school science club types beavering away in the smart software vineyards know the catchphrase:
Boldly go where no man has gone before!
The big outfits able to buy fancy chips and try to restart mothballed nuclear plants have taken “boldly go where no man has gone before” to heart. Get in the way of one of these captains of the starship US AI, and you will be terminated, harassed, or forced to quit. If you are not boldly going, you are just not going.
The article says ChatGPT 4 whatever is:
… the first LLM that’s really starting to create that strange, but super-effective AlphaGo-style ‘understanding’ of problem spaces. In the domains where it’s now surpassing Ph.D.-level capabilities and knowledge, it got there essentially by trial and error, by chancing upon the correct answers over millions of self-generated attempts, and by building up its own theories of what’s a useful reasoning step and what’s not.
But, hey, it is pretty clear where AI is going from New Atlas’ perch:
OpenAI’s o1 model might not look like a quantum leap forward, sitting there in GPT’s drab textual clothing, looking like just another invisible terminal typist. But it really is a step-change in the development of AI – and a fleeting glimpse into exactly how these alien machines will eventually overtake humans in every conceivable way.
But if the AI goes its own way, how can a human “conceive” where the software is going?
Doom and fear work for the evening news (or what passes for the evening news). I think there is a cottage industry of AI doomsters working diligently to stop some people from fooling around with smart software. That is not going to work. Plus, the magical “transformer” thing is a culmination of years of prior work. It is simply one more step in the more than 50 year effort to process content.
This “stage” seems to have some utility, but more innovations will come. They have to. I am not sure how one stops people with money hunting for people who can say, “I have the next big thing in AI.”
Sorry, New Atlas, I am not convinced. Plus, I don’t watch movies or buy into most AI wackiness.
Stephen E Arnold, October 11, 2024