Enter the Dragon: America Is Unhealthy
November 4, 2024
Written by a humanoid dinobaby. No AI except the illustration.
The YouTube video “A Genius Girl Who Is Passionate about Repairing Machines” presents a simple story in a 38-minute video. The idea is that a young woman, with no help, fixes a broken motorcycle with basic hand tools outside in what looks like a hoarder’s backyard. The message is: Wow, she is smart and capable. Don’t you wish you knew a person like this who could repair your broken motorcycle?
This video is from @vutvtgamming, and not much information is provided. After watching this and similar videos like “Genius Girl Restored The 280mm Lathe From 50 Years Ago And Made It Look Like”, I feel pretty stupid for an American dinobaby. I don’t think I can recall meeting a person with similar mechanical skills when I worked at Keystone Steel, Halliburton Nuclear, or Booz, Allen & Hamilton’s Design & Development division. The message I carried away was: I was stupid, as were many people with whom I associated.
Thanks, MSFT Copilot. Good enough. (I slipped a put down through your filters. Imagine that!)
I picked up a similar vibe when I read “Today’s AI Ecosystem Is Unsustainable for Most Everyone But Nvidia, Warns Top Scholar.” On the surface, the ZDNet write up is an interview with the “scholar” Kai-Fu Lee, who, according to the article:
served as founding director of Microsoft Research Asia before working at Google and Apple, founded his current company, Sinovation Ventures, to fund startups such as 01.AI, which makes a generative AI search engine called BeaGo.
I am not sure how “scholar” correlates with commercial work for US companies and running an investment firm with a keen interest in Chinese start ups. I would not use the word “scholar.” My hunch is that the intent of Kai-Fu Lee is to present as simple and obvious something that US companies don’t understand. The interview is a different approach to explaining how advanced Kai-Fu Lee’s expertise is. He is, via this interview, sharing an opinion that the US is creating a problem and overlooking the simple solution. Just like the young woman able to repair a motorcycle or the lass fixing up a broken industrial lathe alone, the American approach does not get the job done.
What does ZDNet present as Kai-Fu Lee’s message? Here are a couple of examples:
“The ecosystem is incredibly unhealthy,” said Kai-Fu Lee in a private discussion forum earlier this month. Lee was referring to the profit disparity between, on the one hand, makers of AI infrastructure, including Nvidia and Google, and, on the other hand, the application developers and companies that are supposed to use AI to reinvent their operations.
Interesting. I wonder if the “healthy” ecosystem might be China’s approach of pragmatism and nuts-and-bolts evidenced in the referenced videos. The unhealthy versus healthy is a not-so-subtle message about digging one’s own grave in my opinion. The “economics” of AI are unhealthy, which seems to say, “America’s approach to smart software is going to kill it. A more healthy approach is the one in which government and business work to create applications.” Translating: China, healthy; America, sick as a dog.
Here’s another statement:
Today’s AI ecosystem, according to Lee, consists of Nvidia, and, to a lesser extent, other chip makers such as Intel and Advanced Micro Devices. Collectively, the chip makers rake in $75 billion in annual chip sales from AI processing. “The infrastructure is making $10 billion, and apps, $5 billion,” said Lee. “If we continue in this inverse pyramid, it’s going to be a problem,” he said.
Who will flip the pyramid? Uganda, Lao PDR, Greece? Nope, nope, nope. The flip will take an outfit with a strong mind and body. A healthy entity is needed to flip the pyramid. I wonder if that strong entity is China.
Here’s Kai-Fu Lee’s kung fu move:
He recommended that companies build their own vertically integrated tech stack the way Apple did with the iPhone, in order to dramatically lower the cost of generative AI. Lee’s striking assertion is that the most successful companies will be those that build most of the generative AI components — including the chips — themselves, rather than relying on Nvidia. He cited how Apple’s Steve Jobs pushed his teams to build all the parts of the iPhone, rather than waiting for technology to come down in price.
In the write up Kai-Fu Lee refers to “we.” Who is included in that “we”? Excluded will be the “unhealthy.” Who is left? I would suggest that the pragmatic and application-focused will be the winners. The reason? The “we” includes the healthy entities. Once again, I am thinking of China’s approach to smart software.
What’s the correct outcome? Kai-Fu Lee allegedly said:
What should result, he said, is “a smaller, leaner group of leaders who are not just hiring people to solve problems, but delegating to smart enterprise AI for particular functions — that’s when this will make the biggest deal.”
That sounds like the Chinese approach to a number of technical, social, and political challenges. Healthy? Absolutely.
Several observations:
- I wonder if ZDNet checked on the background of the “scholar” interviewed at length?
- Did ZDNet think about the “healthy” versus “unhealthy” theme in the write up?
- Did ZDNet question the “scholar’s” purpose in explaining what’s wrong with the US approach to smart software?
I think I know the answer. The ZDNet outfit and the creators of this unusual private interview believe that the young women rebuilt complicated devices without any assistance. Smart China; dumb America. I understand the message, which seems not to have been internalized by ZDNet. But I am a dumb dinobaby. What do I know? Exactly. Unhealthy, that American approach to AI.
Stephen E Arnold, October 30, 2024
Computer Security and Good Enough Methods
November 1, 2024
Written by a humanoid dinobaby. No AI except the illustration.
I read “TikTok Owner Sacks Intern for Sabotaging AI Project.” The BBC report is straightforward; it does not provide much “management” or “risk” commentary. In a nutshell, the allegedly China-linked ByteDance hired or utilized an intern. The term “intern” used to mean a student who wanted to get experience. Today, “intern” has a number of meanings. For example, for certain cyber fraud outfits operating in Southeast Asia an “intern” could be:
- A person paid to do work in a special economic zone
- A person coerced into doing work for an organization engaged in cyber fraud
- A person who is indeed a student and wants to get some experience
- An individual kidnapped and forced to perform work; otherwise, bad things can happen in dark rooms.
What’s the BBC say? Here is a snippet:
TikTok owner, ByteDance, says it has sacked an intern for “maliciously interfering” with the training of one of its artificial intelligence (AI) models.
The punishment, according to the write up, was “contacting” the intern’s university. End of story.
My take on this incident is a bit different from the BBC’s.
First, how did a company allegedly linked to the Chinese government make a bad hire? If the student was recommended by a university, what mistake did the university and the professors training the young person commit? The idea is to crank out individuals who snap into certain roles. I am not sure the spirit of an American party school is part of the ByteDance and TikTok work culture, but I may be off base.
Second, when a company hires a gig worker or brings an intern into an organization, are today’s managers able to identify potential issues either with an individual’s work or that person’s inner wiring? The fact that an intern was able to fiddle with code indicates a failure of internal checks and balances. The larger question is, “Can organizations trust interns who are operating as insiders, but without the controls an organization should have over individual workers?” This gaffe makes clear that modern management methods are not proactive; they are reactive. For that reason, insider threats exist and could do damage. ByteDance, according to the write up, downplayed the harm caused by the intern:
ByteDance also denied reports that the incident caused more than $10m (£7.7m) of damage by disrupting an AI training system made up of thousands of powerful graphics processing units (GPU).
Is this claim credible? Nope. I refer to the information about four companies “downplaying the impact of the SolarWinds hack.” US outfits don’t want to reveal the impact of a cyber issue. Are outfits like ByteDance and TikTok on the up and up about the impact of the intern’s actions?
Third, the larger question becomes, “How does an organization minimize insider threats as it seeks to cut training staff and rely on lower-cost labor?” The answer, in my opinion, is clear: An organization does what it can and hopes for the best.
Like many parts of a life in an informationized world or datasphere in my lingo, the quality of most efforts is good enough. The approach guarantees problems in the future. These are problems which cannot be solved. Management just finds something to occupy its time. The victims are the users, the customers, or the clients.
The world, even when allegedly linked with nation states, is struggling to achieve good enough.
Stephen E Arnold, November 1, 2024
The Reason IT Work is Never Done: The New Sisyphus Task
November 1, 2024
Why are systems never completely fixed? There is always some modification that absolutely must be made. In a recent blog post, engagement firm Votito chalks it up to Tog’s Paradox (aka The Complexity Paradox). This rule states that when a product simplifies user tasks, users demand new features that perpetually increase the product’s complexity. Both minimalists and completionists are doomed to disappointment, it seems.
The post supplies three examples of Tog’s Paradox in action. Perhaps the most familiar to many is that of social media. We are reminded:
“Initially designed to provide simple ways to share photos or short messages, these platforms quickly expanded as users sought additional capabilities, such as live streaming, integrated shopping, or augmented reality filters. Each of these features added new layers of complexity to the app, requiring more sophisticated algorithms, larger databases, and increased development efforts. What began as a relatively straightforward tool for sharing personal content has transformed into a multi-faceted platform requiring constant updates to handle new features and growing user expectations.”
The post asserts software designers may as well resign themselves to never actually finishing anything. Every project should be seen as an ongoing process. The writer observes:
“Tog’s Paradox reveals why attempts to finalize design requirements are often doomed to fail. The moment a product begins to solve its users’ core problems efficiently, it sparks a natural progression of second-order effects. As users save time and effort, they inevitably find new, more complex tasks to address, leading to feature requests that expand the scope far beyond what was initially anticipated. This cycle shows that the product itself actively influences users’ expectations and demands, making it nearly impossible to fully define design requirements upfront. This evolving complexity highlights the futility of attempting to lock down requirements before the product is deployed.”
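The feedback loop described above can be sketched as a toy simulation. This is my illustration, not something from the Votito post; the constants (how much effort a release saves, what fraction returns as feature requests) are arbitrary assumptions chosen only to show the shape of the cycle.

```python
# Toy model of Tog's Paradox: each release simplifies user tasks, a
# fraction of the freed-up user effort comes back as new feature
# requests, and every shipped feature adds to product complexity.
# All constants are illustrative assumptions, not measurements.

def simulate_releases(releases=5, complexity=10.0, demand_factor=0.4):
    """Return product complexity after each release cycle."""
    history = [complexity]
    for _ in range(releases):
        effort_saved = complexity * 0.5              # simpler UX frees user effort
        new_features = effort_saved * demand_factor  # freed effort -> new requests
        complexity += new_features                   # each feature adds complexity
        history.append(complexity)
    return history

history = simulate_releases()
# Complexity only climbs: simplifying for users never shrinks the product.
assert all(later > earlier for earlier, later in zip(history, history[1:]))
print([round(c, 1) for c in history])
```

Under any positive demand factor the sequence is monotonically increasing, which is the paradox in miniature: the better the product serves users, the more it grows.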
Maybe humanoid IT workers will become enshrined as new age Sisyphuses? Or maybe Sisyphi?
Cynthia Murrell, November 1, 2024
Great Moments in Marketing: MSFT Copilot, the Salesforce Take
November 1, 2024
A humanoid wrote this essay. I tried to get MSFT Copilot to work, but it remains dead. That makes four days with weird messages about a glitch. That’s the standard: Good enough.
It’s not often I get a kick out of comments from myth-making billionaires. I read through the boy-wonder-to-company-founder interview titled “An Interview with Salesforce CEO Marc Benioff about AI Abundance.” No paywall on this essay, unlike the New York Times’ downer about smart software which appears to have played a part in a teen’s suicide. Imagine when Perplexity can control a person’s computer. What exciting stories will appear. Here’s an example of what may be more common in 2025.
Great moments in Salesforce marketing. A senior Agentforce executive considers great marketing and brand ideas of the past. Inspiration strikes. In 2024, he will make fun of Clippy. Yes, a 1995 reference will resonate with young deciders in 2024. Thanks, Stable Diffusion. You are working; MSFT Copilot is not.
The focus today is a single statement in this interview with the big dog of Salesforce. Here’s the quote:
Well, I guess it wasn’t the AGI that we were expecting because I think that there has been a level of sell, including Microsoft Copilot, this thing is a complete disaster. It’s like, what is this thing on my computer? I don’t even understand why Microsoft is saying that Copilot is their vision of how you’re going to transform your company with AI, and you are going to become more productive. You’re going to augment your employees, you’re going to lower your cost, improve your customer relationships, and fundamentally expand all your KPIs with Copilot. I would say, “No, Copilot is the new Clippy”, I’m even playing with a paperclip right now.
Let’s think about this series of references and assertions.
First, there is the direct statement “Microsoft Copilot, this thing is a complete disaster.” Let’s assume the big dog of Salesforce is right. The large and much-loved company — Yes, I am speaking about Microsoft — rolled out a number of implementations, applications, and assertions. The firm caught everyone’s favorite Web search engine with its figurative pants down like a hapless Russian trooper about to be dispatched by a Ukrainian drone equipped with a variant of RTX. (That stuff goes bang.) Microsoft “won” a marketing battle and gained the advantage of time. Google with its Sundar & Prabhakar Comedy Act created an audience. Microsoft seized the opportunity to talk to the audience. The audience applauded. Whether the technology worked was, in my opinion, secondary. Microsoft wanted to be seen as the jazzy leader.
Second, the idea of a disaster is interesting. Since Microsoft relied on what may be the world’s weirdest organizational setup and supported the crumbling structure, other companies have created smart software which surfs on Google’s transformer ideas. Microsoft did not create a disaster; it had not done anything of note in the smart software world. Microsoft is a marketer. The technology is a second-class citizen. The disaster is that Microsoft’s marketing seems to be out of sync with what the PowerPoint decks say. So what’s new? The answer is, “Nothing.” The problem is that some people don’t see Microsoft’s smart software as a disaster. One example is Palantir, which is Microsoft’s new best friend. The US government cannot rely on Microsoft enough. Those contract renewals keep on rolling. Furthermore, the “certified” partners could not be more thrilled. Virtually every customer and prospect wants to do something with AI. When the blind lead the blind, a person with really bad eyesight has an advantage. That’s Microsoft. Like it or not.
Third, the pitch about “transforming your company” is baloney. But it sounds good. It helps a company do something “new” but within the really familiar confines of Microsoft software. In the good old days, it was IBM that provided the cover for doing something, anything, which could produce a marketing opportunity or a way to add a bit of pizazz to a 1955 Chevrolet two-door 210 sedan. Thus, whether the AI works or does not work, one must not lose sight of the fact that Microsoft-centric outfits are going to go with Microsoft because most professionals need PowerPoint and the bean counters do not understand anything except Excel. What strikes me as important is that Microsoft can use modest, even inept, smart software and come out a winner. Who is complaining? The Fortune 1000, the US Federal government, the legions of MBA students who cannot do a class project without Excel, PowerPoint, and Word?
Finally, the ultimate reference in the quote is Clippy. Personally I think the big dog at Salesforce should have invoked both Bob and Clippy. Regardless of the “joke” hooked to these somewhat flawed concepts, the names “Bob” and “Clippy” have resonance. Bob rolled out in 1995. Clippy helped so many people beginning in the same year. Decades later Microsoft’s really odd software is going to cause a 20 something who was not born to turn away from Microsoft products and services? Nope.
Let’s sum up: Salesforce is working hard to get a marketing lift by making Microsoft look stupid. Believe me. Microsoft does not need any help. Perhaps the big dog should come up with a marketing approach that replicates or comes close to what Microsoft pulled off in 2023. Google still hasn’t recovered fully from that kung fu blow.
The big dog needs to up its marketing game. Say Salesforce and what’s the reaction? Maybe meh.
Stephen E Arnold, November 1, 2024
Google Goes Nuclear For Data Centers
October 31, 2024
From the Future-Is-Just-Around-the-Corner Department:
Pollution is blamed on consumers who are told to cut their dependency on plastic and drive less, while mega corporations and tech companies are the biggest polluters in the world. Some of the biggest users of energy are data centers and Google decided to go nuclear to help power them says Engadget: “Google Strikes A Deal With A Nuclear Startup To Power Its AI Data Centers.”
Google is teaming up with Kairos Power to build seven small nuclear reactors in the United States. The reactors will power Google’s AI drive and add 500 megawatts. The first reactor is expected to be built in 2030, with the plan to finish the rest by 2035. The reactors are called small modular reactors, or SMRs for short.
Google’s deal with Kairos Power would be the first corporate deal to buy nuclear power from SMRs. The small reactors are built inside a factory instead of on site, so their construction cost is lower than that of a full power plant.
“Kairos will need the US Nuclear Regulatory Commission to approve design and construction permits for the plans. The startup has already received approval for a demonstration reactor in Tennessee, with an online date targeted for 2027. The company already builds test units (without nuclear-fuel components) at a development facility in Albuquerque, NM, where it assesses components, systems and its supply chain.
The companies didn’t announce the financial details of the arrangement. Google says the deal’s structure will help to keep costs down and get the energy online sooner.”
These tech companies say they’re green, but now they are contributing more to global warming with their AI data centers and potential nuclear waste. At least nuclear energy is more powerful and doesn’t contribute as much as coal or natural gas to pollution, except when the reactors melt down. Amazon is pursuing a similar deal, too.
Has Google made the engineering shift from moon shots to environmental impact statements, nuclear waste disposal, document management, assorted personnel challenges? Sure, of course. Oh, and one trivial question: Is there a commercially available and certified miniature nuclear power plant? Russia may be short on cash. Perhaps someone in that country will sell a propulsion unit from those super reliable nuclear submarines? Google can just repurpose it in a suitable data center. Maybe one in Ashburn, Virginia?
Whitney Grace, October 31, 2024
The Sweet Odor of Musk
October 31, 2024
The old Twitter was a boon for academics. It was a virtual gathering place where they could converse with each other, the general public, and even lawmakers. Information was spread and discussed far and wide. The platform was also a venue for conducting online research. Now, though, scholars seem to be withering under the “Musk effect.” Cambridge University Press shares its researchers’ paper, “The Vibes Are Off: Did Elon Musk Push Academics Off Twitter?”
The abstract begins by noting several broad impacts of Twitter’s transition to “X,” as Elon Musk has renamed it: Most existing employees were laid off. Access to its data was monetized. Its handling of censorship and misinformation was upended, and its affordances shifted. But the scope of this paper is narrower. Researchers James Bisbee and Kevin Munger set out to answer:
“What did Elon Musk’s takeover of the platform mean for this academic ecosystem? Using a snowball sample of more than 15,700 academic accounts from the fields of economics, political science, sociology, and psychology, we show that academics in these fields reduced their ‘engagement’ with the platform, measured by either the number of active accounts (i.e., those registering any behavior on a given day) or the number of tweets written (including original tweets, replies, retweets, and quote tweets).”
Why did scholars disengage? The “Musk Effect,” as the paper calls it, was a mix of factors. Changes to the verification process and account-name rules were part of it. Many were upset when Musk nixed the free API they’d relied on for research in a range of fields. But much of it was simply a collective disgust at the new owner’s unscientific nature, childishness, and affinity for conspiracy theories. The researchers write:
“We argue that a combination of these features of the threat and then the reality of Musk’s ownership of the Twitter corporation influenced academics either to quit Twitter altogether or at least reduce their engagement with the platform (i.e., ‘disengage’). The policy changes and personality of Twitter’s new owner were difficult to avoid and may have made the experience of using the platform less palatable. Conversely, these same attributes may have stimulated a type of ideological boycott, in which academics disengaged with Twitter as a political strategy to indicate their intellectual and moral opposition.”
See the paper for a description of its methodology, the detailed results (complete with charts), and a discussion of the factors behind the Musk Effect. It also describes the role pre-X Twitter played in academic research. Check out section 1 to learn what the scientific community lost when one bratty billionaire decided to make a spite purchase the size of a small country’s gross domestic product.
Cynthia Murrell, October 31, 2024
Secure Phones Keep Appearing
October 31, 2024
The KDE community has developed an open source interface for mobile devices called Plasma Mobile. It allegedly turns any phone into a virtual fortress, promising a “privacy-respecting, open source and secure phone ecosystem.” This project is based on the original Plasma for desktops, an environment focused on security and flexibility. As with many open-source projects, Plasma Mobile is an imperfect work in progress. We learn:
“A pragmatic approach is taken that is inclusive to software regardless of toolkit, giving users the power to choose whichever software they want to use on their device. … Plasma Mobile is packaged in multiple distribution repositories, and so it can be installed on regular x86 based devices for testing. Have an old Android device? postmarketOS, is a project aiming to bring Linux to phones and offers Plasma Mobile as an available interface for the devices it supports. You can see the list of supported devices here, but on any device outside the main and community categories your mileage may vary. Some supported devices include the OnePlus 6, Pixel 3a and PinePhone. The interface is using KWin over Wayland and is now mostly stable, albeit a little rough around the edges in some areas. A subset of the normal KDE Plasma features are available, including widgets and activities, both of which are integrated into the Plasma Mobile UI. This makes it possible to use and develop for Plasma Mobile on your desktop/laptop. We aim to provide an experience (with both the shell and apps) that can provide a basic smartphone experience. This has mostly been accomplished, but we continue to work on improving shell stability and telephony support. You can find a list of mobile friendly KDE applications here. Of course, any Linux-based applications can also be used in Plasma Mobile.”
KDE states its software is “for everyone, from kids to grandparents and from professionals to hobbyists.” However, it is clear that being an IT professional would certainly help. Is Plasma Mobile as secure as they claim? Time will tell.
Cynthia Murrell, October 31, 2024
FOGINT: ANKR and TON Hook Up
October 30, 2024
A humanoid wrote this essay. I tried to get MSFT Copilot to work, but it remains dead. That makes four days with weird messages about a glitch. That’s the standard: Good enough.
The buzzwords “DePIN” and “SNAS” may not be familiar to some cyber investigators. The first refers to an innovation which ANKR embraces. A DePIN is a decentralized physical infrastructure: a network of nodes which can be geographically distributed. Because workloads are virtualized rather than tied to one physical server, an operator can say, “We don’t know what’s on the hardware a customer licenses and configures.” There is no there there becomes more than a quip about Oakland, California. The SNAS is a consequence of DePIN-type architecture. The SNAS is a super network as a service: a customer can rent big-bang systems and leave the hands-on work to the ANKR team.
Why am I mentioning a start up operating in Romania?
The answer is that ANKR has cut a deal with The One Network Foundation. This entity was created after Telegram had its crypto plans derailed by the US Securities & Exchange Commission several years ago. The TONcoin is now “open” and part of the “open” One Network Foundation entity. TON, as of October 24, 2024, is directly accessible through ANKR’s Web3 API (application programming interface).
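For readers unfamiliar with how a Web3 API of this kind is typically consumed, the sketch below shows the generic JSON-RPC 2.0 request shape that such gateways accept. The endpoint URL and method name are hypothetical placeholders for illustration, not documented ANKR or TON values.

```python
import json

def build_jsonrpc_request(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request body, the wire format most
    Web3 API gateways accept over HTTPS POST."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params if params is not None else [],
    }

# Hypothetical endpoint and method name, for illustration only.
ENDPOINT = "https://rpc.example.com/ton"  # placeholder, not a real ANKR URL
payload = build_jsonrpc_request("getMasterchainInfo")
body = json.dumps(payload)
# An HTTP POST of `body` to ENDPOINT would return the node's response;
# the network call is omitted here.
print(body)
```

The point is that a DePIN-backed API looks no different to the caller than any other RPC endpoint; the decentralized node network sits entirely behind the gateway.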
The Telegram organization allows TONcoin to “run” on the Telegram blockchain via the Open Network Foundation based in Zug, Switzerland. The plumbing is Telegram; the public face of the company is the Zug outfit. With Mr. Durov’s remarkable willingness to modify how the company responds to law enforcement, there is pressure on the Telegram leadership to make TONcoin the revenue winner.
ANKR is an important tie up. It may be worth watching.
Stephen E Arnold, October 30, 2024
Bookmark This: HathiTrust Digital Library
October 30, 2024
Concerned for the Internet Archive? So are we. (For multiple reasons.) But while that venerable site recovers from its recent cyberattacks, remember Hathi exists. Founded in 2008, the not-for-profit HathiTrust Digital Library is a collaborative of academic and research libraries. The site makes millions of digitized items available for study by humans as well as for data mining. The site shares the collection’s story:
“HathiTrust’s digital library came into being during the mid-2000s when companies such as Google began scanning print titles from the shelves of university and college campus libraries. When many of those same libraries created HathiTrust in 2008, they united library copies of those digitized books into a single, shared collection to make as much of the collection available for access as allowable by copyright law. Through HathiTrust, libraries collaborate on long-term management, preservation, and access of their collections. Book lovers and researchers like you can explore this huge collection of digitized materials! Today, HathiTrust Digital Library is the largest set of digitized books managed by academic and research libraries. The collection includes materials typically found on the shelves of North American university and college campuses with the benefit of being available online instead of scattered in buildings around the globe. Our enormous collection includes thousands of years of human knowledge and published materials from around the world, selected by librarians and preserved in the libraries of academic and research libraries. You can find all kinds of digitized books and primary source materials to suit a wide range of research needs.”
The collection contains books and “book-like” items—basically anything except audio/visual files. All Library of Congress subjects are represented, but the largest treasures lie in the Language & Literature, Philosophy, Religion, History, and Social Sciences chambers. All volumes not restricted by copyright are free for anyone to read. Just over half the works are in English, while the rest span over 400 languages, including some that are now extinct. Ninety-five percent were scanned from print by Google, but a few specialized collections were contributed by individuals or institutions. The Collection page offers several sample collections to get you started, or you can build your own. Have fun browsing their collections, and with luck the Internet Archive will be back up and running in no time.
Cynthia Murrell, October 30, 2024
PrivacyTools.io: A Good Resource for Privacy Tools and Services
October 30, 2024
Keeping up with the latest in global mass surveillance by private and state-sponsored groups can be a challenge. Here is a resource that can help: Privacy Tools evaluates the many tools designed to fight mass surveillance and highlights the best on its website. Its Home page lists its many clickable categories on the left and describes the criteria by which the site evaluates privacy tools and services. It also educates visitors on surveillance issues and why even those with “nothing to hide” should be concerned. It specifies:
“Many of the activities we carry out on the internet leave a trail of data that can be used to track our behavior and access some personal information. Some of the activities that collect data include credit card transactions, GPS, phone records, browsing history, instant messaging, watching videos, and searching for goods. Unfortunately, there are many companies and individuals on the internet that are looking for ways to collect and exploit your personal data to their own benefit for issues like marketing, research, and customer segmentation. Others have malicious intentions with your data and may use it for phishing, accessing your banking information or hacking into your online accounts. Businesses have similar privacy issues. Malicious entities could be looking for ways to access customer information, steal trade secrets, stop networks and platforms such as e-commerce sites from operating and disrupt your operations.”
The site’s list of solutions to these threats is long. Some are free and some are not. And which to choose will differ depending on one’s situation. One way to simplify the selection is with the group’s specific Privacy Guides—collections of tools for specific concerns. Categories currently include Android, Encryption, Network, Smartphones, Tor Browser, and Tracking, to name a few. This is a handy way to narrow down the many solutions featured on the site. A worthy undertaking since, as the site emphasizes, “You are being watched.”