Microsoft: That Old Time Religion Which Sort of Works
November 15, 2024
Having a favorite OS can be akin to belonging to a technology cult or following a popular religion. Apple people are experienced enthusiasts, Linux users are the odd ones with a secret language and handshakes, while Microsoft is vanilla with diehard followers. Microsoft apparently wants its users and employees to embrace this mantra and feed into it, says Edward Zitron of Where’s Your Ed At? in the article “The Cult Of Microsoft.”
Zitron reviewed hundreds of Microsoft’s internal documents and spoke with employees about the company culture. He learned that Microsoft subscribes to “The Growth Mindset,” and how fully one embraces it determines how far one will go within the hallowed Redmond halls. Mindset theory posits two types: the growth mindset, in which you can learn and change to continue progressing, and the fixed mindset, in which you believe abilities are immutable.
Satya Nadella even wrote a bible of sorts, Hit Refresh, that discusses The Growth Mindset. Zitron purports that Nadella wanted to set himself up as a messianic figure and used his position to claim a place at the top of the bestseller list. How? He “urged” his Microsoft employees to discuss Hit Refresh with as many people as possible. The communication methods he had his associates use were like a pyramid scheme, aka a multi-level marketing ploy.
Microsoft is as fervent about following The Growth Mindset as women once were about selling Mary Kay and Avon products. The problem, Zitron reports, is that it has little to do with actual improvement. The Growth Mindset cannot be replicated without the presence of its original creator.
“In other words, the evidence that supports the efficacy of mindset theory is unreliable, and there’s no proof that this actually improves educational outcomes. To quote Wenner Moyer:
‘MacNamara and her colleagues found in their analysis that when study authors had a financial incentive to report positive effects — because, say, they had written books on the topic or got speaker fees for talks that promoted growth mindset — those studies were more than two and half times as likely to report significant effects compared with studies in which authors had no financial incentives.’
Turning to another view: Wenner Moyer’s piece is a balanced rundown of the chaotic world of mindset theory, counterbalanced with a few studies where there were positive outcomes, and focuses heavily on one of the biggest problems in the field — the fact that most of the research is meta-analyses of other people’s data…”
Microsoft has employees write biannual self-performance reviews called Connects. Everyone hates them, but if employees want raises and to keep their jobs, they have to fill out those forms. What’s even more demeaning is that Copilot is being used to write the Connects, and it throws out random metrics and achievements that have no basis in fact.
Is the approach similar to a virtual pyramid scheme? Are employees taught or hired to externalize their successes and internalize their failures? If so, the Big Book of MSFT provides grounding in the Redmond way.
Mr. Nadella strikes me as having adopted the principles and mantra of a cult. Will the EU and other regulatory authorities bow before the truth or act out their heresies?
Whitney Grace, November 15, 2024
A Digital Flea Market Tests Smart Software
November 14, 2024
Sales platform eBay has learned some lessons about deploying AI. The company tested three methods and shares its insights in the post, “Cutting Through the Noise: Three Things We’ve Learned About Generative AI and Developer Productivity.” Writer Senthil Padmanabhan explains:
“Through our AI work at eBay, we believe we’ve unlocked three major tracks to developer productivity: utilizing a commercial offering, fine-tuning an existing Large Language Model (LLM), and leveraging an internal network. Each of these tracks requires additional resources to integrate, but it’s not a matter of ranking them ‘good, better, or best.’ Each can be used separately or in any combination, and bring their own benefits and drawbacks.”
The company could have chosen from several existing commercial AI offerings. It settled on GitHub Copilot for its popularity with developers; that, and the eBay codebase was already on GitHub. The team found the tool boosted productivity and produced mostly accurate documentation (70%) and code (60%). The only problem: Copilot’s limited data processing ability makes it impractical for some applications. For now.
To tweak and train an open source LLM, the team chose Code Llama 13B. They trained the camelid on eBay’s codebase and documentation. The resulting tool reduced the time and labor required to perform certain tasks, particularly software upkeep. It could also sidestep a problem for off-the-shelf options: because it can be trained to access data across internal services and within non-dependent libraries, it can get to data the commercial solutions cannot find. Thereby, code duplication can be avoided. Theoretically.
Finally, the team used Retrieval Augmented Generation (RAG) to synthesize documentation across disparate sources into one internal knowledge base. Each piece of information entered into systems like Slack, Google Docs, and wikis automatically received its own vector, which was stored in a vector database. When they queried their internal GPT, it quickly pulled together an answer from all available sources. This reduced the time and frustration of manually searching through multiple systems for an answer. Just one little problem: sometimes the AI’s responses were nonsensical. Were any just plain wrong? Padmanabhan does not say.
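Stripped to its essentials, the flow described above looks something like the sketch below. Everything in it (the toy embed function, the sample documents, the retrieve helper) is invented for illustration and is not eBay’s actual system; a production deployment would use learned embeddings and a real vector database rather than a word-count vector and a plain list.

```python
# Toy sketch of the RAG retrieval step -- illustrative only, not eBay's code.
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'vector': word counts. Real systems use dense learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each entry from Slack, Google Docs, wikis, etc. gets its own vector.
docs = [
    ("slack", "deploy the search service with the blue green script"),
    ("wiki", "vacation policy and holiday calendar for employees"),
    ("gdocs", "how to rotate api keys for the payments service"),
]
index = [(src, text, embed(text)) for src, text in docs]

def retrieve(query, k=2):
    """Rank the stored entries by similarity to the query vector."""
    q = embed(query)
    ranked = sorted(index, key=lambda d: cosine(q, d[2]), reverse=True)
    return [(src, text) for src, text, _ in ranked[:k]]

# In a real pipeline, the top hits become context for the LLM's answer.
print(retrieve("how do I deploy the search service"))
```

The same pattern scales up: swap embed for an embedding model, swap the list for a vector store, and hand the retrieved snippets to the internal GPT to compose a single answer.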
The post concludes:
“These three tracks form the backbone for generative AI developer productivity, and they keep a clear view of what they are and how they benefit each project. The way we develop software is changing. More importantly, the gains we realize from generative AI have a cumulative effect on daily work. The boost in developer productivity is at the beginning of an exponential curve, which we often underestimate, as the trouble with exponential growth is that the curve feels flat in the beginning.”
Okay, sure. It is all up from here. Just beware of hallucinations along the way. After all, that is one little detail that still needs to be ironed out.
Cynthia Murrell, November 14, 2024
Smart Software: It May Never Forget
November 13, 2024
A recent paper challenges the big dogs of AI, asking, “Does Your LLM Truly Unlearn? An Embarrassingly Simple Approach to Recover Unlearned Knowledge.” The study was performed by a team of researchers from Penn State, Harvard, and Amazon and published on research platform arXiv. True or false, it is a nifty poke in the eye for the likes of OpenAI, Google, Meta, and Microsoft, who may have overlooked the obvious. The abstract explains:
“Large language models (LLMs) have shown remarkable proficiency in generating text, benefiting from extensive training on vast textual corpora. However, LLMs may also acquire unwanted behaviors from the diverse and sensitive nature of their training data, which can include copyrighted and private content. Machine unlearning has been introduced as a viable solution to remove the influence of such problematic content without the need for costly and time-consuming retraining. This process aims to erase specific knowledge from LLMs while preserving as much model utility as possible.”
But AI firms may be fooling themselves about this method. We learn:
“Despite the effectiveness of current unlearning methods, little attention has been given to whether existing unlearning methods for LLMs truly achieve forgetting or merely hide the knowledge, which current unlearning benchmarks fail to detect. This paper reveals that applying quantization to models that have undergone unlearning can restore the ‘forgotten’ information.”
Oops. The team found as much as 83% of data thought forgotten was still there, lurking in the shadows. The paper offers an explanation for the problem and suggestions to mitigate it. The abstract concludes:
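To see how quantization could resurrect “forgotten” data, consider a toy model of the mechanism. This is my simplification, not the paper’s actual experiment: it merely assumes that unlearning nudges weights by less than half a quantization step.

```python
# Simplified illustration, NOT the paper's experiment: if "unlearning"
# only nudges weights a little, low-bit quantization can round the
# nudged weights right back onto their pre-unlearning values.

def quantize(w, step=0.25):
    """Snap a weight to the nearest level on a coarse quantization grid."""
    return round(w / step) * step

original = [0.50, -0.75, 1.00]   # weights encoding the "forgotten" content
nudges = [0.04, -0.03, 0.05]     # small perturbations applied by unlearning
unlearned = [w + d for w, d in zip(original, nudges)]

recovered = [quantize(w) for w in unlearned]
print(recovered == original)  # True: quantization undid the unlearning
```

The full-precision model looks like it forgot; the quantized copy remembers.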
“Altogether, our study underscores a major failure in existing unlearning methods for LLMs, strongly advocating for more comprehensive and robust strategies to ensure authentic unlearning without compromising model utility.”
See the paper for all the technical details. Will the big tech firms take the researchers’ advice and improve their products? Or will they continue letting their investors and marketing departments lead them by the nose?
Cynthia Murrell, November 13, 2024
The Bezos Bulldozer Could Stall in a Nuclear Fuel Pool
November 11, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
Microsoft is going to flip a switch, and one of Three Mile Island’s nuclear units will blink on. Yeah. Google is investing in small nuclear power units: buy one, haul it to the data center of your choice, and plug it in. Shades of Tesla thinking. Amazon has also been fascinated by Cherenkov radiation, which is blue like Jack Benny’s eyes.
A physics amateur learned about 880 volts by reading books on his Kindle. Thanks, MidJourney. Good enough.
Are these PR-tinged information nuggets for real? Sure, absolutely. The big tech outfits are able to do anything, maybe not well, but everything. Almost.
The “trusted” real news outfit (Thomson Reuters) published “US Regulators Reject Amended Interconnection Agreement for Amazon Data Center.” The story reports as allegedly accurate information:
U.S. energy regulators rejected an amended interconnection agreement for an Amazon data center connected directly to a nuclear power plant in Pennsylvania, a filing showed on Friday. Members of the Federal Energy Regulatory Commission said the agreement to increase the capacity of the data center located on the site of Talen Energy’s Susquehanna nuclear generating facility could raise power bills for the public and affect the grid’s reliability.
Amazon was not inventing a functional modular nuclear reactor using the better fuel option, thorium. No. Amazon just wanted to run a few of those innocuous high-voltage transmission lines, plug in a converter readily available from one of Amazon’s third-party merchants, and power a data center chock full of dolphin-loving servers, storage devices, and other gizmos. What’s the big deal?
The write up does not explain what “reliability” and “national security” mean. Let’s just accept these as words which roughly translate to “unlikely.”
Is this an issue that will go away? My view is, “No.” Nuclear engineers are not widely represented among the technical professionals engaged in selling third-party vendors’ products, figuring out how to make Alexa into a barn burner of a product, or forcing Kindle users to smash their devices in frustration when trying to figure out what’s on their Kindle and what’s in Amazon’s increasingly bizarro cloud system.
Can these companies become nuclear adepts? Sure. Will that happen quickly? Nope. Why? Nuclear is a specialized field and involves a number of quite specific scientific disciplines. But Amazon can always ask Alexa and point to its Ring doorbell system as the solution to security concerns. That approach will impress regulatory authorities.
Stephen E Arnold, November 11, 2024
Instagram Does the YouTube Creator Fear Thing
November 11, 2024
Instagram influencers are enraged by CEO Adam Mosseri’s bias toward popular content. According to the BBC in “Instagram Lowering Quality Of Less Viewed Videos ‘Alarming’ Creators”, video quality is lowered for older, less popular videos, while more popular content gets the HD white-glove treatment. Influencers are upset over this “discrimination,” especially those who depend on Instagram over other platforms for their income.
The influencers view the lower quality output as harmful; it degrades the quality of their original art. Mosseri argues that most influencers have their videos watched soon after publication, and the only videos affected are older ones that no longer receive many views. While that sounds logical, it could also create a cycle that benefits only a few influencers:
Social media consultant Matt Navarra told the BBC the move ‘seems to somewhat contradict Instagram’s earlier messages or efforts to encourage new creators’.
“How can creators gain traction if their content is penalized for not being popular?” he said. He added that it could risk creating a cycle of more established creators reaping the rewards of higher engagement from viewers over those trying to build their following.
Instagram is lowering the quality to save on costs. It always comes down to money, doesn’t it? When asked about that, Mosseri said viewers are more interested in a video’s content than its image quality. Navarra agreed with that statement:
“He [Navarra] said creators should focus on how they can make engaging content that caters to their audience, rather than be overly concerned by the possibility of its quality being degraded by Instagram.”
Navarra’s right. Video quality will be decent and not poor like a cathode-ray tube TV. The creators should focus on building themselves and not investing all of their creative energy into one platform. Diversify!
Whitney Grace, November 11, 2024
FOGINT: Crypto Is a Community Builder
November 9, 2024
CreationNetwork.ai: A One-Stop Shop for All Your Digital Needs. And Crypto, Too!
Here is an interesting development. “CreationNetwork.ai Emerges As a Leading AI-Powered Platform, Integrating Over Twenty Two Tools,” reports HackerNoon. The AI aggregator uses Telegram plus other social media to push its service. Furthermore, the company is integrating crypto into its business plan. We expect these “blending” plays will become more common. The Chainwire press release says about this one:
“As an all-in-one solution for content creation, e-commerce, social media management, and digital marketing, CreationNetwork.ai combines 22+ proprietary AI-powered tools and 29+ platform integrations to deliver the most extensive digital ecosystem available. … CreationNetwork.ai’s suite of tools spans every facet of digital engagement, equipping users with powerful AI technologies to streamline operations, engage audiences, and optimize performance. Each tool is meticulously designed to enhance productivity and efficiency, making it easy to create, manage, and analyze content across multiple channels.”
See the write-up for a list of the tools included in CreationNetwork.ai, from AI Copywriter to Team-Powered Branding. The hefty roster of platform connections is also specified, including obvious players: all the major social media platforms, the biggest e-commerce platforms, and content creation tools like Canva, Grammarly, Adobe Express, Unsplash, and Dropbox. We learn:
“One of the most distinguishing features of CreationNetwork.ai is its extensive integration network. With over 29 integrations, users can synchronize their digital activities across major social media, e-commerce, and content platforms, providing centralized management and engagement capabilities. … This integration network empowers users to manage their brand presence across platforms from a single, unified dashboard, significantly enhancing efficiency and reach.”
Nifty. What a way to simplify digital processes for users. And to make it harder for new services to break into the market. But what groundbreaking platform would be complete without its own cryptocurrency? The write-up states:
“In preparation for its Initial Coin Offering (ICO), CreationNetwork.ai is launching a $750,000 CRNT Token Airdrop to reward early supporters and incentivize participation in the CreationNetwork.ai ecosystem. Qualified participants can secure their position by following CreationNetwork.ai’s social media accounts and completing the whitelist form available on the official website. This initiative highlights CreationNetwork.ai’s commitment to building a strong, engaged community.”
Crypto — The community builder.
Cynthia Murrell, November 9, 2024
Let Them Eat Cake or Unplug: The AI Big Tech Bro Effect
November 7, 2024
I spotted a news item which will zip right by some people. The “real” news outfit owned by the lovable Jeff Bezos published “As Data Centers for AI Strain the Power Grid, Bills Rise for Everyday Customers.” The write up tries to explain that AI costs for electric power are being passed along to regular folks. Most of these electricity-dependent people do not take home paychecks of tens of millions of dollars the way breadwinners of the Nadella, Zuckerberg, or Pichai type do. Heck, these AI poohbahs think about buying modular nuclear power plants. (I want to point out that these do not exist and may not for many years.)
The article is not going to thrill the professionals who are experts on utility demand and pricing. Those folks know that the smart software poohbahs have royally screwed up some weekends and vacations for the foreseeable future.
The WaPo article (presumably blessed by St. Jeffrey) says:
The facilities’ extraordinary demand for electricity to power and cool computers inside can drive up the price local utilities pay for energy and require significant improvements to electric grid transmission systems. As a result, costs have already begun going up for customers — or are about to in the near future, according to utility planning documents and energy industry analysts. Some regulators are concerned that the tech companies aren’t paying their fair share, while leaving customers from homeowners to small businesses on the hook.
Okay, typical “real” journospeak. “Costs have already begun going up for customers.” Hey, no kidding. The big AI parade began with the January 2023 announcement that the Softies were going whole hog on AI. The lovable Google immediately flipped into alert mode. I can visualize flashing yellow LEDs and faux red stop lights blinking in the gray corridors in Shoreline Drive facilities if there are people in those offices again. Yeah, ghostly blinking.
The write up points out, rather unsurprisingly:
The tech firms and several of the power companies serving them strongly deny they are burdening others. They say higher utility bills are paying for overdue improvements to the power grid that benefit all customers.
Who wants PEPCO and VEPCO to kill their service? Actually, no one. Imagine life in NoVa, DC, and the ever lovely Maryland without power. Yikes.
From my point of view, informed by some exposure to the utility sector at a nuclear consulting firm and then at a blue chip consulting outfit, here’s the scoop.
The demand planning done with rigor by US utilities took a hit each time the Big Dogs of AI brought more specialized, power-hungry servers online and — here’s the killer, folks — left them on. The way power consumption used to work is that during the day, consumer usage would fall and business/industry usage would rise. The power-hogging steel industry was a 24×7 outfit. But over the last 40 years, manufacturing has wound down and consumer demand has crept upwards. The curves had to be plotted and the demand projected, but, in general, life was not too crazy for the US power generation industry. Sure, there were the costs associated with decommissioning “old” nuclear plants and expanding new non-nuclear facilities with expensive but mandated environmental gewgaws, gadgets, and gizmos plugged in to save the snail darters and the frogs.
Since January 2023, demand has been curving upwards. Power generation outfits don’t want to miss out on revenue. Therefore, some utilities have worked out what I would call sweetheart deals for electricity for AI-centric data centers. Some of these puppies suck more power in a day than a dying city located in Flyover Country in Illinois.
Plus, these data centers are not enough. Each quarter the big AI dogs explain that more billions will be pumped into AI data centers. Keep in mind: These puppies run 24×7. The AI wolves have worked out discount rates.
What do the US power utilities do? First, the models have to be reworked. Second, the relationships to trade, buy, or “borrow” power have to be refined. Third, capacity has to be added. Fourth, the utility rate people create a consumer pricing graph which may look like this:
Guess who will pay? Yep, consumers.
The red line is the projected post-AI power demand from the AI big dogs. For comparison, the blue line shows the demand curve before Microsoft ignited the AI wars. The gray line is the monthly electricity bill for Bob and Mary Normcore today, and the nuclear-purple line shows what is and will continue to happen to those consumer electricity costs.
The graph shows that the cost will be passed to consumers. Why? The sweetheart deals to land the Big Dog power generation contracts mean guaranteed cash flow and a hurdle for a low-ball competitor to lumber over. Utilities and power generation outfits are not the Neon Deions of American business.
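A back-of-the-envelope sketch makes the cost shifting concrete. Every number below is invented for illustration; none is drawn from an actual tariff filing.

```python
# Invented numbers; illustrates the cost-shifting claim, not any real tariff.
grid_cost = 1_000_000.0            # monthly fixed cost the utility must recover ($)
dc_share, home_share = 0.40, 0.60  # data-center vs household share of the load
discount = 0.30                    # sweetheart discount on the data-center rate

dc_pays = grid_cost * dc_share * (1 - discount)  # $280,000 from the data center
homes_pay = grid_cost - dc_pays                  # $720,000 lands on households
baseline = grid_cost * home_share                # $600,000 without the discount
print(f"household bills rise {homes_pay / baseline - 1:.0%}")
```

Under these made-up assumptions, a 30% discount on 40% of the load pushes household bills up 20%. The exact figures change with the shares and the discount, but the direction does not.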
There will be hand waving by regulators. Some city government types will argue, “We need the data centers.” Podcasts and posts on social media will sprout like weeds in an untended field.
Net net: Bob and Mary Normcore may have to decide between food and electricity. AI is wonderful, right?
Stephen E Arnold, November 7, 2024
Google: The Intellectual Oakland Because There Is No There There, Just Ads
November 7, 2024
The post is the work of a humanoid who happens to be a dinobaby. GenX, Y, and Z, read at your own risk. If art is included, smart software produces these banal images.
I read a clever essay titled “I Attended Google’s Creator Conversation Event, And It Turned Into A Funeral.” Be aware that the text can disappear as a big gray box covers the essay. Just read quickly.
The report explains that a small group of problematic content creators found their sites or other content effectively made invisible in Google search results. Boom. Videos disappear. Boom. Key words no longer retrieve a Web site. So Google did the politically correct thing and got a former conference organizer to round up some of the disaffected for a meeting to discuss content getting disappeared. No real reasons are given by Google, but the essay recounts the experience of one intrepid optimist who thought, “This time Google will be different.”
A very professional Googler evades a question. A person in the small group meeting asks it again. She is not happy with Google’s evasiveness. Well, too bad. Thanks, Midjourney. MSFT Copilot is still dead.
Nope.
The write up does a good job of capturing the nothingness of a Google office. If you have not been to one, try to set up a meeting. Good luck.
I want to focus on a couple of points in the essay and then offer a handful of observations.
I noted this statement:
During this small group discussion, I and others tried to get our Googlers to address the biggest problem facing our industry: Google giving big brands special treatment. Each time a site owner brought up the topic, we were quickly steered in another direction.
Google wants to be, or assumes it is, in control of everything most of the time. Losing control means one is not a true Googler. The people at this Google event learned that non-Googlers and their questions are simply not relevant. Google goes through certain theatrical events to check off a task.
Also, I circled this comment:
…we then asked the only question that mattered: Why has Google shadow banned our sites? Google’s Chief Search Scientist answered this question using a strategy based around gaslighting and said they hadn’t. Google doesn’t ever derank an entire site, only individual pages, he said. There is no site-wide classifier. He insisted it is only done at the page level.
This statement is accurate. A politically correct answer is one that does not reveal Google’s intent for anything. Individuals in charge of a project or program usually do not know what is going on with that program. Information is shared via the in house communications systems, email or text messages which are viewed as potential problems if released to those outside the Google, and meetings which are often ad hoc or without an agenda. The company sells advertising and reacts to what appear to be threats, legal actions, or ways to make money.
I noted this statement:
someone bluntly asked, since nothing is wrong with our sites, how do we recover? Google’s elderly Chief Search Scientist answered, without an ounce of pity or concern, that there would be updates but he didn’t know when they’d happen or what they’d do. Further questions on the subject were met with indifference as if he didn’t understand why we cared. He’d gotten the information he wanted. The conference was over. I don’t think he even said thanks.
This is accurate. Why should a member of leadership care about a “user”? Clicks produce data. The data and attendant content are processed to make more money. That’s why customer service is limited to big budget advertisers who spend real money on Google advertising.
Several observations:
- Google is big (150,000 or more employees). Google is chaotic. Individuals, groups, and entire divisions are not sure what is going on. How many messaging apps did Google have at one time? Lots. Why? No one knows or knew. The people coming to the meeting about finding themselves invisible in the Google finding systems assumed that a big company was not chaotic. Now the attendees know.
- People who create content have replaced people who used to get paid to create content. Now the “creators” work for the hope of Google advertising money. What’s the percentage paid to a “creator”? Try to find those data, gentle reader. Google does what it does: Individual Googlers or a couple of Googlers set up a system and go to play Foosball. That means 149,998 colleagues have zero idea what’s happening. Content “creators” expect someone to be responsible and to know how systems work. The author of the essay now knows only a couple of people may know the answer to the question. If those people quit, one will never know.
- The people who use Google to find relevant information are irrelevant as individuals. The person who wants to find a pizza in Cleveland may find only the pizza a person working on Google Local for Cleveland may like. If that pizza joint spends a couple of thousand per month on Google ads, that pizza may be findable. Most people do not understand the linkages between search engine optimization, Google advertising sales, and the Google mostly automated Google ad auctioning system. One search engineer working from home can have quite an impact on people who make content and assume that Google will make it findable. The author of the article knows this assumption about Google is a fairy tale.
Google has been labeled a monopoly. Google is suggesting that if the company is held accountable for its behaviors, the US will lose its foothold in artificial intelligence. Brilliant argument. Google has employees who have won a Nobel Prize. People not held accountable often lose sight of many things.
That’s why the big meeting was dumped into the task list of a person who ran search engine optimization conferences. One does not pray for that individual; one does not go to meetings managed by such a professional.
Stephen E Arnold, November 7, 2024
Hey, US Government, Listen Up. Now!
November 5, 2024
This post is the work of a dinobaby. If there is art, accept the reality of our using smart art generators. We view it as a form of amusement.
Microsoft on the Issues published “AI for Startups.” The write up is authored by a dream team of individuals deeply concerned about the welfare of their stakeholders, themselves, and their corporate interests. The sensitivity is on display. Who wrote the 1,400-word essay? Setting aside the lawyers, PR people, and advisors, the authors are:
- Satya Nadella, Chairman and CEO, Microsoft
- Brad Smith, Vice-Chair and President, Microsoft
- Marc Andreessen, Cofounder and General Partner, Andreessen Horowitz
- Ben Horowitz, Cofounder and General Partner, Andreessen Horowitz
Let me highlight a couple of passages from essay (polemic?) which I found interesting.
In the era of trustbusters, some of the captains of industry had firm ideas about the place government professionals should occupy. Look at the railroads. Look at cyber security. Look at the folks living under expressway overpasses. Tumultuous times? That’s on the money. Thanks, MidJourney. A good enough illustration.
Here’s the first snippet:
Artificial intelligence is the most consequential innovation we have seen in a generation, with the transformative power to address society’s most complex problems and create a whole new economy—much like what we saw with the advent of the printing press, electricity, and the internet.
This is a bold statement of the thesis for these intellectual captains of the smart software revolution. I am curious about how one gets from hallucinating software to “the transformative power to address society’s most complex problems and create a whole new economy.” Furthermore, is smart software like printing, electricity, and the Internet? A fact or two might be appropriate. Heck, I would be happy with a nifty Excel chart of some supporting data. But why? This is the first sentence, so back off, you ignorant dinobaby.
The second snippet is:
Ensuring that companies large and small have a seat at the table will better serve the public and will accelerate American innovation. We offer the following policy ideas for AI startups so they can thrive, collaborate, and compete.
Ah, companies large and small and a seat at the table, just possibly down the hall from where the real meetings take place behind closed doors. And the hosts of the real meeting? Big companies like us. As the essay says, “that only a Big Tech company with our scope and size can afford, creating a platform that is affordable and easily accessible to everyone, including startups and small firms.”
The policy “opportunity” for AI startups includes many glittering generalities. The one I like is “help people thrive in an AI-enabled world.” Does that mean universal basic income as smart software “enhances” jobs with McKinsey-like efficiency? Hey, it worked for opioids. It will work for AI.
And what’s a policy statement without a variation on “May you live in interesting times”? The Microsoft a2z twist is, “We obviously live in a tumultuous time.” That’s why the US Department of Justice, the European Union, and a few other Luddites who don’t grok certain behaviors are interested in the big firms which can do smart software right.
Translation: Get out of our way and leave us alone.
Stephen E Arnold, November 5, 2024
Enter the Dragon: America Is Unhealthy
November 4, 2024
Written by a humanoid dinobaby. No AI except the illustration.
The YouTube video “A Genius Girl Who Is Passionate about Repairing Machines” presents a simple story in a 38-minute video. The idea is that a young woman, with no help, fixes a broken motorcycle with basic hand tools outside in what looks like a hoarder’s backyard. The message is: Wow, she is smart and capable. Don’t you wish you knew a person like this who could repair your broken motorcycle?
This video is from @vutvtgamming, and not much information is provided. After watching this and similar videos like “Genius Girl Restored The 280mm Lathe From 50 Years Ago And Made It Look Like”, I feel pretty stupid as an American dinobaby. I don’t think I can recall meeting a person with similar mechanical skills when I worked at Keystone Steel, Halliburton Nuclear, or Booz, Allen & Hamilton’s Design & Development division. The message I carried away was: I was stupid, as were many people with whom I associated.
Thanks, MSFT Copilot. Good enough. (I slipped a put down through your filters. Imagine that!)
I picked up a similar vibe when I read “Today’s AI Ecosystem Is Unsustainable for Most Everyone But Nvidia, Warns Top Scholar.” On the surface, the ZDNet write up is an interview with the “scholar” Kai-Fu Lee, who, according to the article:
served as founding director of Microsoft Research Asia before working at Google and Apple, founded his current company, Sinovation Ventures, to fund startups such as 01.AI, which makes a generative AI search engine called BeaGo.
I am not sure how “scholar” correlates with commercial work for US companies and running an investment firm with a keen interest in Chinese start-ups. I would not use the word “scholar.” My hunch is that Kai-Fu Lee’s intent is to present as simple and obvious something that US companies don’t understand. The interview is a different approach to showcasing how advanced Kai-Fu Lee’s expertise is. He is, via this interview, sharing an opinion that the US is creating a problem and overlooking the simple solution. Just like the young woman able to repair a motorcycle or the lass fixing up a broken industrial lathe alone, the American approach does not get the job done.
What does ZDNet present as Kai-Fu Lee’s message? Here are a couple of examples:
“The ecosystem is incredibly unhealthy,” said Kai-Fu Lee in a private discussion forum earlier this month. Lee was referring to the profit disparity between, on the one hand, makers of AI infrastructure, including Nvidia and Google, and, on the other hand, the application developers and companies that are supposed to use AI to reinvent their operations.
Interesting. I wonder if the “healthy” ecosystem might be China’s approach of pragmatism and nuts-and-bolts know-how evidenced in the referenced videos. The unhealthy-versus-healthy framing is a not-so-subtle message about digging one’s own grave, in my opinion. The “economics” of AI are unhealthy, which seems to say, “America’s approach to smart software is going to kill it. A more healthy approach is the one in which government and business work to create applications.” Translating: China, healthy; America, sick as a dog.
Here’s another statement:
Today’s AI ecosystem, according to Lee, consists of Nvidia, and, to a lesser extent, other chip makers such as Intel and Advanced Micro Devices. Collectively, the chip makers rake in $75 billion in annual chip sales from AI processing. “The infrastructure is making $10 billion, and apps, $5 billion,” said Lee. “If we continue in this inverse pyramid, it’s going to be a problem,” he said.
Who will flip the pyramid? Uganda, Lao PDR, Greece? Nope, nope, nope. The flip will take an outfit with a strong mind and body. A healthy entity is needed to flip the pyramid. I wonder if that strong entity is China.
Here’s Kai-Fu Lee’s kung fu move:
He recommended that companies build their own vertically integrated tech stack the way Apple did with the iPhone, in order to dramatically lower the cost of generative AI. Lee’s striking assertion is that the most successful companies will be those that build most of the generative AI components — including the chips — themselves, rather than relying on Nvidia. He cited how Apple’s Steve Jobs pushed his teams to build all the parts of the iPhone, rather than waiting for technology to come down in price.
In the write up Kai-Fu Lee refers to “we.” Who is included in that “we”? Excluded will be the “unhealthy.” Who is left? I would suggest that the pragmatic and application-focused will be the winners. The reason? The “we” includes the healthy entities. Once again I am thinking of China’s approach to smart software.
What’s the correct outcome? Kai-Fu Lee allegedly said:
What should result, he said, is “a smaller, leaner group of leaders who are not just hiring people to solve problems, but delegating to smart enterprise AI for particular functions — that’s when this will make the biggest deal.”
That sounds like the Chinese approach to a number of technical, social, and political challenges. Healthy? Absolutely.
Several observations:
- I wonder if ZDNet checked on the background of the “scholar” interviewed at length?
- Did ZDNet think about the “healthy” versus “unhealthy” theme in the write up?
- Did ZDNet question the “scholar’s” purpose in explaining what’s wrong with the US approach to smart software?
I think I know the answer. The ZDNet outfit and the creators of this unusual private interview believe that the young women rebuilt complicated devices without any assistance. Smart China; dumb America. I understand the message, which seems not to have been internalized by ZDNet. But I am a dumb dinobaby. What do I know? Exactly. Unhealthy, that American approach to AI.
Stephen E Arnold, October 30, 2024