So Much for Silicon Valley Solidarity
April 23, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I thought the entity called Benzinga was a press release service. Guess not. I received a link to what looked like a “real” news story written by a Benzinga Staff Writer named Jain Rounak. “Elon Musk Reacts As Marc Andreessen Says Google Is ‘Literally Run By Employee Mobs’ With ‘Chinese Spies’ Scooping Up AI Chip Designs.” The article is a short one, and it is not exactly what the title suggested to me. Nevertheless, let’s take a quick look at what seems to be some ripping of the Silicon Valley shibboleth of solidarity.
The members of the Happy Silicon Valley Social Club are showing signs of dissension. Thanks, MSFT Copilot. How is your security today? Oh, really.
The hook for the story is another Google employee protest. The cause was a deal for Google to provide cloud services to Israel. I assume the Googlers split along ethno-political-religious lines: one group cheering for Hamas and another for Israel. (I don’t have any first-hand evidence, so I am leveraging the scant information in the Benzinga news story.)
Then what? Apparently Marc Andreessen of Netscape fame and AI polemics offered some thoughts. I am not sure where these assertions were made or if they are on the money. But I grant to Benzinga that the Andreessen emissions are intriguing. Let’s look at one:
“The company is literally overrun by employee mobs, Chinese spies are walking AI chip designs out the door, and they turn the Founding Fathers and the Nazis black.”
The idea that there are “Google mobs” running from Foosball court to vending machines and then to their quiet space and then to the parking lot is interesting. Where’s Charles Dickens of Tale of Two Cities fame when you need an observer to document a revolution? Are Googlers building barricades in the passageways? Are Prius and Tesla vehicles being set on fire?
In the midst of this chaotic environment, there are Chinese spies. I am not sure one has to walk chip designs anywhere. Emailing them or copying them from one Apple device to another works reasonably well in my experience. The reference to the Google art is a reminder that the high school management club approach to running a potential trillion-dollar, alleged monopoly needs some upgrades.
Where’s the Elon in this? I think I am supposed to realize that Elon and Andreessen are on the same mental wavelength. The Google is not. Therefore, the happy family notion is shattered. Okay, Benzinga. Whatever. Drop those names. The facts? Well, drop those too.
Stephen E Arnold, April 23, 2024
More Inside Dope about McKinsey & Company
April 23, 2024
This essay is the work of a dumb dinobaby. No smart software required.
It appears that blue chip consultants are finding some choppy waters in the exclusive money pond at the knowledge country club.
“I Was a Consultant at McKinsey. Here’s the Frustrating Way They Pushed Me Out” reveals some interesting but essentially personal assertions about the blue chip consulting firm. McKinsey & Co. is associated in my mind with the pharmaceutical industry’s money maker, synthetic opioids. Living in Kentucky, evidence about the chemical compound is fairly easy to spot. Drive East of my home. Check out Nitro, West Virginia, and you can gather more evidence.
ChatGPT captures an elite group pushing someone neither liked nor connected out the door. Good enough.
The main idea of the write up is that McKinsey is presented as an exclusive club. Being liked and having connections are more important than any other capability. A “best of the best” on the outs is left marooned in a cube. The only email comes from a consultant offering help related to finding one’s future elsewhere. Fun.
What’s the firm doing in the first quarter of 2024? If the information in the Business Insider article is on the money, McKinsey is reinventing itself. Here are some of the possibly accurate statements in the article:
- McKinsey & Co. has found easy consulting money drying up
- The firm is downsizing
- Work at McKinsey is mostly PowerPoint decks shaped to make the customer “look good”
- McKinsey does not follow its own high-value consulting advice when it comes to staffing.
What does the write up suggest? That is a question with different answers. For someone who has never worked at a blue chip consulting firm, the answer is, “Who cares?” For a person with some exposure to these outfits, the answer is, “So what’s new?” From an objective and reasonably well informed vantage point, the answer may be, “Are consulting firms a bunch of baloney?”
Change, however, is afoot. Let me cite one example. Competition for the blue-chip outfits was once narrowly defined. Now the competition is coming from unexpected places. I will offer one example to get your thought process rolling. Axios, a publishing company owned by Cox Enterprises, is now positioning its journalists as “experts.” Instead of charging a couple thousand dollars per hour, Axios will sell a “name brand expert,” video calls, and special news reports. Plus, Axios will jump into the always-exciting world of conferences in semi-nice places.
How will McKinsey and its ilk respond? Will these firms reveal that they are also publishing houses and have been since their inception? Will they morph into giants of artificial intelligence, possibly creating their own models from the reams of proprietary reports, memoranda, emails, and consultant notes? Will McKinsey buy an Axios-type outfit and morph into something the partners from the 1960s would never recognize? Will blue-chip firms go out of business as individuals low-ball engagements to cash-conscious clients?
Net net: When a firm like McKinsey finds itself pilloried for failure to follow its own advice, the future is uncertain. Perhaps McKinsey should call another blue chip outfit? Better yet, buy some help from GLG or Coleman.
Stephen E Arnold, April 23, 2024
The National Public Radio Entity Emulates Grandma
April 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I can hear my grandmother telling my cousin Larry: Chew your food. Or… no television for you tonight. The time was 6:30 pm. The date was March 3, 1956. My cousin and I were being “watched” when our parents were at a political rally and banquet. Grandmother was in charge, and my cousin was edging close to being sent to grandfather for a whack with his wooden paddle. Tough love, I suppose. I was a good boy. I chewed my food and worked to avoid the Wrath of Ma. I did the time travel thing when I read “NPR Suspends Veteran Editor As It Grapples with His Public Criticism.” I avoid begging-for-dollars outfits. I had no idea what the issue is or was.
“Gea’t haspoy” which means in grandmother speak: “That’s it. No TV for you tonight. In the morning, both of you are going to help Grandpa mow the yard and rake up the grass.” Thanks, NPR. Oh, sorry, thanks MSFT Copilot. You do the censorship thing too, don’t you?
The write up explains:
NPR has formally punished Uri Berliner, the senior editor who publicly argued a week ago that the network had “lost America’s trust” by approaching news stories with a rigidly progressive mindset.
Oh, I get it. NPR allegedly shapes stories. A “real” journalist does not go along with the program. The progressive-leaning outfit ignores the free speech angle. The “real” journalist is punished with five days in a virtual hoosegow. An NPR “real” journalist published an essay critical of NPR and then vented on a podcast.
The article I have cited is an NPR article. I guess self-criticism is a progressive trait, maybe? Anyway, the article about the grandma action stated:
In rebuking Berliner, NPR said he had also publicly released proprietary information about audience demographics, which it considers confidential. He said those figures “were essentially marketing material. If they had been really good, they probably would have distributed them and sent them out to the world.”
There is no hint that this “real” journalist shares beliefs believed to be held by Julian Assange or that bold soul Edward Snowden, both of whom have danced with super interesting information.
Several observations:
- NPR’s suspending an employee reminds me of my grandmother punishing us for not following her wacky rules
- NPR is definitely implementing a type of information shaping; if it were not, what’s the big deal about a grousing employee? How many of these does Google have protesting in a year?
- Banning a person who is expressing an opinion strikes me as a tasty blend of X.com and that master motivator Joe Stalin. But that’s just my dinobaby mind having a walk-about.
Net net: What media are not censoring, muddled, and into acting like grandma?
Stephen E Arnold, April 17, 2024
A Less Crazy View of AI: From Kathmandu via Tufts University
April 16, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I try to look for interesting write ups from numerous places. Some in Kentucky (well, not really) and others in farther-flung locations like Kathmandu. I read “The boring truth about AI.” The article was not boring in my opinion. The author (Amar Bhidé) presented what seemed like a non-crazy, hyperbole-free discussion of smart software. I am not sure how many people in Greenspring, Kentucky, read the Kathmandu Post, but I am not sure how many people in Greenspring, Kentucky, can read.
Rah rah. Thanks, MSFT Copilot, you have the hands-on expertise to prove that the New York City chatbot is just the best system when it comes to providing information of a legal nature that is dead wrong. Rah rah.
What’s the Tufts University business professor say? Let’s take a look at several statements in the article.
First, I circled this passage:
As economic historian Nathan Rosenberg and many others have shown, transformative technologies do not suddenly appear out of the blue. Instead, meaningful advances require discovering and gradually overcoming many unanticipated problems.
Second, I put a blue check mark next to this segment:
Unlike the Manhattan Project, which proceeded at breakneck speed, AI developers have been at work for more than seven decades, quietly inserting AI into everything from digital cameras and scanners to smartphones, automatic braking and fuel-injection systems in cars, special effects in movies, Google searches, digital communications, and social-media platforms. And, as with other technological advances, AI has long been put to military and criminal uses. Yet AI advances have been gradual and uncertain.
The author references IBM’s outstanding Watson system. I think that’s part of the gradual and uncertain in the hands of Big Blue’s marketing professionals.
Finally, I drew a happy face next to this:
Perhaps LLM chatbots can increase profits by providing cheap if maddening, customer service. Someday, a breakthrough may dramatically increase the technology’s useful scope. For now, though, these oft-mendacious talking horses warrant neither euphoria nor panic about “existential risks to humanity.” Best keep calm and let the traditional decentralised evolution of technology, laws, and regulations carry on.
I would suggest that a more pragmatic and less frenetic approach to smart software makes more sense than the wild and crazy information zapped from podcasts and conference presentations.
Stephen E Arnold, April 16, 2024
Is This Incident the Price of Marketing: A Lesson for Specialized Software Companies
April 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
A comparatively small number of firms develop software and provide specialized services to analysts, law enforcement, and intelligence entities. When I started work at a nuclear consulting company, these firms were low profile. In fact, if one tried to locate the names of the companies in one of those almost-forgotten reference books (remember telephone books?), the job was a tough one. First, the firms would have names which meant zero; for example, Rice Labs or Gray & Associates. Next, if one were to call, a human (often a person with a British accent) would politely inquire, “To whom did you wish to speak?” The answer had to conform to a list of acceptable responses. Third, if you were to hunt up the address, you might find yourself in Washington, DC, staring at the second floor of a nondescript building once used to bake pretzels.
Decisions, decisions. Thanks, MSFT Copilot. Good enough. Does that phrase apply to one’s own security methods?
Today, the world is different. A country now engaged in a controversial dust up in the Eastern Mediterranean has specialized firms with Web sites which publicize their capabilities as mechanisms to know your customer or make sense of big data. The outfits have trade show presences. One outfit, despite being the poster child for going off the rails, gives lectures and provides previews of its technologies at public events. How times have changed since I began doing commercial and government work in the early 1970s.
Every company, including those engaged in the development and deployment of specialized policeware and intelware, is into marketing. The reason is cultural. Madison Avenue is the whoo-whoo part of doing something quite interesting and wanting to talk about the activity. The other reason is financial. Cracking tough technical problems costs money, and those who have the requisite skills are in demand. The fix, from my point of view, is to try to operate with a public presence while doing the less visible, often secret work required of these companies. The evolution of the specialized software business has been similar to figuring out how to walk a high wire over a circus crowd. Stay on the wire and the outfit is visible and applauded. Fall off the wire and fail big time. But more and more specialized software vendors make the decision to try to become visible and get recognition for their balancing act. I think the optimal approach is to stay out of the big tent and avoid the temptations of fame, bright lights, and falling to one’s death.
“Why CISA Is Warning CISOs about a Breach at Sisense” provides a good example of public visibility and falling off the high wire. The write up says:
New York City based Sisense has more than a thousand customers across a range of industry verticals, including financial services, telecommunications, healthcare and higher education. On April 10, Sisense Chief Information Security Officer Sangram Dash told customers the company had been made aware of reports that “certain Sisense company information may have been made available on what we have been advised is a restricted access server (not generally available on the internet.)”
Let me highlight one other statement in the write up:
The incident raises questions about whether Sisense was doing enough to protect sensitive data entrusted to it by customers, such as whether the massive volume of stolen customer data was ever encrypted while at rest in these Amazon cloud servers. It is clear, however, that unknown attackers now have all of the credentials that Sisense customers used in their dashboards.
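The encryption-at-rest question in that passage is the technical crux. As a minimal sketch of the idea, assuming Python’s cryptography package and nothing about Sisense’s actual stack, a credential blob can be encrypted before it is ever written to cloud storage:

```python
from cryptography.fernet import Fernet

# In practice the key would live in a key-management service,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# A customer credential blob (hypothetical), encrypted before storage.
plaintext = b'{"dashboard_token": "example-not-a-real-secret"}'
ciphertext = fernet.encrypt(plaintext)

# Stolen ciphertext without the key is useless to an attacker.
print(fernet.decrypt(ciphertext) == plaintext)  # True
```

If the keys live somewhere separate from the buckets, a copied server yields ciphertext rather than working credentials; the write up raises the question of whether Sisense’s stored customer data ever had that protection.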
This firm enjoys some visibility because it markets itself using the hot button “analytics.” The function of some of the Sisense technology is to integrate “analytics” into other products and services. Thus it is an infrastructure company, but one that may have more capabilities than other types of firms. The company has non-commercial customers as well. If one wants to get “inside” data, Sisense has done a good job of marketing. The visibility makes it easy to watch. Someone with skills and a motive can put grease on the high wire. The article explains what happens when the actor slips up: “More than a thousand customers.”
How can a specialized software company avoid a breach? One step is to avoid visibility. Another is to curtail dreams of big money. Redefine success because those in your peer group won’t care much about you with or without big bucks. I don’t think that is part of the game plan of many specialized software companies today. Each time I visit a trade show featuring specialized software firms as speakers and exhibitors, I marvel at the razz-ma-tazz the firms bring to the show. Yes, there is competition. But when specialized software companies, particularly those in the policeware and intelware business, market to both commercial and non-commercial organizations, that outreach increases their visibility. The visibility attracts bad actors the way Costco roasted chicken makes my French bulldog shiver with anticipation. Tibby wants that chicken. But he is not a bad actor and will not get out of bounds. Others do get out of bounds. The fix is to move the chicken, then put it in the fridge. Tibby will turn his attention elsewhere. He is a dog.
Net net: Less blurring of commercial and specialized customer services might be useful. Fewer blogs, podcasts, crazy marketing programs, and oddly detailed marketing write ups to government agencies. (Yes, these documents can be FOIAed by the Brennan folks, for instance. Yes, those brochures and PowerPoints can find their way to public repositories.) Less marketing. More judgment. Increased security attention, please.
Stephen E Arnold, April 12, 2024
Are Experts Misunderstanding Google Indexing?
April 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Google is not perfect. More and more people are learning that the mystics of Mountain View are working hard every day to deliver revenue. In order to produce more money and profit, one must use Rust to become twice as wonderful as a programmer who labors to make C++ sit up, bark, and roll over. This dispersal of the cloud of unknowing obfuscating the magic of the Google can be helpful. What’s puzzling to me is that what Google does catches people by surprise. For example, consider the “real” news presented in “Google Books Is Indexing AI-Generated Garbage.” The main idea strikes me as:
But one unintended outcome of Google Books indexing AI-generated text is its possible future inclusion in Google Ngram viewer. Google Ngram viewer is a search tool that charts the frequencies of words or phrases over the years in published books scanned by Google dating back to 1500 and up to 2019, the most recent update to the Google Books corpora. Google said that none of the AI-generated books I flagged are currently informing Ngram viewer results.
Thanks, Microsoft Copilot. I enjoyed learning that security is a team activity. Good enough again.
Indexing lousy content has been the core function of Google’s Web search system for decades. Search engine optimization generates information almost guaranteed to drag down how higher-value content is handled. If the flagship provides the navigation system to other ships in the fleet, won’t those vessels crash into bridges?
Remediating Google’s approach to indexing requires several basic steps. (I have in various ways shared these ideas with the estimable Google over the years. Guess what? No one cared or understood, and if a Googler did understand, he or she did not want to increase overhead costs.) So what are these steps? I shall share them:
- Establish an editorial policy for content. Yep, this means that a system and method or systems and methods are needed to determine what content gets indexed.
- Explain the editorial policy and what a person or entity must do to get content processed and indexed by the Google, YouTube, Gemini, or whatever the mystics in Mountain View conjure into existence
- Include metadata with each content object so one knows the index date, the content object creation date, and similar information (a minimal sketch follows this list)
- Operate in a consistent, professional manner over time. The “gee, we just killed that” is not part of the process. Sorry, mystics.
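The metadata item above is the easiest to make concrete. Here is a minimal sketch, purely my own illustration and nothing Google has published, of the kind of record that could travel with each content object; the field names and the passes_editorial_policy check are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ContentObject:
    """Hypothetical metadata accompanying each indexed item."""
    url: str
    created: date            # when the content object was created
    indexed: date            # when the indexing system processed it
    declared_generator: str  # e.g. "human", "llm", "mixed" (self-reported)
    source_type: str         # e.g. "publisher", "seo_farm", "dataset"

def passes_editorial_policy(obj: ContentObject) -> bool:
    """Illustrative policy gate: reject items with no creation date and
    machine-generated items that lack a publisher of record."""
    if obj.created is None:
        return False
    if obj.declared_generator == "llm" and obj.source_type != "publisher":
        return False
    return True

sample = ContentObject(
    url="https://example.com/fake-travel-guide",
    created=date(2024, 1, 15),
    indexed=date(2024, 4, 1),
    declared_generator="llm",
    source_type="seo_farm",
)
print(passes_editorial_policy(sample))  # False: fails the illustrative gate
```

The point is not these particular fields; it is that an editorial gate has to run against something recorded for every item, which is exactly the overhead the mystics seem unwilling to pay for.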
Let me offer several observations:
- Google, like any alleged monopoly, faces significant management challenges. Moving information within such an enterprise is difficult. For an organization with a Foosball culture, the task may be a bit outside the wheelhouse of most young people and individuals who are engineers, not presidents of fraternities or sororities.
- The organization is under stress. The pressure is financial because controlling the cost of the plumbing is a reasonably difficult undertaking. Second, there is technical pressure. Google itself made clear that it was in Red Alert mode and keeps adding flashing lights with each and every misstep the firm’s wizards make. These range from contentious relationships with mere governments to individual staff members who grumble via internal emails, angry public utterances by Googlers, or observed behavior at conferences. Body language does speak sometimes.
- The approach to smart software is remarkable. Individuals in the UK pontificate. The Mountain View crowd reassures and smiles — a lot. (Personally I find those big, happy looks a bit tiresome, but that’s a dinobaby for you.)
Net net: The write up does not address the issue that Google happily exploits. The company lacks the mental rigor setting and applying editorial policies requires. SEO is good enough to index. Therefore, fake books are certainly A-OK for now.
Stephen E Arnold, April 12, 2024
AI Will Take Jobs for Sure: Money Talks, Humans Walk
April 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Report Shows Managers Eager to Replace or Devalue Workers with AI Tools
Bosses have had it with the worker-favorable labor market that emerged from the pandemic. Fortunately, there is a new option that is happy to be exploited. We learn from TechSpot that a recent “Survey Reveals Almost Half of All Managers Aim to Replace Workers with AI, Could Use It to Lower Wages.” The report is by beautiful.ai, which did its best to spin the results as a trend toward collaboration, not pink slips. Nevertheless, the numbers seem to back up worker concerns. Writer Rog Thubron summarizes:
“A report by Beautiful.ai, which makes AI-powered presentation software, surveyed over 3,000 managers about AI tools in the workplace, how they’re being implemented, and what impact they believe these technologies will have. The headline takeaway is that 41% of managers said they are hoping that they can replace employees with cheaper AI tools in 2024. … The rest of the survey’s results are just as depressing for worried workers: 48% of managers said their businesses would benefit financially if they could replace a large number of employees with AI tools; 40% said they believe multiple employees could be replaced by AI tools and the team would operate well without them; 45% said they view AI as an opportunity to lower salaries of employees because less human-powered work is needed; and 12% said they are using AI in hopes to downsize and save money on worker salaries. It’s no surprise that 62% of managers said that their employees fear that AI tools will eventually cost them their jobs. Furthermore, 66% of managers said their employees fear that AI tools will make them less valuable at work in 2024.”
Managers themselves are not immune to the threat: Half of them said they worry their pay will decrease, and 64% believe AI tools do their jobs better than experienced humans do. At least they are realistic. Beautiful.ai stresses another statistic: 60% of respondents who are already using AI tools see them as augmenting, not threatening, jobs. The firm also emphasizes the number of managers who hope to replace employees with AI decreased “significantly” since last year’s survey. Progress?
Cynthia Murrell, April 12, 2024
The Only Dataset Search Tool: What Does That Tell Us about Google?
April 11, 2024
This essay is the work of a dumb dinobaby. No smart software required.
If you like semi-jazzy, academic write ups, you will revel in “Discovering Datasets on the Web Scale: Challenges and Recommendations for Google Dataset Search.” The write up appears in a publication associated with Jeffrey Epstein’s favorite university. It may be worth noting that MIT and Google have teamed to offer a free course in Artificial Intelligence. That is the next big thing which does hallucinate at times while creating considerable marketing angst among the techno-giants jousting to emerge as the go-to source of the technology.
Back to the write up. Google created a search tool to allow a user to locate datasets accessible via the Internet. There are more than 700 data brokers in the US. These outfits will sell data to most people who can pony up the cash. Examples range from six figure fees for the Twitter stream to a few hundred bucks for boat license holders in states without much water.
The write up says:
Our team at Google developed Dataset Search, which differs from existing dataset search tools because of its scope and openness: potentially any dataset on the web is in scope.
A very large, money-oriented creature enjoins a worker to gather data. If someone asks, “Why?”, the monster says, “Make up something.” Thanks, MSFT Copilot. How is your security today? Oh, that’s too bad.
The write up does the academic thing of citing articles which talk about data on the Web. There is even a table which organizes the types of data discovery tools. The categorization of general and specific is brilliant. Who would have thought there were two categories of vertical search engine focused on Web-accessible data? I thought there was just one category; namely, gettable. The idea is that if the data are exposed, take them. Asking permission just costs time and money. The idea is that one can apologize and keep the data.
The article includes a Googley graphic. The French portal, the Italian “special” portal, and the Harvard “dataverse” are identified. Were there other Web accessible collections? My hunch is that Google’s spiders suck down, as one famous Googler said, “all” the world’s information. I will leave it to your imagination to fill in other sources for the dataset pages. (I want to point out that Google has some interesting technology related to converting data sets into normalized data structures. If you are curious about the patents, just write benkent2020 at yahoo dot com, and one of my researchers will send along a couple of US patent numbers. Impressive system and method.)
The section “Making Sense of Heterogeneous Datasets” is peculiar. First, the Googlers discovered the basic fact about data from different sources: the data structures vary. Think in terms of grapes and deer droppings. Second, the data cannot be “trusted.” There is no fix to this issue for the team writing the paper. Third, the authors appear to be unaware of the patents I mentioned, particularly the useful example about gathering and normalizing data about digital cameras. The method applies to other types of processed data as well.
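To show what the grapes-and-deer-droppings problem looks like in practice, here is a minimal sketch of normalizing two differently shaped records into one schema. This is my own illustration, not the method in Google’s patents or in the paper; the feeds, field names, and currency handling are hypothetical.

```python
# Two hypothetical source feeds describing the same kind of item
# (digital cameras) with different field names and value formats.
source_a = [{"model": "X100", "mp": 24, "price_usd": 899}]
source_b = [{"camera_name": "Z50", "megapixels": "20.9", "price": "€949"}]

def normalize_a(rec):
    return {"name": rec["model"], "megapixels": float(rec["mp"]),
            "price": float(rec["price_usd"]), "currency": "USD"}

def normalize_b(rec):
    price = rec["price"]
    currency = "EUR" if price.startswith("€") else "USD"
    return {"name": rec["camera_name"], "megapixels": float(rec["megapixels"]),
            "price": float(price.lstrip("€$")), "currency": currency}

# One normalized table, regardless of where each record originated.
normalized = [normalize_a(r) for r in source_a] + [normalize_b(r) for r in source_b]
print(normalized)
```

The hard part, as the paper itself concedes, is not the field mapping; it is deciding what to do when the sources cannot be trusted and their normalized values disagree.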
I want to jump to the “beyond metadata” idea. This is the mental equivalent of “popping” up a perceptual level. Metadata are quite important and useful. (Isn’t it odd that Google strips high-value metadata from its search results; for example, time and date?) The authors of the paper work hard to explain that the Google approach to data set search adds value by grouping, sorting, and tagging with information not in any one data set. This is common sense, but the Googley spin on this is to build “trust.” Remember: This is an alleged monopolist engaged in online advertising and co-opting certain Web services.
Several observations:
- This is another of Google’s high-class PR moves. Hooking up with MIT and delivering razz-ma-tazz about identifying spiderable content collections in the name of greater good is part of the 2024 Code Red playbook it seems. From humble brag about smart software to crazy assertions like quantum supremacy, today’s Google is a remarkable entity
- The work on this “project” is divorced from time. I checked my file of Google-related information, and I found no information about the start date of a vertical search engine project focused on spidering and indexing data sets. My hunch is that it has been in the works for a while, although I can pinpoint 2006 as a year in which Google’s technology wizards began to talk about building master data sets. Why no time specifics?
- I found the absence of AI talk notable. Perhaps Google does not think a reader will ask, “What’s with the use of these data? I can’t use this tool, so why spend the time, effort, and money to index information from a country like France, which is not one of Google’s biggest fans?” (Paris was, however, the roll out choice for the answer to Microsoft and ChatGPT’s smart software announcement. Plus that presentation featured incorrect information as I recall.)
Net net: I think this write up with its quasi-academic blessing is a bit of advance information to use in the coming wave of litigation about Google’s use of content to train its AI systems. This is just a hunch, but there are too many weirdnesses in the academic write up to write off as intern work or careless research writing which is more difficult in the wake of the stochastic monkey dust up.
Stephen E Arnold, April 11, 2024
Perplexed at Perplexity? It Is Just the Need for Money. Relax.
April 5, 2024
This essay is the work of a dumb dinobaby. No smart software required.
“Gen-AI Search Engine Perplexity Has a Plan to Sell Ads” makes it clear that the dynamic world of wildly over-hyped smart software is somewhat fluid. Pivoting from “No, never” to “Yes, absolutely” might catch some by surprise. But this dinobaby is ready for AI’s morphability. Artificial intelligence means something to the person using the term. There may be zero correlation between that meaning and the meaning in any other person’s mind. Absent the Vulcan mind meld, people have to adapt. Morphability is important.
The dinobaby analyst is totally confused. First, say one thing. Then, do the opposite. Thanks, MSFT Copilot. Close enough. How’s that AI reorganization going?
I am thinking about AI because Perplexity told Adweek that despite obtaining $73 million in Series B funding, the company will start selling ads. This is no big deal for Google which slips unmarked ads into its short video streams. But Perplexity was not supposed to sell ads. Yeah, well, that’s no longer an operative concept.
The write up says:
Perplexity also links sources in the response while suggesting related questions users might want to ask. These related questions, which account for 40% of Perplexity’s queries, are where the company will start introducing native ads, by letting brands influence these questions,
Sounds rock solid, but I think that the ads will have a bit of morphability; that is, when big bucks are at stake, those ads are going to go many places. With an alleged 10 million monthly active users, some advertisers will want those ads shoved down the throat of anything that looks like a human or bot with buying power.
Advertisers care about “brand safety.” But those selling ads care about selling ads. That’s why exciting ads turn up in quite interesting places.
I have a slight distrust for pivoters. But that’s just an old dinobaby, an easily confused dinobaby at that.
Stephen E Arnold, April 5, 2024
Nah, AI Is for Little People Too. Ho Ho Ho
April 5, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I like the idea that smart software is open. Anyone can download software and fire up that old laptop. Magic just happens. The reality is that smart software is going to involve some big outfits and big bucks when serious applications or use cases are deployed. How do I know this? Well, I read “Microsoft and OpenAI Reportedly Building $100 Billion Secret Supercomputer to Train Advanced AI.” The number $100 billion is not the $6 trillion bandied about by Sam AI-Man a few weeks ago. It does, however, make Amazon’s paltry $3 billion look like chump change. And where does that leave the AI start ups, the AI open source champions, and the plain vanilla big-smile venture folks? The answer is, “Ponying up some bucks to get that AI to take flight.”
Thanks, MSFT Copilot. Stick to your policies.
The write up states:
… the dynamic duo are working on a $100 billion — that’s "billion" with a "b," meaning a sum exceeding many countries’ gross domestic products — on a hush-hush supercomputer designed to train powerful new AI.
The write up asks a question some folks with AI sparkling in their eyes cannot answer; to wit:
Needless to say, that’s a mammoth investment. As such, it shines an even brighter spotlight on a looming question for the still-nascent AI industry: how’s the whole thing going to pay for itself?
But I know the answer: With other people’s money and possibly costs distributed across many customers.
Observations are warranted:
- The cost of smart software is likely to be an issue for everyone. I don’t think “free” is the same as forever
- Mistral wants to do smaller language models, but Microsoft has “invested” in that outfit as well. If necessary, some creative end runs around an acquisition may be needed because MSFT may want to take Mistral off the AI chess board
- What’s the cost of the electricity to operate what $100 billion can purchase? How about a nifty thorium reactor?
Net net: Okay, Google, what is your move now that MSFT has again captured the headlines?
Stephen E Arnold, April 5, 2024