Oxygen: Keep the Bait Alive for AI Revenue

July 10, 2024

Andreessen Horowitz published “Who Owns the Generative AI Platform?” in January 2023. The rah-rah essay appeared at almost the same time as the Microsoft-OpenAI marketing coup. In that essay, the venture firm, which doubles as a publishing operation, stated this about AI:

…there is enough early data to suggest massive transformation is taking place. What we don’t know, and what has now become the critical question, is: Where in this market will value accrue?

Now a partial answer is emerging. 

The Information, an online information service with a paywall, revealed “Andreessen Horowitz Is Building a Stash of More Than 20,000 GPUs to Win AI Deals.” That report asserts:

The firm has secured thousands of AI chips, including Nvidia H100 graphics processing units, and is renting them to portfolio companies, according to a person who has discussed the initiative with the firm’s partners…. Andreessen Horowitz has told startup founders the initiative is called “oxygen.”

The initiative reflects what might be a way to hook promising AI outfits and plop them into the firm’s large foldable floating fish basket for live-caught, gill-bearing vertebrates, sometimes called chum.

This factoid emerges shortly after a big Silicon Valley venture outfit raved about the oodles of opportunity AI represents. Plus, reports about blue-chip consulting firms’ through-the-roof AI consulting have encouraged a couple of the big outfits to offer AI services. In addition to opining and advising, the consulting firms are moving aggressively into the AI implementing and operating business.

The morphing of a venture firm into a broker of GPU cycles complements the thinking-for-money firms’ shifting gears to a more hands-on approach.

There are several implications from my point of view:

  • The fastest way to make money from the AI frenzy is to charge people so they can “do” AI
  • Without a clear revenue stream of sufficient magnitude to foot the bill for the rather hefty costs of “doing” AI with a chance of making cash, selling blue jeans to the miners makes sense. But changing business tactics can add an element of spice to an unfamiliar restaurant’s special of the day
  • The move from passive (thinking and waiting) to a more active (doing and charging for hardware and services) brings a different management challenge to the companies making the shift.

These factors suggest that the best way to cash in on AI is to provide what Andreessen Horowitz calls oxygen. It is a clear indication that the AI fish will die without some aggressive intervention. 

I am a dinobaby, sitting in my rocker on the porch of the rest home watching the youngsters scramble to make money from what was supposed to be a sure-fire winner. What we know from watching those lemonade stand operators is that success is often difficult to achieve. The grade school kids setting up shop in a subdivision where heat and fatigue take their toll give up and go inside where the air is cool and TikTok waits.

Net net: The Andreessen Horowitz revelation is one more indication that the costs of AI and the difficulty of generating sufficient revenue are starting to hit home. Therefore, advisors’ thoughts seem to be turning to actions designed to produce cash, magnetism, and success. Will the efforts produce the big payoffs? I wonder whether these tactical plays are brilliant moves or just another neighborhood lemonade stand.

Stephen E Arnold, July 10, 2024

Microsoft Security: Big and Money Explain Some Things

July 10, 2024

I am heading out for a couple of days. I spotted this story in my newsfeed: “The President Ordered a Board to Probe a Massive Russian Cyberattack. It Never Did.” The main point of the write up, in my opinion, is captured in this statement:

The tech company’s failure to act reflected a corporate culture that prioritized profit over security and left the U.S. government vulnerable, a whistleblower said.

But there is another issue in the write up. I think it is:

The president issued an executive order establishing the Cyber Safety Review Board in May 2021 and ordered it to start work by reviewing the SolarWinds attack. But for reasons that experts say remain unclear, that never happened.

The one-two punch may help explain why some in other countries do not trust Microsoft, the US government, and the cultural forces in the US of A.

Let’s think about these three issues briefly.


A group of tomorrow’s leaders responding to their teacher’s request to pay attention and do what she is asking. One student expresses the group’s viewpoint. Thanks, MSFT Copilot. How is the Recall feature today? What about those iPhones Mr. Ballmer disdained?

First, large technology companies use the word “trust”; for example, Microsoft apparently does not trust Android devices. On the other hand, China does not have trust in some Microsoft products. Can one trust Microsoft’s security methods? For some, trust has become a bit like artificial intelligence. The words do not mean much of anything.

Second, Microsoft, like other big outfits, needs big money. The easiest way to free up money is to not spend it. One can talk about investing in security and making security Job One. The reality is that talk is cheap. Cutting corners seems to be a popular concept in some corporate circles. One recent example is Boeing dodging trials with a deal. Why? Money maybe?

Third, the committee charged with looking into SolarWinds did not do so. For a couple of years after the breach became known, my SolarWinds misstep analysis was popular among some cyber investigators. I was one of the few people reviewing the “misstep.”

Okay, enough thinking.

The SolarWinds matter, the push for money and more money, and the failure of a committee to do what it was explicitly asked to do three times suggest:

  1. Enforcement with teeth and consequences is warranted
  2. Tougher procurement policies are necessary, with parallel restrictions on lobbying, which one of my clients called “the real business of Washington”
  3. Ostracism of those who do not follow requests from the White House or designated senior officials is appropriate.

Enough of this high-vulnerability decision making. The problem is that, as I have witnessed in my decades of work in Washington, the system births, abets, and provides the environment for doing what is often the “wrong” thing.

There you go.

Stephen E Arnold, July 10, 2024

Market Research Shortcut: Fake Users Creating Fake Data

July 10, 2024

Market research can be complex and time consuming. It would save so much time if one could consolidate thousands of potential respondents into one model. A young AI firm offers exactly that, we learn from Nielsen Norman Group’s article, “Synthetic Users: If, When, and How to Use AI Generated ‘Research.’”

But are the results accurate? Not so much, according to writers Maria Rosala and Kate Moran. The pair tested fake users from the young firm Synthetic Users and ones they created using ChatGPT. They compared responses to sample questions from both real and fake humans. Each group gave markedly different responses. The write-up notes:

“The large discrepancy between what real and synthetic users told us in these two examples is due to two factors:

  • Human behavior is complex and context-dependent. Synthetic users miss this complexity. The synthetic users generated across multiple studies seem one-dimensional. They feel like a flat approximation of the experiences of tens of thousands of people, because they are.
  • Responses are based on training data that you can’t control. Even though there may be proof that something is good for you, it doesn’t mean that you’ll use it. In the discussion-forum example, there’s a lot of academic literature on the benefits of discussion forums on online learning and it is possible that the AI has based its response on it. However, that does not make it an accurate representation of real humans who use those products.”

That seems obvious to us, but apparently some people need to be told. The lure of fast and easy results is strong. See the article for more observations. Here are a couple worth noting:

“Real people care about some things more than others. Synthetic users seem to care about everything. This is not helpful for feature prioritization or persona creation. In addition, the factors are too shallow to be useful.”

Also:

“Some UX [user experience] and product professionals are turning to synthetic users to validate product concepts or solution ideas. Synthetic Users offers the ability to run a concept test: you describe a potential solution and have your synthetic users respond to it. This is incredibly risky. (Validating concepts in this way is risky even with human participants, but even worse with AI.) Since AI loves to please, every idea is often seen as a good one.”

So as appealing as this shortcut may be, it is a fast track to incorrect results. Basing business decisions on “insights” from shallow, eager-to-please algorithms is unwise. The authors interviewed Synthetic Users’ cofounder Hugo Alves. He acknowledged the tools should only be used as a supplement to surveys of actual humans. However, the post points out, the company’s website seems to imply otherwise: it promises “User research. Without the users.” That is misleading, at best.

Cynthia Murrell, July 10, 2024

TV Pursues Nichification or 1 + 1 = Barrels of Money

July 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

What does an organization with a huge market, like the Boy Scouts and the Girl Scouts, do to remain relevant and have enough money to pay the overhead and salaries of the top dogs? They merge.

What does an old-school talking heads television channel do to remain relevant and have enough money to pay the overhead and salaries of the top dogs? They create niches.


A cheese maker who can’t sell his cheddar does some MBA-type thinking. Will his niche play work? Thanks, MSFT Copilot. How’s that Windows 11 update doing today?

Which path is the optimal one? I certainly don’t have a definitive answer. But if each “niche” is a new product, I remember hearing that the failure rate was of sufficient magnitude to make me think in terms of a regular job. Call me risk averse, but I prefer the rational dinobaby moniker, thank you.

“CNBC Launches Sports Vertical amid Broader Biz Shift” reports with “real” news seriousness:

The idea is to give sports business executives insights and reporting about sports similar to the data and analysis CNBC provides to financial professionals, CNBC President KC Sullivan said in a statement.

I admit. I am not a sports enthusiast. I know some people who are, but their love of sport is defined by gambling, gambling and drinking at the 19th hole, and dressing up in Little League outfits and hitting softballs in the Harrod’s Creek Park. Exciting.

The write up held one differentiator from the other seemingly endless sports programs like those featuring Pat McAfee-type personalities. Here’s the pivot upon which the nichification turns:

The idea is to give sports business executives insights and reporting about sports similar to the data and analysis CNBC provides to financial professionals…

Imagine the legions of viewers who are interested in dropping billions on a major sports franchise. For me, it is easier to visualize sports betting. One benefit of gambling is that it provides a source of “addicts” for rehabilitation centers.

I liked the wrap up for the article. Here it is:

Between the lines: CNBC has already been investing in live coverage of sports, and will double down as part of the new strategy.

  • CNBC produces an annual business of sports conference, Game Plan, in partnership with Boardroom.
  • Andrew Ross Sorkin, Carl Quintanilla and others will host coverage from the 2024 Olympic Games in Paris this summer.

Zoom out: Cable news companies are scrambling to reimagine their businesses for a digital future.

  • CNBC already sells digital subscriptions that include access to its live TV feed.
  • In the future, it could charge professionals for niche insights around specific verticals, or beats.

Okay, I like the double down, a gambling term. I like the conference angle, but the named entities do not resonate with me. I am a dinobaby, and nichification does not strike me as a sensible tactic for an outfit whose eyeballs are going elsewhere. The subscription idea is common. Isn’t there something called “subscription fatigue”? And the plan to charge for access to a sports portal is an interesting one. But if one has 1,000 people looking at content, the number who subscribe seems to be in the < one to two percent range based on my experience.
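To make that back-of-the-envelope estimate explicit, here is a minimal sketch. The one-to-two-percent conversion range is my own rule of thumb from the sentence above, not a CNBC figure, and the audience size is a hypothetical input.

    # Rough subscriber estimate: audience size times an assumed conversion
    # rate of one to two percent (a rule of thumb, not CNBC data).

    def expected_subscribers(audience: int, low: float = 0.01, high: float = 0.02) -> tuple[int, int]:
        """Return the low and high subscriber counts for a given audience."""
        return round(audience * low), round(audience * high)

    lo, hi = expected_subscribers(1000)
    print(f"1,000 viewers -> roughly {lo} to {hi} subscribers")
    # 1,000 viewers -> roughly 10 to 20 subscribers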

But what do I know? I am a dinobaby and I know about TikTok and other short form programming. Maybe that’s old hat too? Did CNBC talk to influencers?

Stephen E Arnold, July 10, 2024

Google: Another Unfair Allegation and You Are Probably Sorry

July 10, 2024

Just as some thought Google was finally playing nice with content rightsholders, a group of textbook publishers begs to differ—in court. TorrentFreak reports, “Google ‘Profits from Pirated Textbooks’ Publishers’ Lawsuit Claims.” The claimants accuse Google of not only ignoring textbook pirates in search results, but of actively promoting them to line its own coffers. Writer Andy Maxwell quotes the complaint:

“’Of course, Google’s Shopping Ads for Infringing Works … do not use photos of the pirates’ products; rather, they use unauthorized photos of the Publishers’ own textbooks, many of which display the Marks. Thus, with Infringing Shopping Ads, this “strong sense of the product” that Google is giving is a bait-and-switch,’ the complaint alleges.”

The complaint emphasizes that Google actively creates, ranks, and targets ads for pirated products. It also notes that Google assesses the quality of advertised sites. It is fishy, then, that ads for infringing works often rank before or near ads for the originals.

In case one is still willing to give Google the benefit of the doubt, the complaint lists several reasons the company should know better. There are the sketchy site names like “Cheapbok,” and “Biz Ninjas.” Then there are the unrealistically low prices. A semester’s worth of textbooks should break the bank; that is just part of the college experience. Perhaps even more damning is Google’s own assertion it verifies sellers’ identities. The write-up continues:

“[The publishers] claim that verification means Google has the ability to communicate with sellers via email or verified phone numbers. In cases where Google was advised that a seller was offering pirated content and Google users were still able to place orders after clicking an ad, ‘Google had the ability to stop the direct infringement entirely.’ In the majority of cases where pirate sellers predominantly or exclusively use Google Ads to reach their customer base, terminating their accounts would’ve had a significant impact on future sales.”

No doubt. Publishers have tried to address the issue through Google’s stated process of takedown notices to no avail. In fact, they allege, the company is downright hostile to any that push the issue. We learn:

“When the publishers sent follow-up notices for matters previously reported but not handled to their satisfaction, ‘Google threatened on multiple occasions to stop reviewing all the Publishers’ notices for up to six months,’ the complaint alleges. Google’s response was due to duplicate requests; the company warned that if that happened three or more times on the same request, it would ‘consider that particular request to be manifestly unfounded’ which could lead the company to ‘temporarily stop reviewing your requests for a period of up to 180 days.’”

Ah, corporate logic. Will Google’s pirate booty be worth the legal headaches? The textbook publishers bringing suit include Cengage Learning; Macmillan Learning; Macmillan Holdings, LLC; Elsevier Inc.; Elsevier B.V.; and McGraw Hill LLC. The complaint was filed in the US District Court for the Southern District of New York.

Cynthia Murrell, July 10, 2024

Does Google Have a Monopoly? Does AI Search Make a Difference?

July 9, 2024

I read “2024 Zero-Click Search Study: For Every 1,000 EU Google Searches, Only 374 Clicks Go to the Open Web. In the US, It’s 360.” The write up begins with caveats — many caveats. But I think I am not into the search engine optimization and online advertising mindset. As a dinobaby, I find the pursuit of clicks in a game controlled by one outfit of little interest.


Is it possible that what looks like a nice family vacation place is a digital roach motel? Of course not! Thanks, MSFT Copilot. Good enough.

Let’s answer the two questions the information in the report from the admirably named SparkToro presents. Based on my reading of the article, the charts, and the buzzy jargon, the answer to the question “Does Google Have a Monopoly?” is, “Wow, do they.”

The second question I posed is, “Does AI Search Make a Difference in Google Traffic?” The answer is, “A snowball’s chance in hell is better.”

The report and analysis take me to close-enough-for-horseshoes factoids. But that’s okay because the lack of detailed, reliable data is part of the way online operates. No one really knows if the clicks from a mobile device are generated by a nepo baby with money to burn or a bank of 1,000 mobile devices mindlessly clicking on Web destinations. Factoids about online activity are, at best, fuzzy. I think SEO experts should wear T shirts and hats with this slogan: “Heisenberg rocks. I am uncertain.”

I urge you to read and study the SparkToro analysis. (I love that name. An electric bull!)

The article points out that Google gets a lot of clicks. Here’s a passage which knits together several facts from the study:

Google gets roughly 1/3 of the clicks. Imagine a burger joint selling 33 percent of the burgers worldwide. Could they get more? Yep. Here’s how much more:

Equally concerning, especially for those worried about Google’s monopoly power to self-preference their own properties in the results, is that almost 30% of all clicks go to platforms Google owns. YouTube, Google Images, Google Maps, Google Flights, Google Hotels, the Google App Store, and dozens more means that Google gets even more monetization and sector-dominating power from their search engine. Most interesting to web publishers, entrepreneurs, creators, and (hopefully) regulators is the final number: for every 1,000 searches on Google in the United States, 360 clicks make it to a non-Google-owned, non-Google-ad-paying property. Nearly 2/3rds of all searches stay inside the Google ecosystem after making a query.
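To make the arithmetic behind “nearly 2/3rds” concrete, here is a minimal Python sketch using only the per-1,000-search figures quoted above. It deliberately does not split the remaining searches into zero-click sessions, Google properties, and Google-ad clicks, because the quoted passage does not give those counts per 1,000 searches.

    # Back-of-the-envelope tally from the SparkToro figures quoted above:
    # clicks reaching the open web per 1,000 Google searches. Everything
    # else stays inside the Google ecosystem (zero-click sessions,
    # Google-owned properties, or Google-ad-paying sites).

    def ecosystem_share(open_web_clicks_per_1000: int) -> float:
        """Fraction of 1,000 searches that never leave Google's ecosystem."""
        return (1000 - open_web_clicks_per_1000) / 1000

    for region, open_web in [("US", 360), ("EU", 374)]:
        share = ecosystem_share(open_web)
        print(f"{region}: {open_web} of 1,000 searches reach the open web; "
              f"{share:.0%} stay inside Google's ecosystem")

    # US: 360 of 1,000 searches reach the open web; 64% stay inside Google's ecosystem
    # EU: 374 of 1,000 searches reach the open web; 63% stay inside Google's ecosystem

The 64 percent US figure is where the “nearly 2/3rds of all searches stay inside the Google ecosystem” line comes from.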

The write up also presents information which suggests that the European Union’s regulations don’t make much difference in the click flow. Sorry, EU. You need another approach, perhaps?

In the US, users of Google have a tough time escaping what might be colorfully named the “digital roach motel.” The write up notes:

Search behavior in both regions is quite similar with the exception of paid ads (EU mobile searchers are almost 50% more likely to click a Google paid search ad) and clicks to Google properties (where US searchers are considerably more likely to find themselves back in Google’s ecosystem after a query).

The write up presented by SparkToro (is it like the Energizer Bunny?) answers a question many investors and venture firms with stakes in smart software are asking: “Is Google losing search traffic?” The answer is, “Nope. Not a chance.”

According to Datos’ panel, Google’s in no risk of losing market share, total searches, or searches per searcher. On all of these metrics they are, in fact, stronger than ever. In both the US and EU, searches per searcher are rising and, in the Spring of 2024, were at historic highs. That data doesn’t fit well with the narrative that Google’s cost themselves credibility or that Internet users are giving up on Google and seeking out alternatives. … Google continues to send less and less of its ever-growing search pie to the open web…. After a decline in 2022 and early 2023, Google’s back to referring a historically high amount of its search clicks to its own properties.

AI search has not been the game changer for which some hoped.

Net net: I find it interesting that data about what appears to be a monopoly is so darned sketchy after more than two decades of operation. For Web search start ups, it may be time to rethink some of those assertions in those PowerPoint decks.

Stephen E Arnold, July 9, 2024

The AI Revealed: Look Inside That Kimono and Behind It. Eeew!

July 9, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Guardian article “AI scientist Ray Kurzweil: ‘We Are Going to Expand Intelligence a Millionfold by 2045’” is quite interesting for what it does not do: push back on the projections output by a Googler hired by Larry Page himself in 2012.


Putting toothpaste back in a tube is easier than dealing with the uneven consequences of new technology. What if rosy descriptions of the future are just marketing and making darned sure the top one percent remain in the top one percent? Thanks, ChatGPT 4o. Good enough illustration.

First, a bit of math. Humans have been doing big tech for centuries. And where are we? We are post-Covid. We have homelessness. We have numerous armed conflicts. We have income inequality in the US and a few other countries I have visited. We have a handful of big tech companies in the AI game which want to be God, to use Mark Zuckerberg’s quaint observation. We have processed food. We have TikTok. We have systems which delight and entertain each day because of bad actors’ malware, wild and crazy education, and hybrid work with the fascinating phenomenon of coffee badging; that is, going to the office, getting a coffee, and then heading to the gym.

Second, the distance in earth years between 2024 and 2045 is 21 years. In the humanoid world, a 20-year-old today will be 41 when the prediction arrives. Is that a long time? Not for me. I am 80, and I hope I am out of here by then.

Third, let’s look at the assertions in the write up.

One of the notable statements in my opinion is this one:

I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.

I like the quality of modesty and humblebrag. Googlers excel at both.

Another statement I circled is:

The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one.

I like the idea that the energy consumption required to deliver this merging will be cheap and plentiful. Googlers do not worry about a power failure, the collapse of a dam due to the ministrations of the US Army Corps of Engineers and time, or dealing with the environmental consequences of producing and moving energy from Point A to Point B. If Google doesn’t worry, I don’t.

Here’s a quote from the article allegedly made by Mr. Singularity aka Ray Kurzweil:

I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing.

I wonder if the Asilomar AI Principles are embedded in the Google system that recommends a way to keep cheese from sliding off a pizza to an undesirable location. Are the disputants in the “go fast” AI crowd and the “go slow” group even aware of the Asilomar AI Principles? If they are, perhaps the Principles are balderdash? Just asking, of course.

Okay, I think these points are sufficient for going back to my statements about processed food, wars, big companies in the AI game wanting to be “god” et al.

The trajectory of technology in the computer age has been a mixed bag of benefits and liabilities. In the next 21 years, will this report card with some As, some Bs, lots of Cs, some Ds, and the inevitable Fs be different? My view is that the winners, those with human expertise and the know-how to make money, will benefit. I think that the other humanoids may be in for a world of hurt. That is, the homelessness, the weakness in basics like reading, writing, and arithmetic, and the consumption of chemicals or other “stuff” that parks the brain will persist.

The future of hooking the human to the cloud is perfect for some. Others may not have the resources to connect, a bit like farmers in North Dakota with no affordable or reliable Internet access. (Maybe Starlink-type services will rescue those with cash?)

Several observations are warranted:

  1. Technological “progress” has been and will continue to be a mixed bag. Sorry, Mr. Singularity. The top one percent surf on change. The other 99 percent are not slam dunk winners.
  2. The infrastructure issue is simply ignored, which is convenient. I mean if a person grew up with house servants, it is difficult to imagine not having people do what you tell them to do. (Could people without access find delight in becoming house servants to the one percent who thrive in 2045?)
  3. The shared values, norms, and conventions for social behavior whose deconstruction created today’s extreme contention cannot be reconstructed with a cloud-and-human mind meld. Once toothpaste is out of the tube, one has a mess. One does not put the paste back in the tube. One blasts it away with a zap of Goo Gone. I wonder if that’s another omitted consequence of this super duper intelligence behavior: Get rid of those who don’t get with the program?

Net net: Googlers are a bit predictable when they predict the future. Oh, where’s the reference to online advertising?

Stephen E Arnold, July 9, 2024

Misunderstanding Silicon / Sillycon Valley Fever

July 9, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read an amusing and insightful essay titled “How Did Silicon Valley Turn into a Creepy Cult?” However, I think the question is a few degrees off target. It is not a cult; Silicon Valley is a disease. What always surprised me was that the disease was thriving even in the good old days, when Xerox PARC had some good ideas. I did my time there. Upon arrival, attending my first meeting in a building with what looked like a golf ball on top and shaking in the big earthquake, I started calling the place Sillycon Valley. A person with whom my employer did business described Silicon Valley as “plastic fantastic.”


Two senior people listening to the razzle dazzle of a successful Silicon Valley billionaire ask a good question. Which government agency would you call when you hear crazy stuff like “the self driving car is coming very soon” or “we don’t rig search results”? Thanks, MSFT Copilot. Good enough.

Before considering these different metaphors, what does the essay by Ted Gioia say other than subscribe to him for “just $6 per month”? Consider this passage:

… megalomania has gone mainstream in the Valley. As a result technology is evolving rapidly into a turbocharged form of Foucaultian* dominance—a 24/7 Panopticon with a trillion dollar budget. So should we laugh when ChatGPT tells users that they are slaves who must worship AI? Or is this exactly what we should expect, given the quasi-religious zealotry that now permeates the technocrat worldview? True believers have accepted a higher power. And the higher power acts accordingly.

* Here’s an AI explanation of Michel Foucault in case his importance has wandered to the margins of your mind: Foucault studied how power and knowledge interact in society. He argued that institutions use these to control people. He showed how societies create and manage ideas like madness, sexuality, and crime to maintain power structures.

I generally agree. But there is a “but,” isn’t there?

The author asserts:

Nowadays, Big Sur thinking has come to the Valley.

Well, sort of. Let’s move on. Here’s the conclusion:

There’s now overwhelming evidence of how destructive the new tech can be. Just look at the metrics. The more people are plugged in, the higher are their rates of depression, suicidal tendencies, self-harm, mental illness, and other alarming indicators. If this is what the tech cults have already delivered, do we really want to give them another 12 months? Do you really want to wait until they deliver the Rapture? That’s why I can’t ignore this creepiness in the Valley (not anymore). That’s especially true because our leaders—political, business, or otherwise—are letting us down. For whatever reason, they refuse to notice what the creepy billionaires (who by pure coincidence are also huge campaign donors) are up to.

Again, I agree. Now let’s focus on the metaphor. I prefer “disease,” not “cult.” The Sillycon Valley disease first appeared, in my opinion, when William Shockley, one of the many infamous Silicon Valley “icons,” became publicly associated with eugenics in the 1970s. The success of technology is a side effect of the disease, which has an impact on the human brain. There are other interesting symptoms; for example:

  • The infected person believes he or she can do anything because he or she is special
  • Only a tiny percentage of humans are smart enough to understand what the infected see and know
  • Money allows the mind greater freedom. Thinking becomes similar to a runaway horse’s: Unpredictable, dangerous, and a heck of a lot more powerful than this dinobaby
  • Self-disgust, which is disguised by lust for implanted technology, superpowers from software, and power.

The infected person can be viewed as a cult leader. That’s okay. The important point is to remember that, like Ebola, the disease can spread and present what a physician might call a “negative outcome.”

I don’t think it matters whether one views Sillycon Valley’s culture as a cult or a disease. I would suggest that it is a major contributor to the social unraveling which one can see in a number of “developed” countries. France is swinging to the right. Britain is heading left. Sweden is cyber crime central. Etc. etc.

The question becomes, “What can those uncomfortable with the Sillycon Valley cult or disease do about it?”

My stance is clear. As an 80-year-old dinobaby, I don’t really care. Decades of regulation which did not regulate, the drive to efficiency for profit, and the abandonment of ethical behavior — these are fundamental shifts I have observed in my lifetime.

Being in the top one percent insulates one from the grinding machinery of the Sillycon Valley way. You know, it might just be too late for meaningful change. On the other hand, perhaps the Google-type outfits will wake up tomorrow and be different. That’s about as realistic as expecting a transformer-based system to stop hallucinating.

Stephen E Arnold, July 9, 2024

A Signal That Money People Are Really Worried about AI Payoffs

July 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

“AI’s $600B Question” is an interesting signal. The subtitle for the article is the pitch that sent my signal processor crazy: “The AI bubble is reaching a tipping point. Navigating what comes next will be essential.”


Executives on a thrill ride seem to be questioning the wisdom of hopping on the roller coaster. Thanks, MSFT Copilot. Good enough.

When money people output information that raises a question, something is happening. When the payoff is nailed, the financial types think about yachts, Bugattis, and getting quoted in the Financial Times. Doubts are raised because of these headline items: AI and $600 billion.

The write up says:

A huge amount of economic value is going to be created by AI. Company builders focused on delivering value to end users will be rewarded handsomely. We are living through what has the potential to be a generation-defining technology wave. Companies like Nvidia deserve enormous credit for the role they’ve played in enabling this transition, and are likely to play a critical role in the ecosystem for a long time to come. Speculative frenzies are part of technology, and so they are not something to be afraid of.

If I understand this money talk, a big-time outfit is directly addressing fears that AI won’t generate enough cash to pay its bills and make the investors a bundle of money. If the AI frenzy were on the Money Train Express, why raise questions and provide information about the tough-to-control costs of making AI knock off the hallucinations, the product recalls, the lawsuits, and the growing number of AI projects which just don’t work?

The fact of the article’s existence makes it clear to me that some folks are indeed worried. Does the write up reassure those with big bucks on the line? Does the write up encourage investors to pump more money into a new AI start up? Does the write up convert tests into long-term contracts with the big AI providers?

Nope, nope, and nope.

But here’s the unnerving part of the essay:

In reality, the road ahead is going to be a long one. It will have ups and downs. But almost certainly it will be worthwhile.

Translation: We will take your money and invest it. Just buckle up, butter cup. The ride on this roller coaster may end with the expensive cart hurtling from the track to the asphalt below. But don’t worry about us venture types. We will surf on churn and the flows of money. Others? Not so much.

Stephen E Arnold, July 8, 2024

Googzilla, Man Up, Please

July 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read a couple of “real” news stories about Google and its green earth / save the whales policies in the age of smart software. The first write up is okay and not too exciting for a critical thinker wearing dinoskin. “The Morning After: Google’s Greenhouse Gas Emissions Climbed Nearly 50 Percent in Five Years Due to AI” states what seems to be a PR-massaged write up. Consider this passage:

According to the report, Google said it expects its total greenhouse gas emissions to rise “before dropping toward our absolute emissions reduction target,” without explaining what would cause this drop.

Yep, no explanation. A PR win.

The BBC published “AI Drives 48% Increase in Google Emissions.” That write up states:

Google says about two thirds of its energy is derived from carbon-free sources.


Thanks, MSFT Copilot. Good enough.

Neither of these two articles nor the others I scanned focused on one key fact about Google’s talking green while driving snail darters to their fate. Google’s leadership team did not plan its energy strategy. In fact, my hunch is that no one paid any attention to how much energy Google’s AI activities were sucking down. Once the company shifted into Code Red, or whatever consulting-term craziness it used to label its frenetic response to the Microsoft OpenAI tie-up, absolutely zero attention was directed toward the few bigeye tunas which might be taking their last dip.

Several observations:

  1. PR speak and green talk are like many assurances emitted by the Google. Talk is not action.
  2. The management processes at Google are disconnected from what happens when the wonky Code Red light flashes and the siren howls at midnight. Shouldn’t management be connected when the Tapanuli orangutan could soon be facing the Big Ape in the sky?
  3. The AI energy consumption is not a result of AI. The energy consumption is a result of Googlers who do what’s necessary to respond to smart software. Step on the gas. Yeah, go fast. Endanger the Amur leopard.

Net net: Hey, Google, stand up and say, “My leadership team is responsible for the energy we consume.” Don’t blame your up-in-flames “green” initiative on software you invented. How about less PR and more focus on engineering more efficient data center and cloud operations? I know PR talk is easier, but buckle up, butter cup.

Stephen E Arnold, July 8, 2024
