Oxygen: Keep the Bait Alive for AI Revenue

July 10, 2024

Andreessen Horowitz published “Who Owns the Generative AI Platform?” in January 2023. The rah-rah appeared almost at the same time as the marketing coup of the Microsoft OpenAI deal. In that essay, the venture firm (which also functions as a publisher) stated this about AI:

…there is enough early data to suggest massive transformation is taking place. What we don’t know, and what has now become the critical question, is: Where in this market will value accrue?

Now a partial answer is emerging. 

The Information, a paywalled online news service, revealed “Andreessen Horowitz Is Building a Stash of More Than 20,000 GPUs to Win AI Deals.” That report asserts:

The firm has secured thousands of AI chips, including Nvidia H100 graphics processing units, and is renting them to portfolio companies, according to a person who has discussed the initiative with the firm’s partners…. Andreessen Horowitz has told startup founders the initiative is called “oxygen.”

The initiative reflects what might be a way to hook promising AI outfits and plop them into the firm’s large foldable floating fish basket for live-caught, gill-bearing vertebrates, sometimes called chum.

This factoid emerges shortly after a big Silicon Valley venture outfit raved about the oodles of opportunity AI represents. Plus, reports of blue-chip consulting firms’ through-the-roof AI consulting revenue have encouraged a couple of the big outfits to offer AI services. In addition to opining and advising, the consulting firms are moving aggressively into the AI implementing and operating business.

The morphing of a venture firm into a broker of GPU cycles complements the thinking-for-money firms’ shifting gears to a more hands-on approach.

There are several implications from my point of view:

  • The fastest way to make money from the AI frenzy is to charge people so they can “do” AI.
  • Without a clear revenue stream of sufficient magnitude to foot the bill for the rather hefty costs of “doing” AI with a chance of making cash, selling blue jeans to the miners makes sense. But changing business tactics can add an element of spice to an unfamiliar restaurant’s special of the day.
  • The move from passive (thinking and waiting) to active (doing and charging for hardware and services) brings a different management challenge to the companies making the shift.

These factors suggest that the best way to cash in on AI is to provide what Andreessen Horowitz calls oxygen. It is a clear indication that the AI fish will die without some aggressive intervention. 

I am a dinobaby, sitting in my rocker on the porch of the rest home watching the youngsters scramble to make money from what was supposed to be a sure-fire winner. What we know from watching those lemonade stand operators is that success is often difficult to achieve. The grade school kids setting up shop in a subdivision where heat and fatigue take their toll give up and go inside where the air is cool and TikTok waits.

Net net: The Andreessen Horowitz revelation is one more indication that the costs of AI and the difficulty of generating sufficient revenue are starting to hit home. Therefore, advisors’ thoughts seem to be turning to actions designed to produce cash, magnetism, and success. Will the efforts produce the big payoffs? Are these tactical plays brilliant moves or just another neighborhood lemonade stand?

Stephen E Arnold, July 10, 2024

Market Research Shortcut: Fake Users Creating Fake Data

July 10, 2024

Market research can be complex and time-consuming. It would save so much time if one could consolidate thousands of potential respondents into one model. A young AI firm offers exactly that, we learn from Nielsen Norman Group’s article, “Synthetic Users: If, When, and How to Use AI-Generated ‘Research.’”

But are the results accurate? Not so much, according to writers Maria Rosala and Kate Moran. The pair tested fake users from the young firm Synthetic Users and ones they created using ChatGPT. They compared responses to sample questions from both real and fake humans. Each group gave markedly different responses. The write-up notes:

“The large discrepancy between what real and synthetic users told us in these two examples is due to two factors:

  • Human behavior is complex and context-dependent. Synthetic users miss this complexity. The synthetic users generated across multiple studies seem one-dimensional. They feel like a flat approximation of the experiences of tens of thousands of people, because they are.
  • Responses are based on training data that you can’t control. Even though there may be proof that something is good for you, it doesn’t mean that you’ll use it. In the discussion-forum example, there’s a lot of academic literature on the benefits of discussion forums on online learning and it is possible that the AI has based its response on it. However, that does not make it an accurate representation of real humans who use those products.”

That seems obvious to us, but apparently some people need to be told. The lure of fast and easy results is strong. See the article for more observations. Here are a couple worth noting:

“Real people care about some things more than others. Synthetic users seem to care about everything. This is not helpful for feature prioritization or persona creation. In addition, the factors are too shallow to be useful.”

Also:

“Some UX [user experience] and product professionals are turning to synthetic users to validate product concepts or solution ideas. Synthetic Users offers the ability to run a concept test: you describe a potential solution and have your synthetic users respond to it. This is incredibly risky. (Validating concepts in this way is risky even with human participants, but even worse with AI.) Since AI loves to please, every idea is often seen as a good one.”

So as appealing as this shortcut may be, it is a fast track to incorrect results. Basing business decisions on “insights” from shallow, eager-to-please algorithms is unwise. The authors interviewed Synthetic Users’ cofounder Hugo Alves. He acknowledged the tools should only be used as a supplement to surveys of actual humans. However, the post points out, the company’s website seems to imply otherwise: it promises “User research. Without the users.” That is misleading, at best.

Cynthia Murrell, July 10, 2024

The AI Revealed: Look Inside That Kimono and Behind It. Eeew!

July 9, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Guardian article “AI scientist Ray Kurzweil: ‘We Are Going to Expand Intelligence a Millionfold by 2045’” is quite interesting for what it does not do: challenge the projections output by a Googler hired by Larry Page himself in 2012.


Putting toothpaste back in a tube is easier than dealing with the uneven consequences of new technology. What if rosy descriptions of the future are just marketing and making darned sure the top one percent remain in the top one percent? Thanks, ChatGPT 4o. Good enough illustration.

First, a bit of math. Humans have been doing big tech for centuries. And where are we? We are post-Covid. We have homelessness. We have numerous armed conflicts. We have income inequality in the US and a few other countries I have visited. We have a handful of big tech companies in the AI game which want to be God, to use Mark Zuckerberg’s quaint observation. We have processed food. We have TikTok. We have systems which delight and entertain each day because of bad actors’ malware, wild and crazy education, and hybrid work with the fascinating phenomenon of coffee badging; that is, going to the office, getting a coffee, and then heading to the gym.

Second, the distance in earth years between 2024 and 2045 is 21 years. In the humanoid world, a 20 year old today will be 41 when the prediction arrives. Is that a long time? Not for me. I am 80, and I hope I am out of here by then.

Third, let’s look at the assertions in the write up.

One of the notable statements in my opinion is this one:

I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.

I like the quality of modesty and humblebrag. Googlers excel at both.

Another statement I circled is:

The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one.

I like the idea that the energy consumption required to deliver this merging will be cheap and plentiful. Googlers do not worry about a power failure, the collapse of a dam due to the ministrations of the US Army Corps of Engineers and time, or dealing with the environmental consequences of producing and moving energy from Point A to Point B. If Google doesn’t worry, I don’t.

Here’s a quote from the article allegedly made by Mr. Singularity aka Ray Kurzweil:

I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing.

I wonder if the Asilomar AI Principles are embedded in Google’s system that recommends ways to keep the cheese on a pizza from sliding to an undesirable location? Are the disputants in the “go fast” AI crowd and the “go slow” group even aware of the Asilomar AI Principles? If they are, perhaps the Principles are balderdash? Just asking, of course.

Okay, I think these points are sufficient for going back to my statements about processed food, wars, big companies in the AI game wanting to be “god” et al.

The trajectory of technology in the computer age has been a mixed bag of benefits and liabilities. In the next 21 years, will this report card with some As, some Bs, lots of Cs, some Ds, and the inevitable Fs be different? My view is that the winners with human expertise and the know-how to make money will benefit. I think that the other humanoids may be in for a world of hurt. The homelessness stuff, the weakness in basics like reading, writing, and arithmetic, and the consumption of chemicals or other “stuff” that parks the brain will persist.

The future of hooking the human to the cloud is perfect for some. Others may not have the resources to connect, a bit like farmers in North Dakota with no affordable or reliable Internet access. (Maybe Starlink-type services will rescue those with cash?)

Several observations are warranted:

  1. Technological “progress” has been and will continue to be a mixed bag. Sorry, Mr. Singularity. The top one percent surf on change. The other 99 percent are not slam dunk winners.
  2. The infrastructure issue is simply ignored, which is convenient. I mean if a person grew up with house servants, it is difficult to imagine not having people do what you tell them to do. (Could people without access find delight in becoming house servants to the one percent who thrive in 2045?)
  3. The extreme contention created by the deconstruction of shared values, norms, and conventions for social behavior is something that cannot be reconstructed with a cloud and human mind meld. Once toothpaste is out of the tube, one has a mess. One does not put the paste back in the tube. One blasts it away with a zap of Goo Gone. I wonder if that’s another omitted consequence of this super duper intelligence behavior: Get rid of those who don’t get with the program?

Net net: Googlers are a bit predictable when they predict the future. Oh, where’s the reference to online advertising?

Stephen E Arnold, July 9, 2024

A Signal That Money People Are Really Worried about AI Payoffs

July 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

“AI’s $600B Question” is an interesting signal. The subtitle for the article is the pitch that sent my signal processor crazy: “The AI bubble is reaching a tipping point. Navigating what comes next will be essential.”


Executives on a thrill ride seem to be questioning the wisdom of hopping on the roller coaster. Thanks, MSFT Copilot. Good enough.

When money people output information that raises a question, something is happening. When the payoff is nailed, the financial types think about yachts, Bugattis, and getting quoted in the Financial Times. Doubts are raised because of these headline items: AI and $600 billion.

The write up says:

A huge amount of economic value is going to be created by AI. Company builders focused on delivering value to end users will be rewarded handsomely. We are living through what has the potential to be a generation-defining technology wave. Companies like Nvidia deserve enormous credit for the role they’ve played in enabling this transition, and are likely to play a critical role in the ecosystem for a long time to come. Speculative frenzies are part of technology, and so they are not something to be afraid of.

If I understand this money talk, a big-time outfit is directly addressing fears that AI won’t generate enough cash to pay its bills and make the investors a bundle of money. If the AI frenzy were on the Money Train Express, why raise questions and provide information about the tough-to-control costs of making AI knock off the hallucinations, the product recalls, the lawsuits, and the growing number of AI projects which just don’t work?

The fact of the article’s existence makes it clear to me that some folks are indeed worried. Does the write up reassure those with big bucks on the line? Does the write up encourage investors to pump more money into a new AI start up? Does the write up convert tests into long-term contracts with the big AI providers?

Nope, nope, and nope.

But here’s the unnerving part of the essay:

In reality, the road ahead is going to be a long one. It will have ups and downs. But almost certainly it will be worthwhile.

Translation: We will take your money and invest it. Just buckle up, butter cup. The ride on this roller coaster may end with the expensive cart hurtling from the track to the asphalt below. But don’t worry about us venture types. We will surf on churn and the flows of money. Others? Not so much.

Stephen E Arnold, July 8, 2024

Googzilla, Man Up, Please

July 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read a couple of “real” news stories about Google and its green earth / save the whales policies in the age of smart software. The first write up is okay and not too exciting for a critical thinker wearing dinoskin. “The Morning After: Google’s Greenhouse Gas Emissions Climbed Nearly 50 Percent in Five Years Due to AI” presents what seems to be a PR-massaged write up. Consider this passage:

According to the report, Google said it expects its total greenhouse gas emissions to rise “before dropping toward our absolute emissions reduction target,” without explaining what would cause this drop.

Yep, no explanation. A PR win.

The BBC published “AI Drives 48% Increase in Google Emissions.” That write up states:

Google says about two thirds of its energy is derived from carbon-free sources.


Thanks, MSFT Copilot. Good enough.

Neither of these two articles nor the others I scanned focused on one key fact about Google’s talking green while driving snail darters to their fate. Google’s leadership team did not plan its energy strategy. In fact, my hunch is that no one paid any attention to how much energy Google’s AI activities were sucking down. Once the company shifted into Code Red or whatever consulting-term craziness it used to label its frenetic response to the Microsoft OpenAI tie up, absolutely zero attention was directed toward the few big-eyed tunas which might be taking their last dip.

Several observations:

  1. PR speak and green talk are like many assurances emitted by the Google. Talk is not action.
  2. The management processes at Google are disconnected from what happens when the wonky Code Red light flashes and the siren howls at midnight. Shouldn’t management be connected when the Tapanuli orangutan could soon be facing the Big Ape in the sky?
  3. The AI energy consumption is not the fault of the software itself. The energy consumption is a result of Googlers who do what’s necessary to respond to smart software. Step on the gas. Yeah, go fast. Endanger the Amur leopard.

Net net: Hey, Google, stand up and say, “My leadership team is responsible for the energy we consume.” Don’t blame your up-in-flames “green” initiative on software you invented. How about less PR and more focus on engineering more efficient data center and cloud operations? I know PR talk is easier, but buckle up, butter cup.

Stephen E Arnold, July 8, 2024

AI: Hurtful and Unfair. Obviously, Yes

July 5, 2024

It will be years before AI is “smart” enough to entirely replace humans, but that future is closing in. The problem with current AI systems is that they are stupid. They don’t know how to do anything unless they’re trained on huge datasets. These datasets contain the copyrighted, trademarked, proprietary, and otherwise hard-won work of individuals. These people don’t want their work used to train AI without their permission, much less to replace them. Futurism shares that even AI engineers are worried about their creations: “Video Shows OpenAI Admitting It’s ‘Deeply Unfair’ To ‘Build AI And Take Everyone’s Job Away.”

The interview containing an AI software engineer’s admission of guilt originally appeared in The Atlantic, but the engineers’ morality is quickly covered by their apathy. Brian Wu is the engineer in question. He feels conflicted about making jobs obsolete, but he makes an observation that accompanies progress and new technology: things change, and that is inevitable:
“It won’t be all bad news, he suggests, because people will get to ‘think about what to do in a world where labor is obsolete.’

But as he goes on, Wu sounds more and more unconvinced by his own words, as if he’s already surrendered himself to the inevitability of this dystopian AI future.

‘I don’t know,’ he said. ‘Raise awareness, get governments to care, get other people to care.’ A long pause. ‘Yeah. Or join us and have one of the few remaining jobs. I don’t know. It’s rough.’”

Wu’s colleague Daniel Kokotajlo believes humans will invent an all-knowing artificial general intelligence (AGI). The AGI will create wealth; it won’t be distributed evenly, but all humans will be rich. Kokotajlo then delves into the typical science-fiction story about a super AI becoming evil and turning against humanity. The AI engineers, however, aren’t concerned with the moral ambiguity of AI. They want to invent, continue building wealth, and are hellbent on doing it no matter the consequences. It’s pure motivation but also narcissism and entitlement.

Whitney Grace, July 5, 2024

Smart Software and Knowledge Skills: Nothing to Worry About. Nothing.

July 5, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read an article in Bang Premier (an estimable online publication of which I had no prior knowledge). It is now a “fave of the week.” The story “University Researchers Reveal They Fooled Professors by Submitting AI Exam Answers” was one of those experimental results which caused me to chuckle. I like to keep track of sources of entertaining AI information.


A doctor and his surgical team used smart software to ace their medical training. Now a patient learns that the AI system does not have the information needed to perform life-saving surgery. Thanks, MSFT Copilot. Good enough.

The Bang Premier article reports:

Researchers at the University of Reading have revealed they successfully fooled their professors by submitting AI-generated exam answers. Their responses went totally undetected and outperformed those of real students, a new study has shown.

Is anyone surprised?

The write up noted:

Dr Peter Scarfe, an associate professor at Reading’s school of psychology and clinical language sciences, said about the AI exams study: “Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments. We won’t necessarily go back fully to handwritten exams, but the global education sector will need to evolve in the face of AI.”

But the knee slapper is this statement in the write up:

In the study’s endnotes, the authors suggested they might have used AI to prepare and write the research. They stated: “Would you consider it ‘cheating’? If you did consider it ‘cheating’ but we denied using GPT-4 (or any other AI), how would you attempt to prove we were lying?” A spokesperson for Reading confirmed to The Guardian the study was “definitely done by humans”.

The researchers may not have used AI to create their report, but is it possible that some of the researchers thought about this approach?

Generative AI software seems to have hit a plateau due to technology, financial, or training issues. Perhaps those who are trying to design a smart system to identify bogus images, machine-produced text and synthetic data, and nifty videos which often look like “real” TikTok-type creations will catch up? But if the AI innovators continue to refine their systems, the “AI identifier” software is stuck in a game of cat and mouse. Reacting to smart software means that existing identifiers will be blind to the new systems’ outputs.

The goal is a noble one, but the advantage goes to the AI companies, particularly those who want to go fast and break things. Academics get some benefit. New studies will be needed to determine how much fakery goes undetected. Will a surgeon who used AI to get his or her degree be able to handle a tricky operation and get the post-op drugs right?

Sure. No worries. Some might not think this is a laughing matter. Hey, it’s AI. It is A-Okay.

Stephen E Arnold, July 5, 2024

Microsoft Recall Continues to Concern UK Regulators

July 4, 2024

A “feature” of the upcoming Microsoft Copilot+, dubbed Recall, looks like a giant, built-in security risk. Many devices already harbor software that can hunt through one’s files, photos, emails, and browsing history. Recall intrudes further by also taking and storing a screenshot every few seconds. Wait, what? That is what the British Information Commissioner’s Office (ICO) is asking. The BBC reports, “UK Watchdog Looking into Microsoft AI Taking Screenshots.”

Microsoft asserts users have control and that the data Recall snags is protected. But the company’s pretty words are not enough to convince the ICO. The agency is grilling Microsoft about the details and will presumably update us when it knows more. Meanwhile, journalist Imran Rahman-Jones asked experts about Recall’s ramifications. He writes:

“Jen Caltrider, who leads a privacy team at Mozilla, suggested the plans meant someone who knew your password could now access your history in more detail. ‘[This includes] law enforcement court orders, or even from Microsoft if they change their mind about keeping all this content local and not using it for targeted advertising or training their AIs down the line,’ she said. According to Microsoft, Recall will not moderate or remove information from screenshots which contain passwords or financial account information. ‘That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry,’ said Ms. Caltrider. ‘I wouldn’t want to use a computer running Recall to do anything I wouldn’t do in front of a busload of strangers. ‘That means no more logging into financial accounts, looking up sensitive health information, asking embarrassing questions, or even looking up information about a domestic violence shelter, reproductive health clinic, or immigration lawyer.’”

Calling Recall a privacy nightmare, AI and privacy adviser Dr Kris Shrishak notes just knowing one’s device is constantly taking screenshots will have a chilling effect on users. Microsoft appears to have “pulled” the service. But data and privacy expert Daniel Tozer made a couple more points: How will a company feel if a worker’s Copilot snaps a picture of their proprietary or confidential information? Will anyone whose likeness appears in video chat or a photo be asked for consent before the screenshot is taken? Our guess—not unless it is forced to.

Cynthia Murrell, July 4, 2024

Satire or Marketing: Let Smart Software Decide

July 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

What’s PhD level intelligence? In 1962, I had a required class in one of the -ologies. I vaguely remember that my classmates and I had to learn about pigeons, rats, and people who would make decisions that struck me as off the wall. The professor was named after a Scottish family from the Highlands. I do recall looking up the name and finding that it meant “crooked nose.” But the nose, as nice as it was, was nothing compared to the bed springs the good professor suspended from a second story window. I asked him, “What’s the purpose of the bed springs?” (None of the other students in the class cared, but I found the sight interesting.) His reply was, “I am using it as an antenna.” Okay, that is one example of PhD-level intelligence. I have encountered others, but I will not regale you with their somewhat idiosyncratic behaviors.

image

The illustration demonstrates the common sense approach to problem solving. Thanks, MSFT Copilot. Chugging away on Recall and security over the holiday. Yep, I bet you are.

Why am I thinking about a story about bed springs suspended from a second story window? I just read “ChatGPT Now Has PhD Level Intelligence, and the Poor Personal Choices to Prove It.” The write up states:

ChatGPT has become indispensable to plagiarists and spambots worldwide. Now, OpenAI is thrilled to introduce ChatGPT 5.0, the most advanced version of the popular virtual assistant to date. With groundbreaking improvements, GPT-5 is like having a doctor of philosophy right at your fingertips.

The write up (in a humorous vein, I really hope) identifies some characteristics of the next big thing from Sam AI-Man, the Apple partner (he hopes). The article suggests that ChatGPT 5 can cherish a saloon mug with this snappy phrase printed on the container:

Suck it up, liver.

No, I don’t want the mug, nor have I abused my liver as an undergraduate. I actually enjoyed spending time in the so-called university’s library.

The write up includes a short commentary about the ethical concerns or guardrails Sam AI-Man has on the Information Highway, which is populated with smart vehicles and presumably less smart humans who are confident the zeros and ones don’t get scrambled:

Numerous questions have arisen regarding the ethics and legality of training ChatGPT on copyrighted text data without permission. In this latest version, however, reliance on authors’ intellectual property has been dramatically reduced. While GPT-5 started training from a knowledge base of millions of texts, it got around to reading only Frankenstein, plus maybe half of a Donna Haraway book. It basically bluffed its way through prelims by talking about “embodiment” a lot.

I get the Mary Shelley reference. But Donna Haraway forced me to ask Claude Sonnet 3.5, “Please, explain Donna Haraway’s contribution to ethical discourse.” Here’s what I received from the smart system competing with Sam AI-Man’s presumably smarter system:

Here’s Donna Haraway’s contribution to ethical discourse in three sentences:

  1. Donna Haraway challenged traditional ideas about what it means to be human by introducing the concept of a “cyborg” – a blend of human and machine.
  2. She argued that the lines between nature and culture, human and animal, and physical and non-physical are not as clear as we think, which forces us to rethink our ethical responsibilities.
  3. Her work encourages us to consider how technology and science affect our understanding of ourselves and our relationships with other living things, pushing us to develop new ethical frameworks for our changing world.

Thank you, Claude Sonnet 3.5! I have stated that my IQ score pegs me in the “Dumb Cod” percentile. I think Ms. Haraway is into the Ray Kurzweil and Elon Musk concept space. I know I am looking forward to nanodevices able to keep me alive for many, many years. I want to poke fun at smart software, and I quite like to think about PhD level software.

To close, I want to quote the alleged statement of a very smart person who could not remember if OpenAI used YouTube-type content to train ChatGPT. (Hey, even crooked nose remembered that he suspended the bed springs to function like an antenna.) The CTO of OpenAI allegedly said:

“If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence… and then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at PhD intelligence…” — Open AI CTO Mira Murati, in an interview with Dartmouth Engineering

I wonder if a person without a PhD can recognize “PhD intelligence”? Sure. Why not? It’s marketing.

Stephen E Arnold, July 3, 2024

Another Open Source AI Voice Speaks: Yo, Meta!

July 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The contest between open source software and closed source software ebbs and flows. Like the “go fast” with AI and “go slow” with AI camps, strong opinions suggest that big money and power are swirling like the storms on a weather app for Oklahoma in tornado season. The most recent EF5 is captured in “Zuckerberg Disses Closed-Source AI Competitors As Trying to Create God.” The US government seems to be concerned about open source smart software finding its way into the hands of those who are not fans of George Washington-type thinking.


Which AI philosophy will win the big pile of money? Team Blue representing the Zuck? Or, the rag tag proprietary wizards? Thanks, MSFT Copilot. You are into proprietary, aren’t you?

The “move fast and break things” personage of Mark Zuckerberg is into open source smart software. In the write up, he allegedly said in a YouTube bit:

“I don’t think that AI technology is a thing that should be kind of hoarded and … that one company gets to use it to build whatever central, single product that they’re building,” Zuckerberg said in a new YouTube interview with Kane Sutter (@Kallaway).

The write up includes this passage:

In the conversation, Zuckerberg said there needs to be a lot of different AIs that get created to reflect people’s different interests.

One interesting item in the article, in my opinion, is this:

“You want to unlock and … unleash as many people as possible trying out different things,” he continued. “I mean, that’s what culture is, right? It’s not like one group of people getting to dictate everything for people.”

But the killer Meta vision is captured in this passage:

Zuckerberg said there will be three different products ahead of convergence: display-less smart glasses, a heads-up type of display and full holographic displays. Eventually, he said that instead of neural interfaces connected to their brain, people might one day wear a wristband that picks up signals from the brain communicating with their hand. This would allow them to communicate with the neural interface by barely moving their hand. Over time, it could allow people to type, too. Zuckerberg cautioned that these types of inputs and AI experiences may not immediately replace smartphones, though. “I don’t think, in the history of technology, the new platform — it usually doesn’t completely make it that people stop using the old thing. It’s just that you use it less,” he said.

In short, the mobile phone is going down, not tomorrow, but definitely to the junk drawer.

Several observations which I know you are panting to read:

  1. Never underestimate making something small or re-inventing it as a different form factor. The Zuck might be “right.”
  2. The idea of “unleash” is interesting. What happens if employees at WhatsApp unleash themselves? How will the Zuck construct react? Like the Google? Something new like blue chip consulting firms replacing people with smart software? “Unleash” can be interpreted in different ways, but I am thinking of turning loose a pack of hyenas. The Zuck may be thinking about eager kindergartners. Who knows?
  3. The Zuck’s position is different from that of the government officials who are moving toward restrictions on “free and open” smart software. Those hallucinating large language models can be repurposed into smart weapons. Close enough for horseshoes; with enough RDX, close may do the job.

Net net: The Zuck is an influential and very powerful information channel owner. “Unleash” what? Hungry predators or those innovating children? Perhaps neither. But as OpenAI seems to be closing, the Zuck AI is into opening. Ah, uncertainty is unfolding before my eyes in real time.

Stephen E Arnold, July 3, 2024
