Big Tech and Their Software: The Tent Pole Problem
May 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I remember a Boy Scout camping trip. I was a Wolf Scout at the time, and my “pack” had the task of setting up our tent for the night. The scout master was Mr. Johnson, and he left it to us. The weather did not cooperate; the tent pegs pulled out in the wind. The center tent pole broke. We stood in the rain. We knew the badge for camping was gone, just like a dry place to sleep. Failure. Whom could we blame? I suggested, “McKinsey & Co.” I had learned that third parties were usually fall guys. No one knew what I was talking about.
Okay, ChatGPT, good enough.
I thought about the tent pole failure, the miserable camping experience, and the need to blame McKinsey or at least an entity other than ourselves. The memory surfaced as I read “Laws of Software Evolution.” The write up sets forth some ideas which may not be firm guidelines like those articulated by the World Court, but they are about as enforceable.
Let’s look at the laws explicated in the essay.
The first law is that software is to support a real-world task. A result (a corollary, maybe?) is that the software has to evolve. That is the old chestnut: “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” The problem is change, which consumes money and time. As a result, original software is wrapped, peppered with calls to snappy new modules designed to fix up or extend the original software.
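The wrapping pattern is easy to picture. Here is a minimal sketch in Python of the evolution path described above; the tax calculation and every name in it are hypothetical, invented for illustration:

```python
# A legacy function stays untouched; new behavior is bolted on around it.
import functools

def legacy_tax(amount):
    """Pretend this is 20-year-old code nobody dares redesign."""
    return amount * 0.07

def with_new_rules(old_fn):
    """Wrap the original instead of rewriting it: the "snappy new
    module" route the essay says real software actually takes."""
    @functools.wraps(old_fn)
    def wrapper(amount, region="US"):
        base = old_fn(amount)                                 # old behavior, preserved
        surcharge = 0.02 * amount if region == "EU" else 0.0  # the new rule
        return base + surcharge
    return wrapper

tax = with_new_rules(legacy_tax)
print(tax(100.0))               # 7.0, unchanged for old callers
print(tax(100.0, region="EU"))  # 9.0, the wrapped extension
```

Each wrapper is cheap in the moment, which is why redesign keeps losing to wrapping; the second law below describes the bill that eventually arrives.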
The second law is that when changes are made, the software construct becomes more complex. Complexity is what humans do. A true master makes certain processes simple. Software has artists, poets, and engineers with vision. Simple may not be a key component of the world the programmer wants to create. Thus, increasing complexity creates surprises like unknown dependencies, sluggish performance, and a giant black hole of costs.
The third law is not explicitly called out like Laws One and Two. Here’s my interpretation of the “lurking law,” as I have termed it:
Code can be shaped and built upon.
My reaction to this essay is positive, but the link to evolution eludes me. The one issue I want to raise is that once software is built, deployed, and fiddled with, it is like a river pier built by Roman engineers. Moving the pier or fixing it so it will persist is a very, very difficult task. At some point, even the Roman concrete will weather away. The bridge or structure will fall down. Gravity wins. I am okay with software devolution.
The future, therefore, will be stuffed with software breakdowns. The essay makes a logical statement:
… we should embrace the malleability of code and avoid redesign processes at all costs!
Sorry. Won’t happen. Woulda, shoulda, and coulda cannot do the job.
Stephen E Arnold, May 1, 2024
A High-Tech Best Friend and Campfire Lighter
May 1, 2024
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
A dog is allegedly man’s best friend. I have a French bulldog, and I am not 100 percent sure that’s an accurate statement. But I have a way to get the pal I have wanted for years.
Ars Technica reports “You Can Now Buy a Flame-Throwing Robot Dog for Under $10,000” from Ohio-based maker Throwflame. See the article for footage of this contraption setting fire to what appears to be a forest. Terrific. Reporter Benj Edwards writes:
“Thermonator is a quadruped robot with an ARC flamethrower mounted to its back, fueled by gasoline or napalm. It features a one-hour battery, a 30-foot flame-throwing range, and Wi-Fi and Bluetooth connectivity for remote control through a smartphone. It also includes a LIDAR sensor for mapping and obstacle avoidance, laser sighting, and first-person view (FPV) navigation through an onboard camera. The product appears to integrate a version of the Unitree Go2 robot quadruped that retails alone for $1,600 in its base configuration. The company lists possible applications of the new robot as ‘wildfire control and prevention,’ ‘agricultural management,’ ‘ecological conservation,’ ‘snow and ice removal,’ and ‘entertainment and SFX.’ But most of all, it sets things on fire in a variety of real-world scenarios.”
And what does my desired dog look like? The GenY Tibby asleep at work? Nope.
I hope my Thermonator includes an AI at the controls. Maybe that will be an add-on feature in 2025? Unitree, maker of the robot base mentioned above, once vowed to oppose the weaponization of its products (along with five other robotics firms). Perhaps Throwflame won them over with assertions that their device is not technically a weapon, since flamethrowers are not considered firearms by federal agencies. It is currently legal to own this mayhem machine in 48 states; certain restrictions apply in Maryland and California. How many crazies can scrape together a mere $9,420 plus tax for that kind of power? Even factoring in the cost of napalm (sold separately), probably quite a few.
Cynthia Murrell, May 1, 2024
One Half of the Sundar & Prabhakar Act Gets Egged: Garrf.
April 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
After I wrote Google Version 2: The Calculating Predator, Bear Stearns bought the rights to portions of my research and published one of its analyst reports. In that report, a point was made about Google’s research into semantic search. Remember, this was in 2005, long before the AI balloon inflated to the size of Taylor Swift’s piggy bank. My client (whom I am not allowed to name) and I were in the Manhattan Bear Stearns office. We received a call from Prabhakar Raghavan, who was the senior technology something at Yahoo at that time. I knew of Dr. Raghavan because he had been part of the Verity search outfit. On that call, Dr. Raghavan was annoyed that Bear Stearns suggested Yahoo was behind the eight ball in Web search. We listened, and I pointed out that Yahoo was not matching Google’s patent filing numbers. Although not a definitive indicator of innovation, it is one indicator. The Yahoo race car had sputtered and had lost the search race. I recall one statement Dr. Raghavan uttered, “I can do a better search engine for $300,000.” Well, I am still waiting. Dr. Raghavan may have an opportunity to find his future elsewhere if he continues to get the type of improvised biographical explosive device shoved under his office door at Google. I want to point out that I thought Dr. Raghavan’s estimate of the cost of search was a hoot. How could he beat that for a joke worthy of Jack Benny?
A big dumb bunny gets egged. Thanks, MSFT Copilot. Good enough.
I am referring to “The Man Who Killed Google Search,” written by Edward Zitron. For those to whom Mr. Zitron is not a household name like Febreze air freshener, he is “the CEO of national Media Relations and Public Relations company EZPR, of which I am both the E (Ed) and the Z (Zitron). I host the Better Offline Podcast, coming to iHeartRadio and everywhere else you find your podcasts February 2024.” For more about Mr. Zitron, navigate to this link. (Yep, it takes quite a while to load, but be patient.)
The main point of the write up is that the McKinsey-experienced Sundar Pichai (the other half of the comedy act) hired the article-writing, Verity-seasoned Dr. Raghavan to help steer the finely-crafted corporate aircraft carrier, USS Google, into the Sea of Money. Even though the duo is not very good at comedy, they are doing a bang-up job of making the creaking online advertising machine output big money. If you don’t know how big, just check out the earnings for the most recent financial quarter at this link. If you don’t want to wade through Silicon Valley jargon, Google is “a two trillion dollar company.” How do you like that, Mr. and Mrs. Traditional Advertising?
The write up is filled with proper names of Googlers past and present. The point is that the comedy duo dumped some individuals who embraced the ethos of the old, engineering-oriented, relevant search results Google. The vacancies were filled with those who could shove more advertising into what once were clean, reasonably well-lighted places. At the same time, carpetland (my term for the executive corridor down which Messrs. Brin and Page once steered their Segways) was elevated above the wonky world of the engineers, the programmers, the Ivory Tower thinker types, and the outright wonkiness of the advanced research units. (Yes, there were many at one time.)
Under the thought processes of McKinsey (the opioid idea folks) and the elocutionary skills of Dr. Raghavan, Google search degraded while the money continued to flow. The story presented by Mr. Zitron is interesting. I will leave it to you to internalize it and thank your lucky stars you are not given the biographical improvised explosive device as a seat cushion. Yowzah.
Several observations:
- I am not sure the Sundar & Prabhakar duo wrote the script for the Death of Google Search. Believe me, there were other folks in Google carpetland aiding the process. How about a baby maker in the legal department as an example of ground principles? What about an attempted suicide by a senior senior senior manager’s squeeze? What about a big time thinker’s untimely demise as a result of narcotics administered by a rental female?
- The problems at Google are a result of decades of high school science club members acting out their visions of themselves as masters of the universe and a desire to rig the game so money flowed. Cleverness, cute tricks, and owning the casino and the hotel and the parking lot were part of Google’s version of Hotel California. The business setup was money in, fancy dancing in public, and nerdland inside. Management? Hey, math is hard. Managing is zippo.
- The competitive arena was not set up for a disruptor like the Google. I do not want to catalog what the company did to capture what appears to be a very good market position in online advertising. After a quarter century, the idea that Google might be an alleged monopoly is getting some attention. But alleged is one thing; change is another.
- The innovator’s dilemma has arrived in the lair of Googzilla. After Google invented transformer technology, OpenAI made something snazzy with it and cut a deal with Microsoft. The result was the AI hyper moment with Google viewed as a loser. Forget the money. Google is not able to respond, some said. Perception is important. The PR gaffe in Paris where Dr. Prabhakar showed off Bard outputting incorrect information; the protests and arrests of staff; and the laundry list of allegations about the company’s business practices in the EU are compounding the one really big problem — Google’s ability to control its costs. Imagine. A corporate grunt sport could be the hidden disease. Is Googzilla clear headed or addled? Time will tell, I believe.
Net net: The man who killed Google is just a clueless accomplice, not the wizard with the death ray cooking the goose and its eggs. Ultimately, in my opinion, we have to blame the people who use Google products and services, rely on Google advertising, and trust search results. Okay, Dr. Raghavan, suspended sentence. Now you can go build your $300,000 Web search engine. I will be available to evaluate it as I did Search2, Neeva, and the other attempts to build a better Google. Can you do it? Sure, you will be a Xoogler. Xooglers can do anything. Just look at Mr. Brin’s airship. And that egg will wash off, unlike that crazy idea to charge Verity customers for each entry in an index passed for each user’s query. And that’s the joke that’s funnier than the Paris bollocksing of smart software. Taxi meter pricing for an in-house, enterprise search system. That is truly hilarious.
Stephen E Arnold, April 30, 2024
The Google Explains the Future of the Google Cloud: Very Googley, Of Course
April 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
At its recent Next 24 conference, Google Cloud and associates shared their visions for the immediate future of AI. Through the event’s obscurely named Session Library, one can watch hundreds of sessions and access resources connected to many more. The idea — if you have not caught on to the Googley nomenclature — is to make available videos of the talks at the conference. To narrow the list, one can filter by session category, conference track, learning level, solution, industry, topic of interest, and whether video is available. Keep in mind that the words you (a normal human, I presume) may use to communicate your interest may not be the lingo Googzilla speaks. AI and Machine Learning feature prominently. Other key areas include data and databases, security, development and architecture, productivity, and revenue growth (naturally). There is even a considerable nod to diversity, equity, and inclusion (DEI). Okay, nod, nod.
Here are a few session titles from just the “AI and ML” track to illustrate the scope of this event and the available information:
- A cybersecurity expert’s guide to securing AI products with Google SAIF
- AI for banking: Streamline core banking services and personalize customer experiences
- AI for manufacturing: Enhance productivity and build innovative new business models
- AI for telecommunications: Transform customer interactions and network operations
- AI in capital markets: The biggest bets in the industry
- Accelerate software delivery with Gemini and Code Transformations
- Revolutionizing healthcare with AI
- Streamlining access to youth mental health services
It looks like there is something for everybody. We think the titles make reasonably clear the scope and bigness of Google’s aspirations. Nor would we expect less from a $2 trillion outfit based on advertising, would we? Run a query for Code Red (in Google lingo, CodeRED), and you will be surprised that the state-of-emergency, Microsoft-is-a-PR-king mentality persists. (Is this the McKinsey way? Well, not for those employed at McKinsey.) Former McKinsey professionals have more latitude in their management methods; for example, emulating high school science club planning techniques. There are no sessions we could spot about Google’s competition. If one is big enough, there is no competition. One of Googzilla’s relatives made a mess of Tokyo real estate largely without lasting consequences.
Cynthia Murrell, April 30, 2024
NSO Pegasus: No Longer Flying Below the Radar
April 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “AP Exclusive: Polish Opposition Senator Hacked with Spyware.” I remain fearful of quoting the AP or Associated Press. I think it is a good business move to have an 89-year-old terrified of an American “institution,” don’t you? I think I am okay if I tell you the AP recycled a report from the University of Toronto’s Citizen Lab. Once again, the researchers have documented the use of what I call “intelware” by a nation state. The AP and other “real” news outfits prefer the term “spyware.” I think it has more sizzle, but I am going to put NSO Group’s mobile phone system and method in the category of intelware. The reason is that specialized software like Pegasus gathers information for a nation’s intelligence entities. Well, that’s the theory. The companies producing these platforms and tools want to answer such questions as “Who is going to undermine our interests?” or “What’s the next kinetic action directed at our facilities?” or “Who is involved in money laundering, human trafficking, or arms deals?”
Thanks, MSFT Copilot. Cutting down the cycles for free art, are you?
The problem is that specialized software is no longer secret. The Citizen Lab and the AP have been diligent in explaining how some of the tools work and what type of information can be gathered. My personal view is that information about these tools has been converted into college programming courses, open source software tools, and headline grabbing articles. I know from personal experience that most people do not have a clue how data from an iPhone can be exfiltrated, cross correlated, and used to track down those who would violate the laws of a nation state. But, as the saying goes, information wants to be free. Okay, it’s free. How about that?
The write up contains an interesting statement. I want to note that I am not plagiarizing, undermining advertising sales, or choking off subscriptions. I am offering the information as a peg on which to hang some observations. Here’s the quote:
“My heart sinks with each case we find,” Scott-Railton [a senior researcher at UT’s Citizen Lab] added. “This seems to be confirming our worst fear: Even when used in a democracy, this kind of spyware has an almost immutable abuse potential.”
Okay, we have malware, a command-and-control system, logs, and a variety of delivery mechanisms.
I am baffled because malware is used by both good and bad actors. Exactly what do the University of Toronto and the AP want to happen? The reality is that once secret information is leaked, it becomes the Teflon for rapidly diffusing applications. Does writing about what I view as an “old” story change what’s happening with potent systems and methods? Will government officials join in a kumbaya moment and force the systems and methods to fall into disuse? Endless recycling of an instrumental action by this country or that agency gets us where?
In my opinion, the sensationalizing of behavior does not correlate with responsible use of technologies. I think the Pegasus story is a search for headlines or recognition for saying, “Look what we found. Country X is a problem!” Spare me. Change must occur within institutions. Those engaged in the use of intelware and related technologies are aware of issues. These are, in my experience, not ignored. Improper behavior is rampant in today’s datasphere.
Standing on the sidelines and yelling at a player who let the team down does what exactly? Perhaps a more constructive approach can be identified and offered as a solution, instead of “Pegasus again?” on repeat like a broken record. I know you are “just doing your job.” Fine, but is there a new tune to play?
Stephen E Arnold, April 29, 2024
A Modern Spy Novel: A License to Snoop
April 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
“UK’s Investigatory Powers Bill to Become Law Despite Tech World Opposition” reports that the Investigatory Powers Amendment Bill, or IPB, is now law. In a nutshell, the law expands the scope of data collection by law enforcement and intelligence services. The Register, a UK online publication, asserts:
Before the latest amendments came into force, the IPA already allowed authorized parties to gather swathes of information on UK citizens and tap into telecoms activity – phone calls and SMS texts. The IPB’s amendments add to the Act’s existing powers and help authorities trawl through more data, which the government claims is a way to tackle “modern” threats to national security and the abuse of children.
Thanks, Copilot. A couple of omissions from my prompt, but your illustration is good enough.
One UK elected official said:
“Additional safeguards have been introduced – notably, in the most recent round of amendments, a ‘triple-lock’ authorization process for surveillance of parliamentarians – but ultimately, the key elements of the Bill are as they were in early versions – the final version of the Bill still extends the scope to collect and process bulk datasets that are publicly available, for example.”
Privacy advocates are concerned about expanding data collections’ scope. The Register points out that “big tech” feels as though it is being put on the hot seat. The article includes this statement:
Abigail Burke, platform power program manager at the Open Rights Group, previously told The Register, before the IPB was debated in parliament, that the proposals amounted to an “attack on technology.”
Several observations:
- The UK is a member in good standing of an intelligence sharing entity which includes Australia, Canada, New Zealand, and the US. These nation states watch one another’s activities and sometimes emulate certain policies and legal frameworks.
- The IPA may be one additional step on a path leading to a ban on end-to-end encrypted messaging. Such a ban, if passed, would prove disruptive to a number of business functions. Bad actors will ignore such a ban and continue their effort to stay ahead of law enforcement using homomorphic encryption (see the toy sketch after this list) and other sophisticated techniques to keep certain content private.
- Opportunistic messaging firms like Telegram may incorporate technologies which exploit modern virtual servers and other infrastructure to deploy networks which are hidden and less easily “seen” by existing monitoring technologies. Bad actors can implement new methods, forcing LE and intelligence professionals to operate in reaction mode. IPA is unlikely to change this cat-and-mouse game.
- Each day brings news of new security issues with widely used software and operating systems. Banning encryption may have some interesting downstream and unanticipated effects.
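For readers who have not met the term, homomorphic encryption lets a third party compute on data it cannot read. Below is a toy sketch of the Paillier scheme, which is additively homomorphic; the primes are absurdly small for readability, and nothing here describes what any actual messaging firm deploys:

```python
# Toy Paillier: multiplying two ciphertexts adds the hidden plaintexts.
import math
import random

def keygen():
    p, q = 10007, 10009  # real systems use primes of roughly 2048 bits
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # modular inverse; valid because g = n + 1
    return n, lam, mu

def encrypt(n, m):
    r = random.randrange(2, n)          # random blinding factor
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    g = n + 1
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(n, lam, mu, c):
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

n, lam, mu = keygen()
c1, c2 = encrypt(n, 12), encrypt(n, 30)
# A server can add the secret values without ever decrypting them:
print(decrypt(n, lam, mu, (c1 * c2) % (n * n)))  # prints 42
```

The point for the IPA debate: banning end-to-end encryption does not ban mathematics like this.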
Net net: I am not sure that modern threats will decrease under IPA. Even countries with the most sophisticated software, hardware, and humanware security systems can be blindsided. Gaffes in Israel have had devastating consequences that an IPA-type approach would not have remedied.
Stephen E Arnold, April 29, 2024
Right, Professor. No One Is Using AI
April 29, 2024
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Artificial intelligence and algorithms aren’t new buzzwords, but they are the favorite technology jargon being tossed around BI and IT water coolers. (Or would it be Zoom conferences these days?) AI has been a part of modern life for years, but AI engines are finally “smart enough” to do actual jobs—sort of. There are still big problems with AI, but one expert shares his take on why the technology isn’t being adopted more in the UiPath article: “3 Common Barriers to AI Adoption and How to Overcome Them.”
Whenever new technology hits the market, experts write lists about why more companies aren’t implementing it. The first “mistake” is a lack of knowledge about how to adopt AI: companies don’t know about all the work processes within their own operations. The way to overcome this issue is to take an inventory of those processes, which can be done via data mining. That’s not so simple if a company doesn’t have the software or know-how.
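The inventory step the article breezes past is essentially process mining. Here is a minimal sketch, assuming the simplest possible event log of (case id, activity) pairs already ordered by time; the invoice-handling activities are made up for illustration:

```python
# Discover a crude process map: how often does activity A immediately
# precede activity B within the same case?
from collections import Counter

def directly_follows(event_log):
    pairs = Counter()
    last_activity = {}                   # case id -> most recent activity
    for case_id, activity in event_log:  # events assumed time-ordered
        if case_id in last_activity:
            pairs[(last_activity[case_id], activity)] += 1
        last_activity[case_id] = activity
    return pairs

log = [  # hypothetical invoice-handling events
    (1, "receive"), (1, "approve"), (1, "pay"),
    (2, "receive"), (2, "reject"),
    (3, "receive"), (3, "approve"), (3, "pay"),
]
for (a, b), n in directly_follows(log).most_common():
    print(f"{a} -> {b}: {n}")
```

Real tools do far more (timestamps, variants, bottlenecks), but even this toy version shows why the software and know-how matter.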
The second “mistake” is lack of expertise about the subject. The cure for this is classes and “active learning.” Isn’t that another term for continuing education? The third “mistake” is lack of trust in, and the risks surrounding, AI. Those exist because the technology is new and needs to be tested more before it’s deployed on a mass scale. Smaller companies don’t want to be guinea pigs, so they wait until the technology becomes SOP.
AI is another tool that will become as ubiquitous as mobile phones, but the expert is correct about this:
“These barriers are significant, but they pale in comparison to the risk of delaying AI adoption. Early adopters are finding new AI use cases and expanding their lead on the competition every day.
There’s lots to do to prepare your organization for this new era, but there’s also plenty of value and advantages waiting for you along your AI adoption journey. Automation can do a lot to help you move forward quickly to capture AI’s value across your organization.”
If your company finds an AI solution that works, then that’s wonderful. Automation is part of advancing technology, but AI isn’t ready to be deployed by all companies. If something works for a business and it’s not too archaic, then don’t fix what ain’t broke.
But students have figured out how to use AI to deal with certain professors. No, I am not mentioning any names.
Whitney Grace, April 29, 2024
AI Does Prediction about Humans: What Could Go Wrong
April 26, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The academic institution which took money from everyone’s favorite expert on exploitation has revealed an interesting chunk of research. Sadly, it is about a broader concept of exploitation than the kind practiced on those laboring in his mansions. “MIT Study Reveals an AI Model That Can Predict Future Actions of Human.” The title seems a bit incomplete, but no doubt Mr. Epstein would embrace the technology. Imagine. Feed in data about those he employed and match the outputs to the interests of his clients and friends.
The write up says:
A new study from researchers at MIT and the University of Washington reveals an AI model that can accurately predict a person or a machine’s future actions. The AI is known as the latent inference budget model (L-IBM). The study authors claim that L-IBM is better than other previously proposed frameworks capable of modeling human decision-making. It works by examining past behavior, actions, and limitations linked to the thinking process of an agent (which could be either a human or another AI). The data or result obtained after the assessment is called the inference budget.
Very academic sounding. I expected no less from MIT and its companion institution.
To model the decision-making process of an agent, L-IBM first analyzes an individual’s behavior and the different variables that affect it. “In other words, we seek to model both what agents wish to do and what agents will actually do in any given state,” the researchers said. This step involved observing agents placed in a maze at random positions. The L-IBM model was then employed to understand their thinking/computational limitations and predict their behavior.
A predictive system allows for more efficient use of available resources. Smart software does not protest, require benefits, or take vacations. Thanks, MSFT Copilot. Good enough. Just four tries today.
The method seems less labor intensive than the one the old cancer wizard IBM Watson relied upon. This model processes behavior data, not selected information; for example, cancer treatments. Then, the new system will observe actions and learn what those humans will do next.
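To make the approach concrete, here is a toy sketch of the inference-budget idea. It is far simpler than the actual L-IBM work: a one-dimensional corridor stands in for the maze, lookahead depth stands in for the agent’s computational limit, and every number and name is invented for illustration:

```python
# Estimate how far ahead an agent "thinks" from its observed moves,
# then use that estimate to predict its next action.
import random

GOAL = 9  # rightmost cell of a ten-cell corridor

def policy(pos, depth):
    # An agent that can only look `depth` cells ahead moves right
    # confidently when the goal is in view; otherwise it wanders.
    return 0.95 if GOAL - pos <= depth else 0.5

def likelihood(moves, depth):
    # How well a candidate budget (lookahead depth) explains the data.
    p = 1.0
    for pos, went_right in moves:
        pr = policy(pos, depth)
        p *= pr if went_right else 1.0 - pr
    return p

def infer_budget(moves, candidates=range(1, 10)):
    # Pick the depth that best explains the observed trajectory.
    return max(candidates, key=lambda d: likelihood(moves, d))

random.seed(0)
true_depth, pos, observed = 3, 0, []
while pos < GOAL:  # simulate an agent whose hidden budget is 3
    right = random.random() < policy(pos, true_depth)
    observed.append((pos, right))
    pos += 1 if right else (-1 if pos > 0 else 0)

budget = infer_budget(observed)
print(f"estimated budget: {budget} (true: {true_depth})")
print(f"predicted P(move right) at cell 7: {policy(7, budget)}")
```

The estimate is noisy with a single short trajectory, which hints at the data appetite, and the cost question, raised below.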
Then the clever researchers arranged a game:
The researchers made the subjects play a reference game. The game involves a speaker and a listener. The former receives a set of different colors, picks one, but can’t say the name of the chosen color directly to the listener. Instead, the speaker describes the color through natural language utterances (basically, the speaker gives out different words as hints). If the listener selects the same color the speaker picked from the set, they both win.
At this point in the write up, I was wondering how long the process requires and what the fully loaded costs would be to get one useful human prediction. The write up makes clear that more work was required. Now the model played chess with humans. (I thought the Google cracked this problem with DeepMind methods after IBM’s chess playing system beat up a world champion human.)
One of the wizards is quoted in the write up as stating:
“For me, the most striking thing was the fact that this inference budget is very interpretable. It is saying tougher problems require more planning or being a strong player means planning for longer. When we first set out to do this, we didn’t think that our algorithm would be able to pick up on those behaviors naturally.”
Yes, there are three steps. But the expert notes:
“We demonstrated that it can outperform classical models of bounded rationality while imputing meaningful measures of human skill and task difficulty,” the researchers note. If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have. Being able to model human behavior is an important step toward building an AI agent that can actually help that human…
If Mr. Epstein had access to a model with this capability, he might still be with us. Other applications of the technology may lead to control of malleable humans.
Net net: MIT is a source of interesting investigations like the one conducted after the Epstein antics became more widely known. Light the light of learning.
Stephen E Arnold, April 26, 2024
Not Only Those Chasing Tenure Hallucinate, But Some Citations Are Wonky Too
April 26, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “ChatGPT Hallucinates Fake But Plausible Scientific Citations at a Staggering Rate, Study Finds.” Wow. “Staggering.” The write up asserts:
A recent study has found that scientific citations generated by ChatGPT often do not correspond to real academic work
In addition to creating non-reproducible research projects, now those “inventing the future” and “training tomorrow’s research leaders” appear to find smart software helpful in cooking up “proof” and “evidence” to help substantiate “original” research. Note: The quotes are for emphasis and added by the Beyond Search editor.
Good enough, ChatGPT. Is the researcher from Harvard health?
Research conducted by a Canadian outfit sparked this statement in the article:
…these fabricated citations feature elements such as legitimate researchers’ names and properly formatted digital object identifiers (DOIs), which could easily mislead both students and researchers.
The student who did the research told PsyPost:
“Hallucinated citations are easy to spot because they often contain real authors, journals, proper issue/volume numbers that match up with the date of publication, and DOIs that appear legitimate. However, when you examine hallucinated citations more closely, you will find that they are referring to work that does not exist.”
The researcher added:
“The degree of hallucination surprised me,” MacDonald told PsyPost. “Almost every single citation had hallucinated elements or were just entirely fake, but ChatGPT would offer summaries of this fake research that was convincing and well worded.”
My thought is that more work is needed to determine the frequency with which AI-fabricated citations appear in papers destined for peer review or personal aggrandizement on services like ArXiv.
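Automating such a frequency check is straightforward. Here is a minimal sketch that asks the public Crossref REST API whether a DOI from a generated citation actually resolves; the DOI and title below are made-up placeholders, and a real study would also compare authors and venues, since hallucinated citations sometimes borrow genuine DOIs:

```python
# A DOI Crossref has never heard of is a strong hint the reference
# was hallucinated; an existing DOI with the wrong title is another.
import requests  # third-party package: pip install requests

CROSSREF = "https://api.crossref.org/works/"

def lookup(doi, timeout=10.0):
    # Return Crossref's metadata for a DOI, or None if unregistered.
    resp = requests.get(CROSSREF + doi, timeout=timeout)
    return resp.json()["message"] if resp.status_code == 200 else None

def check(doi, claimed_title):
    record = lookup(doi)
    if record is None:
        return "unregistered DOI -- likely hallucinated"
    registered = (record.get("title") or ["<no title on file>"])[0]
    if claimed_title.lower() not in registered.lower():
        return f"DOI exists but belongs to: {registered!r}"
    return "citation checks out"

# Placeholder values standing in for a ChatGPT-generated reference:
print(check("10.1234/made.up.2023.001", "A Study That Does Not Exist"))
```

Run at scale over a corpus of generated bibliographies, a script like this would put a number on “staggering.”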
Couple this with the excitement of a president departing Stanford University and the hoo hah at Harvard related to “ethics,” and questions arise about the moral compass used by universities to guide their educational battleships. Now we learn that the professors are using AI and including made up or fake data in their work?
What’s the conclusion?
[a] On the beam and making ethical behavior part of the woodwork
[b] Supporting and rewarding crappy work
[c] Ignoring the reality that the institutions have degraded over time
[d] Scrolling TikTok looking for grant tips.
If you don’t know, ask You.com or a similar free smart service.
Stephen E Arnold, April 26, 2024
AI Girlfriends: The Opportunity of a Lifetime
April 26, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Relationships are hard. Navigating each other’s unique desires, personalities, and expectations can be a serious challenge. That is, if one is dating a real person. Why bother when, for just several thousand dollars a month, you can date a tailor-made AI instead? We learn from The Byte, “Tech Exec Predicts Billion-Dollar AI Girlfriend Industry.” Writer Noor Al-Sibai tells us:
“When witnessing the sorry state of men addicted to AI girlfriends, one Miami tech exec saw dollar signs instead of red flags. In a blog-length post on X-formerly-Twitter, former WeWork exec Greg Isenberg said that after meeting a young guy who claims to spend $10,000 a month on so-called ‘AI girlfriends,’ or relationship-simulating chatbots, he realized that eventually, someone is going to capitalize upon that market the way Match Group has with dating apps. ‘I thought he was kidding,’ Isenberg wrote. ‘But, he’s a 24-year-old single guy who loves it.’ To date, Match Group — which owns Tinder, Hinge, Match.com, OKCupid, Plenty of Fish, and several others — has a market cap of more than $9 billion. As the now-CEO of the Late Checkout holding company startup noted, someone is going to build the AI version and make a billion or more.”
Obviously. They are probably collaborating with the makers of sex robots already. Though many strongly object, it seems only a matter of time before fake women replace real ones for a significant number of men. Will this help assuage the loneliness epidemic, or only make it worse? There is also the digital privacy angle to consider. On the other hand, perhaps this is for the best in the long run. The Earth is overpopulated, anyway.
Cynthia Murrell, April 26, 2024