Good Fences, Right, YouTube? And Good Fences in Winter Even Better

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Remember that line from the grumpy American poet Bobby Frost? (I have it on good authority that Bobby was not a charmer. And who, pray tell, was my source? A friend of the poet’s who worked with him in South Shaftsbury.)

As those in the Nor’East say, “Good fences make good neighbors.”

The line is not original. Bobby’s pal told me that the saying was a “pretty common one” among the Shaftsburians. Bobby appropriated the line for his poem “Mending Wall” (loved by millions of high school students). The main point of the poem is that “Something there is that doesn’t love a wall.” The key word is “something.”

The fine and judicious, customer-centric, and well-managed outfit Google is now in the process of understanding the “something that doesn’t love a wall,” digital or stone.

“Inside the Arms Race between YouTube and Ad Blockers” updates the effort of the estimable advertising outfit and — well — almost everyone. The article explains:

YouTube recently took dramatic action against anyone visiting its site with an ad blocker running — after a few pieces of content, it’ll simply stop serving you videos. If you want to get past the wall, that ad blocker will (probably) need to be turned off; and if you want an ad-free experience, better cough up a couple bucks for a Premium subscription.

The write up carefully explains that one must pay a “starting” monthly fee of $13.99 to avoid the highly relevant advertisements for metal men’s wallets, the total home gym which seems entirely inappropriate for a 79-year-old dinobaby like me, and some type of women’s undergarment. Yeah, that ad matching to a known user is doing a bang-up job in my opinion. I bet the Skims marketing manager is thrilled I am getting the message. How many packs of Skims do I buy in a lifetime? Zero. Yep, zero.
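
For what it is worth, the “wall” the quoted article describes can be thought of as a simple counter. Here is a toy Python sketch of that policy; the class name, the three-video grace allowance, and the decision logic are my own guesses for illustration, not YouTube’s actual implementation:

```python
# Hypothetical sketch of the ad-block "wall": a viewer running an ad blocker
# gets a small grace allowance of videos, then is cut off unless the blocker
# is disabled or a Premium subscription is active. Names and numbers here are
# illustrative assumptions, not the platform's real logic.

class AdBlockWall:
    GRACE_VIDEOS = 3  # "a few pieces of content"

    def __init__(self) -> None:
        self.videos_served_with_blocker = 0

    def may_serve(self, ad_blocker_detected: bool, has_premium: bool) -> bool:
        if has_premium or not ad_blocker_detected:
            return True  # paying or ad-viewing users are never walled off
        if self.videos_served_with_blocker < self.GRACE_VIDEOS:
            self.videos_served_with_blocker += 1
            return True
        return False  # the wall: no more videos until the blocker goes away

wall = AdBlockWall()
results = [wall.may_serve(ad_blocker_detected=True, has_premium=False) for _ in range(5)]
```

Run against five requests from a blocker-equipped viewer, the first few succeed and the rest hit the wall, which is roughly the behavior the article describes.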

Yes, sir. Good fences make good neighbors. Good enough, MSFT Copilot. Good enough.

Okay, that’s the ad blocker thing, which I have identified as Google’s digital Battle of Waterloo in honor of a movie about everyone’s favorite French emperor, Nappy B.

But what the cited write up and most of the coverage are not focusing on is the question, “Why the user hostile move?” I want to share some of my team’s ideas about the motive force behind this disliked and quite annoying move by that company everyone loves (including the Skims marketing manager?).

First, the emergence of ChatGPT-type services is having a growing impact on Google’s online advertising business. One can grind through Google’s financials and not find any specific item that says, “The Duke of Wellington and a crazy old Prussian are gearing up for a fight.” So I will share some information we have rounded up by talking to people and looking through the data gathered about Googzilla. Specifically, users want information packaged to answer, or to “appear” to answer, their question. Some want lists; some want summaries; and some just want to skip the ritual of entering the query, clicking through mostly irrelevant results, scanning for something that is sort of close to an answer, and using that information to buy a ticket or get a Taylor Swift poster, whatever. That means that the broad trend in the usage of Google search is a bit like the town of Grindavik, Iceland. “Something” is going on, and it is unlikely to bode well for the future of that charming town in Iceland. That’s the “something” that is hostile to walls. Some forces are tough to resist, even by Googzilla and friends.

Second, despite the robust usage of YouTube, it costs more money to operate that service than it does to display cached ads and previously spidered information from Google-compliant Web sites. Thus, as pressure on traditional search goes up from the ChatGPT-type services, the darker the clouds on the search business horizon look. The big storm is not pelting the Googleplex yet, but it does look ominous perched on the horizon and moving slowly. Don’t get our point wrong: Running a Google-scale search business is expensive, but it has been engineered and tuned to deliver a tsunami of cash. The YouTube thing just costs more and is going to have a tough time replacing lost old-fashioned search revenue. What’s a pressured Googzilla going to do? One answer is, “Charge users.” Then raise prices. Gee, that’s the much-loved cable model, isn’t it? And the pressure point is motivating some users who are developers to find ways to cut holes in the YouTube fence. The fix? Make the fence bigger and more durable? Isn’t that a Rand arms race scenario? What’s an option? Where’s a J. Robert Oppenheimer-type when one needs him?

The third problem is that advertisers want their messages displayed in an inoffensive context. Also, advertisers — because the economy for some outfits sucks — now are starting to demand proof that their ads are being displayed in front of buyers known to have an interest in their product. Yep, I am talking about the Skims marketing officer as well as any intermediary hosing money into Google advertising. I would not want to be the one trying to convince those writing checks to the Google of the following: “Absolutely. Your ad dollars are building your brand. You are getting leads. You are able to reach buyers no other outfit can deliver.” Want proof? Just look at this dinobaby. I am not buying health food, hidden carry holsters, or those really cute flesh-colored women’s undergarments. The question is, “Are the ads just being dumped, or are they actually targeted to someone who is interested in a product category?” Good question, right?

Net net: The YouTube ad blocking is shaping up to be a Google moment. Google has sparked an adversarial escalation in the world of YouTube ad blockers. What are Google’s options now that Googzilla is backed into a corner? Maybe Bobby Frost has a poem about it: “Some say the world will end in fire, / Some say in ice.” How does Googzilla fare in the ice?

Stephen E Arnold, December 4, 2023

Microsoft, the Techno-Lord: Avoid My Galloping Steed, Please

November 27, 2023

This essay is the work of a dumb dinobaby. No smart software required.

The Merriam-Webster.com online site defines “responsibility” this way:

re·spon·si·bil·i·ty

1 : the quality or state of being responsible: such as
    a : moral, legal, or mental accountability
    b : RELIABILITY, TRUSTWORTHINESS
2 : something for which one is responsible

The online sector has a clever spin on responsibility; that is, in my opinion, the companies have none. Google wants people who use its online tools and post content created with those tools to make sure that what the Google system outputs does not violate any applicable rules, regulations, or laws.

In a traditional fox hunt, the hunters had the “right” to pursue the animal. If a farmer’s daughter were in the way, it was the farmer’s responsibility to keep the silly girl out of the horse’s path. That will teach them to respect their betters, I assume. Thanks, MSFT Copilot. I know you would not put me in legal jeopardy, would you? Now what are the laws pertaining to copyright for a cartoon in Armenia? Darn, I have to know that, don’t I?

Such a crafty way of defining itself as the mere creator of software machines has inspired Microsoft to follow a similar path. The idea is that anyone using Microsoft products, solutions, and services is “responsible” for complying with applicable rules, regulations, and laws.

Tidy. Logical. Complete. Just like a nifty algebra identity.

“Microsoft Wants YOU to Be Sued for Copyright Infringement, Washes Its Hands of AI Copyright Misuse and Says Users Should Be Liable for Copyright Infringement” explains:

Microsoft believes they have no liability if an AI, like Copilot, is used to infringe on copyrighted material.

The write up includes this passage:

So this all comes down to, according to Microsoft, that it is providing a tool, and it is up to users to use that tool within the law. Microsoft says that it is taking steps to prevent the infringement of copyright by Copilot and its other AI products, however, Microsoft doesn’t believe it should be held legally responsible for the actions of end users.

The write up (with no Jimmy Kimmel spin) includes this statement, allegedly from someone at Microsoft:

Microsoft is willing to work with artists, authors, and other content creators to understand concerns and explore possible solutions. We have adopted and will continue to adopt various tools, policies, and filters designed to mitigate the risk of infringing outputs, often in direct response to the feedback of creators. This impact may be independent of whether copyrighted works were used to train a model, or the outputs are similar to existing works. We are also open to exploring ways to support the creative community to ensure that the arts remain vibrant in the future.

From my drafty office in rural Kentucky, I find the refusal to accept responsibility for its business actions, its products, its policies to push tools and services on users, and the outputs of its cloudy system quite clever. Exactly how will a user of products pushed at users, like Edge and its smart features, prevent acquiring from a smart Microsoft system something that violates an applicable rule, regulation, or law?

But legal and business cleverness is the norm for the techno-feudalists. Let the serfs deal with the body of the child killed when the barons chase a fox through a small leasehold. I can hear the brave royals saying, “It’s your fault. Your daughter was in the way. No, I don’t care that she was using the free Microsoft training materials to learn how to use our smart software.”

Yep, responsible. The death of the hypothetical child frees up another space in the training course.

Stephen E Arnold, November 27, 2023

A Former Yahooligan and Xoogler Offers Management Advice: Believe It or Not!

November 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read a remarkable interview / essay / news story called “Former Yahoo CEO Marissa Mayer Delivers Sharp-Elbowed Rebuke of OpenAI’s Broken Board.” Marissa Mayer was a Googler. She then became the Top Dog at Yahoo. Highlights of her tenure at Yahoo, according to Inc.com, included:

  • Fostering a “superstar status” for herself
  • Pointing a finger in a chastising way at remote workers
  • Trying to obfuscate Yahooligan layoffs
  • Making slow job cuts
  • Lack of strategic focus (maybe Tumblr, Yahoo’s mobile strategy, the search service, perhaps?)
  • Tactical missteps in diversifying Yahoo’s business (the Google disease in my opinion)
  • Setting timetables and then ignoring, missing, or changing them
  • Weird PR messages
  • Using fear (and maybe uncertainty and doubt) as management methods.

The senior executives of a high technology company listen to a self-anointed management guru. One of the bosses allegedly said, “I thought Bain and McKinsey peddled a truckload of baloney. We have the entire factory in front of us.” Thanks, MSFT Copilot. Is Sam the AI-Man on duty?

So what does this exemplary manager have to say? Let’s go to the original story:

“OpenAI investors (like @Microsoft) need to step up and demand that the governance weaknesses at @OpenAI be fixed,” Mayer wrote Sunday on X, formerly known as Twitter.

Was Microsoft asleep at the switch or simply operating within a Cloud of Unknowing? Fast-talking Satya Nadella was busy trying to make me think he was operating in a normal manner. Had he known something was afoot, is he equipped to deal with burning effigies as a business practice?

Ms. Mayer pointed out:

“The fact that Ilya now regrets just shows how broken and under advised they are/were,” Mayer wrote on social media. “They call them board deliberations because you are supposed to be deliberate.”

Brilliant! Was that deliberative process used to justify the purchase of Tumblr?

The Business Insider write up revealed an interesting nugget:

The Information reported that the former Yahoo CEO’s name had been tossed around by “people close to OpenAI” as a potential addition to the board…

Okay, a Xoogler and a Yahooligan in one package.

Stephen E Arnold, November 22, 2023

Poli Sci and AI: Smart Software Boosts Bad Actors (No Kidding?)

November 22, 2023

This essay is the work of a dumb humanoid. No smart software required.

Smart software (AI, machine learning, et al) has sparked awareness in some political scientists. Until I read “Can Chatbots Help You Build a Bioweapon?” — I thought political scientists were still pondering Frederick William, Elector of Brandenburg’s social policies or Cambodian law in the 11th century. I was incorrect. Modern poli sci influenced wonks are starting to wrestle with the immense potential of smart software for bad actors. I think this dispersal of the cloud of unknowing I perceived among similar academic groups when I entered a third-rate university in 1962 is a step forward. Ah, progress!

“Did you hear that the Senate Committee used my testimony about artificial intelligence in their draft regulations for chatbot rules and regulations?” says the recently admitted elected official. The inmates at the prison facility laugh at the incongruity of the situation. Thanks, Microsoft Bing, you do understand the ways of white collar influence peddling, don’t you?

The write up points out:

As policymakers consider the United States’ broader biosecurity and biotechnology goals, it will be important to understand that scientific knowledge is already readily accessible with or without a chatbot.

The statement is indeed accurate. Outside the esteemed halls of foreign policy power, STM (scientific, technical, and medical) information is abundant. Some of the data are online and reasonably easy to find with such advanced tools as Yandex.com (a Russian-centric Web search system) or the more useful Chemical Abstracts data.

The write up’s revelations continue:

Consider the fact that high school biology students, congressional staffers, and middle-school summer campers already have hands-on experience genetically engineering bacteria. A budding scientist can use the internet to find all-encompassing resources.

Yes, more intellectual sunlight in the poli sci journal of record!

Let me offer one more example of ground breaking insight:

In other words, a chatbot that lowers the information barrier should be seen as more like helping a user step over a curb than helping one scale an otherwise unsurmountable wall. Even so, it’s reasonable to worry that this extra help might make the difference for some malicious actors. What’s more, the simple perception that a chatbot can act as a biological assistant may be enough to attract and engage new actors, regardless of how widespread the information was to begin with.

Is there a step government deciders should take? Of course. It is the step that US high technology companies have been begging bureaucrats to take. Government should spell out rules for a morphing, little understood, and essentially uncontrollable suite of systems and methods.

There is nothing like regulating the present and future. Poli sci professionals believe it is possible to repaint the weird red tail on the Boeing T-7A aircraft while the jet is flying around. Trivial?

Here’s the recommendation which I found interesting:

Overemphasizing information security at the expense of innovation and economic advancement could have the unforeseen harmful side effect of derailing those efforts and their widespread benefits. Future biosecurity policy should balance the need for broad dissemination of science with guardrails against misuse, recognizing that people can gain scientific knowledge from high school classes and YouTube—not just from ChatGPT.

My take on this modest proposal is:

  1. Guard rails allow companies to pursue legal remedies as those companies do exactly what they want and when they want. Isn’t that why the Google “public” trial underway is essentially “secret”?
  2. Bad actors love open source tools. Unencumbered by bureaucracies, these folks can move quickly. In effect, the mice are equipped with jet packs.
  3. Job matching services allow a bad actor in Greece or Hong Kong to identify and hire contract workers who may have highly specialized AI skills obtained doing their day jobs. The idea is that for a bargain price expertise is available to help smart software produce some AI infused surprises.
  4. Recycling the party line of a handful of high profile AI companies is what makes policy.

With poli sci professionals becoming aware of smart software, a better world will result. Why fret about livestock ownership in the glory days of what is now Cambodia? The AI stuff is here and now, waiting for the policy guidance which is sure to come even though the draft guidelines have been crafted by US AI companies.

Stephen E Arnold, November 22, 2023

OpenAI: What about Uncertainty and Google DeepMind?

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

A large number of write ups about Microsoft and its response to the OpenAI management move populate my inbox this morning (Monday, November 20, 2023).

To give you a sense of the number of poohbahs, mavens, and “real” journalists covering Microsoft’s hiring of Sam (AI-Man) Altman, I offer this screen shot of Techmeme.com taken at 11:00 am US Eastern time:

A single screenshot cannot do justice to the digital bloviating on this subject as well as related matters.

I did a quick scan because I simply don’t have the time at age 79 to read every item in this single headline service. Therefore, I admit that others may have thought about the impact of the Steve Jobs-like termination, the revolt of some AI wizards, and Microsoft’s creating a new “company” and hiring Sam AI-Man and a pride of his cohorts in the span of 72 hours (give or take time for biobreaks).

In this short essay, I want to hypothesize about how the news has been received by that merry band of online advertising professionals.

To begin, I want to suggest that the turmoil about who is on first at OpenAI sent a low voltage signal through the collective body of the Google. Frisson resulted. Uncertainty and opportunity appeared together like the beloved Scylla and Charybdis, the old pals of Ulysses. The Google found its right and left Brainiac hemispheres considering that OpenAI would experience a grave setback, thus clearing a path for Googzilla alone. Then one of the Brainiac hemispheres reconsidered and perceived a grave threat from the split. In short, the Google tipped into its zone of uncertainty.

A group of online advertising experts meet to consider the news that Microsoft has hired Sam Altman. The group looks unhappy. Uncertainty is an unpleasant factor in some business decisions. Thanks Microsoft Copilot, you captured the spirit of how some Silicon Valley wizards are reacting to the OpenAI turmoil because Microsoft used the OpenAI termination of Sam Altman as a way to gain the upper hand in the cloud and enterprise app AI sector.

Then the matter appeared to shift back to the pre-termination announcement. The co-founder of OpenAI gained more information about the number of OpenAI employees who were planning to quit or, even worse, start posting on Instagram, WhatsApp, and TikTok. (X.com is no longer considered the go-to place by the in crowd.)

The most interesting development was not that Sam AI-Man would return to the welcoming arms of OpenAI. No, Sam AI-Man and another senior executive were going to hook up with the geniuses of Redmond. A new company would be formed with Sam AI-Man in charge.

As these actions unfolded, the Googlers sank under a heavy cloud of uncertainty. What if the Softies could use Google’s own open source methods, integrate rumored Microsoft-developed AI capabilities, and make good on Sam AI-Man’s vision of an AI application store?

The Googlers found themselves reading every “real news” item about the trajectory of Sam AI-Man and Microsoft’s new AI unit. The uncertainty has morphed into another January 2023 Davos moment. Here’s my take as of 2:30 pm US Eastern, November 20, 2023:

  1. The Google faces a significant threat when it comes to enterprise AI apps. Microsoft has a lock on law firms, the government, and a number of industry sectors. Google has a presence, but when it comes to go-to apps, Microsoft is the Big Dog. More and better AI raises the specter of Microsoft putting an effective laser defense behind its existing enterprise moat.
  2. Microsoft can push its AI functionality as the Azure difference. Furthermore, whether Google or Amazon for that matter assert their cloud AI is better, Microsoft can argue, “We’re better because we have Sam AI-Man.” That is a compelling argument for government and enterprise customers who cannot imagine work without Excel and PowerPoint. Put more AI in those apps, and existing customers will resist blandishments from other cloud providers.
  3. Google now faces an interesting problem: Its own open source code could be converted into a death ray, enhanced by Sam AI-Man, and directed at the Google. The irony of Googzilla having its left claw vaporized by its own technology is going to be more painful than Satya Nadella rolling out another Davos “we’re doing AI” announcement.

Net net: The OpenAI machinations are interesting to many companies. To the Google, the OpenAI event and the Microsoft response is like an unsuspecting person getting zapped by Nikola Tesla’s coil. Google’s mastery of high school science club management techniques will now dig into the heart of its DeepMind.

Stephen E Arnold, November 20, 2023

OpenAI: Permanent CEO Needed

November 17, 2023

This essay is the work of a dumb dinobaby. No smart software required.

My rather lame newsreader spit out an “urgent alert” for me. Like the old teletype terminal: Ding, ding, ding, and a bunch of asterisks.

Surprise. Sam AI-Man allegedly has been given the opportunity to find his future elsewhere. Let me translate blue chip consultant speak for you. The “find your future elsewhere” phrase means you have been fired, RIFed, terminated with extreme prejudice, or “there’s the door; use it now.” The particular connotative spin depends on the person issuing the formal statement.

“Keep in mind that we will call you,” says the senior member of the Board of Directors. The head of the human resources committee says, “Remember. We don’t provide a reference. Why not try the Google AI system?” Thank you, MSFT Copilot. You must have been trained on content about Mr. Ballmer’s departure.

“OpenAI Fires Co-Founder and CEO Sam Altman for Lying to Company Board” states as rock solid basaltic truth:

OpenAI CEO and co-founder Sam Altman was fired for lying to the board of his company.

The good news is that a succession option, of sorts, is in place. Accordingly, OpenAI’s chief technical officer has become the “interim CEO.” I like the “interim.” That’s solid.

For the moment, let’s assume the RIF statement is true. Furthermore, on this rainy Saturday in rural Kentucky, I shall speculate about the reasons for this announcement. Here we go:

  1. The problem is money, the lack thereof, or the impossibility of controlling the costs of the OpenAI system. Perhaps Sam AI-Man said, “Money is no problem.” The Board did not agree. Money is the problem.
  2. The lovey-dovey relationship with the Microsofties has hit a rough patch. MSFT’s noises have been faint and now may become louder about AI chips, options, and innovations. Will these Microsoft bleats become more shrill as the ageing giant feels pain while it tries to make marketing hyperbole a reality? Let’s ask the Copilot, shall we?
  3. The Board has realized that the hyperbole has exceeded OpenAI’s technical ability to solve such problems as made-up data (hallucinations), the resources to cope with the looming legal storm clouds related to unlicensed use of some content (the Copyright Shield “promise”), fixing up the baked-in bias of the system, and / or OpenAI ChatGPT’s vulnerability to nifty prompt engineering that overrides alleged “guardrails.”

What’s next?

My answer is, “Uncertainty.” Cue the Ray Charles hit with the lyric “Hit the road, Jack. Don’t you come back no more, no more, no more, no more.” (I did not steal this song; I found it via Google on the Google YouTube. Honest.) I admit I did hear the tune playing in my head when I read the Guardian story.

Stephen E Arnold, November 17, 2023

An Odd Couple Sharing a Soda at a Holiday Data Lake

November 16, 2023

What happens when love strikes the senior managers of the technology feudal lords? I will tell you what happens — Love happens. The proof appears in “Microsoft and Google Join Forces on OneTable, an Open-Source Solution for Data Lake Challenges.” Yes, the lakes around Redmond can be a challenge. For those living near Googzilla’s stomping grounds, the risk is that a rising sea level will nuke the outdoor recreation areas and flood the parking lots.

But any speed dating between two techno-feudalists is news. The “real news” outfit VentureBeat reports:

In a new open-source partnership development effort announced today, Microsoft is joining with Google and Onehouse in supporting the OneTable project, which could reshape the cloud data lake landscape for years to come.

And what does “reshape” mean to these outfits? Probably nothing more than making sure that Googzilla and Mothra become the suppliers to those who want to vacation at the data lake. Come to think of it, the concessions might be attractive as well.

Googzilla says to Mothra-Soft, a beast living in Mercer Island, “I know you live on the lake. It’s a swell nesting place. I think we should hook up and cooperate. We can share the money from merged data transfers the way you and I — you good-looking Lepidoptera — are sharing this malted milk. Let’s do more together if you know what I mean.” The delightful Mothra-Soft croons, “I thought you would wait until our high school reunion to ask, big boy. Let’s find a nice, moist, uncrowded place to consummate our open source deal, handsome.” Thanks, Microsoft Bing. You did a great job of depicting a senior manager from the company that developed Bob, the revolutionary interface.

The article continues:

The ability to enable interoperability across formats is critical for Google as it expands the availability of its BigQuery Omni data analytics technology. Kazmaier said that Omni basically extends BigQuery to AWS and Microsoft Azure and it’s a service that has been growing rapidly. As organizations look to do data processing and analytics across clouds there can be different formats and a frequent question that is asked is how can the data landscape be interconnected and how can potential fragmentation be stopped.

Is this alleged linkage important? Yeah, it is. Data lakes are great places to park AI training data. Imagine the intelligence one can glean monitoring inflows and outflows of bits. To make the idea more interesting, think in terms of the metadata. Exciting, because open source software is really for the little guys too.
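
Since the metadata is the interesting part, here is a toy Python sketch of the idea behind OneTable: the Parquet files in the lake stay put, and only the table metadata describing them is translated between formats. The dictionary layouts below are invented for illustration; they are not the real Iceberg, Delta, or Hudi metadata schemas, and this is not the OneTable API:

```python
# Toy illustration of cross-format data lake interoperability: the data files
# are untouched, and a translator rewrites only the metadata wrapper so a
# different query engine can read the same table. The metadata layouts are
# simplified inventions for this sketch.

def delta_to_iceberg(delta_meta: dict) -> dict:
    """Translate a minimal Delta-style log entry into an Iceberg-style snapshot."""
    return {
        "snapshot-id": delta_meta["version"],
        "manifests": [f["path"] for f in delta_meta["add"]],  # same files, new wrapper
        "schema": delta_meta["schema"],
    }

delta_meta = {
    "version": 7,
    "schema": {"id": "long", "name": "string"},
    "add": [{"path": "s3://lake/part-0.parquet"}, {"path": "s3://lake/part-1.parquet"}],
}
iceberg_meta = delta_to_iceberg(delta_meta)
```

The design point is that no bytes of table data move; the translation touches only the bookkeeping layer, which is why this kind of interoperability is cheap enough to be attractive to both feudal lords.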

Stephen E Arnold, November 16, 2023

Google Apple: These Folks Like Geniuses and Numbers in the 30s

November 13, 2023

This essay is the work of a dumb humanoid. No smart software required.

The New York Post published a story which may or may not be on the money. I would suggest that the odds of it being accurate are in the 30 percent range. In fact, 30 percent is emerging as a favorite number. Apple, for instance, imposes what some have called a 30 percent “Apple tax.” Don’t get me wrong. Apple is just trying to squeak by in a tough economy. I love the connector on the MacBook Air, which is unlike any Apple connector in my collection. And the $130 USB cable? Brilliant.

The poor Widow Apple is pleading with the Bank of Googzilla for a more favorable commission. The friendly bean counter is not willing to pay more than one third of the cash take. “I want to pay you more, but hard times are upon us, Widow Apple. Might we agree on a slightly higher number?” The poor Widow Apple sniffs and nods her head in agreement as the frail child Mac Air the Third whimpers.

The write up which has me tangled in 30s is “Google Witness Accidentally Reveals Company Pays Apple 36% of Search Ad Revenue.” I was enthralled with the idea that a Google witness could do something by accident. I assumed Google witnesses were in sync with the giant, user-centric online advertising outfit.

The write up states:

Google pays Apple a 36% share of search advertising revenue generated through its Safari browser, one of the tech giant’s witnesses accidentally revealed in a bombshell moment during the Justice Department’s landmark antitrust trial on Monday. The flub was made by Ken Murphy, a University of Chicago economist and the final witness expected to be called by Google’s defense team.

Okay, a 36 percent share: sounds fair. True, it is a six-percentage-point premium on the so-called “Apple tax.” But Google has the incentive to pay more for traffic. That “pay to play” business model is indeed popular, it seems.

The write up “Usury in Historical Perspective” includes an interesting passage; to wit:

Mews and Abraham write that 5,000 years ago Sumer (the earliest known human civilization) had its own issues with excessive interest. Evidence suggests that wealthy landowners loaned out silver and barley at rates of 20 percent or more, with non-payment resulting in bondage. In response, the Babylonian monarch occasionally stepped in to free the debtors.

A measly 20 percent? Flash forward to the present. At 36 percent, inflation has not had much of an impact on the Apple-Google deal.

Who is the University of Chicago economist who allegedly revealed a super secret number? According to the always-begging Wikipedia, he is a person who has written more than 50 articles. He is a recipient of the MacArthur Fellowship, sometimes known as a “genius grant.” Ergo, a genius.

I noted this passage in the allegedly accurate write up:

Google had argued as recently as last week that the details of the agreement were sensitive company information – and that revealing the info “would unreasonably undermine Google’s competitive standing in relation to both competitors and other counterparties.” Schmidtlein [Google’s robust legal eagle] and other Google attorneys have pushed back on DOJ’s assertions regarding the default search engine deals. The company argues that its payments to Apple, AT&T and other firms are fair compensation.

I like the phrase “fair compensation.” It matches nicely with the 36 percent commission on top of the $25 billion Google paid Apple to make the wonderful Google search system the default in Apple’s Safari browser. The money, in my opinion, illustrates the depth of love users have for the Google search system. Presumably Google wants to spare the Safari user the hassle required to specify another Web search system like Bing.com or Yandex.com.
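
A bit of dinobaby arithmetic puts the 36 percent figure in context. In this Python sketch, only the 36 percent share comes from the testimony; the revenue pool is a made-up number for illustration:

```python
# Back-of-the-envelope arithmetic for the Safari revenue split. The 36% share
# is from the trial testimony; the revenue pool below is a hypothetical figure,
# not a reported number.

apple_share = 0.36

def split_revenue(safari_ad_revenue_billions: float) -> tuple[float, float]:
    """Return (Apple's cut, Google's remainder) for a given revenue pool."""
    apple_cut = safari_ad_revenue_billions * apple_share
    return apple_cut, safari_ad_revenue_billions - apple_cut

apple_cut, google_keeps = split_revenue(50.0)  # hypothetical $50B pool
```

At a hypothetical $50 billion pool, Apple’s cut works out to $18 billion, which suggests the reported $25 billion payment implies an even larger Safari-derived pool.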

Goodness, Google cares about its users so darned much, I conclude.

Despite the heroic efforts of Big Tech on Trial, I find that getting information about a trial between the US and everyone’s favorite search system is difficult. Why the secrecy? Why the redactions? Why the cringing when the genius revealed the 36 percent commission?

I think I know why. Here are three reasons for the cringe:

  1. Google is thin-skinned. Criticism is not part of the game plan, particularly with high school reunions coming up.
  2. Google understands that those not smart enough (like the genius Ken Murphy) would not understand the logic of the number. Those who are not Googley won’t get it, so why bother to reveal the number?
  3. Google hires geniuses. Geniuses don’t make mistakes. Therefore, the 36 percent reveal is numeric proof of the sophistication of Google’s analytic expertise. Apple could have gotten more money; Google is the winner.

Net net: My hunch is that the cloud of unknowing wrapped around the evidence in this trial makes clear that the Google is just doing what anyone smart enough to work at Google would do. Cleverness is good. Being a genius is good. Appearing to be dumb is not Googley. Oh, oh. I am not smart enough to see the sheer brilliance of the number, its revelation, and how it makes Google even more adorable with its super special deals.

Stephen E Arnold, November 13, 2023

Looking at the Future Through a $100 Bill: Quite a Vision

November 9, 2023

This essay is the work of a dumb humanoid. No smart software required.

Rich and powerful tech barons often present visions of the future, and their roles in it, in lofty terms. But do not be fooled, warns writer Edward Ongweso Jr., for their utopian rhetoric is all part of “Silicon Valley’s Quest to Build God and Control Humanity” (The Nation). These idealistic notions have been consolidated by prominent critics Timnit Gebru and Emile Torres into TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. For an hour-and-a-half dive into that stack of overlapping optimisms, listen to the podcast here. Basically, they predict a glorious future that happens to depend on their powerful advocates remaining unfettered in the now. How convenient.

Ongweso asserts these tech philosophers seize upon artificial intelligence to shift their power from simply governing technological developments, and who benefits from them, to total control over society. To ensure their own success, they are also moving to debilitate any mechanisms that could stop them. All while distracting the masses with their fanciful visions. Ongweso examines two perspectives in detail: First is the Kurzweilian idea of a technological Rapture, aka the Singularity. The next iteration, embodied by the likes of Marc Andreessen, is supposedly more secular but no less grandiose. See the article for details on both. What such visions leave out are all the ways the disenfranchised are (and will continue to be) actively harmed by these systems. Which is, of course, the point. Ongweso concludes:

“Regardless of whether saving the world with AI angels is possible, the basic reason we shouldn’t pursue it is because our technological development is largely organized for immoral ends serving people with abhorrent visions for society. The world we have is ugly enough, but tech capitalists desire an even uglier one. The logical conclusion of having a society run by tech capitalists interested in elite rule, eugenics, and social control is ecological ruin and a world dominated by surveillance and apartheid. A world where our technological prowess is finely tuned to advance the exploitation, repression, segregation, and even extermination of people in service of some strict hierarchy. At best, it will be a world that resembles the old forms of racist, sexist, imperialist modes of domination that we have been struggling against. But the zealots who enjoy control over our tech ecosystem see an opportunity to use new tools—and debates about them—to restore the old regime with even more violence that can overcome the funny ideas people have entertained about egalitarianism and democracy for the last few centuries. Do not fall for the attempt to limit the debate and distract from their political projects. The question isn’t whether AI will destroy or save the world. It’s whether we want to live in the world its greatest shills will create if given the chance.”

Good question.

Cynthia Murrell, November 9, 2023

The AI Bandwagon: A Hoped for Lawyer Billing Bonanza

November 8, 2023

This essay is the work of a dumb humanoid. No smart software required.

The AI bandwagon is picking up speed. A dark smudge appears in the sky. What is it? An unidentified aerial phenomenon? No, it is a dense cloud of legal eagles. I read “U.S. Regulation of Artificial Intelligence: Presidential Executive Order Paves the Way for Future Action in the Private Sector.”

image

A legal eagle — also known as a lawyer or the segment of humanity one of Shakespeare’s characters wanted to drown — is thrilled to read an official version of the US government’s AI statement. Look at what is coming from above. It is money from fees. Thanks, Microsoft Bing, you do understand how the legal profession finds pots of gold.

In this essay, which is free advice and possibly marketing hoo hah, I noted this paragraph:

While the true measure of the Order’s impact has yet to be felt, clearly federal agencies and executive offices are now required to devote rigorous analysis and attention to AI within their own operations, and to embark on focused rulemaking and regulation for businesses in the private sector. For the present, businesses that have or are considering implementation of AI programs should seek the advice of qualified counsel to ensure that AI usage is tailored to business objectives, closely monitored, and sufficiently flexible to change as laws evolve.

Absolutely. I would wager a 25-cent coin that the advice, unlike the free essay, will incur a fee. Some of those legal fees make the pittance I charge look like the cost of a chopped liver sandwich in a Manhattan deli.

Stephen E Arnold, November 8, 2023

