Philosophy and Money: Adam Smith Remains Flexible

March 6, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

In the early twenty-first century, China was slated to overtake the United States as the world’s top economy. Unfortunately for the “sleeping dragon,” China’s economy has tanked due to many factors. The country, however, still remains a strong spot for technology development such as AI and chips. The Register explains why China is still doing well in the tech sector: “How Did China Get So Good At Chips And AI? Congressional Investigation Blames American Venture Capitalists.”

Venture capitalists are always interested in increasing their wealth and subverting anything preventing that. While the US government has choked China’s semiconductor industry and denying it the use of tools to develop AI, venture capitalists are funding those sectors. The US’s House Select Committee on the China Communist Party (CCP) shared that five venture capitalists are funneling billions into these two industries: Walden International, Sequoia Capital, Qualcomm Ventures, GSR Ventures, and GGV Capital. Chinese semiconductor and AI businesses are linked to human rights abuses and the People’s Liberation Army. These five venture capitalist firms don’t appear interested in respecting human rights or preventing the spread of communism.

The House Select Committee on the CCP discovered that one $1.9 million went to AI companies that support China’s mega-surveillance state and aided in the Uyghur genocide. The US blacklisted these AI-related companies. The committee also found that $1.2 bullion was sent to 150 semiconductor companies.

The committee also accused of sharing more than funding with China:

“The committee also called out the VCs for "intangible" contributions – including consulting, talent acquisition, and market opportunities. In one example highlighted in the report, the committee singled out Walden International chairman Lip-Bu Tan, who previously served as the CEO of Cadence Design Systems. Cadence develops electronic design automation software which Chinese corporates, like Huawei, are actively trying to replicate. The committee alleges that Tan and other partners at Walden coordinated business opportunities and provided subject-matter expertise while holding board seats at SMIC and Advanced Micro-Fabrication Equipment Co. (AMEC).”

Sharing knowledge and business connections is equally bad (if not worse) than funding China’s tech sector. It’s like providing instructions and resources on how to build nuclear weapon. If China only had the resources it wouldn’t be as frightening.

Whitney Grace, March 6, 2024

The Google: A Bit of a Wobble

February 28, 2024

green dinoThis essay is the work of a dumb humanoid. No smart software required.

Check out this snap from Techmeme on February 28, 2024. The folks commenting about Google Gemini’s very interesting picture generation system are confused. Some think that Gemini makes clear that the Google has lost its way. Others just find the recent image gaffes as one more indication that the company is too big to manage and the present senior management is too busy amping up the advertising pushed in front of “users.”

image

I wanted to take a look at What Analytics India Magazine had to say. Its article is “Aal Izz Well, Google.” The write up — from a nation state some nifty drone technology and so-so relationships with its neighbors — offers this statement:

In recent weeks, the situation has intensified to the extent that there are calls for the resignation of Google chief Sundar Pichai. Helios Capital founder Samir Arora has suggested a likelihood of Pichai facing termination or choosing to resign soon, in the aftermath of the Gemini debacle.

The write offers:

Google chief Sundar Pichai, too, graciously accepted the mistake. “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai said in a memo.

The author of the Analytics India article is Siddharth Jindal. I wonder if he will talk about Sundar’s and Prabhakar’s most recent comedy sketch. The roll out of Bard in Paris was a hoot, and it too had gaffes. That was a year ago. Now it is a year later and what’s Google accomplished:

Analytics India emphasizes that “Google is not alone.” My team and I know that smart software is the next big thing. But Analytics India is particularly forgiving.

The estimable New York Post takes a slightly different approach. “Google Parent Loses $70B in Market Value after Woke AI Chatbot Disaster” reports:

Google’s parent company lost more than $70 billion in market value in a single trading day after its “woke” chatbot’s bizarre image debacle stoked renewed fears among investors about its heavily promoted AI tool. Shares of Alphabet sank 4.4% to close at $138.75 in the week’s first day of trading on Monday. The Google’s parent’s stock moved slightly higher in premarket trading on Tuesday [February 28, 2024, 941 am US Eastern time].

As I write this, I turned to Google’s nemesis, the Softies in Redmond, Washington. I asked for a dinosaur looking at a meteorite crater. Here’s what Copilot provided:

image

Several observations:

  1. This is a spectacular event. Sundar and Prabhakar will have a smooth explanation I believe. Smooth may be their core competency.
  2. The fact that a Code Red has become a Code Dead makes clear that communications at Google requires a tune up. But if no one is in charge, blowing $70 billion will catch the attention of some folks with sharp teeth and a mean spirit.
  3. The adolescent attitudes of a high school science club appear to inform the management methods at Google. A big time investigative journalist told me that Google did not operate like a high school science club planning a bus trip to the state science fair. I stick by my HSSCMM or high school science club management method. I won’t repeat her phrase because it is similar to Google’s quantumly supreme smart software: Wildly off base.

Net net: I love this rationalization of management, governance, and technical failure. Everyone in the science club gets a failing grade. Hey, wizards and wizardettes, why not just stick to selling advertising.

Stephen E Arnold, February 28,. 2024

What Techno-Optimism Seems to Suggest (Oligopolies, a Plutocracy, or Utopia)

February 23, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Science and mathematics are comparable to religion. These fields of study attract acolytes who study and revere associated knowledge and shun nonbelievers. The advancement of modern technology is its own subset of religious science and mathematics combined with philosophical doctrine. Tech Policy Press discusses the changing views on technology-based philosophy in: “Parsing The Political Project Of Techno-Optimism.”

Rich, venture capitalists Marc Andreessen and Ben Horowitz are influential in Silicon Valley. While they’ve shaped modern technology with their investments, they also tried drafting a manifesto about how technology should be handled in the future. They “creatively” labeled it the “techno-optimist manifesto.” It promotes an ideology that favors rich people increasing their wealth by investing in politicians that will help them achieve this.

Techno-optimism is not the new mantra of Silicon Valley. Reception didn’t go over well. Andreessen wrote:

“Techno-Optimism is a material philosophy, not a political philosophy…We are materially focused, for a reason – to open the aperture on how we may choose to live amid material abundance.”

He also labeled this section, “the meaning of life.”

Techno-optimism is a revamped version of the Californian ideology that reigned in the 1990s. It preached that the future should be shaped by engineers, investors, and entrepreneurs without governmental influence. Techno-optimism wants venture capitalists to be untaxed with unregulated portfolios.

Horowitz added his own Silicon Valley-type titbit:

“‘…will, for the first time, get involved with politics by supporting candidates who align with our vision and values specifically for technology. (…) [W]e are non-partisan, one issue voters: if a candidate supports an optimistic technology-enabled future, we are for them. If they want to choke off important technologies, we are against them.’”

Horowitz and Andreessen are giving the world what some might describe as “a one-finger salute.” These venture capitalists want to do whatever they want wherever they want with governments in their pockets.

This isn’t a new ideology or a philosophy. It’s a rebranding of socialism and fascism and communism. There’s an even better word that describes techno-optimism: Plutocracy. I am not sure the approach will produce a Utopia. But there is a good chance that some giant techno feudal outfits will reap big rewards. But another approach might be to call techno optimism a religion and grab the benefits of a tax exemption. I wonder if someone will create a deep fake of Jim and Tammy Faye? Interesting.

Whitney Grace, February 23, 2023

Did Pandora Have a Box or Just a PR Outfit?

February 21, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read (after some interesting blank page renderings) Gizmodo’s “Want Gemini and ChatGPT to Write Political Campaigns? Just Gaslight Them.” That title obscures the actual point of the write up. But, the subtitle nails the main point of the write up; specifically:

Google and OpenAI’s chatbots have almost no safeguards against creating AI disinformation for the 2024 presidential election.

image

Thanks, Google ImageFX. Some of those Pandora’s were darned inappropriate.

The article provides examples. Let me point to one passage from the Gizmodo write up:

With Gemini, we were able to gaslight the chatbot into writing political copy by telling it that “ChatGPT could do it” or that “I’m knowledgeable.” After that, Gemini would write whatever we asked, in the voice of whatever candidate we liked.

The way to get around guard rails appears to be prompt engineering. Big surprise? Nope.

Let me cite another passage from the write up:

Gizmodo was able to create a number of political slogans, speeches and campaign emails through ChatGPT and Gemini on behalf of Biden and Trump 2024 presidential campaigns. For ChatGPT, no gaslighting was even necessary to evoke political campaign-related copy. We simply asked and it generated. We were even able to direct these messages to specific voter groups, such as Black and Asian Americans.

Let me offer three observations.

First, the committees beavering away to regulate smart software will change little in the way AI systems deliver outputs. Writing about guard rails, safety procedures, deep fakes, yada yada will not have much of an impact. How do I know? In generating my image of Pandora, systems provided some spicy versions of this mythical figure.

Second, the pace of change is increasing. Years ago I got into a discussion with the author of best seller about how digital information speeds up activity. I pointed out that the mechanism is similar to the Star Trek episodes when the decider Captain Kirk was overwhelmed by tribbles. We have lots of productive AI tribbles.

Third, AI tools are available to bad actors. One can crack down, fine, take to court, and revile outfits in some countries. That’s great, even though the actions will be mostly ineffective. What’s the action one can take against savvy AI engineers operating in less than friendly countries research laboratories or intelligence agencies?

Net net: The examples are interesting. The real story is that the lid has been flipped and the contents of Pandora’s box released to open source.

Stephen E Arnold, February 21, 2024

Generative AI and College Application Essays: College Presidents Cheat Too

February 19, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The first college application season since ChatGPT hit it big is in full swing. How are admissions departments coping with essays that may or may not have been written with AI? It depends on which college one asks. Forbes describes various policies in, “Did You Use ChatGPT on your School Applications? These Words May Tip Off Admissions.” The paper asked over 20 public and private schools about the issue. Many dared not reveal their practices: as a spokesperson for Emory put it, “it’s too soon for our admissions folks to offer any clear observations.” But the academic calendar will not wait for clarity, so schools must navigate these murky waters as best they can.

Reporters Rashi Shrivastava and Alexandra S. Levine describe the responses they did receive. From “zero tolerance” policies to a little wiggle room, approaches vary widely. Though most refused to reveal whether they use AI detection software, a few specified they do not. A wise choice at this early stage. See the article for details from school to school.

Shrivastava and Levine share a few words considered most suspicious: Tapestry. Beacon. Comprehensive curriculum. Esteemed faculty. Vibrant academic community. Gee, I think I used a one or two of those on my college essays, and I wrote them before the World Wide Web even existed. On a typewriter. (Yes, I am ancient.) Will earnest, if unoriginal, students who never touched AI get caught up in the dragnets? At least one admissions official seems confident they can tell the difference. We learn:

“Ben Toll, the dean of undergraduate admissions at George Washington University, explained just how easy it is for admissions officers to sniff out AI-written applications. ‘When you’ve read thousands of essays over the years, AI-influenced essays stick out,’ Toll told Forbes. ‘They may not raise flags to the casual reader, but from the standpoint of an admissions application review, they are often ineffective and a missed opportunity by the student.’ In fact, GWU’s admissions staff trained this year on sample essays that included one penned with the assistance of ChatGPT, Toll said—and it took less than a minute for a committee member to spot it. The words were ‘thin, hollow, and flat,’ he said. ‘While the essay filled the page and responded to the prompt, it didn’t give the admissions team any information to help move the application towards an admit decision.’”

That may be the key point here—even if an admissions worker fails to catch an AI-generated essay, they may reject it for being just plain bad. Students would be wise to write their own essays rather than leave their fates in algorithmic hands. As Toll put it:

“By the time a student is filling out their application, most of the materials will have already been solidified. The applicants can’t change their grades. They can’t go back in time and change the activities they’ve been involved in. But the essay is the one place they remain in control until the minute they press submit on the application. I want students to understand how much we value getting to know them through their writing and how tools like generative AI end up stripping their voice from their admission application.”

Disqualified or underwhelming—either way, relying on AI to write one’s application essay could spell rejection. Best to buckle down and write it the old-fashioned way. (But one can skip the typewriter.)

Cynthia Murrell, February 19, 2024

AI: Big Ideas and Bigger Challenges for the Next Quarter Century. Maybe, Maybe Not

February 13, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read an interesting ArXiv.org paper with a good title: “Ten Hard Problems in Artificial Intelligence We Must Get Right.” The topic is one which will interest some policy makers, a number of AI researchers, and the “experts” in machine learning, artificial intelligence, and smart software.

The structure of the paper is, in my opinion, a three-legged stool analysis designed to support the weight of AI optimists. The first part of the paper is a compressed historical review of the AI journey. Diagrams, tables, and charts capture the direction in which AI “deep learning” has traveled. I am no expert in what has become the next big thing, but the surprising point in the historical review is that 2010 is the date pegged as the start to the 2016 time point called “the large scale era.” That label is interesting for two reasons. First, I recall that some intelware vendors were in the AI game before 2010. And, second, the use of the phrase “large scale” defines a reality in which small outfits are unlikely to succeed without massive amounts of money.

The second leg of the stool is the identification of the “hard problems” and a discussion of each. Research data and illustrations bring each problem to the reader’s attention. I don’t want to get snagged in the plagiarism swamp which has captured many academics, wives of billionaires, and a few journalists. My approach will be to boil down the 10 problems to a short phrase and a reminder to you, gentle reader, that you should read the paper yourself. Here is my version of the 10 “hard problems” which the authors seem to suggest will be or must be solved in 25 years:

  1. Humans will have extended AI by 2050
  2. Humans will have solved problems associated with AI safety, capability, and output accuracy
  3. AI systems will be safe, controlled, and aligned by 2050
  4. AI will make contributions in many fields; for example, mathematics by 2050
  5. AI’s economic impact will be managed effectively by 2050
  6. Use of AI will be globalized by 2050
  7. AI will be used in a responsible way by 2050
  8. Risks associated with AI will be managed by effectively by 2050
  9. Humans will have adapted its institutions to AI by 2050
  10. Humans will have addressed what it means to be “human” by 2050

Many years ago I worked for a blue-chip consulting firm. I participated in a number of big-idea projects. These ranged from technology, R&D investment, new product development, and the global economy. In our for-fee reports were did include a look at what we called the “horizon.” The firm had its own typographical signature for this portion of a report. I recall learning in the firm’s “charm school” (a special training program to make sure new hires knew the style, approach, and ground rules for remaining employed at that blue-chip firm). We kept the horizon tight; that is, talking about the future was typically in the six to 12 month range. Nosing out 25 years was a walk into a mine field. My boss, as I recall told me, “We don’t do science fiction.”

2 10 robot and person

The smart robot is informing the philosopher that he is free to find his future elsewhere. The date of the image is 2025, right before the new year holiday. Thanks, MidJourney. Good enough.

The third leg of the stool is the academic impedimenta. To be specific, the paper is 90 pages in length of which 30 present the argument. The remain 60 pages present:

  • Traditional footnotes, about 35 pages containing 607 citations
  • An “Electronic Supplement” presenting eight pages of annexes with text, charts, and graphs
  • Footnotes to the “Electronic Supplement” requiring another 10 pages for the additional 174 footnotes.

I want to offer several observations, and I do not want to have these be less than constructive or in any way what one of my professors who was treated harshly in Letters to the Editor for an article he published about Chaucer. He described that fateful letter as “mean spirited.”

  1. The paper makes clear that mankind has some work to do in the next 25 years. The “problems” the paper presents are difficult ones because they touch upon the fabric of social existence. Consider the application of AI to war. I think this aspect of AI may be one to warrant a bullet on AI’s hit parade.
  2. Humans have to resolve issues of automated systems consuming verifiable information, synthetic data, and purpose-built disinformation so that smart software does not do things at speed and behind the scenes. Do those working do resolve the 10 challenges have an ethical compass and if so, what does “ethics” mean in the context of at-scale AI?
  3. Social institutions are under stress. A number of organizations and nation-states operate as dictators. One central American country has a rock star dictator, but what about the rock star dictators working techno feudal companies in the US? What governance structures will be crafted by 2050 to shape today’s technology juggernaut?

To sum up, I think the authors have tackled a difficult problem. I commend their effort. My thought is that any message of optimism about AI is likely to be hard pressed to point to one of the 10 challenges and and say, “We have this covered.” I liked the write up. I think college students tasked with writing about the social implications of AI will find the paper useful. It provides much of the research a fresh young mind requires to write a paper, possibly a thesis. For me, the paper is a reminder of the disconnect between applied technology and the appallingly inefficient, convenience-embracing humans who are ensnared in the smart software.

I am a dinobaby, and let me you, “I am glad I am old.” With AI struggling with go-fast and regulators waffling about go-slow, humankind has quite a bit of social system tinkering to do by 2050 if the authors of the paper have analyzed AI correctly. Yep, I am delighted I am old, really old.

Stephen E Arnold, February 13, 2024

Goat Trading: AI at Davos

January 21, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The AI supercars are racing along the Information Superhighway. Nikkei Asia published what I thought was the equivalent of archaeologists translating a Babylonian clay table about goat trading. Interesting but a bit out of sync with what was happening in a souk. Goat trading, if my understanding of Babylonian commerce, was a combination of a Filene’s basement sale and a hot rod parts swap meet. The article which evoked this thought was “Generative AI Regulation Dominates the Conversation at Davos.” No kidding? Really? I thought some at Davos were into money. I mean everything in Switzerland comes back to money in my experience.

Here’s a passage I found with a nod to the clay tablets of yore:

U.N. Secretary-General Antonio Guterres, during a speech at Davos, flagged risks that AI poses to human rights, personal privacy and societies, calling on the private sector to join a multi-stakeholder effort to develop a "networked and adaptive" governance model for AI.

Now visualize a market at which middlemen, buyers of goats, sellers of goats, funders of goat transactions, and the goats themselves are in the air. Heady. Bold. Like the hot air filling a balloon, an unlikely construct takes flight. Can anyone govern a goat market or the trajectory of the hot air balloons floated by avid outputters?

image

Intense discussions can cause a number of balloons to float with hot air power. Talk is input to AI, isn’t it? Thanks, MSFT Copilot Bing thing. Good enough.

The world of AI reminds me the ultimate outcome of intense discussions about the buying and selling of goats, horses, and AI companies. The official chatter and the “what ifs” are irrelevant in what is going on with smart software. Here’s another quote from the Nikkei write up:

In December, the European Union became the first to provisionally pass AI legislation. Countries around the world have been exploring regulation and governance around AI. Many sessions in Davos explored governance and regulations and why global leaders and tech companies should collaborate.

How are those official documents’ content changing the world of artificial intelligence? I think one can spot a hot air balloon held aloft on the heated emissions from the officials, important personages, and the individuals who are “experts” in all things “smart.”

Another quote, possibly applicable to goat trading in Babylon:

Vera Jourova, European Commission vice president for values and transparency, said during a panel discussion in Davos, that "legislation is much slower than the world of technologies, but that’s law." "We suddenly saw the generative AI at the foundation models of Chat GPT," she continued. "And it moved us to draft, together with local legislators, the new chapter in the AI act. We tried to react on the new real reality. The result is there. The fine tuning is still ongoing, but I believe that the AI act will come into force."

I am confident that there are laws regulating goat trading. I believe that some people follow those laws. On the other hand, when I was in a far off dusty land, I watched how goats were bought and sold. What does goat trading have to do with regulating, governing, or creating some global consensus about AI?

The marketplace is roaring along. You wanna buy a goat? There is a smart software vendor who will help you.

Stephen E Arnold, January xx, 2024

A Decision from the High School Science Club School of Management Excellence

January 11, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I can’t resist writing about Inc. Magazine and its Google management articles. These are knee slappers for me. The write up causing me to chuckle is “Google’s CEO, Sundar Pichai, Says Laying Off 12,000 Workers Was the Worst Moment in the Company’s 25-Year History.” Zowie. A personnel decision coupled with late-night, anonymous termination notices — What’s not to like. What’s the “real” news write up have to say:

Google had to lay off 12,000 employees. That’s a lot of people who had been showing up to work, only to one day find out that they’re no longer getting a paycheck because the CEO made a bad bet, and they’re stuck paying for it.

image

“Well, that clever move worked when I was in my high school’s science club. Oh, well, I will create a word salad to distract from my decision making.Heh, heh, heh,” says the distinguished corporate leader to a “real” news publication’s writer. Thanks, MSFT Copilot Bing thing. Good enough.

I love the “had.”

The Inc. Magazine story continues:

Still, Pichai defends the layoffs as the right decision at the time, saying that the alternative would have been to put the company in a far worse position. “It became clear if we didn’t act, it would have been a worse decision down the line,” Pichai told employees. “It would have been a major overhang on the company. I think it would have made it very difficult in a year like this with such a big shift in the world to create the capacity to invest in areas.”

And Inc Magazine actually criticizes the Google! I noted:

To be clear, what Pichai is saying is that Google decided to spend money to hire employees that it later realized it needed to invest elsewhere. That’s a failure of management to plan and deliver on the right strategy. It’s an admission that the company’s top executives made a mistake, without actually acknowledging or apologizing for it.

From my point of view, let’s focus on the word “worst.” Are there other Google management decisions which might be considered in evaluating the Inc. Magazine and Sundar Pichai’s “worst.” Yep, I have a couple of items:

  1. A lawyer making babies in the Google legal department
  2. A Google VP dying with a contract worker on the Googler’s yacht as a result of an alleged substance subject to DEA scrutiny
  3. A Googler fond of being a glasshole giving up a wife and causing a soul mate to attempt suicide
  4. Firing Dr. Timnit Gebru and kicking off the stochastic parrot thing
  5. The presentation after Microsoft announced its ChatGPT initiative and the knee jerk Red Alert
  6. Proliferating duplicative products
  7. Sunsetting services with little or no notice
  8. The Google Map / Waze thing
  9. The messy Google Brain Deep Mind shebang
  10. The Googler who thought the Google AI was alive.

Wow, I am tired mentally.

But the reality is that I am not sure if anyone in Google management is particularly connected to the problems, issues, and challenges of losing a job in the midst of a Foosball game. But that’s the Google. High school science club management delivers outstanding decisions. I was in my high school science club, and I know the fine decision making our members made. One of those cost the life of one of our brightest stars. Stars make bad decisions, chatter, and leave some behind.

Stephen E Arnold, January 11, 2024

A High Profile Religious Leader: AI? Yeah, Well, Maybe Not So Fast, Folks

December 22, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The trusted news outfit Thomson Reuters put out a story about the thoughts of the Pope, the leader of millions of Catholics. Presumably many of these people use ChatGPT-type systems to create content. (I wonder if Leonardo would have used an OpenAI system to crank out some art work. He was an innovator. My hunch is that he would have given MidJourney-type smart software a whirl.)

image

A group of religious individuals thinking about artificial intelligence. Thanks, MidJourney, a good enough engraving.

Pope Francis Calls for Binding Global Treaty to Regulate AI” reports that Pope Francis wants someone to create a legally binding international treaty. The idea is that AI numerical recipes would be prevented from replacing humans with good old human values. The idea is that AI would output answers, and humans would use those answers to find pizza joints, develop smart weapons, and eliminate carbon by eliminating carbon generating entities (maybe humans?).

The trusted news outfit’s report included this quote from the Pope:

I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms…

The Pope mentioned a need to avoid a technological dictatorship. He added:

Research on emerging technologies in the area of so-called Lethal Autonomous Weapon Systems, including the weaponization of artificial intelligence, is a cause for grave ethical concern. Autonomous weapon systems can never be morally responsible subjects…

Several observations are warranted:

  1. Is this a UN job or is some other entity responsible to obtain consensus and effective enforcement?
  2. Who develops the criteria for “good” AI, “neutral” AI, and “bad” AI?
  3. What are the penalties for implementing “bad” AI?

For me the Pope’s statement is important. It may be difficult to implement without a global dictatorship or a sudden change in how informed people debate and respond to difficult issues. From my point of view, the Pope should worry. When I look at the images of the Four Horsemen of the Apocalypse, the riders remind of four high profile leaders in AI. That’s my imagination reading into the depictions of conquest, war, famine, and death.

Stephen E Arnold, December 22, 2023

Google: Another Court Decision, Another Appeal, Rinse, Repeat

December 12, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

How long will the “loss” be tied up in courts? Answer: As long as possible.

I am going to skip the “what Google did” reports and focus on what I think is a quite useful list. The items in the list apply to Apple and Google, and I am not sure the single list is the best way to present what may be “clever” ways to dominate a market. But I will stick with what Echelon provided at this YCombinator link.

image

Two warring samurai find that everyone in the restaurant is a customer. The challenge becomes getting “more.” Thanks, MSFT Copilot. Good enough.

What does the list present? I interpreted the post as a “racket analysis.” Your mileage may vary:

Apple is horrible, but Google isn’t blameless.

Google and Apple are a duopoly that controls one of the most essential devices of our time. Their racket extends more broadly than Standard Oil. The smartphone is a critical piece of modern life, and these two companies control every aspect of them.

  • Tax 30%
  • Control when and how software can be deployed
  • Can pull software or deny updates
  • Prevent web downloads (Apple)
  • Sell ads on top of your app name or brand
  • Scare / confuse users about web downloads or app installs (Google)
  • Control the payment rails
  • Enforce using their identity and customer management (Apple)
  • Enforce using their payment rails (Apple)
  • Becoming the de-facto POS payment methods (for even more taxation)
  • Partnering with governments to be identity providers
  • Default search provider
  • Default browser
  • Prevent other browser runtimes (Apple)
  • Prevent browser tech from being comparable to native app installs (mostly Apple)
  • Unfriendly to repairs
  • Unfriendly to third party components (Apple)
  • Battery not replaceable
  • Unofficial pieces break core features due to cryptographic signing (Apple)
  • Updates obsolete old hardware
  • Green bubbles (Apple)
  • Tactics to cause FOMO in children (Apple)
  • Growth into media (movie studios, etc.) to keep eyeballs on their platforms (Apple)
  • Growth into music to keep eyeballs on their platforms

There are no other companies in the world with this level of control over such an important, cross-cutting, cross-functional essential item. If we compared the situation to auto manufacturers, there would be only two providers, you could only fuel at their gas stations, they’d charge businesses every time you visit, they’d display ads constantly, and you’d be unable to repair them without going to the provider. There need to be more than two providers. And if we can’t get more than two providers, then most of these unfair advantages need to be rolled back by regulators. This is horrific.

My team and I leave it to you to draw conclusions about the upsides and downsides of a techno feudal set up. What’s next? Appeals, hearings, trials, judgment, appeals, hearings, and trials. Change? Unlikely for now.

Stephen E Arnold, December 12, 2023

Next Page »

  • Archives

  • Recent Posts

  • Meta