Palantir: Fear Is Good. Fear Sells.

June 18, 2024

President Eisenhower may not have foreseen AI when he famously warned of the military-industrial complex, but certain software firms certainly fit the bill. One of the most successful, Palantir, is pursuing Madison Avenue type marketing with a message of alarm. The company’s co-founder, Alex Karp, is quoted in the fear-mongering post at right-wing Blaze Media, “U.S. Prepares for War Amid Growing Tensions that China Could Invade Taiwan.”

After several paragraphs of panic over tensions between China and Taiwan, writer Collin Jones briefly admits “It is uncertain if and when the Chinese president will deploy an attack against the small country.” He quickly pivots to the scary AI arms race, intimating Palantir and company can save us as long as we let (fund) them. The post concludes:

“Palantir’s CEO and co-founder Alex Karp said: ‘The way to prevent a war with China is to ramp up not just Palantir, but defense tech startups that produce software-defining weapons systems that scare the living F out of our adversaries.’ Karp noted that the U.S. must stay ahead of its military opponents in the realm of AI. ‘Our adversaries have a long tradition of being not interested in the rule of law, not interested in fairness, not interested in human rights and on the battlefield. It really is going to be us or them.’ Karp noted that the U.S. must stay ahead of its military opponents in the realm of AI. You do not want a world order where our adversaries try to define new norms. It would be very bad for the world, and it would be especially bad for America,’ Karp concluded.”

Wow. But do such scare tactics work? Of course they do. For instance, we learn from DefenseScoop, “Palantir Lands $480M Army Contract for Maven Artificial Intelligence Tech.” That article reports on not one but two Palantir deals: the titular Maven expansion and, we learn:

“The company was recently awarded another AI-related deal by the Army for the next phase of the service’s Tactical Intelligence Targeting Access Node (TITAN) ground station program, which aims to provide soldiers with next-generation data fusion and deep-sensing capabilities via artificial intelligence and other tools. That other transaction agreement was worth $178 million.”

Those are just two recent examples of Palantir’s lucrative government contracts, ones that have not, as of this writing, been added this running tally. It seems the firm has found its winning strategy. Ramping up tensions between world powers is a small price to pay for significant corporate profits, apparently.

Cynthia Murrell, June 18, 2024

The Gray Lady Tap Dances

June 17, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The collision of myth, double talk, technology, and money produces some fascinating tap dancing. Tip tap tip tap. Tap tap. That’s the sound of the folks involved with explaining that technology is no big deal. Drum roll. Then the coda. Tip tap tip tap. Tap tap tap. It is not money. Tip tap tip tap. tap tap.

I think quite a few business decisions are about money; specifically, getting a bonus or a hefty raise because “efficiency” improves “quality.” One can dance around the dead horse, but at some point that horse needs to be relocated.

image

The “real” Mona Lisa. Can she be enhanced, managed, and be populated with metadata without a human art director? Yep. Thanks, MSFT Copilot. Good enough.

I read “New York Times Union Urges Management to Reconsider 9 Art Department Cuts as Paper Ramps Up AI Tools | Exclusive.” The write up weaves a number of themes together. There is the possibility of management waffling, a common practice these days. Recall, an incident, Microsoft? The ever-present next big thing makes an appearance. Plus, there is the Gray Lady, working hard to maintain its position as the newspaper for for the USA today. (That sounds familiar, doesn’t it?)

The main point of the write up is that the NYT’s art department might lose staff. The culprit is not smart software. Money is not the issue. Quality will not suffer. Yada yada. The write up says:

The Times denies that the reductions are in any way related to the newspaper’s AI initiatives.

And the check is in the mail.

I also noted:

A spokesman for the Times said the affected employees are being offered a buyout, and have nothing to do with the use of AI. “Last month, The Times’s newsroom made the difficult decision to reduce the size of its art production team with workflow changes to make photo toning and color correction work more efficient,” Charlie Stadtlander told TheWrap.”On May 30th, we offered generous voluntary buyouts for 9 employees to accept. These changes involve the adoption of new workflows and the expanded use of industry-standard tools that have been in use for years — they are not related to The Times’s AI efforts.”

Nope. Never. Impossible. Unthinkable.

What is the smart software identified as a staff reducer? It is Claro but that is not the name of the company. The current name of the service is Pixometry, which is a mashup of Claro and Elpical. So what does this controversial smart software do? The firm’s Web site says:

Pixometry is the latest evolution of Claro, the leading automated image enhancement platform for Publishers and Retailers around the globe. Combining exceptional software with outstanding layered AI services, Pixometry delivers a powerful image processing engine capable of creating stunning looking images, highly accurate cut-outs and automatic keywording in seconds. Reducing the demands upon the Photoshop teams, Pixometry integrates seamlessly with production systems and prepares images for use in printed and digital media.

The Pixometry software delivers:

Cloud based automatic image enhancement & visual asset management solutions for publishers & retail business.

Its functions include:

  • Automatic image “correction” because “real” is better than real
  • Automatic cut outs and key wording (I think a cut out is a background remover so a single image can be plucked from a “real” photo
  • Consistent, high quality results. None of that bleary art director eye contact.
  • Multi-channel utilization. The software eliminates telling a Photoshop wizard I need a high-res image for the magazine and a then a 96 spot per inch version for the Web. How long will that take? What? I need the images now.
  • Applied AI image intelligence. Hey, no hallucinations here. This is “real” image enhancement and better than what those Cooper Union space cadets produce when they are not wandering around looking for inspiration or whatever.

Does that sound like reality shaping or deep fake territory? Hmmm. That’s a question none of the hair-on-fire write ups addresses. But if you are a Photoshop  and Lightroom wizard, the software means hasta la vista in my opinion. Smart software may suck at office parties but it does not require vacays, health care (just minor system updates), or unions. Software does not argue, wear political buttons, or sit around staring into space because of a late night at the “library.”

Pretty obscure unless you are a Photoshop wizard. The Pixometry Web site explains that it provides a searchable database of images and what looks like one click enhancement of images. Hey, every image needs a bit of help to be “real”, just like “real” news and “real” management explanations. The Pixometry Web site identifies some organizations as “loving” Pixometry; for example, the star-crossed BBC, News UK, El Mercurio, and the New York Times. Yes, love!

Let’s recap. Most of the reporting about this use of applied smart software gets the name of the system wrong. None of the write ups point out that art director functions in the hands of a latte guzzling professional are not quick, easy, or without numerous glitches. Furthermore, the humans in the “art” department must be managed.

The NYT is, it appears, trying to do the two-step around software that is better, faster, and cheaper than the human powered options. Other observations are:

  1. The fast-talking is not going to change the economic benefit of smart software
  2. The notion of a newspaper fixing up photos underscores that deep fakes have permeated institutions which operate as if it were 1923 skidoo time
  3. The skilled and semi-skilled workers in knowledge industries may taste blood when the titanium snake of AI bites them on the ankle. Some bites will be fatal.

Net net: Being up front may have some benefits. Skip the old soft shoe, please.

Stephen E Arnold, June 17, 2024

Wow, Criticism from Moscow

June 17, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “Edward Snowden Eviscerates OpenAI’s Decision to Put a Former NSA Director on Its Board: This Is a Willful, Calculated Betrayal of the Rights of Every Person on Earth.” The source is the interesting public figure Edward Snowden. He rose to fame by violating his secrecy requirement imposed by the US government on individuals with access to sensitive, classified, or top secret information. He then ended his dalliance with “truth” by relocating to Russia. From that bastion of truth and justice, he gives speeches and works (allegedly) at a foundation. He is a symbol of modern something. I find him a fascinating character, complete with the on-again, off-again glasses and his occasion comments about security. He is an expert on secrets it seems.

image

Thanks, MSFT Copilot.

Fortune Magazine obviously views him as a way to get clicks, sell subscriptions, and cement its position as a source of high-value business information. I am not sure my perception of Fortune is congruent with that statement. Let’s look and see what Mr. Snowden’s “news” is telling Fortune to tell us to cause me to waste a perfectly good Saturday (June 14, 2024) morning writing about an individual who willfully broke the law and decamped to that progressive nation state so believed by its neighbors in Eastern Europe.

Fortune reports:

“Do not ever trust OpenAI or its products,” the NSA employee turned whistleblower wrote on X Friday morning, after the company announced retired U.S. Army Gen. Paul Nakasone’s appointment to the board’s new safety and security committee. “There’s only one reason for appointing [an NSA director] to your board. This is a willful, calculated betrayal of the rights of every person on earth. You have been warned.”

Okay, I am warned. Several observations:

  1. Telegram, allegedly linked in financial and technical ways, to Russia recently began censoring the flow of information from Ukraine into Russia. Does Mr. Snowden have an opinion about that interesting development. Telegram told Tucker Carlson that it embraced freedom. Perhaps OpenAI is simply being pragmatic in the Telegram manner?
  2. Why should Mr. Snowden’s opinion warrant coverage in Fortune Magazine? Oh, sorry. I answered that already. Fortune wants clicks, money, and to be perceived as relevant. News flash: Publishing has changed. Please, tape the memo to your home office wall.
  3. Is Mr. Snowden correct? I am neither hot nor cold when it comes to Sam AI Man, the Big Dog at OpenAI. My thought is that OpenAI might be taking steps to understand how much value the information OpenAI can deliver to the US government once the iPhone magic moves from “to be” to reality. Most Silicon Valley outfits are darned clumsy in their response to warrants. Maybe OpenAI’s access to someone who knows interesting information can be helpful to the company and ultimately to its users who reside in the US?

Since 2013, the “Snowden thing” has created considerable ripples. If one accepts Mr. Snowden’s version of events, he is a hero. As such, shouldn’t he be living in the US, interacting with journalists directly not virtually, and presenting his views to the legal eagles who want to have a chat with him? Mr. Snowden’s response is to live in Moscow. It is okay in the spring and early summer. The rest of the year can be brutal. But there’s always Sochi for a much-needed vacay and the wilds of Siberia for a bit of prison camp exploration.

Moscow has its charms and an outstanding person like Mr. Snowden. Thanks, Fortune, for reminding me how important his ideas and laptop stickers are. I like the “every person on earth.” That will impress people in Latvia.

Stephen E Arnold, June 17, 2024

Hallucinations in the Courtroom: AI Legal Tools Add to Normal Wackiness

June 17, 2024

Law offices are eager to lighten their humans’ workload with generative AI. Perhaps too eager. Stanford University’s HAI reports, “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries.” Close enough for horseshoes, but for justice? And that statistic is with improved, law-specific software. We learn:

“In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported. And our previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his 2023 annual report on the judiciary, Chief Justice Roberts took note and warned lawyers of hallucinations.”

But that was before tailor-made retrieval-augmented generation tools. The article continues:

“Across all areas of industry, retrieval-augmented generation (RAG) is seen and promoted as the solution for reducing hallucinations in domain-specific contexts. Relying on RAG, leading legal research services have released AI-powered legal research products that they claim ‘avoid’ hallucinations and guarantee ‘hallucination-free’ legal citations. RAG systems promise to deliver more accurate and trustworthy legal information by integrating a language model with a database of legal documents. Yet providers have not provided hard evidence for such claims or even precisely defined ‘hallucination,’ making it difficult to assess their real-world reliability.”

So the Stanford team tested three of the RAG systems for themselves, Lexis+ AI from LexisNexis and Westlaw AI-Assisted Research & Ask Practical Law AI from Thomson Reuters. The authors note they are not singling out LexisNexis or Thomson Reuters for opprobrium. On the contrary, these tools are less opaque than their competition and so more easily examined. They found that these systems are more accurate than the general-purpose models like GPT-4. However, the authors write:

“But even these bespoke legal AI tools still hallucinate an alarming amount of the time: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.”

These hallucinations come in two flavors. Many responses are flat out wrong. Others are misgrounded: they are correct about the law but cite irrelevant sources. The authors stress this second type of error is more dangerous than it may seem, for it may lure users into a false sense of security about the tool’s accuracy.

The post examines challenges particular to RAG-based legal AI systems and discusses responsible, transparent ways to use them, if one must. In short, it recommends public benchmarking and rigorous evaluations. Will law firms listen?

Cynthia Murrell, June 17, 2024

A Fancy Way of Saying AI May Involve Dragons

June 14, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The essay “What Apple’s AI Tells Us: Experimental Models” makes clear that pinning down artificial intelligence is proving to be more difficult than some anticipated in January 2023, the day when Google’s Red Alert squawked and many people said, “AI is the silver bullet I want for my innovation cannon.”

image

Image source: https://www.geographyrealm.com/here-be-dragons/

Here’s a sentence I found important in the One Useful Thing essay:

What is worth paying attention to is how all the AI giants are trying many different approaches to see what works.

The write up explains different approach to AI that the author has identified. These are:

  1. Apps
  2. Business models with subscription fees

The essay concludes with a specter “haunting AI.” The write up says:

I do not know if AGI[artificial general intelligence] is achievable, but I know that the mere idea of AGI being possible soon bends everything around it, resulting in wide differences in approach and philosophy in AI implementations.

Today’s smart software environment has an upside other than the money churn the craziness vortices generate:

Having companies take many approaches to AI is likely to lead to faster adoption in the long term. And, as companies experiment, we will learn more about which sets of models are correct.

Several observations are warranted.

First, the confessions of McKinsey’s AI team make it clear that smart outfits may not know what they are doing. The firms just plunge forward and then after months of work recycle the floundering into lessons. Presumably these lessons are “hire McKinsey.” See my write up “What Is McKinsey & Co. Telling Its Clients about AI?”

Second, another approach is to use AI in the hopes that staff costs can be reduced. I think this is the motivation of some AI enthusiasts. PwC (I am not sure if it is a consulting firm, an accounting firm, or some 21st century mutation) fell in lust with OpenAI. Not only did the firm kick OpenAI’s tires, PwC signed up to be what’s called an “enterprise reseller.” A client pays PwC to just make something work. In this case, PwC becomes the equivalent of a fix it shop with a classy address and workers with clean fingernails. The motivation, in my opinion, is cutting staff. “PwC Is Doing Quiet Layoffs. It’s a Brilliant Example of What Not to Do” says:

This is PwC in the U.K., and obviously, they operate under different laws than we do here in the United States. But in case you’re thinking about following this bad example, I asked employment attorney Jon Hyman for advice. He said, "This request would seem to fall under the umbrella of ‘protected concerted activity’ that the NLRB would take issue with. That said, the National Labor Relations Act does not apply to supervisors — defined as one with the authority to make personnel decisions using independent judgment. "Thus," he continues, "whether this specific PwC request runs afoul of the NLRA’s legal protections for employees to engage in protected concerted activity would depend on whether the laid-off employees were ‘supervisors’ under the Act."

I am a simpler person. The quiet layoffs complement the AI initiative. Quiet helps keep staff from making the connection I am suggesting. But consulting firms keep one eye on expenses and the other on partners’ profits. AI is a catalyst, not a technology.

Third, more AI fatigue write ups are appearing. One example is “The AI Fatigue: Are We Getting Tired of Artificial Intelligence?” reports:

Hema Sridhar, Strategic Advisor for Technological Futures at the University of Auckland, says that there is a lot of “noise on the topic” so it is clear that “people are overwhelmed”. “Almost every company is using AI. Pretty much every app that you’re currently using on your phone has recently released some version with some kind of AI-feature or AI-enhanced features,” she adds. “Everyone’s using it and [it’s] going to be part of day-to-day life, so there are going to be some significant improvements in everything from how you search for your own content on your phone, to more improved directions or productivity tools that just fundamentally change the simple things you do every day that are repetitive.”

Let me reference Apple Intelligence to close this write up. Apple did not announce hardware. It talked about “to be” services. Instead of doing the Meta open source thing, the Google wrong answers with historically flawed images, or the MSFT on-again, off-again roll outs — Apple just did “to be.”

My hunch is that Apple is not cautious; its professionals know that AI products and services may be like those old maps which say, “Here be dragons.” Sailing close to the shore  makes sense.

Stephen E Arnold, June 14, 2024

More on TikTok Managing the News Streams

June 14, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

TikTok does not occupy much of my day. I don’t have an account, and I am blissfully unaware of the content on the system. I have heard from those on my research team and from people who attend my lectures at law enforcement / intelligence conferences that it is an influential information conduit. I am a dinobaby, and I am not “into” video. I don’t look up information using TikTok. I don’t follow fashion trends other than those popular among other 80-year-old dinobabies. I am hopeless.

However, I did note “TikTok Users Being Fed Misleading Election News, BBC Finds.” I am mostly unaffected by King Charles’s and his subjects activities. What snagged my attention was the presence of videos which were disseminated via TikTok. These videos delivered

content promoted by social media algorithms has found – alongside funny montages – young people on TikTok are being exposed to misleading and divisive content. It is being shared by everyone from students and political activists to comedians and anonymous bot-like accounts.

Tucked in the BBC write up weas this statement:

TikTok has boomed since the last [British] election. According to media regulator Ofcom, it was the fastest-growing source of news in the UK for the second year in a row in 2023 – used by 10% of adults in this way. One in 10 teenagers say it is their most important news source. TikTok is engaging a new generation in the democratic process. Whether you use the social media app or not, what is unfolding on its site could shape narratives about the election and its candidates – including in ways that may be unfounded.

Shortly after reading the BBC item I saw in my feed (June 3, 2024) this story: “Trump Joins TikTok, the App He Once Tried to Ban.” Interesting.

Several observations are warranted:

  1. Does the US have a similar video channel currently disseminating information into China, the home base of TikTok and its owner? If “No,” why not? Should the US have a similar policy regarding non-US information conduits?
  2. Why has education in Britain failed to educate young people about obtaining and vetting information? Does the US have a similar problem?
  3. Have other countries fallen into the scroll and swipe deserts?

Scary.

Stephen E Arnold, June 14, 2024

Another Captain Obvious AI Report

June 14, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

We’re well aware that biased data pollutes AI algorithms and yields disastrous results. The real life examples include racism, sexism, and prejudice towards people with low socioeconomic status. Beta News adds its take to the opinions of poor data with: “Poisoning The Data Well For Generative AI.” The article restates what we already know: bad large language models (LLMs) lead to bad outcomes. It’s like poisoning a well.

Beta News does bring new idea to the discussion: hackers purposely corrupting data. Bad actors could alter LLMs to teach AI how to be deceptive and malicious. This leads to unreliable and harmful results. What’s horrible is that these LLMs can’t be repaired.

Bad actors are harming generative by inserting malware, phishing, disinformation installing backdoors, data manipulation, and retrieval augmented generation (RAG) in LLMs. If you’re unfamiliar with RAG, it’s when :

“With RAG, a generative AI tool can retrieve data from external sources to address queries. Models that use a RAG approach are particularly vulnerable to poisoning. This is because RAG models often gather user feedback to improve response accuracy. Unless the feedback is screened, attackers can put in fake, deceptive, or potentially compromising content through the feedback mechanism.”

Unfortunately it is difficult to detect data poisoning, so it’s very important for AI security experts to be aware of current scams and how to minimize risks. There aren’t any set guidelines on how to prevent AI data breaches and the experts are writing the procedures as they go. The best advice is to be familiar with AI projects, code, current scams, and run frequent security checks. It’s also wise to not doubt gut instincts.

Whitney Grace, June 14, 2024

Googzilla: Pointing the Finger of Blame Makes Sense I Guess

June 13, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Here you are: The Thunder Lizard of Search Advertising. Pesky outfits like Microsoft have been quicker than Billy the Kid shooting drunken farmers when it comes to marketing smart software. But the real problem in Deadwood is a bunch of do-gooders turned into revolutionaries undermining the granite foundation of the Google. I have this information from an unimpeachable source: An alleged Google professional talking on a podcast. The news release titled “Google Engineer Says Sam Altman-Led OpenAI Set Back AI Research Progress By 5-10 Years: LLMs Have Sucked The Oxygen Out Of The Room” explains that the actions of OpenAI is causing the Thunder Lizard to wobble.

image

One of the team sets himself apart by blaming OpenAI and his colleagues, not himself. Will the sleek, entitled professionals pay attention to this criticism or just hear “OpenAI”? Thanks, MSFT Copilot. Good enough art.

Consider this statement in the cited news release:

He [an employee of the Thunder Lizard] stated that OpenAI has “single-handedly changed the game” and set back progress towards AGI by a significant number of years. Chollet pointed out that a few years ago, all state-of-the-art results were openly shared and published, but this is no longer the case. He attributed this change to OpenAI’s influence, accusing them of causing a “complete closing down of frontier research publishing.”

I find this interesting. One company, its deal with Microsoft, and that firm’s management meltdown produced a “complete closing down of frontier research publishing.” What about the Dr. Timnit Gebru incident about the “stochastic parrot”?

The write up included this gem from the Googley acolyte of the Thunder Lizard of Search Advertising:

He went on to criticize OpenAI for triggering hype around Large Language Models or LLMs, which he believes have diverted resources and attention away from other potential areas of AGI research.

However, DeepMind — apparently the nerve center of the one best way to generate news releases about computational biology — has been generating PR. That does not count because its is real world smart software I assume.

But there are metrics to back up the claim that OpenAI is the Great Destroyer. The write up says:

Chollet’s [the Googler, remember?] criticism comes after he and Mike Knoop, [a non-Googler] the co-founder of Zapier, announced the $1 million ARC-AGI Prize. The competition, which Chollet created in 2019, measures AGI’s ability to acquire new skills and solve novel, open-ended problems efficiently. Despite 300 teams attempting ARC-AGI last year, the state-of-the-art (SOTA) score has only increased from 20% at inception to 34% today, while humans score between 85-100%, noted Knoop. [emphasis added, editor]

Let’s assume that the effort and money poured into smart software in the last 12 months boosted one key metric by 14 percent. Doesn’t’ that leave LLMs and smart software in general far, far behind the average humanoid?

But here’s the killer point?

… training ChatGPT on more data will not result in human-level intelligence.

Let’s reflect on the information in the news release.

  1. If the data are accurate, LLM-based smart software has reached a dead end. I am not sure the law suits will stop, but perhaps some of the hyperbole will subside?
  2. If these insights into the weaknesses of LLMs, why has Google continued to roll out services based on a dead-end model, suffer assorted problems, and then demonstrated its management prowess by pulling back certain services?
  3. Who is running the Google smart software business? Is it the computationalists combining components of proteins or is the group generating blatantly wonky images? A better question is, “Is anyone in charge of non-advertising activities at Google?”

My hunch is that this individual is representing a percentage of a fractionalized segment of Google employees. I do not think a senior manager is willing to say, “Yes, I am responsible.” The most illuminating facet of the article is the clear cultural preference at Google: Just blame OpenAI. Failing that, blame the users, blame the interns, blame another team, but do not blame oneself. Am I close to the pin?

Stephen E Arnold, June 13, 2024

Modern Elon Threats: Tossing Granola or Grenades

June 13, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Bad me. I ignored the Apple announcements. I did spot one interesting somewhat out-of-phase reaction to Tim Apple’s attempt to not screw up again. “Elon Musk Calls Apple Devices with ChatGPT a Security Violation.” Since the Tim Apple crowd was learning about what was “to be,” not what is, this statement caught my attention:

If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.

I want to comment about the implicit “then” in this remarkable prose output from Elon Musk. On the surface, the “then” is that the most affluent mobile phone users will be prohibited from the X.com service. I wonder how advertisers are reacting to this idea of cutting down the potential eyeballs for their product if advertised to an group of prospects no longer clutching Apple iPhones. I don’t advertise, but I can game out how the meetings between the company with advertising dollars and the agency helping the company make informed advertising decisions. (Let’s assume that advertising “works”, and advertising outfits are informed for the purpose of this blog post.)

image

A tortured genius struggles against the psychological forces that ripped the Apple car from the fingers of its rightful owner. Too bad. Thanks, MSFT Copilot. How is your coding security coming along? What about the shut down of the upcharge for Copilot? Oh, no answer. That’s okay. Good enough.

Let’s assume Mr. Musk “sees” something a dinobaby like me cannot. What’s with the threat logic? The loss of a beloved investment? A threat to a to-be artificial intelligence company destined to blast into orbit on a tower of intellectual rocket fuel? Mr. Musk has detected a signal. He has interpreted. And he has responded with an ultimatum. That’s pretty fast action, even for a genius. I started college in 1962, and I dimly recall a class called Psych 101. Even though I attended a low-ball institution, the knowledge value of the course was evident in the large and shabby lecture room with a couple of hundred seats.

Threats, if I am remembering something that took place 62 years ago, tell more about the entity issuing the threat than the actual threat event itself.  The words worming from the infrequently accessed cupboards of my mind are linked to an entity wanting to assert, establish, or maintain some type of control. Slapping quasi-ancient psycho-babble on Mr. Musk is not fair to the grand profession of psychology. However, it does appear to reveal that whatever Apple thinks it will do in its “to be”, coming-soon service struck a nerve into Mr. Musk’s super-bright, well-developed brain.

I surmise there is some insecurity with the Musk entity. I can’t figure out the connection between what amounts to vaporware and a threat to behead or de-iPhone a potentially bucket load of prospects for advertisers to pester. I guess that’s why I did not invent the Cybertruck, a boring machine, and a rocket ship.

But a threat over vaporware in a field which has demonstrated that Googzilla, Microsoft, and others have dropped their baskets of curds and whey is interesting. The speed with which Mr. Musk reacts suggests to me that he perceives the Apple vaporware as an existential threat. I see it as another big company trying to grab some fruit from the AI tree until the bubble deflates. Software does have a tendency to disappoint, build up technical debt, and then evolve to the weird service which no one can fix, change, or kill because meaningful competition no longer exists. When will the IRS computer systems be “fixed”? When will airline reservations systems serve the customer? When will smart software stop hallucinating?

I actually looked up some information about threats from the recently disgraced fake research publisher John Wiley & Sons. “Exploring the Landscape of Psychological Threat” reminded me why I thought psychology was not for me. With weird jargon and some diagrams, the threat may be linked to Tesla’s rumored attempt to fall in love with Apple. The product of this interesting genetic bonding would be the Apple car, oodles of cash for Mr. Musk, and the worshipful affection of the Apple acolytes. But the online date did not work out. Apple swiped Tesla into the loser bin. Now Mr. Musk can get some publicity, put X.com (don’t you love Web sites that remind people of pornography on the Dark Web?) in the news, and cause people like me to wonder. “Why dump on Apple?” (The outfit has plenty of worries with the China thing, doesn’t it? What about some anti-trust action? What about the hostility of M3 powered devices?)

Here’s my take:

  1. Apple Intelligence is a better “name” than Mr. Musk’s AI company xAI. Apple gets to use “AI” but without the porn hook.
  2. A controversial social media emission will stir up the digital elite. Publicity is good. Just ask Michael Cimino of Heaven’s Gate fame?
  3. Mr. Musk’s threat provides an outlet for the failure to make Tesla the Apple car.

What if I am wrong? [a] I don’t care. I don’t use an iPhone, Twitter, or online advertising. [b] A GenX, Y, or Z pooh-bah will present the “truth” and set the record straight. [c] Mr. Musk’s threat will be like the result of a Boring Company operation. A hole, a void.

Net net: Granola. The fast response to what seems to be “coming soon” vaporware suggests a potential weak spot in Mr. Musk’s make up. Is Apple afraid? Probably not. Is Mr. Musk? Yep.

Stephen E Arnold, June 13, 2024

The UK AI Safety Institute: Coming to America

June 13, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One of Neil Diamond’s famous songs is “They’re Coming To America” that explains an immigrant’s journey to the US to pursue to the America. The chorus is the most memorable part of the song, because it repeats the word “today” until the conclusion. In response to the growing concerns about AI’s impact society, The Daily Journal reports that, “UK’s AI Safety Institute Expands To The US, Set To Open US Counterpart In San Francisco.” AI safety is coming to America today or summer 2024.

The Daily Journal’s Bending feature details the following:

“The UK announced it will open a US counterpart to its AI Safety Summit institute in San Francisco this summer to test advanced AI systems and ensure their safety. The expansion aims to recruit a research director and technical staff headed in San Francisco and increase cooperation with the US on AI safety issues. The original UK AI Safety Institute currently has a team of 30 people and is chaired by tech entrepreneur Ian Hogarth. Since being founded last year, the Institute has tested several AI models on challenges but they still struggle with more advanced tests and producing harmful outputs.”

The UK and US will shape the global future of the AI. Because of their prevalence in western, capitalist societies, the these countries are home to huge tech companies. These tech companies are profit driven and often forgo safety and security for it. They forget the importance of the consumer and the common good. Thankfully there are organizations that fight for consumers’ rights and ensuring there will accountability. On the other hand, these organizations can be just as damaging. Is this a lesser of two evils or the evil that you know situation? Oh well, the UK Safety Summit Institute is coming to America!

Whitney Grace, June 13, 2024

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta