Two EU Firms Unite in Pursuit of AI Sovereignty
June 25, 2024
Europe would like to get out from under the sway of North American tech firms. This is unsurprising, given how differently the EU views issues like citizen privacy. Then there are the economic incentives of localizing infrastructure, data, workforce, and business networks. Now, two generative AI firms are uniting with that goal in mind. The Next Web reveals, “European AI Leaders Aleph Alpha and Silo Ink Deal to Deliver ‘Sovereign AI’.” Writer Thomas Macaulay reports:
“Germany’s Aleph Alpha and Finland’s Silo AI announced the partnership [on June 13, 2024]. The duo plan to create a ‘one-stop-solution’ for European industrial firms exploring generative AI. Their collaboration brings together distinctive expertise. Aleph Alpha has been described as a European rival to OpenAI, but with a stronger focus on data protection, security, and transparency. The company also claims to operate Europe’s fastest commercial AI data center. Founded in 2019, the firm has become Germany’s leading AI startup. In November, it raised $500mn in a funding round backed by Bosch, SAP, and Hewlett Packard Enterprise. Silo AI, meanwhile, calls itself ‘Europe’s largest private AI lab.’ The Helsinki-based startup provides custom LLMs through a SaaS subscription. Use cases range from smart devices and cities to autonomous vehicles and industry 4.0. Silo also specializes in building LLMs for low-resource languages, which lack the linguistic data typically needed to train AI models. By the end of this year, the company plans to cover every official EU language.”
Both Aleph Alpha CEO Jonas Andrulis and Silo AI CEO Peter Sarlin enthusiastically advocate European AI sovereignty. Will the partnership strengthen their mutual cause?
Cynthia Murrell, June 25, 2024
Ad Hominem Attack: A Revived Rhetorical Form
June 24, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I remember my high school debate coach telling my partner Nick G. (I have forgotten the budding prosecutor’s name, sorry) that we should not attack the character of our opponents. Nick G. had interacted with Bill W. on the basketball court in an end-of-year regional game. Nick G., as I recall, got a bloody nose, and Bill W. was thrown out of the basketball game. When fisticuffs ensued, I thanked my lucky stars I was a hopeless athlete. Give me the library, a debate topic, a pile of notecards, and I was good to go. Nick G. included in his rebuttal statement comments about the character of Bill W. When the judge rendered a result and his comments, Nick G. was singled out as being wildly inappropriate. After the humiliating defeat, the coach explained that an ad hominem argument is not appropriate for 15-year-olds. Nick G.’s attitude was, “I told the truth.” As Nick G. learned, the truth is not what wins debate tournaments or, in some cases, life.
I thought about ad hominem arguments as I read “Silicon Valley’s False Prophet.” This essay reminded me of the essay by the same author titled “The Man Who Killed Google Search.” I must admit the rhetorical trope is repeatable. Furthermore, it can be applied to an individual who may be clueless about how selling advertising nuked relevance (or what was left of it) at the Google and to the deal making of a person whom I call Sam AI-Man. Who knows? Maybe other authors will emulate these two essays, and a new Silicon Valley genre may emerge, ready for the real wordsmiths and pooh-bahs of Silicon Valley to crank out a hit piece every couple of days.
To the essay at hand: The false prophet is the former partner of Elon Musk and the on-again-off-again-on-again Big Dog at OpenAI. That’s an outfit where “open” means closed, and closed means open to the likes of Apple. The main idea, I think, is that AI sucks and Sam AI-Man continues to beat the drum for a technology that is likely to be headed for a correction. In Silicon Valley speak, the bubble will burst. It is, I surmise, Mr. AI-Man’s fault.
The essay explains:
Sam Altman, however, exists in a category of his own. There are many, many, many examples of him saying that OpenAI — or AI more broadly — will do something it can’t and likely won’t, and it being meekly accepted by the Fourth Estate without any real pushback. There are more still of him framing the limits of the present reality as a positive — like when, in a fireside sitdown with 1980s used car salesman Salesforce CEO Marc Benioff, Altman proclaimed that AI hallucinations (when an LLM asserts something untrue as fact, because AI doesn’t know anything) are a feature, not a bug, and rather than being treated as some kind of fundamental limitation, should be regarded as a form of creative expression.
I understand. Salesperson. Quite a unicorn in Silicon Valley. I mean, when I worked there, I would encounter hyperbole artists every few minutes. Yeah, Silicon Valley. Anchored in reality, minimum viable products, and lots of hanky panky.
The essay provides a bit of information about the background of Mr. AI-Man:
When you strip away his ability to convince people that he’s smart, Altman had actually done very little — he was a college dropout with a failing-then-failed startup, one where employees tried to get him fired twice.
If true, that takes some doing. Employees tried to get the false prophet fired twice. In olden times, burning at the stake might have been an option. Now it is just move on to another venture. Progress.
The essay does provide some insight into Sam AI-Man’s core competency:
Altman is adept at using connections to make new connections, in finding ways to make others owe him favors, in saying the right thing at the right time when he knew that nobody would think about it too hard. Altman was early on Stripe, and Reddit, and Airbnb — all seemingly-brilliant moments in the life of a man who had many things handed to him, who knew how to look and sound to get put in the room and to get the capital to make his next move. It’s easy to conflate investment returns with intellectual capital, even though the truth is that people liked Altman enough to give him the opportunity to be rich, and he took it.
I cannot figure out if the author envies Sam AI-Man, reviles him for being clever (a key attribute in some high-technology outfits), or genuinely perceives Mr. AI-Man as the first cousin to Beelzebub. Whatever the motivation, I find the phoenix-like rising of the ad hominem attack a refreshing change from the entitled pooh-bahism of some folks writing about technology.
The only problem: I think it is unlikely that the author will be hired by OpenAI. Chance blown.
Stephen E Arnold, June 24, 2024
The Key to Success at McKinsey & Company: The 2024 Truth Is Out!
June 21, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
When I was working at a “real” company, I wanted to labor in the vineyards of a big-time, blue-chip consulting firm. I achieved that goal and, after a suitable period of time in the penal colony, I escaped to a client. I made it out, unscathed, and entered a more interesting, less nutso working life. When the “truth” about big-time, blue-chip consulting firms appears in public sources, I scan the information. Most of it is baloney; for example, the yip yap about McKinsey and its advice pertaining to addictive synthetics. Hey, stuff happens when one is objective. “McKinsey Exec Tells Summer Interns That Learning to Ask AI the Right Questions Is the Key to Success” contains some information which I find quite surprising. First, I don’t know if the factoids in the write up are accurate or if they are the off-the-cuff baloney recruiters regularly present to potential 60-hour-a-week knowledge worker serfs or if the person has a streaming video connection to the McKinsey managing partner’s work-from-the-resort office.
Let’s assume the information is correct and consider some of its implications. An intern is a no-pay or low-pay job for students from the right institutions, the right background, or the right connections. The idea is that associates (one step above the no-pay serf) and partners (the set for life if you don’t die of heart failure crowd) can observe, mentor, and judge these field laborers. The write up states:
Standing out in a summer internship these days boils down to one thing — learning to talk to AI. At least, that’s the advice McKinsey’s chief client officer, Liz Hilton Segel, gave one eager intern at the firm. “My advice to her was to be an outstanding prompt engineer,” Hilton Segel told The Wall Street Journal.
But what about grades? What about my family’s connections to industry, elected officials, and a supreme court judge? What about my background scented with old money, sheepskin from prestigious universities, and a Nobel Prize awarded a relative 50 years ago? These questions, it seems, may no longer be relevant. AI is coming to the blue-chip consulting game, and the old-school markers of building big revenues may no longer matter.
AI matters. After an 11-month effort, McKinsey has produced Lilli. The smart system, despite fits and starts, has delivered results; that is, a payoff: cash money and engagement opportunities. The write up says:
Lilli’s purpose is to aggregate the firm’s knowledge and capabilities so that employees can spend more time engaging with clients, Erik Roth, a senior partner at McKinsey who oversaw Lilli’s development, said last year in a press release announcing the tool.
And the proof? I learned:
“We’ve [McKinsey humanoids] answered over 3 million prompts and add about 120,000 prompts per week,” he [Erik Roth] said. “We are saving on average up to 30% of a consultant’s time that they can reallocate to spend more time with their clients instead of spending more time analyzing things.”
Thus, the future of success is to learn to use Lilli. I am surprised that McKinsey does not sell internships, possibly using a Ticketmaster-type system.
Several observations:
- As Lilli gets better or is replaced by a more cost efficient system, interns and newly hired professionals will be replaced by smart software.
- McKinsey and other blue-chip outfits will embrace smart software because it can sell what the firm learns to its clients. AI becomes a Petri dish for finding marketable information.
- The hallucinative functions of smart software just create an opportunity for McKinsey and other blue-chip firms to sell their surviving professionals at a more inflated fee. Why fail and lose money? Just pay the consulting firm, sidestep the stupidity tax, and crush those competitors to whom the consulting firms sell the cookie cutter knowledge.
Net net: Blue-chip firms survived the threat from gig consultants and the Gerson Lehrman-type challenge. Now McKinsey is positioning itself to create a no-expectation environment for new hires, cut costs, and increase billing rates for the consultants at the top of the pyramid. Forget opioids. Go AI.
Stephen E Arnold, June 21, 2024
DeepMind Is Going to Make Products, Not Science
June 18, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Crack that Google leadership whip. DeepMind is going to make products. Yes, just like that. I am easily confused. I thought Google consolidated its smart software efforts. I thought Dr. Jeffrey Dean did a lateral arabesque, making way for new leadership. The company had new marching orders, issued in the calming light of a Red Alert: hair on fire, OpenAI and Microsoft will be the new Big Dogs.
From Google DeepMind to greener pastures. Thanks, OpenAI art thing.
Now I learn from “Google’s DeepMind Shifting From Research Powerhouse To AI Product Giant, Redefining Industry Dynamics”:
Alphabet Inc’s subsidiary Google DeepMind has decided to transition from a research lab to an AI product factory. This move could potentially challenge the company’s long-standing dominance in foundational research… Google DeepMind has merged its two AI labs to focus on developing commercial services. This strategic change could potentially disrupt the company’s traditional strength in fundamental research…
From wonky images of the US founding fathers to weird outputs suggesting what Google’s smart software knows about pizza-cheese interactions, the company seems to be struggling. To further complicate matters, Google’s management finesse created this interesting round of musical chairs:
…the departure of co-founder Mustafa Suleyman to Microsoft in March adds another layer of complexity to DeepMind’s journey. Suleyman’s move to Microsoft, where he has described his experience as “truly transformational,” indicates the competitive and dynamic nature of the AI industry.
Several observations:
- Microsoft seems to be suffering the AI wobblies. The more it tries to stabilize its AI activities, the more unstable the company seems to be.
- Who is in charge of AI at Google?
- Has Google turned off the blinking red and yellow alert lights and returned to operating in what might be called low-lumen normalcy?
However, Google’s thrashing may not matter. OpenAI cannot get its system to stay online. Microsoft has a herd of AI organizations to manage and has managed to create a huge PR gaffe with its “smart” Recall feature. Apple deals in “to be” smart products and wants to work with everyone, just without paying.
Net net: Is Google representative of the unraveling of the Next Big Thing?
Stephen E Arnold, June 18, 2024
Palantir: Fear Is Good. Fear Sells.
June 18, 2024
President Eisenhower may not have foreseen AI when he famously warned of the military-industrial complex, but certain software firms certainly fit the bill. One of the most successful, Palantir, is pursuing Madison Avenue-style marketing with a message of alarm. The company’s co-founder, Alex Karp, is quoted in the fear-mongering post at right-wing Blaze Media, “U.S. Prepares for War Amid Growing Tensions that China Could Invade Taiwan.”
After several paragraphs of panic over tensions between China and Taiwan, writer Collin Jones briefly admits, “It is uncertain if and when the Chinese president will deploy an attack against the small country.” He quickly pivots to the scary AI arms race, intimating that Palantir and company can save us as long as we let (fund) them. The post concludes:
“Palantir’s CEO and co-founder Alex Karp said: ‘The way to prevent a war with China is to ramp up not just Palantir, but defense tech startups that produce software-defining weapons systems that scare the living F out of our adversaries.’ Karp noted that the U.S. must stay ahead of its military opponents in the realm of AI. ‘Our adversaries have a long tradition of being not interested in the rule of law, not interested in fairness, not interested in human rights and on the battlefield. It really is going to be us or them.’ ‘You do not want a world order where our adversaries try to define new norms. It would be very bad for the world, and it would be especially bad for America,’ Karp concluded.”
Wow. But do such scare tactics work? Of course they do. For instance, we learn from DefenseScoop, “Palantir Lands $480M Army Contract for Maven Artificial Intelligence Tech.” That article reports on not one but two Palantir deals: the titular Maven expansion and, we learn:
“The company was recently awarded another AI-related deal by the Army for the next phase of the service’s Tactical Intelligence Targeting Access Node (TITAN) ground station program, which aims to provide soldiers with next-generation data fusion and deep-sensing capabilities via artificial intelligence and other tools. That other transaction agreement was worth $178 million.”
Those are just two recent examples of Palantir’s lucrative government contracts, ones that have not, as of this writing, been added to this running tally. It seems the firm has found its winning strategy. Ramping up tensions between world powers is a small price to pay for significant corporate profits, apparently.
Cynthia Murrell, June 18, 2024
A Fancy Way of Saying AI May Involve Dragons
June 14, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The essay “What Apple’s AI Tells Us: Experimental Models” makes clear that pinning down artificial intelligence is proving to be more difficult than some anticipated in January 2023, when Google’s Red Alert squawked and many people said, “AI is the silver bullet I want for my innovation cannon.”
Image source: https://www.geographyrealm.com/here-be-dragons/
Here’s a sentence I found important in the One Useful Thing essay:
What is worth paying attention to is how all the AI giants are trying many different approaches to see what works.
The write up explains the different approaches to AI that the author has identified. These are:
- Apps
- Business models with subscription fees
The essay concludes with a specter “haunting AI.” The write up says:
I do not know if AGI [artificial general intelligence] is achievable, but I know that the mere idea of AGI being possible soon bends everything around it, resulting in wide differences in approach and philosophy in AI implementations.
Today’s smart software environment has an upside other than the money churn the craziness vortices generate:
Having companies take many approaches to AI is likely to lead to faster adoption in the long term. And, as companies experiment, we will learn more about which sets of models are correct.
Several observations are warranted.
First, the confessions of McKinsey’s AI team make it clear that smart outfits may not know what they are doing. The firms just plunge forward and then after months of work recycle the floundering into lessons. Presumably these lessons are “hire McKinsey.” See my write up “What Is McKinsey & Co. Telling Its Clients about AI?”
Second, another approach is to use AI in the hopes that staff costs can be reduced. I think this is the motivation of some AI enthusiasts. PwC (I am not sure if it is a consulting firm, an accounting firm, or some 21st century mutation) fell in lust with OpenAI. Not only did the firm kick OpenAI’s tires, PwC signed up to be what’s called an “enterprise reseller.” A client pays PwC to just make something work. In this case, PwC becomes the equivalent of a fix it shop with a classy address and workers with clean fingernails. The motivation, in my opinion, is cutting staff. “PwC Is Doing Quiet Layoffs. It’s a Brilliant Example of What Not to Do” says:
This is PwC in the U.K., and obviously, they operate under different laws than we do here in the United States. But in case you’re thinking about following this bad example, I asked employment attorney Jon Hyman for advice. He said, "This request would seem to fall under the umbrella of ‘protected concerted activity’ that the NLRB would take issue with. That said, the National Labor Relations Act does not apply to supervisors — defined as one with the authority to make personnel decisions using independent judgment. "Thus," he continues, "whether this specific PwC request runs afoul of the NLRA’s legal protections for employees to engage in protected concerted activity would depend on whether the laid-off employees were ‘supervisors’ under the Act."
I am a simpler person. The quiet layoffs complement the AI initiative. Quiet helps keep staff from making the connection I am suggesting. But consulting firms keep one eye on expenses and the other on partners’ profits. AI is a catalyst, not a technology.
Third, more AI fatigue write ups are appearing. One example, “The AI Fatigue: Are We Getting Tired of Artificial Intelligence?”, reports:
Hema Sridhar, Strategic Advisor for Technological Futures at the University of Auckland, says that there is a lot of “noise on the topic” so it is clear that “people are overwhelmed”. “Almost every company is using AI. Pretty much every app that you’re currently using on your phone has recently released some version with some kind of AI-feature or AI-enhanced features,” she adds. “Everyone’s using it and [it’s] going to be part of day-to-day life, so there are going to be some significant improvements in everything from how you search for your own content on your phone, to more improved directions or productivity tools that just fundamentally change the simple things you do every day that are repetitive.”
Let me reference Apple Intelligence to close this write up. Apple did not announce hardware. It talked about “to be” services. Instead of doing the Meta open source thing, the Google wrong answers with historically flawed images, or the MSFT on-again, off-again roll outs — Apple just did “to be.”
My hunch is that Apple is not cautious; its professionals know that AI products and services may be like those old maps which say, “Here be dragons.” Sailing close to the shore makes sense.
Stephen E Arnold, June 14, 2024
More on TikTok Managing the News Streams
June 14, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
TikTok does not occupy much of my day. I don’t have an account, and I am blissfully unaware of the content on the system. I have heard from those on my research team and from people who attend my lectures at law enforcement / intelligence conferences that it is an influential information conduit. I am a dinobaby, and I am not “into” video. I don’t look up information using TikTok. I don’t follow fashion trends other than those popular among other 80-year-old dinobabies. I am hopeless.
However, I did note “TikTok Users Being Fed Misleading Election News, BBC Finds.” I am mostly unaffected by the activities of King Charles and his subjects. What snagged my attention was the presence of videos which were disseminated via TikTok. The BBC’s examination of
content promoted by social media algorithms has found – alongside funny montages – young people on TikTok are being exposed to misleading and divisive content. It is being shared by everyone from students and political activists to comedians and anonymous bot-like accounts.
Tucked in the BBC write up was this statement:
TikTok has boomed since the last [British] election. According to media regulator Ofcom, it was the fastest-growing source of news in the UK for the second year in a row in 2023 – used by 10% of adults in this way. One in 10 teenagers say it is their most important news source. TikTok is engaging a new generation in the democratic process. Whether you use the social media app or not, what is unfolding on its site could shape narratives about the election and its candidates – including in ways that may be unfounded.
Shortly after reading the BBC item I saw in my feed (June 3, 2024) this story: “Trump Joins TikTok, the App He Once Tried to Ban.” Interesting.
Several observations are warranted:
- Does the US have a similar video channel currently disseminating information into China, the home base of TikTok and its owner? If “No,” why not? Should the US have a similar policy regarding non-US information conduits?
- Why has education in Britain failed to educate young people about obtaining and vetting information? Does the US have a similar problem?
- Have other countries fallen into the scroll and swipe deserts?
Scary.
Stephen E Arnold, June 14, 2024
Googzilla: Pointing the Finger of Blame Makes Sense I Guess
June 13, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Here you are: The Thunder Lizard of Search Advertising. Pesky outfits like Microsoft have been quicker than Billy the Kid shooting drunken farmers when it comes to marketing smart software. But the real problem in Deadwood is a bunch of do-gooders turned into revolutionaries undermining the granite foundation of the Google. I have this information from an unimpeachable source: An alleged Google professional talking on a podcast. The news release titled “Google Engineer Says Sam Altman-Led OpenAI Set Back AI Research Progress By 5-10 Years: LLMs Have Sucked The Oxygen Out Of The Room” explains that the actions of OpenAI are causing the Thunder Lizard to wobble.
One of the team sets himself apart by blaming OpenAI and his colleagues, not himself. Will the sleek, entitled professionals pay attention to this criticism or just hear “OpenAI”? Thanks, MSFT Copilot. Good enough art.
Consider this statement in the cited news release:
He [an employee of the Thunder Lizard] stated that OpenAI has “single-handedly changed the game” and set back progress towards AGI by a significant number of years. Chollet pointed out that a few years ago, all state-of-the-art results were openly shared and published, but this is no longer the case. He attributed this change to OpenAI’s influence, accusing them of causing a “complete closing down of frontier research publishing.”
I find this interesting. One company, its deal with Microsoft, and that firm’s management meltdown produced a “complete closing down of frontier research publishing.” What about the Dr. Timnit Gebru incident over the “stochastic parrot” paper?
The write up included this gem from the Googley acolyte of the Thunder Lizard of Search Advertising:
He went on to criticize OpenAI for triggering hype around Large Language Models or LLMs, which he believes have diverted resources and attention away from other potential areas of AGI research.
However, DeepMind — apparently the nerve center of the one best way to generate news releases about computational biology — has been generating PR. That does not count because it is real-world smart software, I assume.
But there are metrics to back up the claim that OpenAI is the Great Destroyer. The write up says:
Chollet’s [the Googler, remember?] criticism comes after he and Mike Knoop, [a non-Googler] the co-founder of Zapier, announced the $1 million ARC-AGI Prize. The competition, which Chollet created in 2019, measures AGI’s ability to acquire new skills and solve novel, open-ended problems efficiently. Despite 300 teams attempting ARC-AGI last year, the state-of-the-art (SOTA) score has only increased from 20% at inception to 34% today, while humans score between 85-100%, noted Knoop. [emphasis added, editor]
Let’s assume that the effort and money poured into smart software in the last 12 months boosted one key metric by 14 percentage points. Doesn’t that leave LLMs and smart software in general far, far behind the average humanoid?
But here’s the killer point:
… training ChatGPT on more data will not result in human-level intelligence.
Let’s reflect on the information in the news release.
- If the data are accurate, LLM-based smart software has reached a dead end. I am not sure the lawsuits will stop, but perhaps some of the hyperbole will subside?
- If these insights into the weaknesses of LLMs are correct, why has Google continued to roll out services based on a dead-end model, suffered assorted problems, and then demonstrated its management prowess by pulling back certain services?
- Who is running the Google smart software business? Is it the computationalists combining components of proteins, or is it the group generating blatantly wonky images? A better question is, “Is anyone in charge of non-advertising activities at Google?”
My hunch is that this individual is representing a percentage of a fractionalized segment of Google employees. I do not think a senior manager is willing to say, “Yes, I am responsible.” The most illuminating facet of the article is the clear cultural preference at Google: Just blame OpenAI. Failing that, blame the users, blame the interns, blame another team, but do not blame oneself. Am I close to the pin?
Stephen E Arnold, June 13, 2024
Modern Elon Threats: Tossing Granola or Grenades
June 13, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Bad me. I ignored the Apple announcements. I did spot one interesting, somewhat out-of-phase reaction to Tim Apple’s attempt to not screw up again. “Elon Musk Calls Apple Devices with ChatGPT a Security Violation.” Since the Tim Apple crowd was learning about what was “to be,” not what is, this statement caught my attention:
If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.
I want to comment about the implicit “then” in this remarkable prose output from Elon Musk. On the surface, the “then” is that the most affluent mobile phone users will be prohibited from the X.com service. I wonder how advertisers are reacting to this idea of cutting down the potential eyeballs for their products, which would then be pitched to a group of prospects no longer clutching Apple iPhones. I don’t advertise, but I can game out how the meetings between the company with advertising dollars and the agency helping the company make informed advertising decisions would go. (Let’s assume that advertising “works” and that advertising outfits are informed, for the purpose of this blog post.)
A tortured genius struggles against the psychological forces that ripped the Apple car from the fingers of its rightful owner. Too bad. Thanks, MSFT Copilot. How is your coding security coming along? What about the shut down of the upcharge for Copilot? Oh, no answer. That’s okay. Good enough.
Let’s assume Mr. Musk “sees” something a dinobaby like me cannot. What’s with the threat logic? The loss of a beloved investment? A threat to a to-be artificial intelligence company destined to blast into orbit on a tower of intellectual rocket fuel? Mr. Musk has detected a signal. He has interpreted. And he has responded with an ultimatum. That’s pretty fast action, even for a genius. I started college in 1962, and I dimly recall a class called Psych 101. Even though I attended a low-ball institution, the knowledge value of the course was evident in the large and shabby lecture room with a couple of hundred seats.
Threats, if I am remembering something that took place 62 years ago, tell more about the entity issuing the threat than about the threatened event itself. The words worming from the infrequently accessed cupboards of my mind are linked to an entity wanting to assert, establish, or maintain some type of control. Slapping quasi-ancient psycho-babble on Mr. Musk is not fair to the grand profession of psychology. However, it does appear to reveal that whatever Apple thinks it will do in its “to be,” coming-soon service struck a nerve in Mr. Musk’s super-bright, well-developed brain.
I surmise there is some insecurity with the Musk entity. I can’t figure out the connection between what amounts to vaporware and a threat to behead or de-iPhone a potentially bucket load of prospects for advertisers to pester. I guess that’s why I did not invent the Cybertruck, a boring machine, and a rocket ship.
But a threat over vaporware in a field which has demonstrated that Googzilla, Microsoft, and others have dropped their baskets of curds and whey is interesting. The speed with which Mr. Musk reacts suggests to me that he perceives the Apple vaporware as an existential threat. I see it as another big company trying to grab some fruit from the AI tree until the bubble deflates. Software does have a tendency to disappoint, build up technical debt, and then evolve into the weird service which no one can fix, change, or kill because meaningful competition no longer exists. When will the IRS computer systems be “fixed”? When will airline reservations systems serve the customer? When will smart software stop hallucinating?
I actually looked up some information about threats from the recently disgraced fake research publisher John Wiley & Sons. “Exploring the Landscape of Psychological Threat” reminded me why I thought psychology was not for me. With weird jargon and some diagrams, the threat may be linked to Tesla’s rumored attempt to fall in love with Apple. The product of this interesting genetic bonding would be the Apple car, oodles of cash for Mr. Musk, and the worshipful affection of the Apple acolytes. But the online date did not work out. Apple swiped Tesla into the loser bin. Now Mr. Musk can get some publicity, put X.com (don’t you love Web sites that remind people of pornography on the Dark Web?) in the news, and cause people like me to wonder. “Why dump on Apple?” (The outfit has plenty of worries with the China thing, doesn’t it? What about some anti-trust action? What about the hostility of M3 powered devices?)
Here’s my take:
- Apple Intelligence is a better “name” than Mr. Musk’s AI company xAI. Apple gets to use “AI” but without the porn hook.
- A controversial social media emission will stir up the digital elite. Publicity is good. Just ask Michael Cimino of Heaven’s Gate fame.
- Mr. Musk’s threat provides an outlet for the failure to make Tesla the Apple car.
What if I am wrong? [a] I don’t care. I don’t use an iPhone, Twitter, or online advertising. [b] A GenX, Y, or Z pooh-bah will present the “truth” and set the record straight. [c] Mr. Musk’s threat will be like the result of a Boring Company operation. A hole, a void.
Net net: Granola. The fast response to what seems to be “coming soon” vaporware suggests a potential weak spot in Mr. Musk’s makeup. Is Apple afraid? Probably not. Is Mr. Musk? Yep.
Stephen E Arnold, June 13, 2024
Detecting AI-Generated Research Increasingly Difficult for Scientific Journals
June 12, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Reputable scientific journals would like to only publish papers written by humans, but they are finding it harder and harder to enforce that standard. Researchers at the University of Chicago Medical Center examined the issue and summarize their results in, “Detecting Machine-Written Content in Scientific Articles,” published at Medical Xpress. Their study was published in Journal of Clinical Oncology Clinical Cancer Informatics on June 1. We presume it was written by humans.
The team used commercial AI detectors to evaluate over 15,000 oncology abstracts from 2021-2023. We learn:
“They found that there were approximately twice as many abstracts characterized as containing AI content in 2023 as compared to 2021 and 2022—indicating a clear signal that researchers are utilizing AI tools in scientific writing. Interestingly, the content detectors were much better at distinguishing text generated by older versions of AI chatbots from human-written text, but were less accurate in identifying text from the newer, more accurate AI models or mixtures of human-written and AI-generated text.”
Yes, that tracks. We wonder if it is even harder to detect AI-generated research that is, hypothetically, run through two or three different smart rewrite systems. Oh, who would do that? Maybe the former president of Stanford University?
The researchers predict:
“As the use of AI in scientific writing will likely increase with the development of more effective AI language models in the coming years, Howard and colleagues warn that it is important that safeguards are instituted to ensure only factually accurate information is included in scientific work, given the propensity of AI models to write plausible but incorrect statements. They also concluded that although AI content detectors will never reach perfect accuracy, they could be used as a screening tool to indicate that the presented content requires additional scrutiny from reviewers, but should not be used as the sole means to assess AI content in scientific writing.”
That makes sense, we suppose. But humans are not perfect at spotting AI text, either, though there are ways to train oneself. Perhaps if journals combine savvy humans with detection software, they can catch most AI submissions. At least until the next generation of ChatGPT comes out.
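If such a hybrid workflow were automated, it might look something like the minimal sketch below. This is purely illustrative: the detector, threshold, and abstract IDs are hypothetical stand-ins, not part of the study or any journal’s actual tooling.

```python
# Hypothetical screening pass: flag abstracts for extra human review when a
# detector's "AI likelihood" score crosses a threshold. The scoring function
# below is a crude stand-in; a real workflow would call a commercial detector.

def ai_likelihood(text: str) -> float:
    """Return a made-up score in [0, 1] indicating how 'AI-like' the text reads."""
    telltale_phrases = ("delve", "in conclusion", "it is important to note")
    hits = sum(text.lower().count(phrase) for phrase in telltale_phrases)
    return min(1.0, hits / 5)

def screen_abstracts(abstracts: dict, threshold: float = 0.5) -> list:
    """Return the IDs of abstracts that should get additional reviewer scrutiny."""
    return [abstract_id for abstract_id, text in abstracts.items()
            if ai_likelihood(text) >= threshold]

if __name__ == "__main__":
    sample = {
        "A-001": "We delve into tumor response. In conclusion, it is important to note...",
        "A-002": "Median progression-free survival was 11.2 months in the treated arm.",
    }
    print(screen_abstracts(sample))  # Only the flagged IDs go to human reviewers.
```

The design point is the one the researchers make: the score only triggers a closer human look; it never rejects a paper on its own.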
Cynthia Murrell, June 12, 2024