Definitive AI Market Snapshot

February 3, 2023

Matt Shumer, the co-founder of Otherside AI, posted on Twitter “the definitive market map Twitter thread.” You can find the tweet at this link. Mr. Shumer and his colleagues developed HyperWrite AI, which is “the world’s most powerful AI writing assistant.” The format of the information on Twitter is helpful, but I prefer information in tabular form. Here is Mr. Shumer’s “map” presented with the company name and its url. Some companies appear in the tables more than once. Also, where necessary I have added a url which some readers may find useful. I have used Mr. Shumer’s categories, clarifying where necessary. Other minor edits are for readability.

Category: Art/Images  
Blimey Create
Playground AI
Stockimg AI
Note: Craiyon added to original list  
Category: Audio Summaries  
Meet Jamie AI
Word Cab
Category: Coding  
GitHub Copilot
Category: Conversational AI  
Ask delphi
Category: Customer Service  
Brainfish AI
Category: Databases  
weaviate io
Category: Design  
Diagram
Vizcom AI
Category: Gaming  
InWorld AI
Category: Inference and Training  
Goose ai NLP
Leap api
Weights & Biases
Category: Legal  
Casetext
Note: Casetext presents itself as an alternative to LexisNexis.  
Category: Marketing Copy Generation  
Copy ai
Copysmith ai
Category: Models  
Anthropic AI
Cohere AI
Google AI
Open AI
Stability AI
Category: Music  
Category: Photographs (Some Enhancers)  
Bloom AI
Stylized AI
Category: Prompts and Chaining  
Dust4 AI
GPT index
LangChain AI
Category: Question Answering  
Avanty App
Olli AI
Vazy Data
Category: Search and Retrieval  
Perplexity AI
Category: Survey Analysis  
Category: Video  
Bria AI
Gloss AI
Category: Writing Tools  


The taxonomy is essentially Mr. Shumer’s, but the list of companies was interesting to me and my research team.

More on the AI Betamax Versus VHS Dust Up

February 2, 2023

“24 Seriously Embarrassing Hours for AI” gathers four smart software stumbles. The examples are highly suggestive that some butchers have been putting their fingers on the scales. The examples include the stage set approach to Tesla’s self-driving demonstrations and OpenAI’s reliance on humans beavering away out of sight to make outputs better. In general I agree with the points in the write up.

However, there is one statement which attracted my yellow highlighter like a sci-fi movie tractor beam. Here it is:

Sometimes the slower road is the better road.

It may be that the AI TGV has already left the station and is hurtling down the rails from Paris to Nîmes. Microsoft announced that the lovable Teams video chat and Swiss Army knife of widgets will be helping users lickety-split. Other infusions are almost certain to follow. Even airlines are thinking about smart software. Airlines! These outfits lose luggage with bar codes. Perhaps AI will help, but I remain skeptical. How does one lose a bag with a bar code in our post-9/11 world?

The challenge: Are Google, Facebook (which wants to be a leader in AI), and the other organizations betting their investors’ money on AI going to take a “slower road”?

My TGV high speed train reference is not poetical; it is a reflection of the momentum of information. The OpenAI machine — with or without legerdemain — is rolling along. OpenAI has momentum. With foresight or dumb luck, Microsoft is riding along.

The “slower road” echoes Google’s conservative approach. Remember that Google sacrificed credibility in AI with the Dr. Timnit Gebru affair. Like a jockey on a high value horse, the beast is now carrying lead pads. Combine that with bureaucratic bloat and concern for ad revenues, and I am not sure Google and some other outfits can become the fast twitch muscled creatures needed to cope with market momentum.

Betamax was better. Yet it did not dominate the market. Betamax was pushed into the ditch, but that required time and technological innovation. The AI race is not over, but the “slow” angle is late from the gate.

Stephen E Arnold, February 2, 2023

Has Microsoft Drilled into a Google Weak Point?

February 2, 2023

I want to point to a paper written by someone who is probably not on the short list to replace Jeff Dean or Prabhakar Raghavan at Google. The analysis of synthetic data and its role in smart software is titled “Machine Learning and the Politics of Synthetic Data.” The author is Benjamin N Jacobsen at Durham University. However, the first sentence of the paper invokes Microsoft’s AI Labs at Microsoft Cambridge. Clue? Maybe?

The paper does a good job of defining synthetic data: data generated by a smart algorithm. The fake data train other smart software. What could go wrong? The paper consumes 12 pages explaining that quite a bit can go off the rails; for example, outputs disconnected from the real world or simply incorrect. No big deal.
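The feedback loop is easy to sketch. Here is a minimal, hypothetical Python toy (my illustration, not from Mr. Jacobsen’s paper): fit a simple Gaussian model to data, sample “synthetic” data from the fit, refit on that synthetic sample, and repeat. Each pass trains only on machine-generated data, so the estimates are free to drift away from the real-world distribution.

```python
import random
import statistics

def synthetic_generations(true_mean=0.0, true_std=1.0,
                          sample_size=50, generations=10, seed=42):
    """Toy model of training on synthetic data.

    Start from a 'real' distribution, then repeatedly:
    1. sample synthetic data from the current model, and
    2. refit the model on its own output.
    Returns the fitted standard deviation after each generation.
    """
    rng = random.Random(seed)
    mean, std = true_mean, true_std
    stds = []
    for _ in range(generations):
        # Generate a synthetic sample from the current model ...
        sample = [rng.gauss(mean, std) for _ in range(sample_size)]
        # ... and refit the model on nothing but its own output.
        mean = statistics.fmean(sample)
        std = statistics.stdev(sample)
        stds.append(std)
    return stds

drift = synthetic_generations()
```

With no fresh real-world data entering the loop, nothing anchors the fitted parameters; successive generations wander, which is a crude picture of the “risk” the paper argues synthetic data produces.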

For me the key statement in the paper is this one:

… as I have sought to show in this paper, the claims that synthetic data are ushering in a new era of generated inclusion and non-risk for machine learning algorithms is both misguided and dangerous. For it obfuscates how synthetic data are fundamentally a technology of risk, producing the parameters and conditions of what gets to count as risk in a certain context.

The idea of risk generated from synthetic data is an important one. I have been compiling examples of open source intelligence blind spots. How will a researcher know when an output is “real”? What if an output increases the risk of a particular outcome? Has the smart software begun to undermine human judgment and decision making? What happens if one approach emerges as the winner — for example the SAIL, Snorkel, Google method? What if a dominant company puts its finger on the scale to cause certain decisions to fall out of the synthetic training set?

With many rushing into the field of AI windmills, what will Google’s Code Red actions spark? Perhaps more synthetic data to make training easier, cheaper, and faster? Notice I did not use the word better. Did the stochastic parrot utter something?

Stephen E Arnold, February 2, 2023

Two Interesting Numbers

February 2, 2023

I spotted two interesting numbers.

The first appeared in this headline: “Facebook Now Has 2 Billion Users.” I am not sure how many people are alive on earth, but this seems like a big number. Facebook or what I call the Zuckbook has morphed into a penny pinching mini-metaverse. But there is the number: two billion. What happens if regulators want to trim down the Zuck’s middle-age spread? Chop off WhatsApp. Snip away Instagram. What’s left? The Zuckbook. But is it exciting? Not for me.

Let’s look at the second number. The factoid appears in “ChatGPT Sets Record for Fastest-Growing User Base in History, Report Says.” I quote:

[The] AI bot ChatGPT reached an estimated 100 million active monthly users last month, a mere two months from launch, making it the “fastest-growing consumer application in history…

The Zuckbook thinks ChatGPT is a meh-thing.

Three observations:

First, the ChatGPT thing is a welcome change from the blah announcements about technology in the last six months. I mean another video card, more layoffs, and another Apple sort of new device. Now there is some zing.

Second, the speed of uptake is not evidence that ChatGPT is flawless. Nope. The uptake is an example of people annoyed with the status quo grabbing something that seems a heck of a lot better than ads and more of Dr. Zuboff’s reminders about surveillance.

Third, ChatGPT offers something that almost anyone can use. The learning curve is nearly zero. Can you figure out how to see street views in Google Maps? Can you make Windows update leave your settings alone?

Net net: Fasten your seat belts. A wild ride is beginning.

Stephen E Arnold, February 2, 2023

Doom: An Interesting Prediction from a Xoogler

January 31, 2023

I spotted an interesting prediction about Google or what I call Googzilla. The comment appeared in “Gmail Creator Says ChatGPT Will Destroy Google’s Business in Two Years.”

Google may be only a year or two away from total disruption. AI will eliminate the Search Engine Result Page, which is where they make most of their money. Even if they catch up on AI, they can’t fully deploy it without destroying the most valuable part of their business!

The alleged Xoogler posting the provocative comment was Paul Buchheit. (Once I heard that it was he who turned the phrase, “Don’t be evil.”) Mr. Buchheit is credited with “inventing” Gmail.

The article stated:

The company has built its business largely around its most successful product; the search engine could soon face a crisis… Google charges advertisers a fee for displaying their products and services right next to the search results, increasing the likelihood of the provider being found. In 2021, the company raked in over $250 billion in revenue, its best-ever income in its nearly 25-year-old existence.

Let’s think about ways Google could recover this predicted loss. Here are a few ideas:

  1. Stop paying vendors like Apple to feature Google search results. (A billion here and a billion there could add up.)
  2. Create new services and charge users for them. (I know Google tried to cook up a way to sell Loon balloons and a nifty early stab at the metaverse, but maybe the company will find a way to innovate without me toos.)
  3. Raise prices for consumer services. (That might cause a problem because companies with diversified revenue may lower the already low, low prices for video chat, online word processing, and email. One trick ponies by definition may have difficulty learning another trick or three.)

Will ChatGPT kill the Google? My thought is that even Xooglers feel that Googzilla is getting arthritic and showing off its middle-age spread. Nevertheless, Google’s Sundar and Raghavan management act will have to demonstrate some fancy dancing. ChatGPT may output content that seems okay but tucks errors into its nouns and verbs. But there is the historical precedent of the Sony Betamax to keep in mind. ChatGPT may be flawed, but people bought Pintos, and some of those could explode when rear-ended. Ouch!

Why are former Google employees pointing out issues? That’s interesting apart from ChatGPT Code Red silliness.

Stephen E Arnold, January 31, 2023

Does Google Need a Better Snorkel and a Deeper Mind?

January 31, 2023

Recession, Sillycon Valley meltdown, and a disaffected workforce? Gloomy, right? Consider this paragraph from “ChatGPT Pro Is Coming. Here’s How You Can Join the Waitlist”:

ChatGPT has probably the fastest-growing user base ever, with a staggering million-plus users signing up a week after its release. That’s four times faster than Dall-E2, which took a month to reach a million users. Microsoft is already mulling an investment of $10 billion, bringing the total valuation of OpenAI, the startup behind ChatGPT, to $29 billion.

A more telling example of the PR coup Microsoft and OpenAI have achieved is the existence of this write up in Sportskeeda. Imagine Sportskeeda publishing “How Google’s AI Tool Sparrow Is Looking to Kill ChatGPT.” Google’s marketing has lured Sportskeeda to help make Google’s case. Impressive.

More blue sky than reality? The next big thing has arrived, and the pot of gold at the end of the rainbow is visible. High school and college students have embraced ChatGPT. Lawyers find it unlawyerlike. Google finds it a bit of a problem.

How do I know?

Navigate to the Wall Street Journal, owned by Rupert Murdoch and sufficiently technologically challenged to use humans to write stories. Consider this one: “Google’s AI Now Plays Catch-Up to Newbies.” Imagine the joy of the remaining Google marketing types when news of a big story circulated. Now consider the disappointment when the Googlers read:

… Google employees began asking whether the company had missed a chance to attract users. During a company-wide meeting in December [2022], Mr. Dean [a Google senior wizard] said Google had to move slower than startups because people place a high degree of trust in the company’s products, and current chatbots had issues with accuracy, said people who heard the remarks.

Okay, in that month what happened to ChatGPT? It became big and dominated both the regular news and the high-tech news streams. What has Google accomplished?

  1. Promises that more than 20 products and services are coming. Is that a forward looking statement or vaporware?
  2. Google rolls over to the EU as it gets ready for the US probe of its modest advertising business.
  3. New applications of Dall-E, ChatGPT, and variants clog the trendy online service Product Hunt.

Net net: Jeff Dean, the champion of recipes and Chubby (a Google technology known to few, in my experience), is explaining what I call “to be” innovations. Due to Google’s size and customer base, these to-be smart software powered solutions may overwhelm the ChatGPT thing. Google’s snorkels will deliver life-giving oxygen to the beastie. The DeepMind crew will welcome their colleagues from Mountain View and roll out something that does not require a PhD in genetics to understand.

Yep, to be or not to be. That is a question for the Google.

Stephen E Arnold, January 31, 2023

Newton and Shoulders of Giants? Baloney. Is It Everyday Theft?

January 31, 2023

Here I am in rural Kentucky. I have been thinking about the failure of education. I recall learning from Ms. Blackburn, my high school algebra teacher, this statement by Sir Isaac Newton, the apple and calculus guy:

If I have seen further, it is by standing on the shoulders of giants.

Did Sir Isaac actually say this? I don’t know, and I don’t care too much. It is the gist of the sentence that matters. Why? I just finished reading — and this is the actual article title — “CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism. CNET’s AI-Written Articles Aren’t Just Riddled with Errors. They Also Appear to Be Substantially Plagiarized.”

How is any self-respecting, super buzzy smart software supposed to know anything without ingesting, indexing, vectorizing, and whatever other math magic the developers have baked into the system? Did Brunelleschi wake up one day and do the Eureka! thing? Maybe he stood in line, entered the Pantheon, and looked up? Maybe he found a wasp’s nest, cut it in half, and looked at what the feisty insects did to build a home? Obviously intellectual theft. Never mind that the dome still stands; when it falls, he is an untrustworthy architect-engineer. Argument nailed.

The write up focuses on other ideas; namely, being incorrect and stealing content. Okay, those are interesting and possibly valid points. The write up states:

All told, a pattern quickly emerges. Essentially, CNET‘s AI seems to approach a topic by examining similar articles that have already been published and ripping sentences out of them. As it goes, it makes adjustments — sometimes minor, sometimes major — to the original sentence’s syntax, word choice, and structure. Sometimes it mashes two sentences together, or breaks one apart, or assembles chunks into new Frankensentences. Then it seems to repeat the process until it’s cooked up an entire article.
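The pattern Futurism describes, rewritten sentences that shadow existing articles, is exactly what simple overlap heuristics are built to catch. Here is a minimal, hypothetical Python sketch (the sample strings are invented for illustration) that measures how many word n-grams two texts share, a common first-pass signal of close paraphrase:

```python
def ngram_overlap(text_a, text_b, n=3):
    """Fraction of text_a's word n-grams that also appear in text_b.

    A high score on long n-grams (3+ words) suggests text_a was
    assembled by shuffling or lightly editing chunks of text_b.
    """
    def ngrams(text, size):
        words = text.lower().split()
        return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

    grams_a, grams_b = ngrams(text_a, n), ngrams(text_b, n)
    if not grams_a:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a)

# Invented example: a "Frankensentence" that reorders its source.
original = "the fastest growing consumer application in history by far"
rewrite = "by far the fastest growing consumer application in history"
score = ngram_overlap(rewrite, original)  # most trigrams survive the shuffle
```

Real plagiarism detectors add stemming, fuzzy matching, and large document indexes, but the underlying idea is this: syntax shuffles leave long runs of shared words behind.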

For a short (very, very brief) time I taught freshman English at a big time university. What the Futurism article describes is how I interpreted the work process of my students. Those entitled and enquiring minds just wanted to crank out an essay that would meet my requirements and hopefully get an A or a 10, which was a signal that Bryce or Helen was a very good student. Then go to a local hang out and talk about Heidegger? Nope, mostly about the opposite sex, music, and getting their hands on a copy of Dr. Oehling’s test from last semester for European History 104. Substitute the topics you talked about to make my statement more “accurate”, please.

I loved the final paragraphs of the Futurism article. Not only is a competitor tossed over the argument’s wall, but the Google and its outstanding relevance finds itself a target. Imagine. Google. Criticized. The article’s final statements are interesting; to wit:

As The Verge reported in a fascinating deep dive last week, the company’s primary strategy is to post massive quantities of content, carefully engineered to rank highly in Google, and loaded with lucrative affiliate links. For Red Ventures, The Verge found, those priorities have transformed the once-venerable CNET into an “AI-powered SEO money machine.” That might work well for Red Ventures’ bottom line, but the specter of that model oozing outward into the rest of the publishing industry should probably alarm anybody concerned with quality journalism or — especially if you’re a CNET reader these days — trustworthy information.

Do you like the word trustworthy? I do. Does Sir Isaac fit into this future-leaning analysis? Nope, he’s still preoccupied with proving that the evil Gottfried Wilhelm Leibniz was tipped off about tiny rectangles and the methods thereof. Perhaps Futurism can blame smart software?

Stephen E Arnold, January 31, 2023

Another Betamax Battle: An Intellectual Spat

January 30, 2023

The AI search fight is officially underway. True, the Baidu AI won’t be available until March 2023, but the trumpet has sounded.


The illustration of two AI mud wrestlers engaging in a contest was produced by Craiyon. I assume that the Craiyon crowd has the © because I can’t draw worth a lick. 

The fighters are making their way from the changing room to the pit. In the stands are dozens of AI infused applications. ChatGPT provided a glimpse of its capabilities during its warm up. The somewhat unsteady Googzilla is late. Microsoft has been in the ring waiting for what seems to be a dozen or more news cycles. More spectators are showing up. Look. Baidu is here.

However, there is a spectator with a different point of view from the verdant groves and pizza joints of Princeton University. This Merlin is named Arvind Narayanan who, according to “Decoding the Hype About AI,” once gave a lecture called “How to Recognize AI Snake Oil.” That talk is going to become a book called “AI Snake Oil.” Yep, snake oil: a product of no real worth. No worth. Sharp point: worth versus no worth. What’s worth?

Please, read the article which is an interview with a person who wants to slow the clapping and stomping of the attendees. Here’s a quote from Dr. Arvind Narayanan’s interview:

Even with something as profound as the internet or search engines or smartphones, it’s turned out to be an adaptation, where we maximize the benefits and try to minimize the risks, rather than some kind of revolution. I don’t think large language models are even on that scale. There can potentially be massive shifts, benefits, and risks in many industries, but I cannot see a scenario where this is a “sky is falling” kind of issue.

Okay, the observations:

  1. Google and its Code Red suggest that Dr. Narayanan is way off base for the Google search brain trust. Maybe Facebook and its “meh” response are better? Microsoft’s bet on OpenAI is going with the adaptation approach. Smart Word may be better than Clippy, plus it may sell software licenses to big companies, marketers, and students who need essay writing help.
  2. If ChatGPT is snake oil, what’s the fuss? Could it be that some people who are exposed to ChatGPT perceive the smart software as new, exciting, promising, and an opportunity? That seems a reasonable statement at this time.
  3. The split between the believers (Microsoft, et al) and the haters (Google, et al) surfaced with the Timnit Gebru incident at Google. More intellectual warfare is likely: Bias, incorrect output pretending to be correct, copyright issues, etc.

Is technology exciting again? Finally.

Stephen E Arnold, January 30, 2023

Does Google Have the Sony Betamax of Smart Software?

January 30, 2023

Does Google have the Sony Betamax of smart software? If you cannot answer this question as well as ChatGPT, you can take a look at “VHS or Beta? A Look Back at Betamax, and How Sony Lost the VCR Format War to VHS Recorders.” Boiling down the problem Sony faced, let me suggest that better did not win. Maybe adult content outfits tipped the scales? Maybe not? The best technology does not automatically dominate the market.


Flash forward from the anguish of Sony in the 1970s and the even more excruciating early 1980s to today. Facebook dismisses ChatGPT as not too sophisticated. I heard one of the big wizards at the Zuckbook say this to a Sillycon Alley journalist on a podcast called Big Technology. The name says it all. Big technology, just not great technology. That’s what the Zuckbooker suggested everyone’s favorite social media company has.

The Google has emitted a number of marketing statements about more than a dozen amazing smart software apps. These, please note, will be forthcoming. The most recent application of the Google’s advanced, protein folding, Go winning system is explained in words—presumably output by a real journalist—in “Google AI Can Create Music in Any Genre from a Text Description.” One can visualize the three exclamation points a human wanted to insert in this headline. Amazing, right? That too is forthcoming. The article quickly asserts something that could have been crafted by one of Googzilla’s non-terminated executives:

MusicLM is surprisingly talented.

The GOOG has talent for sure.

What the Google does not have is the momentum of consumer craziness. Whether it is the buzz among high school and college students that ChatGPT can write or help write term papers, or the in-touch outfit Buzzfeed announcing it will use ChatGPT to create listicles, the indomitable Alphabet is not in the information flow.

But the Google technology is better.  That sounds like a statement I heard from a former wizard at RCA who was interviewing for a job at the blue chip consulting firm for which I worked when I was a wee lad. That fellow invented some type of disc storage system, maybe a laser-centric system. I don’t know. His statement still resonates with me today:

The Sony technology was better.

The flaw is the belief that the better technology always wins. The inventors of the better technology, or the cobblers who glue together other innovations to create a “better” technology, never give up their convictions. How can a low resolution, cheaper recording solution win? The champions of Sony’s technology complained about fairness and pointed to the superior resolution of the recorded information.

I jotted down this morning (January 28, 2023) why Googzilla may be facing, like the Zuckbook, a Sony Betamax moment:

  1. The demonstrations of the excellence of the Google smart capabilities are esoteric and mean essentially zero outside of the Ivory Tower worlds of specialists. Yes, I am including the fans of Go and whatever other game DeepMind can win. Fan frenzy is not broad consumer uptake and excitement.
  2. Applications which ordinary Google search users can examine are essentially vaporware. The Dall-E and ChatGPT apps are coming fast and furious. I saw a database of AI apps based on these here-and-now systems, and I had no idea so many clever people were embracing the meh-approach of OpenAI. “Meh,” obviously may not square with what consumers perceive or experience. Remember those baffled professors or the Luddite lawyers who find smart software a bit of a threat.
  3. OpenAI has hit a marketing home run. Forget the Sillycon Alley journalists. Think about the buzz among the artists about their potential customers typing into a search box and getting an okay image. Take a look at Googzilla trying to comprehend the Betamax device.

Toss in the fact that Google’s ad business is going to have some opportunities to explain why owning the bar, the stuff on the shelves, the real estate, and the payment system is a net gain for humanity. Yeah, that will be a slam dunk, won’t it?

Perhaps more significantly, in the post-Covid crazy world in which those who use computers reside, the ChatGPT and OpenAI have caught a big wave. That wave can swamp some very sophisticated, cutting edge boats in a short time.

Here’s a question for you (the last one in this essay I promise): Can the Google swim?

Stephen E Arnold, January 30, 2023

Synthetic Content: A Challenge with No Easy Answer

January 30, 2023

Open source intelligence is the go-to method for many crime analysts, investigators, and intelligence professionals. Whether from social media or third-party data from marketing companies, useful insights can be obtained. The upside of OSINT means that many of its supporters downplay or choose to sidestep its downsides. I call these “OSINT blind spots,” and each day I see more information about what is becoming a challenge.

For example, “As Deepfakes Flourish, Countries Struggle with Response” is a useful summary of one problem posed by synthetic (fake) content. What looks “real” may not be. A person sifting through data must assume that information is suspect. Verification is needed. But synthetic systems can output multiple instances of fake information and then populate channels with “verification” statements for the initial item of information.

The article states:

Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a crypto currency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone. In most of the world, authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

For some government professionals, the article says:

problematic applications are also plentiful. Legal experts worry that deepfakes could be misused to erode trust in surveillance videos, body cameras and other evidence. (A doctored recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s lawyer.) Digital forgeries could discredit or incite violence against police officers, or send them on wild goose chases. The Department of Homeland Security has also identified risks including cyber bullying, blackmail, stock manipulation and political instability.

The most interesting statement in the essay, in my opinion, is this one:

Some experts predict that as much as 90 per cent of online content could be synthetically generated within a few years.

The number may overstate what will happen because no one knows the uptake of smart software and the applications to which the technology will be put.

Thinking in terms of OSINT blindspots, there are some interesting angles to consider:

  1. Assume the write up is correct and 90 percent of content is authored by smart software: how does a person or system determine accuracy? What happens when a self learning system learns from itself?
  2. How does a human determine what is correct or incorrect? Education appears to be struggling to teach basic skills. What about journals with non reproducible results which spawn volumes of synthetic information about flawed research? Is a person, even one with training in a narrow discipline, able to determine “right” or “wrong” in a digital environment?
  3. Are institutions like libraries being further marginalized? Machine generated content will exceed a library’s capacity to acquire certain types of information. Does one acquire books which are “right” when machine generated content produces information that shouts “wrong”?
  4. What happens to automated sense making systems which have been engineered on the often flawed assumption that available data and information are correct?

Perhaps an OSINT blind spot is a precursor to going blind, unsighted, or dark?

Stephen E Arnold, January 30, 2023
