Doom: An Interesting Prediction from a Xoogler

January 31, 2023

I spotted an interesting prediction about Google or what I call Googzilla. The comment appeared in “Gmail Creator Says ChatGPT Will Destroy Google’s Business in Two Years.”

Google may be only a year or two away from total disruption. AI will eliminate the Search Engine Result Page, which is where they make most of their money. Even if they catch up on AI, they can’t fully deploy it without destroying the most valuable part of their business!

The alleged Xoogler posting the provocative comment was Paul Buchheit. (I once heard that it was he who coined the phrase, “Don’t be evil.”) Mr. Buchheit is credited with “inventing” Gmail.

The article stated:

The company has built its business largely around its most successful product; the search engine could soon face a crisis… Google charges advertisers a fee for displaying their products and services right next to the search results, increasing the likelihood of the provider being found. In 2021, the company raked in over $250 billion in revenue, its best-ever income in its nearly 25-year-old existence.

Let’s think about ways Google could recover this predicted loss. Here are a few ideas:

  1. Stop paying vendors like Apple to feature Google search results. (A billion here and a billion there could add up.)
  2. Create new services and charge users for them. (I know Google tried to cook up a way to sell Loon balloons and a nifty early stab at the metaverse, but maybe the company will find a way to innovate without me-toos.)
  3. Raise prices for consumer services. (That might cause a problem because companies with diversified revenue may lower the already low, low prices for video chat, online word processing, and email. One-trick ponies by definition may have difficulty learning another trick or three.)

Will ChatGPT kill the Google? My thought is that even Xooglers feel that the Googzilla is getting arthritic and showing off its middle-age spread. Nevertheless, Google’s Sundar and Raghavan management act will have to demonstrate some Fancy Dancing. The ChatGPT may output content that seems okay but tucks errors in its nouns and verbs. But there is the historical precedent of the Sony Betamax to keep in mind. ChatGPT may be flawed, but people bought Pintos, and some of those could explode when rear-ended. Ouch!

Why are former Google employees pointing out issues? That’s interesting apart from ChatGPT Code Red silliness.

Stephen E Arnold, January 31, 2023

Does Google Need a Better Snorkel and a Deeper Mind?

January 31, 2023

Recession, Sillycon Valley meltdown, and a disaffected workforce? Gloomy, right? Consider this paragraph from “ChatGPT Pro Is Coming. Here’s How You Can Join the Waitlist”:

ChatGPT has probably the fastest-growing user base ever, with a staggering million-plus users signing up a week after its release. That’s four times faster than Dall-E2, which took a month to reach a million users. Microsoft is already mulling an investment of $10 billion, bringing the total valuation of OpenAI, the startup behind ChatGPT, to $29 billion.

A more telling example of the PR coup Microsoft and OpenAI have achieved is the existence of this write up in Sportskeeda. Imagine Sportskeeda publishing “How Google’s AI Tool Sparrow Is Looking to Kill ChatGPT.” Google’s marketing has lured Sportskeeda to help make Google’s case. Impressive.

More blue-sky reality: the next big thing has arrived, and the pot of gold at the end of the rainbow is visible. High school and college students have embraced ChatGPT. Lawyers find it unlawyerlike. Google finds it a bit of a problem.

How do I know?

Navigate to the Wall Street Journal, owned by Rupert Murdoch and sufficiently technologically challenged to use humans to write stories. Consider this one: “Google’s AI Now Plays Catch-Up to Newbies.” Imagine the joy of the remaining Google marketing types when news of a big story circulated. Now consider the disappointment when the Googlers read:

… Google employees began asking whether the company had missed a chance to attract users. During a company-wide meeting in December [2022], Mr. Dean [a Google senior wizard] said Google had to move slower than startups because people place a high degree of trust in the company’s products, and current chatbots had issues with accuracy, said people who heard the remarks.

Okay, in that month what happened to ChatGPT? It became big and dominated both the regular news and the high-tech news streams. What has Google accomplished?

  1. Promises that more than 20 products and services are coming. Is that a forward-looking statement or vaporware?
  2. Google rolls over to the EU as it gets ready for the US probe of its modest advertising business.
  3. New applications of Dall-E, ChatGPT, and variants clog the trendy online service Product Hunt.

Net net: Jeff Dean, the champion of recipes and Chubby (a Google technology known to few in my experience) is explaining what I call “to be” innovations. Due to Google’s size and customer base, these to-be smart software powered solutions may overwhelm the ChatGPT thing. Google’s snorkels will deliver life-giving oxygen to the beastie. The DeepMind crew will welcome their colleagues from Mountain View and roll out something that does not require a PhD in genetics to understand.

Yep, to be or not to be. That is a question for the Google.

Stephen E Arnold, January 31, 2023

Crypto and Crime: Interesting Actors Get Blues and Twos on Their Systems

January 31, 2023

I read a widely available document which presents information once described to me as a “close hold.” The article is “Most Criminal Cryptocurrency Is Funneled Through Just 5 Exchanges.” Most of the write up is the sort of breathless “look what we know” information. The article, which recycles information from Wired and from the specialized services firm Chainalysis, does not mention the five outfits currently under investigation. The write up does not provide much help to a curious reader by omitting open source intelligence tools which can rank order exchanges by dollar volume. Why not learn about this listing by CoinMarketCap and include that information instead of recycling OPI (other people’s info)? Also, why not point to resources on one of the start.me pages? I know. I know. That’s work that interferes with getting a Tall, Non-Fat Latte With Caramel Drizzle.
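For the curious reader, the rank-ordering step the write up skips is trivial to sketch. Here is a minimal Python illustration of sorting exchanges by reported dollar volume; the exchange names, volume figures, and the `volume_24h_usd` field are illustrative assumptions, not the actual schema of CoinMarketCap or any other service.

```python
# OSINT triage sketch: rank exchanges by reported 24-hour USD volume.
# The data and field names below are hypothetical placeholders.

def rank_exchanges_by_volume(exchanges):
    """Return exchanges sorted by reported 24-hour USD volume, largest first."""
    return sorted(exchanges, key=lambda e: e["volume_24h_usd"], reverse=True)

sample = [
    {"name": "Exchange A", "volume_24h_usd": 1_200_000_000},
    {"name": "Exchange B", "volume_24h_usd": 350_000_000},
    {"name": "Exchange C", "volume_24h_usd": 4_700_000_000},
]

for rank, ex in enumerate(rank_exchanges_by_volume(sample), start=1):
    print(f"{rank}. {ex['name']}: ${ex['volume_24h_usd']:,}")
```

With real listing data pulled from an open source, the same one-liner sort turns a raw feed into the rank-ordered view an analyst would want.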

The key point for me is the inclusion of some companies/organizations allegedly engaged in some fascinating activities. (Fascinating for crime analysts and cyber fraud investigators. For the individuals involved with these firms, “fascinating” is not the word one might use to describe the information in the Ars Technica article.)

Here are the outfits mentioned in the article:

  • Bitcoin Fog – Offline
  • Bitzlato
  • Chatex
  • Garantex
  • Helix – Offline
  • Suex
  • Tornado Cash – Offline

Is there a common thread connecting these organizations? Who are the stakeholders? Who are the managers? Where are these outfits allegedly doing business?

Could it be Russia?

Stephen E Arnold, February 1, 2023

Newton and Shoulders of Giants? Baloney. Is It Everyday Theft?

January 31, 2023

Here I am in rural Kentucky. I have been thinking about the failure of education. I recall learning from Ms. Blackburn, my high school algebra teacher, this statement by Sir Isaac Newton, the apple and calculus guy:

If I have seen further, it is by standing on the shoulders of giants.

Did Sir Isaac actually say this? I don’t know, and I don’t care too much. It is the gist of the sentence that matters. Why? I just finished reading — and this is the actual article title — “CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism. CNET’s AI-Written Articles Aren’t Just Riddled with Errors. They Also Appear to Be Substantially Plagiarized.”

How is any self-respecting, super buzzy smart software supposed to know anything without ingesting, indexing, vectorizing, and any other math magic the developers have baked into the system? Did Brunelleschi wake up one day and do the Eureka! thing? Maybe he stood in line and entered the Pantheon and looked up? Maybe he found a wasp’s nest and cut it in half and looked at what the feisty insects did to build a home? Obviously intellectual theft. Just because the dome still stands, when it falls, he is an untrustworthy architect-engineer. Argument nailed.

The write up focuses on other ideas; namely, being incorrect and stealing content. Okay, those are interesting and possibly valid points. The write up states:

All told, a pattern quickly emerges. Essentially, CNET‘s AI seems to approach a topic by examining similar articles that have already been published and ripping sentences out of them. As it goes, it makes adjustments — sometimes minor, sometimes major — to the original sentence’s syntax, word choice, and structure. Sometimes it mashes two sentences together, or breaks one apart, or assembles chunks into new Frankensentences. Then it seems to repeat the process until it’s cooked up an entire article.

For a short (very, very brief) time I taught freshman English at a big time university. What the Futurism article describes is how I interpreted the work process of my students. Those entitled and enquiring minds just wanted to crank out an essay that would meet my requirements and hopefully get an A or a 10, which was a signal that Bryce or Helen was a very good student. Then go to a local hang out and talk about Heidegger? Nope, mostly about the opposite sex, music, and getting their hands on a copy of Dr. Oehling’s test from last semester for European History 104. Substitute the topics you talked about to make my statement more “accurate”, please.
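The rewriting pattern the Futurism article describes, ripping sentences from published sources and lightly adjusting them, is exactly what simple n-gram overlap checks are built to flag. Here is a minimal sketch; the example texts and the choice of trigrams are hypothetical, not taken from CNET’s output or Futurism’s analysis.

```python
# Sketch of an n-gram overlap check for lightly rewritten text.
# High overlap between a candidate article and an earlier source
# suggests sentence-level borrowing rather than fresh composition.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text (case-folded)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate, source, n=3):
    """Fraction of the candidate's word n-grams that also appear in the source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

source = "a credit card lets you borrow money from the bank to make purchases"
candidate = "a credit card allows you to borrow money from the bank to make purchases"

print(f"3-gram overlap: {overlap_ratio(candidate, source):.2f}")
```

Swapping “lets” for “allows” changes only a few trigrams, so the overlap stays high; that is the “minor adjustments to syntax and word choice” fingerprint in miniature.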

I loved the final paragraphs of the Futurism article. Not only is a competitor tossed over the argument’s wall, but the Google and its outstanding relevance finds itself a target. Imagine. Google. Criticized. The article’s final statements are interesting; to wit:

As The Verge reported in a fascinating deep dive last week, the company’s primary strategy is to post massive quantities of content, carefully engineered to rank highly in Google, and loaded with lucrative affiliate links. For Red Ventures, The Verge found, those priorities have transformed the once-venerable CNET into an “AI-powered SEO money machine.” That might work well for Red Ventures’ bottom line, but the specter of that model oozing outward into the rest of the publishing industry should probably alarm anybody concerned with quality journalism or — especially if you’re a CNET reader these days — trustworthy information.

Do you like the word trustworthy? I do. Does Sir Isaac fit into this future-leaning analysis? Nope, he’s still preoccupied with proving that the evil Gottfried Wilhelm Leibniz was tipped off about tiny rectangles and the methods thereof. Perhaps Futurism can blame smart software?

Stephen E Arnold, January 31, 2023

Another Betamax Battle: An Intellectual Spat

January 30, 2023

The AI search fight is officially underway. True, the Baidu AI won’t be available until March 2023, but the trumpet has sounded.


The illustration of two AI mud wrestlers engaging in a contest was produced by Craiyon. I assume that the Craiyon crowd has the © because I can’t draw worth a lick. 

The fighters are making their way from the changing room to the pit. In the stands are dozens of AI infused applications. You.com provided a glimpse of its capabilities during its warm up. The somewhat unsteady Googzilla is late. Microsoft has been in the ring waiting for what seems to be a dozen or more news cycles. More spectators are showing up. Look. Baidu is here.

However, there is a spectator with a different point of view from the verdant groves and pizza joints of Princeton University. This Merlin is named Arvind Narayanan, who, according to “Decoding the Hype About AI,” once gave a lecture called “How to Recognize AI Snake Oil.” That talk is going to become a book called “AI Snake Oil.” Yep, snake oil: a product of no real worth. No worth. Sharp point: worth versus no worth. What’s worth?

Please, read the article which is an interview with a person who wants to slow the clapping and stomping of the attendees. Here’s a quote from Dr. Arvind Narayanan’s interview:

Even with something as profound as the internet or search engines or smartphones, it’s turned out to be an adaptation, where we maximize the benefits and try to minimize the risks, rather than some kind of revolution. I don’t think large language models are even on that scale. There can potentially be massive shifts, benefits, and risks in many industries, but I cannot see a scenario where this is a “sky is falling” kind of issue.

Okay, the observations:

  1. Google and its Code Red suggest that Dr. Narayanan is way off base for the Google search brain trust. Maybe Facebook and its “meh” response are better? Microsoft’s bet on OpenAI is going with the adaptation approach. Smart Word may be better than Clippy, plus it may sell software licenses to big companies, marketers, and students who need essay writing help.
  2. If ChatGPT is snake oil, what’s the fuss? Could it be that some people who are exposed to ChatGPT perceive the smart software as new, exciting, promising, and an opportunity? That seems a reasonable statement at this time.
  3. The split between the believers (Microsoft, et al) and the haters (Google, et al) surfaced with the Timnit Gebru incident at Google. More intellectual warfare is likely: Bias, incorrect output pretending to be correct, copyright issues, etc.

Is technology exciting again? Finally.

Stephen E Arnold, January 30, 2023

Does Google Have the Sony Betamax of Smart Software?

January 30, 2023

Does Google have the Sony Betamax of smart software? If you cannot answer this question as well as ChatGPT, you can take a look at “VHS or Beta? A Look Back at Betamax, and How Sony Lost the VCR Format War to VHS Recorders.” Boiling down the problem Sony faced, let me suggest that better did not win. Maybe adult content outfits tipped the scales? Maybe not? The best technology does not automatically dominate the market.


Flash forward from the anguish of Sony in the 1970s and the even more excruciating early 1980s to today. Facebook dismisses ChatGPT as not too sophisticated. I heard one of the big wizards at the Zuckbook say this to a Sillycon Alley journalist on a podcast called Big Technology. The name says it all. Big technology, just not great technology. That’s what the Zuckbooker suggested everyone’s favorite social media company has.

The Google has emitted a number of marketing statements about more than a dozen amazing smart software apps. These, please note, will be forthcoming. The most recent application of the Google’s advanced, protein folding, Go winning system is explained in words—presumably output by a real journalist—in “Google AI Can Create Music in Any Genre from a Text Description.” One can visualize the three exclamation points that a human wanted to insert in this headline. Amazing, right? That too is forthcoming. The article quickly asserts something that could have been crafted by one of Googzilla’s non-terminated executives:

MusicLM is surprisingly talented.

The GOOG has talent for sure.

What the Google does not have is the momentum of consumer craziness. Whether it is the buzz among some high school and college students that ChatGPT can write or help write term papers or the in-touch outfit Buzzfeed which will use ChatGPT to create listicles — the indomitable Alphabet is not in the information flow.

But the Google technology is better.  That sounds like a statement I heard from a former wizard at RCA who was interviewing for a job at the blue chip consulting firm for which I worked when I was a wee lad. That fellow invented some type of disc storage system, maybe a laser-centric system. I don’t know. His statement still resonates with me today:

The Sony technology was better.

The flaw is the belief that the better technology always wins. The inventors of the better technology or the cobblers who glue together other innovations to create a “better” technology never give up their convictions. How can a low resolution, cheaper recording solution win? The champions of Sony’s technology complained about fairness and pointed to the superior resolution of the recorded information.

I jotted down this morning (January 28, 2023) why Googzilla may be facing, like the Zuckbook, a Sony Betamax moment:

  1. The demonstrations of the excellence of the Google smart capabilities are esoteric and mean essentially zero outside of the Ivory Tower worlds of specialists. Yes, I am including the fans of Go and whatever other game DeepMind can win. Fan frenzy is not broad consumer uptake and excitement.
  2. Applications which ordinary Google search users can examine are essentially vaporware. The Dall-E and ChatGPT apps are coming fast and furious. I saw a database of AI apps based on these here-and-now systems, and I had no idea so many clever people were embracing the meh-approach of OpenAI. “Meh,” obviously may not square with what consumers perceive or experience. Remember those baffled professors or the Luddite lawyers who find smart software a bit of a threat.
  3. OpenAI has hit a marketing home run. Forget the Sillycon Alley journalists. Think about the buzz among the artists about their potential customers typing into a search box and getting an okay image. Take a look at Googzilla trying to comprehend the Betamax device.

Toss in the fact that Google’s ad business is going to have some opportunities to explain why owning the bar, the stuff on the shelves, the real estate, and the payment system is a net gain for humanity. Yeah, that will be a slam dunk, won’t it?

Perhaps more significantly, in the post-Covid crazy world in which those who use computers reside, the ChatGPT and OpenAI have caught a big wave. That wave can swamp some very sophisticated, cutting edge boats in a short time.

Here’s a question for you (the last one in this essay I promise): Can the Google swim?

Stephen E Arnold, January 30, 2023

Synthetic Content: A Challenge with No Easy Answer

January 30, 2023

Open source intelligence is the go-to method for many crime analysts, investigators, and intelligence professionals. Whether social media or third-party data from marketing companies, useful insights can be obtained. The upside of OSINT means that many of its supporters downplay or choose to sidestep its downsides. I call this “OSINT blindspots”, and each day I see more information about what is becoming a challenge.

For example, “As Deepfakes Flourish, Countries Struggle with Response” is a useful summary of one problem posed by synthetic (fake) content. What looks “real” may not be. A person sifting through data must assume that information is suspect. Verification is needed. But generators of synthetic data can output multiple instances of fake information and then populate channels with “verification” statements supporting the initial item of information.

The article states:

Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a crypto currency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone. In most of the world, authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

For some government professionals, the article says:

problematic applications are also plentiful. Legal experts worry that deepfakes could be misused to erode trust in surveillance videos, body cameras and other evidence. (A doctored recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s lawyer.) Digital forgeries could discredit or incite violence against police officers, or send them on wild goose chases. The Department of Homeland Security has also identified risks including cyber bullying, blackmail, stock manipulation and political instability.

The most interesting statement in the essay, in my opinion, is this one:

Some experts predict that as much as 90 per cent of online content could be synthetically generated within a few years.

The number may overstate what will happen because no one knows the uptake of smart software and the applications to which the technology will be put.

Thinking in terms of OSINT blindspots, there are some interesting angles to consider:

  1. Assume the write up is correct and 90 percent of content is authored by smart software. How does a person or system determine accuracy? What happens when a self-learning system learns from itself?
  2. How does a human determine what is correct or incorrect? Education appears to be struggling to teach basic skills. What about journals with non-reproducible results which spawn volumes of synthetic information about flawed research? Is a person, even one with training in a narrow discipline, able to determine “right” or “wrong” in a digital environment?
  3. Are institutions like libraries being further marginalized? Machine generated content will exceed a library’s capacity to acquire certain types of information. Does one acquire books which are “right” when machine generated content produces information that shouts “wrong”?
  4. What happens to automated sense making systems which have been engineered on the often flawed assumption that available data and information are correct?

Perhaps an OSINT blind spot is a precursor to going blind, unsighted, or dark?

Stephen E Arnold, January 30, 2023

Have You Ever Seen a Killer Dinosaur on a Leash?

January 27, 2023

I have never seen a Tyrannosaurus Rex allow a European regulator to put a leash on its neck and lead the beastie around like a tamed circus animal.


Another illustration generated by the smart software outfit Craiyon.com. The copyright is up in the air just like the outcome of Google’s battles with regulators, OpenAI, and assorted employees.

I think something similar just happened. I read “Consumer Protection: Google Commits to Give Consumers Clearer and More Accurate Information to Comply with EU Rules.” The statement said:

Google has committed to limit its capacity to make unilateral changes related to orders when it comes to price or cancellations, and to create an email address whose use is reserved to consumer protection authorities, so that they can report and request the quick removal of illegal content. Moreover, Google agreed to introduce a series of changes to its practices…

The details appear in this EU table of Google changes.

Several observations:

  1. A kind and more docile Google may be on parade for some EU regulators. But as the circus act of Siegfried and Roy learned, one must not assume a circus animal will not fight back.
  2. More problematic may be Google’s internal management methods. I have used the phrase “high school science club management methods.” Now that wizards were and are being terminated like insects in a sophomore biology class, getting that old team spirit back may be increasingly difficult. Happy wizards do not create problems for their employer or former employer as the case may be. Unhappy folks can be clever, quite clever.
  3. The hyper-problem in my opinion is how the tide of online user sentiment has shifted from “just Google it” to ladies in my wife’s bridge club asking me, “How can I use ChatGPT to find a good hotel in Paris?” Yep, really old ladies in a bridge club in rural Kentucky. Imagine how the buzz is ripping through high school and college students looking for a way to knock out an essay about the Louisiana Purchase for that stupid required American history class. ChatGPT has not needed too much search engine optimization, has it?

Net net: The friendly Google faces a multi-bladed meat grinder behind Door One, Door Two, and Door Three. As Monty Hall, game show host of “Let’s Make a Deal,” said:

“It’s time for the Big Deal of the Day!”

Stephen E Arnold, January 27, 2023

Microsoft Security and the Azure Cloud: Good Enough?

January 27, 2023

I don’t know anything about the cyber security firm called Silverfort. The company’s Web site makes it clear that the company’s management likes moving icons and Microsoft. Nevertheless, “Microsoft Azure-Based Kerberos Attacks Crack Open Cloud Accounts” points out some alleged vulnerabilities in what Microsoft has positioned as its present and future money machine. The article says:

Silverfort disclosed the issues to Microsoft, and while the company is aware of the weaknesses, it does not plan to fix them, because they are not “traditional” vulnerabilities, Segal says. Microsoft also confirmed that the company does not consider them vulnerabilities. “This technique is not a vulnerability, and to be used successfully a potential attacker would need elevated or administrative rights that grant access to the storage account data,” a Microsoft spokesperson tells Dark Reading [the online service publishing the report].

So a nothingburger (wow, I detest that trendy jargon). I would view Microsoft’s product with a somewhat skeptical eye. Bad actors show some fondness for Microsoft’s approach to engineering.

Shift gears to the article “Microsoft Is Beating Google at Its Own Game.” I thought, “Advertising.” The write up has a different angle:

Following the news of Microsoft’s $10 billion investment, Wedbush analyst Daniel Ives wrote that ChatGPT is a “potential game changer” for Microsoft, and that the company was “not going to repeat the same mistakes” of missing out on social and mobile that it made two decades ago. Microsoft “is clearly being aggressive on this front and not going to be left behind,” Ives wrote.

Yep, smart software. I think the idea is that using OpenAI as a springboard, Microsoft will leapfrog into high clover. The announcement of Microsoft’s investment in OpenAI provides compute resources. If the bet pays off, Microsoft will get real money.

However, what happens when Microsoft’s “good enough” engineering meets OpenAI?

You may disagree, but I think the security vulnerabilities will continue to exist. Furthermore, it is impossible to know what issues will arise when smart software begins to think for Microsoft systems and users.

Security is a cat-and-mouse game. How quickly will bad actors integrate smart software into malware? How easy will it be for smart software to trawl through technical documents looking for interesting information?

The integration of OpenAI into Microsoft systems, services, and software may require more than “good enough” engineering. Now tell me again why I cannot print after updating Windows 11? Exactly what is Google’s game? Excitement about what people believe is the next big thing is one thing. Ignoring some here-and-now issues may be another.

Stephen E Arnold, January 27, 2023

Killing Wickr … Quickly and Without Love

January 27, 2023

Encrypted messaging services are popular for privacy-concerned users as well as freedom fighters in authoritarian countries.  Tech companies consider these messaging services to be a wise investment, so Amazon purchased Wickr in 2021.  Wickr is an end-to-end encrypted messaging app, and it was made available for AWS users.  Gizmodo explains that Wickr will soon be nonexistent in the article, “Amazon Plans To Close Up Wickr’s User-Centric Encrypted Messaging App.”

Amazon no longer wants to be part of the encrypted messaging services, because it got too saturated like the ugly Christmas sweater market.  Amazon is killing the Wickr Me app, limiting use to business and public sectors through AWS Wickr and Wickr Enterprise.  New registrations end on December 31 and the app will be obsolete by the end of 2023.  

Wickr was worth $60 million when Amazon purchased it.  Amazon, however, lost $1 trillion in stock value in November 2022, becoming the first company in history to claim that “honor.”  Amazon is laying off employees and working through company buyouts.  Changing Wickr’s target market could recoup some of the losses:

“But AWS apparently wants Wickr to focus on its business and government customers much more than its regular users. Among those public entities using Wickr is U.S. Customs and Border Protection. That contract was reportedly worth around $900,000 when first reported in September last year. Sure, the CBP wants encrypted communications, but Wickr can delete all messages sent via the app, which is an increasingly dangerous proposition for open government advocates.”

Wickr, like other encryption services, does not have a clean record.  It has been used for illegal drug sales and other illicit items via the Dark Web.  

Whitney Grace, January 27, 2023
