Google: Timing Is Everything

April 28, 2023

Note: This short blog post is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Alphabet, the bastion of excellent judgment in matters of management, captured headlines in the Wall Street Journal, Bloomberg, the Financial Times, yada yada. My hunch is that you think Google has knocked the socks off the smart software world. Wrong. Maybe Google has introduced an unbeatable approach to online advertising? Wrong. Perhaps you think that Google has rolled out a low-cost, self-driving vehicle? Sorry, wrong.

In the midst of layoffs, lawsuits, and the remarkable market reach of OpenAI, Google’s most recent brilliant move is the release of information about a big payday for Sundar Pichai. The Reuters’ story “Alphabet CEO Pichai Reaps Over $200 Million in 2022 Amid Cost-Cutting” reported:

The pay disparity comes at a time when Alphabet, the parent company of Google, has been cutting jobs globally. The Mountain View, California-based company announced plans to cut 12,000 jobs around the world in January [2023], equivalent to 6% of its global workforce.

Google employees promptly fouled traffic as protestors mumbled and shouted algorithms at the company.

Alphabet’s Board of Directors is quite tolerant and pleased with one half of the Sundar and Prabhakar Comedy Duo. The Paris Bard show sucked more value from the company than the Ticketmaster and Taylor Swift swizzle. Then the Google management wizards fired people. With Microsoft releasing smart software on a weekly cadence, Mr. Pichai’s reward for a job well done makes headlines.

Timing is everything.

Stephen E Arnold, April 28, 2023

Google Innovates in Smart Software: A Reorganization

April 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Someone once told me that it takes six months for staff to adjust to a reorganization. Is this a bit of folklore? Nope, I just think the six-month estimate is dead wrong. I think it takes longer, often a year or more, to integrate two units of the same company. How do I know? I watched Halliburton take over Nuclear Utility Services. Then I watched Bell + Howell take over the Courier Journal’s database publishing unit. Finally, I have quite direct memories of not being able to find much of anything when we last moved.

Now the Alphabet Google thing is addressing its marketing problem with a reorganization. I learned this by reading “Announcing Google DeepMind.” The write up by a founder of DeepMind says:

Sundar is announcing that DeepMind and the Brain team from Google Research will be joining forces as a single, focused unit called Google DeepMind. Combining our talents and efforts will accelerate our progress towards a world in which AI helps solve the biggest challenges facing humanity…

Not a word about catching up with Microsoft’s Bing ChatGPT marketing, not a peep about the fast cycle integration of orchestration software across discrete ChatGPT-type functions, and not a whisper about why Google is writing about what is to happen.

What’s my take on this Code Red or Red Alert operational status which required the presence of Messrs. Brin and Page?

  1. Google is demonstrating that a reorganization will address the Microsoft ChatGPT marketing. A reorganization and a close partnership among Sundar [Pichai], Jeff Dean, James Manyika, and Demis [Hassabis]? Okay.
  2. Google announced quantum supremacy, its protein folding breakthrough, and the game playing ability of its smart software. Noble achievements, but Microsoft is pushing smart Bing into keyboards. That’s one way to get Android and iPhone users’ attention. Will it work for Microsoft? Probably not, but it is something visible.
  3. Google is simply not reacting. A baby ecosystem is growing up around Midjourney. I learned about unprompt.ai. The service provides a search-and-point way to get the prompt you want. When I saw this service, I realized that ChatGPT may be morphing in ways that any simple Web search engine could implement. For Google, deploying the service would be trivial. The problem is that reorgs don’t pay much attention outside of the foxhole in which senior management prefers to dwell.

Net net: Google is too big and has too much money to concede. However, the ChatGPT innovation off-road vehicle is zipping along. Google is organizing the wizards who will ride on Google’s glitzy glamping rig. ChatGPT is hitting the rocks and crawling over obstacles. The Google machine is at a scenic observation point with a Pebble Beach-type of view. What’s the hurry? Google is moving… with a reorg.

Stephen E Arnold, April 28, 2023

The Google: A Digital Knife Twisted after Stabbing

April 27, 2023

This essay is the work of a real, still-living dinobaby. No smart software involved.

Brian Lee captures a personal opinion with the somewhat misleading title “Why Does Did Google Brain Exist?” To be fair, the typographic trope of striking out the “does” makes it clear that something changed in the GOOG’s smart software theme park. The lights on one thrill ride seem to have been turned off. Shadows flicker across other attractions, and it is not clear if maintenance is making repairs or if the shows are changing.

The article offers an analysis of the shotgun marriage of Google Brain with DeepMind. I heard the opening notes of “Dueling Banjos” from the 1972 film Deliverance. Instead of four city slickers floating on a raft, the theme accentuates the drama of similar but culturally different digital cruises on Alphabet’s river of cash.

I agree with most of the points presented in the article; for example, presenting “research” as a pretense for amping advertising revenue, the “hubris” of Google, and Google’s effort to be the big dog in smart software. Instead of offering snippets, I recommend that you read Mr. Lee’s essay.

I do want to quote what I think is the twisting of the knife after stabbing Googzilla in the heart. Mr. Lee shoves the knife deeper and pushes it from side to side:

Despite Brain’s tremendous value creation from its early funding of open-ended ML research, it is becoming increasingly apparent to Google that it does not know how to capture that value. Google is of course not obligated to fund open-ended research, but it will nevertheless be a sad day for researchers and for the world if Google turns down its investments. Google is already a second-mover in many consumer and business product offerings and it seems like that’s the way it will be in ML research as well. I hope that Google at least does well at being second place. 

The message is clear: The train carrying the company’s top acts has stalled on the way to the big show. No longer getting top billing, the Sundar and Prabhakar Act is listed along with a trained pony act and a routine recycling Fibber McGee and Molly gags. Does the forced reorganization mean that Google has lost its star power?

Stephen E Arnold, April 27, 2023

Amusing Moments in Business Analysis

April 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I noted two interesting examples of business analysis crashing into reality. I like to visualize the misstep as a well-dressed professional slipping in a doggy deposit and creating a “smelly shoe in a big meeting problem.”

Let me explain the two examples.

The first is MBA baloney about business change or, as it is now jargonized, “transformation.” If you are a bit rusty on business baloney, a quick look at the so-far free “Organizational Change Management: What It Is & Why It’s Important” will bring you up to speed. But McKinsey, a blue chip consulting company with a core competency in opioid-related insights, published its version as “What Is Business Transformation?”

The write up says:

Research by McKinsey has long documented that enterprise-wide transformation is difficult, with less than a third of transformations reaching their goals to improve organizational performance and sustain these improvements over time.

I found this recommendation notable:

Many transformations are enabled by a central transformation office (TO), with the CTO at the helm.

As I recall, McKinsey allegedly worked two sides of the street; that is, getting paid to advise certain companies and government agencies about the same subject. I won’t go into details, but the advice proved problematic, and some connect McKinsey’s input with the firm’s efforts to change.

So, does McKinsey have a chief transformation officer? It appears that a Microsoft veteran occupies that position at the venerable, bluest of the blue chip consulting firms. However, this professional has two jobs according to the McKinsey blog. But I thought the chief transformation officer had to operate according to the precepts outlined in the “What Is Business Transformation?” article? Now the job is not just transformation; it is platform. What does platform mean?

Here’s the answer:

Jacky will accelerate this work by helping our firm further leverage technology in our client work and innovate new platforms to help client organizations transform and grow. She will also lead McKinsey’s internal technology team, which serves our more than 40,000 colleagues across 66 countries.

Does this mean that McKinsey’s chief transformation officer has to do the change thing and manage the internal technology staff globally?

If I keep in mind the chilling factoid that fewer than one third of transformation efforts reach their goals, McKinsey has to focus to make the transformation work. The problem, as I understand how McKinsey and other blue-chip experts operate, is that incentive plans for those leading practices allow the loose confederation of “partners” to hit their goals. In order to hit those goals, partners will have to generate money in ways that are known to work; for example, work for industry, work for the government, heck, work for any entity with the ability to pay.

Will McKinsey change under the firm and informed hand of a chief transformation officer? Not unless that “hand” writes specific incentive plans to cause change from the pocketbook outwards. I wonder whether McKinsey will land in the two-thirds-plus failure set.

The second example comes from Mr. Murdoch’s Wall Street Journal. The essay (not real news in my opinion) appeared in the April 21, 2023 edition. The article’s title was “Justice Thomas and the Plague of Bad Reporting.” The author, according to my dead tree edition of the newspaper, is James Taranto, who is the Journal’s editorial features editor. What’s amazing about this essay is that it criticizes other “real” news outfits for their coverage of what appears to be some dog-doody moments for one of the Supreme Court justices. Pardon the pun, but I don’t have a dog in this fight.

What caught my attention is that the essay registers zero intellectual vibration, not a hint of a sentient being aware of the Rupert Murdoch settlement of the Fox News and Dominion matter. Paying about a billion dollars for exactly the type of “real” news the WSJ essay addresses makes clear that more than the Foxy folks are intellectually dishonest. Amazing.

Net net: Two classy outfits, and each is happily, willingly writing baloney. Touting transformation without altering executive compensation plans, and excoriating other publications for bad reporting, illustrate the stuck dials on some organizations’ ethical compasses. I hate to repeat myself, but I have to end with: Amazing.

Stephen E Arnold, April 27, 2023

A Googley Rah Rah for Synthetic Data

April 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I want to keep this short. I know from experience that most people don’t think too much about synthetic data. The idea is important, but so are other concepts, and no one really cares too much. When was the last time Euler’s number came up at lunch?

A gaggle of Googlers extols the virtues of synthetic data in a 19-page arXiv document called “Synthetic Data from Diffusion Models Improves ImageNet Classification.” The main idea is that data derived from “real” data are an expedient way to improve some indexing tasks.

I am not sure that a quote from the paper will do much to elucidate this facet of the generative model world. The paper includes charts, graphs, references to math, footnotes, a few email addresses, some pictures, wonky jargon, and this conclusion:

And we have shown improvements to ImageNet classification accuracy extend to large amounts of generated data, across a range of ResNet and Transformer-based models.

The specific portion of this quote which is quite important in my experience is the segment “across a range of ResNet and Transformer-based models.” Translating to Harrod’s Creek lingo, I think the wizards are saying, “Synthetic data is really good for text too.”
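To make the idea concrete, here is a minimal sketch, not the Googlers’ actual pipeline, of what “train a classifier on a mix of real and diffusion-generated images” looks like in practice. The directory names and hyperparameters are hypothetical placeholders, and the sketch assumes both folders follow the usual ImageFolder layout with identical class subdirectories.

```python
# Minimal sketch: pad a "real" training set with diffusion-generated images
# and train an ordinary classifier on the combined data.
# Paths and hyperparameters below are hypothetical placeholders.
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms, models

tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

real = datasets.ImageFolder("data/imagenet_subset", transform=tfm)        # photographs
synthetic = datasets.ImageFolder("data/diffusion_samples", transform=tfm)  # generated look-alikes

# Mixing the two sets is the whole trick; the training loop is unchanged.
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=64, shuffle=True)

model = models.resnet50(num_classes=len(real.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

for images, labels in loader:   # a single pass; real runs take many epochs
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The paper’s contribution lies in how the generated images are produced and filtered; the training loop itself is this ordinary, which is part of why the approach is attractive to budget-minded wizards.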

What’s bubbling beneath the surface of this archly written paper? Here are my answers to this question:

  1. Synthetic data are a heck of a lot cheaper to generate for model training; therefore, embrace “good enough” and move forward. (Think profits and bonuses.)
  2. Synthetic data can be produced and updated more easily than fooling around with “real” data. Assembling training sets, running tests, deploying, and reprocessing are time sucks. (There is more work to do than humanoids to do it when it comes to training, which is needed frequently for some applications.)
  3. Synthetic datasets can be smaller. Even baby Satan aka Sam Altman is down with synthetic data. Why? Elon could only buy so many Nvidia processing units. Thus, finding a way to train models with synthetic data works around a supply bottleneck.

My summary of the Googlers’ article is much briefer than the original: Better, faster, cheaper.

You don’t have to pick one. Just believe the Google. Who does not trust the Google? Why not buy synthetic data and ready-to-deploy models for your next AutoGPT product? Google’s approach worked like a champ for online ads. Therefore, Google’s approach will work for your smart software. Trust Google.

Stephen E Arnold, April 27, 2023

Is It Lights Out on the Information Superhighway?

April 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

We just completed a lecture about the shadow web. This is our way of describing a number of technologies specifically designed to keep law enforcement, tax authorities, and other entities charged with enforcing applicable laws in the dark.

Among the tools available are roulette services. These can be applied to domain proxies so it is very difficult to figure out where a particular service is at a particular point in time. Tor has uttered noises about supporting the Mullvad browser and baking in a virtual private network. But there are other VPNs available, and one of the largest infrastructure service providers is under what appears to be “new” ownership. Change may create a problem for some enforcement entities. Other developers work overtime to provide services primarily to those who want to deploy what we call “traditional Dark Web sites.” Some of these obfuscation software components are available on Microsoft’s GitHub.

I want to point to “Global Law Enforcement Coalition Urges Tech Companies to Rethink Encryption Plans That Put Children in Danger from Online Abusers.” The main idea behind the joint statement (the one to which I point is from the UK’s National Crime Agency) is:

The announced implementation of E2EE on META platforms Instagram and Facebook is an example of a purposeful design choice that degrades safety systems and weakens the ability to keep child users safe. META is currently the leading reporter of detected child sexual abuse to NCMEC. The VGT has not yet seen any indication from META that any new safety systems implemented post-E2EE will effectively match or improve their current detection methods.

From my point of view, a questionable “player” has an opportunity to make it easier to enforce laws related to human trafficking, child safety, and related crimes like child pornography. Instead, the “player” seems interested in implementing encryption that would make government enforcement more difficult, if not impossible in some circumstances.
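For readers who wonder why end-to-end encryption and platform-side content scanning are in tension, here is a minimal sketch using the PyNaCl library. It is an illustration of the general principle only, not any platform’s actual protocol: once only the endpoints hold the keys, the relaying service handles nothing but ciphertext.

```python
# Minimal end-to-end encryption sketch with PyNaCl (libsodium bindings).
# Two hypothetical users, Alice and Bob, each hold a private key; the
# service in the middle relays ciphertext it cannot read.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key using her own private key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"the platform never sees this text")

# The relaying service can store and forward this, but not inspect it.
print(ciphertext.hex()[:48], "...")

# Only Bob, holding his private key, can recover the plaintext.
receiver_box = Box(bob_key, alice_key.public_key)
print(receiver_box.decrypt(ciphertext).decode())
```

Whatever content detection the service once performed on the server must either move to the endpoints or disappear, which is precisely the complaint in the law enforcement coalition’s statement.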

The actions of this “player” illustrate part of a fundamental change in the Internet. What was underground is now moving above ground. The implementation of encryption in messaging applications is a big step toward making the “regular” Internet, or what some called the Clear Web, into a new version of the Dark Web. Not surprisingly, the Dark Web will not go away, but why develop Dark Web sites when Clear Web services provide communications, secrecy, the ability to transmit images and videos, and the means to perform financial transactions related to these data? Thus the Clear Web is falling into the shadows.

My team and I are not pleased when a “player” ignores appropriate and what we call “ethical” behavior and takes specific actions that increase risks to average Internet users. In fact, some of the player’s actions are specifically designed to make the player’s service more desirable to a market segment once largely focused on the Dark Web.

More than suggestions are needed in my opinion. Direct action is required.

Stephen E Arnold, April 26, 2023

What Smart Software Will Not Know and That May Be a Problem

April 26, 2023

This blog post is the work of a real, live dinobaby. No smart software involved.

I read a short item called “Who Owns History? How Remarkable Historical Footage Is Hidden and Monetized.” The main point of the article was to promote a video which makes clear that big companies are locking “extraordinary footage… behind paywalls.” The focus is on images, a business I learned about from conversations with people with whom I worked years ago who managed image rights. The companies are history; for example, BlackStar and Modern Talking Pictures. And there were others.

Images are now a volleyball, and the new spikers on the Big Dog Team are smart-software-generated images. I have a hunch that individuals and companies will aggregate as many of these as possible. The images will then be subject to the classic “value adding” process and magically become for-fee. Image trolls will feast.

I don’t care too much about images. I do think more about textual and tabular content. The rights issue is a big one, but I came at smart software from a different angle. Smart software has to be trained, whether via a traditional human-constructed corpus, a fake-o corpus courtesy of the synthetic data wizards, or some shotgun marriage of “self-training” and a mash-up of other methods.

But what if important information is not available to the smart software? Won’t that smart software be like a student who signs up for Differential Geometry without Algebraic Topology? Lots of effort, but that insightful student may not be in gear to keep pace with other students in the class. Is not knowing the equivalent of being uninformed or just dumb?

One of the issues I have with smart software is that some content, which I think is essential to clear thinking, is not available to today’s systems. Let me give one example. In 1963, when I was a sophomore at a weird private university, a professor urged me to read the metaphysics text by a person named A. E. Taylor. The college I attended did not have too many of Dr. Taylor’s books. There was a copy of his Aristotle and nothing else. I did some hunting and located a copy of Elements of Metaphysics, a snappy thriller.

However, Dr. Taylor wrote a number of other books. I went looking for these because I assume that the folks training smart software want to make sure the “model” has information about the nature of information and related subjects. Guess what? Project Gutenberg, the Internet Archive, and the online gem Amazon have the Aristotle book and a couple of others. FYI: You can get a copy of A. E. Taylor’s Metaphysics for $3.88, a price illustrating the esteem in which Dr. Taylor’s work is held today.

My team and I ran some queries on the smart software systems to which we have access. We learned that information from Dr. Taylor is as scarce as hen’s teeth. We shifted gears and checked out information generated by the much-loved brother of Henry James. More of William James’s books were available at bargain basement prices. A collection of essays was less than $2 on Amazon.

My point is that images are likely to be locked up behind a paywall. However, books which may be important to one’s understanding of useless subjects like ethics, perception, and information are not informing the outputs of the smart software we probed. (Yes, we mean you, gentle Bard, and you too ChatGPT.)

Does the possible omission of these types of content make a difference?

Probably not. Embrace synthetic data. The “old” content is not digitally massaged. Who cares? We are in “good enough” land. It’s like a theme park with a broken rollercoaster and some dicey carnies.

Stephen E Arnold, April 26, 2023

The Google Reorg. Will It Output Xooglers, Not Innovations?

April 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

My team and I have been talking about the Alphabet decision to merge DeepMind with Google Brain. Viewed from one angle, the decision reflects the type of efficiency favored by managers who value the idea of streamlining. The arguments for consolidation are logical; for example, the old tried-and-true buzzword synergy may be invoked to explain the realignment. The decision makes business sense, particularly for an engineer or a number-oriented MBA, accountant, or lawyer.

Arguing against the “one plus one equals three” viewpoint may be those who have experienced the friction generated when staff, business procedures, and projects get close, interact, and release energy. I use the term “energy” to explain the dormant forces unleashed as the reorganization evolves. When I worked at a nuclear consulting firm early in my career, I recall the acrimonious and irreconcilable differences between a smaller unit in Florida and a major division in Maryland. The fix was to reassign personnel and give up on the dream of one big, happy group.

Image caption: “Googler for hire.”

This somewhat pathos-infused image was created using NightCafe Creator and Craiyon. The author (a dinobaby) added the caption which may appeal to large language model-centric startups with money, ideas, and a “we can do this” vibe.

Over the years, my team and I have observed Google’s struggles to innovate. The successes have been notable. Before the Alphabet entity was constructed, the “old” Google purchased Keyhole, Inc. (a spin-off of the gaming company Intrinsic). That worked after the US government invested in the company. There have been some failures too. My team followed the Orkut product, which evolved from a hire named Orkut Büyükkökten, who had developed an allegedly similar system, InCircle, at a previous employer. Orkut was a success, particularly among users in Brazil and a handful of other countries. However, some Orkut users relied on the system for activities which some found unacceptable. Google killed the social networking system in 2014 as Facebook surged to global prominence and Google’s efforts fell to earth. The company was in a position to be a player in social media, and it botched the opportunity. Steve Ballmer allegedly described Google as a “one-trick pony.” Mr. Ballmer’s touch point was Google’s dependence on online advertising: one source of revenue; therefore, a circus pony able to do one thing. Mr. Ballmer’s quip illustrates the fact that over the firm’s 20-plus year history, Google has not been able to diversify its revenue. More than two-thirds of the company’s money comes directly or indirectly from advertising.

My team and I have watched Google struggle to adapt its free-wheeling style to a more traditional business approach to policies and procedures. In one notable incident, my team and I were involved in reviewing proposals to index the content of the US Federal government. Google was one of the bidders. The Google proposal did not follow the expected format of responding to each individual requirement in the request for proposal. In 2000, Google professionals made it clear that the company’s method did not require the government’s statement of work to be followed. Other vendors responded, provided the required technical commentary, and produced cost estimates in a format familiar to those involved in the contract award process. Flash forward 23 years, and Google has figured out how to capture US government work.

The key point: The learning process took a long time.

Why is this example relevant to the Alphabet decision to blend the Brain and DeepMind units? Change, despite the myths of Silicon Valley, is difficult for Alphabet. The tensions at the company are well known. Employees and part-time workers grouse and sometimes carry signs and disturb traffic. Specific personnel matters become, rightly or wrongly, messages that say, “Google is unfair.” The Google management generated an international spectacle with its all-thumbs approach to human relations. Dr. Timnit Gebru was a co-author of a technical paper which identified a characteristic of smart software. She and several colleagues explained that bias in training data produces results which are skewed. Anyone who has used any of the search systems built on open source libraries created by Google knows that outputs are variable, which is a charitable way of saying, “Dr. Gebru was correct.” She became a Xoogler, set up a new organization, and organized a conference to further explain her research, the same research which ruffled the feathers of some Alphabet big birds.

The pace of generative artificial intelligence is accelerating. Disruption can be smelled like ozone in an old-fashioned electric power generation station. My team and I attempt to continue tracking innovations in smart software. We cannot do it. I am prepared to suggest that the job is quite challenging because the flow of new ChatGPT-type products, services, applications, and features is astounding. I recall the early days of the Internet when in 1993 I could navigate to a list of new sites via Mosaic browser and click on the ones of interest. I recall that in a matter of months the list grew too long to scan and was eventually discontinued. Smart software is behaving in this way: Too many people are doing too many new things.

I want to close this short personal essay with several points.

First, mashing up different cultures and a history of differences will act like a brake and add friction to innovative work. Such reorganizations will generate “heat” in the form of disputes, overt or quiet quitting, and an increase in productivity killers like planning meetings, internal product pitches, and getting legal’s blessing on a proposed service.

Second, a revenue monoculture is in danger when one pest runs rampant. Alphabet does not have a mechanism to slow down what is happening in the generative AI space. In online advertising, Google has knobs and levers. In the world of creating applications and hooking them together to complete tasks, Alphabet management seems to lack a magic button. The pests just eat the monoculture’s crop.

Third, the unexpected consequence of merging Brain and DeepMind may be creating what I call a “Xoogler Manufacturing Machine.” Annoyed or “grass is greener” Google AI experts may go to one of the many promising generative AI startups. Note: A former Google employee is sometimes labeled a “Xoogler,” which is shorthand for ex-Google employee.

Net net: In a conversation in 2005 with a Google professional whom I cannot name due to the confidentiality agreement I signed with the firm, I asked, “Do you think people and government officials will figure out what Google is really doing?” This person, who was a senior manager, said, to the best of my recollection, “Sure, and when people do, it’s game over.” My personal view is that Alphabet is in a game in which the clock is ticking. And as Alphabet underperforms, its advertisers and users of free and for-fee services will shift their attention elsewhere, probably to a new or more agile firm able to leverage smart software. Alphabet’s most recent innovation is the creation of a Xoogler manufacturing system. The product? Former Google employees who want to do something instead of playing in the Alphabet sandbox with argumentative wizards and several ill-behaved office pets.

Stephen E Arnold, April 25, 2023

Google: A PR Special Operation Underway

April 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

First, US television on Sunday, April 16, 2023. Then assorted blog posts and articles by Google friends like Inc. Magazine. Now the British Guardian newspaper hops on the bandwagon.

Navigate to “Google Chief Warns AI Could Be Harmful If Deployed Wrongly.” Let me highlight a couple of statements in the write up and then offer a handful of observations designed intentionally to cause some humanoids indigestion.

The article includes this statement:

Sundar Pichai also called for a global regulatory framework for AI similar to the treaties used to regulate nuclear arms use, as he warned that the competition to produce advances in the technology could lead to concerns about safety being pushed aside.

Also, this gem:

Pichai added that AI could cause harm through its ability to produce disinformation.

And one more:

Pichai admitted that Google did not fully understand how its AI technology produced certain responses.

Enough. I want to shift to the indigestion inducing portion of this short essay.

First, Google is in Code Red. Why? What were the search wizards under the guidance of Sundar and Prabhakar doing for the last year? Obviously not paying attention to the activity of OpenAI. Microsoft was paying attention, and it stole the show at the hoedown in Davos. Now Microsoft has made available a number of smart services designed to surf on its marketing tsunami and provide more reasons for enterprise customers to pay for smart Microsoft software. Neither the Guardian nor Sundar seems willing to talk about the reality of Google finding itself in the position of Alta Vista, Lycos, or WebCrawler in the late 1990s and early 2000s, when Google search delivered relevant results. At least Google did until it was inspired by the Yahoo, GoTo, and Overture approach to making cash. Back to the question: Why ignore the fact that Google is in Code Red? Why not ask one half of the Sundar and Prabhakar Comedy Team how they got aced by a non-headliner act at the smart software vaudeville show?

Second, I loved the “could cause harm.” What about the Android malware issue? What about the ads which link to malware in Google search results? What about the monopolization of online advertising and the pricing of ads beyond the reach of many small businesses? What about the “interesting” videos on YouTube? Google has its eye on the “could” of smart software without paying much attention to the here-and-now downsides of its current business. And disinformation? What is Google doing to scrub that content from its search results? My team identified a distributor of pornography operating in Detroit. That operator’s content can be located with a single Google query. If Google cannot identify porn, how will it flag smart software’s “disinformation”?

Finally, Google for decades has made a big deal of hiring the smartest people in the world. There was a teen whiz kid in Moscow. There was a kid in San Jose with a car service to get him from high school to the Mountain View campus. There is DeepMind with its “deep” team of wizards. Now this outfit with more than 100,000 (more or less) full-time geniuses does not know how its software works. How will that type of software be managed by the estimable Google? The answer is, “It won’t.” Google’s ability to manage is evident in heartbreaking stories about its human relations and personnel actions. There are smart Googlers who think the software is alive. Do these people have company-paid mental health care? There are small businesses, like an online automobile site, in ruins because a Googler downchecked the site years ago for an unknown reason. The Google is going to manage something well?

My hunch is that Google wants to make sure that it becomes the primary vendor of ready-to-roll training data and microwavable models. The fact that Amazon, Microsoft, and a group of Chinese outfits are on the same information superhighway illustrates one salient fact: The PR tsunami highlights Google’s lack of positive marketing action and the taffy-pull sluggishness of demos that sort of work.

What about the media which ask softball questions and present, as substance, recommendations that the world agree on AI rules? Perhaps Google should offer to take over the United Nations or form a World Court of AI Technology? Maybe Google should just be allowed to put other AI firms out of business and keep trying to build a monopoly based on software the company doesn’t appear to understand?

The good news is that Sundar did not reprise the Paris demonstration of Bard. That only cost the company a few billion when the smart software displayed its ignorance. That was comedic, and I think these PR special operations are fodder for the spring Sundar and Prabhakar tour of major cities.

The T-shirts will not feature a dinosaur (Googzilla, I believe) freezing in a heavy snowstorm. The art can be produced using Microsoft Bing’s functions too. And that will be quite convenient if Samsung ditches Google search for Bing and its integrated smart software. To add a bit of spice to Googzilla’s catered lunch is the rumor that Apple may just go Bing. Bye, bye billions, baby, bye bye.

If that happens, Google loses: [a] a pickup truck filled with cash, [b] even more technical credibility, and [c] maybe Googzilla’s left paw and a fang. Can Sundar and Prabhakar get applause when doing one-liners with one or two performers wearing casts and sporting a tooth gap?

Stephen E Arnold, April 25, 2023

NSO Group: How Easy Are Mobile Hacks?

April 25, 2023

I am at the 2023 US National Cyber Crime Conference, and I have been asked, “What companies offer NSO-type mobile phone capabilities?” My answer is, “Quite a few.” Will I name these companies in a free blog post? Sure, just call us at 1-800-YOU-WISH.

A more interesting question is, “Why is Israel-based NSO Group the pointy end of a three meter stick aimed at mobile devices?” (To get some public information about newly recognized NSO Group (Pegasus) tricks, navigate to “Triple Threat. NSO Group’s Pegasus Spyware Returns in 2022 with a Trio of iOS 15 and iOS 16 Zero-Click Exploit Chains.” I would point out that the reference to Access Now is interesting, and a crime analyst may find a few minutes examining what the organization does, its “meetings,” and its hosting services time well spent. Will I provide that information in a free blog post? Please, call the 800 number listed above.)

Now let’s consider the question regarding the productivity of the NSO technical team.

First, Israel’s defense establishment contains many bright people and a world-class training program. What happens when you take well-educated people, the threat of war without warning, and an outstanding in-service instructional setup? The answer is, “Ideas get converted into exercises. Exercises become test code. Test code gets revised. And the functional software becomes weaponized.”

Second, the “in our foxhole” mentality persists once trained military specialists leave formal service and enter the commercial world. As a result, individuals who studied, worked, and in some cases, fought together set up companies. These individuals are a bit like beavers. Beavers do what beavers do. Some of these firms replicate functionality similar to that developed under the government’s watch and sell those products. Please note that NSO Group is an exception of sorts. Some of the “insights” originated when the founders were repairing mobile phones. The idea, however, is the same: learning, testing, deploying, and hiring individuals with specialized training from the Israeli government. Keep in mind the “in our foxhole” notion, please.

Third, directly or indirectly, important firms in Israel or, in some cases, government-assisted development programs provide: [a] money, [b] meet-up opportunities like “tech fests” in Tel Aviv, and [c] suggestions about whom to hire, partner with, consult with, or be aware of.

Do these conditions exist in other countries? In my experience, this approach to mobile technology exploits does exist elsewhere to some degree. There are important differences. If you want to know what these are, you know the answer. Buzz that 800 number.

My point is that the expertise, insights, systems, and methods of what the media calls “the NSO Group” have diffused. As a result, there are more choices than ever before when it comes to exploiting mobile devices.

Where’s Apple? Where’s Google? Where’s Samsung? The firms, in my opinion, are in reactive mode, and, in some cases, they don’t know what they don’t know.

Stephen E Arnold, April 25, 2023
