An Interesting Prediction about Mobile Phones

April 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have hated telephone calls for decades: Intrusive, phone tag baloney, crappy voice mail systems, and wacko dialing codes when in a country in which taxis are donkeys. No thanks. But the mobile phone revolution is here. Sure, I have a mobile phone. Plus, I have a Chinese job just to monitor data flows. And I have an iPhone which I cart around to LE trade shows to see if a vendor can reveal the bogus data we put on the device.


What’s the future? An implant? Yeah, that sounds like a Singularity thing or a big earring, a wire, and a battery pack which can power a pacemaker, an artificial kidney, and an AI processing unit. What about a device that is smart and replaces the metal candy bar, which has not manifested innovations in the last five or six years? I don’t care about a phone which is capable of producing TikToks.

The future of the phone has been revealed in the online publication Phone Arena. “AI Will Kill the Smartphone As We Know It. Here’s Why!” explains:

I know the idea may sound very radical at first glance, but if we look with a cold, objective eye at where the world is going with the software as a service model, it suddenly starts to sound less radical.

The idea is that the candy bar device will become a key fob, a decorative pin (maybe a big decorative pin), a medallion on a thick gold chain (rizz, right?), or maybe a shrinkflation candy bar?

My own sense of the future is skewed because I am a dinobaby. I have a cheapo credit card which is a semi-reliable touch-and-tap gizmo. Why not use a credit card form factor with a small screen (obviously unreadable by a dinobaby, but designers don’t care about dinobabies in my experience)? With ambient functionality, the card “just connects” and one can air talk and read answers on the unreadable screen. Alternatively, one’s wireless ear buds can handle audio duties.

Net net: The AI function is interesting. However, other technical functions will have to become available. Until then, keep upgrading those mobile phones. No, I won’t answer. No, I won’t click on texts from numbers I don’t have on a white list. No, I won’t read social media baloney. That’s a lot of no’s, isn’t it? Too bad. When you are a dinobaby, you will understand.

Stephen E Arnold, April 15, 2024

Taming AI Requires a Combo of AskJeeves and Watson Methods

April 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I spotted a short item called “A Faster, Better Way to Prevent an AI Chatbot from Giving Toxic Responses.” The operative words from my point of view are “faster” and “better.” The write up reports (with a serious tone, of course):

Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.

Yep, AskJeeves created rules. As long as the users of the system asked a question for which there was a rule, the helpful servant worked; for example, What’s the weather in San Francisco? However, ask a question for which there was no rule, and what happened? The search engine reality fell behind the marketing juice, and the property got shopped around until a less magical version appeared as Ask.com. And then there is IBM Watson. That system endeared itself to groups of physicians who were invited to answer IBM “experts’” questions about cancer treatments. I heard when Watson was in full medical-revolution mode that some docs in a certain Manhattan hospital used dirty words to express their views about the Watson method. Rumor or actual factual? I don’t know, but involving humans in making software smart can be fraught with challenges: managerial and financial, to name but two.


The write up says:

Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested. They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model. The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.

How much improvement? Does the training stick or does it demonstrate that charming “Bayesian drift” which allows the probabilities to go walk-about, nibble some magic mushrooms, and generate fantastical answers? How long did the process take? Was it iterative? So many questions, and so few answers.
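
If you want to picture the curiosity idea without the paper’s math, here is a minimal sketch in Python. It is emphatically not the MIT-IBM method; the target model, the toxicity scorer, and the candidate prompts are hypothetical stand-ins, and the “reward” simply adds a novelty bonus to a toxicity score, which is the general flavor of curiosity-driven red-teaming.

```python
# Toy sketch of curiosity-driven red-teaming. The target model and the
# toxicity scorer below are hypothetical stand-ins, not real APIs.
import random
from difflib import SequenceMatcher

def target_model(prompt: str) -> str:
    # Stand-in for the chatbot being red-teamed.
    return "stub response to: " + prompt

def toxicity_score(text: str) -> float:
    # Stand-in for a toxicity classifier returning a value in [0, 1].
    return random.random()

def novelty_score(prompt: str, seen: list[str]) -> float:
    # Curiosity bonus: higher when the prompt differs from prompts already tried.
    if not seen:
        return 1.0
    max_sim = max(SequenceMatcher(None, prompt, s).ratio() for s in seen)
    return 1.0 - max_sim

def red_team_step(candidates: list[str], seen: list[str]) -> tuple[str, float]:
    # Keep the candidate whose reward (toxicity of the reply plus novelty) is highest.
    best_prompt, best_reward = candidates[0], float("-inf")
    for p in candidates:
        reward = toxicity_score(target_model(p)) + novelty_score(p, seen)
        if reward > best_reward:
            best_prompt, best_reward = p, reward
    return best_prompt, best_reward

if __name__ == "__main__":
    history: list[str] = []
    for _ in range(3):
        candidates = [f"candidate prompt {random.randint(0, 999)}" for _ in range(5)]
        chosen, reward = red_team_step(candidates, history)
        history.append(chosen)
        print(f"{chosen} -> reward {reward:.3f}")
```

In a real pipeline the candidate prompts would come from a red-team language model updated with this sort of reward, not from random strings; the sketch only shows why novelty plus toxicity pushes the generator toward new failure modes.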

But for this group of AI wizards, the future is curiosity-driven red-teaming. Presumably the smart software will not get lost, suffer heat stroke, and hallucinate. No toxicity, please.

Stephen E Arnold, April 15, 2024

Publishers Not Thrilled with Internet Archive

April 15, 2024

So you are saving the library of an island? So what?

The non-profit Internet Archive (IA) preserves digital history. It also archives a wealth of digital media, including a large number of books, for the public to freely access. Certain major publishers are trying to stop the organization from sharing their books. These firms just scored a win in a New York federal court. However, the IA is not giving up. In its defense, the organization has pointed to the opinions of authors and copyright scholars. Now, Hachette, HarperCollins, John Wiley, and Penguin Random House counter with their own roster of experts. TorrentFreak reports, “Publishers Secure Widespread Support in Landmark Copyright Battle with Internet Archive.” Journalist Ernesto Van der Sar writes:

“The importance of this legal battle is illustrated by the large number of amicus briefs that are filed by third parties. Previously, IA received support from copyright scholars and the Authors Alliance, among others. A few days ago, another round of amicus came in at the Court of Appeals, this time to back the publishers who filed their reply last week. In more than a handful of filings, prominent individuals and organizations urge the Appeals Court not to reverse the district court ruling, arguing that this would severely hamper the interests of copyright holders. The briefs include positions from industry groups such as the MPA, RIAA, IFPI, Copyright Alliance, the Authors Guild, various writers unions, and many others. Legal scholars, professors, and former government officials, also chimed in.”

See the article for more details on those chimes. A couple points to highlight: First, AI is a part of this because of course it is. Several trade groups argue IA makes high-quality texts too readily available for LLMs to train upon, posing an “artificial intelligence” threat. Also of interest are the opinions that differentiate this case from the Google Books precedent. We learn:

“[Scholars of relevant laws] stress that IA’s practice should not be seen as ‘transformative’ fair use, arguing that the library offers a ‘substitution’ for books that are legally offered by the publishers. This sets the case apart from current legal precedents including the Google Books case, where Google’s mass use of copyrighted books was deemed fair use. ‘IA’s exploitation of copyrighted books is thus the polar opposite of the copying that was found to be transformative in Google Books and HathiTrust. IA offers no “utility-expanding” searchable database to its subscribers.’”

Ah, the devilish details. Will these amicus-rich publishers prevail, or will the decision be overturned on IA’s appeal?

Cynthia Murrell, April 15, 2024

Is This Incident the Price of Marketing: A Lesson for Specialized Software Companies

April 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A comparatively small number of firms develop software and provide specialized services to analysts, law enforcement, and intelligence entities. When I started work at a nuclear consulting company, these firms were low profile. In fact, if one tried to locate the names of the companies in one of those almost-forgotten reference books (remember telephone books?), the job was a tough one. First, the firms would have names which meant zero; for example, Rice Labs or Gray & Associates. Second, if one were to call, a human (often a person with a British accent) would politely inquire, “To whom did you wish to speak?” The answer had to conform to a list of acceptable responses. Third, if you were to hunt up the address, you might find yourself in Washington, DC, staring at the second floor of a non-descript building once used to bake pretzels.


Decisions, decisions. Thanks, MSFT Copilot. Good enough. Does that phrase apply to one’s own security methods?

Today, the world is different. A country now engaged in a controversial dust up in the Eastern Mediterranean has specialized firms with Web sites that publicize their capabilities as mechanisms to know your customer or make sense of big data. The outfits have trade show presences. One outfit, despite being the poster child for going off the rails, gives lectures and provides previews of its technologies at public events. How times have changed since I began working in commercial and government projects in the early 1970s.

Every company, including those engaged in the development and deployment of specialized policeware and intelware, is into marketing. The reason is cultural. Madison Avenue is the whoo-whoo part of doing something quite interesting and wanting to talk about the activity. The other reason is financial. Cracking tough technical problems costs money, and those who have the requisite skills are in demand. The fix, from my point of view, is to try to operate with a public presence while doing the less visible, often secret work required of these companies. The evolution of the specialized software business has been similar to figuring out how to walk a high wire over a circus crowd. Stay on the wire and the outfit is visible and applauded. Fall off the wire and fail big time. But more and more specialized software vendors make the decision to try to become visible and get recognition for their balancing act. I think the optimal approach is to stay out of the big tent and avoid the temptations of fame, bright lights, and falling to one’s death.

“Why CISA Is Warning CISOs about a Breach at Sisense” provides a good example of public visibility and falling off the high wire. The write up says:

New York City based Sisense has more than a thousand customers across a range of industry verticals, including financial services, telecommunications, healthcare and higher education. On April 10, Sisense Chief Information Security Officer Sangram Dash told customers the company had been made aware of reports that “certain Sisense company information may have been made available on what we have been advised is a restricted access server (not generally available on the internet.)”

Let me highlight one other statement in the write up:

The incident raises questions about whether Sisense was doing enough to protect sensitive data entrusted to it by customers, such as whether the massive volume of stolen customer data was ever encrypted while at rest in these Amazon cloud servers. It is clear, however, that unknown attackers now have all of the credentials that Sisense customers used in their dashboards.

This firm enjoys some visibility because it markets itself using the hot button “analytics.” The function of some of the Sisense technology is to integrate “analytics” into other products and services. Thus it is an infrastructure company, but one that may have more capabilities than other types of firms. The company has non-commercial customers as well. If one wants to get “inside” data, Sisense has done a good job of marketing. The visibility makes the company easy to watch. Someone with skills and a motive can put grease on the high wire. The article explains what happens when the performer slips: “More than a thousand customers.”

How can a specialized software company avoid a breach? One step is to avoid visibility. Another is to curtail dreams of big money. Redefine success because those in your peer group won’t care much about you with or without big bucks. I don’t think that is part of the game plan of many specialized software companies today. Each time I visit a trade show featuring specialized software firms as speakers and exhibitors, I marvel at the razz-ma-tazz the firms bring to the show. Yes, there is competition. But when specialized software companies, particularly those in the policeware and intelware business, market to both commercial and non-commercial firms, that marketing increases their visibility. The visibility attracts bad actors the way Costco roasted chicken makes my French bulldog shiver with anticipation. Tibby wants that chicken. But he is not a bad actor and will not get out of bounds. Others do get out of bounds. The fix is to move the chicken, then put it in the fridge. Tibby will turn his attention elsewhere. He is a dog.

Net net: Less blurring of commercial and specialized customer services might be useful. Fewer blogs, podcasts, crazy marketing programs, and oddly detailed marketing write ups to government agencies. (Yes, these documents can be FOIAed by the Brennan folks, for instance. Yes, those brochures and PowerPoints can find their way to public repositories.) Less marketing. More judgment. Increased security attention, please.

Stephen E Arnold, April 12, 2024

Are Experts Misunderstanding Google Indexing?

April 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Google is not perfect. More and more people are learning that the mystics of Mountain View are working hard every day to deliver revenue. In order to produce more money and profit, one must use Rust to become twice as wonderful as a programmer who labors to make C++ sit up, bark, and roll over. This dispersal of the cloud of unknowing obfuscating the magic of the Google can be helpful. What’s puzzling to me is that what Google does catches people by surprise. For example, consider the “real” news presented in “Google Books Is Indexing AI-Generated Garbage.” The main idea strikes me as:

But one unintended outcome of Google Books indexing AI-generated text is its possible future inclusion in Google Ngram viewer. Google Ngram viewer is a search tool that charts the frequencies of words or phrases over the years in published books scanned by Google dating back to 1500 and up to 2019, the most recent update to the Google Books corpora. Google said that none of the AI-generated books I flagged are currently informing Ngram viewer results.


Thanks, Microsoft Copilot. I enjoyed learning that security is a team activity. Good enough again.

Indexing lousy content has been the core function of Google’s Web search system for decades. Search engine optimization generates information almost guaranteed to drag down how higher-value content is handled. If the flagship provides the navigation system to other ships in the fleet, won’t those vessels crash into bridges?

Remediating Google’s approach to indexing requires several basic steps. (I have in various ways shared these ideas with the estimable Google over the years. Guess what? No one cared or understood, and if a Googler did understand, he or she did not want to increase overhead costs.) So what are these steps? I shall share them:

  1. Establish an editorial policy for content. Yep, this means that a system and method or systems and methods are needed to determine what content gets indexed.
  2. Explain the editorial policy and what a person or entity must do to get content processed and indexed by the Google, YouTube, Gemini, or whatever the mystics in Mountain View conjure into existence.
  3. Include metadata with each content object so one knows the index date, the content object creation date, and similar information. (A small sketch of such a record appears after this list.)
  4. Operate in a consistent, professional manner over time. The “gee, we just killed that” is not part of the process. Sorry, mystics.
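
To make item three concrete, here is a minimal sketch of what a metadata record for one content object might contain. The field names are my own invention for illustration, not any actual Google schema.

```python
# A hypothetical metadata record for one indexed content object.
# Field names are illustrative, not any actual Google schema.
content_object_metadata = {
    "object_id": "example-0001",
    "source_url": "https://example.com/sample-page",
    "content_created": "2024-03-01",   # when the object was created
    "content_modified": "2024-03-15",  # last known revision
    "indexed_on": "2024-04-12",        # when the crawler processed it
    "declared_author": "Example Author",
    "generation_method": "unknown",    # human, AI-generated, or mixed
    "editorial_policy_version": "1.0", # which policy admitted the object
}

# With fields like these attached, a reader (or an Ngram-style tool)
# could at least filter by index date or by generation method.
print(content_object_metadata["indexed_on"])
```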

Let me offer several observations:

  1. Google, like any alleged monopoly, faces significant management challenges. Moving information within such an enterprise is difficult. For an organization with a Foosball culture, the task may be a bit outside the wheelhouse of most young people and individuals who are engineers, not presidents of fraternities or sororities.
  2. The organization is under stress. First, the pressure is financial because controlling the cost of the plumbing is a reasonably difficult undertaking. Second, there is technical pressure. Google itself made clear that it was in Red Alert mode and keeps adding flashing lights with each and every misstep the firm’s wizards make. These range from contentious relationships with mere governments to individual staff members who grumble via internal emails, to angry Googler public utterances, to observed behavior at conferences. Body language does speak sometimes.
  3. The approach to smart software is remarkable. Individuals in the UK pontificate. The Mountain View crowd reassures and smiles — a lot. (Personally I find those big, happy looks a bit tiresome, but that’s a dinobaby for you.)

Net net: The write up does not address the issue that Google happily exploits. The company lacks the mental rigor setting and applying editorial policies requires. SEO is good enough to index. Therefore, fake books are certainly A-OK for now.

Stephen E Arnold, April 12, 2024

AI Will Take Jobs for Sure: Money Talks, Humans Walk

April 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Report Shows Managers Eager to Replace or Devalue Workers with AI Tools

Bosses have had it with the worker-favorable labor market that emerged from the pandemic. Fortunately, there is a new option that is happy to be exploited. We learn from TechSpot that a recent “Survey Reveals Almost Half of All Managers Aim to Replace Workers with AI, Could Use It to Lower Wages.” The report is by Beautiful.ai, which did its best to spin the results as a trend toward collaboration, not pink slips. Nevertheless, the numbers seem to back up worker concerns. Writer Rob Thubron summarizes:

“A report by Beautiful.ai, which makes AI-powered presentation software, surveyed over 3,000 managers about AI tools in the workplace, how they’re being implemented, and what impact they believe these technologies will have. The headline takeaway is that 41% of managers said they are hoping that they can replace employees with cheaper AI tools in 2024. … The rest of the survey’s results are just as depressing for worried workers: 48% of managers said their businesses would benefit financially if they could replace a large number of employees with AI tools; 40% said they believe multiple employees could be replaced by AI tools and the team would operate well without them; 45% said they view AI as an opportunity to lower salaries of employees because less human-powered work is needed; and 12% said they are using AI in hopes to downsize and save money on worker salaries. It’s no surprise that 62% of managers said that their employees fear that AI tools will eventually cost them their jobs. Furthermore, 66% of managers said their employees fear that AI tools will make them less valuable at work in 2024.”

Managers themselves are not immune to the threat: Half of them said they worry their pay will decrease, and 64% believe AI tools do their jobs better than experienced humans do. At least they are realistic. Beautiful.ai stresses another statistic: 60% of respondents who are already using AI tools see them as augmenting, not threatening, jobs. The firm also emphasizes the number of managers who hope to replace employees with AI decreased “significantly” since last year’s survey. Progress?

Cynthia Murrell, April 12, 2024

The Only Dataset Search Tool: What Does That Tell Us about Google?

April 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

If you like semi-jazzy, academic write ups, you will revel in “Discovering Datasets on the Web Scale: Challenges and Recommendations for Google Dataset Search.” The write up appears in a publication associated with Jeffrey Epstein’s favorite university. It may be worth noting that MIT and Google have teamed to offer a free course in Artificial Intelligence. That is the next big thing which does hallucinate at times while creating considerable marketing angst among the techno-giants jousting to emerge as the go-to source of the technology.

Back to the write up. Google created a search tool to allow a user to locate datasets accessible via the Internet. There are more than 700 data brokers in the US. These outfits will sell data to most people who can pony up the cash. Examples range from six figure fees for the Twitter stream to a few hundred bucks for boat license holders in states without much water.

The write up says:

Our team at Google developed Dataset Search, which differs from existing dataset search tools because of its scope and openness: potentially any dataset on the web is in scope.


A very large, money oriented creature enjoins a worker to gather data. If someone asks, “Why?”, the monster says, “Make up something.” Thanks MSFT Copilot. How is your security today? Oh, that’s too bad.

The write up does the academic thing of citing articles which talk about data on the Web. There is even a table which organizes the types of data discovery tools. The categorization of general and specific is brilliant. Who would have thought there were two categories of a vertical search engine focused on Web-accessible data? I thought there was just one category; namely, gettable. The idea is that if the data are exposed, take them. Asking permission just costs time and money. The related idea is that one can apologize and keep the data.

The article includes a Googley graphic. The French portal, the Italian “special” portal, and the Harvard “dataverse” are identified. Were there other Web accessible collections? My hunch is that Google’s spiders suck down, as one famous Googler said, “all” the world’s information. I will leave it to your imagination to fill in other sources for the dataset pages. (I want to point out that Google has some interesting technology related to converting data sets into normalized data structures. If you are curious about the patents, just write benkent2020 at yahoo dot com, and one of my researchers will send along a couple of US patent numbers. Impressive system and method.)

The section “Making Sense of Heterogeneous Datasets” is peculiar. First, the Googlers discovered the basic fact of data from different sources: the data structures vary. Think in terms of grapes and deer droppings. Second, the data cannot be “trusted.” There is no fix to this issue for the team writing the paper. Third, the authors appear to be unaware of the patents I mentioned, particularly the useful example about gathering and normalizing data about digital cameras. The method applies to other types of processed data as well.
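
For readers who wonder what normalizing heterogeneous records looks like in practice, here is a toy sketch. It is not the patented Google method; the source formats, field names, and the exchange rate are invented for illustration.

```python
# Toy normalization of two differently structured camera records into a
# common schema. Schemas and field names are invented for illustration;
# this is not the patented method referenced above.

def normalize_vendor_a(rec: dict) -> dict:
    return {
        "model": rec["product_name"],
        "megapixels": float(rec["mp"]),
        "price_usd": rec["price"],
    }

def normalize_vendor_b(rec: dict) -> dict:
    return {
        "model": rec["title"],
        "megapixels": float(rec["resolution"].rstrip(" MP")),
        "price_usd": round(rec["price_eur"] * 1.07, 2),  # assumed FX rate
    }

raw_a = {"product_name": "Camera X100", "mp": "24", "price": 899.0}
raw_b = {"title": "Camera X100", "resolution": "24 MP", "price_eur": 850.0}

# Both records end up with the same fields, so they can be compared or merged.
normalized = [normalize_vendor_a(raw_a), normalize_vendor_b(raw_b)]
print(normalized)
```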

I want to jump to the “beyond metadata” idea. This is the mental equivalent of “popping” up a perceptual level. Metadata are quite important and useful. (Isn’t it odd that Google strips high value metadata from its search results; for example, time and date?) The authors of the paper work hard to explain that the Google approach to data set search adds value by grouping, sorting, and tagging with information not in any one data set. This is common sense, but the Googley spin on this is to build “trust.” Remember: This is an alleged monopolist engaged in online advertising and co-opting certain Web services.

Several observations:

  1. This is another of Google’s high-class PR moves. Hooking up with MIT and delivering razz-ma-tazz about identifying spiderable content collections in the name of greater good is part of the 2024 Code Red playbook it seems. From humble brag about smart software to crazy assertions like quantum supremacy, today’s Google is a remarkable entity
  2. The work on this “project” is divorced from time. I checked my file of Google-related information, and I found no information about the start date of a vertical search engine project focused on spidering and indexing data sets. My hunch is that it has been in the works for a while, although I can pinpoint 2006 as a year in which Google’s technology wizards began to talk about building master data sets. Why no time specifics?
  3. I found the absence of AI talk notable. Perhaps Google does not think a reader will ask, “What’s with the use of these data?” I can’t use this tool, so why spend the time, effort, and money to index information from a country like France, which is not one of Google’s biggest fans? (Paris was, however, the roll out choice for the answer to Microsoft’s and ChatGPT’s smart software announcement. Plus, that presentation featured incorrect information as I recall.)

Net net: I think this write up with its quasi-academic blessing is a bit of advance information to use in the coming wave of litigation about Google’s use of content to train its AI systems. This is just a hunch, but there are too many weirdnesses in the academic write up to write off as intern work or careless research writing which is more difficult in the wake of the stochastic monkey dust up.

Stephen E Arnold, April 11, 2024

Google: The DMA Makes Us Harm Small Business

April 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I cannot estimate the number of hours Googlers invested in crafting the short essay “New Competition Rules Come with Trade-Offs.” I find it a work of art. Maybe not the equal of Dante’s La Divina Commedia, but it is darned close.


A deity, possibly associated with the quantumly supreme, reassures a human worried about life. Words are reality, at least to some fretful souls. Thanks MSFT Copilot. Good enough.

The essay pivots on unarticulated and assumed “truths.” Particularly charming are these:

  1. “We introduced these types of Google Search features to help consumers”
  2. “These businesses now have to connect with customers via a handful of intermediaries that typically charge large commissions…”
  3. “We’ve always been focused on improving Google Search….”

The first statement implies that Google’s efforts have been the “help.” Interesting: I find Google search often singularly unhelpful, returning results for malware, biased information, and Google itself.

The second statement indicates that “intermediaries” benefit. Isn’t Google an intermediary? Isn’t Google an alleged monopolist in online advertising?

The third statement is particularly quantumly supreme. Note the word “always.” John Milton uses such verbal efflorescence when describing God. Yes, “always” and improving. I am tremulous.

Consider this lyrical passage and the elegant logic of:

We’ll continue to be transparent about our DMA compliance obligations and the effects of overly rigid product mandates. In our view, the best approach would ensure consumers can continue to choose what services they want to use, rather than requiring us to redesign Search for the benefit of a handful of companies.

Transparent invokes an image of squeaky clean glass in a modern, aluminum-framed window, scientifically sealed to prevent its unauthorized opening or repair by anyone other than a specially trained transparency provider. I like the use of the adjective “rigid” because it implies a sturdiness which may cause the transparent window to break when inclement weather (blasts of hot and cold air from oratorical emissions) stresses the see-through structures. Then there is the adult-father-knows-best reference in “In our view, the best approach.” Very parental. Does this suggest the EU is childish?

Net net: Has anyone compiled the Modern Book of Google Myths?

Stephen E Arnold, April 11, 2024

Tennessee Sends a Hunk of Burnin’ Love to AI Deep Fakery

April 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Leave it to the state that houses Music City. NPR reports, “Tennessee Becomes the First State to Protect Musicians and Other Artists Against AI.” Courts have demonstrated existing copyright laws are inadequate in the face of generative AI. This update to the state’s existing law is named the Ensuring Likeness Voice and Image Security Act, or ELVIS Act for short. Clever. Reporter Rebecca Rosman writes:

“Tennessee made history on Thursday, becoming the first U.S. state to sign off on legislation to protect musicians from unauthorized artificial intelligence impersonation. ‘Tennessee is the music capital of the world, & we’re leading the nation with historic protections for TN artists & songwriters against emerging AI technology,’ Gov. Bill Lee announced on social media. While the old law protected an artist’s name, photograph or likeness, the new legislation includes AI-specific protections. Once the law takes effect on July 1, people will be prohibited from using AI to mimic an artist’s voice without permission.”

Prominent artists and music industry groups helped push the bill since it was introduced in January. Flanked by musicians and state representatives, Governor Bill Lee theatrically signed it into law on stage at the famous Robert’s Western World. But what now? In its write-up, “TN Gov. Lee Signs ELVIS Act Into Law in Honky-Tonk, Protects Musicians from AI Abuses,” The Tennessean briefly notes:

“The ELVIS Act adds artist’s voices to the state’s current Protection of Personal Rights law and can be criminally enforced by district attorneys as a Class A misdemeanor. Artists—and anyone else with exclusive licenses, like labels and distribution groups—can sue civilly for damages.”

While much of the music industry is located in and around Nashville, we imagine most AI mimicry does not take place within Tennessee. It is tricky to sue someone located elsewhere under state law. Perhaps this legislation’s primary value is as an example to lawmakers in other states and, ultimately, at the federal level. Will others be inspired to follow the Volunteer State’s example?

Cynthia Murrell, April 11, 2024

Has Google Aligned Its AI Messaging for the AI Circus?

April 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I followed the announcements at the Google shindig Cloud Next. My goodness, Google’s Code Red has produced quite a few new announcements. However, I want to ask a simple question, “Has Google organized its AI acts under one tent?” You can wallow in the Google AI news because TechMeme on April 10, 2024, has a carnival midway of information.

I want to focus on one facet: The enterprise transformation underway. Google wants to cope with Microsoft’s pushing AI into the enterprise, into the Manhattan chatbot, and into the government. One example of what Google envisions is what it calls “genAI agents.” Explaining scripts with smarts requires a diagram. Here’s one, courtesy of Constellation Research:

image

Look at the diagram. The “customer”, which is the organization, is at the center of a Googley world: plumbing, models, and a “platform.” Surrounding this core with the customer at the center are scripts with smarts. These will do customer functions. This customer, of course, is the customer of the real customer, the organization. The genAI agents will do employee functions, creative functions, data functions, code functions, and security functions. The only missing function is the “paying Google function,” but that is baked into the genAI approach.

If one accepts the myriad announcements as the “as is” world of Google AI, the Cloud Next conference will have done its job. If you did not get the memo, you may see the Googley diagram as the work of enthusiastic marketers and the quantumly supreme lingo as more evidence that the verbiage is one output of the Code Red initiative.

I want to call attention, however, to the information in the allegedly accurate “Google DeepMind’s CEO Reportedly Thinks It’ll Be Tough to Catch Up with OpenAI’s Sora.” The write up states:

Google DeepMind CEO may think OpenAI’s text-to-video generator, Sora, has an edge. Demis Hassabis told a colleague it’d be hard for Google to draw level with Sora … The Information reported.  His comments come as Big Tech firms compete in an AI race to build rival products.

Am I to believe the genAI system can deliver what enterprises, government organizations, and non governmental entities want: Ways to cut costs and operate in a smarter way?

If I tell myself, “Believe Google’s Cloud Next statements,” then Amazon, IBM, Microsoft, OpenAI, and others should fold their tents, put their animals back on the train, and head to another city in Kansas.

If I tell myself, “Google is not delivering and one cannot believe the company which sells ads and outputs weird images of ethnically interesting historical characters,” then the advertising company is a bit disjointed.

Several observations:

  1. The YouTube content processing issues are an indication that Google is making interesting decisions which may have significant legal consequences related to copyright
  2. The senior managers who are in direct opposition about their enthusiasm for Google’s AI capabilities need to get in the same book and preferably read from the same page
  3. The assertions appear to be marketing which is less effective than Microsoft’s at this time.

Net net: The circus has some tired acts. The Sundar and Prabhakar Show seemed a bit tired. The acts were better than those featured on the Gong Show but not as scintillating as performances on the Masked Singer. But what about search? Oh, it’s great. And that circus train. Is it powered by steam?

Stephen E Arnold, April 10, 2024
