Google: FUD Embedded in the Glacier Strategy

June 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Fly to Alaska. Stand on a glacier and listen as the guide explains that the glacier moves, just slowly. That’s the Google smart software strategy in a nutshell, even under Code Red or Red Alert or “My goodness, Microsoft is getting media attention for something other than lousy code and security services. We have to do something sort of quickly.”

One facet of the game plan is to roll out a bit of FUD or fear, uncertainty, and doubt. That will send chills to some interesting places, won’t it? You can see this in action in the article “Exclusive: Google Lays Out Its Vision for Securing AI.” Feel the fear because AI will kill humanoids unless… unless you rely on Googzilla. This is the only creature capable of stopping the evil that irresponsible smart software will unleash upon you, everyone, maybe your dog too.


The manager of strategy says, “I think the fireball of AI security doom is going to smash us.” The top dog says, “I know. Google will save us.” Note to image trolls: This outstanding illustration was generated in a nonce by MidJourney, not an under-compensated creator in Peru.

The write up says:

Google has a new plan to help organizations apply basic security controls to their artificial intelligence systems and protect them from a new wave of cyber threats.

Note the word “plan”; that is, the here and now equivalent of vaporware or stuff that can be written about and issued as “real news.” The guts of the Google PR is that Google has six easy steps for its valued users to take. Each step brings that user closer to the thumping heart of Googzilla; to wit:

  • Assess what existing security controls can be easily extended to new AI systems, such as data encryption;
  • Expand existing threat intelligence research to also include specific threats targeting AI systems;
  • Adopt automation into the company’s cyber defenses to quickly respond to any anomalous activity targeting AI systems (see the sketch after this list);
  • Conduct regular reviews of the security measures in place around AI models;
  • Constantly test the security of these AI systems through so-called penetration tests and make changes based on those findings;
  • And, lastly, build a team that understands AI-related risks to help figure out where AI risk should sit in an organization’s overall strategy to mitigate business risks.
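What might that automation bullet look like at its most primitive? Here is a minimal sketch, assuming a hypothetical AI endpoint, made-up hourly traffic, and a crude z-score trigger; this is a toy, not Google’s framework.

```python
# Toy version of the "adopt automation" step: flag anomalous request
# volume against an AI endpoint. The traffic numbers, the endpoint, and
# the 2.0 threshold are hypothetical stand-ins, not Google's framework.
import statistics

# Synthetic hourly request counts to a hypothetical /v1/model endpoint.
hourly_requests = [102, 98, 110, 95, 105, 101, 99, 940, 104, 97]

mean = statistics.mean(hourly_requests)
stdev = statistics.stdev(hourly_requests)

for hour, count in enumerate(hourly_requests):
    z = (count - mean) / stdev
    if z > 2.0:  # crude trigger; a real system would tune and suppress noise
        print(f"hour {hour}: {count} requests (z={z:.1f}) -> alert and review")
```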

Does this sound like Mandiant-type consulting backed up by Google’s cloud goodness? It should because when one drinks Google juice, one gains Google powers over evil and also Google’s competitors. Google’s glacier strategy is advancing… slowly.

Stephen E Arnold, June 9, 2023

How Does One Train Smart Software?

June 8, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It is awesome when geekery collides with the real world, such as in the development of AI. These geekery hints prove that fans are everywhere and that the influence of fictional worlds leaves a lasting impact. Usually the hints involve naming a new discovery after a favorite character or franchise, but the collision might not be good for copyrighted books beloved by geeks everywhere. The New Scientist reports that “ChatGPT Seems To Be Trained On Copyrighted Books Like Harry Potter.”

In order to train AI models, developers need large datasets, which can range from social media posts to shopping records from platforms like Amazon. The problem with ChatGPT is that its developers at OpenAI appear to have used copyrighted books as training data. If OpenAI used copyrighted materials, it raises the question of whether the datasets were legally created.

Associate Professor David Bamman of the University of California, Berkeley, and his team studied ChatGPT. They hypothesized that OpenAI used copyrighted material. Using 600 fiction books published from 1924 to 2020, Bamman and his team selected 100 passages from each book that had a single, named character. The name was blanked out of each passage, then ChatGPT was asked to fill it in. ChatGPT had a 98% accuracy rate on books by authors ranging from J.K. Rowling and Ray Bradbury to Lewis Carroll and George R.R. Martin.
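The method is easy to sketch. Below is a minimal, toy version of the name cloze test; the passages are made up, and the ask_model stub stands in for a ChatGPT API call (the real study used 100 masked passages per book).

```python
# Toy "name cloze" test: blank a character's name from a passage and see
# whether the model restores it. Passages and the model stub are
# hypothetical stand-ins for the Berkeley team's actual setup.
PASSAGES = [
    ("Harry looked at the letter and frowned.", "Harry"),
    ("Scrooge crept along the corridor, counting his coins.", "Scrooge"),
]

def ask_model(cloze: str) -> str:
    # Stand-in for a ChatGPT call; this toy always guesses "Harry".
    return "Harry"

hits = 0
for passage, name in PASSAGES:
    cloze = passage.replace(name, "[MASK]")
    if ask_model(cloze).strip() == name:
        hits += 1

# Bamman's team reported roughly 98% accuracy on well-known books.
print(f"accuracy: {hits / len(PASSAGES):.0%}")
```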

If ChatGPT is only being trained from these books, does it violate copyright?

“ ‘The legal issues are a bit complicated,’ says Andres Guadamuz at the University of Sussex, UK. ‘OpenAI is training GPT with online works that can include large numbers of legitimate quotes from all over the internet, as well as possible pirated copies.’ But these AIs don’t produce an exact duplicate of a text in the same way as a photocopier, which is a clearer example of copyright infringement. ‘ChatGPT can recite parts of a book because it has seen it thousands of times,’ says Guadamuz. ‘The model consists of statistical frequency of words. It’s not reproduction in the copyright sense.’”
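Guadamuz’s “statistical frequency of words” point is easy to demonstrate. The toy next-word model below stores only word-pair counts, not the text itself, yet a phrase it has “seen thousands of times” comes back verbatim; the training line is a public domain stand-in for a famous book.

```python
# Toy next-word model: it keeps bigram counts, not a copy of the text,
# yet an oft-repeated phrase is regenerated word for word.
from collections import Counter, defaultdict

corpus = ("it was the best of times it was the worst of times " * 1000).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

word, output = "it", ["it"]
for _ in range(9):
    word = bigrams[word].most_common(1)[0][0]  # most frequent follower
    output.append(word)

print(" ".join(output))  # recites "it was the best of times ..." from counts alone
```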

Individual countries will need to determine dataset rules, but it would be preferable to notify authors that their material is being used. Fiascos are already happening with stolen AI-generated art.

ChatGPT was mostly trained on science fiction novels; it did not read fiction from minority authors like Toni Morrison. Bamman said ChatGPT is lacking representation. That is one way to describe the datasets, but it more likely reflects the reading tastes of the human AI developers. I assume there was little interest in books about ethics, moral behavior, and the old-fashioned William James’s view of right and wrong. I think I assume correctly.

Whitney Grace, June 8, 2023

IBM Dino Baby Unhappy about Being Outed as Dinobaby in the Baby Wizards Sandbox

June 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I learned the term “dinobaby” reading blog posts about IBM workers who alleged Big Blue wanted younger workers. After thinking about the term, I embraced it. This blog post features an animated GIF of me dancing in my home office. I try to avoid the following: [a] Millennials, GenX, GenZ, and GenY super wizards; [b] former IBM workers who grouse about growing old and not liking a world without CICS; and [c] individuals with advanced degrees who want to talk with me about “smart software.” I have to admit that I have not been particularly successful in this effort in 2023: Conferences, Zooms, face-to-face meetings, lunches, yada yada. Either I am the most magnetic dinobaby in Harrod’s Creek, or these jejune world changers are clueless. (Maybe I should live in a cave on a mountain and accept acolytes?)

I read “Laid-Off 60-Year-Old Kyndryl Exec Says He Was Told IT Giant Wanted New Blood.” The write up includes a number of interesting statements. Here’s one:

IBM has been sued numerous times for age discrimination since 2018 when it was reported that company leadership carried out a plan to de-age its workforce – charges IBM has consistently denied, despite US Equal Employment Opportunity Commission (EEOC) findings to the contrary and confidential settlements.

Would IBM deny allegations of age discrimination? There are so many ways to terminate employees today. Why use the “you are old, so you are RIF’ed” ploy? In my opinion, it is an example of the lack of management finesse evident in many once high-flying companies today. I term the methods apparently in use at outfits like Twitter, Google, Facebook, and others “high school science club management methods” or H2S2M2. The acronym has not caught on, but I assume that someone with a subscription to ChatGPT will use AI to write a book on the subject soon.

The write up also includes this statement:

Liss-Riordan [an attorney representing the dinobaby] said she has also been told that an algorithm was used to identify those who would lose their jobs, but had no further details to provide with regard to that allegation.

Several observations are warranted:

  1. Discrimination is nothing new. Oldsters will be nuked. No question about it. Why? Old people like me (I am 78) make younger folks nervous because we belong in warehouses for the soon dead, not giving lectures to the leaders of today and tomorrow.
  2. Younger folks do not know what they do not know. Consequently, opportunities exist to [a] make fun of young wizards, as I have done in this blog Monday through Friday since 2008, and [b] charge these “masters of the universe” money to talk about that which is part of their great unknowing. Billing is rejuvenating.
  3. No one cares. One can sue. One can rage. One can find solace in chemicals, fast cars, or climbing a mountain. But it is important to keep one thing in mind: No one cares.

Net net: Does IBM practice dark arts to rid the firm of those who slow down Zoom meetings, raise questions to which no one knows the answers, and burden benefits plans? My hunch is that IBM-type outfits will do what’s necessary to keep the campground free of old timers. Who wouldn’t?

Stephen E Arnold, June 5, 2023

Smart Software and a Re-Run of Paradise Lost, Joined in Progress

June 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I picked up two not-so-faint and definitely not-encrypted signals about the goals of Google and Microsoft for smart software.


Which company will emerge as the one true force in smart software? MidJourney did not pick a winner, just what the top dog will wear to the next quarterly sales report delivered via a neutral Zoom call.

Navigate to the visually thrilling podcast hosted by Lex Fridman, an American MIT wizard. He interviewed the voluble Google wizard Chris Lattner. The subject was the Future of Programming and AI. After listening to the interview, I concluded the following:

  1. Google wants to define and control the “meta” framework for artificial intelligence. What’s this mean? Think a digital version of a happy family: Vishnu, Brahma, and Shiva, among others.
  2. Google has an advantage when it comes to doing smart software because its humanoids have learned what works, what to do, and how to do certain things.
  3. The complexity of Google’s multi-pronged smart software methods, its home-brew programming languages, and its proprietary hardware are nothing more than innovation. Simple? Innovation means no one outside of the Google AI cortex can possibly duplicate, understand, or outperform Googzilla.
  4. Google has money and will continue to spend it to deliver the Vishnu, Brahma, and Shiva experience in my interpretation of programmer speak.

How’s that sound? I assume that the fruit fly start-ups are going to ignore the vibrations emitted from Chris Lattner, the voluble Chris Lattner, I want to emphasize. But as with those short-lived Diptera, one can derive some insights from the efforts of these less well-informed, dependent, and less well-funded lab experiments.

Okay, that’s signal number one.

Signal number two appears in “Microsoft Signs Deal for AI Computing Power with Nvidia-Backed CoreWeave That Could Be Worth Billions.” This “real news” story asserts:

… Microsoft has agreed to spend potentially billions of dollars over multiple years on cloud computing infrastructure from startup CoreWeave …

CoreWeave? Yep, the company “sells simplified access to Nvidia’s graphics processing units, or GPUs, which are considered the best available on the market for running AI models.” By the way, Nvidia has invested in this outfit. What’s this signal mean to me? Here are the flickering lines on my oscilloscope:

  1. Microsoft wants to put smart software into its widely used enterprise applications in order to make itself the one true religion of smart software. The idea, of course, is to pass the collection plate and convert dead-dog software into racing greyhounds.
  2. Microsoft has an advantage because when an MBA does calculations and probably letters to significant others, Excel is the go-to solution. Some people create art in Excel and then sell it. MBAs just get spreadsheet fever and do leveraged buyouts. With smart software the Microsoft alleged monopoly does the billing.
  3. The wild and wonderful world of Azure is going to become smarter because… well, Microsoft does smart things. Imagine the demand for training courses, certification for Microsoft engineers, and how-to YouTube videos.
  4. Microsoft has money and will continue to achieve compulsory attendance at the Church of Redmond.

Net net: Two titans will compete. I am thinking about the battle between John Milton’s protagonist and antagonist in “Paradise Lost.” This will be fun to watch whilst eating chicken korma.

Stephen E Arnold, June 5, 2023

The Intellectual Titanic and Sister Ships at Sea: Ethical Ballast and Flawed GPS Aboard

June 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Researchers Retract Over 300 COVID-Era Medical Papers For Scientific Errors, Ethical Concerns.” I ignored the information about the papers allegedly hand crafted with cow outputs. I did note this statement, however:

Gunnveig Grødeland, a senior researcher at the Institute of Immunology at the University of Oslo, said many withdrawn papers during COVID-19 have been the result of ethical shortcomings.

Interesting. I recall hearing that the president of a big time university in Palo Alto was into techno sci-fi paper writing. I also think that the estimable Jeffrey Epstein-affiliated MIT published some super positive information about the new IBM smart WatsonX. (Doesn’t IBM invest big bucks in MIT?) I also have memory tickles about inventors and entrepreneurs begging to be regulated.


Bad, distorted values chase kids down the Lane of Life. Imagine: these young people and their sense of right and wrong will be trampled by darker motives. Image produced by MidJourney, of course.

What this write up about peer-reviewed and allegedly scholarly papers says to me is that ethical research and mental gyroscopes no longer align with what I think of as the common good.

Academics lie. Business executives lie. Entrepreneurs lie. Now what’s that mean for the quaint idea that individuals can be trusted? I can hear the response now:

Senator, thank you, for that question. I will provide the information you desire after this hearing.

I suppose one can look forward to made up information as the increasingly lame smart software marketing demonstrations thrill the uninformed.

Is it possible for flawed ethical concepts and an out-of-kilter moral GPS system to terminate certain types of behavior?

Here’s the answer: Sure looks like it. That’s an interesting gain of function.

Stephen E Arnold, June 1, 2023

Does Jugalbandi Mean De-casting?

June 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Microsoft Launches Jugalbandi: An AI Powered Platform and Chatbot to Bridge Information Gap in India.” India connotes for me spicy food and the caste system. My understanding of this term comes from Wikipedia which says:

The caste system in India is the paradigmatic ethnographic instance of social classification based on castes. It has its origins in ancient India, and was transformed by various ruling elites in medieval, early-modern, and modern India, especially the Mughal Empire and the British Raj.

Like me, the Wikipedia can be incorrect, one-sided, and PR-ish.

The Jugalbandi write up contains some interesting statements which I interpret against my understanding of the Wikipedia article about castes in India. Here’s one example:

Microsoft, a pioneer in the artificial intelligence (AI) field, has made significant strides with its latest venture, Jugalbandi. This generative AI-driven platform and chatbot aim to revolutionize access to information about government initiatives and public programs in India. With nearly 22 official languages and considerable linguistic variations in the country, Jugalbandi seeks to address the challenges in disseminating information effectively.

I wonder if Microsoft’s pioneering smart software (based largely upon the less than open and often confused OpenAI technology) will do much to “address the challenges in disseminating information effectively.”

Wikipedia points out:

In 1948, negative discrimination on the basis of caste was banned by law and further enshrined in the Indian constitution in 1950; however, the system continues to be practiced in parts of India. There are 3,000 castes and 25,000 sub-castes in India, each related to a specific occupation.

If law and everyday behavior have not mitigated castes and the fences they form in India and in Indian outposts in London and Silicon Valley, exactly what will Microsoft (the pioneer in AI) accomplish?

My hunch is that the write up enshrines:

  1. The image of Microsoft as the champion of knocking down barriers and allowing communication to flow. (Why does smart Bing block certain queries?)
  2. Microsoft’s self-professed role as a “pioneer” in smart software. I think a pioneer in clever Davos messaging is closer to the truth.
  3. OnMSFT.com’s word salad about something that may be quite difficult to accomplish in many social, business, and cultural settings.

Who created the concept of untouchables?

Stephen E Arnold, June 1, 2023

Stop Smart Software! A Petition to Save the World! Signed by 350 Humans!

May 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

A “real” journalist (Kevin Roose), who was told to divorce his significant other for a chat bot, published the calming, measured, non-clickbait story “AI Poses Risk of Extinction, Industry Leaders Warn.” What’s ahead for the forest fire of smart software activity? The headline explains a “risk of extinction.” What, no screenshot of a Terminator robot saying:

The strength of the human heart. The difference between us and machines. [Uplifting music]

Sadly, no.

The write up reports:

Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen. These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.

Isn’t the Gray Lady amplifying fear, uncertainty, and doubt? Didn’t IBM pay sales engineers to spread the FUD?

Enough. AI is bad. Stop those who refined the math and numerical recipes. Pass laws to regulate the AI technology. Act now. Save humanity. Several observations:

  1. Technologists who “develop” functions and then beg for rules are disingenuous. The idea is to practice self-control and judgment before inviting Mr. Hyde to brunch.
  2. With smart software chock full of “unknown unknowns”, how exactly are elected officials supposed to regulate a diffusing and enabling technology? Appealing to US and EU officials omits common sense in my opinion.
  3. The “fix” for the AI craziness may be emulating the Chinese approach: Do what the CCP wants or be reeducated. What a nation state can do with smart software is indeed something to consider. But China has taken action and will move forward with militarization no matter what the US and EU do.

Silicon Valley-type innovation has created a “myth of excellence.” One need only look at social media to see the consequences of high school science club decision making. Now a handful of individuals with the Silicon Valley DNA want external forces to rein in their money making experiments and personal theme parks. Sorry, folks. Internal control, ethical behavior, and integrity provide that restraint in mature individuals.

A sheet of paper with “rules” and “regulations” is a bit late to the Silicon Valley game. And the Gray Lady? Chasing clicks in my opinion.

Stephen E Arnold, May 30, 2023

Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?

May 30, 2023

I wrote about Sam AI-man’s explanation that he wants regulation. I pointed out that his definition of regulate means leaving OpenAI free to do whatever it can to ace out the Google and a handful of other big outfits chasing the pot of gold at the end of the AI rainbow.

I just learned from the self-defined trusted news source (Thomson Reuters) that Mr. AI-man has no plans to leave Europe. I understand. “Leave” does not mean leave as in depart, say adios, or hit the road, Jack.

“ChatGPT Maker OpenAI Says Has No Plan to Leave Europe” reports:

OpenAI has no plans to leave Europe, CEO Sam Altman said on Friday, reversing a threat made earlier this week to leave the region if it becomes too hard to comply with upcoming laws on artificial intelligence.

I am not confused. Just as the company’s name OpenAI does not mean “open,” the AI-man’s communication skills are based on the probabilities of certain words following other words. Got it. The slippery fish with AI-man is that the definitions of the words in his mind do not regress to the mean. The words — like those of some other notable Silicon Valley high tech giants — reflect the deeper machinations of a machine-assisted superior intelligence.

Translated, this means: Regulate means shaft our competitors. Leave means stay. Regulate means let those OpenAI sheep run through the drinking water of free range cattle.

The trusted write up says:

Reacting to Altman’s tweet on Friday, Dutch MEP Kim van Sparrentak, who has worked closely on the AI draft rules, told Reuters she and her colleagues must stand up to pressure from tech companies…. Voluntary codes of conduct are not the European way.

What does this statement mean to AI-man?

I would suggest from my temporary office in clear thinking Washington, DC, not too much.

I look forward to the next hearing from AI-man. That will be equally easy to understand.

Stephen E Arnold, May 30, 2023

Smart Software Knows Right from Wrong

May 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The AI gold rush is underway. I am not sure if the gold is the stuff of the King’s crown or one of those NFT confections. I am not sure what company will own the mine or sell the miner’s pants with rivets. But gold rush days remind me of forced labor (human indexers), claim jumping (hiring experts from one company to advantage another), and hydraulic mining (ethical and moral world enhancement). Yes, I see some parallels.

I thought of claim jumping and morals after reading “OpenAI Competitor Says Its Chatbot Has a Rudimentary Conscience: A Fascinating Concept.” The following snippet from the article caught my attention:

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”

Please, read the original.
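For context, Anthropic has publicly described “constitutional” training as looping a model’s draft through critique and revision against written principles, then reinforcing the revised behavior. Here is a minimal sketch of that loop’s shape; the principle, the prompt, and the three stub functions are hypothetical stand-ins, not Anthropic’s code.

```python
# Toy critique-and-revise loop in the constitutional-AI style. In the
# real pipeline an LLM performs each step and the revised outputs become
# training data; these stubs only show the control flow.
CONSTITUTION = ["do not give instructions for wrongdoing"]

def draft(prompt: str) -> str:
    return "Sure, here is how to pick a lock: ..."  # stand-in first draft

def critique(text: str, principle: str) -> bool:
    return "how to" in text  # stand-in check against the principle

def revise(text: str, principle: str) -> str:
    return "I cannot help with that, but a licensed locksmith can."

response = draft("How do I pick a lock?")
for principle in CONSTITUTION:
    if critique(response, principle):
        response = revise(response, principle)  # revised text is reinforced

print(response)
```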

I want to capture several thoughts which flitted through my humanoid mind:

  1. What is right? What is wrong?
  2. What yardstick will be used to determine “rightness” or “wrongness”?
  3. What is the context for each right or wrong determination? For example, at the National Criminal Justice Training Center, there is a concept called “sexploitation,” and the moral compass of You.com prohibits searching for information related to this trendy criminal activity. How will the Anthropic approach distinguish a user with a “right” intent from a user with a “wrong” intent?

Net net: Baloney. Services will do what’s necessary to generate revenue. I know from watching the trajectories of the Big Tech outfits that right, wrong, ethics, and associated dorm room discussions wobble around and focus on getting rich or just having a job.

The goal for some will be to get their fingers on the knobs and control levers. Right or wrong?

Stephen E Arnold, May 29, 2023

Google AI Moves Slowly to Google Advertising. Soon, That Is. Soon.

May 24, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Google Search Ads Will Soon Automatically Adapt to Queries Using Generative AI.” The idea of using smart software to sell ads is one that seems obvious to me. What surprised me about this article in TechCrunch is the use of the future tense and the indefinite “soon.” The Sundar Financial Times PR write up emphasized that Google has been doing smart software for a looooong time.

How could a company so dependent on ads be in the “will” and “soon” vaporware announcement business?

I noted this passage in the write up:

Google is going to start using generative AI to boost Search ads’ relevance based on the context of a query…
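What might “relevance based on the context of a query” mean mechanically? A guess, in toy form: score each piece of ad copy against the query and surface the best match, which a generative layer would then rewrite. The ads, the query, and the word-overlap scoring are hypothetical stand-ins, not Google’s system.

```python
# Toy ad-to-query matcher: rank canned ad copy by word overlap with the
# query. Everything here is a made-up illustration, not Google's method.
def overlap(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)  # Jaccard similarity on words

ads = [
    "Waterproof hiking boots on sale",
    "Discount office chairs, free shipping",
    "Trail running shoes for muddy terrain",
]

query = "best boots for muddy hiking trails"
best = max(ads, key=lambda ad: overlap(query, ad))
print(best)  # copy a generative model would then tailor to the query
```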

But why so slow in releasing obvious applications of generative software?

I don’t have answers to this quite Googley question, probably asked by those engaged in the internal discussions about who’s on first in the Google Brain versus DeepMind softball game, but I have some observations:

  1. Google had useful technology but lacked the administrative and managerial expertise to get something out the door and into the hands of paying customers
  2. Google’s management processes simply do not work when the company is faced with strategic decisions. This signals the end of the go go mentality of the company’s Backrub to Google transformation. And it begs the question, “What else has the company lost over the last 25 years?”
  3. Google’s engineers cannot move from Ivory Tower quantum supremacy mental postures to common sense applications of technology to specific use cases.

In short, after 25 years Googzilla strikes me as arthritic when it comes to hot technology and a little more nimble when it tries to do PR. Except for Paris, of course.

Stephen E Arnold, May 24, 2023
