Googzilla Annoyed: No Longer Free to Stomp Around Scaring People

July 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Sweden Orders Four Companies to Stop Using Google Tool” reports that the Swedish government “has ordered four companies to stop using a Google tool that measures and analyzes Web traffic.” The idea informing the Swedish decision is to control the rapacious creature’s appetite for “personal data.” Is the lovable Googzilla slurping data and allegedly violating privacy? I have no idea.


In this MidJourney visual confection, it appears that a Tyrannosaurus Rex named Googzilla is watching children. Is Googzilla displaying abnormal and possibly illegal behavior, particularly with regard to personal data?

The write up states:

The IMY said it considers the data sent to Google Analytics in the United States by the four companies to be personal data and that “the technical security measures that the companies have taken are not sufficient to ensure a level of protection that essentially corresponds to that guaranteed within the EU…”

Net net: Sweden is not afraid of the Google. Will other countries try their hand at influencing the lovable beastie?

Stephen E Arnold, July 6, 2023

Quantum Seeks Succor Amidst the AI Tsunami

July 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Imagine the heartbreak of a quantum wizard in the midst of the artificial intelligence tsunami. What can a “just around the corner” technology do to avoid being washed down the drain? The answer is public relations, media coverage, and fascinating announcements. And what companies are practicing this dark art of outputting words instead of fully functional, ready-to-use solutions?

Give up?

I suggest that Google and IBM are the dominant players. Imagine an online ad outfit and a consulting firm with mainframes working overtime to make quantum computing exciting again. Frankly, I am surprised that Intel has not climbed on its technology stallion and ridden Horse Ridge or Horse whatever into PR Land. But, hey, one has to take what one’s newsfeed delivers. The first 48 hours of July 2023 produced two interesting items.

The first is “Supercomputer Makes Calculations in Blink of an Eye That Take Rivals 47 Years.” The write up is about the Alphabet Google YouTube construct and asserts:

While the 2019 machine had 53 qubits, the building blocks of quantum computers, the next generation device has 70. Adding more qubits improves a quantum computer’s power exponentially, meaning the new machine is 241 million times more powerful than the 2019 machine. The researchers said it would take Frontier, the world’s leading supercomputer, 6.18 seconds to match a calculation from Google’s 53-qubit computer from 2019. In comparison, it would take 47.2 years to match its latest one. The researchers also claim that their latest quantum computer is more powerful than demonstrations from a Chinese lab which is seen as a leader in the field.

Can one see this fantastic machine which is 241 million times more powerful than the 2019 machine? Well, one can see a paper which talks about the machine. That is good enough for the Yahoo real news report. What do the Chinese, who have been kicked to the side of the Information Superhighway, say? Are you joking? That would be work. Writing about a Google paper without calling around is sufficient.
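As a rough sanity check on the “exponentially more powerful” claim, here is my own back-of-the-envelope arithmetic, not the paper’s benchmark: an n-qubit register carries 2^n amplitudes, so each added qubit doubles the state space a classical simulator must track.

```python
# Back-of-the-envelope sketch (my arithmetic, not the paper's benchmark):
# an n-qubit register has 2**n complex amplitudes, so each added qubit
# doubles the state space a classical simulator must track.
qubits_2019 = 53
qubits_2023 = 70
state_space_growth = 2 ** (qubits_2023 - qubits_2019)
print(f"Raw state-space growth: {state_space_growth:,}x")  # 131,072x

# The "241 million times" figure quoted above presumably folds in circuit
# depth and simulation cost, not just the raw qubit count.
```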

If you want to explore the source of this revelation, navigate to “Phase Transition in Random Circuit Sampling.” Note that the paper has more than 175 authors and is available from ArXiv.org at https://arxiv.org/abs/2304.11119. The list of authors does not appear in the PDF until page 37, and only about 80 appear on the ArXiv abstract page. I scanned the list of authors and I did not see Jeff Dean’s name. Dr. Dean is/was a Big Dog at the Google but …


Just to make darned sure that Google’s Quantum Supremacy is recognized, the organizations paddling the AGY marketing stream include NASA, NIST, Harvard, and more than a dozen computing Merlins. So there! (Does AGY have an inferiority complex?)

The second quantum goody is the write up “IBM Unlocks Quantum Utility With its 127-Qubit “Eagle” Quantum Processing Unit.” The write up reports as actual factual IBM’s superior leapfrogging quantum innovation; to wit, coping with noise and knowing if the results are accurate. The article says via a quote from an expert:

The crux of the work is that we can now use all 127 of Eagle’s qubits to run a pretty sizable and deep circuit — and the numbers come out correct

The write up explains:

The work done by IBM here has already had impact on the company’s [IBM’s] roadmap – ZNE has that appealing quality of making better qubits out of those we already can control within a Quantum Processing Unit (QPU). It’s almost as if we had a megahertz increase – more performance (less noise) without any additional logic. We can be sure these lessons are being considered and implemented wherever possible on the road to a “million + qubits”.

Can one access this new IBM approach? Well, there is this article and a chart.

Which quantum innovation is the more significant? In terms of putting the technology in one’s laptop, not much. Perhaps one can use the system via the cloud? Some may be able to get outputs… with permission of course.

But which is the PR winner? In my opinion, the Google wins because it presents a description of a concept with more authors. IBM, get your marketing in gear. By the way, what’s going on with the Red Hat dust up? Quantum news releases won’t make that open source hassle go away. And, Google, the quantum stuff and the legion of authors are unlikely to impress European regulators.

And why make quantum noises before a US national holiday? My hunch is that quantum is perfect holiday fodder. My question, “When will the burgers be done?”

Stephen E Arnold, July 5, 2023

Google: Is the Company Engaging in F-U-D?

July 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

When I was a wee sprout in 1963, I was asked to attend an IBM presentation at the so-so university I attended. Because I was a late-night baby-sitter for the school’s big, hot, and unreliable mainframe, I was offered a full day lecture and a free lunch. Of course, I went. I remember one thing more than a half century later. The other attendees from my college were using a word I was hearing but not interpreting well.


The artistic MidJourney presents a picture showing executives struggling to process Google’s smart software announcements about the future. One seems to be wondering, “These are the quantum supremacy people. They revolutionized protein folding. Now they want us to wait while our competitors are deploying ChatGPT based services? F-U-D that!”

The word was F-U-D. To make sure I wasn’t confusing the word with a popular epithet, I asked one of the people who worked in the computer center as a supervisor (actually an underpaid graduate student, though paid better than my $3 per hour wage) what F-U-D meant.

The fellow explained, “It means fear, uncertainty, and doubt. The idea is that IBM wants us to be afraid of buying something from Burroughs or National Cash Register. The uncertainty means that we have to make sure the competitors’ computers are as good as the IBM machines. And the doubt means that if we buy a Control Data system, we can be fired if it isn’t IBM.”

Yep, F-U-D. The game plan was designed to make people like me cautious about anything not embraced by administrators. New things had to be kept in a sandbox. Really new things had to be part of a Federal research grant which could blow up and destroy a less-than-brilliant researcher’s career but cause no ripple in carpetland.

Why am I thinking about F-U-D?

I read “Here’s Why Google Thinks Its Gemini AI Will Surpass ChatGPT.” The write up makes clear:

“At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models,” Hassabis told Wired. “We also have some new innovations that are going to be pretty interesting.”

I interpreted this comment in this way:

  1. Be patient, Google has better, faster, cheaper, more wonderful technology for you coming soon, really soon
  2. Google is creating better AI because we are combining great technology with the open source systems and methods we made available to losers like OpenAI
  3. Google is innovative. (Remember, please, that Google equates innovation with complexity.)

Net net: By Gemini, just slow down. Wait for us. We are THE Google, and we do F-U-D.

Stephen E Arnold, July 3, 2023

Google: Users and Its Ad Construction

June 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In the last 48 hours, I have heard or learned about some fresh opinions about Alphabet / Google / YouTube (hereinafter AGY). Google Glass III (don’t forget the commercial version, please) has been killed. Augmented Reality? Not for the Google. Also, AGY continues to output promises about its next Bard. Is it really better than ChatGPT? And AGY is back in the games business. (Keep in mind that Google pitched Yahoo with a games deal in 2004 if I remember correctly, flamed out with its underwhelming online game play a decade later, and followed that with the somewhat forgettable Stadia game service.) Finally, a person told me that Prabhakar Raghavan allegedly said, “We want our customers to be happy.” Inspirational indeed. I think I hit the highlights from the information I encountered since Monday, June 26, 2023.


The ever sensitive creator MidJourney provided this illustration of a structure with a questionable foundation. Could the construct lose a piece here and a piece there until it must be dismantled to save the snail darters living in the dormers? Are the residents aware of the issue?

The fountain of Googliness seems to be copious. I read “Google Ads Can Do More for Its Customers.” The main point of the article is that:

Google’s dominance in the search engine industry, particularly in search ads, is unparalleled, making it virtually the only viable option for advertisers seeking to target search traffic. It’s a conflict of interest, as Google’s profitability is closely tied to ad revenue. As Google doesn’t do enough to make Google Ads a more transparent platform and reduce the cost for its customers, advertisers face inflated costs and fierce competition, making it challenging for smaller businesses with limited budgets to compete effectively.

Gulp. If I understand this statement, Google is exploiting its customers. Remember: these are the entities providing the money to fund AGY’s numerous administrative costs, and those costs are going just one way: up and up. Imagine the data center, legal fines, and litigation costs. Big numbers before adding in salaries and bonuses.

Observations:

  1. Structural weakness can be ignored until the edifice just collapses.
  2. Unhappy customers might want to drop by for a conversation and the additional weight of these humanoids may cross a tipping point.
  3. US regulators may ignore AGY, but government officials in other countries may not.

Bud Light’s adventures with its customers provide a useful glimpse of what those who are unhappy can do, and do quickly. The former Bud Light marketing whiz has a degree from Harvard. Perhaps this individual can tackle the AGY brand? Just a thought.

Stephen E Arnold, June 28, 2023

Google: I Promise to Do Better. No, Really, Really Better This Time

June 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The UK online publication The Register made available this article: “Google Accused of Urging Android Devs to Mislabel Apps to Get Forbidden Kids Ad Data.” The write up is not about TikTok. The subject is Google and an interesting alleged action by the online advertising company.


The high school science club member who pranked the principal says when caught: “Listen to me, Mr. Principal. I promise I won’t make that mistake again. Honest. Cross my heart and hope to die. Boy scout’s honor. No, really. Never, ever, again.” The illustration was generated by the plagiarism-free MidJourney.

The write up states as “actual factual” behavior by the company:

The complaint says that both Google and app developers creating DFF apps stood to gain by not applying the strict “intended for children” label. And it claims that Google incentivized this mislabeling by promising developers more advertising revenue for mixed-audience apps.

The idea is that intentionally assigned metadata made it possible for Google to acquire information about a child’s online activity.

My initial reaction was, “What’s new? Google says one thing and then demonstrates its adolescent sense of cleverness via a workaround?”

After a conversation with my team, I formulated a different hypothesis; specifically, Google has institutionalized mechanisms to make it possible for the company’s actual behavior to be whatever the company wants its behavior to be.

One can hope this was a one-time glitch. My “different hypothesis” points to a cultural and structural policy to make it possible for the company to do what’s necessary to achieve its objective.

Stephen E Arnold, June 27, 2023

News Flash about SEO: Just 20 Years Too Late but, Hey, Who Pays Attention?

June 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an article which would have been news a couple of decades ago. But I am a dinobaby (please see the animated GIF bouncing in an annoying manner) and I am hopelessly out of touch with what “real news” is.


An entrepreneur who just learned that in order to get traffic to her business Web site, she will have to spend big bucks and do search engine optimization, make YouTube videos (long and short), and follow Google’s implicit and explicit rules. Sad. An MBA, I believe. The Moping Mistress of the Universe is a construct generated by the ever-innovative MidJourney and its delightful Discord interface.

The write up catching my attention is — hang on to your latte — “A Storefront for Robots: The SEO Arms Race Has Left Google and the Web Drowning in Garbage Text, with Customers and Businesses Flailing to Find Each Other.” I wondered if the word “flailing” is a typographic error or misspelling of “failing.” Failing strikes me as a more applicable word.

The thesis of the write up is that the destruction of precision and recall as useful tools for relevant online search and retrieval is not part of the Google game plan.

The write up asserts:

The result is SEO chum produced at scale, faster and cheaper than ever before. The internet looks the way it does largely to feed an ever-changing, opaque Google Search algorithm. Now, as the company itself builds AI search bots, the business as it stands is poised to eat itself.

Ah, ha. Garbage in, garbage out! Brilliant. The write up is about 4,000 words and makes clear that ecommerce requires generating baloney for Google.

To sum up, if you want traffic, do search engine optimization. The problem with the write up is that it is incorrect.

Let me explain. Navigate to “Google Earned $10 Million by Allowing Misleading Anti-Abortion Ads from Fake Clinics, Report Says.” What’s the point of this report? The answer is, “Google ads.” And money from a controversial group of supporters and detractors. Yes! An arms race of advertising.

Of course, SEO won’t work. Why would it? Google’s business is selling advertising. If you don’t believe me, just go to a conference, find any Googler — including those wearing “Ivory Tower Worker” pins — and ask, “How important is Google’s ad business?” But you know what most Googlers will say, don’t you?

For decades, Google has cultivated the SEO ploy for one reason. Failed SEO campaigns end up in one place: Google Advertising.

Why?

If you want traffic, like the abortion ad buyers, pony up the cash. The Google will punch the Pay to Play button, and traffic results. One change kicked in after 2006. The mom-and-pop ad buyers were not as important as the “brand” advertisers. And what was that change? Small advertisers were left to the SEO experts, who could then sell “small” ad campaigns when the hapless advertiser learned that no one on the planet could locate the financial advisory firm named “Financial Specialist Advisors.” Ah, then there was Google Local, a Googley spin on the Yellow Pages. And there have been other innovations to make it possible for advertisers of any size to get traffic, though not much traffic, because small advertisers spend small money. But ad dollars are what keep Googzilla alive.

Net net: Keep in mind that Google wants to be the Internet. (AMP that up, folks.) Google wants people to trust the friendly beastie. The Googzilla is into responsibility. The Google is truth, justice, and the digital way. Is the criticism of the Google warranted? Sure, constructive criticism is a positive for some. The problem I have is that it is 20 years too late. Who cares? The EU seems to have an interest.

Stephen E Arnold, June 21, 2023

The Famous Google Paper about Attention, a Code Word for Transformer Methods

June 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Wow, many people are excited about a Bloomberg article called “The AI Boom Has Silicon Valley on Another Manic Quest to Change the World: A Guide to the New AI Technologies, Evangelists, Skeptics and Everyone Else Caught Up in the Flood of Cash and Enthusiasm Reshaping the Industry.”

In the tweets and LinkedIn posts, one small factoid is omitted from the secondhand content. If you want to read the famous Google paper which doomed the Google Brain folks to watch their future from the cheap seats, you can find “Attention Is All You Need” branded with the imprimatur of the Neural Information Processing Systems Conference held in 2017. Here’s the link to the paper.
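For readers who know the paper only secondhand, its core is scaled dot-product attention. Below is a minimal NumPy sketch of that formula; the code is my own illustration, not anything taken from the paper or from Google.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per the 2017 paper."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted sum of the values

# Toy example: three tokens with four-dimensional embeddings
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```

Each output row is a mixture of the value vectors, weighted by how much “attention” the corresponding query pays to each key.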

For those who read the paper, I would like to suggest several questions to consider:

  1. What economic gain does Google derive from proliferation of its transformer system and method; for example, the open sourcing of the code?
  2. What does “attention” mean for [a] the cost of training and [b] the ability to steer the system and method? (Please, consider the question from the point of view of the user’s attention, the system and method’s attention, and a third-party meta-monitoring system such as advertising.)
  3. What other tasks of humans, software, and systems can benefit from the use of the Transformer system and methods?

I am okay with excitement for a 2017 paper, but including a link to the foundation document might be helpful to some readers; not many, but some.

Net net: Think about Google’s use of the words “trust” and “responsibility” when you answer the three suggested questions.

Stephen E Arnold, June 20, 2023

Google: Smart Software Confusion

June 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I cannot understand. Not only am I old; I am a dinobaby. Furthermore, I am like one of William James’s straw men: Easy to knock down or set on fire. Bear with me this morning.

I read “Google Skeptical of AI: Google Doesn’t Trust Its Own AI Chatbots, Asks Employees Not to Use Bard.” The write up asserts as “real” information:

It seems that Google doesn’t trust any AI chatbot, including its own Bard AI bot. In an update to its security measures, Alphabet Inc., Google’s parent company has asked its employees to keep sensitive data away from public AI chatbots, including their own Bard AI.

The go-to word for the Google in the last few weeks is “trust.” The quote points out that Google doesn’t “trust” its own smart software. Does this mean that Google does not “trust” that which it created and is making available to its “users”?


MidJourney, an interesting but possibly insecure and secret-filled smart software system, generated this image of Googzilla as a gatekeeper. Are gatekeepers in place to make money, control who does what, and record the comings and goings of people, data, and content objects?

As I said, I am a dinobaby, and I think I am dumb. I don’t follow the circular reasoning; for example:

Google is worried that human reviewers may have access to the chat logs that these chatbots generate. AI developers often use this data to train their LLMs more, which poses a risk of data leaks.

Now the ante has gone up. The issue is one of protecting itself from its own software. Furthermore, if the statement is accurate, I take the words to mean that Google’s Mandiant-infused, super duper, security trooper cannot protect Google from itself.

Can my interpretation be correct? I hope not.

Then I read “This Google Leader Says ML Infrastructure Is Conduit to Company’s AI Success.” The “this” refers to an entity called Nadav Eiron, a Stanford PhD and Googley wizard. The use of the word “conduit” baffles me because I thought “conduit” was a noun, not a verb. That goes to support my contention that I am a dumb humanoid.

Now let’s look at the text of this write up about Google’s smart software. I noted this passage:

The journey from a great idea to a great product is very, very long and complicated. It’s especially complicated and expensive when it’s not one product but like 25, or however many were announced that Google I/O. And with the complexity that comes with doing all that in a way that’s scalable, responsible, sustainable and maintainable.

I recall someone telling me when I worked at a Fancy Dan blue chip consulting firm, “Stephen, two objectives are zero objectives.” Obviously Google is orders of magnitude more capable than the bozos at the consulting company. Google can do 25 objectives. Impressive.

I noted this statement:

we created the OpenXLA [an open-source ML compiler ecosystem co-developed by AI/ML industry leaders to compile and optimize models from all leading ML frameworks] because the interface into the compiler in the middle is something that would benefit everybody if it’s commoditized and standardized.

I think this means that Google wants to be the gatekeeper or man in the middle.
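To make the “compiler in the middle” concrete, here is a minimal sketch of what sitting on top of that interface looks like from JAX, one of the frameworks that lowers Python functions to the XLA compiler stack. This is my own generic illustration, not Google’s code.

```python
import jax
import jax.numpy as jnp

@jax.jit  # trace the Python function and hand it to the XLA compiler stack
def predict(weights, inputs):
    return jnp.tanh(inputs @ weights)

inputs = jnp.ones((4, 8))
weights = jnp.ones((8, 2))
print(predict(weights, inputs).shape)  # (4, 2); compiled on the first call
```

Whoever standardizes and operates that middle layer sits in the path of every model that passes through it, which is the gatekeeper point.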

Now let’s consider the first article cited. Google does not want its employees to use smart software because it cannot be trusted.

Is it logical to conclude that Google and its partners should use software which is not trusted? Should Google and its partners not use smart software because it is not secure? Given these constraints, how does Google make advances in smart software?

My perception is:

  1. Google is not sure what to do
  2. Google wants to position its untrusted and insecure software as the industry standard
  3. Google wants to preserve its position in a workflow to maximize its profit and influence in markets.

You may not agree. But when articles present messages which are alarming and clearly focused on market control, I turn my skeptic control knob. By the way, the headline should be “Google’s Nadav Eiron Says Machine Learning Infrastructure Is a Conduit to Facilitate Google’s Control of Smart Software.”

Stephen E Arnold, June 19, 2023

Can One Be Accurate, Responsible, and Trusted If One Plagiarizes?

June 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Now that AI is such a hot topic, tech companies cannot afford to hold back due to small flaws. Like a tendency to spit out incorrect information, for example. One behemoth seems to have found a quick fix for that particular wrinkle: simple plagiarism. Eager to incorporate AI into its flagship Search platform, Google recently released a beta version to select users. Forbes contributor Matt Novak was among the lucky few and shares his observations in, “Google’s New AI-Powered Search Is a Beautiful Plagiarism Machine.”

The author takes us through his query and results on storing live oysters in the fridge, complete with screenshots of the Googlebot’s response. (Short answer: you can for a few days if you cover them with a damp towel.) He highlights passages that were lifted from websites, some with and some without tiny tweaks. To be fair, Google does link to its source pages alongside the pilfered passages. But why click through when you’ve already gotten what you came for? Novak writes:

“There are positive and negative things about this new Google Search experience. If you followed Google’s advice, you’d probably be just fine storing your oysters in the fridge, which is to say you won’t get sick. But, again, the reason Google’s advice is accurate brings us immediately to the negative: It’s just copying from websites and giving people no incentive to actually visit those websites.

Why does any of this matter? Because Google Search is easily the biggest driver of traffic for the vast majority of online publishers, whether it’s major newspapers or small independent blogs. And this change to Google’s most important product has the potential to devastate their already dwindling coffers. … Online publishers rely on people clicking on their stories. It’s how they generate revenue, whether that’s in the sale of subscriptions or the sale of those eyeballs to advertisers. But it’s not clear that this new form of Google Search will drive the same kind of traffic that it did over the past two decades.”

Ironically, Google’s AI may shoot itself in the foot by reducing traffic to informative websites: it needs their content to answer queries. Quite the conundrum it has made for itself.

Cynthia Murrell, June 14, 2023

Google: FUD Embedded in the Glacier Strategy

June 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Fly to Alaska. Stand on a glacier and let the guide explain that the glacier moves, just slowly. That’s the Google smart software strategy in a nutshell, operating under Code Red or Red Alert or “My goodness, Microsoft is getting media attention for something other than lousy code and security services. We have to do something sort of quickly.”

One facet of the game plan is to roll out a bit of FUD or fear, uncertainty, and doubt. That will send chills to some interesting places, won’t it? You can see this in action in the article “Exclusive: Google Lays Out Its Vision for Securing AI.” Feel the fear because AI will kill humanoids unless… unless you rely on Googzilla. This is the only creature capable of stopping the evil that irresponsible smart software will unleash upon you, everyone, maybe your dog too.


The manager of strategy says, “I think the fireball of AI security doom is going to smash us.” The top dog says, “I know. Google will save us.” Note to image trolls: This outstanding illustration was generated in a nonce by MidJourney, not an under-compensated creator in Peru.

The write up says:

Google has a new plan to help organizations apply basic security controls to their artificial intelligence systems and protect them from a new wave of cyber threats.

Note the word “plan”; that is, the here and now equivalent of vaporware or stuff that can be written about and issued as “real news.” The guts of the Google PR is that Google has six easy steps for its valued users to take. Each step brings that user closer to the thumping heart of Googzilla; to wit:

  • Assess what existing security controls can be easily extended to new AI systems, such as data encryption;
  • Expand existing threat intelligence research to also include specific threats targeting AI systems;
  • Adopt automation into the company’s cyber defenses to quickly respond to any anomalous activity targeting AI systems;
  • Conduct regular reviews of the security measures in place around AI models;
  • Constantly test the security of these AI systems through so-called penetration tests and make changes based on those findings;
  • And, lastly, build a team that understands AI-related risks to help figure out where AI risk should sit in an organization’s overall strategy to mitigate business risks.
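As one concrete illustration of the first bullet, extending an existing control such as data-at-rest encryption to AI training data, a minimal sketch might look like the following. This is my own generic example, not anything taken from Google’s plan.

```python
# Minimal sketch of data-at-rest encryption for a training record.
# Generic illustration only; not taken from Google's security plan.
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # in practice, hold the key in a key manager
cipher = Fernet(key)

record = b'{"prompt": "example training record", "label": "demo"}'
ciphertext = cipher.encrypt(record)     # what lands on disk or in the data lake
restored = cipher.decrypt(ciphertext)   # what an authorized training job reads
assert restored == record
```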

Does this sound like Mandiant-type consulting backed up by Google’s cloud goodness? It should because when one drinks Google juice, one gains Google powers over evil and also Google’s competitors. Google’s glacier strategy is advancing… slowly.

Stephen E Arnold, June 9, 2023
