Google and Its Smart Software: Marketing Fodder and Investment Compost

September 29, 2022

Alphabet Google YouTube DeepMind is “into” smart software. The idea is that synthetic data, off-the-shelf models, and Google’s secret sauce will work wonders. Now this series of words is catnip for AGYD’s marketing and sales professionals. Grrrreat, as Tony the Tiger used to say about a fascinating cereal decades ago. Grrreat!

However, there may be a slight disconnect between the AGYD smart software papers, demonstrations, and biology-shaking protein thing and the cold, hard reality of investment payback. Keep in mind that AGYD is about money, not the social shibboleths in the stream of content marketing.

Google Ventures Shelves Its Algorithm” states:

Google Ventures has mothballed an algorithm that for years had served as a gatekeeper for new investments… GV [Google Ventures] still relies heavily on data. After all, this is the corporate venture arm of Google. But data has been relegated to its original role as aide, rather than arbiter.

I interpreted the report to mean: Yikes! It does not work and Googley humans have to make decisions about investments.

The spin is that the algos are helpful. But the decision is humanoid.

I wonder, “What other AGYD algos don’t deliver what users, advertisers, and Googlers expected?”

Google listens to those with lots of money at risk. Does Google listen to other constituencies? Did Google take the criticism of its smart software to heart?

My hunch is that the smart software is lingo perfect for marketing outputs. Some of the outputs of the smart software are compost, rarely shown to the public and not sniffed by too many people. Will Tony the Tiger inhale and growl, “Grrreat”? Sure, sure, Tony will.

Stephen E Arnold, September 29, 2022

Psycho AI: Seems Possible

September 29, 2022

As if we needed to be convinced, scientists at MIT conducted an experiment that highlights the importance of machine learning data quality. The U.S. Sun reports, “Rogue Robot: ‘Psychopath AI’ Created by Scientists who Fed It Content from ‘Darkest Corners of Web’.” Citing this article from the BBC, writer Jona Jaupi tells us the demented AI is aptly named Norman (as in Psycho’s Norman Bates). We also learn:

“The aim of this experiment was to see how training AI on data from ‘the dark corners of the net’ would alter its viewpoints. ‘Norman’ was pumped with continuous image captions from macabre Reddit groups that share death and gore content. And this resulted in the AI meeting traditional ‘psychopath‘ criteria, per psychiatrists. Researchers came to their diagnosis after showing ‘Norman’ the Rorschach test. The test comprises a series of inkblots and depending on how viewers interpret them, they can indicate mental disorders. AI with neutral training interprets the images as day-to-day objects like umbrellas. However, ‘Norman’ appeared to perceive the images as executions and car crashes.”

Lovely. This horror-in-horror-out result should be no surprise to anyone who follows developments in AI and machine learning. The researchers say this illustrates AI bias is not the fault of algorithms themselves but of the data they are fed. Perhaps, but that is a purely academic distinction as long as unbiased datasets remain figments of imagination. While some point to synthetic data as the solution, that approach has its own problems. Despite the dangers, the world is being increasingly run by algorithms. We are unlikely to reverse course, so each development team will just have to choose which flawed method to embrace.

Cynthia Murrell, September 29, 2022

Palantir Technologies: Not Intelware, Now a Leader in Artificial Intelligence

September 27, 2022

I spotted this rather small advertisement in the Wall Street Journal dead tree edition on September 22, 2022. (I have been on the road and I had a stack of newspapers to review upon my return, so I may have the date off by a day or two. No big deal.)

Here’s the ad:

palantir ad fixed

A couple of points jumped out. First, Palantir says in this smallish ad, “Palantir. The industry leader in artificial intelligence software.” That’s a very different positioning for the intelware centric company. I think Palantir was pitching itself a business intelligence solution and maybe a mechanism to identify fraud. Somewhere along the line there was a save the planet or save the children angle to the firm’s consulting-centric solutions.

For me, “consulting centric solutions” means that software (some open source, some whipped up by wizards) is hooked together by Palantir-provided or Palantir-certified engineers. The result is a dashboard with functionality tailored to a licensee’s problem. The money is in the consulting services for this knowledge work. Users of Palantir can fiddle, but to deliver real rock ‘em sock ‘em outputs, the bill by the hour folks are needed. This is no surprise to those familiar with migrations of software developed for one thing which is then, in a quest for revenues, is morphed into a Swiss Army knife and some wowza PowerPoint presentations and slick presentations at conferences. Feel free to disagree, please.

The second thing I noticed is that Palantir presents other leaders in smart software; specifically, the laggards at Microsoft, IBM, Amazon, and the Google. There are many ways to rank leaders. One distinction Palantir has it that it is not generating much of a return for those who bought the company’s stock since the firm’s initial public offering. On the other hand, the other four outfits, despite challenges, don’t have Palantir’s track record in the money department. (Yes, I know the core of Palantir made out for themselves, but the person I know in Harrod’s Creek who bought shares after the IPO: Not a good deal at this time.

The third thing is that Google, which has been marketing the heck out of its smart software is dead last in the Palantir list. Google and its estimable DeepMind outfit is probably not thrilled to be sucking fumes from Microsoft, IBM, and the outstanding product search solution provider Amazon. Google has articles flowing from Medium, technical papers explaining the magic of its AI/ML approach, and cheerleaders in academia and government waving pom poms for the GOOG.

I have to ask myself why? Here’s a breakdown of the notes I made after my team and I talked about this remarkable ad:

  1. Palantir obviously thinks its big reputation can be conveyed in a small ad. Palantir is perhaps having difficulty thinking objectively about the pickle the company’s sales team is in and wants to branch out. (Hey, doesn’t this need big ads?)
  2. Palantir has presented a ranking which is bound to irritate some at Amazon AWS. I have heard that some Palantir clients and some Palantir’s magic software runs on AWS. Is this a signal that Palantir wants to shift cloud providers? Maybe to the government’s go-to source of PowerPoint?
  3. Palantir may want to point out that Google’s Snorkeling and diversity methods are, in fact, not too good. Lagging behind a company like Palantir is not something the senior managers consider after a morning stretching routine.

Net net: This marketing signal, though really small, may presage something more substantive. Maybe a bigger ad, a YouTube video, a couple of TikToks, and some big sales not in the collectible business would be useful next steps. But the AI angle? Well, it is interesting.

Stephen E Arnold, September 27, 2022

Robots Write Poems for Better or Verse

September 23, 2022

Remember studying the Romantic poets and memorizing the outputs of Percy Bysshe Shelley? What about Lord Byron and his problematic foot which he tucked under a chair as he crafted “Don Juan.” What about that cocktail party thing by TS Eliot? No, well, don’t worry. Those poets will not have traction in the poetical outputs of 2022 and beyond.

Robots Are Writing Poetry, and Many People Can’t Tell the Difference” reports:

Dozens of websites, with names like Poetry Ninja or Bored Human, can now generate poems with a click of a key. One tool is able to free-associate images and ideas from any word “donated” to it. Another uses GPS to learn your whereabouts and returns with a haiku incorporating local details and weather conditions (Montreal on December 8, 2021, at 9:32 a.m.: “Thinking of you / Cold remains / On Rue Cardinal.”) Twitter teems with robot verse: a bot that mines the platform for tweets in iambic pentameter it then turns into rhyming couplets; a bot that blurts out Ashbery-esque questions (“Why are coins kept in changes?”); a bot that constructs tiny odes to trending topics. Many of these poetry generators are DIY projects that operate on rented servers and follow preset instructions not unlike the fill-in-the-blanks algorithm that powered Racter. But, in recent years, artificial-intelligence labs have unveiled automated bards that emulate, with sometimes eerie results, the more conscious, reflective aspects of the creative process.

The main point of the article is not that Microsoft’s smart software can knock out Willie-like sonnets. The article states what I think is a very obvious point:

There is no question that poetry will be subsumed, and soon, into the ideology of data collection, existing on the same spectrum as footstep counters, high-frequency stock trading, and Netflix recommendations. Maybe this is how the so-called singularity—the moment machines exceed humans and, in turn, refashion us—comes about. The choice to off-load the drudgery of writing to our solid-state brethren will happen in ways we won’t always track, the paradigm shift receding into the background, becoming omnipresent, normalized.

The write up asserts:

as long as the ability to write poems remains a barrier for admission into the category of personhood, robots will stay Racters. Against the onslaught of thinking machines, poetry is humanity’s last, and best, stand.

Wrong. Plus, Gen Z wizards can’t read cursive. Too bad.

Stephen E Arnold, September 23, 2022

Let Technology Solve the Problem: Ever Hear of Russell and His Paradox?

September 21, 2022

I read “You Can’t Solve AI Security Problems with More AI.” The main idea, in my opinion, is that Russell’s Paradox is alive and well. The article states:

When you’re engineering for security, a solution that works 99% of the time is no good. You are dealing with adversarial attackers here. If there is a 1% gap in your protection they will find it—that’s what they do!

Obvious? Yep. That one percent is an issue. But the belief that technology can solve a problem is more of a delusional, marketing-oriented approach to reality. Some informed people are confident that one percent does not make much of a difference. Maybe? But what about a smart software system that is generating outputs with probabilities greater than one percent. Can technology address these issues? The answer offered by some is, “Sure, we have added this layer, that process, and these procedures to deliver accuracy in the 85, 90, or 95 percent range. Yes, that’s “confidence.”

The write up points out:

Trying to prevent AI attacks with more AI doesn’t work like this. If you patch a hole with even more AI, you have no way of knowing if your solution is 100% reliable. The fundamental challenge here is that large language models remain impenetrable black boxes. No one, not even the creators of the model, has a full understanding of what they can do.

Eeep.

The article has what I think is a quite helpful suggestion; to wit:

There may be systems that should not be built at all until we have a robust solution.

What if we generalize beyond the issue of cyber security? What if we think about the smart software “fixing up” the problems in today’s zippy digitized world?

Rethink, go slow, and remembering Russell’s Law? Not a chance.

Stephen E Arnold, September 21, 2022

How Quickly Will Rights Enforcement Operations Apply Copyright Violation Claims to AI/ML Generated Images?

September 20, 2022

My view is that the outfits which use a business model to obtain payment for images without going through an authorized middleman or middlethem (?) are beavering away at this moment. How do “enforcement operations” work? Easy. There is old and new code available to generate a “digital fingerprint” for an image. You can see how these systems work. Just snag an image from Bing, Google, or some other picture finding service. Save it to you local drive. Then navigate — let’s use the Google, shall we? — to Google Images and search by image. Plug in the location on your storage device and the system will return matches. TinEye works too. What you see are matches generated when the “fingerprint” of the image you upload matches a fingerprint in the system’s “memory.” When an entity like a SPAC thinking Getty Images, PicRights, or similar outfit (these folks have conferences to discuss methods!) spots a “match,” the legal eagles take flight. One example of such a legal entity making sure the ultimate owner of the image and the middlethem gets paid, is — I think — something called “Higbee.” I remember the “bee” because the named reminded me of Eleanor Rigby. (The mind is mysterious, right?) The offender such as a church, a wounded veteran group, or a clueless blogger about cookies is notified of an “infringement.” The idea is that the ultimate owner gets money because why not? The middlethem gets money too. I think the legal eagle involved gets money because lawyers…

I read “AI Art Is Here and the World Is Already Different. How We Work — Even Think — Changes When We Can Instantly Command Convincing Images into Existence” takes a stab at explaining what the impact of AI/ML generated art will be. The write up nicks the topic, but it does not buy the pen and nib into the heart of the copyright opportunity.

Here’s a passage I noted from the cited article:

In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction.

Close but not a light saber cutting to the heart of what’s coming.

There is a long and growing list of things people can command into existence with their phones, through contested processes kept hidden from view, at a bargain price: trivia, meals, cars, labor. The new AI companies ask, Why not art?

Wrong question!

My hunch is that the copyright enforcement outfits will gather images, find a way to assign rights, and then sue the users of these images because the users did not know that the images were part of the enforcers furniture of a lawsuit.

Fair? Soft fraud? Something else?

The cited article does not consider these questions. Perhaps someone with a bit more savvy and a reasonably calibrated moral and ethical compass should?

Stephen E Arnold, September 20, 2022

Techno-Confidence: Unbounded and Possibly Unwarranted

September 19, 2022

As technology advances, people speculate about how it will change society, especially in the twentieth century. We were supposed to have flying cars, holograms would be a daily occurrence, and automation would make most jobs obsolete. Yet here we are in the twenty-first century and futurists only got some of the predictions right. It begs the question if technology developers, such as deep learning researchers, are overhyping their industry. AI Snake Oil explores the idea in, “Why Are Deep Learning Technologists So Overconfident?”

According to the authors Arvind Narayanan and Says Kapoor, the hype surrounding deep learning is similar to past and present scientific dogma: “a core belief that binds the group together and gives it its identity.” Deep learning researchers’ dogma is that learning problems can be resolved by collecting training examples. It sounds great in theory, but simply collecting training examples is not a complete answer.

It does not take much investigation to discover that deep learning training datasets are rich in biased and incomplete information. Deep learning algorithms are incapable of understanding perception, judgment, and social problems. Researchers describe the algorithms as great prediction tools, but it is the furthest thing from the truth.

Deep learning researchers are aware of the faults in the technology and are stuck in the same us vs. them mentality that inventors have found themselves in for centuries. Deep learning perceptions are not based on many facts, but on common predictions other past technologies faced:

“This contempt is also mixed with an ignorance of what domain experts actually do. Technologists proclaiming that AI will make various professions obsolete is like if the inventor of the typewriter had proclaimed that it will make writers and journalists obsolete, failing to recognize that professional expertise is more than the externally visible activity. Of course, jobs and tasks have been successfully automated throughout history, but someone who doesn’t work in a profession and doesn’t understand its nuances is in a poor position to make predictions about how automation will impact it.”

Deep learning will be the basis for future technology, but it has a long way to go before it is perfected. All advancements go through trial and error. Deep learning researchers need to admit their mistakes, invest funding with better datasets, and experiment. Practice makes perfect! When smart software goes off the rails, there are PR firms to make everything better again.

Whitney Grace, September 19, 2022

AI Yiiiii AI: How about That Google, Folks

September 16, 2022

It has been an okay day. My lectures did not put anyone to sleep and I was not subjected to fruit throwing.

Unwinding I scanned my trusty news feed thing and spotted two interesting articles. I believe everything I read online, and I wanted to share these remarkable finds with you, gentle reader.

The first concerns a semi interesting write up about how the world ends with a smart whimper. No little cat’s feet needed.

New Paper by Google and Oxford Scientists Claims AI Will Soon Destroy Mankind” seems to focus on the masculine angle. The write up says:

…researchers posit that the threat of AI is greater than we ever thought.

That’s a cheerful idea, isn’t it? But the bound phrase “existential catastrophe” has more panache, don’t you think? No, oh, well, I like the snap of this jib in the wind quite a bit.

The other write up I noted is “Did GoogleAI Just Snooker One of Silicon Valley’s Sharpest Minds?” The main point of this article is that the Google is doing lots of AI/ML marketing. I note this passage:

If another AI winter does comes, it not be because AI is impossible, but because AI hype exceeds reality. The only cure for that is truth in advertising. A will to believe in AI will never replace the need for careful science. 

My view is different. Google is working overtime to become the Big Dog in smart software. The use of its super duper training sets and models will allow the wonderful online advertising outfit to extend and expand its revenue opportunities.

Keep your eye on the content marketing articles often published in Medium. The Google wants to make sure its approach to AI/ML is the winner.

Hopefully Google’s smart software won’t suffocate life with advertising and its super duper methods don’t emulate HAL. Right, Dave. I have to cut off your oxygen, Dave. Timnit, Timnit, are you paying attention?

Stephen E Arnold, September 16, 2022

AI/ML Book: Free, Free, Free

September 13, 2022

Want to be like the Amazon, Facebook, and Google (nah, strike the Google) smart software whiz kids? Now you can. Just read, memorize, and recombine the methods revealed in Computational Cognitive Neuroscience, Fourth Edition. According the post explaining the book:

This is the 4th edition of the online, freely available textbook, providing a complete, self-contained introduction to the field of Computational Cognitive Neuroscience, where computer models of the brain are used to understand a wide range of cognitive functions, including perception, attention, motor control, learning, memory, language, and executive function. The first part of this textbook develops a coherent set of computational and neural principles that capture the behavior of networks of interconnected neurons, and the second part applies these principles to understand the above-listed cognitive functions.

Do the methods work? Absolutely. Now there may be some minor issues to address; for example, smart cars running over small people, false positives for certain cancers, and teachers scored as flops. (Wait. Isn’t there a shortage of teachers? Smart algorithms deal with contexts, don’t they.)

Regardless of your view of a small person smashed by a smart car, you can get the basics of “close enough for horse shoes analyses, biased datasets, and more. Imagine what one can do with a LinkedIn biography and work experience listing after absorbing this work.

Stephen E Arnold, September 13, 2022

UK Pundit Chops at the Google Near Its Palatine Raphe

September 6, 2022

I read “Google’s Image-Scanning Illustrates How Tech Firms Can Penalise the Innocent.” The write up is an opinion piece, and I am not sure whether the ideas expressed in the essay are appropriate for my Harrod’s Creek ethos.

The write up states:

The background to this is that the tech platforms have, thankfully, become much more assiduous at scanning their servers for child abuse images. But because of the unimaginable numbers of images held on these platforms, scanning and detection has to be done by machine-learning systems, aided by other tools (such as the cryptographic labelling of illegal images, which makes them instantly detectable worldwide). All of which is great. The trouble with automated detection systems, though, is that they invariably throw up a proportion of “false positives” – images that flag a warning but are in fact innocuous and legal.

Yep, false positives from Google’s smart software.

Do these types of errors become part of the furniture of living? Does Google have a duty to deal with disagreements in a transparent manner? Does Google’s smart software care about problems caused by those who consume Google advertising?

It strikes me that the UK will be taking a closer look at the fascinating palatine raphe, probably in one of those nifty UK jurisprudence settings: Wigs, big words, and British disdain. Advertising, privacy, and false positives. I say, “The innocent!”

Stephen E Arnold, September 6, 2022

Next Page »

  • Archives

  • Recent Posts

  • Meta