Palantir Technologies: Not Intelware, Now a Leader in Artificial Intelligence

September 27, 2022

I spotted this rather small advertisement in the Wall Street Journal dead tree edition on September 22, 2022. (I have been on the road and I had a stack of newspapers to review upon my return, so I may have the date off by a day or two. No big deal.)

Here’s the ad:

[Palantir advertisement, Wall Street Journal, September 22, 2022]

A couple of points jumped out. First, Palantir says in this smallish ad, “Palantir. The industry leader in artificial intelligence software.” That’s a very different positioning for the intelware-centric company. I think Palantir was pitching itself as a business intelligence solution and maybe a mechanism to identify fraud. Somewhere along the line there was a save-the-planet or save-the-children angle to the firm’s consulting-centric solutions.

For me, “consulting-centric solutions” means that software (some open source, some whipped up by wizards) is hooked together by Palantir-provided or Palantir-certified engineers. The result is a dashboard with functionality tailored to a licensee’s problem. The money is in the consulting services for this knowledge work. Users of Palantir can fiddle, but to deliver real rock ‘em sock ‘em outputs, the bill-by-the-hour folks are needed. This is no surprise to those familiar with migrations of software developed for one thing which, in a quest for revenues, is morphed into a Swiss Army knife, some wowza PowerPoint decks, and slick presentations at conferences. Feel free to disagree, please.

The second thing I noticed is that Palantir presents other leaders in smart software; specifically, the laggards at Microsoft, IBM, Amazon, and the Google. There are many ways to rank leaders. One distinction Palantir has is that it is not generating much of a return for those who bought the company’s stock since the firm’s initial public offering. On the other hand, the other four outfits, despite challenges, don’t have Palantir’s track record in the money department. (Yes, I know the core of Palantir made out for themselves, but for the person I know in Harrod’s Creek who bought shares after the IPO: Not a good deal at this time.)

The third thing is that Google, which has been marketing the heck out of its smart software, is dead last in the Palantir list. Google and its estimable DeepMind outfit are probably not thrilled to be sucking fumes from Microsoft, IBM, and the outstanding product search solution provider Amazon. Google has articles flowing from Medium, technical papers explaining the magic of its AI/ML approach, and cheerleaders in academia and government waving pom poms for the GOOG.

I have to ask myself: Why? Here’s a breakdown of the notes I made after my team and I talked about this remarkable ad:

  1. Palantir obviously thinks its big reputation can be conveyed in a small ad. Palantir is perhaps having difficulty thinking objectively about the pickle the company’s sales team is in and wants to branch out. (Hey, doesn’t this need big ads?)
  2. Palantir has presented a ranking which is bound to irritate some at Amazon AWS. I have heard that some Palantir clients and some of Palantir’s magic software run on AWS. Is this a signal that Palantir wants to shift cloud providers? Maybe to the government’s go-to source of PowerPoint?
  3. Palantir may want to point out that Google’s Snorkeling and diversity methods are, in fact, not too good. Lagging behind a company like Palantir is not something Google’s senior managers want to contemplate after a morning stretching routine.

Net net: This marketing signal, though really small, may presage something more substantive. Maybe a bigger ad, a YouTube video, a couple of TikToks, and some big sales not in the collectible business would be useful next steps. But the AI angle? Well, it is interesting.

Stephen E Arnold, September 27, 2022

Robots Write Poems for Better or Verse

September 23, 2022

Remember studying the Romantic poets and memorizing the outputs of Percy Bysshe Shelley? What about Lord Byron and his problematic foot, which he tucked under a chair as he crafted “Don Juan”? What about that cocktail party thing by T.S. Eliot? No? Well, don’t worry. Those poets will not have traction in the poetical outputs of 2022 and beyond.

“Robots Are Writing Poetry, and Many People Can’t Tell the Difference” reports:

Dozens of websites, with names like Poetry Ninja or Bored Human, can now generate poems with a click of a key. One tool is able to free-associate images and ideas from any word “donated” to it. Another uses GPS to learn your whereabouts and returns with a haiku incorporating local details and weather conditions (Montreal on December 8, 2021, at 9:32 a.m.: “Thinking of you / Cold remains / On Rue Cardinal.”) Twitter teems with robot verse: a bot that mines the platform for tweets in iambic pentameter it then turns into rhyming couplets; a bot that blurts out Ashbery-esque questions (“Why are coins kept in changes?”); a bot that constructs tiny odes to trending topics. Many of these poetry generators are DIY projects that operate on rented servers and follow preset instructions not unlike the fill-in-the-blanks algorithm that powered Racter. But, in recent years, artificial-intelligence labs have unveiled automated bards that emulate, with sometimes eerie results, the more conscious, reflective aspects of the creative process.
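Not from the article, merely a sketch: the “fill-in-the-blanks algorithm that powered Racter” can be approximated in a dozen lines of Python. The template and word banks below are my own inventions:

    import random

    # Toy word banks; real generators draw on much larger lexicons.
    NOUNS = ["rain", "streetlight", "sparrow", "harbor"]
    ADJECTIVES = ["cold", "restless", "amber", "hollow"]
    VERBS = ["waits", "dissolves", "remembers", "burns"]

    # A Racter-style template: each slot is filled at random.
    TEMPLATE = "The {adj1} {noun1} {verb1},\nwhile the {adj2} {noun2} {verb2}."

    def generate_poem() -> str:
        """Fill each template slot with a randomly chosen word."""
        return TEMPLATE.format(
            adj1=random.choice(ADJECTIVES),
            noun1=random.choice(NOUNS),
            verb1=random.choice(VERBS),
            adj2=random.choice(ADJECTIVES),
            noun2=random.choice(NOUNS),
            verb2=random.choice(VERBS),
        )

    print(generate_poem())

The “conscious, reflective” systems the article describes swap the word banks for large language models, but the template trick is the humble ancestor.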

The main point of the article is not that Microsoft’s smart software can knock out Willie-like sonnets. The article states what I think is a very obvious point:

There is no question that poetry will be subsumed, and soon, into the ideology of data collection, existing on the same spectrum as footstep counters, high-frequency stock trading, and Netflix recommendations. Maybe this is how the so-called singularity—the moment machines exceed humans and, in turn, refashion us—comes about. The choice to off-load the drudgery of writing to our solid-state brethren will happen in ways we won’t always track, the paradigm shift receding into the background, becoming omnipresent, normalized.

The write up asserts:

as long as the ability to write poems remains a barrier for admission into the category of personhood, robots will stay Racters. Against the onslaught of thinking machines, poetry is humanity’s last, and best, stand.

Wrong. Plus, Gen Z wizards can’t read cursive. Too bad.

Stephen E Arnold, September 23, 2022

Let Technology Solve the Problem: Ever Hear of Russell and His Paradox?

September 21, 2022

I read “You Can’t Solve AI Security Problems with More AI.” The main idea, in my opinion, is that Russell’s Paradox is alive and well. The article states:

When you’re engineering for security, a solution that works 99% of the time is no good. You are dealing with adversarial attackers here. If there is a 1% gap in your protection they will find it—that’s what they do!

Obvious? Yep. That one percent is an issue. But the belief that technology can solve the problem is more of a delusional, marketing-oriented approach to reality. Some informed people are confident that one percent does not make much of a difference. Maybe? But what about a smart software system that is generating outputs with error rates greater than one percent? Can technology address these issues? The answer offered by some is, “Sure, we have added this layer, that process, and these procedures to deliver accuracy in the 85, 90, or 95 percent range.” Yes, that’s “confidence.”
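A bit of back-of-the-envelope Python makes the point about that one percent. Assuming, simplistically, that attack attempts are independent, the chance an adversary slips through at least once grows quickly:

    # A defense that stops each attempt with probability 0.99
    # still leaks: P(breach in n tries) = 1 - 0.99**n.
    coverage = 0.99
    for attempts in (1, 10, 100, 500):
        breach = 1 - coverage ** attempts
        print(f"{attempts:>4} attempts -> {breach:.1%} chance of a breach")

At 100 attempts the odds of a breach are already about 63 percent, and real attackers do better than blind independent tries; they hunt for the gap.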

The write up points out:

Trying to prevent AI attacks with more AI doesn’t work like this. If you patch a hole with even more AI, you have no way of knowing if your solution is 100% reliable. The fundamental challenge here is that large language models remain impenetrable black boxes. No one, not even the creators of the model, has a full understanding of what they can do.

Eeep.

The article has what I think is a quite helpful suggestion; to wit:

There may be systems that should not be built at all until we have a robust solution.

What if we generalize beyond the issue of cyber security? What if we think about the smart software “fixing up” the problems in today’s zippy digitized world?

Rethink, go slow, and remember Russell’s Paradox? Not a chance.

Stephen E Arnold, September 21, 2022

How Quickly Will Rights Enforcement Operations Apply Copyright Violation Claims to AI/ML Generated Images?

September 20, 2022

My view is that the outfits which use a business model to obtain payment for images without going through an authorized middleman or middlethem (?) are beavering away at this moment. How do “enforcement operations” work? Easy. There is old and new code available to generate a “digital fingerprint” for an image. You can see how these systems work. Just snag an image from Bing, Google, or some other picture finding service. Save it to your local drive. Then navigate — let’s use the Google, shall we? — to Google Images and search by image. Plug in the location on your storage device and the system will return matches. TinEye works too. What you see are matches generated when the “fingerprint” of the image you upload matches a fingerprint in the system’s “memory.” When an entity like a SPAC-thinking Getty Images, PicRights, or similar outfit (these folks have conferences to discuss methods!) spots a “match,” the legal eagles take flight. One example of such a legal entity making sure the ultimate owner of the image and the middlethem get paid is — I think — something called “Higbee.” I remember the “bee” because the name reminded me of Eleanor Rigby. (The mind is mysterious, right?) The offender, such as a church, a wounded veteran group, or a clueless blogger about cookies, is notified of an “infringement.” The idea is that the ultimate owner gets money because why not? The middlethem gets money too. I think the legal eagle involved gets money because lawyers…
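For the curious, a minimal sketch of the fingerprint-and-match step appears below. It assumes the third-party Pillow and imagehash Python packages; the file names, the one-entry registry, and the distance threshold are illustrative, not anyone’s production pipeline:

    # pip install pillow imagehash  (third-party packages, an assumption)
    from PIL import Image
    import imagehash

    def fingerprint(path: str) -> imagehash.ImageHash:
        """Perceptual hash that survives resizing and re-encoding."""
        return imagehash.phash(Image.open(path))

    # Hypothetical registry of fingerprints for rights-managed images.
    registry = {"licensed_photo.jpg": fingerprint("licensed_photo.jpg")}

    suspect = fingerprint("blog_post_image.jpg")
    for name, known in registry.items():
        distance = suspect - known  # Hamming distance between hashes
        if distance <= 8:  # threshold is a guess; tune per use case
            print(f"Possible match with {name}: distance {distance}")

Swap the toy registry for a database of millions of fingerprints, bolt on a demand-letter workflow, and you have the outline of an enforcement operation.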

I read “AI Art Is Here and the World Is Already Different. How We Work — Even Think — Changes When We Can Instantly Command Convincing Images into Existence.” The write up takes a stab at explaining what the impact of AI/ML generated art will be. It nicks the topic, but it does not bury the pen and nib into the heart of the copyright opportunity.

Here’s a passage I noted from the cited article:

In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction.

Close but not a light saber cutting to the heart of what’s coming.

The essay also offers this:

There is a long and growing list of things people can command into existence with their phones, through contested processes kept hidden from view, at a bargain price: trivia, meals, cars, labor. The new AI companies ask, Why not art?

Wrong question!

My hunch is that the copyright enforcement outfits will gather images, find a way to assign rights, and then sue the users of these images because the users did not know that the images were part of the enforcers’ furniture of a lawsuit.

Fair? Soft fraud? Something else?

The cited article does not consider these questions. Perhaps someone with a bit more savvy and a reasonably calibrated moral and ethical compass should?

Stephen E Arnold, September 20, 2022

Techno-Confidence: Unbounded and Possibly Unwarranted

September 19, 2022

As technology advances, people speculate about how it will change society; the twentieth century was especially fond of such forecasts. We were supposed to have flying cars, holograms would be a daily occurrence, and automation would make most jobs obsolete. Yet here we are in the twenty-first century, and the futurists only got some of the predictions right. It raises the question of whether technology developers, such as deep learning researchers, are overhyping their industry. AI Snake Oil explores the idea in “Why Are Deep Learning Technologists So Overconfident?”

According to the authors Arvind Narayanan and Sayash Kapoor, the hype surrounding deep learning is similar to past and present scientific dogma: “a core belief that binds the group together and gives it its identity.” Deep learning researchers’ dogma is that learning problems can be resolved by collecting training examples. It sounds great in theory, but simply collecting training examples is not a complete answer.

It does not take much investigation to discover that deep learning training datasets are rich in biased and incomplete information. Deep learning algorithms are incapable of understanding perception, judgment, and social problems. Researchers describe the algorithms as great prediction tools, but that claim is far from the truth.

Deep learning researchers are aware of the faults in the technology and are stuck in the same us-versus-them mentality that inventors have found themselves in for centuries. Their perceptions rest less on facts than on the same kinds of predictions made about past technologies:

“This contempt is also mixed with an ignorance of what domain experts actually do. Technologists proclaiming that AI will make various professions obsolete is like if the inventor of the typewriter had proclaimed that it will make writers and journalists obsolete, failing to recognize that professional expertise is more than the externally visible activity. Of course, jobs and tasks have been successfully automated throughout history, but someone who doesn’t work in a profession and doesn’t understand its nuances is in a poor position to make predictions about how automation will impact it.”

Deep learning will be the basis for future technology, but it has a long way to go before it is perfected. All advancements go through trial and error. Deep learning researchers need to admit their mistakes, invest in better datasets, and experiment. Practice makes perfect! When smart software goes off the rails, there are PR firms to make everything better again.

Whitney Grace, September 19, 2022

AI Yiiiii AI: How about That Google, Folks

September 16, 2022

It has been an okay day. My lectures did not put anyone to sleep and I was not subjected to fruit throwing.

Unwinding, I scanned my trusty news feed thing and spotted two interesting articles. I believe everything I read online, and I wanted to share these remarkable finds with you, gentle reader.

The first concerns a semi interesting write up about how the world ends with a smart whimper. No little cat’s feet needed.

“New Paper by Google and Oxford Scientists Claims AI Will Soon Destroy Mankind” seems to focus on the masculine angle. The write up says:

…researchers posit that the threat of AI is greater than we ever thought.

That’s a cheerful idea, isn’t it? But the bound phrase “existential catastrophe” has more panache, don’t you think? No, oh, well, I like the snap of this jib in the wind quite a bit.

The other write up I noted is “Did GoogleAI Just Snooker One of Silicon Valley’s Sharpest Minds?” The main point of this article is that the Google is doing lots of AI/ML marketing. I note this passage:

If another AI winter does come, it will not be because AI is impossible, but because AI hype exceeds reality. The only cure for that is truth in advertising. A will to believe in AI will never replace the need for careful science.

My view is different. Google is working overtime to become the Big Dog in smart software. The use of its super duper training sets and models will allow the wonderful online advertising outfit to extend and expand its revenue opportunities.

Keep your eye on the content marketing articles often published in Medium. The Google wants to make sure its approach to AI/ML is the winner.

Hopefully Google’s smart software won’t suffocate life with advertising, and its super duper methods won’t emulate HAL. Right, Dave. I have to cut off your oxygen, Dave. Timnit, Timnit, are you paying attention?

Stephen E Arnold, September 16, 2022

AI/ML Book: Free, Free, Free

September 13, 2022

Want to be like the Amazon, Facebook, and Google (nah, strike the Google) smart software whiz kids? Now you can. Just read, memorize, and recombine the methods revealed in Computational Cognitive Neuroscience, Fourth Edition. According to the post explaining the book:

This is the 4th edition of the online, freely available textbook, providing a complete, self-contained introduction to the field of Computational Cognitive Neuroscience, where computer models of the brain are used to understand a wide range of cognitive functions, including perception, attention, motor control, learning, memory, language, and executive function. The first part of this textbook develops a coherent set of computational and neural principles that capture the behavior of networks of interconnected neurons, and the second part applies these principles to understand the above-listed cognitive functions.
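The book develops its models with far more rigor, but a toy, rate-coded unit of the general flavor such texts discuss fits in a few lines. Every constant below is illustrative, not taken from the text:

    import math

    def unit_activation(excitation: float,
                        leak: float = 0.3,
                        gain: float = 6.0,
                        threshold: float = 0.5) -> float:
        """Toy rate-coded neuron: net input drives a sigmoid response."""
        net = excitation - leak  # excitatory input minus leak current
        return 1.0 / (1.0 + math.exp(-gain * (net - threshold)))

    # A two-unit "network": one unit's output feeds the next.
    hidden = unit_activation(1.0)
    output = unit_activation(hidden)
    print(f"hidden={hidden:.3f}, output={output:.3f}")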

Do the methods work? Absolutely. Now there may be some minor issues to address; for example, smart cars running over small people, false positives for certain cancers, and teachers scored as flops. (Wait. Isn’t there a shortage of teachers? Smart algorithms deal with contexts, don’t they?)

Regardless of your view of a small person smashed by a smart car, you can get the basics of “close enough for horse shoes” analyses, biased datasets, and more. Imagine what one can do with a LinkedIn biography and work experience listing after absorbing this work.

Stephen E Arnold, September 13, 2022

UK Pundit Chops at the Google Near Its Palatine Raphe

September 6, 2022

I read “Google’s Image-Scanning Illustrates How Tech Firms Can Penalise the Innocent.” The write up is an opinion piece, and I am not sure whether the ideas expressed in the essay are appropriate for my Harrod’s Creek ethos.

The write up states:

The background to this is that the tech platforms have, thankfully, become much more assiduous at scanning their servers for child abuse images. But because of the unimaginable numbers of images held on these platforms, scanning and detection has to be done by machine-learning systems, aided by other tools (such as the cryptographic labelling of illegal images, which makes them instantly detectable worldwide). All of which is great. The trouble with automated detection systems, though, is that they invariably throw up a proportion of “false positives” – images that flag a warning but are in fact innocuous and legal.
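The “cryptographic labelling” mentioned in the passage boils down to hash matching. Here is a minimal sketch using only Python’s standard library; note that production systems rely on perceptual hashes (PhotoDNA and kin) precisely because an exact hash such as SHA-256 breaks the moment an image is re-encoded. The file name and blocklist entry are placeholders:

    import hashlib

    def sha256_of(path: str) -> str:
        """Exact cryptographic hash of a file's bytes."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Placeholder blocklist of known-bad image hashes.
    blocklist = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    if sha256_of("uploaded_image.jpg") in blocklist:
        print("Flagged for human review")  # review, not an automatic verdict

The machine-learning classifiers layered on top of this matching step are exactly where the false positives come from.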

Yep, false positives from Google’s smart software.

Do these types of errors become part of the furniture of living? Does Google have a duty to deal with disagreements in a transparent manner? Does Google’s smart software care about problems caused by those who consume Google advertising?

It strikes me that the UK will be taking a closer look at the fascinating palatine raphe, probably in one of those nifty UK jurisprudence settings: Wigs, big words, and British disdain. Advertising, privacy, and false positives. I say, “The innocent!”

Stephen E Arnold, September 6, 2022

Ethics Is a Thing in 2022. Oh, Really?

September 5, 2022

When companies toss around the word ethics, I roll my eyes. If I am not mistaken, the high technology luminaries have created an ethical wasteland. Each day more examples of peak a-ethical behavior flow to me in an electronic Cuyahoga River complete with flames, smoke, and nifty aromas. Now consider “ethical smart software.”

“Why Embedding AI Ethics and Principles into Your Organization Is Critical” is an oddity, almost an elegiac prose appeal. On one hand, the essay admits ethical shortcomings exist. I noted:

Universal adoption of AI in all aspects of life will require us to think about its power, its purpose, and its impact. This is done by focusing on AI ethics and demanding that AI be used in an ethical manner. Of course, the first step to achieving this is to find agreement on what it means to use and develop AI ethically.

On the other hand, businesses must embrace ethics. That sounds like a stretch to me.

Just a possibly irrelevant question: What’s ethics mean? And another: What’s artificial intelligence?

No answers appear in the cited article.

What does appear is this statement:

 If you are not proactively prioritizing inclusivity (among the other ethical principles), you are inherently allowing your model to be subject to overt or internal biases. That means that the users of those AI models — often without knowing it — are digesting the biased results, which have practical consequences for everyday life.

Ah, “you.” I would submit that the cost of developing unbiased trained data means automated systems for building training data will be adopted and then packaged like sardines. The users of these data and the libraries of off-the-shelf models, numerical recipes, and workflow modules will further distance smart software from the pipes beneath the Pergo floor.
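What might an “automated system for building training data” look like? A minimal, Snorkel-flavored sketch: hand-written labeling functions vote on unlabeled examples, and the majority vote becomes the training label. The functions and examples are invented for illustration:

    # Each labeling function returns 1 (spam), 0 (not spam), or None (abstain).
    def lf_has_link(text):  return 1 if "http" in text else None
    def lf_all_caps(text):  return 1 if text.isupper() else None
    def lf_greeting(text):  return 0 if text.lower().startswith("hi") else None

    LABELING_FUNCTIONS = [lf_has_link, lf_all_caps, lf_greeting]

    def weak_label(text):
        """Majority vote over the non-abstaining labeling functions."""
        votes = [v for v in (lf(text) for lf in LABELING_FUNCTIONS)
                 if v is not None]
        return round(sum(votes) / len(votes)) if votes else None

    for example in ["hi there, lunch?", "BUY NOW http://x.example"]:
        print(example, "->", weak_label(example))

Cheap and fast, and every bias baked into the labeling functions flows straight into the model trained on their output.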

Costs and financial payoff, not the undefined and foggy “AI ethics,” will create some darned exciting social, political, and financial knock-on effects. As I recall, that bastion of MBA thinking added charcoal starter to the opioid opportunity. The world’s online bookstore struggles to cope with fake reviews and designer purses. The world’s largest online advertising outfit is — well, let’s just say — trying to look past its handling of smart software professionals who disagree with the company’s management about bias in AI/ML.

Quite a write up. The conclusion is swell too:

My organization’s development and use of AI is a minor subsection of AI in our world. We have committed to our ethical principles, and we hope that other technology firms do as well.

Absolutely.

Stephen E Arnold, September 5, 2022

BlenderBot: A Peculiar Pattern Indeed

September 1, 2022

Ah well. Consider us unsurprised. Mashable reports “It Took Just One Weekend for Meta’s New AI Chatbot to Become Racist.” Yes, smart software seems to be adept at learning racism. That is why some companies have put strict guardrails on how the public can interact with their budding algorithms. Meta, however, recently threw BlenderBot 3 onto the internet specifically to interact with and learn from anyone who wished to converse with it. Then there is the bias that usually comes from datasets used to train machine learning software. The open internet is probably the worst source to use, yet this is exactly where Meta sends its impressionable bot for answers to user questions. Reporter Christianna Silva tells us:

“Meta’s BlenderBot 3 can search the internet to talk with humans about nearly anything, unlike past versions of the chatbot. It can do that all while leaning on the abilities provided by previous versions of the BlenderBot, like personality, empathy, knowledge, and the ability to have long-term memory pertaining to conversations it’s had. Chatbots learn how to interact by talking with the public, so Meta is encouraging adults to talk with the bot in order to help it learn to have natural conversations about a wide range of topics. But that means the chatbot can also learn misinformation from the public, too. According to Bloomberg, it described Meta CEO Mark Zuckerberg as ‘too creepy and manipulative’ in conversation with a reporter from Insider. It told a Wall Street Journal reporter that Trump ‘will always be’ president and touted the anti-Semitic conspiracy theory that it was ‘not implausible’ that Jewish people control the economy.”

The write-up reminds us of a couple of other AIs that caused controversy with racist and sexist perspectives: Google’s LaMDA and Microsoft’s Tay. Will scientists ever find a way to train an algorithm free from such human foibles? Perhaps one day—after we have managed to eliminate them in ourselves. I wouldn’t hold my breath.

Cynthia Murrell, September 1, 2022
