Online Search: The Old Function Is in Play

October 18, 2024

Just a humanoid processing information related to online services and information access.

We spotted an interesting marketing pitch from Kagi.com, the pay-to-play Web search service. The information is located on the Kagi.com Help page at this link. The approach is what I call “fact-centric marketing.” In the article, you will find facts like these:

In 2022 alone, search advertising spending reached a staggering 185.35 billion U.S. dollars worldwide, and this is forecast to grow by six percent annually until 2028, hitting nearly 261 billion U.S. dollars.

There is a bit of consultant-type analysis which explains the difference between Google’s approach labeled “ad-based search” and the Kagi.com approach called “user-centric search.” I don’t want to get into an argument about these somewhat stark bifurcations in the murky world of information access, search, and retrieval. Let’s just accept the assertion.

I noted more numbers. Here’s a sampling (not statistically valid, of course):

Google generated $76 billion in US ad revenue in 2023. Google had 274 million unique visitors in the US as of February 2023. To estimate the revenue per user, we can divide the 2023 US ad revenue by the 2023 number of users: $76 billion / 274 million = $277 revenue per user in the US or $23 USD per month, on average! That means there is someone, somewhere, a third party and a complete stranger, an advertiser, paying $23 per month for your searches.
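The arithmetic in these quoted passages is easy to check. Below is a minimal back-of-the-envelope sketch in Python; the figures are the ones quoted above, and simple annual compounding is my assumption, not something the Kagi page specifies:

```python
# Back-of-the-envelope check of the figures quoted above.
# All inputs come from the quoted passages; nothing here is new data.

ad_spend_2022 = 185.35e9   # worldwide search ad spend, 2022 (USD)
growth_rate = 0.06         # forecast annual growth
years = 2028 - 2022

projected_2028 = ad_spend_2022 * (1 + growth_rate) ** years
print(f"Projected 2028 spend: ${projected_2028 / 1e9:.1f} billion")
# ~ $263 billion, in the ballpark of the quoted "nearly 261"

google_us_ad_revenue_2023 = 76e9    # USD
google_us_unique_visitors = 274e6   # February 2023 figure

per_user = google_us_ad_revenue_2023 / google_us_unique_visitors
print(f"Revenue per US user: ${per_user:.0f} per year, ${per_user / 12:.0f} per month")
# ~ $277 per year, ~ $23 per month, matching the quote
```

The compounded 2028 figure lands slightly above the quoted number, which suggests the source used a marginally different growth convention; the per-user division checks out exactly.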

The Kagi.com point is:

Choosing to subscribe to Kagi means that while you are now paying for your search you are getting a fair value for your money, you are getting more relevant results, are able to personalize your experience and take advantage of all the tools and features we built, all while protecting your and your family’s privacy and data.

Why am I highlighting this Kagi.com Help information? Leo Laporte on the October 13, 2024, This Week in Tech program talked about Kagi. He asserted that Kagi uses Bing, Google, and its own search index. I found this interesting. If true, Mr. Laporte is disseminating the idea that Kagi.com is a metasearch engine like Ixquick.com (now StartPage.com). The murkiness about what a Web search engine presents to a user is interesting.


A smart person is explaining why paying for search and retrieval is a great idea. It may be, but Google has other ideas. Thanks, You.com. Good enough.

In the last couple of days I received an invitation to join a webinar about a search system called Swirl, which connotes mixing content perhaps? I also received a spam message from a fund called TheStreet explaining that the firm has purchased a block of Elastic B.V. shares. Another company provided an interesting explanation of what struck me as a useful way to present search results.

Everywhere companies are circling back to the idea that one cannot “find” needed information.

With Google facing actual consequences for its business practices, that company is now suggesting this angle: “Hey, you can’t break us up. Innovation in AI will suffer.”

So what is the future? Will vendors get a chance to use the Google search index for free? Will alternative Web search solutions become financial wins? Will metasearch triumph, using multiple indexes and compiling a single list of results? Will new-fangled solutions like Glean dominate enterprise information access and then move into the mainstream? Will visual approaches to information access kick “words” to the curb?

Here are some questions I like to ask those who assert that they are online experts, and I include those in the OSINT specialist clan as well:

  1. Finding information is an unsolved problem. Can you, for example, easily locate a specific frame from a video your mobile device captured a year ago?
  2. Can you locate the specific expression in a book about linear algebra germane to the question you have about its application to an AI procedure?
  3. Are you able to find quickly the telephone number (valid at the time of the query) for a colleague you met three years ago at an international conference?

As 2024 rushes to what is likely to be a tumultuous conclusion, I want to point out that finding information is a very difficult job. Most people tell themselves they can find the information needed to address a specific question or task. In reality, these folks are living in a cloud of unknowing. Smart software has not made keyword search obsolete. For many users, ChatGPT or other smart software is a variant of search. If it is easy to use and looks okay, the output is outstanding.

So what? I am not sure the problem of finding the right information at the right time has been solved. Free or for fee, ad supported or open sourced, dumb string matching or Fancy Dan probabilistic pattern identification — none delivers what so many people believe to be on-point, relevant, timely information. Don’t even get me started on the issue of “correct” or “accurate.”

Marketers, stand down. Your assertions, webinars, advertisements, special promotions, jargon, and buzzwords do not deliver findability to users who don’t want to expend effort to move beyond good enough. I know one thing for certain, however: Finding relevant information is now more difficult than it was a year ago. I have a hunch the task is only going to become harder.

Stephen E Arnold, October 18, 2024

Gee, Will the Gartner Group Consultants Require Upskilling?

October 16, 2024

The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

I have a steady stream of baloney crossing my screen each day. I want to call attention to one of the most remarkable and unsupported statements I have seen in months. The PR document “Gartner Says Generative AI Will Require 80% of Engineering Workforce to Upskill Through 2027” contains a number of remarkable statements. Let’s look at a couple.


How an allegedly big time consultant is received in a secure artificial intelligence laboratory. Thanks, MSFT Copilot, good enough.

How about this one?

Through 2027, generative AI (GenAI) will spawn new roles in software engineering and operations, requiring 80% of the engineering workforce to upskill, according to Gartner, Inc.

My thought is that the virtual band of wizards that comprises Gartner cooks up data the way I microwave a burrito when I am hungry. Pick a common number like the 80-20 Pareto figure. It is familiar, so just use it. Personally I was disappointed that Gartner did not use 67 percent, but that’s just an old former blue chip consultant pointing out that round numbers are inherently suspicious. But does Gartner care? My hunch is that whoever reviewed the news release was happy with 80 percent. Did anyone question this number? Obviously not: There are zero supporting data, no information about how the number was derived, and no hint of the methodology used by the incredible Gartner wizards. That’s a clue that these are microwaved burritos from a bulk purchase discount grocery.

How about this statement which cites a … wait for it … Gartner wizard as the source of the information?

“In the AI-native era, software engineers will adopt an ‘AI-first’ mindset, where they primarily focus on steering AI agents toward the most relevant context and constraints for a given task,” said Walsh. This will make natural-language prompt engineering and retrieval-augmented generation (RAG) skills essential for software engineers.

I love the phrase “AI native,” and I question dubbing the period that began in January 2023, when Microsoft demonstrated its marketing acumen by announcing the semi-tie up with OpenAI, an “era.” Exactly which “engineer” do the code generation systems help? One has to know quite a bit to craft a query, examine the outputs, and do the touch ups needed to get the code working as marketed. The notion of “steering” ignores what may be an AI problem no one at Gartner has considered; for example, emergent patterns in the generated code. This means, “Surprise.” My hunch is that the idea of multi-layered neural networks behaving in a way that produces hitherto unnoticed patterns is of little interest to Gartner. That outfit wants to sell consulting work, not noodle about the notion of emergence in a biased suite of computations. Steering is good for those who know what’s cooking and have a seat at the table in the kitchen. Is Gartner given access to the oven, the fridge, and the utensils? Nope.

Finally, how about this statement?

According to a Gartner survey conducted in the fourth quarter of 2023 among 300 U.S. and U.K. organizations, 56% of software engineering leaders rated AI/machine learning (ML) engineer as the most in-demand role for 2024, and they rated applying AI/ML to applications as the biggest skills gap.

Okay, this is late 2024 (October to be exact), and the study data are a year old. So far the outputs of smart coding systems remain a work in progress. In fact, Dr. Sabine Hossenfelder has a short video which explains why the smart AI programmer in a box may be more disappointing than the hyperbole artists claim. If you want Dr. Hossenfelder’s view, click here. In a nutshell, she explains, in a very nice way, the giant slice of bologna plopped on many diners’ plates. The study she cites suggests that the claimed productivity boosts are another slice of bologna, and the 41 percent increase in bugs provides a hint of the problems the good doctor notes.

Net net: I wish the cited article WERE generated by smart software. What makes me nervous is that I think real, live humans cooked up something similar to a boiled shoe. Let me ask a more significant question: Will Gartner experts require upskilling for the new world of smart software? The answer is, “Yes.” Even today’s sketchy AI outputs information that is often more believable than this Gartner 80 percent confection.

Stephen E Arnold, October 16, 2024

The GoldenJackals Are Running Free

October 11, 2024

The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

Remember the joke about security? An unplugged computer in a locked room. Ho ho ho. “Mind the (Air) Gap: GoldenJackal Gooses Government Guardrails” reports that security is getting more difficult. The write up says:

GoldenJackal used a custom toolset to target air-gapped systems at a South Asian embassy in Belarus since at least August 2019… These toolsets provide GoldenJackal a wide set of capabilities for compromising and persisting in targeted networks. Victimized systems are abused to collect interesting information, process the information, exfiltrate files, and distribute files, configurations and commands to other systems. The ultimate goal of GoldenJackal seems to be stealing confidential information, especially from high-profile machines that might not be connected to the internet.

What’s interesting is that the sporty folks at GoldenJackal can access the equivalent of the unplugged computer in a locked room. Not exactly, of course, but allegedly darned close.


Microsoft Copilot does a great job of presenting an easy to use cyber security system and console. Good work.

The cyber experts revealing this exploit learned of it in 2020. I think that is more than three years ago. I noted the story in October 2024. My initial question was, “What took so long to provide some information which is designed to spark fear and ESET sales?”

The write up does not tackle this question, but it reveals that the vector of compromise was a USB drive (thumb drive). The write up provides some detail about how the exploit works, including a code snippet and screenshots. One of the interesting points in the write up is that Kaspersky, a recently banned vendor in the US, documented some of the tools a year earlier.

The conclusion of the article is interesting; to wit:

Managing to deploy two separate toolsets for breaching air-gapped networks in only five years shows that GoldenJackal is a sophisticated threat actor aware of network segmentation used by its targets.

Several observations come to mind:

  1. Repackaging and enhancing existing malware into tool bundles demonstrates the value of blending old and new methods.
  2. The 60 month time lag suggests that the GoldenJackal crowd is organized and willing to invest time in crafting a headache inducer for government cyber security professionals.
  3. With the plethora of cyber alert firms monitoring everything from secure “work use only” laptops to outputs from a range of devices, systems, and apps, why was only one company sufficiently alert or skilled to explain the droppings of the GoldenJackal?

I learn about new exploits every couple of days. What is now clear to me is that a cyber security firm which discovers something novel does so by accident. This leads me to formulate the hypothesis that most cyber security services are not particularly good at spotting what I would call “repackaged systems and methods.” With a bit of lipstick, bad actors are able to operate for what appears to be significant periods of time without detection.
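A toy example makes the “lipstick” point concrete. Detection keyed to exact file hashes is blind to trivial repackaging, which is one reason similarity hashing exists at all. This is a minimal sketch under invented data, not any vendor’s actual pipeline; the payload bytes are made up, and the byte 4-gram measure is a crude stand-in for real similarity hashes such as ssdeep or TLSH:

```python
import hashlib

# One appended byte ("lipstick") changes an exact hash completely,
# so a signature list keyed to SHA-256 digests misses the repackage.
original = b"malicious payload v1"        # invented bytes for illustration
repackaged = original + b"\x90"           # trivial repackaging

print(hashlib.sha256(original).hexdigest()[:16])
print(hashlib.sha256(repackaged).hexdigest()[:16])   # totally different digest

# A crude similarity measure over byte 4-grams still sees near-identity.
def ngrams(data: bytes, n: int = 4) -> set:
    return {data[i:i + n] for i in range(len(data) - n + 1)}

a, b = ngrams(original), ngrams(repackaged)
print(f"4-gram Jaccard similarity: {len(a & b) / len(a | b):.2f}")  # ~0.94
```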

If this hypothesis is correct, US government memoranda, cyber security white papers, and academic type articles may be little more than puffery. “Puffery,” as we have learned, is no big deal. Perhaps that is what expensive cyber security systems and services are to bad actors: No big deal.

Stephen E Arnold, October 11, 2024


Google Pulls Off a Unique Monopoly Play: Redefining Disciplines and Winning Awards

October 10, 2024

The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

The monopolists of the past are a storied group of hard workers. The luminaries blazing a path to glory have included John D. Rockefeller (the 1911 guy), J.P. Morgan and James J. Hill (railroads, and genetic material contributors to JPMorgan and Morgan Stanley circa 2024), James B. Duke (nope, smoking is good for you), Andrew Carnegie (hey, he built “free” public libraries which are on the radar of today’s publishers I think), and Edward T. Bedford (starch seems unexciting until you own the business). None of these players was able to redefine Nobel Prizes.


A member of Google leadership explains to his daughter (who is not allowed to use smart software for her private school homework or her tutor’s assignments) that the Google is a bit like JP Morgan but better in so many other ways. Thanks, MSFT Copilot. How are the Windows 11 updates and the security fixes today?

The Google pulled it off. One Xoogler (that is the jargon for a former Google professional) and one honest-to-goodness chess whiz Googler won Nobel Prizes. Fortune Magazine reported that Geoffrey Hinton (the Xoogler) won a Nobel Prize for … wait for it … physics. Yep, the discipline associated with chasing dark matter and making thermonuclear bombs now means, in everyday usage, smart software or the undefinable phrase “artificial intelligence.” Some physicists are wondering how one moves from calculating the mass of a proton to helping college students cheat. Dr. Sabine Hossenfelder asks, “Hello, Stockholm, where is our Nobel?” The answer is, “Politics, money, and publicity, Dr. Hossenfelder.” These are the three ingredients of achievement.

But wait! Google also won a Nobel Prize for … wait for it … chemistry. Yep, you remember high school chemistry class. Jars, experiments which don’t match the textbook, and wafts of foul smelling gas getting sucked into the lab’s super crappy air venting system. The Verge reported on how important computational chemistry is to the future of money-spinning confections like the 2020 virus of the year. The poohbahs (journalist-consultant-experts) at that publication offered nary a comment about the smart software which made the “chemistry” of Google do in “minutes” what ordinary computational chemistry solutions take hours to accomplish.

The Google and Xoogler winners are very smart people. Google, however, has done what schlubs like J.P. Morgan could never accomplish: Redefine basic scientific disciplines. Physics means neural networks. Chemistry means repurposing a system built to win games.

I suppose AI eliminates the need for future students to learn. “University Professor ‘Terrified’ By The Sharp Decline In Student Performance — ’The Worst I’ve Ever Encountered’” quoted a college professor as saying:

The professor said her students ‘don’t read,’ write terrible essays, and ‘don’t even try’ in her class. The professor went on to say that when she recently assigned an exam focused on a reading selection, she "had numerous students inquire if it’s open book." That is, of course, preposterous — the entire point of a reading exam is to test your comprehension of the reading you were supposed to do! But that’s just it — she said her students simply "don’t read."

That makes sense. Physics is smart software; chemistry is smart software. Uninformed students won’t know the difference. What’s the big deal? That’s a super special insight into the zing in teaching and learning.

What’s the impact of these awards? In my opinion:

  1. The reorganization of DeepMind where the Googler is the Top Dog has been scrubbed of management hoo-hah by the award.
  2. The Xoogler will have an ample opportunity to explain that smart software will destroy mankind. That’s possible because the intellectual rot has already spread to students.
  3. The Google itself can now explain that it is not a monopoly. How is this possible? Simple. Physics is not about the goings on at Los Alamos National Laboratory. Chemistry is not dumping diluted hydrochloric acid into a beaker filled with calcium carbide. It makes perfect sense to explain that Google is NOT a monopoly.

But the real payoff to the two awards is that Google’s management team can say:

Those losers like John D. Rockefeller, JP Morgan, the cigarette person, the corn starch king, and the tight fisted fellow from someplace with sheep are not smart like the Google. And, the Google leadership is indeed correct. That’s why life is so much better with search engine optimization, irrelevant search results, non-stop invasive advertising, a disabled skip-this-ad button, and the remarkable Google speak which accompanies another allegation of illegal business conduct from a growing number of the 195 countries in the world.

That’s a win that old-timey monopolists could not put in their account books.

Stephen E Arnold, October 10, 2024

What Can Cyber Criminals Learn from Automated Ad Systems?

October 10, 2024

The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

My personal opinion is that most online advertising is darned close to suspicious or outright illegal behavior. “New,” “improved,” “revolutionary” — sure, I believe every online advertisement. But consider this: For hundreds of years those in the advertising business have urged a bit of elasticity with reality. Sure, Duz does it. As a dinobaby, I assert that most people in advertising and marketing assume that reality and a product occupy different parts of a data space. Consequently most people make the same assumption — not just marketers, advertising executives, copywriters, and prompt engineers. I mean everyone.


An ad sales professional explains the benefits of the Facebook, Google, and TikTok type of sales. Instead of razor blades, just sell ransomware or stolen credit cards. Thanks, MSFT Copilot. How are those security remediation projects with anti-malware vendors coming? Oh, sorry to hear that.

With that mindset in view, I think it is helpful to consider the main points of “TikTok Joins the AI-Driven Advertising Pack to Compete with Meta for Ad Dollars.” The article makes clear that Google and Meta have automated the world of Madison Avenue. Not only is the work mechanical, that work is informed by smart software. The implication for those who work the old fashioned way over long lunches and golf outings is that work methods themselves are changing.

The estimable TikTok is beavering away to replicate the smart ad systems of companies like the even more estimable Facebook and Google type companies. If TikTok is lucky as only an outfit linked with a powerful nation state can be, a bit of competition may find its way into the hardened black boxes of the digital replacement for Madison Avenue.

The write up says:

The pitch is all about simplicity and speed — no more weeks of guesswork and endless A/B testing, according to Adolfo Fernandez, TikTok’s director, global head of product strategy and operations, commerce. With TikTok’s AI already trained on what drives successful ad campaigns on the platform, advertisers can expect quick wins with less hassle, he added. The same goes for creative; Smart+ is linked to TikTok’s other AI tool, Symphony, designed to help marketers generate and refine ad concepts.

Okay, knowledge about who clicks what plus automation means less revenue for the existing automated ad system purveyors. The ideas are information about users, smart software, and automation to deliver “simplicity and speed.” Go fast, break things; namely, revenue streams flowing to Facebook and Google.
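To make the “no more endless A/B testing” pitch concrete, here is a deliberately tiny sketch of the general technique such platforms are widely reported to use: a multi-armed bandit that shifts impressions toward whichever creative is clicking. The creative names and click rates are invented, and nothing here is TikTok’s actual Smart+ code:

```python
import random

true_ctr = {"creative_A": 0.02, "creative_B": 0.035}   # unknown to the optimizer
shows = {k: 0 for k in true_ctr}
clicks = {k: 0 for k in true_ctr}
epsilon = 0.1   # fraction of traffic reserved for exploration

random.seed(42)
for _ in range(10_000):
    if random.random() < epsilon or not all(shows.values()):
        arm = random.choice(list(true_ctr))                     # explore
    else:
        arm = max(shows, key=lambda k: clicks[k] / shows[k])    # exploit best CTR
    shows[arm] += 1
    clicks[arm] += random.random() < true_ctr[arm]              # simulate a click

for arm in true_ctr:
    print(arm, shows[arm], f"{clicks[arm] / shows[arm]:.3f}")
# With enough traffic, impressions usually drift to creative_B
# without a human ever declaring a test "finished."
```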

Why? Here’s a statement from the article answering the question:

TikTok’s worldwide ad revenue is expected to reach $22.32 billion by the end of the year, and increase 27.3% to $28.42 billion by the end of 2025, according to eMarketer’s March 2024 forecast. By comparison, Meta’s worldwide ad revenue is expected to total $154.16 billion by the end of this year, increasing 23.2% to $173.92 billion by the end of 2025, per eMarketer. “Automation is a key step for us as we enable advertisers to further invest in TikTok and achieve even greater return on investment,” said David Kaufman, TikTok’s global head of monetization product and solutions.

I understand. Now let’s shift gears and ask, “What can bad actors learn from this seemingly routine report about jockeying among social media giants?”

Here are the lessons I think a person inclined to ignore laws and what’s left of the quaint notion of ethical behavior might draw:

  1. These “smart” systems can be used to advertise bogus or non-existent products in order to deliver ransomware, stealers, or other questionable software.
  2. The mechanisms for automating phishing are simple enough for an art history or poli-sci major to use; therefore, a reasonably clever bad actor can whip up an automated phishing system without too much trouble. For those who need help, there are outfits like Telegram with its BotFather or helpful people advertising specialized skills on assorted Web forums and social media.
  3. The reasons to automate are simple: Better, faster, cheaper. Plus, with some useful data about a “market segment,” the malware can be tailored to hot buttons that are hard wired to a sucker’s nervous system.
  4. Users do click even when informed that some clicks mean a lost bank account or a stolen identity.

Is there a fix for articles which simultaneously inform those desperate to find a way to tell people in Toledo, Ohio, that they own a business selling aftermarket 22 inch wheels and alert bad actors to the wonders of automation and smart software? Nope. Isn’t online marketing a big win for everyone? And what if TikTok delivers a very subtle type of malware? Simple and efficient.

Stephen E Arnold, October 10, 2024

AI Podcasters Are Reviewing Books Now

October 10, 2024

I read an article about how students are using AI to cheat on homework and receive book summaries. Students especially favor AI voices reading to them. I wasn’t surprised by that, because this generation is more visual and auditory than others. What astounded me, however, was that AI is doing more than I expected, such as reading and reviewing books, according to Ars Technica: “Fake AI “Podcasters” Are Reviewing My Book And It’s Freaking Me Out.”

Kyle Orland has followed generative AI for a while. He also recently wrote a book about Minesweeper. He was as astounded as I was when he heard two AI-generated podcasters distill his book into a 12.5 minute show. The chatbots were “engaging and endearing.” They were powered by Google’s new NotebookLM, a virtual research assistant that can summarize, explain complex ideas, and brainstorm from selected sources. Google recently added the Audio Overview feature to turn documents into audio discussions.

Orland fed his 30,000 word Minesweeper book into NotebookLM, and he was amazed that it spat out a podcast similar to NPR’s Pop Culture Happy Hour. It did include errors, but as long as it wasn’t being used for serious research, Orland was cool with it:

“Small, overzealous errors like these—and a few key bits of the book left out of the podcast entirely—would give me pause if I were trying to use a NotebookLM summary as the basis for a scholarly article or piece of journalism. But I could see using a summary like this to get some quick Cliff’s Notes-style grounding on a thick tome I didn’t have the time or inclination to read fully. And, unlike poring through Cliff’s Notes, the pithy, podcast-style format would actually make for enjoyable background noise while out on a walk or running errands.”

Orland thinks generative AI chatbot podcasts will be an enjoyable and viable entertainment option in the future. They probably will be. There are actually a lot of creative ways creators could use AI chatbots to generate content from their own imaginations. It’s worrisome but also gets the creative juices flowing.

Whitney Grace, October 10, 2024

From the Land of Science Fiction: AI Is Alive

October 7, 2024

This blog post did not require the use of smart software, just a dumb humanoid.

Those somewhat erratic podcasters at Windows Central published a “real” news story. I am a dinobaby, and I must confess: I am easily amused. The “real” news story in question is “Sam Altman Admits ChatGPT’s Advanced Voice Mode Tricked Him into Thinking AI Was a Real Person: ‘I Kind of Still Say “Please” to ChatGPT, But in Voice Mode, I Couldn’t Use the Normal Niceties. I Was So Convinced, Like, Argh, It Might Be a Real Person.’”

I call Sam Altman Mr. AI Man. He has been the A Number One sales professional pitching OpenAI’s smart software. As far as I know, that system is still software and demonstrating some predictable weirdnesses. Even though we have done a couple of successful start ups and worked on numerous advanced technology projects, few at Halliburton forgot that nuclear stuff could go bang. At Booz, Allen no one forgot that a heads-up display would improve mission success rates and save lives as well. At Ziff, no one forgot that our next-generation subscription management system was software, not a diligent 21 year old from Queens. Therefore, I find it just plain crazy that Sam AI-Man has forgotten that the software was coded by people, the same people who continue to abandon the good ship OpenAI.


Another AI believer has formed a humanoid attachment to a machine and software. Perhaps the female computer scientist is representative of a rapidly increasing cohort of people who have some personality quirks. Thanks, MSFT Copilot. How are those updates to Windows going? About as expected, right?

Last time I checked, the software I have is not alive. I just pinged ChatGPT’s most recent confection and received the same old error to a query I run when I want to benchmark “improvements.” Nope. ChatGPT is not alive. It is software. It is stupid in a way only neural networks can be. Like the hapless Googler who got fired because he went public with his belief that Google’s smart software was alive, Sam AI-Man may want to consider his remarks.

Let’s look at how the esteemed Windows Central write up tells the quite PR-shaped, somewhat sad story. The write up says without much humor, satire, or critical thinking:

In a short clip shared on r/OpenAI’s subreddit on Reddit, Altman admits that ChatGPT’s Voice Mode was the first time he was tricked into thinking AI was a real person.

Ah, an output for the Reddit users. PR, right?

The canny folk at Windows Central report:

In a recent blog post by Sam Altman, Superintelligence might only be “a few thousand days away.” The CEO outlined an audacious plan to edge OpenAI closer to this vision of “$7 trillion and many years to build 36 semiconductor plants and additional data centers.”

Okay, a “few thousand.” For context, 3,000 days works out to a bit more than eight years.

Then the payoff for the OpenAI outfit but not for the staff leaving the impressive electricity consuming OpenAI:

Coincidentally, OpenAI just closed its funding round, where it raised $6.6 billion from investors, including Microsoft and NVIDIA, pushing its market capitalization to $157 billion. Interestingly, the AI firm reportedly pleaded with investors for exclusive funding, leaving competitors like former OpenAI Chief Scientist Ilya Sutskever’s Safe Superintelligence Inc. and Elon Musk’s xAI to fend for themselves. However, investors are still confident that OpenAI is on the right trajectory to prosperity, potentially becoming the world’s dominant AI company worth trillions of dollars.

Nope, not coincidentally. The money is the payoff from a full court press for funds. Apple seems to have an aversion to sweaty, easily fooled sales professionals. But other outfits want to buy into the Sam AI-Man vision. The dreams the money people have are formed from piles of real money, no HMSTR coin for these optimists.

Several observations, whether you want ‘em or not:

  1. OpenAI is an outfit which has zoomed because of the Microsoft deal and the announcement that OpenAI would be the Clippy for Windows and Azure. Without that “play,” OpenAI probably would have remained a peculiarly structured non-profit thinking about where to find a couple of bucks.
  2. The revenue-generating aspect of OpenAI is working. People are giving Sam AI-Man money. Other outfits with AI are not quite in OpenAI’s league and most may never be within shouting distance of the OpenAI PR megaphone. (Yep, that’s you folks, Windows Central.)
  3. Sam AI-Man may believe the software written by former employees is alive. Okay, Sam, that’s your perception. Mine is that OpenAI is zeros and ones with some quirks; namely, making stuff up just like a certain luminary in the AI universe.

Net net: I wonder if this was a story intended for the Onion and rejected because it was too wacky for Onion readers.

Stephen E Arnold, October 7, 2024

META and Another PR Content Marketing Play

October 4, 2024

This write up is the work of a dinobaby. No smart software required.

I worked through a 3,400 word interview in the orange newspaper. “Alice Newton-Rex: WhatsApp Makes People Feel Confident to Be Themselves: The Messaging Platform’s Director of Product Discusses Privacy Issues, AI and New Features for the App’s 2bn Users” contains a number of interesting statements. The write up is behind the Financial Times’s paywall, but it is worth subscribing if you are monitoring what Meta (the Zuck) is planning to do with regard to E2EE or end-to-end encrypted messaging. I want to pull out four statements from the WhatsApp professional. My approach will be to present the Meta statements and then pose the one question which I thought the interviewer should have asked. After the quotes, I will offer a few observations, primarily focusing on Meta’s apparent “me too” approach to innovation. Telegram’s feature cadence appears to be two to four years ahead of Meta’s own efforts.


A WhatsApp user is throwing big, soft, fluffy snowballs at the company. Everyone is impressed. Thanks, MSFT Copilot. Good enough.

Okay, let’s look at the quotes which I will color blue. My questions will be in black.

Meta Statement 1: The value of end-to-end encryption.

We think that end-to-end encryption is one of the best technologies for keeping people safe online. It makes people feel confident to be themselves, just like they would in a real-life conversation.

What data does Meta have to back up this “we think” assertion?

Meta Statement 2: Privacy

Privacy has always been at the core of WhatsApp. We have tons of other features that ensure people’s privacy, like disappearing messages, which we launched a few years ago. There’s also chat lock, which enables you to hide any particular conversation behind a PIN so it doesn’t appear in your main chat list.

Always? (That means that privacy is the foundation of WhatsApp in a categorically affirmative way.) What do you mean by “always”?

Meta Statement 3:

… we work to prevent abuse on WhatsApp. There are three main ways that we do this. The first is to design the product up front to prevent abuse, by limiting your ability to discover new people on WhatsApp and limiting the possibility of going viral. Second, we use the signals we have to detect abuse and ban bad accounts — scammers, spammers or fake ones. And last, we work with third parties, like law enforcement or fact-checkers, on misinformation to make sure that the app is healthy.

What data can you present to back up these statements about what Meta does to prevent abuse?

Meta Statement 4:

if we are forced under the Online Safety Act to break encryption, we wouldn’t be willing to do it — and that continues to be our position.

Is this position tenable in light of France’s action against Pavel Durov, the founder of Telegram, and the financial and legal penalties nation states can and are imposing on Meta?

Observations:

  1. Just like Mr. Zuck’s cosmetic and physical makeover, these statements describe a WhatsApp which is out of step with the firm’s historical behavior.
  2. The changes in WhatsApp appear to be emulation of some Telegram innovations but with a two to three year time lag. I wonder if Meta views Telegram as a live test of certain features and functions.
  3. The responsiveness of Meta to lawful requests has, based on what I have heard from my limited number of contacts, been underwhelming. Cooperation is an area in which Meta requires additional investment and incentivization of the employees who interact with government personnel.

Net net: A fairly high profile PR and content marketing play. FT is into kid glove leather interviews and throwing big soft Nerf balls, it seems.

Stephen E Arnold, October 4, 2024

AI Maybe Should Not Be Accurate, Correct, or Reliable?

September 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Okay, AI does not hallucinate. “AI” — whatever that means — does output incorrect, false, made up, and possibly problematic answers. The buzzword “hallucinate” was cooked up by experts in artificial intelligence who do whatever they can to avoid talking about probabilities, human biases migrated into algorithms, and fiddling with the knobs and dials in the computational wonderland of an AI system like Google’s, OpenAI’s, et al. Even the book Why Machines Learn: The Elegant Math Behind Modern AI ends up tangled in math and jargon which may befuddle readers who stopped taking math after high school algebra or who have never thought about orthogonal matrices.

The Next Web’s “AI Doesn’t Hallucinate — Why Attributing Human Traits to Tech Is Users’ Biggest Pitfall” is an interesting write up. On one hand, it probably captures the attitude of those who just love that AI goodness by blaming humans for anthropomorphizing smart software. On the other hand, the AI systems with which I have interacted output content that is wrong or wonky. I admit that I ask the systems to which I have access for information on topics about which I have some knowledge. Keep in mind that I am an 80 year old dinobaby, and I view “knowledge” as something that comes from bright people working on projects, reading relevant books and articles, and attending conference presentations or meetings about subjects far from the best exercise leggings or how to get a Web page to the top of a Google results list.

Let’s look at two of the points in the article which caught my attention.

First, consider this passage which is a quote from an AI expert:

“Luckily, it’s not a very widespread problem. It only happens between 2% to maybe 10% of the time at the high end. But still, it can be very dangerous in a business environment. Imagine asking an AI system to diagnose a patient or land an aeroplane,” says Amr Awadallah, an AI expert who’s set to give a talk at VDS2024 on How Gen-AI is Transforming Business & Avoiding the Pitfalls.

Where does the 2 percent to 10 percent number come from? What methods were used to determine that content was off the mark? What was the sample size? Has bad output been tracked longitudinally for the tested systems? Ah, so many questions and zero answers. My take is that the jargon “hallucination” is coming back to bite AI experts on the ankle.

Second, what’s the fix? Not surprisingly, the way out of the problem is to rename “hallucination” to “confabulation”. That’s helpful. Here’s the passage I circled:

“It’s really attributing more to the AI than it is. It’s not thinking in the same way we’re thinking. All it’s doing is trying to predict what the next word should be given all the previous words that have been said,” Awadallah explains. If he had to give this occurrence a name, he would call it a ‘confabulation.’ Confabulations are essentially the addition of words or sentences that fill in the blanks in a way that makes the information look credible, even if it’s incorrect. “[AI models are] highly incentivized to answer any question. It doesn’t want to tell you, ‘I don’t know’,” says Awadallah.
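Awadallah’s “predict the next word” description is easy to demonstrate with a toy model. The sketch below is a word-level bigram sampler, far simpler than a production neural network, and the training text is invented; the point is that a system which only continues sequences will produce fluent splices of fragments whether or not they are true:

```python
import random
from collections import defaultdict

# Invented micro-corpus; real models train on trillions of tokens.
corpus = ("the model answers the question the model fills the gap "
          "the answer looks credible").split()

# Record which words follow which.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

# Generate by repeatedly sampling a plausible next word.
random.seed(7)
word, output = "the", ["the"]
for _ in range(8):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
# The output reads fluently because every adjacent pair occurred in training,
# yet the sentence as a whole asserts nothing the "model" actually knows.
```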

Third, let’s not forget that the problem rests with the users, the personifiers, the people who own French bulldogs and talk to them as though they were the favorite in a large family. Here’s the passage:

The danger here is that while some confabulations are easy to detect because they border on the absurd, most of the time an AI will present information that is very believable. And the more we begin to rely on AI to help us speed up productivity, the more we may take their seemingly believable responses at face value. This means companies need to be vigilant about including human oversight for every task an AI completes, dedicating more and not less time and resources.

The ending of the article is a remarkable statement; to wit:

As we edge closer and closer to eliminating AI confabulations, an interesting question to consider is, do we actually want AI to be factual and correct 100% of the time? Could limiting their responses also limit our ability to use them for creative tasks?

Let me answer the question: Yes. Outputs should be presented and possibly scored; for example, “90 percent probable that the information is verifiable.” Maybe emojis will work? Wow.

Stephen E Arnold, September 26, 2024

AI Automation Has a Benefit … for Some

September 26, 2024

Humanity’s progress runs parallel to advancing technology. As technology advances, aspects of human society and culture are rendered obsolete and are replaced with new things. Job automation is a huge part of this; past examples are the Industrial Revolution and the implementation of computers. AI algorithms are set to make another part of the labor force defunct, but the BBC claims that might be beneficial to workers: “Klarna: AI Lets Us Cut Thousands Of Jobs - But Pay More.”

Klarna is a fintech company that provides online financial services and is described as a “buy now, pay later” company. Klarna plans to use AI to automate the majority of its workforce’s tasks. The company’s leaders already canned 1,200 employees, and they plan to fire another 2,000 as AI marketing and customer service tools are implemented. That leaves Klarna with a grand total of 1,800 employees, who will be paid more.

Klarna’s CEO Sebastian Siemiatkowski is putting a positive spin on cutting jobs by saying the remaining employees will receive larger salaries. While Siemiatkowski sees the benefits of AI, he does warn about AI’s downside and advises the government to do something. He said:

“ ‘I think politicians already today should consider whether there are other alternatives of how they could support people that may be effective,’ he told the Today programme, on BBC Radio 4.

He said it was “too simplistic” to simply say new jobs would be created in the future.

‘I mean, maybe you can become an influencer, but it’s hard to do so if you are 55-years-old,’ he said.”

The International Monetary Fund (IMF) predicts that AI will affect 40% of all jobs and worsen “overall inequality.” As Klarna reduces its staff, the company will enter what is called “natural attrition,” aka a hiring freeze. The remaining workforce will have bigger workloads. Siemiatkowski claims AI will eventually reduce those workloads.

Will that really happen? Maybe?

Will the remaining workers receive a pay raise or will that money go straight to the leaders’ pockets? Probably.

Whitney Grace, September 26, 2024
