The GoldenJackals Are Running Free

October 11, 2024

The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

Remember the joke about security. Unplugged computer in a locked room. Ho ho ho. “Mind the (Air) Gap: GoldenJackal Gooses Government Guardrails” reports that security is getting more difficult. The write up says:

GoldenJackal used a custom toolset to target air-gapped systems at a South Asian embassy in Belarus since at least August 2019… These toolsets provide GoldenJackal a wide set of capabilities for compromising and persisting in targeted networks. Victimized systems are abused to collect interesting information, process the information, exfiltrate files, and distribute files, configurations and commands to other systems. The ultimate goal of GoldenJackal seems to be stealing confidential information, especially from high-profile machines that might not be connected to the internet.

What’s interesting is that the sporty folks at GoldenJackal can access the equivalent of the unplugged computer in a locked room. Not exactly, of course, but allegedly darned close.


Microsoft Copilot does a great job of presenting an easy to use cyber security system and console. Good work.

The cyber experts revealing this exploit learned of it in 2020. I think that is more than three years ago. I noted the story in October 2024. My initial question was, “What took so long to provide some information which is designed to spark fear and ESET sales?”

The write up does not tackle this question, but it reveals that the vector of compromise was a USB drive (thumb drive). The write up provides some detail about how the exploit works, including a code snippet and screenshots. One of the interesting points in the write up is that Kaspersky, a vendor recently banned in the US, documented some of the tools a year earlier.

The conclusion of the article is interesting; to wit:

Managing to deploy two separate toolsets for breaching air-gapped networks in only five years shows that GoldenJackal is a sophisticated threat actor aware of network segmentation used by its targets.

Several observations come to mind:

  1. Repackaging and enhancing existing malware into tool bundles demonstrates the value of blending old and new methods.
  2. The 60 month time lag suggests that the GoldenJackal crowd is organized and willing to invest time in crafting a headache inducer for government cyber security professionals.
  3. With the plethora of cyber alert firms monitoring everything from secure “work use only” laptops to useful outputs from a range of devices, systems, and apps, why was only one company sufficiently alert or skilled to explain the droppings of the GoldenJackal?

I learn about new exploits every couple of days. What is now clear to me is that a cyber security firm which discovers something novel does so by accident. This leads me to formulate the hypothesis that most cyber security services are not particularly good at spotting what I would call “repackaged systems and methods.” With a bit of lipstick, bad actors are able to operate for what appear to be significant periods of time without detection.

If this hypothesis is correct, US government memoranda, cyber security white papers, and academic type articles may be little more than puffery. “Puffery,” as we have learned, is no big deal. Perhaps that is what expensive cyber security systems and services are to bad actors: No big deal.

Stephen E Arnold, October 11, 2024


Google Pulls Off a Unique Monopoly Play: Redefining Disciplines and Winning Awards

October 10, 2024

The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

The monopolists of the past are a storied group of hard workers. The luminaries blazing a path to glory have included John D. Rockefeller (the 1911 guy), J.P. Morgan and James J. Hill (railroads, and genetic material contributors to JPMorgan and Morgan Stanley circa 2024), James B. Duke (nope, smoking is good for you), Andrew Carnegie (hey, he built “free” public libraries which are on the radar of today’s publishers I think), and Edward T. Bedford (starch seems unexciting until you own the business). None of these players were able to redefine Nobel Prizes.


A member of Google leadership explains to his daughter (who is not allowed to use smart software for her private school homework or her tutor’s assignments) that the Google is a bit like JP Morgan but better in so many other ways. Thanks, MSFT Copilot. How are the Windows 11 updates and the security fixes today?

The Google pulled it off. One Xoogler (that is the jargon for a former Google professional) and one honest-to-goodness chess whiz Googler won Nobel Prizes. Fortune Magazine reported that Geoffrey Hinton (the Xoogler) won a Nobel Prize for … wait for it … physics. Yep, the discipline associated with chasing dark matter and making thermonuclear bombs into everyday words really means smart software or the undefinable phrase “artificial intelligence.” Some physicists are wondering how one moves from calculating the mass of a proton to helping college students cheat. Dr. Sabine Hossenfelder asks, “Hello, Stockholm, where is our Nobel?” The answer is, “Politics, money, and publicity, Dr. Hossenfelder.” These are the three ingredients of achievement.

But wait! Google also won a Nobel Prize for … wait for it … chemistry. Yep, you remember high school chemistry class. Jars, experiments which don’t match the textbook, and wafts of foul smelling gas getting sucked into the lab’s super crappy air venting system. The Verge reported on how important computational chemistry is to the future of money-spinning confections like the 2020 virus of the year. The poohbahs (journalist-consultant-experts) at that publication offered nary a comment about the smart software which made the “chemistry” of Google do in “minutes” what ordinary computational chemistry solutions take hours to accomplish.

The Google and Xoogle winners are very smart people. Google, however, has done what the schlubs like J.P. Morgan could never accomplish: Redefine basic scientific disciplines. Physics means neural networks. Chemistry means repurposing a system to win chess games.

I suppose AI eliminates the need for future students to learn. “University Professor ‘Terrified’ By The Sharp Decline In Student Performance — ’The Worst I’ve Ever Encountered’” quoted a college professor as saying:

The professor said her students ‘don’t read,’ write terrible essays, and ‘don’t even try’ in her class. The professor went on to say that when she recently assigned an exam focused on a reading selection, she "had numerous students inquire if it’s open book." That is, of course, preposterous — the entire point of a reading exam is to test your comprehension of the reading you were supposed to do! But that’s just it — she said her students simply "don’t read."

That makes sense. Physics is smart software; chemistry is smart software. Uninformed students won’t know the difference. What’s the big deal? That’s a super special insight into the zing in teaching and learning.

What’s the impact of these awards? In my opinion:

  1. The reorganization of DeepMind where the Googler is the Top Dog has been scrubbed of management hoo-hah by the award.
  2. The Xoogler will have an ample opportunity to explain that smart software will destroy mankind. That’s possible because the intellectual rot has already spread to students.
  3. The Google itself can now explain that it is not a monopoly. How is this possible? Simple. Physics is not about the goings on at Los Alamos National Laboratory. Chemistry is not dumping diluted hydrochloric acid into a beaker filled with calcium carbide. It makes perfect sense to explain that Google is NOT a monopoly.

But the real payoff to the two awards is that Google’s management team can say:

Those losers like John D. Rockefeller, JP Morgan, the cigarette person, the corn starch king, and the tight fisted fellow from someplace with sheep are not smart like the Google. And, the Google leadership is indeed correct. That’s why life is so much better with search engine optimization, irrelevant search results, non-stop invasive advertising, a disabled “skip this ad” button, and the remarkable Google speak which accompanies another allegation of illegal business conduct from a growing number of the 195 countries in the world.

That’s a win that old-timey monopolists could not put in their account books.

Stephen E Arnold, October 10, 2024

What Can Cyber Criminals Learn from Automated Ad Systems?

October 10, 2024

The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

My personal opinion is that most online advertising is darned close to suspicious or outright illegal behavior. “New,” “improved,” “revolutionary” — sure, I believe every online advertisement. But consider this: For hundreds of years those in the advertising business have urged a bit of elasticity with reality. Sure, Duz does it. As a dinobaby, I assert that most people in advertising and marketing assume that reality and a product occupy different parts of a data space. Consequently most people accept the stretch: not just marketers, advertising executives, copywriters, and prompt engineers. I mean everyone.


An ad sales professional explains the benefits of Facebook, Google, and TikTok-type sales. Instead of razor blades, just sell ransomware and stolen credit cards. Thanks, MSFT Copilot. How are those security remediation projects with anti-malware vendors coming? Oh, sorry to hear that.

With that common mindset established, I think it is helpful to consider the main points of “TikTok Joins the AI-Driven Advertising Pack to Compete with Meta for Ad Dollars.” The article makes clear that Google and Meta have automated the world of Madison Avenue. Not only is the work mechanical, that work is informed by smart software. The implications for those who work the old fashioned way over long lunches and golf outings are that work methods themselves are changing.

The estimable TikTok is beavering away to replicate the smart ad systems of companies like the even more estimable Facebook and Google type companies. If TikTok is lucky as only an outfit linked with a powerful nation state can be, a bit of competition may find its way into the hardened black boxes of the digital replacement for Madison Avenue.

The write up says:

The pitch is all about simplicity and speed — no more weeks of guesswork and endless A/B testing, according to Adolfo Fernandez, TikTok’s director, global head of product strategy and operations, commerce. With TikTok’s AI already trained on what drives successful ad campaigns on the platform, advertisers can expect quick wins with less hassle, he added. The same goes for creative; Smart+ is linked to TikTok’s other AI tool, Symphony, designed to help marketers generate and refine ad concepts.

Okay, knowledge about who clicks what plus automation means less revenue for the existing automated ad system purveyors. The ideas are information about users, smart software, and automation to deliver “simplicity and speed.” Go fast, break things; namely, revenue streams flowing to Facebook and Google.

Why? Here’s a statement from the article answering the question:

TikTok’s worldwide ad revenue is expected to reach $22.32 billion by the end of the year, and increase 27.3% to $28.42 billion by the end of 2025, according to eMarketer’s March 2024 forecast. By comparison, Meta’s worldwide ad revenue is expected to total $154.16 billion by the end of this year, increasing 23.2% to $173.92 billion by the end of 2025, per eMarketer. “Automation is a key step for us as we enable advertisers to further invest in TikTok and achieve even greater return on investment,” David Kaufman, TikTok’s global head of monetization product and solutions, said during the TikTok.

I understand. Now let’s shift gears and ask, “What can bad actors learn from this seemingly routine report about jockeying among social media giants?”

Here are the lessons I think a person inclined to ignore laws and what’s left of the quaint notion of ethical behavior can draw:

  1. These “smart” systems can be used to advertise bogus or nonexistent products in order to deliver ransomware, stealers, or other questionable software.
  2. The mechanisms for automating phishing are simple enough for an art history or poli-sci major to use; therefore, a reasonably clever bad actor can whip up an automated phishing system without too much trouble. For those who need help, there are outfits like Telegram with its BotFather or helpful people advertising specialized skills on assorted Web forums and social media.
  3. The reasons to automate are simple: Better, faster, cheaper. Plus, with some useful data about a “market segment,” the malware can be tailored to hot buttons that are hard-wired to a sucker’s nervous system.
  4. Users do click even when informed that some clicks mean a lost bank account or a stolen identity.

Is there a fix for articles which both inform those desperate to tell people in Toledo, Ohio, about a business selling aftermarket 22 inch wheels and alert bad actors to the wonders of automation and smart software? Nope. Isn’t online marketing a big win for everyone? And what if TikTok delivers a very subtle type of malware? Simple and efficient.

Stephen E Arnold, October 10, 2024

AI Podcasters Are Reviewing Books Now

October 10, 2024

I read an article about how students are using AI to cheat on homework and receive book summaries. Students especially favor AI voices reading to them. I wasn’t surprised by that, because this generation is more visual and aural than others. What astounded me, however, was that AI is doing more than I expected, such as reading and reviewing books, according to Ars Technica: “Fake AI ‘Podcasters’ Are Reviewing My Book and It’s Freaking Me Out.”

Kyle Orland has followed generative AI for a while. He also recently wrote a book about Minesweeper. He was as astounded as I was when he heard two AI-generated podcasters distill his book into a 12.5 minute show. The chatbots were “engaging and endearing.” They were powered by Google’s new NotebookLM, a virtual research assistant that can summarize, explain complex ideas, and brainstorm from selected sources. Google recently added the Audio Overview feature to turn documents into audio discussions.

Orland fed his 30,000 word Minesweeper book into NotebookLM, and he was amazed that it spat out a podcast similar to NPR’s Pop Culture Happy Hour. It did include errors, but as long as it wasn’t being used for serious research, Orland was cool with it:

“Small, overzealous errors like these—and a few key bits of the book left out of the podcast entirely—would give me pause if I were trying to use a NotebookLM summary as the basis for a scholarly article or piece of journalism. But I could see using a summary like this to get some quick Cliff’s Notes-style grounding on a thick tome I didn’t have the time or inclination to read fully. And, unlike poring through Cliff’s Notes, the pithy, podcast-style format would actually make for enjoyable background noise while out on a walk or running errands.”

Orland thinks generative AI chatbot podcasts will be an enjoyable and viable entertainment option in the future. They probably will. There are actually a lot of creative ways creators could use AI chatbots to generate content from their own imaginations. It’s worrisome but also gets the creative juices flowing.

Whitney Grace, October 10, 2024

From the Land of Science Fiction: AI Is Alive

October 7, 2024

This blog post did not require the use of smart software, just a dumb humanoid.

Those somewhat erratic podcasters at Windows Central published a “real” news story. I am a dinobaby, and I must confess: I am easily amused. The “real” news story in question is “Sam Altman Admits ChatGPT’s Advanced Voice Mode Tricked Him into Thinking AI Was a Real Person: ‘I Kind of Still Say “Please” to ChatGPT, But in Voice Mode, I Couldn’t Use the Normal Niceties. I Was So Convinced, Like, Argh, It Might Be a Real Person.’”

I call Sam Altman Mr. AI Man. He has been the A Number One sales professional pitching OpenAI’s smart software. As far as I know, that system is still software and demonstrating some predictable weirdnesses. Even though we have done a couple of successful start ups and worked on numerous advanced technology projects, few at Halliburton forgot that nuclear stuff could go bang. At Booz, Allen no one forgot a heads up display would improve mission success rates and save lives as well. At Ziff, no one forgot that our next-generation subscription management system was software, not a diligent 21-year-old from Queens. Therefore, I find it just plain crazy that Sam AI-Man has forgotten that the people who continue to abandon the good ship OpenAI wrote software.


Another AI believer has formed a humanoid attachment to a machine and software. Perhaps the female computer scientist is representative of a rapidly increasing cohort of people who have some personality quirks. Thanks, MSFT Copilot. How are those updates to Windows going? About as expected, right?

Last time I checked, the software I have is not alive. I just pinged ChatGPT’s most recent confection and received the same old error to a query I run when I want to benchmark “improvements.” Nope. ChatGPT is not alive. It is software. It is stupid in a way only neural networks can be. Like the hapless Googler who got fired because he went public with his belief that Google’s smart software was alive, Sam AI-Man may want to consider his remarks.

Let’s look at how the esteemed Windows Central write up tells the quite PR-shaped, somewhat sad story. The write up says without much humor, satire, or critical thinking:

In a short clip shared on r/OpenAI’s subreddit on Reddit, Altman admits that ChatGPT’s Voice Mode was the first time he was tricked into thinking AI was a real person.

Ah, an output for the Reddit users. PR, right?

The canny folk at Windows Central report:

In a recent blog post by Sam Altman, Superintelligence might only be “a few thousand days away.” The CEO outlined an audacious plan to edge OpenAI closer to this vision of “$7 trillion and many years to build 36 semiconductor plants and additional data centers.”

Okay, a “few thousand.”

Then the payoff for the OpenAI outfit but not for the staff leaving the impressive electricity consuming OpenAI:

Coincidentally, OpenAI just closed its funding round, where it raised $6.6 from investors, including Microsoft and NVIDIA, pushing its market capitalization to $157 billion. Interestingly, the AI firm reportedly pleaded with investors for exclusive funding, leaving competitors like Former OpenAI Chief Scientist Illya Sustever’s SuperIntelligence Inc. and Elon Musk’s xAI to fend for themselves. However, investors are still confident that OpenAI is on the right trajectory to prosperity, potentially becoming the world’s dominant AI company worth trillions of dollars.

Nope, not coincidentally. The money is the payoff from a full court press for funds. Apple seems to have an aversion to sweaty, easily fooled sales professionals. But other outfits want to buy into the Sam AI-Man vision. The dreams the money people have are formed from piles of real money, no HMSTR coin for these optimists.

Several observations, whether you want ‘em or not:

  1. OpenAI is an outfit which has zoomed because of the Microsoft deal and the announcement that OpenAI would be the Clippy for Windows and Azure. Without that “play,” OpenAI probably would have remained a peculiarly structured non-profit thinking about where to find a couple of bucks.
  2. The revenue-generating aspect of OpenAI is working. People are giving Sam AI-Man money. Other outfits with AI are not quite in OpenAI’s league and most may never be within shouting distance of the OpenAI PR megaphone. (Yep, that’s you folks, Windows Central.)
  3. Sam AI-Man may believe the software written by former employees is alive. Okay, Sam, that’s your perception. Mine is that OpenAI is zeros and ones with some quirks; namely, making stuff up just like a certain luminary in the AI universe.

Net net: I wonder if this was a story intended for the Onion and rejected because it was too wacky for Onion readers.

Stephen E Arnold, October 7, 2024

META and Another PR Content Marketing Play

October 4, 2024

This write up is the work of a dinobaby. No smart software required.

I worked through a 3,400 word interview in the orange newspaper. “Alice Newton-Rex: WhatsApp Makes People Feel Confident to Be Themselves: The Messaging Platform’s Director of Product Discusses Privacy Issues, AI and New Features for the App’s 2bn Users” contains a number of interesting statements. The write up is behind the Financial Times’s paywall, but it is worth subscribing if you are monitoring what Meta (the Zuck) is planning to do with regard to E2EE or end-to-end encrypted messaging. I want to pull out four statements from the WhatsApp professional. My approach will be to present the Meta statements and then pose one question which I thought the interviewer should have asked. After the quotes, I will offer a few observations, primarily focusing on Meta’s apparent “me too” approach to innovation. Telegram’s feature cadence appears to be two to four ahead of Meta’s own efforts.


A WhatsApp user is throwing big, soft, fluffy snowballs at the company. Everyone is impressed. Thanks, MSFT Copilot. Good enough.

Okay, let’s look at the quotes which I will color blue. My questions will be in black.

Meta Statement 1: The value of end-to-end encryption.

We think that end-to-end encryption is one of the best technologies for keeping people safe online. It makes people feel confident to be themselves, just like they would in a real-life conversation.

What data does Meta have to back up this “we think” assertion?

Meta Statement 2: Privacy

Privacy has always been at the core of WhatsApp. We have tons of other features that ensure people’s privacy, like disappearing messages, which we launched a few years ago. There’s also chat lock, which enables you to hide any particular conversation behind a PIN so it doesn’t appear in your main chat list.

Always? (That means that privacy is the foundation of WhatsApp in a categorically affirmative way.) What do you mean by “always”?

Meta Statement 3:

… we work to prevent abuse on WhatsApp. There are three main ways that we do this. The first is to design the product up front to prevent abuse, by limiting your ability to discover new people on WhatsApp and limiting the possibility of going viral. Second, we use the signals we have to detect abuse and ban bad accounts — scammers, spammers or fake ones. And last, we work with third parties, like law enforcement or fact-checkers, on misinformation to make sure that the app is healthy.

What data can you present to back up these statements about what Meta does to prevent abuse?

Meta Statement 4:

if we are forced under the Online Safety Act to break encryption, we wouldn’t be willing to do it — and that continues to be our position.

Is this position tenable in light of France’s action against Pavel Durov, the founder of Telegram, and the financial and legal penalties nation states can impose and are imposing on Meta?

Observations:

  1. Just like Mr. Zuck’s cosmetic and physical makeover, these statements describe a WhatsApp which is out of step with the firm’s historical behavior.
  2. The changes in WhatsApp appear to be emulation of some Telegram innovations but with a two to three year time lag. I wonder if Meta views Telegram as a live test of certain features and functions.
  3. The responsiveness of Meta to lawful requests has, based on what I have heard from my limited number of contacts, been underwhelming. Cooperation is something which requires additional investment from Meta and incentivization of the Meta employees who interact with government personnel.

Net net: A fairly high profile PR and content marketing play. The FT is into kid-glove interviews and throwing big soft Nerf balls, it seems.

Stephen E Arnold, October 4, 2024

AI Maybe Should Not Be Accurate, Correct, or Reliable?

September 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Okay, AI does not hallucinate. “AI” — whatever that means — does output incorrect, false, made up, and possibly problematic answers. The buzzword “hallucinate” was cooked up by experts in artificial intelligence who do whatever they can to avoid talking about probabilities, human biases migrated into algorithms, and fiddling with the knobs and dials in the computational wonderland of an AI system like Google’s, OpenAI’s, et al. Even the book Why Machines Learn: The Elegant Math Behind Modern AI ends up tangled in math and jargon which may befuddle readers who stopped taking math after high school algebra or who have never thought about orthogonal matrices.

The Next Web’s “AI Doesn’t Hallucinate — Why Attributing Human Traits to Tech Is Users’ Biggest Pitfall” is an interesting write up. On one hand, it probably captures the attitude of those who just love that AI goodness by blaming humans for anthropomorphizing smart software. On the other hand, the AI systems with which I have interacted output content that is wrong or wonky. I admit that I ask the systems to which I have access for information on topics about which I have some knowledge. Keep in mind that I am an 80 year old dinobaby, and I view “knowledge” as something that comes from bright people working on projects, reading relevant books and articles, and giving conference presentations or attending meetings on subjects far removed from the best exercise leggings or how to get a Web page to the top of a Google results list.

Let’s look at two of the points in the article which caught my attention.

First, consider this passage, which is a quote from an AI expert:

“Luckily, it’s not a very widespread problem. It only happens between 2% to maybe 10% of the time at the high end. But still, it can be very dangerous in a business environment. Imagine asking an AI system to diagnose a patient or land an aeroplane,” says Amr Awadallah, an AI expert who’s set to give a talk at VDS2024 on How Gen-AI is Transforming Business & Avoiding the Pitfalls.

Where does the 2 percent to 10 percent number come from? What methods were used to determine that content was off the mark? What was the sample size? Has bad output been tracked longitudinally for the tested systems? Ah, so many questions and zero answers. My take is that the jargon “hallucination” is coming back to bite AI experts on the ankle.

Second, what’s the fix? Not surprisingly, the way out of the problem is to rename “hallucination” to “confabulation”. That’s helpful. Here’s the passage I circled:

“It’s really attributing more to the AI than it is. It’s not thinking in the same way we’re thinking. All it’s doing is trying to predict what the next word should be given all the previous words that have been said,” Awadallah explains. If he had to give this occurrence a name, he would call it a ‘confabulation.’ Confabulations are essentially the addition of words or sentences that fill in the blanks in a way that makes the information look credible, even if it’s incorrect. “[AI models are] highly incentivized to answer any question. It doesn’t want to tell you, ‘I don’t know’,” says Awadallah.
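The mechanism Awadallah describes, predicting the next word from all the previous words, can be sketched in a few lines. The toy vocabulary and probabilities below are invented for illustration only; a real model scores its entire vocabulary with a neural network, but the loop is the same: score the candidates, sample one, append it, repeat.

```python
import random

# Toy next-word model. The tables are invented for illustration; a real LLM
# computes these probabilities from billions of parameters, but the
# generation loop is identical: score candidates, sample, append, repeat.
NEXT_WORD = {
    ("the",): {"cat": 0.5, "dog": 0.4, "answer": 0.1},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "dog"): {"sat": 0.2, "ran": 0.8},
}

def generate(prompt, max_words=3, seed=0):
    rng = random.Random(seed)
    words = list(prompt)
    for _ in range(max_words):
        table = NEXT_WORD.get(tuple(words))
        if table is None:  # nothing known; a real model would still "answer"
            break
        choices, weights = zip(*table.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(("the",)))
```

Note what the sketch never does: it never reports uncertainty. It simply emits the most plausible-looking continuation available, which is exactly how a “confabulation” gets born.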

Third, let’s not forget that the problem rests with the users, the personifiers, the people who own French bulldogs and talk to them as though they were the favorite in a large family. Here’s the passage:

The danger here is that while some confabulations are easy to detect because they border on the absurd, most of the time an AI will present information that is very believable. And the more we begin to rely on AI to help us speed up productivity, the more we may take their seemingly believable responses at face value. This means companies need to be vigilant about including human oversight for every task an AI completes, dedicating more and not less time and resources.

The ending of the article is a remarkable statement; to wit:

As we edge closer and closer to eliminating AI confabulations, an interesting question to consider is, do we actually want AI to be factual and correct 100% of the time? Could limiting their responses also limit our ability to use them for creative tasks?

Let me answer the question: Yes, outputs should be presented and possibly scored; for example, “90 percent probable that the information is verifiable.” Maybe emojis will work? Wow.
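A minimal sketch of that scoring idea follows. The `present` helper, the claims, and the probabilities are all invented for illustration; a production system would have to derive the scores from something real, such as retrieval checks, citation matching, or model log-probabilities.

```python
# Hypothetical "score the output" presenter: attach a verifiability
# probability to each generated claim and render a flag for the user.
def present(claims):
    lines = []
    for text, p_verifiable in claims:
        # Three invented tiers: high confidence, questionable, likely made up.
        flag = "OK " if p_verifiable >= 0.9 else ("?  " if p_verifiable >= 0.5 else "!! ")
        lines.append(f"[{flag}{p_verifiable:.0%} verifiable] {text}")
    return "\n".join(lines)

print(present([
    ("Water boils at 100 C at sea level.", 0.98),
    ("The author's book covers Minesweeper.", 0.75),
    ("This statistic appears in no source.", 0.10),
]))
```

Whether users would read the percentages any more carefully than they read cookie banners is, of course, another question.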

Stephen E Arnold, September 26, 2024

AI Automation Has a Benefit … for Some

September 26, 2024

Humanity’s progress runs parallel to advancing technology. As technology advances, aspects of human society and culture are rendered obsolete and replaced with new things. Job automation is a huge part of this; past examples are the Industrial Revolution and the implementation of computers. AI algorithms are set to make another part of the labor force defunct, but the BBC claims that might be beneficial to workers: “Klarna: AI Lets Us Cut Thousands of Jobs - But Pay More.”

Klarna is a fintech company that provides online financial services and is described as a “buy now, pay later” company. Klarna plans to use AI to automate the majority of its workforce. The company’s leaders have already canned 1,200 employees, and they plan to fire another 2,000 as AI marketing and customer service is implemented. That leaves Klarna with a grand total of 1,800 employees who will be paid more.

Klarna’s CEO Sebastian Siemiatkowski is putting a positive spin on cutting jobs by saying the remaining employees will receive larger salaries. While Siemiatkowski sees the benefits of AI, he does warn about AI’s downside and advises the government to do something. He said:

“ ‘I think politicians already today should consider whether there are other alternatives of how they could support people that may be effective,’ he told the Today programme, on BBC Radio 4.

He said it was “too simplistic” to simply say new jobs would be created in the future.

‘I mean, maybe you can become an influencer, but it’s hard to do so if you are 55-years-old,’ he said.”

The International Monetary Fund (IMF) predicts that 40% of all jobs will worsen in “overall equality” due to AI. As Klarna reduces its staff, the company will enter what is called “natural attrition,” aka a hiring freeze. The remaining workforce will have bigger workloads. Siemiatkowski claims AI will eventually reduce those workloads.

Will that really happen? Maybe?

Will the remaining workers receive a pay raise or will that money go straight to the leaders’ pockets? Probably.

Whitney Grace, September 26, 2024

But What about the Flip Side of Smart Software Swaying Opinion?

September 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Silicon Valley “fight of the century” might be back on. I think I heard, “Let’s scrap” buzzing in the background when I read “Musk Has Turned Twitter into Hyper-Partisan Hobby Horse, Says Nick Clegg.” Here in Harrod’s Creek, Kentucky, them is fightin’ words. When they are delivered by a British luminary educated at Westminster School before going on to study at the University of Cambridge, the pronouncement is particularly grating on certain sensitive technology super heroes.


The Silicon Valley Scrap is ramping up. On one digital horse is the Zuck. On the other steed is Musk. When the two titans collide, who will emerge as the victor? How about the PR and marketing professionals working for each of the possible chevaliers? Thanks, MSFT Copilot. Good enough.

The write up in the Telegraph (a British newspaper which uses a paywall to discourage the riff raff from reading its objective “real news” stories) reports:

Sir Nick, who is now head of global affairs for Facebook-owner Meta, said Mr Musk’s platform, which is now known as X, was used by a tiny group of elite people to “yell at each other” about politics. By contrast, Facebook and Instagram had deprioritized news and politics because people did not want to read it, he said.

Of course, Cambridge University graduates who have studied at the home of the Golden Gophers and the (where is it again?) College of Europe would not “yell.” How plebeian! How nouveau riche! My, my, how déclassé.

The Telegraph reports without a hint of sarcasm:

Meta launched a rival service last year called Threads, but has said it will promote subjects such as sports above news and politics in feeds. Sir Nick, who will next week face a Senate committee about tech companies’ role in elections, said that social media has very little impact on voters’ choices. “People tend to somewhat exaggerate the role that technology plays in how people vote and political behavior,” he said.

As a graduate of a loser school, I wish to humbly direct Sir Nick’s attention to “AI Chatbots Might Be Better at Swaying Conspiracy Theorists Than Humans.” The main idea of the write up of a research project is that:

Experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory showed that the interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual.

Keeping in mind that I am not the type of person the College of Europe or Golden Gopher U. wants on its campus, I would ask, “Wouldn’t smart software increase the power of bad actors or company owners who use AI chatbots to push the opinions promoted by high-technology companies?” If so, Mr. Clegg’s description of X.com as a hobby horse would apply to Sir Nick’s boss, Mark Zuckerberg (aka the Zuck). Surely social media and smart software are able to slice, dice, chop, and cut in multiple directions. Wouldn’t a filter tweaked a certain way provide a powerful tool to define “reality” and cause some users to ramp up their interest in a topic? Could these platforms, with a digital finger on the filter controls, make some people roll over, pat their tummies, and believe something that the high-technology “leadership” wants?

Which of these outstanding, ethical high-technology social media platforms will win a dust up in Silicon Valley? How much will Ticketmaster charge for a ring-side seat? What other pronouncements will the court jesters for these two highly-regarded companies say?

Stephen E Arnold, September 20, 2024

Too Bad Google and OpenAI. Perplexity Is a Game Changer, Says Web Pro News!

September 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have tested a number of smart software systems. I can say, based on my personal experience, that none is particularly suited to my information needs. Keep in mind that I am a dinobaby, more at home in a research library or the now-forgotten Dialog command line. ss cc=7900, thank you very much.

I worked through the write up “Why Perplexity AI Is (Way) Better Than Google: A Deep Dive into the Future of Search.” The phrase “Deep Dive” reminded me of a less-than-overwhelming search service called Deepdyve. (I just checked and, much to my surprise, the for-fee service is online at https://www.deepdyve.com/. Kudos, Deepdyve, which someone told me was a tire kicker, or maybe more, with the Snorkle system.) I could look it up using a smart software system, but performance is crappy today, and I don’t want to get distracted from the Web Pro News pronouncement. Besides, smart software output requires a lot of friction; that is, verifying that the outputs are accurate.


A dinobaby (the author of this blog post) works in a library. Thanks, MSFT Copilot, good enough.

Here’s the subtitle to the article. Its verbosity smacks of that good old and mostly useless search engine optimization tinkering:

Perplexity AI is not just a new contender; it’s a game-changer that could very well dethrone Google in the years to come. But what exactly makes Perplexity AI better than Google? Let’s explore the…

No, I didn’t truncate the subtitle. That’s it.

The write up explains what differentiates Perplexity from the other smart software, question-answering marvels. Here’s a list:

  • Speed and Precision at Its Core
  • Specialized Search Experience for Enterprise Needs
  • Tailored Results and User Interaction
  • Innovations in Data Privacy
  • Ad-Free Experience: A Breath of Fresh Air
  • Standardized Interface and High Accuracy
  • The Potential to Revolutionize Search

In my experience, I am not sure about the speed of Perplexity or any smart search and retrieval system. Speed must be compared to something. I can obtain results from my installation of Everything search pretty darned quick. None of the cloud search solutions comes close. My Mistral installation grunts and sweats on a corpus of 550 patent documents. How about some benchmarks, WebProNews?

Precision means that a query returns documents matching the query. There is a formula (which is okay as formulae go): precision equals relevant retrieved instances divided by all retrieved instances. To calculate this, one must take a bounded corpus, run queries, and develop an understanding of what is in the corpus by reading documents and comparing outputs from test queries. Then one uses another system, repeats the queries, and compares the results. The process can be embellished, particularly by graduate students working on an advanced degree. But something more than generalizations is needed to convince me of anything related to “precision.” Determining precision is impossible when vendors do not disclose sources and make the data sets available. Subjective impressions are okay for messy water lilies, but in the dinobaby world of precision and its sidekick recall, a bit of work is necessary.
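For readers who want the textbook version of the arithmetic described above: precision is relevant-retrieved over all-retrieved, and its sidekick recall is relevant-retrieved over all-relevant. Here is a minimal sketch of that bounded-corpus comparison; the document IDs and relevance judgments are made up for illustration, not taken from any vendor's data.

```python
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are actually relevant."""
    if not retrieved:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(retrieved)

def recall(retrieved, relevant):
    """Fraction of the relevant documents that were retrieved."""
    if not relevant:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(relevant)

# Hand-labeled ground truth for one test query against a bounded corpus
relevant_docs = ["d1", "d4", "d7", "d9"]

# What one system returned for that query
results = ["d1", "d2", "d4", "d5"]

print(precision(results, relevant_docs))  # 2 of 4 retrieved are relevant -> 0.5
print(recall(results, relevant_docs))     # 2 of 4 relevant were found -> 0.5
```

Run the same labeled queries against a second system and compare the two sets of numbers; without the hand labeling step, "precision" claims are just marketing adjectives.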

The “specialized search experience” means what? To me, I like to think about computational chemists. The interface has to support chemical structures, weird CAS registry numbers, words (mostly ones unknown to a normal human), and other assorted identifiers. As far as I know, none of the smart software I have examined does this for computational chemists or most of the other “specialized” experiences engineers, mathematicians, or physicists, among others, use in their routine work processes. I simply don’t know what Web Pro News wants me to understand. I am baffled, a normal condition for dinobabies.

I like the idea of tailored results. That’s what Instagram, TikTok, and YouTube try to deliver in order to increase stickiness. I think in terms of citations to relevant documents relevant to my query. I don’t like smart software which tries to predict what I want or need. I determine that based on the information I obtain, read, and write down in a notebook. Web Pro News and I are not on the same page in my paper notebook. Dinobabies are a pain, aren’t they?

I like the idea of “data privacy.” However, I need evidence that Perplexity’s innovations actually work. No data, no trust: Is that difficult for a younger person to understand?

The standardized interface makes life easy for the vendor. Think about the computational chemist. The interface must match her specific work processes. A standard interface is likely to be wide of the mark for some enterprise professionals. The phrase “high accuracy” means nothing without one’s knowing the corpus from which the index is constructed. Furthermore, the notion of probability means “close enough for horseshoes.” Hallucination refers to outputs from smart software which are wide of the mark. More insidious are errors which cannot be easily identified. A standard interface and accuracy don’t go together like peanut butter and jelly or bread and butter. The interface is separate from the underlying system. The interface might be “accurate” if the term were defined in the write up, but it is not. Therefore, accuracy is like “love,” “mom,” and “ethics.” Anything goes, just not for me.

The “potential to revolutionize search” is marketing baloney. Search today is more problematic than at any time in my more than half century of work in information retrieval. The only thing “revolutionary” is the ways to monetize users’ belief that the outputs are better, faster, and cheaper than other available options. When one hears better, faster, and cheaper, I must add the caveat: pick two.

What’s the conclusion to this content marketing essay? Here it is:

As we move further into the digital age, the way we search for information is changing. Perplexity AI represents a significant step forward, offering a faster, more accurate, and more user-centric alternative to traditional search engines like Google. With its advanced AI technologies, ad-free experience, and commitment to data privacy, Perplexity AI is well-positioned to lead the next wave of innovation in search. For enterprise users, in particular, the benefits of Perplexity AI are clear. The platform’s ability to deliver precise, context-aware insights makes it an invaluable tool for research-intensive tasks, while its user-friendly interface and robust privacy measures ensure a seamless and secure search experience. As more organizations recognize the potential of Perplexity AI, we may well see a shift away from Google and towards a new era of search, one that prioritizes speed, precision, and user satisfaction above all else.

I know one thing: the stakeholders and backers of the smart software hope that one of the AI players generates tons of cash and dump trucks of profit-sharing checks. That day, I think, lies in the future. Perplexity hopes it will be the winner; hence, content marketing is money well spent. If I were not a dinobaby, I might be excited. So far I am just perplexed.

Stephen E Arnold, September 10, 2024
