Allegations of Personal Data Flows from X.com to Au10tix

June 4, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I work from my dinobaby lair in rural Kentucky. What the heck do I know about Hod HaSharon, Israel? The answer is, “Not much.” However, I read an online article called “Elon Musk Now Requiring All X Users Who Get Paid to Send Their Personal ID Details to Israeli Intelligence-Linked Corporation.” I am not sure if the statements in the write up are accurate. I want to highlight some items from the write up because I have not seen information about this interesting identity verification process in my other feeds. This could be the second most covered news item in the last week or two. Number one goes to Google’s telling people to eat a rock a day and its weird “not our fault” explanation of its quantumly supreme technology.

Here’s what I carried away from this X to Au10tix write up. (A side note: Intel outfits like obscure names. In this case, Au10tix is a cute conversion of the word authentic to a unique string of characters. Aw ten tix. Get it?)

Yes, indeed. There is an outfit called Au10tix, and it is based about 60 miles north of Jerusalem, not in Tel Aviv, the intelware capital of the world. The company, according to the cited write up, has a deal with Elon Musk’s X.com. The write up asserts:

X now requires new users who wish to monetize their accounts to verify their identification with a company known as Au10tix. While creator verification is not unusual for online platforms, Elon Musk’s latest move has drawn intense criticism because of Au10tix’s strong ties to Israeli intelligence. Even people who have no problem sharing their personal information with X need to be aware that the company they are using for verification is connected to the Israeli government. Au10tix was founded by members of the elite Israeli intelligence units Shin Bet and Unit 8200.

Sounds scary. But that’s the point of the article. I would like to remind you, gentle reader, that Israel’s vaunted intelligence systems failed as recently as October 2023. That event was described to me by one of the country’s former intelligence professionals as “our 9/11.” Well, maybe. I think it made clear that the intelware does not work as advertised in some situations. I don’t have first-hand information about Au10tix, but I would suggest some caution before engaging in flights of fancy.

The write up presents as actual factual information:

The executive director of the Israel-based Palestinian digital rights organization 7amleh, Nadim Nashif, told the Middle East Eye: “The concept of verifying user accounts is indeed essential in suppressing fake accounts and maintaining a trustworthy online environment. However, the approach chosen by X, in collaboration with the Israeli identity intelligence company Au10tix, raises significant concerns. “Au10tix is located in Israel and both have a well-documented history of military surveillance and intelligence gathering… this association raises questions about the potential implications for user privacy and data security.” Independent journalist Antony Loewenstein said he was worried that the verification process could normalize Israeli surveillance technology.

Here is something the write up did not cover in significant detail. The write up reports:

Au10tix has also created identity verification systems for border controls and airports and formed commercial partnerships with companies such as Uber, PayPal and Google.

My team’s research into online gaming found suggestions that the estimable 888 Holdings may have a relationship with Au10tix. The company pops up in some of our research into facial recognition verification. The Israeli gig work outfit Fiverr.com seems to be familiar with the technology as well. I want to point out that one of the Fiverr gig workers based in the UK reported to me that she was no longer “recognized” by the Fiverr.com system. Yeah, October 2023 style intelware.

Who operates the company? Heading back into my files, I spotted a few names. These individuals may no longer be involved in the company, but several names remind me of individuals who have been active in the intelware game for a few years:

  • Ron Atzmon: Chairman (Unit 8200, which, it seems, was not on the ball in October 2023)
  • Ilan Maytal: Chief Data Officer
  • Omer Kamhi: Chief Information Security Officer
  • Erez Hershkovitz: Chief Financial Officer (formerly of the very interesting intel-related outfit Voyager Labs, a company about which the Brennan Center has a tidy collection of information related to the LAPD)

The company’s technology is available in the Azure Marketplace. That description identifies three core functions of Au10tix’s systems:

  1. Identity verification. Allegedly the system performs real-time identity verification. Hmm. I wonder why it took quite a bit of time to figure out who did what in October 2023. That question is probably unfair because it appears no patrols or systems “saw” what was taking place. But I should not nitpick. The Azure service includes a “regulatory toolbox including disclaimer, parental consent, voice and video consent, and more.” That disclaimer seems helpful.
  2. Biometrics verification. Again, this is an interesting assertion. As imagery of the October 2023 attack emerged, I asked myself, “How did those ID-to-selfie, selfie-to-selfie, and selfie-to-token matches work?” Answer: Ask the families of those killed.
  3. Data screening and monitoring. The system can “identify potential risks and negative news associated with individuals or entities.” That might be helpful in building automated profiles of individuals by companies licensing the technology. I wonder if this capability can be hooked to other Israeli spyware systems to provide a particularly helpful, real-time profile of a person of interest?
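The “negative news” screening in item 3 can be imagined as a very crude adverse-media match. The sketch below is a toy, assuming simple keyword co-occurrence; the risk terms and headlines are invented, and a real product would use entity resolution and curated watchlists rather than anything this naive.

```python
import re

# Toy adverse-media screen: flag a name when it co-occurs with a risk term.
# RISK_TERMS and the sample headlines are invented for illustration only.
RISK_TERMS = {"fraud", "laundering", "sanctions", "indicted"}

def screen(name: str, headlines: list[str]) -> list[str]:
    """Return the headlines that mention the name alongside a risk term."""
    hits = []
    for h in headlines:
        words = set(re.findall(r"[a-z]+", h.lower()))
        if name.lower() in h.lower() and words & RISK_TERMS:
            hits.append(h)
    return hits

headlines = [
    "Acme Corp indicted in billing scheme",
    "Acme Corp opens new office",
]
print(screen("Acme Corp", headlines))  # flags only the first headline
```

A production system would also need fuzzy name matching and source credibility scoring; the point here is only the shape of the pipeline: entity plus risk signal equals a flag in an automated profile.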

Let’s assume the write up is accurate and X.com is licensing the technology. X.com — according to “Au10tix Is an Israeli Company and Part of a Group Launched by Members of Israel’s Domestic Intelligence Agency, Shin Bet” — now includes this:

image

The circled segment of the social media post says:

I agree to X and Au10tix using images of my ID and my selfie, including extracted biometric data to confirm my identity and for X’s related safety and security, fraud prevention, and payment purposes. Au10tix may store such data for up to 30 days. X may store full name, address, and hashes of my document ID number for as long as I participate in the Creator Subscription or Ads Revenue Share program.
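The phrase “hashes of my document ID number” suggests the platform keeps a derived value rather than the raw number. Here is a minimal sketch of that idea, assuming a keyed hash (HMAC) with a server-side secret; X’s actual scheme is not described in the write up, and the sample ID is invented.

```python
import hashlib
import hmac
import os

# Assumption: a keyed hash so the stored value cannot be reversed by simply
# brute-forcing the short ID-number space. The real design is not public.
SERVER_KEY = os.urandom(32)  # kept server-side, never stored with the hash

def hash_document_id(doc_id: str) -> str:
    return hmac.new(SERVER_KEY, doc_id.encode(), hashlib.sha256).hexdigest()

stored = hash_document_id("P1234567")  # hypothetical passport number

# Later, a freshly presented ID can be re-hashed and compared without the
# platform ever retaining the raw number itself.
assert hash_document_id("P1234567") == stored
assert hash_document_id("P7654321") != stored
```

Note the breach question raised below still applies: if the server key and the hashes are exfiltrated together, the protection of hashing largely evaporates.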

This dinobaby followed the October 2023 event with shock and surprise. The dinobaby has long been a champion of Israel’s intelware capabilities, and I have done some small projects for firms which I am not authorized to identify. Now I am skeptical and more critical. What if X’s identity service is compromised? What if the servers are breached and the data exfiltrated? What if the system does not work and downstream financial fraud is enabled by X’s push beyond short text messaging? Much intelware is little more than glorified and old-fashioned search and retrieval.

Does Mr. Musk or other commercial purchasers of intelware know about cracks and fissures in intelware systems which allowed the October 2023 event to be undetected until live-fire reports arrived? This tie up is interesting and is worth monitoring.

Stephen E Arnold, June 4, 2024

Telegram May Play a Larger Role In Future Of War And Education

June 4, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Telegram is an essential tool for the future of crime. The Dark Web is still a hotbed of criminal activity, but as authorities crack down on it, the bad actors need somewhere else to go. Stephen Arnold, Erik Arnold, et al. wrote a white paper titled E2EE: The Telegram Platform about how Telegram is replacing the Dark Web. Telegram is a Dubai-based company with nefarious ties to Russia. The app offers data transfer for streaming audio and video, robust functions, and administrative tools. It’s being used for everything from stealing people’s personal information to serving as an anti-US information platform.

The white paper details how Telegram is used to steal credit, gift, debit, and other card information. The process is called “carding,” and a simple Google search reveals where stolen card information can be bought. The team specifically investigated Altenens.is, a paywalled website for buying stolen information. It has been removed from the Internet only to reappear again.

Altenens.is hosts forums, a chat, and places to advertise products and services related to the website’s theme. Users are required to download and register with Telegram because it offers encryption services for financial transactions. Altenens.is is only one of the main ways Telegram is used for bad acts:

“The Telegram service today is multi-faceted. One can argue that Telegram is a next-generation social network. Plus, it is a file transfer and rich media distribution service too. A bad actor can collect money from another Telegram user and then stream data or a video to an individual or a group. In the Altenen case example, the buyer of stolen credit cards gets a file with some carding data and the malware payload. The transaction takes place within Telegram. Its lax or hit-and-miss moderation method allows alleged illegal activity on the platform. ”

Telegram is becoming more advanced with its own cryptocurrency and abilities to mask and avoid third-party monitors. It’s used as a tool for war propaganda, but it’s also used to evade authoritarian governments who want to control information. It’s interesting and warrants monitoring. If you work in an enforcement agency or a unit of the US government, you can request a copy of the white paper by writing benkent2020 @ yahoo dot com. Please mention Beyond Search in your request. We do need to know your organization and area of interest.

Whitney Grace, June 4, 2024

Encryption Battles Continue

June 4, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Privacy protections are great—unless you are law enforcement attempting to trace a bad actor. India has tried to make it easier to enforce its laws by forcing messaging apps to track each message back to its source. That is challenging for a platform with encryption baked in, as Rest of World reports in, “WhatsApp Gives India an Ultimatum on Encryption.” Writer Russell Brandom tells us:

“IT rules passed by India in 2021 require services like WhatsApp to maintain ‘traceability’ for all messages, allowing authorities to follow forwarded messages to the ‘first originator’ of the text. In a Delhi High Court proceeding last Thursday, WhatsApp said it would be forced to leave the country if the court required traceability, as doing so would mean breaking end-to-end encryption. It’s a common stance for encrypted chat services generally, and WhatsApp has made this threat before — most notably in a protracted legal fight in Brazil that resulted in intermittent bans. But as the Indian government expands its powers over online speech, the threat of a full-scale ban is closer than it’s been in years.”
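To see why “traceability” and end-to-end encryption pull in opposite directions, here is a minimal sketch of one hypothetical design (not WhatsApp’s actual protocol): the server records the first sender of each message fingerprint. The catch is that computing the fingerprint requires access to the plaintext, which a server in an end-to-end encrypted system never has.

```python
import hashlib

# Hypothetical traceability design, for illustration only: the server keeps a
# map from a plaintext fingerprint to the first sender it saw. Under E2EE the
# server sees only ciphertext, so it cannot compute this fingerprint at all.
def fingerprint(plaintext: str) -> str:
    return hashlib.sha256(plaintext.encode()).hexdigest()

originators: dict[str, str] = {}  # fingerprint -> "first originator"

def record(sender: str, plaintext: str) -> str:
    fp = fingerprint(plaintext)
    originators.setdefault(fp, sender)  # first sender wins; forwards do not overwrite
    return originators[fp]

record("alice", "meet at noon")        # alice originates the message
first = record("bob", "meet at noon")  # bob merely forwards it
print(first)  # -> "alice"
```

Any scheme with this shape gives the operator (and anyone who compels it) a content-linkable record of who said what first, which is exactly the property end-to-end encryption exists to deny.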

And that could be a problem for a lot of people. We also learn:

“WhatsApp is used by more than half a billion people in India — not just as a chat app, but as a doctor’s office, a campaigning tool, and the backbone of countless small businesses and service jobs. There’s no clear competitor to fill its shoes, so if the app is shut down in India, much of the digital infrastructure of the nation would simply disappear. Being forced out of the country would be bad for WhatsApp, but it would be disastrous for everyday Indians.”

Yes, that sounds bad. For the Electronic Frontier Foundation, it gets worse: The civil liberties organization insists the regulation would violate privacy and free expression for all users, not just suspected criminals.

To be fair, WhatsApp has done a few things to limit harmful content. It has placed limits on message forwarding and has boosted its spam and disinformation reporting systems. Still, there is only so much it can do when enforcement relies on user reports. To do more would require violating the platform’s hallmark: its end-to-end encryption. Even if WhatsApp wins this round, Brandom notes, the issue is likely to come up again when and if the Bharatiya Janata Party does well in the current elections.

Cynthia Murrell, June 4, 2024

Lunch at a Big Time Publisher: Humble Pie and Sour Words

June 4, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Years ago I did some work for a big time New York City publisher. The firm employed people who used words like “fungible” and “synergy” when talking with me. I took the time to read an article with this title: “So Much for Peer Review — Wiley Shuts Down 19 Science Journals and Retracts 11,000 Gobbledygook Papers.” Was this the staid, conservative, big-vocabulary outfit I remembered?

Yep.

The essay is little more than a wrapper for a Wall Street Journal story with the title “Flood of Fake Science Forces Multiple Journal Closures Tainted by Fraud.” I quite like that title, particularly the operative word “fraud.” What in the world is going on?

The write up explains:

Wiley — a mega publisher of science articles has admitted that 19 journals are so worthless, thanks to potential fraud, that they have to close them down. And the industry is now developing AI tools to catch the AI fakes (makes you feel all warm inside?)

image

A group of publishing executives becomes the focal point of a Midtown lunch in an upscale restaurant. The titans of publishing are complaining about the taste of humble pie and use secret NYAC gestures to express their disapproval. Thanks, MSFT Copilot. Your security expertise may warrant a special banquet too.

The information in the cited article contains some tasty nuggets which complement humble pie in my opinion; for instance:

  • The shutdown of the junk food publications has required two years. If Sillycon Valley outfits can fire thousands via email or Zoom, “Why are those uptown shoes being dragged?” I asked myself.
  • Other high-end publishers have been doing the same thing. Sadly there are no names.
  • The bogus papers included something called an “AI gobbledygook sandwich.” Interesting. Human reviewers who are experts could not recognize the vernacular of academic and research fraudsters.
  • Some in Australia think that the credibility of universities might be compromised. Oh, come now. Just because the president of Stanford had to search for his future elsewhere after some intellectual fancy dancing and the head of the Harvard ethics department demonstrated allegedly sci-fi ethics in published research, what’s the problem? Don’t students just get As and Bs? Professors are engaged in research, chasing consulting gigs, and ginning up grant money. Actual research? Oh, come now.
  • Academic journals are, or were, a $30 billion industry.

Observations are warranted:

  • In today’s datasphere, I am not surprised. Scams, frauds, and cheats seem to be as common as ants at a picnic. A cultural shift has occurred. Cheating has become the norm.
  • Will the online databases, produced by some professional publishers and commercial database companies, be updated to remove or at least flag the baloney? Probably not. That costs money. Spending money is not a modern publishing CEO’s favorite activity. (Hence the two-year draw down of the fake information at the publishing house identified in the cited write up.)
  • How many people have died or been put out of work because of specious research data? I am not holding my breath for the peer reviewed journals to provide this information.

Net net: Humiliating and a shame. Quite a cultural mismatch between what some publishers say and what, allegedly, the firm ordered from the deli. I thought the outfit had a knowledge-based reason to tell me that it takes the high road. It seems that on that road, there are places where a bad humble pie is served.

Stephen E Arnold, June 4, 2024

AI Will Not Definitely, Certainly, Absolutely Not Take Some Jobs. Whew. That Is News

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Outfits like McKinsey & Co. are kicking the tires of smart software. Some bright young sprouts, I have heard, arrive with a penchant for AI systems that create summaries and output basic information on a subject the youthful masters of the universe do not know. Will consulting services firms, publishers, and customer service outfits embrace smart software? The answer is, “You bet your bippy.”

“Why?” Answer: Potential cost savings. Humanoids require vacations, health care, bonuses, pension contributions (ho ho ho), and an old-fashioned and inefficient five-day work week.

image

Cost reductions over time, cost controls in real time, and more consistent outputs mean that as long as smart software is good enough, the technologies will go through organizations with more efficiency than Union General William T. Sherman led some 60,000 soldiers on a 285-mile march from Atlanta to Savannah, Georgia. Thanks, MSFT Copilot. Working on security today?

Software is allegedly better, faster, and cheaper. Software, particularly AI, may not be better, faster, or cheaper. But once someone is fired, the enthusiasm to return to the fold may be diminished. Often the response is a semi-amusing and often negative video posted on social media.

“Here’s Why AI Probably Isn’t Coming for Your Job Anytime Soon” disagrees with my fairly conservative prediction that consulting, publishing, and some service outfits will be undergoing what I call “humanoid erosion” and “AI accretion.” The write up asserts:

We live in an age of hyper specialization. This is a trend that’s been evolving for centuries. In his seminal work, The Wealth of Nations (written within months of the signing of the Declaration of Independence), Adam Smith observed that economic growth was primarily driven by specialization and division of labor. And specialization has been a hallmark of computing technology since its inception. Until now. Artificial intelligence (AI) has begun to alter, even reverse, this evolution.

Okay, Econ 101. Wonderful. But there are some “buts,” of course. The write up says:

But the direction is clear. While society is moving toward ever more specialization, AI is moving in the opposite direction and attempting to replicate our greatest evolutionary advantage—adaptability.

Yikes. I am not sure that AI is going in any direction. Senior managers are going toward reducing costs. “Good enough,” not excellence, is the high-water mark today.

Here’s another “but”:

But could AI take over the bulk of legal work or is there an underlying thread of creativity and judgment of the type only speculative super AI could hope to tackle? Put another way, where do we draw the line between general and specific tasks we perform? How good is AI at analyzing the merits of a case or determining the usefulness of a specific document and how it fits into a plausible legal argument? For now, I would argue, we are not even close.

I don’t remember much about economics. In fact, I only think about economics in terms of reducing costs and having more money for myself. Good old Adam wrote:

Wherever there is great property there is great inequality. For one very rich man, there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many.

When it comes to AI, inequality is baked in. The companies that are competing fiercely to dominate the core technology are not into equality. Neither are the senior managers who want to reduce the costs associated with publishing, writing consulting reports based on business school baloney, or reviewing documents while hunting for nuggets useful in a trial. AI is going into these and similar knowledge professions. Most of those knowledge workers will have an opportunity to find their future elsewhere. But what about intake professionals in hospitals? What about dispatchers at trucking companies? What about government citizen service jobs? Sorry. Software is coming. Companies are developing orchestrator software to allow smart software to function across multiple related and interrelated tasks. Isn’t that what most work in many organizations is?

Here’s another test question from Econ 101:

Discuss the meaning of “It was not by gold or by silver, but by labor, that all wealth of the world was originally purchased.” Give examples of how smart software will replace labor and generate more money for those who own the rights to digital gold or silver.

Send me your blue book answers within 24 hours. You must write in legible cursive. You are not permitted to use artificial intelligence in any form to answer this question, which counts for 95 percent of your grade in Economics 102: Work in the Age of AI.

Stephen E Arnold, June 3, 2024

Price Fixing Is Price Fixing with or without AI

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Small time landlords, such as mom and pops who invested in property for retirement, shouldn’t be compared to large, corporate landlords. The corporate landlords, however, give them all a bad name. Why? Because of actions like price fixing. ProPublica details how politicians are fighting against the bad act: “We Found That Landlords Could Be Using Algorithms to Fix Rent Prices. Now Lawmakers Want to Make the Practice Illegal.”

RealPage sells software programmed with an AI algorithm that collects rent data and recommends how much landlords should charge. Lawmakers want to ban AI-based price fixing so landlords won’t become cartels that coordinate pricing. RealPage and its allies defend the software, while lawmakers have introduced a bill to ban it.

The FTC also states that AI-based real estate software has problems: “Price Fixing by Algorithm Is Still Price Fixing.” The FTC isn’t against technology. It’s against technology being used as a tool to cheat consumers:

“Meanwhile, landlords increasingly use algorithms to determine their prices, with landlords reportedly using software like “RENTMaximizer” and similar products to determine rents for tens of millions(link is external) of apartments across the country. Efforts to fight collusion are even more critical given private equity-backed consolidation(link is external) among landlords and property management companies. The considerable leverage these firms already have over their renters is only exacerbated by potential algorithmic price collusion. Algorithms that recommend prices to numerous competing landlords threaten to remove renters’ ability to vote with their feet and comparison-shop for the best apartment deal around.”
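The FTC’s concern can be reduced to a toy model: when competing landlords feed nonpublic rents into a single vendor’s algorithm, each gets back a recommendation derived from the pooled data, so rivals converge on the same number without ever talking to each other. The averaging rule, the markup, and the figures below are invented for illustration; RealPage’s actual model is proprietary and not described in the articles cited.

```python
# Toy "shared pricing algorithm": pool competitors' nonpublic rents, then
# recommend the pooled average plus a markup. Inputs and markup are invented.
def recommend_rent(landlord_rents: dict[str, float], markup: float = 0.05) -> float:
    pooled_avg = sum(landlord_rents.values()) / len(landlord_rents)
    return round(pooled_avg * (1 + markup), 2)

# Three competing landlords submit their current rents to the same vendor.
rents = {"landlord_a": 1900.0, "landlord_b": 2000.0, "landlord_c": 2100.0}
print(recommend_rent(rents))  # every competitor is nudged toward one number: 2100.0
```

The mechanism, not the arithmetic, is what regulators object to: the vendor functions as a hub through which competitors effectively coordinate, which is why the FTC argues the algorithm does not launder the conduct.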

This is an example of how to use AI for evil. The problem isn’t the tool; it’s the humans using it.

Whitney Grace, June 3, 2024

Spot a Psyop Lately?

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Psyops, or psychological operations, are also known as psychological warfare: actions used to weaken an enemy’s morale. Psyops can range from a simple propaganda poster to a powerful government campaign. According to Annalee Newitz on her Hypothesis Buttondown blog, psyops are everywhere, and she explains: “How To Recognize A Psyop In Three Easy Steps.”

Newitz smartly condenses the history of American psyops into a paragraph: it’s a mixture of pulp fiction tropes, advertising techniques, and pop psychology. In the twentieth century, the US military harnessed these techniques to craft messages meant to hurt, demean, and distract people. Unlike weapons, psyops can be avoided with a little bit of critical thinking.

The first step is to pay attention when people claim something is “anti-American.” The term “anti-American” can be interpreted in many ways, but it comes down to media saying one group of people (foreign, skin color, sexual orientation, etc.) is against the American way of life.

The second step is spreading lies with hints of truth. Newitz advises reading psychological warfare military manuals and uses the example of leaflets the Japanese dropped on US soldiers in the Philippines. The leaflets warned the soldiers about venomous snakes in the jungle, and they were signed “US Army.” Soldiers were told the leaflets were false, but the episode made them believe there were coverups:

“Psyops-level lies are designed to destabilize an enemy, to make them doubt themselves and their compatriots, and to convince them that their country’s institutions are untrustworthy. When psyops enter culture wars, you start to see lies structured like this snake “warning.” They don’t just misrepresent a specific situation; they aim to undermine an entire system of beliefs.”

The third step is the easiest to recognize and the most extreme: you can’t communicate with anyone who says you should be dead. Anyone who believes you should be dead is beyond rational thought. Her advice is to ignore it and not engage.

Another way to recognize psyops tactics is to question everything. Thinking isn’t difficult, but thinking critically takes practice.

Whitney Grace, June 3, 2024

So AI Is — Maybe, Just Maybe — Not the Economic Big Kahuna?

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I find it amusing how AI has become the go-to marketing word. I suppose if I were desperate, lacking an income, unsure about what will sell, and a follow-the-hyperbole-type person I would shout, “AI.” Instead I vocalize, “Ai-Yai-Ai” emulating the tones of a Central American death whistle. Yep, “Ai-Yai-AI.”

image

Thanks, MSFT Copilot. A harbinger? Good enough.

I read “MIT Professor Hoses Down Predictions AI Will Put a Rocket under the Economy.” I won’t comment upon the fog of distrust which I discern around Big Name Universities, nor will I focus my adjustable Walgreen’s spectacles on MIT’s fancy dancing with the quite interesting and decidedly non-academic Jeffrey Epstein. Nope. Forget those two factoids.

The write up reports:

…Daron Acemoglu, professor of economics at Massachusetts Institute of Technology, argues that predictions AI will improve productivity and boost wages in a “blue-collar bonanza” are overly optimistic.

The good professor is rowing against the marketing current. According to the article, the good professor identifies some wild and crazy forecasts. One of these is from an investment bank whose clients are unlikely to be what some one percenters perceive as non-masters of the universe.

That’s interesting. But it pales in comparison to the information in “Few People Are Using ChatGPT and Other AI Tools Regularly, Study Suggests.” (I love suggestive studies!) That write up reports about a study involving Thomson Reuters, the “trust” outfit:

Carried out by the Reuters Institute and Oxford University and involving 6,000 respondents from the U.S., U.K., France, Denmark, Japan, and Argentina, the researchers found that OpenAI’s ChatGPT is by far the most widely used generative-AI tool and is two or three times more widespread than the next most widely used products — Google Gemini and Microsoft Copilot. But despite all the hype surrounding generative AI over the last 18 months, only 1% of those surveyed are using ChatGPT on a daily basis in Japan, 2% in France and the UK, and 7% in the U.S. The study also found that between 19% and 30% of the respondents haven’t even heard of any of the most popular generative AI tools, and while many of those surveyed have tried using at least one generative-AI product, only a very small minority are, at the current time, regular users deploying them for a variety of tasks.

My hunch is that these contrarians want clicks. Well, the tactic worked for me. However, how many of those in AI-Land will take note? My thought is that these anti-AI findings are likely to be ignored until some of the Big Money folks lose their cash. Then the voices of negativity will be heard.

Several observations:

  1. The economics of AI seem similar to some early online ventures like Pets.com, not “all” mind you, just some
  2. Expertise in AI may not guarantee a job at a high-flying techno-feudalist outfit
  3. The difficulties Google appears to be having suggest that the road to AI-Land on the information superhighway may have some potholes. (If Google cannot pull AI off, how can Bob’s Trucking Company armed with Microsoft Word with Copilot?)

Net net: It will be interesting to monitor the frequency of “AI balloon deflating” analyses.

Stephen E Arnold, June 3, 2024


Google: Lost in Its Own AI Maze

May 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One “real” news item caught my attention this morning. Let me tell you. Even with the interesting activities in the Manhattan court, it jumped out at me. Let’s take a quick look and see if Googzilla (see illustration) can make a successful exit from the AI maze in which the online advertising giant finds itself.

image

Googzilla is lost in its own AI maze. Can it find a way out? Thanks, MSFT Copilot. Three tries and I got a lizard in a maze. Keep allocating compute cycles to security because obviously Copilot is getting fewer and fewer these days.

“Google Pins Blame on Data Voids for Bad AI Overviews, Will Rein Them In” makes it clear that Google is not blaming itself for some of the wacky outputs its centerpiece AI function has been delivering. I won’t do the guilty-34-times thing. I will just mention the non-toxic glue and pizza item. This news story reports:

Google thinks the AI Overviews for its search engine are great, and is blaming viral screenshots of bizarre results on "data voids" while claiming some of the other responses are actually fake. In a Thursday post, Google VP and Head of Google Search Liz Reid doubles down on the tech giant’s argument that AI Overviews make Google searches better overall—but also admits that there are some situations where the company "didn’t get it right."

So let’s look at that Google blog post titled “AI Overviews: About Last Week.”

How about this statement?

User feedback shows that with AI Overviews, people have higher satisfaction with their search results, and they’re asking longer, more complex questions that they know Google can now help with. They use AI Overviews as a jumping off point to visit web content, and we see that the clicks to webpages are higher quality — people are more likely to stay on that page, because we’ve done a better job of finding the right info and helpful webpages for them.

The statement strikes me as something that a character would say in an episode of the Twilight Zone, a TV series in the 50s and 60s. The TV show had a weird theme, and I thought I heard it playing when I read the official Googley blog post. Is this the Google “bullseye” method or a bullsh*t method?

The official Googley blog post notes:

This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available. (These are challenges that occur with other Search features too.) This approach is highly effective. Overall, our tests show that our accuracy rate for AI Overviews is on par with another popular feature in Search — featured snippets — which also uses AI systems to identify and show key info with links to web content.

Okay, we are into the bullsh*t method. Google search is now a key moment in the Sundar & Prabhakar Comedy Act. Since the début in Paris, which featured incorrect data, the Google has been in Code Red, Red Alert, red-faced-embarrassment mode. Now the company wants people to eat rocks, and it is not the online advertising giant’s fault. The blog post explains:

There isn’t much web content that seriously contemplates that question, either. This is what is often called a “data void” or “information gap,” where there’s a limited amount of high quality content about a topic. However, in this case, there is satirical content on this topic … that also happened to be republished on a geological software provider’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question. In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.

Okay, I think one component of the bullsh*t method is that it is not Google’s fault. “Users” — not customers, because Google has advertising clients, partners, and some lobbyists; everyone else is a user. It is users’ fault, the data creators’ fault, and probably Sam AI-Man’s fault. (Did I omit anyone to blame for the “eat rocks” result?)

And the Google cares. This passage is worthy of a Hallmark card with a foldout:

At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors. We’ve learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone. We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback.

What’s my take on this?

  1. The assumption that Google search is “good” is interesting, just not in line with what I hear, read, and experience when I do use Google. Note that my personal usage has decreased over time.
  2. Google is trying to explain away its obvious flaws. The Google speak may work for some people, just not for me.
  3. The tone is that of an entitled seventh-grader from a wealthy family, not the type of language I find particularly helpful when the “smart” Google software has to be remediated by humans. Google is terminating humans, right? Now Google needs humans. What’s up, Google?

Net net: Google is snagged in its own AI maze. I am growing less confident in the company’s ability to extricate itself. The Sam AI-Man has crafted deals with two outfits big enough to make Google’s life more interesting. Google’s own management seems ineffectual despite the flashing red and yellow lights and the honking of alarms. Google’s wordsmiths and lawyers are running out of verbal wiggle room. But most important, the failure of the bullseye method and the oozing comfort of the bullsh*t method mark a turning point for the company.

Stephen E Arnold, May 31, 2024

NSO Group: Making Headlines Again and Again and Again

May 31, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

NSO Group continues to generate news. One example is the company’s flagship sponsorship of an interesting conference going on in Prague from June 4th to the 6th. What does “interesting” mean? I think those who attend the conference are engaged in information-related activities connected in some way to law enforcement and intelligence. How do I know NSO Group ponied up big bucks to be the “lead sponsor”? Easy. I saw this advertisement on the conference organizer’s Web site. I know you want me to reveal the URL, but I will treat the organizer in a professional manner. Just use those Google Dorks, and you will locate the event. The ad:

[Image: NSO Group’s “lead sponsor” advertisement from the conference organizer’s Web site]

What’s the ad from the “lead sponsor” say? Here are a few snippets from the marketing arm of NSO Group:

NSO Group develops and provides state-of-the-art solutions, designed to assist in preventing terrorism and crime. Our solutions address diverse strategical, tactical and operational needs and scenarios to serve authorized government agencies including intelligence, military and law enforcement. Developed by the top technology and data science experts, the NSO portfolio includes cyber intelligence, network and homeland security solutions. NSO Group is proud to help to protect lives, security and personal safety of citizens around the world.

Innocent stuff with a flavor that jargon-loving Madison Avenue types prefer.

[Image: Citizen Lab is a bit like mules in an old-fashioned grist mill. The researchers do not change what they think about. Source: Royal Mint Museum in the UK.]

Just for some fun, let’s look at the NSO Group through a different lens. The UK newspaper The Guardian, which counts how many stories I look at a year, published “Critics of Putin and His Allies Targeted with Spyware Inside the EU.” Here’s a sample of the story’s view of NSO Group:

At least seven journalists and activists who have been vocal critics of the Kremlin and its allies have been targeted inside the EU by a state using Pegasus, the hacking spyware made by Israel’s NSO Group, according to a new report by security researchers. The targets of the hacking attempts – who were first alerted to the attempted cyber-intrusions after receiving threat notifications from Apple on their iPhones – include Russian, Belarusian, Latvian and Israeli journalists and activists inside the EU.

And who wrote the report?

Access Now, the Citizen Lab at the Munk School of Global Affairs & Public Policy at the University of Toronto (“the Citizen Lab”), and independent digital security expert Nikolai Kvantiliani

The Citizen Lab has been paying attention to NSO Group for years. The people surveilled or spied upon via the NSO Group’s Pegasus technology are anti-Russia; that is, none of the entities will be invited to a picnic at Mr. Putin’s estate near Sochi.

Obviously some outfit has access to the Pegasus software and its command-and-control system. It is unlikely that NSO Group provided the software free of charge. Therefore, one can conclude that NSO Group could reveal which country was using its software for purposes beyond those suggested by the marketing copy cited above.

NSO Group remains one of the — if not the main — poster children for specialized software. The company continues to make headlines. Its technology remains one of the leaders in the type of software which can be used to obtain information from a mobile device. There are some alternatives, but NSO Group remains the Big Dog.

One wonders why Israel, presumably with the Pegasus tool, could not have obtained information relevant to the attack in October 2023. My personal view is that even with Fancy Dan ways to get data from a mobile phone, human analysts still have to figure out what’s important and what to identify as significant.

My point is that the hoo-hah about NSO Group and Pegasus may not be warranted. Without trained analysts and downstream software, raw data may not yield the information required to take a specific action. Israel’s intelligence failure suggests that software alone cannot do the job. No matter what the marketing material says or how slick the slide deck used to brief those with a “need to know” appears — software is not intelligence.

Will NSO Group continue to make headlines? Probably. Those with access to Pegasus will make errors and disclose their ineptness. Citizen Lab will be at the ready. New reports will be forthcoming.

Net net: Is anyone surprised Mr. Putin is trying to monitor anti-Russia voices? Is Pegasus the only software pressed into service? My answer to this question is: “Mr. Putin will use whatever tool he can to achieve his objectives.” Perhaps Citizen Lab should look for other specialized software and expand its opportunities to write reports? When will Apple address the vulnerability which NSO Group continues to exploit?

Stephen E Arnold, May 31, 2024
