Google AdWords in Russia?

July 23, 2024

This essay is the work of a dumb humanoid. No smart software required.

I have been working on a project requiring me to examine a handful of Web sites hosted in Russia, in the Russian language, and tailored for people residing in Russia and its affiliated countries. I came away today with a screenshot from the site for IT Cube Studio. The outfit creates Web sites and provides advertising services. Here’s a screenshot in Russian which advertises the firm’s ability to place Google AdWords for a Russian client:

[Screenshot: IT Cube Studio advertisement]

If you don’t read Russian, here’s the translation of the text. I used Google Translate, which seems to do an okay job with the Russian-to-English language pair. The ad says:

Contextual advertising. Potential customers and buyers on your website a week after the start of work.

One of the two words in the ad is the Russian spelling of Yandex (Яндекс). The Google word is “Google.”

I thought there were sanctions. In fact, I navigated to Google and entered this query: “google AdWords Russia.” What did Google tell me on July 22, 2024, at 5:03 pm US Eastern time?

Here’s the Google results page:

[Screenshot: Google results page, July 22, 2024]

The screenshot is difficult to read, but let me highlight the answer to my question about Google’s selling AdWords in Russia.

There is a March 10, 2022, update which says:

Mar 10, 2022 — As part of our recent suspension of ads in Russia, we will also pause ads on Google properties and networks globally for advertisers based in [Russia] …

Plus there is one of those “smart” answers which says:

People also ask

Does Google Ads work in Russia?

Due to the ongoing war in Ukraine, we will be temporarily pausing Google ads from serving to users located in Russia. [Emphasis in the original Google results page display]

I know my Russian is terrible, but I am probably slightly better equipped to read and understand English. The Google results seem to say, “Hey, we don’t sell AdWords in Russia.”

I wonder if the company IT Cube Studio is just doing some marketing razzle dazzle. Is it possible that Google is saying one thing and doing another in Russia? I recall that Google said it wasn’t WiFi sniffing in Germany a number of years ago. I believe that Google was surprised when the WiFi sniffing was documented and disclosed.

I find these big company questions difficult to answer. I am certainly not a Google-grade intellect. I am a dinobaby. And I am inclined to believe that there is a really simple explanation or a very, very sincere apology if the IT Cube Studio outfit is selling Google AdWords when sanctions are in place.

If any of the two or three people who follow my Web log know the answer to my questions, please let me know. You can write me at benkent2020 at yahoo dot com. For now, I find this interesting. The Google would not violate sanctions, would it?

Stephen E Arnold, July 23, 2024

What Will the AT&T Executives Serve Their Lawyers at the Security Breach Debrief?

July 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

On the flight back to my digital redoubt in rural Kentucky, I had the thrill of sitting behind a couple of telecom types who were laughing at the pickle AT&T has plopped on top of what I think of as a Judge Green slushee. Do lime slushees and dill pickles go together? For my tastes, nope. Judge Green wanted to de-monopolize the Ma Bell I knew and loved. (Yes, I cashed some Ma Bell checks, and I had a Young Pioneers hat.)

We are back to what amounts to a Ma Bell trifecta: AT&T (the new version which wears spurs and chaps), Verizon (everyone’s favorite throwback carrier), and the new T-Mobile (bite those customer pocketbooks as if they were bratwursts mit sauerkraut). Each of these outfits is interesting. But at the moment, AT&T is in the spotlight.

“Data of Nearly All AT&T Customers Downloaded to a Third-Party Platform in a 2022 Security Breach” dances around a modest cyber misstep at what is now a quite old and frail Ma Bell. Imagine the good old days before the Judge Green decision to create Baby Bells. Security breaches were possible, but it was quite tough to get the customer data. Attacks were limited to those with the knowledge (somewhat tough to obtain), the tools (3B series computers and lots of mainframes), and access to network connections. Technology has advanced. Consequently, in a competitive market, no one makes money via security. Security is better at old-school monopolies because money can be spent without worrying about revenue. As one AT&T executive said to my boss at a blue-chip consulting company, “You guys charge so much we will have to get another railroad car filled with quarters to pay your bill.” Ho ho ho — except the fellow was not joking. At the pre-Judge Green AT&T, spending money on security was definitely not an issue. Today? Seems to be different.

A more pointed discussion of Ma Bell’s breaking her hip again appears in “AT&T Breach Leaked Call and Text Records from Nearly All Wireless Customers,” which states:

AT&T revealed Friday morning (July 12, 2024) that a cybersecurity attack had exposed call records and texts from “nearly all” of the carrier’s cellular customers (including people on mobile virtual network operators, or MVNOs, that use AT&T’s network, like Cricket, Boost Mobile, and Consumer Cellular). The breach contains data from between May 1st, 2022, and October 31st, 2022, in addition to records from a “very small number” of customers on January 2nd, 2023.

The “problem,” if I understand it, hinges on the reference to Snowflake. Is AT&T suggesting that Snowflake is responsible for the breach? Big outfits like to identify the source of the problem. If Snowflake made the misstep, isn’t it the responsibility of AT&T’s cyber unit to make sure that the security was as good as or better than the security implemented before the Judge Green break up? I think AT&T, like other big companies, wants to find a way to shift blame, not say, “We put the pickle in the lime slushee.”

My posture toward two-year-old security issues is, “What’s the point of covering up a loss of ‘nearly all’ customers’ data?” I know the answer: Optics and the share price.

As a person who owned a Young Pioneers’ hat, I am truly disappointed in the company. The Regional Managers for whom I worked as a contractor had security on the list of top priorities from day one. Whether we were fooling around with a Western Electric data service or the research chargeback system prior to the breakup, security was not someone else’s problem.

Today it appears that AT&T has made some decisions which are now perched on the top officer’s head. Security problems are, therefore, tough to miss. Boeing loses doors and wheels from aircraft. Microsoft tantalizes bad actors with insecure systems. AT&T outsources high value data and then moves more slowly than the last remaining turtle in the mine runoff pond near my home in Harrod’s Creek.

Maybe big is not as wonderful as some expect the idea to be? Responsibility for one’s decisions and an ethical compass are not cyber tools, but both notions are missing in some big company operations. Will the after-action team guzzle lime slushees with pickles on top?

Stephen E Arnold, July 15, 2024

NSO Group Determines Public Officials Are Legitimate Targets

July 12, 2024

Well, that is a point worth making if one is the poster child of the specialized software industry.

NSO Group, maker of the infamous Pegasus spyware, makes a bold claim in a recent court filing: “Government and Military Officials Fair Targets of Pegasus Spyware in All Cases, NSO Group Argues,” reports cybersecurity news site The Record. The case at hand is Pegasus’ alleged exploitation of a WhatsApp vulnerability back in 2019. Reporter Suzanne Smalley cites former United Nations official David Kaye, who oversaw the right to free expression at that time. Smalley writes:

“Friday’s filing seems to suggest a broader purpose for Pegasus, Kaye said, pointing to NSO’s explanation that the technology can be used on ‘persons who, by virtue of their positions in government or military organizations, are the subject of legitimate intelligence investigations.’ ‘This appears to be a much more extensive claim than made in 2019, since it suggests that certain persons are legitimate targets of Pegasus without a link to the purpose for the spyware’s use,’ said Kaye, who was the U.N.’s special rapporteur on freedom of opinion and expression from 2014 to 2020. … The Israeli company’s statement comes as digital forensic researchers are increasingly finding Pegasus infections on phones belonging to activists, opposition politicians and journalists in a host of countries worldwide. NSO Group says it only sells Pegasus to governments, but the frequent and years-long discoveries of the surveillance technology on civil society phones have sparked a public uproar and led the U.S. government to crack down on the company and commercial spyware manufacturers in general.”

See the article for several examples of suspected targets around the world. We understand both the outrage and the crackdown. However, publicly arguing about the targets of spyware may have unintended consequences. Now everyone knows about mobile phone data exfiltration and how that information can be used to great effect.

As for the WhatsApp court case, it is proceeding at the sluggish speed of justice. In March 2024, a California federal judge ordered NSO Group to turn over its secret spyware code. What will be the verdict? When will it be handed down? And what about the firm’s senior managers?

Cynthia Murrell, July 12, 2024

Falling Apples: So Many to Harvest and Sell to Pay the EU

June 25, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

What goes up seems to come down. Apple is peeling back on the weird headset gizmo. The company’s AI response — despite the thrills Apple Intelligence produced in some acolytes — is “to be” AI or vaporware. China dependence remains a sticky wicket. And if the information in “Apple Has Very Serious Issues Under Sweeping EU Digital Rules, Competition Chief Says” is accurate, the happy giant in Cupertino will be writing some Jupiter-sized checks. Imagine. Pesky Europeans are asserting that Apple has a monopoly and has been acting less like Johnny Appleseed and more like Andrew Carnegie.


A powerful force causes Tim Apple to wonder why so many objects are falling on his head. Thanks, MSFT Copilot. Good enough.

The write up says:

… regulators are preparing charges against the iPhone maker. In March [2024], the European Commission, the EU’s executive arm, opened a probe into Apple, Alphabet and Meta, under the sweeping Digital Markets Act tech legislation that became applicable this year. The investigation featured several concerns about Apple, including whether the tech giant is blocking businesses from telling their users about cheaper options for products or about subscriptions outside of the App Store.

Would Apple, the flag bearer for almost-impossible-to-repair products and software that just won’t charge laptop batteries no matter what the user does before a long airplane flight, prevent the free flow of information?

The EU nit pickers believe that Apple’s principles and policies are a “serious issue.”

How much money is possibly involved if the EU finds Apple — pardon the pun — a bad apple in a barrel of rotten US high technology companies? The write up says:

If it is found in breach of Digital Markets Act rules, Apple could face fines of up to 10% of the company’s total worldwide annual turnover.

For FY2023, Apple captured about $380 billion in revenue; that works out to a potential payday for the EU of about US$38 billion and change.
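
For the arithmetic-minded, here is a minimal Python sketch of that back-of-the-envelope calculation. The revenue figure is the approximate FY2023 number mentioned above, and the 10 percent ceiling comes from the Digital Markets Act provision quoted earlier; both are rough inputs, not an official computation.

apple_fy2023_revenue = 380_000_000_000   # USD, approximate worldwide annual turnover
dma_fine_cap = 0.10                      # DMA allows fines of up to 10% of turnover
max_fine = apple_fy2023_revenue * dma_fine_cap
print(f"Maximum potential DMA fine: ${max_fine:,.0f}")   # prints $38,000,000,000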

Speaking of change, will a big fine cause those Apples to levitate? Nope.

Stephen E Arnold, June 25, 2024

Meta Case Against Intelware Vendor Voyager Labs to Go Forward

June 21, 2024

Another clever intelware play gets trapped and now moves to litigation. Meta asserts that when Voyager Labs scraped data on over 600,000 Facebook users, it violated its contract. Furthermore, it charges, the scraping violated anti-hacking laws. While Voyager insists the case should be summarily dismissed, U.S. District Court Judge Araceli Martinez-Olguin disagrees. MediaDailyNews reports, “Meta Can Proceed With Claims that Voyager Labs Scraped Users’ Data.” Writer Wendy Davis explains:

“Voyager argued the complaint should be dismissed at an early stage for several reasons. Among others, Voyager said the allegations regarding Facebook’s terms of service were too vague. Meta’s complaint ‘refers to a catchall category of contracts … but then says nothing more about those alleged contracts, their terms, when they are supposed to have been executed, or why they allegedly bind Voyager UK today,’ Voyager argued to Martinez-Olguin in a motion filed in February. The company also said California courts lacked jurisdiction to decide whether the company violated federal or state anti-hacking laws. Martinez-Olguin rejected all of Voyager’s arguments on Thursday. She wrote that while Meta’s complaint could have set out the company’s terms of service ‘with more clarity,’ the allegations sufficiently informed Voyager of the basis for Meta’s claim.”

This battle began in January 2023 when Meta first filed the complaint. Now it can move forward. How long before the languid wheels of justice turn out a final ruling? A long time, we wager.

Cynthia Murrell, June 21, 2024

Hallucinations in the Courtroom: AI Legal Tools Add to Normal Wackiness

June 17, 2024

Law offices are eager to lighten their humans’ workload with generative AI. Perhaps too eager. Stanford University’s HAI reports, “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries.” Close enough for horseshoes, but for justice? And that statistic is with improved, law-specific software. We learn:

“In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported. And our previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his 2023 annual report on the judiciary, Chief Justice Roberts took note and warned lawyers of hallucinations.”

But that was before tailor-made retrieval-augmented generation tools. The article continues:

“Across all areas of industry, retrieval-augmented generation (RAG) is seen and promoted as the solution for reducing hallucinations in domain-specific contexts. Relying on RAG, leading legal research services have released AI-powered legal research products that they claim ‘avoid’ hallucinations and guarantee ‘hallucination-free’ legal citations. RAG systems promise to deliver more accurate and trustworthy legal information by integrating a language model with a database of legal documents. Yet providers have not provided hard evidence for such claims or even precisely defined ‘hallucination,’ making it difficult to assess their real-world reliability.”
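
For readers who have not bumped into the acronym, here is a minimal Python sketch of the retrieval-augmented generation pattern the quoted passage describes: fetch the most relevant documents first, then hand them to the language model as grounding for its answer. The toy documents, the keyword-overlap scoring, and the prompt wording are illustrative assumptions, not how Lexis+ AI or Westlaw actually work under the hood.

# Toy "database" of legal documents; a real system would hold millions of cases.
legal_docs = {
    "Case A v. B (2019)": "Holding on contract formation and offer acceptance.",
    "Case C v. D (2021)": "Holding on negligence and the duty of care standard.",
    "Statute 12-34": "Limitations period for written contract claims is four years.",
}

def retrieve(query, k=2):
    """Rank documents by naive keyword overlap with the query and keep the top k."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), title, text)
        for title, text in legal_docs.items()
    ]
    scored.sort(reverse=True)
    return [(title, text) for _, title, text in scored[:k]]

def build_prompt(query):
    """Assemble a grounded prompt; a production system would send this to an LLM."""
    context = "\n".join(f"[{title}] {text}" for title, text in retrieve(query))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the limitations period for contract claims?"))

The point of the pattern is that the model’s answer is supposed to be anchored to the retrieved sources; the Stanford results discussed below suggest the anchoring is weaker in practice than the marketing implies.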

So the Stanford team tested three of the RAG systems for themselves: Lexis+ AI from LexisNexis, plus Westlaw AI-Assisted Research and Ask Practical Law AI from Thomson Reuters. The authors note they are not singling out LexisNexis or Thomson Reuters for opprobrium. On the contrary, these tools are less opaque than their competition and so more easily examined. They found that these systems are more accurate than general-purpose models like GPT-4. However, the authors write:

“But even these bespoke legal AI tools still hallucinate an alarming amount of the time: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.”

These hallucinations come in two flavors. Many responses are flat out wrong. Others are misgrounded: they are correct about the law but cite irrelevant sources. The authors stress this second type of error is more dangerous than it may seem, for it may lure users into a false sense of security about the tool’s accuracy.

The post examines challenges particular to RAG-based legal AI systems and discusses responsible, transparent ways to use them, if one must. In short, it recommends public benchmarking and rigorous evaluations. Will law firms listen?

Cynthia Murrell, June 17, 2024

Price Fixing Is Price Fixing with or without AI

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Small-time landlords, such as mom and pops who invested in property for retirement, shouldn’t be compared to large, corporate landlords. The corporate landlords, however, give them all a bad name. Why? Because of actions like price fixing. ProPublica details how politicians are fighting against the bad act: “We Found That Landlords Could Be Using Algorithms To Fix Rent Prices. Now Lawmakers Want To Make The Practice Illegal.”

RealPage sells software built around an AI algorithm that collects rent data and recommends how much landlords should charge. Lawmakers want to ban AI-based price fixing so landlords won’t become cartels that coordinate pricing. RealPage and its allies defend the software, while lawmakers have introduced a bill to ban it.

The FTC also states that AI-based real estate software has problems: “Price Fixing By Algorithm Is Still Price Fixing.” The FTC isn’t against technology. They’re against technology being used as a tool to cheat consumers:

“Meanwhile, landlords increasingly use algorithms to determine their prices, with landlords reportedly using software like “RENTMaximizer” and similar products to determine rents for tens of millions of apartments across the country. Efforts to fight collusion are even more critical given private equity-backed consolidation among landlords and property management companies. The considerable leverage these firms already have over their renters is only exacerbated by potential algorithmic price collusion. Algorithms that recommend prices to numerous competing landlords threaten to remove renters’ ability to vote with their feet and comparison-shop for the best apartment deal around.”
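
To make the FTC’s concern concrete, here is a toy Python sketch of why pooled pricing recommendations look like coordination: many competing landlords feed their rents into one recommender, and every subscriber gets back the same upward-nudged number. The figures and the “pooled average plus a markup” rule are hypothetical illustrations, not RealPage’s actual method.

# Rents reportedly pooled from competing landlords (hypothetical numbers).
competitor_rents = [1450, 1500, 1480, 1620, 1550]

def recommend_rent(pooled_rents, markup=0.03):
    """Recommend a rent based on everyone else's data plus a uniform markup."""
    return (sum(pooled_rents) / len(pooled_rents)) * (1 + markup)

# Each subscribing landlord receives the same recommendation, which is the
# coordination worry: renters lose the benefit of independently set prices.
print(round(recommend_rent(competitor_rents), 2))   # 1565.6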

This is an example of how to use AI for evil. The problem isn’t the tool; it’s the humans using it.

Whitney Grace, June 3, 2024

Legal Eagles Get Some Tail Feathers Plucked about BitTorrent

May 27, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One Finnish law firm thinks it should be able to cut one party out of the copyright enforcement process—the rightsholders themselves. The court disagrees. TorrentFreak reports, “Court Rejects Law Firm’s Bid to Directly Obtain BitTorrent Users’ Identities.” Writer Andy Maxwell explains:

“Requirements vary from region to region but when certain conditions are met, few courts deny genuine copyright holders the ability to enforce their rights under relevant law. One of the most fundamental requirements is that the entity making the claim has the necessary rights to do so. … In an application submitted to Finland’s Market Court on March 15, 2024, the law firm Hedman Partners Oy sought a court order to compel an unnamed internet service provider to provide the personal details of an unspecified number of subscribers. According to Hedman’s application, all are suspected of sharing copyrighted movies via BitTorrent, without first obtaining permission from two Danish rightsholders; Mis. Label ApS and Scanbox Entertainment A/S. Hedman Partners are well known for their work in the piracy settlement business in Scandinavia. The company fully understands the standards required before courts will issue a disclosure order. However, for reasons that aren’t made clear, the law firm would prefer to deal with these cases from a position of greater authority. This application appears to have served as the testing ground to determine whether that’s possible under Finland’s Copyright Act.”

The short answer: It is not possible. For the long, legalese-laced answer, see the article. Why did Hedman Partners try the move? Maxwell points out settlement efforts spearheaded by aggressive third-party legal teams tend to bring in more cash. Ah, there it is. A decision in favor of the firm would certainly not have benefitted the BitTorrent users, he notes. We may yet see whether that is correct—Hedman Partners has until June 18 to appeal the decision to the Supreme Court.

Will law enforcement step in?

Cynthia Murrell, May 27, 2024

US Big Tech to EU: Please, Knock Off the Outputs

May 23, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “Big Tech to EU: “Drop Dead.” I think the write up depicts the alleged US quasi-monopolies’ indifference to the wishes of the European Union. Stated another way, “The Big Dogs are battling for AI dominance.” The idea is that these outfits do not care what the EU wants. The Big Dogs care about what they want.

The write up contains several interesting statements. Let me highlight a handful and encourage you to read this article which explains some of the tension between governments and companies with more cash than some nation states. In fact, some of the Big Boys control more digitally inclined people than the annoying countries complaining about predatory business models. The illustration shows how much attention some Big Dogs allow EU and other government regulatory authorities.


The Big Dogs of technology participate in a Microsoft Teams session with an EU official. The Big Dogs seem to be more interested in their mobile phones than the political word salad from the august official. Thanks, MSFT Copilot. Keep following your security recipe.

Consider this statement:

Right from the start, it was obvious that the tech giants were going to war against the [European Digital Markets Act or] DMA, and the freedom it promised to their users.

But isn’t that what companies in a free market do?

Here’s another gem:

Apple charges app vendors a whopping 30 percent commission on most transactions, both the initial price of the app and everything you buy from it thereafter. This is a remarkably high transaction fee —compare it to the credit-card sector, itself the subject of sharp criticism for its high 3-5 percent fees. To maintain those high commissions, Apple also restricts its vendors from informing their customers about the existence of other ways of paying (say, via their website) and at various times has also banned its vendors from offering discounts to customers who complete their purchases without using the app.

What’s the markup for blue-chip consulting firms or top-end lawyers? Plus, Apple is serving its shareholders. As a public company, that is what shareholders have a right to expect. Once again, the underlying issue is how capitalism works in the US market.

And this statement:

These are high-stakes clashes. As the tech sector grew more concentrated, it also grew less accountable, able to substitute lock-in and regulatory capture for making good products and having their users’ backs. Tech has found new ways to compromise our privacy rights, our labor rights, and our consumer rights – at scale.

Once again, the problem is capitalism. The companies have to generate growth, revenue, and profits. Can a government agency manage the day-to-day operations of these technology-centric firms? Governments struggle to maintain roads and keep their Web sites updated. The solution might have been a bit more regulatory interest 25 years ago. In my opinion, the “better late than never” approach is not going to work unless governments put these outfits out of business… one way or another.

Net net: The write up is not about Big Dog tech companies ignoring the DMA. The write up wants the basic function of publicly-traded companies to change. Go to a zoo. Find a jungle cat. Tell it to change its stripes. How is that going to work out?

Stephen E Arnold, May 23, 2024
