Google: The DMA Makes Us Harm Small Business

April 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I cannot estimate the number of hours Googlers invested in crafting the short essay “New Competition Rules Come with Trade-Offs.” I find it a work of art. Maybe not the equal of Dante’s La Divina Commedia, but it is darned close.

A deity, possibly associated with the quantumly supreme, reassures a human worried about life. Words are reality, at least to some fretful souls. Thanks, MSFT Copilot. Good enough.

The essay pivots on unarticulated and assumed “truths.” Particularly charming are these:

  1. “We introduced these types of Google Search features to help consumers”
  2. “These businesses now have to connect with customers via a handful of intermediaries that typically charge large commissions…”
  3. “We’ve always been focused on improving Google Search….”

The first statement implies that Google’s efforts have been the “help.” Interesting: I find Google search often singularly unhelpful, returning results for malware, biased information, and Google itself.

The second statement indicates that “intermediaries” benefit. Isn’t Google an intermediary? Isn’t Google an alleged monopolist in online advertising?

The third statement is particularly quantumly supreme. Note the word “always.” John Milton uses such verbal efflorescence when describing God. Yes, “always” and improving. I am tremulous.

Consider this lyrical passage and the elegant logic of:

We’ll continue to be transparent about our DMA compliance obligations and the effects of overly rigid product mandates. In our view, the best approach would ensure consumers can continue to choose what services they want to use, rather than requiring us to redesign Search for the benefit of a handful of companies.

“Transparent” invokes an image of squeaky clean glass in a modern, aluminum-framed window, scientifically sealed to prevent its unauthorized opening or repair by anyone other than a specially trained transparency provider. I like the use of the adjective “rigid” because it implies a sturdiness which may cause the transparent window to break when inclement weather (blasts of hot and cold air from oratorical emissions) stresses the see-through structures. Then there is the adult-father-knows-best tone of “In our view, the best approach.” Very parental. Does this suggest the EU is childish?

Net net: Has anyone compiled the Modern Book of Google Myths?

Stephen E Arnold, April 11, 2024

Tennessee Sends a Hunk of Burnin’ Love to AI Deep Fakery

April 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Leave it to the state that houses Music City. NPR reports, “Tennessee Becomes the First State to Protect Musicians and Other Artists Against AI.” Courts have demonstrated that existing copyright laws are inadequate in the face of generative AI. This update to the state’s existing law is named the Ensuring Likeness Voice and Image Security Act, or ELVIS Act for short. Clever. Reporter Rebecca Rosman writes:

“Tennessee made history on Thursday, becoming the first U.S. state to sign off on legislation to protect musicians from unauthorized artificial intelligence impersonation. ‘Tennessee is the music capital of the world, & we’re leading the nation with historic protections for TN artists & songwriters against emerging AI technology,’ Gov. Bill Lee announced on social media. While the old law protected an artist’s name, photograph or likeness, the new legislation includes AI-specific protections. Once the law takes effect on July 1, people will be prohibited from using AI to mimic an artist’s voice without permission.”

Prominent artists and music industry groups helped push the bill since it was introduced in January. Flanked by musicians and state representatives, Governor Bill Lee theatrically signed it into law on stage at the famous Robert’s Western World. But what now? In its write-up, “TN Gov. Lee Signs ELVIS Act Into Law in Honky-Tonk, Protects Musicians from AI Abuses,” The Tennessean briefly notes:

“The ELVIS Act adds artist’s voices to the state’s current Protection of Personal Rights law and can be criminally enforced by district attorneys as a Class A misdemeanor. Artists—and anyone else with exclusive licenses, like labels and distribution groups—can sue civilly for damages.”

While much of the music industry is located in and around Nashville, we imagine most AI mimicry does not take place within Tennessee. It is tricky to sue someone located elsewhere under state law. Perhaps this legislation’s primary value is as an example to lawmakers in other states and, ultimately, at the federal level. Will others be inspired to follow the Volunteer State’s example?

Cynthia Murrell, April 11, 2024

Another Bottleneck Issue: Threat Analysis

April 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My general view of software is that it is usually good enough. You just cannot get ahead of the problems. For example, I recall doing a project to figure out why Visio (an early version) simply did not do what the marketing collateral said it did. We poked around, and in short order, we identified features that were not implemented or did not work as advertised. Were we surprised? Nah. That type of finding holds for consumer software as well as enterprise software. I recall waiting for someone who worked at Fast Search & Transfer in North Carolina to figure out why hit boosting was not functioning. The reason, if memory serves, was that no one had completed the code. What about security of the platform? Not discussed. The enthusiastic worker in North Carolina turned his attention to the task, but it took time to address the issue. The intrepid engineer encountered “undocumented dependencies.” These are tough to resolve when coders disappear, change jobs, or don’t know how to make something work. These functional issues stack up, and many are never resolved. Many are not considered in terms of security. Even worse, the fix applied by a clueless intern fascinated with Foosball screws something up because… the “leadership team” consists of former consultants, accountants, and lawyers. Not too many professionals with MBAs, law degrees, and expertise in SEC accounting requirements are into programming, security practices, and technical details. These stellar professionals gain technical expertise watching engineers give PowerPoint presentations. The meetings feature this popular question: “Where’s the lunch menu?”

The person in the rowboat is going to have a difficult time dealing with software flaws and cyber security issues which resemble the gusher represented in the Microsoft Copilot illustration. Good enough image, just like good enough software security.

I read “NIST Unveils New Consortium to Operate National Vulnerability Database.” The focus is on software which invites bad actors to the Breach Fun Park. The write up says:

In early March, many security researchers noticed a significant drop in vulnerability enrichment data uploads on the NVD website that had started in mid-February. According to its own data, NIST has analyzed only 199 Common Vulnerabilities and Exposures (CVEs) out of the 2957 it has received so far in March. In total, over 4000 CVEs have not been analyzed since mid-February. Since the NVD is the most comprehensive vulnerability database in the world, many companies rely on it to deploy updates and patches.

The backlog is more than 3,800 vulnerability issues. The original fix was to shut down the US National Vulnerability Database. Yep, this action was kicked around at the exact same time as cyber security fires were blazing at a certain significant vendor providing software to the US government and when embedded exploits in open source software were making headlines.
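
For readers who want to gauge the pile-up themselves, a minimal sketch along the lines below can pull recent CVE records from NIST’s public NVD REST API (the 2.0 endpoint) and tally them by analysis status. The endpoint, parameters, and the vulnStatus field reflect my reading of NIST’s API documentation; treat this as an illustration under those assumptions, not a monitoring tool.

```python
# Hedged sketch: count recent CVEs by analysis status via the public NVD API.
# Assumptions: the 2.0 REST endpoint and the "vulnStatus" field behave as NIST
# documents them; unauthenticated rate limits (roughly 5 requests per 30 s) apply.
import collections
import time
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def count_by_status(start: str, end: str) -> collections.Counter:
    """Tally CVEs published between two ISO-8601 timestamps by vulnStatus."""
    counts = collections.Counter()
    start_index, page_size = 0, 2000
    while True:
        resp = requests.get(
            NVD_URL,
            params={
                "pubStartDate": start,       # e.g. "2024-02-15T00:00:00.000"
                "pubEndDate": end,           # e.g. "2024-03-31T23:59:59.999"
                "resultsPerPage": page_size,
                "startIndex": start_index,
            },
            timeout=60,
        )
        resp.raise_for_status()
        data = resp.json()
        for item in data.get("vulnerabilities", []):
            counts[item["cve"].get("vulnStatus", "Unknown")] += 1
        start_index += page_size
        if start_index >= data.get("totalResults", 0):
            break
        time.sleep(6)  # stay under the unauthenticated rate limit
    return counts

if __name__ == "__main__":
    tally = count_by_status("2024-02-15T00:00:00.000", "2024-03-31T23:59:59.999")
    for status, n in tally.most_common():
        print(f"{status}: {n}")
```

Statuses such as “Awaiting Analysis” versus “Analyzed,” if the field values match NIST’s documentation, would give a rough do-it-yourself picture of the backlog the consortium is supposed to chip away at.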

How does one solve the backlog problem? In the examples I mentioned in the first paragraph of this essay, there was a single player and a single engineer who was supposed to solve the problem. Forget dependencies; just make the feature work in a manner that was good enough. Where does a government agency get a one-engineer-to-one-issue setup?

Answer: Create a consortium, a voluntary one to boot.

I have a number of observations to offer, but I will skip these. The point is that software vulnerabilities have overwhelmed a government agency. The commercial vendors issue news releases about each new “issue” a specific team or a specific individual (in the case of Microsoft) has identified. However, vendors rarely stumble upon the same issue. We identified a vector for ransomware which we will explain in our April 24, 2024, National Cyber Crime Conference lecture.

Net net: Software vulnerabilities illustrate the backlog problem associated with any type of content curation or software issue. The volume is overwhelming available resources. What’s the fix? (You will love this answer.) Artificial intelligence. Yep, sure.

Stephen E Arnold, April 8, 2024

India: AI, We Go This Way, Then We Go That Way

April 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In early March 2024, the Indian government said it would require that all AI-related projects still in development receive governmental approval before they were released to the public. India’s Ministry of Electronics and Information Technology stated it wanted to notify the public of AI technology’s flaws and unreliability. The intent was to label all AI technology with a “consent popup” that informed users of potential errors and defects. The ministry also wanted to tag potentially harmful AI content, such as deepfakes, with a label or unique identifier.

The Register explains that it didn’t take long for the South Asian country to rescind the plan: “India Quickly Unwinds Requirement For Government Approval Of AIs.” The ministry issued an update that removed the requirement for government approval, but it did add more obligations to label potentially harmful content:

"Among the new requirements for Indian AI operations are labelling deepfakes, preventing bias in models, and informing users of models’ limitations. AI shops are also to avoid production and sharing of illegal content, and must inform users of consequences that could flow from using AI to create illegal material.”

Minister of State for Entrepreneurship, Skill Development, Electronics, and Technology Rajeev Chandrasekhar provided context for the government’s initial plan for approval. He explained it was intended only for big technology companies. Smaller companies and startups wouldn’t have needed the approval. Chandrasekhar is recognized for his support of boosting India’s burgeoning technology industry.

Whitney Grace, April 3, 2024

How to Fool a Dinobaby Online

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Marketers take note. Forget about gaming the soon-to-be-on-life-support Google Web search. Embrace fakery. And who, you may ask, will teach me? The answer is The Daily Beast. To begin your life-changing journey, navigate to “Facebook Is Filled With AI-Generated Garbage—and Older Adults Are Being Tricked.”

Two government regulators wonder where the deep fakes have gone. Thanks, MSFT Copilot. Keep on updating, please.

The write up explains:

So far, the few experiments to analyze seniors’ AI perception seem to align with the Facebook phenomenon…. The team found that the older participants were more likely to believe that AI-generated images were made by humans.

Okay, that’s step one: Identify your target market.

What’s next? The write up points out:

scammers have wielded increasingly sophisticated generative AI tools to go after older adults. They can use deepfake audio and images sourced from social media to pretend to be a grandchild calling from jail for bail money, or even falsify a relative’s appearance on a video call.

That’s step two: Weave in a family or social tug on the heart strings.

Then what? The article helpfully notes:

As of last week, there are more than 50 bills across 30 states aimed to clamp down on deepfake risks. And since the beginning of 2024, Congress has introduced a flurry of bills to address deepfakes.

Yep, the flag has been dropped. The race with few or no rules is underway. But what about government rules and regulations? Yeah, those will be chugging around after the race cars have disappeared from view.

Thanks for the guidelines.

Stephen E Arnold, March 29, 2024

Google: Practicing But Not Learning in France

March 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I had to comment on this Google synthetic gem. The online advertising company with the Cracker Jack management team is cranking out tidbits every day or two. True, none of these rank with the Microsoft deal to hire some techno-management wizards with DeepMind experience, but I have to cope with what flows into rural Kentucky.

Those French snails are talkative — and tasty. Thanks, MSFT Copilot. Are you going to license, hire, or buy DeepMind?

“Google Fined $270 Million by French Regulatory Authority” delivers what strikes me as Lego block information about the estimable company. The write up presents yet another story about Google’s footloose and fancy-free approach to French laws, rules, and regulations. The write up reports:

This latest fine is the result of Google’s artificial intelligence training practices. The [French regulatory] watchdog said in a statement that Google’s Bard chatbot — which has since been rebranded as Gemini — “used content from press agencies and publishers to train its foundation model, without notifying either them” or the Authority.

So what did the outstanding online advertising company do? The news story asserts:

The watchdog added that Google failed to provide a technical opt-out solution for publishers, obstructing their ability to “negotiate remuneration.”

The result? Another fine.

Google has had an interesting relationship with France. The country was the scene of the outstanding Sundar and Prabhakar demonstration of the quantumly supreme Bard smart software. Google has written checks to France in the past. Now it is associated with flubbing what are, for France, relatively straightforward requirements to work with publishers.

Not surprisingly, the outfit based in far off California allegedly said, according to the cited news story:

Google criticized a “lack of clear regulatory guidance,” calling for greater clarity in the future from France’s regulatory bodies.  The fine is linked to a copyright case that began in 2020, when the French Authority found Google to be acting in violation of France’s copyright and related rights law of 2019.

My experience with France, French laws, and the ins and outs of working with French organizations is limited. Nevertheless, my son — who attended university in France — told me an anecdote which illustrates how French laws work. Here’s the tale which I assume is accurate. He is a reliable sort.

A young man was in the immigration office in Paris. He and his wife were trying to clarify a question related to her being a French citizen. The bureaucrat had not accepted her birth certificate from a French municipal government, assorted documents from her schooling from pre-school to university, and the oddments of electric bills, rental receipts, and medical records. The husband, who was an American, told my son, “This office does not think my wife is French. She is. And I think we have it nailed this time. My wife has a photograph of General De Gaulle awarding her father a medal.” My son told me, “Dad, it did not work. The husband and wife had to refile the paperwork to correct an error made on the original form.”

My takeaway from this anecdote is that Google may want to stay within the bright white lines in France. Getting entangled in the legacy of Napoleon’s red tape can be an expensive, frustrating experience. Perhaps the Google will learn? On the other hand, maybe not.

Stephen E Arnold, March 22, 2024

US Bans Intellexa For Spying On Senator

March 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

One of the worst ideas in modern society is to spy on the United States. The idea becomes worse when the target is a US politician. Intellexa is a notorious company that designs software to hack smartphones and transform them into surveillance devices. NBC News reports how Intellexa’s software was recently used in an attempt to hack a US senator: “US Bans Maker Of Spyware That Targeted A Senator’s Phone.”

Intellexa designed the software Predator, which, once downloaded onto a phone, turns it into a surveillance device. Predator can turn on a phone’s camera and microphone, track a user’s location, and download files. The US Treasury Department banned Intellexa from conducting business in the US, and US citizens are barred from working with the company. These are the most aggressive sanctions the US has ever taken against a spyware company.

The official ban also targets Intellexa’s founder Tal Dilian, employee Sara Hamou, and four companies affiliated with it. Predator is also used by authoritarian governments to spy on journalists, human rights workers, and anyone deemed “suspicious:”

“An Amnesty International investigation found that Predator has been used to target journalists, human rights workers and some high-level political figures, including European Parliament President Roberta Metsola and Taiwan’s outgoing president, Tsai Ing-Wen. The report found that Predator was also deployed against at least two sitting members of Congress, Rep. Michael McCaul, R-Texas, and Sen. John Hoeven, R-N.D.”

John Scott-Railton, a senior spyware researcher at the University of Toronto’s Citizen Lab, said the US Treasury’s sanctions will rock the spyware world. He added that they could also inspire people in the industry to change careers or leave the countries where they work.

Intellexa isn’t the only outfit that makes spyware. Hackers can also design their own and share it with other bad actors.

Whitney Grace, March 22, 2024

The TikTok Flap: Wings on a Locomotive?

March 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I find the TikTok flap interesting. The app was purposeless until someone discovered that pre-teens and those with similar mental architecture would watch short videos on semi-forbidden subjects; for instance, see-through dresses, the thrill of synthetic opioids, updating the Roman vomitorium for a quick exit from parental reality, and the always-compelling self-harm presentations. But TikTok is not just a content juicer; it can provide some useful data in its log files. Cross correlating these data can provide some useful insights into human behavior. Slicing geographically makes it possible to do wonderful things. Apply some filters and a psychological profile can be output from a helpful intelware system. Whether these types of data surfing take place is not important to me. The infrastructure exists and can be used (with or without authorization) by anyone with access to the data.

Like bird wings on a steam engine, the ban on TikTok might not fly. Thanks, MSFT Copilot. How is your security revamp coming along?
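
To make the “slicing geographically” idea above concrete, here is a purely hypothetical sketch: given an exported table of view events, a few lines of pandas are enough to profile which topics hold attention in which regions. The file name and columns (user_id, region, topic, watch_seconds) are invented for illustration; the point is how little work the cross-correlation takes once log data of this kind exist.

```python
# Hypothetical sketch of geographic slicing of view-event logs with pandas.
# The file name and columns (user_id, region, topic, watch_seconds) are invented.
import pandas as pd

events = pd.read_csv("view_events.csv")

# Average watch time per topic within each region: a crude engagement profile.
profile = (
    events.groupby(["region", "topic"])["watch_seconds"]
    .agg(["count", "mean"])
    .sort_values("mean", ascending=False)
)

# Topics that dominate one region but not others stand out immediately.
top_by_region = profile.groupby(level="region").head(3)
print(top_by_region)
```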

What’s interesting to me is that the US Congress took action to make some changes in the TikTok business model. My view is that social media services required pre-emptive regulation when they first poked their furry, smiling faces into young users’ immature brains. I gave several talks about the risks of social media online in the 1990s. I even suggested remediating actions at the open source intelligence conferences operated by Major Robert David Steele, a former CIA professional and conference entrepreneur. As I recall, no one paid any attention. I am not sure anyone knew what I was talking about. Intelligence, then, was not into the strange new thing of open source intelligence and weaponized content.

Flash forward to 2024: as the US government geared up to “ban” TikTok or force ByteDance to divest itself of the app, many interesting opinions flooded the poorly maintained and rapidly deteriorating information highway. I want to highlight two of these write ups, their main points, and offer a few observations. (I understand that no one cared 30 years ago, but perhaps a few people will pay attention as I write this on March 16, 2024.)

The first write up is “A TikTok Ban Is a Pointless Political Turd for Democrats.” The language sets the scene for the analysis. I think the main point is:

Banning TikTok, but refusing to pass a useful privacy law or regulate the data broker industry is entirely decorative. The data broker industry routinely collects all manner of sensitive U.S. consumer location, demographic, and behavior data from a massive array of apps, telecom networks, services, vehicles, smart doorbells and devices (many of them *gasp* built in China), then sells access to detailed data profiles to any nitwit with two nickels to rub together, including Chinese, Russian, and Iranian intelligence. Often without securing or encrypting the data. And routinely under the false pretense that this is all ok because the underlying data has been “anonymized” (a completely meaningless term). The harm of this regulation-optional surveillance free-for-all has been obvious for decades, but has been made even more obvious post-Roe. Congress has chosen, time and time again, to ignore all of this.

The second write up is “The TikTok Situation Is a Mess.” This write up eschews the colorful language of the TechDirt essay. Its main point, in my opinion, is:

TikTok clearly has a huge influence over a massive portion of the country, and the company isn’t doing much to actually assure lawmakers that situation isn’t something to worry about.

Thus, the article makes clear its concern about the outstanding individuals serving in a representative government in Washington, DC, the true home of ethical behavior in the United States:

Congress is a bunch of out-of-touch hypocrites.

What do I make of these essays? Let me share my observations:

  1. It is too late to “fix up” the TikTok problem or clean up the DC “mess.” The time to act was decades ago.
  2. Virtual private networks and more sophisticated “get around” technology will be tapped by fifth graders so that the short form videos about forbidden subjects can be consumed. How long will it take a savvy fifth grader to “teach” her classmates about a point-and-click VPN? Two or three minutes. Will the hungry minds recall the information? Yep.
  3. The idea that “privacy” has not been regulated in the US is a fascinating point. Who exactly was pro-privacy in the wake of 9/11? Who exactly declined to use Google’s services as information about the firm’s data hoovering surfaced in the early 2000s? I will not provide the answer to this question because Google’s 90 percent plus share of the online search market presents the answer.

Net net: TikTok is one example of software with a penchant for capturing data and retaining those data in a form which can be processed for nuggets of information. One can point to Alibaba.com, CapCut.com, Temu.com, or my old Huawei mobile phone, which loved to connect to servers in Singapore until our fiddling with the device killed it dead.

Stephen E Arnold, March 20, 2024

Worried about TikTok? Do Not Overlook CapCut

March 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I find the excitement about TikTok interesting. The US wants to play the reciprocity card; that is, China disallows US apps, so the US can ban TikTok. How influential is TikTok? US elected officials learned firsthand that TikTok users can get messages through to what is often a quite unresponsive cluster of lawmakers. But let’s leave TikTok aside.

Thanks, MSFT Copilot. Good enough.

What do you know about the ByteDance cloud software CapCut? Ah, you have never heard of it. That’s not surprising because it is aimed at those who make videos for TikTok (big surprise) and other video platforms like YouTube.

CapCut has been gaining supporters like the happy-go-lucky people who published “how to” videos about CapCut on YouTube. On TikTok, CapCut short form videos have tallied billions of views. What makes it interesting to me is that it wants to phone home, store content in the “cloud”, and provide high-end tools to handle some tricky video situations like weird backgrounds on AI generated videos.
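
If you want to see the “phone home” behavior for yourself on a test device routed through a PC, a passive DNS capture is a low-effort starting point. The sketch below uses scapy to log which domains an app resolves; it assumes the test device’s traffic passes through the machine running the script (for example, via a Wi-Fi hotspot) and that you have the privileges needed to sniff. It observes lookups only; it says nothing about what data is actually sent.

```python
# Hedged sketch: log DNS lookups passing through this machine with scapy.
# Assumes the test device's traffic is routed through this host and that the
# script runs with sufficient privileges to sniff (e.g., sudo).
from scapy.all import sniff, DNSQR, IP

seen = set()

def log_query(pkt):
    # Record each unique domain the device tries to resolve.
    if pkt.haslayer(DNSQR):
        qname = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        if qname not in seen:
            seen.add(qname)
            src = pkt[IP].src if pkt.haslayer(IP) else "?"
            print(f"{src} -> {qname}")

# Capture DNS queries on the default interface until interrupted.
sniff(filter="udp port 53", prn=log_query, store=False)
```

Keep in mind that apps increasingly use encrypted DNS or pinned addresses, so an empty log proves nothing; a full intercepting proxy such as mitmproxy is the next step if you want to inspect request contents on a device you control.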

The product CapCut was named (I believe) JianYing or Viamaker (the story varies by source), which means nothing to me. The Google suggests its meanings could range from “hard” to “paper cut out.” I am not sure I buy these suggestions because Chinese is a linguistically slippery fish. Is that a question or a horse? In 2020, the app got a bit of a shove into the world outside of the estimable Middle Kingdom.

Why is this important to me? Here are my reasons for creating this short post:

  • Based on my tests of the app, it has some of the same data hoovering functions as TikTok
  • The images and information about users provide another source of potentially high-value data to those with access to it
  • Data from “casual” videos might be quite useful when the person making the video has landed a job in a US national laboratory or in one of the high-tech playgrounds in Silicon Valley. Am I suggesting blackmail? Of course not, but a release of certain imagery might be an interesting test of the videographer’s self-esteem.

If you want to know more about CapCut, try these links:

  • Download (ideally to a burner phone or a PC specifically set up to test interesting software) at www.capcut.com
  • Read about the company CapCut in this 2023 Recorded Future write up
  • Learn about CapCut’s privacy issues in this Bloomberg story.

Net net: Clever stuff, but who is paying attention? Parents? Regulators? Chinese intelligence operatives?

Stephen E Arnold, March 18, 2024

AI to AI Program for March 12, 2024, Now Available

March 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Erik Arnold, with some assistance from Stephen E Arnold (the father), has produced another installment of “AI to AI: Smart Software for Government Use Cases.” The program presents news and analysis about the use of artificial intelligence (smart software) in government agencies.

The ad-free program features Erik S. Arnold, Managing Director of Govwizely, a Washington, DC consulting and engineering services firm. Arnold has extensive experience working on technology projects for the US Congress, the Capitol Police, the Department of Commerce, and the White House. Stephen E Arnold, an adviser to Govwizely, also participates in the program. The current episode covers five topics in a father-and-son exploration of important, yet rarely discussed subjects. These include the analysis of law enforcement body camera video by smart software, the appointment of an AI information czar by the US Department of Justice, copyright issues faced by UK artificial intelligence projects, the role of the US Marines in the Department of Defense’s smart software projects, and the potential use of artificial intelligence in the US Patent Office.

The video is available on YouTube at https://youtu.be/nsKki5P3PkA. The Apple audio podcast is at this link.

Stephen E Arnold, March 12, 2024
