How Generative Graphics AI Might Be Used to Embed Hidden Messages

November 3, 2023

This essay is the work of a dumb humanoid. No smart software required.

Subliminal advertising is back, now with an AI boost. At least that is the conclusion of one Tweeter (X-er?) who posted a few examples of the allegedly frightful possibilities. The Creative Bloq considers, “Should We Be Scared of Hidden Messages in AI Optical Illusions?” Writer Joseph Foley tells us:

“Some of the AI optical illusions we’ve seen recently have been slightly mesmerizing, but some people are concerned that they could also be dangerous. ‘Many talk about the dangers of “AGI” taking over humans. But you should worry more about humans using AI to control other humans,’ Cocktail Peanut wrote in a post on Twitter, providing the example of the McDonald’s logo embedded in an anime-style AI-generated illustration. The first example wasn’t very subtle. But Peanut followed up with less obvious optical illusions, all made using a Stable Diffusion-powered Hugging Face space called Diffusion Illusion HQ created by Angry PenguinPNG. The workflow for making the illusions, using Monster Labs QR Control Net, was apparently discovered by accident. The ControlNet technique allows users to specify inputs, for example specific images or words, to gain more control over AI image generations. Monster Labs’ tool was created to allow QR codes to be used as input so the AI would generate usable but artistic QR codes as an output, but users discovered that it could also be used to hide patterns or words in AI-generated scenes.”

Hidden messages in ads have been around since 1957, though they are officially banned as “deceptive advertising” in the US. The concern here is that AI will make the technique much, much cheaper and easier. Interesting but not really surprising. Should we be concerned? Foley thinks not. He notes the few studies on subliminal advertising suggest it is not very effective. Will companies, and even some governments, try it anyway? Probably.

Cynthia Murrell, November 3, 2023

Knowledge Workers, AI Software Is Cheaper and Does Not Take Vacations. Worried Yet?

November 2, 2023

This essay is the work of a dumb humanoid. No smart software required.

I believe the 21st century is the era of good enough or close enough for horseshoes products and services. Excellence is a surprise, not a goal. At a talk I gave at CeBIT years ago, I explained that certain information-centric technologies had reached the “let’s give up” stage of development. Fresh in my mind were the lessons I learned writing a compendium of information access systems published as “The Enterprise Search Report” by a company lost to me in the mists of time.


“I just learned that our department will be replaced by smart software,” says the MBA from Harvard. The female MBA from Stanford emits a scream just like the one she let loose after scuffing her new Manuel Blahnik (Rodríguez) shoes. Thanks, MidJourney, you delivered an image with a bit of perspective. Good enough work.

I identified the flaws in implementations of knowledge management, information governance, and enterprise search products. The “good enough” comment was made to me during the Q-and-A session. The younger person pointed out that systems for finding information — regardless of the words I used to describe what most knowledge workers did — were “good enough.” I recall the simile the intense young person offered as I was leaving the lecture hall. Vivid now, years later, is the comment that improving information access was like making catalytic converters deliver zero emissions. Thus, information access can’t get where it should be. The technology is good enough.

I wonder if that person has read “AI Anxiety As Computers Get Super Smart.” Probably not. I believe that young person knew more than I did. As a dinobaby, I just smiled and listened. I am a smart dinobaby in some situations. I noted this passage in the cited article:

Generative AI, however, can take aim at white-collar jobs such as lawyers, doctors, teachers, journalists, and even computer programmers. A report from the McKinsey consulting firm estimates that by the end of this decade, as much as 30 percent of the hours worked in the United States could be automated in a trend accelerated by generative AI.

Executive orders and government proclamations are unlikely to have much effect on some people. The write up points out:

Generative AI makes it easier for scammers to create convincing phishing emails, perhaps even learning enough about targets to personalize approaches. Technology lets them copy a face or a voice, and thus trick people into falling for deceptions such as claims a loved one is in danger, for example.

What’s the fix? One that is good enough probably won’t have much effect.

Stephen E Arnold, November 2, 2023


Microsoft at Davos: Is Your Hair on Fire, Google?

November 2, 2023

This essay is the work of a dumb humanoid. No smart software required.

At the January 2023 Davos meeting, Microsoft said AI is the next big thing. The result? Google shifted into Code Red and delivered a wild and crazy demonstration of a deeply flawed AI system in February 2023. I think the phrase “Code Red” became associated with the state of panic within the comfy confines of Googzilla’s executive suites, real and virtual.

Sam AI-man made appearances, speaking to anyone who would listen, using words like “billion dollar investment,” efficiency, and work processes. The result? Googzilla itself found out that whether Microsoft’s brilliant marketing of AI worked or not, the Softies had just demonstrated that it — not the Google — was a “leader.” The new Microsoft could create revenue and credibility problems for the Versailles of technology companies.

Therefore, the Google tried to be nimble and make the myth of engineering prowess into reality, not a CGI version of Camelot. The PR Camelot featured Google as the Big Dog in the AI world. After all, Google had done the protein thing, an achievement which made absolutely no sense to 99 percent of the earth’s population. Some asked, “What the heck is a protein folder?” I want a Google Waze service that shows me where traffic cameras are.

The Google executives apparently went to meetings with their hair on fire.


A group of Google executives in a meeting with their hair on fire after Microsoft’s Davos AI announcement. Google wanted teams to manifest AI prowess everywhere, lickety-split. Google reorganized. Google probed Anthropic, and one Googler invested in the company. Dr. Prabhakar Raghavan demonstrated peculiar communication skills.

I had these thoughts after I read “Google Didn’t Rush Bard Chatbot to Beat Microsoft, Executive Says.” So what was this Code Red thing? Why has Google — the quantum supremacy and global leader in online advertising and protein folding — been lagging behind Microsoft? What is it now? Oh, yeah. Almost a year, a reorganization of the Google’s smart software group, and one of Google’s own employees explaining that AI could have a negative impact on the world. Oh, yeah, that guy is one of the founders of Google’s DeepMind AI group. I won’t mention the Googler who thought his chatbot was alive and ended up with an opportunity to find his future elsewhere. Right. Code Red. I want to note Timnit Gebru and the stochastic parrot, the Jeff Dean lateral arabesque, and the significant investment in a competitor’s AI technology. Right. Standard operating procedure for an online advertising company with a fairly healthy self concept about its excellence and droit du seigneur.

The Bloomberg article reports what I am assuming is “real,” actual factual information:

A senior Google executive disputed suggestions that the company rushed to release its artificial intelligence-based chatbot Bard earlier this year to beat a similar offering from rival Microsoft Corp. Testifying in Google’s defense at the Justice Department’s antitrust trial against the search giant, Elizabeth Reid, a vice president of search, acknowledged that Bard gave “a wrong answer” during its public unveiling in February. But she rejected the contention by government lawyer David Dahlquist that Bard was “rushed” out after Microsoft announced it was integrating generative AI into its own Bing search engine.

The real news story pointed out:

Google’s public demonstration of Bard underwhelmed investors. In one instance, Bard was asked about new discoveries from the James Webb Space Telescope. The chatbot incorrectly stated the telescope was used to take the first pictures of a planet outside the Earth’s solar system. While the Webb telescope was the first to photograph one particular planet outside the Earth’s solar system, NASA first photographed a so-called exoplanet in 2004. The mistake led to a sharp fall in Alphabet’s stock. “It’s a very subtle language difference,” Reid said in explaining the error in her testimony Wednesday. “The amount of effort to ensure that a paragraph is correct is quite a lot of work.” “The challenges of fact-checking are hard,” she added.

Yes, facts are hard in Hallucinationville. I think the concept I take away from this statement is that PR is easier than making technology work. But today Google and similar firms are caught in what I call a “close enough for horseshoes” mind set. Smart software, in my experience, is like my dear, departed mother’s not-quite-done pineapple upside down cakes. Yikes, those were a mess. I could eat the maraschino cherries but nothing else. The rest was deposited in the trash bin.

And where are the “experts” in smart search? Prabhakar? Danny? I wonder if they are embarrassed by their loss of their thick lustrous hair. I think some of it may have been singed after the outstanding Paris demonstration and subsequent Mountain View baloney festivals. Was Google behaving like a child frantically searching for his mom at the AI carnival? I suppose when one is swathed in entitlements, cashing huge paychecks, and obfuscating exactly how the money is extracted from advertisers, reality is distorted.

Net net: Microsoft at Davos caused Google’s February 2023 Paris presentation. That mad scramble has caused me to conclude that talking about AI is a heck of a lot easier than delivering reliable, functional, and thought-out products. Is it possible to deliver such products when one’s hair is on fire? Some data say, “Nope.”

Stephen E Arnold, November 2, 2023

By Golly, the Gray Lady Will Not Miss This AI Tech Revolution!

November 2, 2023

This essay is the work of a dumb humanoid. No smart software required.

The technology beacon of the “real” newspaper is shining brightly. Flash, the New York Times Online. Flash, terminating the exclusive with LexisNexis. Flash. The shift to — wait for it — a Web site. Flash. The in-house indexing system. Flash. Buying About.com. Flash. Doing podcasts. My goodness, the flashes have impaired my vision. And where are we today after labor strife, newsroom craziness, and a list of bestsellers that gets data from…? I don’t really know, and I just haven’t bothered to do some online poking around.


A real journalist of today uses smart software to write listicles for Buzzfeed, essays for high school students, and feature stories for certain high profile newspapers. Thanks for the drawing, Microsoft Bing. Trite but okay.

I thought about the technology flashes from the Gray Lady’s beacon high atop its building sort of close to Times Square. Nice branding. I wonder if mobile phone users know why the tourist destination is called Times Square. Since I no longer work in New York, I have forgotten. I do remember the high intensity pinks and greens of a certain type of retail establishment. In fact, I used to know the fellow who created this design motif. Ah, you don’t remember. My hunch is that there are other factoids you and I won’t remember.

For example, what’s the byline on a New York Times’s story? I thought it was the name or names of the many people who worked long hours, made phone calls, visited specific locations, and sometimes visited the morgue (no, the newspaper morgue, not the “real” morgue where the bodies of compromised sources ended up).

If the information in that estimable source Showbiz411.com is accurate, the Gray Lady may cite zeros and ones. The article is “The New York Times Help Wanted: Looking for an AI Editor to Start Publishing Stories. Six Figure Salary.” Now that’s an interesting assertion. A person like me might ask, “Why not let a recent college graduate crank out machine-generated stories?” My assumption is that most people trying to meet a deadline and in sync with Taylor Swift will know about machine-generated information. But, if the story is true, here’s what’s up:

… it looks like the Times is going let bots do their journalism. They’re looking for “a senior editor to lead the newsroom’s efforts to ambitiously and responsibly make use of generative artificial intelligence.” I’m not kidding. How the mighty have fallen. It’s on their job listings.

The Showbiz411.com story allegedly quotes the Gray Lady’s help wanted ad as saying:

“This editor will be responsible for ensuring that The Times is a leader in GenAI innovation and its applications for journalism. They will lead our efforts to use GenAI tools in reader-facing ways as well as internally in the newsroom. To do so, they will shape the vision for how we approach this technology and will serve as the newsroom’s leading voice on its opportunity as well as its limits and risks.”

There are a bunch of requirements for this job. My instinct is that a few high school students could jump into this role. What’s the difference between a ChatGPT output about crossing the Delaware and writing a “real” news article about fashion trends seen at Otto’s Shrunken Head?

Several observations:

  • What does this ominous development mean to the accountants who will calculate the cost of “real” journalists versus a license to smart software? My thought is that the general reaction will be positive. Imagine: No vacays, no sick days, and no humanoid protests. The Promised Land has arrived.
  • How will the Gray Lady’s management team explain this cuddling up to smart software? Perhaps it is just one of those newsroom romances? On the other hand, what if something serious develops and the smart software moves in? Yipes.
  • What will “informed” readers think of stories crafted by the intellectual engine behind a high school student’s essay about great moments in American history? Perhaps the “informed” readers won’t care?

Exciting stuff in the world of real journalism down the street from Times Square and the furries, pickpockets, and gawkers from Ames, Iowa. I wonder if the hallucinating smart software will be as clever as the journalist who fabricates a story? Probably not. “Real” journalists do not shape, weaponize, or filter the actual factual. Is John Wiley & Sons ready to take the leap?

Stephen E Arnold, November 2, 2023


Social Media: The Former Big Thing

November 2, 2023

This essay is the work of a dumb humanoid. No smart software required.

It’s a common saying that if you aren’t on social media you might as well not exist. Social media profiles are necessary to be successful in the modern world, but Business Insider claims that many people are spending less time glued to their screens: “Great News-Social Media Is Falling Apart.”

Facebook, Instagram, Twitter, and other social media giants alienated their users with too much sponsored content and entertainment hubs. Large social media platforms are less about connections and more about generating revenue via clicks. Users are experiencing social network fatigue, so they’re posting less and even jumping ship. Users are now spending time in group chats or on smaller, more intimate social platforms. On the small platforms, users are free from curated content and ads. They’re also using platforms for specific groups or topics.

The current state of social media is a fractured, disconnected mess. New networks pop up and run the popularity gauntlet before they disappear. Users want a social media platform that connects everything with the niche appeal of small networks:

“Mike McCue, Flipboard’s CEO, believes that the next big, social platform must bring together the benefits of both worlds, he said: ‘the quality and trust in small, transparent communities with the ability for those quality conversations to reach millions.’ But instead of one platform that manages to appease everyone, the future of social media is looking more like a network of platforms that offer people a customized experience. The ideal system would not only allow you to migrate to new social apps without losing your network or profile but also link them together so that you could post on one and a friend could comment on it from another.”

None of the smaller social media networks are making money yet but the opportunities are there. Users want a clean, ad-free experience similar to how Facebook and Twitter used to be. If decentralized social media platforms learn to connect, they’ll give the larger companies a run for their money and end their monopolies.

Whitney Grace, November 2, 2023

Telegram: A Super App with Features Al Capone Might Have Liked

November 1, 2023

When I mention in my law enforcement lectures that Telegram, a frisky encrypted super app for thumb typers, is “off the radar” for some analysts, I get more than a few blank looks. Consider this: The “special conflict” or whatever some in the Land of Tolstoy call it, pivots on Telegram. And why not? It allows encrypted messages, both public and private. A safety conscious user can include an image or a video snippet and post it to the Musky service with a couple of taps. Those under attack can disseminate location data to a mailing list of Telegram contacts. The app makes it possible to pay for “stuff”; often that stuff is CSAM or information about where to pick up an order containing contraband.


The soldier with the mobile phone says, “Hey, this hot video content is great on Telegram.” The other soldier says, “Jump to the Spies-R-Us service. I will give you the coordinates for the drone assault. Also, order some noodle latkes to Checkpoint Grhriba at 1800 hours.” Thanks, MidJourney. WW2 cartoonists would be proud of you.

Pivot to the Israel Hamas war. Yep, Telegram is in use. Civilians, war fighters, even those in prison with mobile devices are Telegramming away. The Russian brothers who created the original app may not have anticipated its utility in war zones.

My research team has noted that some Clear Web sites discuss slippery subjects like carding. Then the “buy now” or similar action points to a Telegram “location.” What about the Dark Web? Telegram makes it possible to do “Dark Web things” without the risk and hassle of operating a Dark Web site or service. Pretty innovative, right? And what about that Dark Web traffic? Our analysis suggests that one will find Dark Web bots, law enforcement from numerous countries, and a modest number of human bad actors who cannot or have not embraced Telegram.

Now the super app is getting some enhancements, if the information in the Gadgets360 article is accurate. “Telegram Update Brings Advanced Reply Options, Link Preview Customizations, Account Colors, More.” Enhancements include:

  • Replying to a message from one chat to another. Will this be useful for certain extremist users doing fund raising or recruiting?
  • Customized shared links. Will this be useful to CSAM purveyors?
  • Fast forward and rewind for videos in Telegram messages. A winner for some video content vendors.
  • Paid features. Some Telegram users pay for these services. Yep, money. Subscription money.

And the encryption thing? Reasonably good. Possibly less open than the UK Covid information allegedly from WhatsApp.

Stephen E Arnold, November 1, 2023

Cyber Security Professionals May Need Worry Beads. Good Worry Beads

November 1, 2023

This essay is the work of a dumb humanoid. No smart software required.

I read “SEC Charges SolarWinds and Its CISO With Fraud and Cybersecurity Failures.” Let’s assume the write up is accurate or — to hit today’s target for excellence — the article is close enough for horseshoes. Armed with this assumption, will cyber security professionals find that their employers or customers will be taking a closer look at the actual efficacy of the digital fences and news flows that keep bad actors outside the barn?


A very happy bad actor cackles in a Starbucks after penetrating a corporate security system: “Hey, that was easy. When will these people wake up that you should not have fired me?” Thanks, MidJourney, not exactly what I wanted but good enough, the new standard of excellence.

The write up suggests that the answer may be a less than quiet yes. I noted this statement in the write up:

According to the complaint filed by the SEC, Austin, Texas-based SolarWinds and Brown [top cyber dog at SolarWinds] are accused of deceiving investors by overstating the company’s cybersecurity practices while understating or failing to disclose known risks. The SEC alleges that SolarWinds misled investors by disclosing only vague and hypothetical risks while internally acknowledging specific cybersecurity deficiencies and escalating threats.

The shoe hit the floor, if the write up is on the money:

A key piece of evidence cited in the complaint is a 2018 internal presentation prepared by a SolarWinds engineer [an employee who stated something senior management does not enjoy knowing] that was shared internally, including with Brown. The presentation stated that SolarWinds’ remote access setup was “not very secure” and that exploiting the vulnerability could lead to “major reputation and financial loss” for the company. Similarly, presentations by Brown in 2018 and 2019 indicated concerns about the company’s cybersecurity posture.

From my point of view, there are several items to jot down on a 4×6 inch notecard and tape to the wall:

  1. The “truth” is often at odds with what senior managers want to believe, think they know, or want to learn. Ignorance is bliss, just not a good excuse after a modest misstep.
  2. There are more companies involved in the foul up than the news sources have identified. Far be it from me to suggest that highly regarded big-time software companies do a C minus job engineering their security. Keep in mind that most senior managers — even at high tech firms — are out of the technology loop no matter what the LinkedIn biography says or employees believe. Accountants and MBAs are good at some things, bad at others. Cyber security is in the “bad” ledger.
  3. The marketing collateral for most cyber security, threat intelligence services, and predictive alerting services talks about a sci-fi world, not the here and now of computer science students given penetration assignments from nifty places like Estonia and Romania, among others. There are disaffected employees who want to leave their former employers a digital hickey. There are developers, hired via a respected gig matcher, who will do whatever an anonymous customer requires for hard cash or a crypto payment. Most companies have no idea how or where the problem originates.
  4. Think about insider threats, particularly when insiders include contractors, interns, employees who are unloved, or a consulting firm with a sketchy wizard gathering data inside a commercial operation.

Sure, cyber security just works. Yeah, right. Maybe this alleged action toward a security professional will create some discomfort and a few troubled dreams. Will there be immediate and direct change? Nope. But the PowerPoint decks will be edited. The software will not be fixed up as quickly. That’s expensive and may not be possible with a cyber security firm’s current technical staff and financial resources.

Stephen E Arnold, November 1, 2023

How Does One Impede US AI Progress? Have a Government Meeting?

November 1, 2023

This essay is the work of a dumb humanoid. No smart software required.

The Washington Post may be sparking a litigation hoedown. How can a newspaper give legal eagles an opportunity to buy a private island and not worry about the cost of LexisNexis searches? The answer may be in “AI Researchers Uncover Ethical, Legal Risks to Using Popular Data Sets.” The UK’s efforts to get a group to corral smart software are interesting. Lawyers may be the foot that slows AI traffic on the new Information Superhighway.

The Washington Post reports:

The advent of chatbots that can answer questions and mimic human speech has kicked off a race to build bigger and better generative AI models. It has also triggered questions around copyright and fair use of text taken off the internet, a key component of the massive corpus of data required to train large AI systems. But without proper licensing, developers are in the dark about potential copyright restrictions, limitations on commercial use or requirements to credit a data set’s creators.

There is nothing like jumping in a lake with the local Polar Bears Club to spark investor concern about paying big fines. The chills and thrills of the cold water create a heightened state of awareness.


How’s the water this morning?

Several observations:

  1. A collision between the compunction to innovate in AI and the risk of legal liability seems likely.
  2. Innovators will forge ahead, and investors will have to figure out the risks by looking for legal eagles and big sharks lurking below the surface.
  3. Whatever happens in North America and Western Europe will not slow the pace of investment in AI in the Middle East and China.
  4. Are there unpopular data sets, perhaps generated by biased smart software?

Uncertainty and risk. Thanks, AI innovators.

Stephen E Arnold, November 1, 2023

China and Russia: Thinking Alike

November 1, 2023

This essay is the work of a dumb humanoid. No smart software required.

China’s authoritarian government went to a new extreme with its social credit system. The social credit system, a.k.a. a social rating system, assigns points to citizens based on arbitrary rules that align with the Chinese government’s ideology. If citizens have a low score, they are denied services and privileges. Gaming Deputy explains that a Russian university is following China’s example: “The Russian State Social University Is Developing A Social Rating System ‘We’.”

The Russian State Social University (RGSU) is developing a social rating system for Russian citizens called “We.” RGSU invited its students and other interested people to participate in We testing. The We social credit platform rates people on numerous factors:

“The pilot rating system will include questions about various aspects of citizens’ lives, such as education, presence of children and dependents, sources of income, benefits, credit history, criminal records, social media accounts, participation in public life, government awards, language skills (especially Chinese), commitment to sports, healthy lifestyle and so on. All these parameters will be used to determine the social status and level of each person.”

People will receive a two-digit scoring code. The first number will be an individual’s social status, and the second will be their social level. In order to ensure the We system’s data is accurate, people’s TIN, passport, SNILS, and telephone number will be linked to it. The RGSU developers claim We will be useful for banks and governors who want to classify citizens based on their usefulness.
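To make the two-digit idea concrete, here is a toy sketch of how such a scoring scheme might be wired up. Everything in it — the factors, the weights, the bucketing — is invented for illustration; RGSU has not published the actual We scoring logic.

```python
# Hypothetical illustration of a two-digit "social rating" code:
# first digit = social status bucket, second digit = social level bucket.
# All factors and weights below are invented; nothing reflects the real We system.

STATUS_FACTORS = {"education": 3, "government_awards": 2, "public_life": 2}
LEVEL_FACTORS = {"income": 3, "clean_credit_history": 2, "no_criminal_record": 2, "sports": 1}

def bucket(score: int, max_score: int) -> int:
    """Squash a raw weighted score into a single digit 0-9."""
    return min(9, score * 10 // (max_score + 1))

def we_code(person: dict) -> str:
    """Return a two-character code: status digit followed by level digit."""
    status = sum(w for f, w in STATUS_FACTORS.items() if person.get(f))
    level = sum(w for f, w in LEVEL_FACTORS.items() if person.get(f))
    return f"{bucket(status, sum(STATUS_FACTORS.values()))}" \
           f"{bucket(level, sum(LEVEL_FACTORS.values()))}"

# A person with some education, some income, and a sports habit:
print(we_code({"education": True, "income": True, "sports": True}))  # → "34"
```

The unsettling part is how little code it takes: a handful of arbitrary weights, and a bureaucracy has a number to attach to every citizen.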

A social credit system might sound useful, but it doesn’t take long for it to become a tool of nightmares. The article emphasizes that transparency, data protection, and a balance between individual rights and government interests are necessary. Does anyone actually believe the Russian government will be held accountable?

Whitney Grace, November 1, 2023

Google Pays Apple to Be More Secure? Petulant, Parental, or Indifferent?

October 31, 2023

This essay is the work of a dumb humanoid. No smart software required.

I am fascinated by the allegedly “real” information in this Fortune Magazine write up: “Google CEO Sundar Pichai Swears Under Oath That $26 Billion Payment to Device Makers Was Partly to Nudge Them to Make Security Upgrades and Other Improvements.”

As I read the article, this passage pokes me in the nose:

Pichai, the star witness in Google’s defense, testified Monday that Google’s payments to phone manufacturers and wireless phone companies were partly meant to nudge them into making costly security upgrades and other improvements to their devices, not just to ensure Google was the first search engine users encounter when they open their smartphones or computers. Google makes money when users click on advertisements that pop up in its searches and shares the revenue with Apple and other companies that make Google their default search engine.

First, I like the “star witness” characterization. It is good to know where the buck stops at the Alphabet Google YouTube et al enterprise fruit basket.


The driver and passengers shout to the kids, “Use this money to improve your security. If you need more, just call 1 800 P A Y O F F S.” Thanks, MidJourney, you do money reasonably well. By the way, where did the cash come from?

Second, I like the notion of paying billions to nudge someone to do something. I know that getting action from DC lobbyists, hiring people from competitors, pushing out people who disagree with Google management, and buying clicks costs less than billions. In some cases, the fees are considerably lower. Some non-US law enforcement entities collect several thousand dollars from wives who want to have their husbands killed by an Albanian or Mexican hit man. Billions does more than nudge. Billions means business.

Third, I liked the reminder that no ruling will result in 2023. Then once a ruling is revealed, “another trial will determine how to rein in its [the Google construct’s] market power.”

Several questions popped into my mind:

  1. Is the “nudge” thing serious? My dinobaby mind interprets the statement as either a bit of insider humor, a disconnect between the Googley world and most people’s everyday reality, or a bit dismissive. I can hear one of my high school science club member’s saying to a teacher perceived as dull normal, “You would not understand the real reason so I am pointing the finger at Plato’s philosophy.”
  2. The “billions” is the giveaway. That is more than the average pay-to-play shyster of Fiverr.com charges. Why such a premium? For billions, I can think of several lobbying outfits who would do some pretty wild and crazy things for a couple of hundred million in cash.
  3. Why is the already glacier-like legal process moving slowly with the prospect of yet another trial to come? With a substantial footprint in search and online advertising, are some billions being used to create the world’s most effective brake on a legal process?
  4. Why is so much of the information redacted and otherwise difficult or almost impossible to review? I thought the idea of a public trial involving a publicly traded company in a democratic society was supposed to be done in the sunshine?

Fortune Magazine sees nothing amiss. I wonder if I am the only dinobaby wondering what’s beneath the surface of what seems to be a trial which is showing some indications of being quite Googley. I am not sure if that is a positive thing.

I also wonder why a large outfit like Apple needs to be nudged with Google billions. That strikes me as something worth thinking about. The fake Albanian and Mexican hitmen may learn something new by answering that question. Hey, Fortune Magazine, why not take another shot at covering this story?

Stephen E Arnold, October 31, 2023
