Predicting the Weather: Another Stuffed Turkey from Google DeepMind?
November 27, 2023
This essay is the work of a dumb dinobaby. No smart software required.
By luck or by design, the adolescents at OpenAI have dominated headlines for the pre-turkey, the turkey, and the post-turkey celebrations. In the midst of this surge in poohbah outputs, Xhitter xheets, and podcast posts, non-OpenAI news has been struggling for a toehold.
An important AI announcement from Google DeepMind stuns a small crowd. Were the attendees interested in predicting the weather or getting a free umbrella? Thanks, MSFT Copilot. Another good-enough artwork whose alleged copyright violations you want me to determine. How exactly am I to accomplish that? Use Google Bard?
What is another AI company to do?
A partial answer appears in “DeepMind AI Can Beat the Best Weather Forecasts. But There Is a Catch”. This is an article in the esteemed and rarely spoofed Nature Magazine. None of that Techmeme-dominating blue-link stuff. None of the influential technology reporters asserting, “I called it. I called it.” None of the eye-wateringly dorky observations that OpenAI’s organizational structure was a problem. None of the “Satya Nadella learned about the ouster at the same time we did.” Nope. Nope. Nope.
What Nature provided is good, old-fashioned content marketing. The write up points out that DeepMind says that it has once again leapfrogged mere AI mortals. Like the quantum supremacy assertion, the Google can predict the weather. (My great grandmother made the same statement about The Farmer’s Almanac. She believed it. May she rest in peace.)
In the midst of the OpenAI news-making turkeyfest, the estimable magazine reported:
To make a forecast, it uses real meteorological readings, taken from more than a million points around the planet at two given moments in time six hours apart, and predicts the weather six hours ahead. Those predictions can then be used as the inputs for another round, forecasting a further six hours into the future…. They [Googley DeepMind experts] say it beat the ECMWF’s “gold-standard” high-resolution forecast (HRES) by giving more accurate predictions on more than 90 per cent of tested data points. At some altitudes, this accuracy rose as high as 99.7 per cent.
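The iterative rollout the Nature quote describes can be sketched in a few lines. This is my own illustrative construction, not DeepMind’s code: `predict_six_hours` is a dummy stand-in for the trained model, which maps two weather states six hours apart to the state six hours ahead; the output of each step becomes the input of the next.

```python
# Hypothetical sketch of the autoregressive rollout described above.
# `predict_six_hours` is a placeholder for the learned model; here it just
# extrapolates linearly so the loop structure is runnable.

def predict_six_hours(state_prev, state_now):
    """Stand-in for the trained model: returns the state 6 hours ahead."""
    return [2 * now - prev for prev, now in zip(state_prev, state_now)]

def rollout(state_prev, state_now, hours=240):
    """Chain 6-hour predictions to cover `hours` (240 h = 10 days)."""
    steps = hours // 6
    states = [state_prev, state_now]  # the two initial observed states
    for _ in range(steps):
        # Each prediction is fed back in as input for the next step.
        states.append(predict_six_hours(states[-2], states[-1]))
    return states

forecast = rollout([10.0, 5.0], [11.0, 6.0], hours=24)
print(len(forecast))  # 2 initial states + 4 six-hour steps = 6
```

The design point is that errors compound: a ten-day forecast is forty chained six-hour predictions, so small per-step inaccuracies matter.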
No more ruined picnics. No weddings with bridesmaids’ shoes covered in mud. No more visibly weeping mothers because everyone is wet.
But Nature, to the disappointment of some PR professionals, presents an alternative viewpoint. What a bummer after all those meetings and presentations:
“You can have the best forecast model in the world, but if the public don’t trust you, and don’t act, then what’s the point?” [A statement attributed to Ian Renfrew at the University of East Anglia]
Several thoughts are in order:
- Didn’t IBM make a big deal about its super duper weather capabilities? It bought the Weather Channel too. But when the weather and customers got soaked, I think IBM folded its umbrella. Will Google have to emulate IBM’s behavior? I mean “the weather.” (Note: The owner of the IBM Weather Company is an outfit once alleged to have owned or been involved with the NSO Group.)
- Google appears to have convinced Nature to announce the quantum supremacy type breakthrough only to find that a professor from someplace called East Anglia did not purchase the rubber boots from the Google online store.
- The current edition of The Old Farmer’s Almanac is about US$9.00 on Amazon. That predictive marvel was endorsed by Gussie Arnold, born about 1835. We are not sure because my father’s records of the Arnold family were soaked by a sudden thunderstorm.
Just keep in mind that Google’s system can predict the weather 10 days ahead. Another quantum PR moment from the Google which was drowned out in the OpenAI tsunami.
Stephen E Arnold, November 27, 2023
Microsoft, the Techno-Lord: Avoid My Galloping Steed, Please
November 27, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The Merriam-Webster.com online site defines “responsibility” this way:
re·spon·si·bil·i·ty
1 : the quality or state of being responsible: such as
: moral, legal, or mental accountability
: RELIABILITY, TRUSTWORTHINESS
: something for which one is responsible
The online sector has a clever spin on responsibility; that is, in my opinion, the companies have none. Google wants people who use its online tools and post content created with those tools to make sure that what the Google system outputs does not violate any applicable rules, regulations, or laws.
In a traditional fox hunt, the hunters had the “right” to pursue the animal. If a farmer’s daughter were in the way, it was the farmer’s responsibility to keep the silly girl out of the horse’s path. That will teach them to respect their betters, I assume. Thanks, MSFT Copilot. I know you would not put me in legal jeopardy, would you? Now what are the laws pertaining to copyright for a cartoon in Armenia? Darn, I have to know that, don’t I?
Such a crafty way of defining itself as the mere creator of software machines has inspired Microsoft to follow a similar path. The idea is that anyone using Microsoft products, solutions, and services is “responsible” to comply with applicable rules, regulations, and laws.
Tidy. Logical. Complete. Just like a nifty algebra identity.
Microsoft believes it has no liability if an AI, like Copilot, is used to infringe on copyrighted material.
The write up includes this passage:
So this all comes down to, according to Microsoft, that it is providing a tool, and it is up to users to use that tool within the law. Microsoft says that it is taking steps to prevent the infringement of copyright by Copilot and its other AI products, however, Microsoft doesn’t believe it should be held legally responsible for the actions of end users.
The write up (with no Jimmy Kimmel spin) includes this statement, allegedly from someone at Microsoft:
Microsoft is willing to work with artists, authors, and other content creators to understand concerns and explore possible solutions. We have adopted and will continue to adopt various tools, policies, and filters designed to mitigate the risk of infringing outputs, often in direct response to the feedback of creators. This impact may be independent of whether copyrighted works were used to train a model, or the outputs are similar to existing works. We are also open to exploring ways to support the creative community to ensure that the arts remain vibrant in the future.
From my drafty office in rural Kentucky, the refusal to accept responsibility for its business actions, its products, its policies to push tools and services on users, and the outputs of its cloudy system is quite clever. Exactly how will a user of products pushed at them, like Edge and its smart features, prevent a smart Microsoft system from handing over something that violates an applicable rule, regulation, or law?
But legal and business cleverness is the norm for the techno-feudalists. Let the serfs deal with the body of the child killed when the barons chase a fox through a small leasehold. I can hear the brave royals saying, “It’s your fault. Your daughter was in the way. No, I don’t care that she was using the free Microsoft training materials to learn how to use our smart software.”
Yep, responsible. The death of the hypothetical child frees up another space in the training course.
Stephen E Arnold, November 27, 2023
Amazon Alexa Factoids: A Look Behind the Storefront Curtains
November 24, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Hey, Amazon admirers, I noted some interesting (allegedly accurate) factoids in “Amazon Alexa to Lose $10 Billion This Year.” No, I was not pulled in by the interesting puddle of red ink.
Alexa loves to sidestep certain questions. Thanks, MSFT Copilot. Nice work even though you are making life difficult for Google’s senior management today.
Let me share four items which I thought interesting. Please, navigate to the original write up to get the full monty. (I support the tailor selling civvies, not the card game.)
- “Just about every plan to monetize Alexa has failed, with one former employee calling Alexa ‘a colossal failure of imagination,’ and ‘a wasted opportunity.’” [I noted the word colossal.]
- “Amazon can’t make money from Alexa telling you the weather”
- “I worked in the Amazon Alexa division. The level of incompetence coupled with arrogance was astounding.”
- “FAANG has gotten so large that the stock bump that comes from narrative outpaces actual revenue from working products.”
Now how about the management philosophy behind these allegedly accurate statements? It sounds like the consequence of high school science club field trip planning. Not sure how those precepts work? Just do a bit of reading about the OpenAI – Sam AI-Man hootenanny.
Stephen E Arnold, November 24, 2023
Speeding Up and Simplifying Deep Fake Production
November 24, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Remember the good old days when creating a deep fake required having multiple photographs, maybe a video clip, and minutes of audio? Forget those requirements. To whip up a deep fake, one needs only a short audio clip and a single picture of the person.
The pace of innovation in deep fake production is speeding along. Bad actors will find it easier than ever to produce interesting videos for vulnerable grandparents worldwide. Thanks, MidJourney. It was a struggle, but you produced a race scene that is good enough, the modern benchmark for excellence.
Researchers at Nanyang Technological University have blasted through the old-school requirements. The team’s software can generate realistic videos. These can show facial expressions and head movements. The system is called DIRFA, a tasty acronym for Diverse yet Realistic Facial Animations. One notable achievement of the researchers is that the video is produced in 3D.
The report “Realistic Talking Faces Created from Only an Audio Clip and a Person’s Photo” includes more details about the system and links to demonstration videos. If the story is not available, you may be able to see the video on YouTube at this link.
Stephen E Arnold, November 24, 2023
Facial Recognition: A Bit of Bias Perhaps?
November 24, 2023
This essay is the work of a dumb dinobaby. No smart software required.
It’s a running gag in the tech industry that AI algorithms and related advancements are “racist.” Motion sensors can’t recognize darkly pigmented skin. Photo recognition software misidentifies Black people and members of other ethnic groups as primates. AI-trained algorithms are also biased against ethnic minorities and women in the financial, business, and other industries. AI is “racist” because it’s trained on data sets heavy in white and male information.
Ars Technica shares another story about biased AI: “People Think White AI-Generated Faces Are More Real Than Actual Photos, Study Says.” The journal Psychological Science published a peer-reviewed study, “AI Hyperrealism: Why AI Faces Are Perceived As More Real Than Human Ones.” The study discovered that faces created with three-year-old AI technology were judged more real than actual ones. Predominantly, AI-generated faces of white people were perceived as the most realistic.
The study surveyed 124 white adults who were shown a mixture of 100 AI-generated images and 100 real ones. They identified 66 percent of the AI images as human, while only 51 percent of the real faces were judged real. Real and AI-generated images of people with high amounts of melanin were each judged real about 51 percent of the time. The study also discovered that participants who made the most mistakes were also the most confident, a clear indicator of the Dunning-Kruger effect.
The researchers conducted a second study with 610 participants and learned:
“The analysis of participants’ responses suggested that factors like greater proportionality, familiarity, and less memorability led to the mistaken belief that AI faces were human. Basically, the researchers suggest that the attractiveness and ‘averageness’ of AI-generated faces made them seem more real to the study participants, while the large variety of proportions in actual faces seemed unreal.
Interestingly, while humans struggled to differentiate between real and AI-generated faces, the researchers developed a machine-learning system capable of detecting the correct answer 94 percent of the time.”
The study could be swung in the typical “racist” direction: AI will perpetuate social biases. The answer is simple and worth the investment: create better data sets to train AI algorithms.
Whitney Grace, November 24, 2023
Another Xoogler and More Process Insights
November 23, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Google employs many people. Over the last 25 years, quite a few Xooglers (former Google employees) are out and about. I find the essays by the verbal Xooglers interesting. “Reflecting on 18 Years at Google” contains several intriguing comments. Let me highlight a handful of these. You will want to read the entire Hixie article to get the context for the snips I have selected.
The first point I underlined with blushing pink marker was:
I found it quite frustrating how teams would be legitimately actively pursuing ideas that would be good for the world, without prioritizing short-term Google interests, only to be met with cynicism in the court of public opinion.
Old timers share stories about the golden past in the high-technology of online advertising. Thanks, Copilot, don’t overdo the schmaltz.
The “Google as a victim” is a notion not often discussed — except by some Xooglers. I recall a comment made to me by a seasoned manager at another firm, “Yes, I am paranoid. They are out to get me.” That comment may apply to some professionals at Google.
How about this passage?
My mandate was to do the best thing for the web, as whatever was good for the web would be good for Google (I was explicitly told to ignore Google’s interests).
The oft-repeated idea is that Google cares about its users and similar truisms are part of what I call the Google mythology. Intentionally, in my opinion, Google cultivates the “doing good” theme as part of its effort to distract observers from the actual engineering intent of the company. (You love those Google ads, don’t you?)
Google’s creative process is captured in this statement:
We essentially operated like a startup, discovering what we were building more than designing it.
I am not sure if this is part of Google’s effort to capture the “spirit” of the old-timey days of Bell Laboratories or an accurate representation of how directionless Google’s methods became over the years. What people “did” is clearly dissociated from the advertising mechanisms, the oversized tires and chrome doodads bolted onto the ageing vehicle.
And, finally, this statement:
It would require some shake-up at the top of the company, moving the center of power from the CFO’s office back to someone with a clear long-term vision for how to use Google’s extensive resources to deliver value to users.
What happened to the ideas of doing good and exploratory innovation?
Net net: Xooglers pine for the days of the digital gold rush. Googlers may not be aware of what the company is and does. That may be a good thing.
Stephen E Arnold, November 23, 2023
Who Benefits from Advertising Tracking Technology? Teens, Bad Actors, You?
November 23, 2023
This essay is the work of a dumb humanoid. No smart software required.
Don’t get me wrong. I absolutely love advertising. When I click to Sling’s or Tubi’s free TV, a YouTube video about an innovation in physics, or visit the UK’s Daily Mail — I see just a little bit of content. The rest, it seems to this dinobaby, to be advertising. For some reason, YouTube this morning (November 17, 2023) is showing me ads for a video game or undergarments for a female-oriented person before I can watch an update on the solemnity of Judge Engoran’s courtroom.
However, there are some people who are not “into” advertising. I want to point out that these individuals are in the minority; otherwise, people flooded with advertising would disconnect or navigate to a somewhat less mercantile souk. Yes, a few exist; for example, government Web sites. (I acknowledge that some governments’ Web sites are advertising, but there’s not much I can do about that fusion of pitches and objective information about the location of a nation’s embassy.)
But to the matter at hand. I read a PDF titled “Europe’s Hidden Security Crisis.” The document is a position paper, a white paper, or a special report. The terminology varies depending on the entities involved in the assembly of the information. The group apparently nudging the intrepid authors to reveal the “hidden security crisis” could be the Irish Council for Civil Liberties. I know zero about the group, and I know even less about the authors, Dr. Johnny Ryan and Wolfie Christl. Dr. Ryan has written for the newspaper which looks like a magazine, and Mr. Christl is a principal of Cracked Labs.
So what’s the “hidden security crisis”? There is a special operation underway in Ukraine. The open source information crowd is documenting assorted facts and developments on X.com. We have the public Telegram channels outputting a wealth of information about the special operation and the other unhappy circumstances in Europe. We have the Europol reports about cyber crime, takedowns, and multi-nation operations. I receive in my newsfeed pointers to “real” news about a wide range of illegal activities. In short, what’s hidden?
An evil Web bug is capturing information about a computer user. She is not afraid. She is unaware… apparently. Thanks Microsoft Bing. Ooops. Strike that. Thanks, Copilot. Good Grinch. Did you accidentally replicate a beloved character or just think it up?
The report focuses on what I have identified as my first love — commercial messaging aka advertising.
The “hidden”, I think, refers to data collected when people navigate to a Web site and click, drag a cursor, or hover on a particular region. Those data along with date, time, browser used, and similar information are knitted together into a tidy bundle. These data can be used to have other commercial messages follow a person to another Web site, trigger an email urging the surfer to buy more or just buy something, or populate one of the cross tabulation companies’ databases.
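The kind of record the paragraph above describes can be sketched concretely. This is a hedged illustration of my own: the field names are invented, and real RTB bid requests follow the OpenRTB specification and carry far more detail. The point is simply how little it takes to knit browsing events into a profile.

```python
# Illustrative (invented) tracking record of the sort described above:
# a browsing event bundled with date, time, and browser identity.
from datetime import datetime, timezone

def make_tracking_record(user_id, page_url, event, browser):
    """Bundle one browsing event into a profile record (hypothetical schema)."""
    return {
        "user": user_id,        # often a cookie or device identifier
        "page": page_url,
        "event": event,         # "click", "hover", "scroll", ...
        "browser": browser,
        # Fixed timestamp for the example; a real tracker records "now".
        "timestamp": datetime(2023, 11, 17, 9, 30,
                              tzinfo=timezone.utc).isoformat(),
    }

record = make_tracking_record("cookie-abc123", "https://example.com/news",
                              "hover", "Firefox 119")
print(record["event"])  # hover
```

A stream of such records, joined on the `user` identifier across sites, is exactly the tidy bundle that lets a commercial message follow a person from one Web site to another.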
The write up uses the lingo RTB or real time bidding to describe the data collection. The report states:
Our investigation highlights a widespread trade in data about sensitive European personnel and leaders that exposes them to blackmail, hacking and compromise, and undermines the security of their organizations and institutions. These data flow from Real-Time Bidding (RTB), an advertising technology that is active on almost all websites and apps. RTB involves the broadcasting of sensitive data about people using those websites and apps to large numbers of other entities, without security measures to protect the data. This occurs billions of times a day. Our examination of tens of thousands of pages of RTB data reveals that EU military personnel and political decision makers are targeted using RTB.
In the US, the sale of data gathered via advertising cookies, beacons, and related technologies is a business with nearly 1,000 vendors offering data. I am not sure about the “hidden” idea, however. If the term applies to an average Web user, most of those folks do not know about changing defaults. That is not a hidden function; that is an indication of how little the user knows about specific software.
If you are interested in the report, navigate to this link. You may find the “security crisis” interesting. If not, keep in mind that one can eliminate such tracking with fairly straightforward preventative measures. For me, I love advertising. I know the beacons and bugs want to do the right thing: Capture and profile me to the nth degree. Advertising! It is wonderful and its data exhaust informative and useful.
Stephen E Arnold, November 23, 2023
A Rare Moment of Constructive Cooperation from Tech Barons
November 23, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Platform-hopping is one way bad actors have been able to cover their tracks. Now several companies are teaming up to limit that avenue for one particularly odious group. TechNewsWorld reports, “Tech Coalition Launches Initiative to Crackdown on Nomadic Child Predators.” The initiative is named Lantern, and the Tech Coalition includes Discord, Google, Mega, Meta, Quora, Roblox, Snap, and Twitch. Such cooperation is essential to combat a common tactic for grooming and/or sextortion: predators engage victims on one platform then move the discussion to a more private forum. Reporter John P. Mello Jr. describes how Lantern works:
“Participating companies upload ‘signals’ to Lantern about activity that violates their policies against child sexual exploitation identified on their platform.
Signals can be information tied to policy-violating accounts like email addresses, usernames, CSAM hashes, or keywords used to groom as well as buy and sell CSAM. Signals are not definitive proof of abuse. They offer clues for further investigation and can be the crucial piece of the puzzle that enables a company to uncover a real-time threat to a child’s safety.
Once signals are uploaded to Lantern, participating companies can select them, run them against their platform, review any activity and content the signal surfaces against their respective platform policies and terms of service, and take action in line with their enforcement processes, such as removing an account and reporting criminal activity to the National Center for Missing and Exploited Children and appropriate law enforcement agency.”
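The matching step in the quoted passage boils down to comparing hashed identifiers against a shared set. The following is a minimal sketch of my own construction, not Lantern’s actual implementation; the function names and the use of SHA-256 are assumptions for illustration.

```python
# Sketch (hypothetical) of matching shared signals against a platform's
# own accounts. Signals are hashed identifiers; a match is a lead for
# human review, not proof of abuse.
import hashlib

def sha256_hex(value: str) -> str:
    """Hash an identifier so raw data never has to be shared."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Signals uploaded by other participating companies (already hashed).
shared_signals = {sha256_hex("offender@example.com")}

def flag_accounts(account_emails, signals):
    """Return accounts whose hashed email matches a shared signal."""
    return [email for email in account_emails
            if sha256_hex(email) in signals]

flagged = flag_accounts(["user@example.com", "offender@example.com"],
                        shared_signals)
print(flagged)  # ['offender@example.com']
```

Note the design choice the article emphasizes: the hit only "surfaces" an account for review against the platform’s own policies; enforcement and any report to law enforcement happen downstream.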
The visually oriented can find an infographic of this process in the write-up. We learn Lantern has been in development for two years. Why did it take so long to launch? Part of it was designing the program to be effective. Another part was to ensure it was managed responsibly: The project was subjected to a Human Rights Impact Assessment by the Business for Social Responsibility. Experts on child safety, digital rights, advocacy of marginalized communities, government, and law enforcement were also consulted. Finally, we’re told, measures were taken to ensure transparency and victims’ privacy.
In the past, companies hesitated to share such information lest they be considered culpable. However, some hope this initiative represents a perspective shift that will extend to other bad actors, like those who spread terrorist content. Perhaps. We shall see how much tech companies are willing to cooperate. They wouldn’t want to reveal too much to the competition just to help society, after all.
Cynthia Murrell, November 23, 2023
A Former Yahooligan and Xoogler Offers Management Advice: Believe It or Not!
November 22, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read a remarkable interview / essay / news story called “Former Yahoo CEO Marissa Mayer Delivers Sharp-Elbowed Rebuke of OpenAI’s Broken Board.” Marissa Mayer was a Googler. She then became the Top Dog at Yahoo. Highlights of her tenure at Yahoo include, according to Inc.com:
- Fostering a “superstar status” for herself
- Pointing a finger in a chastising way at remote workers
- Trying to obfuscate Yahooligan layoffs
- Making slow job cuts
- Lack of strategic focus (maybe Tumblr, Yahoo’s mobile strategy, the search service, perhaps?)
- Tactical missteps in diversifying Yahoo’s business (the Google disease in my opinion)
- Setting timetables and then ignoring, missing, or changing them
- Weird PR messages
- Using fear (and maybe uncertainty and doubt) as management methods.
The senior executives of a high technology company listen to a self-anointed management guru. One of the bosses allegedly said, “I thought Bain and McKinsey peddled a truckload of baloney. We have the entire factory in front of us.” Thanks, MSFT Copilot. Is Sam the AI-Man on duty?
So what’s this exemplary manager have to say? Let’s go to the original story:
“OpenAI investors (like @Microsoft) need to step up and demand that the governance weaknesses at @OpenAI be fixed,” Mayer wrote Sunday on X, formerly known as Twitter.
Was Microsoft asleep at the switch or simply operating within a Cloud of Unknowing? Fast-talking Satya Nadella was busy trying to make me think he was operating in a normal manner. Had he known something was afoot, would he have been equipped to deal with burning effigies as a business practice?
Ms. Mayer pointed out:
“The fact that Ilya now regrets just shows how broken and under advised they are/were,” Mayer wrote on social media. “They call them board deliberations because you are supposed to be deliberate.”
Brilliant! Was that deliberative process used to justify the purchase of Tumblr?
The Business Insider write up revealed an interesting nugget:
The Information reported that the former Yahoo CEO’s name had been tossed around by “people close to OpenAI” as a potential addition to the board…
Okay, a Xoogler and a Yahooligan in one package.
Stephen E Arnold, November 22, 2023
Poli Sci and AI: Smart Software Boosts Bad Actors (No Kidding?)
November 22, 2023
This essay is the work of a dumb humanoid. No smart software required.
Smart software (AI, machine learning, et al) has sparked awareness in some political scientists. Until I read “Can Chatbots Help You Build a Bioweapon?” — I thought political scientists were still pondering Frederick William, Elector of Brandenburg’s social policies or Cambodian law in the 11th century. I was incorrect. Modern poli sci influenced wonks are starting to wrestle with the immense potential of smart software for bad actors. I think this dispersal of the cloud of unknowing I perceived among similar academic groups when I entered a third-rate university in 1962 is a step forward. Ah, progress!
“Did you hear that the Senate Committee used my testimony about artificial intelligence in their draft regulations for chatbot rules and regulations?” says the recently admitted elected official. The inmates at the prison facility laugh at the incongruity of the situation. Thanks, Microsoft Bing, you do understand the ways of white collar influence peddling, don’t you?
The write up points out:
As policymakers consider the United States’ broader biosecurity and biotechnology goals, it will be important to understand that scientific knowledge is already readily accessible with or without a chatbot.
The statement is indeed accurate. Outside the esteemed halls of foreign policy power, STM (scientific, technical, and medical) information is abundant. Some of the data are online and reasonably easy to find with such advanced tools as Yandex.com (a Russian centric Web search system) or the more useful Chemical Abstracts data.
The write up’s revelations continue:
Consider the fact that high school biology students, congressional staffers, and middle-school summer campers already have hands-on experience genetically engineering bacteria. A budding scientist can use the internet to find all-encompassing resources.
Yes, more intellectual sunlight in the poli sci journal of record!
Let me offer one more example of ground breaking insight:
In other words, a chatbot that lowers the information barrier should be seen as more like helping a user step over a curb than helping one scale an otherwise unsurmountable wall. Even so, it’s reasonable to worry that this extra help might make the difference for some malicious actors. What’s more, the simple perception that a chatbot can act as a biological assistant may be enough to attract and engage new actors, regardless of how widespread the information was to begin with.
Is there a step government deciders should take? Of course. It is the step that US high technology companies have been begging bureaucrats to take. Government should spell out rules for a morphing, little understood, and essentially uncontrollable suite of systems and methods.
There is nothing like regulating the present and future. Poli sci professionals believe it is possible to repaint the weird red tail on the Boeing F 7A aircraft while the jet is flying around. Trivial?
Here’s the recommendation which I found interesting:
Overemphasizing information security at the expense of innovation and economic advancement could have the unforeseen harmful side effect of derailing those efforts and their widespread benefits. Future biosecurity policy should balance the need for broad dissemination of science with guardrails against misuse, recognizing that people can gain scientific knowledge from high school classes and YouTube—not just from ChatGPT.
My take on this modest proposal is:
- Guard rails allow companies to pursue legal remedies as those companies do exactly what they want and when they want. Isn’t that why the Google “public” trial underway is essentially “secret”?
- Bad actors love open source tools. Unencumbered by bureaucracies, these folks can move quickly. In effect, the mice are equipped with jet packs.
- Job matching services allow a bad actor in Greece or Hong Kong to identify and hire contract workers who may have highly specialized AI skills obtained doing their day jobs. The idea is that for a bargain price expertise is available to help smart software produce some AI infused surprises.
- Recycling the party line of a handful of high profile AI companies is what makes policy.
With poli sci professionals becoming aware of smart software, a better world will result. Why fret about livestock ownership in the glory days of what is now Cambodia? The AI stuff is here and now, waiting for the policy guidance which is sure to come even though the draft guidelines have been crafted by US AI companies?
Stephen E Arnold, November 22, 2023

