Google and Its Age Verification System: Will There Be a FAES Off?
December 18, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Just in time for the holidays! Google’s user age verification system is ready for 2024. “Google Develops Selfie Scanning Software Ahead of Porn Crackdown” reports:
Google has developed face-scanning technology that would block children from accessing adult websites ahead of a crackdown on online porn. An artificial intelligence system developed by the web giant for estimating a person’s age based on their face has quietly been approved in the UK.
Thanks, MSFT Copilot. A good enough eyeball with a mobile phone, a pencil, a valise, stealthy sneakers, and data.
Facial recognition, although widely used in some countries, continues to make some people nervous. But in the UK, the Google method will allow the UK government to obtain data to verify one’s age. The objective is to stop those who are younger than 18 from viewing “adult Web sites.”
[Google] says the technology is 99.9pc reliable in identifying that a photo of an 18-year-old is under the age of 25. If users are believed to be under the age of 25, they could be asked to provide additional ID.
The phrase used to describe the approach is “face age estimation system.”
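The decision rule described above can be sketched as a simple buffer-threshold check: estimate an age from a face, and if the estimate falls under a 25-year buffer, escalate to an ID check. This is a hypothetical illustration only; the function name, thresholds, and three-way outcome are my assumptions, not details of Google’s actual FAES.

```python
# Hypothetical sketch of the "face age estimation" gating logic described
# in the article. The FAES model itself is a black box here; we only model
# the decision rule applied to its estimate.

ADULT_AGE = 18   # legal threshold for adult content in the UK
BUFFER_AGE = 25  # buffer so nearly all 18-year-olds fall below it

def gate_access(estimated_age: float) -> str:
    """Return an access decision for a visitor's estimated age."""
    if estimated_age >= BUFFER_AGE:
        return "allow"        # confidently an adult
    elif estimated_age >= ADULT_AGE:
        return "request_id"   # probably an adult, but inside the buffer
    else:
        return "block"        # estimated to be a minor
```

The buffer is the interesting design choice: by demanding an estimate of 25 or older before waving a visitor through, the system tolerates several years of estimation error while still catching nearly every 18-year-old, which is what the quoted 99.9 percent figure is really about.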
The cited newspaper article points out:
It is unclear what Google plans to use the system for. It could use it within its own services, such as YouTube and the Google Play app download store, or build it into its Chrome web browser to allow websites to verify that visitors are over 18.
Google is not the only outfit using facial recognition to allegedly reduce harm to individuals. Facebook and OnlyFans, according to the write up, are already deploying similar technology.
The news story says:
It is unclear what privacy protections Google would apply to the system.
I wonder what interesting insights would be available if data from the FAES were cross-correlated with other information. That might have value to advertisers and possibly other commercial or governmental entities.
Stephen E Arnold, December 18, 2023
An Effort to Put Spilled Milk Back in the Bottle
December 15, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Microsoft was busy when the Activision Blizzard saga began. I dimly recall thinking, “Hey, one way to distract people from the SolarWinds’ misstep would be to become an alleged game monopoly.” I thought that Microsoft would drop the idea, but, no. I was wrong. Microsoft really wanted to be an alleged game monopoly. Apparently the successes (past and present) of Nintendo and Sony, the failure of Google’s Grand Slam attempt, and the annoyance of refurbished arcade game machines were real. Microsoft has focus. And guess what government agency does not? Maybe the Federal Trade Commission?
Two bureaucrats-to-be engage in a mature discussion about the rules for the old-fashioned game of Monopoly. One will become a government executive; the other will become a senior legal professional at a giant high-technology outfit. Thanks, MSFT Copilot. You capture the spirit of rational discourse in a good enough way.
The MSFT game play may not be over. “The FTC Is Trying to Get Back in the Ring with Microsoft Over Activision Deal” asserts:
Nearly five months later, the FTC has appealed the court’s decision, arguing that the lower court essentially just believed whatever Microsoft said at face value…. We said at the time that Microsoft was clearly taking the complaints from various regulatory bodies as some sort of paint by numbers prescription as to what deals to make to get around them. And I very much can see the FTC’s point on this. It brought a complaint under one set of facts only to have Microsoft alter those facts, leading to the courts slamming the deal through before the FTC had a chance to amend its arguments. But ultimately it won’t matter. This last gasp attempt will almost certainly fail. American regulatory bodies have dull teeth to begin with and I’ve seen nothing that would lead me to believe that the courts are going to allow the agency to unwind a closed deal after everything it took to get here.
From my small office in rural Kentucky, the government’s desire or attempt to get “back in the ring” is interesting. It illustrates how many organizations approach difficult issues.
The advantage goes to the outfit with [a] the most money, [b] the mental wherewithal to maintain some semblance of focus, and [c] a mechanism to keep moving forward. The big four wheel drive will make it through the snow better than a person trying to ride a bicycle in a blizzard.
The key sentence in the cited article, in my opinion, is:
“I fail to understand how giving somebody a monopoly of something would be pro-competitive,” said Imad Dean Abyad, an FTC attorney, in the argument Wednesday before the appeals court. “It may be a benefit to some class of consumers, but that is very different than saying it is pro-competitive.”
No problem with that logic.
And who is in charge of today’s Monopoly games?
Stephen E Arnold, December 15, 2023
FTC Enacts Investigative Process On AI Products and Services
December 15, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Creative types and educational professionals are worried about the influence of AI-generated work. However, law, finance, business operations, and other industries are also worried about how AI will impact them. Aware of the upward trend in goods and services that are surreptitiously moving into the market, the Federal Trade Commission (FTC) took action. The FTC released a briefing on the new consumer AI protection: “FTC Authorizes Compulsory Process for AI-Related Products and Services.”
The executive recruiter for a government contractor says, “You can earn great money with a side gig helping your government validate AI algorithms. Does that sound good?” Will American schools produce enough AI savvy people to validate opaque and black box algorithms? Thanks, MSFT Copilot. You hallucinated on this one, but your image was good enough.
The FTC passed an omnibus resolution that authorizes a compulsory process in nonpublic investigations of products and services that use AI, claim to be made with AI, or claim to detect it. The new omnibus resolution will increase the FTC’s efficiency with civil investigative demands (CIDs), a compulsory process similar to a subpoena. CIDs are issued to collect information, much like legal discovery, for consumer protection and competition investigations. The resolution will be in effect for ten years, and the FTC voted 3-0 to approve it.
The FTC defines AI as:
“AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Generative AI can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans. Many companies now offer products and services using AI and generative AI, while others offer products and services that claim to detect content made by generative AI.”
AI can also be used for deception, privacy infringements, fraud, and other illegal activities. AI can also cause competition problems, such as when a few companies monopolize algorithms or other AI-related technologies.
The FTC is taking preliminary steps to protect consumers from bad actors and their nefarious AI-generated deeds. However, what constitutes a violation in relation to AI? Will the data training libraries be examined along with the developers? Where will the expert analysts come from? An online university training program?
Whitney Grace, December 15, 2023
The Cloud Kids Are Not Happy: Where Is Mom?
December 13, 2023
This essay is the work of a dumb dinobaby. No smart software required.
An amusing item about the trials and tribulations of cloud techno feudalists seems appropriate today. Navigate to the paywalled story “Microsoft Has Stranglehold on the Cloud, Say Amazon and Google.” With zero irony, the write up reports:
Amazon and Google have complained to the UK’s competition regulator that their rival, Microsoft, uses practices that restrict customer choice in the £7.5 billion cloud computing market.
What’s amusing is that Google allegedly said before it lost its case related to the business practices of its online store:
“These licensing practices are the only insurmountable barrier preventing competition on the merits for new customers migrating to the cloud and for existing workloads. They lead to less choice, less innovation, and increased costs for UK customers of all sizes.”
What was Amazon’s view? According to the article:
“Microsoft changed its licensing terms in 2019 and again in 2022 to make it more difficult for customers to run some of its popular software offerings on Google Cloud, AWS and Alibaba. To use many of Microsoft’s software products with these other cloud services providers, a customer must purchase a separate license even if they already own the software. This often makes it financially unviable for a customer to choose a provider other than Microsoft.”
How similar is this finger pointing and legal activity to a group of rich kids complaining that one child has all the toys? I think the similarities are — well — similar.
The question is, “What entity will become the mom to adjudicate the selfish actions of the cloud kids?”
Stephen E Arnold, December 13, 2023
Allegations That Canadian Officials Are Listening
December 13, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Widespread Use of Phone Surveillance Tools Documented in Canadian Federal Agencies
It appears a baker’s dozen of Canadian agencies are ignoring a longstanding federal directive on privacy protections. Yes, Canada. According to CBC/Radio-Canada, “Tools Capable of Extracting Personal Data from Phones Being Used by 13 Federal Departments, Documents Show.” The trend surprised even York University associate professor Evan Light, who filed the original access-to-information request. Reporter Brigitte Bureau shares:
Many people, it seems, are listening to Grandma’s conversations in a suburb of Calgary. (Nice weather in the winter.) Thanks, MSFT Copilot. I enjoyed the flurry of messages that you were busy creating my other image requests. Just one problemo. I had only one image request.
“Tools capable of extracting personal data from phones or computers are being used by 13 federal departments and agencies, according to contracts obtained under access to information legislation and shared with Radio-Canada. Radio-Canada has also learned those departments’ use of the tools did not undergo a privacy impact assessment as required by federal government directive. The tools in question can be used to recover and analyze data found on computers, tablets and mobile phones, including information that has been encrypted and password-protected. This can include text messages, contacts, photos and travel history. Certain software can also be used to access a user’s cloud-based data, reveal their internet search history, deleted content and social media activity. Radio-Canada has learned other departments have obtained some of these tools in the past, but say they no longer use them. … ‘I thought I would just find the usual suspects using these devices, like police, whether it’s the RCMP or [Canada Border Services Agency]. But it’s being used by a bunch of bizarre departments,’ [Light] said.”
To make matters worse, none of the agencies had conducted the required Privacy Impact Assessments (PIAs). A federal directive issued in 2002 and updated in 2010 requires such PIAs to be filed with the Treasury Board of Canada Secretariat and the Office of the Privacy Commissioner before any new activity involving the collection or handling of personal data. Light is concerned that agencies’ flat-out ignoring of the directive means digital surveillance of citizens has become normalized. Join the club, Canada.
Cynthia Murrell, December 13, 2023
Cyber Security Responsibility: Where It Belongs at Last!
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I want to keep this item brief. Navigate to “CISA’s Goldstein Wants to Ditch ‘Patch Faster, Fix Faster’ Model.”
CISA means the US government’s Cybersecurity and Infrastructure Security Agency. The “Goldstein” reference points to Eric Goldstein, the executive assistant director of CISA.
The main point of the write up is that big technology companies have to be responsible for cleaning up their cyber security messes. The write up reports:
Goldstein said that CISA is calling on technology providers to “take accountability” for the security of their customers by doing things like enabling default security controls such as multi-factor authentication, making security logs available, using secure development practices and embracing memory safe languages such as Rust.
I may be incorrect, but I picked up a signal that the priorities of some techno feudalists are not security. Perhaps these firms’ goals are maximizing profit, market share, and power over their paying customers. Security? Maybe it is easier to describe in a slide deck or a short YouTube video?
The use of a parental mode seems appropriate for a child. Will it work for techno feudalists who have created a digital mess in kitchens throughout the world? Thanks, MSFT Copilot. You must have ingested some “angry mommy” data when you were but a wee sprout.
Will this approach improve the security of mission-critical systems? Will the enjoinder make a consumer’s mobile phone more secure?
My answer? Without meaningful consequences, security is easier to talk about than deliver. Therefore, minimal change in the near future. I wish I were wrong.
Stephen E Arnold, December 5, 2023
India Might Not Buy the User-Is-Responsible Argument
November 29, 2023
This essay is the work of a dumb dinobaby. No smart software required.
India’s elected officials seem to be agitated about deep fakes. No, it is not the disclosure that a company in Spain is collecting $10,000 a month or more from a fake influencer named Aitana López. (Some in India may be following the deeply faked bimbo, but I would assert that not too many elected officials will admit to their interest in the digital dream boat.)
US News & World Report recycled a Reuters (the trust outfit) story “India Warns Facebook, YouTube to Enforce Rules to Deter Deepfakes — Sources” and asserted:
India’s government on Friday warned social media firms including Facebook and YouTube to repeatedly remind users that local laws prohibit them from posting deepfakes and content that spreads obscenity or misinformation
“I know you and the rest of the science club are causing problems with our school announcement system. You have to stop it, or I will not recommend you or any science club member for the National Honor Society.” The young wizard says, “I am very, very sorry. Neither I nor my friends will play rock and roll music during the morning announcements. I promise.” Thanks, MidJourney. Not great but at least you produced an image which is more than I can say for the MSFT Copilot Bing thing.
What’s notable is that the government of India is not focusing on the user of deep fake technology. India has US companies in its headlights. The news story continues:
India’s IT ministry said in a press statement all platforms had agreed to align their content guidelines with government rules.
Amazing. The US techno-feudalists are rolling over. I am someone who wonders, “Will these US companies bend a knee to India’s government?” I have zero inside information about either India or the US techno-feudalists, but I have a recollection that US companies:
- Do what they want to do and then go to court. If they win, they don’t change. If they lose, they pay the fine and they do some fancy dancing.
- Go to a meeting and output vague assurances prefaced by “Thank you for that question.” The companies may do a quick paso doble and continue with business pretty much as usual.
- Just comply. As Canada has learned, Facebook’s response to the Canadian news edict was simple: No news for Canada. To make the situation more annoying to a real government, other techno-feudalists hopped on Facebook’s better idea.
- Ignore the edict. If summoned to a meeting or hit with a legal notice, companies will respond with flights of legal eagles with some simple messages; for example, no more support for your law enforcement professionals or your intelligence professionals. (This is a hypothetical example only, so don’t develop the shingles, please.)
Net net: Techno-feudalists have to decide: Roll over, ignore, or go to “war.”
Stephen E Arnold, November 29, 2023
Governments Tip Toe As OpenAI Sprints: A Story of the Turtles and the Rabbits
November 27, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Reuters has reported that a pride of lion-hearted countries have crafted “joint guidelines” for systems with artificial intelligence. I am not exactly sure what “artificial intelligence” means, but I have confidence that a group of countries, officials, advisors, and consultants do.
The main point of the news story “US, Britain, Other Countries Ink Agreement to Make AI Secure by Design” is that someone in these countries knows what “secure by design” means. You may not have noticed that cyber breaches seem to be chugging right along. Maine managed to lose control of most of its residents’ personally identifiable information. I won’t mention issues associated with Progress Software, Microsoft systems, and LY Corp and its messaging app with a mere 400,000 users.
The turtle started but the rabbit reacted. Now which AI enthusiast will win the race down the corridor between supercomputers powering smart software? Thanks, MSFT Copilot. It took several tries, but you delivered a good enough image.
The Reuters story notes with the sincerity of an outfit focused on trust:
The agreement is the latest in a series of initiatives – few of which carry teeth – by governments around the world to shape the development of AI, whose weight is increasingly being felt in industry and society at large.
Yep, “teeth.”
At the same time, Sam AI-Man was moving forward with such mouth-watering initiatives as the AI app store and discussions to create AI-centric hardware. “I Guess We’ll Just Have to Trust This Guy, Huh?” asserts:
But it is clear who won (Altman) and which ideological vision (regular capitalism, instead of some earthy, restrained ideal of ethical capitalism) will carry the day. If Altman’s camp is right, then the makers of ChatGPT will innovate more and more until they’ve brought to light A.I. innovations we haven’t thought of yet.
As the signatories to the agreement without “teeth” and Sam AI-Man were doing their respective “thing,” I noted the AP story titled “Pentagon’s AI Initiatives Accelerate Hard Decisions on Lethal Autonomous Weapons.” That write up reported:
… the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China.
To deal with the AI challenge, the AP story includes this paragraph:
The Pentagon’s portfolio boasts more than 800 AI-related unclassified projects, much still in testing. Typically, machine-learning and neural networks are helping humans gain insights and create efficiencies.
Will the signatories to the “secure by design” agreement act like tortoises or like zippy hares? I know which beastie I would bet on. Will military entities back the slow or the fast AI faction? I know upon which I would wager fifty cents.
Stephen E Arnold, November 27, 2023
Poli Sci and AI: Smart Software Boosts Bad Actors (No Kidding?)
November 22, 2023
This essay is the work of a dumb humanoid. No smart software required.
Smart software (AI, machine learning, et al) has sparked awareness in some political scientists. Until I read “Can Chatbots Help You Build a Bioweapon?” — I thought political scientists were still pondering Frederick William, Elector of Brandenburg’s social policies or Cambodian law in the 11th century. I was incorrect. Modern poli sci wonks are starting to wrestle with the immense potential of smart software for bad actors. I think this dispersal of the cloud of unknowing I perceived among similar academic groups when I entered a third-rate university in 1962 is a step forward. Ah, progress!
“Did you hear that the Senate Committee used my testimony about artificial intelligence in their draft regulations for chatbot rules and regulations?” says the recently admitted elected official. The inmates at the prison facility laugh at the incongruity of the situation. Thanks, Microsoft Bing, you do understand the ways of white collar influence peddling, don’t you?
The write up points out:
As policymakers consider the United States’ broader biosecurity and biotechnology goals, it will be important to understand that scientific knowledge is already readily accessible with or without a chatbot.
The statement is indeed accurate. Outside the esteemed halls of foreign policy power, STM (scientific, technical, and medical) information is abundant. Some of the data are online and reasonably easy to find with such advanced tools as Yandex.com (a Russia-centric Web search system) or the more useful Chemical Abstracts database.
The write up’s revelations continue:
Consider the fact that high school biology students, congressional staffers, and middle-school summer campers already have hands-on experience genetically engineering bacteria. A budding scientist can use the internet to find all-encompassing resources.
Yes, more intellectual sunlight in the poli sci journal of record!
Let me offer one more example of ground breaking insight:
In other words, a chatbot that lowers the information barrier should be seen as more like helping a user step over a curb than helping one scale an otherwise unsurmountable wall. Even so, it’s reasonable to worry that this extra help might make the difference for some malicious actors. What’s more, the simple perception that a chatbot can act as a biological assistant may be enough to attract and engage new actors, regardless of how widespread the information was to begin with.
Is there a step government deciders should take? Of course. It is the step that US high technology companies have been begging bureaucrats to take. Government should spell out rules for a morphing, little understood, and essentially uncontrollable suite of systems and methods.
There is nothing like regulating the present and future. Poli sci professionals believe it is possible to repaint the weird red tail on the Boeing F 7A aircraft while the jet is flying around. Trivial?
Here’s the recommendation which I found interesting:
Overemphasizing information security at the expense of innovation and economic advancement could have the unforeseen harmful side effect of derailing those efforts and their widespread benefits. Future biosecurity policy should balance the need for broad dissemination of science with guardrails against misuse, recognizing that people can gain scientific knowledge from high school classes and YouTube—not just from ChatGPT.
My take on this modest proposal is:
- Guard rails allow companies to pursue legal remedies as those companies do exactly what they want and when they want. Isn’t that why the Google “public” trial underway is essentially “secret”?
- Bad actors love open source tools. Unencumbered by bureaucracies, these folks can move quickly. In effect, the mice are equipped with jet packs.
- Job matching services allow a bad actor in Greece or Hong Kong to identify and hire contract workers who may have highly specialized AI skills obtained doing their day jobs. The idea is that for a bargain price expertise is available to help smart software produce some AI infused surprises.
- Recycling the party line of a handful of high profile AI companies is what makes policy.
With poli sci professionals becoming aware of smart software, a better world will result. Why fret about livestock ownership in the glory days of what is now Cambodia? The AI stuff is here and now, waiting for the policy guidance which is sure to come even though the draft guidelines have been crafted by US AI companies?
Stephen E Arnold, November 22, 2023
EU Objects to Social Media: Again?
November 21, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Social media is something I observe at a distance. I want to highlight the information in “X Is the Biggest Source of Fake News and Disinformation, EU Warns.” Some Americans are not interested in what the European Union thinks, says, or regulates. On the other hand, the techno feudalistic outfits in the US of A do pay attention when the EU hands out reprimands, fines, and notices of auditions (not for the school play, of course).
This historic photograph shows a super smart, well paid, entitled entrepreneur letting the social media beast out of its box. Now how does this genius put the creature back in the box? Good questions. Thanks, MSFT Copilot. You balked, but finally output a good enough image.
The story in what I still think of as “the capitalist tool” states:
European Commission Vice President Vera Jourova said in prepared remarks that X had the “largest ratio of mis/disinformation posts” among the platforms that submitted reports to the EU. Especially worrisome is how quickly those spreading fake news are able to find an audience.
The Forbes’ article noted:
The social media platforms were seen to have turned a blind eye to the spread of fake news.
I found the inclusion of this statement a grim reminder of what happens when entities refuse to perform content moderation:
“Social networks are now tailor-made for disinformation, but much more should be done to prevent it from spreading widely,” noted Mollica [a teacher at American University]. “As we’ve seen, however, trending topics and algorithms monetize the negativity and anger. Until that practice is curbed, we’ll see disinformation continue to dominate feeds.”
What is Forbes implying? Is an American corporation a “bad” actor? Is the EU barking at a dogwood, not a dog? Is digital information reshaping how established processes work?
From my point of view, putting a decades old Pandora or passel of Pandoras back in a digital box is likely to be impossible. Once social fabrics have been disintegrated by massive flows of unfiltered information, the woulda, coulda, shoulda chatter is ineffectual. X marks the spot.
Stephen E Arnold, November 21, 2023