Sci Fi or Sci Fake: A Post about a Chinese Force Field

January 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Imagine a force field which can deflect a drone or other object. Commercial applications could range from passenger vehicles to directing flows of material in a manufacturing process. Is a force field a confection of science fiction writers or a technical avenue nearing market entry?

A Tai Chi master uses his powers to take down a drone. Thanks, MSFT Copilot Bing thing. Good enough.

“Chinese Scientists Create Plasma Shield to Guard Drones, Missiles from Attack” presents information which may be a combination of “We’re innovating and you are not” and “science fiction.” The write up reports:

The team led by Chen Zongsheng, an associate researcher at the State Key Laboratory of Pulsed Power Laser Technology at the National University of Defence Technology, said their “low-temperature plasma shield” could protect sensitive circuits from electromagnetic weapon bombardments with up to 170kW at a distance of only 3 metres (9.8 feet). Laboratory tests have shown the feasibility of this unusual technology. “We’re in the process of developing miniaturized devices to bring this technology to life,” Chen and his collaborators wrote in a peer-reviewed paper published in the Journal of National University of Defence Technology last month.

But the write up makes clear that other countries like the US are working to make force fields more effective. China has a colorful way to explain its innovation; to wit:

The plasma-based energy shield is a radical new approach reminiscent of tai chi principles – rather than directly countering destructive electromagnetic assaults it endeavors to convert the attacker’s energy into a defensive force.

Tai chi, as I understand the discipline, combines mental focus and specific movements to develop inner peace, promote physical well-being, and control internal force for a range of purposes.

How does the method function? The article explains:

… When attacking electromagnetic waves come into contact with these charged particles, the particles can immediately absorb the energy of the electromagnetic waves and then jump into a very active state. If the enemy continues to attack or even increases the power at this time, the plasma will suddenly increase its density in space, reflecting most of the incidental energy like a mirror, while the waves that enter the plasma are also overwhelmed by avalanche-like charged particles.
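The “mirror” behavior described in the quote lines up with a standard plasma-physics relation (my addition, not something stated in the cited paper): an electromagnetic wave is reflected when its frequency falls below the plasma frequency, and that frequency rises as the attack itself ionizes more of the gas.

\[
\omega_p = \sqrt{\frac{n_e e^2}{\varepsilon_0 m_e}}, \qquad \text{reflection when } \omega < \omega_p
\]

Here \(n_e\) is the charged-particle (electron) density, \(e\) the electron charge, \(m_e\) the electron mass, and \(\varepsilon_0\) the vacuum permittivity. A stronger attack boosts \(n_e\), which pushes \(\omega_p\) above the frequency of the incoming wave, so the plasma turns reflective. That is the avalanche-then-mirror effect the researchers describe.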

One question: Are technologists mining motion pictures, television shows, and science fiction for ideas?

Beam me up, Scotty.

Stephen E Arnold, January 10, 2024

Bugged? Hey, No One Can Get Our Data

December 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “The Obscure Google Deal That Defines America’s Broken Privacy Protections.” In the cartoon below, two young people are confident that their lunch will be undisturbed. No “bugs” will chow down on their hummus, sprout sandwiches, or their information. What happens, however, is that the young picnic fans cannot perceive what is out of sight. Are these “bugs” listening? Yep. They are. 24×7.

What the young fail to perceive is that “bugs” are everywhere. These digital creatures are listening, watching, harvesting, and consuming every scrap of information. The image of the picnic evokes an experience unfolding in real time. Thanks, MSFT Copilot. My notion of “bugs” is obviously different from yours. Good enough and I am tired of finding words you can convert to useful images.

The essay explains:

While Meta, Google, and a handful of other companies subject to consent decrees are bound by at least some rules, the majority of tech companies remain unfettered by any substantial federal rules to protect the data of all their users, including some serving more than a billion people globally, such as TikTok and Apple.

The situation is simple: Major centers of techno gravity remain unregulated. Lawmakers, regulators, and “users” either did not understand or just believed what lobbyists told them. The senior executives of certain big firms smiled, said “Senator, thank you for that question,” and continued to build out their “bug” network. Do governments want to lose their pride of place with these firms? Nope. Why? Just reference bad actors who commit heinous acts and invoke “protect our children.” When these refrains from the techno feudal playbook sound, calls to take meaningful action become little more than a faint background hum.

But the article continues:

…there is diminishing transparency about how Google’s consent decree operates.

I think I understand. Google-type companies pretend to protect “privacy.” Who really knows? Just ask a Google professional. The answer in my experience is, “Hey, dude, I have zero idea.”

How does Wired, the voice of the techno age, conclude its write up? Here you go:

The FTC agrees that a federal privacy law is long overdue, even as it tries to make consent decrees more powerful. Samuel Levine, director of the FTC’s Bureau of Consumer Protection, says that successive privacy settlements over the years have become more limiting and more specific to account for the growing, near-constant surveillance of Americans by the technology around them. And the FTC is making every effort to enforce the settlements to the letter…

I love the “every effort.” The reality is that the handling of online data collection presages the trajectory for smart software. We live with bugs. Now those bugs can “think”, adapt, and guide. And what’s the direction in which we are now being herded? Grim, isn’t it?

Stephen E Arnold, December 22, 2023

Google and Its Age Verification System: Will There Be a FAES Off?

December 18, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Just in time for the holidays! Google’s user age verification system is ready for 2024. “Google Develops Selfie Scanning Software Ahead of Porn Crackdown” reports:

Google has developed face-scanning technology that would block children from accessing adult websites ahead of a crackdown on online porn. An artificial intelligence system developed by the web giant for estimating a person’s age based on their face has quietly been approved in the UK.

Thanks, MSFT Copilot. A good enough eyeball with a mobile phone, a pencil, a valise, stealthy sneakers, and data.

Facial recognition, although widely used in some countries, continues to make some people nervous. But in the UK, the Google method will allow the UK government to obtain data to verify one’s age. The objective is to stop those who are younger than 18 from viewing “adult Web sites.”

The story reveals:

[Google] says the technology is 99.9pc reliable in identifying that a photo of an 18-year-old is under the age of 25. If users are believed to be under the age of 25, they could be asked to provide additional ID.

The phrase used to describe the approach is “face age estimation system.”
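As a rough illustration of how a “face age estimation system” gate might work, here is a minimal sketch. The function names, the estimator, and the exact flow are my assumptions, not Google’s actual system; only the 18-year objective and the under-25 challenge buffer come from the article.

```python
# Minimal sketch of an age-estimation gate, assuming an external model that
# returns an estimated age for a face image. Names and flow are hypothetical;
# only the 18/25 figures come from the cited article.

CHALLENGE_BUFFER = 25   # estimates under 25 trigger a request for real ID
AGE_LIMIT = 18          # legal threshold the UK rules aim to enforce (see comment below)

def gate_access(face_image, estimate_age) -> str:
    """Return 'allow' or 'request_additional_id' for an adult-only site."""
    estimated_age = estimate_age(face_image)   # e.g., a vision model's output
    if estimated_age >= CHALLENGE_BUFFER:
        # The estimator is confident the visitor is well past AGE_LIMIT.
        return "allow"
    # Under the buffer: the article says such users could be asked to provide ID.
    return "request_additional_id"
```

In use, something like gate_access(photo, my_model.predict) would wave through a visitor the model pegs at 30 and route an ambiguous 19-looking visitor to an ID check, which is the trade-off the quoted “99.9pc” figure is meant to support.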

The cited newspaper article points out:

It is unclear what Google plans to use the system for. It could use it within its own services, such as YouTube and the Google Play app download store, or build it into its Chrome web browser to allow websites to verify that visitors are over 18.

Google is not the only outfit using facial recognition to allegedly reduce harm to individuals. Facebook and OnlyFans, according to the write up, are already deploying similar technology.

The news story says:

It is unclear what privacy protections Google would apply to the system.

I wonder what interesting insights would be available if data from the FAES were cross-correlated with other information. That might have value to advertisers and possibly other commercial or governmental entities.

Stephen E Arnold, December 18, 2023

An Effort to Put Spilled Milk Back in the Bottle

December 15, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Microsoft was busy when the Activision Blizzard saga began. I dimly recall thinking, “Hey, one way to distract people from the SolarWinds misstep would be to become an alleged game monopoly.” I thought that Microsoft would drop the idea, but, no. I was wrong. Microsoft really wanted to be an alleged game monopoly. Apparently the successes (past and present) of Nintendo and Sony, the failure of Google’s Grand Slam attempt, and the annoyance of refurbished arcade game machines were real. Microsoft has focus. And guess what government agency does not? Maybe the Federal Trade Commission?

Two bureaucrats-to-be engage in a mature discussion about the rules for the old-fashioned game of Monopoly. One will become a government executive; the other will become a senior legal professional at a giant high-technology outfit. Thanks, MSFT Copilot. You capture the spirit of rational discourse in a good enough way.

The MSFT game play may not be over. “The FTC Is Trying to Get Back in the Ring with Microsoft Over Activision Deal” asserts:

Nearly five months later, the FTC has appealed the court’s decision, arguing that the lower court essentially just believed whatever Microsoft said at face value…. We said at the time that Microsoft was clearly taking the complaints from various regulatory bodies as some sort of paint by numbers prescription as to what deals to make to get around them. And I very much can see the FTC’s point on this. It brought a complaint under one set of facts only to have Microsoft alter those facts, leading to the courts slamming the deal through before the FTC had a chance to amend its arguments. But ultimately it won’t matter. This last gasp attempt will almost certainly fail. American regulatory bodies have dull teeth to begin with and I’ve seen nothing that would lead me to believe that the courts are going to allow the agency to unwind a closed deal after everything it took to get here.

From my small office in rural Kentucky, the government’s desire or attempt to get “back in the ring” is interesting. It illustrates how many organizations approach difficult issues. 

The advantage goes to the outfit with [a] the most money, [b] the mental wherewithal to maintain some semblance of focus, and [c] a mechanism to keep moving forward. The big four wheel drive will make it through the snow better than a person trying to ride a bicycle in a blizzard.

The key sentence in the cited article, in my opinion, is:

“I fail to understand how giving somebody a monopoly of something would be pro-competitive,” said Imad Dean Abyad, an FTC attorney, in the argument Wednesday before the appeals court. “It may be a benefit to some class of consumers, but that is very different than saying it is pro-competitive.”

No problem with that logic.

And who is in charge of today’s Monopoly games?

Stephen E Arnold, December 15, 2023

FTC Enacts Investigative Process On AI Products and Services

December 15, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Creative types and educational professionals are worried about the influence of AI-generated work. However, legal, finance, business operations, and other industries are worried about how AI will impact them. Aware of the upward trend in goods and services that are surreptitiously moving into the market, the Federal Trade Commission (FTC) took action. The FTC released a briefing on the new consumer AI protection: “FTC Authorities Compulsory Process For AI-Related Products And Services.”

The executive recruiter for a government contractor says, “You can earn great money with a side gig helping your government validate AI algorithms. Does that sound good?” Will American schools produce enough AI savvy people to validate opaque and black box algorithms? Thanks, MSFT Copilot. You hallucinated on this one, but your image was good enough.

The FTC passed an omnibus resolution that authorizes a compulsory process in nonpublic investigations about products and services that use or claim to be made with AI or claim to detect it. The new omnibus resolution will increase the FTC’s efficiency with civil investigative demands (CIDs), a compulsory process similar to a subpoena. CIDs are issued to collect information, similar to legal discovery, for consumer protection and competition investigations. The new resolution will be in effect for ten years, and the FTC voted 3-0 to approve it.

The FTC defines AI as:

“AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Generative AI can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans. Many companies now offer products and services using AI and generative AI, while others offer products and services that claim to detect content made by generative AI.”

AI can also be used for deception, privacy infringements, fraud, and other illegal activities. AI can cause competition problems as well, such as when a few companies monopolize algorithms or other AI-related technologies.

The FTC is taking preliminary steps to protect consumers from bad actors and their nefarious AI-generated deeds. However, what constitutes a violation in relation to AI? Will the data training libraries be examined along with the developers? Where will the expert analysts come from? An online university training program?

Whitney Grace, December 15, 2023

The Cloud Kids Are Not Happy: Where Is Mom?

December 13, 2023

This essay is the work of a dumb dinobaby. No smart software required.

An amusing item about the trials and tribulations of cloud techno feudalists seems appropriate today. Navigate to the paywalled story “Microsoft Has Stranglehold on the Cloud, Say Amazon and Google.” With zero irony, the write up reports:

Amazon and Google have complained to the UK’s competition regulator that their rival, Microsoft, uses practices that restrict customer choice in the £7.5 billion cloud computing market.

What’s amusing is that Google allegedly said before it lost its case related to the business practices of its online store:

“These licensing practices are the only insurmountable barrier preventing competition on the merits for new customers migrating to the cloud and for existing workloads. They lead to less choice, less innovation, and increased costs for UK customers of all sizes.”

What was Amazon’s view? According to the article:

“Microsoft changed its licensing terms in 2019 and again in 2022 to make it more difficult for customers to run some of its popular software offerings on Google Cloud, AWS and Alibaba. To use many of Microsoft’s software products with these other cloud services providers, a customer must purchase a separate license even if they already own the software. This often makes it financially unviable for a customer to choose a provider other than Microsoft.”

How similar is this finger pointing and legal activity to a group of rich kids complaining that one child has all the toys? I think the similarities are — well — similar.

The question is, “What entity will become the mom to adjudicate the selfish actions of the cloud kids?”

Stephen E Arnold, December 13, 2023

Allegations That Canadian Officials Are Listening

December 13, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Widespread Use of Phone Surveillance Tools Documented in Canadian Federal Agencies

It appears a baker’s dozen of Canadian agencies are ignoring a longstanding federal directive on privacy protections. Yes, Canada. According to CBC/Radio-Canada, “Tools Capable of Extracting Personal Data from Phones Being Used by 13 Federal Departments, Documents Show.” The trend surprised even York University associate professor Evan Light, who filed the original access-to-information request. Reporter Brigitte Bureau shares:

Many people, it seems, are listening to Grandma’s conversations in a suburb of Calgary. (Nice weather in the winter.) Thanks, MSFT Copilot. I enjoyed the flurry of messages that you were busy creating my other image requests. Just one problemo. I had only one image request.

“Tools capable of extracting personal data from phones or computers are being used by 13 federal departments and agencies, according to contracts obtained under access to information legislation and shared with Radio-Canada. Radio-Canada has also learned those departments’ use of the tools did not undergo a privacy impact assessment as required by federal government directive. The tools in question can be used to recover and analyze data found on computers, tablets and mobile phones, including information that has been encrypted and password-protected. This can include text messages, contacts, photos and travel history. Certain software can also be used to access a user’s cloud-based data, reveal their internet search history, deleted content and social media activity. Radio-Canada has learned other departments have obtained some of these tools in the past, but say they no longer use them. … ‘I thought I would just find the usual suspects using these devices, like police, whether it’s the RCMP or [Canada Border Services Agency]. But it’s being used by a bunch of bizarre departments,’ [Light] said.

To make matters worse, none of the agencies had conducted the required Privacy Impact Assessments. A federal directive issued in 2002 and updated in 2010 required such PIAs to be filed with the Treasury Board of Canada Secretariat and the Office of the Privacy Commissioner before undertaking any new activities involving the collection or handling of personal data. Light is concerned that agencies flat-out ignoring the directive means digital surveillance of citizens has become normalized. Join the club, Canada.

Cynthia Murrell, December 13, 2023

Cyber Security Responsibility: Where It Belongs at Last!

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I want to keep this item brief. Navigate to “CISA’s Goldstein Wants to Ditch ‘Patch Faster, Fix Faster’ Model.”

CISA means the US government’s Cybersecurity and Infrastructure Security Agency. The “Goldstein” reference points to Eric Goldstein, the executive assistant director of CISA.

The main point of the write up is that big technology companies have to be responsible for cleaning up their cyber security messes. The write up reports:

Goldstein said that CISA is calling on technology providers to “take accountability” for the security of their customers by doing things like enabling default security controls such as multi-factor authentication, making security logs available, using secure development practices and embracing memory safe languages such as Rust.

I may be incorrect, but I picked up a signal that the priorities of some techno feudalists are not security. Perhaps these firms’ goals are maximizing profit, market share, and power over their paying customers. Security? Maybe it is easier to describe in a slide deck or a short YouTube video?

The use of a parental mode seems appropriate for a child. Will it work for techno feudalists who have created a digital mess in kitchens throughout the world? Thanks, MSFT Copilot. You must have ingested some “angry mommy” data when you were but a wee sprout.

Will this approach improve the security of mission-critical systems? Will the enjoinder make a consumer’s mobile phone more secure?

My answer? Without meaningful consequences, security is easier to talk about than deliver. Therefore, minimal change in the near future. I wish I were wrong.

Stephen E Arnold, December 5, 2023

India Might Not Buy the User-Is-Responsible Argument

November 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

India’s elected officials seem to be agitated about deep fakes. No, it is not the disclosure that a company in Spain is collecting $10,000 a month or more from a fake influencer named Aitana López. (Some in India may be following the deeply faked bimbo, but I would assert that not too many elected officials will admit to their interest in the digital dream boat.)

US News & World Report recycled a Reuters (the trust outfit) story “India Warns Facebook, YouTube to Enforce Rules to Deter Deepfakes — Sources” and asserted:

India’s government on Friday warned social media firms including Facebook and YouTube to repeatedly remind users that local laws prohibit them from posting deepfakes and content that spreads obscenity or misinformation

“I know you and the rest of the science club are causing problems with our school announcement system. You have to stop it, or I will not recommend you or any science club member for the National Honor Society.” The young wizard says, “I am very, very sorry. Neither I nor my friends will play rock and roll music during the morning announcements. I promise.” Thanks, MidJourney. Not great but at least you produced an image which is more than I can say for the MSFT Copilot Bing thing.

What’s notable is that the government of India is not focusing on the user of deep fake technology. India has US companies in its headlights. The news story continues:

India’s IT ministry said in a press statement all platforms had agreed to align their content guidelines with government rules.

Amazing. The US techno-feudalists are rolling over. I am someone who wonders, “Will these US companies bend a knee to India’s government?” I have zero inside information about either India or the US techno-feudalists, but I have a recollection that US companies:

  1. Do what they want to do and then go to court. If they win, they don’t change. If they lose, they pay the fine and they do some fancy dancing.
  2. Go to a meeting and output vague assurances prefaced by “Thank you for that question.” The companies may do a quick paso doble and continue with business pretty much as usual.
  3. Just comply. As Canada has learned, Facebook’s response to the Canadian news edict was simple: No news for Canada. To make the situation more annoying to a real government, other techno-feudalists hopped on Facebook’s better idea.
  4. Ignore the edict. If summoned to a meeting or hit with a legal notice, companies will respond with flights of legal eagles with some simple messages; for example, no more support for your law enforcement professionals or your intelligence professionals. (This is a hypothetical example only, so don’t develop the shingles, please.)

Net net: Techno-feudalists have to decide: Roll over, ignore, or go to “war.”

Stephen E Arnold, November 29, 2023
