An International AI Panel: Notice Anything Unusual?

February 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

An expert international advisory panel has been formed. The oomph behind the group is the UK’s prime minister. The Evening Standard newspaper described the panel this way:

The first-of-its-kind scientific report on AI will be used to shape international discussions around the technology.

What most of the reports omit is the list of luminaries named to this entity. You can find the list at this link.
image
A number of individual amateur cooks are working hard to match what the giant commercial food processing facility is creating. Why aren’t these capable chefs working with the big outfits? Can “outsiders” understand the direction of a well-resourced, fast-moving commercial enterprise? Thanks, MSFT Copilot. Good enough.
I want to list the members and then ask, “Do you see anything unusual in the list?” The names are ordered by country and representative:

Australia. Professor Bronwyn Fox, Chief Scientist, The Commonwealth Scientific and Industrial Research Organization (CSIRO)

Brazil. André Carlos Ponce de Leon Ferreira de Carvalho, Professor, Institute of Mathematics and Computer Sciences, University of São Paulo

Canada. Doctor Mona Nemer, Chief Science Advisor of Canada

Canada. Professor Yoshua Bengio, considered one of the “godfathers of AI”.

Chile. Raquel Pezoa Rivera, Academic, Federico Santa María Technical University

China. Doctor Yi Zeng, Professor, Institute of Automation, Chinese Academy of Sciences

EU. Juha Heikkilä, Adviser for Artificial Intelligence, DG Connect

France. Guillaume Avrin, National Coordinator for AI, General Directorate of Enterprises

Germany. Professor Antonio Krüger, CEO, German Research Center for Artificial Intelligence.

India. Professor Balaraman Ravindran, Professor at the Department of Computer Science and Engineering, Indian Institute of Technology, Madras

Indonesia. Professor Hammam Riza, President, KORIKA

Ireland. Doctor Ciarán Seoighe, Deputy Director General, Science Foundation Ireland

Israel. Doctor Ziv Katzir, Head of the National Plan for Artificial Intelligence Infrastructure, Israel Innovation Authority

Italy. Doctor Andrea Monti, Professor of Digital Law, University of Chieti-Pescara.

Japan. Doctor Hiroaki Kitano, CTO, Sony Group Corporation

Kenya. Awaiting nomination

Mexico. Doctor José Ramón López Portillo, Chairman and Co-founder, Q Element

Netherlands. Professor Haroon Sheikh, Senior Research Fellow, Netherlands’ Scientific Council for Government Policy

New Zealand. Doctor Gill Jolly, Chief Science Advisor, Ministry of Business, Innovation and Employment

Nigeria. Doctor Olubunmi Ajala, Technical Adviser to the Honorable Minister of Communications, Innovation and Digital Economy

Philippines. Awaiting nomination

Republic of Korea. Professor Lee Kyoung Mu, Professor, Department of Electrical and Computer Engineering, Seoul National University

Rwanda. Crystal Rugege, Managing Director, National Center for AI and Innovation Policy

Kingdom of Saudi Arabia. Doctor Fahad Albalawi, Senior AI Advisor, Saudi Authority for Data and Artificial Intelligence

Singapore. Denise Wong, Assistant Chief Executive, Data Innovation and Protection Group, Infocomm Media Development Authority (IMDA)

Spain. Nuria Oliver, Vice-President, European Laboratory for Learning and Intelligent Systems (ELLIS)

Switzerland. Doctor Christian Busch, Deputy Head, Innovation, Federal Department of Economic Affairs, Education and Research

Turkey. Ahmet Halit Hatip, Director General of European Union and Foreign Relations, Turkish Ministry of Industry and Technology

UAE. Marwan Alserkal, Senior Research Analyst, Ministry of Cabinet Affairs, Prime Minister’s Office

Ukraine. Oleksii Molchanovskyi, Chair, Expert Committee on the Development of Artificial Intelligence in Ukraine

USA. Saif M. Khan, Senior Advisor to the Secretary for Critical and Emerging Technologies, U.S. Department of Commerce

United Kingdom. Dame Angela McLean, Government Chief Scientific Adviser

United Nations. Amandeep Gill, UN Tech Envoy

Give up? My team identified these interesting aspects:

  1. No Facebook, Google, Microsoft, OpenAI or any other US giant in the AI space
  2. Academics and political “professionals” dominate the list
  3. A speed and scale mismatch between AI diffusion and panel report writing

Net net: More words will be generated for large language models to ingest.

Stephen E Arnold, February 2, 2024

Techno Feudalist Governance: Not a Signal, a Rave Sound Track

January 31, 2024

This essay is the work of a dumb dinobaby. No smart software required.

One of the UK’s watchdog outfits published a 30-page report titled “One Click Away: A Study on the Prevalence of Non-Suicidal Self Injury, Suicide, and Eating Disorder Content Accessible by Search Engines.” I suggest that you download the report if you are interested in the consequences of poor corporate governance. I recommend reading the document while watching your young children or grandchildren playing with their mobile phones or tablet devices.

Let me summarize the document for you because its contents provide some color and context for the upcoming US government hearings with a handful of techno feudalist companies:

Web search engines and social media services are one-click gateways to self-harm and other content some parents and guardians might deem inappropriate.

Does this report convey information relevant to the upcoming testimony of selected large US technology companies in the Senate? I want to say, “Yes.” However, the realistic answer is, “No.”

Techmeme, an online information service, displayed its interest in the testimony with these headlines on January 31, 2024:

image

Screenshots are often difficult to read. The main story is from the weird orange newspaper whose content is presented under this Techmeme headline:

Ahead of the Senate Hearing, Mark Zuckerberg Calls for Requiring Apple and Google to Verify Ages via App Stores…

Ah, ha, is this a red herring intended to point the finger at outfits not on the hot seat in the true blue Senate hearing room?

The New York Times reports on a popular DC activity: A document reveal:

Ahead of the Senate Hearing, US Lawmakers Release 90 Pages of Internal Meta Emails…

And to remind everyone that an allegedly China linked social media service wants to do the right thing (of course!), Bloomberg’s angle is:

In Prepared Senate Testimony, TikTok CEO Shou Chew Says the Company Plans to Spend $2B+ in 2024 on Trust and Safety Globally…

Therefore, the Senate hearing on January 31, 2024 is moving forward.

What will be the major take-away from today’s event? I would suggest an opportunity for those testifying to say, “Senator, thank you for the question” and “I don’t have that information. I will provide that information when I return to my office.”

And the UK report? What? And the internal governance of certain decisions related to safety in the techno feudal firms? Secondary to generating revenue perhaps?

Stephen E Arnold, January 31, 2024

Fujitsu: Good Enough Software, Pretty Good Swizzling

January 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The USPS is often interesting, but the UK’s postal system is much worse. I think we can thank the public-private US postal construct for not screwing over those who manage branch offices. Computer Weekly details how the UK postal system’s leaders knowingly had an IT problem and blamed employees: “Fujitsu Bosses Knew About Post Office Horizon IT Flaws, Says Insider.”

The UK postal system used the Post Office Horizon IT system supplied by Fujitsu. The Fujitsu bosses knowingly allowed it to be installed despite massive problems. Hundreds of UK subpostmasters were accused of fraud and false accounting. They were held liable. Many were imprisoned, had their finances ruined, and lost jobs. Many of the UK subpostmasters fought the accusations. It wasn’t until 2019 that the UK High Court established that Horizon IT was at fault.

The Fujitsu team that “designed” the postal IT system didn’t have the correct education and experience for the project. The system was built on a foundation that didn’t properly record and process payments. A developer on the project shared with Computer Weekly:

“‘To my knowledge, no one on the team had a computer science degree or any degree-level qualifications in the right field. They might have had lower-level qualifications or certifications, but none of them had any experience in big development projects, or knew how to do any of this stuff properly. They didn’t know how to do it.’”

The Post Office Horizon IT system was the largest commercial system in Europe, and it didn’t work. The software was bloated, transcribed gibberish, and was held together with the digital equivalent of Scotch tape. This case is the largest miscarriage of justice in recent UK history. Thankfully the truth has come out, and the subpostmasters will be compensated. The compensation doesn’t return stolen time, but it will ease their current burdens.

Fujitsu is getting some scrutiny. Does the company manufacture grocery self-checkout stations? If so, more outstanding work.

Whitney Grace, January 25, 2024

Regulators Shift into Gear to Investigate an AI Tie Up

January 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Solicitors, lawyers, and avocats want to mark the anniversary of the AI big bang. About one year ago, Microsoft pushed Google into hitting its Code Red button. Investment firms, developers, and wild-eyed entrepreneurs knew smart software was the real deal, not a digital file of a cartoon like that NFT baloney. In the last 12 months, AI went from jargon that elicited yawns to the treasure map to the fabled city of El Dorado (even if it was a suburb of Grants, New Mexico). Google got the message quickly. The lawyers? Well, not too quickly.

image

Regulators look through the technological pile of 2023 gadgets. Despite being last year’s big thing, the law makers and justice deciders move into action mode. Exciting. Thanks, MSFT Copilot Bing thing. Good enough.

“EU Joins UK in Scrutinizing OpenAI’s Relationship with Microsoft” documents what happens when lawyers — after decades of inaction — wake to do something constructive. Social media gutted the fabric of many cultural norms. AI isn’t going to be given a 20-year free pass. No way.

The write up reports:

Antitrust regulators in the EU have joined their British counterparts in scrutinizing Microsoft’s alliance with OpenAI.

What will happen now? Here’s my short list of actions:

  1. Legal eagles on both sides of the Atlantic will begin grooming their feathers in order to be selected to deal with the assorted forms, filings, hearings, and advisory meetings. Some of the lawyers will call Ferrari to make sure they are eligible to buy a supercar; others may cast an eye on an impounded oligarch-linked yacht. Yep, big bucks ahead.
  2. Microsoft and OpenAI will let loose a platoon of humanoid art history and business administration majors. These professionals will create a wide range of informative explainers. Smart software will be pressed into duty, and I anticipate some smart automation to provide Teflon to the flow of digital documentation.
  3. Firms — possibly some based in the EU and a few bold souls in the US — will present information making clear that competition is a good thing. Governments must regulate smart software.
  4. Entities hostile to the EU and the US will also output information or disinformation. Which is what depends on one’s perspective.

In short, 2024 will be an interesting year because one of the major threats to the Google could be converted to the digital equivalent of a eunuch in an Assyrian ruler’s court. What will this mean? Google wins. Unanticipated consequence? Absolutely.

Stephen E Arnold, January 19, 2024

Stretchy Security and Flexible Explanations from SEC and X

January 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Gizmodo presented an interesting write up about an alleged security issue involving the US Securities & Exchange Commission. Is this an important agency? I don’t know. “X Confirms SEC Hack, Says Account Didn’t Have 2FA Turned On” states:

Turns out that the SEC’s X account was hacked, partially because it neglected a very basic rule of online security.

image

“Well, Pa, that new security fence does not seem too secure to me,” observes the farmer’s wife. Flexible security and security with give are not the optimal ways to protect the green. Thanks, MSFT Copilot Bing thing. Four tries and something good enough. Yes!

X.com — now known by some as the former Twitter or the Fail Whale outfit — puts the blame on the US SEC. That’s a familiar tactic in Silicon Valley. The users are at fault. Some people believe Google’s incognito mode is secret, and others assume that Apple iPhones do not have a backdoor. Wow, I believe these companies, don’t you?
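For the curious, the 2FA the SEC account lacked is not exotic technology. Most authenticator apps implement the time-based one-time password scheme of RFC 6238. A minimal sketch in Python (the secret below is the RFC’s published test vector, not anything any real account uses):

```python
import base64
import hmac
import struct
import time


def totp(secret_b32: str, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Generate an RFC 6238 time-based one-time password.

    secret_b32 is the base32 shared secret a service displays (usually
    as a QR code) when 2FA is first enabled.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # Moving factor: number of 30-second steps since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the low nibble of the last digest byte, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # 94287082
```

A few dozen lines of standard library code, in other words. Turning this on for a high-profile government account is a checkbox, not a research project.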

The article reports:

[The] hacking episode temporarily threw the web3 community into chaos after the SEC’s compromised account made a post falsely claiming that the SEC had approved the much anticipated Bitcoin ETFs that the crypto world has been obsessed with of late. The claims also briefly sent Bitcoin on a wild ride, as the asset shot up in value temporarily, before crashing back down when it became apparent the news was fake.

My question is, “How stretchy and flexible are security systems available from outfits like Twitter (now X)?” Another question is, “How secure are government agencies?”

The apparent answer is, “Good enough.” That’s the high water mark in today’s world. Excellence? Meh.

Stephen E Arnold, January 18, 2024

Guidelines. What about AI and Warfighting? Oh, Well, Hmmmm.

January 16, 2024

This essay is the work of a dumb dinobaby. No smart software required.

It seems November 2023’s AI Safety Summit, hosted by the UK, was a productive gathering. At the very least, attendees drew up some best practices and brought them to agencies in their home countries. TechRepublic describes the “New AI Security Guidelines Published by NCSC, CISA, & More International Agencies.” Writer Owen Hughes summarizes:

“The Guidelines for Secure AI System Development set out recommendations to ensure that AI models – whether built from scratch or based on existing models or APIs from other companies – ‘function as intended, are available when needed and work without revealing sensitive data to unauthorized parties.’ Key to this is the ‘secure by default’ approach advocated by the NCSC, CISA, the National Institute of Standards and Technology and various other international cybersecurity agencies in existing frameworks. Principles of these frameworks include:

* Taking ownership of security outcomes for customers.

* Embracing radical transparency and accountability.

* Building organizational structure and leadership so that ‘secure by design’ is a top business priority.

A combined 21 agencies and ministries from a total of 18 countries have confirmed they will endorse and co-seal the new guidelines, according to the NCSC. … Lindy Cameron, chief executive officer of the NCSC, said in a press release: ‘We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up. These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.’”

Nice idea, but we noted “OpenAI’s Policy No Longer Explicitly Bans the Use of Its Technology for Military and Warfare.” The article reports that OpenAI:

updated the page on January 10 "to be clearer and provide more service-specific guidance," as the changelog states. It still prohibits the use of its large language models (LLMs) for anything that can cause harm, and it warns people against using its services to "develop or use weapons." However, the company has removed language pertaining to "military and warfare." While we’ve yet to see its real-life implications, this change in wording comes just as military agencies around the world are showing an interest in using AI.

We are told cybersecurity experts and analysts welcome the guidelines. But will the companies vending and developing AI products willingly embrace principles like “radical transparency and accountability”? Will regulators be able to force them to do so? We have our doubts. Nevertheless, this is a good first step. If only it had been taken at the beginning of the race.

Cynthia Murrell, January 16, 2024

Canada and Mobile Surveillance: Is It a Reality?

January 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

It appears a baker’s dozen of Canadian agencies are ignoring a longstanding federal directive on privacy protections. Yes, Canada. According to CBC/Radio-Canada, “Tools Capable of Extracting Personal Data from Phones Being Used by 13 Federal Departments, Documents Show.” The trend surprised even York University associate professor Evan Light, who filed the original access-to-information request. Reporter Brigitte Bureau shares:

“Tools capable of extracting personal data from phones or computers are being used by 13 federal departments and agencies, according to contracts obtained under access to information legislation and shared with Radio-Canada. Radio-Canada has also learned those departments’ use of the tools did not undergo a privacy impact assessment as required by federal government directive. The tools in question can be used to recover and analyze data found on computers, tablets and mobile phones, including information that has been encrypted and password-protected. This can include text messages, contacts, photos and travel history. Certain software can also be used to access a user’s cloud-based data, reveal their internet search history, deleted content and social media activity. Radio-Canada has learned other departments have obtained some of these tools in the past, but say they no longer use them. … ‘I thought I would just find the usual suspects using these devices, like police, whether it’s the RCMP or [Canada Border Services Agency]. But it’s being used by a bunch of bizarre departments,’ [Light] said.”

To make matters worse, none of the agencies had conducted the required Privacy Impact Assessments. A federal directive issued in 2002 and updated in 2010 requires such PIAs to be filed with the Treasury Board of Canada Secretariat and the Office of the Privacy Commissioner before any new activities involving collecting or handling personal data begin. Light is concerned that agencies flat out ignoring the directive means digital surveillance of citizens has become normalized. Join the club, Canada.

Cynthia Murrell, January 12, 2024

Sci Fi or Sci Fake: A Post about a Chinese Force Field

January 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Imagine a force field which can deflect a drone or other object. Commercial applications could range from passenger vehicles to directing flows of material in a manufacturing process. Is a force field a confection of science fiction writers or a technical avenue nearing market entry?

image

A Tai Chi master uses his powers to take down a drone. Thanks, MSFT Copilot Bing thing. Good enough.

“Chinese Scientists Create Plasma Shield to Guard Drones, Missiles from Attack” presents information which may be a combination of “We’re innovating and you are not” and “science fiction.” The write up reports:

The team led by Chen Zongsheng, an associate researcher at the State Key Laboratory of Pulsed Power Laser Technology at the National University of Defence Technology, said their “low-temperature plasma shield” could protect sensitive circuits from electromagnetic weapon bombardments with up to 170kW at a distance of only 3 metres (9.8 feet). Laboratory tests have shown the feasibility of this unusual technology. “We’re in the process of developing miniaturized devices to bring this technology to life,” Chen and his collaborators wrote in a peer-reviewed paper published in the Journal of National University of Defence Technology last month.

But the write up makes clear that other countries like the US are working to make force fields more effective. China has a colorful way to explain their innovation; to wit:

The plasma-based energy shield is a radical new approach reminiscent of tai chi principles – rather than directly countering destructive electromagnetic assaults it endeavors to convert the attacker’s energy into a defensive force.

Tai chi, as I understand the discipline, is a combination of mental discipline and specific movements intended to develop mental peace, promote physical well-being, and control internal force for a range of purposes.

How does the method function? The article explains:

… When attacking electromagnetic waves come into contact with these charged particles, the particles can immediately absorb the energy of the electromagnetic waves and then jump into a very active state. If the enemy continues to attack or even increases the power at this time, the plasma will suddenly increase its density in space, reflecting most of the incidental energy like a mirror, while the waves that enter the plasma are also overwhelmed by avalanche-like charged particles.

One question: Are technologists mining motion pictures, television shows, and science fiction for ideas?

Beam me up, Scotty.

Stephen E Arnold, January 10, 2024

Bugged? Hey, No One Can Get Our Data

December 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “The Obscure Google Deal That Defines America’s Broken Privacy Protections.” In the cartoon below, two young people are confident that their lunch will be undisturbed. No “bugs” will chow down on their hummus, sprout sandwiches, or their information. What happens, however, is that the young picnic fans cannot perceive what is out of sight. Are these “bugs” listening? Yep. They are. 24×7.

image

What the young fail to perceive is that “bugs” are everywhere. These digital creatures are listening, watching, harvesting, and consuming every scrap of information. The image of the picnic evokes an experience unfolding in real time. Thanks, MSFT Copilot. My notion of “bugs” is obviously different from yours. Good enough and I am tired of finding words you can convert to useful images.

The essay explains:

While Meta, Google, and a handful of other companies subject to consent decrees are bound by at least some rules, the majority of tech companies remain unfettered by any substantial federal rules to protect the data of all their users, including some serving more than a billion people globally, such as TikTok and Apple.

The situation is simple: Major centers of techno gravity remain unregulated. Law makers, regulators, and “users” either did not understand or just believed what lobbyists told them. The senior executives of certain big firms smiled, said “Senator, thank you for that question,” and continued to build out their “bug” network. Do governments want to lose their pride of place with these firms? Nope. Why? Just reference bad actors who commit heinous acts and invoke “protect our children.” When these refrains from the techno feudal playbook sound, calls to take meaningful action become little more than a faint background hum.

But the article continues:

…there is diminishing transparency about how Google’s consent decree operates.

I think I understand. Google-type companies pretend to protect “privacy.” Who really knows? Just ask a Google professional. The answer in my experience is, “Hey, dude, I have zero idea.”

How does Wired, the voice of the techno age, conclude its write up? Here you go:

The FTC agrees that a federal privacy law is long overdue, even as it tries to make consent decrees more powerful. Samuel Levine, director of the FTC’s Bureau of Consumer Protection, says that successive privacy settlements over the years have become more limiting and more specific to account for the growing, near-constant surveillance of Americans by the technology around them. And the FTC is making every effort to enforce the settlements to the letter…

I love the “every effort.” The reality is that the handling of online data collection presages the trajectory for smart software. We live with bugs. Now those bugs can “think”, adapt, and guide. And what’s the direction in which we are now being herded? Grim, isn’t it?

Stephen E Arnold, December 23, 2023

FTC Enacts Investigative Process for AI Technology

December 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Creative types and educational professionals are worried about the influence of AI-generated work. However, legal, finance, business operations, and other industries are worried about how AI will impact them. Aware of the upward trend in goods and services that are surreptitiously moving into the market, the Federal Trade Commission (FTC) took action. The FTC released a briefing on the new consumer AI protection: “FTC Authorizes Compulsory Process for AI-Related Products and Services.”

The FTC passed an omnibus resolution that authorizes the use of compulsory process in nonpublic investigations about products and services that use or claim to be made with AI, or that claim to detect it. The new omnibus resolution will increase the FTC’s efficiency with civil investigative demands (CIDs), a compulsory process similar to a subpoena. CIDs are issued to collect information, similar to legal discovery, for consumer protection and competition investigations. The resolution will be in effect for ten years, and the FTC voted 3-0 to approve it.

The FTC defines AI as:

“AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Generative AI can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans. Many companies now offer products and services using AI and generative AI, while others offer products and services that claim to detect content made by generative AI.”

AI can also be used for deception, privacy infringements, fraud, and other illegal activities. AI can also cause competition problems, such as when a few companies monopolize algorithms or other AI-related technologies.

The FTC is taking preliminary steps to protect consumers from bad actors and their nefarious AI-generated deeds. However, what constitutes a violation in relation to AI? Will the data training libraries be examined along with the developers?

Whitney Grace, December 20, 2023
