Digital Dumplings: AI Outputs Must Be Groomed, Trimmed, and Message Aligned

May 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Beijing Moves to Force AI Bots to Display Socialist Core Values.” I am not sure that the write up is spot on, but let’s assume that it is close enough for horseshoes. The main idea is that AI can chat. However, the AI must be steered so that it outputs content displaying “socialist core values.”

The write up states:

Companies will also have to make sure their chatbots create words and pictures that are truthful and respect intellectual property, and will be required to register their algorithms, the software brains behind chatbots, with regulators. The rules are not final, and regulators may continue to modify them, but experts said engineers building AI services in China were already figuring out how to incorporate the edicts into their products.

My reaction is that some will argue that training smart software, plus any post-training digital filters, will work. Let’s assume that those subject to the edict achieve the objective. What about US smart software whose developers insist that objectivity is the real deal? China’s policy, if implemented and if it delivers, makes it clear that smart software is not objective. Developers can and will use its malleability to achieve their goals.
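To make the “post-training digital filter” notion concrete, here is a minimal, hypothetical sketch of screening a chatbot reply after generation. Nothing below comes from the cited article or any real system; the blocked-term list, the disclaimer text, and the filter_output function are illustrative assumptions only.

```python
# Hypothetical post-generation filter: screen a chatbot reply after the model
# produces it and align the result with a policy. All terms and names here are
# illustrative assumptions, not taken from any real product or regulation.

BLOCKED_TERMS = {"example banned phrase"}  # assumption: a policy-supplied term list
REQUIRED_NOTE = "This reply has been reviewed for policy alignment."  # assumption


def filter_output(generated_text: str) -> str:
    """Return a policy-aligned version of a generated reply."""
    lowered = generated_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        # Refuse outright rather than risk emitting a non-compliant reply.
        return "I cannot provide a response on that topic."
    # Otherwise, append the required policy note to the acceptable reply.
    return f"{generated_text}\n\n{REQUIRED_NOTE}"


if __name__ == "__main__":
    print(filter_output("Here is a neutral answer about dumplings."))
```

The point of the sketch is simply that output steering can happen entirely after training, which is why claims of objectivity are hard to verify from the outside.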

How about those students who reveal deep secrets on TikTok? Will these individuals be manipulated via smart software informed of their hot buttons?

Is that data dumpling a psychographic trigger with a payload different from ground pork, veggies, and spices?

Stephen E Arnold, May 1, 2023
