AI: Strip Mining Life Itself

May 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I may be, like an AI system, hallucinating, but I think I am seeing more philosophical essays and medieval-style reasoning recently. A candidate piece of expository writing is “To Understand the Risks Posed by AI, Follow the Money.” After reading the write up, I did not get the sense that the focus was on following the money. Nevertheless, I circled several statements which caught my attention.

Let’s look at these, and you may want to navigate to the original essay to get each statement’s context.

First, the authors focus on what they, as academic thinkers, call “an extractive business model.” When I saw the term, I thought of the strip mines in Illinois. Giant draglines stripped the earth to expose coal. Once the coal was extracted, the scarred earth was bulldozed into what looked like regular prairie. It was not. Weeds grew. But to get corn or soybeans, the farmer had to spend big bucks on chemicals and some Fancy Dan equipment to coax the trashed landscape to utility. Nice.

The essay does not make the downside of extractive practices clear. I will. Take a look at a group of teens in a fast food restaurant or at a public event. The group is a consequence of the online environment in which each individual spends hours every day. I am not sure how well the chemicals and equipment used to rehabilitate the strip-mined prairie apply to humans, but I assume someone will do a study and report.

The second statement warranting a blue exclamation mark is:

Algorithms have become market gatekeepers and value allocators, and are now becoming producers and arbiters of knowledge.

From my perspective, the algorithms are expressions of human intent. The algorithms are not the gatekeepers and allocators. The algorithms express the intent, goals, and desires of the individuals who create them. The “users” knowingly or unknowingly give up certain thought methods and procedures in exchange for something that appears to scratch a Maslow’s Hierarchy of Needs itch. I think in terms of the medieval Great Chain of Being. The people at the top own the companies. Their instrument of control is their service. The rest of the hierarchy reflects a skewed social order. A fish understands only the environment of the fish bowl. The rest of the “world” is tough to perceive and understand. In short, the fish is trapped. Online users (addicts?) are trapped.

The third statement I marked is:

The limits we place on algorithms and AI models will be instrumental to directing economic activity and human attention towards productive ends.

Okay, who exactly is going to place limits? The farmer who leased his land to the strip-mining outfit made a decision. He traded the land for money. Who is to blame? The mining outfit? The farmer? The system which allowed the transaction?

The situation at this moment is that the yip yap about open source AI and other handwaving cannot alter the fact that a handful of large US companies and a number of motivated nation-states are going to spend whatever is necessary to obtain control.

Net net: Houston, we have a problem. Money buys power. AI is a next generation way to get it.

Stephen E Arnold, May 2, 2024

Using AI But For Avoiding Dumb Stuff, One Hopes

May 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting essay called “How I Use AI To Help With TechDirt (And, No, It’s Not Writing Articles).” The main point of the write up is that artificial intelligence or smart software (my preferred phrase) can be useful for certain use cases. The article states:

I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles. It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.

Thanks, MSFT Copilot. Bad grammar and an incorrect use of the apostrophe. Also, I was much dumber-looking in the 9th grade. But good enough, the motto of some big software outfits, right?

The idea is that an AI system can function as a partner, research assistant, editor, and interlocutor. That sounds like what Microsoft calls a “copilot.” The article continues:

I initially couldn’t think of anything to ask the AI, so I asked people in Lex’s Discord how they used it. One user sent back a “scorecard” that he had created, which he asked Lex to use to review everything he wrote.

The use case is that smart software functions like Miss Dalton, my English composition teacher at Woodruff High School in 1958. She was a firm believer in diagramming sentences, following the precepts of the Tressler & Christ textbook, and arcane rules such as capitalizing the first word following a colon (correctly used, of course).

I think her approach was intended to force students in 1958 to perform these word and text manipulations automatically. Then when we trooped to the library every month to do “research” on a topic she assigned, we could focus on the content, the logic, and the structural presentation of the information. If you attend one of my lectures, you can see that I am struggling to live up to her ideals.

However, when I plugged in my comments about Telegram as a platform tailored to obfuscated communications, the delivery of malware and X-rated content, and the reinforcement of a myth that the entity known as Mr. Durov does not cooperate with certain entities to filter content, the AI systems failed miserably. Not only were the systems lacking content; one (Microsoft Copilot, to be specific) simply collapsed, producing no functional content at all. Two other systems balked at the idea of delivering CSAM within a Group’s Channel devoted to paying customers of what is either illegal or extremely unpleasant content.

Several observations are warranted:

  1. For certain types of content, the systems lack sufficient data to know what the heck I am talking about
  2. For illegal activities, the systems are either pretending to be really stupid, or the developers have added STOP words to the filters to make darned sure no improper output would be presented
  3. The systems are not up to date; for example, Mr. Durov was interviewed by Tucker Carlson a week before Mr. Durov blocked Ukrainian Telegram Groups’ content for Telegram users in Russia.

Is it, therefore, reasonable to depend on a smart software system to provide input on a “newish” topic? Is it possible the smart software systems are fiddled by the developers so that no useful information is delivered to the user (free or paying)?

Net net: I am delighted people are finding smart software useful. For my lectures to law enforcement officers and cyber investigators, smart software is, as of May 1, 2024, not ready for prime time. My concern is that some individuals may not discern the problems with the outputs. Writing about the law and its interpretation is an area about which I am not qualified to comment. But perhaps legal content is different from garden variety criminal operations. No, I won’t ask, “What’s criminal?” I would rather rely on what Miss Dalton taught in 1958. Why? I am a dinobaby and deeply skeptical of probabilistic-based systems which do not incorporate Kolmogorov-Arnold methods. Hey, that’s my relative’s approach.

Stephen E Arnold, May 1, 2024

Big Tech and Their Software: The Tent Pole Problem

May 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I remember a Boy Scout camping trip. I was a Wolf Scout at the time, and my “pack” had the task of setting up our tent for the night. The scout master was Mr. Johnson, and he left it to us. The weather did not cooperate; the tent pegs pulled out in the wind. The center tent pole broke. We stood in the rain. We knew the badge for camping was gone, just like a dry place to sleep. Failure. Whom could we blame? I suggested, “McKinsey & Co.” I had learned that third parties were usually fall guys. No one knew what I was talking about.

Okay, ChatGPT, good enough.

I thought about the tent pole failure, the miserable camping experience, and the need to blame McKinsey or at least an entity other than ourselves. The memory surfaced as I read “Laws of Software Evolution.” The write up sets forth some ideas which may not be firm guidelines like those articulated by the World Court, but they are about as enforceable.

Let’s look at the laws explicated in the essay.

The first law is that software exists to support a real-world task. A result (a corollary maybe?) is that the software has to evolve. That is the old chestnut “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” The problem is change, which consumes money and time. As a result, original software is wrapped, peppered with calls to snappy new modules designed to fix up or extend the original software.
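
To make the wrapping idea concrete, here is a minimal, hypothetical Python sketch; the function names and rates are invented for illustration and are not taken from the essay. The old routine is left untouched, and a new module is bolted on around it, which is exactly how complexity begins to accrete.

    # Hypothetical example of the "wrapping" pattern: legacy code stays put,
    # new behavior is layered on top of it.

    def legacy_tax_calculation(amount):
        # Original code, written years ago and considered too risky to rewrite.
        return amount * 0.05

    def calculate_tax(amount, region="US"):
        # New wrapper: adds a feature (regional rates) without touching the
        # legacy routine, so each change adds another layer around the old core.
        if region == "EU":
            return amount * 0.20
        return legacy_tax_calculation(amount)

    print(calculate_tax(100))        # 5.0, via the untouched legacy path
    print(calculate_tax(100, "EU"))  # 20.0, via the bolted-on module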

The second law is that when changes are made, the software construct becomes more complex. Complexity is what humans do. A true master makes certain processes simple. Software has artists, poets, and engineers with vision. Simple may not be a key component of the world the programmer wants to create. Thus, increasing complexity creates surprises like unknown dependencies, sluggish performance, and a giant black hole of costs.

The third law is not explicitly called out like Laws One and Two. Here’s my interpretation of the “lurking law,” as I have termed it:

Code can be shaped and built upon.

My reaction to this essay is positive, but the link to evolution eludes me. The one issue I want to raise is that once software is built, deployed, and fiddled with, it is like a river pier built by Roman engineers. Moving the pier or fixing it so it will persist is a very, very difficult task. At some point, even the Roman concrete will weather away. The bridge or structure will fall down. Gravity wins. I am okay with software devolution.

The future, therefore, will be stuffed with software breakdowns. The essay makes a logical statement:

… we should embrace the malleability of code and avoid redesign processes at all costs!

Sorry. Won’t happen. Woulda, shoulda, and coulda cannot do the job.

Stephen E Arnold, May 1, 2024

A High-Tech Best Friend and Campfire Lighter

May 1, 2024

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

A dog is allegedly man’s best friend. I have a French bulldog, and I am not 100 percent sure that’s an accurate statement. But I have a way to get the pal I have wanted for years.

 Ars Technica reports “You Can Now Buy a Flame-Throwing Robot Dog for Under $10,000” from Ohio-based maker Throwflame. See the article for footage of this contraption setting fire to what appears to be a forest. Terrific. Reporter Benj Edwards writes:

“Thermonator is a quadruped robot with an ARC flamethrower mounted to its back, fueled by gasoline or napalm. It features a one-hour battery, a 30-foot flame-throwing range, and Wi-Fi and Bluetooth connectivity for remote control through a smartphone. It also includes a LIDAR sensor for mapping and obstacle avoidance, laser sighting, and first-person view (FPV) navigation through an onboard camera. The product appears to integrate a version of the Unitree Go2 robot quadruped that retails alone for $1,600 in its base configuration. The company lists possible applications of the new robot as ‘wildfire control and prevention,’ ‘agricultural management,’ ‘ecological conservation,’ ‘snow and ice removal,’ and ‘entertainment and SFX.’ But most of all, it sets things on fire in a variety of real-world scenarios.”

And what does my desired dog look like? The GenY Tibby asleep at work? Nope.

I hope my Thermonator includes an AI at the controls. Maybe that will be an add-on feature in 2025? Unitree, maker of the robot base mentioned above, once vowed to oppose the weaponization of their products (along with five other robotics firms). Perhaps Throwflame won them over with assertions that their device is not technically a weapon, since flamethrowers are not considered firearms by federal agencies. It is currently legal to own this mayhem machine in 48 states; certain restrictions apply in Maryland and California. How many crazies can get their hands on a mere $9,420 plus tax for that kind of power? Even factoring in the cost of napalm (sold separately), probably quite a few.

Cynthia Murrell, May 1, 2024
