How Will Smart Cars Navigate Crowded Cityscapes When People Do Humanoid Things?

September 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Who collided in San Francisco on July 6, 2024? (No, not the February 2024 incident. Yes, I know it is easy to forget such trivial incidents.) Did the Googley Waymo vehicle (self-driving and smart, of course) bump into the cyclist? Did the cyclist decide to pull a European Union type stunt and run into the self-driving car?


If the legal outcome of this San Francisco autonomous car – bicycle incident goes in favor of the bicyclist, autonomous vehicles will have to be smart enough to avoid situations like the one shown in the ChatGPT cartoon. Microsoft Copilot would not render the image. When I responded, “What?” Copilot hung. Great stuff.

The question is important for insurance, publicity, and other monetary reasons. A good offense is the best defense, someone said. “Waymo Cites Possible Intentional Contact by a Bicyclist to Robotaxi in S.F.” reports:

While the robotaxi was stopped, the cyclist passed in front of it and appeared to dismount, according to the documents. “The cyclist then reached out a hand and made contact with the front passenger side of the stationary Waymo AV (autonomous vehicle), backed the bicycle up slightly, dropped the bicycle, then fell to the ground,” the documents said. The cyclist received medical treatment at the scene and was transported to the hospital, according to the documents. The Waymo vehicle was not damaged during the incident.

In my view, this is the key phrase in the news report:

In the documents, Waymo said it was submitting the report because of the alleged crash and because the cyclist influenced the driving task of the AV and was transported to the hospital, even though the incident “may involve intentional contact by the bicyclist with the Waymo AV and the occurrence of actual impact between the Waymo AV and cycle is not clear.”

We have doubt, reasonable doubt obviously. Googley Waymo is definitely into reasoning. And we have the word pair “intentional contact.” Okay, to me this means the smart Waymo vehicle did nothing wrong. A human — chock full of possibly malicious if not criminal intent — created a TikTok moment. It is too bad there is no video of the incident. Even my low-ball Hyundai records what’s in front of it. (Doesn’t the Googley Waymo do that with its array of Star Wars adornments, sensors, probes, and other accoutrements of Googley Waymo vehicles? Guess not.) But the autonomous vehicle had something that could act in an intelligent manner: a human test driver.

What was that person’s recollection of the incident? The news story reports that the Googley Waymo outfit “did not immediately respond to a request for further comment on the incident.”

Several observations:

  1. The bike-riding human created the accident with a parked, super-intelligent Waymo vehicle that had a test driver in command.
  2. The Waymo outfit did not want to talk to the San Francisco Chronicle reporter or editor. (I used to work at a newspaper, and I did not like to talk to the editors and news professionals either.)
  3. Autonomous cars are going to have to be equipped with sufficiently expert AI systems to avoid humans who are acting in a way to convert Googley Waymo services into a source of revenue. Failing that, I anticipate more kinetic interactions between Googley smart cars and humanoids not getting paid to ride shotgun on smart software.

Net net: How long have big-time technology companies been trying to get autonomous vehicles to produce cash, not liabilities?

Stephen E Arnold, September 11, 2024
