Self-Driving Cars: Would You Run in Front of One?

August 7, 2023

I worked in what some call "Plastic Fantastic." If you have not heard the phrase, you may have missed the quips aimed at several high-profile, big-money companies in Silicon Valley. Oh, include Cupertino and a few other outposts. Walnut Creek, I am sorry for you.

If one were to live in Berkeley and have the thrilling option of driving over the Bay Bridge or taking a chance with 92 skidoo, the idea of having a car which would drive itself at three miles per hour is obvious. Also, anyone with an opportunity to use 101 or the Foothills would have a similar thought. Why drive? Why not rig a car to creep along?

[Image: traffic jam]

One bright driver says, "Self-driving cars will solve this problem." His passenger says, "Isn't this a self-driving car? Aren't we going the wrong way on a one-way street?" MidJourney understands traffic jams because its guardrails are high.

And what do you know? The self-driving car idea captured attention. How is that going after much money and many years of effort? And here's a better question: Would you run in front of one? Would you encourage your child to stand in front of one to test the auto-braking function? Go to a dealership selling smart cars and ask the sales professional (if you can find one) to let you drive a vehicle toward the sales professional. I tried this at two dealerships and what do you know? No auto sales professional accepted this idea. One dealership had an orange cone which I could use to test auto-braking.

I read "America's Most Tech-Forward City Has Doubts about Self-Driving Cars." I do not want to be harsh, but cities do not have doubts. People do. The Murdoch "real" journalists report that people (not cities) will not embrace the idea of letting a Silicon Valley-inspired vehicle ferry them around without a bit of trepidation. Okay, fear. There I said it. How about the confidence a vehicle without a steering wheel or brake inspires?

If you want to read what is painfully obvious, navigate to the original story.

Oh, the writer is unlikely to be found standing on 101 testing the efficacy of the smart cars. Mr. Murdoch? Yeah, he might give it a whirl. My suggestion is to be confident in the land of Plastic Fantastic. It thrives on illusion. Reality can kill, crash, or just stall at a key intersection. AI can hallucinate and may overlook the squashed jogger. But whiz kids sitting on 101 envision a smarter world. Doesn’t everyone sit on highways like 101 every day?

Stephen E Arnold, August 7, 2023

Learning Means Effort, Attention, and Discipline. No, We Have AI, or AI Has Us

July 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

My newsfeed of headlines produced a three-year-young essay titled "How to Learn Better in the Digital Age." The date on the document is November 2020. (Have you noticed how rarely a specific date appears on a document?)

[Image: student doing homework]

MidJourney provided this illustration of me doing math homework with both hands in 1952. I was fatter and definitely uglier than the representation in the picture. I want to point out: [a] no mobile phone, [b] no calculator, [c] no radio or TV, [d] no computer, and [e] no mathy father breathing down my neck. (He was busy handling the finances of a weapons manufacturer which dabbled in metal coat hangers.) Was homework hard? Nope, just part of the routine in Campinas, Brazil, and the thrilling Calvert Course.

The write up contains a simile which does not speak to me; namely, that the functioning of the human brain is emulated to some degree in smart software. I am not in that dogfight. I don't care because I am a dinobaby.

For me the important statement in the essay, in my opinion, is this one:

… we need to engage with what we encounter if we wish to absorb it long term. In a smartphone-driven society, real engagement, beyond the share or like or retweet, got fundamentally difficult – or, put another way, not engaging got fundamentally easier. Passive browsing is addictive: the whole information supply chain is optimized for time spent in-app, not for retention and proactivity.

I marvel at the examples of a failure to learn. United Airlines strands people. The CEO has a fix: Take a private jet. Clerks in convenience stores cannot make change even when the cash register displays the amount to return to the customer. Yeah, figuring out pennies, dimes, and quarters is a tough one. New and expensive autos near where I live sit on the side of the road awaiting a tow truck from the Land Rover- or Maserati-type dealer. The local hospital has been unable to verify appointments and, allegedly, to find some X-ray images eight weeks after a cyber attack on an insecure system. Hip, HIPAA hooray. Hip, HIPAA hooray. I have a basket of other examples, and I would wager $1.00US you may have one or two to contribute. But why? The impacts of poor thinking, reading, math, and writing skills are abundant.

Observations:

  1. AI will take over routine functions because humans are less intelligent and diligent than when I was a fat, slow-learning student. AI is fast and good enough.
  2. People today will not be able to identify or find information to validate or invalidate an output from a smart system; therefore, those who are intellectually elite will have their hands on machines that direct behavior, money, and power.
  3. Institutions — staffed by employees who look forward to a coffee break more than working hard — will gladly license the smart workflow revolution.

Exciting times coming. I am delighted I am a dinobaby and not a third-grade student juggling a mobile, an Xbox, an iPad, and a new M2 Air. I was okay with paper and pencil. I just wanted to finish my homework and get the best grade I could.

Stephen E Arnold, July 4, 2023

Milestones in 2023 Technology: Wondrous Markers Indeed

June 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I zipped through my messages this morning. Two items helped me think about the milieu of mid 2023.

The first is a memento of the health challenge and its downstream effects. The bat in a Lucite block is available in some countries from AliExpress. You can — if you wish — try to order the item from this url which I verified on June 21, 2023, at 10:38 am US Eastern time: https://shorturl.at/wyAFL. My reaction to this item was slightly negative, but I think it would be a candidate for a gain-of-function researcher at the local college or university.


The other item is “What is AI Marketing? A Basic Guide to Explosive Growth in 2023.” This article — allegedly written by a humanoid — states:

The real deal with AI marketing is about augmenting human capabilities, not eliminating them. Think of it like your very own marketing superpower, helping you reach the right people, at the right time, with the right message. Or are you worried AI might make marketing impersonal and robotic?  Well, the surprising truth is that AI can actually make your marketing more human. It can save you oodles of time and energy so that you can focus on the tasks that truly matter.

I am not sure about the phrase “truly matter,” but the concept of using smart software to bombard me with more advertising is definitely a thought starter. My question is, “How can I filter these synthetic messages?”

The question you may want to ask me is, "How on earth are bats associated with a disease linked to smart software used to generate advertising that truly matters?"

The answer in my opinion is “gain of function.” The idea is not to make something better. The objective is to amplify certain effects.

Now that I think about it, perhaps the bat and its potential to make life miserable is the perfect metaphor for summer 2023. Perhaps I should use You.com’s smart software to help me think the idea through?

Nah. I am good. Diseased bats and smart software lashed to marketing. Perfect.

Stephen E Arnold, June 21, 2023

Software Cannot Process Numbers Derived from Getty Pix, Honks Getty Legal Eagle

June 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Getty Asks London Court to Stop UK Sales of Stability AI System.” The write up comes from a service which, like Google, bandies about the word trust with considerable confidence. The main idea is that software is processing images available in the form of Web content, converting these to numbers, and using the zeros and ones to create pictures.
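
The "converting these to numbers" point is literal, and a concrete glance helps. Below is a minimal sketch of what image ingestion looks like, assuming the Pillow and NumPy libraries; the file name is hypothetical:

```python
# A minimal sketch of "images become numbers," assuming Pillow and NumPy.
# The file name is hypothetical.
import numpy as np
from PIL import Image

img = Image.open("getty_sample.jpg").convert("RGB")  # hypothetical file
pixels = np.asarray(img, dtype=np.float32) / 255.0   # H x W x 3 array of values in 0..1

# A diffusion system does not store the JPEG itself; training nudges model
# weights against millions of arrays like this one, and generation later
# produces brand-new arrays from noise.
print(pixels.shape, pixels.min(), pixels.max())
```

The legal fight is over whether running Getty's pixels through this kind of arithmetic infringes copyright.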

The write up states:

The Seattle-based company [Getty] accuses the company of breaching its copyright by using its images to “train” its Stable Diffusion system, according to the filing dated May 12, [2023].

I found this statement in the trusted write up fascinating:

Getty is seeking as-yet unspecified damages. It is also asking the High Court to order Stability AI to hand over or destroy all versions of Stable Diffusion that may infringe Getty’s intellectual property rights.

When I read this, I wonder if the scribes, upon learning about the threat Gutenberg's printing press represented, were experiencing their "Getty moment." The advanced technology of the adapted olive press and hand-carved wooden letters meant that the quill-pen champions had to adapt or find their future emptying garderobes (aka chamber pots).

[Image: scribes pushing a printing press into a river]

Scribes prepare to throw a Gutenberg printing press and the evil innovator Gutenberg into the Rhine River. The image was produced by the evil-incarnate code of MidJourney. Getty is not impressed, like letters on paper, with the outputs of Beelzebub-inspired innovations.

How did that rebellion against technology work out? Yeah. Disruption.

What happens if the legal systems in the UK and possibly the US jump on the no-innovation train? Japan's decision points to one option: Using what's on the Web is just fine. And China? Yep, those folks in the Middle Kingdom will definitely conform to the UK and maybe US rules and regulations. What about outposts of innovation in Armenia? Johnnies on the spot (not pot, please). But what about those computer science students at Cambridge University? Jail and fines are too good for them. To the gibbet.

Stephen E Arnold, June 6, 2023

Will McKinsey Be Replaced by AI: Missing the Point of Money and Power

May 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read a very unusual anti-big-company and anti-big-tech essay called "Will AI Become the New McKinsey?" The thesis of the essay, in my opinion, is expressed in this statement:

AI is a threat because of the way it assists capital.

The argument upon which this assertion is perched boils down to this: capitalism, in its present form, in today's US of A is roached. The choices available to make life into a hard rock candy mountain world are stark: Boost capitalism so that it, like cancer, kills everything including itself. The other alternative is to wait for the "government" to implement policies to convert the endless scroll into a post-1984 theme park.

Let's consider McKinsey. Whether the firm likes it or not, it has become the poster child and revenue model for other services firms. Paying to turn on one's steering-wheel heating element is an example of McKinsey-type thinking. The fentanyl problem is an unintended consequence of offering some baller ideas to a few big pharma outfits in the US. There are other examples. I prefer to focus on some intellectual characteristics which make the firm into the symbol of that which is wrong with the good old US of A; to wit:

  1. MBA think. Numbers drive decisions, not feel-good ideas like togetherness, helping others, and emulating Twitch's AI-powered ask_Jesus program. If you have not seen this, check it out at this link. It has 64 viewers as I write this on May 7, 2023, at 2 pm US Eastern.
  2. Hitting goals. These are either expressed as targets to consultants or passed along by executives to the junior MBAs pushing the millstone round and round with dot points, charts, graphs, and zippy jargon-speak. The incentive plan and its goals feed the MBAs. I think of these entities as cattle with some brains.
  3. Being viewed as super smart. I know that most successful consultants know they are smart. But many smart people who work at consulting firms like McKinsey are more insecure than an 11-year-old watching an Olympic gymnast flip and spin in an effortless manner. To overcome that insecurity, the MBA consultant seeks approval from his/her/its peers and from clients who eagerly pick the option the report was crafted to make a no-brainer. Yes, slaps on the back, lunch with a senior partner, and being identified as a person who would undertake grinding another rail car filled with wheat.

The essay, however, overlooks a simple fact about AI and similar “it will change everything” technology.

The technology does not do anything. It is a tool. The action comes from the individuals smart enough, bold enough, and quick enough to implement or apply it first. Once the momentum is visible, then the technology is shaped, weaponized, and guided to targets. The technology does not have much of a vote. In fact, technology is the millstone. The owner of the cattle is running the show. The write up ignores this simple fact.

One solution is to let the “government” develop policies. Another is for the technology to kill itself. Another is for those with money, courage, and brains to develop an ethical mindset. Yeah, good luck with these.

The government works for the big outfits in the good old US of A. No firm action against monopolies, right? Why? Lawyers, lobbyists, and leverage.

What does the essay achieve? [a] Calling attention to McKinsey helps McKinsey sell. [b] Trying to gently push a lefty idea is tough when those who can't afford an apartment in Manhattan are swiping their iPhones and posting on BlueSky. [c] Accepting the reality that technology serves those who understand it and have the cash to use it to gain more power and money.

Ugly? Only for those excluded from the top of the social pyramid, from juicy jobs at blue-chip consulting firms, from expertise in manipulating advanced systems and methods, and from the mindset to succeed in what is the only game in town.

PS: MBAs make errors like the Bud Light promotion. That type of mistake, not opioid tactics, may be an instrument of change. But taming AI to make a better, more ethical world? That's a comedy hook worthy of the next Sundar & Prabhakar show.

Stephen E Arnold, May 12, 2023

Researchers Break New Ground with a Turkey Baster and Zoom

April 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I do not recall much about my pre-school days. I do recall dropping off my two children at their pre-schools at different times. My recollections are fuzzy. I recall horrible finger paintings carried to the automobile and, several times a month, mashed pieces of cake. I recall quite a bit of laughing, shouting, and jabbering about classmates whom I did not know. Truth be told, I did not want to meet these progeny of highly educated, upwardly mobile parents who wore clothes with exposed logos and drove Volvo station wagons. I did not want to meet the classmates. The idea of interviewing pre-kindergarten children struck me as a waste of time and an opportunity to get chocolate Baskin-Robbins cake smeared on my suit. (I am a dinobaby, remember. Dress for success. White shirt. Conservative tie. Yada yada.)

I thought (briefly, very briefly) about the essay in Science Daily titled "Preschoolers Prefer to Learn from a Competent Robot Than an Incompetent Human." The "real news" article reported without one hint of sarcastic, ironic skepticism:

We can see that by age five, children are choosing to learn from a competent teacher over someone who is more familiar to them — even if the competent teacher is a robot…

Okay. How were these data gathered? I absolutely loved the use of Zoom, a turkey baster, and nonsense terms like “fep.”

Fascinating. First, the idea of using Zoom and a turkey baster would never have roamed across this dinobaby's mind. Second, there is the intuitive leap by the researchers that pre-schoolers who finger-paint would like to undertake this deeply intellectual task with a robot, not a human. The human, from my experience, is necessary to prevent the delightful sprouts from eating the paint. Third, I wonder if the research team's first-year statistics professor explained the concept of a valid sample.

One thing is clear from the research. Teachers, your days are numbered unless you participate in the Singularity with Ray Kurzweil or are part of the school systems’ administrative group riding the nepotism bus.

“Fep.” A good word to describe certain types of research.

Stephen E Arnold, April 4, 2023

Subscription Thinking: More Risky Than 20-Somethings Think

March 2, 2023

I am delighted I don't have to sit in meetings with GenX, GenY, and GenZ MBAs any longer. Now I talk to other dinobabies. Why am I not comfortable with the younger, bright-as-a-button humanoids? Here's one reason: "Volkswagen Briefly Refused to Track Car with Abducted Child Inside until It Received Payment."

I can visualize the group figuring out how to generate revenue instead of working to explain and remediate the fuel-emission scam allegedly perpetrated by Volkswagen. The reasoning probably ran along the lines of, "Hey, let's charge people for monitoring a VW." Another adds: "Wow, easy money, and we avoid the blowback BMW got when it wanted money for heated seats."

Did the VW young wizards consider the downsides of the problem? Did the super-bright money spinners ask, "What contingencies are needed for a legitimate law enforcement request?" My hunch is that someone mentioned these and other issues, but the team was thinking about organic pizza for lunch or why the coffee pods were plain old regular coffee.

The cited article states:

The Sheriff’s Office of Lake County, Illinois, has reported on Facebook about a car theft and child abduction incident that took place last week. Notably, it said that a Volkswagen Atlas with tracking technology built in was stolen from a woman and when the police tried asking VW to track the vehicle, it refused until it received payment.

The company floundered and then assisted. The child was unharmed.

Good work, VW. Now about the software in your electric vehicles and the emission engineering issue? What do I hear?

The sweet notes of Simon & Garfunkel's "The Sound of Silence"? So relaxing and stress-free: just like the chatter of those who were trying to rescue the child.

No, I never worry about how the snow plow driver gets to work, thank you. I worry about incomplete thinking and specious methods of getting money from a customer.

Stephen E Arnold, March 2, 2023

Why Governments and Others Outsource… Almost Everything

January 24, 2023

I read a very good essay called “Questions for a New Technology.” The core of the write up is a list of eight questions. Most of these are problems for full-time employees. Let me give you one example:

Are we clear on what new costs we are taking on with the new technology? (monitoring, training, cognitive load, etc)

The challenge strikes me as the phrase "new technology." By definition, most people in an organization will not know the details of the new technology. If a couple of people do, these individuals have to get the others up to speed. The other problem is that it is quite difficult for humans to look at a "new technology" and know about the knock-on or downstream effects. A good example is the craziness of Facebook's dating objective and how the system evolved into a mechanism for social revolution. What in-house group of workers can tackle problems like that once the method leaves the dorm room?

The other questions probe similarly difficult tasks.

But my point is that most governments do not rely on their full-time employees to solve problems. Years ago I gave a lecture at Cebit about search. One person in the audience pointed out that in that individual's EU agency, third parties were hired to analyze and help implement a solution. The same behavior popped up in Sweden, the US, Canada, and several other countries in which I worked prior to my retirement in 2013.

Three points:

  1. Full-time employees recognize the impossibility of tackling fundamental questions and don't really try.
  2. The consultants retained to answer the questions, or help answer the questions, are not equipped to answer them either; they bill the client.
  3. Fundamental questions are dodged by management methods like "let's push decisions down" or "we decide in an organic manner."

Doing homework and making informed decisions is hard. A reluctance to learn, evaluate risks, and implement in a thoughtful manner is uncomfortable for many people. The result is the dysfunction evident in airlines, government agencies, hospitals, education, and many other disciplines. Scientific research is often non-reproducible. Is that a good thing? Yes, if one lacks expertise and does not want to accept responsibility.

Stephen E Arnold, January 24, 2023

Is SkyNet a Reality or a Plot Device?

January 20, 2023

We humans must resist the temptation to outsource our reasoning to an AI, no matter how trustworthy it sounds. This is because, as iai News points out, “All-Knowing Machines Are a Fantasy.” Society is now in danger of confusing fiction with reality, a mistake that could have serious consequences. Professors Emily M. Bender and Chirag Shah observe:

“Decades of science fiction have taught us that a key feature of a high-tech future is computer systems that give us instant access to seemingly limitless collections of knowledge through an interface that takes the form of a friendly (or sometimes sinisterly detached) voice. The early promise of the World Wide Web was that it might be the start of that collection of knowledge. With Meta’s Galactica, OpenAI’s ChatGPT and earlier this year LaMDA from Google, it seems like the friendly language interface is just around the corner, too. However, we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer needs them to have—for a roadmap to how technology could and should be created in the real world. In fact, large language models like Galactica, ChatGPT and LaMDA are not fit for purpose as information access systems, in two fundamental and independent ways.”

The first problem is that language models do what they are built to do very well: they produce text that sounds human-generated. Authoritative, even. Listeners unconsciously ascribe human thought processes to the results. In truth, algorithms lack understanding, intent, and accountability, making them inherently unreliable as unvetted sources of information.
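
To see why fluent output and reliable output are different things, consider a toy sketch of next-token sampling, the core loop of text generation. The vocabulary and probabilities below are invented for illustration; real models learn billions of such statistics:

```python
import random

# Toy next-token table: each context word maps to plausible continuations
# with invented probabilities. Real language models learn these statistics
# from training data at vastly larger scale.
NEXT_TOKEN = {
    "the":  [("moon", 0.40), ("answer", 0.35), ("data", 0.25)],
    "moon": [("is", 0.70), ("landing", 0.30)],
    "is":   [("made", 0.50), ("bright", 0.50)],
    "made": [("of", 1.00)],
    "of":   [("cheese", 0.60), ("rock", 0.40)],
}

def generate(token: str, max_len: int = 6) -> str:
    """Sample a fluent-sounding continuation, one likely token at a time."""
    output = [token]
    for _ in range(max_len):
        options = NEXT_TOKEN.get(token)
        if not options:
            break
        words, weights = zip(*options)
        token = random.choices(words, weights=weights)[0]
        output.append(token)
    return " ".join(output)

print(generate("the"))  # e.g. "the moon is made of cheese" -- fluent, never fact-checked
```

Nothing in the loop consults a source of truth; fluency is the only objective. That is the authors' first complaint in miniature.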

Next is the nature of information itself. It is impossible for an AI to tap into a comprehensive database of knowledge because such a thing does not exist and probably never will. The Web, with its contradictions, incomplete information, and downright falsehoods, certainly does not qualify. Though for some queries a quick, straightforward answer is appropriate (how many tablespoons in a cup?), most are not so simple. One must compare answers and evaluate provenance. In fact, the authors note, the very process of considering sources helps us refine our needs and context as well as assess the data itself. We miss out on all that when, in search of a quick answer, we accept the first response from any search system. That temptation is hard enough to resist with a good old-fashioned Google search. The human-like interaction with chatbots just makes it more seductive. The article notes:

“Over both evolutionary time and every individual’s lived experience, natural language to-and-fro has always been with fellow human beings. As we encounter synthetic language output, it is very difficult not to extend trust in the same way as we would with a human. We argue that systems need to be very carefully designed so as not to abuse this trust.”

That is a good point, though AI developers may not be eager to oblige. It remains up to us humans to resist temptation and take the time to think for ourselves.

Cynthia Murrell, January 20, 2023

Eczema? No, Terminator Skin

January 20, 2023

Once again, yesterday's science fiction is today's science fact. ScienceDaily reports, "Soft Robot Detects Damage, Heals Itself." Led by Rob Shepherd, associate professor of mechanical and aerospace engineering, Cornell University's Organic Robotics Lab has developed stretchable fiber-optic sensors. These sensors could be incorporated into soft robots, wearable tech, and other components. We learn:

“For self-healing to work, Shepard says the key first step is that the robot must be able to identify that there is, in fact, something that needs to be fixed. To do this, researchers have pioneered a technique using fiber-optic sensors coupled with LED lights capable of detecting minute changes on the surface of the robot. These sensors are combined with a polyurethane urea elastomer that incorporates hydrogen bonds, for rapid healing, and disulfide exchanges, for strength. The resulting SHeaLDS — self-healing light guides for dynamic sensing — provides a damage-resistant soft robot that can self-heal from cuts at room temperature without any external intervention. To demonstrate the technology, the researchers installed the SHeaLDS in a soft robot resembling a four-legged starfish and equipped it with feedback control. Researchers then punctured one of its legs six times, after which the robot was then able to detect the damage and self-heal each cut in about a minute. The robot could also autonomously adapt its gait based on the damage it sensed.”
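
The detect-heal-adapt cycle described above is, at heart, a feedback loop. A toy sketch in Python may make the logic concrete; the sensor readings, threshold, and timing below are invented for illustration, and the Cornell system is hardware, not this code:

```python
import random

DAMAGE_THRESHOLD = 0.15  # invented: fractional light loss signaling a cut
HEAL_MINUTES = 1         # "about a minute" per cut, per the write-up

def read_light_loss(leg: int) -> float:
    """Stand-in for a fiber-optic sensor; returns a simulated light loss."""
    return random.uniform(0.0, 0.3)

def control_loop(legs: int = 4, steps: int = 5) -> None:
    """Poll each leg; on detected damage, pause to heal, then adapt the gait."""
    for step in range(steps):
        for leg in range(legs):
            if read_light_loss(leg) > DAMAGE_THRESHOLD:
                # Damage detected: rest the leg while the elastomer re-bonds,
                # then resume walking with an adapted gait.
                print(f"step {step}: leg {leg} cut; healing ~{HEAL_MINUTES} min, adapting gait")

control_loop()
```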

Some of us must remind ourselves these robots cannot experience pain when we read such brutal-sounding descriptions. As if to make that even more difficult, we learn this material is similar to human flesh: it can easily heal from cuts but has more trouble repairing burn or acid damage. The write-up describes the researchers’ next steps:

“Shepherd plans to integrate SHeaLDS with machine learning algorithms capable of recognizing tactile events to eventually create ‘a very enduring robot that has a self-healing skin but uses the same skin to feel its environment to be able to do more tasks.'”

Yep, sci-fi made manifest. Stay tuned.

Cynthia Murrell, January 20, 2023
