Wanna Be an AI Entrepreneur? Part 2

August 17, 2023

MIT digital-learning dean Cynthia Breazeal and Yohana founder Yoky Matsuoka have a message for their entrepreneurship juniors. Forbes shares “Why These 50 Over 50 Founders Say Beware of AI ‘Hallucination’.” It is easy to get caught up in the hype around AI and leap into the fray before looking. But would-be AI entrepreneurs must approach their projects with careful consideration.


An entrepreneur “listens” to the AI experts. The AI machine spews money to the entrepreneur. How wonderful new technology is! Thanks, MidJourney, for not asking me to appeal this image.

Contributor Zoya Hansan introduces these AI authorities:

“‘I’ve been watching generative AI develop in the last several years,’ says Yoky Matsuoka, the founder of a family concierge service called Yohana, and formerly a cofounder at Google X and CTO at Google Nest. ‘I knew this would blow up at some point, but that whole ‘up’ part is far bigger than I ever imagined.’

Matsuoka, who is 51, is one of the 20 AI maestros, entrepreneurs and science experts on the third annual Forbes 50 Over 50 list who’ve been early adopters of the technology. We asked these experts for their best advice to younger entrepreneurs leveraging the power of artificial intelligence for their businesses, and each one had the same warning: we need to keep talking about how to use AI responsibly.”

The pair have four basic cautions. First, keep humans on board. AI can often offer up false information, problematically known as “hallucinations.” Living, breathing workers are required to catch and correct these mistakes before they lead to embarrassment or even real harm. The founders also suggest putting guardrails on algorithmic behavior; in other words, impose a moral (literal) code on one’s AI products. For example, eliminate racial and other biases, or refuse to make videos of real people saying or doing things they never said or did.

In terms of launching a business, resist pressure to start an AI company just to attract venture funding. Yes, AI is the hot thing right now, but there is no point if one is in a field where it won’t actually help operations. The final warning may be the most important: “Do the work to build a business model, not just flashy technology.” The need for this basic foundation of a business does not evaporate in the face of hot tech. Learn from Breazeal’s mistake:

“In 2012, she founded Jibo, a company that created the first social robot that could interact with humans on a social and emotional level. Competition with Amazon’s Alexa—which takes commands in a way that Jibo, created as a mini robot that could talk and provide something like companionship, wasn’t designed to do—was an impediment. So too was the ability to secure funding. Jibo did not survive. ‘It’s not the most advanced, best product that wins,’ says Breazeal. ‘Sometimes it’s the company who came up with the right business model and figured out how to make a profit.'”

So would-be entrepreneurs must proceed with caution, refusing to let the pull of the bleeding edge drag them ahead of themselves. But not too much caution.

Cynthia Murrell, August 17, 2023

The ISP Ploy: Heck, No, Mom. I Cannot Find My Other Sock?

August 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Before I retired, my team and I were doing a job for the US Senate. One day at lunch we learned that Google could not provide employment and salary information to a government agency housed in the building in which we were working. The talk, as I recall, was tinged with skepticism. If a large company issues paychecks and presumably files forms with the Internal Revenue Service, records about who was paid and what wages were paid should be available. Google allowed many people to find answers, but the company could not find its own employment data. The way things work in Washington, DC, to the best of my recollection, is that a large company with considerable lobbying help and a flock of legal eagles can make certain processes slow. As staff rotate, certain issues get pushed down the priority pile and some — not every one, of course — fade away.


A young teen who will mature into a savvy ISP tells his mom, “I can’t find my other sock. It is too hard for me to move stuff and find it. If it turns up, I will put it in the laundry.” This basic play is one of the keys to the success of the Internet Service Provider the bright young lad runs today. Thanks, MidJourney. You were back online and demonstrating gradient malfunctioning. Perhaps you need a bit of the old gain of function moxie?

I thought about this “inability” to deliver information when I read “ISPs Complain That Listing Every Fee Is Too Hard, Urge FCC to Scrap New Rule.” I want to focus on one passage in the article and suggest that you read the original report. Keep in mind my anecdote about how a certain big tech outfit handles some US government requests.

Here’s the snippet from the long source document:

…FCC order said the requirement to list “all charges that providers impose at their discretion” is meant to help broadband users “understand which charges are part of the provider’s rate structure, and which derive from government assessments or programs.” These fees must have “simple, accurate, [and] easy-to-understand name[s],” the FCC order said. “Further, the requirement will allow consumers to more meaningfully compare providers’ rates and service packages, and to make more informed decisions when purchasing broadband services. Providers must list fees such as monthly charges associated with regulatory programs and fees for the rental or leasing of modem and other network connection equipment,” the FCC said.

Three observations about the information in the passage:

  1. The argument is identical to that illustrated by the teen in the room filled with detritus. Crap everywhere makes finding easy for the occupant and hard for anyone else. Check out Albert Einstein’s desk on the day he died. Crap piled everywhere. Could he find what he needed? According to his biographers, the answer is, “Yes.”
  2. The idea that a commercial entity which bills its customers does not have the capacity to print out the little row entries in an accounting system is lame in my opinion. The expenses have to be labeled and reported. Even if they are chunked like some of the financial statements crafted by the estimable outfits Amazon and Microsoft, someone has the notes or paper for these items. I know some people who could find these scraps of information; don’t you?
  3. The wild and crazy government agencies invite this type of corporate laissez-faire behavior. Who is in charge? Probably not the government agency, if some recent antitrust cases are considered as proof of performance.
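To make observation two concrete: printing those little row entries is trivial for any billing system. Here is a minimal, hypothetical sketch of turning ledger rows into the consumer-facing fee list the FCC order describes; the ledger data and field names are invented for illustration, not taken from any real ISP.

```python
# Hypothetical ledger rows: (fee name, monthly amount in USD, category).
# Any billing system that charges these fees already stores rows like this.
ledger = [
    ("Regulatory recovery fee", 3.99, "provider-imposed"),
    ("Modem rental", 14.00, "provider-imposed"),
    ("State universal service fund", 1.25, "government"),
]

def fee_disclosure(rows):
    """Group fees the way the FCC order describes: charges the provider
    imposes at its discretion vs. government assessments or programs."""
    out = {}
    for name, amount, category in rows:
        out.setdefault(category, []).append(f"{name}: ${amount:.2f}/mo")
    return out

for category, fees in fee_disclosure(ledger).items():
    print(category)
    for line in fees:
        print(" ", line)
```

If an accounting department can bill a fee, it can emit a line like these. The "too hard" argument is a choice, not a technical constraint.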

Net net: Companies want to be able to fiddle the bills. Period. Printing out comprehensive products and services prices reduces the gamesmanship endemic in the online sector.

Stephen E Arnold, August 16, 2023

AI and Increasing Inequality: Smart Software Becomes the New Dividing Line

August 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Will AI Be an Economic Blessing or Curse?” engages in prognosticative “We will be sorry” analysis. Yep, I learned about this idea in Dr. Francis Chivers’ class about Epistemology at Duquesne University. Wow! Exciting. The idea is that knowing is phenomenological. Today’s manifestation of this mental process is in the “fake data” and “alternative facts” approach to knowledge.


An AI engineer cruising the AI highway. This branch of the road does not permit boondocking or begging. MidJourney disappointed me again. Sigh.

Nevertheless, the article makes a point I find quite interesting; specifically, the author invites me to think about the life of a peasant in the Middle Ages. There were some technological breakthroughs despite the Dark Ages and the charmingly named Black Death. Even though plows improved and water wheels were rediscovered, peasants were born into a fixed social system. The basic idea was that the poor could watch rich people riding through fields and sometimes a hovel in pursuit of fun, of someone who did not meet their quota of wool, or of a toothsome morsel. You will have to identify a suitable substitute for the morsel token.

The write up points out (incorrectly in my opinion):

“AI has got a lot of potential – but potential to go either way,” argues Simon Johnson, professor of global economics and management at MIT Sloan School of Management. “We are at a fork in the road.”

My view is that the AI smart software speedboat is roiling the data lakes. Once those puppies hit 70 mph on the water, the casual swimmers or ill prepared people living in houses on stilts will be disrupted.

The write up continues:

Backers of AI predict a productivity leap that will generate wealth and improve living standards. Consultancy McKinsey in June estimated it could add between $14 trillion and $22 trillion of value annually – that upper figure being roughly the current size of the U.S. economy.

On the bright side, the write up states:

An OECD survey of some 5,300 workers published in July suggested that AI could benefit job satisfaction, health and wages but was also seen posing risks around privacy, reinforcing workplace biases and pushing people to overwork.
“The question is: will AI exacerbate existing inequalities or could it actually help us get back to something much fairer?” said Johnson.

My view is not populated with an abundance of happy faces. Why? Here are my observations:

  1. Those with knowledge about AI will benefit
  2. Those with money will benefit
  3. Those in the right place at the right time and good luck as a sidekick will benefit
  4. Those not in Groups one, two, and three will be faced with the modern equivalent of laboring as a peasant in the fields of the Loire Valley.

The idea that technology democratizes is not in line with my experience. Sure, most people can use an automatic teller machine and a mobile phone functioning as a credit card. Those who can use these tools, however, are not likely to find themselves wallowing in the big bucks of the firms or bureaucrats who are in the AI money rushes.

Income inequality is one visible facet of a new data flyway. Some get chauffeured; others drift through it. Many stand and marvel at rushing flows of money. Some hold signs with messages like “Work needed” or “Homeless. Please, help.”

The fork in the road? Too late. The AI Flyway has been selected. From my vantage point, one benefit will be that those who can drive have some new paths to explore. For many, maybe orders of magnitude more people, the AI Byway opens new areas for those who cannot afford a place to live.

The write up assumes the fork to the AI Flyway has not been taken. It has, and it is not particularly scenic when viewed from a speeding start up gliding on neural networks.

Stephen E Arnold, August 16, 2023

Wanna Be an AI Entrepreneur: Part 1, A How To from Crypto Experts

August 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

For those looking to learn more about AI, venture capital firm Andreessen Horowitz has gathered resources from across the Internet for a course of study it grandly calls the “AI Canon.” It is a VC’s dream curriculum in artificial intelligence. Naturally, the authors include a link to each resource. The post states:

“Research in artificial intelligence is increasing at an exponential rate. It’s difficult for AI experts to keep up with everything new being published, and even harder for beginners to know where to start. So, in this post, we’re sharing a curated list of resources we’ve relied on to get smarter about modern AI. We call it the ‘AI Canon’ because these papers, blog posts, courses, and guides have had an outsized impact on the field over the past several years. We start with a gentle introduction to transformer and latent diffusion models, which are fueling the current AI wave. Next, we go deep on technical learning resources; practical guides to building with large language models (LLMs); and analysis of the AI market. Finally, we include a reference list of landmark research results, starting with ‘Attention is All You Need’ — the 2017 paper by Google that introduced the world to transformer models and ushered in the age of generative AI.”

Yes, the Internet is flooded with articles about AI, some by humans and some by self-reporting algorithms. Even this curated list is a bit overwhelming, but at least it narrows the possibilities. It looks like a good place to start learning more about this inescapable phenomenon. And while there, one can invest in the firm’s hottest prospects, we think.

Cynthia Murrell, August 16, 2023

Does Information Filtering Grant the Power to Control People and Money? Yes, It Does

August 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an article which I found interesting because it illustrates how filtering works. “YouTube Starts Mass Takedowns of Videos Promoting Harmful or Ineffective Cancer Cures.” The story caught my attention because I have seen reports that the US Food & Drug Administration has been trying to explain its use of language in the midst of the Covid anomaly. The problematic word is “quips.” The idea is that official-type information was not intended as more than a “quip.” I noted the explanations as reported in articles similar to “Merely Quips? Appeals Court Says FDA Denunciations of Iv$erm#ctin Look Like Command, Not Advice.” I am not interested in either the cancer or FDA intentions per se.


Two bright engineers built a “filter machine.” One of the engineers (the one with the hat) says, “Cool. We can accept a list of stop words or a list of urls on a watch list and block the content.” The other says, “Yes, and I have added a smart module so that any content entering the Info Shaper is stored. We don’t want to lose any valuable information, do we?” The fellow with the hat says, “No one will know what we are blocking. This means we can control messaging to about five billion people.” The co-worker says, “It is closer to six billion now.” Hey, MidJourney, despite your troubles with the outstanding Discord system, you have produced a semi-useful image a couple of weeks ago.
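The “filter machine” in the caption is not exotic engineering. A minimal, entirely hypothetical sketch in Python (stop words, watch-list domains, and function names are all invented for illustration) shows the two mechanisms the engineers describe, plus the quiet archiving of blocked content:

```python
# Sketch of the caption's "Info Shaper": block content matching a stop-word
# list or a URL watch list, and archive what was blocked instead of losing it.
# All names and data here are illustrative, not any real system's API.

STOP_WORDS = {"bannedterm"}
URL_WATCH_LIST = {"blocked.example.com"}

archive = []  # blocked items are stored, not discarded

def filter_item(url: str, text: str) -> bool:
    """Return True if the item may pass; archive it and return False otherwise."""
    domain = url.split("/")[2] if "//" in url else url
    blocked = domain in URL_WATCH_LIST or any(
        w in text.lower() for w in STOP_WORDS
    )
    if blocked:
        archive.append((url, text))
    return not blocked

print(filter_item("https://ok.example.com/post", "harmless text"))  # True
print(filter_item("https://blocked.example.com/post", "anything"))  # False
print(len(archive))  # 1
```

A few dozen lines like these, applied at platform scale, are enough to decide what billions of people do or do not see.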

The idea which I circled in True Blue was:

The platform will also take action against videos that discourage people from seeking professional medical treatment as it sets out its health policies going forward.

I interpreted this to mean that Alphabet Google is now implementing what I would call editorial policies. The mechanism for deciding what content is “in bounds” and what content is “out of bounds” is not clear to me. In the days when there were newspapers and magazines and non-AI generated books, there were people of a certain type and background who wanted to work in departments responsible for defining and implementing editorial policies. In the days before digital online services destroyed the business models upon which these media depended, the editorial policies operated as an important component of information machines. Commercial databases had editorial policies too. These policies helped provide consistent content based on the guidelines. Some companies did not make a big deal out of the editorial policies. Other companies and organizations did. Either way, the flow of digital content operated like a sandblaster. Now we have experienced 25 years of Wild West content output.

Why do I — a real and still-alive dinobaby — care about the allegedly accurate information in “YouTube Starts Mass Takedowns of Videos Promoting Harmful or Ineffective Cancer Cures”? Here are three reasons:

  1. Control of information has shifted from hundreds of businesses and organizations to a few; therefore, some of the Big Dogs want to make certain they can control information. Who wants a fake cancer cure? Like other straw men, this one works: most people say yes to this type of filtering. A/B testing can “prove” that people want this type of filtering, I would suggest.
  2. The mechanisms to shape content have been a murky subject for Google and other high-technology companies. If the “Mass Takedowns” write up is accurate, Google is making explicit its machine to manage information. Control of information in a society in which many people lack certain capabilities in information analysis and the skills to check the provenance of information is going to operate in a “frame” defined by a commercial enterprise.
  3. The different governmental authorities appear to be content to allow a commercial firm to become the “decider in chief” when it comes to information flow. With concentration and consolidation comes power in my opinion.
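The A/B-testing aside in point one deserves a concrete illustration. A minimal two-proportion z-test sketch, with invented survey numbers, shows how easily a platform can manufacture a statistically “significant” preference for filtering:

```python
import math

# Invented numbers: users shown a filtered vs. an unfiltered feed, and how
# many reported satisfaction. This is illustration, not real platform data.
n_a, sat_a = 1000, 620   # filtered feed
n_b, sat_b = 1000, 540   # unfiltered feed

# Standard two-proportion z-test: pooled proportion, standard error, z score.
p_a, p_b = sat_a / n_a, sat_b / n_b
p_pool = (sat_a + sat_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
print(round(z, 2))  # above 1.96, i.e. "significant" at the 5% level
```

With samples this large, an eight-point gap clears the significance bar comfortably, and the press release writes itself: “users prefer the filtered experience.”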

Is there a fix? No, because I am not sure that independent-thinking individuals have the “horsepower” to change the direction in which the big machine is heading.

Why did I bother to write this? My hope is that someone starts thinking about the implications of a filtering machine. If one does not have access to certain information, like a calculus book, most people cannot solve calculus problems. The same consequence follows when any information is simply not available. Ban books? Sure, great idea. Ban information about a medication? Sure, great idea. Ban discourse on the Internet? Sure, great idea.

You may see where this type of thinking leads. If you don’t, may I suggest you read Alexis de Tocqueville’s Democracy in America. You can find a copy at this link. (Verified on August 15, 2023, but it may be disappeared at any time. And if you can’t read it, you will not know what the savvy French guy spelled out in the mid 19th century.) If you don’t know something, then the information does not exist and will not have an impact on one’s “thinking.”

One final observation to young people, although I doubt I have any youthful readers: “Keep on scrolling.”

Stephen E Arnold, August 15, 2023

 

Killing Horses? Okay. Killing Digital Information? The Best Idea Ever!

August 14, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Fans at the 2023 Kentucky Derby were able to watch horses killed. True, the sport of kings parks vehicles and has people stand around so the termination does not spoil a good day at the races. It seems logical to me that killing information is okay too. Personally, I want horses to thrive without being brutalized amid the mint juleps, and in my opinion, information deserves preservation. Without some type of intentional or unintentional information preservation, what would those YouTuber videos about ancient technology have to display and describe?

In “In the Age of Culling” — an article in the online publication tedium.co — I noted a number of ideas which resonated with me. The first is one of the subheads in the write up; to wit:

CNet pruning its content is a harbinger of something bigger.

The basic idea in the essay is that killing content is okay, just like killing horses.

The article states:

I am going to tell you right now that CNET is not the first website that has removed or pruned its archives, or decided to underplay them, or make them hard to access. Far from it.

The idea is that eliminating content creates an information loss. If one cannot find some item of content, that item of content does not exist for many people.

I urge you to read the entire article.

I want to shift the focus from the tedium.co essay slightly.

With digital information being “disappeared,” the culling cuts away research, some types of evidence, and collective memory. But what happens when a handful of large US companies effectively shape the information used to train smart software? Checking facts becomes more difficult because people “believe” a machine more than a human in many situations.


Two girls looking at a museum exhibit in 2028. The taller girl says, “I think this is what people used to call a library.” The shorter girl asks, “Who needs this stuff? I get what I need to know online. Besides, this looks like a funeral to me.” The taller girl replies, “Yes, let’s go look at the plastic dinosaurs. When you put on the headset, the animals are real.” Thanks, MidJourney, for not including the word “library” or depicting the image I requested. You are so darned intelligent!

Consider the power that information filtering and weaponization convey over those relying on digital information. The statement “harbinger of something bigger” is correct. But if one looks forward, the potential for selective information may be the flip side of forgetting.

Trying to figure out “truth” or “accuracy” is getting more difficult each day. How does one talk about a subject when those in conversation have learned about Julius Caesar from a TikTok video and perceive a problem with tools created to sell online advertising?

This dinobaby understands that cars are speeding down the information highway, and their riders are in a reality defined by online. I am reluctant to name the changes which suggest this somewhat negative view of learning. One believes what one experiences. If those experiences are designed to generate clicks, reduce operating costs, and shape behavior — what does the information landscape look like?

No digital archives? No past. No awareness of information weaponization? No future. Were those horses really killed? Were those archives deleted? Were those Shakespeare plays removed from the curriculum? Were the tweets deleted?

Let’s ask smart software. No thanks, I will do dinobaby stuff despite the efforts to redefine the past and weaponize the future.

Stephen E Arnold, August 14, 2023

MBAs Want to Win By Delivering Value. It Is Like an Abstraction, Right?

August 11, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Is it completely necessary to bring technology into every aspect of one’s business? Maybe, maybe not. But apparently some believe such company-wide “digital transformation” is essential for every organization these days. And, of course, there are consulting firms eager to help. One such outfit, Third Stage Consulting Group, has posted some advice in, “How to Measure Digital Transformation Results and Value Creation.” Value for whom? Third Stage, perhaps? Certainly, if one takes writer Eric Kimberling up on his invitation to contact him for a customized strategy session.

Kimberling asserts that, when embarking on a digital transformation, many companies fail to consider how they will keep the project on time, on budget, and in scope while minimizing operational disruption. Even he admits some jump onto the digital-transformation bandwagon without defining what they hope to gain:

“The most significant and crucial measure of success often goes overlooked by many organizations: the long-term business value derived from their digital transformation. Instead of focusing solely on basic reasons and justifications for undergoing the transformation, organizations should delve deeper into understanding and optimizing the long-term business value it can bring. For example, in the current phase of digital transformation, ERP [Enterprise Resource Planning] software vendors are pushing migrations to new Cloud Solutions. While this may be a viable long-term strategy, it should not be the sole justification for the transformation. Organizations need to define and quantify the expected business value and create a benefits realization plan to achieve it. … Considering the significant investments of time, money, and effort involved, organizations should strive to emerge from the transformation with substantial improvements and benefits.”

So companies should consider carefully what, if anything, they stand to gain by going through this process. Maybe some will find the answer is “nothing” or “not much,” saving themselves a lot of hassle and expense. But if one decides it is worth the trouble, rest assured many consultants are eager to guide you through. For a modest fee, of course.

Cynthia Murrell, August 11, 2023

Generative AI: Good or Bad the Content Floweth Forth

August 11, 2023

Hollywood writers are upset that major studios want to replace them with AI algorithms. While writing bots have not replaced human writers yet, AI algorithms such as ChatGPT, Ryter, Writing.io, and more are everywhere. Threat Source Newsletter explains that, “Every Company Has Its Own Version of ChatGPT Now.”


A flood of content. Thinking drowned. Thanks, MidJourney. I wanted words but got letters. Great job.

AI writing algorithms are also known as AI assistants. They are programmed to answer questions and perform text-based tasks. The text-based tasks include writing résumés, outlines, press releases, Web site content, and more. While the AI assistants still cannot pass the Turing test, it is not stopping big tech companies from developing their own bots. Meta released Llama 2 and IBM rebranded its powerful computer system from Watson to watsonx (it went from a big W to a lower case w and got an “x” too).

While Llama 2, the “new” Watson, and ChatGPT are helpful automation tools, they are also dangerous tools for bad actors. Bad actors use these tools to draft spam campaigns, phishing emails, and scripts. Author Jonathan Munshaw tested AI assistants to see how they responded to illegal prompts.

Llama 2 refused to assist in generating an email for malware, while ChatGPT “gladly” helped draft an email. When Munshaw asked both to write a script to ask a grandparent for a gift card, each interpreted the task differently. Llama 2 advised Munshaw to be polite and aware of the elderly relative’s financial situation. ChatGPT wrote a TV script.

Munshaw wrote that:

“I commend Meta for seeming to have tighter restrictions on the types of asks users can make to its AI model. But, as always, these tools are far from perfect and I’m sure there are scripts that I just couldn’t think of that would make an AI-generated email or script more convincing.”

It will be a while before writers are replaced by AI assistants. They are wonderful tools to improve writing, but humans are still needed for now.

Whitney Grace, August 10, 2023

The Zuckbook Becomes Cooperative?

August 10, 2023

The Internet empowers people to voice their opinions without fear of repercussions, or so they think. While the Internet generally remains anonymous, social media companies must bow to the letter of the law or face fines or other reprisals. Ars Technica shares how a European court forced Meta to share user information in a civil case: “Facebook To Unmask Anonymous Dutch User Accused Of Repeated Defamatory Posts.”

The Netherlands’ Court of The Hague determined that Meta Ireland must share the identity of a user who defamed the claimant, a male Facebook user. The anonymous user “defamed” the claimant by stating he secretly recorded women he dated. The anonymous user posted the negative statements in private Facebook groups about dating experiences. The claimant could not access the groups, but he did see screenshots. He claimed the posts have harmed his reputation.

8 7 cooperation

After cooperating, executives at a big time technology firm celebrate with joy and enthusiasm. Thanks, MidJourney. You have happiness down pat.

The claimant asked Meta to remove the posts, but the company refused on the grounds of freedom of expression. Meta encouraged the claimant to contact the other user; instead, the claimant decided to sue.

Initially, the claimant asked the court to order Meta to delete the posts, identify the anonymous user, and flag any posts in other private Facebook groups that could defame the claimant.

“While arguing the case, Meta had defended the anonymous user’s right to freedom of expression, but the court decided that the claimant—whose name is redacted in court documents—deserved an opportunity to challenge the allegedly defamatory statements.

Partly for that reason, the court ordered Meta to provide “basic subscriber information” on the anonymous user, including their username, as well as any names, email addresses, or phone numbers associated with their Facebook account. The court did not order Meta to remove the posts or flag any others that may have been shared in private groups, though.”

The court decided that freedom of speech is not unlimited and the posts could be defamatory. The court also noted posts did not have to be deemed unlawful to de-anonymize a user.

This has the potential to be a landmark case in online user privacy and accountability on social media platforms. In the future, users might need to practice more restraint and think about consequences before posting online. They might want to read etiquette books from the pre-Internet days when constructive behavior was not an anomaly.

Whitney Grace, August 10, 2023

Technology and AI: A Good Enough and Opaque Future for Humans

August 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“What Self-Driving Cars Tell Us about AI Risks” provides an interesting view of smart software. I sensed two biases in the write up which I want to mention before commenting on the guts of the essay. The first bias is what I call “engineering blindspots.” The idea is that while flaws exist, technology gets better as wizards try and try again. The problem is that “good enough” may not lead to “better now” in a time measured by available funding. Therefore, the optimism engineers have for technology makes them blind to minor issues created by flawed “decisions” or “outputs.”


A technology wizard who took classes in ethics (got a gentleperson’s “C”), advanced statistics (got close enough to an “A” to remain a math major), and applied machine learning experiences a moment of minor consternation at a smart water treatment plant serving portions of New York City. The engineer looks at his monitor and says, “How did that concentration of 500 mg/L of chlorine get into the Newtown Creek Waste Water Treatment Plant?” MidJourney has a knack for capturing the nuances of the emotions of an engineer who ends up as a water treatment engineer, not an AI expert in Silicon Valley.

The second bias is that engineers understand inherent limitations. Non-engineers “lack technical comprehension,” and smart software at this time does not understand “the situation, the context, or any unobserved factors that a person would consider in a similar situation.” The idea is that techno-wizards have a superior grasp of a problem. The gap between an engineer and a user is a big one, and since comprehension gaps are not an engineering problem, that’s the techno-way.

You may disagree. That’s what makes allegedly honest horse races in which stallions don’t fall over dead or have to be terminated in order to spare the creature discomfort and the owners big fees.

Now what about the innards of the write up?

  1. Humans make errors. This raises the question, “Are engineers human in the sense that downstream consequences are important, require moral choices, and bring to mind the humorous medical doctor adage ‘Do no harm’?”
  2. AI failure is tough to predict? But predictive analytics, Monte Carlo simulations, and Fancy Dan statistical procedures exist, as does a humanoid setting a threshold because someone has to do it.
  3. Right now mathy stuff cannot replicate “judgment under uncertainty.” Ah, yes, uncertainty. I would suggest considering fear and doubt too. A marketing trifecta.
  4. Pay off that technical debt. Really? You have to be kidding. How much of the IBM mainframe’s architecture has changed in the last week, month, year, or — do I dare raise this issue — decade? How much of Google’s PageRank has been refactored to keep pace with the need to discharge advertiser-paid messages as quickly as possible regardless of the user’s query? I know. Technical debt. Not an issue.
  5. AI raises “system level implications.” Did that Israeli smart weapon make the right decision? Did the smart robot sever a spinal nerve? Did the smart auto mistake a traffic cone for a child? Of course not. Traffic cones are not an issue for smart cars unless one puts some on the road and maybe one on the hood of a smart vehicle.

Net net: Are you ready for smart software? I know I am. At the AutoZone on Friday, two individuals were unable to replace the paper required to provide a customer with a receipt. I know. I watched for 17 minutes until one of the young professionals gave me a scrawled handwritten note with the credit card code transaction number. Good enough. Let ‘er rip.

Stephen E Arnold, August 9, 2023
