AI and Human Workers: AI Wins for Now

July 17, 2024

When it comes to US employment news, an Australian paper does not beat around the bush. Citing a recent survey from the Federal Reserve Bank of Richmond, The Sydney Morning Herald reports, “Nearly Half of US Firms Using AI Say Goal Is to Cut Staffing Costs.” Gee, what a surprise. Writer Brian Delk summarizes:

“In a survey conducted earlier this month of firms using AI since early 2022 in the Richmond, Virginia region, 45 per cent said they were automating tasks to reduce staffing and labor costs. The survey also found that almost all the firms are using automation technology to increase output. ‘CFOs say their firms are tapping AI to automate a host of tasks, from paying suppliers, invoicing, procurement, financial reporting, and optimizing facilities utilization,’ said Duke finance professor John Graham, academic director of the survey of 450 financial executives. ‘This is on top of companies using ChatGPT to generate creative ideas and to draft job descriptions, contracts, marketing plans, and press releases.’ The report stated that over the past year almost 60 per cent of companies surveyed ‘have implemented software, equipment, or technology to automate tasks previously completed by employees.’ ‘These companies indicate that they use automation to increase product quality (58 per cent of firms), increase output (49 per cent), reduce labor costs (47 per cent), and substitute for workers (33 per cent).’”

Delk points to the Federal Reserve Bank of Dallas for a bit of comfort. Its data shows the impact of AI on employment has been minimal at the nearly 40% of Texas firms using AI. For now. Also, the Richmond survey found manufacturing firms to be more likely (53%) to adopt AI than those in the service sector (43%). One wonders whether that will even out once the uncanny valley has been traversed. Either way, it seems businesses are getting more comfortable replacing human workers with cheaper, more subservient AI tools.

Cynthia Murrell, July 17, 2024

AI: Helps an Individual, Harms Committee Thinking Which Is Often Sketchy at Best

July 16, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I spotted an academic journal article type write up called “Generative AI Enhances Individual Creativity But Reduces the Collective Diversity of Novel Content.” I would give the paper a C, an average grade. The most interesting point in the write up is that when one person uses smart software like a ChatGPT-type service, the output can make that person seem to a third party smarter, more creative, and more insightful than a person slumped over a wine bottle outside of a drug dealer’s digs.

The main point, which I found interesting, is that a group using ChatGPT drops down into my IQ range, which is “Dumb Turtle.” I think this is potentially significant. I use the word “potential” because the study relied upon human “evaluators” and imprecise subjective criteria; for instance, novelty and emotional characteristics. This means that if the evaluators are teachers or people who critique writing for a living, these folks bring baked-in biases and preconceptions to the judgments. I know this first hand because one of my pieces of writing was published in the St. Louis Post Dispatch at the same time my high school English teacher slapped on a C for narrative value and a D for language choice. She was not a fan of my phrase “burger boat drive in.” Anyway, I got paid $18 for the write up.

Let’s pick up this “finding” that a group degenerates or converges on mediocrity. (Remember, please, that a camel is a horse designed by a committee.) Here’s how the researchers express this idea:

While these results point to an increase in individual creativity, there is risk of losing collective novelty. In general equilibrium, an interesting question is whether the stories enhanced and inspired by AI will be able to create sufficient variation in the outputs they lead to. Specifically, if the publishing (and self-publishing) industry were to embrace more generative AI-inspired stories, our findings suggest that the produced stories would become less unique in aggregate and more similar to each other. This downward spiral shows parallels to an emerging social dilemma (42): If individual writers find out that their generative AI-inspired writing is evaluated as more creative, they have an incentive to use generative AI more in the future, but by doing so, the collective novelty of stories may be reduced further. In short, our results suggest that despite the enhancement effect that generative AI had on individual creativity, there may be a cautionary note if generative AI were adopted more widely for creative tasks.

I am familiar with the stellar outputs of committees. Some groups deliver zero or even retrograde outputs; that is, the committee makes a situation worse. I am thinking of the home owners’ association about a mile from my office. One aggrieved home owner attended a board meeting and shot one of the elected officials. Exciting, plus the scene of the murder was a church conference room. Driveways can be hot topics when the group decides to change rules which affect a fellow’s own driveway.

Sometimes committees come up with good ideas; for example, at one government agency where I was serving as the IV&V professional (independent verification and validation), the committee decided to disband because there was a tiny bit of hanky panky in the procurement process. That was a good idea.

Other committee outputs are worthless; for example, the transcripts of the questions from elected officials directed to high-technology executives. I won’t name any committees of this type because I worked for a congress person, and I observe the unofficial rule: Button up, butter cup.

Let me offer several observations about smart software producing outputs that point to dumb turtle mode:

  1. Services firms (lawyers and blue chip consultants) relying on smart software will produce less useful information than what crazed Type A achievers produce. Yes, I know that one major blue chip consulting firm helped engineer the excitement one can see in certain towns in West Virginia, but imagine even more negative downstream effects. Wow!
  2. Dumb committees relying on AI will be among the first to suggest, “Let AI set the agenda.” And, “Let AI provide the list of options.” Great idea and one that might be more exciting than an aircraft door exiting the airplane frame at 15,000 feet.
  3. The bean counters in the organization will look at the efficiency of using AI for committee work and probably suggest, “Let’s eliminate the staff who spend more than 85 percent of their time in committee meetings.” That will save money and produce some interesting downstream consequences. (I once had a job which was to attend committee meetings.)

Net net: AI will help some; AI will also produce surprises which, it seems, cannot be easily anticipated.

Stephen E Arnold, July 16, 2024

AI and Electricity: Cost and Saving Whales

July 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Grumbling about the payoff from those billions of dollars injected into smart software continues. The most recent angle is electricity. AI is a power sucker, a big-time energy glutton. I learned this when I read the slightly alarmist write up “Artificial Intelligence Needs So Much Power It’s Destroying the Electrical Grid.” Texas, not a hot bed of AI excitement, seems to be doing quite well with the power grid problem without much help from AI. Mother Nature has made vivid the weaknesses of the infrastructure in that great state.


Some dolphins may love the power plant cooling effluent (run off). Other animals, not so much. Thanks, MSFT Copilot. Working on security this week?

But let’s get back to saving whales and the piggishness of those with many GPUs processing data to help out the eighth-graders with their 200 word essays.

The write up says:

As a recent report from the Electric Power Research Institute lays out, just 15 states contain 80% of the data centers in the U.S. Some states – such as Virginia, home to Data Center Alley – astonishingly have over 25% of their electricity consumed by data centers. There are similar trends of clustered data center growth in other parts of the world. For example, Ireland has become a data center nation.

So what?

The article says that it takes just two years to spin up a smart software data center but it takes four years to enhance an electrical grid. Based on my experience at a unit of Halliburton specializing in nuclear power, the four year number seems a bit optimistic. One doesn’t flip a switch and turn on Three Mile Island. One does not pick a nice spot near a river and start building a nuclear power reactor. Despite the recent Supreme Court ruling calling into question what certain frisky Executive Branch agencies can require, home owners’ associations and medical groups can make life interesting. Plus building out energy infrastructure is expensive and takes time. How long does it take for several feet of specialized concrete to set? Longer than pouring some hardware store quick fix into a hole in your driveway?

The article says:

There are several ways the industry is addressing this energy crisis. First, computing hardware has gotten substantially more energy efficient over the years in terms of the operations executed per watt consumed. Data centers’ power use efficiency, a metric that shows the ratio of power consumed for computing versus for cooling and other infrastructure, has been reduced to 1.5 on average, and even to an impressive 1.2 in advanced facilities. New data centers have more efficient cooling by using water cooling and external cool air when it’s available. Unfortunately, efficiency alone is not going to solve the sustainability problem. In fact, Jevons paradox points to how efficiency may result in an increase of energy consumption in the longer run. In addition, hardware efficiency gains have slowed down substantially as the industry has hit the limits of chip technology scaling.
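A quick back-of-the-envelope, using hypothetical numbers and assuming the usual definition of power use efficiency (total facility power divided by IT equipment power), shows what those 1.5 and 1.2 figures mean in megawatts:

```python
# Back-of-the-envelope PUE illustration (hypothetical numbers).
# PUE = total facility power / IT equipment power, so a lower value
# means less overhead for cooling and other infrastructure.

def total_facility_mw(it_load_mw: float, pue: float) -> float:
    """Return total facility power for a given IT load and PUE."""
    return it_load_mw * pue

it_load_mw = 10.0  # hypothetical 10 MW of GPUs and servers
for pue in (1.5, 1.2):  # the average and "advanced facility" figures cited above
    total = total_facility_mw(it_load_mw, pue)
    overhead = total - it_load_mw
    print(f"PUE {pue}: {total:.1f} MW total, {overhead:.1f} MW for cooling and other overhead")

# PUE 1.5: 15.0 MW total, 5.0 MW of overhead
# PUE 1.2: 12.0 MW total, 2.0 MW of overhead
```

The better ratio trims the cooling overhead, but the 10 MW the computing gear itself draws does not shrink, which is where the Jevons paradox point in the quoted passage bites.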

Okay, let’s put aside the grid and the dolphins for a moment.

AI has and will continue to have downstream consequences. Although the methods of smart software are “old” when measured in terms of Internet innovations, the knock on effects are not known.

Several observations are warranted:

  1. Power consumption can be scheduled. The method worked to combat air pollution in Poland, and it will work for data centers. (Sure, the folks wanting computation will complain, but suck it up, buttercups. Plan and engineer for efficiency.)
  2. The electrical grid, like the other infrastructures in the US, needs investment. This is a job for private industry and the governmental authorities. Do some planning and deliver results, please.
  3. Those wanting to scare people will continue to exercise their First Amendment rights. Go for it. However, I would suggest that putting observations in a more informed context may be helpful. When six o’clock news weather people scare the heck out of fifth graders every time a storm or snow approaches, is that an appropriate approach to factual information? Answer: Sure, when it gets clicks, eyeballs, and ad money.

Net net: No big changes for now are coming. I hope that the “deciders” get their Fiat 500 in gear.

Stephen E Arnold, July 15, 2024

AI Weapons: Someone Just Did Actual Research!

July 12, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read a write up that had more in common with a write up about the wonders of a steam engine than a technological report of note. The title of the “real” news report is “AI and Ukraine Drone Warfare Are Bringing Us One Step Closer to Killer Robots.”

I poked through my files and found a couple of images posted as either advertisements for specialized manufacturing firms or by marketers hunting for clicks among the warfighting crowd. Here’s one:


The illustration represents a warfighting drone. I was able to snap this image in a lecture I attended in 2021. At that time, an individual could purchase the device online in quantity for about US$9,000.

Here’s another view:


This militarized drone has 10 inch (254 millimeter) propellers / blades.

The boxy looking thing below the rotors houses electronics, batteries, and a payload of something like an octanitrocubane- or HMX-type kinetic charge.

Imagine four years ago, a person or organization could buy a couple of these devices and use them in a way warmly supported by bad actors. Why fool around with an unreliable individual pumped on drugs to carry a mobile phone that would receive the “show time” command? Just sit back. Guide the drone. And — well — evidence that kinetics work.

The write up is, therefore, years behind what’s been happening in some countries for years. Yep, years.

Consider this passage:

As the involvement of AI in military applications grows, alarm over the eventual emergence of fully autonomous weapons grows with it.

I want to point out that Palmer Luckey’s Anduril outfit has been fooling around in the autonomous system space since 2017. One buzz phrase an Anduril person used in a talk was, “Lattice for Mission Autonomy.” Was Mr. Luckey the first to focus on this area? Based on what I picked up at a couple of conferences in Europe in 2015, the answer is, “Nope.”

The write up does have a useful factoid in the “real” news report:

It is not technology. It is not range. It is not speed, stealth, or sleekness.

It is cheap. Yes, low cost. Why spend thousands when one can assemble a drone with hobby parts, a repurposed radio control unit from the local model airplane club, and a workable but old mobile phone?

Sign up for Telegram. Get some coordinates and let that cheap drone fly. If an operating unit has a technical whiz on the team, just let the gizmo go and look for rectangular shapes with a backpack near them. (That’s a soldier answering nature’s call.) Autonomy may not be perfect, but close enough can work.

The write up says:

Attack drones used by Ukraine and Russia have typically been remotely piloted by humans thus far – often wearing VR headsets – but numerous Ukrainian companies have developed systems that can fly drones, identify targets, and track them using only AI. The detection systems employ the same fundamentals as the facial recognition systems often controversially associated with law enforcement. Some are trained with deep learning or live combat footage.

Does anyone doubt that other nation-states have figured out how to use off-the-shelf components to change how warfighting takes place? Ukraine started the drone innovation thing late. Some other countries have been beavering away on autonomous capabilities for many years.

For me, the most important factoid in the write up is:

… Ukrainian AI warfare reveals that the technology can be developed rapidly and relatively cheaply. Some companies are making AI drones using off-the-shelf parts and code, which can be sent to the frontlines for immediate live testing. That speed has attracted overseas companies seeking access to battlefield data.

Yep, cheap and fast.

Innovation in some countries is locked in a time warp due to procurement policies and bureaucracy. The US F-35 was conceived decades ago. Not surprisingly, today’s deployed aircraft lack the computing sophistication of the semiconductors in a mobile phone I can acquire today at a local mobile phone repair shop, often one operating from a trailer on Dixie Highway. A chip from the 2001 time period is not going to deliver the TikTok-type or smart software-type functions of a current iPhone.

So cheap and speedy iteration are the big reveals in the write up. Are those the hallmarks of US defense procurement?

Stephen E Arnold, July 12, 2024

OpenAI Says, Let Us Be Open: Intentionally or Unintentionally

July 12, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read a troubling but not too surprising write up titled “ChatGPT Just (Accidentally) Shared All of Its Secret Rules – Here’s What We Learned.” I have somewhat skeptical thoughts about how big time organizations implement, manage, maintain, and enhance their security. It is more fun and interesting to think about moving fast, breaking things, and dominating a market sector. In my years of dinobaby experience, I can report this about senior management thinking about cyber security:

  1. Hire a big name and let that person figure it out
  2. Ask the bean counter and hear something like this, “Security is expensive, and its monetary needs are unpredictable and usually quite large and just go up over time. Let me know what you want to do.”
  3. The head of information technology will say, “I need to license a different third party tool and get those cyber experts from [fill in your own preferred consulting firm’s name].”
  4. How much is the ransom compared to the costs of dealing with our “security issue”? Just do what costs less.
  5. I want to talk right now about the meeting next week with our principal investor. Let’s move on. Now!


The captain of the good ship OpenAI asks a good question. Unfortunately the situation seems to be somewhat problematic. Thanks, MSFT Copilot.

The write up reports:

ChatGPT has inadvertently revealed a set of internal instructions embedded by OpenAI to a user who shared what they discovered on Reddit. OpenAI has since shut down the unlikely access to its chatbot’s orders, but the revelation has sparked more discussion about the intricacies and safety measures embedded in the AI’s design. Reddit user F0XMaster explained that they had greeted ChatGPT with a casual "Hi," and, in response, the chatbot divulged a complete set of system instructions to guide the chatbot and keep it within predefined safety and ethical boundaries under many use cases.
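For readers who wonder what “a complete set of system instructions” looks like mechanically, here is a minimal sketch, assuming the current OpenAI Python client and a made-up system prompt. Vendor-supplied instructions of this sort ride along, invisibly, with every user turn; the Reddit incident amounted to the model reciting that hidden “system” content back to the user:

```python
# Minimal sketch of how system-level instructions accompany a chat request.
# The system prompt text below is hypothetical; OpenAI's real internal
# instructions are what the Reddit user reportedly saw echoed back.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "system", "content": "You are ChatGPT. Follow the safety and style rules below..."},
    {"role": "user", "content": "Hi"},  # the casual greeting from the article
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
# A well-behaved model answers the greeting; the reported bug was that it
# recited the system message instead.
```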

Another twist to the OpenAI governance approach is described in “Why Did OpenAI Keep Its 2023 Hack Secret from the Public?” That is a good question, particularly for an outfit which is all about “open.” This article gives the wonkiness of OpenAI’s technology some dimensionality. The article reports:

Last April [2023], a hacker stole private details about the design of Open AI’s technologies, after gaining access to the company’s internal messaging systems. …

OpenAI executives revealed the incident to staffers in a company all-hands meeting the same month. However, since OpenAI did not consider it to be a threat to national security, they decided to keep the attack private and failed to inform law enforcement agencies like the FBI.

What’s more, with OpenAI’s commitment to security already being called into question this year after flaws were found in its GPT store plugins, it’s likely the AI powerhouse is doing what it can to evade further public scrutiny.

What these two separate items suggest to me is that the decider(s) at OpenAI decide to push out products which are not carefully vetted. Second, when something surfaces OpenAI does not find amusing, the company appears to zip its sophisticated lips. (That’s the opposite of divulging “secrets” via ChatGPT, isn’t it?)

Is the company OpenAI well managed? I certainly do not know from first hand experience. However, it seems to me that the company is a trifle erratic. Imagine: the Chief Technical Officer allegedly did not know a few months ago whether YouTube data were used to train ChatGPT. Then the breach and the decision to keep quiet about it. And, finally, the OpenAI customer who stumbled upon company secrets in a ChatGPT output.

Please, make your own decision about the company. Personally I find it amusing to identify yet another outfit operating with the same thrilling erraticism as other Sillycon Valley meteors. And security? Hey, let’s talk about August vacations.

Stephen E Arnold, July 12, 2024

Big Plays or Little Plays: The Key to AI Revenue

July 11, 2024

I keep thinking about the billions and trillions of dollars required to create a big AI win. A couple of snappy investment banks have edged toward the idea that AI might not pay off with tsunamis of money right away. The fix is to become brokers for GPU cycles or to issue “humble brags” about how more money is needed to fund what venture people want to be the next big thing. Yep, AI: A couple of winners and the rest are losers, at least in terms of a pay off scale whacked around like a hapless squash ball at the New York Athletic Club.

However, a radical idea struck me as I read a report from the news service that oozes “trust.” The Reuters story is “China Leads the World in Adoption of Generative AI Survey Shows.” Do I trust surveys? Not really. Do I trust trusted “real” news outfits? Nope, not really. But the write up includes an interesting statement, and the report sparked what is for me a new idea.

First, here’s the passage I circled:

“Enterprise adoption of generative AI in China is expected to accelerate as a price war is likely to further reduce the cost of large language model services for businesses. The SAS report also said China led the world in continuous automated monitoring (CAM), which it described as “a controversial but widely-deployed use case for generative AI tools”.”

I interpreted this to mean:

  • Small and big uses of AI in somewhat mundane tasks
  • Lots of small uses with more big outfits getting with the AI program
  • AI allows nifty monitoring which is going to catch the attention of some Chinese government officials who may be able to repurpose these focused applications of smart software

With models available as open source, like the nifty Meta Facebook Zuck concoction, big technology is available. Furthermore, the idea of applying smart software to small problems makes sense. The approach avoids the Godzilla lumbering associated with some outfits, and fast iteration with fast failures provides useful factoids for other developers.

The “real” news report does not provide numbers or much in the way of analysis. I think the idea of small-scale applications does not make sense when one is eating fancy food at a smart software briefing in mid town Manhattan. Small is not going to generate that big wave of money from AI. The money is needed to raise more money.

My thought is that the Chinese approach has value because it is surfing on open source and some proprietary information known to Chinese companies solving or trying to solve a narrow problem. Also, the crazy pace of try-fail, try-fail enables acceleration of what works. Failures translate to lessons about which lousy paths to avoid.

Therefore, my reaction to the “real” news about the survey is that China may be in a position to do better, faster, and cheaper AI applications than the Godzilla outfits. The chase for big money exists, but in the US without big money, who cares? In China, big money may not be as large as the pile of cash some VCs and entrepreneurs argue is absolutely necessary.

So what? The “let many flowers bloom” idea applies to AI. That’s a strength possibly neither appreciated nor desired by the US AI crowd. Combined with China’s patent surge, my new thought translates to “oh, oh.”

Stephen E Arnold, July 11, 2024

Common Sense from an AI-Centric Outfit: How Refreshing

July 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In the wild and wonderful world of smart software, common sense is often tucked beneath a stack of PowerPoint decks and vaporized by jargon-spouting experts in artificial intelligence. I want to highlight “Interview: Nvidia on AI Workloads and Their Impacts on Data Storage.” An Nvidia poohbah named Charlie Boyle output some information that is often ignored by quite a few of those riding the AI pony to the pot of gold at the end of the AI rainbow.


The King Arthur of senior executives is confident that in his domain he is the master of his information. By the way, this person has an MBA, a law degree, and a CPA certification. His name is Sir Walter Mitty of Dorksford, near Swindon. Thanks, MSFT Copilot.  Good enough.

Here’s the pivotal statement in the interview:

… a big part of AI for enterprise is understanding the data you have.

Yes, the dwellers in carpetland typically operate with some King Arthur type myths galloping around the castle walls; specifically:

Myth 1: We have excellent data

Myth 2: We have a great deal of data and more arriving every minute our systems are online

Myth 3: Our data are available and in just a few formats. Processing the information is going to be pretty easy.

Myth 4: Our IT team can handle most of the data work. We may not need any outside assistance for our AI project.

Will companies map these myths to their reality? Nope.

The Nvidia expert points out:

…there’s a ton of ready-made AI applications that you just need to add your data to.

“Ready made”: Just like a Betty Crocker cake mix my grandmother thought tasted fake, not as good as home made. Granny’s comment could be applied to some of the AI tests my team have tracked; for example, the Big Apple’s chatbot outputting  comments which violated city laws or the exciting McDonald’s smart ordering system. Sure, I like bacon on my on-again, off-again soft serve frozen dessert. Doesn’t everyone?

The Nvidia expert offers this comment about storage:

If it’s a large model you’re training from scratch you need very fast storage because a lot of the way AI training works is they all hit the same file at the same time because everything’s done in parallel. That requires very fast storage, very fast retrieval.

Is that a problem? Nope. Just crank up the cloud options. No big deal, except it is. There are costs and time to consider. But otherwise this is no big deal.
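To see why “very fast storage” is not hand waving, a rough estimate with invented numbers (mine, not Nvidia’s) is enough:

```python
# Rough estimate of aggregate read bandwidth for parallel training.
# All numbers are hypothetical; real figures depend on model, data, and hardware.

num_gpus = 1024                 # GPUs reading the dataset in parallel
samples_per_sec_per_gpu = 50    # training throughput per GPU
bytes_per_sample = 2_000_000    # ~2 MB per sample (e.g., an image or a tokenized shard chunk)

aggregate_bytes_per_sec = num_gpus * samples_per_sec_per_gpu * bytes_per_sample
print(f"{aggregate_bytes_per_sec / 1e9:.1f} GB/s of sustained reads")
# -> 102.4 GB/s, far beyond what a single disk or NAS head delivers,
# which is why parallel file systems and object stores enter the picture.
```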

The article contains one gem and then wanders into marketing “don’t worry” territory.

From my point of view, the data issue is the big deal. Bad, stale, and incomplete information, plus data in odd ball formats — these exist in organizations now. The mass of data may have 40 percent or more which has never been accessed. Other data are back ups which contain versions of files with errors, copyright protected data, and Boy Scout trip plans. (Yep, non work information on “work” systems.)
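If “understanding the data you have” is step one, even a crude inventory beats a guess. Here is a minimal sketch, standard library only, with a hypothetical file share path and an arbitrary staleness cutoff, of the kind of audit carpetland rarely runs:

```python
# Crude data inventory: what formats exist, how big they are, and when
# anything last touched them. Path and staleness threshold are hypothetical.
import os
import time
from collections import Counter

ROOT = "/mnt/shared_drive"        # hypothetical corporate file share
STALE_AFTER_DAYS = 365            # arbitrary cutoff for "nobody has looked at this"

formats = Counter()
stale_bytes = 0
total_bytes = 0
now = time.time()

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            stat = os.stat(path)
        except OSError:
            continue  # broken links, permission problems, etc.
        ext = os.path.splitext(name)[1].lower() or "<none>"
        formats[ext] += 1
        total_bytes += stat.st_size
        if (now - stat.st_atime) > STALE_AFTER_DAYS * 86400:
            stale_bytes += stat.st_size

print("File formats:", formats.most_common(10))
if total_bytes:
    print(f"Stale share of bytes: {100 * stale_bytes / total_bytes:.0f}%")
```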

Net net: The data issue is an important one to consider before getting into the “let’s deploy a customer support smart chatbot” phase. Will carpetland dwellers focus on the first step? Not too often. That’s why some AI projects get lost or just succumb to rising, uncontrollable costs. Moving data? No problem. Bad data? No problem. Useful AI system? Hmmm. How much does storage cost anyway? Oh, not much.

Stephen E Arnold, July 11, 2024

Oxygen: Keep the Bait Alive for AI Revenue

July 10, 2024

Andreessen Horowitz published “Who Owns the Generative AI Platform?” in January 2023. The rah-rah appeared almost at the same time as the Microsoft OpenAI deal marketing coup.  In that essay, the venture firm and publishing firm stated this about AI: 

…there is enough early data to suggest massive transformation is taking place. What we don’t know, and what has now become the critical question, is: Where in this market will value accrue?

Now a partial answer is emerging. 

The Information, an online information service with a paywall, revealed “Andreessen Horowitz Is Building a Stash of More Than 20,000 GPUs to Win AI Deals.” That report asserts:

The firm has secured thousands of AI chips, including Nvidia H100 graphics processing units, and is renting them to portfolio companies, according to a person who has discussed the initiative with the firm’s partners…. Andreessen Horowitz has told startup founders the initiative is called “oxygen.”

The initiative reflects what might be a way to hook promising AI outfits and plop them into the firm’s large foldable floating fish basket for live caught gill-bearing vertebrate animals, sometimes called chum.

This factoid emerges shortly after a big Silicon Valley venture outfit raved about the oodles of opportunity AI represents. Plus, reports about Blue Chip consulting firms’ through-the-roof AI consulting have encouraged a couple of the big outfits to offer AI services. In addition to opining and advising, the consulting firms are moving aggressively into the AI implementing and operating business.

The morphing of a venture firm into a broker of GPU cycles complements the thinking-for-money firms’ shifting gears to a more hands-on approach.

There are several implications from my point of view:

  • The fastest way to make money from the AI frenzy is to charge people so they can “do” AI
  • Without a clear revenue stream of sufficient magnitude to foot the bill for the rather hefty costs of “doing” AI with a chance of making cash, selling blue jeans to the miners makes sense. But changing business tactics can add an element of spice to an unfamiliar restaurant’s special of the day
  • The move from passive (thinking and waiting) to a more active (doing and charging for hardware and services) brings a different management challenge to the companies making the shift.

These factors suggest that the best way to cash in on AI is to provide what Andreessen Horowitz calls oxygen. It is a clear indication that the AI fish will die without some aggressive intervention. 

I am a dinobaby, sitting in my rocker on the porch of the rest home watching the youngsters scramble to make money from what was supposed to be a sure-fire winner. What we know from watching those lemonade stand operators is that success is often difficult to achieve. The grade school kids setting up shop in a subdivision where heat and fatigue take their toll give up and go inside where the air is cool and TikTok waits.

Net net: The Andreessen Horowitz revelation is one more indication that the costs of AI and the difficulty of generating sufficient revenue are starting to hit home. Therefore, advisors’ thoughts seem to be turning to actions designed to produce cash, magnetism, and success. Will the efforts produce the big payoffs? I wonder if these tactical plays are brilliant moves or just another neighborhood lemonade stand.

Stephen E Arnold, July 10, 2024

Market Research Shortcut: Fake Users Creating Fake Data

July 10, 2024

Market research can be complex and time consuming. It would save so much time if one could consolidate thousands of potential respondents into one model. A young AI firm offers exactly that, we learn from Nielsen Norman Group’s article, “Synthetic Users: If, When, and How to Use AI Generated ‘Research.’”

But are the results accurate? Not so much, according to writers Maria Rosala and Kate Moran. The pair tested fake users from the young firm Synthetic Users and ones they created using ChatGPT. They compared responses to sample questions from both real and fake humans. Each group gave markedly different responses. The write-up notes:

“The large discrepancy between what real and synthetic users told us in these two examples is due to two factors:

  • Human behavior is complex and context-dependent. Synthetic users miss this complexity. The synthetic users generated across multiple studies seem one-dimensional. They feel like a flat approximation of the experiences of tens of thousands of people, because they are.
  • Responses are based on training data that you can’t control. Even though there may be proof that something is good for you, it doesn’t mean that you’ll use it. In the discussion-forum example, there’s a lot of academic literature on the benefits of discussion forums on online learning and it is possible that the AI has based its response on it. However, that does not make it an accurate representation of real humans who use those products.”

That seems obvious to us, but apparently some people need to be told. The lure of fast and easy results is strong. See the article for more observations. Here are a couple worth noting:

“Real people care about some things more than others. Synthetic users seem to care about everything. This is not helpful for feature prioritization or persona creation. In addition, the factors are too shallow to be useful.”

Also:

“Some UX [user experience] and product professionals are turning to synthetic users to validate product concepts or solution ideas. Synthetic Users offers the ability to run a concept test: you describe a potential solution and have your synthetic users respond to it. This is incredibly risky. (Validating concepts in this way is risky even with human participants, but even worse with AI.) Since AI loves to please, every idea is often seen as a good one.”

So as appealing as this shortcut may be, it is a fast track to incorrect results. Basing business decisions on “insights” from shallow, eager-to-please algorithms is unwise. The authors interviewed Synthetic Users’ cofounder Hugo Alves. He acknowledged the tools should only be used as a supplement to surveys of actual humans. However, the post points out, the company’s website seems to imply otherwise: it promises “User research. Without the users.” That is misleading, at best.

Cynthia Murrell, July 10, 2024

The AI Revealed: Look Inside That Kimono and Behind It. Eeew!

July 9, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Guardian article “AI scientist Ray Kurzweil: ‘We Are Going to Expand Intelligence a Millionfold by 2045’” is quite interesting for what it does not do: Flip the projection output by a Googler hired by Larry Page himself in 2012.


Putting toothpaste back in a tube is easier than dealing with the uneven consequences of new technology. What if rosy descriptions of the future are just marketing and making darned sure the top one percent remain in the top one percent? Thanks Chat GPT4o. Good enough illustration.

First, a bit of math. Humans have been doing big tech for centuries. And where are we? We are post-Covid. We have homelessness. We have numerous armed conflicts. We have income inequality in the US and a few other countries I have visited. We have a handful of big tech companies in the AI game which want to be God to use Mark Zuckerberg’s quaint observation. We have processed food. We have TikTok. We have systems which delight and entertain each day because of bad actors’ malware, wild and crazy education, and hybrid work with the fascinating phenomenon of coffee badging; that is, going to the office, getting a coffee, and then heading to the gym.

Second, the distance in earth years between 2024 and 2045 is 21 years. In the humanoid world, a 20 year old today will be 41 when the prediction arrives. Is that a long time? Not for me. I am 80, and I hope I am out of here by then.
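A quick bit of arithmetic (mine, not the article’s) puts the headline claim in scale: a millionfold expansion over 21 years implies nearly doubling every year, year after year, for two decades.

```python
# Implied annual growth factor for a millionfold increase over 21 years.
growth_per_year = 1_000_000 ** (1 / 21)
print(round(growth_per_year, 2))  # ~1.93, i.e., close to doubling annually
```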

Third, let’s look at the assertions in the write up.

One of the notable statements in my opinion is this one:

I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.

I like the quality of modesty and humblebrag. Googlers excel at both.

Another statement I circled is:

The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one.

I like the idea that the energy consumption required to deliver this merging will be cheap and plentiful. Googlers do not worry about a power failure, the collapse of a dam due to the ministrations of the US Army Corps of Engineers and time, or dealing with the environmental consequences of producing and moving energy from Point A to Point B. If Google doesn’t worry, I don’t.

Here’s a quote from the article allegedly made by Mr. Singularity aka Ray Kurzweil:

I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing.

I wonder if the Asilomar AI Principles are embedded in Google’s system that recommended a way to keep cheese on a pizza from sliding to an undesirable location? Are the “go fast” AI crowd and the “go slow” group even aware of the Asilomar AI Principles? If they are, perhaps the Principles are balderdash? Just asking, of course.

Okay, I think these points are sufficient for going back to my statements about processed food, wars, big companies in the AI game wanting to be “god” et al.

The trajectory of technology in the computer age has been a mixed bag of benefits and liabilities. In the next 21 years, will this report card with some As, some Bs, lots of Cs, some Ds, and the inevitable Fs be different? My view is that the winners with human expertise and the know how to make money will benefit. I think that the other humanoids may be in for a world of hurt. The homelessness stuff, the being dumb when it comes to reading, writing, and arithmetic, and the consuming of chemicals or other “stuff” that parks the brain will persist.

The future of hooking the human to the cloud is perfect for some. Others may not have the resources to connect, a bit like farmers in North Dakota with no affordable or reliable Internet access. (Maybe Starlink-type services will rescue those with cash?)

Several observations are warranted:

  1. Technological “progress” has been and will continue to be a mixed bag. Sorry, Mr. Singularity. The top one percent surf on change. The other 99 percent are not slam dunk winners.
  2. The infrastructure issue is simply ignored, which is convenient. I mean if a person grew up with house servants, it is difficult to imagine not having people do what you tell them to do. (Could people without access find delight in becoming house servants to the one percent who thrive in 2045?)
  3. The extreme contention created by the deconstruction of shared values, norms, and conventions for social behavior is something that cannot be reconstructed with a cloud and human mind meld. Once toothpaste is out of the tube, one has a mess. One does not put the paste back in the tube. One blasts it away with a zap of Goo Gone. I wonder if that’s another omitted consequence of this super duper intelligence behavior: Get rid of those who don’t get with the program?

Net net: Googlers are a bit predictable when they predict the future. Oh, where’s the reference to online advertising?

Stephen E Arnold, July 9, 2024
