AI May Be Like a Red, Red Rose: Fading Fast? Oh, No

September 20, 2023

Well, that was fast. Vox ponders, “Is the AI Boom Already Over?” Reporter Sara Morrison recounts generative AI’s adventure over the past year, from the initial wonder at tools like ChatGPT and assorted image generators to the sky-high investments in AI companies. Now, though, the phenomenon may be drifting back to Earth. Morrison writes:

“Several months later, the bloom is coming off the AI-generated rose. Governments are ramping up efforts to regulate the technology, creators are suing over alleged intellectual property and copyright violations, people are balking at the privacy invasions (both real and perceived) that these products enable, and there are plenty of reasons to question how accurate AI-powered chatbots really are and how much people should depend on them. Assuming, that is, they’re still using them. Recent reports suggest that consumers are starting to lose interest: The new AI-powered Bing search hasn’t made a dent in Google’s market share, ChatGPT is losing users for the first time, and the bots are still prone to basic errors that make them impossible to trust. In some cases, they may be even less accurate now than they were before. A recent Pew survey found that only 18 percent of US adults had ever used ChatGPT, and another said they’re becoming increasingly concerned about the use of AI. Is the party over for this party trick?”

The post hastens to add that generative AI is here to stay. It is just that folks are a bit less excited about it. Besides Bing’s mediocre AI showing, cited above, the article supplies examples of several other disappointing projects. One key reason for the decline is generative AI’s tendency to simply get things wrong. Many hoped this issue would soon be resolved, but it may actually be getting worse. Other problems, of course, include that stubborn bias problem and inappropriate comments. Until its many flaws are resolved, Morrison observes, generative AI should probably remain no more than a party trick.

Cynthia Murrell, September 20, 2023

Gemini Cricket: Another World Changer from the Google

September 19, 2023

AI lab DeepMind, acquired by Google in 2014, is famous for creating AlphaGo, a program that defeated a human champion Go player in 2016. Since then, its developers have been methodically honing their software. Meanwhile, ChatGPT exploded onto the scene and Google is feeling the pressure to close the distance. Wired reports, “Google DeepMind CEO Demis Hassabis Says Its Next Algorithm Will Eclipse ChatGPT.” We learn the company just combined the DeepMind division with its Brain lab. The combined team hopes its Gemini software will trounce the competition. Someday. Writer Will Knight tells us:

“DeepMind’s Gemini, which is still in development, is a large language model that works with text and is similar in nature to GPT-4, which powers ChatGPT. But Hassabis says his team will combine that technology with techniques used in AlphaGo, aiming to give the system new capabilities such as planning or the ability to solve problems. … AlphaGo was based on a technique DeepMind has pioneered called reinforcement learning, in which software learns to take on tough problems that require choosing what actions to take like in Go or video games by making repeated attempts and receiving feedback on its performance. It also used a method called tree search to explore and remember possible moves on the board.”
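The reinforcement learning loop Knight describes (take an action, receive feedback, improve) can be illustrated with a toy sketch. This is a minimal, hypothetical tabular Q-learning example on a five-state chain, not DeepMind’s actual code; the environment, hyperparameters, and function names are invented for illustration:

```python
import random

# Toy chain environment: states 0..4; reaching state 4 yields reward 1.
# Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Deterministic transition; reward only at the goal state."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # "Feedback on performance": nudge the value estimate toward
            # observed reward plus the discounted value of the next state.
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = q_learning()
# After training, "right" should score at least as well as "left" in every state.
policy = ["right" if q[s][1] >= q[s][0] else "left" for s in range(GOAL)]
print(policy)
```

The epsilon-greedy choice captures the exploration-versus-exploitation tradeoff at the heart of the technique; AlphaGo layered tree search on top of value estimates like these to look ahead through possible moves.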

Not ones to limit themselves, the Googley researchers may pilfer ideas from other AI realms like robotics and neuroscience. Hassabis is excited about the possibilities AI offers when wielded for good, but acknowledges the need to mitigate potential risks. The article relates:

“One of the biggest challenges right now, Hassabis says, is to determine what the risks of more capable AI are likely to be. ‘I think more research by the field needs to be done—very urgently—on things like evaluation tests,’ he says, to determine how capable and controllable new AI models are. To that end, he says, DeepMind may make its systems more accessible to outside scientists.”

Transparency in AI? That may be the CEO’s most revolutionary idea yet.

Cynthia Murrell, September 19, 2023

Accidental Bias or a Finger on the Scale?

September 18, 2023

Who knew? According to Bezos’ rag The Washington Post, “Chat GPT Leans Liberal, Research Shows.” Writer Gerrit De Vynck cites a study on OpenAI’s ChatGPT from researchers at the University of East Anglia:

“The results showed a ‘significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.,’ the researchers wrote, referring to Luiz Inácio Lula da Silva, Brazil’s leftist president.”

Then there’s research from Carnegie Mellon’s Chan Park. That study found Facebook’s LLaMA, trained on older Internet data, and Google’s BERT, trained on books, supplied right-leaning or even authoritarian answers. But ChatGPT-4, trained on the most up-to-date Internet content, is more economically and socially liberal. Why might the younger algorithm, much like younger voters, skew left? There’s one more juicy little detail. We learn:

“Researchers have pointed to the extensive amount of human feedback OpenAI’s bots have gotten compared to their rivals as one of the reasons they surprised so many people with their ability to answer complex questions while avoiding veering into racist or sexist hate speech, as previous chatbots often did. Rewarding the bot during training for giving answers that did not include hate speech, could also be pushing the bot toward giving more liberal answers on social issues, Park said.”

Not exactly a point in conservatives’ favor, we think. Near the bottom, the article concedes this caveat:

“The papers have some inherent shortcomings. Political beliefs are subjective, and ideas about what is liberal or conservative might change depending on the country. Both the University of East Anglia paper and the one from Park’s team that suggested ChatGPT had a liberal bias used questions from the Political Compass, a survey that has been criticized for years as reducing complex ideas to a simple four-quadrant grid.”

So does ChatGPT lean left or not? Hard to say from the available studies. But will researchers ever be able to pin down the rapidly evolving AI?

Cynthia Murrell, September 18, 2023

An AI to Help Law Firms Craft More Effective Invoices

September 14, 2023

Think money. That answers many AI questions.

Why are big law firms embracing AI? For better understanding of the law? Nay. To help clients? No. For better writing? Nope. What then? For more fruitful billing, of course. We learn from Above The Law, “Law Firms Struggling with Arcane Billing Guidelines Can Look to AI for Relief.” According to writer and litigator Joe Patrice, law clients rely on labyrinthine billing compliance guidelines to delay paying their invoices. Now AI products like Verify are coming to rescue beleaguered lawyers from penny-pinching clients. Patrice writes:

“Artificial intelligence may not be prepared to solve every legal industry problem, but it might be the perfect fit for this one. ZERO CEO Alex Babin is always talking about developing automation to recover the money lawyers lose doing non-billable tasks, so it’s unsurprising that the company has turned its attention to the industry’s billing fiasco. And when it comes to billing guideline compliance, ZERO estimates that firms can recover millions by introducing AI to the process. Because just ‘following the guidelines’ isn’t always enough. Some guidelines are explicit. Others leave a world of interpretation. Still others are explicit, but no one on the client side actually cares enough to force outside counsel to waste time correcting the issue. Where ZERO’s product comes in is in understanding the guidelines and the history of rejections and appeals surrounding the bills to figure out what the bill needs to look like to get the lawyers paid with the least hassle.”

Verify can even save attorneys from their own noncompliant wording, rewriting their narratives to comply with guidelines. And it can do so while mimicking each lawyer’s writing style. Very handy.

Cynthia Murrell, September 14, 2023

AI: Juicing Change

September 13, 2023

Do we need to worry about how generative AI will change the world? Yes, but no more than we had to fear automation, the printing press, horseless carriages, and the Internet. The current technology revolution is analogous to the Industrial Revolutions and technology advancements of past centuries. University of Chicago history professor Ada Palmer is aware of humanity’s cyclical relationship with technology and she discusses it in her Microsoft Unlocked piece: “We Are An Information Revolution Species.”

Palmer explains that the human species has been living in an information revolution for twenty generations. She provides historical examples of how people bemoaned the changes of their eras. The changes arguably remove the “art” from tasks. These tasks, however, are simplified, allowing humans to create more. Simplification also frees up humanity’s time to conquer harder problems. Changes in technology spur a democratization of information. They also mean that jobs change, so humans need to adapt their skills for continual survival.

Palmer says that AI is just another tool as humanity progresses. She asserts that the bigger problems are outdated systems that no longer serve the current society. While technology has evolved so has humanity:

“This revolution will be faster, but we have something the Gutenberg generations lacked: we understand social safety nets. We know we need them, how to make them. We have centuries of examples of how to handle information revolutions well or badly. We know the cup is already leaking, the actor and the artist already struggling as the megacorp grows rich. Policy is everything. We know we can do this well or badly. The only sure road to real life dystopia is if we convince ourselves dystopia is unavoidable, and fail to try for something better.”

AI does need a social safety net so it does not transform into a sentient computer hellbent on world domination. Palmer should point out that humans learn from their imaginations too. Star Trek or 2001: A Space Odyssey anyone? Nah, too difficult. Just generate content and sell ads.

Whitney Grace, September 13, 2023

Trust in an Online World: Very Heisenbergian Issue

September 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Digital information works a bit like a sandblaster. The idea is that a single grain of sand has little impact. But use a gizmo that pumps out a stream of sand grains at speed, and you have a different type of tool. The flow of online information is similar. No one gets too excited about one email or one short video. But pump out lots of these and the result is different.


The sales person says, “You can have this red sofa for no money down.” The pitch is compelling. The sales person says, “You can read about our products on Facebook and see them in TikToks.” The husband and wife don’t like red sofas. But Facebook and TikTok? Thanks, MidJourney, continue your slide down the gradient descent.

After more than 20 years of unlimited data flow, one can observe the effects in many places. I have described some of these effects in my articles which appeared in specialist publications, my monographs, and my lectures. I want to focus on one result of the flow of electronic information; that is, the erosion of social structures. Online is not the only culprit, but for this short essay, it will serve my purpose.

The old chestnut that information is power is correct. Another truism is that the more information, the more transparency is created. That’s not a spot-on statement.

“Poll: Americans Believe AI Will Hurt Elections” explains how flows of information have allegedly eroded trust in the American democratic process. The write up states:

Half of Americans expect misinformation spread by AI to impact who wins the 2024 election — and one-third say they’ll be less trusting of the results because of artificial intelligence…

The allegedly accurate factoid can be interpreted in several ways. First, the statement about lack of trust may be disinformation. The idea is that the process of voting will be manipulated. Second, a person can interpret the factoid as the truth about how information erodes a social concept. Third, the statement can be viewed as an error, like those which make peer-reviewed articles suspect or non-reproducible.

The power of information in this case is to view the statement as one of the grains of sand shot from the body shop’s sandblaster. If one pumps out enough “data” about a bad process, why wouldn’t a person just accept the statements as accurate? Propaganda, weaponized information, and online advertising work this way.

Each reader has to figure out how to interpret the statement. As the body of accessible online information expands, think of those items as sand grains. Now let’s allow smart software to “learn” from the sand grains.

At what point does the dividing line between what’s accurate and what’s not disappear?

Net net: Online information erodes trust. But it is not just trust which is affected. It is the thought process required to determine what is synthetic and what is “real.” Reality consists of flows of online information. Well, that’s an issue, isn’t it?

Net net: The new reality is uncertainty. The act of looking changes things. Very quantum and quite impactful on the social fabric in my opinion.

Stephen E Arnold, September 12, 2023

Good News and Bad News: Smart Software Is More Clever Than Humanoids

September 11, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

After a quick trip to Europe, I will be giving a lecture about fake data. One of the case examples concerns the alleged shortcuts taken by Frank Financial in its efforts to obtain about $175 million from JPMorgan Chase. I like to think of JPMC as “the smartest guys in the room” when it comes to numbers related to money. I suppose wizards at Goldman or McKinsey would disagree. But the interesting swizzle on the JPMC story is that the alleged fraudster was a graduate of Wharton.

That’s good news for getting an education in moral probity at a prestigious university.


A big, impressive university’s smart software beats smart students at Tic Tac Toe. Imagine what these wizards will be able to accomplish when smart software innovates and assists the students with financial fancy dancing. Thanks, Mother MJ. Deep on the gradient descent, please.

Flash forward to the Murdoch real news story “M.B.A. Students vs. ChatGPT: Who Comes Up With More Innovative Ideas?” [The Rupert toll booth is operating.] The main idea of the write up is that humanoid Wharton students were less “creative,” “innovative,” and “inventive” than smart software. What does this say for the future of financial fraud? Mere humanoids like those now in the spotlight at the Southern District of New York show may become more formidable with the assistance of smart software. The humanoids were caught — granted, it took JPMC a few months after the $175 million check was cashed — but JPMC did figure it out via a marketing text.

Imagine. Wharton grads with smart software. How great will that be for the targets of financial friskiness? Let’s hope JPMC gets its own cyber fraud detecting software working. In late 2022, the “smartest guys in the room” were not smart enough to spot synthetic and faked data. Will smart software be able to spot smart software scams?

That’s the bad news. No.

Stephen E Arnold, September 11, 2023

AI and the Legal Eagles

September 11, 2023

Lawyers and other legal professionals know that AI algorithms, NLP, machine learning, and robotic process automation can leverage their practices. These technologies promise to increase profits, process cases faster, and improve efficiency. The possibilities for AI in legal practice appear to be a win-win situation. ReadWrite discusses how different AI processes can assist law firms and the hurdles for implementation in: “Artificial Intelligence In Legal Practice: A Comprehensive Guide.”

AI will benefit law firms by streamlining research and analytics processes. Machine learning and NLP can consume large datasets faster and more efficiently than humans. Contract management and review processes will be greatly improved, because AI offers more comprehensive analysis, detects discrepancies, and decreases repetitive tasks.

AI will also lighten law firms’ workloads with document automation and case management. Automating legal documents, such as leases, deeds, wills, and loan agreements, will decrease errors and reduce review time. AI will lower costs for due diligence procedures and e-discovery through automation and data analytics. These will benefit clients who want speedy results and low legal bills.

Law firms will benefit the most from NLP applications, predictive analytics, machine learning algorithms, and robotic process automation. Virtual assistants and chatbots also have their place in law firms as customer service representatives.

Despite all the potential improvements from AI, legal professionals need to adhere to data privacy and security procedures. They must also develop technology management plans that include authentication protocols, backups, and identity management strategies. AI biases, such as diversity and sexism issues, must be evaluated and avoided in legal practices. Transparency and ethical concerns must also be addressed to be compliant with governmental regulations.

The biggest barriers, however, will be overcoming reluctant staff, costs, anticipating ROI, and compliance with privacy and other regulations.

“With a shift from viewing AI as an expenditure to a strategic advantage across cutting-edge legal firm practices, embracing the power of artificial intelligence demonstrates significant potential for intense transformation within the industry itself.”

These challenges are not any different from past technology implementations, except AI could make lawyers more reliant on technology than their own knowledge. Cue the Jaws theme music.

Whitney Grace, September 11, 2023

Fortune, Trust, and Smart Software: A Delightful Confection

September 8, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Trust. I see this word bandied about like a digital shuttlecock whacked by frantic influencers, pundits, and poobahs. Fortune Magazine likes the idea of trust and uses it in this headline: “Silicon Valley’s Elites Can’t Be Trusted with the Future of AI. We Must Break Their Dominance–and Dangerous God Complex.” The headline is interesting. First, this is Fortune Magazine. Like Forbes in its pre-sponsored-content days, which billed itself as “the capitalist tool,” Fortune Magazine was the giant PR megaphone for making money. Now Forbes is content marketing, and Fortune Magazine is not exactly a fan of modern Silicon Valley high school science club management. The clue is the word “trust” in the context of the phrase “God complex.”


A senior manager demonstrates a lack of support for a subordinate who does not warrant trust. Does the subordinate look happy? Thanks, MidJourney. No red warning banners for this original art. You are, however, still on the gradient descent I fear.

The write up includes a number of interesting statements. I want to highlight two of these and offer a couple of observations. No, I won’t trot out my favorite “Where have you been for the last 25 years? Collecting Google swag and Zuckbook coffee mugs?”

The first passage I noticed was:

Research shows the market dysfunction created by Google, Amazon, Facebook, and other large players that dominate e-commerce, advertising, and online information-sharing. Big Tech monopolists are already positioning themselves to dominate AI. The shortage of GPUs and massive lobbying dollars spent requesting expensive regulation that would lock out startups are just two examples of this troubling trend.

Yo, Fortune, what do monopolies do? Are these outfits into garden parties for homeless children and cleaning up the environment for the good of walruses? The Fortune Magazine of 2023 would probably complain about Cornelius Vanderbilt’s treatment of the business associate he beat and tossed into the street.

The second passage warranting a red checkmark was:

AI will fundamentally change society and billions of lives. Its development is too important to be left to the hubris of Silicon Valley’s elites. India is well positioned to break their dominance and level the AI playing field, accelerating innovation and benefiting all of humankind.

Oh, oh. The U.S. of A. is no longer the sure-fire winner for the sharp pencil people at Fortune Magazine.

Several observations:

  1. The Silicon Valley method has worn thin for Manhattan folk.
  2. India is the new big dog.
  3. Trust is in vogue.

Okay.

Stephen E Arnold, September 8, 2023

A New Fear: Riding the Gradient Descent to Unemployment

September 8, 2023

Is AI poised to replace living, breathing workers? A business professor from Harvard (the ethics hot spot) reassures us (sort of), “AI Won’t Replace Humans—But Humans with AI Will Replace Humans Without AI.” Harvard Business Review‘s Adi Ignatius interviewed AI scholar Karim Lakhani, who insists AI is a transformational technology on par with the Web browser. Companies and workers in all fields, he asserts, must catch up then keep up or risk being left behind. The professor states:

“This transition is really inevitable. And for the folks that are behind, the good news is that the cost to make the transition keeps getting lower and lower. The playbook for this is now well-known. And finally, the real challenge is not a technological challenge. I would say that’s like a 30% challenge. The real challenge is 70%, which is an organizational challenge. My great colleague Tsedal Neeley talks about the digital mindset. Every executive, every worker needs to have a digital mindset, which means understanding how these technologies work, but also understanding the deployment of them and then the change processes you need to do in terms of your organization to make use of them.”

Later, he advises:

“The first step is to begin, start experimentation, create the sandboxes, run internal bootcamps, and don’t just run bootcamps for technology workers, run bootcamps for everybody. Give them access to tools, figure out what use cases they develop, and then use that as a basis to rank and stack them and put them into play.”

Many of those use cases will be predictable. Many more will be unforeseen. One thing we can anticipate is this: users will rapidly acclimate to technologies that make their lives easier. Already, Lakhani notes, customer expectations have been set by AI-empowered big tech. People expect their Uber to show up within minutes and whisk them away or for an Amazon transaction dispute to be resolved instantly. Younger customers have less and less patience for businesses that operate in slower, antiquated ways. Will companies small, medium, and large have to embrace AI or risk becoming obsolete?

Cynthia Murrell, September 8, 2023
