Hit Delete. Save Money. Data Liability Is Gone. Is That Right?

July 17, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The article “Reddit Removed Your Chat History from before 2023” stated:

… legacy chats were being migrated to the new chat platform and that only 2023 data is being brought over, adding that they “hope” a data export will help the user get back the older chats. The admin told another user asking whether there was an option to stay on the legacy chat that no, there isn’t, and Reddit is “working on making new chats better.”


A young attorney studies ancient Reddit data from 2023. That’s when information began because a great cataclysm destroyed any previous, possibly useful data for a legal matter. But what about the Library of Congress? But what about the Internet Archive? But what about backup tapes at assorted archives? Yeah, right. Thanks for the data in amber, MidJourney.

The cited article does not raise the following obviously irrelevant questions:

  1. Are there backups which can be consulted?
  2. Are there copies of the Reddit chat data?
  3. Was the action taken to reduce costs or legal liability?

I am not a Reddit user, nor do I affix site:reddit or append the word “reddit” to my queries. Some may find the service useful, but I am a dinobaby and hopelessly out of touch with where the knowledge action is.

As an outsider, my initial reaction is that dumping data has two immediate paybacks: reduced storage costs and a lower likelihood that a group of affable lawyers will ask for historic data about a Reddit user’s activity. My hunch is that users of a free service cannot fathom why a commercial enterprise would downgrade or eliminate a free service. Gee, why?

I think I would answer the question with one word, “Adulting.”

Stephen E Arnold, July 17, 2023

Adolescent Technology Mavens: From the Cage to the Court House

July 11, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_t[1]Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Ladieees and gentlemennnnn, in this corner we have the King of Space and EVs. Weighing in at 187.3 pounds, the Musker brings a devastating attitude and a known world-class skill in naming things. With a record of three and one, his only loss was a self-inflicted KO fighting a large blue bird. Annnnd in this corner, we have the regulator’s favorite wizard, Mark the Eloquent. Weighing in at 155.7 pounds, the Zuckster has a record of three and three. His losses were to Cambridge Analytica, the frightening Andrea Jelinek, chair of the European Data Protection Board, and his neighbor in Hawaii who won’t sell land to the social whirlwind.

Where are these young-at-heart wizards fighting? In Las Vegas for a big pile of money? Nope. These estimable wizards will duke it out in the court house. “Scared Musk Sends Legal Threat to Meta after Threads Lures 30 Million on Launch Day” states as fresh-from-the-playground news:

Musk supplemented his tweet [https://twitter.com/elonmusk/status/1676770522200252417] with a legal threat against Meta that echoed despair and fear in the face of his potent adversary. The lawsuit alleges Meta of enticing Twitter’s former employees — many of whom Musk dismissed without honoring severance promises — to contribute to Threads, a move that Twitter asserts infringes upon its intellectual property rights.

One big-time journalist took issue with my describing the senior managers of certain high-technology firms as practicing “high school science club management methods.” I wish to suggest that the rumored cage fight and the possible legal dust-up illustrate the thought processes of high school science club members. Yeah, go all in with those 16-year-old decision processes.

The threads are indeed tangled.

Stephen E Arnold, July 11, 2023

Crackdown on Fake Reviews: That Is a Hoot!

July 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “The FTC Wants to Put a Ban on Fake Reviews.” My first reaction was, “Shouldn’t the ever-so-confident Verge poobah have insisted on the word ‘impose’; specifically, ‘The FTC wants to impose a ban on fake reviews’ or maybe ‘The FTC wants to rein in fake reviews’?” But who cares? The Verge is the digital New York Times and go-to source of “real” Silicon Valley-type news.

The write up states:

If you, too, are so very tired of not knowing which reviews to trust on the internet, we may eventually get some peace of mind. That’s because the Federal Trade Commission now wants to penalize companies for engaging in shady review practices. Under the terms of a new rule proposed by the FTC, businesses could face fines for buying fake reviews — to the tune of up to $50,000 for each time a customer sees one.
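The scale of that proposed penalty is easy to underestimate. A back-of-the-envelope calculation shows how fast exposure compounds; the view count below is an illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope exposure under the proposed FTC rule:
# up to $50,000 each time a customer sees a fake review.
# The traffic figure is a made-up illustration, not data from the write-up.

PER_VIEW_FINE = 50_000  # proposed statutory maximum, per the article

def max_exposure(views: int, fine_per_view: int = PER_VIEW_FINE) -> int:
    """Worst-case fine if every view of one fake review drew the maximum penalty."""
    return views * fine_per_view

# A single fake review seen by a modest 1,000 shoppers:
print(f"${max_exposure(1_000):,}")  # $50,000,000
```

No wonder the legal eagles take flight: even tiny audiences produce eight-figure worst cases.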

For more than 30 years, I worked with an individual named Robert David Steele, who was an interesting figure in the intelligence world. He wrote and posted on Amazon more than 5,000 reviews. He wrote these himself, often in down times with me between meetings. At breakfast one morning in the Hague, Steele was writing at the breakfast table, and he knocked over his orange juice. He said, “Give me your napkin.” He used it to jot down a note; I sopped up the orange juice.


“That’s a hoot,” says a person who wrote a product review to make a competitor’s offering look bad. A $50,000 fine. Legal eagles take flight. The laughing man is an image flowing from the creative engine at MidJourney.

 

He wrote what I call humanoid reviews.

Now reviews of any type are readily available. Here’s an example from Fiverr.com, an Israel-based outfit with gig workers from many countries and free time on their hands:

[Fiverr screenshot]

How many of these reviews will be written by a humanoid? How many will be spat out via a ChatGPT-type system?

What about reviews written by someone with a bone to pick? These reviews are shaded so that the product or the book or whatever is presented in a questionable way. Did Mr. Steele write a review of an intelligence-related book and point out that the author was misinformed about the “real” intel world?

Several observations:

  1. Who or what is going to identify fake reviews?
  2. What’s the difference between a Fiverr-type review and a review written by a humanoid motivated by doing good or making the author or product look bad?
  3. As machine-generated text improves, how will software written to identify machine-generated reviews keep up with advances in the machine-generating software itself?
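The third observation can be made concrete with a toy sketch. Real detectors lean on model-based statistics such as perplexity; the vocabulary-diversity heuristic and sample reviews below are invented stand-ins, which is precisely the point: shallow signals are easy for the machine-generating software to evade.

```python
# A toy illustration of why machine-text detection is an arms race.
# This crude heuristic flags repetitive vocabulary; generated text can
# trivially vary its word choice and slip past it.

def type_token_ratio(text: str) -> float:
    """Distinct words divided by total words: a crude repetitiveness proxy."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def looks_templated(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose vocabulary is unusually repetitive."""
    return type_token_ratio(text) < threshold

repetitive = "great product great price great service great great great"
varied = "shipped quickly, sturdy hinges, though the finish scratches easily"
print(looks_templated(repetitive), looks_templated(varied))  # True False
```

Each time the detector’s threshold moves, the generator adjusts; the cycle has no obvious end.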

Net net: External editorial and ethical controls may be impractical. In my opinion, a failure of ethical controls within social structures creates a greenhouse in which fakery, baloney, misinformation, and corrupted content thrive. In this context, who cares about the headline? It, too, is a reflection of the pickle barrel in which we soak.

Stephen E Arnold, July 3, 2023

Trust: Some in the European Union Do Not Believe the Google. Gee, Why?

June 13, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Google’s Ad Tech Dominance Spurs More Antitrust Charges, Report Says.” The write up seems to say that some EU regulators do not trust the Google. Trust is a popular word at the alleged monopoly. Yep, trust is what makes Google’s smart software so darned good.


A lawyer for a high-tech outfit in the ad game says, “Commissioner, thank you for the question. You can trust my client. We adhere to the highest standards of ethical behavior. We put our customers first. We are the embodiment of ethical behavior. We use advanced technology to enhance everyone’s experience with our systems.” The rotund lawyer is a confection generated by MidJourney, an example, in this case, of pretty smart software.

The write up says:

These latest charges come after Google spent years battling and frequently bending to the EU on antitrust complaints. Seeming to get bigger and bigger every year, Google has faced billions in antitrust fines since 2017, following EU challenges probing Google’s search monopoly, Android licensing, Shopping integration with search, and bundling of its advertising platform with its custom search engine program.

The article makes an interesting point, almost as an afterthought:

…Google’s ad revenue has continued increasing, even as online advertising competition has become much stiffer…

The article does not ask this question, “Why is Google making more money when scrutiny and restrictions are ramping up?”

From my vantage point in the old age “home” in rural Kentucky, I certainly have zero useful data about this interesting situation, assuming, of course, that it is true. But, for the nonce, let’s speculate, shall we?

Possibility A: Google is a monopoly and makes money no matter what laws, rules, and policies are articulated. Game is now in extra time. Could the referee be bent?

This idea is simple. Google’s control of ad inventory, ad options, and ad channels is just a good, old-fashioned system monopoly. Maybe TikTok and Facebook offer alternatives, but even with those channels, Google offers options. Who can resist this pitch: “Buy from us, not the Chinese. Or, buy from us, not the metaverse guy.”

Possibility B: Google advertising is addictive and maybe instinctual. Mice never learn and just repeat their behaviors.

Once there is a cheese payoff for the mouse, mice turn out to be learning creatures, and in some wild and non-reproducible experiments they inherit their parents’ prior learning. Wow. Genetics dictate the use of Google advertising by people who are hard wired to be Googley.

Possibility C: Google’s home base does not regulate the company in a meaningful way.

The result is an advanced and hardened technology which is better, faster, and maybe cheaper than other options. How can the EU, with its squabbling “union,” hope to compete with weaponized content delivery built on a smart, adaptive global system? The answer is, “It can’t.”

Net net: After a quarter century, what’s more organized for action, a regulatory entity or the Google? I bet you know the answer, don’t you?

Stephen E Arnold, June 13, 2023

Microsoft: Just a Minor Thing

June 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Several years ago, I was asked to be a technical advisor to a UK group focused on improper actions directed toward children. Since then, I have paid some attention to the information about young people that some online services collect. One of the more troubling facets of improper actions intended to compromise the privacy, security, and possibly the safety of minors is the role data aggregators play. From gathering information via “harmless” apps favored by young people to cross-correlating young users’ online travels, these often surreptitious actions of people and their systems trouble me.

The “anything goes” approach of some organizations is often masked by public statements and the use of words like “trust” when explaining how information “hoovering” operations are set up, implemented, and used to generate revenue or other outcomes. I am not comfortable identifying some of these, however.


A regulator and a big company representative talking about a satisfactory resolution to the regrettable collection of kiddie data. Both appear to be satisfied with another job well done. The image was generated by the MidJourney smart software.

Instead, let me direct your attention to the BBC report “Microsoft to Pay $20m for Child Privacy Violations.” The write up states as “real news”:

Microsoft will pay $20m (£16m) to US federal regulators after it was found to have illegally collected data on children who had started Xbox accounts.

The write up states:

From 2015 to 2020 Microsoft retained data “sometimes for years” from the account set up, even when a parent failed to complete the process …The company also failed to inform parents about all the data it was collecting, including the user’s profile picture and that data was being distributed to third parties.

Will the leader in smart software and clever marketing have an explanation? Of course. That’s what advisory firms and lawyers help their clients deliver; for example:

“Regrettably, we did not meet customer expectations and are committed to complying with the order to continue improving upon our safety measures,” Microsoft’s Dave McCarthy, CVP of Xbox Player Services, wrote in an Xbox blog post. “We believe that we can and should do more, and we’ll remain steadfast in our commitment to safety, privacy, and security for our community.”

Sounds good.

From my point of view, something is out of alignment. Perhaps it is my old-fashioned idea that young people’s online activities require a more thoughtful approach by large companies, data aggregators, and click capturing systems. The thought, it seems, is directed at finding ways to take advantage of weak regulation, inattentive parents and guardians, and often-uninformed young people.

Like other ethical black holes in certain organizations, surfing for fun or money on children seems inappropriate. Does $20 million have an impact on a giant company? Nope. A weak ethical and moral foundation for decision making enables these data collection activities. And $20 million causes little or no pain. Therefore, why not continue these practices and do a better job of keeping the procedures secret?

Pragmatism is the name of the game it seems. And kiddie data? Fair game to some adrift in an ethical swamp. Just a minor thing.

Stephen E Arnold, June 6, 2023

IBM Dino Baby Unhappy about Being Outed as Dinobaby in the Baby Wizards Sandbox

June 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I learned the term “dinobaby” reading blog posts about IBM workers who alleged Big Blue wanted younger workers. After thinking about the term, I embraced it. This blog post features an animated GIF of me dancing in my home office. I try to avoid the following: [a] Millennials, GenX, GenZ, and GenY super wizards; [b] former IBM workers who grouse about growing old and not liking a world without CICS; and [c] individuals with advanced degrees who want to talk with me about “smart software.” I have to admit that I have not been particularly successful in this effort in 2023: Conferences, Zooms, face-to-face meetings, lunches, yada yada. Either I am the most magnetic dinobaby in Harrod’s Creek, or these jejune world changers are clueless. (Maybe I should live in a cave on a mountain and accept acolytes?)

I read “Laid-Off 60-Year-Old Kyndryl Exec Says He Was Told IT Giant Wanted New Blood.” The write up includes a number of interesting statements. Here’s one:

IBM has been sued numerous times for age discrimination since 2018 when it was reported that company leadership carried out a plan to de-age its workforce – charges IBM has consistently denied, despite US Equal Employment Opportunity Commission (EEOC) findings to the contrary and confidential settlements.

Would IBM deny allegations of age discrimination? There are so many ways to terminate employees today. Why use the “you are old, so you are RIF’ed” ploy? In my opinion, it is an example of the lack of management finesse evident in many once high-flying companies today. I term the methods apparently in use at outfits like Twitter, Google, Facebook, and others “high school science club management methods” or H2S2M2. The acronym has not caught on, but I assume that someone with a subscription to ChatGPT will use AI to write a book on the subject soon.

The write up also includes this statement:

Liss-Riordan [an attorney representing the dinobaby] said she has also been told that an algorithm was used to identify those who would lose their jobs, but had no further details to provide with regard to that allegation.

Several observations are warranted:

  1. Discrimination is nothing new. Oldsters will be nuked. No question about it. Why? Old people like me (I am 78) make younger folks nervous because we belong in warehouses for the soon dead, not giving lectures to the leaders of today and tomorrow.
  2. Younger folks do not know what they do not know. Consequently, opportunities exist to [a] make fun of young wizards as I do in this blog Monday through Friday since 2008 and [b] charge these “masters of the universe” money to talk about that which is part of their great unknowing. Billing is rejuvenating.
  3. No one cares. One can sue. One can rage. One can find solace in chemicals, fast cars, or climbing a mountain. But it is important to keep one thing in mind: No one cares.

Net net: Does IBM practice dark arts to rid the firm of those who slow down Zoom meetings, raise questions to which no one knows the answers, and burden benefits plans? My hunch is that IBM-type outfits will do what’s necessary to keep the campground free of old-timers. Who wouldn’t?

Stephen E Arnold, June 5, 2023

Trust in Google and Its Smart Software: What about the Humans at Google?

May 26, 2023

The buzz about Google’s injection of its smart software into its services is crowding out other, more interesting sounds. For example, navigate to “Texas Reaches $8 Million Settlement With Google Over Blatantly False Pixel Ads: Google Settled a Lawsuit Filed by AG Ken Paxton for Alleged False Advertisements for its Google Pixel 4 Smartphone.”

The write up reports:

A press release said Google was confronted with information that it had violated Texas laws against false advertising, but instead of taking steps to correct the issue, the release said, “Google continued its deceptive advertising, prioritizing profits over truthfulness.”

Google is pushing forward with its new mobile devices.

Let’s consider the seven wonders of Google’s software. You can find these at this link or summarized in my article “The Seven Wonders of the Google AI World.”

Let’s consider Principle 1: Be socially beneficial.

I am wondering how the allegedly deceptive advertising encourages me to trust Google.

Principle 4 is Be accountable to people.

My recollection is that Google works overtime to avoid being held accountable. The company relies upon its lawyers, its lobbyists, and its marketing to float above the annoyances of nation states. In fact, when greeted with substantive actions by the European Union, Google stalls and does not make available its latest and greatest services. The only accountability seems to be a legal action despite Google’s determined lawyerly push back. Avoiding accountability requires intermediaries because Google’s senior executives are busy working on principles.

Kindergarten behavior.


MidJourney captures the thrill of two young children squabbling over a piggy bank. I wonder if MidJourney knows what is going on in the newly merged Google smart software units.

Google approaches some problems like kids squabbling over a piggy bank.

Net net: The Texas fine makes clear that some do not trust Google. The “principles” are marketing hoo hah. But everyone loves Google, including me, my French bulldog, and billions of users worldwide. Everyone will want a new $1800 folding Pixel, which is just great based on the marketing information I have seen. It has so many features and works wonders.

Stephen E Arnold, May 26, 2023

OpenAI Clarifies What “Regulate” Means to the Sillycon Valley Crowd

May 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Sam AI-man begged (at least he did not get on his hands and knees) the US Congress to regulate artificial intelligence (whatever that means). I just read “Sam Altman Says OpenAI Will Leave the EU if There’s Any Real AI Regulation.” I know I am old. I know I lose my car keys a couple of times every 24 hours. I do recall Mr. AI-man wanted regulation.

However, the write up reports:

Though unlike in the AI-friendly U.S., Altman has threatened to take his big tech toys to the other end of the sandbox if they’re not willing to play by his rules.

The vibes of the Zuckster zip through my mind. Facebook just chugs along, pays fines, and mostly ignores regulators. China seems to be an exception for Facebook, the Google, and some companies I don’t know about. China had mobile death vans. A person accused and convicted would be executed in the van as soon as it arrived at the location of the convicted bad actor. Re-education camps and mobile death vans suggest why some US companies choose to exit China. Lawyers who cannot arrive quickly are of little use; the client may already have been processed by one of China’s efficient state machines. Fines, however, are okay. Write a check and move on.

Mr. AI-man is making clear that the word “regulate” means one thing to Mr. AI-man and another thing to those who are not getting with the smart software program. The write up states:

Altman said he didn’t want any regulation that restricted users’ access to the tech. He told his London audience he didn’t want anything that could harm smaller companies or the open source AI movement (as a reminder, OpenAI is decidedly more closed off as a company than it’s ever been, citing “competition”). That’s not to mention any new regulation would inherently benefit OpenAI, so when things inevitably go wrong it can point to the law to say they were doing everything they needed to do.

I think “regulate” means what the declining US fast food outfit that told me “have it your way” meant. The burger joint put in a paper bag whatever the professionals behind the counter wanted to deliver. Mr. AI-man doesn’t want any “behind the counter” decision making by a regulatory cafeteria serving up its own version of lunch.

Mr. AI-man wants “regulate” to mean his way.

In the US, it seems, that is exactly what big tech and promising venture funded outfits are going to get; that is, whatever each company wants. Competition is good. See how well OpenAI and Microsoft are competing with Facebook and Google. Regulate appears to mean “let us do what we want to do.”

I am probably wrong. OpenAI, Google, and other leaders in smart software are at this very moment consuming the Harvard Library of books to read in search of information about ethical behavior. The “moral” learning comes later.

Net net: Now I understand the new denotation of “regulate.” Governments work for US high-tech firms. Thus, I think the French term laissez-faire nails it.

Stephen E Arnold, May 25, 2023

HP Autonomy: A Modest Disagreement Escalates

May 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

About 12 years ago, Hewlett Packard acquired Autonomy. As I understand the deal, HP wanted to snap up Autonomy to make a move in the enterprise services business. Autonomy was one of the major providers of search and related content processing services in 2010. Autonomy’s revenues were nosing toward $800 million, a level no other search and retrieval software company had previously achieved.

However, as Qatalyst Partners reported in an Autonomy profile, the share price was not exactly hitting home runs each quarter:

[Autonomy share price chart]

Source: Autonomy Trading and Financial Statistics, 2011 by Qatalyst Partners

After some HP executive turmoil, the deal was done. After a year or so, HP analysts determined that the Silicon Valley company paid too much for Autonomy. The result was high profile litigation. One Autonomy executive found himself losing and suffering the embarrassment of jail time.

“Autonomy Founder Mike Lynch Flown to US for HPE Fraud Trial” reports:

Autonomy founder Mike Lynch has been extradited to the US under criminal charges that he defrauded HP when he sold his software business to them for $11 billion in 2011. The 57-year-old is facing allegations that he inflated the books at Autonomy to generate a higher sale price for the business, the value of which HP subsequently wrote down by billions of dollars.

Although I did some consulting work for Autonomy, I have no unique information about the company, the HP allegations, or the legal process which will unspool in the US.

In a recent conversation with a person who had first hand knowledge of the deal, I learned that HP was disappointed with the Autonomy approach to business. I pushed back and pointed out three things to a person who was quite agitated that I did not share his outrage. My points, as I recall, were:

  1. A number of search-and-retrieval companies failed to generate revenue sufficient to meet their investors’ expectations. These included outfits like Convera (formerly Excalibur Technologies), Entopia, and numerous other firms. Some were sold and were operated as reasonably successful businesses; for example, Dassault Systèmes and Exalead. Others were folded into a larger business; for example, Microsoft’s purchase of Fast Search & Transfer and Oracle’s acquisition of Endeca. The period from 2008 to 2013 was particularly difficult for vendors of enterprise search and content processing systems. I documented these issues in The Enterprise Search Report and a couple of other books I wrote.
  2. Enterprise search vendors and some hybrid outfits which developed search-related products and services used bundling as a way to make sales. The idea was not new. IBM refined the approach. Buy a mainframe and get support free for a period of time. Then the customer could pay a license fee for the software and upgrades and pay for services. IBM charged me $850 to roll a specialist to look at my three out-of-warranty PC 704 servers. (That was the end of my reliance on IBM equipment and its marvelous ServeRAID technology.) Libraries, for example, could acquire hardware. The “soft” components had a different budget cycle. The solution? Split up the deal. I think Autonomy emulated this approach and added some unique features. Nevertheless, the market for search and content related services was and is a difficult one. Fast Search & Transfer had its own approach. That landed the company in hot water and the founder on the pages of newspapers across Scandinavia.
  3. Sales professionals could generate interest in search and content processing systems by describing the benefits of finding information buried in a company’s file cabinets, tucked into PowerPoint presentations, and sleeping peacefully in email. Like the current buzz about OpenAI and ChatGPT, expectations are loftier than the reality of some implementations. Enterprise search vendors like Autonomy had to deal with angry licensees who could not find information, heated objections to the cost of reindexing content to make it possible for employees to find the file saved yesterday (an expensive and difficult task even today), and howls of outrage because certain functions had to be coded to meet the specific content requirements of a particular licensee. Remember that a large company does not need one search and retrieval system. There are many, quite specific requirements. These range from engineering drawings in the R&D center to the super sensitive employee compensation data, from the legal department’s need to process discovery information to the mandated classified documents associated with a government contract.

These issues remain today. Autonomy is now back in the spot light. The British government, as I understand the situation, is not chasing Dr. Lynch for his methods. HP and the US legal system are.

The person with whom I spoke was not interested in my three points. He has a Harvard education, and I am a geriatric. I will survive his anger toward Autonomy and his obvious affection for the estimable HP, its eavesdropping board, and its executive revolving door.

What few recall is that Autonomy was one of the first vendors of search to use smart software. The implementation was described as Neuro Linguistic Programming. Like today’s smart software, the functioning of the Autonomy core technology was a black box. I assume the litigation will expose this Autonomy black box. Is there a message for the ChatGPT-type outfits blossoming at a prodigious rate?

Yes, the enterprise search sector is about to undergo a rebirth. Organizations have information. Findability remains difficult. The fix? Merge ChatGPT-type methods with an organization’s content. What do you get? A party which faded away in 2010 is coming back. The Beatles and Elvis vibe will be live, on stage. Act fast.
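The “merge ChatGPT-type methods with an organization’s content” fix is what the trade now calls retrieval-augmented generation. A minimal sketch follows; the documents are invented, and a word-overlap score stands in for the vector embeddings and actual LLM call a real deployment would use:

```python
# A toy sketch of retrieval-augmented generation over company content.
# The documents and the overlap scoring are illustrative assumptions only.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents with the most query-word overlap."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble retrieved passages plus the question for an LLM to answer."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The 2024 travel policy caps hotel rates at 180 dollars per night.",
    "Engineering drawings are stored on the R&D file server.",
]
print(build_prompt("what is the hotel rate cap in the travel policy", docs))
```

Whether such grounding cures the old enterprise search complaints or merely restages them is, of course, the open question.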

Stephen E Arnold, May 15, 2023

More Fake Drake and a Google Angle

May 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Copyright law was never designed to address algorithms that can flawlessly mimic artists and writers based on what they learn from the Internet. Absent more relevant legislation, however, it may be up to the courts to resolve this thorny and rapidly escalating issue. And poor Google, possessor of both YouTube and lofty AI ambitions, is stuck between a rock and a hard place. The Verge reports, “AI Drake Just Set an Impossible Legal Trap for Google.”

To make a winding story short, someone used AI to create a song that sounded eerily like Drake and The Weeknd and posted it on TikTok. From there it made its way to Apple Music, Spotify, and YouTube. While Apple and Spotify could and did pull the track from their platforms right away, user-generated-content platforms TikTok and YouTube are bound by established takedown processes that rest on copyright law. And new content generated by AI that mimics humans is not protected by copyright. Yet.

The track was eventually removed on TikTok and YouTube based on an unauthorized sample of a producer tag at the beginning. But what if the song were re-released without that snippet? Publishers now assert that training AI on bodies of artists’ work is itself copyright infringement, and a fake Drake (or Taylor Swift or Tim McGraw) song is therefore a derivative work. Sounds logical to me. But for Google, both agreeing and disagreeing pose problems. Writer Nilay Patel explains:

“So now imagine that you are Google, which on the one hand operates YouTube, and on the other hand is racing to build generative AI products like Bard, which is… trained by scraping tons of data from the internet under a permissive interpretation of fair use that will definitely get challenged in a wave of lawsuits. AI Drake comes along, and Universal Music Group, one of the largest labels in the world, releases a strongly worded statement about generative AI and how its streaming partners need to respect its copyrights and artists. What do you do?

If Google agrees with Universal that AI-generated music is an impermissible derivative work based on the unauthorized copying of training data, and that YouTube should pull down songs that labels flag for sounding like their artists, it undercuts its own fair use argument for Bard and every other generative AI product it makes — it undercuts the future of the company itself.

If Google disagrees with Universal and says AI-generated music should stay up because merely training an AI with existing works is fair use, it protects its own AI efforts and the future of the company, but probably triggers a bunch of future lawsuits from Universal and potentially other labels, and certainly risks losing access to Universal’s music on YouTube, which puts YouTube at risk.”

Quite the conundrum. And of course, it is not just music. YouTube is bound to face similar issues with movies, TV shows, news, podcasts, and other content. Patel notes creators and their publishers are highly motivated to wage this fight because, for them, it is a fight to the potential death of their industries. Will Google sacrifice the currently lucrative YouTube or its potentially more profitable AI aspirations?

Cynthia Murrell, May 5, 2023
