Microsoft: Just a Minor Thing

June 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Several years ago, I was asked to be a technical advisor to a UK group focused on improper actions directed toward children. Since then, I have paid some attention to the information about young people that some online services collect. One of the more troubling facets of efforts to compromise the privacy, security, and possibly the safety of minors is the role data aggregators play. From gathering information via “harmless” apps favored by young people to the covert collection and cross correlation of young users’ online travels, these often surreptitious actions of people and their systems trouble me.

The “anything goes” approach of some organizations is often masked by public statements and the use of words like “trust” when explaining how information “hoovering” operations are set up, implemented, and used to generate revenue or other outcomes. I am not comfortable identifying some of these, however.


A regulator and a big company representative talking about a satisfactory resolution to the regrettable collection of kiddie data. Both appear to be satisfied with another job well done. The image was generated by the MidJourney smart software.

Instead, let me direct your attention to the BBC report “Microsoft to Pay $20m for Child Privacy Violations.” The write up states as “real news”:
Microsoft will pay $20m (£16m) to US federal regulators after it was found to have illegally collected data on children who had started Xbox accounts.

The write up states:

From 2015 to 2020 Microsoft retained data “sometimes for years” from the account set up, even when a parent failed to complete the process …The company also failed to inform parents about all the data it was collecting, including the user’s profile picture and that data was being distributed to third parties.

Will the leader in smart software and clever marketing have an explanation? Of course. That’s what advisory firms and lawyers help their clients deliver; for example:

“Regrettably, we did not meet customer expectations and are committed to complying with the order to continue improving upon our safety measures,” Microsoft’s Dave McCarthy, CVP of Xbox Player Services, wrote in an Xbox blog post. “We believe that we can and should do more, and we’ll remain steadfast in our commitment to safety, privacy, and security for our community.”

Sounds good.

From my point of view, something is out of alignment. Perhaps it is my old-fashioned idea that young people’s online activities require a more thoughtful approach by large companies, data aggregators, and click capturing systems. The thought, it seems, is directed at finding ways to take advantage of weak regulation, inattentive parents and guardians, and often-uninformed young people.

Like other ethical black holes in certain organizations, surfing for fun or money on children seems inappropriate. Does $20 million have an impact on a giant company? Nope. The ethical and moral foundation of decision making is enabling these data collection activities. And $20 million causes little or no pain. Therefore, why not continue these practices and do a better job of keeping the procedures secret?

Pragmatism is the name of the game it seems. And kiddie data? Fair game to some adrift in an ethical swamp. Just a minor thing.

Stephen E Arnold, June 6, 2023

IBM Dino Baby Unhappy about Being Outed as Dinobaby in the Baby Wizards Sandbox

June 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I learned the term “dinobaby” reading blog posts about IBM workers who alleged Big Blue wanted younger workers. After thinking about the term, I embraced it. This blog post features an animated GIF of me dancing in my home office. I try to avoid the following: [a] Millennials, GenX, GenZ, and GenY super wizards; [b] former IBM workers who grouse about growing old and not liking a world without CICS; and [c] individuals with advanced degrees who want to talk with me about “smart software.” I have to admit that I have not been particularly successful in this effort in 2023: Conferences, Zooms, face-to-face meetings, lunches, yada yada. Either I am the most magnetic dinobaby in Harrod’s Creek, or these jejune world changers are clueless. (Maybe I should live in a cave on a mountain and accept acolytes?)

I read “Laid-Off 60-Year-Old Kyndryl Exec Says He Was Told IT Giant Wanted New Blood.” The write up includes a number of interesting statements. Here’s one:

IBM has been sued numerous times for age discrimination since 2018 when it was reported that company leadership carried out a plan to de-age its workforce – charges IBM has consistently denied, despite US Equal Employment Opportunity Commission (EEOC) findings to the contrary and confidential settlements.

Would IBM deny allegations of age discrimination? There are so many ways to terminate employees today. Why use the “you are old, so you are RIF’ed” ploy? In my opinion, it is an example of the lack of management finesse evident in many once high-flying companies today. I term the methods apparently in use at outfits like Twitter, Google, Facebook, and others as “high school science club management methods” or H2S2M2. The acronym has not caught on, but I assume that someone with a subscription to ChatGPT will use AI to write a book on the subject soon.

The write up also includes this statement:

Liss-Riordan [an attorney representing the dinobaby] said she has also been told that an algorithm was used to identify those who would lose their jobs, but had no further details to provide with regard to that allegation.

Several observations are warranted:

  1. Discrimination is nothing new. Oldsters will be nuked. No question about it. Why? Old people like me (I am 78) make younger folks nervous because we belong in warehouses for the soon dead, not giving lectures to the leaders of today and tomorrow.
  2. Younger folks do not know what they do not know. Consequently, opportunities exist to [a] make fun of young wizards as I do in this blog Monday through Friday since 2008 and [b] charge these “masters of the universe” money to talk about that which is part of their great unknowing. Billing is rejuvenating.
  3. No one cares. One can sue. One can rage. One can find solace in chemicals, fast cars, or climbing a mountain. But it is important to keep one thing in mind: No one cares.

Net net: Does IBM practice dark arts to rid the firm of those who slow down Zoom meetings, raise questions to which no one knows the answers, and burden benefits plans? My hunch is that IBM-type outfits will do what’s necessary to keep the campground free of old timers. Who wouldn’t?

Stephen E Arnold, June 5, 2023

Trust in Google and Its Smart Software: What about the Humans at Google?

May 26, 2023

The buzz about Google’s injection of its smart software into its services is crowding out other, more interesting sounds. For example, navigate to “Texas Reaches $8 Million Settlement With Google Over Blatantly False Pixel Ads: Google Settled a Lawsuit Filed by AG Ken Paxton for Alleged False Advertisements for its Google Pixel 4 Smartphone.”

The write up reports:

A press release said Google was confronted with information that it had violated Texas laws against false advertising, but instead of taking steps to correct the issue, the release said, “Google continued its deceptive advertising, prioritizing profits over truthfulness.”

Google is pushing forward with its new mobile devices.

Let’s consider Google’s seven wonders of its software. You can find these at this link or summarized in my article “The Seven Wonders of the Google AI World.”

Let’s consider principle one: Be socially beneficial.

I am wondering how the allegedly deceptive advertising encourages me to trust Google.

Principle 4 is Be accountable to people.

My recollection is that Google works overtime to avoid being held accountable. The company relies upon its lawyers, its lobbyists, and its marketing to float above the annoyances of nation states. In fact, when greeted with substantive actions by the European Union, Google stalls and does not make available its latest and greatest services. The only accountability seems to be a legal action despite Google’s determined lawyerly push back. Avoiding accountability requires intermediaries because Google’s senior executives are busy working on principles.

Kindergarten behavior.


MidJourney captures the thrill of two young children squabbling over a piggy bank. I wonder if MidJourney knows what is going on in the newly merged Google smart software units.

Google approaches some problems like kids squabbling over a piggy bank.

Net net: The Texas fine makes clear that some do not trust Google. The “principles” are marketing hoo hah. But everyone loves Google, including me, my French bulldog, and billions of users worldwide. Everyone will want a new $1800 folding Pixel, which is just great based on the marketing information I have seen. It has so many features and works wonders.

Stephen E Arnold, May 26, 2023

OpenAI Clarifies What “Regulate” Means to the Sillycon Valley Crowd

May 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Sam AI-man begged (at least he did not get on his hands and knees) the US Congress to regulate artificial intelligence (whatever that means). I just read “Sam Altman Says OpenAI Will Leave the EU if There’s Any Real AI Regulation.” I know I am old. I know I lose my car keys a couple of times every 24 hours. I do recall Mr. AI-man wanted regulation.

However, the write up reports:

Though unlike in the AI-friendly U.S., Altman has threatened to take his big tech toys to the other end of the sandbox if they’re not willing to play by his rules.

The vibes of the Zuckster zip through my mind. Facebook just chugs along, pays fines, and mostly ignores regulators. China seems to be an exception for Facebook, the Google, and some companies I don’t know about. China had mobile death vans: a person accused and convicted would be executed in the van as soon as it arrived at the convicted bad actor’s location. Re-education camps and mobile death vans suggest why some US companies choose to exit China. Lawyers who do not arrive quickly, only to find their client already processed, are not much good against some of China’s efficient state machines. Fines, however, are okay. Write a check and move on.

Mr. AI-man is making clear that the word “regulate” means one thing to Mr. AI-man and another thing to those who are not getting with the smart software program. The write up states:

Altman said he didn’t want any regulation that restricted users’ access to the tech. He told his London audience he didn’t want anything that could harm smaller companies or the open source AI movement (as a reminder, OpenAI is decidedly more closed off as a company than it’s ever been, citing “competition”). That’s not to mention any new regulation would inherently benefit OpenAI, so when things inevitably go wrong it can point to the law to say they were doing everything they needed to do.

I think “regulate” means what the declining US fast food outfit that told me “have it your way” meant. The burger joint put in a paper bag whatever the professionals behind the counter wanted to deliver. Mr. AI-man doesn’t want any “behind the counter” decision making by a regulatory cafeteria serving up its own version of lunch.

Mr. AI-man wants “regulate” to mean his way.

In the US, it seems, that is exactly what big tech and promising venture funded outfits are going to get; that is, whatever each company wants. Competition is good. See how well OpenAI and Microsoft are competing with Facebook and Google. Regulate appears to mean “let us do what we want to do.”

I am probably wrong. OpenAI, Google, and other leaders in smart software are at this very moment consuming the Harvard Library of books to read in search of information about ethical behavior. The “moral” learning comes later.

Net net: Now I understand the new denotation of “regulate.” Governments work for US high-tech firms. Thus, I think the French term laissez-faire nails it.

Stephen E Arnold, May 25, 2023

HP Autonomy: A Modest Disagreement Escalates

May 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

About 12 years ago, Hewlett Packard acquired Autonomy. As I understand the deal, HP wanted to snap up Autonomy to make a move in the enterprise services business. Autonomy was one of the major providers of search and some related content processing services in 2010. Autonomy’s revenues were nosing toward $800 million, a level no other search and retrieval software company had previously achieved.

However, as Qatalyst Partners reported in an Autonomy profile, the share price was not exactly hitting home runs each quarter:


Source: Autonomy Trading and Financial Statistics, 2011 by Qatalyst Partners

After some HP executive turmoil, the deal was done. After a year or so, HP analysts determined that the Silicon Valley company paid too much for Autonomy. The result was high profile litigation. One Autonomy executive found himself losing and suffering the embarrassment of jail time.

“Autonomy Founder Mike Lynch Flown to US for HPE Fraud Trial” reports:

Autonomy founder Mike Lynch has been extradited to the US under criminal charges that he defrauded HP when he sold his software business to them for $11 billion in 2011. The 57-year-old is facing allegations that he inflated the books at Autonomy to generate a higher sale price for the business, the value of which HP subsequently wrote down by billions of dollars.

Although I did some consulting work for Autonomy, I have no unique information about the company, the HP allegations, or the legal process which will unspool in the US.

In a recent conversation with a person who had first hand knowledge of the deal, I learned that HP was disappointed with the Autonomy approach to business. I pushed back and pointed out three things to a person who was quite agitated that I did not share his outrage. My points, as I recall, were:

  1. A number of search-and-retrieval companies failed to generate revenue sufficient to meet their investors’ expectations. These included outfits like Convera (formerly Excalibur Technologies), Entopia, and numerous other firms. Some were sold and were operated as reasonably successful businesses; for example, Dassault Systèmes and Exalead. Others were folded into a larger business; for example, Microsoft’s purchase of Fast Search & Transfer and Oracle’s acquisition of Endeca. The period from 2008 to 2013 was particularly difficult for vendors of enterprise search and content processing systems. I documented these issues in The Enterprise Search Report and a couple of other books I wrote.
  2. Enterprise search vendors and some hybrid outfits which developed search-related products and services used bundling as a way to make sales. The idea was not new. IBM refined the approach. Buy a mainframe and get support free for a period of time. Then the customer could pay a license fee for the software and upgrades and pay for services. IBM charged me $850 to roll a specialist to look at my three out-of-warranty PC 704 servers. (That was the end of my reliance on IBM equipment and its marvelous ServeRAID technology.) Libraries, for example, could acquire hardware. The “soft” components had a different budget cycle. The solution? Split up the deal. I think Autonomy emulated this approach and added some unique features. Nevertheless, the market for search and content related services was and is a difficult one. Fast Search & Transfer had its own approach. That landed the company in hot water and the founder on the pages of newspapers across Scandinavia.
  3. Sales professionals could generate interest in search and content processing systems by describing the benefits of finding information buried in a company’s file cabinets, tucked into PowerPoint presentations, and sleeping peacefully in email. Like the current buzz about OpenAI and ChatGPT, expectations are loftier than the reality of some implementations. Enterprise search vendors like Autonomy had to deal with angry licensees who could not find information, heated objections to the cost of reindexing content to make it possible for employees to find the file saved yesterday (an expensive and difficult task even today), and howls of outrage because certain functions had to be coded to meet the specific content requirements of a particular licensee. Remember that a large company does not need one search and retrieval system. There are many, quite specific requirements. These range from engineering drawings in the R&D center to the super sensitive employee compensation data, from the legal department’s need to process discovery information to the mandated classified documents associated with a government contract.

These issues remain today. Autonomy is now back in the spotlight. The British government, as I understand the situation, is not chasing Dr. Lynch for his methods. HP and the US legal system are.

The person with whom I spoke was not interested in my three points. He has a Harvard education and I am a geriatric. I will survive his anger toward Autonomy and his obvious affection for the estimable HP, its eavesdropping Board and its executive revolving door.

What few recall is that Autonomy was one of the first vendors of search to use smart software. The implementation was described as Neuro Linguistic Programming. Like today’s smart software, the functioning of the Autonomy core technology was a black box. I assume the litigation will expose this Autonomy black box. Is there a message for the ChatGPT-type outfits blossoming at a prodigious rate?

Yes, the enterprise search sector is about to undergo a rebirth. Organizations have information. Findability remains difficult. The fix? Merge ChatGPT-type methods with an organization’s content. What do you get? A party which faded away in 2010 is coming back. The Beatles and Elvis vibe will be live, on stage; act fast.

Stephen E Arnold, May 15, 2023

More Fake Drake and a Google Angle

May 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Copyright law was never designed to address algorithms that can flawlessly mimic artists and writers based on what it learns from the Internet. Absent any more relevant litigation, however, it may be up to the courts to resolve this thorny and rapidly escalating issue. And poor Google, possessor of both YouTube and lofty AI ambitions, is stuck between a rock and a hard place. The Verge reports, “AI Drake Just Set an Impossible Legal Trap for Google.”

To make a winding story short, someone used AI to create a song that sounded eerily like Drake and The Weeknd and posted it on TikTok. From there it made its way to Apple Music, Spotify, and YouTube. While Apple and Spotify could and did pull the track from their platforms right away, user-generated-content platforms TikTok and YouTube are bound by established takedown processes that rest on copyright law. And new content generated by AI that mimics humans is not protected by copyright. Yet.

The track was eventually removed on TikTok and YouTube based on an unauthorized sample of a producer tag at the beginning. But what if the song were re-released without that snippet? Publishers now assert that training AI on bodies of artists’ work is itself copyright infringement, and a fake Drake (or Taylor Swift or Tim McGraw) song is therefore a derivative work. Sounds logical to me. But for Google, both agreeing and disagreeing pose problems. Writer Nilay Patel explains:

“So now imagine that you are Google, which on the one hand operates YouTube, and on the other hand is racing to build generative AI products like Bard, which is… trained by scraping tons of data from the internet under a permissive interpretation of fair use that will definitely get challenged in a wave of lawsuits. AI Drake comes along, and Universal Music Group, one of the largest labels in the world, releases a strongly worded statement about generative AI and how its streaming partners need to respect its copyrights and artists. What do you do?

*If Google agrees with Universal that AI-generated music is an impermissible derivative work based on the unauthorized copying of training data, and that YouTube should pull down songs that labels flag for sounding like their artists, it undercuts its own fair use argument for Bard and every other generative AI product it makes — it undercuts the future of the company itself.

*If Google disagrees with Universal and says AI-generated music should stay up because merely training an AI with existing works is fair use, it protects its own AI efforts and the future of the company, but probably triggers a bunch of future lawsuits from Universal and potentially other labels, and certainly risks losing access to Universal’s music on YouTube, which puts YouTube at risk.”

Quite the conundrum. And of course, it is not just music. YouTube is bound to face similar issues with movies, TV shows, news, podcasts, and other content. Patel notes creators and their publishers are highly motivated to wage this fight because, for them, it is a fight to the potential death of their industries. Will Google sacrifice the currently lucrative YouTube or its potentially more profitable AI aspirations?

Cynthia Murrell, May 5, 2023

What a Difference a Format Makes. 24 Little Bytes

May 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Lawyer Carl Oppedahl has strong feelings about the Patent Office’s push to shift applications from PDF format to the DOCX format. In his most recent blog post on the subject he considers, “How Successful Have USPTO’s DOCX Training Webinars Been?” His answer, in short, is not very.

Oppedahl recently conducted two webinars for law offices that regularly file clients’ patent applications. He polled his attendees and reports the vast majority of them felt the Patent Office has not done a good job of communicating the pros and cons of DOCX filing. More significant, though, may be the majority of attendees who say they will not or might not submit filings in DOCX in the future, despite the $200 – $400 fee for stubbornly sticking with PDFs. In our experience PDFs are a PITA, so why is there such a strong resistance to change?

I sat through a recording of Oppedahl’s first webinar on the subject, and if you believe his account there are actually some very good reasons. It is all about protecting one’s client. Oh, and protecting oneself from a malpractice claim. That could be worth a few hundred bucks (which one might pass on to the client anyway). His executive-summary slide specifies:

“DOCX filing puts you more at risk than PDF filing

PDF filing:

*You can protect yourself tomorrow or next month or TYFNIL [ten years from now in litigation].

*The Ack Receipt Message Digest allows you to prove the PDF file you preserved is the same PDF file that was uploaded to the PTO.

*You get an audit trail.

DOCX filing:

*You cannot prove what DOCX file you actually uploaded.

*The PTO throws away the DOCX file you uploaded (D1) and only keeps their manipulated version (D2).

*There is no Ack Receipt Message Digest available to prove the DOCX file you preserved is the same DOCX file that you uploaded to the USPTO.

*The USPTO destroys the audit trail.

*There is an Ack Receipt Message Digest relating to DOCX. It does not match the file you uploaded (D1) so you cannot use it to prove what you filed. It does match the file D2 that became authoritative the instant that you clicked ‘submit,’ so TYFNIL it permits the infringer to prove that you must have clicked ‘submit’ and you agreed that your uploaded DOCX file D1 was not controlling.

*In other words TYFNIL if you try to point to what you say you uploaded, and you try to say that this is what should have issued in the patent the Message Digest will serve to say that you agreed that what you uploaded was irrelevant to what should have issued in the patent. The Message Digest serves to say that you agreed that the patent should issue based on what was in that manipulated version D2.

*In the DOCX filing system, the Message Digest has been repurposed to protect the USPTO and to protect infringers, and no longer protects you, the applicant or practitioner.”

Like I said, strong feelings. For details on each of these points, one really just needs to listen to the first 45 minutes of the webinar, not all one-and-a-half hours. A key point lies in that D1 versus D2 issue. The D2, which submitters are required to verify, is what emerges from the other side of the PTO’s proprietary DOCX validator software. According to Oppedahl, that software has been proven to introduce errors, like changing a mu to a u or a square root sign to a smiley face, for example. For patents that involve formulas or the like, that can be a huge issue. To avoid such errors being set in stone, filers (or their paralegals) must check the submitted document against the new one character by character while the midnight EST deadline looms. Not ideal.
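The character-by-character check described above is easy to state and tedious to do by hand. A minimal sketch of automating it is below; this is purely illustrative (the USPTO provides no such tool, and a real check would first require extracting plain text from both DOCX files), with the `char_mismatches` helper and the sample D1/D2 strings being our own invention:

```python
def char_mismatches(d1_text: str, d2_text: str) -> list[tuple[int, str, str]]:
    """Report (position, uploaded_char, validator_char) for every differing position."""
    mismatches = []
    for i, (a, b) in enumerate(zip(d1_text, d2_text)):
        if a != b:
            mismatches.append((i, a, b))
    if len(d1_text) != len(d2_text):
        # One text ran out before the other: flag the length difference too.
        mismatches.append((min(len(d1_text), len(d2_text)), "<length>", "<length>"))
    return mismatches

# D1 is what the filer uploaded; D2 is what the validator handed back.
d1 = "where μ denotes permeability in the claimed formula"
d2 = "where u denotes permeability in the claimed formula"
for pos, uploaded, produced in char_mismatches(d1, d2):
    print(f"position {pos}: uploaded {uploaded!r}, validator produced {produced!r}")
```

Even a toy like this shows why the mu-to-u class of error is insidious: the two texts differ by a single character that a tired human eye can easily miss at 11:55 p.m. EST.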

Another important issue is the value of the Ack Receipt Message Digest facilitated by PDFs but not DOCX documents. The technology involves hash functions and is an interesting math tangent if you’re into that kind of thing.
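The idea behind such a digest can be sketched in a few lines. The snippet below is an illustration only, not the USPTO's actual receipt mechanism (whose algorithm and format we do not describe here); the sample claim text and the `sha256_hex` helper are assumptions for the demo:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of the given bytes, as a hex string."""
    return hashlib.sha256(data).hexdigest()

# At filing time: preserve the uploaded bytes and record their digest.
uploaded = "CLAIM 1 ... where the permeability is μ ...".encode("utf-8")
digest_at_filing = sha256_hex(uploaded)

# Ten years from now in litigation: recomputing the digest over the
# preserved copy proves it is byte-for-byte what was uploaded.
assert sha256_hex(uploaded) == digest_at_filing

# A silently altered copy (μ degraded to a plain "u") fails the check.
altered = uploaded.replace("μ".encode("utf-8"), b"u")
assert sha256_hex(altered) != digest_at_filing
```

This is the property Oppedahl values in the PDF workflow: any single changed byte changes the digest, so a digest computed over the file you kept is proof of what you uploaded. His complaint is that in the DOCX workflow the digest covers D2, the PTO's manipulated version, not your D1.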

So why is the Patent Office pushing so hard? Apparently it is so they can automate their approval process. Automation is often a good thing, and we understand why they are eager to speed up the process and reduce their backlog. But the Patent Office may be jumping the gun if applicants’ legitimate legal standing is falling through the cracks.

Cynthia Murrell, May 5, 2023

Google Smart Software: Lawyers to the Rescue

May 2, 2023

The article “Beginning of the End of OpenAI” in Analytics India raised an interesting point about Google’s smart software. The essay suggests that a legal spat over a trademark for “GPT” could allow Google to make a come-from-behind play in the generative software race. I noted this passage:

A lot of product names appear with the term ‘GPT’ in it. Now, if OpenAI manages to get its trademark application decided in favour, all of these applications would have to change their name, and ultimately not look appealing to customers.

Flip this idea to “if Google wins…”: OpenAI could — note “could” — face a fleet of Google legal eagles and the might of Google’s prescient, forward, quantumly supreme marketing army.

What about useful products, unbiased methods of generating outputs, and slick technology? Wait. I know the answer. “That stuff is secondary to our new core competency. The outputs of lawyers and marketing specialists.”

Stephen E Arnold May 2, 2023

Divorcing the Google: Legal Eagles Experience a Frisson of Anticipation

April 24, 2023

No smart software has been used to create this dinobaby’s blog post.

I have poked around looking for a version or copy of the contract Samsung signed with Google for the firms’ mobile phone tie up. Based on what I have heard at conferences and read on the Internet (of course, I believe everything I read on the Internet, don’t you?), it appears that there are several major deals.

The first is the use of and access to the mindlessly fragmented Android mobile phone software. Samsung can do some innovating, but the Google is into providing “great experiences.” Why would a mobile phone maker like Samsung allow a user to manage contacts and block mobile calls without implementing a modern day hunt for gold near Placer?

The second is the “suggestion” — mind you, the suggestion is nothing more than a gentle nudge — to keep that largely-malware-free Google Play Store front and center.

The third is the default search engine. Buy a Samsung get Google Search.

Now you know why the legal eagles are shivering when they think of litigation to redo the Google – Samsung deal. For those who believe the misinformation zipping around about Microsoft Bing displacing Google Search, ask yourself, “Who gains by pumping out this type of disinformation?” One answer is big Chinese mobile phone manufacturers. This is Art of War stuff, and I won’t dwell on it. What about Microsoft? Maybe, but I like to think happy thoughts about Microsoft. I say, “No one at Microsoft would engage in disinformation intended to make life difficult for the online advertising king.” Another possibility is Silicon Valley type journalists who pick up rumors, amplify them, and then comment that Samsung is kicking the tires of Bing with ChatGPT. Suddenly a “real” news outfit emits the Samsung rumor. Exciting for the legal eagles.

The write up “Samsung Can’t Dump Google for Bing As the Default Search Engine on Its Phones” does a good job of explaining the contours of a Google – Samsung tie up.

Several observations:

First, the alleged Samsung search replacement provides a glimpse of how certain information can move from whispers at conferences to headlines.

Second, I would not bet against lawyers. With enough money, contracts can be nullified, transformed, or left alone. The only option which disappoints attorneys is the one that lets sleeping dogs lie.

Third, the growing upswell of anti-Google sentiment is noticeable. That may be a far larger problem for Googzilla than rumors about Samsung. Perceptions can be quite real, and they translate into impacts. I am tempted to quote William James, but I won’t.

Net net: If Samsung wants to swizzle a deal with an entity other than the Google, the lawyers may vibrate with such frequency that a feather or two may fall off.

Stephen E Arnold, April 24, 2023

Italy Has an Interesting Idea Similar to Stromboli with Fried Flying Termites Perhaps?

April 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Bureaucratic thought processes are amusing, not as amusing as Google’s Paris demonstration of Bard, but darned close. I spotted one example of what seems so darned easy but may be as tough as getting 15th century Jesuits to embrace the concept of infinity. In short, mandating is different from doing.

“Italy Says ChatGPT Must Allow Users to Correct Inaccurate Personal Information” reports in prose which may or may not have been written by smart software. I noted this passage about “rights”:

[such as] allowing users and non-users of ChatGPT to object to having their data processed by OpenAI and letting them correct false or inaccurate information about them generated by ChatGPT…

Does anyone recall the Google right-to-remove capability? The issue was blocking data, not making a determination of whether the information was “accurate.”

In one of my lectures at the 2023 US National Cyber Crime Conference I discuss with examples the issue of determining “accuracy.” My audience consists of government professionals who have resources to determine accuracy. I will point out that accuracy is a slippery fish.

The other issue is getting whiz bang Sillycon Valley hot stuff companies to implement reliable, stable procedures. Most of these outfits operate with Philz coffee in mind, becoming a rock star at a specialist conference, or the future owner of a next generation Italian super car. Listening to Italian bureaucrats is not a key part of their thinking.

How will this play out? Hearings, legal proceedings, and then a shrug of the shoulders.

Stephen E Arnold, April 19, 2023
