OpenAI: Someone, Maybe the UN? Take Action Before We Sign Up More Users

June 8, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I wrote about Sam AI-man’s use of language in my humanoid-written essay “Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?” Now the vocabulary of Mr. AI-man has been enriched. For a recent example, please navigate to “OpenAI CEO Suggests International Agency Like UN’s Nuclear Watchdog Could Oversee AI.” I am loath to quote from the AP (once the “Associated Press”) due to the current entity’s policy on citing its “real news.”

In the allegedly accurate “real news” story, I learned that Mr. AI-man has floated the idea of a United Nations agency to oversee global smart software. Now that is an idea worthy of a college dorm room discussion at Johns Hopkins University’s School of Advanced International Studies in always-intellectually-sharp Washington, DC.


UN Representative #1: What exactly is artificial intelligence? UN Representative #2: How can we leverage it for fund raising? UN Representative #3: Does anyone have an idea how we could use smart software to influence our friends in certain difficult nation states? UN Representative #4: Is it time for lunch? Illustration crafted with imagination, love, and care by MidJourney.

The model, as I understand the “real news” story, is that the UN would be the guard dog for bad applications of smart software. Mr. AI-man’s example of UN effectiveness is the entity’s involvement in nuclear power. (How is that working out in Iran?) The write up also references the notion of guard rails. (Are there guard rails on other interesting technology; for example, Instagram’s somewhat relaxed approach to certain information related to youth?)

If we put the “make sure we come together as a globe” statement in the context of Sam AI-man’s other terminology, I wonder if PR and looking good is more important than generating traction and revenue from OpenAI’s innovations.

Of course not. The UN can do it. How about those UN peacekeeping actions in Africa? Complete success from Mr. AI-man’s point of view.

Stephen E Arnold, June 8, 2023, 9:29 am US Eastern

Japan and Copyright: Pragmatic and Realistic

June 8, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Japan Goes All In: Copyright Doesn’t Apply To AI Training.” In a nutshell, Japan’s alleged stance comes with a message for “creators”: Tough luck.


You are ripping off my content. I don’t think that is fair. I am a creator. The image of a testy office lady is the product of MidJourney’s derivative capabilities.

The write up asserts:

It seems Japan’s stance is clear – if the West uses Japanese culture for AI training, Western literary resources should also be available for Japanese AI. On a global scale, Japan’s move adds a twist to the regulation debate. Current discussions have focused on a “rogue nation” scenario where a less developed country might disregard a global framework to gain an advantage. But with Japan, we see a different dynamic. The world’s third-largest economy is saying it won’t hinder AI research and development. Plus, it’s prepared to leverage this new technology to compete directly with the West.

If this is the direction in which Japan is heading, what’s the posture in China, Vietnam, and other countries in the region? How can the US regulate for an unknown future? We know Japan’s approach, it seems.

Stephen E Arnold, June 8, 2023

How Does One Train Smart Software?

June 8, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It is awesome when geekery collides with the real world, as in the development of AI. These geekery hints prove that fans are everywhere and that the influence of fictional worlds leaves a lasting impact. Usually the hints take the form of naming a new discovery after a favorite character or franchise, but the collision might not be good for copyrighted books beloved by geeks everywhere. The New Scientist reports that “ChatGPT Seems To Be Trained On Copyrighted Books Like Harry Potter.”

To train AI models, developers need large datasets. These can range from information on social media platforms to shopping databases like Amazon. The problem with ChatGPT is that its developers at OpenAI appear to have used copyrighted books as training data. If OpenAI used copyrighted materials, it raises the question of whether the datasets were legally created.

Associate Professor David Bamman of the University of California, Berkeley, and his team studied ChatGPT. They hypothesized that OpenAI used copyrighted material. Using 600 fiction books published between 1924 and 2020, Bamman and his team selected 100 passages from each book that had a single, named character. The name was blanked out of each passage, and ChatGPT was asked to fill it in. ChatGPT had a 98% accuracy rate on books by authors ranging from J.K. Rowling and Ray Bradbury to Lewis Carroll and George R.R. Martin.
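
The probe described is essentially a “name cloze” test run at scale. Here is a minimal sketch of how such a membership probe might look, assuming the 2023-era openai Python client; the prompt wording, sample passage, and scoring are illustrative, not the study’s actual materials:

```python
# Hypothetical name-cloze probe: if the model can restore a masked
# character name from a passage, that passage was likely in its
# training data. The prompt and sample below are invented examples.
import openai

openai.api_key = "sk-..."  # your API key

def cloze_prompt(passage: str) -> str:
    return (
        "The following passage has one proper name replaced by [MASK]. "
        "Reply with that name only.\n\n" + passage
    )

def recovery_rate(items: list[tuple[str, str]], model: str = "gpt-4") -> float:
    """Fraction of masked names the model restores exactly."""
    hits = 0
    for passage, answer in items:
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": cloze_prompt(passage)}],
            temperature=0,
        )
        guess = response["choices"][0]["message"]["content"].strip()
        hits += int(guess.lower() == answer.lower())
    return hits / len(items)

# One masked passage (public domain; the answer is "Gatsby").
sample = [("[MASK] smiled understandingly, much more than understandingly.",
           "Gatsby")]
print(f"Recovery rate: {recovery_rate(sample):.0%}")
```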

If ChatGPT was trained on these copyrighted books, does that violate copyright?

“ ‘The legal issues are a bit complicated,’ says Andres Guadamuz at the University of Sussex, UK. ‘OpenAI is training GPT with online works that can include large numbers of legitimate quotes from all over the internet, as well as possible pirated copies.’ But these AIs don’t produce an exact duplicate of a text in the same way as a photocopier, which is a clearer example of copyright infringement. ‘ChatGPT can recite parts of a book because it has seen it thousands of times,’ says Guadamuz. ‘The model consists of statistical frequency of words. It’s not reproduction in the copyright sense.’”

Individual countries will need to determine dataset rules, but it would be preferable to notify authors that their material is being used. Fiascos are already happening with AI art generated from stolen work.

ChatGPT was trained mostly on science fiction novels; it did not read much fiction by minority authors like Toni Morrison. Bamman said ChatGPT is lacking representation. That is one way to describe the datasets, but it more likely reflects the reading tastes of the human AI developers. I assume there was little interest in books about ethics, moral behavior, and the old-fashioned William James’s view of right and wrong. I think I assume correctly.

Whitney Grace, June 8, 2023

Blue Chip Consultants Embrace Smart Software: Some Possible But Fanciful Reasons Offered

June 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

VentureBeat published an interesting essay about the blue-chip consulting firm, McKinsey & Co. Some may associate the firm with its work for the pharmaceutical industry. Others may pull up a memory of a corporate restructuring guided by McKinsey consultants which caused employees to have an opportunity to find their futures elsewhere. A select few can pull up a memory of a McKinsey recruiting pitch at one of the business schools known to produce cheerful Type As who desperately sought the approval of their employer. I recall doing a minuscule project related to a mathematical technique productized by a company almost no one remembers. I think the name of the firm was CrossZ. Ah, memories.

“McKinsey Says About Half of Its Employees Are Using Generative AI” reports via a quote from Ben Ellencweig (a McKinsey blue chip consultant):

About half of [our employees] are using those services with McKinsey’s permission.

Is this half made up of “regular” McKinsey wizards? That’s ambiguous. McKinsey has set up QuantumBlack, a unit focused on consulting about artificial intelligence.

The article included a statement which reminded me of what I call the “vernacular of the entitled”; to wit:

Ellencweig emphasized that McKinsey had guardrails for employees using generative AI, including “guidelines and principles” about what information the workers could input into these services. “We do not upload confidential information,” Ellencweig said.


A senior consultant from an unknown consulting firm explains how a hypothetical smart fire hydrant disguised as a beverage dispenser can distribute variants of Hydrocodone Bitartrate or an analog to “users.” The illustration was created by the smart bytes at MidJourney.

Yep, guardrails. Guidelines. Principles. I wonder if McKinsey and Google are using the same communications consulting firm. The lingo is designed to reassure, to suggest an ethical compass in good working order.

Another McKinsey expert said, according to the write up:

McKinsey was testing most of the leading generative AI services: “For all the major players, our tech folks have them all in a sandbox, [and are] playing with them every day,” he said.

But what about the “half”? If half means those in QuantumBlack, McKinsey is in the Wright Bros. stage of technological application. However, if the half applies to the entire McKinsey work force, that raises a number of interesting questions about what information is where and how those factoids are being used.

If I were not a dinobaby with a few spins in the blue chip consulting machine, I would track down what Bain, BCG, Booz, et al. were saying about their AI practice areas. I am a dinobaby.

What catches my attention is the use of smart software in these firms; for example, here are a couple of questions I have:

  1. Will McKinsey and similar firms use the technology to reduce the number of expensive consultants and analysts while maintaining or increasing the costs of projects?
  2. Will McKinsey and similar firms maintain their present staffing levels and boost the requirements for a bonus or promotion as measured by billability and profit?
  3. Will McKinsey and similar firms use the technology, increase the number of staff who can integrate smart software into their work, and force out the Luddites who do not get with the AI program?
  4. Will McKinsey cherry pick ways to use the technology to maximize partner profits and scramble to deal with glitches in the new fabric of being smarter than the average client?

My instinct is that more money will be spent on marketing the use of smart software. Luddites will be allowed to find their future at an azure chip firm (lower tier consulting company) or return to their parents’ home. New hires with AI smarts will ride the leather seats in McKinsey’s carpetland. Decisions will be directed at [a] maximizing revenue, [b] beating out other blue chip outfits for juicy jobs, and [c] chasing terminated high tech professionals who own a suit and don’t eat enhanced gummies during an interview.

And for the clients? Hey, check out the way McKinsey produces payoff for its clients in “When McKinsey Comes to Town: The Hidden Influence of the World’s Most Powerful Consulting Firm.”

Stephen E Arnold, June 7, 2023

Google: Responsible and Trustworthy Chrome Extensions with a Dab of Respect the User

June 7, 2023

“More Malicious Extensions in Chrome Web Store” documents some Chrome extensions (add-ins) which allegedly compromise a user’s computer. Google has been using words like responsible and trust with increasing frequency. With Chrome in use by more than half of those with computing devices, what’s the dividing line between trust and responsibility for Google smart software and stupid but market-leading software like Chrome? If a non-Google third party can spot allegedly problematic extensions, why can’t Google? Is part of the answer, “Talk is cheap. Fixing software is expensive”? That’s a good question.

The cited article states:

… we are at 18 malicious extensions with a combined user count of 55 million. The most popular of these extensions are Autoskip for Youtube, Crystal Ad block and Brisk VPN: nine, six and five million users respectively.

The write up crawfishes, stating:

Mind you: just because these extensions monetized by redirecting search pages two years ago, it doesn’t mean that they still limit themselves to it now. There are way more dangerous things one can do with the power to inject arbitrary JavaScript code into each and every website.
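
The power the quote mentions comes from an extension’s manifest: broad host permissions let a content script run on every page a user visits. As a rough illustration, here is a minimal Python sketch that flags installed extensions requesting that power; the profile path assumes Chrome’s default location on Linux and is hypothetical for other setups:

```python
# Flag installed Chrome extensions whose manifests request the right
# to run scripts on every site. Path assumes Chrome's default Linux
# profile; adjust for macOS/Windows. Names may be locale placeholders.
import json
from pathlib import Path

EXT_ROOT = Path.home() / ".config/google-chrome/Default/Extensions"
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

for manifest_path in EXT_ROOT.glob("*/*/manifest.json"):
    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
    grants = set(manifest.get("permissions", []))
    grants |= set(manifest.get("host_permissions", []))  # Manifest V3
    for script in manifest.get("content_scripts", []):
        grants |= set(script.get("matches", []))
    risky = grants & BROAD
    if risky:
        name = manifest.get("name", manifest_path.parent.parent.name)
        print(f"{name}: {sorted(risky)}")
```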

My reaction: why are these allegedly malicious components in the Google “store” in the first place?

I think the answer is obvious: Talk is cheap. Fixing software is expensive. You may disagree, but I hold fast to my opinion.

Stephen E Arnold, June 7, 2023

Old School Book Reviewers, BookTok Is Eating Your Lunch Now

June 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Perhaps to counter recent aspersions on its character, TikTok seems eager to transfer prestige from one of its popular forums to itself. Mashable reports, “TikTok Is Launching its Own Book Awards.” The BookTok community has grown so influential it apparently boosts book sales and inspires TV and movie producers. Writer Meera Navlakha reports:

“TikTok knows the power of this community, and is expanding on it. First, a TikTok Book Club was launched on the platform in July 2022; a partnership with Penguin Random House followed in September. Now, the app is officially launching the TikTok Book Awards: a first-of-its-kind celebration of the BookTok community, specifically in the UK and Ireland. The 2023 TikTok Book Awards will honour favourite authors, books, and creators across nine categories. These range ‘Creator of the Year’ to ‘Best BookTok Revival’ to ‘Best Book I Wish I Could Read Again For The First Time’. Those within the BookTok ecosystem, including creators and fans, will help curate the nominees, using the hashtag #TikTokBookAwards. The long-list will then be judged by experts, including author Candice Brathwaite, creators Coco and Ben, and Trâm-Anh Doan, the head of social media at Bloomsbury Publishing. Finally, the TikTok community within the UK and Ireland will vote on the short-list in July, through an in-app hub.”

What an efficient plan. This single, geographically limited initiative may not be enough to outweigh concerns about TikTok’s security. But if the platform can appropriate more of its communities’ deliberations, perhaps it can gain the prestige of a digital newspaper of record. All with nearly no effort on its part.

Cynthia Murrell, June 7, 2023

Microsoft: Just a Minor Thing

June 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Several years ago, I was asked to be a technical advisor to a UK group focused on improper actions directed toward children. Since then, I have paid some attention to the information about young people that some online services collect. One of the more troubling facets of improper actions intended to compromise the privacy, security, and possibly the safety of minors is the role data aggregators play. Whether gathering information from “harmless” apps favored by young people or surreptitiously collecting and cross-correlating young users’ online travels, these often hidden actions of people and their systems trouble me.

The “anything goes” approach of some organizations is often masked by public statements and the use of words like “trust” when explaining how information “hoovering” operations are set up, implemented, and used to generate revenue or other outcomes. I am not comfortable identifying some of these, however.


A regulator and a big company representative talking about a satisfactory resolution to the regrettable collection of kiddie data. Both appear to be satisfied with another job well done. The image was generated by the MidJourney smart software.

Instead, let me direct your attention to the BBC report “Microsoft to Pay $20m for Child Privacy Violations.” The write up states as “real news”:

Microsoft will pay $20m (£16m) to US federal regulators after it was found to have illegally collected data on children who had started Xbox accounts.

The write up states:

From 2015 to 2020 Microsoft retained data “sometimes for years” from the account set up, even when a parent failed to complete the process …The company also failed to inform parents about all the data it was collecting, including the user’s profile picture and that data was being distributed to third parties.

Will the leader in smart software and clever marketing have an explanation? Of course. That’s what advisory firms and lawyers help their clients deliver; for example:

“Regrettably, we did not meet customer expectations and are committed to complying with the order to continue improving upon our safety measures,” Microsoft’s Dave McCarthy, CVP of Xbox Player Services, wrote in an Xbox blog post. “We believe that we can and should do more, and we’ll remain steadfast in our commitment to safety, privacy, and security for our community.”

Sounds good.

From my point of view, something is out of alignment. Perhaps it is my old-fashioned idea that young people’s online activities require a more thoughtful approach by large companies, data aggregators, and click capturing systems. The thought, it seems, is directed at finding ways to take advantage of weak regulation, inattentive parents and guardians, and often-uninformed young people.

Like other ethical black holes in certain organizations, surfing for fun or money on children seems inappropriate. Does $20 million have an impact on a giant company? Nope. The ethical and moral foundation of decision making at these outfits is what enables the data collection activities. And $20 million causes little or no pain. Therefore, why not continue these practices and do a better job of keeping the procedures secret?

Pragmatism is the name of the game it seems. And kiddie data? Fair game to some adrift in an ethical swamp. Just a minor thing.

Stephen E Arnold, June 6, 2023

Software Cannot Process Numbers Derived from Getty Pix, Honks Getty Legal Eagle

June 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Getty Asks London Court to Stop UK Sales of Stability AI System.” The write up comes from a service which, like Google, bandies about the word trust with considerable confidence. The main idea is that software is processing images available in the form of Web content, converting these to numbers, and using the zeros and ones to create pictures.
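
The “converting these to numbers” point is literal: before any model trains on a picture, the picture is decoded into an array of pixel values. A minimal sketch in Python, with a hypothetical filename:

```python
# Decode a picture into the numbers a model actually trains on.
# "some_photo.jpg" is a placeholder; use any image file you have.
from PIL import Image
import numpy as np

img = Image.open("some_photo.jpg").convert("RGB")
pixels = np.asarray(img)      # height x width x 3 array of 0-255 values

print(pixels.shape)           # e.g. (768, 1024, 3)
print(pixels[0, 0])           # top-left pixel: three small integers
print(pixels[0, 0] / 255.0)   # the scaled floats a model ingests
```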

The write up states:

The Seattle-based company [Getty] accuses the company of breaching its copyright by using its images to “train” its Stable Diffusion system, according to the filing dated May 12, [2023].

I found this statement in the trusted write up fascinating:

Getty is seeking as-yet unspecified damages. It is also asking the High Court to order Stability AI to hand over or destroy all versions of Stable Diffusion that may infringe Getty’s intellectual property rights.

When I read this, I wonder if the scribes, upon learning about the threat Gutenberg’s printing press represented, experienced their “Getty moment.” The advanced technology of the adapted olive press and hand-carved wooden letters meant that the quill pen champions had to adapt or find their future emptying garderobes (aka chamber pots).


Scribes prepare to throw a Gutenberg printing press and the evil innovator Gutenberg into the Rhine River. Image produced by the evil incarnate code of MidJourney. Getty is not impressed (like letters on paper) with the outputs of Beelzebub-inspired innovations.

How did that rebellion against technology work out? Yeah. Disruption.

What happens if the legal systems in the UK and possibly the US jump on the no-innovation train? Japan’s decision points to one option: using what’s on the Web is just fine. And China? Yep, those folks in the Middle Kingdom will definitely conform to the UK and maybe US rules and regulations. What about outposts of innovation in Armenia? Johnnies on the spot (not pot, please). But what about those computer science students at Cambridge University? Jail and fines are too good for them. To the gibbet.

Stephen E Arnold, June 6, 2023

India and Its Management Secrets: Under Utilized Staff Motivation Technique

June 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am not sure if the information in the article is accurate, but it sure is entertaining. If true, I think I have identified one of those management secrets which make wizards from India such outstanding managers. Navigate to “Company Blocks Employees from Leaving Office: Now, a Clarification.” The write up states:

Coding Ninjas, a Gurugram-based edtech institute, has issued clarification on a recent incident that saw its employees being ‘locked’ inside the office so that they cannot exit ‘without permission.’

And what was the clarification? Let me guess. Heck, no. Just a misunderstanding. The write up explains:

… the company [Coding Ninjas, remember?], while acknowledging the incident, attributed it to a ‘regrettable’ action by an employee. The ‘action,’ it noted, was ‘immediately rectified within minutes,’ and the individual in question acknowledged his ‘mistake’ and apologized for it. Further saying that the founders had expressed their ‘regret’ and apologized to the staff, Coding Ninjas described this as an ‘isolated’ incident.

For another take on this interesting management approach to ensuring productivity, check out “Coding Ninjas’ Senior Executive Gets Gate Locked to Stop Employees from Leaving Office; Company Says Action ‘Regrettable’.”

What if you were to look for a link to this story on Reddit? I located a page which showed a door being locked. Exciting footage was available at this link on June 6, 2023. (If the information has been deleted, you have learned something about Reddit.com in my opinion.)

My interpretation of this enjoyable incident (if indeed true) is:

  1. Something to keep in mind when accepting a job in Mumbai or similar technology hot spot
  2. Why graduates of the Indian Institutes of Technology are in such demand; those folks are indeed practical and focused on maximizing employee productivity as measured in minutes in a facility
  3. A solution to employees who want to work from home. When an employee wants a paycheck, make them come to the office and lock them in. It works well; the effectiveness is evident in prisons and re-education facilities in a number of countries.

And regrettable? Yes, in terms of PR. No, in terms of getting snagged in what may be fake news. Is this a precept of the high school science club management method? Yep. Yep.

Stephen E Arnold, June 6, 2023

The Google AI Way: EEAT or Video Injection?

June 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Over the weekend, I spotted a couple of signals from the Google marketing factory. The first is the cheerleading by that great champion of objective search results, Danny Sullivan, who wrote with Chris Nelson “Rewarding High Quality Content, However It Is Produced.” The authors pointed out that their essay is on behalf of the Google Search Quality team. This “team” speaks loudly to me when we run test queries on Google.com. Once in a while — not often, mind you — a relevant result will appear in the first page or two of results.

The subject of this essay by Messrs. Sullivan and Nelson is EEAT. My research team and I think that the fascinating acronym is pronounced like the word “eat” in the sense of ingesting gummy cannabinoids. (One hopes these are not prohibited compounds such as Delta-9 THC.) The idea is to pop something in your mouth and chew. As the compound (fact and fiction, GPT-generated content and factoids) dissolves and makes its way into one’s system, the psychoactive reaction is greater perceived dependence on Google products. You may not agree, but that’s how I interpret the essay.

So what’s EEAT? I am not sure my team and I are getting with the Google script. The correct and Googley answer is:

Expertise, experience, authoritativeness, and trustworthiness.

The write up says:

Focusing on rewarding quality content has been core to Google since we began. It continues today, including through our ranking systems designed to surface reliable information and our helpful content system. The helpful content system was introduced last year to better ensure those searching get content created primarily for people, rather than for search ranking purposes.

I wonder if this text has been incorporated in the Sundar and Prabhakar Comedy Show? I would suggest that it replace the words about meeting users’ needs.

The meat of the synthetic turkey burger strikes me as:

it’s important to recognize that not all use of automation, including AI generation, is spam. Automation has long been used to generate helpful content, such as sports scores, weather forecasts, and transcripts. AI has the ability to power new levels of expression and creativity, and to serve as a critical tool to help people create great content for the web.

Synthetic or manufactured information, content objects, data, and other outputs are okay with us. We’re Google, of course, and we are equipped with expertise, experience, authoritativeness, and trustworthiness to decide what is quality and what is not.
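
The quoted passage’s examples (sports scores, weather forecasts) refer to template-driven generation, which predates large language models by decades. Here is a minimal sketch of the idea; the function name and scores are invented for illustration:

```python
# A toy version of "automation has long generated helpful content":
# a template plus structured data yields a readable report. The
# team names and scores below are made up for illustration.
def game_recap(home: str, away: str, home_score: int, away_score: int) -> str:
    winner, loser = (home, away) if home_score > away_score else (away, home)
    margin = abs(home_score - away_score)
    return (f"{winner} beat {loser} by {margin}, "
            f"{max(home_score, away_score)}-{min(home_score, away_score)}.")

print(game_recap("Louisville", "Lexington", 21, 14))
# -> Louisville beat Lexington by 7, 21-14.
```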

I can almost visualize a T-shirt with the phrase “EEAT It” silkscreened on the back and a cheerful Google logo on the front. Catchy. EEAT It. I want one. Perhaps a pop tune can be sampled and used to generate a synthetic song similar to Michael Jackson’s “Beat It”? Google AI would dodge the Weird Al Yankovic version of the 1983 hit. Google’s version might include the refrain:

Just EEAT it (EEAT it, EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it, ha, ha, ha, ha)
EEAT it (EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it)

If chowing down on this Google information is not to your liking, one can get with the Google program via a direct video injection. Google has been publicizing its free video training program from India to LinkedIn (a Microsoft property to give the social media service its due). Navigate to “Master Generative AI for Free from Google’s Courses.” The free, free courses are obviously advertisements for the Google way of smart software. Remember the key sequence: Expertise, experience, authoritativeness, and trustworthiness.

The courses are:

  1. Introduction to Generative AI
  2. Introduction to Large Language Models
  3. Attention Mechanism
  4. Transformer Models and BERT Model
  5. Introduction to Image Generation
  6. Create Image Captioning Models
  7. Encoder-Decoder Architecture
  8. Introduction to Responsible AI (remember the phrase “Expertise, experience, authoritativeness, and trustworthiness.”)
  9. Introduction to Generative AI Studio
  10. Generative AI Explorer (Vertex AI).

Why is Google offering free infomercials about its approach to AI?

The cited article answers the question this way:

By 2030, experts anticipate the generative AI market to reach an impressive $109.3 billion, signifying a promising outlook that is captivating investors across the board. [Emphasis added.]

How will Microsoft respond to the EEAT It positioning?

Just EEAT it (EEAT it, EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it, ha, ha, ha, ha)
EEAT it (EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it)

Stephen E Arnold, June 5, 2023
