Libraries: Who Needs Them? Perhaps Everyone

May 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

How dare libraries try to make the works they purchase more easily accessible to their patrons! The Nation ponders, “When You Buy a Book, You Can Loan It to Anyone. This Judge Says Libraries Can’t. Why Not?” The case was brought before the U.S. District Court in Manhattan by four publishers unhappy with the Internet Archive’s (IA) controlled digital lending (CDL) program. We learn the IA does plan to appeal the decision. Writer Michelle M. Wu explains:

“At issue was whether a library could legally digitize the books it already owned and lend the digital copies in place of the print. The IA maintained that it could, as long as it lent only the same number of copies it owned and locked down the digital copies so that a borrower could not copy or redistribute them. It would be doing what libraries had always done, lend books—just in a different format. The publishers, on the other hand, asserted that CDL infringed on authors’ copyrights, making unauthorized copies and sharing these with libraries and borrowers, thereby depriving the authors and publishers of rightful e-book sales. They viewed CDL as piracy. While Judge John G. Koeltl’s opinion addressed many issues, all his reasoning was based on one assumption: that copyright primarily is about authors’ and publishers’ right to profit. Despite the pervasiveness of this belief, the history of copyright tells us something different.”
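The owned-to-loaned rule Wu describes can be sketched in a few lines. This is a hypothetical illustration of the CDL constraint (at most as many concurrent digital loans as print copies owned); the class and method names are invented for this sketch, not the Internet Archive's actual software.

```python
class ControlledDigitalLending:
    """Toy model of the CDL rule: a library circulates at most as many
    digital copies as print copies it owns, one loan per patron."""

    def __init__(self, owned_print_copies):
        self.owned = owned_print_copies   # print copies the library purchased
        self.checked_out = set()          # patrons currently holding a loan

    def lend(self, patron):
        # Refuse the loan once the owned-to-loaned ratio is reached.
        if len(self.checked_out) >= self.owned:
            return False
        self.checked_out.add(patron)
        return True

    def return_copy(self, patron):
        self.checked_out.discard(patron)
```

Under this rule a library owning two print copies can never have three digital loans outstanding at once, which is the "lend books—just in a different format" argument in miniature.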

Wu recounts copyright’s evolution from a means to promote the sharing of knowledge to a way for publishers to rake in every possible dime. The shift was driven by a series of developments in technology. In the 1980s, the new ability to record content to videotape upset Hollywood studios. Apparently, being able to (re)watch a show after its initial broadcast was so beyond the pale that a lawsuit was required. Later, Internet-based innovations prompted more legal proceedings. On the other hand, tools evolved that enabled publishers to enforce their interpretation of copyright, no judicial review required. Wu asserts:

“Increasing the impact on the end user, publishers—not booksellers or authors—now control prices and access. They can charge libraries multiple times what they charge an individual and bill them repeatedly for the same content. They can limit the number of copies a library buys, or even refuse to sell e-books to libraries at all. Such actions ultimately reduce the amount of content that libraries can provide to their readers.”

So that is how the original intention of copyright law has been turned on its head. And how publishers are undermining the whole purpose of libraries, which are valiantly trying to keep pace with technology. Perhaps the IA will win its appeal and the valuable CDL program will be allowed to continue. Either way, their litigious history suggests publishers will keep fighting for control over content.

Cynthia Murrell, May 3, 2023

Google AI Reorganization: Hinton Bails Out

May 2, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I saw a number of pointers to a New York Times story about an AI wizard bailing out of the smooth-riding Google AI operation. “‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead” states that the AI expert “worries it will cause serious harm.” I liked this statement because it displays the Times’s penchant for adding opinions to information provided by an expert. I love psycho-journalism!

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education. But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

Remember the halcyon days of “objective” Google search results? What about the excitement of sending short messages for free and harmlessly capturing followers with a pithy bon mot? Has the warm flush of Facebook’s ability to build communities among users and predators faded? Each of these looked benign. Entertaining curiosities.

Now smart software is viewed with some skepticism. Gee. It only took a quarter century for people to figure out that flowing information is sometimes good and many times a bit like water blasted from a nozzle at great speed.

I found this comment interesting:

Until last year, he [Hinton] said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said. His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

I wonder if the OSINT cheerleaders have considered that what may be a multi-billion dollar industry could be facing a bit of a challenge. Mixing up Ukrainian field survey tags with Russian targeting devices will be small potatoes if Dr. Hinton is correct.

The photograph of the wizard captures a person who is not a 20 something Googler. The expression seems to suggest a growing awareness of a rework of the Information Superhighway and some other furniture of the modern world.

Stephen E Arnold, May 2, 2023

Google Smart Software: Lawyers to the Rescue

May 2, 2023

The article “Beginning of the End of OpenAI” in Analytics India raised an interesting point about Google’s smart software. The essay suggests that a legal spat over a trademark for “GPT” could allow Google to make a come-from-behind play in the generative software race. I noted this passage:

A lot of product names appear with the term ‘GPT’ in it. Now, if OpenAI manages to get its trademark application decided in favour, all of these applications would have to change their name, and ultimately not look appealing to customers.

Flip this idea to “if Google wins…,” and OpenAI could — note “could” — face a fleet of Google legal eagles and the might of Google’s prescient, forward-thinking, quantumly supreme marketing army.

What about useful products, unbiased methods of generating outputs, and slick technology? Wait. I know the answer. “That stuff is secondary to our new core competency. The outputs of lawyers and marketing specialists.”

Stephen E Arnold, May 2, 2023

Digital Dumplings: AI Outputs Must Be Groomed, Trimmed, and Message Aligned

May 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Beijing Moves to Force AI Bots to Display Socialist Core Values.” I am not sure that the write up is spot on, but let’s assume that it is close enough for horseshoes. The main idea is that AI can chat. However, the AI must be steered so that it outputs content displaying “socialist core values.”

The write up states:

Companies will also have to make sure their chatbots create words and pictures that are truthful and respect intellectual property, and will be required to register their algorithms, the software brains behind chatbots, with regulators. The rules are not final, and regulators may continue to modify them, but experts said engineers building AI services in China were already figuring out how to incorporate the edicts into their products.

My reaction is that this puts to the test the claim that training smart software plus any post-training digital filters will work. Let’s assume that those subject to the edict achieve the objective. What about US smart software whose developers insist that objectivity is the real deal? China’s policy, if implemented and delivered, makes it clear that smart software is not objective. Developers can and will use its malleability to achieve their goals.
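The crudest form of a post-training digital filter can be sketched as a scrub pass over generated text before it reaches the user. This is a deliberately simplistic, hypothetical illustration: real deployments rely on trained classifiers and fine-tuning rather than a phrase list, and `align_output` and its parameters are invented names.

```python
def align_output(text, banned_phrases, replacement="[removed]"):
    """Toy post-training filter: replace disallowed phrases in model
    output before display. Real systems use classifiers, not lists."""
    for phrase in banned_phrases:
        text = text.replace(phrase, replacement)
    return text
```

The point of the sketch is the architecture, not the method: the filter sits after the model, so whoever controls it controls what the chatbot appears to "believe."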

How about those students who reveal deep secrets on TikTok? Will these individuals be manipulated via smart software informed of the individuals’ hot buttons?

Is that data dumpling a psychographic trigger with a payload different from ground pork, veggies, and spices?

Stephen E Arnold, May 1, 2023

Gmail: An Example of Control Addiction

May 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Is Gmail Killing Independent Email?” The main idea for the essay by an outfit called Tutanota is to answer the question with a reasonably well-reasoned, “Yes.” I am not going to work through the headaches caused by Google’s spam policies. Instead, I want to present one statement from the write up and invite you to consider it in the context of “control addiction.”

I circled one statement which illustrates how Alphabet responds to what I call “control addiction.” My definition of the term is that a firm in a position of power wants more power because it validates the company plus it creates revenue opportunities via lock in. Addicts generally feel compelled to keep buying from their supplier, I believe.

Is it okay that Gmail has the power to decide whether a business is sending spam or not? At the very least, Gmail support team should have listened to the company and looked into the issue to fix it. If Google is not willing to do this, it is just another sign of how Google can abuse their market power and hinder smaller services or – in this case – self-hosting emails, limiting the options people and businesses have when they want that their emails are reliably received by Gmail.

Several observations:

  1. Getting a human at Google is possible; however, some sort of positive relationship with a Googler of influence is necessary in my experience.
  2. That Googler may not know what to do about the problem. Command-and-control at the Alphabet, Google, YouTube construct is — how shall I phrase it? — quantumly supreme. The idea is that procedures and staff responsible for something wink in and out of existence without warning and change state following the perturbations of mysterious dynamical forces.
  3. Google is not into customer service, user service, or any other type of other directed service unless it benefits the Googler involved.

Net net: Decades of regulatory floundering have made life cushy for Googlers. Some others? Yeah, not so much.

Stephen E Arnold, May 1, 2023

Google: Timing Is Everything

April 28, 2023

Note: This short blog post is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Alphabet, a.k.a. the bastion of excellent judgment in matters of management, captured headlines in the Wall Street Journal, Bloomberg, the Financial Times, yada yada. My hunch is that you think Google has knocked the socks off the smart software world. Wrong. Maybe Google has introduced an unbeatable approach to online advertising? Wrong. Perhaps you think that Google has rolled out a low-cost, self-driving vehicle? Sorry, wrong.

In the midst of layoffs, lawsuits, and the remarkable market reach of OpenAI, Google’s most recent brilliant move is the release of information about a big payday for Sundar Pichai. The Reuters story “Alphabet CEO Pichai Reaps Over $200 Million in 2022 Amid Cost-Cutting” reported:

The pay disparity comes at a time when Alphabet, the parent company of Google, has been cutting jobs globally. The Mountain View, California-based company announced plans to cut 12,000 jobs around the world in January [2023], equivalent to 6% of its global workforce.

Google employees promptly fouled traffic as protestors mumbled and shouted algorithms at the company.

Alphabet’s Board of Directors is quite tolerant and pleased with one half of the Sundar and Prabhakar Comedy Duo. The Paris Bard show sucked more value from the company than the Ticketmaster and Taylor Swift swizzle. Then the Google management wizards fired people. With Microsoft releasing smart software on a weekly cadence, Mr. Pichai’s reward for a job well done makes headlines.

Timing is everything.

Stephen E Arnold, April 28, 2023

Google Innovates in Smart Software: A Reorganization

April 28, 2023

Vea4_thumb_thumb_thumbNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Someone once told me that it takes six months for staff to adjust to a reorganization. Is this a bit of folklore? Nope, I just think the six-month estimate is dead wrong. I think it takes longer, often a year or more, to integrate two units of the same company. How do I know? I watched Halliburton take over Nuclear Utility Services. Then I watched Bell + Howell take over the Courier Journal’s database publishing unit. Finally, I have quite direct memories of not being able to find much of anything when we last moved.

Now the Alphabet Google thing is addressing its marketing problem with a reorganization. I learned this by reading “Announcing Google DeepMind.” The write up by a founder of DeepMind says:

Sundar is announcing that DeepMind and the Brain team from Google Research will be joining forces as a single, focused unit called Google DeepMind. Combining our talents and efforts will accelerate our progress towards a world in which AI helps solve the biggest challenges facing humanity…

Not a word about catching up with Microsoft’s Bing ChatGPT marketing, not a peep about the fast cycle integration of orchestration software across discrete ChatGPT-type functions, and not a whisper about why Google is writing about what is to happen.

What’s my take on this Code Red or Red Alert operational status which required the presence of Messrs. Brin and Page?

  1. Google is demonstrating that a reorganization will address the Microsoft ChatGPT marketing. A reorganization and a close partnership among Sundar [Pichai], Jeff Dean, James Manyika, and Demis [Hassabis]? Okay.
  2. Google announced quantum supremacy, its protein folding breakthrough, and the game playing ability of its smart software. Noble achievements, but Microsoft is pushing smart Bing into keyboards. That’s one way to get Android and iPhone users’ attention. Will it work for Microsoft? Probably not, but it is something visible.
  3. Google is simply not reacting. A baby ecosystem is growing up around Midjourney. I learned about unprompt.ai. The service provides a search-and-click way to get the prompt behind an image. When I saw this service, I realized that ChatGPT may be morphing in ways that any simple Web search engine could implement. For Google, deploying the service would be trivial. The problem is that reorgs don’t pay much attention outside of the fox hole in which senior management prefers to dwell.

Net net: Google is too big and has too much money to concede. However, the ChatGPT innovation off-road vehicle is zipping along. Google is organizing the wizards who will ride on Google’s glitzy glamping rig. ChatGPT is hitting the rocks and crawling over obstacles. The Google machine is in a scenic observation point with a Pebble Beach-type of view. What’s the hurry? Google is moving… with a reorg.

Stephen E Arnold, April 28, 2023

The Google: A Digital Knife Twisted after Stabbing

April 27, 2023

This essay is the work of a real, still-living dinobaby. No smart software involved.

Brian Lee captures a personal opinion with the somewhat misleading title “Why Does Did Google Brain Exist?” To be fair, the typographic trope of striking out the “does” makes it clear that something changed in the GOOG’s smart software theme park. The lights on one thrill ride seem to have been turned off. Shadows flicker across other attractions, and it is not clear if maintenance is making repairs or if the shows are changing.

The article offers an analysis of the shotgun marriage of Google Brain with DeepMind. I heard the opening notes of “Dueling Banjos” from the 1972 film Deliverance. Instead of four city slickers floating on a raft, the theme accentuates the drama of similar but culturally different digital cruises on Alphabet’s river of cash.

I agree with most of the points presented in the article; for example, presenting “research” as a pretense for amping advertising revenue, the “hubris” of Google, and Google’s effort to be the big dog in smart software. Instead of offering snippets, I recommend that you read Mr. Lee’s essay.

I do want to quote what I think is the twisting of the knife after stabbing Googzilla in the heart. Mr. Lee shoves the knife deeper and pushes it side to side:

Despite Brain’s tremendous value creation from its early funding of open-ended ML research, it is becoming increasingly apparent to Google that it does not know how to capture that value. Google is of course not obligated to fund open-ended research, but it will nevertheless be a sad day for researchers and for the world if Google turns down its investments. Google is already a second-mover in many consumer and business product offerings and it seems like that’s the way it will be in ML research as well. I hope that Google at least does well at being second place. 

The message is clear: The train carrying the company’s top acts has stalled on the way to the big show. No longer getting top billing, the Sundar and Prabhakar Act is listed along with a trained pony act and a routine recycling Fibber McGee and Molly gags. Does the forced reorganization mean that Google has lost its star power?

Stephen E Arnold, April 27, 2023

Amusing Moments in Business Analysis

April 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I noted two interesting examples of business analysis crashing into reality. I like to visualize the misstep as a well-dressed professional slipping in a doggy deposit and creating a “smelly shoe in a big meeting problem.”

Let me explain the two examples.

The first is MBA baloney about business change or, as it is now jargonized, “transformation.” If you are a bit rusty on business baloney, a quick look at the so-far free “Organizational Change Management: What It Is & Why It’s Important” will bring you up to speed. But McKinsey, a blue chip consulting company with a core competency in opioid-related insights, published its version as “What Is Business Transformation?”

The write up says:

Research by McKinsey has long documented that enterprise-wide transformation is difficult, with less than a third of transformations reaching their goals to improve organizational performance and sustain these improvements over time.

I found this recommendation notable:

Many transformations are enabled by a central transformation office (TO), with the CTO at the helm.

As I recall, McKinsey allegedly worked two sides of the street; that is, getting paid to advise certain companies and government agencies about the same subject. I won’t go into details, but the advice proved problematic, and some connect McKinsey’s input with the firm’s efforts to change.

So, does McKinsey have a chief transformation officer? It appears that a Microsoft veteran occupies that position at the venerable, bluest of the blue chip consulting firms. However, this professional has two jobs according to the McKinsey blog. But I thought the chief transformation officer had to operate according to the precepts outlined in the “What Is Business Transformation?” article? Now the job is not just transformation; it is platform. What does platform mean?

Here’s the answer:

Jacky will accelerate this work by helping our firm further leverage technology in our client work and innovate new platforms to help client organizations transform and grow. She will also lead McKinsey’s internal technology team, which serves our more than 40,000 colleagues across 66 countries.

Does this mean that McKinsey’s chief transform officer has to do the change thing and manage the internal technology staff globally?

If I keep in mind the chilling factoid that one third of transformation efforts fail, McKinsey has to focus to make the transformation work. The problem, as I understand how McKinsey and other blue-chip experts operate, is that incentive plans for those leading practices allow the loose confederation of “partners” to hit their goals. In order to hit those goals, partners will have to generate money in ways that are known to work; for example, work for industry, work for the government, heck, work for any entity with the ability to pay.

Will McKinsey change under the firm and informed hand of a chief transformation officer? Not unless that “hand” writes specific incentive plans to cause change from the pocketbook outwards. I wonder whether McKinsey will be in the 33 percent failure set?

The second example comes from Mr. Murdoch’s Wall Street Journal. The essay (not real news in my opinion) appeared in the April 21, 2023 edition. The article’s title was “Justice Thomas and the Plague of Bad Reporting.” The author, according to my dead tree edition of the newspaper, is James Taranto, who is the Journal’s editorial features editor. What’s amazing about this essay is that it criticizes other “real” news outfits for their coverage of what appears to be some dog-doody moments for one of the Supreme Court justices. Pardon the pun, but I don’t have a dog in this fight.

What caught my attention is that the essay registers zero intellectual vibration of a sentient being in the wake of the Rupert Murdoch settlement of the Fox News and Dominion matter. Paying about a billion dollars for exactly the type of “real” news the WSJ essay addresses makes clear that more than the Foxy folks are intellectually dishonest. Amazing.

Net net: Two classy outfits, and each is happily, willingly writing baloney. Transformation without altering executive compensation plans and excoriating other publications for bad reporting illustrates the stuck dials on some organizations’ ethical compasses. I hate to repeat myself, but I have to end with: Amazing.

Stephen E Arnold, April 27, 2023

A Googley Rah Rah for Synthetic Data

April 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I want to keep this short. I know from experience that most people don’t think too much about synthetic data. The idea is important, but other concepts are important and no one really cares too much. When was the last time Euler’s Number came up at lunch?

A gaggle of Googlers extol the virtues of synthetic data in a 19-page ArXiv document called “Synthetic Data from Diffusion Models Improves ImageNet Classification.” The main idea is that data derived from “real” data are an expedient way to improve some indexing tasks.

I am not sure that a quote from the paper will do much to elucidate this facet of the generative model world. The paper includes charts, graphs, references to math, footnotes, a few email addresses, some pictures, wonky jargon, and this conclusion:

And we have shown improvements to ImageNet classification accuracy extend to large amounts of generated data, across a range of ResNet and Transformer-based models.

The specific portion of this quote which is quite important in my experience is the segment “across a range of ResNet and Transformer-based models.” Translating to Harrod’s Creek lingo, I think the wizards are saying, “Synthetic data is really good for text too.”

What’s bubbling beneath the surface of this archly-written paper? Here are my answers to this question:

  1. Synthetic data are a heck of a lot cheaper to generate for model training; therefore, embrace “good enough” and move forward. (Think profits and bonuses.)
  2. Synthetic data can be produced and updated more easily than fooling around with “real” data. Assembling training sets, testing, deploying, and reprocessing are time sucks. (There is more work to do than humanoids to do it when it comes to training, which is needed frequently for some applications.)
  3. Synthetic datasets can be smaller. Even baby Satan aka Sam Altman is down with synthetic data. Why? Elon could only buy so many Nvidia processing units. Thus, finding a way to train models with synthetic data works around a supply bottleneck.
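The economics in the list above come down to substituting generated samples for collected ones. As a hedged sketch (the function and its parameters are invented for illustration; the Google paper's pipeline uses diffusion-model images and classifier training, not this toy mixer), blending synthetic samples into a training pool might look like:

```python
import random

def build_training_set(real_samples, synthetic_samples, synthetic_fraction=0.5):
    """Toy illustration: mix real and generated samples so that
    synthetic_fraction of the final pool is synthetic."""
    # How many synthetic samples are needed to hit the target fraction.
    n_synth = int(len(real_samples) * synthetic_fraction / (1 - synthetic_fraction))
    n_synth = min(n_synth, len(synthetic_samples))  # can't draw more than exist
    mixed = list(real_samples) + random.sample(list(synthetic_samples), n_synth)
    random.shuffle(mixed)
    return mixed
```

The "better, faster, cheaper" pitch is that `synthetic_samples` costs compute rather than data collection, so the pool above can grow without new licensing, scraping, or labeling.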

My summary of the Googlers’ article is much briefer than the original: Better, faster, cheaper.

You don’t have to pick one. Just believe the Google. Who does not trust the Google? Why not buy synthetic data and ready-to-deploy models for your next AutoGPT product? Google’s approach worked like a champ for online ads. Therefore, Google’s approach will work for your smart software. Trust Google.

Stephen E Arnold, April 27, 2023
