Bogus Research Papers: They Are Here to Stay

November 27, 2023

This essay is the work of a dumb dinobaby. No smart software required.

“Science Is Littered with Zombie Studies. Here’s How to Stop Their Spread” is a Don Quixote-type write up. The good Don went to war against windmills. The windmills did not care. The people watching Don and his trusty sidekick did not care, and many found the sight of a person of stature trying to gore a mill somewhat amusing.

image

A young researcher meets the ghosts of fake, distorted, and bogus information. These artefacts of a loss of ethical fabric wrap themselves around the peer-reviewed research available in many libraries and in for-fee online databases. When was the last time you spotted a correction to a paper in an online database? Thanks, MSFT Copilot. After several tries I got ghosts in a library. Wow, that was a task.

Fake research, non-reproducible research, and intellectual cheating like the exemplars at Harvard’s ethics department and the carpetland of Stanford’s former president’s office seem commonplace today.

The Hill’s article states:

Just by citing a zombie publication, new research becomes infected: A single unreliable citation can threaten the reliability of the research that cites it, and that infection can cascade, spreading across hundreds of papers. A 2019 paper on childhood cancer, for example, cites 51 different retracted papers, making its research likely impossible to salvage. For the scientific record to be a record of the best available knowledge, we need to take a knowledge maintenance perspective on the scholarly literature.

The idea is interesting. It shares a bit with technical debt (the costs accrued by not fixing up older technology) and some of the GenX, GenY, and GenZ notions of “what’s right.” The article sidesteps a couple of thorny bushes on its way to the Promised Land of Integrity.

First, the academic paper is designed to accomplish several things. Above all, it is a demonstration of one’s knowledge value. “Hey, my peers said this paper was fit to publish,” some authors say. Yeah, as a former peer reviewer, I want to tell you that harsh criticism is not what the professional publisher wanted. These papers mean income. “Don’t screw up the cash flow” was the message I heard.

Second, the professional publisher certainly does not want to spend the resources (time and money) required to do crapola archeology. The focus of a professional publisher is to make money by publishing information to niche markets and charging as much money as possible for that information. Academic accuracy, ethics, and idealistic hand waving are not part of the Officers’ Meetings at some professional publisher off-sites. The focus is on cost reduction, market capture, and beating the well-known competitors at rival firms. The goal is not the best life partner; the objective is revenue and profit margin.

Third, the academic bureaucracy has to keep alive the mechanisms for brain stratification. Therefore, publishing something “groundbreaking” in a blog or putting the information in a TikTok simply does not count. In fact, despite the brilliance of the information, the vehicle is not accepted. No modern institution building its global reputation and its financial services revenue wants to accept a person unless that individual has been published in a peer reviewed journal of note. Therefore, no one wants to look at data or a paper. The attention is on the paper’s appearing in the peer reviewed journal.

Who pays for this knowledge garbage? The answer is [a] libraries who have to “get” the journals departments identify as significant, [b] the US government which funds quite a bit of baloney and hocus pocus research via grants, [c] the authors of the paper who have to pay for proofs, corrections, and goodness knows what else before the paper is enshrined in a peer-reviewed journal.

Who fixes the baloney? No one. The content is either accepted as accurate and never verified or the researcher cites that which is perceived as important. Who wants to criticize one’s doctoral advisor?

News flash: The prevalence and amount of crapola is unlikely to change. In fact, with the easy availability of smart software, the volume of bad scholarly information is likely to increase. Only the disinformation entities working for nation states hostile to the US of A will outpace US academics in the generation of bogus information.

Net net: The wise researcher will need to verify a lot. But that’s work. So there we are.

Stephen E Arnold, November 27, 2023

Another Xoogler and More Process Insights

November 23, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Google employs many people. Over the last 25 years, quite a few Xooglers (former Google employees) are out and about. I find the essays by the verbal Xooglers interesting. “Reflecting on 18 Years at Google” contains several intriguing comments. Let me highlight a handful of these. You will want to read the entire Hixie article to get the context for the snips I have selected.

The first point I underlined with blushing pink marker was:

I found it quite frustrating how teams would be legitimately actively pursuing ideas that would be good for the world, without prioritizing short-term Google interests, only to be met with cynicism in the court of public opinion.

image

Old timers share stories about the golden past in the high-technology of online advertising. Thanks, Copilot, don’t overdo the schmaltz.

The “Google as a victim” notion is one not often discussed — except by some Xooglers. I recall a comment made to me by a seasoned manager at another firm, “Yes, I am paranoid. They are out to get me.” That comment may apply to some professionals at Google.

How about this passage?

My mandate was to do the best thing for the web, as whatever was good for the web would be good for Google (I was explicitly told to ignore Google’s interests).

The oft-repeated idea is that Google cares about its users and similar truisms are part of what I call the Google mythology. Intentionally, in my opinion, Google cultivates the “doing good” theme as part of its effort to distract observers from the actual engineering intent of the company. (You love those Google ads, don’t you?)

Google’s creative process is captured in this statement:

We essentially operated like a startup, discovering what we were building more than designing it.

I am not sure if this is part of Google’s effort to capture the “spirit” of the old-timey days of Bell Laboratories or an accurate representation of how directionless Google’s methods became over the years. What people “did” is clearly dissociated from the advertising mechanisms onto which the oversized tires and chrome doo-dads were bolted as the vehicle aged.

And, finally, this statement:

It would require some shake-up at the top of the company, moving the center of power from the CFO’s office back to someone with a clear long-term vision for how to use Google’s extensive resources to deliver value to users.

What happened to the ideas of doing good and exploratory innovation?

Net net: Xooglers pine for the days of the digital gold rush. Googlers may not be aware of what the company is and does. That may be a good thing.

Stephen E Arnold, November 23, 2023

OpenAI: What about Uncertainty and Google DeepMind?

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

A large number of write ups about Microsoft and its response to the OpenAI management move populate my inbox this morning (Monday, November 20, 2023).

To give you a sense of the number of poohbahs, mavens, and “real” journalists covering Microsoft’s hiring of Sam (AI-Man) Altman, I offer this screen shot of Techmeme.com taken at 11:00 am US Eastern time:

image

A single screenshot cannot do justice to the digital bloviating on this subject as well as related matters.

I did a quick scan because I simply don’t have the time at age 79 to read every item in this single headline service. Therefore, I admit that others may have thought about the impact of the Steve Jobs-like termination, the revolt of some AI wizards, and Microsoft’s creating a new “company” and hiring Sam AI-Man and a pride of his cohorts in the span of 72 hours (give or take time for biobreaks).

In this short essay, I want to hypothesize about how the news has been received by that merry band of online advertising professionals.

To begin, I want to suggest that the turmoil about who is on first at OpenAI sent a low voltage signal through the collective body of the Google. Frisson resulted. Uncertainty and opportunity appeared together like the beloved Scylla and Charybdis, the old pals of Ulysses. The Google found its right and left Brainiac hemispheres considering that OpenAI would experience a grave setback, thus clearing a path for Googzilla alone. Then one of the Brainiac hemispheres reconsidered and perceived a grave threat from the split. In short, the Google tipped into its zone of uncertainty.

image

A group of online advertising experts meet to consider the news that Microsoft has hired Sam Altman. The group looks unhappy. Uncertainty is an unpleasant factor in some business decisions. Thanks Microsoft Copilot, you captured the spirit of how some Silicon Valley wizards are reacting to the OpenAI turmoil because Microsoft used the OpenAI termination of Sam Altman as a way to gain the upper hand in the cloud and enterprise app AI sector.

Then the matter appeared to shift back to the pre-termination announcement. The co-founder of OpenAI gained more information about the number of OpenAI employees who were planning to quit or, even worse, start posting on Instagram, WhatsApp, and TikTok (X.com is no longer considered the go-to place by the in crowd).

The most interesting development was not that Sam AI-Man would return to the welcoming arms of OpenAI. No, Sam AI-Man and another senior executive were going to hook up with the geniuses of Redmond. A new company would be formed with Sam AI-Man in charge.

As these actions unfolded, the Googlers sank under a heavy cloud of uncertainty. What if the Softies could use Google’s own open source methods, integrate rumored Microsoft-developed AI capabilities, and make good on Sam AI-Man’s vision of an AI application store?

The Googlers found themselves reading every “real news” item about the trajectory of Sam AI-Man and Microsoft’s new AI unit. The uncertainty has morphed into another January 2023 Davos moment. Here’s my take as of 2:30 pm US Eastern, November 20, 2023:

  1. The Google faces a significant threat when it comes to enterprise AI apps. Microsoft has a lock on law firms, the government, and a number of industry sectors. Google has a presence, but when it comes to go-to apps, Microsoft is the Big Dog. More and better AI raises the specter of Microsoft putting an effective laser defense behind its existing enterprise moat.
  2. Microsoft can push its AI functionality as the Azure difference. Furthermore, whether Google or Amazon for that matter assert their cloud AI is better, Microsoft can argue, “We’re better because we have Sam AI-Man.” That is a compelling argument for government and enterprise customers who cannot imagine work without Excel and PowerPoint. Put more AI in those apps, and existing customers will resist blandishments from other cloud providers.
  3. Google now faces an interesting problem: Its own open source code could be converted into a death ray, enhanced by Sam AI-Man, and directed at the Google. The irony of Googzilla having its left claw vaporized by its own technology is going to be more painful than Satya Nadella rolling out another Davos “we’re doing AI” announcement.

Net net: The OpenAI machinations are interesting to many companies. To the Google, the OpenAI event and the Microsoft response is like an unsuspecting person getting zapped by Nikola Tesla’s coil. Google’s mastery of high school science club management techniques will now dig into the heart of its DeepMind.

Stephen E Arnold, November 20, 2023

Google Pulls Out a Rhetorical Method to Try to Win the AI Spoils

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

In high school in 1958, our debate team coach yapped about “framing.” The idea was new to me, and Kenneth Camp pounded it into our debate team’s collective “head” for the four years of my high school tenure. Not surprisingly, when I read “Google DeepMind Wants to Define What Counts As Artificial General Intelligence,” I jumped back in time 65 years (!) to Mr. Camp’s explanation of framing and how one can control the course of a debate with the technique.

Google should not have to use a rhetorical trick to make its case as the quantum wizard of online advertising and universal greatness. With its search and retrieval system, the company can boost, shape, and refine any message it wants. If those methods fall short, the company can slap on a “filter” or “change its rules” and deprecate certain Web sites and their messages.

But Google values academia, even if the university is one that welcomed a certain Jeffrey Epstein into its fold. (Do you remember the remarkable Jeffrey Epstein? Some of those whom he touched do, I believe.) The estimable Google is the subject of the referenced article in the MIT-linked Technology Review.

From my point of view, the big idea in the write up is, and I quote:

To come up with the new definition, the Google DeepMind team started with prominent existing definitions of AGI and drew out what they believe to be their essential common features. The team also outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals). They note that no level beyond emerging AGI has been achieved.

Shades of high school debate practice and the chestnuts scattered about the rhetorical camp fire as John Schunk, Jimmy Bond, and a few others (including the young dinobaby me) learned how one can set up a frame, populate the frame with logic and facts supporting the frame, and then point out during rebuttal that our esteemed opponents were not able to dent our well formed argumentative frame.

Is Google the optimal source for a definition of artificial general intelligence, something which does not yet exist? Is Google’s definition more useful than a science fiction writer’s or a scene from a Hollywood film?

Even the trusted online source points out:

One question the researchers don’t address in their discussion of _what_ AGI is, is _why_ we should build it. Some computer scientists, such as Timnit Gebru, founder of the Distributed AI Research Institute, have argued that the whole endeavor is weird. In a talk in April on what she sees as the false (even dangerous) promise of utopia through AGI, Gebru noted that the hypothetical technology “sounds like an unscoped system with the apparent goal of trying to do everything for everyone under any environment.” Most engineering projects have well-scoped goals. The mission to build AGI does not. Even Google DeepMind’s definitions allow for AGI that is indefinitely broad and indefinitely smart. “Don’t attempt to build a god,” Gebru said.

I am certain it is an oversight, but the telling comment comes from an individual who may have spoken out about Google’s systems and methods for smart software.

image

Mr. Camp, the high school debate coach, explains how a rhetorical trope can gut even those brilliant debaters from other universities. (Yes, Dartmouth, I am still thinking of you.) Google must have had a “coach” skilled in the power of framing. The company is making a bold move to define that which does not yet exist and something whose functionality is unknown. Such is the expertise of the Google. Thanks, Bing. I find your use of people of color interesting. Is this a pre-Sam ouster or a post-Sam ouster function?

What do we learn from the write up? In my view of the AI landscape, we are given some insight into Google’s belief that its rhetorical trope packaged as content marketing within an academic-type publication will lend credence to the company’s push to generate more advertising revenue. You may ask, “But won’t Google make oodles of money from smart software?” I concede that it will. However, the big bucks for the Google come from those willing to pay for eyeballs. And that, dear reader, translates to advertising.

Stephen E Arnold, November 20, 2023

Adobe: Delivers Real Fake War Images

November 17, 2023

This essay is the work of a dumb humanoid. No smart software required.

Gee, why are we not surprised? Crikey reveals, “Adobe Is Selling Fake AI Images of the War in Israel-Gaza.” While Adobe did not set out to perpetuate fake news about the war, neither did it try very hard to prevent it. Reporter Cam Wilson writes:

“As part of the company’s embrace of generative artificial intelligence (AI), Adobe allows people to upload and sell AI images as part of its stock image subscription service, Adobe Stock. Adobe requires submitters to disclose whether they were generated with AI and clearly marks the image within its platform as ‘generated with AI’. Beyond this requirement, the guidelines for submission are the same as any other image, including prohibiting illegal or infringing content. People searching Adobe Stock are shown a blend of real and AI-generated images. Like ‘real’ stock images, some are clearly staged, whereas others can seem like authentic, unstaged photography. This is true of Adobe Stock’s collection of images for searches relating to Israel, Palestine, Gaza and Hamas. For example, the first image shown when searching for Palestine is a photorealistic image of a missile attack on a cityscape titled ‘Conflict between Israel and Palestine generative AI’. Other images show protests, on-the-ground conflict and even children running away from bomb blasts — all of which aren’t real.”

Yet these images are circulating online, adding to the existing swirl of misinformation. Even several small news outlets have used them with no disclaimers attached. They might not even realize the pictures are fake.

Or perhaps they do. Wilson consulted RMIT’s T.J. Thomson, who has been researching the use of AI-generated images. He reports that, while newsrooms are concerned about misinformation, they are sorely tempted by the cost-savings of using generative AI instead of on-the-ground photographers. One supposes photographer safety might also be a concern. Is there any stuffing this cat into the bag, or must we resign ourselves to distrusting any images we see online?

A loss suffered in the war is real. Need an image of this?

Cynthia Murrell, November 17, 2023

Buy Google Traffic: Nah, Paying May Not Work

November 16, 2023

This essay is the work of a dumb humanoid. No smart software required.

Tucked into a write up about the less than public trial of the Google was an interesting factoid. The source of the item was “More from the US v Google Trial: Vertical Search, Pre-Installs and the Case of Firefox / Yahoo.” Here’s the snippet:

Expedia execs also testified about the cost of ads and how increases had no impact on search results. On October 19, Expedia’s former chief operating officer, Jeff Hurst, told the court the company’s ad fees increased tenfold from $21 million in 2015 to $290 million in 2019. And yet, Expedia’s traffic from Google did not increase. The implication was that this was due to direct competition from Google itself. Hurst pointed out that Google began sharing its own flight and hotel data in search results in that period, according to the Seattle Times.

image

“Yes, sir, you can buy a ticket and enjoy a ticket to our entertainment,” says the theater owner. The customer asks, “Is the theater in good repair?” The ticket seller replies, “Of course, you get your money’s worth at our establishment. Next.” Thanks, Microsoft Bing. It took several tries before I gave up.

I am a dinobaby, and I am, by definition, hopelessly out of it. However, I interpret this passage in this way:

  1. Despite protestations about the Google algorithm’s objectivity, Google has knobs and dials it can use to cause the “objective” algorithm to be just a teenie weenie less objective. Is this a surprise? Not to me. Who builds a system without a mechanism for controlling what it does? My favorite example of this steering involves the original FirstGov.gov search system circa 2000. After Mr. Clinton’s administration departed, the incoming administration, which included a former Halliburton executive, wanted a certain Web page result to appear when certain terms were searched. No problemo. Why? Who builds a system one cannot control? Not me. My hunch is that Google may have a similar affection for knobs and dials.
  2. Expedia learned that buying advertising from a competitor (Google) was expensive and then got more expensive. The jump from $21 million to $290 million is modest from the point of view of some technology feudalists. To others the increase is stunning.
  3. Paying more money did not result in an increase in clicks or traffic. Again, I was not surprised. What caught my attention is that it has taken decades for others to figure out how the digital highwaymen came riding like a wolf on the fold. Instead of being bedecked with silver and gold, these actors wore cheerful kindergarten colors. Oh, those colors are childish, but those wearing them carried away the silver and gold, it seems.

Net net: Why is this US v Google trial not more public? Why so many documents withheld? Why is redaction the best billing tactic of 2023? So many questions that this dinobaby cannot answer. I want to go for a ride in the Brin-A-Loon too. I am a simple dinobaby.

Stephen E Arnold, November 16, 2023

An Odd Couple Sharing a Soda at a Holiday Data Lake

November 16, 2023

What happens when love strikes the senior managers of the technology feudal lords? I will tell you what happens — Love happens. The proof appears in “Microsoft and Google Join Forces on OneTable, an Open-Source Solution for Data Lake Challenges.” Yes, the lakes around Redmond can be a challenge. For those living near Googzilla’s stomping grounds, the risk is that a rising sea level will nuke the outdoor recreation areas and flood the parking lots.

But any speed dating between two techno feudalists is news. The “real news” outfit Venture Beat reports:

In a new open-source partnership development effort announced today, Microsoft is joining with Google and Onehouse in supporting the OneTable project, which could reshape the cloud data lake landscape for years to come.

And what does “reshape” mean to these outfits? Probably nothing more than making sure that Googzilla and Mothra become the suppliers to those who want to vacation at the data lake. Come to think of it, the concessions might be attractive as well.

image

Googzilla says to Mothra-Soft, a beast living in Mercer Island, “I know you live on the lake. It’s a swell nesting place. I think we should hook up and cooperate. We can share the money from merged data transfers the way you and I —  you good looking Lepidoptera — are sharing this malted milk. Let’s do more together if you know what I mean.” The delightful Mothra-Soft croons, “I thought you would wait until our high school reunion to ask, big boy. Let’s find a nice, moist, uncrowded place to consummate our open source deal, handsome.” Thanks, Microsoft Bing. You did a great job of depicting a senior manager from the company that developed Bob, the revolutionary interface.

The article continues:

The ability to enable interoperability across formats is critical for Google as it expands the availability of its BigQuery Omni data analytics technology. Kazmaier said that Omni basically extends BigQuery to AWS and Microsoft Azure and it’s a service that has been growing rapidly. As organizations look to do data processing and analytics across clouds there can be different formats and a frequent question that is asked is how can the data landscape be interconnected and how can potential fragmentation be stopped.

Is this alleged linkage important? Yeah, it is. Data lakes are great places to park AI training data. Imagine the intelligence one can glean monitoring inflows and outflows of bits. To make the idea more interesting, think in terms of the metadata. Exciting, because open source software is really for the little guys too.
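The interoperability idea in the quoted passage can be illustrated with a toy sketch. Everything below is hypothetical: the real OneTable project translates metadata among open table formats (the data files themselves stay put), but the format names and field names here are invented purely for illustration.

```python
# Hypothetical sketch of lake-table interoperability: rewrite only the
# metadata layer so a table written in one format is readable in another.
# "format_a" and "format_b" and their fields are invented for this example.

def to_common(meta: dict, source_format: str) -> dict:
    """Normalize format-specific metadata into a common representation."""
    if source_format == "format_a":
        return {"schema": meta["cols"], "files": meta["data_files"]}
    if source_format == "format_b":
        return {"schema": meta["schema"], "files": meta["manifest"]}
    raise ValueError(f"unknown format: {source_format}")

def from_common(common: dict, target_format: str) -> dict:
    """Emit metadata in the target format without copying any data files."""
    if target_format == "format_a":
        return {"cols": common["schema"], "data_files": common["files"]}
    if target_format == "format_b":
        return {"schema": common["schema"], "manifest": common["files"]}
    raise ValueError(f"unknown format: {target_format}")

# A table cataloged as format_a becomes visible to a format_b engine:
table_a = {"cols": ["id", "price"], "data_files": ["part-000.parquet"]}
table_b = from_common(to_common(table_a, "format_a"), "format_b")
```

The design point the quote is making survives even in this cartoon: because only the thin metadata layer is translated, the heavy data files never move, which is what makes cross-cloud, cross-engine access cheap enough to be interesting.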

Stephen E Arnold, November 16, 2023

SolarWinds: Huffing and Puffing in a Hot Wind on a Sunny Day

November 16, 2023

This essay is the work of a dumb humanoid. No smart software required.

Remember the SolarWinds’ misstep? Time has a way of deleting memories of security kerfuffles. Who wants to recall ransomware, loss of data, and the general embarrassment of getting publicity for the failure of existing security systems? Not too many. A few victims let off steam by blaming their cyber vendors. Others — well, one — relieve their frustrations by emulating a crazed pit bull chasing an M1 A2 battle tank. The pit bull learns that the M1 A2 is not going to stop and wait for the pit bull to stop barking and snarling. The tank grinds forward, possibly over Solar (an unlikely name for a pit bull, in my opinion).

image

The slick business professional speaks to a group of government workers gathered outside on the sidewalk of 100 F Street NW. The talker is semi-shouting, “Your agency is incompetent. You are unqualified. My company knows how to manage our business, security, and personnel affairs.” I am confident this positive talk will win the hearts and minds of the GS-13s listening. Thanks, Microsoft Bing. You obviously have some experience with government behaviors.

I read “SolarWinds Says SEC Sucks: Watchdog Lacks Competence to Regulate Cybersecurity.” The headline attributes the statement to a company. My hunch is that the criticism of the SEC is likely someone other than the firm’s legal counsel, the firm’s CFO, or its PR team.

The main idea, of course, is that SolarWinds should not be sued by the US Securities & Exchange Commission. The SEC does have special agents, but no criminal authority. However, like many US government agencies and their Offices of Inspector General, the investigators can make life interesting for those in whom the US government agency has an interest. (Tip: I will now offer an insider tip. Avoid getting crossways with a US government agency. The people may change but the “desks” persist through time along with documentation of actions. The business processes in the US government mean that people and organizations of interest can be subject to scrutiny. Like the poem says, “Time cannot wither nor custom spoil the investigators’ persistence.”)

The write up presents information obtained from a public blog post by the victim of a cyber incident. I call the incident a misstep because I am not sure how many organizations, software systems, people, and data elements were negatively whacked by the bad actors. In general, the idea is that a bad actor should not be able to compromise commercial outfits.

The write up reports:

SolarWinds has come out guns blazing to defend itself following the US Securities and Exchange Commission’s announcement that it will be suing both the IT software maker and its CISO over the 2020 SUNBURST cyberattack.

The vendor said the SEC’s lawsuit is "fundamentally flawed," both from a legal and factual perspective, and that it will be defending the charges "vigorously." A lengthy blog post, published on Wednesday, dissected some of the SEC’s allegations, which it evidently believes to be false. The first of which was that SolarWinds lacked adequate security controls before the SUNBURST attack took place.

The right to criticize is baked into the ethos of the US of A. The cited article includes this quote from the SolarWinds’ statement about the US Securities & Exchange Commission:

It later went on to accuse the regulator of overreaching and "twisting the facts" in a bid to expand its regulatory footprint, as well as claiming the body "lacks the authority or competence to regulate public companies’ cybersecurity." The SEC’s cybersecurity-related capabilities were again questioned when SolarWinds addressed the allegations that it didn’t follow the NIST Cybersecurity Framework (CSF) at the time of the attack.

SolarWinds feels strongly about the SEC and its expertise. I have several observations to offer:

  1. Annoying regulators and investigators is not perceived in some government agencies as a smooth move
  2. SolarWinds may find that its strong words may be recast in the form of questions in the legal forum which appears to be roaring down the rails
  3. The SolarWinds cyber security professionals on staff and the cyber security vendors whose super duper bad actor stoppers failed now have an opportunity to explain their view of what I call a “misstep.”

Do I have an opinion? Sure. You have read it in my blog posts or heard me say it in my law enforcement lectures, most recently at the Massachusetts / New York Association of Crime Analysts’ meeting in Boston the first week of October 2023.

Cyber security is easier to describe in marketing collateral than to do in real life. The SolarWinds’ misstep is an interesting case example of reality being different from the expectation.

Stephen E Arnold, November 16, 2023

Google Solves Fake Information with the Tom Sawyer Method

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

How does one deliver “responsible AI”? Easy. Shift the work to those who use a system built on smart software. I call the approach the “Tom Sawyer Method.” The idea is that the fictional character (Tom) convinced lesser lights to paint the fence for him. Samuel Clemens (the guy who invested in the typewriter) said:

“Work consists of whatever a body is obliged to do. Play consists of whatever a body is not obliged to do.”

Thus the information in “Our Approach to Responsible AI Innovation” is play. The work is for those who cooperate to do the real work. The moral is, “We learn more about Google than we do about responsible AI innovation.”

image

The young entrepreneur says, “You fellows chop the wood.  I will go and sell it to one of the neighbors. Do a good job. Once you finish you can deliver the wood and I will give you your share of the money. How’s that sound?” The friends are eager to assist their pal. Thanks Microsoft Bing. I was surprised that you provided people of color when I asked for “young people chopping wood.” Interesting? I think so.

The Google write up from a trio of wizard vice presidents at the online advertising company says:

…we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material.

Yep, “require.” But what I want to do is to translate Google speak into something dinobabies understand. Here’s my translation:

  1. Google cannot determine what content is synthetic and what is not; therefore, the person using our smart software has to tell us, “Hey, Google, this is fake.”
  2. Google does not want to increase headcount and costs related to synthetic content detection and removal. Therefore, the work is moved via the Tom Sawyer Method to YouTube “creators” or fence painters. Google gets the benefit of reduced costs, hopefully reduced liability, and “play” like Foosball.
  3. Google can look at user provided metadata and possibly other data in the firm’s modest repository and determine with acceptable probability that a content object and a creator should be removed, penalized, or otherwise punished by a suitable action; for example, not allowing a violator to buy Google merchandise. (Buying Google AdWords is okay, however.)
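The translated points above boil down to a self-reporting scheme: the platform trusts a creator-supplied flag rather than detecting synthetic content itself. A minimal sketch of that logic follows; every field name here is invented for illustration and is not Google's actual API.

```python
# Hypothetical sketch of a self-disclosure scheme: the platform relies on
# a creator-supplied flag instead of running its own synthetic-content
# detection. Field names are invented for this example.

def label_upload(upload: dict) -> dict:
    """Attach a disclosure label based only on what the creator reported."""
    labeled = dict(upload)
    if upload.get("creator_says_synthetic"):
        labeled["label"] = "altered or synthetic content"
    else:
        # No detection happens here: an undisclosed synthetic video
        # simply passes through unlabeled. That is the Tom Sawyer part.
        labeled["label"] = None
    return labeled

honest = label_upload({"title": "demo", "creator_says_synthetic": True})
evasive = label_upload({"title": "demo", "creator_says_synthetic": False})
```

The sketch makes the cost shifting visible: all the labeling work, and all the risk of getting it wrong, sits with the person filling in the flag, not with the platform running the code.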

The write up concludes with this bold statement: “The AI transformation is at our doorstep.” Inspiring. Now wood choppers, you can carry the firewood into the den and stack it by the fireplace in which we burn the commission checks the offenders were to receive prior to their violating the “requirements.”

Ah, Google, such a brilliant source of management inspiration: A novel written in 1876. I did not know that such old information was in the Google index. I mean DejaVu is consigned to the dust bin. Why not Mark Twain’s writings?

Stephen E Arnold, November 14, 2023

Pundit Recounts Amazon Sins and Their Fixes

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

Sci-fi author and Pluralistic blogger Cory Doctorow is not a fan of Amazon. In fact, he declares, “Amazon Is a Ripoff.” His article references several sources to support this assertion, beginning with Lina Khan’s 2017 cautionary paper published in the Yale Law Journal. Now head of the FTC, Khan is bringing her expertise to bear in a lawsuit against the monopoly. We are reminded how tech companies have been able to get away with monopolistic practices thus far:

“There’s a cheat-code in US antitrust law, one that’s been increasingly used since the Reagan administration, when the ‘consumer welfare’ theory (‘monopolies are fine, so long as they lower prices’) shoved aside the long-established idea that antitrust law existed to prevent monopolies from forming at all. The idea that a company can do anything to create or perpetuate a monopoly so long as its prices go down and/or its quality goes up is directly to blame for the rise of Big Tech.”

But what, exactly, is shady about Amazon’s practices? From confusing consumers through complexity and gouging them with “drip pricing” to holding vendors over a barrel, Doctorow describes the company’s sins in this long, specific, and heavily linked diatribe. He then pulls three rules to hold Amazon accountable from a paper by researchers Tim O’Reilly, Ilan Strauss, and Mariana Mazzucato: Force the company to halt its most deceptive practices, mandate interoperability between it and comparison shopping sites, and create legal safe harbors for the scraping that underpins such interoperability. The invective concludes:

“I was struck by how much convergence there is among different kinds of practitioners, working against the digital sins of very different kinds of businesses. From the CFPB using mandates and privacy rules to fight bank rip-offs to behavioral economists thinking about Amazon’s manipulative search results. This kind of convergence is exciting as hell. After years of pretending that Big Tech was good for ‘consumers,’ we’ve not only woken up to how destructive these companies are, but we’re also all increasingly in accord about what to do about it. Hot damn!”

He sounds so optimistic. Are big changes ahead? Don’t forget to sign up for Prime.

Cynthia Murrell, November 14, 2023
