Cognitive Blind Spot 1: Can You Identify Synthetic Data? Better Learn.

October 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It has been a killer with the back-to-back trips to Europe and then to the intellectual hub of old-fashioned America. In France, I visited a location allegedly the office of a company which “owns” the domain rrrrrrrrrrr.com. No luck. Fake address. I then visited a semi-sensitive area in Paris, walking around in the confused fog only a 78-year-old can generate. My goal was to spot a special type of surveillance camera designed to provide data to a smart software system. The idea is that the images can be monitored through time so a vehicle making frequent passes of a structure can be flagged, its number tag read, and a bit of thought given to answer the question, “Why?” I visited with a friend and big brain who was one of the technical keystones of an advanced search system. He gave me his most recent book and I paid for my Orangina. Exciting.
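The flagging logic described above — watch plate sightings over time and flag vehicles that pass too often — can be sketched in a few lines. This is a hypothetical illustration; the function name, time window, and threshold are my own assumptions, not details of any real surveillance system:

```python
from collections import defaultdict

def flag_frequent_passes(sightings, window_secs=3600, threshold=3):
    """Flag plates seen `threshold` or more times within any `window_secs` span.

    `sightings` is a list of (timestamp_seconds, plate) tuples as a
    camera's plate reader might emit them.
    """
    by_plate = defaultdict(list)
    for ts, plate in sightings:
        by_plate[plate].append(ts)

    flagged = set()
    for plate, times in by_plate.items():
        times.sort()
        # Sliding window: if the i-th and (i + threshold - 1)-th sightings
        # fall within the window, the plate made `threshold` passes in it.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window_secs:
                flagged.add(plate)
                break
    return flagged
```

A plate seen three times in an hour gets flagged; a plate seen twice a day apart does not. The real systems presumably layer plate recognition, databases, and analyst review on top of something like this counting step.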


One executive tells his boss, “Sir, our team of sophisticated experts reviewed these documents. The documents passed scrutiny.” One of the “smartest people in the room” asks, “Where are we going for lunch today?” Thanks, MidJourney. You do understand executive stereotypes, don’t you?

On the flights, I did some thinking about synthetic data. I am not sure that most people can provide a definition which will embrace Google’s efforts in the money-saving land of synthetic data. I don’t think too many people know about Charlie Javice’s use of synthetic data to whip up JPMC’s enthusiasm for her company Frank Financial. I don’t think most people understand that when one types a phrase into the Twitch AI Jesus, the software will output a video and mostly crazy talk along with some Christian lingo.

The purpose of this short blog post is to present an example of synthetic data and conclude by revisiting the question, “Can You Identify Synthetic Data?” The article I want to use as a hook for this essay is from Fortune Magazine. I love that name, and I think the wolves of Wall Street find it euphonious as well. Here’s the title: “Delta Is Fourth Major U.S. Airline to Find Fake Jet Aircraft Engine Parts with Forged Airworthiness Documents from U.K. Company.”

The write up states:

Delta Air Lines Inc. has discovered unapproved components in “a small number” of its jet aircraft engines, becoming the latest carrier and fourth major US airline to disclose the use of fake parts.  The suspect components — which Delta declined to identify — were found on an unspecified number of its engines, a company spokesman said Monday. Those engines account for less than 1% of the more than 2,100 power plants on its mainline fleet, the spokesman said. 

Okay, bad parts can fail. If the failure is in a critical component of a jet engine, the aircraft could — note that I am using the word could — experience a catastrophic failure. Translating catastrophic into more colloquial lingo, the sentence means catch fire and crash or something slightly less terrible; namely, catch fire, explode, eject metal shards into the tail assembly, or make a loud noise and emit smoke. Exciting, just not terminal.

I don’t want to get into how the synthetic or fake data made its way through the UK company, the UK bureaucracy, the Delta procurement process, and into the hands of the mechanics working in the US or offshore. The fake data did elude scrutiny for some reason. With money being of paramount importance, my hunch is that saving some money played a role.

If organizations cannot spot fake data when it relates to a physical and mission-critical component, how will organizations deal with fake data generated by smart software? The smart software can get it wrong because an engineer-programmer screwed up his or her math, or because the complex web of algorithms generates unanticipated behaviors from dependencies no one knew to check and validate.

What happens when a computer, which many people believe is “always” more right than a human, says, “Here’s the answer”? Many humans will skip the hard work because they are in a hurry, have no appetite for grunt work, or are scheduled by a Microsoft calendar to do something else when the quality assurance testing is supposed to take place.

Let’s go back to the question in the title of the blog post, “Can You Identify Synthetic Data?”

I don’t want to forget this part of the title, “Better learn.”

JPMC paid out more than $100 million in November 2022 because some of the smartest guys in the room weren’t that smart. But get this. JPMC is a big, rich bank. People who could die because of synthetic data are a different kettle of fish. Yeah, that’s what I thought about as I flew Delta back to the US from Paris. At the time, I thought Delta had not fallen prey to the scam.

I was wrong. Hence, I “better learn” myself.

Stephen E Arnold, October 5, 2023

A Pivotal Moment in Management Consulting

October 4, 2023

The practice of selling “management consulting” has undergone a handful of tectonic shifts since Edwin Booz convinced Sears, the “department” store outfit, to hire him. (Yes, I am aware I am cherry picking, but this is a blog post, not a for-fee report.)

The first was the ability of a consultant to move around quickly. Trains and Chicago became synonymous with management razzle dazzle. The center of gravity shifted to New York City because consulting thrives where there are big companies. The second was the institutionalization of the MBA as a certification of a 23-year-old’s expertise. The third was the “invention” of former consultants for hire. The innovator in this business was Gerson Lehrman Group, but there are many imitators who hire former blue-chip types and resell them without the fee baggage of the McKinsey & Co. type outfits. And now the fourth earthquake is rattling carpetland and the windows in corner offices (even if these offices are in an expensive home in Wyoming).


A centaur and a cyborg working on a client report. Thanks, MidJourney. Nice hair style on the cyborg.

Now we have the era of smart software or what I prefer to call the era of hyperbole about semi-smart, semi-automated systems which output “information.” I noted this write up from the estimable Harvard University. Yes, this is the outfit which appointed an expert in ethics to head up the outfit’s ethics department. The same ethics expert allegedly made up data for peer-reviewed publications. Yep, that Harvard University.

“Navigating the Jagged Technological Frontier” is an essay crafted by the D^3 faculty. None of this single-author stuff in an institution where fabrication of research is a stand-up comic’s joke. “What’s the most terrifying word for a Harvard ethicist?” Give up? “Ethics.” Ho ho ho.

What are the highlights from this esteemed group of researchers, thinkers, and analysts? I quote:

  • For tasks within the AI frontier, ChatGPT-4 significantly increased performance, boosting speed by over 25%, human-rated performance by over 40%, and task completion by over 12%.
  • The study introduces the concept of a “jagged technological frontier,” where AI excels in some tasks but falls short in others.
  • Two distinct patterns of AI use emerged: “Centaurs,” who divided and delegated tasks between themselves and the AI, and “Cyborgs,” who integrated their workflow with the AI.

Translation: We need fewer MBAs and old timers who are not able to maximize billability with smart or semi-smart software. Keep in mind that some consultants view clients with disdain. If these folks were smart, they would not be relying on 20-somethings to bail them out and provide “wisdom.”

This dinobaby is glad he is old.

Stephen E Arnold, October 4, 2023

Kill Off the Dinobabies and Get Younger, Bean Counter-Pleasing Workers. Sound Familiar?

September 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Google, Meta, Amazon Hiring low-Paid H1B Workers after US Layoffs: Report.” Is it accurate? Who knows? In the midst of a writers’ strike in Hollywood, I thought immediately about endless sequels to films like “Batman 3: Deleting Robin” and “Halloween 8: The Night of the Dinobaby Purge.”

The write up reports a management method similar to those implemented when the high school science club was told that a school field trip to the morgue was turned down. The school’s boiler suffered a mysterious malfunction and school was dismissed for a day. Heh heh heh.

I noted this passage:

Even as global tech giants are carrying out mass layoffs, several top Silicon Valley companies are reportedly looking to hire lower-paid tech workers from foreign countries. Google, Meta, Amazon, Microsoft, Zoom, Salesforce and Palantir have applied for thousands of H1B worker visas this year…

I heard a rumor that IBM used a similar technique. Would Big Blue replace older, highly paid employees with GenX professionals not born in the US? Of course not! The term “dinobabies” was a product of spontaneous innovation, not from a personnel professional located in a suburb of New York City. Happy bean counters indeed. Saving money with good enough work. I love the phrase “minimal viable product” for “minimally viable” work environments.

There are so many ways to allow people to find their futures elsewhere. Shelf stockers are in short supply, I hear.

Stephen E Arnold, September 21, 2023

Profits Over Promises: IBM Sells Facial Recognition Tech to British Government

September 18, 2023

Just three years after it swore off any involvement in facial recognition software, IBM has made an about-face. The Verge reports, “IBM Promised to Back Off Facial Recognition—Then it Signed a $69.8 Million Contract to Provide It.” Amid the momentous Black Lives Matter protests of 2020, IBM’s Arvind Krishna wrote a letter to Congress vowing to no longer supply “general purpose” facial recognition tech. However, it appears that is exactly what the company includes within the biometrics platform it just sold to the British government. Reporter Mark Wilding writes:

“The platform will allow photos of individuals to be matched against images stored on a database — what is sometimes known as a ‘one-to-many’ matching system. In September 2020, IBM described such ‘one-to-many’ matching systems as ‘the type of facial recognition technology most likely to be used for mass surveillance, racial profiling, or other violations of human rights.'”

In the face of this lucrative contract, IBM has changed its tune. It now insists one-to-many matching tech does not count as “general purpose” since the intention here is to use it within a narrow scope. But scopes have a nasty habit of widening to fit the available tech. The write-up continues:

“Matt Mahmoudi, PhD, tech researcher at Amnesty International, said: ‘The research across the globe is clear; there is no application of one-to-many facial recognition that is compatible with human rights law, and companies — including IBM — must therefore cease its sale, and honor their earlier statements to sunset these tools, even and especially in the context of law and immigration enforcement where the rights implications are compounding.’ Police use of facial recognition has been linked to wrongful arrests in the US and has been challenged in the UK courts. In 2019, an independent report on the London Metropolitan Police Service’s use of live facial recognition found there was no ‘explicit legal basis’ for the force’s use of the technology and raised concerns that it may have breached human rights law. In August of the following year, the UK’s Court of Appeal ruled that South Wales Police’s use of facial recognition technology breached privacy rights and broke equality laws.”

Wilding notes other companies similarly promised to renounce facial recognition technology in 2020, including Amazon and Microsoft. Will governments also be able to entice them into breaking their vows with tantalizing offers?

Cynthia Murrell, September 18, 2023

An AI to Help Law Firms Craft More Effective Invoices

September 14, 2023

Think money. That answers many AI questions.

Why are big law firms embracing AI? For better understanding of the law? Nay. To help clients? No. For better writing? Nope. What then? For more fruitful billing, of course. We learn from Above The Law, “Law Firms Struggling with Arcane Billing Guidelines Can Look to AI for Relief.” According to writer and litigator Joe Patrice, law clients rely on labyrinthine billing compliance guidelines to delay paying their invoices. Now AI products like Verify are coming to rescue beleaguered lawyers from penny-pinching clients. Patrice writes:

“Artificial intelligence may not be prepared to solve every legal industry problem, but it might be the perfect fit for this one. ZERO CEO Alex Babin is always talking about developing automation to recover the money lawyers lose doing non-billable tasks, so it’s unsurprising that the company has turned its attention to the industry’s billing fiasco. And when it comes to billing guideline compliance, ZERO estimates that firms can recover millions by introducing AI to the process. Because just ‘following the guidelines’ isn’t always enough. Some guidelines are explicit. Others leave a world of interpretation. Still others are explicit, but no one on the client side actually cares enough to force outside counsel to waste time correcting the issue. Where ZERO’s product comes in is in understanding the guidelines and the history of rejections and appeals surrounding the bills to figure out what the bill needs to look like to get the lawyers paid with the least hassle.”

Verify can even save attorneys from their own noncompliant wording, rewriting their narratives to comply with guidelines. And it can do so while mimicking each lawyer’s writing style. Very handy.

Cynthia Murrell, September 14, 2023

New Wave Management or Is It Leaderment?

September 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Here’s one of my biases, and I am rather proud of it. I like the word “manager.” According to my linguistics professor Lev Soudek, the word “manage” used to mean trickery and deceit. When I was working at a blue chip consulting firm, the word meant using tactics to achieve a goal. I think of management as applied trickery. The people whom one pays will go along with the program, but not 24×7. In a company which treats 60 hours of work a week as the minimum for survival under a Spanish Inquisition-inspired personnel approach, mental effort had to be expended.

I read “I’m a Senior Leader at Amazon and Have Seen Many Bad Managers. Here Are 3 Reasons Why There Are So Few Great Ones.” The intense, clear-eyed young person explains that he has worked at some outfits which are not among my list of the Top 10 high-technology outfits. His résumé includes eBay (a digital yard sale), a game retailer, and the somewhat capricious Amazon (are we a retail outfit, are we a cloud outfit, are we a government services company, are we a data broker, are we a streaming company, etc.).


A modern practitioner of leaderment is having trouble getting the employees to fall in, throw their shoulders back, and march in step to the cadence of Am-a-zon, Am-a-zon like a squad of French Foreign Legion troops on Bastille Day. Thanks, MidJourney. The illustration did not warrant a red alert, but it is also disappointing.

I assume that these credentials are sufficient to qualify him as a management guru. Here are the three reasons managers are less than outstanding.

First, managers just sort of happen. Few people decide to be a manager. Ah, serendipity or just luck.

Second, managers don’t lead. (Huh, the word is “management”, not “leaderment.”)

Third, pressure for results means some managers are “sacrificing employee growth.” (I am not sure what this statement means. If one does not achieve results, then that individual and maybe his direct reports, the staff he leaderments, and his boss will be given an opportunity to find their future elsewhere. Translation for the GenZ reader: You are fired.)

Let’s step back and think about these insights. My initial reaction is that a significant re-languaging has taken place in the write up. A good manager does not have to be a leader. In fact, when I was a guest lecturer at the Kansai Institute of Technology, I met a number of respected Japanese managers. I suppose some were leaders, but a number made it clear that results were number one or ichiban.

In my work career, confusing “to manage” with “to lead” would have caused trouble. I recall when I was working in the US Congress with a retired admiral who was elected to represent an upscale LA district, the way life worked was simple: The retired admiral issued orders. Lesser entities like myself figured out how to execute, tapped appropriate resources, and got the job done. There was not much leadership required of me. I organized; I paid people money; and I hassled everyone until the retired admiral grunted in a happy way. There was no leaderment for me. The retired admiral said, “I want this in two days.” There was not much time for leaderment.

I listened to a podcast called GeekWire. The September 2, 2023, program made it clear that the current big dog at Amazon wants people to work in the office. If not, these folks are going to go away. What makes this interesting is that the GeekWire pundits pointed out that the Big Dog had changed his story, guidelines, and procedures for this work from home and work from office approach multiple times.

Therefore, I am not sure if there is management or leaderment at the world’s largest digital mall. I do know that modern leaderment is not for me. The old-fashioned meaning of manage seems okay to me.

Stephen E Arnold, September 12, 2023

Google: An Ad Crisis Looms from the Cancer of Short Videos

September 7, 2023

The weird orange newspaper ran a story which I found important. To read the article, you will need to pony up cash; I suggest you consider doing that. I want to highlight a couple of key points in the news story and offer a couple of observations.


An online advertising expert looks out his hospital window and asks, “I wonder if the cancer in my liver will be cured before the cancer is removed from my employer’s corporate body?” The answer may be, “Liver cancer has a five-year survival rate between 13 and 43 percent (give or take a few percentage points).” Will the patient get back to Foosball and off-site meetings? Is that computer capable of displaying TikTok videos? Thanks, Mother MJ. No annoying red appeal banners today.

The article “Shorts Risks Cannibalising Core YouTube Business, Say Senior Staff” contains an interesting assertion (although one must take it with a dollop of mustard and some Dead Sea salt):

Recent YouTube strategy meetings have discussed the risk that long-form videos, which produce more revenue for the company, are “dying out” as a format, according to these people.

I am suspicious of quotes from “these people.” Nevertheless, let’s assume that the concern at the Google is real like news from “these people.”

The idea is that Google has been asleep at the switch as TikTok (the China linked short video service) became a go-to destination for people seeking information. Yep, some young people search TikTok for information, not just tips on self-harm and body dysmorphia. Google’s reaction was slow and predictable: Me too me too me too. Thus, Google rolled out “Shorts,” a TikTok clone and began pushing it to its YouTube faithful.

The party was rolling along until “these people” sat down and looked at viewing time for longer videos and the ad revenue from shorter videos. Another red alert siren began spinning up.

The orange newspaper story asserted:

In October last year, YouTube reported its first-ever quarterly decline in ad revenue since the company started giving its performance separately in 2020. In the following two quarters, the platform reported further falls compared with the same periods the previous year.

With a decline in longer videos, the Google cannot insert as many ads. If people watch shorter videos, Google has reduced ad opportunities. Although Google would love to pump ads into 30-second videos, viewers (users) might decide to feed their habit elsewhere. And where, one may ask? How about TikTok or the would-be cage fighter’s Meta service?

Several observations:

  1. Any decline in ad revenue is a force multiplier at the Google. The costs of running the outfit are difficult to control. Google has not been the best outfit in the world in creating new, non ad revenue streams in the last 25 years. That original pay-to-play inspiration has had legs, but with age, knees and hips wear out. Googzilla is not as spry as it used to be and its bright idea department has not found sustainable new revenue able to make up for a decline in traditional Google ad revenue… yet.
  2. The cost of video is tough to weasel out of Google’s financial statements. The murky “cloud” makes it easy to shift some costs to the enabler of the magical artificial intelligence push at the company. In reality, video is a black hole of costs. Storage, bandwidth, legal compliance, creator hassles, and overhead translate to more ads. Long videos are one place to put ads every few minutes. But when the videos are short like those cutting shapes dance lessons, the “short” is a killer proposition.
  3. YouTube is a big deal. Depending on whose silly traffic estimates one believes, YouTube is as big a fish in terms of eyeballs as Google.com search. Google search is under fire from numerous directions. Prabhakar Raghavan has not mounted much of a defense to the criticisms directed at Google search’s genuine inability to deliver relevant search results. Now the YouTube ad money flow is drying up like streams near Moab.

Net net: YouTube has become a golden goose. But short videos are a cancer, and who can make foie gras out of a cancerous liver?

Stephen E Arnold, September 7, 2023

Gannett: Whoops! AI Cost Cutting Gets Messy

September 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Gannett, the “real” news bastion of excellence, experimented with smart software. The idea is that humanoids are expensive, unreliable, and tough to manage. Software — especially smart software — is just “set it and forget it.”


A young manager / mother appears in distress after her smart software robot spilled the milk. Thanks, MidJourney. Not even close to what I requested.

That was the idea in the Gannett carpetland. How did that work out?

“Gannett to Pause AI Experiment after Botched High School Sports Articles” reports:

Newspaper chain Gannett has paused the use of an artificial intelligence tool to write high school sports dispatches after the technology made several major flubs in articles in at least one of its papers.

The estimable Gannett organization’s effort generated some online buzz. The CNN article adds:

The reports were mocked on social media for being repetitive, lacking key details, using odd language and generally sounding like they’d been written by a computer with no actual knowledge of sports.

That statement echoes my views of MBAs with zero knowledge of business making bonehead management decisions. Gannett is well managed; therefore, the executives are not responsible for the decision to use smart software to cut costs and expand the firm’s “real” news coverage.

I wonder if the staff terminated would volunteer to return to work to write “real” news? You know. The hard stuff like high school sports articles.

Stephen E Arnold, September 6, 2023

Generative AI: Not So Much a Tool But Something Quite Different

August 24, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Thirty years ago I had an opportunity to do a somewhat peculiar job. I had written for a publisher in the UK a version of a report my team and I prepared about Japanese investments in the country’s Fifth Generation Computer Revolution or some such government effort. A wealthy person who owned a medium-sized financial firm asked me if I would comment on a book called The Meaning of the Microcosm. “Sure,” I said.


This tiny, cute technology creature has just crawled from the ocean, and it is looking for lunch. Who knew that it could morph into a much larger and more disruptive beast? Thanks, MidJourney. No review committee for me this morning.

What I described was technology’s Darwinian behavior. I am not sure I was breaking new ground, but it seemed safe for me to point to how a technology survived. Therefore, I argued in a private report to this wealthy fellow that betting on a winner would make one rich. I tossed in an idea that I have thought about for many years; specifically, as technologies battle to “survive,” the technologies evolve and mutate. The angle I have commented about for many years is simple: Predicting how a technology mutates is a tricky business. Mutations can be tough to spot or just pop up. Change just says, “Hello, I am here.”

I thought about this “book commentary project” when I read “How ChatGPT Turned Generative AI into an Anything Tool.” The article makes a number of interesting observations. Here’s one I noted:

But perhaps inadvertently, these same changes let the successors to GPT3, like GPT3.5 and GPT4, be used as powerful, general-purpose information-processing tools—tools that aren’t dependent on the knowledge the AI model was originally trained on or the applications the model was trained for. This requires using the AI models in a completely different way—programming instead of chatting, new data instead of training. But it’s opening the way for AI to become general purpose rather than specialized, more of an “anything tool.”

I am not sure that “anything tool” is a phrase with traction, but it captures the idea of a technology that began as a sea creature, morphing, and then crawling out of the ocean looking for something to eat. The current hungry technology is smart software. Many people see the potential of combining repetitive processes with smart software in order to combine functions, reduce costs, or create alternatives to traditional methods of accomplishing a task. A good example is the use college students are making of the “writing” ability of free or low cost services like ChatGPT or You.com.
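The quoted idea of “programming instead of chatting, new data instead of training” can be sketched as follows. This is a hypothetical illustration, not any vendor’s actual API: the prompt wording, function names, and the stubbed `call_model` hook are my own assumptions.

```python
def build_prompt(task, text):
    """Wrap arbitrary new data in a task instruction — programming the
    model rather than chatting with it."""
    return (
        f"Task: {task}\n"
        f"Input: {text}\n"
        "Answer with a single word."
    )

def parse_label(raw, allowed):
    """Normalize a model's free-text reply to one of the allowed labels,
    or None if the reply is off-script."""
    word = raw.strip().strip(".").lower()
    return word if word in allowed else None

def classify(text, call_model):
    """Use a generative model as an 'anything tool': here, a zero-shot
    sentiment classifier. `call_model` is whatever function sends a
    prompt to an LLM endpoint and returns its text reply."""
    prompt = build_prompt("Classify the sentiment as positive or negative", text)
    return parse_label(call_model(prompt), {"positive", "negative"})
```

The point of the pattern: the model never saw this task in training; the program supplies the task and the new data at call time, which is what makes the tool general purpose rather than specialized.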

But more is coming. As I recall, in my discussion of the microcosm book, I noted Mr. Gilder’s point that small-scale systems and processes can have profound effects on larger systems and society as a whole. But a technology “innovation” like generative AI is simultaneously “small” and “large.” Perspective and point of view are important in software. Plus, the innovations of the transformer and the larger applications of generative AI to college essays illustrate the scaling impact.

What makes AI interesting for me at this time is that genetic / Darwinian change is occurring across the scale spectrum. On one hand, developers are working to create big applications; for instance, SaaS solutions that serve millions of users. On the other hand, shifting from large language models to smaller, more efficient methods of getting smart aim to reduce costs and speed the functioning of the plumbing.

The cited essay in Ars Technica is on the right track. However, the examples chosen are, it seems to me, ignoring the surprises the iterations of the technology will deliver. Is this good or bad? I have no opinion. What is important is that wild and crazy ideas about control and regulation strike me as bureaucratic time wasting. Millions of years ago the job was to get out of the way of the hungry creature from the ocean; with this creature from the ocean of ones and zeros, the job is to figure out how to catch it and have dinner, turn its body parts into jewelry which can be sold online, or process the beastie into a heat-and-serve meal at Trader Joe’s.

My point is that the generative innovations do not comprise a “tool.” We’re looking at something different, semi-intelligent, and evolving with speed. Will it be “let’s have lunch” or “one is lunch”?

Stephen E Arnold, August 24, 2023

Amazon: You Are Lovable… to Some I Guess

August 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Three “real news” giants have published articles about the dearly beloved outfit Amazon. My hunch is that the publishers were trepidatious when the “real” reporters turned in their stories. I can hear the “Oh, my goodness. A negative Amazon story.” Not to worry. It is unlikely that the company will buy ad space in the publications.


A young individual finds that the giant who runs an alleged monopoly is truly lovable. Doesn’t everyone? MidJourney, after three tries I received an original image somewhat close to my instructions.

My thought is the fear that executives at the companies publishing negative information about the lovable Amazon could hear upon coming home from work, “You published this about Amazon. What if our Prime membership is cancelled? What if our Ring doorbell is taken offline? And did you think about the loss of Amazon videos? Of course not, you are just so superior. Fix your own dinner tonight. I am sleeping in the back bedroom tonight.”

The first story is “How Amazon’s In-House First Aid Clinics Push Injured Employees to Keep Working.” Imagine. Amazon creating a welcoming work environment in which injured employees are supposed to work. Amazon is pushing into healthcare. The article states:

“What some companies are doing, and I think Amazon is one of them, is using their own clinics to ‘treat people’ and send them right back to the job, so that their injury doesn’t have to be recordable,” says Jordan Barab, a former deputy assistant secretary at OSHA who writes a workplace safety newsletter.

Will Amazon’s other health care units operate in a similar way? Of course not.

The second story is “Authors and Booksellers Urge Justice Dept. to Investigate Amazon.” Imagine. Amazon exploiting its modest online bookstore and its instant print business to take sales away from the “real” publishers. The article states:

On Wednesday [August 16, 2023], the Open Markets Institute, an antitrust think tank, along with the Authors Guild and the American Booksellers Association, sent a letter to the Justice Department and the Federal Trade Commission, calling on the government to curb Amazon’s “monopoly in its role as a seller of books to the public.”

Wow. Unfair? Some deliveries arrive in a day. A Kindle book pops up in the incredibly cluttered and reader-hostile interface in seconds. What’s not to like?

The third story is from the “real news outfit” MSN which recycles the estimable CNBC “talking heads”. This story is “Amazon Adds a New Fee for Sellers Who Ship Their Own Packages.” The happy family of MSN and CNBC report:

Beginning Oct. 1, members of Amazon’s Seller Fulfilled Prime program will pay the company a 2% fee on each product sold, according to a notice sent to merchants … The e-commerce giant also charges sellers a referral fee between 8% and 15% on each sale. Sellers may also pay for things like warehouse storage, packing and shipping, as well as advertising fees.
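The fee stack in the quoted passage reduces to simple arithmetic. A minimal sketch (the 12% referral rate is my illustrative midpoint of the quoted 8% to 15% range, not a real category rate; the function name is my own):

```python
def amazon_fees(sale_price, referral_rate=0.12, sfp_rate=0.02):
    """Estimate per-sale fees under the quoted structure: a referral fee
    (8% to 15% depending on category) plus the new 2% Seller Fulfilled
    Prime fee. Warehouse storage, packing, shipping, and advertising
    charges mentioned in the article are extra and not modeled."""
    referral = sale_price * referral_rate
    sfp = sale_price * sfp_rate
    return {"referral": referral, "sfp": sfp, "total": referral + sfp}

# On a $100 sale at a 12% referral rate: about $12 referral plus $2 SFP,
# roughly $14 in fees before the other charges.
fees = amazon_fees(100.00)
```

So the new 2% fee lands on top of a referral fee that is already four to seven times larger.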

What’s the big deal?

To an admirer who grew up relying on a giant company, no problem.

Stephen E Arnold, August 21, 2023
