Prediction: Next Target Up — Public Libraries

June 26, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The publishers (in spirit at least) have kneecapped the Internet Archive. If you don’t know what the online service does or did, it does not matter. I learned from the estimable ShowBiz411.com site that a cultural treasure is gone. Forget digital books. The article “Paramount Erases Archives of MTV Website, Wipes Music, Culture History After 30 Plus Years” says:

Parent company Paramount, formerly Viacom, has tossed twenty plus years of news archives. All that’s left is a placeholder site for reality shows. The M in MTV – music — is gone, and so is all the reporting and all the journalism performed by music and political writers ever written. It’s as if MTV never existed. (It’s the same for VH1.com, all gone.)

Why? The write up couches the savvy business decision of the Paramount leadership this way:

There’s no precedent for this, and no valid reason. Just cheapness and stupidity.

Tibby, my floppy-eared Frenchie, is listening to music from the Internet Archive. He knows the publishers removed 500,000 books. Will he lose access to his beloved early 20th century hill music? Will he ever be able to watch reruns of the rock-the-casbah music video? No. He is a risk. A threat. A despicable knowledge seeker. Thanks to myself for this nifty picture.

My knowledge of MTV and VH1 is limited. I do recall telling my children, “Would you turn that down, please?” What a waste of energy. Future students of American culture will have a void. I assume some artifacts of the music videos will remain. But the motherlode is gone. Is this a loss? On one hand, no. Thank goodness I will not have to glimpse performers rocking the casbah. On the other hand, yes. Archaeologists study bits of stone, trying to figure out how those who built Machu Picchu did it. The value of lost information to those in the future is tough to discuss. But knowledge products may be like mine tailings. At some point, a bright person can figure out how to extract trace elements in quantity.

I have a slightly different view of these two recent cultural milestones. I have a hunch that the publishers want to protect their intellectual property. Internet Archive rolled over because its senior executives learned from their lawyers that lawsuits about copyright violations would be tough to win. The informed approach was to delete 500,000 books. Imagine an online service like the Internet Archive trying to be a library.

That brings me to what I think is going on. Copyright litigation will make quite a lot of digital information disappear. That means the fees public libraries pay for digital copies of books to “loan” to patrons must go up. Libraries that don’t play ball may face other publisher punishments: no American Library Association after parties, no consortia discounts, and at some point no free books.

Yes, libraries will have to charge a patron to check out a physical book and then the “publishers” will get a percentage.

The Andrew Carnegie “free” thing is wrong. Libraries rip off the publishers. Authors may be mentioned, but what publisher cares about 99 percent of its authors? (I hear crickets.)

Several thoughts struck me as I was walking my floppy-eared Frenchie:

  1. The loss of information (some of which may have knowledge value) is no big deal in a social structure which does not value education. If people cannot read, who cares about books? Publishers and the wretches who write them. Period.
  2. The video copyright timebomb of the Paramount video content has been defused. Let’s keep those lawyers at bay, please. Who will care? Nostalgia buffs and the parents of the “stars”?
  3. The Internet Archive has music; libraries have music. Those are targets not on Paramount’s back. Who will shoot at these targets? Copyright litigators. Go go go.

Net net: My prediction is that libraries must change to a pay-to-loan model or get shut down. Who wants informed people running around disagreeing with lawyers, accountants, and art history majors?

Stephen E Arnold, June 26, 2024

Chasing a Folly: Identifying AI Content

June 24, 2024

Like other academic publishers, Springer Nature Group is plagued by fake papers. Now the company announces, “Springer Nature Unveils Two New AI Tools to Protect Research Integrity.” How effective the tools are remains to be seen, but at least the company is making an effort. The press release describes text-checker Geppetto and image-analysis tool SnappShot. We learn:

“Geppetto works by dividing the paper up into sections and uses its own algorithms to check the consistency of the text in each section. The sections are then given a score based on the probability that the text in them has been AI generated. The higher the score, the greater the probability of there being problems, initiating a human check by Springer Nature staff. Geppetto is already responsible for identifying hundreds of fake papers soon after submission, preventing them from being published – and from taking up editors’ and peer reviewers’ valuable time.

SnappShot, also developed in-house, is an AI-assisted image integrity analysis tool. Currently used to analyze PDF files containing gel and blot images and look for duplications in those image types – another known integrity problem within the industry – this will be expanded to cover additional image types and integrity problems and speed up checks on papers.”
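The press release stays at a high level, but the mechanism it describes for Geppetto, split a paper into sections, score each one, and route high scorers to a human checker, can be sketched in a few lines. To be clear, everything below is a toy stand-in: the type-token-ratio heuristic, the function names, and the threshold are my inventions for illustration, not Springer Nature’s actual model.

```python
# Toy sketch of section-wise AI-text screening in the spirit of the
# Geppetto description above. The scoring heuristic (type-token ratio
# as a crude repetition proxy) is a stand-in, not the real algorithm.

def split_sections(paper: str) -> list[str]:
    """Split a paper into sections on blank lines."""
    return [s.strip() for s in paper.split("\n\n") if s.strip()]

def ai_likelihood(section: str) -> float:
    """Crude proxy score in [0, 1]; higher means more suspicious.
    Very repetitive vocabulary (low type-token ratio) scores higher."""
    words = section.lower().split()
    if not words:
        return 0.0
    type_token_ratio = len(set(words)) / len(words)
    return round(1.0 - type_token_ratio, 3)

def screen_paper(paper: str, threshold: float = 0.5) -> list[tuple[int, float]]:
    """Score each section; return (index, score) pairs over the
    threshold that would be routed to a human checker."""
    flagged = []
    for i, section in enumerate(split_sections(paper)):
        score = ai_likelihood(section)
        if score > threshold:
            flagged.append((i, score))
    return flagged
```

The real systems presumably use trained language models rather than a one-line lexical statistic, but the pipeline shape, per-section scores feeding a human-review queue, matches what the press release describes.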

Springer Nature’s Chris Graf emphasizes the importance of research integrity and vows to continue developing and improving in-house tools. To that end, we learn, the company is still growing its fraud-detection team. The post points out Springer Nature is a contributing member of the STM Integrity Hub.

Based in Berlin, Springer Nature was formed in 2015 through the combination of Nature Publishing Group, Macmillan Education, and Springer Science+Business Media. A few of its noteworthy publications include Scientific American, Nature, and this collection of Biology, Clinical Medicine, and Health journals.

Cynthia Murrell, June 24, 2024

Thomson Reuters: A Trust Report about Trust from an Outfit with Trust Principles

June 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Thomson Reuters is into trust. The company has a Web page called “Trust Principles.” Here’s a snippet:

The Trust Principles were created in 1941, in the midst of World War II, in agreement with The Newspaper Proprietors Association Limited and The Press Association Limited (being the Reuters shareholders at that time). The Trust Principles imposed obligations on Reuters and its employees to act at all times with integrity, independence, and freedom from bias. Reuters Directors and shareholders were determined to protect and preserve the Trust Principles when Reuters became a publicly traded company on the London Stock Exchange and Nasdaq. A unique structure was put in place to achieve this. A new company was formed and given the name ‘Reuters Founders Share Company Limited’, its purpose being to hold a ‘Founders Share’ in Reuters.

Trust nestles in some legalese and a bit of business history. The only reason I mention this anchoring in trust is that Thomson Reuters reported quarterly revenue of $1.88 billion in May 2024, up from $1.74 billion in May 2023. The financial crowd had expected $1.85 billion in the quarter, and Thomson Reuters beat that. Surplus funds make it possible to fund many important tasks; for example, a study of trust.

The ouroboros, according to some big thinkers, symbolizes the entity’s journey and the unity of all things; for example, defining trust, studying trust, and writing about trust as embodied in the symbol.

My conclusion is that trust as a marketing and business principle seems to be good for business. Therefore, I trust, and I am confident in the information in “Global Audiences Suspicious of AI-Powered Newsrooms, Report Finds.” The subject of the trusted news story is a report from the Reuters Institute for the Study of Journalism. The Thomson Reuters reporter presents in a trusted way this statement:

According to the survey, 52% of U.S. respondents and 63% of UK respondents said they would be uncomfortable with news produced mostly with AI. The report surveyed 2,000 people in each country, noting that respondents were more comfortable with behind-the-scenes uses of AI to make journalists’ work more efficient.

To make the point, a person quoted in the trusted outfit’s trusted report says, in what strikes me as a trustworthy way:

“It was surprising to see the level of suspicion,” said Nic Newman, senior research associate at the Reuters Institute and lead author of the Digital News Report. “People broadly had fears about what might happen to content reliability and trust.”

In case you have lost the thread, let me summarize. The trusted outfit Thomson Reuters funded a study about trust. The research was conducted by the trusted outfit’s own Reuters Institute for the Study of Journalism. The conclusion of the report, as presented by the trusted outfit, is that people want news they can trust. I think I have covered the post card with enough trust stickers.

I know I can trust the information. Here’s a factoid from the “real” news report:

Vitus “V” Spehar, a TikTok creator with 3.1 million followers, was one news personality cited by some of the survey respondents. Spehar has become known for their unique style of delivering the top headlines of the day while laying on the floor under their desk, which they previously told Reuters is intended to offer a more gentle perspective on current events and contrast with a traditional news anchor who sits at a desk.

How can one not trust a report that includes a need met by a TikTok creator? Would a Thomson Reuters professional write a news story from under his or her desk or cube or home office kitchen table?

I think self-funded research finds that the funding entity’s approach to trust is exactly what those in search of “real” news need. Wikipedia includes some interesting information about Thomson Reuters in its discussion of the company in the section titled “Involvement in Surveillance.” Wikipedia alleges that Thomson Reuters licenses data to Palantir Technologies, an assertion which, if accurate, I find orthogonal to my interpretation of the word “trust.” But Wikipedia is not Thomson Reuters.

I will not ask questions about the methodology of the study. I trust the Thomson Reuters’ professionals. I will not ask questions about the link between revenue and digital information. I have the trust principles to assuage any doubt. I will not comment on the wonderful ouroboros-like quality of an enterprise embodying trust, funding a study of trust, and converting those data into a news story about itself. The symmetry is delicious and, of course, trustworthy. For information about Thomson Reuters’s trust use of artificial intelligence see this Web page.

Stephen E Arnold, June 21, 2024

What Is That Wapo Wapo Wapo Sound?

June 20, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Do you hear that thumping wapo wapo wapo sound? I do. It reminds me of an old school pickup truck with a flat tire on a hot summer’s day. Yep, wapo wapo wapo. That’s it!

“Jeff Bezos Has Worst Response Ever to Washington Post Turmoil” emitted this sound when I read the essay in the New Republic. The newspaper for Washington, DC and its environs is the Post. When I lived in Washington, DC, the newspaper was a must read. Before I trundled off to the cheerful workplace of Halliburton Nuclear and later to the incredibly sensitive and human blue chip consulting firm known affectionately as the Boozer, I would read the WaPo. I had to be prepared. If I were working with a Congress person like Admiral Craig Hosmer, USN Retired, I had to know what Miss Manners had to say that day. A faux pas could be fatal.

The old pickup truck has a problem because one of the tires went wapo wapo wapo and then the truck stopped. Thanks, MSFT Copilot. Good enough.

The WaPo is now a Jeff Bezos property. I have forgotten how the financial deal was structured, but he has a home in DC and every person who is in contention as one of the richest men on earth needs a newspaper. The write up explains:

In a memo to the paper’s top personnel on Tuesday, the billionaire technocrat backed the new CEO Will Lewis, a former lieutenant to right-wing media mogul Rupert Murdoch, whose controversial appointment at the Post has made waves across the industry in the wake of reporting on his shady journalistic practices.

That’s inspiring for a newspaper: a political angle and “shady journalistic practices.” What happened to that old “every day is Day One” and “the customer is important”? I suppose a PR person could trot those out. But the big story seems to be that the newspaper is losing readers and money. Don’t people in DC read? Oh, silly question. No, now the up-and-coming movers and shakers doom scroll and watch YouTube. The cited article includes a snippet from the Bezos bulldozer, it appears. That item states:

…the journalistic standards and ethics at The Post will not change… You have my full commitment to maintaining the quality, ethics, and standards we all believe in.

Two ethics in one short item. Will those add up this way: ethics plus ethics equals trust? Sure. I believe everything one of the richest people in the world says. It seems that one of the new hires brought in to drive the newspaper world’s version of Jack Benny’s wheezing Maxwell was involved in some hanky-panky involving private telephone conversations.

Several observations:

  1. “Real” newspapers seem to be facing some challenges. These range from money to money to money. Did I mention money?
  2. The newspaper owner and the management team have to overcome the money hurdle. How does one do that? Maybe smart software from an outfit like AWS and the Sagemaker product line? The AI can output good enough content at a lower cost and without grousing humans, vacations, health care, and annoying reporters poking into the lifestyle of the rich, powerful, famous, and rich. Did I mention “rich” twice? But if Mr. Bezos can work two ethics into one short memo, I can fit two into a longer blog post.
  3. The readers and journalists are likely to lose. I think readers will just suck down content from their mobile devices and the journalists will have to find their futures elsewhere like certain lawyers, many customer service personnel, and gig workers who do “art” for publishers, among others.

Net net: Do you hear the wapo wapo wapo? How long will the Bezos pickup truck roll along?

Stephen E Arnold, June 20, 2024

The Gray Lady Tap Dances

June 17, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The collision of myth, double talk, technology, and money produces some fascinating tap dancing. Tip tap tip tap. Tap tap. That’s the sound of the folks involved with explaining that technology is no big deal. Drum roll. Then the coda. Tip tap tip tap. Tap tap tap. It is not money. Tip tap tip tap. tap tap.

I think quite a few business decisions are about money; specifically, getting a bonus or a hefty raise because “efficiency” improves “quality.” One can dance around the dead horse, but at some point that horse needs to be relocated.

The “real” Mona Lisa. Can she be enhanced, managed, and be populated with metadata without a human art director? Yep. Thanks, MSFT Copilot. Good enough.

I read “New York Times Union Urges Management to Reconsider 9 Art Department Cuts as Paper Ramps Up AI Tools | Exclusive.” The write up weaves a number of themes together. There is the possibility of management waffling, a common practice these days. Recall an incident, Microsoft? The ever-present next big thing makes an appearance. Plus, there is the Gray Lady, working hard to maintain its position as the newspaper for the USA today. (That sounds familiar, doesn’t it?)

The main point of the write up is that the NYT’s art department might lose staff. The culprit is not smart software. Money is not the issue. Quality will not suffer. Yada yada. The write up says:

The Times denies that the reductions are in any way related to the newspaper’s AI initiatives.

And the check is in the mail.

I also noted:

A spokesman for the Times said the affected employees are being offered a buyout, and have nothing to do with the use of AI. “Last month, The Times’s newsroom made the difficult decision to reduce the size of its art production team with workflow changes to make photo toning and color correction work more efficient,” Charlie Stadtlander told TheWrap. “On May 30th, we offered generous voluntary buyouts for 9 employees to accept. These changes involve the adoption of new workflows and the expanded use of industry-standard tools that have been in use for years — they are not related to The Times’s AI efforts.”

Nope. Never. Impossible. Unthinkable.

What is the smart software identified as a staff reducer? It is Claro, but that is not the name of the company. The current name of the service is Pixometry, which is a mashup of Claro and Elpical. So what does this controversial smart software do? The firm’s Web site says:

Pixometry is the latest evolution of Claro, the leading automated image enhancement platform for Publishers and Retailers around the globe. Combining exceptional software with outstanding layered AI services, Pixometry delivers a powerful image processing engine capable of creating stunning looking images, highly accurate cut-outs and automatic keywording in seconds. Reducing the demands upon the Photoshop teams, Pixometry integrates seamlessly with production systems and prepares images for use in printed and digital media.

The Pixometry software delivers:

Cloud based automatic image enhancement & visual asset management solutions for publishers & retail business.

Its functions include:

  • Automatic image “correction” because “real” is better than real
  • Automatic cut outs and key wording (I think a cut out is a background remover so a single image can be plucked from a “real” photo)
  • Consistent, high quality results. None of that bleary art director eye contact.
  • Multi-channel utilization. The software eliminates telling a Photoshop wizard I need a high-res image for the magazine and then a 96 spot-per-inch version for the Web. How long will that take? What? I need the images now.
  • Applied AI image intelligence. Hey, no hallucinations here. This is “real” image enhancement and better than what those Cooper Union space cadets produce when they are not wandering around looking for inspiration or whatever.

Does that sound like reality shaping or deep fake territory? Hmmm. That’s a question none of the hair-on-fire write ups addresses. But if you are a Photoshop and Lightroom wizard, the software means hasta la vista in my opinion. Smart software may suck at office parties but it does not require vacays, health care (just minor system updates), or unions. Software does not argue, wear political buttons, or sit around staring into space because of a late night at the “library.”

Pretty obscure unless you are a Photoshop wizard. The Pixometry Web site explains that it provides a searchable database of images and what looks like one click enhancement of images. Hey, every image needs a bit of help to be “real”, just like “real” news and “real” management explanations. The Pixometry Web site identifies some organizations as “loving” Pixometry; for example, the star-crossed BBC, News UK, El Mercurio, and the New York Times. Yes, love!

Let’s recap. Most of the reporting about this use of applied smart software gets the name of the system wrong. None of the write ups point out that art director functions in the hands of a latte guzzling professional are not quick, easy, or without numerous glitches. Furthermore, the humans in the “art” department must be managed.

The NYT is, it appears, trying to do the two-step around software that is better, faster, and cheaper than the human powered options. Other observations are:

  1. The fast-talking is not going to change the economic benefit of smart software
  2. The notion of a newspaper fixing up photos underscores that deep fakes have permeated institutions which operate as if it were 1923 skidoo time
  3. The skilled and semi-skilled workers in knowledge industries may taste blood when the titanium snake of AI bites them on the ankle. Some bites will be fatal.

Net net: Being up front may have some benefits. Skip the old soft shoe, please.

Stephen E Arnold, June 17, 2024

Publishers Sign Up for the Great Unknown: Risky, Oh, Yeah

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

OpenAI is paying for content. Why? Maybe to avoid lawsuits? Maybe to get access to “real” news to try to get ahead of its perceived rivals? Maybe because Sam AI-Man pushes forward while its perceived competitors do weird things like add features, launch services which are lousy, or which have the taste of the bitter fruit of Zuckus nepenthes.

Publishers are like beavers. Publishers have to do whatever they can to generate cash. Thanks, MSFT Copilot. Good enough. Not a cartoon and not a single dam, but, just like MSFT security, good enough, today’s benchmark of excellence.

“Journalists Deeply Troubled by OpenAI’s Content Deals with Vox, The Atlantic” is a good example of the angst Sam AI-Man is causing among “real” news outfits and their Fourth Estate professionals. The write up reports:

“Alarmed” writers unions question transparency of AI training deals with ChatGPT maker.

Oh, oh. An echo of Google’s Code Red am I hearing? No, what I hear is the ka-ching of the bank teller’s deposit system as the “owner” of the Fourth Estate professional business process gets Sam AI-Man’s money. Let’s not confuse “real” news with “real” money, shall we? In the current economic climate, money matters. Today it is difficult to sell advertising unless one is a slam dunk monopoly with an ad sales system that is tough to beat. Today it is tough to get those who consume news via a podcast or a public Web site to subscribe. I think that the number I heard for conversions is something like one or two subscribers per 100 visitors on a really good day. Most days are not really good.

“Real” journalists can be unionized. The idea is that their services have to be protected from the lawyers and bean counters who run many high profile publishing outfits. The problem with unions is that these seek to limit what the proprietors can do in a largely unregulated capitalist set up like the one operating within the United States. In a long-forgotten pre-digital era, union members fought a dust up in 1921 at Blair Mountain in my favorite state, West Virginia. Today, the union members are more likely to launch social media posts and hook up with a needy lawyering outfit.

Let me be clear. Some of the “real” journalists will find fame as YouTubers, pundits on what’s left of traditional TV or cable news programs, or by writing a book which catches the attention of Netflix. Most, however, will do gig work and migrate to employment adjacent to “real” news. The problem is that in any set of “real” journalists, the top 10 percent will be advantaged. The others may head to favelas, their parent’s basement, or a Sheetz parking lot in my favorite state for some chemical relief. Does that sound scary?

Think about this.

Sam AI-Man, according to the Observer’s story “Sam Altman Says OpenAI Doesn’t Fully Understand How GPT Works Despite Rapid Progress,” admits that even OpenAI does not fully grasp its own technology. These money-focused publishers are signing up for something that not only do they not understand but that the fellow surfing the crazy wave of smart software does not understand either. Taking money is easy; worrying about the future is not something publishing executives in their carpetlands think about. Money in hand is good. Worrying about the future, according to their life coach, is not worth the mental stress. It is go-go in a now-now moment.

I cannot foretell the future. If I could, I would not be an 80-year-old dinobaby sitting in my home office marveling at the downstream consequences of what amounts to a 2024 variant of the DR-LINK technology. I can offer a handful of hypotheses:

  1. “Real” journalists are going to find that publishers cut deals to get cash without thinking of the “real” journalists or the risks inherent in hopping in a small cabin with Sam AI-Man for a voyage in the unknown.
  2. Money and cost reductions will fuel selling information to Sam AI-Man and any other Big Tech outfit which comes calling with a check book. Money now is better than looking at a graph of advertising sales over the last five years. Money trumps “real” journalists’ complaints when they are offered part-time work or an opportunity to find their future elsewhere.
  3. Publishing outfits have never been technology adept, and I think that engineered blindness is now built into the companies’ management processes. Change is going to make publishing an interesting business. That’s good for consultants and bankruptcy specialists. It will not be so good for those who do not have golden parachutes or platinum flying cars.

Net net: What are the options for the “real” journalists’ unions? Lawyers, maybe. Social media posts. Absolutely. Will these prevent publishers from doing what publishers have to do? Nope.

Stephen E Arnold, June 7, 2024

AI in the Newsroom

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

It seems much of the news we encounter is already, at least in part, generated by AI. Poynter discusses how “AI Is Already Reshaping Newsrooms, AP Study Finds.” The study surveyed 292 respondents from legacy media, public broadcasters, magazines, and other news outlets. Writer Alex Mahadevan summarizes:

“Nearly 70% of newsroom staffers from a variety of backgrounds and organizations surveyed in December say they’re using the technology for crafting social media posts, newsletters and headlines; translation and transcribing interviews; and story drafts, among other uses. One-fifth said they’d used generative AI for multimedia, including social graphics and videos.”

Surely these professionals are only using these tools under meticulous guidelines, right? Well, a few are. We learn:

“The tension between ethics and innovation drove Poynter’s creation of an AI ethics starter kit for newsrooms last month. The AP — which released its own guidelines last August — found less than half of respondents have guidelines in their newsrooms, while about 60% were aware of some guidelines about the use of generative AI.”

The survey found the idea of guidelines was not even on most respondents’ minds. That is unsettling. Mahadevan lists some other interesting results:

“* 54% said they’d ‘maybe’ let AI companies train their models using their content.

* 49% said their workflows have already changed because of generative AI.

* 56% said the AI generation of entire pieces of content should be banned.

* Only 7% of those who responded were worried about AI displacing jobs.

* 18% said lack of training was a big challenge for ethical use of AI. ‘Training is lovely, but time spent on training is time not spent on journalism — and a small organization can’t afford to do that,’ said one respondent.”

That last statement is disturbing, given the gradual deterioration and impoverishment of large news outlets. How can we ensure best practices make their way into this mix, and can it be done before any news may be fake news?

Cynthia Murrell, June 7, 2024

Lunch at a Big Time Publisher: Humble Pie and Sour Words

June 4, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Years ago I did some work for a big time New York City publisher. The firm employed people who used words like “fungible” and “synergy” when talking with me. I took the time to read an article with this title: “So Much for Peer Review — Wiley Shuts Down 19 Science Journals and Retracts 11,000 Gobbledygook Papers.” Was this the staid, conservative, big vocabulary outfit I remembered?

Yep.

The essay is little more than a wrapper for a Wall Street Journal story with the title “Flood of Fake Science Forces Multiple Journal Closures Tainted by Fraud.” I quite like that title, particularly the operative word “fraud.” What in the world is going on?

The write up explains:

Wiley — a mega publisher of science articles has admitted that 19 journals are so worthless, thanks to potential fraud, that they have to close them down. And the industry is now developing AI tools to catch the AI fakes (makes you feel all warm inside?)

A group of publishing executives becomes the focal point of a Midtown lunch in an upscale restaurant. The titans of publishing are complaining about the taste of humble pie and use secret NYAC gestures to express their disapproval. Thanks, MSFT Copilot. Your security expertise may warrant a special banquet too.

The information in the cited article contains some tasty nuggets which complement humble pie in my opinion; for instance:

  • The shut down of the junk food publications has required two years. If Sillycon Valley outfits can fire thousands via email or Zoom, “Why are those uptown shoes being dragged?” I asked myself.
  • Other high-end publishers have been doing the same thing. Sadly there are no names.
  • The bogus papers included something called an “AI gobbledygook sandwich.” Interesting. Human reviewers who are experts could not recognize the vernacular of academic and research fraudsters.
  • Some in Australia think that the credibility of universities might be compromised. Oh, come now. Just because the president of Stanford had to search for his future elsewhere after some intellectual fancy dancing and the head of the Harvard ethics department demonstrated allegedly sci-fi ethics in published research, what’s the problem? Don’t students just get As and Bs? Professors are engaged in research, chasing consulting gigs, and ginning up grant money. Actual research? Oh, come now.
  • Academic journals are or were a $30 billion industry.

Observations are warranted:

  • In today’s datasphere, I am not surprised. Scams, frauds, and cheats seem to be as common as ants at a picnic. A cultural shift has occurred. Cheating has become the norm.
  • Will the online databases, produced by some professional publishers and commercial database companies, be updated to remove or at least flag the baloney? Probably not. That costs money. Spending money is not a modern publishing CEO’s favorite activity. (Hence the two-year draw down of the fake information at the publishing house identified in the cited write up.)
  • How many people have died or been put out of work because of specious research data? I am not holding my breath for the peer reviewed journals to provide this information.

Net net: Humiliating and a shame. Quite a cultural mismatch between what some publishers say and what the firm allegedly ordered from the deli. I thought the outfit had a knowledge-based reason to tell me that it takes the high road. It seems that on that road, there are places where a bad humble pie is served.

Stephen E Arnold, June 4, 2024

The Death of the Media: Remember Clay Tablets?

May 24, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Did the home in which you grew from a wee one to a hyperspeed teen have a plaster cast that said, “Home sweet home” or “Welcome” hanging on the wall? My mother had those craft sale treasures everywhere. I have none. The point is that the clay tablets from ancient times were not killed, put out of business, or bankrupted because someone wrote on papyrus, sheep skin, or bits of wood. Eliminating a communications medium is difficult. Don’t believe me? Go to an art fair and let me know if you were unable to spot something made of clay with writing or a picture on it.


I mention these older methods of disseminating a message because I read “Publishers Horrified at New Google AI Feature That Could Kill What’s Left of Journalism.” Really?

The write up states:

… preliminary studies on Google’s use of AI in its search engine have the potential to reduce website traffic by 25 percent, The Associated Press reports. That could be billions in revenue lost, according to an interview with Marc McCollum, chief innovation officer for content creator consultancy Raptive, who was interviewed by the AP.

The idea is that “real” journalism depends on Google for revenue. If the revenue from Google’s assorted ad programs tossing pennies to Web sites goes away, so will the “real” journalism on these sites.

If my dinobaby memory is working, the AP (Associated Press) was supported by newspapers. Then the AP was supported by Google. What’s next? I don’t know, but the clay tablet fellows appear to have persisted. The producers of the tablets probably shifted to tableware. Those who wrote on the tablets learned to deal with ink and sheepskin.


Chilling in the room thinking thoughts of doom. Thanks, MSFT Copilot. Keep following your security recipe.

AI seems to be capable of creating stories like those in Smartnews or one of the AI-powered spam outfits. The information is recycled. But it is good enough. Some students today seem incapable of tearing themselves from their mobile devices to read words. The go-to method for getting information is a TikTok-type service. People who write words may be fighting to make the shift to new media.

One thing is reasonably clear: Journalists and media mavens are concerned that a person will accept an answer produced by a Google-like service. The enter-a-query approach to information is a “hot medium” thing. Today, kicking back and letting video do the work seems to be a winner.

Google, however, has in my opinion been fiddling with search since it “innovated” in its implementation of the GoTo.com/Overture.com approach to “pay to play” search. If you want traffic, buy ads. The more one spends, the more traffic one’s site gets. That’s simple. There are some variations, but the same Google model will be in effect with or without Google’s little summaries. The lingo may change, but the clicks will remain. Where there are clicks, advertisers will pay to be there.

Google can, of course, kill Googzilla, its giant mom laying golden eggs. That will take some time. Googzilla is big. My theory is that enterprising people with something to say will find a way to get paid for their content outputs regardless of their form. True, there is a cost to the transition, but that’s the same hit the clay tablet took thousands of years ago. Yet those cast plaster and porcelain art objects are probably on sale at an art fair this weekend.

Observations:

  1. The fear is palpable. Why not direct it to a positive end? Griping about Google, which has had 25 years to do what it wanted, will not change the company much. Do something to generate money. Complaining is unlikely to produce a result.
  2. The likelihood that Google will shaft a large number of outfits and individuals is nearly 99 percent. Thus, moving in a sprightly manner may be a good idea. Google is not a sprinter, as its reaction to Microsoft’s Davos marketing blitz made clear.
  3. New things do appear. I am not sure what the next big thing will be. But one must pay attention.

Net net: The sky may be falling. The question is, “How fast?” Another is, “Can you get out of the way?”

Stephen E Arnold, May 24, 2024

Using AI But For Avoiding Dumb Stuff One Hopes

May 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting essay called “How I Use AI To Help With TechDirt (And, No, It’s Not Writing Articles).” The main point of the write up is that artificial intelligence or smart software (my preferred phrase) can be useful for certain use cases. The article states:

I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles. It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.


Thanks, MSFT Copilot. Bad grammar and an incorrect use of the apostrophe. Also, I was much dumber looking in the 9th grade. But good enough, the motto of some big software outfits, right?

The idea is that an AI system can function as a partner, research assistant, editor, and interlocutor. That sounds like what Microsoft calls a “copilot.” The article continues:

I initially couldn’t think of anything to ask the AI, so I asked people in Lex’s Discord how they used it. One user sent back a “scorecard” that he had created, which he asked Lex to use to review everything he wrote.
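The “scorecard” idea from the quote above can be made concrete. Here is a minimal sketch of how one might turn a set of review criteria into a critique prompt for a language model; the criteria, weights, and function names are my own illustrative assumptions, not from the article or from Lex itself.

```python
# Hypothetical sketch: turning a reviewer "scorecard" into an LLM critique
# prompt. Criteria and wording are illustrative assumptions only.

SCORECARD = [
    ("clarity", "Is the main argument easy to follow?"),
    ("evidence", "Are claims supported by sources or examples?"),
    ("structure", "Do sections build logically toward the conclusion?"),
    ("tone", "Is the register consistent with the publication?"),
]

def build_review_prompt(draft: str, scorecard=SCORECARD) -> str:
    """Assemble a prompt asking the model to grade a draft against each
    criterion (1-5) with a one-sentence rationale, without rewriting it."""
    lines = [
        "Review the draft below. For each criterion, give a 1-5 score "
        "and one sentence of rationale. Do not rewrite the draft."
    ]
    for name, question in scorecard:
        lines.append(f"- {name}: {question}")
    lines.append("\n--- DRAFT ---\n" + draft)
    return "\n".join(lines)

# The resulting string would be sent to whatever chat model one uses.
prompt = build_review_prompt("Smart software can help writers self-edit.")
```

The point of the design is that the human keeps editorial judgment: the model scores and comments, but never produces replacement text.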

The use case is that smart software functions like Miss Dalton, my English composition teacher at Woodruff High School in 1958. She was a firm believer in diagramming sentences, following the precepts of the Tressler & Christ textbook, and arcane rules such as capitalizing the first word following a colon (correctly used, of course).

I think her approach was intended to force students in 1958 to perform these word and text manipulations automatically. Then when we trooped to the library every month to do “research” on a topic she assigned, we could focus on the content, the logic, and the structural presentation of the information. If you attend one of my lectures, you can see that I am struggling to live up to her ideals.

However, when I plugged in my comments about Telegram as a platform tailored to obfuscated communications, the delivery of malware and X-rated content, and enforcing a myth that the entity known as Mr. Durov does not cooperate with certain entities to filter content, AI systems failed miserably. Not only were the systems lacking content; one (Microsoft Copilot, to be specific) simply collapsed and produced no functional content at all. Two other systems balked at the idea of delivering CSAM within a Group’s Channel devoted to paying customers of what is either illegal or extremely unpleasant content.

Several observations are warranted:

  1. For certain types of content, the systems lack sufficient data to know what the heck I am talking about
  2. For illegal activities, the systems are either pretending to be really stupid or the developers have added STOP words to the filters to make darned sure no improper output would be presented
  3. The systems are not up to date; for example, Mr. Durov was interviewed by Tucker Carlson a week before Mr. Durov blocked Ukraine Telegram Groups’ content to Telegram users in Russia.
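The second observation imagines developers hard-wiring STOP words into their filters. For what it is worth, the crudest form of such a guardrail looks like the sketch below; the blocked terms, function name, and substring logic are entirely hypothetical (production systems use trained classifiers, not string matching).

```python
# Speculative sketch of a naive "STOP word" guardrail of the kind the
# observation above imagines. Terms and logic are illustrative assumptions;
# no real vendor's filter is described here.

BLOCKED_TERMS = {"csam", "malware delivery"}  # hypothetical blocklist

def guardrail(query: str) -> str:
    """Refuse queries containing a blocked term; otherwise pass them on."""
    lowered = query.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "REFUSED"
    return "OK"
```

A filter this blunt would explain both behaviors described above: it blocks the legitimate investigator’s query and the bad actor’s query alike, while knowing nothing about either.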

Is it, therefore, reasonable to depend on a smart software system to provide input on a “newish” topic? Is it possible the smart software systems are fiddled by the developers so that no useful information is delivered to the user (free or paying)?

Net net: I am delighted people are finding smart software useful. For my lectures to law enforcement officers and cyber investigators, smart software is, as of May 1, 2024, not ready for prime time. My concern is that some individuals may not discern the problems with the outputs. Writing about the law and its interpretation is an area about which I am not qualified to comment. But perhaps legal content is different from garden-variety criminal operations. No, I won’t ask, “What’s criminal?” I would rather rely on what Miss Dalton taught me in 1958. Why? I am a dinobaby and deeply skeptical of probabilistic systems which do not incorporate Kolmogorov-Arnold methods. Hey, that’s my relative’s approach.

Stephen E Arnold, May 1, 2024
