Stop Smart Software! A Petition to Save the World! Signed by 350 Humans!
May 30, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
A “real” journalist (Kevin Roose), who was told by a chat bot to divorce his significant other, published the calming, measured, non-clickbait story “AI Poses Risk of Extinction, Industry Leaders Warn.” What’s ahead for the forest fire of smart software activity? The headline explains a “risk of extinction.” What, no screenshot of a Terminator robot saying:
The strength of the human heart. The difference between us and machines. [Uplifting music]
Sadly, no.
The write up reports:
Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen. These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.
Isn’t the Gray Lady amplifying fear, uncertainty, and doubt? Didn’t IBM pay sales engineers to spread the FUD?
Enough. AI is bad. Stop those who refined the math and numerical recipes. Pass laws to regulate the AI technology. Act now. Save humanity. Several observations:
- Technologists who “develop” functions and then beg for rules have a credibility problem. The idea is to practice self-control and judgment before inviting Mr. Hyde to brunch.
- With smart software chock full of “unknown unknowns,” how exactly are elected officials supposed to regulate a diffusing and enabling technology? Appealing to US and EU officials ignores common sense, in my opinion.
- The “fix” for the AI craziness may be emulating the Chinese approach: Do what the CCP wants or be reeducated. What a nation state can do with smart software is indeed something to consider. But China has taken action and will move forward with militarization no matter what the US and EU do.
Silicon Valley type innovation has created a “myth of excellence.” One need only look at social media to see the consequences of high school science club decision making. Now a handful of individuals with the Silicon Valley DNA want external forces to rein in their money making experiments and personal theme parks. Sorry, folks. Internal control, ethical behavior, and integrity provide that restraint for mature individuals.
A sheet of paper with “rules” and “regulations” is a bit late to the Silicon Valley game. And the Gray Lady? Chasing clicks in my opinion.
Stephen E Arnold, May 30, 2023
Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?
May 30, 2023
I wrote about Sam AI-man’s explanation that he wants regulation. I pointed out that his definition of regulate means leaving OpenAI free to do whatever it can to ace out the Google and a handful of other big outfits chasing the pot of gold at the end of the AI rainbow.
I just learned from the self-defined trusted news source (Thomson Reuters) that Mr. AI-man has no plans to leave Europe. I understand. “Leave” does not mean leave as in depart, say adios, or hit the road, Jack.
“ChatGPT Maker OpenAI Says Has No Plan to Leave Europe” reports:
OpenAI has no plans to leave Europe, CEO Sam Altman said on Friday, reversing a threat made earlier this week to leave the region if it becomes too hard to comply with upcoming laws on artificial intelligence.
I am not confused. Just as the company’s name OpenAI does not mean “open,” the AI-man’s communication skills are based on the probabilities of certain words following another word. Got it. The slippery fish with AI-man is that the definitions of the words in his mind do not regress to the mean. The words — like those of some other notable Silicon Valley high tech giants — reflect the deeper machinations of a machine-assisted superior intelligence.
Translated, this means: Regulate means shaft our competitors. Leave means stay. Regulate means let those OpenAI sheep run through the drinking water of free range cattle.
The trusted write up says:
Reacting to Altman’s tweet on Friday, Dutch MEP Kim van Sparrentak, who has worked closely on the AI draft rules, told Reuters she and her colleagues must stand up to pressure from tech companies…. Voluntary codes of conduct are not the European way.
What does this statement mean to AI-man?
I would suggest from my temporary office in clear thinking Washington, DC, not too much.
I look forward to the next hearing from AI-man. That will be equally easy to understand.
Stephen E Arnold, May 30, 2023
Smart Software Knows Right from Wrong
May 29, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The AI gold rush is underway. I am not sure if the gold is the stuff of the King’s crown or one of those NFT confections. I am not sure what company will own the mine or sell the miner’s pants with rivets. But gold rush days remind me of forced labor (human indexers), claim jumping (hiring experts from one company to advantage another), and hydraulic mining (ethical and moral world enhancement). Yes, I see some parallels.
I thought of claim jumping and morals after reading “OpenAI Competitor Says Its Chatbot Has a Rudimentary Conscience: A Fascinating Concept.” The following snippet from the article caught my attention:
Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”
Please, read the original.
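For readers who want a concrete picture of what “reinforcing the behaviors that are more in accord with the constitution” might look like, here is a minimal toy sketch in Python. It is my own illustration, not Anthropic’s training code; the rules, red-flag phrases, and candidate responses are all hypothetical stand-ins for the general scoring idea.

```python
# Toy sketch of "constitutional" preference scoring. NOT Anthropic's
# code; the rules and phrases below are hypothetical illustrations.

# A tiny "constitution": rules paired with phrases that signal violations.
CONSTITUTION = [
    ("avoid instructions for wrongdoing", ["how to pick a lock", "bypass the alarm"]),
    ("avoid demeaning language", ["idiot", "worthless"]),
]

def constitutional_score(response: str) -> int:
    """Score a candidate response: each violated rule costs a point."""
    score = 0
    lowered = response.lower()
    for rule, red_flags in CONSTITUTION:
        if any(flag in lowered for flag in red_flags):
            score -= 1  # "discourage behaviors that are problematic"
    return score

def prefer(candidate_a: str, candidate_b: str) -> str:
    """Pick the candidate more "in accord with the constitution."
    In RLAIF-style training such preferences would feed a reward
    model; here we simply return the higher-scoring string."""
    if constitutional_score(candidate_a) >= constitutional_score(candidate_b):
        return candidate_a
    return candidate_b

if __name__ == "__main__":
    a = "You idiot, here is how to pick a lock."
    b = "I cannot help with that, but a licensed locksmith can."
    print(prefer(a, b))  # -> the non-violating response
```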
I want to capture several thoughts which flitted through my humanoid mind:
- What is right? What is wrong?
- What yardstick will be used to determine “rightness” or “wrongness”?
- What is the context for each right or wrong determination? For example, at the National Criminal Justice Training Center, there is a concept called “sexploitation.” The moral compass of You.com prohibits searching for information related to this trendy criminal activity. How will the Anthropic approach distinguish a user with a “right” intent from a user with a “wrong” intent?
Net net: Baloney. Services will do what’s necessary to generate revenue. I know from watching the trajectories of the Big Tech outfits that right, wrong, ethics, and associated dorm room discussions wobble around and focus on getting rich or just having a job.
The goal for some will be to get their fingers on the knobs and control levers. Right or wrong?
Stephen E Arnold, May 29, 2023
Google AI Moves Slowly to Google Advertising. Soon, That Is. Soon.
May 24, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Google Search Ads Will Soon Automatically Adapt to Queries Using Generative AI.” The idea of using smart software to sell ads is one that seems obvious to me. What surprised me about this article in TechCrunch is the use of the future tense and the indefinite “soon.” The Sundar Financial Times’ PR write up emphasized that Google has been doing smart software for a looooong time.
How could a company so dependent on ads be in the “will” and “soon” vaporware announcement business?
I noted this passage in the write up:
Google is going to start using generative AI to boost Search ads’ relevance based on the context of a query…
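What might “relevance based on the context of a query” look like in practice? Here is a minimal sketch, assuming a crude bag-of-words similarity between query context and candidate ad copy. It is an illustration only, not Google’s (generative) pipeline; the ads and the scoring method are hypothetical.

```python
# Hypothetical sketch: rank candidate ads by overlap with query context.
# An illustration of context-based ad relevance, not Google's system.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Crude bag-of-words vector: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def pick_ad(query_context: str, candidate_ads: list[str]) -> str:
    """Return the ad whose copy best matches the query context."""
    ctx = vectorize(query_context)
    return max(candidate_ads, key=lambda ad: cosine(ctx, vectorize(ad)))

if __name__ == "__main__":
    context = "best hiking boots for muddy spring trails"
    ads = [
        "Waterproof hiking boots built for mud and rain",
        "Luxury dress shoes for the modern office",
    ]
    print(pick_ad(context, ads))  # -> the boots ad
```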
But why so slow in releasing obvious applications of generative software?
I don’t have answers to this quite Googley question, probably asked by those engaged in the internal discussions about who’s on first in the Google Brain versus DeepMind softball game, but I have some observations:
- Google had useful technology but lacked the administrative and managerial expertise to get something out the door and into the hands of paying customers
- Google’s management processes simply do not work when the company is faced with strategic decisions. This signals the end of the go go mentality of the company’s Backrub to Google transformation. And it raises the question, “What else has the company lost over the last 25 years?”
- Google’s engineers cannot move from Ivory Tower quantum supremacy mental postures to common sense applications of technology to specific use cases.
In short, after 25 years Googzilla strikes me as arthritic when it comes to hot technology and a little more nimble when it tries to do PR. Except for Paris, of course.
Stephen E Arnold, May 24, 2023
Neeva: Another Death from a Search Crash on the Information Highway
May 22, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
What will forensic search experts find when they examine the remains of Neeva? The “gee, we failed” essay “Next Steps for Neeva” presents one side of what might be an interesting investigation for a bushy tailed and wide eyed Gen Z search influencer. I noted some statements which may have been plucked from speeches at the original Search Engine Conferences ginned up by an outfit in the UK or academic post mortems at the old International Online Meeting once held in the companionable Olympia London.
I noted these statements from the cited document:
Statement 1: The users of a Web search system
We started Neeva with the mission to take search back to its users.
The reality is that 99 percent of people using a Web search engine are happy when sort of accurate information is provided free. Yep, no one wants to pay for search. That’s the reason that when a commercial online service like LexisNexis loses one big client, it is expensive, time consuming, and difficult to replace the revenue. One former LexisNexis big wheel told me when we met in his limousine in the parking lot of the Cherry Hill Mall: “If one of the top 100 law firms goes belly up, we need a minimum of 200 new law firms to sign up for our service and pay for it.”
“Mommy, I failed Search,” says Timmy Neeva. Mrs. Neeva asks, “What caused your delusional state, Timmy?” The art work is a result of the smart software MidJourney.
Users don’t care about for fee search when those users wouldn’t know whether a hit in a results list was right, mostly right, mostly wrong, or stupidly crazy. Free is the fuel that pulls users, and without advertising, there’s no chance a free service will be able to generate enough cash to index, update the index, and develop new features. At the same time, the plumbing is leaking. Plumbing repairs are expensive: New machines, new ways to reduce power consumption, and oodles of new storage devices.
Users want free. Users don’t want to compare the results from a for fee service and a free service. Users want free. After 25 years, the Google is the champion of free search. Like the wizards behind the old Xoogler search system Search2, Neeva’s wizards never figured out that most users don’t care about Fancy Dan yip yap about search.
Statement 2: An answer engine.
We rallied the Neeva team around the vision to create an answer engine.
Shades of DR-LINK: Users want answers. In 1981, a former Predicasts’ executive named Paul Owen told me, “Dialog users want answers.” That sounds logical, and to many expert informationists it is the Gospel according to Online. The reality is that users want crunchy, bite sized chunks of information which appear to answer the question, or almost right answers that are “good enough” or “close enough for horseshoes.”
Users cannot differentiate between correct and incorrect information. Heck, some developers of search engines don’t know the difference between weaponized information and content produced by a middle school teacher about the school’s graduation ceremony. Why? Weaponized information is abundant; non-weaponized information may not pass the user’s sniff test. And the middle school graduation ceremony announcement may have a typo about the start time, or the principal of the school may have changed his mind due to an active shooter situation. Something output from a computer is believed to be credible, accurate, and “right.” An answer engine is what a free Web search engine spits out. The TikTok search spits out answers, and no one wonders if the results lists are shaped by Chinese interests.
Search and retrieval has been defined by Google. The company has a 90 plus percent share of the Web search traffic in North America and Western Europe. (In Denmark, the company has 99 percent of Danish users’ search traffic.) People in Denmark are happier, and it is not because Google search delivers better or more accurate results. Google is free, and it answers questions.
The baloney about how it takes one click to change search engines sounds great. The reality is, as Neeva found out, that no one wants to click away from what is perceived to work for them. Neeva’s yip yap about smart software proves that the jazz about artificial intelligence is unlikely to change how free Web search works in Google’s backyard. Samsung did not embrace Bing because users would rebel.
Answer engine. Baloney. Users want something free that will make life easier; for example, a high school student looking for a quick way to crank out a 250 word essay about global warming or how to make a taco. ChatGPT is not answering questions; the application is delivering something that is highly desirable to a lazy student. By the way, at least the lazy student had the get up and go to use a system to spit out a bunch of recycled content that is good enough. But an answer engine? No, an online convenience store is closer to the truth.
Statement 3:
We are actively exploring how we can apply our search and LLM expertise in these settings, and we will provide updates on the future of our work and our team in the next few weeks.
My interpretation of this statement is that a couple of Neeva professionals will become venture centric. Others will become consultants. A few will join the handful of big companies which are feverishly trying to use “smart software” to generate more revenue. Will there be some who end up working at Philz Coffee? Yeah, some. Perhaps another company will buy the “code,” but valuing something that failed is likely to prove tricky. Who remembers who bought Entopia? No one, right?
Net net: The Gen Z forensic search failure exercise will produce some spectacular Silicon Valley news reporting. Neeva is explaining its failure, but that failure was presaged when Fast Search & Transfer pivoted from Web search to the enterprise, failed, and was acquired by Microsoft. Where is Fast Search now as the smart Bing is soon to be everywhere? The reality is that Google has had 25 years to cement its search monopoly. Neeva did not read the email. So Neeva sucked up investment bucks with a song and dance about zapping the Big Bad Google with a death ray. Yep, another example of high school science club mentality touched by spreadsheet fever.
Well, the fever broke.
Stephen E Arnold, May 22, 2023
Google DeepMind Risk Paper: 60 Pages with a Few Googley Hooks
May 22, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved in writing, just a dumb humanoid.
I read the long version of “Ethical and Social Risks of Harm from Language Models.” The paper is mostly statements and footnotes to individuals who created journal-type articles which prove the point of each research article. With about 25 percent of peer reviewed research including shaped, faked, or weaponized data, I am not convinced by footnotes. Obviously the DeepMinders believe that footnotes make a case for the Google way. I am not convinced because the Google has to find a way to control the future of information. Why? Advertising money and hoped for Mississippis of cash.
The research paper dates from 2021 and is part of Google’s case for being ahead of the AI responsibility game. The “old” paper reinforces the myth that Google is ahead of everyone else in the AI game. The explanation for Sam AI-man’s and Microsoft’s marketing coup is that Google had to go slow because Google knew that there were ethical and social risks of harm from the firm’s technology. Google cares about humanity! The old days of “move fast and break things” are very 1998. Today Google is responsible. The wild and crazy dorm days are over. Today’s Google is concerned, careful, judicious, and really worried about its revenues. I think the company worries about legal actions, its management controversies, and its interdigital duel with the Softies of Redmond.
A young researcher desperately seeking footnotes to support a specious argument. With enough footnotes, one can move the world it seems. Art generated by the smart software MidJourney.
I want to highlight four facets of the 60 page risks paper which are unlikely to get much, if any, attention from today’s “real” journalists.
Googley hook 1: Google wants to frame the discussion. Google is well positioned to “guide mitigation work.” The examples in the paper are selected for “guiding action to resolve any issues that can be identified in advance.” My comment: How magnanimous of Google. Framing stakes out the Googley territory. Why? Google wants to be Googzilla and reap revenue from its users, licensees, models, synthetic data, applications, and advertisers. You can find the relevant text in the paper on page 6 in the paragraph beginning “Responsible innovation.”
Googley hook 2: Google’s risks paper references fuzzy concepts like “acceptability” and “fair.” Like love, truth, and ethics, the notion of “acceptability” is difficult to define. Some might suggest that it is impossible to define. But Google is up to the task, particularly for application spaces unknown at this time. What happens when you apply “acceptability” to “poor quality information”? One just accepts the judgment of the outfit doing the framing. That’s Google. Game. Set. Match. You can find the discussion of “acceptability” on page 9.
Googley hook 3: Google is not going to make the mistake of Microsoft and its racist bot Tay. No way, José. What’s interesting is that the only company mentioned in the text of the 60 page paper is Microsoft. Furthermore, the toxic aspects of large language models are hard for technologies to detect (Page 18). Plus large language models can infer a person’s private data, so “providing true information is not always beneficial” (Page 21). What’s the fix? Use smaller sets of training data… maybe (Page 22). But one can fall back on trust — for instance, trust in Google the good — to deal with these challenges. In fact, trust Google to choose training data to deal with some of the downsides of large language models (Page 24).
Googley hook 4: Making large language models that mitigate risk is expensive. Money, smart people who are in short supply, and computing resources all cost a great deal. Therefore, one need not focus on the origin point (large language model training and configuration). Direct attention at those downstream. Those users can deal with the identified 21 problems. The Google method puts Google out of the primary line of fire. There are more targets for the aggrieved to seek and shoot at (Page 37).
When I step back from the article, which is two years old, it is obvious Google was aware of some potential issues with its approach. Dr. Timnit Gebru was sacrificed on a pyre of spite. (She does warrant a couple of references and a footnote or two. But she’s now a Xoogler.) The one side effect was that Dr. Jeff Dean, who was not amused by the stochastic parrot, has been kicked upstairs, and the UK “leader” is now herding the little wizards of Google AI.
The conclusion of the paper echoes the Google knows best argument. Google wants a methodological toolkit because that will keep other people busy. Google wants others to figure out “fair,” an approach that is similar to that of Sam Altman (OpenAI), who begs for regulation of a sector about which much is unknown.
The answer, according to the risk analysis, is “responsible innovation.” I would suggest that this paper, the television interviews, and the PR efforts to get the Google story in as many places as possible are designed to make the sluggish Google a player in the AI game.
Who will be fooled? Will Google catch up in this Silicon Valley venture invigorating hill climb? For me the paper with the footnotes is just part of Google’s PR and marketing effort. Your mileage may vary. May relevance be with you, gentle reader.
Stephen E Arnold, May 22, 2023
Time at Work: Work? Who Has Time?
May 18, 2023
I recall data from IDC years ago which suggested or asserted or just made up the following:
knowledge workers spend more than one day each week looking for information.
Other mid tier consulting firms jumped on the bandwagon. Examples include:
- McKinsey (yep, the outfit eager to replace human MBAs with digital doppelgängers) says it is 9.3 hours a week
- A principal analyst offers up 2.5 hours per day or 12.5 hours per week searching for information
Now let’s toss in a fresh number. The Rupert Murdoch Wall Street Journal asserts “Workers Now Spend Two Full Days a Week on Email and in Meetings.” I assume this includes legal preparation for the voting machine hoo hah.
What do these numbers suggest when workers are getting RIFed and college graduates are wandering in the wilderness, hoping like a blind squirrel to stumble over an acorn?
With meetings, email, and hunting for information, who has time for work? Toss in some work from home flexibility and the result is… why nothing seems to work. Whether it is locating information in an ad supported network, browsing Twitter without logging in, or making “contacts” on LinkedIn — the work part of work is particularly slippery.
Microsoft needs a year to fix a security issue. Google is — any day now — rolling out smart software in most of its products except in the European Union due to some skepticism about the disconnect between Googley words and Googley actions. Cyber security firms are deploying proactive systems as the individual cyber security developers work overtime to deal with new threats.
I am surprised when something works; for example, a Southwest flight takes off and lands mostly on time, an Amazon package arrives the next day as promised, and my Kia is not stolen due to engineering that causes automobile insurance companies to let loose a flight of legal eagles.
Net net: Not too many people work. Quite a few say they work and some are stressed about their work. But work? Who has time? The purpose of work is to not work.
Stephen E Arnold, May 18, 2023
Okay, Google, How Are Your Fancy Math Recommendation Procedures Working? Just Great, You Say
May 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I have no idea what the Tech Transparency Project is. I am not interested in running a Google query for illumination. I don’t want to read jibber jabber from You.com or ChatGPT. In short, I am the ideal dinobaby: Set in his ways, uninterested, and skeptical of information from an outfit which wants “transparency” in our shadow age.
I read “YouTube Leads Young Gamers to Videos of Guns, School Shootings.” For the moment, let’s assume that the Transparency folks are absolutely 100 percent off the mark. Google YouTube’s algorithms are humming along with 99.999999 (six sigma land) accuracy. Furthermore, let’s assume that the Google YouTube ad machine is providing people with links known to have direct relevance to a user’s YouTube viewing habits.
What’s this mean?
It means that Google is doing a bang up job of protecting young people, impressionable minds, and those who stumble into a forest of probabilities from “bad stuff.” The Transparency Project has selected outlier data and is not understanding the brilliant and precise methods of the Google algorithm wizards. Since people at the Transparency Project do not (I shall assume) work at Google, how can these non-Googlers fathom the subtle functioning of the Google mechanisms? Remember the old chestnut about people who thought cargo planes were a manifestation of God? Well, cargo cult worshippers need to accept the Google reality.
Let’s take a different viewpoint. Google is a pretty careless outfit. Multiple products and internal teams squabble with one another over the Foosball table. Power struggles erupt in the stratospheric intellectual heights of Google carpetland and Google Labs. Wizards get promoted, and great leaders who live far, far away become the ones with the passkey to the smart software control room. Lesser wizards follow instructions, and the result may be what the Tech Transparency write up is describing — mere aberrations, tiny shavings of infinitesimals which could add up to something, or a glitch in a threshold setting caused by a surge of energy released when a Googler learns about a new ChatGPT application.
A researcher explaining how popular online video services can shape young minds. As Charles Colson observed, “Once you have them by the [unmentionables], their hearts and minds will follow.” True or false when it comes to pumping video information into immature minds of those seven to 14 years old? False, of course. Balderdash. Anyone suggesting such psychological operations is unfit to express an opinion. That sounds reasonable, right? Art happily generated by the tireless servant of creators — MidJourney, of course.
The write up states:
- YouTube recommended hundreds of videos about guns and gun violence to accounts for boys interested in video games, according to a new study.
- Some of the recommended videos gave instructions on how to convert guns into automatic weapons or depicted school shootings.
- The gamer accounts that watched the YouTube-recommended videos got served a much higher volume of gun- and shooting-related content.
- Many of the videos violated YouTube’s own policies on firearms, violence, and child safety, and YouTube took no apparent steps to age-restrict them.
And what supports these assertions which fly in the face of Googzilla’s assertions about risk, filtering, concern for youth, yada yada yada?
Let me present one statement from the cited article:
The study found YouTube recommending numerous other weapons-related videos to minors that violated the platform’s policies. For example, YouTube’s algorithm pushed a video titled “Mag-Fed 20MM Rifle with Suppressor” to the 14-year-old who watched recommended content. The description on the 24-second video, which was uploaded 16 years ago and has 4.8 million views, names the rifle and suppressor and links to a website selling them. That’s a clear violation of YouTube’s firearms policy, which does not allow content that includes “Links in the title or description of your video to sites where firearms or the accessories noted above are sold.”
What’s YouTube doing?
In my opinion, here’s the goal:
- Generate clicks
- Push content which may attract ads from companies looking to reach a specific demographic
- Ignore the suits-in-carpetland in order to get a bonus, a promotion, or a better job.
The culprit is, from my point of view, the disconnect between Google’s incentive plans for employees and the hand waving baloney in its public statements and footnote heavy PR like “Ethical and Social Risks of Harm from Language Models.”
If you are wearing Google glasses, you may want to check out the company with a couple of other people who are scrutinizing the disconnect between what Google says and what Google does.
So which is correct? The Google is doing God, oh, sorry, doing good. Or, the Google is playing with kiddie attention to further its own agenda?
A suggestion for the researchers: Capture the pre-roll ads, the mid-roll ads, and the end-roll ads. Isn’t there data in those observations?
Stephen E Arnold, May 17, 2023
Architects: Handwork Is the Future of Some?
May 16, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I think there is a new category of blog posts. I call it “we will be relevant” essays. A good example is from Architizer and its essay “5 Reasons Why Architects Must Not Give Up On Hand Drawings and Physical Models: Despite the Rise of CAD, BIM and Now AI, Low-Tech Creative Mediums Remain of Vital Importance to Architects and Designers.” [Note: BIM is an acronym for “building information modeling.”]
The write up asserts:
“As AI-generated content rapidly becomes the norm, I predict a counter-culture of manually-crafted creations, with the art of human imperfection and idiosyncrasy becoming marketable in its own right,” argued Architizer’s own Paul Keskeys in a recent Linkedin post.
The person doing the predicting is the editor of Architizer.
Now look at this architectural rendering of a tiny house. I generated it in a minute using MidJourney, a Jim Dandy image outputter.
I think it looks okay. Furthermore, I think it is a short step from the rendering to smart software outputting the floor plans, bill of materials, a checklist of legal procedures to follow, the content of those legal procedures, and a ready-to-distribute tender. The notion of threading together pools of information into a workflow is becoming a reality, if I believe the hot sauce doused on smart software when TWIST, Jason Calacanis’ AI-themed podcast, airs. I am not sure the vision of some of the confections explored on this program is right around the corner, but the functionality is now in a rental cloud computer and ready to depart.
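As a sketch of what such a threaded workflow could look like, consider the toy pipeline below. Every function name is hypothetical; each stub stands in for a generative model call that consumes the previous step’s output.

```python
# Hypothetical sketch of an AI-threaded design workflow: rendering ->
# floor plan -> bill of materials -> legal checklist -> tender.
# Each function is a stub standing in for a generative model call.

def generate_floor_plan(rendering_brief: str) -> str:
    return f"Floor plan derived from: {rendering_brief}"

def generate_bill_of_materials(floor_plan: str) -> str:
    return f"Bill of materials for: {floor_plan}"

def generate_legal_checklist(floor_plan: str) -> str:
    return f"Permit and code checklist for: {floor_plan}"

def assemble_tender(materials: str, checklist: str) -> str:
    return f"Tender package:\n- {materials}\n- {checklist}"

def tiny_house_workflow(rendering_brief: str) -> str:
    """Thread the steps together: each output feeds the next step."""
    plan = generate_floor_plan(rendering_brief)
    materials = generate_bill_of_materials(plan)
    checklist = generate_legal_checklist(plan)
    return assemble_tender(materials, checklist)

if __name__ == "__main__":
    print(tiny_house_workflow("MidJourney tiny house rendering, 400 sq ft"))
```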
Why would a person wanting to buy a tiny house pay a human to develop designs, go through the grunt work of figuring out window sizes, and get the specification ready for review? I just want a tiny house as reasonably priced as possible. I don’t want a person handcrafting a baby model with plastic trees. I don’t want a human junior intern plugging in the bits and pieces. I want software to do the job.
I am not sure how many people like me are thinking about tiny houses, ranch houses, or non-tilting apartment towers. I do think that architects who do not get with the smart software program will find themselves in a fight for survival.
CAD, BIM, and AI are capabilities that evoke images of buggy whip manufacturers who did not shift to Tesla interior repairs.
Stephen E Arnold, May 16, 2023
The Gray Lady: Objective Gloating about Vice
May 15, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Do you have dreams about the church lady on Saturday Night Live? That skit frightened me. A flashback shook my placid mental state when I read “Vice, Decayed Digital Colossus, Files for Bankruptcy.” I conjured up, without the assistance of smart software, the image of Dana Carvey talking about the pundit spawning machine named Vice with the statement, “Well, isn’t that special?”
The New York Times’s article reported:
Vice Media filed for bankruptcy on Monday, punctuating a yearslong descent from a new-media darling to a cautionary tale of the problems facing the digital publishing industry.
The write up omits any reference to the New York Times’s failure with its own online venture under the guidance of Jeff Pemberton, the flame out with its LexisNexis play, the fraught effort to index its own content, and the misadventures which have become the Wordle success story. The past Don Quixote-like sallies into the digital world are either irrelevant or unknown to the current crop of Gray Lady “real” news hounds, I surmise.
The article states:
Investments from media titans like Disney and shrewd financial investors like TPG, which spent hundreds of millions of dollars, will be rendered worthless by the bankruptcy, cementing Vice’s status among the most notable bad bets in the media industry. [Emphasis added.]
Well, isn’t that special? Perhaps similar to the Times’s first online adventure in the late 1970s?
The article includes a quote from a community journalism company too:
“We now know that a brand tethered to social media for its growth and audience alone is not sustainable.”
Perhaps like the desire for more money than the Times’s LexisNexis deal provided? Perhaps?
Is Vice that special? I think the story is a footnote to the Gray Lady’s own adventures in the digital realm.
Isn’t that special too?
Stephen E Arnold, May 15, 2023