Commercial Open Source: Fantastic Pipe Dream or Revenue Pipe Line?

March 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Open source is a term which strikes me as au courant. Artificial intelligence software is often described as “open source.” The idea has a bit of “do good” mixed with the notion that commercial software puts customers in handcuffs. (I think I hear Kumbaya playing faintly in the background.) Is it possible to blend the idea of free and open software with the principles of commercial software lock-in? Notable open source outfits have become difficult to differentiate from run-of-the-mill technology companies. Examples include Red Hat, Elastic, and OpenAI. Oops. Sorry. OpenAI is a different type of company. I think.


Will open source software, particularly open source AI components, end up like this private playground? Thanks, MSFT Copilot. You are into open source, aren’t you? I hope your commitment is stronger than for server and cloud security.

I had these open source thoughts when I read “AI and Data Infrastructure Drives Demand for Open Source Startups.” The source of the information is Runa Capital, now located in Luxembourg. The firm publishes a report called the Runa Open Source Start Up Index, and it is a “rosy” document. The point of the article is that Runa sees open source as a financial opportunity. You can start your exploration of the tables and charts at this link on the Runa Capital Web site.

I want to focus on some information tucked into the article, just not presented in bold face or with a snappy chart. Here’s the passage I noted:

Defining what constitutes “open source” has its own inherent challenges too, as there is a spectrum of how “open source” a startup is — some are more akin to “open core,” where most of their major features are locked behind a premium paywall, and some have licenses which are more restrictive than others. So for this, the curators at Runa decided that the startup must simply have a product that is “reasonably connected to its open-source repositories,” which obviously involves a degree of subjectivity when deciding which ones make the cut.

The word “reasonably” invokes an image of lawyers negotiating on behalf of their clients. Nothing is quite so far from the kumbaya of the “real” open source software initiative as lawyers. Just look at the licenses for open source software.

I also noted this statement:

Thus, according to Runa’s methodology, it uses what it calls the “commercial perception of open-source” for its report, rather than the actual license the company attaches to its project.

What is “open source”? My hunch is that it is whatever the lawyers and courts conclude.

Why is this important?

The talk about “open source” is relevant to the “next big thing” in technology. And what is that? ANSWER: A fresh set of money-making plays.

I know that there are true believers in open source. I wish them financial and kumbaya-type success.

My take is different: Open source, as the term is used today, is one of the phrases repurposed to breathe life into what some critics call a techno-feudal world. I don’t have a dog in the race. I don’t want a dog in any race. I am a dinobaby. I find amusement in how language becomes the Teflon on which money (one hopes) glides effortlessly.

And the kumbaya? Hmm.

Stephen E Arnold, March 26, 2024

AI Innovation: Do Just Big Dogs Get the Fat, Farmed Salmon?

March 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Let’s talk about statements like “AI will be open source” and “AI has spawned hundreds, if not thousands, of companies.” Those are assertions which seem slightly different from what’s unfolding at some of the largest technology outfits in the world. The circling and sniffing allegedly underway between the Apple and Google packs is interesting. Apple and Google have a relationship, probably one that will need a marriage counselor, but it is a relationship.


The wizard scientists have created an interesting digital construct. Thanks, MSFT Copilot. How are you coming along with your Windows 11 updates and Azure security today? Oh, that’s too bad.

The news, however, is that Microsoft is demonstrating that it wants to eat the fattest salmon in the AI stream. Microsoft has a deal of some type with OpenAI, operating under the steady hand of Sam AI-Man. Plus the Softies have cozied up to the French outfit Mistral. Today at 5:30 am US Eastern I learned that Microsoft has embraced an outstanding thinker, sensitive manager, and pretty much the entire Inflection AI outfit.

The number of stories about this move reflects the interest in smart software and in what may be one of the world’s top purveyors of software which attracts bad actors from around the globe. Thinking about breaches in the new Microsoft world is not a topic in the write ups about this deal. Why? I think the management move has captured attention because it is surprising, disruptive, and big in terms of money and implications.

“Microsoft Hires DeepMind Co-Founder Suleyman to Run Consumer AI” states:

DeepMind workers complained about his [former Googler Mustafa Suleyman and subsequent Inflection.ai senior manager] management style, the Financial Times reported. Addressing the complaints at the time, Suleyman said: “I really screwed up. I was very demanding and pretty relentless.” He added that he set “pretty unreasonable expectations” that led to “a very rough environment for some people. I remain very sorry about the impact that caused people and the hurt that people felt there.” Suleyman was placed on leave in 2019 and months later moved to Google, where he led AI product management until exiting in 2022.

Okay, a sensitive manager learns from his mistakes and joins Microsoft.

And Microsoft demonstrates that the AI opportunity is wide open. “Why Microsoft’s Surprise Deal with $4 Billion Startup Inflection Is the Most Important Non-Acquisition in AI” states:

Even since OpenAI launched ChatGPT in November 2022, the tech world has been experiencing a collective mania for AI chatbots, pouring billions of dollars into all manner of bots with friendly names (there’s Claude, Rufus, Poe, and Grok — there’s event a chatbot name generator). In January, OpenAI launched a GPT store that’s chock full of bots. But how much differentiation and value can these bots really provide? The general concept of chatbots and copilots is probably not going away, but the demise of Pi may signal that reality is crashing into the exuberant enthusiasm that gave birth to a countless chatbots.

Several questions will be answered in the weeks ahead:

  1. What will regulators in the EU and US do about the deal when its moving parts become known?
  2. How will the kumbaya evolve when Microsoft senior managers, its AI partners, and reassigned Microsoft employees have their first all-hands Teams or off-site meeting?
  3. Does Microsoft senior management have the capability of addressing the attack surface of the new technologies and the existing Microsoft software?
  4. What happens to the AI ecosystem which depends on open source software related to AI if Microsoft shifts into “commercial proprietary” to hit revenue targets?
  5. With multiple AI systems, how are Microsoft Certified Professional agents going to [a] figure out what broke and [b] fix it?
  6. With AI the apparent “next big thing,” how will adversaries like nations not pals with the US respond?

Net net: How unstable is the AI ecosystem? Let’s ask IBM Watson because its output is going to be as useful as any other in my opinion. My hunch is that the big dogs will eat the fat, farmed salmon. Who will pull that luscious fish from the big dog’s maw? Not me.

Stephen E Arnold, March 20, 2024

A Single Google Gem for March 19, 2024

March 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I want to focus on what could be the star sapphire of Googledom. The story appeared on the estimable Murdoch confection Fox News. Its title? “Is Google Too Broken to Be Fixed? Investors Deeply Frustrated and Angry, Former Insider Warns.” The word choice in this Googley headline signals the alert reader that the Foxy folks have a juicy story to share. “Broken,” “Frustrated,” “Angry,” and “Warns” suggest that someone has identified some issues at the beloved Google.


A Google gem. Thanks, MSFT Copilot Bing thing. How’s the staff’s security today?

The write up states:

A former Google executive [David Friedberg] revealed that investors are “deeply frustrated” that the scandal surrounding their Gemini artificial intelligence (AI) model is becoming a “real threat” to the tech company. Google has issued several apologies for Gemini after critics slammed the AI for creating “woke” content.

The Xoogler, in what seems to be tortured prose, allegedly said:

“The real threat to Google is more so, are they in a position to maintain their search monopoly or maintain the chunk of profits that drive the business under the threat of AI? Are they adapting? And less so about the anger around woke and DEI,” Friedberg explained. “Because most of the investors I spoke with aren’t angry about the woke, DEI search engine, they’re angry about the fact that such a blunder happened and that it indicates that Google may not be able to compete effectively and isn’t organized to compete effectively just from a consumer competitiveness perspective,” he continued.

The interesting comment in the write up (which is recycled podcast chatter) seems to be:

Google CEO Sundar Pichai promised the company was working “around the clock” to fix the AI model, calling the images generated “biased” and “completely unacceptable.”

Does the comment attributed to a Big Dog Microsoftie reflect the new perception of the Google? The Hindustan Times, which should have radar tuned to the actions of certain executives with roots entwined in India, reported:

Satya Nadella said that Google “should have been the default winner” of Big Tech’s AI race as the resources available to it are the maximum which would easily make it a frontrunner.

My interpretation of this statement is that Google had a chance to own the AI casino, roulette wheel, and the croupiers. Instead, Google’s senior management ran over the smart squirrel with the Paris demonstration of the fantastic Bard AI system, a series of me-too announcements, and the outputting of US historical scenes with people of color turning up in what I would call surprising places.

Then the PR parade of Google wizards explains the online advertising firm’s innovations in playing games, figuring out health stuff (shades of IBM Watson), and achieving quantum supremacy in everything. Well, everything except smart software. The predicament of the ad giant is illuminated with the burning of billions in market cap coincident with the wizards’ flubs.

Net net: That’s a gem. Google losing a game it allegedly owned. I am waiting for the next podcast about the Sundar & Prabhakar Comedy Tour.

Stephen E Arnold, March 19, 2024

Microsoft Decides to Work with CISPE on Cloudy Concerns

March 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Perhaps a billion and a half dollars in fines can make a difference to a big tech company after all. In what looks like a move to avoid more regulatory scrutiny, Yahoo Finance reports, “Microsoft in Talks to End Trade Body’s Cloud Computing Complaint.” The trade body here is CISPE, a group of firms that provide cloud services in Europe. Amazon is one of those, but 26 smaller companies are also members. The group asserts certain changes Microsoft made to its terms of service in October of 2022 have harmed Europe’s cloud computing ecosystem. How, exactly, is unclear. Writer Foo Yun Chee tells us:

“[CISPE] said it had received several complaints about Microsoft, including in relation to its product Azure, which it was assessing based on its standard procedures, but declined to comment further. Azure is Microsoft’s cloud computing platform. CISPE said the discussions were at an early stage and it was uncertain whether these would result in effective remedies but said ‘substantive progress must be achieved in the first quarter of 2024’. ‘We are supportive of a fast and effective resolution to these harms but reiterate that it is Microsoft which must end its unfair software licensing practices to deliver this outcome,’ said CISPE secretary general Francisco Mingorance. Microsoft, which notched up 1.6 billion euros ($1.7 billion) in EU antitrust fines in the previous decade, has in recent years changed its approach towards regulators to a more accommodative one.”

Just how accommodating Microsoft will be remains to be seen.

Cynthia Murrell, March 19, 2024

Harvard University: William James Continues Spinning in His Grave

March 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

William James, the brother of a novelist whose 20 novels caused my mind to wander just thinking about any one of them, loved Harvard University. In a speech at Stanford University, he admitted his untoward affection. If one wanders by William’s grave in Cambridge Cemetery (daylight only, please), one can hear a sound similar to a giant sawmill blade emanating from a modest tombstone. “What’s that horrific sound?” a passerby might ask. The answer: “William is spinning in his grave. It’s a bit like a perpetual motion machine now,” one elderly person says. “And it is getting louder.”


William is spinning in his grave because his beloved Harvard appears to foster making stuff up. Thanks, MSFT Copilot. Working on security today or just getting printers to work?

William is amping up his RPMs. Another distinguished Harvard expert, professor, shaper of the minds of young men and women and thems has been caught fabricating data. This is not the overt synthetic data shop at Stanford University’s Artificial Intelligence Lab and the commercial outfit Snorkel. Nope. This is just a faculty member who, by golly, wanted to be respected, it seems.

The Chronicle of Higher Education (the immensely popular online information service consumed by thumb typers and swipers) published “Here’s the Unsealed Report Showing How Harvard Concluded That a Dishonesty Expert Committed Misconduct.” (Registration required because, you know, information about education is sensitive and users must be monitored.) The report allegedly runs 1,300 pages. I did not read it. I get the drift: Another esteemed scholar just made stuff up. In my lingo, the individual shaped reality to support her / its vision of self. Reality was not delivering honor, praise, rewards, money, and freedom from teaching horrific undergraduate classes. Why not take the Excel macro route to achievement: Invent and massage information. Who is going to know?

The write up says:

the committee wrote that “she does not provide any evidence of [research assistant] error that we find persuasive in explaining the major anomalies and discrepancies.” Over all, the committee determined “by a preponderance of the evidence” that Gino “significantly departed from accepted practices of the relevant research community and committed research misconduct intentionally, knowingly, or recklessly” for five alleged instances of misconduct across the four papers. The committee’s findings were unanimous, except for in one instance. For the 2012 paper about signing a form at the top, Gino was alleged to have falsified or fabricated the results for one study by removing or altering descriptions of the study procedures from drafts of the manuscript submitted for publication, thus misrepresenting the procedures in the final version. Gino acknowledged that there could have been an honest error on her part. One committee member felt that the “burden of proof” was not met while the two other members believed that research misconduct had, in fact, been committed.

Hey, William, let’s hook you up to a power test dynamometer so we can determine exactly how fast you are spinning in your chill, dank abode. Of course, if the data don’t reveal high-RPM spinning, someone at Harvard can be enlisted to touch up the data. Everyone seems to be doing it, from my vantage point in rural Kentucky.

Is there a way to harness the energy of professors who may cut corners and respected but deceased scholars to do something constructive? Oh, look. There’s a protest group. Let’s go ask them for some ideas. On second thought… let’s not.

Stephen E Arnold, March 15, 2024

AI Limits: The Wind Cannot Hear the Shouting. Sorry.

March 14, 2024

This essay is the work of a dumb dinobaby. No smart software required.

One of my teachers had a quote on the classroom wall. It was, I think, from a British novelist. Here’s what I recall:

Decide on what you think is right and stick to it.

I never understood the statement. In school, I was there to learn. How could I decide whether what I was reading was correct? Deciding what I thought was right struck me as stupid because I was uninformed. The notion of “stick” is interesting and also a little crazy. My family was going to move to Brazil, and I knew that sticking to what I did in the Midwest in the 1950s would have to change. For one thing, we had electricity. The town to which we were relocating had electricity a few hours each day. Change was necessary. Even as a young sprout, I knew that trying to prevent something required more than talk, writing a Letter to the Editor, or getting a petition signed.

I thought about this crazy quote as soon as I read “AI Bioweapons? Scientists Agree to Policies to Reduce Risk of Human Disaster.” The fear-mongering note of the write up’s title intrigued me. Artificial intelligence is in what I would call morph mode. What this means is that getting a fix on what is new and impactful in the field of artificial intelligence is difficult. An electrical engineering publication reported that experts are not sure if what is going on is good or bad.


Shouting into the wind does not work for farmers nor AI scientists. Thanks, MSFT Copilot. Busy with security again?

The “AI Bioweapons” essay is leaning into the bad side of the AI parade. The point of the write up is that “over 100 scientists” want to “prevent the creation of AI bioweapons.” The article states:

The agreement, crafted following a 2023 University of Washington summit and published on Friday, doesn’t ban or condemn AI use. Rather, it argues that researchers shouldn’t develop dangerous bioweapons using AI. Such an ask might seem like common sense, but the agreement details guiding principles that could help prevent an accidental DNA disaster.

That sounds good, but is it like the quote about “decide on what you think is right and stick to it”? In a dynamic environment, change appears to accelerate. Toss in technology and the potential for big wins (financial, professional, or political), and the likelihood of slowing down the rate of change is reduced.

To add some zip to the AI stew, much of the technology required to do some AI fiddling around is available as open source software or low-cost applications and APIs.

I think it is interesting that 100 scientists want to prevent something. The hitch in the git-along is that other countries have scientists who have access to AI research, tools, software, and systems. These scientists may feel as though reminding people that doom is (maybe?) just around the corner is about as useful as a ruined building in an abandoned town on Route 66.

Here are a few observations about why individuals rally around a cause, which is widely perceived by some of those in the money game as the next big thing:

  1. The shouters’ perception of their importance makes it imperative to speak out about danger
  2. Getting a group of important, smart people to climb on a bandwagon makes the organizers perceive themselves as doing something important and demonstrating their “get it done” mindset
  3. Publicity is good. It is very good when a speaking engagement, a grant, or consulting gig produces a little extra fame and money, preferably in a combo.

Net net: The wind does not listen to those shouting into it.

Stephen E Arnold, March 14, 2024

AI Deepfakes: Buckle Up. We Are in for a Wild Drifting Event

March 14, 2024

This essay is the work of a dumb dinobaby. No smart software required.

AI deepfakes are testing the uncanny valley, but technology is catching up to make them as good as the real thing. In case you’ve been living under a rock, deepfakes are images, video, and sound clips generated by AI algorithms to mimic real people and places. For example, someone could create a deepfake video of Joe Biden and Donald Trump in a sumo wrestling match. While the idea of the two presidential candidates duking it out on a sumo mat is absurd, technology is that advanced.

Gizmodo reports the frustrating news that “The AI Deepfakes Problem Is Going To Get Unstoppably Worse.” Bad actors are already using deepfakes to wreak havoc on the world. Federal regulators outlawed AI-generated robocalls, and OpenAI and Google released watermarks for AI-generated images. These measures aren’t doing much to curb bad actors.


Which is real? Which is fake? Thanks, MSFT Copilot, the objects almost appear identical. Close enough like some security features. Close enough means good enough, right?

New laws and technology need to be adopted and developed to prevent this new age of misinformation. There should be no end of warnings on deepfake videos and soundbites, and service providers should employ them too. It will probably take a horrifying event to make AI deepfake detection a priority:

"Deepfake detection technology also needs to get a lot better and become much more widespread. Currently, deepfake detection is not 100% accurate for anything, according to Copyleaks CEO Alon Yamin. His company has one of the better tools for detecting AI-generated text, but detecting AI speech and video is another challenge altogether. Deepfake detection is lagging generative AI, and it needs to ramp up, fast.”

Wired Magazine missed an opportunity to make clear that the wizards at Google can sell data and advertising, but the sneaker-wearing marvels cannot manage deepfake adult pictures. Heck, Google cannot manage YouTube videos teaching people how to create deepfakes. My goodness, what happens if one uploads ASCII art of a problematic item to Gemini? One of my team tells me that the Sundar & Prabhakar guard rails don’t work too well in some situations.

Not every deepfake will be as clumsy as the one in which the “to be maybe” future queen of England finds herself ensnared. One can ask Taylor Swift, I assume.

Whitney Grace, March 14, 2024

Can Your Job Be Orchestrated? Yes? Okay, It Will Be Smartified

March 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My work career over the last 60 years has been filled with luck. I have been in the right place at the right time. I have been in companies which were acquired, been reassigned, and been exposed to opportunities which just seemed to appear. Unlike today’s young college graduate, I never thought once about being able to get a “job.” I just bumbled along. In an interview for something called Singularity, the interviewer asked me, “What’s been the key to your success?” I answered, “Luck.” (Please, keep in mind that the interviewer assumed I was a success, but he had no idea that I did not want to be a success. I just wanted to do interesting work.)


Thanks, MSFT Copilot. Will smart software do your server security? Ho ho ho.

Would I be able to get a job today if I were 20 years old? Believe it or not, I told my son in one of our conversations about smart software: “Probably not.” I thought about this comment when I read today (March 13, 2024) the essay “Devin AI Can Write Complete Source Code.” The main idea of the article is that artificial intelligence, properly trained and appropriately resourced, can do what only humans could do in 1966 (when I graduated with a BA degree from a so-so university in flyover country). The write up states:

Devin is a Generative AI Coding Assistant developed by Cognition that can write and deploy codes of up to hundreds of lines with just a single prompt.  Although there are some similar tools for the same purpose such as Microsoft’s Copilot, Devin is quite the advancement as it not only generates the source code for software or website but it debugs the end-to-end before the final execution.
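Strip away the marketing and what the write up describes is orchestration: generate code, run it, feed the failures back, repeat. Below is a hedged sketch of that generate-run-repair loop. It is emphatically not Devin’s internals; the model name, prompts, file layout, and pytest command are all illustrative assumptions.

```python
# A speculative sketch of the generate-run-repair loop that "agentic"
# coding assistants orchestrate. Nothing here reflects Cognition's design.
import subprocess
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

def build_until_tests_pass(task: str, max_rounds: int = 3) -> str:
    """Ask a model for code, run the test suite, and feed failures back."""
    messages = [{"role": "user",
                 "content": f"Write the file solution.py for this task:\n{task}"}]
    for _ in range(max_rounds):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=messages,
        ).choices[0].message.content
        # Real systems parse code out of the reply; this sketch writes it verbatim.
        with open("solution.py", "w") as f:
            f.write(reply)
        result = subprocess.run(["python", "-m", "pytest", "tests"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return reply  # tests pass; the "end-to-end debugging" is done
        # Append the failure output so the model can repair its own code.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user",
                         "content": f"The tests failed:\n{result.stdout}\nFix the code."})
    raise RuntimeError("No passing solution within the round budget")
```

The interesting part is not the model call; it is the plumbing. Once the run-and-repair plumbing exists, the artifact could as easily be a legal brief template or a tax workbook as a Python file.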

Let’s assume the write up is mostly accurate. It does not matter. Smart software will be shaped to deliver what I call orchestrated solutions either today, tomorrow, or next month. Jobs already nuked by smartification include customer service reps, boilerplate writing jobs (hello, McKinsey), and translation. Some footloose and fancy-free gig workers without AI skills may face dilemmas about whether to pursue begging, YouTubing the van life, or doing some spelunking in the Chemical Abstracts database for molecular recipes in a Walmart restroom.

The trajectory of applied AI is reasonably clear to me. Once “programming” gets swept into the Prada bag of AI, what other professions will be smartified? Once again, the likely path is lit by dim but visible Alibaba solar lights for the garden:

  1. Legal tasks which are repetitive: even though the cases are different, the workflow is something an average law school graduate can master and learn to loathe
  2. Forensic accounting. Accountants are essentially Groundhog Day people, because every tax cycle is the same old same old
  3. Routine one-day surgeries. Sorry, dermatologists, cataract shops, and kidney stone crunchers. Robots will do the job and not screw up the DRG codes too much.
  4. Marketers. I know marketing requires creative thinking. Okay, but based on the Super Bowl ads this year, I think some clients will be willing to give smart software a whirl. Too bad about filming a horse galloping along the beach in Half Moon Bay though. Oh, well.

That’s enough of the professionals who will be affected by orchestrated work flows surfing on smartified software.

Why am I bothering to write down what seems painfully obvious to my research team?

I just wanted another reason to say, “I am glad I am old.” What many young college graduates will discover is that, despite my “luck” over the course of my work career, smartified software will not only kill some types of work; it will also remove the surprise in a serendipitous life journey.

To reiterate my point: I am glad I am old and understand efficiency, smartification, and the value of having been lucky.

Stephen E Arnold, March 13, 2024

AI Bubble Gum Cards

March 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A publication for electrical engineers has created a new mechanism for making AI into a collectible. Navigate to “The AI apocalypse: A Scorecard.” Scroll down to the part of the post which looks like the gems from the 1950s:

[image: the IEEE scorecard of expert trading cards]

The idea is to pick 22 experts and gather their big ideas about AI’s potential to destroy humanity. Here’s one example of an IEEE bubble gum card:

[image: a sample IEEE expert card]

© by the estimable IEEE.

The information on the cards is eclectic. It is clear that some people think smart software will kill me and you. Others are not worried.

My thought is that IEEE should expand upon this concept; for example, here are some bubble gum card ideas:

  • Do the NFT play? These might be easier to sell than IEEE memberships and subscriptions to the magazine
  • Offer actual, fungible packs of trading cards with throw-back bubble gum
  • Create an AI movie about AI experts with opposing ideas doing battle in a video game type world. Zap. You lose, you doubter.

But the old-fashioned approach to selling trading cards to grade school kids won’t work. First, there are very few corner stores near schools in many cities. Second, a special interest group will agitate to block the sale of cards about AI because the inclusion of chewing gum will damage children’s teeth. And third, kids today want TikToks, at least until the service is banned by a fast-acting group of elected officials.

I think the IEEE will go in a different direction; for example, micro USBs with AI images and source code on them. Or the IEEE should just advance to the 21st century and start producing short-form AI videos.

The IEEE does have an opportunity. AI collectibles.

Stephen E Arnold, March 13, 2024

Want Clicks: Do Sad, Really, Really Sorrowful

March 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The US is a hotbed of negative news. It’s what drives the media and perpetuates the culture of fear that (arguably) has plagued the country since colonial times. US citizens and now the rest of the world are so addicted to bad news that a research team got the brilliant idea to study which words people click. Nieman Lab wrote about the study in “Negative Words in News Headlines Generate More Clicks, But Sad Words Are More Effective Than Angry or Scary Ones.”


Thanks, MSFT Copilot. One of Redmond’s security professionals, I surmise?

Negative words are prevalent in headlines because they sell clicks. The journal Nature Human Behaviour published a study called “Negativity Drives Online News Consumption.” The study analyzed the effect of negative and emotional words on news consumption, and the research team discovered that negativity increased clickability. These findings also confirm the well-documented human tendency to seek out negativity in all information-seeking.

It coincides with humanity’s instinct to be vigilant of any danger and avoid it. While humans instinctively gravitate towards negative headlines, certain negative words are more popular than others. Humans apparently are driven to click on sadness-related words, avoid anything resembling joy or fear, and angry words have no effect. It all goes back to survival:

“And if we are to believe “Bad is stronger than good” derives from evolutionary psychology — that it arose as a useful heuristic to detect threats in our environment — why would fear-related words reduce likelihood to click? (The authors hypothesize that fear and anger might be more important in generating sharing behavior — which is public-facing — than clicks, which are private.)

In any event, this study puts some hard numbers to what, in most newsrooms, has been more of an editorial hunch: Readers are more drawn to negativity than to positivity. But thankfully, the effect size is small — and I’d wager that it’d be even smaller for any outlet that decided to lean too far in one direction or the other.”

It could be a strict diet of danger-filled media, too.

Whitney Grace, March 13, 2024

