FOGINT: Hong Kong: A Significant Crypto Wiggle

November 5, 2024

Hong Kong is taking steps to secure its place in today’s high-tech landscape. Blockonomi reports, “Hong Kong’s Bold Move to Become Asia’s Crypto Capital.” Tax breaks, regulations, and a shiny new virtual asset index underpin the effort. Meanwhile, the Virtual Asset Trading Platform regime launched last year is chugging right along. We suspect Telegram will be the utility for messaging, sales, and marketing.

Writer Oliver Dale tells us:

“The Hong Kong Exchanges and Clearing Limited (HKEX) announced the launch of a Virtual Asset Index Series, scheduled for November 15, 2024. This new index will provide benchmark pricing for Bitcoin and Ether specifically tailored to Asia-Pacific time zones. The Securities and Futures Commission (SFC) is working to finalize a list of crypto exchanges that will receive full licenses by year-end. Eric Yip, executive director for intermediaries at the SFC, revealed plans to establish a consultation panel by early 2025 to maintain oversight of licensed exchanges. The regulatory framework extends beyond trading platforms. Hong Kong authorities are developing comprehensive guidelines for crypto-focused over-the-counter trading desks and custodians, with implementation expected in the coming year. For stablecoin issuers, new requirements are being introduced. Foreign fiat-referenced stablecoin providers will need to establish physical operations in Hong Kong and maintain reserves in local banks.”

Establishing a physical presence in the city is no small thing. Though Hong Kong is a culturally rich and vibrant city, we hear real estate is at a premium. That is okay; we are sure stablecoin geniuses can afford it.

Hong Kong is also working to bring AI tools to the financial sector, but there it is caught between a rock and a hard place. Though a part of China, the dense and wealthy city operates under a unique “one country, two systems” governance framework. As a result, it has limited access to both Western AI platforms, like ChatGPT and Gemini, and services from Chinese firms like Baidu and ByteDance. To bridge the gap, local institutions like The Hong Kong University of Science and Technology are building their own solutions. Officials hope tax incentives will attract professional investment firms to the city.

The stablecoin policies should go into effect by the end of this year, while custodian regulations and consultation on over-the-counter trading are to be established sometime in 2025.

Cynthia Murrell, November 5, 2024

How to Cut Podcast Costs and Hassles: A UK Example

November 5, 2024

Using AI to replicate a particular human is a fraught topic. Of paramount concern is the relentless issue of deepfakes. There are also legal issues of control over one’s likeness, of course, and concerns the technology could put humans out of work. It is against this backdrop, the BBC reports, that “Michael Parkinson’s Son Defends New AI Podcast.” The new podcast uses AI to recreate the late British talk show host, who will soon interview (human) guests. Son Mike acknowledges the concerns, but insists this project is different. Writer Steven McIntosh explains:

“Mike Parkinson said Deep Fusion’s co-creators Ben Field and Jamie Anderson ‘are 100% very ethical in their approach towards it, they are very aware of the legal and ethical issues, and they will not try to pass this off as real’. Recalling how the podcast was developed, Parkinson said: ‘Before he died, we [my father and I] talked about doing a podcast, and unfortunately he passed away before it came true, which is where Deep Fusion came in. ‘I came to them and said, ‘if we wanted to do this podcast with my father talking about his archive, is it possible?’, and they said ‘it’s more than possible, we think we can do something more’. He added his father ‘would have been fascinated’ by the project, although noted the broadcaster himself was a ‘technophobe’. Discussing the new AI version of his father, Parkinson said: ‘It’s extraordinary what they’ve achieved, because I didn’t really think it was going to be as accurate as that.’”

So they have the family’s buy-in, and they are making it very clear the host is remade with algorithms. The show is called “Virtually Parkinson,” after all. But there is still that replacing-human-talent-with-AI thing. Deep Fusion’s Anderson notes that, since Parkinson is deceased, he is in no danger of losing work. However, McIntosh counters, any guest who appears on this show may give one fewer interview to a show hosted by a different, living person. Good point.

One thing noteworthy about Deep Fusion’s AI on this project is its ability to not just put words in Parkinson’s mouth, but to predict how he would have actually responded. Assuming that function is accurate, we have a request: Please bring back the objective reporting of Walter Cronkite. This world sorely needs it.

Cynthia Murrell, November 5, 2024

Microsoft 24H2: The Reality Versus Self Awareness

November 4, 2024

Sorry. Written by a dumb humanoid. Art? It is AI, folks. Eighty-year-old dinobabies cannot draw very well in my experience.

I spotted a short item titled “Microsoft Halts Windows 11 24H2 Update for Many PCs Due to Compatibility Issues.” Today is October 29, 2024. By the time you read this item, you may have a Windows equipped computer humming along on the charmingly named 11 24H2 update. That’s the one with Recall.


Microsoft does not see itself as slightly bedraggled. Those with failed updates do. Thanks, ChatGPT, good enough, but at least you work. MSFT Copilot has been down for six days with a glitch.

Now if you work at the Redmond facility where Google paranoia reigns, you probably have Recall running on your computing device as well as Teams’ assorted surveillance features. That means that when you run a query for “updates,” you may see screens presenting an array of information about non-functioning drivers, printer errors, visits to the wonderfully organized knowledge bases, and possibly images of email from colleagues wanting to take kinetic action about the interns, new hires, and ham-fisted colleagues who rolled out an update which does not update.

The write up offers this helpful advice:

We advise users against manually forcing the update through the Windows 11 Installation Assistant or media creation tool, especially on the system configurations mentioned above. Instead, users should check for updates to the specific software or hardware drivers causing the holds and wait for the blocks to be lifted naturally.

Okay.

Let’s look at this from the point of view of bad actors. These folks know that the “new” Windows with its many nifty new features has some issues. When the Softies cannot get wallpaper to work, one knows that deeper, more subtle issues are not on the wizards’ radar.

Thus, the 24H2 update will be installed on bad actors’ test systems and subjected to tests only a fan of Metasploit and related tools can appreciate. My analogy is that these individuals, some of whom are backed by nation states, will give the update the equivalent of a digital colonoscopy. Sorry, Redmond, no anesthetic this go round.

Why?

Microsoft suggests that security is Job Number One. Obviously when fingerprint security functions don’t work and Windows Hello fails, the bad actor knows that other issues exist. My goodness. Why doesn’t Microsoft just turn its PR and advertising firms loose on Telegram hacking groups and announce, “Take me. I am yours!”

Several observations:

  1. The update is flawed
  2. Core functions do not work
  3. Partners, not Microsoft, are supposed to fix the broken slot machine of operating systems
  4. Microsoft is, once again, scrambling to do what it should have done correctly before releasing a deeply flawed bundle of software.

Net net: Blaming Google for European woes and pointing fingers at everything and everyone except itself, Microsoft is demonstrating that it cannot do a basic task correctly. The only users who are happy are those legions of bad actors in the countries Microsoft accuses of making its life difficult. Sorry, Microsoft, you did this, but you could blame Google, of course.

Stephen E Arnold, November 4, 2024

Will AI Data Scientists Become Street People?

November 4, 2024

Over at HackerNoon, all-around IT guy Dominic Ligot insists data scientists must get on board with AI or be left behind. In “AI Denialism,” he compares data analysts who insist AI can never replace them with 19th century painters who scoffed at photography as an art form. Many of them who specialized in realistic portraits soon found themselves out of work, despite their objections.

Like those painters, Ligot believes, some data scientists are in denial about how well this newfangled technology can do what they do. They hang on to a limited definition of creativity at their peril. In fact, he insists:

“The truth is, AI’s ability to model complex relationships, surface patterns, and even simulate multiple solutions to a problem means it’s already doing much of what data analysts claim as their domain. The fine-grained feature engineering, the subtle interpretations—AI is not just nibbling around the edges; it’s slowly encroaching into the core of what we’ve traditionally defined as ‘analytical creativity.’”

But we are told there is hope for those who are willing to adapt:

“I’m not saying that data scientists or analysts will be replaced overnight. But to assume that AI will never touch their domain simply because it doesn’t fit into an outdated view of what creativity means is shortsighted. This is a transformative era, one that calls for a redefinition of roles, responsibilities, and skill sets. Data analysts and scientists who refuse to keep an open mind risk finding themselves irrelevant in a world that is rapidly shifting beneath their feet. So, let’s not make the same mistake as those painters of the past. Denialism is a luxury we cannot afford.”

Is Ligot right? And, if so, what skill-set changes can preserve data scientists’ careers? That relevant question remains unanswered in this post. (There are good deals on big plastic mugs at Dollar Tree.)

Cynthia Murrell, November 04, 2024

Enter the Dragon: America Is Unhealthy

November 4, 2024

Written by a humanoid dinobaby. No AI except the illustration.

The YouTube video “A Genius Girl Who Is Passionate about Repairing Machines” presents a simple story in a 38-minute video. The idea is that a young woman, with no help, fixes a broken motorcycle with basic hand tools outside in what looks like a hoarder’s backyard. The message is: Wow, she is smart and capable. Don’t you wish you knew a person like this who could repair your broken motorcycle?

This video is from @vutvtgamming and not much information is provided. After watching this and similar videos like “Genius Girl Restored The 280mm Lathe From 50 Years Ago And Made It Look Like”, I feel pretty stupid for an American dinobaby. I don’t think I can recall meeting a person with similar mechanical skills when I worked at Keystone Steel, Halliburton Nuclear, or Booz, Allen & Hamilton’s Design & Development division. The message I carried away was: I was stupid, as were many people with whom I associated.


Thanks, MSFT Copilot. Good enough. (I slipped a put down through your filters. Imagine that!)

I picked up a similar vibe when I read “Today’s AI Ecosystem Is Unsustainable for Most Everyone But Nvidia, Warns Top Scholar.” On the surface, the ZDNet write up is an interview with the “scholar” Kai-Fu Lee, who, according to the article:

served as founding director of Microsoft Research Asia before working at Google and Apple, founded his current company, Sinovation Ventures, to fund startups such as 01.AI, which makes a generative AI search engine called BeaGo.

I am not sure how “scholar” correlates with commercial work for US companies and running an investment firm with a keen interest in Chinese start ups. I would not use the word “scholar.” My hunch is that the intent of Kai-Fu Lee is to present as simple and obvious something that US companies don’t understand. The interview is a different approach to explaining how advanced Kai-Fu Lee’s expertise is. He is, via this interview, sharing an opinion that the US is creating a problem and overlooking the simple solution. Just like the young woman able to repair a motorcycle or the lass fixing up a broken industrial lathe alone, the American approach does not get the job done.

What does ZDNet present as Kai-Fu Lee’s message? Here are a couple of examples:

“The ecosystem is incredibly unhealthy,” said Kai-Fu Lee in a private discussion forum earlier this month. Lee was referring to the profit disparity between, on the one hand, makers of AI infrastructure, including Nvidia and Google, and, on the other hand, the application developers and companies that are supposed to use AI to reinvent their operations.

Interesting. I wonder if the “healthy” ecosystem might be China’s approach of pragmatism and nuts-and-bolts evidenced in the referenced videos. The unhealthy versus healthy is a not-so-subtle message about digging one’s own grave in my opinion. The “economics” of AI are unhealthy, which seems to say, “America’s approach to smart software is going to kill it. A more healthy approach is the one in which government and business work to create applications.” Translating: China, healthy; America, sick as a dog.

Here’s another statement:

Today’s AI ecosystem, according to Lee, consists of Nvidia, and, to a lesser extent, other chip makers such as Intel and Advanced Micro Devices. Collectively, the chip makers rake in $75 billion in annual chip sales from AI processing. “The infrastructure is making $10 billion, and apps, $5 billion,” said Lee. “If we continue in this inverse pyramid, it’s going to be a problem,” he said.

Who will flip the pyramid? Uganda, Lao PDR, Greece? Nope, nope, nope. The flip will take an outfit with a strong mind and body. A healthy entity is needed to flip the pyramid. I wonder if that strong entity is China.

Here’s Kai-Fu Lee’s kung fu move:

He recommended that companies build their own vertically integrated tech stack the way Apple did with the iPhone, in order to dramatically lower the cost of generative AI. Lee’s striking assertion is that the most successful companies will be those that build most of the generative AI components — including the chips — themselves, rather than relying on Nvidia. He cited how Apple’s Steve Jobs pushed his teams to build all the parts of the iPhone, rather than waiting for technology to come down in price.

In the write up Kai-Fu Lee refers to “we”. Who is included in that we? Excluded will be the “unhealthy.” Who is left? I would suggest that the pragmatic and application focused will be the winners. The reason? The “we” includes the healthy entities. Once again I am thinking of China’s approach to smart software.

What’s the correct outcome? Kai-Fu Lee allegedly said:

What should result, he said, is “a smaller, leaner group of leaders who are not just hiring people to solve problems, but delegating to smart enterprise AI for particular functions — that’s when this will make the biggest deal.”

That sounds like the Chinese approach to a number of technical, social, and political challenges. Healthy? Absolutely.

Several observations:

  1. I wonder if ZDNet checked on the background of the “scholar” interviewed at length?
  2. Did ZDNet think about the “healthy” versus “unhealthy” theme in the write up?
  3. Did ZDNet question the “scholar’s” purpose in explaining what’s wrong with the US approach to smart software?

I think I know the answer. The ZDNet outfit and the creators of this unusual private interview believe that the young women rebuilt complicated devices without any assistance. Smart China; dumb America. I understand the message which seems to have not been internalized by ZDNet. But I am a dumb dinobaby. What do I know? Exactly. Unhealthy that American approach to AI.

Stephen E Arnold, October 30, 2024

Computer Security and Good Enough Methods

November 1, 2024

Written by a humanoid dinobaby. No AI except the illustration.

I read “TikTok Owner Sacks Intern for Sabotaging AI Project.” The BBC report is straightforward; it does not provide much “management” or “risk” commentary. In a nutshell, the allegedly China-linked ByteDance hired or utilized an intern. The term “intern” used to mean a student who wanted to get experience. Today, “intern” has a number of meanings. For example, for certain cyber fraud outfits operating in Southeast Asia an “intern” could be:

  1. A person paid to do work in a special economic zone
  2. A person coerced into doing work for an organization engaged in cyber fraud
  3. A person who is indeed a student and wants to get some experience
  4. An individual kidnapped and forced to perform work; otherwise, bad things can happen in dark rooms.

What’s the BBC say? Here is a snippet:

TikTok owner, ByteDance, says it has sacked an intern for “maliciously interfering” with the training of one of its artificial intelligence (AI) models.

The punishment, according to the write up, was “contacting” the intern’s university. End of story.

My take on this incident is a bit different from the BBC’s.

First, how did a company allegedly linked to the Chinese government make a bad hire? If the student was recommended by a university, what mistake did the university and the professors training the young person commit? The idea is to crank out individuals who snap into certain roles. I am not sure the spirit of an American party school is part of the ByteDance and TikTok work culture, but I may be off base.

Second, when a company hires a gig worker or brings an intern into an organization, are today’s managers able to identify potential issues either with an individual’s work or that person’s inner wiring? The fact that an intern was able to fiddle with code indicates a failure of internal checks and balances. The larger question is, “Can organizations trust interns who operate as insiders but without the controls an organization should have over individual workers?” This gaffe makes clear that modern management methods are not proactive; they are reactive. For that reason, insider threats exist and could do damage. ByteDance, according to the write up, downplayed the harm caused by the intern:

ByteDance also denied reports that the incident caused more than $10m (£7.7m) of damage by disrupting an AI training system made up of thousands of powerful graphics processing units (GPU).

Is this claim credible? Nope. I refer to the information about four companies “downplaying the impact of the SolarWinds hack.” US outfits don’t want to reveal the impact of a cyber issue. Are outfits like ByteDance and TikTok on the up and up about the impact of the intern’s actions?

Third, the larger question becomes, “How does an organization minimize insider threats as it cuts training staff and relies on lower cost labor?” The answer is clear to me: An organization does what it can and hopes for the best.

Like many parts of a life in an informationized world or datasphere in my lingo, the quality of most efforts is good enough. The approach guarantees problems in the future. These are problems which cannot be solved. Management just finds something to occupy its time. The victims are the users, the customers, or the clients.

The world, even when allegedly linked with nation states, is struggling to achieve good enough.

Stephen E Arnold, November 1, 2024

The Reason IT Work is Never Done: The New Sisyphus Task

November 1, 2024

Why are systems never completely fixed? There is always some modification that absolutely must be made. In a recent blog post, engagement firm Votito chalks it up to Tog’s Paradox (aka The Complexity Paradox). This rule states that when a product simplifies user tasks, users demand new features that perpetually increase the product’s complexity. Both minimalists and completionists are doomed to disappointment, it seems.

The post supplies three examples of Tog’s Paradox in action. Perhaps the most familiar to many is that of social media. We are reminded:

“Initially designed to provide simple ways to share photos or short messages, these platforms quickly expanded as users sought additional capabilities, such as live streaming, integrated shopping, or augmented reality filters. Each of these features added new layers of complexity to the app, requiring more sophisticated algorithms, larger databases, and increased development efforts. What began as a relatively straightforward tool for sharing personal content has transformed into a multi-faceted platform requiring constant updates to handle new features and growing user expectations.”

The post asserts software designers may as well resign themselves to never actually finishing anything. Every project should be seen as an ongoing process. The writer observes:

“Tog’s Paradox reveals why attempts to finalize design requirements are often doomed to fail. The moment a product begins to solve its users’ core problems efficiently, it sparks a natural progression of second-order effects. As users save time and effort, they inevitably find new, more complex tasks to address, leading to feature requests that expand the scope far beyond what was initially anticipated. This cycle shows that the product itself actively influences users’ expectations and demands, making it nearly impossible to fully define design requirements upfront. This evolving complexity highlights the futility of attempting to lock down requirements before the product is deployed.”

Maybe humanoid IT workers will become enshrined as new age Sisyphuses? Or maybe Sisyphi?

Cynthia Murrell, November 1, 2024

Great Moments in Marketing: MSFT Copilot, the Salesforce Take

November 1, 2024

A humanoid wrote this essay. I tried to get MSFT Copilot to work, but it remains dead. That makes four days with weird messages about a glitch. That’s the standard: Good enough.

It’s not often I get a kick out of comments from myth-making billionaires. I read through the interview with the boy wonder turned company founder titled “An Interview with Salesforce CEO Marc Benioff about AI Abundance.” No paywall on this essay, unlike the New York Times’ downer about smart software which appears to have played a part in a teen’s suicide. Imagine when Perplexity can control a person’s computer. What exciting stories will appear. Here’s an example of what may be more common in 2025.


Great moments in Salesforce marketing. A senior Agentforce executive considers great marketing and brand ideas of the past. Inspiration strikes. In 2024, he will make fun of Clippy. Yes, a 1995 reference will resonate with young deciders in 2024. Thanks, Stable Diffusion. You are working; MSFT Copilot is not.

The focus today is a single statement in this interview with the big dog of Salesforce. Here’s the quote:

Well, I guess it wasn’t the AGI that we were expecting because I think that there has been a level of sell, including Microsoft Copilot, this thing is a complete disaster. It’s like, what is this thing on my computer? I don’t even understand why Microsoft is saying that Copilot is their vision of how you’re going to transform your company with AI, and you are going to become more productive. You’re going to augment your employees, you’re going to lower your cost, improve your customer relationships, and fundamentally expand all your KPIs with Copilot. I would say, “No, Copilot is the new Clippy”, I’m even playing with a paperclip right now.

Let’s think about this series of references and assertions.

First, there is the direct statement “Microsoft Copilot, this thing is a complete disaster.” Let’s assume the big dog of Salesforce is right. The large and much loved company — Yes, I am speaking about Microsoft — rolled out a number of implementations, applications, and assertions. The firm caught everyone’s favorite Web search engine with its figurative pants down like a hapless Russian trooper about to be dispatched by a Ukrainian drone equipped with a variant of RTX. (That stuff goes bang.) Microsoft “won” a marketing battle and gained the advantage of time. Google with its Sundar & Prabhakar Comedy Act created an audience. Microsoft seized the opportunity to talk to the audience. The audience applauded. Whether the technology worked, in my opinion was secondary. Microsoft wanted to be seen as the jazzy leader.

Second, the idea of a disaster is interesting. Since Microsoft relied on what may be the world’s weirdest organizational set up and supported the crumbling structure, other companies have created smart software which surfs on Google’s transformer ideas. Microsoft did not create a disaster; it had not done anything of note in the smart software world. Microsoft is a marketer. The technology is a second class citizen. The disaster is that Microsoft’s marketing seems to be out of sync with what the PowerPoint decks say. So what’s new? The answer is, “Nothing.” The problem is that some people don’t see Microsoft’s smart software as a disaster. One example is Palantir, which is Microsoft’s new best friend. The US government cannot rely on Microsoft enough. Those contract renewals keep on rolling. Furthermore the “certified” partners could not be more thrilled. Virtually every customer and prospect wants to do something with AI. When the blind lead the blind, a person with really bad eyesight has an advantage. That’s Microsoft. Like it or not.

Third, the pitch about “transforming your company” is baloney. But it sounds good. It helps a company do something “new” but within the really familiar confines of Microsoft software. In the good old days, it was IBM that provided the cover for doing something, anything, which could produce a marketing opportunity or a way to add a bit of pizazz to a 1955 Chevrolet two-door 210 sedan. Thus, whether the AI works or does not work, one must not lose sight of the fact that Microsoft-centric outfits are going to go with Microsoft because most professionals need PowerPoint and the bean counters do not understand anything except Excel. What strikes me as important is that Microsoft can use modest, even inept, smart software and come out a winner. Who is complaining? The Fortune 1000, the US Federal government, the legions of MBA students who cannot do a class project without Excel, PowerPoint, and Word?

Finally, the ultimate reference in the quote is Clippy. Personally I think the big dog at Salesforce should have invoked both Bob and Clippy. Regardless of the “joke” hooked to these somewhat flawed concepts, the names “Bob” and “Clippy” have resonance. Bob rolled out in 1995. Clippy helped so many people beginning in the same year. Decades later Microsoft’s really odd software is going to cause a 20 something who was not born to turn away from Microsoft products and services? Nope.

Let’s sum up: Salesforce is working hard to get a marketing lift by making Microsoft look stupid. Believe me. Microsoft does not need any help. Perhaps the big dog should come up with a marketing approach that replicates or comes close to what Microsoft pulled off in 2023. Google still hasn’t recovered fully from that kung fu blow.

The big dog needs to up its marketing game. Say Salesforce and what’s the reaction? Maybe meh.

Stephen E Arnold, November 1, 2024

Google Goes Nuclear For Data Centers

October 31, 2024

From the The Future-Is-Just-Around-the-Corner Department:

Pollution is blamed on consumers, who are told to cut their dependency on plastic and drive less, while mega corporations and tech companies are the biggest polluters in the world. Some of the biggest users of energy are data centers, and Google decided to go nuclear to help power them, says Engadget: “Google Strikes A Deal With A Nuclear Startup To Power Its AI Data Centers.”

Google is teaming up with Kairos Power to build seven small nuclear reactors in the United States. The reactors will power Google’s AI drive and add 500 megawatts of capacity. The first reactor is expected to be built in 2030, with the plan to finish the rest by 2035. The reactors are called small modular reactors, or SMRs for short.

Google’s deal with Kairos Power would be the first corporate deal to buy nuclear power from SMRs. The small reactors are built inside a factory instead of on site, so their construction cost is lower than that of a full power plant.

“Kairos will need the US Nuclear Regulatory Commission to approve design and construction permits for the plans. The startup has already received approval for a demonstration reactor in Tennessee, with an online date targeted for 2027. The company already builds test units (without nuclear-fuel components) at a development facility in Albuquerque, NM, where it assesses components, systems and its supply chain.

The companies didn’t announce the financial details of the arrangement. Google says the deal’s structure will help to keep costs down and get the energy online sooner.”

These tech companies say they’re green, but now they are contributing more to global warming with their AI data centers and potential nuclear waste. At least nuclear energy is more powerful and doesn’t contribute as much pollution as coal or natural gas, except when the reactors melt down. Amazon is pursuing a similar deal too.

Has Google made the engineering shift from moon shots to environmental impact statements, nuclear waste disposal, document management, assorted personnel challenges? Sure, of course. Oh, and one trivial question: Is there a commercially available and certified miniature nuclear power plant? Russia may be short on cash. Perhaps someone in that country will sell a propulsion unit from those super reliable nuclear submarines? Google can just repurpose it in a suitable data center. Maybe one in Ashburn, Virginia?

Whitney Grace, October 31, 2024

The Sweet Odor of Musk

October 31, 2024

The old Twitter was a boon for academics. It was a virtual gathering place where they could converse with each other, the general public, and even lawmakers. Information was spread and discussed far and wide. The platform was also a venue for conducting online research. Now, though, scholars seem to be withering under the “Musk effect.” Cambridge University Press shares its researchers’ paper, “The Vibes Are Off: Did Elon Musk Push Academics Off Twitter?”

The abstract begins by noting several broad impacts of Twitter’s transition to “X,” as Elon Musk has renamed it: Most existing employees were laid off. Access to its data was monetized. Its handling of censorship and misinformation was upended, and its affordances shifted. But the scope of this paper is more narrow. Researchers James Bisbee and Kevin Munger set out to answer:

“What did Elon Musk’s takeover of the platform mean for this academic ecosystem? Using a snowball sample of more than 15,700 academic accounts from the fields of economics, political science, sociology, and psychology, we show that academics in these fields reduced their ‘engagement’ with the platform, measured by either the number of active accounts (i.e., those registering any behavior on a given day) or the number of tweets written (including original tweets, replies, retweets, and quote tweets).”
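The engagement measures the researchers describe are simple aggregations over a tweet log. A minimal sketch in Python, using invented account names, dates, and records (the actual dataset and code are not reproduced here), might look like this:

```python
from collections import defaultdict
from datetime import date

# Hypothetical tweet log: (account_id, day, tweet_type). The tweet types mirror
# the behaviors the paper counts: originals, replies, retweets, quote tweets.
tweets = [
    ("econ_prof_1", date(2022, 10, 1), "original"),
    ("econ_prof_1", date(2022, 10, 1), "reply"),
    ("polisci_2",   date(2022, 10, 1), "retweet"),
    ("polisci_2",   date(2022, 11, 1), "quote"),
]

def daily_engagement(records):
    """Return {day: (active_account_count, tweet_count)}.

    An account is 'active' on a day if it registers any behavior that day;
    the tweet count includes all four tweet types.
    """
    active = defaultdict(set)   # day -> set of accounts seen that day
    counts = defaultdict(int)   # day -> total tweets that day
    for account, day, _tweet_type in records:
        active[day].add(account)
        counts[day] += 1
    return {day: (len(active[day]), counts[day]) for day in counts}

print(daily_engagement(tweets))
```

Tracking either series over time, before and after the takeover, is essentially what the paper's "disengagement" charts show at scale.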

Why did scholars disengage? The “Musk Effect,” as the paper calls it, was a mix of factors. Changes to the verification process and account-name rules were part of it. Many were upset when Musk nixed the free API they’d relied on for research in a range of fields. But much of it was simply a collective disgust at the new owner’s unscientific nature, childishness, and affinity for conspiracy theories. The researchers write:

“We argue that a combination of these features of the threat and then the reality of Musk’s ownership of the Twitter corporation influenced academics either to quit Twitter altogether or at least reduce their engagement with the platform (i.e., ‘disengage’). The policy changes and personality of Twitter’s new owner were difficult to avoid and may have made the experience of using the platform less palatable. Conversely, these same attributes may have stimulated a type of ideological boycott, in which academics disengaged with Twitter as a political strategy to indicate their intellectual and moral opposition.”

See the paper for a description of its methodology, the detailed results (complete with charts), and a discussion of the factors behind the Musk Effect. It also describes the role pre-X Twitter played in academic research. Check out section 1 to learn what the scientific community lost when one bratty billionaire decided to make a spite purchase the size of a small country’s gross domestic product.

Cynthia Murrell, October 31, 2024
