Prediction: Next Target Up — Public Libraries
June 26, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The publishers (in spirit at least) have kneecapped the Internet Archive. If you don’t know what the online service does or did, it does not matter. I learned from the estimable ShowBiz411.com site that a cultural treasure is gone. Forget digital books, the article “Paramount Erases Archives of MTV Website, Wipes Music, Culture History After 30 Plus Years” says:
Parent company Paramount, formerly Viacom, has tossed twenty plus years of news archives. All that’s left is a placeholder site for reality shows. The M in MTV – music — is gone, and so is all the reporting and all the journalism performed by music and political writers ever written. It’s as if MTV never existed. (It’s the same for VH1.com, all gone.)
Why? The write up couches the savvy business decision of the Paramount leadership this way:
There’s no precedent for this, and no valid reason. Just cheapness and stupidity.
Tibby, my floppy ear Frenchie, is listening to music from the Internet Archive. He knows the publishers removed 500,000 books. Will he lose access to his beloved early 20th century hill music? Will he ever be able to watch reruns of the rock the casbah music video? No. He is a risk. A threat. A despicable knowledge seeker. Thanks to myself for this nifty picture.
My knowledge of MTV and VH1 is limited. I do recall telling my children, “Would you turn that down, please?” What a waste of energy. Future students of American culture will have a void. I assume some artifacts of the music videos will remain. But the motherlode is gone. Is this a loss? On one hand, no. Thank goodness I will not have to glimpse performers rocking the casbah. On the other hand, yes. Archaeologists study bits of stone, trying to figure out how those who built Machu Picchu did it. The value of lost information to those in the future is tough to discuss. But knowledge products may be like mine tailings. At some point, a bright person can figure out how to extract trace elements in quantity.
I have a slightly different view of these two recent cultural milestones. I have a hunch that the publishers want to protect their intellectual property. Internet Archive rolled over because its senior executives learned from their lawyers that lawsuits about copyright violations would be tough to win. The informed approach was to delete 500,000 books. Imagine an online service like the Internet Archive trying to be a library.
That brings me to what I think is going on. Copyright litigation will make quite a lot of digital information disappear. That means the fees public libraries pay for digital copies of books to “loan” to patrons must go up. Libraries that don’t play ball may be faced with other publisher punishments: No American Library Association after parties, no consortia discounts, and at some point no free books.
Yes, libraries will have to charge a patron to check out a physical book and then the “publishers” will get a percentage.
The Andrew Carnegie “free” thing is wrong. Libraries rip off the publishers. Authors may be mentioned, but what publisher cares about 99 percent of its authors? (I hear crickets.)
Several thoughts struck me as I was walking my floppy ear Frenchie:
- The loss of information (some of which may have knowledge value) is no big deal in a social structure which does not value education. If people cannot read, who cares about books? Publishers and the wretches who write them. Period.
- The copyright timebomb of the Paramount video content has been defused. Let’s keep those lawyers at bay, please. Who will care? Nostalgia buffs and the parents of the “stars”?
- The Internet Archive has music; libraries have music. Those targets are not on Paramount’s back. Who will shoot at these targets? Copyright litigators. Go go go.
Net net: My prediction is that libraries must change to a pay-to-loan model or get shut down. Who wants informed people running around disagreeing with lawyers, accountants, and art history majors?
Stephen E Arnold, June 26, 2024
Microsoft: Not Deteriorating, Just Normal Behavior
June 26, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Gee, Microsoft, you are amazing. We just fired up a new Windows 11 Professional machine and guess what? Yep, the printers are not recognized. Nice work and consistent good enough quality.
Then I read “Microsoft Admits to Problems Upgrading Windows 11 Pro to Enterprise.” That write up says:
There are problems with Microsoft’s last few Windows 11 updates, leaving some users unable to make the move from Windows 11 Pro to Enterprise. Microsoft made the admission in an update to the "known issues" list for the June 11, 2024, update for Windows 11 22H2 and 23H2 – KB5039212. According to Microsoft, "After installing this update or later updates, you might face issues while upgrading from Windows Pro to a valid Windows Enterprise subscription."
Bad? Yes. But then I worked through this write up: “Microsoft Chose Profit Over Security and Left U.S. Government Vulnerable to Russian Hack, Whistleblower Says.” Is the information in the article on the money? I don’t know. I do know that bad actors find Windows the equivalent of an unlocked candy store. Goodies are there for greedy teens to cart off the chocolate-covered peanuts and gummy worms.
Everyone interested in entering the Microsoft Windows Theme Park wants to enjoy the thrills of a potentially lucrative experience. Thanks, MSFT Copilot. Why is everyone in your illustration the same?
This remarkable story of willful ignorance explains:
U.S. officials confirmed reports that a state-sponsored team of Russian hackers had carried out SolarWinds, one of the largest cyberattacks in U.S. history.
How did this happen? The write up asserts:
The federal government was preparing to make a massive investment in cloud computing, and Microsoft wanted the business. Acknowledging this security flaw could jeopardize the company’s chances, Harris [a former Microsoft security expert and whistleblower] recalled one product leader telling him. The financial consequences were enormous. Not only could Microsoft lose a multibillion-dollar deal, but it could also lose the race to dominate the market for cloud computing.
Bad things happened. The article includes this interesting item:
From the moment the hack surfaced, Microsoft insisted it was blameless. Microsoft President Brad Smith assured Congress in 2021 that “there was no vulnerability in any Microsoft product or service that was exploited” in SolarWinds.
Okay, that’s the main idea: Money.
Several observations are warranted:
- There seems to be an issue with procurement. The US government creates an incentive for Microsoft to go after big contracts and then does not require Microsoft products to work or be secure. I know generals love PowerPoint, but it seems that national security is at risk.
- Microsoft itself operates with a policy of doing what’s necessary to make as much money as possible and avoiding the cost of engineering products that deliver what the customer wants: Stable, secure software and services.
- Individual users have to figure out how to make the most basic functions work without stopping business operations. Printers should print; an operating system should be able to handle what my first personal computer could do in the early 1980s. After 40 years, printing is not a new thing.
Net net: In a consequence-free business environment, I am concerned that Microsoft will not improve its security and the most basic computer operations. I am not sure the company knows how to remediate what I think of as a Disneyland for bad actors. And I wanted the new Windows 11 Professional to work. How stupid of me.
Stephen E Arnold, June 26, 2024
X: The Prominent (Fake) News Source
June 26, 2024
Many of us have turned away from X, formerly Twitter, since its Musky takeover and now pay it little mind. However, it seems many Americans still trust the platform to deliver their news. This is concerning, considering “X Has Highest Rate of Misinformation As a News Source, Study Finds.”
Citing a recent Pew Research study, MediaDailyNews reports 65% of X users say news is a reason they visit the platform. Breaking news is even more of a draw, with 75% of users getting their real-time news on the platform. This is understandable given Twitter’s legacy, but are users unaware how unreliable X has become? Writer Colin Kirkland emphasizes:
“What may be the greatest concern in Pew’s findings is that while X touts that it has the most devoted base of news seekers, it also ranked the highest in terms of inaccurate reporting. All of the platforms Pew studied proliferate misinformation-based news stories, but 86% of X’s base reported seeing inaccurate news, and 37% say they see it often. As Meta makes definitive moves to curb its news output on apps like Instagram, Facebook and Threads — the only other potential breaking-news alternative to X — Elon Musk’s app reigns supreme in the proliferation and digestion of news content, which could have effects on the upcoming presidential election, especially due to the amount of misinformation circling the platform.”
Yep. How can one reach X users with this important update? Pew is trying the direct route. Will it make any difference?
Cynthia Murrell, June 26, 2024
Two EU Firms Unite in Pursuit of AI Sovereignty
June 25, 2024
Europe would like to get out from under the sway of North American tech firms. This is unsurprising, given how differently the EU views issues like citizen privacy. Then there are the economic incentives of localizing infrastructure, data, workforce, and business networks. Now, two generative AI firms are uniting with that goal in mind. The Next Web reveals, “European AI Leaders Aleph Alpha and Silo Ink Deal to Deliver ‘Sovereign AI’.” Writer Thomas Macaulay reports:
“Germany’s Aleph Alpha and Finland’s Silo AI announced the partnership [on June 13, 2024]. The duo plan to create a ‘one-stop-solution’ for European industrial firms exploring generative AI. Their collaboration brings together distinctive expertise. Aleph Alpha has been described as a European rival to OpenAI, but with a stronger focus on data protection, security, and transparency. The company also claims to operate Europe’s fastest commercial AI data center. Founded in 2019, the firm has become Germany’s leading AI startup. In November, it raised $500mn in a funding round backed by Bosch, SAP, and Hewlett Packard Enterprise. Silo AI, meanwhile, calls itself ‘Europe’s largest private AI lab.’ The Helsinki-based startup provides custom LLMs through a SaaS subscription. Use cases range from smart devices and cities to autonomous vehicles and industry 4.0. Silo also specializes in building LLMs for low-resource languages, which lack the linguistic data typically needed to train AI models. By the end of this year, the company plans to cover every official EU language.”
Both Aleph Alpha CEO Jonas Andrulis and Silo AI CEO Peter Sarlin enthusiastically advocate European AI sovereignty. Will the partnership strengthen their mutual cause?
Cynthia Murrell, June 25, 2024
Ad Hominem Attack: A Revived Rhetorical Form
June 24, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I remember my high school debate coach telling my partner Nick G. (I have forgotten the budding prosecutor’s last name, sorry), “You should not attack the character of our opponents.” Nick G. had interacted with Bill W. on the basketball court in an end-of-year regional game. Nick G., as I recall, got a bloody nose, and Bill W. was thrown out of the basketball game. When fisticuffs ensued, I thanked my lucky stars I was a hopeless athlete. Give me the library, a debate topic, a pile of notecards, and I was good to go. Nick G. included in his rebuttal statement comments about the character of Bill W. When the judge rendered a result and his comments, Nick G. was singled out as being wildly inappropriate. After the humiliating defeat, the coach explained that an ad hominem argument is not appropriate for 15-year-olds. Nick G.’s attitude was, “I told the truth.” As Nick G. learned, the truth is not what wins debate tournaments or, in some cases, life.
I thought about ad hominem arguments as I read “Silicon Valley’s False Prophet.” This essay reminded me of the essay by the same author titled “The Man Who Killed Google Search.” I must admit the rhetorical trope is repeatable. Furthermore, it can be applied to an individual who may be clueless about how selling advertising nuked relevance (or what was left of it) at the Google and to the deal making of a person whom I call Sam AI-Man. Who knows? Maybe other authors will emulate these two essays, and a new Silicon Valley genre may emerge ready for the real wordsmiths and pooh-bahs of Silicon Valley to crank out a hit piece every couple of days.
To the essay at hand: The false prophet is the former partner of Elon Musk and the on-again-off-again-on-again Big Dog at OpenAI. That’s an outfit where “open” means closed, and closed means open to the likes of Apple. The main idea, I think, is that AI sucks and Sam AI-Man continues to beat the drum for a technology that is likely headed for a correction. In Silicon Valley speak, the bubble will burst. It is, I surmise, Mr. AI-Man’s fault.
The essay explains:
Sam Altman, however, exists in a category of his own. There are many, many, many examples of him saying that OpenAI — or AI more broadly — will do something it can’t and likely won’t, and it being meekly accepted by the Fourth Estate without any real pushback. There are more still of him framing the limits of the present reality as a positive — like when, in a fireside sitdown with ~~1980s used car salesman~~ Salesforce CEO Marc Benioff, Altman proclaimed that AI hallucinations (when an LLM asserts something untrue as fact, because AI doesn’t know anything) are a feature, not a bug, and rather than being treated as some kind of fundamental limitation, should be regarded as a form of creative expression.
I understand. Salesperson. Quite a unicorn in Silicon Valley. I mean, when I worked there I would encounter hyperbole artists every few minutes. Yeah, Silicon Valley. Anchored in reality, minimum viable products, and lots of hanky panky.
The essay provides a bit of information about the background of Mr. AI-Man:
When you strip away his ability to convince people that he’s smart, Altman had actually done very little — he was a college dropout with a failing-then-failed startup, one where employees tried to get him fired twice.
If true, that takes some doing. Employees tried to get the false prophet fired twice. In olden times, burning at the stake might have been an option. Now it is just move on to another venture. Progress.
The essay does provide some insight into Sam AI-Man’s core competency:
Altman is adept at using connections to make new connections, in finding ways to make others owe him favors, in saying the right thing at the right time when he knew that nobody would think about it too hard. Altman was early on Stripe, and Reddit, and Airbnb — all seemingly-brilliant moments in the life of a man who had many things handed to him, who knew how to look and sound to get put in the room and to get the capital to make his next move. It’s easy to conflate investment returns with intellectual capital, even though the truth is that people liked Altman enough to give him the opportunity to be rich, and he took it.
I cannot figure out if the author envies Sam AI-Man, reviles him for being clever (a key attribute in some high-technology outfits), or genuinely perceives Mr. AI-Man as the first cousin to Beelzebub. Whatever the motivation, I find the phoenix-like rising of the ad hominem attack a refreshing change from the entitled pooh-bahism of some folks writing about technology.
The only problem: I think it is unlikely that the author will be hired by OpenAI. Chance blown.
Stephen E Arnold, June 24, 2024
The Key to Success at McKinsey & Company: The 2024 Truth Is Out!
June 21, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
When I was working at a “real” company, I wanted to labor in the vineyards of a big-time, blue-chip consulting firm. I achieved that goal and, after a suitable period of time in the penal colony, I escaped to a client. I made it out, unscathed, and entered a more interesting, less nutso working life. When the “truth” about big-time, blue-chip consulting firms appears in public sources, I scan the information. Most of it is baloney; for example, the yip yap about McKinsey and its advice pertaining to addictive synthetics. Hey, stuff happens when one is objective. “McKinsey Exec Tells Summer Interns That Learning to Ask AI the Right Questions Is the Key to Success” contains some information which I find quite surprising. First, I don’t know if the factoids in the write up are accurate or if they are the off-the-cuff baloney recruiters regularly present to potential 60-hour-a-week knowledge worker serfs or if the person has a streaming video connection to the McKinsey managing partner’s work-from-the-resort office.
Let’s assume the information is correct and consider some of its implications. An intern is a no-pay or low-pay job for students from the right institutions, the right background, or the right connections. The idea is that associates (one step above the no-pay serf) and partners (the set for life if you don’t die of heart failure crowd) can observe, mentor, and judge these field laborers. The write up states:
Standing out in a summer internship these days boils down to one thing — learning to talk to AI. At least, that’s the advice McKinsey’s chief client officer, Liz Hilton Segel, gave one eager intern at the firm. “My advice to her was to be an outstanding prompt engineer,” Hilton Segel told The Wall Street Journal.
But what about grades? What about my family’s connections to industry, elected officials, and a supreme court judge? What about my background scented with old money, sheepskin from prestigious universities, and a Nobel Prize awarded a relative 50 years ago? These questions, it seems, may no longer be relevant. AI is coming to the blue-chip consulting game, and the old-school markers of building big revenues may no longer matter.
AI matters. After an 11-month effort, McKinsey has produced Lilli. The smart system, despite fits and starts, has delivered results; that is, a payoff, cash money, engagement opportunities. The write up says:
Lilli’s purpose is to aggregate the firm’s knowledge and capabilities so that employees can spend more time engaging with clients, Erik Roth, a senior partner at McKinsey who oversaw Lilli’s development, said last year in a press release announcing the tool.
And the proof? I learned:
“We’ve [McKinsey humanoids] answered over 3 million prompts and add about 120,000 prompts per week,” he [Erik Roth] said. “We are saving on average up to 30% of a consultant’s time that they can reallocate to spend more time with their clients instead of spending more time analyzing things.”
Thus, the future of success is to learn to use Lilli. I am surprised that McKinsey does not sell internships, possibly using a Ticketmaster-type system.
Several observations:
- As Lilli gets better or is replaced by a more cost efficient system, interns and newly hired professionals will be replaced by smart software.
- McKinsey and other blue-chip outfits will embrace smart software because it can sell what the firm learns to its clients. AI becomes a Petri dish for finding marketable information.
- The hallucinative functions of smart software just create an opportunity for McKinsey and other blue-chip firms to sell their surviving professionals at a more inflated fee. Why fail and lose money? Just pay the consulting firm, sidestep the stupidity tax, and crush those competitors to whom the consulting firms sell the cookie cutter knowledge.
Net net: Blue-chip firms survived the threat from gig consultants and the Gerson Lehrman-type challenge. Now McKinsey is positioning itself to create a no-expectation environment for new hires, cut costs, and increase billing rates for the consultants at the top of the pyramid. Forget opioids. Go AI.
Stephen E Arnold, June 21, 2024
DeepMind Is Going to Make Products, Not Science
June 18, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Crack that Google leadership whip. DeepMind is going to make products. Yes, just like that. I am easily confused. I thought Google consolidated its smart software efforts. I thought Dr. Jeffrey Dean did a lateral arabesque, making way for new leadership. The company had new marching orders, issued under the calming light of a Red Alert: hair on fire, with OpenAI and Microsoft anointed the new Big Dogs.
From Google DeepMind to greener pastures. Thanks, OpenAI art thing.
Now I learn from “Google’s DeepMind Shifting From Research Powerhouse To AI Product Giant, Redefining Industry Dynamics”:
Alphabet Inc.’s subsidiary Google DeepMind has decided to transition from a research lab to an AI product factory. This move could potentially challenge the company’s long-standing dominance in foundational research… Google DeepMind has merged its two AI labs to focus on developing commercial services. This strategic change could potentially disrupt the company’s traditional strength in fundamental research.
From wonky images of the US founding fathers to weird outputs which appear to be indicative of Google’s smart software and its knowledge of pizza cheese interaction, the company seems to be struggling. To further complicate matters, Google’s management finesse created this interesting round of musical chairs:
…the departure of co-founder Mustafa Suleyman to Microsoft in March adds another layer of complexity to DeepMind’s journey. Suleyman’s move to Microsoft, where he has described his experience as “truly transformational,” indicates the competitive and dynamic nature of the AI industry.
Several observations:
- Microsoft seems to be suffering the AI wobblies. The more it tries to stabilize its AI activities, the more unstable the company seems to be.
- Who is in charge of AI at Google?
- Has Google turned off the blinking red and yellow alert lights and settled into what might be called low-lumen normalcy?
However, Google’s thrashing may not matter. OpenAI cannot get its system to stay online. Microsoft has a herd of AI organizations to manage and has managed to create a huge PR gaffe with its “smart” Recall feature. Apple deals in “to be” smart products and wants to work with everyone just without paying.
Net net: Is Google representative of the unraveling of the Next Big Thing?
Stephen E Arnold, June 18, 2024
Palantir: Fear Is Good. Fear Sells.
June 18, 2024
President Eisenhower may not have foreseen AI when he famously warned of the military-industrial complex, but certain software firms certainly fit the bill. One of the most successful, Palantir, is pursuing Madison Avenue type marketing with a message of alarm. The company’s co-founder, Alex Karp, is quoted in the fear-mongering post at right-wing Blaze Media, “U.S. Prepares for War Amid Growing Tensions that China Could Invade Taiwan.”
After several paragraphs of panic over tensions between China and Taiwan, writer Collin Jones briefly admits “It is uncertain if and when the Chinese president will deploy an attack against the small country.” He quickly pivots to the scary AI arms race, intimating Palantir and company can save us as long as we let (fund) them. The post concludes:
“Palantir’s CEO and co-founder Alex Karp said: ‘The way to prevent a war with China is to ramp up not just Palantir, but defense tech startups that produce software-defining weapons systems that scare the living F out of our adversaries.’ Karp noted that the U.S. must stay ahead of its military opponents in the realm of AI. ‘Our adversaries have a long tradition of being not interested in the rule of law, not interested in fairness, not interested in human rights and on the battlefield. It really is going to be us or them. You do not want a world order where our adversaries try to define new norms. It would be very bad for the world, and it would be especially bad for America,’ Karp concluded.”
Wow. But do such scare tactics work? Of course they do. For instance, we learn from DefenseScoop, “Palantir Lands $480M Army Contract for Maven Artificial Intelligence Tech.” That article reports on not one but two Palantir deals: the titular Maven expansion and, we learn:
“The company was recently awarded another AI-related deal by the Army for the next phase of the service’s Tactical Intelligence Targeting Access Node (TITAN) ground station program, which aims to provide soldiers with next-generation data fusion and deep-sensing capabilities via artificial intelligence and other tools. That other transaction agreement was worth $178 million.”
Those are just two recent examples of Palantir’s lucrative government contracts, ones that have not, as of this writing, been added to this running tally. It seems the firm has found its winning strategy. Ramping up tensions between world powers is a small price to pay for significant corporate profits, apparently.
Cynthia Murrell, June 18, 2024
A Fancy Way of Saying AI May Involve Dragons
June 14, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The essay “What Apple’s AI Tells Us: Experimental Models” makes clear that pinning down artificial intelligence is proving to be more difficult than some anticipated in January 2023, when Google’s Red Alert squawked and many people said, “AI is the silver bullet I want for my innovation cannon.”
Image source: https://www.geographyrealm.com/here-be-dragons/
Here’s a sentence I found important in the One Useful Thing essay:
What is worth paying attention to is how all the AI giants are trying many different approaches to see what works.
The write up explains the different approaches to AI that the author has identified. These are:
- Apps
- Business models with subscription fees
The essay concludes with a specter “haunting AI.” The write up says:
I do not know if AGI [artificial general intelligence] is achievable, but I know that the mere idea of AGI being possible soon bends everything around it, resulting in wide differences in approach and philosophy in AI implementations.
Today’s smart software environment has an upside other than the money churn the craziness vortices generate:
Having companies take many approaches to AI is likely to lead to faster adoption in the long term. And, as companies experiment, we will learn more about which sets of models are correct.
Several observations are warranted.
First, the confessions of McKinsey’s AI team make it clear that smart outfits may not know what they are doing. The firms just plunge forward and then after months of work recycle the floundering into lessons. Presumably these lessons are “hire McKinsey.” See my write up “What Is McKinsey & Co. Telling Its Clients about AI?”
Second, another approach is to use AI in the hopes that staff costs can be reduced. I think this is the motivation of some AI enthusiasts. PwC (I am not sure if it is a consulting firm, an accounting firm, or some 21st century mutation) fell in lust with OpenAI. Not only did the firm kick OpenAI’s tires, PwC signed up to be what’s called an “enterprise reseller.” A client pays PwC to just make something work. In this case, PwC becomes the equivalent of a fix it shop with a classy address and workers with clean fingernails. The motivation, in my opinion, is cutting staff. “PwC Is Doing Quiet Layoffs. It’s a Brilliant Example of What Not to Do” says:
This is PwC in the U.K., and obviously, they operate under different laws than we do here in the United States. But in case you’re thinking about following this bad example, I asked employment attorney Jon Hyman for advice. He said, "This request would seem to fall under the umbrella of ‘protected concerted activity’ that the NLRB would take issue with. That said, the National Labor Relations Act does not apply to supervisors — defined as one with the authority to make personnel decisions using independent judgment. "Thus," he continues, "whether this specific PwC request runs afoul of the NLRA’s legal protections for employees to engage in protected concerted activity would depend on whether the laid-off employees were ‘supervisors’ under the Act."
I am a simpler person. The quiet layoffs complement the AI initiative. Quiet helps keep staff from making the connection I am suggesting. But consulting firms keep one eye on expenses and the other on partners’ profits. AI is a catalyst, not a technology.
Third, more AI fatigue write ups are appearing. One example is “The AI Fatigue: Are We Getting Tired of Artificial Intelligence?” which reports:
Hema Sridhar, Strategic Advisor for Technological Futures at the University of Auckland, says that there is a lot of “noise on the topic” so it is clear that “people are overwhelmed”. “Almost every company is using AI. Pretty much every app that you’re currently using on your phone has recently released some version with some kind of AI-feature or AI-enhanced features,” she adds. “Everyone’s using it and [it’s] going to be part of day-to-day life, so there are going to be some significant improvements in everything from how you search for your own content on your phone, to more improved directions or productivity tools that just fundamentally change the simple things you do every day that are repetitive.”
Let me reference Apple Intelligence to close this write up. Apple did not announce hardware. It talked about “to be” services. Instead of doing the Meta open source thing, the Google wrong answers with historically flawed images, or the MSFT on-again, off-again roll outs — Apple just did “to be.”
My hunch is that Apple is not cautious; its professionals know that AI products and services may be like those old maps which say, “Here be dragons.” Sailing close to the shore makes sense.
Stephen E Arnold, June 14, 2024
More on TikTok Managing the News Streams
June 14, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
TikTok does not occupy much of my day. I don’t have an account, and I am blissfully unaware of the content on the system. I have heard from those on my research team and from people who attend my lectures at law enforcement / intelligence conferences that it is an influential information conduit. I am a dinobaby, and I am not “into” video. I don’t look up information using TikTok. I don’t follow fashion trends other than those popular among other 80-year-old dinobabies. I am hopeless.
However, I did note “TikTok Users Being Fed Misleading Election News, BBC Finds.” I am mostly unaffected by the activities of King Charles and his subjects. What snagged my attention was the presence of videos which were disseminated via TikTok. These videos delivered:
content promoted by social media algorithms has found – alongside funny montages – young people on TikTok are being exposed to misleading and divisive content. It is being shared by everyone from students and political activists to comedians and anonymous bot-like accounts.
Tucked in the BBC write up was this statement:
TikTok has boomed since the last [British] election. According to media regulator Ofcom, it was the fastest-growing source of news in the UK for the second year in a row in 2023 – used by 10% of adults in this way. One in 10 teenagers say it is their most important news source. TikTok is engaging a new generation in the democratic process. Whether you use the social media app or not, what is unfolding on its site could shape narratives about the election and its candidates – including in ways that may be unfounded.
Shortly after reading the BBC item I saw in my feed (June 3, 2024) this story: “Trump Joins TikTok, the App He Once Tried to Ban.” Interesting.
Several observations are warranted:
- Does the US have a similar video channel currently disseminating information into China, the home base of TikTok and its owner? If “No,” why not? Should the US have a similar policy regarding non-US information conduits?
- Why has education in Britain failed to educate young people about obtaining and vetting information? Does the US have a similar problem?
- Have other countries fallen into the scroll and swipe deserts?
Scary.
Stephen E Arnold, June 14, 2024