What Will the Twitter Dependent Do Now?

November 7, 2022

Here’s a question comparable to Roger Penrose’s, Michio Kaku’s, and Sabine Hossenfelder’s discussion of the multiverse. (One would think that the Institute of Art and Ideas could figure out sound, but that puts high-flying discussions in a context, doesn’t it?)

What will the Twitter dependent do now?

Since I am neither Twitter dependent nor Twitter curious (twi-curious, perhaps?), I find the artifacts of Muskism interesting to examine. Let’s take one example; specifically, “Twitter, Cut in Half.” Yikes, castration by email! Not quite like the real thing, but for some, the imagery of chopping off the essence of the tweeter thing is psychologically disturbing.

Consider this statement:

After the layoffs, we asked some of the employees who had been cut what they made of the process. They told us that they had been struck by the cruelty: of ordering people to work around the clock for a week, never speaking to them, then firing them in the middle of the night, no matter what it might mean for an employee’s pregnancy or work visa or basic emotional state. More than anything they were struck by the fact that the world’s richest man, who seems to revel in attention on the platform they had made for him, had not once deigned to speak to them.

Knife cutting a quite vulnerable finger as collateral damage to major carrot chopping. Image by https://www.craiyon.com/

Cruelty. Interesting word. Perhaps it reflects on the author who sees the free amplifier of his thoughts ripped from his warm fingers? The word cut keeps the metaphor consistent: Cutting the cord, cutting the umbilical, and cutting the unmentionables. Ouch! No wonder some babies scream when slicing and cleaving ensue. Ouch ouch.

Then the law:

whether they were laid off or not, several employees we’ve spoken to say they are hiring attorneys. They anticipate difficulties getting their full severance payments, among other issues. Tensions are running high.

The flocking of the legal eagles will cut off the bright white light of twitterdom. The shadows flicker awaiting the legal LEDs to shine and light the path to justice in free and easy short messages to one’s followers. Yes, the law versus the Elon.

So what’s left of the Fail Whale’s short messaging system and its functions designed to make “real” information available on a wide range of subjects? The write up reports:

It was grim. It was also, in any number of ways, pointless: there had been no reason to do any of this, to do it this way, to trample so carelessly over the lives and livelihoods of so many people.

Was it pointless? I am hopeful that Twitter goes away. The alternatives could spit out a comparable outfit. Time will reveal whether those who must tweet will find another easy, cheap way to promote specific ideas, build a rock-star-like following, and provide a stage for performers who do more than tell jokes and chirp.

Several observations:

  1. A scramble for other ways to find, build, and keep a loyal following is underway. Will it be the China-linked TikTok? Will it be the gamer-centric Discord? Will it be a ghost web service following the Telegram model?
  2. Fear is perched on the shoulder of the Twitter dependent celebrity. What worked for Kim has worked for less well known “stars.” Those stars may wonder how the Elon volcano could ruin more of their digital constructs.
  3. Fame chasers find that the information highway now offers smaller, less well-traveled digital paths. Forget the two roads in the datasphere. The choices are difficult, time consuming to master, and may lead to dead ends or crashes on the information highway’s collector lanes.

Net net: Change is afoot. Just watch out for smart automobiles with some Elon inside.

Stephen E Arnold, November 7, 2022

Teens Prefer Apple

November 7, 2022

The 44th semi-annual Taking Stock with Teens survey from Piper Sandler asked US teenagers about their earnings, spending patterns, and brand preferences. Here is a handy infographic of the results. Marketers will find helpful guidance in this report.

Some of the findings are interesting, even for those not looking to make a buck off young people. See the post for trends in clothing, cosmetics, and food. In technology-related preferences, we found some results completely unsurprising. For example:

  • “TikTok improved as the favorite social platform (38% share) by 400 bps vs. last Spring, and SNAP was No. 2 at 30% (-100 bps vs. Spring 2022) while Instagram was No. 3 at 20% (-200 bps vs. Spring 2022)
  • Teens spend 32% of daily video consumption on Netflix (flat vs. LY) and 29% on YouTube (-200 bps vs. LY)”

We find one revelation particularly significant. It looks like Apple is on track to monopolize the cohort:

  • “87% of teens own an iPhone; 88% expect an iPhone to be their next phone; 31% of teens own an Apple Watch”

What will advertisers pay to reach this group? Answer: Lots. We anticipate a growing number of teen-focused campaigns across the Appleverse. When Apple squeezed Facebook’s ad methods, where did that delicious money flow go? Do regulators know?

Cynthia Murrell, November 7, 2022

Smart Software Is Like the Brain Because… Money, Fame, and Tenure

November 4, 2022

I enjoy reading the marketing collateral from companies engaged in “artificial intelligence.” Let me be clear. Big money is at stake. A number of companies have spreadsheet fever and have calculated the cash flow from dominating one or more of the AI markets. Examples range from synthetic dataset sales to off-the-shelf models, from black boxes which “learn” how to handle problems that stump MBAs to building control subsystems that keep aircraft which would drop like rocks without numerical recipes humming along.

“Study Urges Caution When Comparing Neural Networks to the Brain” comes with some baggage. First, the write up is in what appears to be a publication linked with MIT. I think of Jeffrey Epstein when MIT is mentioned. Why? The estimable university ignored what some believe are the precepts of higher education to take cash and maybe get invited to an interesting party. Yep, MIT. Second, the university itself has been a hotbed of smart software. Cheerleading has been heard emanating from some MIT facilities when venture capital flows to a student’s startup in machine learning or an MIT alum cashes out with a smart software breakthrough. The rah rah, I wish to note, is because of money, not nifty engineering.

The write up states:

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems. “What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.

What this means is that smart software is like the butcher near our home in Campinas, Brazil, in 1952. For Americans, the butcher’s thumb boosted the weight of the object on the scale. My mother, who was unaware of this trickery, just paid up none the wiser. A friend of our family, Adair Ricci, pointed out the trick, and he spoke with the butcher. That professional stopped gouging my mother. Mr. Ricci had what I would later learn to label as “influence.”

The craziness in the AI marketing collateral complements the trickery in many academic papers. When I read research results about AI from Google-type outfits, I assume that the finger-on-the-scale trick has been implemented. Who is going to talk? Timnit Gebru did, and look what happened. Find your future elsewhere. What about the Snorkel-type of outfit? You may want to take a “deep dive” on that issue.

Now toss in marketing. I am not limiting marketing to the art history major whose father is a venture capitalist with friends. This young expert in Caravaggio’s selection of color can write about AI. I am including the enthusiastic believers who have turned open source, widely used algorithms, and a college project into a company. The fictional thrust of PowerPoints, white papers, and speeches at “smart” software conferences are confections worthy of the Meilleur Ouvrier of smart software.

Several observations:

  1. Big players in smart software want to control the food chain: Models, datasets, software components, everything
  2. Smart software works in certain use cases. In others, not a chance. Example: Would you stand in front of a 3,000-pound smart car speeding along at 70 miles per hour, trusting the smart car to stop before striking you with 491,810 foot-pounds of energy? I would not. Would the president of MIT stare down the automobile? Hmmmm.
  3. No one “wins” by throwing water on the flaming imaginations of smart software advocates.
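
For the curious, the foot-pound figure in observation two can be sanity checked with a few lines of Python. This sketch assumes the “3,000” is vehicle weight in pounds and uses standard gravity; the small gap from the 491,810 figure above comes down to rounding choices:

```python
# Sanity check on the foot-pound figure above, assuming a 3,000 lb
# vehicle at 70 mph. In US customary units, mass is weight / g (slugs)
# and kinetic energy is (1/2) * m * v^2, in foot-pounds.

G_FT_PER_S2 = 32.174                        # standard gravity, ft/s^2
weight_lb = 3000.0                          # assumed vehicle weight
speed_ft_per_s = 70.0 * 5280.0 / 3600.0     # 70 mph is about 102.67 ft/s

mass_slugs = weight_lb / G_FT_PER_S2        # about 93.2 slugs
ke_ft_lb = 0.5 * mass_slugs * speed_ft_per_s ** 2

print(round(ke_ft_lb))                      # roughly 491,000 foot-pounds
```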

Net net: Is smart software like a brain? No, the human brain thinks in terms of tenure, money, power, and ad sales.

Stephen E Arnold, November 4, 2022

Will the Musker Keep Amplification in Mind?

November 4, 2022

In its ongoing examination of misinformation online, the New York Times tells us about the Integrity Institute‘s quest to measure just how much social media contributes to the problem in, “How Social Media Amplifies Misinformation More than Information.” Reporter Steven Lee Meyers writes:

“It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure exactly how much — and on Thursday [October 13] it began publishing results that it plans to update each week through the midterm elections on Nov. 8. The institute’s initial report, posted online, found that a ‘well-crafted lie’ will get more engagements than typical, truthful content and that some features of social media sites and their algorithms contribute to the spread of misinformation.”

In its ongoing investigation, the researchers compare the circulation of posts flagged as false by the International Fact-Checking Network to that of other posts from the same accounts. We learn:

“Twitter, the analysis showed, has what the institute called the greatest misinformation amplification factor, in large part because of its feature allowing people to share, or ‘retweet,’ posts easily. It was followed by TikTok, the Chinese-owned video site, which uses machine-learning models to predict engagement and make recommendations to users. … Facebook, according to the sample that the institute has studied so far, had the most instances of misinformation but amplified such claims to a lesser degree, in part because sharing posts requires more steps. But some of its newer features are more prone to amplify misinformation, the institute found.”

Facebook‘s video content spread lies faster than the rest of the platform, we learn, because its features lean more heavily on recommendation algorithms. Instagram showed the lowest amplification rate, while the team did not yet have enough data on YouTube to draw a conclusion. It will be interesting to see how these amplifications do or do not change as the midterms approach. The Integrity Institute shares its findings here.
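
The comparison the institute describes reduces to a simple ratio. Here is a minimal sketch with invented numbers; the function name and data shapes are mine, not the institute’s:

```python
from statistics import mean

def amplification_factor(account_posts, flagged_ids):
    """Engagement on posts flagged as false, relative to typical
    engagement on other posts from the same account."""
    flagged = [n for pid, n in account_posts if pid in flagged_ids]
    typical = [n for pid, n in account_posts if pid not in flagged_ids]
    if not flagged or not typical:
        return None                      # nothing to compare
    return mean(flagged) / mean(typical)

# Invented numbers: one flagged post far outruns the account's norm.
posts = [("a", 1200), ("b", 100), ("c", 200), ("d", 150)]
print(amplification_factor(posts, {"a"}))    # 8.0
```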

Cynthia Murrell, November 4, 2022

Just A Misunderstanding: This Is Not a Zuck Up

November 4, 2022

I read “Since Becoming Meta, Facebook’s Parent Company Has Lost US$650 Billion.” The article presents some information about how the Meta zuck up has progressed. Let me highlight three items, and I urge you to check out the source document for more data points.

  1. “more than half a trillion dollars in value lost in 2022”
  2. “From the start of 2022 to now, the company has shed 70 per cent of its value” (Now means October 28, 2022, I think.)
  3. “Apple’s changes cost Meta around $10 billion in ad revenue.”

I would mention that there are some ongoing legal probes into the exemplar of the individual who has his hands on the steering wheel of what looks to me like a LADA Kalinka. I mean, what better vehicle for dating at Harvard a few years ago?

I think that the Top Thumbs Upper at Meta (aka Facebook) has a vision. Without meaningful regulation from assorted governmental agencies, the prospect of a broken down Kalinka is either what one deserves for selecting the vehicle or a menace to others on the information highway.

I understand there is a parking space next to the MySpace 2003 Skoda Fabia.

Stephen E Arnold, November 4, 2022

Vectara: Another Run Along a Search Vector

November 4, 2022

Is this the enterprise search innovation we have been waiting for? A team of ex-Googlers have used what they learned about large language models (LLMs), natural language processing (NLP), and transformer techniques to launch a new startup. We learn about their approach in VentureBeat‘s article, “Vectara’s AI-Based Neural Search-as-a-Service Challenges Keyword-Based Searches.” The platform combines LLMs, NLP, data integration pipelines, and vector techniques into a neural network. The approach can be used for various purposes, we learn, but the company is leading with search. Journalist Sean Michael Kerner writes:

“[Cofounder Amr] Awadallah explained that when a user issues a query, Vectara uses its neural network to convert that query from the language space, meaning the vocabulary and the grammar, into the vector space, which is numbers and math. Vectara indexes all the data that an organization wants to search in a vector database, which will find the vector that has closest proximity to a user query. Feeding the vector database is a large data pipeline that ingests different data types. For example, the data pipeline knows how to handle standard Word documents, as well as PDF files, and is able to understand the structure. The Vectara platform also provides results with an approach known as cross-attentional ranking that takes into account both the meaning of the query and the returned results to get even better results.”
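
The query flow Awadallah describes can be sketched in a few lines: text is mapped from the language space into the vector space, and the closest document vector wins. The embed() below is a made-up deterministic stand-in for illustration only; Vectara’s actual models and index are not public:

```python
import math
import random

def embed(text, dim=8):
    # Hypothetical toy embedding: deterministic pseudo-random unit
    # vector seeded by the text. A real system uses a trained model.
    rng = random.Random(text)
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def nearest(query, docs):
    qv = embed(query)
    # For unit vectors, the dot product equals the cosine similarity.
    return max(docs, key=lambda d: sum(a * b for a, b in zip(qv, embed(d))))

docs = ["quarterly sales report", "employee handbook", "api reference"]
print(nearest("quarterly sales report", docs))   # exact match scores 1.0
```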

We are reminded a transformer puts each word into context for studious algorithms, relating it to other words in the surrounding text. But what about things like chemical structures, engineering diagrams, embedded strings in images? It seems we must wait longer for a way to easily search for such non-linguistic, non-keyword items. Perhaps Vectara will find a way to deliver that someday, but next it plans to work on a recommendation engine and a tool to discover related topics. The startup, based in Silicon Valley, launched in 2020 under the “stealth” name Zir AI. Recent seed funding of $20 million has enabled the firm to put on its public face and put out this inaugural product. There is a free plan, but one must contact the company for any further pricing details.

Cynthia Murrell, November 4, 2022

Robots and Trust: Humanoids and Machines Side by Side

November 3, 2022

“Tracking Trust in Human-Robot Work Interactions” presents some allegedly accurate information. Let’s take a look at a couple of statements which I found interesting:

ITEM 1:

“We found that as humans get tired, they let their guards down and become more trusting of automation than they should. However, why that is the case becomes an important question to address.”

The statement has a number of implications. My hunch is that people simply stop thinking when they are fatigued. A failure to think can have unusual consequences: fatigued professionals hitting the incorrect button, or just falling asleep and allowing a smart self-driving automobile to display its limitations.

ITEM 2:

[The] lab captured functional brain activity as operators collaborated with robots on a manufacturing task. They found faulty robot actions decreased the operator’s trust in the robots.

That smart self driving auto did drive through the day care center playground. It seems obvious that some humanoids would lose their trust in government-approved technology. The injured children are likely to evidence some care when offered a chance to ride in a smart self driving automobile as well.

ITEM 3:

The next step is to expand the research into a different work context, such as emergency response, and understand how trust in multi-human robot teams impacts teamwork and task work in safety-critical environments.

When a robot is working with a flesh and blood humanoid, the operative idea may be, “Will this gizmo hurt or kill me?”

Perhaps a Terminator style robot can offer researchers, engineers, MBAs, and penny pinching bean counters some assurances when the electronic voice says, “I am a robot. I am here to help you.”

Stephen E Arnold, November 2022

The Tweeter: Where Are the Tweeter Addicts Going?

November 3, 2022

With Instagram and TikTok becoming the go-to sources of news, what is Twitter doing to cope with these click magnets? The answer is, “Stay tuned.” In theory the saga of the Twitter thing will end soon. In the meantime, let’s consider the implications of “Exclusive: Twitter Is Losing Its Most Active Users, Internal Documents Show.” The story comes from a trusted news source (what other type of real news outfit is there?). I noted this statement in the write up:

Twitter is struggling to keep its most active users – who are vital to the business – engaged…

The write up points out:

“heavy tweeters” account for less than 10% of monthly overall users but generate 90% of all tweets and half of global revenue. Heavy tweeters have been in “absolute decline” since the pandemic began, a Twitter researcher wrote in an internal document titled “Where did the Tweeters Go?”

The story has a number of interesting factoids; for example:

  • “adult content constitutes 13% of Twitter”
  • “English-speaking users were also increasingly interested in crypto currencies …But interest in the topic has declined since the crypto price crash”
  • “Twitter is also losing a “devastating” percentage of heavy users who are interested in fashion or celebrities such as the Kardashian family.”

What about the Silicon Valley type journalists who tweet to fame and fortune? What about the text outputting Fiverr and software content creators? What about the search engine optimization wizards who do the multiple post approach to visibility?

One of the Arnold Laws of Online is that users dissipate. What this means is that a big service has magnetism. Then the magnetism weakens. The users drift away looking for another magnetic point.

The new magnetic points are:

  • Short form video services
  • Discussion groups which can be Reddit-style on the clear Web and the Dark Web. Think Mastodon and Discord.
  • Emergent super apps like Telegram-type services and specialized services hosted by “ghost” ISPs. (A selected list is available for a modest fee. Write benkent2020 at yahoo dot com if you are interested in something few are tracking.)

The original magnet does not lose its potency quickly. But once those users begin to drift off, the original attractor decays.

How similar is this to radioactive decay? It is not just similar; it is weirdly close.
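
For those who want the analogy made literal: both processes follow the exponential form N(t) = N0 · e^(−λt), with λ set by a half-life. A sketch with invented numbers; the 18-month engagement “half-life” is hypothetical:

```python
import math

def remaining_users(n0, half_life_months, months):
    """N(t) = N0 * exp(-lam * t), the same form as radioactive decay,
    with lam = ln(2) / half-life."""
    lam = math.log(2) / half_life_months
    return n0 * math.exp(-lam * months)

# Invented numbers: 1,000,000 heavy users, hypothetical 18-month half-life.
print(round(remaining_users(1_000_000, 18, 18)))   # 500000 after one half-life
print(round(remaining_users(1_000_000, 18, 36)))   # 250000 after two
```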

Stephen E Arnold, November 3, 2022

Meet TOBOR: The CFO Which Never Stops Calculating Your Value

November 3, 2022

Robot coworkers make us uncomfortable, apparently. Who knew? ScienceDaily reports, “Robots in Workplace Contribute to Burnout, Job Insecurity.” The good news, we are told, is that simple self-affirmation exercises can help humans get past such fears. The write-up cites research from the American Psychological Association, stating:

“Working with industrial robots was linked to greater reports of burnout and workplace incivility in an experiment with 118 engineers employed by an Indian auto manufacturing company. An online experiment with 400 participants found that self-affirmation exercises, where people are encouraged to think positively about themselves and their uniquely human characteristics, may help lessen workplace robot fears. Participants wrote about characteristics or values that were important to them, such as friends and family, a sense of humor or athletics. ‘Most people are overestimating the capabilities of robots and underestimating their own capabilities,’ [lead researcher Kai Chi] Yam said.”

Yam suspects ominous media coverage about robots replacing workers is at least partially to blame for the concern. Yeah, that tracks. The write-up continues:

“Fears about job insecurity from robots are common. The researchers analyzed data about the prevalence of robots in 185 U.S. metropolitan areas along with the overall use of popular job recruiting sites in those areas (LinkedIn, Indeed, etc.). Areas with the most prevalent rates of robots also had the highest rates of job recruiting site searches, even though unemployment rates weren’t higher in those areas.”

Researchers suggest this difference may be because workers in those areas are afraid of being replaced by robots at any moment, though they allow other factors could be at play. So just remember—if you become anxious a robot is after your job, just remind yourself what a capable little human you are. Technology is our friend, even if it makes us a bit nervous.

Cynthia Murrell, November 3, 2022

Smart Software: A Trivial Shortcoming, Really Nothing

November 3, 2022

Word problems are tricky for AI language models. If you have trouble with word problems yourself, rest assured you are in good company. Machine-learning researchers have only recently made significant progress teaching algorithms the concept. IEEE Spectrum reports, “AI Language Models Are Struggling to ‘Get’ Math.” Writer Dan Garisto states:

“Until recently, language models regularly failed to solve even simple word problems, such as ‘Alice has five more balls than Bob, who has two balls after he gives four to Charlie. How many balls does Alice have?’ ‘When we say computers are very good at math, they’re very good at things that are quite specific,’ says Guy Gur-Ari, a machine-learning expert at Google. Computers are good at arithmetic—plugging numbers in and calculating is child’s play. But outside of formal structures, computers struggle. Solving word problems, or ‘quantitative reasoning,’ is deceptively tricky because it requires a robustness and rigor that many other problems don’t.”

Researchers threw a couple datasets with thousands of math problems at their language models. The students still failed spectacularly. After some tutoring, however, Google’s Minerva emerged as a star pupil, having achieved 78% accuracy. (Yes, the grading curve is considerable.) We learn:

“Minerva uses Google’s own language model, Pathways Language Model (PaLM), which is fine-tuned on scientific papers from the arXiv online preprint server and other sources with formatted math. Two other strategies helped Minerva. In ‘chain-of-thought prompting,’ Minerva was required to break down larger problems into more palatable chunks. The model also used majority voting—instead of being asked for one answer, it was asked to solve the problem 100 times. Of those answers, Minerva picked the most common answer.”
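
The majority-voting step is simple enough to sketch. The noisy_solver below is a hypothetical stand-in for a language model that answers the balls problem (Bob has two, so Alice has seven) correctly only 60 percent of the time:

```python
import random
from collections import Counter

def majority_vote(solver, problem, samples=100):
    """Ask for many candidate answers; keep the most common one."""
    answers = [solver(problem) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical stand-in for a language model: returns the right answer
# (7) only 60% of the time, and an unreliable guess otherwise.
_rng = random.Random(0)
def noisy_solver(problem):
    return 7 if _rng.random() < 0.6 else _rng.randint(0, 9)

problem = "Alice has five more balls than Bob, who has two after giving four to Charlie."
print(majority_vote(noisy_solver, problem))   # 7
```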

Not a practical approach for your average college student during an exam. Researchers are still not sure how much Minerva and her classmates understand about the answers they are giving, especially since the more problems they solve the fewer they get right. Garisto notes language models “can have strange, messy reasoning and still arrive at the right answer.” That is why human students are required to show their work, so perhaps this is not so different. More study is required, on the part of both researchers and their algorithms.

Cynthia Murrell, November 3, 2022
