How to Garner Attention from X.com: The Guardian Method Seems Infallible
January 24, 2025
Prepared by a still-alive dinobaby.
The Guardian has revealed its secret to getting social media attention from Twitter (now the X). “‘Just the Start’: X’s New AI Software Driving Online Racist Abuse, Experts Warn” makes the process dead simple. Here are the steps:
- Publish a diatribe about the power of social media in general with specific references to the Twitter machine
- Use name calling to add some clickable bound phrases; for example, “online racism”, “fake images”, and “naked hate”
- Use loaded words to describe images; for example, an athlete “who is black, picking cotton while another shows that same player eating a banana surrounded by monkeys in a forest.”
Bingo. Instantly clickable.
The write up explains:
Callum Hood, the head of research at the Center for Countering Digital Hate (CCDH), said X had become a platform that incentivised and rewarded spreading hate through revenue sharing, and AI imagery made that even easier. “The thing that X has done, to a degree that no other mainstream platform has done, is to offer cash incentives to accounts to do this, so accounts on X are very deliberately posting the most naked hate and disinformation possible.”
This is a recipe for attention and clicks. Will the Guardian be able to convert the magnetism of the method into cash money?
Stephen E Arnold, January 24, 2025
Amazon: Twitch Is Looking a Bit Lame
January 24, 2025
Are those 30-second ads driving away viewers? Are the bans working to alienate creators and their fans? Is Amazon going to innovate in streaming?
These are questions Amazon needs to answer in a way that is novel and actually works.
Twitch is an online streaming platform primarily used by gamers to stream their play sessions and interact with their fanbase. There hasn’t been much news about Twitch in recent months, and it could be due to declining viewership. Tube Filter dives into the details with “Is Twitch Viewership At Its Lowest Point In Four Years?”
The article explains that Twitch had a total of 1.58 billion watch time hours in December 2024. This was its lowest month in four years, according to Streams Charts. Twitch, however, did have a small increase in new streamers joining the platform and the number of channels live at one time. Streams Charts did mention that December is a slow month due to the holiday season. Twitch is dealing with dire financial straits and made users upset when it used AI to make emotes.
Here are some numbers:
“In both October and November 2024, around 89,000 channels on average would be live on Twitch at any one time. In December, that figure pushed up to 92,392. Twitch also saw a bump in the overall number of active channels from 4,490,725 in November to 4,777,395 in December—a 6% increase. [I]t’s important to note that other key metrics for both viewer and streamer activity remain strong,” it wrote in a report about December’s viewership. “A positive takeaway from December was the variety of content on offer. Streamers broadcasted in 43,200 different categories, the highest figure of the year, second only to March.”
Streams Charts notes that all these streamers broadcast a more diverse range of content than usual.
Twitch is also courting TikTok creators in case the US federal government bans the short video streaming platform. The platform has offerings that streamers want, but it needs to do more to attract more viewers.
Whitney Grace, January 24, 2025
And the Video Game Struggler for 2024 Is… Video Games
January 24, 2025
Yep, 2024 was the worst year for videogames since 1983.
Videogames are still a young medium, but they’re over fifty years old. The gaming industry has seen ups and downs with the first (and still legendary) being the 1983 crash. Arcade games were all the rage back then, but these days consoles and computers have the action. At least, they should.
Wired writes that “2024 Was The Year The Bottom Fell Out Of The Games Industry” for multiple reasons. There were massive layoffs in 2023, with over 10,000 game developers losing their jobs. Some of this was attributed to AI slowly replacing developers. The gaming industry’s job loss in 2024 was forty percent higher than the prior year. Yikes!
DEI (diversity, equity, and inclusion) combined with woke mantra was also blamed for the failure of many games, including Suicide Squad: Kill the Justice League. The phrase “go woke, go broke” echoed throughout the industry as it has in Hollywood, Silicon Valley, and other fields. I noted:
“According to Matthew Ball, an adviser and producer in the games and TV space…says that the blame for all of this can’t be pinned to a single thing, like capitalism, mismanagement, Covid-19, or even interest rates. It also involves development costs, how studios are staffed, consumers’ spending habits, and game pricing. “This storm is so brutal,” he says, ‘because it is all of these things at once, and none have really alleviated since the layoffs began.’”
Many indie studios were shuttered, and large tech leaders such as Microsoft and Sony shut down parts of their gaming divisions. There was also a chain of events, influenced by the hatred of DEI and its associated mindsets, that is being called a second GamerGate.
The gaming industry will continue through the beginnings of 2025 with business as usual. The industry will bounce back, but it will be different than the past.
Whitney Grace, January 24, 2025
AI Will Doom You to Poverty Unless You Do AI to Make Money
January 23, 2025
Prepared by a still-alive dinobaby.
I enjoy reading snippets of the AI doomsayers. Some spent too much time worrying about the power of Joe Stalin’s approach to governing. Others just watched the Terminator series instead of playing touch football. A few “invented” AI by cobbling together incremental improvements in statistical procedures lashed to ever-more-capable computing infrastructures. A couple of these folks know that Nostradamus became a brand and want to emulate that predictive master.
I read “Godfather of AI Explains How Scary AI Will Increase the Wealth Gap and Make Society Worse.” That is a snappy title. Whoever wrote it crafted the idea of an explainer to fear. Plus, the click bait explains that homelessness is for you too. Finally, it presents a trope popular among the elder care set. (Remember, please, that I am a dinobaby myself.) Prod a group of senior citizens to a dinner and you will hear, “Everything is broken.” Also, “I am glad I am old.” Then there is the ever popular, “Those tattoos! The check out clerks cannot make change! I don’t understand commercials!” I like to ask, “How many wars are going on now? Quick.”
Two robots plan a day trip to see the street people in Key West. Thanks, You.com. I asked for a cartoon; I get a photorealistic image. I asked for a coffee shop; I get a weird carnival setting. Good enough. (That’s why I am not too worried.)
Is society worse than it ever was? Probably not. I have had an opportunity to visit a number of countries, go to college, work with intelligent (for the most part) people, and read books whilst sitting on the executive mailing tube. Human behavior has been consistent for a long time. Indigenous people did not go to Wegman’s or Whole Paycheck. Some herded animals toward a cliff. Others harvested the food and raw materials from the dead bison at the bottom of the cliff. There were no unskilled change makers at this food delivery location.
The write up says:
One of the major voices expressing these concerns is the ‘Godfather of AI’ himself Geoffrey Hinton, who is viewed as a leading figure in the deep learning community and has played a major role in the development of artificial neural networks. Hinton previously worked for Google on their deep learning AI research team ‘Google Brain’ before resigning in 2023 over what he expresses as the ‘risks’ of artificial intelligence technology.
My hunch is that, like me, he “worked at” Google for a good reason — money. Having departed from the land of volleyball and weird empty office buildings, Geoffrey Hinton is in the doom business. His vision is that there will be more poverty. There’s some poverty in Soweto and the other townships in South Africa. The slums of Rio are no Palm Springs. Rural China is interesting as well. Doesn’t everyone want to run a business from the area in front of a wooden structure adjacent to an empty highway to nowhere? Sounds like there is some poverty around, doesn’t it?
The write up reports:
“We’re talking about having a huge increase in productivity. So there’s going to be more goods and services for everybody, so everybody ought to be better off, but actually it’s going to be the other way around. “It’s because we live in a capitalist society, and so what’s going to happen is this huge increase in productivity is going to make much more money for the big companies and the rich, and it’s going to increase the gap between the rich and the people who lose their jobs.”
The fix is to get rid of capitalism. The alternative? Kumbaya or a better version of those fun dudes Marx, Lenin, and Mao. I stayed in the “last” fancy hotel the USSR built in Tallinn, Estonia. News flash: The hotels near LaGuardia are quite a bit more luxurious.
The godfather then evokes the robot that wanted to kill a rebel. You remember this character. He said, “I’ll be back.” Of course, you will. Hollywood does not do originals.
The write up says:
Hinton’s worries don’t just stop at the wealth imbalance caused by AI too, as he details his worries about where AI will stop following investment from big companies in an interview with CBC News: “There’s all the normal things that everybody knows about, but there’s another threat that’s rather different from those, which is if we produce things that are more intelligent than us, how do we know we can keep control?” This is a conundrum that has circulated the development of robots and AI for years and years, but it’s seeming to be an increasingly relevant proposition that we might have to tackle sooner rather than later.
Yep, doom. The fix is to become an AI wizard, work at a Google-type outfit, cash out, and predict doom. It is a solid career plan. Trust me.
Stephen E Arnold, January 23, 2025
Teenie Boppers and Smart Software: Yep, Just Have Money
January 23, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I scanned the research summary “About a Quarter of U.S. Teens Have Used ChatGPT for Schoolwork – Double the Share in 2023.” Like other Pew data, the summary contained numerous numbers. I was not sufficiently motivated to dig into the methodology to find out how the sample was assembled nor how Pew got the mobile addicted youth to provide presumably truthful answers to direct questions. But why nitpick? We are at the onset of an interesting year which will include forthcoming announcements about how algorithms are agentic and able to fuel massive revenue streams for those in the know.
Students doing their homework while their parents play polo. Thanks, MSFT Copilot. Good enough. I do like the croquet mallets and volleyball too. But children from well-to-do families have such items in abundance.
Let’s go to the videotape, as the late and colorful Warner Wolf once said to his legion of Washington, DC, fans.
One of the highlights of the summary was this finding:
Teens who are most familiar with ChatGPT are more likely to use it for their schoolwork. Some 56% of teens who say they’ve heard a lot about it report using it for schoolwork. This share drops to 18% among those who’ve only heard a little about it.
Not surprisingly, the future leaders of America embrace shortcuts. The question is, “How quickly will awareness reach 99 percent and usage nose above 75 percent?” My guesstimate is pretty quickly. Convenience and more time to play with mobile phones will drive the adoption. Who in America does not like convenience?
Another finding catching my eye was:
Teens from households with higher annual incomes are most likely to say they’ve heard about ChatGPT. The shares who say this include 84% of teens in households with incomes of $75,000 or more say they’ve heard at least a little about ChatGPT.
I found this interesting because it appears to suggest that if a student comes from a home where money does not seem to be a huge problem, the industrious teens are definitely aware of smart software. And when it comes to using the digital handmaiden, Pew finds apparently nothing. There is no data point relating richer progeny with greater use. Instead we learned:
Teens who are most familiar with the chatbot are also more likely to say using it for schoolwork is OK. For instance, 79% of those who have heard a lot about ChatGPT say it’s acceptable to use for researching new topics. This compares with 61% of those who have heard only a little about it.
My thought is that more wealthy families are more likely to have teens who know about smart software. I would hypothesize that wealthy parents will pay for the more sophisticated smart software and smile benignly as the future intelligentsia stride confidently to ever brighter futures. Those without the money will get the opportunity to watch their classmates have more time for mobile phone scrolling, unboxing Amazon deliveries, and grabbing burgers at Five Guys.
I am not sure that the link between wealth and access to learning experiences is a random, one-off occurrence. If I am correct, the Pew data suggest that smart software is not reinforcing democracy. It seems to be making a digital Middle Ages more and more probable. But why think about what a dinobaby hypothesizes? It is tough to scroll zippy mobile phones with old paws and yellowing claws.
Stephen E Arnold, January 23, 2025
Yo, MSFT-Types, Listen Up
January 23, 2025
Developers concerned about security should check out “Seven Types of Security Issues in Software Design” at InsBug. The article does leave out a few points we would have included. Using Microsoft software, for example, or paying for cyber security solutions that don’t work as licensees believe. And don’t forget engineering for security rather than expediency and cost savings. Nevertheless, the post makes some good points. It begins:
“Software is gradually defining everything, and its forms are becoming increasingly diverse. Software is no longer limited to the applications or apps we see on computers or smartphones. It is now an integral part of hardware devices and many unseen areas, such as cars, televisions, airplanes, warehouses, cash registers, and more. Besides sensors and other electronic components, the actions and data of hardware often rely on software, whether in small amounts of code or in hidden or visible forms. Regardless of the type of software, the development process inevitably encounters bugs that need to be identified and fixed. While major bugs are often detected and resolved before release or deployment by developers or testers, security vulnerabilities don’t always receive the same attention.”
Sad but true. The seven categories include: Misunderstanding of Security Protection Technologies; Component Integration and Hidden Security Designs; Ignoring Security in System Design; Security Risks from Poor Exception Handling; Discontinuous or Inconsistent Trust Relationships; Over-Reliance on Single-Point Security Measures; and Insufficient Assessment of Scenarios or Environments. See the write-up for details on each point. We note a common thread—a lack of foresight. The post concludes:
“To minimize security risks and vulnerabilities in software design and development, one must possess solid technical expertise and a robust background in security offense and defense. Developing secure software is akin to crafting fine art — it requires meticulous thought, constant consideration of potential threats, and thoughtful design solutions. This makes upfront security design critically important.”
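To make one of the seven categories concrete, here is a minimal Python sketch of the “poor exception handling” item. It is our own illustration, not code from the InsBug post, and the process_payment helper and its error string are hypothetical stand-ins: the leaky handler echoes internal details back to the caller, while the safer one logs them server-side and returns an opaque message.

```python
# A minimal sketch of the "poor exception handling" category; all names are
# hypothetical stand-ins, not code from the InsBug post.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payments")


def process_payment(order_id: str) -> None:
    # Stand-in for a real payment call; it raises so the handlers below have
    # something to catch.
    raise RuntimeError(f"db host=10.0.0.5 table=cards order={order_id} timeout")


def charge_card_leaky(order_id: str) -> str:
    # Risky pattern: the raw exception text (hosts, table names, internals)
    # goes straight back to the caller, which is free reconnaissance for an attacker.
    try:
        process_payment(order_id)
        return "ok"
    except Exception as exc:
        return f"error: {exc}"


def charge_card_safe(order_id: str) -> str:
    # Safer pattern: log the specifics server-side, return an opaque message,
    # and never let a broad except swallow the failure silently.
    try:
        process_payment(order_id)
        return "ok"
    except Exception:
        logger.exception("payment failed for order %s", order_id)
        return "error: payment could not be completed"


if __name__ == "__main__":
    print(charge_card_leaky("A-100"))  # leaks internal details to the client
    print(charge_card_safe("A-100"))   # generic message; specifics go to the log
```

The design point is the one the post keeps returning to: decide up front what an exception is allowed to reveal, rather than patching the leak after a penetration test finds it.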
Security should not be an afterthought. What a refreshing perspective.
Cynthia Murrell, January 23, 2025
AI: Yes, Intellectual Work Will Succumb, Just Sooner Rather Than Later
January 22, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Has AI innovation stalled? Nope. “It’s Getting Harder to Measure Just How Good AI Is Getting” explains:
OpenAI’s end-of-year series of releases included their latest large language model (LLM), o3. o3 does not exactly put the lie to claims that the scaling laws that used to define AI progress don’t work quite that well anymore going forward, but it definitively puts the lie to the claim that AI progress is hitting a wall.
Okay, that proves that AI is hitting the gym and getting pumped.
However, the write up veers into an unexpected calcified space:
The problem is that AIs have been improving so fast that they keep making benchmarks worthless. Once an AI performs well enough on a benchmark we say the benchmark is “saturated,” meaning it’s no longer usefully distinguishing how capable the AIs are, because all of them get near-perfect scores.
What is wrong with the lack of benchmarks? Nothing. Smart software is probabilistic. How accurate is the weather forecast? Ask a wonk at the National Weather Service and you get quite optimistic answers. Ask a child whose birthday party at the park was rained out on a day Willie the Weatherman said it would be sunny, and you get a different answer.
Okay, forget measurements. Here’s what the write up says will happen, and the prediction sounds really rock solid just like Willie the Weatherman:
The way AI is going to truly change our world is by automating an enormous amount of intellectual work that was once done by humans…. Like it or not (and I don’t really like it, myself; I don’t think that this world-changing transition is being handled responsibly at all) none of the three are hitting a wall, and any one of the three would be sufficient to lastingly change the world we live in.
Follow the argument? I must admit that jumping from getting good, to an inability to measure “good,” to humans being replaced because AI can do intellectual work is quite a journey. Perhaps I am missing something, but:
- Just because people outside of research labs have smart software that seems to be working like a smart person, what about those hallucinatory outputs? Yep, today’s models make stuff up because probability dictates the output
- Use cases for smart software doing “intellectual work” are where in the write up? They aren’t, because Vox doesn’t have any which are comfortable for journalists and writers who can be replaced by the SEO AIs advertised on Telegram search engine optimization channels or by marketers writing for Forbes Magazine. That’s right. Excellent use cases are smart software killing jobs once held by fresh MBAs or newly minted CFAs. Why? Cheaper, and as long as the models are “good enough” to turn a profit, let ‘em rip. Yahoooo.
- Smart software is created by humans, and humans shape what it does, how it is deployed, and care not a whit about the knock-on effects. Technology operates in the hands of humans. Humans are deeply flawed entities. Mother Teresas are outnumbered by street gangs in Reno, Nevada, based on my personal observations of that fine city.
Net net: Vox, which can and will be replaced by a cheaper and good enough alternative, doesn’t want to raise that issue. Instead, Vox wanders around the real subject. That subject is that as those who drive AI figure out how to use what’s available and good enough, certain types of work will be pushed into the black boxes of smart software. Could smart software have written this essay? Yes. Could it have done a better job? Publications like the supremely weird Buzzfeed and some consultants I know sure like “good enough.” As long as it is cheap, AI is a winner.
Stephen E Arnold, January 22, 2025
Microsoft and Its Me-Too Interface for Bing Search
January 22, 2025
Bing will never be Google, but Microsoft wants its search engine to dominate queries. Microsoft Bing has a small percentage of Internet searches and in a bid to gain more traction it has copied Google’s user interface (UI). Windows Latest spills the tea over the UI copying: “Microsoft Bing Is Trying To Spoof Google UI When People Search Google.com.”
Google’s UI is very distinctive with its minimalist approach. The only items on the Google UI are the query box and the menus along the top and bottom of the page. Microsoft Edge is Microsoft’s Web browser, and it is programmed to use Bing. In a sneaky (and genius) move, when Edge users type Google into the Bing search box, they are taken to a UI that is strangely Google-esque. Microsoft is trying this new UI to lower the Bing bounce rate, that is, the share of users who leave.
Is it an effective tactic?
“But you might wonder how effective this idea would be. Well, if you’re a tech-savvy person, you’ll probably realize what’s going on, then scroll and open Google from the link. However, this move could keep people on Bing if they just want to use a search engine. Google is the number one search engine, and there’s a large number of users who are just looking for a search engine, but they think the search engine is Google. In their mind, the two are the same. That’s because Google has become a synonym for search engines, just like Chrome is for browsers. A lot of users don’t really care what search engine they’re using, so Microsoft’s new practice, which might appear stupid to some of you, is likely very effective.”
For unobservant users and/or those who don’t care, it will work. Microsoft is also tugging on heartstrings with another tactic:
“On top of it, there’s also an interesting message underneath the Google-like search box that says “every search brings you closer to a free donation. Choose from over 2 million nonprofits.” This might also convince some people to keep using Bing.”
What a generous and genius interface innovation. We’re not sure this is the interface everyone sees, but we love the me-too approach from user-centric big tech outfits.
Whitney Grace, January 22, 2025
And 2024, a Not-So-Wonderful Year
January 22, 2025
Every year has tech failures, and some of them join the zeitgeist as cultural phenomena like Windows Vista, Windows Me, Apple’s Pippin game console, chatbots, etc. PC Mag runs down the flops in: “Yikes: Breaking Down the 10 Biggest Tech Fails of 2024.” The list starts with Intel’s horrible year, with its booted CEO and poor chip performance. It follows up with the Salt Typhoon hack that proved (not that we didn’t already know it with TikTok) China is spying on every US citizen with a focus on bigwigs.
National Public Data lost 272 million social security numbers to a hacker. That was a great day in summer for the hacker, but the summer travel season became a nightmare when a faulty CrowdStrike kernel update grounded over 2,700 flights and practically locked down the US borders. Microsoft’s Recall, an AI search tool that took snapshots of user activity that could be recalled later, was a concern. What if passwords and other sensitive information were recorded?
The fabulous Internet Archive was hacked and taken down by a bad actor to protest the Israel-Gaza conflict. It makes us worry about preserving Internet and other important media history. Rabbit and Humane released AI-powered hardware that was supposed to be a hands free way to use a digital assistant, but they failed. JuiceBox ended software support on its EV car chargers, while Scarlett Johansson’s voice was stolen by OpenAI for its Voice Mode feature. She sued.
The worst of the worst is this:
“Days after he announced plans to acquire Twitter in 2022, Elon Musk argued that the platform needed to be “politically neutral” in order for it to “deserve public trust.” This approach, he said, “effectively means upsetting the far right and the far left equally.” In March 2024, he also pledged to not donate to either US presidential candidate, but by July, he’d changed his tune dramatically, swapping neutrality for MAGA hats. “If we want to preserve freedom and a meritocracy in America, then Trump must win,” Musk tweeted in September. He seized the @America X handle to promote Trump, donated millions to his campaign, shared doctored and misleading clips of VP Kamala Harris, and is now working closely with the president-elect on an effort to cut government spending, which is most certainly a conflict of interest given his government contracts. Some have even suggested that he become Speaker of the House since you don’t have to be a member of Congress to hold that position. The shift sent many X users to alternatives like Bluesky, Threads, and Mastodon in the days after the US election.”
Let’s assume PC Mag is on the money. Will the influence of the Leonardo da Vinci of modern times make everything better? Absolutely. I mean the last SpaceX rocket almost worked. No Tesla has exploded in my neighborhood this week. Perfect.
Whitney Grace, January 22, 2025
Why Ghost Jobs? Answer: Intelligence
January 21, 2025
Prepared by a still-alive dinobaby.
A couple of years ago, an intelware outfit’s US “president” contacted me. He was curious about the law enforcement and intelligence markets’ appetite for repackaged Maltego, some analytics, and an interface with some Palantir-type bells and whistles. I explained that I charged money to talk because, as a former blue-chip consultant, billing is in my blood. I don’t have platelets. I have Shrinky-dink invoices. Add some work, and these Shrinky-dinks blow up to big juicy invoices. He disconnected.
A few weeks later, he sent me an email. He wanted to pick up our conversation because the other people he called, who he thought knew something about selling software to the US government, did not understand that his company emerged from a spy shop. I was familiar with the issues: Non-US company, ties to a high-power intelligence operation, an inability to explain whether the code was secure, and the charming attitude of many intelligence professionals who go from A to B without much thought about some social conventions.
The fellow wanted to know how one could obtain information about a competitor; specifically, what was the pricing spectrum. It is too bad the owner of the company dumped the start up and headed to the golf course. If that call came to me today, I would point him at this article: “1 in 5 Online Job Postings Are Either Fake or Never Filled, Study Finds.” Gizmodo has explained one reason why there are so many bogus jobs offering big bogus salaries and promising big bogus benefits.
The answer is obvious when viewed from my vantage point in rural Kentucky. The objective is to get a pile of résumés, then filter through them looking for people who might have some experience (current or past) at a company of interest to the job advertiser. What? Isn’t that illegal? I don’t know, but the trick has been used for a long, long time. Headhunting is a tricky business, and it is easy for someone to post a job opening and gather information from individuals who want to earn money.
What’s the write up say?
The Wall Street Journal cites internal data from the hiring platform Greenhouse that shows one in five online job postings—or between 18% and 22% of jobs advertised—are either fake or never filled. That data was culled from Greenhouse’s proprietary information, which the company can access because it sells automated software that helps employers fill out job postings. The “ghost job” phenomenon has been growing for some time—much to the vexation of job-seekers.
Okay, snappy. Ghost jobs. But the number seems low to me.
The article fails to note the intelligence angle, however. It concludes:
The plague of such phantom positions has led some platforms to treat job postings in very much the same way that other online content gets treated: as either A) verified or B) potential misinformation. Both Greenhouse and LinkedIn now supply a job verification service, the Journal writes, which allows users to know whether a position is legit or not. “It’s kind of a horror show,” Jon Stross, Greenhouse’s president and co-founder, told the Journal. “The job market has become more soul-crushing than ever.”
I think a handful of observations may be warranted:
- Somehow the education of a job seeker has ignored the importance of making sure that the résumé is sanitized so no information is provided to an unknown entity from whom there is likely to be zero response. Folks, this is data collection. Volume is good.
- Interviews are easier than ever. Fire up Zoom and hit the record button. The content of the interview can be reviewed and analyzed for tasty little info-nuggets.
- The process is cheap, easy, and safe. Getting some information can be quite tricky. Post an advertisement on a service and wait. Some podcasts brag about how many responses their help wanted ads generate in as little as a few hours. As I said, cheap, easy, and safe.
What can a person do to avoid this type of intelligence gathering activity? Sorry. I have some useful operational information, but those little platelet sized invoices are just so eager to escape this dinobaby’s body. What’s amazing is that this ploy is news just as it was to the intelware person who was struggling to figure out some basics about selling to the government. Recycling open source software and pretending that it was an “innovation” was more important than trying to hire a former US government procurement officer, based in the DC area with a minimum of 10 years in software procurement. We have a situation where professional intelligence officers, job seekers, and big time journalists have the same level of understanding about how to obtain high-value information quickly and easily. Amazing what a dinobaby knows, isn’t it?
Stephen E Arnold, January 21, 2025