IBM: A Management Beacon Shines Brightly
May 17, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
To be frank, I don’t know if the write up called “IBM Sued Again for Alleged Discrimination. This Time Against White Males” is on the money. I don’t really care. The item is absolutely delicious. For context, older employees were given an opportunity to train their replacements and then find their future elsewhere. I think someone told me that was “age discrimination.” True or not, a couple of interesting Web sites disappeared. These reported on the hilarious personnel management policies in place at Big Blue during the sweep of those with silver hair. Hey, as a dinobaby, I know getting older adds a cost burden to outfits who really care about their employees. Plus, old employees are not “fast,” like those whip smart 24 year olds with fancy degrees and zero common sense. I understood the desire to dump expensive employees and find cheaper, more flexible workers. Anyone can learn PL/I, but only the young can embrace the intricacies of Squarespace.
Old geezers and dinobabies have no place on a team of young, bright, low wage athletes. Thanks, ChatGPT. Good enough in one try. Microsoft Copilot crashed. Well, MSFT is busy with security and AI or is it AI and security. I don’t know, do you?
The cited article reports:
The complaint claims that in the pursuit of greater racial and gender diversity within the Linux distro maker, Red Hat axed senior director Allan Kingsley Wood, an employee of eight years. According to the suit, that diversity, equity, and inclusion (DEI) initiative within Red Hat “necessitates prioritizing skin color and race as primary hiring factors,” and this, and not other factors, led to him being laid off. Basically, Wood claims he was unfairly let go for being a White man, rather for performance or the like, because Red Hat was focused on prioritizing in an unlawfully discriminatory fashion people of other races and genders to diversify its ranks.
The impact? The professional has an opportunity to explore the greenness on the side of the fence closer to the unemployment benefits claims office. The write up concludes this way:
It’s too early to tell how likely Wood is to succeed in his case. A 2020 lawsuit against Google on similar grounds didn’t even make it to court because the plaintiff withdrew. On the other hand, IBM has been settling age-discrimination claims left and right, so perhaps we’ll see that happen here. We’ve reached out to Red Hat and AFL for further comment on the impending court battle, and we’ll update if we hear back.
I will predict the future. The parties to this legal matter (assuming it is not settled among gentlemen) will not get back to the author of the news report. In my opinion, IBM remains a paragon of outstanding personnel management.
Stephen E Arnold, May 17, 2024
Flawed AI Will Still Take Jobs
May 16, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Shocker. Organizations are using smart software which [a] operates in a way its creators cannot explain, [b] makes up information, and [c] appears to be dominated by a handful of “above the law” outfits. Does this characterization seem unfair? If so, stop reading. If it seems anchored in reality, you may find my comments about jobs for GenX, GenY or GenWhy?, millennials, and Alphas (I think this is what marketers call wee lads and lasses) somewhat in line with the IMF’s view of AI.
The answer is, “Your daughter should be very, very intelligent and very, very good at an in-demand skill. If she is not, then it is doom scrolling for sure.” Thanks, MSFT Copilot. Do your part for the good of mankind today.
“Artificial Intelligence Hitting Labour Forces Like a Tsunami – IMF Chief” screws up the metaphor. A tsunami builds, travels, dissipates. I am not sure what the headline writer thinks will dissipate in AI land. Jobs for sure. But AI seems to have some sticking power.
What does the IMF say? Here’s a bit of insight:
Artificial intelligence is likely to impact 60% of jobs in advanced economies and 40% of jobs around the world in the next two years…
So what? The IMF Big Dog adds:
“It could bring tremendous increase in productivity if we manage it well, but it can also lead to more misinformation and, of course, more inequality in our society.”
Could. I think it will, but only for those who know their way around AI and are in the tippy top of smart people. ATM users, TikTok consumers, and those who think school is stupid may not emerge as winners.
I find it interesting to consider what a two-tier society in the US and Western Europe will manifest. What will the people who do not have jobs do? Volunteer to work at the local animal shelter, pick up trash, or just kick back? Yeah, that’s fun.
What if one looks back over the last 50 years? When I grew up, my father had a job. My mother worked at home. I went to school. The text books were passed along year to year. The teachers grouped students by ability and segregated some students into an “advanced” track. My free time was spent outside “playing” or inside reading. When I was 15, I worked as a car hop. No mobile phones. No computer. Just radio, a record player, and a crappy black-and-white television which displayed fuzzy programs. The neighbors knew me and the other “kids.” From my eighth grade class, everyone went to college after high school. In my high school class of 1962, everyone was thinking about an advanced degree. Social was something a church sponsored. Its main feature was ice cream. After getting an advanced degree in 1965 I believe, I got a job because someone heard me give a talk about indexing Latin sermons and said, “We need you.” Easy.
A half century later, what is the landscape? AI is eliminating jobs. Many of these will be intermediating jobs like doing email spam for a PR firm’s client or doing legal research. In the future, knowledge work will move up the Great Chain of Being. Most won’t be able to do the climbing to make it up to a rung with decent pay, some reasonable challenges, and a bit of power.
Let’s go back to the somewhat off-the-mark tsunami metaphor. AI is going to become more reliable. The improvements will continue. Think about what an IBM PC looked like in the 1980s. Now think about the MacBook Air you or your colleague has. They are similar but not equivalent. What happens when AI systems and methods keep improving? That’s tough to predict. What’s obvious is that the improvements and innovations in smart software are not a tsunami.
I liken it more to the continuous pressure in a petroleum cracking facility. Work is placed in contact with smart software, and stuff vaporizes. The first components to be consumed are human jobs. Next, the smart software will transform “work” itself. Most work is busy work; smart software wants “real” work. As long as the electricity stays on, the impact of AI will be on-going. AI will transform. A tsunami crashes, makes a mess, and then is entropified. AI is a different and much hardier development.
The IMF is on the right track; it is just not making clear how much change is now underway.
Stephen E Arnold, May 16, 2024
Generative AI: Minor Value and Major Harms
May 16, 2024
Flawed though it is, generative AI has its uses. In fact, according to software engineer and Citation Needed author Molly White, AI tools for programming and writing are about as helpful as an intern. Unlike the average intern, however, AI supplies help with a side of serious ethical and environmental concerns. White discusses the tradeoffs in her post, “AI Isn’t Useless. But Is It Worth It?”
At first White was hesitant to dip her toes in the problematic AI waters. However, she also did not want to dismiss their value out of hand. She writes:
“But as the hype around AI has grown, and with it my desire to understand the space in more depth, I wanted to really understand what these tools can do, to develop as strong an understanding as possible of their potential capabilities as well as their limitations and tradeoffs, to ensure my opinions are well-formed. I, like many others who have experimented with or adopted these products, have found that these tools actually can be pretty useful for some tasks. Though AI companies are prone to making overblown promises that the tools will shortly be able to replace your content writing team or generate feature-length films or develop a video game from scratch, the reality is far more mundane: they are handy in the same way that it might occasionally be useful to delegate some tasks to an inexperienced and sometimes sloppy intern. Still, I do think acknowledging the usefulness is important, while also holding companies to account for their false or impossible promises, abusive labor practices, and myriad other issues. When critics dismiss AI outright, I think in many cases this weakens the criticism, as readers who have used and benefited from AI tools think ‘wait, that’s not been my experience at all’.”
That is why White put in the time and effort to run several AI tools through their paces. She describes the results in the article, so navigate there for those details. Some features she found useful. Others required so much review and correction they were more trouble than they were worth. Overall, though, she finds the claims of AI bros to be overblown and the consequences to far outweigh the benefits. So maybe hand that next mundane task to the nearest intern who, though a flawed human, comes with far less baggage than ChatGPT and friends.
Cynthia Murrell, May 16, 2024
The Future for Flops with Humans: Flop with Fakes
May 15, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
As a dinobaby, I find the shift from humans to fake humans fascinating. Jeff Epstein’s favorite university published “Deepfakes of Your Dead Loved Ones Are a Booming Chinese Business.” My first thought is that MIT’s leadership will commission a digital Jeffrey. Imagine. He could introduce MIT fund raisers to his “friends.” He could offer testimonials about the university. He could invite — virtually, of course — certain select individuals to a virtual “island.”
The bar located near the technical university is a hot bed of virtual dating, flirting, and drinking. One savvy service person is disgusted by the antics of the virtual customers. The bartender is wide-eyed in amazement. He is a math major with an engineering minor. He sees what’s going on. Thanks, MSFT Copilot. Working hard on security, I bet.
Failing that, MIT might turn its attention to Whitney Wolfe Herd, the founder of Bumble. Although a graduate of the vastly, academically inferior Southern Methodist University in the non-Massachusetts locale of Texas (!), she has a more here-and-now vision. The idea is probably going to get traction among some of the MIT-type brainiacs. A machine-generated “self” — suitably enhanced to remove pocket protectors, plaid jammy bottoms, and observatory grade bifocals — will date a suitable companion’s digital self. Imagine the possibilities.
The write up “AI Personas Are the Future of Dating, Bumble Founder Says. Many Aren’t Buying” reports:
Herd proposed a scenario in which singles could use AI dating concierges as stand-ins for themselves when reaching out to prospective partners online. “There is a world where your dating concierge could go and date for you with other dating concierge … and then you don’t have to talk to 600 people,” she said during the summit.
Wow. More time to put a pony on the roof of an MIT building.
The write up did inject a potential downside. A downside? Who is NBC News kidding?
There’s some healthy skepticism over whether AI is the answer. A clip of Herd at the Bloomberg Summit gained over 10 million views on X, where people expressed uneasiness with the idea of an AI-based dating scene. Some compared it to episodes of "Black Mirror," a Netflix series that explores dystopian uses of technology. Others felt like the use of AI in dating would exacerbate the isolation and loneliness that people have been feeling in recent years.
Are those working in the techno-feudal empires or studying in the prep schools known to churn out the best, the brightest, the most 10X-ceptional knowledge workers weak in social skills? Come on. Having a big brain (particularly for mathy type of logic) is “obviously” the equipment needed to deal with lesser folk. Isolated? No. Think about gamers. Such camaraderie. Think about people like the head of Bumble. Lectures, Discord sessions, and access to data about those interested in loving and living virtually. Loneliness? Sorry. Not an operative word. Halt.
“AI Personas Are the Future…” reports:
"We will not be a dating app in a few years," she [the Bumble spokesperson] said. "Dating will be a component, but we will be a true human connection platform. This is where you will meet anyone you want to meet — a hiking buddy, a mahjong buddy, whatever you’re looking for."
What happens when a virtual Jeff Epstein goes to the bar and spots a first-year who looks quite youthful? Virtual fireworks?
Stephen E Arnold, May 15, 2024
AdVon: Why So Much Traction and Angst?
May 14, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
AdVon. AdVon. AdVon. Okay, the company is in the news. Consider this write up: “Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry.” So why meet AdVon? The subtitle explains:
Remember that AI company behind Sports Illustrated’s fake writers? We did some digging — and it’s got tendrils into other surprisingly prominent publications.
Let’s consider the question: Why is AdVon getting traction among “prominent publications” or any other outfit wanting content? The answer is not hard to see: cutting costs, doing more with less, getting more clicks, getting more money. This is not a multiple choice test in a junior college business class. This is common sense. Smart software makes it possible for those with some skill in the alleged art of prompt crafting and automation to sell “stories” to publishers for less than those publishers can produce the stories themselves.
The future continues to arrive. Here, smart software is saying “Hasta la vista” to the human information generator. The humanoid looks very sad. Neither the AI software nor its owner cares. Revenue and profit are more important as long as the top dogs get paid big bucks. Thanks, MSFT Copilot. Working on your security systems or polishing the AI today?
Let’s look at the cited article’s peregrination to the obvious: AI can reduce costs of “publishing”. Plus, as AI gets more refined, the publications themselves can be replaced with scripts.
The write up says:
Basically, AdVon engages in what Google calls “site reputation abuse”: it strikes deals with publishers in which it provides huge numbers of extremely low-quality product reviews — often for surprisingly prominent publications — intended to pull in traffic from people Googling things like “best ab roller.” The idea seems to be that these visitors will be fooled into thinking the recommendations were made by the publication’s actual journalists and click one of the articles’ affiliate links, kicking back a little money if they make a purchase. It’s a practice that blurs the line between journalism and advertising to the breaking point, makes the web worse for everybody, and renders basic questions like “is this writer a real person?” fuzzier and fuzzier.
Okay. So what? The write up notes:
In spite of the article being labeled as “AI” in AdVon’s CMS, the Outside Inc spokesperson said the company had no knowledge of the use of AI by AdVon — seemingly contradicting AdVon’s claim that automation was only used with publishers’ knowledge.
Okay, corner cutting as part of AdVon’s business model. What about the “minimum viable product” or “good enough” approach to everything from self-driving auto baloney to Boeing aircraft doors? Is AI use somehow exempt from what is now standard business practice? Major academic figures take short cuts. Now an outfit with some AI skills is supposed to operate like a hybrid of Joan of Arc and Mother Theresa? Sure.
The write up states:
In fact, it seems that many products only appear in AdVon’s reviews in the first place because their sellers paid AdVon for the publicity. That’s because the founding duo behind AdVon, CEO Ben Faw and president Eric Spurling, also quietly operate another company called SellerRocket, which charges the sellers of Amazon products for coverage in the same publications where AdVon publishes product reviews.
To me, AdVon is using a variant of the Google type of online advertising concept. The bar room door swings both ways. The customer pays to enter and the customer pays to leave. Am I surprised? Nope. Should anyone be? How about a government consumer protection watch dog? Tip: Don’t hold your breath. New York City tested a chatbot that provided information that violated city laws.
The write up concludes:
At its worst, AI lets unscrupulous profiteers pollute the internet with low-quality work produced at unprecedented scale. It’s a phenomenon which — if platforms like Google and Facebook can’t figure out how to separate the wheat from the chaff — threatens to flood the whole web in an unstoppable deluge of spam. In other words, it’s not surprising to see a company like AdVon turn to AI as a mechanism to churn out lousy content while cutting loose actual writers. But watching trusted publications help distribute that chum is a unique tragedy of the AI era.
The kicker is that the company owning the publication “exposing” AdVon used AdVon.
Let me offer several observations:
- The research reveals what will become an increasingly widespread business practice. But the practice of using AI to generate baloney and spam variants is not the future. It is now.
- The demand for what appears to be old-fashioned information generation is high. The cost of producing this type of information is going to force those who want to generate information to take short cuts. (How do I know? How about the president of Stanford University who took short cuts. That’s how. When a university president muddles forward for years and gets caught by accident, what are students learning? My answer: Cheat better than that.)
- AI diffusion is like gerbils. First, you have a couple of cute gerbils in your room. As a nine-year-old, you think those gerbils are cute. Then you have more gerbils. What do you do? You get rid of the gerbils in your house. What about the gerbils? Yeah, they are still out there. One can see gerbils; it is more difficult to see the AI gerbils. The fix is not the plastic bag filled with gerbils in the garbage can. The AI gerbils are relentless.
Net net: Adapt and accept that AI is here, reproducing rapidly, and evolving. The future means “adapt.” One suggestion: Hire McKinsey & Co. to help your firm make tough decisions. That sometimes works.
Stephen E Arnold, May 14, 2024
Apple and a Recycled Carnival Act: Woo Woo New New!
May 13, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
A long time ago, for a project related to a new product which was cratering, one person on my team suggested I read a book by James B. Twitchell. Carnival Culture: The Trashing of Taste in America provided a broad context, but the information in the analysis of taste was not going to save the enterprise software I was supposed to analyze. In general, I suggest that investment outfits with an interest in online information give me a call before writing checks to the tale-spinning entrepreneurs.
A small creative spark getting smashed in an industrial press. I like the eyes. The future of humans in Apple’s understanding of the American datasphere. Wow, look at those eyes. I can hear the squeals of pain, can’t you?
Dr. Twitchell did a good job, in my opinion, of making clear that some cultural actions are larger than a single promotion. Popular movies and people like P.T. Barnum (the circus guy) explain facets of America. These two examples are not just entertaining; they are making clear what revs the engines of the US of A.
I read “Hating Apple Goes Mainstream” and realized that Apple is doing the marketing for which it is famous. The roll out of the iPad had a high resolution, big money advertisement. If you are around young children, squishy plastic toys are often in small fingers. Squeeze the toy and the eyes bulge. In the image above, a child’s toy is smashed in what seems to me to be the business end of an industrial press manufactured by MSE Technology Ltd in Turkey.
Thanks, MSFT Copilot. Glad you had time to do this art. I know you are busy on security or is it AI or is AI security or security AI? I get so confused.
The Apple iPad has been a bit of an odd duck. It is a good substitute for crappy Kindle-type readers. We have a couple, but they don’t get much use. Everything is a pain for me because the super duper Apple technology does not detect my fingers. I bought the gizmos so people could review the PowerPoint slides for one of my lectures at a conference. I also experimented with the iPad as a teleprompter. After a couple of tests, getting content on the device, controlling it, and fiddling so the darned thing knew I was poking the screen to cause an action — I put the devices on the shelf.
Forget the specific product; let’s look at the cited write up’s comments about the Apple “carnival culture” advertisement. The write up states:
Apple has lost its presumption of good faith over the last five years with an ever-larger group of people, and now we’ve reached a tipping point. A year ago, I’m sure this awful ad would have gotten push back, but I’m also sure we’d heard more “it’s not that big of a deal” and “what Apple really meant to say was…” from the stalwart Apple apologists the company has been able to count on for decades. But it’s awfully quiet on the fan-boy front.
I think this means the attempt to sell sent weird messages about a company people once loved. What’s going on, in my opinion, is that Apple is explaining what technology is going to do: people who once used software to create words, images, and data will be secondary to the cosmetics of technology.
In short, people and their tools will be replaced by a gizmo or gizmos that are similar to bright lights and circus posters. What do these artifacts tell us? My take on the Apple iPad M4 super duper creative juicer is, at this time:
- So what? I have an M2 Air, and it does what I hoped the two touch insensitive iPads would do.
- Why create a form factor that is likely to get crushed when I toss my laptop bag on a security screening belt? Apple’s products are, in my view, designed to be landfill residents.
- Apple knows in its subconscious corporate culture heat sink that smart software, smart services, and dumb users are the future. The wonky, expensive, high-resolution advertisement shouts, “We know you are going to be out of a job. You will be like the yellow squishy toy.”
The message Apple is sending is that innovation has moved from utility to entertainment to the carnival sideshow. Put on your clown noses, people. Buy Apple.
Stephen E Arnold, May 13, 2024
Will Google Behave Like Telegram?
May 10, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I posted a short item on LinkedIn about Telegram’s blocking of Ukraine’s information piped into Russia via Telegram. I pointed out that Pavel Durov, the founder of VK and Telegram, told Tucker Carlson that he was into “free speech.” A few weeks after the interview, Telegram blocked the data from Ukraine for Russia’s Telegram users. One reason given, as I recall, was that Apple was unhappy. Telegram rolled over and complied with a request that seems to benefit Russia more than Apple. But that’s just my opinion. One of my team verified the block with a Ukrainian interacting with senior professionals in Ukraine. Not surprisingly, Ukraine’s use of Telegram is under advisement. I think that means, “Find another method of sending encrypted messages and use that.” Compromised communications can translate to “Rest in Peace” in real time.
A Hong Kong rock band plays a cover of the popular hit Glory to Hong Kong. The bats in the sky are similar to those consumed in Shanghai during a bat festival. Thanks, MSFT Copilot. What are you working on today? Security or AI?
I read “Hong Kong Puts Google in Hot Seat With Ban on Protest Song.” That news story states:
The Court of Appeal on Wednesday approved the government’s application for an injunction order to prevent anyone from playing Glory to Hong Kong with seditious intent. While the city has a new security law to punish that crime, the judgment shifted responsibility onto the platforms, adding a new danger that just hosting the track could expose companies to legal risks. In granting the injunction, judges said prosecuting individual offenders wasn’t enough to tackle the “acute criminal problems.”
What’s Google got to do with that toe tapper Glory to Hong Kong?
The write up says:
The injunction “places Google, media platforms and other social media companies in a difficult position: Essentially pitting values such as free speech in direct conflict with legal obligations,” said Ryan Neelam, program director at the Lowy Institute and former Australian diplomat to Hong Kong and Macau. “It will further the broader chilling effect if foreign tech majors do comply.”
The question is, “Roll over as Telegram allegedly has, or fight Hong Kong and by extension everyone’s favorite streaming video influencer, China?” What will Google do? Scrub Glory to Hong Kong, number one with a bullet on someone’s hit parade I assume.
My guess is that Google will go to court, appeal, and then take appropriate action to preserve whatever revenue is at stake. I do know The Sundar & Prabhakar Comedy Show will not use Glory to Hong Kong as its theme for its 2024 review.
Stephen E Arnold, May 10, 2024
Microsoft and Its Customers: Out of Phase, Orthogonal, and Confused
May 9, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I am writing this post using something called Open LiveWriter. I switched when Microsoft updated our Windows machines and killed printing, a mouse linked via a KVM, and the 2012 version of its blog word processing software. I use a number of software products, and I keep old programs in order to compare them to modern options available to a user. The operative point is that a Windows update rendered the 2012 version of LiveWriter lost in the wonderland of Windows’ Byzantine code.
A young leader of an important project does not want to hear too much from her followers. In fact, she wishes they would shut up and get with the program. Thanks, MSFT Copilot. How’s the Job One of security coming today?
There are reports, which I am not sure I believe, that Windows 11 is a modern version of Windows Vista. The idea is that users are switching to Windows 10. Well, maybe. But the point is that users are not happy with Microsoft’s alleged changes to Windows; for instance:
- Notifications (advertising) in the Windows 11 start menu
- Alleged telemetry which provides a stream of user action and activity data to Microsoft for analysis (maybe marketing purposes?)
- Gratuitous interface changes which range from moving control items from a control panel to a settings panel to fiddling with task manager
- Wonky updates like the printer issue, flaky drivers, and smart help which usually returns nothing of much use.
I read “This Third-Party App Blocks Integrated Windows 11 Advertising.” You can read the original article to track down this customization tool. My hunch is that its functions will be intentionally blocked by some bonus centric Softie or a change to the basic Windows 11 control panel will cause the software to perform like LiveWriter 2012.
I want to focus on a comment to the cited article written by seeprime:
Microsoft has seriously degraded File Explorer over the years. They should stop prolonging the Gates culture of rewarding software development, of new and shiny things, at the expense of fixing what’s not working optimally.
Now that security, not AI and not Windows 11, is the top priority at Microsoft, will the company remediate the grouses users have about the product? My answer is, “No.” Here’s why:
- Fixing, as seeprime suggests, is less important than coming up with something that seems “new.” The approach is dangerous because the “new” thing may be developed by someone uninformed about the hidden dependencies within code as convoluted as Google’s search plumbing. “New” just breaks the old, or the change is something that seems “new” to an intern or an older Softie who just does not care. Good enough is the high bar to clear.
- Details are not Microsoft’s core competency. Indeed, unlike Google, Microsoft has many revenue streams, and the attention goes to cooking up new big-money services like a version of Copilot which is not exposed to the Internet for its government customers. The cloud, not Windows, is the future.
- Microsoft, whether it knows it or not, is on the path to virtualize desktop and mobile software. The idea means that Microsoft does not have to put up with developers who make changes Microsoft does not want. Putting Windows in the cloud might give Microsoft the total control it desires.
- Windows is a security challenge. The thinking may be: “Let’s put Windows in the cloud and lock down security, updates, domain look ups, etc.” I would suggest that creating one giant target might introduce some new challenges to the Softie vision.
Speculation aside, Microsoft may be at a point when users become increasingly unhappy. The mobile model, virtualization, and smart interfaces might create tasty options for users in the near future. Microsoft cannot make up its mind about AI. It has the OpenAI deal; it has the Mistral deal; it has its own internal development; and it has Inflection and probably others I don’t know about.
Microsoft cannot make up its mind. Now Microsoft is doing an about face and saying, “Security is Job One.” But there’s the need to make the Azure Cloud grow. Okay, okay, which is it? The answer, I think, is, “We want to do it all. We want everything.”
This might be difficult. Users might just pile up and remain out of phase, orthogonal, and confused. Perhaps I could add angry? Just like LiveWriter: Tossed into the bit trash can.
Stephen E Arnold, May 9, 2024
Buffeting AI: A Dinobaby Is Nervous
May 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I am not sure the “go fast” folks are going to be thrilled with a dinobaby rich guy’s view of smart software. I read “Warren Buffett’s Warning about AI.” The write up included several interesting observations. The only problem is that smart software is out of the bag. Outfits like Meta are pushing the open source AI ball forward. Other outfits are pushing, but Meta has big bucks. Big bucks matter in AI Land.
Yes, dinobaby. You are on the right wavelength. Do you think anyone will listen? I don’t. Thanks, MSFT Copilot. Keep up the good work on security.
Let’s look at a handful of statements from the write up and do some observing while some in the Commonwealth of Kentucky recover from the Derby.
First, the oracle of Omaha allegedly said:
“When you think about the potential for scamming people… Scamming has always been part of the American scene. If I was interested in investing in scamming— it’s gonna be the growth industry of all time.”
Mr. Buffett has nailed the scamming angle. I particularly liked the “always.” Imagine a country built upon scamming. That makes one feel warm and fuzzy about America. Imagine how those who are hostile to US interests interpret the comment. Ill will toward the US can now be based on the premise that “scamming has always been part of the American scene.” Trust us? Just ignore the oracle of Omaha? Unlikely.
Second, the wise, frugal icon allegedly communicated that:
the technology would affect “anything that’s labor sensitive” and that for workers it could “create an enormous amount of leisure time.”
What will those individuals do with that “leisure time”? Gobbling down social media? Working on volunteer projects like picking up trash from streets and highways?
The final item I will cite is his 2018 statement:
“Cyber is uncharted territory. It’s going to get worse, not better.”
Is that a bit negative?
Stephen E Arnold, May 7, 2024
Trust the Internet? Sure and the Check Is in the Mail
May 3, 2024
This essay is the work of a dumb humanoid. No smart software involved.
When the Internet became commonplace in schools, students were taught how to use it as a research tool like encyclopedias and databases. Learning to research is better known as information literacy, and it teaches critical evaluation skills. The biggest takeaway from information literacy is to never take anything at face value, especially on the Internet. When I read CIRA and Continuum Loop’s report, “A Trust Layer For The Internet Is Emerging: A 2023 Report,” I had my doubts.
CIRA is the Canadian Internet Registration Authority, a non-profit organization that supposedly builds a trusted Internet. CIRA acknowledges that as a whole the Internet lacks a shared framework and tool sets to make it trustworthy. The non-profit states that there are small, trusted pockets on the Internet, but they sacrifice technical interoperability for security and trust.
CIRA released a report about how people are losing faith in the Internet. According to the report’s executive summary, the number of Canadians who trust the Internet fell from 71% to 57% while the entire world went from 74% to 63%. The report also noted that companies with a high trust rate outperform their competition. Then there’s this paragraph:
“In this report, CIRA and Continuum Loop identify that pairing technical trust (e.g., encryption and signing) and human trust (e.g., governance) enables a trust layer to emerge, allowing the internet community to create trustworthy digital ecosystems and rebuild trust in the internet as a whole. Further, they explore how trust registries help build trust between humans and technology via the systems of records used to help support these digital ecosystems. We’ll also explore the concept of registry of registries (RoR) and how it creates the web of connections required to build an interoperable trust layer for the internet.”
Does anyone else hear the TLA for Whiskey Tango Foxtrot in their head? Trusted registries sound like a sales gimmick to verify web domains. There are trusted resources on the Internet, but even those need to be fact checked. The companies that have secure networks are Microsoft, TikTok, Google, Apple, and other big tech, but the only thing that can be trusted about some outfits is the fat bank accounts.
Whitney Grace, May 3, 2024