AI Will Doom You to Poverty Unless You Do AI to Make Money

January 23, 2025

Prepared by a still-alive dinobaby.

I enjoy reading snippets of the AI doomsayers. Some spent too much time worrying about the power of Joe Stalin’s approach to governing. Others just watched the Terminator series instead of playing touch football. A few “invented” AI by cobbling together incremental improvements in statistical procedures lashed to ever-more-capable computing infrastructures. A couple of these folks know that Nostradamus became a brand and want to emulate that predictive master.

I read “Godfather of AI Explains How Scary AI Will Increase the Wealth Gap and Make Society Worse.” That is a snappy title. Whoever wrote it crafted the idea of an explainer to fear. Plus, the clickbait explains that homelessness is for you too. Finally, it presents a trope popular among the elder care set. (Remember, please, that I am a dinobaby myself.) Prod a group of senior citizens at a dinner and you will hear, “Everything is broken.” Also, “I am glad I am old.” Then there is the ever popular, “Those tattoos! The checkout clerks cannot make change! I don’t understand commercials!” I like to ask, “How many wars are going on now? Quick.”


Two robots plan a day trip to see the street people in Key West. Thanks, You.com. I asked for a cartoon; I get a photorealistic image. I asked for a coffee shop; I get a weird carnival setting. Good enough. (That’s why I am not too worried.)

Is society worse than it ever was? Probably not. I have had an opportunity to visit a number of countries, go to college, work with intelligent (for the most part) people, and read books whilst sitting on the executive mailing tube. Human behavior has been consistent for a long time. Indigenous people did not go to Wegman’s or Whole Paycheck. Some herded animals toward a cliff. Others harvested the food and raw materials from the dead bison at the bottom of the cliff. There were no unskilled change makers at this food delivery location.

The write up says:

One of the major voices expressing these concerns is the ‘Godfather of AI’ himself Geoffrey Hinton, who is viewed as a leading figure in the deep learning community and has played a major role in the development of artificial neural networks. Hinton previously worked for Google on their deep learning AI research team ‘Google Brain’ before resigning in 2023 over what he expresses as the ‘risks’ of artificial intelligence technology.

My hunch is that, like me, Geoffrey Hinton “worked at” Google for a good reason — money. Having departed from the land of volleyball and weird empty office buildings, he is now in the doom business. His vision is that there will be more poverty. There’s some poverty in Soweto and the other townships in South Africa. The slums of Rio are no Palm Springs. Rural China is interesting as well. Doesn’t everyone want to run a business from the area in front of a wooden structure adjacent to an empty highway to nowhere? Sounds like there is some poverty around, doesn’t it?

The write up reports:

“We’re talking about having a huge increase in productivity. So there’s going to be more goods and services for everybody, so everybody ought to be better off, but actually it’s going to be the other way around. “It’s because we live in a capitalist society, and so what’s going to happen is this huge increase in productivity is going to make much more money for the big companies and the rich, and it’s going to increase the gap between the rich and the people who lose their jobs.”

The fix is to get rid of capitalism. The alternative? Kumbaya or a better version of those fun dudes Marx, Lenin, and Mao. I stayed in the “last” fancy hotel the USSR built in Tallinn, Estonia. News flash: The hotels near LaGuardia are quite a bit more luxurious.

The godfather then evokes the robot that wanted to kill a rebel. You remember this character. He said, “I’ll be back.” Of course, you will. Hollywood does not do originals.

The write up says:

Hinton’s worries don’t just stop at the wealth imbalance caused by AI too, as he details his worries about where AI will stop following investment from big companies in an interview with CBC News: “There’s all the normal things that everybody knows about, but there’s another threat that’s rather different from those, which is if we produce things that are more intelligent than us, how do we know we can keep control?” This is a conundrum that has circulated the development of robots and AI for years and years, but it’s seeming to be an increasingly relevant proposition that we might have to tackle sooner rather than later.

Yep, doom. The fix is to become an AI wizard, work at a Google-type outfit, cash out, and predict doom. It is a solid career plan. Trust me.

Stephen E Arnold, January 23, 2025

AI Doom: Really Smart Software Is Coming So Start Being Afraid, People

January 20, 2025

Prepared by a still-alive dinobaby.

The essay “Prophecies of the Flood” gathers several comments about software that thinks and decides without any humans fiddling around. The “flood” metaphor evokes the streams of money about which money people fantasize. The word “flood” also evokes the Hebrew Bible’s presentation of a divinely initiated cataclysm intended to cleanse the Earth of widespread wickedness. Plus, one cannot overlook the image of small towns in North Carolina inundated in mud and debris from a very bad storm.


When the AI flood strikes as a form of divine retribution, will the modern ark be filled with humans? Nope. The survivors will be those smart agents infused with even smarter software. Tough luck, humanoids. Thanks, OpenAI, I knew you could deliver art that is good enough.

To sum up: A flood is bad news, people.

The essay states:

the researchers and engineers inside AI labs appear genuinely convinced they’re witnessing the emergence of something unprecedented. Their certainty alone wouldn’t matter – except that increasingly public benchmarks and demonstrations are beginning to hint at why they might believe we’re approaching a fundamental shift in AI capabilities. The water, as it were, seems to be rising faster than expected.

The signs of darkness, according to the essay, include:

  • Rising water in the generally predictable technology stream in the park populated with ducks
  • Agents that “do” something for the human user or another smart software system. To humans with MBAs, art history degrees, and programming skills honed at a boot camp, the smart software is magical. Merlin wears a gray T-shirt, sneakers, and faded denims
  • Nifty art output in the form of images and — gasp! — videos.

The essay concludes:

The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.

Let’s assume that I buy this analysis and agree with the notion “prepare now.” How realistic is it that the United Nations, a couple of superpowers, or a motivated individual can have an impact? Gentle reader, doom sells. Examples include The Big Short: Inside the Doomsday Machine, The Shifts and Shocks: What We’ve Learned – and Have Still to Learn – from the Financial Crisis, Too Big to Fail: How Wall Street and Washington Fought to Save the Financial System from Crisis – and Themselves, and others, many others.

Have these dissections of problems had a material effect on regulators, elected officials, or the people in the bank down the street from your residence? Answer: Nope.

Several observations:

  1. Technology doom works because innovations have positive and negative impacts. No one is exactly sure what the knock-on effects will be; that uncertainty is what makes technology exciting. Therefore, doom comes along with the good parts.
  2. Taking a contrary point of view creates opportunities to engage with those who want to hear something different. Insecurity is a powerful sales tool.
  3. Sending messages about future impacts pulls clicks. Clicks are important.

Net net: The AI revolution is a trope. Never mind that after decades of researchers’ work, a revolution has arrived. Lionel Messi allegedly said, “It took me 17 years to become an overnight success.” (Mr. Messi is a highly regarded professional soccer player.)

Will the ill-defined technology kill humans? Answer: Who knows. Will humans using ill-defined technology like smart software kill humans? Answer: Absolutely. Can “anyone” or “anything” take an action to prevent AI technology from rippling through society? Answer: Nope.

Stephen E Arnold, January 20, 2025

Apple and Some Withering Fruit: Is the Orchard on Fire?

January 14, 2025

A dinobaby-crafted post. I confess. I used smart software to create the heart-wrenching scene of a farmer facing a tough 2025.

Apple is a technology giant, a star in the universe of bytes. At the starter’s gun for 2025, Apple may have some work to do. For example, I read “Apple’s China Troubles Mount as Foreign Phone Sales Sink for 4th Month.” (For now, this is a “trusted” outfit story, but a few months down the road the information may originate from the “real” news powerhouse Gannett. Imagine that.) The “trusted” outfit Reuters stated:

Apple, the dominant foreign smartphone maker in China, faces a slowing economy and competition from domestic rivals, such as Huawei…. Apple briefly fell out of China’s top five smartphone vendors in the second quarter of 2024 before recovering in the third quarter. The U.S. company’s smartphone sales in China still slipped 0.3% during the third quarter from a year earlier, while Huawei’s sales rose 42%, according to research firm IDC.

I think this means that Apple is losing share in what may have been a very juicy market. Can it get this fertile revenue field producing in-demand Fuji Apples to market? With a new US administration coming down the information highway, it is possible that the iPhone’s pop up fruit stand could be blown off the side of the main road.


An apple farmer grasps the problem fruit blight poses. Thanks, You.com. You produced okay fruit blight after ChatGPT told me that an orchard with fruit blight was against its guidelines. Helpful, right?

Another issue Apple faces in a different orchard regards privacy. “Apple to Pay $95 Million to Settle Siri Privacy Lawsuit” reports:

Apple agreed to pay $95 million in cash to settle a proposed class action lawsuit claiming that its voice-activated Siri assistant violated users’ privacy…. Mobile device owners complained that Apple routinely recorded their private conversations after they activated Siri unintentionally, and disclosed these conversations to third parties such as advertisers.

Yeah, what about those privacy protections? What about those endless “Log in to your FaceTime” prompts when our devices don’t use FaceTime? Hey, that is just Apple being so darned concerned about privacy. Will Apple pay or will it appeal? I won’t trouble you with my answer. Legal eagles love these fertile fields.

I don’t want to overlook the Apple AI. Yahoo recycled a story from Digital Intelligence called “The Good and Bad of Apple Intelligence after Using It on My iPhone for Months.” The Yahoo version of the story said:

I was excited to check out more Apple Intelligence features when I got the iOS 18.2 update on my iPhone 16 Pro. But aside from what I’ve already mentioned, the rest isn’t as exciting. I already hate AI art in general, so I wasn’t too thrilled about Image Playground. However, since it’s a new feature, I had to try it at least once. I tried to get Apple Intelligence to generate an AI image of me, in various scenarios, to perhaps share on social media. But every result I got did not look good to me, and I felt it had no actual resemblance to my image. It kept giving me odd-looking teeth in my smiles, hair that looked nothing like what I had, and other imperfections. I wasn’t expecting a perfect picture, but I was hoping I would get something that would be decent enough to share online — dozens of tries, and I wasn’t happy with any of them. I suppose my appearance doesn’t work with Apple’s AI art style? Whatever the reason is, my experience with it hasn’t been positive.

Yep, bad teeth. Perhaps the person has eaten too many apples?

Looking at these three allegedly accurate news stories what do I hypothesize about Apple in 2025:

  1. Apple will become increasingly desperate to generate revenue. Let’s face it: the multi-thousand-dollar Vision Pro headset and virtual Apple TV may fill the Chinese iPhone sales hole.
  2. Apple simply does what it wants to do with regard to privacy. From automatic iPhone reboots to smarmy talk about accidentally sucking down user data, the company cannot be trusted in 2025 in my opinion.
  3. Apple’s innovation is stalled. One of my colleagues told me Apple rolled out two dozen “new” products in 2024. I must confess that I cannot name one of them. The fruitarian seemed to be able to get my attention with “one more thing.” Today’s Apple has some discoloration.

Net net: The orchard needs a more skilled agrarian, fertilizer, and some luck with the business climate. Failing that, another bad crop may be ahead.

Stephen E Arnold, January 14, 2025

China Good, US Bad: Australia Reports the Unwelcome News

December 13, 2024

This write up was created by an actual 80-year-old dinobaby. If there is art, assume that smart software was involved. Just a tip.

I read “Critical Technology Tracker: Two Decades of Data Show Rewards of Long-Term Investment.” The write up was issued in September 2024, and I have no confidence that much has changed. I believe the US is the leader in marketing hyperbole output. Other countries are far behind, but some are closing the gaps. I will focus on the article, and I will leave it to you to read the full report available from the ASPI Australia Web site.

The main point of this report by the Australian Strategic Policy Institute is that the US has not invested in long-term research. I am not sure how much of this statement is a surprise to those who have watched US patents become idea recyclers, US education deteriorate, and the quest for big money intensify.

The cited summary of the research reports:

The US led in 60 of 64 technologies in the five years from 2003 to 2007, but in the most recent five year period, it was leading in just seven.

I want to point out that playing online games and doom scrolling are not fundamental technologies. The US has a firm grip on the downstream effects of applied technology. The fundamentals are simply not there. AI, which seems to be everywhere, is little more than word probability, which is not a fundamental; it is an application of methods.
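To make the “word probability” point concrete, here is a toy sketch of my own (not from the cited report): a bigram model that picks the next word purely from observed word-pair frequencies. The corpus and function names are illustrative inventions. Scale this idea up by many orders of magnitude and you have the statistical core, not the fundamentals, of today’s smart software.

```python
from collections import defaultdict

# Toy bigram "language model": predict the next word from pair frequencies.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # how often nxt follows prev

def most_likely_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(most_likely_next("the"))  # "the" is followed by "cat" twice, "mat" once
```

No understanding, no fundamentals: just counting and picking the maximum.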

The cited article includes a chart. The chart is easy to read. The red line heading up is China. The blue line going down is the US.

In what areas are China’s researchers making headway other than its ability to terminate some US imports quickly? Here’s what the cited article reports:

China has made its new gains in quantum sensors, high-performance computing, gravitational sensors, space launch and advanced integrated circuit design and fabrication (semiconductor chip making). The US leads in quantum computing, vaccines and medical countermeasures, nuclear medicine and radiotherapy, small satellites, atomic clocks, genetic engineering and natural language processing.

The list, one can argue, is arbitrary and easily countered by US researchers. There are patents, start-ups, big financial winners, and many fine research institutions. With AI poised to become really smart in a few years, why worry?

I am not worried because I am old. The people who need to worry are the parents of children who cannot read and comprehend, who do not study and master mathematics, who do not show much interest in basic science, and are indifferent to the concept of work ethic.

Australia is worried. It is making an attempt to choke off the perceived corrosive effects of the US social media juggernaut for those under 16 years of age. It is monitoring China’s activities in the Pacific. It is making an effort to enhance its military capabilities.

Is America worried? I would characterize the attitude here in rural Kentucky as the mascot of Mad Magazine’s catchphrase, “What, me worry?”

Stephen E Arnold, December 13, 2024

Pragmatism or the Normalization of Good Enough

November 14, 2024

Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.

I recall that some teacher told me that the Mona Lisa painter fooled around more with his paintings than he did with his assistants. True or false? I don’t know. I do know that when I wandered into the Louvre in late 2024, there were people emulating sardines. These individuals wanted a glimpse of good old Mona.


Is Hamster Kombat the 2024 incarnation of the Mona Lisa? I think this image is part of the Telegram eGame’s advertising. Nice art. Definitely a keeper for the swipers of the future.

I read “Methodology Is Bullsh&t: Principles for Product Velocity.” The main idea, in my opinion, is do stuff fast and adapt. I think this is similar to the go-go mentality of whatever genius said, “Go fast. Break things.” This version of the Truth says:

All else being equal, there’s usually a trade-off between speed and quality. For the most part, doing something faster usually requires a bit of compromise. There’s a corner getting cut somewhere. But all else need not be equal. We can often eliminate requirements … and just do less stuff. With sufficiently limited scope, it’s usually feasible to build something quickly and to a high standard of quality. Most companies assign requirements, assert a deadline, and treat quality as an output. We tend to do the opposite. Given a standard of quality, what can we ship in 60 days? Recent escapades notwithstanding, Elon Musk has a similar thought process here. Before anything else, an engineer should make the requirements less dumb.

Would the approach work for the Mona Lisa dude or for Albert Einstein? I think Al fumbled along for years, asking people to help with certain mathy issues, and worrying about how he saw a moving train relative to one parked at the station.

I think the idea in the essay is the 2024 view of a practical way to get a product or service before prospects. The benefits of redefining “fast” in terms of a specification trimmed to the MVP or minimum viable product makes sense to TikTok scrollers and venture partners trying to find a pony to ride at a crowded kids’ party.

One of the touchstones in the essay, in my opinion, is this statement:

Our customers are engineers, so we generally expect that our engineers can handle product, design, and all the rest. We don’t need to have a whole committee weighing in. We just make things and see whether people like them.

I urge you to read the complete original essay.

Several observations:

  1. Some people like the Mona Lisa dude are engaged in a process of discovery, not shipping something good enough. Discovery takes some people time, lots of time. What happens during this process is part of expanding an information base.
  2. The go-go approach has interesting consequences; for example, based on anecdotal and flawed survey data, young users of social media evidence a number of interesting behaviors. The idea of “let ‘er rip” appears to have some impact on young people. Perhaps you have first-hand experience with this problem? I know people whose children have manifested quite remarkable behaviors. I do know that the erosion of certain basic mental functions like concentrating is visible to me every time a teenager checks me out at the grocery store.
  3. By redefining excellence and quality, the notion of a high-value goal drops down a bit. Some new automobiles don’t work too well; for example, the Tesla Cybertruck owner whose vehicle was not able to leave the dealer’s lot.

Net net: Is a Telegram mini app Hamster Kombat today’s equivalent of the Mona Lisa?

Stephen E Arnold, November 14, 2024

Bring Back Bell Labs…Wait, Google Did…

November 12, 2024

Bell Labs was once a magical, inventing wonderland and it established the foundation for modern communication, including the Internet. Everything was great at Bell Labs until projects got deadlines and creativity was stifled. Hackaday examines the history of the mythical place and discusses if there could ever be a new Bell Labs in, “What Would It Take To Recreate Bell Labs?”

Bell Labs employees were allowed to tinker on their projects for years as long as they focused on something to benefit the larger company. These fields ranged from metallurgy and optics to semiconductors and more. Bell Labs worked with Western Electric and AT&T. These partnerships resulted in the transistor, the laser, the photovoltaic cell, the charge-coupled device (CCD), the Unix operating system, and more.

What made Bell Labs special was that inventors were allowed to let their creativity marinate and explore their ideas. This came to a screeching halt in 1982 when the US courts ordered AT&T to break up. Western Electric became Lucent Technologies and took Bell Labs with it. The creativity and gift of time disappeared too. Could Bell Labs exist today? No, not as it was. It would need to be updated:

“The short answer to the original question of whether Bell Labs could be recreated today is thus a likely ‘no’, while the long answer would be ‘No, but we can create a Bell Labs suitable for today’s technology landscape’. Ultimately the idea of giving researchers leeway to tinker is one that is not only likely to get big returns, but passionate researchers will go out of their way to circumvent the system to work on this one thing that they are interested in.”

Google did have a new incarnation of Bell Labs. Did Google invent Google Glass and generate billions in revenue from actions explained in the novel 1984?

Whitney Grace, November 12, 2024

Boring Technology Ruins Innovation: Go, Chaos!

October 25, 2024

Jonathan E. Magen is an experienced computer scientist who writes a blog called Yonkeltron. He recently posted “Boring Tech Is Stifling Improvement.” After a brief anecdote about a highway repair that was not hindered by bureaucracy because the repair crew used a new material to speed up the job, Magen got to thinking about the current state of tech.

He thinks it is boring.

Magen supports tech teams being allocated budgets to adopt new technology. The mantra of “don’t fix what’s not broken” comes to mind, but sometimes newer is definitely better. He relates that it is problematic if tech teams have too many technologies or solutions, but there’s also a problem if the one-size-fits-all solution no longer works. It’s like having a document that can only be opened by Microsoft Office when you don’t have the software. It’s called a monoculture with a single point of failure. Tech nerds and philosophers have names for everything!

Magen bemoans that a boring tech environment is a buzzkill. He then shares these “happy thoughts”:

“A second negative effect is the chilling of innovation. Creating a better way of doing things definitionally requires deviation from existing practices. If that is too heavily disincentivized by “engineering standards”, then people don’t feel they have enough freedom to color outside the lines here and there. Therefore, it chills innovation in company environments where good ideas could, conceivably, come from anywhere. Put differently, use caution so as not to silence your pioneers.

Another negative effect is the potential to cause stagnation. In this case, devotion to boring tech leads to overlooking better ways of doing things. Trading actual improvement and progress for “the devil you know” seems a poor deal. One of the main arguments in favor of boring tech is operability in the polycontext composed of predictability and repairability. Despite the emergence of Site Reliability Engineering (SRE), I think that this highlights a troubling industry trope where we continually underemphasize, and underinvest in, production operations.”

Necessity is the mother of invention, but boring is the killer of innovation. Bring on chaos.

Whitney Grace, October 25, 2024

Stupidity: Under-Valued

September 27, 2024

We’re taught from a young age that being stupid is bad. The stupid kids don’t move on to higher grades and they’re ridiculed on the playground. We’re also fearful of showing our stupidity, which often goes hand in hand with ignorance. Both cause embarrassment and fear, but Math For Love has a different perspective: “The Centrality Of Stupidity In Mathematics.”

Math For Love is a Web site dedicated to revolutionizing how math is taught. They have games, curriculum, and more demonstrating how beautiful and fun math is. Math is one of those subjects that makes a lot of people feel dumb, especially the higher levels. The Math For Love team referenced an essay by Martin A. Schwartz called, “The Importance Of Stupidity In Scientific Research.”

Schwartz is a microbiologist and professor at the University of Virginia. In his essay, he expounds on how modern academia makes people feel stupid.

The stupid feeling is one of inferiority. It’s a problem. We’re made to believe that doctors, engineers, scientists, teachers, and other smart people never experienced any difficulty. Schwartz points out that students (and humanity) need to learn that research is extremely hard. No one starts out at the top. He also says that they need to be taught how to be productively stupid, i.e., if you don’t feel stupid then you’re not really trying.

Humans are meant to feel stupid; otherwise they wouldn’t investigate, explore, or experiment. There’s an entire era in western history about overcoming stupidity: the Enlightenment. Math For Love explains that stupidity is relative to age, and as a child grows, certain levels of stupidity (aka ignorance) are overcome. Kids gain comprehension of an idea, then can apply it to life. It’s the literal meaning of the euphemism: once a mind has been stretched it can’t go back to its original size.

“I’ve come to believe that one of the best ways to address the centrality of stupidity is to take on two opposing efforts at once: you need to assure students that they are not stupid, while at the same time communicating that feeling like they are stupid is totally natural. The message isn’t that they shouldn’t be feeling stupid – that denies their honest feeling to learning the subject. The message is that of course they’re feeling stupid… that’s how everyone has to feel in order to learn math!”

Add some warm feelings to the equation and subtract self-consciousness, multiply by practice, and divide by intelligence level. That will round out stupidity and make it productive.

Whitney Grace, September 27, 2024

E2EE: Not Good Enough. So What Is Next?

May 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

What’s wrong with software? One rant offers an answer:

I think one !*#$ thing about the state of technology in the world today is that for so many people, their job, and therefore the thing keeping a roof over their family’s head, depends on adding features, which then incentives people to, well, add features. Not to make and maintain a good app.


Who has access to the encrypted messages? Someone. That’s why this young person is distraught as she is escorted to the police van. Thanks, MSFT Copilot. Good enough.

This statement appears in “A Rant about Phone Messaging Apps UI.” But there are some more interesting issues in messaging; specifically, E2EE or end-to-end encrypted messaging. The current example of talking about the wrong topic in a quite important application space is summarized in Business Insider, an estimable online publication with snappy headlines like this one: “In the Battle of Telegram vs Signal, Elon Musk Casts Doubt on the Security of the App He Once Championed.” That write up reports as “real” news:

Signal has also made its cryptography open-source. It is widely regarded as a remarkably secure way to communicate, trusted by Jeff Bezos and Amazon executives to conduct business privately.

I want to point out that Edward Snowden “endorses” Signal. He does not use Telegram. Does he know something that others may not have tucked into their memory stack?

The Business Insider “real” news report includes this quote from a Big Dog at Signal:

“We use cryptography to keep data out of the hands of everyone but those it’s meant for (this includes protecting it from us),” Whittaker wrote. “The Signal Protocol is the gold standard in the industry for a reason–it’s been hammered and attacked for over a decade, and it continues to stand the test of time.”
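Ms. Whittaker’s point, keeping data out of the hands of everyone but those it is meant for, can be sketched in a few lines of Python. This is my own toy illustration and has nothing to do with Signal’s actual protocol; the small Diffie-Hellman exchange and the throwaway XOR cipher are stand-ins for the real machinery, and all the values are made up for the demo:

```python
import hashlib

# Toy E2EE sketch: two parties derive a shared key via Diffie-Hellman,
# so a relay that sees every byte on the wire still never holds the key.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF", 16
)  # a well-known 768-bit DH prime; fine for a demo, far too small for real use
G = 2

alice_secret = 123456789                 # in practice: large random values
bob_secret = 987654321
alice_public = pow(G, alice_secret, P)   # only the public values travel
bob_public = pow(G, bob_secret, P)

# Each side combines its own secret with the other's public value.
shared_alice = pow(bob_public, alice_secret, P)
shared_bob = pow(alice_public, bob_secret, P)
assert shared_alice == shared_bob        # same key, never transmitted

key = hashlib.sha256(str(shared_alice).encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR with a repeated hash. NOT for real data."""
    stream = (key * (len(data) // len(key) + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

ciphertext = xor_cipher(b"meet at noon", key)
print(xor_cipher(ciphertext, key))       # round trip recovers the plaintext
```

The relay sees only the public values and the ciphertext; without one of the private values it cannot derive the key. Real protocols such as Signal’s layer authenticated ciphers and ratcheting keys on top of this basic trick, which is why “who holds the keys” is the question that matters.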

Pavel Durov, the owner of Telegram and the brother of a person with the equivalent of two Ph.D.s (his brother Nikolai), suggests that Signal is insecure. Keep in mind that Mr. Durov has been the subject of some scrutiny because, after telling the estimable Tucker Carlson that Telegram is about free speech, Telegram blocked Ukraine’s government from using a Telegram feature to beam pro-Ukraine information into Russia. That’s a sure-fire way to make clear which country catches Mr. Durov’s attention. He did this, according to rumors reaching me from a source with links to Ukraine, because Apple or maybe Google made him do it. Blaming the alleged US high-tech oligopolies is a good red herring, and a stinky one at that.

What has Telegram got to do with the complaint about “features”? In my view, Telegram has been adding features at a pace more rapid than Signal, WhatsApp, and a boatload of competitors. Have those features created some vulnerabilities in the Telegram set up? In fact, I am not sure Telegram is a messaging platform. I also think that the company may be poised to do an end run around open sourcing its home-grown encryption method.

What does this mean? Here are a few observations:

  1. With governments working overtime to gain access to encrypted messages, Telegram may have to add some beef.
  2. Established firms and start-ups are nosing into obfuscation methods that push beyond today’s encryption methods.
  3. Information about who is behind an E2EE messaging service is tough to obtain. What is easy to document with a Web search may be one of those “fake” or misinformation plays.

Net net: E2EE is getting long in the tooth. Something new is needed. If you want to get a glimpse of the future, catch my lecture about E2EE at the upcoming US government Cycon 2024 event in September. Want a preview? We have a briefing. Write benkent2020 at yahoo dot com for restrictions and prices.

Stephen E Arnold, May 21, 2024

Interesting Observations: Do These Apply to Technology-Is-a-Problem-Solver Thinking?

February 16, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting essay by Nat Eliason, an author previously unknown to me. “A Map Is Not a Blueprint: Why Fixing Nature Fails” is a short collection of the ways human thought processes create some quite spectacular problems. His examples include weight loss compounds like Ozempic, trans fats, and the once-trendy solution to mental issues, the lobotomy.

Humans can generate a map of a “territory” or a problem space. Then humans dig in and try to make sense of their representation. The problem is that humans may approach a problem and get the solution wrong. No surprise there. One of the engines of innovation is coming up with a solution to a problem created by something incorrectly interpreted. A recent example is the befuddlement of Mark Zuckerberg when a member of the Senate committee questioning him about his company suggested that the quite wealthy entrepreneur had blood on his hands. No wonder he apologized for creating a service that has the remarkable power of bringing people closer together, well, sometimes.


Immature home economics students can apologize for a cooking disaster. Techno feudalists may have a more difficult time making amends. But there are lawyers and lobbyists ready and willing to lend a hand. Thanks, MSFT Copilot Bing thing. Good enough.

What I found interesting in Mr. Eliason’s essay was the model or mental road map humans create (consciously or unconsciously) to solve a problem. I am thinking in terms of social media, AI generated results for a peer-reviewed paper, and Web search systems which filter information to generate a pre-designed frame for certain topics.

Here’s the list of the five steps in the process creating interesting challenges for those engaged in and affected by technology today:

  1. Smart people see a problem, study it, and identify options for responding.
  2. The operations are analyzed and then boiled down to potential remediations.
  3. “Using our map of the process we create a solution to the problem.”
  4. The solution works. The downstream issues are not identified or anticipated in a thorough manner.
  5. New problems emerge as a consequence of having a lousy mental map of the original problem.

Interesting. Creating a solution to a technology-sparked problem without anticipating consequences may be one key to success. “I had no idea” or “I am sorry” makes everything better.

Stephen E Arnold, February 16, 2024
