Old Code, New Code: Can You Make It Work Again… Sort Of?

March 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Even hippy dippy super slick AI start ups have a technical debt problem. It is, in my opinion, no different from the “costs” imposed on outfits like JPMorgan Chase or (heaven help us) AMTRAK. Software which mostly works is subject to two environmental problems. First, the people who wrote the code or made it work the last time catastrophe struck (hello, AT&T, how are those pushed updates working for you now?) move on, quit, or whatever. Second, the technical options for remediating the problem are evolving (how are those security hot fixes working out, Microsoft?).


The helpful father asks a question the aspiring engineer cannot answer. Thus it was when the wizard was a child, and thus it is when the wizard is working on a modern engineering project. Buildings tip; aircraft lose doors and wheels. Software updates kill computers. Self-driving cars cannot quite drive themselves. Thanks, MSFT Copilot. Did you get your model airplane to fly when you were a wee lad? I think I know the answer.

I thought about this problem of the cost of remediating, fixing, redoing, or upgrading code, or whatever term fast-talking sales engineers use in their Zooms and PowerPoints, as I read “The High-Risk Refactoring.” The write up does a good job of explaining in a gentle way what happens when suits authorize making old code like new again. (The suits do not know the agonies of the original developers, but why should “history” intrude on a whiz bang GenX or GenY management type?)

The article says:

it’s highly important to ensure the system works the same way after the swap with the new code. In that regard, immediately spotting when something breaks throughout the whole refactoring process is very helpful. No one wants to find that out in production.

No kidding.
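
The standard way to get that early warning is a characterization test, sometimes called a golden master: record what the current code does before touching anything, then assert after every refactoring step that behavior has not drifted. Here is a minimal sketch in Python; the `legacy_price` function and its sample inputs are hypothetical stand-ins, not anything from the cited article.

```python
# characterization_test.py - a minimal "golden master" sketch.
# legacy_price() and the sample inputs are hypothetical illustrations.
import json
from pathlib import Path

def legacy_price(quantity: int, unit_cost: float) -> float:
    """The old code, warts and all. Do not edit while recording."""
    total = quantity * unit_cost
    if quantity > 100:  # undocumented bulk discount nobody remembers
        total *= 0.9
    return round(total, 2)

SAMPLES = [(1, 9.99), (50, 4.25), (101, 4.25), (1000, 0.03)]
GOLDEN = Path("golden_master.json")

def record() -> None:
    """Run once BEFORE refactoring to capture current behavior."""
    GOLDEN.write_text(json.dumps([legacy_price(q, c) for q, c in SAMPLES]))

def check(new_impl) -> None:
    """Run after every refactoring step; fails the moment behavior drifts."""
    expected = json.loads(GOLDEN.read_text())
    for (q, c), want in zip(SAMPLES, expected):
        got = new_impl(q, c)
        assert got == want, f"regression at {(q, c)}: {got} != {want}"

if __name__ == "__main__":
    record()
    check(legacy_price)  # passes trivially; swap in the refactored function
    print("behavior unchanged")
```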

In most cases, there are insufficient skilled people and money to create a new or revamped system, get it up and running in parallel for an appropriate period of time, identify the problems, remediate them, and then make the cut over. People buy cars this way, but that’s not how most organizations, regardless of size, “do” software. Okay, the take-your-car-in, buy-a-new-one, and drive-off approach will not work in today’s business environment.

The write up focuses on what most organizations do; that is, write or fix new code and stick it into a system. There may or may not be resources for a staging server, but the result is the same. The old software has been “fixed” and the documentation is “sort of written” and people move on to other work or in the case of consulting engineering firms, just get replaced by a new, higher margin professional.

The write up takes a different approach and concludes with four suggestions or questions to ask. I quote:

“Refactor if things are getting too complicated, but stop if you can’t prove it works.

Accompany new features with refactoring for areas you foresee to be subject to a change, but copy-pasting is ok until patterns arise.

Be proactive in finding new ways to ensure refactoring predictability, but be conservative about the assumption QA will find all the bugs.

Move business logic out of busy components, but be brave enough to keep the legacy code intact if the only argument is “this code looks wrong”.

These are useful points. I would like to suggest some bright white lines for those who have to tackle an IRS-mainframe- or AT&T-billing-system type of challenge, as well as for those tweaking an artificial intelligence solution to respond to those wonky multi-ethnic images Google generated in order to allow the Sundar & Prabhakar Comedy Team to smile sheepishly and apologize again for lousy software.

Are you ready? Let’s go:

  1. Fixes add to the complexity of the code base. As time goes stumbling forward, the complexity of the software becomes greater. The cost of making sure the fix works and does not create exciting dependency behavior goes up. Thus, small fixes “cost” more, and these costs are tough to control.
  2. The safest fixes are “wrappers”; that is, no one in his or her right mind wants to change software written in 1978 for a machine no longer in production by the manufacturer. Therefore, new software is written to interact in a “safe” way with the original software. The new code “fixes up” the problem without screwing up what grandpa programmer wrote almost half a century ago; a minimal sketch of the idea appears after this list. The problem is that “wrappers” tend to slow stuff down. The fix is to say one will optimize the system while one looks for a new project or job.
  3. The software used for “fixing” a problem is becoming the equivalent of repairing an aircraft component with Dawn dish detergent. The “fix” is cheap, easy to use, and good enough. The software equivalent of this Dawn solution will not stand the test of time. Instead of code crafted in good old COBOL or Assembler, we have some Fancy Dan tools which may fall out of favor in a matter of months, not decades.
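
To make the “wrapper” point concrete, here is a minimal sketch of the pattern in Python: new code exposes a modern interface and delegates the real work to the untouched legacy routine. Everything here (the 1978-style `legacy_calc_billing` stub, the fixed-width record format) is invented for illustration, not taken from any real system. Note where the slowdown comes from: every call pays twice for marshaling data in and out of the old format.

```python
# wrapper_sketch.py - illustrative only; the "legacy" routine and its
# fixed-width record format are hypothetical stand-ins for 1978-era code.
from dataclasses import dataclass

def legacy_calc_billing(record: str) -> str:
    """Pretend this is the untouchable 40-year-old routine.
    It consumes and returns fixed-width text records."""
    account, cents = record[:10], int(record[10:20])
    return f"{account}{cents + 500:010d}"  # adds a 5.00 fee, somehow

@dataclass
class Invoice:
    account: str
    amount: float  # dollars

class BillingWrapper:
    """Modern interface; the real work still happens in the old code."""
    def apply_fee(self, inv: Invoice) -> Invoice:
        # Marshal modern object -> legacy fixed-width record (cost #1).
        record = f"{inv.account:<10}{int(round(inv.amount * 100)):010d}"
        result = legacy_calc_billing(record)  # the untouched 1978 logic
        # Unmarshal legacy record -> modern object (cost #2).
        return Invoice(result[:10].strip(), int(result[10:]) / 100)

if __name__ == "__main__":
    print(BillingWrapper().apply_fee(Invoice("ACCT-42", 19.99)))
    # Invoice(account='ACCT-42', amount=24.99)
```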

Many projects promise better, faster, and cheaper. The reminder “Pick two” is helpful.

Net net: Fixing up lousy or flawed software is going to increase risks and costs. The question asked by bean counters is, “How much?” The answer is, “No one knows until the project is done … if ever.”

Stephen E Arnold, March 18, 2024

Stanford: Tech Reinventing Higher Education: I Would Hope So

March 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “How Technology Is Reinventing Education.” Essays like this one are quite amusing. The ideas flow without important context. Let’s look at this passage:

“Technology is a game-changer for education – it offers the prospect of universal access to high-quality learning experiences, and it creates fundamentally new ways of teaching,” said Dan Schwartz, dean of Stanford Graduate School of Education (GSE), who is also a professor of educational technology at the GSE and faculty director of the Stanford Accelerator for Learning. “But there are a lot of ways we teach that aren’t great, and a big fear with AI in particular is that we just get more efficient at teaching badly. This is a moment to pay attention, to do things differently.”


A university expert explains to a rapt audience that technology will make them healthy, wealthy, and wise. Well, that’s the marketing copy which the lecturer recites. Thanks, MSFT Copilot. Are you security safe today? Oh, that’s too bad.

I would suggest that Stanford’s Graduate School of Education consider these probably unimportant points:

  • The president of Stanford University resigned allegedly because he fudged some data in peer-reviewed documents. True or false? Does it matter? The fellow quit.
  • The Stanford Artificial Intelligence Lab or SAIL innovated by cooking up synthetic data. Not only was synthetic data the fast food of those looking for cheap and easy AI training data; Stanford also became superglued to the fake data movement, which may be good or it may be bad. Hallucinating is easier if the models are trained on fake information, perhaps?
  • Stanford University produced some outstanding leaders in the high technology “space.” The contributions of famous graduates have delivered social media, shaped advertising systems, and created interesting intelware companies which dabble in warfighting and saving lives from one versatile software and consulting platform.

The essay operates in smarter-than-you territory. It presents a view of the world which seems at odds with research results which are not reproducible, with ethics-free researchers, and with how silly it looks to someone in rural Kentucky to have a president accused of pulling a grade-school essay cheating trick.

Enough pontification. How about some progress in remediating certain interesting consequences of Stanford faculty and graduates’ innovations?

Stephen E Arnold, March 15, 2024

Techno Bashing from Thumb Typers. Give It a Rest, Please

March 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Every generation says that the latest cultural and technological advancements make people stupider. Novels were trash, the horseless carriage ruined traveling, radio encouraged wanton behavior, and the list continues. Everything changed with the arrival of television, aka the boob tube. Too much television does cause cognitive degradation. In layman’s terms, the brain shifts into passive functioning rather than active thinking. It would be almost a Zen moment. Addiction is fun for some.

The introduction of videogames, computers, and mobile devices accelerated the decline of brain function. The combination of AI chatbots and screens, however, might prove to be the ultimate dumbing down of humans. APA PsycNet posted a new study by Umberto León-Domínguez called “Potential Cognitive Risks Of Generative Transformer-Based AI-Chatbots On Higher Order Executive Thinking.”

Psychologists already discovered that spending too much time on a screen (e.g., playing videogames, watching TV or YouTube, browsing social media) increases the risk of depression and anxiety. When that is paired with AI chatbots, or programs designed to replicate the human mind, humans rely on the algorithms to think for them.

León-Domínguez wondered if too much AI chatbot consumption impaired cognitive development. In his abstract, he invented some handy new terms:

“The “neuronal recycling hypothesis” posits that the brain undergoes structural transformation by incorporating new cultural tools into “neural niches,” consequently altering individual cognition. In the case of technological tools, it has been established that they reduce the cognitive demand needed to solve tasks through a process called “cognitive offloading.””

“Cognitive offloading” perfectly describes younger generations and screen addicts. “Cultural tools into neural niches” also reflects how older crowds view new-fangled technology, coupled with how different parts of the brain are affected by technology advancements. The modern human brain works differently from a human brain in the 18th century or from one two thousand years ago.

He found:

“The pervasive use of AI chatbots may impair the efficiency of higher cognitive functions, such as problem-solving. Importance: Anticipating AI chatbots’ impact on human cognition enables the development of interventions to counteract potential negative effects. Next Steps: Design and execute experimental studies investigating the positive and negative effects of AI chatbots on human cognition.”

Are we doomed? No. Do we need to find ways to counteract stupidity? Yes. Do we know how it will be done? No.

Isn’t tech fun?

Whitney Grace, March 5, 2024

Technology Becomes Detroit

March 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Have you ever heard of technical debt? Technical debt is when an IT team prioritizes speedy delivery of a product over creating a feasible, quality product. Technology history is full of technical debt. Some of the more famous cases are the E.T. videogame for the Atari, Windows Vista, and the Samsung Galaxy Gear. Technical debt is an ongoing issue for IT departments and tech companies. It’s apparently getting worse. ITPro details the current problems with technical debt in “IT Leaders Need To Accept They’ll Never Escape Technical Debt, But That Doesn’t Mean They Should Down Tools.”

Gordon Haff is a senior leader at Red Hat and a technology evangelist. Haff told ITPro that tech experts will remain hindered as they continue to deal with technical debt and skill shortages. Tech experts want to advance their field with transformative projects, but they’re held back by the same aforementioned issues. Haff stressed that as soon as one project is complete, tech experts build the next project on existing architecture. It creates a technical debt infrastructure.

Haff provided an example of this rinse-and-repeat pattern:

“Haff pointed toward application modernization as a prime example of this rinse and repeat trend. Many enterprises, he said, deliberately choose to not tinker with certain applications due to the fact they still worked nominally.

Fast forward several years later, these applications are overhauled and modernized, then are left to their own devices – to some extent – and reassessed during the next transformation cycle.

‘If you go back 10 years, we had this sort of bimodal IT, or fast-slow IT, that was kind of the thing,’ he explained. “The idea was ‘we’ll leave that old stuff, we’ll shove that off into the corner and not worry about it’ and the cool kids can work on all this greenfield, often new customer-facing applications.

‘But by and large, it’s then a case of ‘oh we actually need to deal with this core business stuff’ and these older applications.’”

Haff suggests that IT experts shouldn’t approach their work with a “one and done” mindset. They should realize their work is constantly evolving. They should be aware of how to go with the flow and maintain legacy systems so they don’t transform into large messes. There’s a reason videogame companies have beta tests, restaurants have soft openings, and musicals have previews. They test things to deliver quality products. Technical debt leads to technical rot.

Whitney Grace, March 4, 2024

Does Cheap and Sneaky Work Better than Expensive and Hyperbolic?

February 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My father was a member of the Sons of the American Revolution (SAR). He loved reading about that historical “special operation.” I think he imagined himself in a makeshift uniform, hiding behind some bushes, and then greeting the friends of George III with some old-fashioned malice. My hunch is that John Arnold’s descendants wrote anti-British editorials and gave speeches. But what do I know? Not much, that’s for sure.


The David and Goliath trope may be applicable to the cheap drone tactic. Thanks, MSFT Copilot Bing thing. Good enough.

I thought about how a rag-tag, under-supplied collection of colonials could bedevil the British when I read The Guardian’s essay “Deadly, Cheap and Widespread: How Iran-Supplied Drones Are Changing the Nature of Warfare.” The write up opines that the drone which killed several Americans in Jordan:

is most likely to be the smaller Shahed 101 or delta-winged Shahed 131, both believed to be in Kataib Hezbollah’s arsenal … with estimated ranges of at least 700km (434 miles) and a cost of $20,000 (£15,700) or more. (Source: Fabian Hinz, a weapons expert)

The point strikes me as a variant of David versus Goliath. The giant gets hurt by a lesser opponent with a cheap weapon. Iran is using drones, not exotic hardware like the F-16s Türkiye craves. A flimsy drone does not require the obvious paraphernalia of power the advanced jet does. Tin snips, some parts from Shenzhen retail outlets, and model airplane controls. No hangars, mechanics, engineers, and specially trained pilots.

Shades of the Colonials I think. The article continues:

The drones …are best considered cheap substitutes for guided cruise missiles, most effective against soft or “static structures” which force those under threat to “either invest money in defenses or disperse and relocate which renders things like aircraft on bases more inefficient”

Is there a factoid in this presumably accurate story from a British newspaper? Yes. My take-away is that simple and basic can do considerable harm. Oh, let me add “economical”, but that is rarely a popular concept among some government entities and approved contractors.

Net net: How about thinking like some of those old-time Revolutionaries in what has become the US?

Stephen E Arnold, February 8, 2024

Why Stuff Does Not Work: Airplane Doors, Health Care Services, and Cyber Security Systems, Among Others

January 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

“The Downward Spiral of Technology” struck a chord with me. Think about building monuments in the reign of Cleopatra. The workers can check out the sphinx and giant stone blocks in the pyramids and ask, “What happened to the technology? We are banging with bronze and crappy metal compounds, and those ancient dudes were zipping along with snappier tech.” That conversation is imaginary, of course.

The author of “The Downward Spiral” is focusing on less dusty technology, but the theme might resonate with my made-up stone workers. Modern technology lacks some of the zing of the older methods. The essay by Thomas Klaffke hit on some themes my team has shared whilst stuffing Five Guys burgers in their shark-like mouths.

Here are several points I want to highlight. In closing, I will offer some of my team’s observations on the outcome of the Icarus emulators.

First, let’s think about search. One cannot do anything unless one can find electronic content. (Lawyers, please, don’t tell me you have associates work through the mostly-for-show books in your offices. You use online services. Your opponents in court print stuff out to make life miserable. But electronic content is the cat’s pajamas in my opinion.)

Here’s a table from Mr. Klaffke’s essay:

[Table from the essay comparing “old” search technology with Google’s “new” search; not reproduced here.]

Two things are important in this comparison of the “old” tech and the “new” tech deployed by the estimable Google outfit. Number one: Search in Google’s early days made an attempt to provide content relevant to the query. The system was reasonably good, but it was not perfect. Messrs. Brin and Page fancy danced around issues like disambiguation, date and time data, date and time of crawl, and forward and rearward truncation. Flash forward to the present day, and the massive contributions of Prabhakar Raghavan and other “in charge of search” executives deliver irrelevant information. To find useful material, navigate to a Google Dorks service and use those tips and tricks. Otherwise, forget it and give Swisscows.com, StartPage.com, or Yandex.com a whirl. You are correct. I don’t use the smart Web search engines. I am a dinobaby, and I don’t want thresholds set by a 20 year old filtering information for me. Thanks but no thanks.
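
For readers who have never tried the “dorks” route, the tips boil down to a handful of long-standing query operators: site:, filetype:, intitle:, quoted phrases, and minus-prefixed exclusions. Here is a small helper that composes such queries; how faithfully any engine honors the operators on a given day is another matter, and the example domain is a placeholder.

```python
# dork_builder.py - composes advanced-operator ("dork") query strings.
def dork(phrase: str = "", site: str = "", filetype: str = "",
         intitle: str = "", exclude: tuple = ()) -> str:
    parts = []
    if phrase:
        parts.append(f'"{phrase}"')           # exact-phrase match
    if site:
        parts.append(f"site:{site}")          # restrict to one domain
    if filetype:
        parts.append(f"filetype:{filetype}")  # e.g. pdf, xls, log
    if intitle:
        parts.append(f"intitle:{intitle}")    # term must appear in the title
    parts += [f"-{term}" for term in exclude] # exclude noisy terms
    return " ".join(parts)

if __name__ == "__main__":
    # Find PDFs about enterprise search on one site, minus the marketing.
    print(dork(phrase="enterprise search", site="example.com",
               filetype="pdf", exclude=("brochure",)))
    # -> "enterprise search" site:example.com filetype:pdf -brochure
```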

The second point is that search today is a monopoly. It takes specialized expertise to find useful, actionable, and accurate information. Most people — even those with law degrees, MBAs, and the ability to copy and paste code — cannot cope with provenance, verification, validation, and informed filtering performed by a subject matter expert. Baloney does not work in my corner of the world. Baloney is not a favorite food group for me or those who are on my team. Kudos to Mr. Klaffke for making this point. Let’s hope someone listens. I have given up trying to communicate the intellectual issues lousy search and retrieval creates. Good enough. Nope.


Yep, some of today’s tools are less effective than yesterday’s gizmos. Hey, how about those new mobile phones? Thanks, MSFT Copilot Bing thing. Good enough. How’s the MSFT email security today? Oh, I asked that already.

Second, Mr. Klaffke gently reminds his reader that most people do not know snow cones from Shinola when it comes to information. Most people assume that a computer output is correct. This is just plain stupid. He provides some useful examples of problems with hardware and user behavior. Are his examples ones that will change behaviors? Nope. It is, in my opinion, too late. Information is an undifferentiated haze of words, phrases, ideas, facts, and opinions. Living in a haze and letting signals from online emitters guide one is a good way to run a tiny boat into a big reef. Enjoy the swim.

Third, Mr. Klaffke introduces the plumbing of the good-enough mentality. He is accurate. Some major social functions are broken. At lunch today, I mentioned the writings about ethics by John Dewey and William James. My point was that these fellows wrote about behavior associated with a world long gone. It would be trendy to wear a top hat and ride in a horse drawn carriage. It would not be trendy to expect that a person would work and do his or her best to do a good job for the agreed-upon wage. Today I watched a worker who played with his mobile phone instead of stocking the shelves in the local grocery store. That’s the norm. Good enough is plenty good. Why work? Just pay me, and I will check out Instagram.

I do not agree with Mr. Klaffke’s closing statement; to wit:

The problem is not that the “machine” of humanity, of earth is broken and therefore needs an upgrade. The problem is that we think of it as a “machine”.

The problem is that worldwide shared values and cultural norms are eroding. Once the glue gives way, we are in deep doo doo.

Here are my observations:

  1. No entity, including governments, can do anything to reverse thousands of years of cultural accretion of norms, standards, and shared beliefs.
  2. The vast majority of people alive today are reverting to some fascinating behaviors. “Fascinating” is not a positive in the sense in which I am using the word.
  3. Online has accelerated the stress on social glue; smart software is the turbocharger of abrupt, hard-to-understand change.

Net net: Please, read Mr. Klaffke’s essay. You may have an idea for remediating one or more of today’s challenges.

Stephen E Arnold, January 26, 2024

Signals for the Future: January 2024

January 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Data points fly more rapidly than arrows in an Akira Kurosawa battle scene. My research team identified several items of interest which the free lunchers collectively identified as mysterious signals for the future. Are my special librarians, computer programmers, and eager beavers prognosticators you can trust to presage the future? I advise some caution. Nevertheless, let me share their divinations with you.


This is an illustration of John Arnold, a founder of Hartford, Connecticut, trying to discern the signals about the future of his direct descendant Stephen E Arnold. I use the same type of device, but I focus on a less ambitious time span. Thanks, MidJourney, good enough.

Enablers in the Spotlight

First, Turkey has figured out that the digital enablers which operate as Internet service providers, hosting services which offer virtual machines and crypto, and developers of assorted obfuscation software are a problem. The odd orange newspaper reported in “Turkey Tightens Internet Censorship ahead of Elections.” The signal my team identified appears in this passage:

Documents seen by the Financial Times show that Turkey’s Information Technologies and Communications Authority (BTK) told internet service providers a month ago to curtail access to more than a dozen popular virtual private network services.

If Turkey’s actions return the results the government of Turkey finds acceptable, will other countries adopt a similar course of action? My dinobaby memory allowed me to point out that this is old news. China and Iran have played this face card before. One of my team pointed out, “Yes, but this time it is virtual private networks.” I asked one of the burrito eaters to see if M247 has been the subject of any chatter. What’s an M247? Good question. The answer is, “An enabler.”

AI Kills Jobs

Second, one of my hard workers pointed out that Computerworld published an article with a bold assertion. Was it a bit of puffery or was it a signal? The answer was, “A signal.”

“AI to Impact 60% of Jobs in Developed Economies: IMF” points out:

The blog post points out that automation has typically impacted routine tasks. However, this is different with AI, as it can potentially affect skilled jobs. “As a result, advanced economies face greater risks from AI — but also more opportunities to leverage its benefits — compared with emerging market and developing economies,” said the blog post. The older workforce would be more vulnerable to the impact of technology than the younger college-educated workers. “Technological change may affect older workers through the need to learn new skills. Firms may not find it beneficial to invest in teaching new skills to workers with a shorter career horizon; older workers may also be less likely to engage in such training, since the perceived benefit may be limited given the limited remaining years of employment,” said the IMF report.

Life for some of the recently RIFed and for dinobabies will be more difficult. This is a signal? My team says, “Yes, dinobaby.”

Advertising As Cancer

Final signal for this post: One of my team made a big deal out of the information in “We Removed Advertising Cookies, Here’s What Happened.” Most of the write up will thrill the lucky people who are into search engine optimization and related marketing hoo hah. The signal appears in this passage:

When third-party cookies are fully deprecated this year, there will undoubtedly be more struggles for performance marketers. Without traditional pixels or conversion signals, Google (largest ad platform in the world) struggles to find intent of web visitors to purchase.

We listened as our colleague explained: “Google is going to do whatever it can to generate more revenue. The cookie thing, combined with the ChatGPT-type of search, means that Google’s golden goose is getting perilously close to one of those craters with chemical-laced boiling water at Yellowstone.” That’s an interesting signal. Can we hear a goose squawking now?

Make of these signals what you will. My team and I will look and listen for more.

Stephen E Arnold, January 18, 2024

Can Technology Be Kept in a Box?

January 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

If true, this is a relationship worth keeping an eye on. Tom’s Hardware reports, “China Could Have Access to the Largest AI Chips Ever Made, Supercomputer with 54 Million Cores—US Government Investigates Cerebras’ UAE-Based Partner.” That United Arab Emirates partner is a holding company called G42, and it has apparently been collecting the powerful supercomputers to underpin its AI ambitions. According to reporting from the New York Times, that collection now includes the record-breaking processors from California-based Cerebras. Writer Anton Shilov gives us the technical details:

“Cerebras’ WSE-2 processors are the largest chips ever brought to market, with 2.6 trillion transistors and 850,000 AI-optimized cores all packed on a single wafer-sized 7nm processor, and they come in CS-2 systems. G42 is building several Condor Galaxy supercomputers for A.I. based on the Cerebras CS-2 systems. The CG-1 supercomputer in Santa Clara, California, promises to offer four FP16 Exaflops of performance for large language models featuring up to 600 billion parameters and offers expansion capability to support up to 100 trillion parameter models.”
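
For a sense of scale, here is some back-of-the-envelope arithmetic on those quoted figures. The 6 × N × D training-compute heuristic and the roughly 20-tokens-per-parameter ratio are community rules of thumb, not Cerebras or G42 numbers, and the utilization figure is a guess.

```python
# envelope.py - rough arithmetic on the quoted CG-1 figures.
# The 6*N*D FLOPs heuristic and ~20 tokens/parameter are community
# rules of thumb (not Cerebras/G42 numbers); utilization is a guess.
N = 600e9      # parameters, from the quoted article
PEAK = 4e18    # four FP16 exaflops in FLOP/s, from the quoted article

weights_tb = N * 2 / 1e12              # FP16 = 2 bytes per parameter
D = 20 * N                             # assumed training tokens
flops = 6 * N * D                      # approximate total training compute
days = flops / (PEAK * 0.4) / 86_400   # assume 40% sustained utilization

print(f"weights alone: ~{weights_tb:.1f} TB at FP16")      # ~1.2 TB
print(f"training compute: ~{flops:.2e} FLOPs")             # ~4.3e25
print(f"wall clock at 40% utilization: ~{days:.0f} days")  # ~310 days
```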

That is impressive. One wonders how fast that system sucks down water. But what will the firms do with all this power? That is what the CIA is concerned about. We learn:

“G42 and Cerebras plan to launch six four-Exaflop Condor Galaxy supercomputers worldwide; these machines are why the CIA is suspicious. Under the leadership of chief executive Peng Xiao, G42’s expansion has been marked by notable agreements — including a partnership with AstraZeneca and a $100 million collaboration with Cerebras to develop the ‘world’s largest supercomputer.’ But classified reports from the CIA paint a different picture: they suggest G42’s involvement with Chinese companies — specifically Huawei — raises national security concerns.”

For example, G42 may become a clearinghouse for sensitive American technologies and genetic data, we are warned. Also, with these machines located outside the US, they could easily be used to train LLMs for the Chinese. The US has threatened sanctions against G42 if it continues to associate with Chinese entities. But as Shilov points out, we already know the UAE has been cozying up to China and Russia and distancing itself from the US. Sanctions may have a limited impact. Tech initiatives like G42’s are seen as an important part of diversifying the country’s economy beyond oil.

Cynthia Murrell, January 11, 2024

Sci Fi or Sci Fake: A Post about a Chinese Force Field

January 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Imagine a force field which can deflect a drone or other object. Commercial applications could range from passenger vehicles to directing flows of material in a manufacturing process. Is a force field a confection of science fiction writers or a technical avenue nearing market entry?

image

A Tai Chi master uses his powers to take down a drone. Thanks, MSFT Copilot Bing thing. Good enough.

“Chinese Scientists Create Plasma Shield to Guard Drones, Missiles from Attack” presents information which may be a combination of “We’re innovating and you are not” and “science fiction.” The write up reports:

The team led by Chen Zongsheng, an associate researcher at the State Key Laboratory of Pulsed Power Laser Technology at the National University of Defence Technology, said their “low-temperature plasma shield” could protect sensitive circuits from electromagnetic weapon bombardments with up to 170kW at a distance of only 3 metres (9.8 feet). Laboratory tests have shown the feasibility of this unusual technology. “We’re in the process of developing miniaturized devices to bring this technology to life,” Chen and his collaborators wrote in a peer-reviewed paper published in the Journal of National University of Defence Technology last month.

But the write up makes clear that other countries like the US are working to make force fields more effective. China has a colorful way to explain its innovation; to wit:

The plasma-based energy shield is a radical new approach reminiscent of tai chi principles – rather than directly countering destructive electromagnetic assaults it endeavors to convert the attacker’s energy into a defensive force.

Tai chi, as I understand it, is a combination of mental discipline and specific movements to develop mental peace, promote physical well being, and control internal force for a range of purposes.

How does the method function? The article explains:

… When attacking electromagnetic waves come into contact with these charged particles, the particles can immediately absorb the energy of the electromagnetic waves and then jump into a very active state. If the enemy continues to attack or even increases the power at this time, the plasma will suddenly increase its density in space, reflecting most of the incidental energy like a mirror, while the waves that enter the plasma are also overwhelmed by avalanche-like charged particles.
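
The “mirror” behavior in that passage is textbook plasma physics: an electromagnetic wave is reflected when its frequency falls below the plasma frequency, and the plasma frequency rises with electron density. A quick calculation follows; the density value is an arbitrary illustration, not a figure from the Chinese paper.

```python
# plasma_cutoff.py - textbook plasma-frequency calculation; the electron
# density used here is an arbitrary illustration, not from the cited paper.
import math

E = 1.602e-19     # electron charge, C
M_E = 9.109e-31   # electron mass, kg
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plasma_frequency_hz(n_e: float) -> float:
    """Waves below this frequency are reflected by a plasma with
    electron density n_e (electrons per cubic metre)."""
    omega = math.sqrt(n_e * E**2 / (EPS0 * M_E))  # rad/s
    return omega / (2 * math.pi)

if __name__ == "__main__":
    n_e = 1e18  # electrons per m^3, illustrative
    print(f"cutoff ~ {plasma_frequency_hz(n_e) / 1e9:.1f} GHz")  # ~9.0 GHz
    # Raising the density, as the quoted passage describes, raises the
    # cutoff, pushing the "mirror" to higher attack frequencies.
```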

One question: Are technologists mining motion pictures, television shows, and science fiction for ideas?

Beam me up, Scotty.

Stephen E Arnold, January 10, 2024

Want to Fix Technopoly Life? Here Is a Plan. Implement It. Now.

December 28, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Cal Newport published an interesting opinion essay in New Yorker Magazine called “It Is Time to Dismantle the Technopoly.” The point upon which I wish to direct my dinobaby microscope appears at the end of the paywalled artistic commentary. Here’s the passage:

We have no other reasonable choice but to reassert autonomy over the role of technology in shaping our shared story.

The author or a New Yorker editor labored over this remarkable sentence.

First, I want to point out that there is a somewhat ill-defined or notional “we”. Okay, exactly who is included in the “we”? I would suggest that the “technopoly” is excluded. The title of the article makes clear that dismantle means taking apart, disassembling, or deconstructing. How will that be accomplished in a nation state like the US? What about the four entities in the alleged “Axis of Evil”? Are there other social constructs, like an informal, distributed group of bad actors who want to make smart software available to anyone who wants to mount phishing and ransomware attacks? Okay, that’s the we problem. Not too tiny, is it?

image

A teacher explains to her top students that they have an opportunity to define some interesting concepts. The students do not look too happy. As the students grow older, their interest in therapist jargon may increase. The enthusiasm for defining such terms remains low. Thanks, MSFT Copilot.

Second, “no other reasonable choice.” I think you know where I am going with my next question: What does “reasonable” mean? I think the author knows or hopes that the “we” will recognize “reasonable” when those individuals see it. But reason is slippery, particularly in an era in which literacy is defined as being able to “touch and pay” and “swipe left.” What happens if the computing device equipped with good enough smart software “frames” an issue? How does one define “reasonable” if the information used to make that decision is weaponized, biased, or defined by a system created by the “technopoly”? Who other than lawyers wants to argue endlessly over an epistemological issue? Not me. The “reasonable” is pulled from the same word list used by some of the big technology outfits. Isn’t Google reasonable when it explains that it cares about the user’s experience? What about Meta (the Zuckbook) and its crystal clear explanations of kiddie protections on its services? What about the explanations of legal experts arguing against one another? The word “reasonable” strikes me as therapist speak or mother-knows-best talk.

Third, the word “reassert” suggests that it is time to overthrow the technopoly. I am not sure a Boston Tea Party-type event will do the trick. Technology, particularly open source software, makes it easy for a bad actor working from a beat-down caravan near Makarska to create a new product or service that sweeps through the public network. How is “reassert” going to cope with an individual hooked into an online, distributed criminal network? Believe me, Europol is trying, but the work is difficult. But the notion of “reassert” implies that there was a prior state, a time when technopolists were not the focal point of The New Yorker. “Reassert” is a call to action. The who, how, when, and where questions are not addressed. The result is crazy rhetoric which, I suppose, might work if one were a TikTok influencer backed by a large country’s intelligence apparatus. But that might not work either. The technopolies have created the datasphere, and it is tough to grab a bale of tea and pitch it into Boston Harbor today. “Heave those bits overboard, mates” won’t work.

Fourth “autonomy.” I am not sure what “autonomy” means. When I was taking required classes at the third-rate college I attended, I learned the definition each instructor presented. Then, like a good student chasing top marks, I spit the definition back. Bingo. The method worked remarkably well. The notion of “autonomy” dredges upon explanations of free will and predestination. “Autonomy” sounds like a great idea to some people. To me, it smacks of ideas popular when Ben Franklin was chasing females through French doors before he was asked to return to the US of A. YouTube is chock-a-block with off-the-grid methods. Not too many people go off the grid and remain there. When someone disappears, it becomes “news.” And the person or the entity’s remains become an anecdote on a podcast. How “free” is a person in the US to “dismantle” a public or private enterprise? Can one “dismantle” a hacker? Remember those homeowners who put bullets in an intruder and found themselves in jail? Yeah. Autonomy. How’s that working out in other countries? What about the border between French Guyana and Brazil? Do something wrong and the French Foreign Legion will define “autonomy” in terms of a squad solving a problem. Bang. Done. Nice idea that “autonomy” stuff.

Fifth, the word “role” is interesting. I think of “role” as a character in a social setting; for example, a CEO who is insecure about how he or she actually became a CEO. That individual tries to play a “role.” A character like the actor who becomes “Mr. Kitzel” on the Jack Benny Show plays a role. The talking heads on cable news play a “role.” Technology enables, it facilitates, and it captivates. I suppose that’s its “role.” I am not convinced. Technology does what it does because humans have shaped a service, software, or system to meet an inner need of a human user. Technology is like a gerbil. Look away and there are more and more little technologies. Due to human actions, the little technologies grow, and then the actions of lots of humans make the technologies into digital behemoths. But humans do the activating, not the “technology.” The twist with technology is that as it feeds on human actions, the impact of the two interacting is tough to predict. In some cases, what happens is tough to explain as that action is taking place. A good example is the role of TikTok in shaping the viewpoints of some youthful fans. “Role” is not something I link directly to technology, but the word implies some sort of “action.” Yeah, but humans were and are involved. The technology is perhaps a catalyst or digital Teflon. It is not Mr. Kitzel.

Sixth, the word “shaping” in the cited sentence directly implies that “technology” does something. It has intent. Nope. The humans who control or who have unrestricted access to the “technology” do the shaping. The technology — sorry, AI fans — is following instructions. Some instructions come from a library; others can be cooked up based on prior actions. But for the most part, technology is inanimate and “smart” only to uninformed people. It is not shaping anything unless a human set up the system to look for teens who want to commit suicide, and the software identifies similar content and displays it for the troubled 13 year old. But humans did the work. Humans shape, distort, and weaponize. The technology is putty composed of zeros and ones. If I am correct, the essay wants to terminate humans. Once these bad actors are gone, the technology “problem” goes away. Sounds good, right?

Finally, the phrase “shared story.” What is this “shared story”? The commentary on a spectacular shot to win a basketball game? A myth that Thomas Jefferson was someone who kept his trousers buttoned? The story of a Type A researcher who experimented with radium and ended up a poster child for radiation poisoning? An influencer who escaped prison and became a homeless minister caring for those without jobs and a home? The “shared story” is a baffler. My hunch is that “shared story” is something that the “we” are sad has disappeared. My family was one of the group that founded Hartford, Connecticut, in the 17th century. Is that the Arnolds’ shared story? News flash: There are not many Arnolds left, and those who remain laugh when I “share” that story. It means zero to them. If you want a “shared story”, go viral on YouTube or buy Super Bowl ads. Making friends with Taylor Swift will work too.

Net net: The mental orientation of the cited essay is clear in one sentence. Yikes, as the honor students might say.

Stephen E Arnold, December 28, 2023

