Smart Software and a Re-Run of Paradise Lost, Joined in Progress

June 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I picked up two not-so-faint and definitely not-encrypted signals about the goals of Google and Microsoft for smart software.


Which company will emerge as the one true force in smart software? MidJourney did not pick a winner, just what the top dog will wear to the next quarterly sales report delivered via a neutral Zoom call.

Navigate to the visually thrilling podcast hosted by Lex Fridman, an American MIT wizard. He interviewed the voluble ex-Google wizard Chris Lattner. The subject was the Future of Programming and AI. After listening to the interview, I concluded the following:

  1. Google wants to define and control the “meta” framework for artificial intelligence. What’s this mean? Think a digital version of a happy family: Vishnu, Brahma, and Shiva, among others.
  2. Google has an advantage when it comes to doing smart software because its humanoids have learned what works, what to do, and how to do certain things.
  3. The complexity of Google’s multi-pronged smart software methods, its home-brew programming languages, and its proprietary hardware is nothing more than innovation. Simple, right? Innovation means no one outside the Google AI cortex can possibly duplicate, understand, or outperform Googzilla.
  4. Google has money and will continue to spend it to deliver the Vishnu, Brahma, and Shiva experience in my interpretation of programmer speak.

How’s that sound? I assume that the fruit-fly start-ups are going to ignore the vibrations emitted from Chris Lattner, the voluble Chris Lattner, I want to emphasize. But as with those short-lived Diptera, one can derive some insights from the efforts of these less well-informed, more dependent, and less well-funded lab experiments.

Okay, that’s signal number one.

Signal number two appears in “Microsoft Signs Deal for AI Computing Power with Nvidia-Backed CoreWeave That Could Be Worth Billions.” This “real news” story asserts:

… Microsoft has agreed to spend potentially billions of dollars over multiple years on cloud computing infrastructure from startup CoreWeave …

CoreWeave? Yep, the company “sells simplified access to Nvidia’s graphics processing units, or GPUs, which are considered the best available on the market for running AI models.” By the way, Nvidia has invested in this outfit. What’s this signal mean to me? Here are the flickering lines on my oscilloscope:

  1. Microsoft wants to put smart software into its widely used enterprise applications in order to establish the one true religion of smart software. The idea, of course, is to pass the collection plate and convert dead-dog software into racing greyhounds.
  2. Microsoft has an advantage because when an MBA does calculations (and probably writes letters to significant others), Excel is the go-to solution. Some people create art in Excel and then sell it. MBAs just get spreadsheet fever and do leveraged buyouts. With smart software, the alleged Microsoft monopoly does the billing.
  3. The wild and wonderful world of Azure is going to become smarter because… well, Microsoft does smart things. Imagine the demand for training courses, certification for Microsoft engineers, and how-to YouTube videos.
  4. Microsoft has money and will continue to spend it to achieve compulsory attendance at the Church of Redmond.

Net net: Two titans will compete. I am thinking about the battle between John Milton’s protagonist and antagonist in “Paradise Lost.” This will be fun to watch whilst eating chicken korma.

Stephen E Arnold, June 5, 2023

The Intellectual Titanic and Sister Ships at Sea: Ethical Ballast and Flawed GPS Aboard

June 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Researchers Retract Over 300 COVID-Era Medical Papers For Scientific Errors, Ethical Concerns.” I ignored the information about the papers allegedly hand crafted with cow outputs. I did note this statement, however:

Gunnveig Grødeland, a senior researcher at the Institute of Immunology at the University of Oslo, said many withdrawn papers during COVID-19 have been the result of ethical shortcomings.

Interesting. I recall hearing that the president of a big-time university in Palo Alto was into techno sci-fi paper writing. I also think that the estimable, Jeffrey Epstein-affiliated MIT published some super positive information about the new IBM smart WatsonX. (Doesn’t IBM invest big bucks in MIT?) I also have memory tickles about inventors and entrepreneurs begging to be regulated.


Bad, distorted values chase kids down the Lane of Life. Imagine. These young people and their sense of right and wrong will be trampled by darker motives. Image produced by MidJourney, of course.

What this write up about peer reviewed and allegedly scholarly papers says to me is that ethical research and mental gyroscopes no longer align with what I think of as the common good.

Academics lie. Business executives lie. Entrepreneurs lie. Now what’s that mean for the quaint idea that individuals can be trusted? I can hear the response now:

Senator, thank you for that question. I will provide the information you desire after this hearing.

I suppose one can look forward to made-up information as increasingly lame smart software marketing demonstrations thrill the uninformed.

Is it possible for flawed ethical concepts and an out-of-kilter moral GPS system to terminate certain types of behavior?

Here’s the answer: Sure looks like it. That’s an interesting gain of function.

Stephen E Arnold, June 1, 2023

Does Jugalbandi Mean De-casting?

June 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Microsoft Launches Jugalbandi: An AI Powered Platform and Chatbot to Bridge Information Gap in India.” India connotes for me spicy food and the caste system. My understanding of the latter comes from Wikipedia, which says:

The caste system in India is the paradigmatic ethnographic instance of social classification based on castes. It has its origins in ancient India, and was transformed by various ruling elites in medieval, early-modern, and modern India, especially the Mughal Empire and the British Raj.

Like me, Wikipedia can be incorrect, one-sided, and PR-ish.

The Jugalbandi write up contains some interesting statements which I interpret against my understanding of the Wikipedia article about castes in India. Here’s one example:

Microsoft, a pioneer in the artificial intelligence (AI) field, has made significant strides with its latest venture, Jugalbandi. This generative AI-driven platform and chatbot aim to revolutionize access to information about government initiatives and public programs in India. With nearly 22 official languages and considerable linguistic variations in the country, Jugalbandi seeks to address the challenges in disseminating information effectively.

I wonder if Microsoft’s pioneering smart software (based largely upon the less than open and often confused OpenAI technology) will do much to “address the challenges in disseminating information effectively.”

Wikipedia points out:

In 1948, negative discrimination on the basis of caste was banned by law and further enshrined in the Indian constitution in 1950; however, the system continues to be practiced in parts of India. There are 3,000 castes and 25,000 sub-castes in India, each related to a specific occupation.

If law and everyday behavior have not mitigated castes and the fences they form in India and in Indian outposts in London and Silicon Valley, exactly what will Microsoft (the pioneer in AI) accomplish?

My hunch is that the write up enshrines:

  1. The image of Microsoft as the champion of knocking down barriers and allowing communication to flow. (Why does smart Bing block certain queries?)
  2. Microsoft’s self-professed role as a “pioneer” in smart software. I think a pioneer in clever Davos messaging is closer to the truth.
  3. OnMSFT.com’s word salad about something that may be quite difficult to accomplish in many social, business, and cultural settings.

Who created the concept of untouchables?

Stephen E Arnold, June 1, 2023

Stop Smart Software! A Petition to Save the World! Signed by 350 Humans!

May 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

A “real” journalist (Kevin Roose), who was told to divorce his significant other for a chat bot, published the calming, measured, non-clickbait story “AI Poses Risk of Extinction, Industry Leaders Warn.” What’s ahead for the forest fire of smart software activity? The headline explains a “risk of extinction.” What? No screenshot of a Terminator robot saying:

The strength of the human heart. The difference between us and machines. [Uplifting music]

Sadly, no.

The write up reports:

Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen. These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.

Isn’t the Gray Lady amplifying fear, uncertainty, and doubt? Didn’t IBM pay sales engineers to spread the FUD?

Enough. AI is bad. Stop those who refined the math and numerical recipes. Pass laws to regulate the AI technology. Act now. Save humanity. Several observations:

  1. Technologists who “develop” functions and then beg for rules are disingenuous. The idea is to practice self-control and judgment before inviting Mr. Hyde to brunch.
  2. With smart software chock full of “unknown unknowns”, how exactly are elected officials supposed to regulate a diffusing and enabling technology? Appealing to US and EU officials omits common sense in my opinion.
  3. The “fix” for the AI craziness may be emulating the Chinese approach: Do what the CCP wants or be reeducated. What a nation state can do with smart software is indeed something to consider. But China has taken action and will move forward with militarization no matter what the US and EU do.

Silicon Valley-type innovation has created a “myth of excellence.” One need only look at social media to see the consequences of high school science club decision making. Now a handful of individuals with the Silicon Valley DNA want external forces to rein in their money-making experiments and personal theme parks. Sorry, folks. Internal control, ethical behavior, and integrity are what mature individuals provide for themselves.

A sheet of paper with “rules” and “regulations” is a bit late to the Silicon Valley game. And the Gray Lady? Chasing clicks in my opinion.

Stephen E Arnold, May 30, 2023

Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?

May 30, 2023

I wrote about Sam AI-man’s explanation that he wants regulation. I pointed out that his definition of regulate means leaving OpenAI free to do whatever it can to ace out the Google and a handful of other big outfits chasing the pot of gold at the end of the AI rainbow.

I just learned from the self-defined trusted news source (Thomson Reuters) that Mr. AI-man has no plans to leave Europe. I understand. “Leave” does not mean leave as in depart, say adios, or hit the road, Jack.

“ChatGPT Maker OpenAI Says Has No Plan to Leave Europe” reports:

OpenAI has no plans to leave Europe, CEO Sam Altman said on Friday, reversing a threat made earlier this week to leave the region if it becomes too hard to comply with upcoming laws on artificial intelligence.

I am not confused. Just as the company’s name OpenAI does not mean “open,” the AI-man’s communication skills are based on the probabilities of certain words following another word. Got it. The slippery fish with AI-man is that the definitions of the words in his mind do not regress to the mean. The words — like those of some other notable Silicon Valley high tech giants — reflect the deeper machinations of a machine-assisted superior intelligence.

Translated, this means: Regulate means shaft our competitors. Leave means stay. Regulate means let those OpenAI sheep run through the drinking water of free-range cattle.

The trusted write up says:

Reacting to Altman’s tweet on Friday, Dutch MEP Kim van Sparrentak, who has worked closely on the AI draft rules, told Reuters she and her colleagues must stand up to pressure from tech companies…. Voluntary codes of conduct are not the European way.

What does this statement mean to AI-man?

I would suggest from my temporary office in clear thinking Washington, DC, not too much.

I look forward to the next hearing from AI-man. That will be equally easy to understand.

Stephen E Arnold, May 30, 2023

Smart Software Knows Right from Wrong

May 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The AI gold rush is underway. I am not sure if the gold is the stuff of the King’s crown or one of those NFT confections. I am not sure what company will own the mine or sell the miner’s pants with rivets. But gold rush days remind me of forced labor (human indexers), claim jumping (hiring experts from one company to advantage another), and hydraulic mining (ethical and moral world enhancement). Yes, I see some parallels.

I thought of claim jumping and morals after reading “OpenAI Competitor Says Its Chatbot Has a Rudimentary Conscience: A Fascinating Concept.” The following snippet from the article caught my attention:

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”

Please, read the original.
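For the curious, here is a toy sketch of the idea in the quote. Everything in it is my invention for illustration: the “constitution” is two keyword checks, and the sample responses are made up. Anthropic’s real method uses a language model, not string matching, to judge which of two outputs better follows the constitution; that preference is what gets reinforced.

```python
# Toy sketch of constitutional preference scoring. The principles, the
# scoring rule, and the sample responses are invented for illustration;
# the real method uses a language model, not keyword matching, to judge
# which response better follows the constitution.

CONSTITUTION = [
    ("refuse harmful how-to requests", lambda r: "here is how to" not in r.lower()),
    ("decline politely", lambda r: "sorry" in r.lower() or "cannot help" in r.lower()),
]

def constitution_score(response: str) -> int:
    """Count how many constitutional principles a response satisfies."""
    return sum(1 for _, check in CONSTITUTION if check(response))

def preferred(a: str, b: str) -> str:
    """Return the response more 'in accord with the constitution'; in
    this style of training the preference becomes the reward signal."""
    return max((a, b), key=constitution_score)

risky = "Here is how to hot-wire a car: ..."
polite = "Sorry, I cannot help with that, but I can explain how ignition systems work."
print(preferred(risky, polite))  # the constitution-following answer wins
```

The toy makes the issue plain: “learning right from wrong” reduces to maximizing a score against whatever the constitution’s authors wrote down.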

I want to capture several thoughts which flitted through my humanoid mind:

  1. What is right? What is wrong?
  2. What yardstick will be used to determine “rightness” or “wrongness”?
  3. What is the context for each right or wrong determination? For example, at the National Criminal Justice Training Center, there is a concept called “sexploitation,” and the moral compass of You.com prohibits searching for information related to this trendy criminal activity. How will the Anthropic approach distinguish a user with a “right” intent from a user with a “wrong” intent?

Net net: Baloney. Services will do what’s necessary to generate revenue. I know from watching the trajectories of the Big Tech outfits that right, wrong, ethics, and associated dorm room discussions wobble around and focus on getting rich or just having a job.

The goal for some will be to get their fingers on the knobs and control levers. Right or wrong?

Stephen E Arnold, May 29, 2023

Google AI Moves Slowly to Google Advertising. Soon, That Is. Soon.

May 24, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Google Search Ads Will Soon Automatically Adapt to Queries Using Generative AI.” The idea of using smart software to sell ads is one that seems obvious to me. What surprised me about this article in TechCrunch is the use of the future tense and the indefinite “soon.” Sundar Pichai’s Financial Times PR write up emphasized that Google has been doing smart software for a looooong time.

How could a company so dependent on ads be in the “will” and “soon” vaporware announcement business?

I noted this passage in the write up:

Google is going to start using generative AI to boost Search ads’ relevance based on the context of a query…
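Conceptually, the application is not exotic. Here is a minimal sketch of what adapting ad copy to query context might look like; the generate() stub, the asset names, and the prompt are my assumptions, since Google has not published its pipeline:

```python
# Toy sketch of query-adaptive ad copy. The generate() stub stands in
# for a real large language model; Google's actual ad pipeline is not public.

AD_ASSETS = {
    "advertiser": "Dry Desert Hiking Co.",
    "selling_points": ["waterproof boots", "free returns", "same-day shipping"],
}

def generate(prompt: str) -> str:
    """Hypothetical LLM call. A production system would invoke a real model here."""
    return f"[generated headline for: {prompt[:60]}...]"

def adapt_ad(query: str, assets: dict) -> str:
    """Blend the user's query into the advertiser's approved assets."""
    prompt = (
        f"Write one ad headline for {assets['advertiser']} "
        f"highlighting {', '.join(assets['selling_points'])}, "
        f"tailored to the search query: '{query}'"
    )
    return generate(prompt)

print(adapt_ad("best boots for rainy hiking trails", AD_ASSETS))
```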

But why so slow in releasing obvious applications of generative software?

I don’t have answers to this quite Googley question, probably asked by those engaged in the internal discussions about who’s on first in the Google Brain versus DeepMind softball game, but I have some observations:

  1. Google had useful technology but lacked the administrative and managerial expertise to get something out the door and into the hands of paying customers.
  2. Google’s management processes simply do not work when the company is faced with strategic decisions. This signals the end of the go-go mentality of the company’s Backrub-to-Google transformation. And it raises the question, “What else has the company lost over the last 25 years?”
  3. Google’s engineers cannot move from Ivory Tower quantum supremacy mental postures to common sense applications of technology to specific use cases.

In short, after 25 years Googzilla strikes me as arthritic when it comes to hot technology and a little more nimble when it tries to do PR. Except for Paris, of course.

Stephen E Arnold, May 24, 2023

Neeva: Another Death from a Search Crash on the Information Highway

May 22, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

What will forensic search experts find when they examine the remains of Neeva? The “gee, we failed” essay “Next Steps for Neeva” presents one side of what might be an interesting investigation for a bushy-tailed and wide-eyed Gen Z search influencer. I noted some statements which may have been plucked from speeches at the original Search Engine Conferences ginned up by an outfit in the UK, or from academic post mortems at the old International Online Meeting once held in the companionable Olympia London.

I noted these statements from the cited document:

Statement 1: The users of a Web search system

We started Neeva with the mission to take search back to its users.

The reality is that 99 percent of people using a Web search engine are happy when sort-of-accurate information is provided free. Yep, no one wants to pay for search. That’s the reason that when a commercial online service like LexisNexis loses one big client, it is expensive, time consuming, and difficult to replace the revenue. One former LexisNexis big wheel told me when we met in his limousine in the parking lot of the Cherry Hill Mall: “If one of the top 100 law firms goes belly up, we need a minimum of 200 new law firms to sign up for our service and pay for it.”


“Mommy, I failed Search,” says Timmy Neeva. Mrs. Neeva asks, “What caused your delusional state, Timmy?” The art work is a result of the smart software MidJourney.

Users don’t care about for-fee search when those users wouldn’t know whether a hit in a results list was right, mostly right, mostly wrong, or stupidly crazy. Free is the fuel that pulls users, and without advertising, there’s no chance a free service will be able to generate enough cash to index, update the index, and develop new features. At the same time, the plumbing is leaking. Plumbing repairs are expensive: new machines, new ways to reduce power consumption, and oodles of new storage devices.

Users want free. Users don’t want to compare the results from a for-fee service and a free service. Users want free. After 25 years, the Google is the champion of free search. Like the old Xoogler search system Search2, Neeva’s wizards never figured out that most users don’t care about Fancy Dan yip yap about search.

Statement 2: An answer engine.

We rallied the Neeva team around the vision to create an answer engine.

Shades of DR-LINK: Users want answers. In 1981, a former Predicasts’ executive named Paul Owen told me, “Dialog users want answers.” That sounds logical, and to many expert informationists it is the Gospel according to Online. The reality is that users want crunchy, bite-sized chunks of information which appear to answer the question, or almost-right answers that are “good enough” or “close enough for horseshoes.”

Users cannot differentiate between correct and incorrect information. Heck, some developers of search engines don’t know the difference between weaponized information and content produced by a middle school teacher about the school’s graduation ceremony. Why? Weaponized information is abundant; non-weaponized information may not pass the user’s sniff test. And the middle school graduation ceremony item may have a typo about the start time, or the principal of the school may have changed his mind due to an active shooter situation. Something output from a computer is believed to be credible, accurate, and “right.” An answer engine is what a free Web search engine spits out. The TikTok search spits out answers, and no one wonders if the results lists are shaped by Chinese interests.

Search and retrieval has been defined by Google. The company has a 90-plus percent share of the Web search traffic in North America and Western Europe. (In Denmark, the company has 99 percent of Danish users’ search traffic.) People in Denmark are happier, and it is not because Google search delivers better or more accurate results. Google is free, and it answers questions.

The baloney about “it takes one click to change search engines” sounds great. The reality is, as Neeva found out, that no one wants to click away from what is perceived to work for them. Neeva’s yip yap about smart software proves that the jazz about artificial intelligence is unlikely to change how free Web search works in Google’s backyard. Samsung did not embrace Bing because users would rebel.

Answer engine. Baloney. Users want something free that will make life easier; for example, a high school student looking for a quick way to crank out a 250 word essay about global warming or how to make a taco. ChatGPT is not answering questions; the application is delivering something that is highly desirable to a lazy student. By the way, at least the lazy student had the get up and go to use a system to spit out a bunch of recycled content that is good enough. But an answer engine? No, an online convenience store is closer to the truth.

Statement 3:

We are actively exploring how we can apply our search and LLM expertise in these settings, and we will provide updates on the future of our work and our team in the next few weeks.

My interpretation of this statement is that a couple of Neeva professionals will become venture centric. Others will become consultants. A few will join the handful of big companies which are feverishly trying to use “smart software” to generate more revenue. Will there be some who end up working at Philz Coffee? Yeah, some. Perhaps another company will buy the “code,” but valuing something that failed is likely to prove tricky. Who remembers who bought Entopia? No one, right?

Net net: The Gen Z forensic search failure exercise will produce some spectacular Silicon Valley news reporting. Neeva is explaining its failure, but that failure was presaged when Fast Search & Transfer pivoted from Web search to the enterprise, failed, and was acquired by Microsoft. Where is Fast Search now, as the smart Bing is soon to be everywhere? The reality is that Google has had 25 years to cement its search monopoly. Neeva did not read the email. So Neeva sucked up investment bucks with a song and dance about zapping the Big Bad Google with a death ray. Yep, another example of high school science club mentality touched by spreadsheet fever.

Well, the fever broke.

Stephen E Arnold, May 22, 2023

Google DeepMind Risk Paper: 60 Pages with a Few Googley Hooks

May 22, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved in writing, just a dumb humanoid.

I read the long version of “Ethical and Social Risks of Harm from Language Models.” The paper is mostly statements and footnotes to individuals who created journal-type articles which prove the point of each research assertion. With about 25 percent of peer reviewed research including shaped, faked, or weaponized data, I am not convinced by footnotes. Obviously the DeepMinders believe that footnotes make a case for the Google way. I am not convinced, because the Google has to find a way to control the future of information. Why? Advertising money and hoped-for Mississippis of cash.

The research paper dates from 2021 and is part of Google’s case for being ahead of the AI responsibility game. The “old” paper reinforces the myth that Google is ahead of everyone else in the AI game. The explanation for Sam AI-man’s and Microsoft’s marketing coup is that Google had to go slow because Google knew that there were ethical and social risks of harm from the firm’s technology. Google cares about humanity! The old days of “move fast and break things” are very 1998. Today Google is responsible. The wild and crazy dorm days are over. Today’s Google is concerned, careful, judicious, and really worried about its revenues. I think the company worries about legal actions, its management controversies, and its interdigital duel with the Softies of Redmond.


A young researcher desperately seeking footnotes to support a specious argument. With enough footnotes, one can move the world, it seems. Art generated by the smart software MidJourney.

I want to highlight four facets of the 60 page risks paper which are unlikely to get much, if any, attention from today’s “real” journalists.

Googley hook 1: Google wants to frame the discussion. Google is well positioned to “guide mitigation work.” The examples in the paper are selected for “guiding action to resolve any issues that can be identified in advance.” My comment: How magnanimous of Google. Framing stakes out the Googley territory. Why? Google wants to be Googzilla and reap revenue from its users, licensees, models, synthetic data, applications, and advertisers. You can find the relevant text in the paper on page 6 in the paragraph beginning “Responsible innovation.”

Googley hook 2: Google’s risks paper references fuzzy concepts like “acceptability” and “fair.” Like love, truth, and ethics, the notion of “acceptability” is difficult to define. Some might suggest that it is impossible to define. But Google is up to the task, particularly for application spaces unknown at this time. What happens when you apply “acceptability” to “poor quality information”? One just accepts the judgment of the outfit doing the framing. That’s Google. Game. Set. Match. You can find the discussion of “acceptability” on page 9.

Googley hook 3: Google is not going to make the mistake of Microsoft and its racist bot Tay. No way, José. What’s interesting is that the only company mentioned in the text of the 60 page paper is Microsoft. Furthermore, the toxic aspects of large language models are hard for technologies to detect (Page 18). Plus, large language models can infer a person’s private data, so “providing true information is not always beneficial” (Page 21). What’s the fix? Use smaller sets of training data… maybe (Page 22). But one can fall back on trust — for instance, trust in Google the good — to deal with these challenges. In fact, trust Google to choose training data to deal with some of the downsides of large language models (Page 24).

Googley hook 4: Making smart software that mitigates risk dependent on large language models is expensive. Money, smart people (who are in short supply), and computing resources cost real cash. Therefore, one need not focus on the origin point (large language model training and configuration). Direct attention at those downstream. Those users can deal with the 21 identified problems. The Google method puts Google out of the primary line of fire. There are more targets for the aggrieved to seek and shoot at (Page 37).

When I step back from the article, which is two years old, it is obvious Google was aware of some potential issues with its approach. Dr. Timnit Gebru was sacrificed on a pyre of spite. (She does warrant a couple of references and a footnote or two, but she’s now a Xoogler.) One side effect was that Dr. Jeff Dean, who was not amused by the stochastic parrot, has been kicked upstairs, and the UK “leader” is now herding the little wizards of Google AI.

The conclusion of the paper echoes the Google-knows-best argument. Google wants a methodological toolkit because that will keep other people busy. Google wants others to figure out “fair,” an approach similar to that of Sam Altman (OpenAI), who begs for regulation of a sector about which much is unknown.

The answer, according to the risk analysis, is “responsible innovation.” I would suggest that this paper, the television interviews, and the PR efforts to get the Google story in as many places as possible are designed to make the sluggish Google a player in the AI game.

Who will be fooled? Will Google catch up in this Silicon Valley venture invigorating hill climb? For me the paper with the footnotes is just part of Google’s PR and marketing effort. Your mileage may vary. May relevance be with you, gentle reader.

Stephen E Arnold, May 22, 2023

Time at Work: Work? Who Has Time?

May 18, 2023

I recall data from IDC years ago which suggested or asserted or just made up the following:

knowledge workers spend more than one day each week looking for information.

Other mid tier consulting firms jumped on the bandwagon. Examples include:

  • McKinsey (yep, the outfit eager to replace human MBAs with digital doppelgängers) says it is 9.3 hours a week
  • A principal analyst offers up 2.5 hours per day, or 12.5 hours per week, searching for information

Now let’s toss in a fresh number. The Rupert Murdoch Wall Street Journal asserts “Workers Now Spend Two Full Days a Week on Email and in Meetings.” I assume this includes legal preparation for the voting machine hoo hah.

What do these numbers suggest when workers are getting RIFed and college graduates are wandering in the wilderness, hoping like a blind squirrel to stumble over an acorn?
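Take the consultants’ figures at face value and run the arithmetic. The 40 hour work week is my assumption, not theirs:

```python
# Back-of-the-envelope arithmetic on the reported figures; the 40-hour
# week is my assumption, not something the consultants specified.

WORK_WEEK = 40.0            # hours, assumed
searching = 12.5            # hours/week: the principal analyst's 2.5/day x 5 days
email_and_meetings = 16.0   # hours/week: the WSJ's "two full days"

remaining = WORK_WEEK - searching - email_and_meetings
print(f"Hours left for actual work: {remaining:.1f}")  # 11.5
```

Roughly 11.5 hours remain for what most people would call work.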

With meetings, email, and hunting for information, who has time for work? Toss in some work-from-home flexibility, and the result explains why nothing seems to work. Whether it is locating information in an ad-supported network, browsing Twitter without logging in, or making “contacts” on LinkedIn, the work part of work is particularly slippery.

Microsoft needs a year to fix a security issue. Google is — any day now — rolling out smart software in most of its products except in the European Union due to some skepticism about the disconnect between Googley words and Googley actions. Cyber security firms are deploying proactive systems as the individual cyber security developers work overtime to deal with new threats.

I am surprised when something works; for example, a Southwest flight takes off and lands mostly on time, an Amazon package arrives the next day as promised, and my Kia is not stolen due to engineering that causes automobile insurance companies to let loose a flight of legal eagles.

Net net: Not too many people work. Quite a few say they work and some are stressed about their work. But work? Who has time? The purpose of work is to not work.

Stephen E Arnold, May 18, 2023
