Microsoft and Good Enough Engineering: The MSI BSOD Triviality

August 30, 2023

My lineup of computers does not have a motherboard from MSI. Call me lucky, I guess. Some MSI product owners were not so fortunate. “Microsoft Puts Little Blame on Its Windows Update after Unsupported Processor BSOD Bug” is a fun read for those who are keeping notes about Microsoft’s management methods. The short essay romps through a handful of Microsoft’s recent quality misadventures.


“Which of you broke mom’s new vase?” asks the sister. The boys look surprised. The vase has nothing to say about the problem. Thanks, MidJourney, no adjudication required for this image.

I noted this passage in the NeoWin.net article:

It has been a pretty eventful week for Microsoft and Intel in terms of major news and rumors. First up, we had the “Downfall” GDS vulnerability which affects almost all of Intel’s slightly older CPUs. This was followed by a leaked Intel document which suggests upcoming Wi-Fi 7 may only be limited to Windows 11, Windows 12, and newer.

The most helpful statement in the article, in my opinion, was this one:

Interestingly, the company says that its latest non-security preview updates, ie, Windows 11 (KB5029351) and Windows 10 (KB5029331), which seemingly triggered this Unsupported CPU BSOD error, is not really what’s to blame for the error. It says that this is an issue with a “specific subset of processors”…

Like the SolarWinds misstep and a handful of other bone-chilling issues, Microsoft is skilled at making sure that its engineering is not the entire problem. That may be one benefit of what I call good enough engineering. The space created by certain systems and methods means that those who follow documentation can make mistakes. That’s where the blame should be placed.

Makes sense to me. Some MSI motherboard users looking at the beloved BSOD may not agree.

Stephen E Arnold, August 30, 2023

New Learning Model Claims to Reduce Bias, Improve Accuracy

August 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Promises, promises. We have seen developers try and fail to eliminate bias in machine learning models before. Now ScienceDaily reports, “New Model Reduces Bias and Enhances Trust in AI Decision-Making and Knowledge Organization.” Will this effort by University of Waterloo researchers be the first to succeed? The team worked in a field where AI bias and inaccuracy can be most devastating: healthcare. The write-up tells us:

“Hospital staff and medical professionals rely on datasets containing thousands of medical records and complex computer algorithms to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies could impact diagnostic outcomes. This inherent bias and pattern entanglement leads to misdiagnoses and inequitable healthcare outcomes for specific patient groups. Thanks to new research led by Dr. Andrew Wong, a distinguished professor emeritus of systems design engineering at Waterloo, an innovative model aims to eliminate these barriers by untangling complex patterns from data to relate them to specific underlying causes unaffected by anomalies and mislabeled instances. It can enhance trust and reliability in Explainable Artificial Intelligence (XAI.)”

Wong states his team was able to disentangle statistics in a certain set of complex medical results data, leading to the development of a new XAI model they call Pattern Discovery and Disentanglement (PDD). The post continues:

“The PDD model has revolutionized pattern discovery. Various case studies have showcased PDD, demonstrating an ability to predict patients’ medical results based on their clinical records. The PDD system can also discover new and rare patterns in datasets. This allows researchers and practitioners alike to detect mislabels or anomalies in machine learning.”

If accurate, PDD could lead to more thorough algorithms that avoid hasty conclusions. Less bias and fewer mistakes. Can this ability be extrapolated to other fields, like law enforcement, social services, and mortgage decisions? Assurances are easy.

Cynthia Murrell, August 30, 2023

AI Weird? Who Knew?

August 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Captain Obvious here. Today’s report comes from the IEEE, an organization for really normal people. Oh, you are not an electrical engineer? Then, you are not normal. Just ask an EE and inquire about normalcy.

Enough electrical engineer humor. Oh, well, one more: Which is the most sophisticated engineer? [a] Civil, [b] Mechanical, [c] Electrical, [d] Nuclear. The answer is [d] nuclear. Why? You have to be able to do math, chemistry, and fix a child’s battery-powered toy. Get it? I must admit that I did not when Dr. James Terwilliger told it to me when I worked at the Halliburton nuclear outfit. Never heard of it? Well, there you go. Just ask a chatbot to fill you in.

I read “Why Today’s Chatbots Are Weird, Argumentative, and Wrong.” The IEEE article is going to create some tension in engineering-forward organizations. Most of these outfits run, in the words of insightful leaders like the stars of the “All In” podcast, on booze, money, gambling, and confidence — a heady mixture indeed.

What does the write up say that Captain Obvious did not know? That’s a poor question. The answer is, “Not much.”

Here’s a passage which received the red marker treatment from this dinobaby:

[Generative AI services have] become way more fluent and more subtly wrong in ways that are harder to detect.

I love the “way more.” The key phrase in the extract, at least for me, is: “Harder to detect.” But why? Is it because developers are improving their generative systems a tweak and a human judgment at a time? The “detect” folks are in react mode. Does this suggest that, at least for now, the cat-and-mouse game ensures an advantage to the steadily improving generative systems? In simple terms, non-electrical engineers are going to be “subtly” fooled? It sure does.

A second example of my big Japanese chunky marker circling behavior is this snippet:

The problem is the answers do look vaguely correct. But [the chatbots] are making up papers, they’re making up citations or getting facts and dates wrong, but presenting it the same way they present actual search results. I think people can get a false sense of confidence on what is really just probability-based text.

Are you getting the sense that a person who is not really informed about a topic will read baloney and perceive it as a truffle?
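The “probability-based text” point from the quoted passage can be made concrete with a toy example. Here is a minimal sketch of the underlying principle — sampling the next word from a learned frequency distribution. Real chatbots use large neural networks rather than this bigram table, and the tiny corpus below is invented for illustration, but the point carries: the output looks fluent while carrying no guarantee of truth.

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which words follow which in a tiny
# corpus, then generate text by sampling. The model only knows which
# words tend to follow which -- it has no notion of facts.
corpus = ("the study found the results were significant "
          "the results were published in the journal").split()

followers = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    followers[a].append(b)

random.seed(0)
word, output = "the", ["the"]
for _ in range(6):
    nxt = followers.get(word)
    if not nxt:                    # dead end: no observed continuation
        break
    word = random.choice(nxt)      # more frequent followers are more likely
    output.append(word)

print(" ".join(output))
```

Every adjacent word pair in the output was seen in the corpus, so the text reads plausibly, yet the sentence as a whole may assert something the corpus never said.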

Captain Obvious is tired of this close reading game. For more AI insights, just navigate to the cited IEEE article. And be kind to electrical engineers. These individuals require respect and adulation. Make a misstep and your child’s battery powered toy will never emit incredibly annoying squeaks again.

Stephen E Arnold, August 29, 2023

Better and Modern Management

August 29, 2023

I spotted this amusing (at least to me) article: “Shares of Better.com — Whose CEO Fired 900 Workers on a Zoom Call — Slumped 95% on Their First Day of Trade.” The main idea of the story strikes me as “modern management.” The article explains that Better.com helps its customers get mortgages. The company went public. The IPO was interesting because shares cratered.


“Hmmm. I wonder if my management approach could be improved?” asks the bold leader. MidJourney has confused down pat.

Other highlights from the story struck me as reasonably important:

  • The CEO fired 900 employees via a Zoom call in 2021
  • The CEO allegedly accused 250 of those employees of false time keeping
  • The CEO underwent “leadership training”
  • The company is backed by the semi-famous SoftBank venture firm.

Several ideas passed through my mind:

  1. Softbank does have a knack for selecting companies to back
  2. Training courses may not be effective
  3. Former employees may find the management expertise of the company ineffectual.

I love the name Better. The question is, “Better at what?” Perhaps the Better management team could learn from the real superstars of leadership; for example, Google, X, and the Zuckbook?

Stephen E Arnold, August 29, 2023

Calls for AI Pause Futile at This Late Date

August 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Well, the nuclear sub has left the base. A group of technology experts recently called for a 6-month pause on AI rollouts in order to avoid the very “loss of control of our civilization” to algorithms. That might be a good idea—if it had a snowball’s chance of happening. As it stands, observes ComputerWorld‘s Rob Enderle, “Pausing AI Development Is a Foolish Idea.” We think foolish is not a sufficiently strong word. Perhaps regulation could have been established before the proverbial horse left the barn, but by now there are more than 500 AI startups according to Jason Calacanis, noted entrepreneur and promoter.


A sad sailor watches the submarine to which he was assigned leave the dock without him. Thanks, MidJourney. No messages from Mother MJ on this image.

Enderle opines as a premier pundit:

“Once a technology takes off, it’s impossible to hold back, largely because there’s no strong central authority with the power to institute a global pause — and no enforcement entity to ensure the pause directive is followed. The right approach would be to create such an authority beforehand, so there’s some way to assure the intended outcome. I tend to agree with former Microsoft CEO Bill Gates that the focus should be on assuring AI reliability, not trying to pause everything. … There simply is no global mechanism to enforce a pause in any technological advance that has already reached the market.”

We are reminded that even development on clones, which is illegal in most of the world, continues apace. The only thing bans seem to have accomplished there is to obliterate transparency around cloning projects. There is simply no way to rein in all the world’s scientists. Not yet. Enderle offers a grain of hope on artificial intelligence, however. He notes it is not too late to do for general-purpose AI what we failed to do for generative AI:

“General AI is believed to be more than a decade in the future, giving us time to devise a solution that’s likely closer to a regulatory and oversight body than a pause. In fact, what should have been proposed in that open letter was the creation of just such a body. Regardless of any pause, the need is to ensure that AI won’t be harmful, making oversight and enforcement paramount. Given that AI is being used in weapons, what countries would allow adequate third-party oversight? The answer is likely none — at least until the related threat rivals that of nuclear weapons.”

So we have that to look forward to. And clones, apparently. The write-up points to initiatives already in the works to protect against “hostile” AI. Perhaps they will even be effective.

Cynthia Murrell, August 29, 2023

The Age of the Ideator: Go Fast, Ideate!

August 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “To De-Risk AI, the Government Must Accelerate Knowledge Production.” The essay introduces a word I am not sure I have seen before; that is, “ideator.” The meaning of an ideator, I think, is a human (not a software machine) who “can have outsized impact on the world.” I think the author is referring to the wizard El Zucko (father of Facebook), the affable if mercurial Elon Musk, or the AI leaning Tim Apple. I am reasonably certain that the “outsized influence” moniker does not apply to the lip smacking Spanish football executive, Vlad Putin, or similar go-getters.


Share my information with a government agency? Are you crazy? asks the hard charging, Type A overachiever working wonders with smart software designed for autonomous weapons. Thanks, MidJourney. Not what I specified but close enough for horseshoes.

The pivotal idea is good for ideators. These individuals come up with ideas. These should be good ideas which flow from ideators of the right stripe. Solving problems requires information. Ideators like information, maybe crave it? The white hat ideators can neutralize non-white hat ideators. Therefore, white hat ideators need access to information. The non-white hat ideator won’t have a chance. (No, I won’t ask, “What happens when a white hat ideator flips, changes to a non-white hat, and uses information in ways different from the white hat types’ actions?”)

What’s interesting about the essay is that the “fix” is to go fast when it comes to making information and then give the white hat folks access. To make the system work, a new government agency is needed. (I assume that the author is thinking about a US, Canadian, or Australian, or Western European government agency.)

That agency will pay the smart software outfits to figure out “AI alignment.” (I must admit I am a bit fuzzy on how commercial enterprises with trade secrets will respond to the “alignment.”) The new government agency will have oversight authority and will publish the work of its professionals. The government will not try to slow down or impede the “alignment.”

I have simplified most of the ideas for one reason. I want to conclude this essay with a single question, “How are today’s government agencies doing with homelessness, fiscal management, health care, and regulation of high-technology monopolies?”

Alignment? Yeah.

Stephen E Arnold, August 28, 2023

Content Moderation: Modern Adulting Is Too Much Work

August 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Content moderation requires editorial policies. Editorial policies cost money. Editorial policies must be communicated. Editorial policies must be enforced by individuals trained in what information is in bounds or out of bounds. Commercial database companies had editorial policies. One knew what was “in” Compendex, Predicasts, Business Dateline, and similar commercial databases. Some of these professional publishers have worked to keep the old-school approach in place to serve their customers. Other online services dumped the editorial policies approach to online information because it was expensive and silly. I think that lax or nonexistent editorial policies are a bad idea. One can complain about how hard a professional online service was or is to use, but one knows the information placed into the database.


“No, I won’t take out the garbage. That’s a dirty job,” says the petulant child. Thanks, MidJourney, you did not flash me the appeal message this morning.

Fun fact. Business Dateline, originally created by the Courier Journal & Louisville Times, was the first online commercial database to include corrections to stories made by the service’s sources. I am not sure if that policy is still in place. I think today’s managers will have cost in mind. Extras like accuracy are going to be erased by the belief that the more information one has, the less a mistake means.

I thought about adulting and cost control when I read “Following Elon Musk’s Lead, Big Tech Is Surrendering to Disinformation.” The “real” news story reports:

Social media companies are receding from their role as watchdogs against political misinformation, abandoning their most aggressive efforts to police online falsehoods in a trend expected to profoundly affect the 2024 presidential election.

Creating, producing, and distributing electronic information works when those involved have a shared belief in accuracy, appropriateness, and the public good. Once those old-fashioned ideas are discarded, what’s the result? From my point of view, look around. What does one see in different places in the US and elsewhere? What can be believed? What is socially-acceptable behavior?

When one defines adulting in terms of cost, civil life is eroded in my opinion. Defining responsibility in terms of one’s self interest is one thing that seems to be the driving force of many decisions. I am glad I am a dinobaby. I am glad I am old. At least we tried to enforce editorial policies for ABI/INFORM, Business Dateline, the Health Reference Center, and the other electronic projects in which I was involved. Even our early Internet service ThePoint (Top 5% of the Internet) which became part of Lycos many years ago had an editorial policy.

Ah, the good old days when motivated professionals worked to provide accurate, reliable reference information. For those involved in those projects, I thank you. For those like the companies mentioned in the cited WaPo story, your adulting is indeed a childish response to an important task.

What is the fix? One approach is the Chinese government / TikTok paying Oracle to moderate TikTok content. I wonder what the punishment for doing a “bad” job is. Is this the method to make “correct” decisions? The surveillance angle is an expensive solution. What’s the alternative?

Stephen E Arnold, August 28, 2023


This Dinobaby Likes Advanced Search, Boolean Operators, and Precision. Most Do Not

August 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am not sure of the chronological age of the author of “7 Reasons to Replace Advanced Search with Filters So Users Can Easily Find What They Need.” From my point of view, the author has a mental age of someone much younger than I. The article identifies a number of reasons why “advanced search” functions are lousy. As a dinobaby, I want to be crystal clear: A user should have an interface which allows that user to locate the information required to respond in a useful way to a query.


The expert online searcher says with glee, “I love it when free online search services make finding information easy. Best of all is Amazon. It suggests so many things I absolutely need.” Hey, MidJourney, thanks for the image without making Mother MJ okay my word choice. “Whoever said, ‘Nothing worthwhile comes easy’ is pretty stupid,” shouts our sliding board slider.

Advanced search in my dinobaby mental space means Boolean operators like AND, OR, and NOT, among others. Advanced search requires other meaningful “tags” specifically designed to minimize the ambiguity of words; for example, terminal can mean transportation or terminal can mean computing device. English is notable because it has numerous words which make sense only when a context is provided. Thus, a Field Code can instruct the retrieval system to discard the computing device context and retrieve the transportation context.
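The Boolean-plus-field-code idea can be sketched in a few lines. In this minimal illustration, the documents, the `subject` field, and the matching logic are all invented for the example; real commercial online services use controlled vocabularies and full query parsers, but the disambiguation principle is the same:

```python
# Toy Boolean retrieval with a field code. The word "terminal" is
# ambiguous; restricting a term to the "subject" field strips out the
# unwanted (computing) sense, the way a Field Code does on a
# professional online service.
documents = [
    {"id": 1, "title": "Bus terminal expansion", "subject": "transportation"},
    {"id": 2, "title": "Terminal emulator shortcuts", "subject": "computing"},
    {"id": 3, "title": "Airport terminal design", "subject": "transportation"},
]

def matches(doc, term, field=None):
    """True if `term` appears in the given field (or, if no field, anywhere)."""
    fields = [doc[field]] if field else [str(v) for v in doc.values()]
    return any(term.lower() in f.lower() for f in fields)

# Query: terminal AND subject=transportation
hits = [d["id"] for d in documents
        if matches(d, "terminal") and matches(d, "transportation", field="subject")]
print(hits)  # -> [1, 3]; document 2 (computing sense) is excluded
```

Dropping the field restriction would return all three documents, which is exactly the precision loss this dinobaby complains about.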

The write up makes clear that for today’s users training wheels are important. Are these “aids” like icons, images, and bundles of results under a category dark patterns or assistance for a user? I can only imagine the push back I would receive if I were in a meeting with today’s “user experience” designers. Sorry, kids. I am a dinobaby.

I really want to work through seven reasons advanced search sucks. But I won’t. The number of people who know how to use keyword search is tiny. One number I heard when I was a consultant to a certain big search engine is less than three percent of Web search users. The good news for those who buy into the arguments in the cited article is that dinobabies will die.

Is it a lack of education? Is it laziness? Is it what most of today’s users understand?

I don’t know. I don’t care. A failure to understand how to obtain the specific information one requires is part of the long slow slide down a descent gradient. Enjoy the non-advanced search.

Stephen E Arnold, August 28, 2023

Traveling to France? On a Watch List?

August 25, 2023

The capacity for surveillance has been lurking in our devices all along, of course. Now, reports Azerbaijan’s Azernews, “French Police Can Secretly Activate Phone Cameras, Microphones, and GPS to Spy on Citizens.” The authority to remotely activate devices was part of a larger justice reform bill recently passed. Officials insist, though, this authority will not be used willy-nilly:

“A judge must approve the use of the powers, and the recently amended bill forbids use against journalists, lawyers, and other ‘sensitive professions.’ The measure is also meant to limit use to serious cases, and only for a maximum of six months. Geolocation would be limited to crimes that are punishable by at least five years in prison.”

Surely, law enforcement would never push those limits. Apparently the Orwellian comparisons are evident even to officials, since Justice Minister Éric Dupond-Moretti preemptively batted them away. Nevertheless, we learn:

“French digital rights advocacy group, La Quadrature du Net, has raised serious concerns over infringements of fundamental liberties, and has argued that the bill violates the ‘right to security, right to a private life and to private correspondence’ and ‘the right to come and go freely.’ … The legislation comes as concerns about government device surveillance are growing. There’s been a backlash against NSO Group, whose Pegasus spyware has allegedly been misused to spy on dissidents, activists, and even politicians. The French bill is more focused, but civil liberties advocates are still alarmed at the potential for abuse. The digital rights group La Quadrature du Net has pointed out the potential for abuse, noting that remote access may depend on security vulnerabilities. Police would be exploiting security holes instead of telling manufacturers how to patch those holes, La Quadrature says.”

Smartphones, laptops, vehicles, and any other connected devices are all fair game under the new law. But only if one has filed the proper paperwork, we are sure. Nevertheless, progress.

Cynthia Murrell, August 25, 2023

Software Marches On: Should Actors Be Worried?

August 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“How AI Is Bringing Film Stars Back from the Dead” is going to raise the hackles of some professionals in Hollywood. I wonder how many people alive today remember James Dean. Car enthusiasts may know about his driving skills, but not too much about his dramaturgical abilities. I must confess that I know zippo about Jimmy other than he was a driver prone to miscalculations.


An angry human actor — recycled and improved by smart software — snarls, “I didn’t go to acting school to be replaced by software. I have a craft, and it deserves respect.” MidJourney, I only had to describe what I wanted one time. Keep on improving or recursing or whatever it is you do.

The Beeb reports:

The digital cloning of Dean also represents a significant shift in what is possible. Not only will his AI avatar be able to play a flat-screen role in Back to Eden and a series of subsequent films, but also to engage with audiences in interactive platforms including augmented reality, virtual reality and gaming. The technology goes far beyond passive digital reconstruction or deepfake technology that overlays one person’s face over someone else’s body. It raises the prospect of actors – or anyone else for that matter – achieving a kind of immortality that would have been otherwise impossible, with careers that go on long after their lives have ended.

The write up does not reference the IBM study suggesting that 40 percent of workers will require reskilling. I am not sure what a reskilled actor will be able to do. I polled my team and it came up with some Hollywood possibilities:

  1. Become an AI adept with a mastery of python, Java, and C. Code software replacing studio executives with a product called DorkMBA
  2. Channel the anger into a co-ed game of baseball and discuss enthusiastically with the umpire corrective lenses
  3. Start an anger management podcast and, like a certain Stanford professor, admit the indiscretions of one’s childhood
  4. Use MidJourney and ChatGPT to write a manga for Amazon
  5. Become a street person.

I am not sure these ideas will be acceptable to those annoyed by the BBC write up. I want to point out that smart software can do some interesting things. My hunch is that software can do endless versions of classic hits with old-time stars quickly and more economically than humanoid involved professionals.

I am not Bogarting you.

Stephen E Arnold, August 25, 2023
