Microsoft Pop Ups: Take Screen Shots

August 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Microsoft Is Using Malware-Like Pop-Ups in Windows 11 to Get People to Ditch Google.” Kudos to the wordsmiths at TheVerge.com for avoiding the term “po*n storm” to describe the alleged Windows 11 pop-ups.


A person in the audience says, “What’s that pop-up doing up there?” Thanks, MJ. Another so-so piece of original art.

The write up states:

I have no idea why Microsoft thinks it’s ok to fire off these pop-ups to Windows 11 users in the first place. I wasn’t alone in thinking it was malware, with posts dating back three months showing Reddit users trying to figure out why they were seeing the pop-up.

What? Pop-ups for three months? I love “real” news when it is timely.

The article includes this statement:

Microsoft also started taking over Chrome searches in Bing recently to deliver a canned response that looks like it’s generated from Microsoft’s GPT-4-powered chatbot. The fake AI interaction produced a full Bing page to entirely take over the search result for Chrome and convince Windows users to stick with Edge and Bing.

How can this be? Everyone’s favorite software company would not use these techniques to boost Credge’s market share, would it?

My thought is that Microsoft’s browser woes began a long time ago in an operating system far, far away. As a result, Credge is lagging behind Googzilla’s browser. Unless Google shoots itself in both feet and fires a digital round into the beastie’s heart, the ad monster will keep on sucking data and squeezing out alternatives.

The write up does not seem to be aware that Google wants to control digital information flows. Microsoft will need more than pop-ups to prevent the Chrome browser from becoming the primary access mechanism to the World Wide Web. Despite Microsoft’s market power, users don’t love the Microsoft Credge thing. Hey, Microsoft, why not pay people to use Credge?

Stephen E Arnold, August 31, 2023

Slackers, Rejoice: Google Has a Great Idea Just for You

August 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I want to keep this short because the idea of not doing work to do work offends me deeply. The big thinkers who want people to relax, take time, smell the roses, and avoid those Type A tendencies annoy me just as much. I like being a Type A. In fact, if I were not a Type A, I would not “be,” to use some fancy Descartes logic.


Is anyone looking down the Information Superhighway to see what speeding AI vehicle is approaching? Of course not; everyone is on break or playing Foosball. Thanks, Mother MidJourney, you did not send me to the arbitration committee for my image request.

“Google Meet’s New AI Will Be Able to Go to Meetings for You” reports:

…you might never need to pay attention to another meeting again — or even show up at all.

Let’s think about this new Google service. If AI continues to advance at a reasonable pace, an AI which can attend a meeting for a person can at some point replace the person. Does that sound reasonable? What a GenZ thrill. Money for no work. The advice to take time for kicking back and living a stress-free life is just fantastic.

In today’s business climate, I am not sure that delegating knowledge work to smart software is a good idea. I like to use the phrase “gradient descent.” My connotation of this jargon means a cushioned roller coaster to one or more of the Seven Deadly Sins. I much prefer intentional use of software. I still like most of the old-fashioned methods of learning and completing projects. I am happy to encounter a barrier like my search for the ultimate owners of the domain rrrrrrrrrrr.com or the methods for enabling online fraud practiced by some Internet service providers. (Sorry, I won’t name these fine outfits in this free blog post. If you are attending my keynote at the Massachusetts and New York Association of Crime Analysts’ conference in early October, say, “Hello.” In that setting, I will identify some of these outstanding companies and share some thoughts about how these folks trample laws and regulations. Sound like fun?)

Google’s objective is to become the source for smart software. In that position, the company will have access to knobs and levers controlling information access, shaping, and distribution. The end goal is a quarterly financial report and the diminution of competition from annoying digital tsetse flies, in my opinion.

Wouldn’t it be helpful if the “real news” looked down the Information Highway? No, of course not. For a Type A, the new “Duet” service does not “do it” for me.

Stephen E Arnold, August 31, 2023

A Wonderful Romp through a Tech Graveyard

August 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I heard about a Web site called killedby.tech. I took a look and what a walk down Memory Lane. You know Memory Lane. It runs close to the Information Superhighway. Are products smashed on the Info Highway? Some, not all.

The entry for iLoo, an innovation from the Softies, notes the device was born and vaporized in 2003. Killedby describes the breakthrough this way:

iLoo was a smart portable toilet integrating the complete equipment to surf the Internet from inside and outside the cabinet.

I wonder how many van lifers would buy this product. Imagine the TikTok videos. That would keep the Oracle TikTok review team busy and probably provide some amusement for others as well.

And I had forgotten about Google’s weird response to failing to convince the US government to use the Googley search system for FirstGov.gov. Ah, forward truncation — something Google would never ever do. The product/service was Google Public Service Search. Here’s what the tombstone says:

Google Public Service Search provided governmental, non-profit and academic organizational search results without ads.

That idea bit the dust in 2006, which is the year I have pegged as the point at which Google went all-in on its cheerful, transparent business model. No ads! Imagine that!

I had forgotten about Google’s real time search play. Killedby says:

Google Real-Time Search provided live search results from Twitter, Facebook, and news websites.

I never learned why this was sent to the big digital dumpster behind the Google building on Shoreline. Rumor was that some news outfits and some social media Web sites were not impressed. Google — ever the trusted ad provider — said hasta la vista to a social information metasearch.

Great site. I did not see Google Transformic, however. Killedby is quite good.

Stephen E Arnold, August 31, 2023

Google: Trapped in Its Own Walled Garden with Lots of Science Club Alums

August 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “MapReduce, TensorFlow, Vertex: Google’s Bet to Avoid Repeating History in AI.” I found the idea that Google gets in its own way a retelling of how high school science club management produces interesting consequences.


A young technology wizard finds himself in a Hall of Mirrors at the carnival. He is not sure what is real or in which direction to go. The world of the House of Mirrors is disorienting. The young luminary wants to return to the walled garden where life is more comfortable. Thanks, MidJourney. Four tries and I get this tired illustration. Gradient descent time?

The write up asserts:

Google is in the middle of trying to avoid repeating history when releasing its industry-altering technology.

I disagree. The methods defining Google produce, with remarkable consistency, a lack of informed control. The idea is that organizations have a culture. That culture evolves over time, but it remains anchored in its past. Thus, as the organization appears to move forward in time, that organization behaves in a predictable way; for example, Google has an approach to management which guarantees friction. Examples range from the staff protests to the lateral arabesque used to move Dr. Jeff Dean out of the way of the DeepMind contingent.

The write up takes a different view; for example:

Run by engineers, the [Google MapReduce] team essentially did not foresee the coming wave of open-source technology to power the modern Web and the companies that would come to commercialize it.

Google lacks the ability to perceive its opportunities. The company is fenced in by its dependence on online advertising. Thus, innovations are tough for the Googlers to put into perspective. One reason is the high school science club ethos of the outfit; the other is that the outside world is as foreign to many Googlers as the world beyond the bowl is to a goldfish. The view is distorted, surreal, and unfamiliar.

How can a company innovate and make a commercially viable product with this mindset inside its walled garden? It cannot. Advertising at Google is a me-too product; prior to its IPO, Google settled a dispute with Yahoo over the “inspiration” for pay-to-play search. The cost of this “inspiration” was about $1 billion.

In a quarter century, Google remains what one Microsoftie called “a one-trick pony.” Will the Google Cloud emerge as a true innovation? Nope. There are lots of clouds. Google is the Enterprise Rent-a-Car to the Hertz and Avis of cloud rental firms. Google’s innovation track record is closer to that of a high school science club which has been able to win the state science club contest year after year. Other innovators win the National Science Club Award (once called the Westinghouse Award). The context-free innovations are useful to others who have more agility and market instinct.

My view is that Google has become predictable, lurching from technical paper to legal battle like a sine wave in a Physics 101 class; that is, a continuous wave with a smooth periodic function.

Don’t get me wrong. Google is an important company. What is often overlooked is the cultural wall that keeps the 100,000 smartest people in the world locked down in the garden. Innovation is constrained, and the excitement exists off the virtual campus. Why do so many Xooglers innovate and create interesting things once freed from the walled garden? Culture has strengths and weaknesses. Google’s muffing the bunny, as the article points out, is one defining characteristic of a company which longs for high school science club meetings and competitions with those like themselves.

Tony Bennett won’t be singing in the main cafeteria any longer, but the Googlers don’t care. He was an outsider, interesting but not in the science club. If the thought process doesn’t fit, you must quit.

Stephen E Arnold, August 30, 2023

Microsoft and Good Enough Engineering: The MSI BSOD Triviality

August 30, 2023

My lineup of computers does not have a motherboard from MSI. Call me “Lucky,” I guess. Some MSI product owners were not so fortunate. “Microsoft Puts Little Blame on Its Windows Update after Unsupported Processor BSOD Bug” is a fun read for those who are keeping notes about Microsoft’s management methods. The short essay romps through a handful of Microsoft’s recent quality misadventures.


“Which of you broke mom’s new vase?” asks the sister. The boys look surprised. The vase has nothing to say about the problem. Thanks, MidJourney, no adjudication required for this image.

I noted this passage in the NeoWin.net article:

It has been a pretty eventful week for Microsoft and Intel in terms of major news and rumors. First up, we had the “Downfall” GDS vulnerability which affects almost all of Intel’s slightly older CPUs. This was followed by a leaked Intel document which suggests upcoming Wi-Fi 7 may only be limited to Windows 11, Windows 12, and newer.

The most helpful statement in the article, in my opinion, was this one:

Interestingly, the company says that its latest non-security preview updates, ie, Windows 11 (KB5029351) and Windows 10 (KB5029331), which seemingly triggered this Unsupported CPU BSOD error, is not really what’s to blame for the error. It says that this is an issue with a “specific subset of processors”…

As with the SolarWinds misstep and a handful of other bone-chilling issues, Microsoft is skilled at making sure that its engineering is not the entire problem. That may be one benefit of what I call good enough engineering. The space created by certain systems and methods means that those who follow the documentation can make mistakes. That’s where the blame should be placed.

Makes sense to me. Some MSI motherboard users looking at the beloved BSOD may not agree.

Stephen E Arnold, August 30, 2023

New Learning Model Claims to Reduce Bias, Improve Accuracy

August 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Promises, promises. We have seen developers try and fail to eliminate bias in machine learning models before. Now ScienceDaily reports, “New Model Reduces Bias and Enhances Trust in AI Decision-Making and Knowledge Organization.” Will this effort by University of Waterloo researchers be the first to succeed? The team worked in a field where AI bias and inaccuracy can be most devastating: healthcare. The write-up tells us:

“Hospital staff and medical professionals rely on datasets containing thousands of medical records and complex computer algorithms to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies could impact diagnostic outcomes. This inherent bias and pattern entanglement leads to misdiagnoses and inequitable healthcare outcomes for specific patient groups. Thanks to new research led by Dr. Andrew Wong, a distinguished professor emeritus of systems design engineering at Waterloo, an innovative model aims to eliminate these barriers by untangling complex patterns from data to relate them to specific underlying causes unaffected by anomalies and mislabeled instances. It can enhance trust and reliability in Explainable Artificial Intelligence (XAI.)”

Wong states his team was able to disentangle statistics in a certain set of complex medical results data, leading to the development of a new XAI model they call Pattern Discovery and Disentanglement (PDD). The post continues:

“The PDD model has revolutionized pattern discovery. Various case studies have showcased PDD, demonstrating an ability to predict patients’ medical results based on their clinical records. The PDD system can also discover new and rare patterns in datasets. This allows researchers and practitioners alike to detect mislabels or anomalies in machine learning.”

If accurate, PDD could lead to more thorough algorithms that avoid hasty conclusions. Less bias and fewer mistakes. Can this ability be extrapolated to other fields, like law enforcement, social services, and mortgage decisions? Assurances are easy.
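The quoted passages stay at press-release altitude, so here is a deliberately small sketch of the general kind of check being described: flag records whose recorded labels a model finds improbable. This is an illustration only, built on a generic scikit-learn heuristic with an invented dataset and an arbitrary threshold; it is not the Waterloo team’s PDD algorithm.

```python
# Illustrative sketch only. This is NOT the Waterloo PDD method; it shows a
# generic way to surface possibly mislabeled or anomalous records, which is
# the kind of capability the quoted passage describes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

# Hypothetical stand-in for "thousands of medical records"
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Out-of-fold probabilities so no record is scored by a model that saw it
probs = cross_val_predict(
    RandomForestClassifier(random_state=0), X, y, cv=5, method="predict_proba"
)

# If the model gives the recorded label little probability, flag the record
# for human review: a possible mislabel or a rare symptomatic pattern.
label_prob = probs[np.arange(len(y)), y]
suspects = np.where(label_prob < 0.2)[0]
print(f"{len(suspects)} of {len(y)} records flagged for review")
```

A confidence cutoff is a long way from “less bias and fewer mistakes,” of course, which is why assurances are easy and verification is hard.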

Cynthia Murrell, August 30, 2023

AI Weird? Who Knew?

August 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Captain Obvious here. Today’s report comes from the IEEE, an organization for really normal people. Oh, you are not an electrical engineer? Then, you are not normal. Just ask an EE and inquire about normalcy.

Enough electrical engineer humor. Oh, well, one more: Which is the most sophisticated engineer? [a] Civil, [b] Mechanical, [c] Electrical, [d] Nuclear. The answer is [d] nuclear. Why? You have to be able to do math, chemistry, and fix a child’s battery-powered toy. Get it? I must admit that I did not when Dr. James Terwilliger told it to me when I worked at the Halliburton nuclear outfit. Never heard of it? Well, there you go. Just ask a chatbot to fill you in.

I read “Why Today’s Chatbots Are Weird, Argumentative, and Wrong.” The IEEE article is going to create some tension in engineering-forward organizations. Most of these outfits run, in the words of insightful leaders like the stars of the “All In” podcast, on booze, money, gambling, and confidence — a heady mixture indeed.

What does the write up say that Captain Obvious did not know? That’s a poor question. The answer is, “Not much.”

Here’s a passage which received the red marker treatment from this dinobaby:

[Generative AI services have] become way more fluent and more subtly wrong in ways that are harder to detect.

I love the “way more.” The key phrase in the extract, at least for me, is: “Harder to detect.” But why? Is it because developers are improving their generative systems a tweak and a human judgment at a time? The “detect” folks are in react mode. Does this suggest that, at least for now, the cat-and-mouse game ensures an advantage to the steadily improving generative systems, and that, in simple terms, non-electrical engineers are going to be “subtly” fooled? It sure does.

A second example of my big Japanese chunky marker circling behavior is this snippet:

The problem is the answers do look vaguely correct. But [the chatbots] are making up papers, they’re making up citations or getting facts and dates wrong, but presenting it the same way they present actual search results. I think people can get a false sense of confidence on what is really just probability-based text.

Are you getting the sense that a person who is not really informed about a topic will read baloney and perceive it as a truffle?
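For the non-electrical engineers, here is a toy, entirely invented sketch of what “probability-based text” means: the generator samples the next token from a probability distribution, so the output reads smoothly whether or not the “citation” it assembles exists. The vocabulary and probabilities below are assumptions for illustration, not anything from a real chatbot.

```python
# Toy illustration of probability-based text. The vocabulary and the
# probabilities are invented; no real chatbot or model is being reproduced.
import numpy as np

rng = np.random.default_rng(42)
vocab = ["Smith", "Jones", "(2019)", "(2021)", "et", "al.", "reported", "found"]
# Hypothetical next-token probabilities after a prompt such as "As cited by"
probs = np.array([0.25, 0.20, 0.15, 0.10, 0.10, 0.08, 0.07, 0.05])

# Sample a few tokens. Nothing in this step checks whether the resulting
# "citation" corresponds to a paper that actually exists.
tokens = rng.choice(vocab, size=5, p=probs)
print("As cited by", " ".join(tokens))
```

Plausible surface, no grounding: that is the “false sense of confidence” the extract warns about.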

Captain Obvious is tired of this close reading game. For more AI insights, just navigate to the cited IEEE article. And be kind to electrical engineers. These individuals require respect and adulation. Make a misstep, and your child’s battery-powered toy will never emit incredibly annoying squeaks again.

Stephen E Arnold, August 29, 2023

Better and Modern Management

August 29, 2023

I spotted this amusing (at least to me) article: “Shares of Better.com — Whose CEO Fired 900 Workers on a Zoom Call — Slumped 95% on Their First Day of Trade.” The main idea of the story strikes me as “modern management.” The article explains that Better.com helps its customers get mortgages. The company went public. The IPO was interesting because shares cratered.


“Hmmm. I wonder if my management approach could be improved?” asks the bold leader. MidJourney has confused down pat.

Other highlights from the story struck me as reasonably important:

  • The CEO fired 900 employees via a Zoom call in 2021
  • The CEO allegedly accused 250 of those employees of false timekeeping
  • The CEO underwent “leadership training”
  • The company is one of the semi-famous Softbank venture firm’s investments.

Several ideas passed through my mind:

  1. Softbank does have a knack for selecting companies to back
  2. Training courses may not be effective
  3. Former employees may find the management expertise of the company ineffectual.

I love the name Better. The question is, “Better at what?” Perhaps the Better management team could learn from the real superstars of leadership; for example, Google, X, and the Zuckbook?

Stephen E Arnold, August 29, 2023

Calls for AI Pause Futile at This Late Date

August 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Well, the nuclear sub has left the base. A group of technology experts recently called for a 6-month pause on AI rollouts in order to avoid the very “loss of control of our civilization” to algorithms. That might be a good idea—if it had a snowball’s chance of happening. As it stands, observes ComputerWorld‘s Rob Enderle, “Pausing AI Development Is a Foolish Idea.” We think foolish is not a sufficiently strong word. Perhaps regulation could have been established before the proverbial horse left the barn, but by now there are more than 500 AI startups according to Jason Calacanis, noted entrepreneur and promoter.


A sad sailor watches the submarine to which he was assigned leave the dock without him. Thanks, MidJourney. No messages from Mother MJ on this image.

Enderle opines as a premier pundit:

“Once a technology takes off, it’s impossible to hold back, largely because there’s no strong central authority with the power to institute a global pause — and no enforcement entity to ensure the pause directive is followed. The right approach would be to create such an authority beforehand, so there’s some way to assure the intended outcome. I tend to agree with former Microsoft CEO Bill Gates that the focus should be on assuring AI reliability, not trying to pause everything. … There simply is no global mechanism to enforce a pause in any technological advance that has already reached the market.”

We are reminded that even the development of clones, which is illegal in most of the world, continues apace. The only thing bans seem to have accomplished there is to obliterate transparency around cloning projects. There is simply no way to rein in all the world’s scientists. Not yet. Enderle offers a grain of hope on artificial intelligence, however. He notes it is not too late to do for general-purpose AI what we failed to do for generative AI:

“General AI is believed to be more than a decade in the future, giving us time to devise a solution that’s likely closer to a regulatory and oversight body than a pause. In fact, what should have been proposed in that open letter was the creation of just such a body. Regardless of any pause, the need is to ensure that AI won’t be harmful, making oversight and enforcement paramount. Given that AI is being used in weapons, what countries would allow adequate third-party oversight? The answer is likely none — at least until the related threat rivals that of nuclear weapons.”

So we have that to look forward to. And clones, apparently. The write-up points to initiatives already in the works to protect against “hostile” AI. Perhaps they will even be effective.

Cynthia Murrell, August 29, 2023

The Age of the Ideator: Go Fast, Ideate!

August 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “To De-Risk AI, the Government Must Accelerate Knowledge Production.” The essay introduces a word I am not sure I have seen before; that is, “ideator.” The meaning of an ideator, I think, is a human (not a software machine) able to produce ideas, one of those people “who can have outsized impact on the world.” I think the author is referring to the wizard El Zucko (father of Facebook), the affable if mercurial Elon Musk, or the AI-leaning Tim Apple. I am reasonably certain that the “outsized influence” moniker does not apply to the lip-smacking Spanish football executive, Vlad Putin, or similar go-getters.


“Share my information with a government agency? Are you crazy?” asks the hard-charging, Type A overachiever working wonders with smart software designed for autonomous weapons. Thanks, MidJourney. Not what I specified but close enough for horseshoes.

The pivotal idea is good for ideators. These individuals come up with ideas. These should be good ideas which flow from ideators of the right stripe. Solving problems requires information. Ideators like information, maybe crave it? The white hat ideators can neutralize non-white hat ideators. Therefore, white hat ideators need access to information. The non-white hat ideator won’t have a chance. (No, I won’t ask, “What happens when a white hat ideator flips, changes to a non-white hat, and uses information in ways different from the white hat types’ actions?”)

What’s interesting about the essay is that the “fix” is to go fast when it comes to making information and then give the white hat folks access. To make the system work, a new government agency is needed. (I assume that the author is thinking about a US, Canadian, Australian, or Western European government agency.)

That agency will pay the smart software outfits to figure out “AI alignment.” (I must admit I am a bit fuzzy on how commercial enterprises with trade secrets will respond to the “alignment.”) The new government agency will have oversight authority and will publish the work of its professionals. The government will not try to slow down or impede the “alignment.”

I have simplified most of the ideas for one reason. I want to conclude this essay with a single question, “How are today’s government agencies doing with homelessness, fiscal management, health care, and regulation of high-technology monopolies?”

Alignment? Yeah.

Stephen E Arnold, August 28, 2023
