Microsoft Claims to Bring Human Reasoning to AI with New Algorithm
September 20, 2023
Has Microsoft found the key to melding the strengths of AI reasoning and human cognition? Decrypt declares, “Microsoft Infuses AI with Human-Like Reasoning Via an ‘Algorithm of Thoughts’.” Not only does the Algorithm of Thoughts (AoT for short) come to better conclusions, it also saves energy by streamlining the process, Microsoft promises. Writer Jose Antonio Lanz explains:
“The AoT method addresses the limitations of current in-context learning techniques like the ‘Chain-of-Thought’ (CoT) approach. CoT sometimes provides incorrect intermediate steps, whereas AoT guides the model using algorithmic examples for more reliable results. AoT draws inspiration from both humans and machines to improve the performance of a generative AI model. While humans excel in intuitive cognition, algorithms are known for their organized, exhaustive exploration. The research paper says that the Algorithm of Thoughts seeks to ‘fuse these dual facets to augment reasoning capabilities within LLMs.’ Microsoft says this hybrid technique enables the model to overcome human working memory limitations, allowing more comprehensive analysis of ideas. Unlike CoT’s linear reasoning or the ‘Tree of Thoughts’ (ToT) technique, AoT permits flexible contemplation of different options for sub-problems, maintaining efficacy with minimal prompting. It also rivals external tree-search tools, efficiently balancing costs and computations. Overall, AoT represents a shift from supervised learning to integrating the search process itself. With refinements to prompt engineering, researchers believe this approach can enable models to solve complex real-world problems efficiently while also reducing their carbon impact.”
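The mechanics described above boil down to in-context prompting: instead of showing the model one linear chain of reasoning (CoT), the prompt shows it a worked search trace that explores options, evaluates them, and backtracks from dead ends. The sketch below is illustrative only, not Microsoft's implementation; the function name and the Game of 24 example trace are hypothetical.

```python
# Hypothetical sketch of AoT-style prompting: one worked search trace
# serves as the in-context example, which is the "minimal prompting"
# the research paper describes (versus external tree-search tooling).

def build_aot_prompt(question: str, search_trace: str) -> str:
    """Embed a single algorithmic search trace as the in-context
    example, then pose the new problem for the model to search."""
    return (
        "Solve the problem by exploring options, evaluating each, and "
        "backtracking from dead ends, as in the example.\n\n"
        f"Example:\n{search_trace}\n\n"
        f"Problem: {question}\nSearch:"
    )

# A made-up trace for the Game of 24 (a task common in ToT/AoT work):
TRACE = (
    "Numbers: 1 2 3 4. Target: 24.\n"
    "Try 1+2=3, left {3,3,4}: 3*3=9, 9*4=36, dead end, backtrack.\n"
    "Try 1*2=2, left {2,3,4}: 2*3=6, 6*4=24. Found it.\n"
    "Answer: ((1*2)*3)*4 = 24"
)

prompt = build_aot_prompt("Use 2 5 6 7 to make 24.", TRACE)
```

The `prompt` string would then be sent to the LLM as-is; the claimed efficiency win is that the model performs the explore/backtrack search in one generation rather than via many separate tree-search calls.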
Wowza! Lanz expects Microsoft to incorporate AoT into its GPT-4 and other advanced AI systems. (Microsoft has partnered with OpenAI and invested billions into ChatGPT; it has an exclusive license to integrate ChatGPT into its products.) Does this development bring AI a little closer to humanity? What is next?
Cynthia Murrell, September 20, 2023
Microsoft: Good Enough Just Is Not
September 18, 2023
Was it the Russian hackers? What about the special Chinese department of bad actors? Was it independent criminals eager to impose ransomware on hapless business customers?
No. No. And no.
The manager points his finger at the intern working the graveyard shift and says, “You did this. You are probably worse than those 1,000 Russian hackers orchestrated by the FSB to attack our beloved software. You are a loser.” The intern is embarrassed. Thanks, Mom MJ. You have the hands almost correct… after nine months or so. Gradient descent is your middle name.
“Microsoft Admits Slim Staff and Broken Automation Contributed to Azure Outage” presents an interesting interpretation of another Azure misstep. The report asserts:
Microsoft’s preliminary analysis of an incident that took out its Australia East cloud region last week – and which appears also to have caused trouble for Oracle – attributes the incident in part to insufficient staff numbers on site, slowing recovery efforts.
But not really. The report adds:
The software colossus has blamed the incident on “a utility power sag [that] tripped a subset of the cooling units offline in one datacenter, within one of the Availability Zones.”
Ah, ha. Is the finger of blame like a heat-seeking missile? By golly, it will find something like a hair dryer, fireworks at a wedding where such events are customary, or a passenger aircraft. A great high-tech manager will say, “Oops. Not our fault.”
The Register’s write up points out:
But the document [an official explanation of the misstep] also notes that Microsoft had just three of its own people on site on the night of the outage, and admits that was too few.
Yeah. Work from home? Vacay time? Managerial efficiency planning? Whatever.
My view of this unhappy event is:
- Poor managers making bad decisions
- A drive for efficiency instead of a drive toward excellence
- A Microsoft Bob moment
More exciting Azure events in the future? Probably. More finger pointing? It is a management method, is it not?
Stephen E Arnold, September 18, 2023
Surprised? Microsoft Drags Feet on Azure Security Flaw
September 5, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Microsoft has addressed a serious security flaw in Azure, but only after being called out by the cybersecurity firm that found the issue. It only took several months. Oh, and according to that firm, the “fix” only applies to new applications despite Microsoft’s assurances to the contrary. “Microsoft Fixes Flaw After Being Called Irresponsible by Tenable CEO,” Bleeping Computer reports. Writer Sergiu Gatlan describes the problem Tenable found within the Power Platform Custom Connectors feature:
“Although customer interaction with custom connectors usually happens via authenticated APIs, the API endpoints facilitated requests to the Azure Function without enforcing authentication. This created an opportunity for attackers to exploit unsecured Azure Function hosts and intercept OAuth client IDs and secrets. ‘It should be noted that this is not exclusively an issue of information disclosure, as being able to access and interact with the unsecured Function hosts, and trigger behavior defined by custom connector code, could have further impact,’ says cybersecurity firm Tenable which discovered the flaw and reported it on March 30th. ‘However, because of the nature of the service, the impact would vary for each individual connector, and would be difficult to quantify without exhaustive testing.’ ‘To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank. They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft,’ Tenable CEO Amit Yoran added.”
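The flaw Tenable describes is a classic pattern: an API endpoint that forwards requests to a backend function without itself enforcing authentication. The toy sketch below illustrates that class of bug and its fix; the names, token, and secrets are all hypothetical, and this is not Tenable's proof of concept or Azure's actual code.

```python
# Illustrative only: an endpoint that fronts a backend function (a
# stand-in for the Azure Function a custom connector invokes). The
# flawed version forwards every request; the fixed version verifies
# the caller first. All identifiers here are made up.

SECRETS = {"oauth_client_id": "example-id", "oauth_secret": "example-secret"}

def run_custom_connector(request: dict) -> dict:
    # Stand-in for the backend function host's behavior, which in the
    # reported flaw could expose OAuth client IDs and secrets.
    return {"status": 200, "body": SECRETS}

def flawed_endpoint(request: dict) -> dict:
    # Bug: no authentication check -- any anonymous caller can trigger
    # connector behavior and read what the backend returns.
    return run_custom_connector(request)

def fixed_endpoint(request: dict) -> dict:
    # Fix: the fronting endpoint must authenticate before forwarding.
    if request.get("auth_token") != "expected-token":
        return {"status": 401, "body": "unauthorized"}
    return run_custom_connector(request)

assert flawed_endpoint({})["status"] == 200   # anonymous caller reaches secrets
assert fixed_endpoint({})["status"] == 401    # fixed path refuses
```

As Tenable notes, the real-world impact varies per connector, but the structural point is the same: authentication on the front door means nothing if the back door forwards unauthenticated traffic.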
Yes, that would seem to be worth a sense of urgency. But even after the eventual fix, this bank and any other organizations already affected were still vulnerable, according to Yoran. As far as he can tell, they weren’t even notified of the problem so they could mitigate their risk. If accurate, can Microsoft be trusted to keep its users secure going forward? We may have to wait for another crop of interns to arrive in Redmond to handle the work “real” engineers do not want to do.
Cynthia Murrell, September 5, 2023
Planning Ahead: Microsoft User Agreement Updates To Include New AI Stipulations
September 4, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Microsoft is eager to capitalize on its AI projects, but first it must make sure users are legally prohibited from poking around behind the scenes. For good measure, it will also ensure users take the blame if they misuse its AI tools. “Microsoft Limits Use of AI Services in Upcoming Services Agreement Update,” reports Ghacks.net. Writer Martin Brinkmann notes these services include but are not limited to Bing Chat, Windows Copilot, Microsoft Security Copilot, Azure AI platform, and Teams Premium. We learn:
“Microsoft lists five rules regarding AI Services in the section. The rules prohibit certain activity, explain the use of user content and define responsibilities. The first three rules limit or prohibit certain activity. Users of Microsoft AI Services may not attempt to reverse engineer the services to explore components or rulesets. Microsoft prohibits furthermore that users extract data from AI services and the use of data from Microsoft’s AI Services to train other AI services. … The remaining two rules handle the use of user content and responsibility for third-party claims. Microsoft notes in the fourth entry that it will process and store user input and the output of its AI service to monitor and/or prevent ‘abusive or harmful uses or outputs.’ Users of AI Services are also solely responsible regarding third-party claims, for instance regarding copyright claims.”
Another, non-AI related change is that storage for one’s Outlook.com attachments will soon affect OneDrive storage quotas. That could be an unpleasant surprise for many when changes take effect on September 30. Curious readers can see a summary of the changes here, on Microsoft’s website.
Cynthia Murrell, September 4, 2023
Microsoft Pop Ups: Take Screen Shots
August 31, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Microsoft Is Using Malware-Like Pop-Ups in Windows 11 to Get People to Ditch Google.” Kudos to the wordsmiths at TheVerge.com for avoiding the term “po*n storm” to describe the alleged Windows 11 pop-ups.
A person in the audience says, “What’s that pop-up doing up there?” Thanks, MJ. Another so-so piece of original art.
The write up states:
I have no idea why Microsoft thinks it’s ok to fire off these pop-ups to Windows 11 users in the first place. I wasn’t alone in thinking it was malware, with posts dating back three months showing Reddit users trying to figure out why they were seeing the pop-up.
Pop-ups for three months? I love “real” news when it is timely.
The article includes this statement:
Microsoft also started taking over Chrome searches in Bing recently to deliver a canned response that looks like it’s generated from Microsoft’s GPT-4-powered chatbot. The fake AI interaction produced a full Bing page to entirely take over the search result for Chrome and convince Windows users to stick with Edge and Bing.
How can this be? Everyone’s favorite software company would not use these techniques to boost Credge’s market share, would it?
My thought is that Microsoft’s browser woes began a long time ago in an operating system far, far away. As a result, Credge is lagging behind Googzilla’s browser. Unless Google shoots itself in both feet and fires a digital round into the beastie’s heart, the ad monster will keep on sucking data and squeezing out alternatives.
The write up does not seem to be aware that Google wants to control digital information flows. Microsoft will need more than pop-ups to prevent the Chrome browser from becoming the primary access mechanism to the World Wide Web. Despite Microsoft’s market power, users don’t love the Microsoft Credge thing. Hey, Microsoft, why not pay people to use Credge?
Stephen E Arnold, August 31, 2023
Microsoft and Good Enough Engineering: The MSI BSOD Triviality
August 30, 2023
My lineup of computers does not have a motherboard from MSI. Call me “Lucky,” I guess. Some MSI product owners were not so lucky. “Microsoft Puts Little Blame on Its Windows Update after Unsupported Processor BSOD Bug” is a fun read for those who are keeping notes about Microsoft’s management methods. The short essay romps through a handful of Microsoft’s recent quality misadventures.
“Which of you broke mom’s new vase?” asks the sister. The boys look surprised. The vase has nothing to say about the problem. Thanks, MidJourney, no adjudication required for this image.
I noted this passage in the NeoWin.net article:
It has been a pretty eventful week for Microsoft and Intel in terms of major news and rumors. First up, we had the “Downfall” GDS vulnerability which affects almost all of Intel’s slightly older CPUs. This was followed by a leaked Intel document which suggests upcoming Wi-Fi 7 may only be limited to Windows 11, Windows 12, and newer.
The most helpful statement in the article in my opinion was this statement:
Interestingly, the company says that its latest non-security preview updates, ie, Windows 11 (KB5029351) and Windows 10 (KB5029331), which seemingly triggered this Unsupported CPU BSOD error, is not really what’s to blame for the error. It says that this is an issue with a “specific subset of processors”…
As with the SolarWinds misstep and a handful of other bone-chilling issues, Microsoft is skilled at making sure that its engineering is not the entire problem. That may be one benefit of what I call good enough engineering. The space created by certain systems and methods means that those who follow documentation can make mistakes. That’s where the blame should be placed.
Makes sense to me. Some MSI motherboard users looking at the beloved BSOD may not agree.
Stephen E Arnold, August 30, 2023
Microsoft Wants to Help Improve Security: What about Its Engineering of Security?
August 24, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Microsoft is an Onion subject when it comes to security. Black hat hackers easily crack any new PC code as soon as it is released. Generative AI brings a slew of new challenges from bad actors, but Microsoft has taken preventive measures to protect its new generative AI tools. Wired details how Microsoft has invested in AI security for years in “Microsoft’s AI Red Team Has Already Made the Case for Itself.”
While generative AI (aka chatbots or AI assistants) is new to consumers, tech professionals have been developing it for years. As the professionals have experimented with the best ways to use the technology, they have also tested the best ways to secure it.
Microsoft shared that since 2018 it has had a team learning how to attack its AI platforms to discover weaknesses. Known as Microsoft’s AI red team, the group consists of an interdisciplinary mix of social engineers, cybersecurity engineers, and machine learning experts. The red team shares its findings with the rest of Microsoft and with the wider tech industry. The team learned that AI security differs conceptually from typical digital defense, so AI security experts need to alter their approach to the work.
“ ‘When we started, the question was, ‘What are you fundamentally going to do that’s different? Why do we need an AI red team?’ says Ram Shankar Siva Kumar, the founder of Microsoft’s AI red team. ‘But if you look at AI red teaming as only traditional red teaming, and if you take only the security mindset, that may not be sufficient. We now have to recognize the responsible AI aspect, which is accountability of AI system failures—so generating offensive content, generating ungrounded content. That is the holy grail of AI red teaming. Not just looking at failures of security but also responsible AI failures.’”
Kumar said it took time to make that distinction and that the red team would have a dual mission. The red team’s early work focused on designing traditional security tools. As time passed, the AI red team expanded its work to incorporate machine learning flaws and failures.
The AI red team also concentrates on anticipating where attacks could emerge and developing solutions to counter them. Kumar explains that while the AI red team is part of Microsoft, they work to defend the entire industry.
Whitney Grace, August 24, 2023
Microsoft and Russia: A Convenient Excuse?
August 14, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
In the SolarWinds vortex, the explanation of 1,000 Russian hackers illuminated a security lapse with the heat of a burning EV with lithium batteries. Now Russian hackers have again created a problem. Are these Russians cut from the same cloth as the folks who have turned a special operation into a noir Laurel & Hardy comedy routine?
The write up reports:
[Bad actors targeted] users in Microsoft Teams chatrooms, pretending to be from technical support. In a blog post [August 2, 2023], Microsoft researchers called the campaign a “highly targeted social engineering attack” by a Russia-based hacking team dubbed Midnight Blizzard. The hacking group, which was previously tracked as Nobelium, has been attributed by the U.S. and UK governments as part of the Foreign Intelligence Service of the Russian Federation.
Isn’t this the Russia whose planners stalled a column of tanks in its alleged lightning strike on the capital of Ukraine? I think this is the country now creating problems for Microsoft. Imagine that.
The write up continues:
For now, the fake domains and accounts have been neutralized, the researchers said. “Microsoft has mitigated the actor from using the domains and continues to investigate this activity and work to remediate the impact of the attack,” Microsoft said. The company also put forth a list of recommended precautions to reduce the risk of future attacks, including educating users about “social engineering” attacks.
Let me get this straight. Microsoft deployed software with issues. Those issues were fixed after the Russians attacked. The fix, if I understand the statement, is for customers/users to take “precautions” which include teaching obviously stupid customers/users how to be smart. I am probably off base, but it seems to me that Microsoft deployed something that was exploitable. Then after the problem became obvious, Microsoft engineered an alleged “repair.” Now Microsoft wants others to up their game.
Several observations:
- Why not cut and paste the statements from Microsoft’s response to the SolarWinds missteps? Why write the same old stuff and recycle the tiresome assertion about Russia? ChatGPT could probably help out Microsoft’s PR team.
- The bad actors target Microsoft because it is a big, overblown collection of systems and products with security that whips some people into a frenzy of excitement.
- Customers and users are not going to change their behaviors even with a new training program. The system must be engineered to work in the environment of the real-life users.
Net net: The security problem can be identified when Microsofties look in a mirror. Perhaps Microsoft should train its engineers to deliver security systems and products?
Stephen E Arnold, August 14, 2023
AI Analyzed by a Human from Microsoft
July 14, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“Artificial Intelligence Doesn’t Have Capability to Take Over, Microsoft Boss Says” provides some words of reassurance when Sam AI-Man’s team is suggesting annihilation of the human race. Here are two passages I found interesting in the article-as-interview write up.
This is an illustration of a Microsoft training program for its smart future employees. Humans will learn or be punished by losing their Microsoft 365 account. The picture is a product of the gradient surfing MidJourney.
First snippet of interest:
“The potential for this technology to really drive human productivity… to bring economic growth across the globe, is just so powerful, that we’d be foolish to set that aside,” Eric Boyd, corporate vice president of Microsoft AI Platforms told Sky News.
Second snippet of interest:
“People talk about how the AI takes over, but it doesn’t have the capability to take over. These are models that produce text as output,” he said.
Now what about this passage posturing as analysis:
Big Tech doesn’t look like it has any intention of slowing down the race to develop bigger and better AI. That means society and our regulators will have to speed up thinking on what safe AI looks like.
I wonder if anyone is considering that AI in the hands of Big Tech might have some interest in controlling some of the human race. Smart software seems ideal as an enabler of predatory behavior. Regulators thinking? Yeah, that’s a posture sure to deal with smart software’s applications. Microsoft, do you believe this colleague’s marketing hoo hah?
Stephen E Arnold, July 14, 2023
Microsoft Causing Problems? Heck, No
July 14, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I cruised through the headlines my smart news system prepared for me. I noted two articles on different subjects. The two write ups were linked with a common point of reference: Microsoft Corp., home of the Softies and the throbbing heart of a significant portion of the technology governments in North America and Western Europe find essential.
“What’s the big deal?” asks Mr. Microsoft. “You have Windows. You have Azure. Software has bugs. Get used to it. You can switch to Linux anytime.” This interesting scene is the fruit of MidJourney’s tree of creativity.
The first article appeared in TechRadar, an online “real” news outfit. The title was compelling; specifically, “Windows 11 Update Is Reportedly Slowing Down PCs and Breaking Internet Connections.” The write up reports:
KB5028185, the ‘Moment 3’ update, is proving seriously problematic for some users … The main bones of contention with patch KB5028185 for Windows 11 22H2 are instances of performance slowdown – with severe cases going by some reports – and problems with flaky internet connections.
The second story appeared on cable “real” news. I tracked down the item titled “US and Microsoft Sound Alarm about China-Based Cybersecurity Threat.” The main idea seems to be:
The U.S. and Microsoft say China-based hackers, focused on espionage, have breached email accounts of about two dozen organizations, including U.S. government agencies.
Interesting. Microsoft seems to face two challenges: Desktop engineering and cloud engineering. The common factor is obviously engineering.
I am delighted that Bing is improving with smart software. I am fascinated by Microsoft’s effort to “win” in online games. However, isn’t it time for someone with clout to point out that Microsoft may need to enhance its products’ stability, security, and reliability?
Because so many organizations and individuals depend on Microsoft, the company’s knack for creating a range of issues has real consequences. Will someone step up and direct the engineering in a way that does not increase vulnerability and cause fiduciary loss for its customers?
Anyone? Crickets, I fear. Bad actors find Microsoft’s approach more satisfying than a stream of TikTok moments.
Stephen E Arnold, July 14, 2023