Surprised? Microsoft Drags Feet on Azure Security Flaw

September 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Microsoft has addressed a serious security flaw in Azure, but only after being called out by the cybersecurity firm that found the issue. It only took several months. Oh, and according to that firm, the “fix” only applies to new applications despite Microsoft’s assurances to the contrary. “Microsoft Fixes Flaw After Being Called Irresponsible by Tenable CEO,” Bleeping Computer reports. Writer Sergiu Gatlan describes the problem Tenable found within the Power Platform Custom Connectors feature:

“Although customer interaction with custom connectors usually happens via authenticated APIs, the API endpoints facilitated requests to the Azure Function without enforcing authentication. This created an opportunity for attackers to exploit unsecured Azure Function hosts and intercept OAuth client IDs and secrets. ‘It should be noted that this is not exclusively an issue of information disclosure, as being able to access and interact with the unsecured Function hosts, and trigger behavior defined by custom connector code, could have further impact,’ says cybersecurity firm Tenable which discovered the flaw and reported it on March 30th. ‘However, because of the nature of the service, the impact would vary for each individual connector, and would be difficult to quantify without exhaustive testing.’ ‘To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank. They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft,’ Tenable CEO Amit Yoran added.”
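
For readers who want to see the shape of the flaw Tenable describes, here is a minimal sketch, not Tenable's proof of concept and not a real endpoint: a hypothetical unsecured Azure Function host that answers an HTTP request with connector configuration even though no credentials were supplied. The host name, route, and JSON fields are invented for illustration.

```python
import requests

# Hypothetical illustration only. The host, route, and field names below are
# invented; they are not the endpoints Tenable reported. The point is the
# pattern: an Azure Function HTTP trigger that returns connector secrets
# without requiring a key, token, or any other credential.
FUNCTION_HOST = "https://example-connector.azurewebsites.net"  # made-up host


def probe_unsecured_host(host: str) -> None:
    # Note what is missing: no function key in the query string, no OAuth
    # bearer token, no subscription header of any kind.
    resp = requests.get(f"{host}/api/connector-config", timeout=10)
    if resp.status_code == 200:
        data = resp.json()
        # A misconfigured host hands server-side OAuth credentials to any caller.
        print("client_id:", data.get("client_id"))
        print("client_secret returned:", "client_secret" in data)
    else:
        print("Authentication enforced, status:", resp.status_code)


if __name__ == "__main__":
    probe_unsecured_host(FUNCTION_HOST)
```

In other words, authentication at the outer API alone is not enough if the Function host behind it will answer anyone who asks.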

Yes, that would seem to be worth a sense of urgency. But even after the eventual fix, this bank and any other organizations already affected were still vulnerable, according to Yoran. As far as he can tell, they weren’t even notified of the problem so they could mitigate their risk. If accurate, can Microsoft be trusted to keep its users secure going forward? We may have to wait for another crop of interns to arrive in Redmond to handle the work “real” engineers do not want to do.

Cynthia Murrell, September 5, 2023

Meta Play Tactic or Pop Up a Level. Heh Heh Heh

September 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Years ago I worked at a blue chip consulting firm. One of the people I interacted with had a rhetorical ploy when cornered in a discussion. The wizard would say, “Let’s pop up a level.” He would then shift the issue which was, for example, a specific problem, into a higher level concept and bring his solution into the bigger context.


The clever manager pops up a level to observe the lower-level tasks from a broader view. Thanks, Mother MJ. Not what I specified, but the gradient descent is alive and well.

Let’s imagine that the topic is a plain donut versus a chocolate covered donut with sprinkles. There are six people in the meeting. The discussion is contentious because that’s what blue chip consulting Type As do: contention, sometimes nice, sometimes not. The “pop up a level” guy says, “Let’s pop up a level. The plain donut has less sugar. We are concerned about everyone’s health, right? The plain donut does not have so much evil, diabetes-linked sugar. It makes sense to think about health and, obviously, the decreased risk of higher health insurance premiums.” Unless one is paying attention and not eating the chocolate chip cookies provided for the meeting attendees, the pop-up-a-level approach might work.

For a current example of pop-up-a-level thinking, navigate to “Designing Deep Networks to Process Other Deep Networks.” Nvidia is in hog heaven with the smart software boom. The company realizes that there are lots of people getting in the game. The number of smart software systems and methods, products and services, grifts and gambles, and heaven knows what else is increasing. Nvidia wants to remain the Big Dog even though some outfits want to design their own chips or be like Apple and maybe find a way to do the Arm thing. Enter the pop-up-a-level play.

The write up says:

The solution is to use convolutional neural networks. They are designed in a way that is largely “blind” to the shifting of an image and, as a result, can generalize to new shifts that were not observed during training…. Our main goal is to identify simple yet effective equivariant layers for the weight-space symmetries defined above. Unfortunately, characterizing spaces of general equivariant functions can be challenging. As with some previous studies (such as Deep Models of Interactions Across Sets), we aim to characterize the space of all linear equivariant layers.

Translation: Our system and method can make use of any other accessible smart software plumbing. Stick with Nvidia.
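
The “equivariant layers” idea in the quoted passage is easier to see in a toy form. Here is a minimal NumPy sketch of a permutation-equivariant linear layer in the style of the cited “Deep Models of Interactions Across Sets” work: every element gets the same per-element transform plus a term computed from an order-free summary of the whole set, so shuffling the inputs simply shuffles the outputs. This is my own illustrative reduction, not Nvidia’s code, and it ignores the weight-space symmetries specific to processing other networks.

```python
import numpy as np


def equivariant_linear(x: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Permutation-equivariant linear layer (DeepSets-style toy).

    x: (n, d_in) set of n elements
    a: (d_in, d_out) weight applied to each element individually
    b: (d_in, d_out) weight applied to the order-free mean of the set
    Returns (n, d_out); permuting the rows of x permutes the rows of the output.
    """
    per_element = x @ a                              # same map for every element
    pooled = x.mean(axis=0, keepdims=True) @ b       # symmetric summary of the set
    return per_element + pooled                      # pooled term broadcasts to all rows


# Quick check of equivariance: permute the inputs, the outputs permute identically.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
a = rng.normal(size=(3, 4))
b = rng.normal(size=(3, 4))
perm = rng.permutation(5)

assert np.allclose(equivariant_linear(x, a, b)[perm],
                   equivariant_linear(x[perm], a, b))
```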

I think the pop-up-a-level approach is a useful one. Are the competitors savvy enough to counter the argument?

Stephen E Arnold, September 4, 2023

Planning Ahead: Microsoft User Agreement Updates To Include New AI Stipulations

September 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Microsoft is eager to capitalize on its AI projects, but first it must make sure users are legally prohibited from poking around behind the scenes. For good measure, it will also ensure users take the blame if they misuse its AI tools. “Microsoft Limits Use of AI Services in Upcoming Services Agreement Update,” reports Ghacks.net. Writer Martin Brinkman notes these services include but are not limited to Bing Chat, Windows Copilot, Microsoft Security Copilot, Azure AI platform, and Teams Premium. We learn:

“Microsoft lists five rules regarding AI Services in the section. The rules prohibit certain activity, explain the use of user content and define responsibilities. The first three rules limit or prohibit certain activity. Users of Microsoft AI Services may not attempt to reverse engineer the services to explore components or rulesets. Microsoft prohibits furthermore that users extract data from AI services and the use of data from Microsoft’s AI Services to train other AI services. … The remaining two rules handle the use of user content and responsibility for third-party claims. Microsoft notes in the fourth entry that it will process and store user input and the output of its AI service to monitor and/or prevent ‘abusive or harmful uses or outputs.’ Users of AI Services are also solely responsible regarding third-party claims, for instance regarding copyright claims.”

Another, non-AI related change is that storage for one’s Outlook.com attachments will soon affect OneDrive storage quotas. That could be an unpleasant surprise for many when changes take effect on September 30. Curious readers can see a summary of the changes here, on Microsoft’s website.

Cynthia Murrell, September 4, 2023

Google: Another Modest Proposal to Solve an Existential Crisis. No Big Deal, Right?

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am fascinated with corporate “do goodism.” Many people find themselves in an existential crisis anchored in zeros and ones. Is the essay submitted as original work the product of an industrious 15-year-old? Or, is the essay the 10-second output of a smart software system like ChatGPT or You.com? Is that brilliant illustration the labor of a dedicated 22-year-old laboring in a cockroach-infested garage in Corona, Queens? Or, was the art used in this essay output in about 60 seconds by my trusted graphic companion Mother MidJourney?


“I see the watermark. This is a fake!” exclaims the precocious lad. This clever middle school student has identified the super secret hidden clue that this priceless image is indeed a fabulous fake. How could a young person detect such a sophisticated and subtle watermark? The implicit message is, “Let’s overestimate our capabilities and underestimate those of young people who are skilled navigators of the digital world.”

Enough about Queens. What’s this “modest proposal” angle? Jonathan Swift beat this horse until it died in the early 18th century. I think the reference makes a bit of sense. Mr. Swift proposed simple solutions to big problems. “DeepMind Develops Watermark to Identify AI Images” explains:

Google’s DeepMind is trialling [sic] a digital watermark that would allow computers to spot images made by artificial intelligence (AI), as a means to fight disinformation. The tool, named SynthID, will embed changes to individual pixels in images, creating a watermark that can be identified by computers but remains invisible to the human eye. Nonetheless, DeepMind has warned that the tool is not “foolproof against extreme image manipulation.”
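
DeepMind has not published how SynthID actually embeds its mark, so the following is a deliberately crude stand-in: a least-significant-bit toy that shows what “changes to individual pixels invisible to the eye but readable by software” means, and why such marks are not foolproof once an image is manipulated. None of this is SynthID’s method.

```python
import numpy as np

# Toy illustration only. SynthID's real technique is unpublished; this crude
# least-significant-bit scheme just demonstrates a pixel-level mark that eyes
# miss but software can read, and why simple edits can destroy it.


def embed_mark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one bit into the least significant bit of each of the first len(bits) pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # changes each pixel by at most 1
    return marked


def read_mark(image: np.ndarray, n_bits: int) -> np.ndarray:
    return image.reshape(-1)[:n_bits] & 1


rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = rng.integers(0, 2, size=128, dtype=np.uint8)

marked = embed_mark(img, watermark)
assert np.array_equal(read_mark(marked, 128), watermark)            # software sees the mark
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1    # the eye does not

# "Not foolproof against extreme image manipulation": even a trivial edit
# (rounding every pixel down to an even value) wipes the embedded bits.
manipulated = ((marked // 2) * 2).astype(np.uint8)
survivors = int((read_mark(manipulated, 128) == watermark).sum())
print("bits surviving manipulation:", survivors, "of 128")
```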

Righto, it’s good enough. Plus, the affable crew at Alphabet Google YouTube are in an ideal position to monitor just about any tiny digital thing in the interwebs. Such a prized position as de facto ruler of the digital world makes it easy to flag and remove offending digital content with the itty bitty teenie weeny  manipulated pixel thingy.

Let’s assume that everyone, including the young fake spotter in the Mother MJ image accompanying this essay, gets to become a de facto certifier of digital content. What are the downsides?

Gee, I give up. I cannot think of one downside to Google’s becoming the chokepoint for what’s in bounds and what’s out of bounds. Everyone will be happy. Happy is good in our stressed out world.

And think of the upsides? A bug might derail some creative work? A system crash might nuke important records about a guilty party because pixels don’t lie? Well, maybe just a little bit. The Google intern given the thankless task of optimizing image analysis might stick in an unwanted instruction. So what? The issue will be resolved in a court, and these legal proceedings are super efficient and super reliable.

I find it interesting that the article does not see any problem with the Googley approach. Like the Oxford research which depended upon Facebook data, the truth is the truth. No problem. Gatekeepers and certification authority are exciting business concepts.

Stephen E Arnold, September 1, 2023

Regulating Smart Software: Let Us Form a Committee and Get Industry Advisors to Help

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The Boston Globe published what I thought was an amusing “real” news story about legislators and smart software. I know. I know. I am entering oxymoron land. The article is “The US Regulates Cars, Radio, and TV. When Will It Regulate AI?” A number of passages received True Blue check marks.


A person living off the grid works to make his mobile phone deliver generative content to solve the problem of … dinner. Thanks, MidJourney. You did a Stone Age person but you would not generate a street person. How helpful!

Let me share two passages and then offer a handful of observations.

How about this statement attributed to Microsoft’s Brad Smith? He is the professional who was certain Russia organized 1,000 programmers to figure out the SolarWinds security loopholes. Yes, that Brad Smith. The story quotes him as saying:

“We should move quickly,” Brad Smith, the president of Microsoft, which launched an AI-powered version of its search engine this year, said in May. “There’s no time for waste or delay,” Chuck Schumer, the Senate majority leader, has said. “Let’s get ahead of this,” said Sen. Mike Rounds, R-S.D.

Microsoft moved fast. I think the reason was to make Google look stupid. Both of these big outfits know that online services aggregate and become monopolistic. Microsoft wants to be the AI winner. Microsoft is not spending extra time helping elected officials understand smart software or the stakes on the digital table. No way.

The second passage is:

Historically, regulation often happens gradually as a technology improves or an industry grows, as with cars and television. Sometimes it happens only after tragedy.

Please, read the original “real” news story for Captain Obvious statements. Here are a few observations:

  1. Smart software is moving along at a reasonable clip. Big bucks are available to AI outfits in Germany and elsewhere. Something like 28 percent of US companies are fiddling with AI. Yep, even those raising chickens have AI religion.
  2. The process of regulation is slow. We have a turtle and a hare situation. Nope, the turtle loses unless an exogenous power kills the speedy bunny.
  3. If laws were passed, how would one get fast action to apply them? How is the FTC doing? What about the snappy pace of the CDC in preparing for the next pandemic?

Net net: Yes, let’s understand AI.

Stephen E Arnold, September 1, 2023


YouTube Content: Are There Dark Rabbit Holes in Which Evil Lurks? Come On Now!

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Google has become a cultural touchstone. The most recent evidence is a bit of moral outrage in Popular Science. Now the venerable magazine is PopSci.com, and the Google has irritated the technology-explaining staff. Navigate to “YouTube’s Extremist Rabbit Holes Are Deep But Narrow.”


“Google, your algorithm is creating rabbit holes. Yes, that is a technical term,” says the PopSci technology expert. Thanks for a C+ image, MidJourney.

The write up asserts:

… exposure to extremist and antagonistic content was largely focused on a much smaller subset of already predisposed users. Still, the team argues the platform “continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.” Not only that, but engagement with this content still results in advertising profits.

I think the link with popular science is the “algorithm.” But the write up seems to be more a see-Google-is-bad essay. Science? No. Popular? Maybe?

The essay concludes with this statement:

While continued work on YouTube’s recommendation system is vital and admirable, the study’s researchers echoed that, “even low levels of algorithmic amplification can have damaging consequences when extrapolated over YouTube’s vast user base and across time.” Approximately 247 million Americans regularly use the platform, according to recent reports. YouTube representatives did not respond to PopSci at the time of writing.
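
The “low levels of amplification across a vast user base” point is just arithmetic. A back-of-envelope sketch, using the article’s 247 million US users figure and an amplification rate I invented purely to show the scale effect:

```python
# Back-of-envelope only: 247 million comes from the article; the 0.1% rate is
# an invented placeholder to show how a "low level" scales over a large base.
us_users = 247_000_000
assumed_rate = 0.001  # 0.1% of regular users nudged toward extremist content

affected = us_users * assumed_rate
print(f"{affected:,.0f} people")  # prints: 247,000 people
```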

I find the use of the word “admirable” interesting. Also, I like the assertion that algorithms can do damage. I recall seeing a report that explained social media is good and another study pitching the idea that bad digital content does not have a big impact. Sure, I believe these studies, just not too much.

Google has a number of buns in the oven. The firm’s approach to YouTube appears to be “emulate Elon.” Content moderation will be something with a lower priority than keeping tabs on Googlers who don’t come to the office or do much Google work. My suggestion for Popular Science is to do a bit more science, and a little less quasi-MBA type writing.

Stephen E Arnold, September 1, 2023

Microsoft Pop Ups: Take Screen Shots

August 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Microsoft Is Using Malware-Like Pop-Ups in Windows 11 to Get People to Ditch Google.” Kudos to the wordsmiths at TheVerge.com for avoiding the term “po*n storm” to describe the Windows 11 alleged pop ups.


A person in the audience says, “What’s that pop up doing up there?” Thanks, MJ. Another so so piece of original art.

The write up states:

I have no idea why Microsoft thinks it’s ok to fire off these pop-ups to Windows 11 users in the first place. I wasn’t alone in thinking it was malware, with posts dating back three months showing Reddit users trying to figure out why they were seeing the pop-up.

What? Pop ups for three months? I love “real” news when it is timely.

The article includes this statement:

Microsoft also started taking over Chrome searches in Bing recently to deliver a canned response that looks like it’s generated from Microsoft’s GPT-4-powered chatbot. The fake AI interaction produced a full Bing page to entirely take over the search result for Chrome and convince Windows users to stick with Edge and Bing.

How can this be? Everyone’s favorite software company would not use these techniques to boost Credge’s market share, would it?

My thought is that Microsoft’s browser woes began a long time ago in an operating system far, far away. As a result, Credge is lagging behind Googzilla’s browser. Unless Google shoots itself in both feet and fires a digital round into the beastie’s heart, the ad monster will keep on sucking data and squeezing out alternatives.

The write up does not seem to be aware that Google wants to control digital information flows. Microsoft will need more than popups to prevent the Chrome browser from becoming the primary access mechanism to the World Wide Web. Despite Microsoft’s market power, users don’t love the Microsoft Credge thing. Hey, Microsoft, why not pay people to use Credge?

Stephen E Arnold, August 31, 2023

Slackers, Rejoice: Google Has a Great Idea Just for You

August 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I want to keep this short because the idea of not doing work to do work offends me deeply. The big thinkers who want people to relax, take time, smell the roses, and avoid those Type A tendencies annoy me just as much. I like being a Type A. In fact, if I were not a Type A, I would not “be,” to use some fancy Descartes logic.


Is anyone looking down the Information Superhighway to see what speeding AI vehicle is approaching? Of course not, everyone is on break or playing Foosball. Thanks, Mother MidJourney, you did not send me to the arbitration committee for my image request.

“Google Meet’s New AI Will Be Able to Go to Meetings for You” reports:

…you might never need to pay attention to another meeting again — or even show up at all.

Let’s think about this new Google service. If AI continues to advance at a reasonable pace, an AI which can attend a meeting for a person can at some point replace the person. Does that sound reasonable? What a GenZ thrill. Money for no work. The advice to take time for kicking back and living a stress free life is just fantastic.

In today’s business climate, I am not sure that delegating knowledge work to smart software is a good idea. I like to use the phrase “gradient descent.” My connotation of this jargon means a cushioned roller coaster to one or more of the Seven Deadly Sins. I much prefer intentional use of software. I still like most of the old-fashioned methods of learning and completing projects. I am happy to encounter a barrier like my search for the ultimate owners of the domain rrrrrrrrrrr.com or the methods for enabling online fraud practiced by some Internet service providers. (Sorry, I won’t name these fine outfits in this free blog post. If you are attending my keynote at the Massachusetts and New York Association of Crime Analysts’ conference in early October, say, “Hello.” In that setting, I will identify some of these outstanding companies and share some thoughts about how these folks trample laws and regulations. Sound like fun?)

Google’s objective is to become the source for smart software. In that position, the company will have access to knobs and levers controlling information access, shaping, and distribution. The end goal is a quarterly financial report and the diminution of competition from annoying digital tsetse flies in my opinion.

Wouldn’t it be helpful if the “real news” looked down the Information Highway? No, of course not. For a Type A, the new “Duet” service does not “do it” for me.

Stephen E Arnold, August 31, 2023

A Wonderful Romp through a Tech Graveyard

August 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I heard about a Web site called killedby.tech. I took a look and what a walk down Memory Lane. You know Memory Lane. It runs close to the Information Superhighway. Are products smashed on the Info Highway? Some, not all.

The entry for iLoo, an innovation from the Softies, notes the product was born and vaporized in 2003. Killedby describes the breakthrough this way:

iLoo was a smart portable toilet integrating the complete equipment to surf the Internet from inside and outside the cabinet.

I wonder how many van lifers would buy this product. Imagine the TikTok videos. That would keep the Oracle TikTok review team busy and probably provide some amusement for others as well.

And I had forgotten about Google’s weird response to failing to convince the US government to use the Googley search system for FirstGov.gov. Ah, forward truncation — something Google would never ever do. The product/service was Google Public Service Search. Here’s what the tombstone says:

Google Public Service Search provided governmental, non-profit and academic organizational search results without ads.

That idea bit the dust in 2006, which is the year I have pegged as the point at which Google went all-in on its cheerful, transparent business model. No ads! Imagine that!

I had forgotten about Google’s real time search play. Killedby says:

Google Real-Time Search provided live search results from Twitter, Facebook, and news websites.

I never learned why this was sent to the big digital dumpster behind the Google building on Shoreline. Rumor was that some news outfits and some social media Web sites were not impressed. Google — ever the trusted ad provider — said hasta la vista to a social information metasearch.

Great site. I did not see Google Transformic, however. Killedby is quite good.

Stephen E Arnold, August 31, 2023

Google: Trapped in Its Own Walled Garden with Lots of Science Club Alums

August 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “MapReduce, TensorFlow, Vertex: Google’s Bet to Avoid Repeating History in AI.” I found the idea that Google gets in its own way a retelling of how high school science club management produces interesting consequences.


A young technology wizard finds himself in a Hall of Mirrors at the carnival. He is not sure what is real or in which direction to go. The world of the House of Mirrors is disorienting. The young luminary wants to return to the walled garden where life is more comfortable. Thanks, MidJourney. Four tries and I get this tired illustration. Gradient descent time?

The write up asserts:

Google is in the middle of trying to avoid repeating history when releasing its industry-altering technology.

I disagree. The methods defining Google produce, with remarkable consistency, a lack of informed control. The idea is that organizations have a culture. That culture evolves over time, but it remains anchored in its past. Thus, as the organization appears to move forward in time, it behaves in a predictable way; for example, Google has an approach to management which guarantees friction. Examples range from the staff protests to the lateral arabesque used to move Dr. Jeff Dean out of the way of the DeepMind contingent.

The write up takes a different view; for example:

Run by engineers, the [Google MapReduce] team essentially did not foresee the coming wave of open-source technology to power the modern Web and the companies that would come to commercialize it.

Google lacks the ability to perceive its opportunities. The company is fenced by its dependence on online advertising. Thus, innovations are tough for the Googlers to put into perspective. One reason is the high school science club ethos of the outfit; the other is that the outside world is as foreign to many Googlers as the world beyond the goldfish’s bowl filled with water. The view is distorted, surreal, and unfamiliar.

How can a company innovate and make a commercially viable product with this in its walled garden? It cannot. Advertising at Google is a me-too product; prior to its IPO, Google settled a dispute with Yahoo over the “inspiration” for pay-to-play search. The cost of this “inspiration” was about $1 billion.

In a quarter century, Google remains what one Microsoftie called “a one-trick pony.” Will the Google Cloud emerge as a true innovation? Nope. There are lots of clouds. Google is the Enterprise Rent-a-Car to the Hertz and Avis cloud rental firms. Google’s innovation track record is closer to a high school science club which has been able to win the state science club contest year after year. Other innovators win the National Science Club Award (once called the Westinghouse Award). The context-free innovations are useful to others who have more agility and market instinct.

My view is that Google has become predictable, lurching from technical paper to legal battle like a sine wave in a Physics 101 class; that is, a continuous wave with a smooth periodic function.

Don’t get me wrong. Google is an important company. What is often overlooked is the cultural wall that keeps the 100,000 smartest people in the world locked down in the garden. Innovation is constrained, and the excitement exists off the virtual campus. Why do so many Xooglers innovate and create interesting things once freed from the walled garden? Culture has strengths and weaknesses. Google’s muffing the bunny, as the article points out, is one defining characteristic of a company which longs for high school science club meetings and competitions with those like themselves.

Tony Bennett won’t be singing in the main cafeteria any longer, but the Googlers don’t care. He was an outsider, interesting but not in the science club. If the thought process doesn’t fit, you must quit.

Stephen E Arnold, August 30, 2023
