Registering Dismay: Microsoft Azure Blues

October 20, 2021

The Beyond Search team loves Microsoft. Totally.

Some are not thrilled with automated customer service: talk to smart software, skip the human thing. Microsoft’s customer service has been setting a high standard for decades. Now, as the company grows bigger and more powerful, Microsoft has sparked a story in The Register called “WTF? Microsoft Makes Fixing Deadly OMIGOD Flaws On Azure Your Job.”

Azure is Microsoft’s cloud platform, and users running Linux VMs are susceptible to four “OMIGOD” flaws in the Open Management Infrastructure (OMI) agent. Linux Azure users are left to fend for themselves with the OMIGOD bugs, because Microsoft will not patch the affected services for them. What is worse, these users never asked to run OMI on their virtual machines: the agent is deployed automatically when certain Azure features are enabled on a VM. Without a patch, attackers can gain root access and upload malware.
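For Azure Linux users who must handle remediation themselves, the practical first step is checking whether the installed OMI agent predates the fixed release. Below is a minimal Python sketch, assuming the patched OMI version is 1.6.8-1 (as reported in public coverage of CVE-2021-38647); verify the exact version against Microsoft’s advisory before relying on it:

    import subprocess

    # OMI release reported to contain the OMIGOD fixes. This is an assumption;
    # confirm the exact version against Microsoft's advisory before acting on it.
    PATCHED_VERSION = (1, 6, 8, 1)

    def installed_omi_version():
        """Return the installed OMI version as a tuple of ints, or None if absent."""
        commands = [
            ["dpkg-query", "-W", "-f=${Version}", "omi"],            # Debian / Ubuntu
            ["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}", "omi"],   # RHEL / CentOS
        ]
        for cmd in commands:
            try:
                out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
            except (FileNotFoundError, subprocess.CalledProcessError):
                continue  # package manager not present, or OMI not installed
            parts = [p for p in out.replace("-", ".").split(".") if p.isdigit()]
            if parts:
                return tuple(int(p) for p in parts)
        return None

    version = installed_omi_version()
    if version is None:
        print("OMI does not appear to be installed on this VM.")
    elif version < PATCHED_VERSION:
        print("OMI " + ".".join(map(str, version)) + " predates the fixed release; update it.")
    else:
        print("OMI " + ".".join(map(str, version)) + " appears to include the fixes.")

Upgrading the omi package through the VM’s package manager (with Microsoft’s repository configured) is the commonly described remediation, but follow Microsoft’s own guidance for the Azure extensions that bundle OMI.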

The write up points out that Microsoft did some repairs:

“The Windows giant publicly fixed the holes in its OMI source in mid-August, released it last week, and only now is advising customers. Researchers quickly found unpatched instances of OMI. Security vendor Censys, for example, wrote that it discovered ‘56 known exposed services worldwide that are likely vulnerable to this issue, including a major health organization and two major entertainment companies.’ … In other words, there may not be that many vulnerable machines facing the public internet, or not many that are easily found.”

Linux VM users on Azure are unknowingly exposed, and a determined hacker could access the systems.

Is it possible Windows 11 is a red herring? OMIGOD, no.

Whitney Grace, October 20, 2021

Interesting Behavior: Is It a Leitmotif for Big Tech?

October 18, 2021

A leitmotif, if I remember the required music appreciation course I took in 1962, is a melodic figure that accompanies a person, a situation, or a character, like Brünnhilde from a special someone’s favorite composer.

My question this morning on October 18, 2021, is:

“Is there a leitmotif associated with some of the Big Tech “we are not monopolies” outfits?”

You can decide from these three examples, or what Stephen Toulmin called “data.” I will provide my own “warrant,” but that’s what Toulmin’s model says to do.

Here we go. Data:

  1. The Wall Street Journal asserts that some Softie colleagues suggested William “Bill” Gates alter his email behavior toward a female employee. Correctly or incorrectly, Mr. Gates has been associated with everyone’s favorite academic donor, Jeffrey Epstein, according to the mostly-accurate New York Times.
  2. Facebook does not agree with a Wall Street Journal report that the company is not doing a Class A job fighting hate speech. See “Facebook Disputes Report That Its AI Can’t Detect Hate Speech or Violence Consistently.”
  3. The trusty Thomson Reuters reports that “Amazon May Have Lied to Congress, Five US Lawmakers Say.” The operative word is lied; that is, not telling the “truth,” which is, of course, like “is,” a word with fluid connotations.

Now the warrant:

With each of the Big Tech “we’re not monopolies” outfits, a high-profile individual defends the company’s actions or protests that “reality” is different from the shaped information about the individual or the company.

Let’s concede that these are generally negative “data.” What’s interesting is that, negative or not, the individuals and their associated organizations are allegedly behaving in a way that troubles some people.

That’s enough Stephen Toulmin for today. Back to Wagner.

Leitmotifs allowed that special someone’s favorite composer to create musical symbols. In that eminently terse and listenable Der Ring des Nibelungen, Wagner delivers dozens of distinct leitmotifs. These are possibly used to represent many things.

In our modern Big Tech settings, perhaps the leitmotif is the fruits of no consequences, fancy dancing, and psychobabble.

Warrant? What does that mean? I think it means one thing to Stephen Toulmin and another thing to Stephen E Arnold.

Stephen E Arnold, October 18, 2021

University of Washington: Struggling with Ethics 101

October 18, 2021

Like some professionals at the Massachusetts Institute of Technology, some at the University of Washington appear to be struggling with Ethics 101. A typical syllabus includes such questions as these from the University of Wisconsin Stevens Point Introduction to Ethics course:

  • What theoretical principles guide our moral behavior?
  • What makes an action right or wrong?
  • What factors (theoretical and practical) ground moral disputes?
  • Is there hope that we will resolve moral disputes?

The syllabus includes this statement:

If you commit any acts of academic dishonesty (such as plagiarism on written work or cheating on an exam) you will earn a zero for that work (and possibly other disciplinary actions).

Well, this is a basic class. How well did the University of Washington do? (We already know that MIT accepted some Jeffrey Epstein goodness and participated in the digital hair shirt ritual.)

Navigate to “University of Washington Settles DOJ Claims of Grant Fraud.” You will learn that one of those who appears to have flunked Introduction to Ethics engaged in some search engine optimization. I learned from the article:

The University of Washington has agreed to pay more than $800,000 to settle Justice Department allegations that a professor submitted false documentation relating to a highly competitive grant. The grant documents were submitted to the National Science Foundation by Mehmet Sarikaya, a professor in the university’s Materials Science and Engineering Department…

Keep in mind that some academics engage in citation exchanges and other crafty techniques to burnish their reputation as big time thinkers.

If the Department of Justice is correct, the get-out-of-jail card cost the university, a provider of Amazon-type and Google-type graduates, a mere $800,000.

A PR-savvy university professional is quoted as saying: “The UW takes very seriously the responsibility of stewarding public funding of scientific research,” university spokesman Victor Balta said in an email. “We are grateful this issue was brought to light and pleased to have it resolved.”

Abso-fricking-lutely. “Grateful.”

The issue is one that St. Thomas Aquinas might have enjoyed pondering. Why fool around with Aristotelian ethics when one can do what’s necessary to be a winner? The text of these thoughts might be called Macho invento and be authored by a group of recent University of Washington graduates who volunteer their time to advance ethical thought.

Stephen E Arnold, October 18, 2021

Facebook Targets Paginas Amarillas: Never Enough, Zuck?

October 14, 2021

Facebook is working to make one of its properties more profitable. The Next Web reports, “WhatsApp Reinvents the ‘Yellow Pages’ and Proves there Are No New Ideas.” The company will test out a new business directory feature in São Paulo, Brazil, where local users will be able to search for “businesses nearby” through the app. Writer Ivan Mehta reports:

“For years, Facebook and Instagram have been trying to connect you to businesses and make your shop through their platforms. While the WhatsApp Business app has been around, you couldn’t really search for businesses using the app, unless you’ve interacted with them previously. WhatsApp already offers payment services in Brazil. So it makes sense for it to provide discovery services for local businesses, so you can shop for goods in person, and pay through the platform. The chat app doesn’t have any ads, unlike Facebook and Instagram, so business interactions and transactions are one of the biggest ways for Facebook to earn some moolah out of it. In June, the company integrated its Shops feature in WhatsApp. So, we can expect more business-facing features in near future.”

India and Indonesia are likely next on the list for the project, according to Facebook’s Matt Idema. We are assured the company will track neither users’ locations nor the businesses they search for. Have we heard similar promises before?

Cynthia Murrell, October 14, 2021

Voyager Labs Expands into South America

October 14, 2021

Well, this is an interesting development. Brazil’s ITForum reports, “Voyager Labs Appoints VP and Opens Operations in Latin America and the Caribbean.” (I read and quote from Google’s serviceable translation.)

Voyager Labs is an Israeli specialized services firm that keeps a very low profile. Their platform uses machine learning to find and analyze clues to fight cyber attacks, organized crime, fraud, corruption, drug trafficking, money laundering, and terrorism. Voyager Labs’ clients include private companies and assorted government agencies around the world.

The brief announcement reveals:

“Voyager Labs, an AI-based cybersecurity and research specialist, announced this week the arrival in Latin America and the Caribbean. To lead the operation, the company appointed Marcelo Comité as regional vice president. The executive, according to the company, has experience in the areas of investigation, security, and defense in Brazil and the region. Comité will have as mission to consolidate teams of experts to improve the services and support in technologies in the region, according to the needs and particularities of each country. ‘It is a great challenge to drive Voyager Labs’ expansion in Latin America and the Caribbean. Together with our network of partners in each country, we will strengthen ties with strategic clients in the areas of government, police, military sector and private companies’, says the executive.”

We are intrigued by the move to South America, since most of the Israeli firms are building operations in Singapore. What does Voyager know that its competitors do not? Not familiar with Voyager Labs? Worth knowing the company perhaps?

Cynthia Murrell, October 14, 2021

Amazon AI: Redefines Defensive Driving and Some Rules of the Road

October 8, 2021

For a glimpse of the smart software which cost Dr. Timnit Gebru her role at the Google, check out “Amazon’s AI Cameras Are Punishing Drivers for Mistakes They Didn’t Make.” Now imagine this software monitoring doctors, pilots, consultants, and Amazon product teams. No, not Amazon product teams.

The write up states:

In February, Amazon announced that it would install cameras made by the AI-tech startup Netradyne in its Amazon-branded delivery vans as an “innovation” to “keep drivers safe.”… The Netradyne camera, which requires Amazon drivers to sign consent forms to release their biometric data, has four lenses that record drivers when they detect “events” such as following another vehicle too closely, stop sign and street light violations, and distracted driving.

Smart software then makes sense of the data.

The write up quotes one driver who says:

I personally did not feel any more safe with a camera watching my every move.

Safe? Nope. Hit quotas. I noted:

In June, Motherboard reported that Amazon delivery companies were encouraging drivers to shut off the Mentor app that monitors safety in order to hit Amazon’s delivery quotas.

What’s up?

  1. Get points for showing concern for driver safety
  2. Get the packages out
  3. Have life both ways: Safe and speedy.

Might not work, eh?

Stephen E Arnold, October 8, 2021

Who Is Ready to Get Back to the Office?

October 4, 2021

The pandemic has had many workers asking “hey, who needs an office?” Maybe most of us, according to the write-up, “How Work from Home Has Changed and Became Less Desirable in Last 18 Months” posted at Illumination. Cloud software engineer Amrit Pal Singh writes:

“Work from home was something we all wanted before the pandemic changed everything. It saved us time, no need to commute to work, and more time with the family. Or at least we used to think that way. I must say, I used to love working from home occasionally in the pre-pandemic era. Traveling to work was a pain and I used to spend a lot of time on the road. Not to forget the interrupts, tea breaks, and meetings you need to attend at work. I used to feel these activities take up a lot of time. The pandemic changed it all. In the beginning, it felt like I could work from home all my life. But a few months later I want to go to work at least 2–3 times a week.”

What changed Singh’s mind? Being stuck at home, mainly. There is the expectation that since he is there he can both work and perform household chores each day. He also shares space with a child attending school virtually—as many remote workers know, this makes for a distracting environment. Then there is the loss of work-life balance; when both work and personal time occur in the same space, they tend to blend together and spur monotony. An increase in unnecessary meetings takes away from actually getting work done, but at the same time Singh misses speaking with his coworkers face-to-face. He concludes:

“I am not saying WFH is bad. In my opinion, a hybrid approach is the best where you go to work 2–3 days a week and do WFH the rest of the week. I started going to a nearby cafe to get some time alone. I have written this article in a cafe :)”

Is such a hybrid approach the balance we need?

Cynthia Murrell, October 4, 2021

Social Media Engagement, Manipulation, and Bad Information

October 1, 2021

Researchers at Harvard’s Nieman Lab have investigated the interactions between users and social media platforms. Writer Filippo Menczer shares some of the results in, “How ‘Engagement’ Makes You Vulnerable to Manipulation and Misinformation on Social Media.” Social media algorithms rely on the “wisdom of the crowds” to determine what users see. That concept helped our ancestors avoid danger—when faced with a fleeing throng, they ran first and asked questions later. However, there are several reasons this approach breaks down online. Menczer writes:

“The wisdom of the crowds fails because it is built on the false assumption that the crowd is made up of diverse, independent sources. There may be several reasons this is not the case. First, because of people’s tendency to associate with similar people, their online neighborhoods are not very diverse. The ease with which a social media user can unfriend those with whom they disagree pushes people into homogeneous communities, often referred to as echo chambers. Second, because many people’s friends are friends of each other, they influence each other. A famous experiment demonstrated that knowing what music your friends like affects your own stated preferences. Your social desire to conform distorts your independent judgment. Third, popularity signals can be gamed. Over the years, search engines have developed sophisticated techniques to counter so-called “link farms” and other schemes to manipulate search algorithms. Social media platforms, on the other hand, are just beginning to learn about their own vulnerabilities. People aiming to manipulate the information market have created fake accounts, like trolls and social bots, and organized fake networks. They have flooded the network to create the appearance that a conspiracy theory or a political candidate is popular, tricking both platform algorithms and people’s cognitive biases at once. They have even altered the structure of social networks to create illusions about majority opinions.”

See the link-packed article for more findings and details on the researchers’ approach, including their news literacy game called Fakey (click the link to play for yourself). The write-up concludes with a recommendation. Tech companies are currently playing a game of whack-a-mole against bad information, but they might make better progress by instead slowing down the spread of information on their platforms. As for users, we recommend vigilance—do not be taken in by the fake wisdom of the crowds.

Cynthia Murrell, October 1, 2021

Microsoft and Its Post Security Posture

October 1, 2021

Windows 11 seems like a half-baked pineapple upside down cake. My mother produced some spectacular versions of baking missteps. There was the SolarWinds version, which had gaps everywhere, just hot air and holes. Then there was the Exchange Server variant. It exploded, and only the hardiest ants would chow down on that disaster.

I thought about her baking adventures when I read “Microsoft Says Azure Users Will Have to Patch these Worrying Security Flaws Themselves.” Betty Crocker took the same approach when my beloved mother nuked a dessert.

Here’s the passage that evoked a Proustian memory:

instead of patching all affected Azure services, Microsoft has put an advisory stating that while it’ll update six of them, seven others must be updated by users themselves.

Let’s hope there’s a Sara Lee cake around to save the day for those who botch the remediation or just skip doing the baking thing.

Half baked? Yeah, and terrible.

Stephen E Arnold, October 1, 2021

Facebook Doing Its Thing with Weaponized Content?

October 1, 2021

I read “Facebook Forced Troll Farm Content on Over 40% of All Americans Each Month.” Straight away, I have problems with “all.” The reality is that “all” Americans includes those who don’t use Facebook, Instagram, or WhatsApp. Hence, I am not sure how accurate the story itself is.

Let’s take a look at a couple of snippets, shall we?

Here’s one that caught my attention:

When the report was published in 2019, troll farms were reaching 100 million Americans and 360 million people worldwide every week. In any given month, Facebook was showing troll farm posts to 140 million Americans. Most of the users never followed any of the pages. Rather, Facebook’s content-recommendation algorithms had forced the content on over 100 million Americans weekly. “A big majority of their ability to reach our users comes from the structure of our platform and our ranking algorithms rather than user choice,” the report said. The troll farms appeared to single out users in the US. While globally more people saw the content by raw numbers—360 million every week by Facebook’s own accounting—troll farms were reaching over 40 percent of all Americans.

Yeah, lots of numbers, not much context, and the source of the data appears to be Facebook. Maybe on the money, maybe a bent penny? If we assume that the passage is “sort of correct”, Facebook has added to its track record for content moderation.

Here’s another snippet I circled in red:

Allen believed the problem could be fixed relatively easily by incorporating “Graph Authority,” a way to rank users and pages similar to Google’s PageRank, into the News Feed algorithm. “Adding even just some easy features like Graph Authority and pulling the dial back from pure engagement-based features would likely pay off a ton in both the integrity space and… likely in engagement as well,” he wrote. Allen [a former data scientist at Facebook,] left Facebook shortly after writing the document, MIT Technology Review reports, in part because the company “effectively ignored” his research, a source said.
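To make the “Graph Authority” idea concrete: a PageRank-style score rewards pages endorsed by accounts that are themselves well endorsed, rather than pages that merely rack up raw engagement. The sketch below is our own toy illustration of that ranking principle, not Facebook’s actual feature; the graph, names, and parameters are made up for the example.

    # Toy PageRank-style "authority" scoring over a follow graph.
    # Illustration only; not Facebook's Graph Authority implementation.
    def authority_scores(follows, damping=0.85, iterations=50):
        """follows maps each account to the list of accounts or pages it follows."""
        nodes = set(follows) | {t for targets in follows.values() for t in targets}
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iterations):
            new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
            for source, targets in follows.items():
                if not targets:
                    continue
                share = damping * rank[source] / len(targets)
                for target in targets:
                    new_rank[target] += share
            rank = new_rank
        return rank

    # A page followed only by isolated throwaway accounts ends up with less
    # authority than a page followed by accounts that are themselves followed.
    graph = {
        "throwaway1": ["troll_page"],
        "throwaway2": ["troll_page"],
        "carol": ["alice", "bob"],
        "dave": ["alice", "bob"],
        "alice": ["news_page"],
        "bob": ["news_page"],
    }
    print(authority_scores(graph))

The contrast with pure engagement counting is the point Allen seems to be making: in this toy graph the troll page has just as many followers as the news page, but its endorsements come from accounts nothing else points to, so its authority stays low.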

Disgruntled employee? Fancy dancing with confidential information? A couple of verification items?

Net net: On the surface, Facebook continues to do what its senior management prioritizes. Without informed oversight, what’s the downside for Facebook? Answer: At this time, none.

Stephen E Arnold, October 1, 2021
