DarkCyber for October 5, 2021, Now Available

October 5, 2021

DarkCyber Number 20 for October 5, 2021 is available at this link. The program focuses on artificial intelligence operations, or AIOps. The 11-minute program reviews how AIOps works, applications for law enforcement and intelligence activities, upsides, and downsides. The methods discussed include those of a late-1990s innovator whose approach has rippled over a 20-year period into work at the Stanford University Artificial Intelligence Lab. Snorkel.ai — a start-up with more than $132 million in funding — is an influential AIOps system used by a number of high-profile companies. DarkCyber is produced by Stephen E Arnold, publisher of Beyond Search. The video is available on YouTube and via the splash page of Mr. Arnold’s blog, Beyond Search. The videos are not sponsored and contain no advertising.
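For readers who want a concrete feel for the approach discussed in the program: the core of the Snorkel-style method is to replace hand labeling with many noisy, programmatic “labeling functions” whose votes a statistical model then reconciles. Below is a minimal sketch using the open-source snorkel package with invented example data; it illustrates the general technique, not anything specific to the DarkCyber program.

import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, SPAM, HAM = -1, 1, 0

# Invented example data for illustration only.
df_train = pd.DataFrame({"text": [
    "Win money now!!!", "Meeting moved to 3pm", "Claim your free prize",
    "Quarterly report attached", "free money, click here",
]})

@labeling_function()
def lf_contains_free(x):
    return SPAM if "free" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_contains_money(x):
    return SPAM if "money" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_looks_like_work(x):
    return HAM if any(w in x.text.lower() for w in ("meeting", "report")) else ABSTAIN

# Apply the noisy labeling functions, then let the label model reconcile their votes.
applier = PandasLFApplier([lf_contains_free, lf_contains_money, lf_looks_like_work])
L_train = applier.apply(df_train)

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=100, seed=42)
print(label_model.predict(L_train))  # weak labels for training a downstream model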

Kenny Toth, October 5, 2021

AWS and CloudFlare: Doing Some Math, Eating Some Pizza

October 4, 2021

I read “Unroll Thread” and noted an interesting point of information; to wit:

If 1 million people download that 1GB this month, my cost with @cloudflare R2 this way rounds up to 13¢. With @awscloud S3 it’s $59,247.52. THAT is why people are losing their minds over this. Slight correction: $53,891.16. Apologies, the @awscloud pricing calculator LOVES to slip “developer support” onto the tab. 

I am not too sharp at the math thing, but at first glance it sure looks to me that Cloudflare is less costly for this type of data transfer. What’s the multiplier? Sure looks to be more than twice. On second glance, that difference is a tiny bit closer to a lot more: roughly a factor of 400,000.
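To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The S3 tier boundaries and per-GB rates are our assumptions based on AWS’s published data-transfer-out pricing at the time, and the R2 figure simply reflects Cloudflare’s zero-egress-fee model, so treat the output as an estimate rather than an invoice.

# Rough comparison of egress cost for 1,000,000 downloads of a 1 GB object.
GB_PER_DOWNLOAD = 1
DOWNLOADS = 1_000_000
total_gb = GB_PER_DOWNLOAD * DOWNLOADS  # 1,000,000 GB, roughly 1 PB

# (tier size in GB, price per GB) -- assumed S3 data-transfer-out tiers circa 2021
s3_tiers = [
    (10 * 1024, 0.09),     # first 10 TB
    (40 * 1024, 0.085),    # next 40 TB
    (100 * 1024, 0.07),    # next 100 TB
    (float("inf"), 0.05),  # everything beyond 150 TB
]

remaining = total_gb
s3_cost = 0.0
for tier_gb, rate in s3_tiers:
    chunk = min(remaining, tier_gb)
    s3_cost += chunk * rate
    remaining -= chunk
    if remaining <= 0:
        break

r2_cost = 0.0  # R2 charges no egress fees; per-request fees round to pennies

print(f"S3 egress estimate: ${s3_cost:,.2f}")
print(f"R2 egress estimate: ${r2_cost:,.2f}")
print(f"Multiplier vs. the tweet's 13 cents: {s3_cost / 0.13:,.0f}x")

Run as written, the sketch lands within pennies of the $53,891 figure in the thread, and the multiplier against Cloudflare’s 13 cents comes out around 400,000, not two.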

Several questions:

  • Will the author seek a business analysis role at AWS?
  • Will either AWS or Cloudflare clarify the analysis?
  • Has no other person or certified cloud professional noticed the minor discrepancy?

Interesting indeed.

Stephen E Arnold, October 4, 2021

Who Is Ready to Get Back to the Office?

October 4, 2021

The pandemic has had many workers asking “hey, who needs an office?” Maybe most of us, according to the write-up, “How Work from Home Has Changed and Became Less Desirable in Last 18 Months” posted at Illumination. Cloud software engineer Amrit Pal Singh writes:

“Work from home was something we all wanted before the pandemic changed everything. It saved us time, no need to commute to work, and more time with the family. Or at least we used to think that way. I must say, I used to love working from home occasionally in the pre-pandemic era. Traveling to work was a pain and I used to spend a lot of time on the road. Not to forget the interrupts, tea breaks, and meetings you need to attend at work. I used to feel these activities take up a lot of time. The pandemic changed it all. In the beginning, it felt like I could work from home all my life. But a few months later I want to go to work at least 2–3 times a week.”

What changed Singh’s mind? Being stuck at home, mainly. There is the expectation that since he is there he can both work and perform household chores each day. He also shares space with a child attending school virtually—as many remote workers know, this makes for a distracting environment. Then there is the loss of work-life balance; when both work and personal time occur in the same space, they tend to blend together and spur monotony. An increase in unnecessary meetings takes away from actually getting work done, but at the same time Singh misses speaking with his coworkers face-to-face. He concludes:

“I am not saying WFH is bad. In my opinion, a hybrid approach is the best where you go to work 2–3 days a week and do WFH the rest of the week. I started going to a nearby cafe to get some time alone. I have written this article in a cafe :)”

Is such a hybrid approach the balance we need?

Cynthia Murrell, October 4, 2021

Big Tech Responds to AI Concerns

October 4, 2021

We cannot decide whether this news represents a PR move or simply three red herrings. Reuters declares, “Money, Mimicry and Mind Control: Big Tech Slams Ethics Brakes on AI.” The article gives examples of Google, Microsoft, and IBM hitting pause on certain AI projects over ethical concerns. Reporters Paresh Dave and Jeffrey Dastin write:

“In September last year, Google’s (GOOGL.O) cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to. It turned down the client’s idea after weeks of internal discussions, deeming the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender. Since early last year, Google has also blocked new AI features analyzing emotions, fearing cultural insensitivity, while Microsoft (MSFT.O) restricted software mimicking voices and IBM (IBM.N) rejected a client request for an advanced facial-recognition system. All these technologies were curbed by panels of executives or other leaders, according to interviews with AI ethics chiefs at the three U.S. technology giants.”

See the write-up for more details on each of these projects and the concerns around how they might be biased or misused. These suspensions sound very responsible of the companies, but they may be more strategic than conscientious. Is big tech really ready to put integrity over profits? Some legislators believe regulations are the only way to ensure ethical AI. The article tells us:

“The EU’s Artificial Intelligence Act, on track to be passed next year, would bar real-time face recognition in public spaces and require tech companies to vet high-risk applications, such as those used in hiring, credit scoring and law enforcement. U.S. Congressman Bill Foster, who has held hearings on how algorithms carry forward discrimination in financial services and housing, said new laws to govern AI would ensure an even field for vendors.”

Perhaps, though lawmakers in general are famously far from tech-savvy. Will they find advisors to help them craft truly helpful legislation, or will the industry dupe them into being its pawns? Perhaps Watson could tell us.

Cynthia Murrell, October 4, 2021

Mistaken Fools Versus Lying Schemers

October 4, 2021

We must distinguish between misinformation born of honest, if foolish, mistakes and deliberate disinformation. Writer Mike Masnick makes that point in “The Role of Confirmation Bias In Spreading Misinformation” at Techdirt.

If a story supports our existing beliefs we are more likely to believe it without checking the facts. This can be true even for professional journalists, as a recent Rolling Stone article illustrates. That venerable publication relied on a local TV report that made what turned out to be unverifiable claims. Both reported that gunshot victims were turned away from a certain emergency room because ivermectin overdose patients had taken all the beds. The story quickly spread, covered by The Guardian, the BBC, The Hill, and a wealth of foreign papers eager to scoff at the US. Ouch. According to the healthcare system overseeing that hospital, however, they had not treated a single case of ivermectin overdose and had not turned away any emergency-care patients. The original article was based on the word of a doctor who, they say, had not worked at that hospital in over two months. (And, we suspect, never again after all this.) This debacle should serve as a warning to all journalists to do their own fact-checking, no matter how plausible a story sounds to them.

Though such misinformation is a serious issue, Masnick writes, it is a different problem from that of deliberate disinformation. Conflating the two leads to even more problems. He observes:

“However, as we’ve discussed before, when you conflate a mistake with the deliberate bad faith pushing of false information, then that only serves to give more ammunition to those who wish to not just discredit all content from certain publications, but to then look to minimize complaints against ‘news’ organizations that specialize and focus on bad faith propaganda, by simply claiming it’s no different than what the mainstream media does in presenting ‘disinformation.’ But there is a major difference. A mistake is bad, and everyone who fell for this story looks silly for doing so. But without a clear pattern of deliberately pushing misleading or out of context information, it suggests a mere error, as opposed to deliberate bad faith activity. The same cannot be said for all ‘news’ organizations.”

An important distinction indeed.

Cynthia Murrell, October 4, 2021

Social Media Engagement, Manipulation, and Bad Information

October 1, 2021

Researchers have been investigating the interactions between users and social media platforms. Writer Filippo Menczer shares some of the results in “How ‘Engagement’ Makes You Vulnerable to Manipulation and Misinformation on Social Media,” published at Harvard’s Nieman Lab. Social media algorithms rely on the “wisdom of the crowds” to determine what users see. That concept helped our ancestors avoid danger—when faced with a fleeing throng, they ran first and asked questions later. However, there are several reasons this approach breaks down online. Menczer writes:

“The wisdom of the crowds fails because it is built on the false assumption that the crowd is made up of diverse, independent sources. There may be several reasons this is not the case. First, because of people’s tendency to associate with similar people, their online neighborhoods are not very diverse. The ease with which a social media user can unfriend those with whom they disagree pushes people into homogeneous communities, often referred to as echo chambers. Second, because many people’s friends are friends of each other, they influence each other. A famous experiment demonstrated that knowing what music your friends like affects your own stated preferences. Your social desire to conform distorts your independent judgment. Third, popularity signals can be gamed. Over the years, search engines have developed sophisticated techniques to counter so-called “link farms” and other schemes to manipulate search algorithms. Social media platforms, on the other hand, are just beginning to learn about their own vulnerabilities. People aiming to manipulate the information market have created fake accounts, like trolls and social bots, and organized fake networks. They have flooded the network to create the appearance that a conspiracy theory or a political candidate is popular, tricking both platform algorithms and people’s cognitive biases at once. They have even altered the structure of social networks to create illusions about majority opinions.”

See the link-packed article for more findings and details on the researchers’ approach, including their news literacy game called Fakey (click the link to play for yourself). The write-up concludes with a recommendation. Tech companies are currently playing a game of whack-a-mole against bad information, but they might make better progress by instead slowing down the spread of information on their platforms. As for users, we recommend vigilance—do not be taken in by the fake wisdom of the crowds.
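To see how little it takes to game a purely engagement-ranked feed, here is a toy simulation. The numbers, the posts, and the ranking rule are ours and purely illustrative: a handful of coordinated accounts boosting one item early is enough to push it to the top.

import random

random.seed(1)

# Ten posts, one of which ("conspiracy") is boosted by 20 coordinated fake accounts.
posts = {f"post_{i}": 0 for i in range(9)}
posts["conspiracy"] = 0

organic_users, fake_accounts = 200, 20

# Fake accounts all hit the same post; organic users engage with random posts.
for _ in range(fake_accounts):
    posts["conspiracy"] += 1
for _ in range(organic_users):
    posts[random.choice(list(posts))] += 1

# A purely engagement-based feed ranks by raw interaction count.
feed = sorted(posts, key=posts.get, reverse=True)
print(feed[:3])  # the boosted post floats to the top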

Cynthia Murrell, October 1, 2021

Microsoft and Its Post Security Posture

October 1, 2021

Windows 11 seems like a half-baked pineapple upside-down cake. My mother produced some spectacular versions of baking missteps. There was the SolarWinds version, which had gaps everywhere, just hot air and holes. Then there was the Exchange Server variant. It exploded, and only the hardiest ants would chow down on that disaster.

I thought about her baking adventures when I read “Microsoft Says Azure Users Will Have to Patch these Worrying Security Flaws Themselves.” Betty Crocker took the same approach when my beloved mother nuked a dessert.

Here’s the passage that evoked a Proustian memory:

instead of patching all affected Azure services, Microsoft has put an advisory stating that while it’ll update six of them, seven others must be updated by users themselves.

Let’s hope there’s a Sara Lee cake around to save the day for those who botch the remediation or just skip doing the baking thing.

Half baked? Yeah, and terrible.

Stephen E Arnold, October 1, 2021

Facebook Doing Its Thing with Weaponized Content?

October 1, 2021

I read “Facebook Forced Troll Farm Content on Over 40% of All Americans Each Month.” Straight away, I have problems with “all.” The reality is that “all” Americans would have to include those who don’t use Facebook, Instagram, or WhatsApp. Hence, I am not sure how accurate the story itself is.

Let’s take a look at a couple of snippets, shall we?

Here’s one that caught my attention:

When the report was published in 2019, troll farms were reaching 100 million Americans and 360 million people worldwide every week. In any given month, Facebook was showing troll farm posts to 140 million Americans. Most of the users never followed any of the pages. Rather, Facebook’s content-recommendation algorithms had forced the content on over 100 million Americans weekly. “A big majority of their ability to reach our users comes from the structure of our platform and our ranking algorithms rather than user choice,” the report said. The troll farms appeared to single out users in the US. While globally more people saw the content by raw numbers—360 million every week by Facebook’s own accounting—troll farms were reaching over 40 percent of all Americans.

Yeah, lots of numbers, not much context, and the source of the data appears to be Facebook. Maybe on the money, maybe a bent penny? If we assume that the passage is “sort of correct”, Facebook has added to its track record for content moderation.

Here’s another snippet I circled in red:

Allen believed the problem could be fixed relatively easily by incorporating “Graph Authority,” a way to rank users and pages similar to Google’s PageRank, into the News Feed algorithm. “Adding even just some easy features like Graph Authority and pulling the dial back from pure engagement-based features would likely pay off a ton in both the integrity space and… likely in engagement as well,” he wrote. Allen [a former data scientist at Facebook] left Facebook shortly after writing the document, MIT Technology Review reports, in part because the company “effectively ignored” his research, a source said.
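“Graph Authority” is not documented publicly, but the comparison to PageRank suggests scoring accounts and pages by the standing of the accounts that follow them rather than by raw engagement. Here is a minimal PageRank-style sketch over a made-up follow graph, our illustration rather than Facebook’s actual method.

# PageRank-style "authority" over a tiny made-up follow graph.
# An edge A -> B means account A follows (and thus lends authority to) B.
follows = {
    "alice": ["newsroom"],
    "bob": ["newsroom", "troll_farm_page"],
    "carol": ["newsroom"],
    "troll_farm_page": [],
    "newsroom": ["carol"],
}

DAMPING, ITERATIONS = 0.85, 50
nodes = list(follows)
rank = {n: 1 / len(nodes) for n in nodes}

for _ in range(ITERATIONS):
    new_rank = {n: (1 - DAMPING) / len(nodes) for n in nodes}
    for src, targets in follows.items():
        if targets:
            share = DAMPING * rank[src] / len(targets)
            for dst in targets:
                new_rank[dst] += share
        else:  # dangling node: spread its rank evenly across all nodes
            for dst in nodes:
                new_rank[dst] += DAMPING * rank[src] / len(nodes)
    rank = new_rank

# Pages followed by well-connected accounts score high; a page with almost no
# followers scores low, regardless of how much engagement it might farm.
print(sorted(rank.items(), key=lambda kv: kv[1], reverse=True))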

Disgruntled employee? Fancy dancing with confidential information? A couple of verification items?

Net net: On the surface, Facebook continues to do what its senior management prioritizes. Without informed oversight, what’s the downside for Facebook? Answer: At this time, none.

Stephen E Arnold, October 1, 2021
