DarkCyber for December 28, 2021, Now Available

December 28, 2021

This is the 26th program in the third series of DarkCyber video news programs produced by Stephen E Arnold and Beyond Search. You can view the ad-free show at this url. This program includes news of changes to the DarkCyber video series. Starting in January 2022, DarkCyber will focus on smart software and its impact on intelware and policeware. In addition, DarkCyber will appear once each month and expand to a 15- to 20-minute format.

What will we do with the production time? We begin a new video series called “OSINT Radar.” OSINT is an acronym for open source intelligence. In a December 2021 presentation to cyber investigators, the idea surfaced of a 60-second profile of a high-value OSINT site. We have developed this idea and will publish what we hope will be a weekly “infodeck” in video form profiling an OSINT resource currently in use by law enforcement and intelligence professionals. Watch Beyond Search for the details of how to view these short, made-for-mobile video infodecks. When you swipe left, you will learn how to perform free reverse phone number look-ups, obtain a list of a social media user’s friends, and carry out other helpful data collection actions using completely open source data pools.
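
As a taste of what one of these infodecks might walk through, here is a minimal sketch of a reverse phone number look-up in Python against a hypothetical open data endpoint. The URL, parameters, and response fields are invented for illustration; each real OSINT resource has its own interface:

```python
import requests

def reverse_phone_lookup(number: str) -> dict:
    """Query a hypothetical open directory for data tied to a phone number."""
    # Endpoint and field names are placeholders, not a real service.
    response = requests.get(
        "https://example-osint-directory.org/api/v1/phone",
        params={"number": number},
        timeout=10,
    )
    response.raise_for_status()
    record = response.json()
    # A typical profile might expose a name, carrier, region, and linked accounts.
    return {
        "name": record.get("name"),
        "carrier": record.get("carrier"),
        "region": record.get("region"),
        "linked_accounts": record.get("linked_accounts", []),
    }

if __name__ == "__main__":
    print(reverse_phone_lookup("+1-502-555-0100"))
```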

Also in this DarkCyber program are: [a] who is to blame when government agencies and specialized software vendors use Facebook to crank out false identities. Hint: It’s not the vendors’ fault. [b] why 2022 will be a banner year for bad actors. No, it’s not just passwords, insiders, and corner-cutting software developers. There is a bigger problem. [c] Microsoft has its very own Death Star. Does Microsoft know that the original Death Star was fictional and did not survive an attack by the rebels? And [d] a smart drone with kinetic weapons causes the UN to have a meeting and decide to have another meeting.

Kenny Toth, December 28, 2021

Clicks Rule: The Foundation for Unchecked Expansion

December 22, 2021

TikTok foods. You laugh. Maybe rethink that? What about TikTok financial services? Sounds crazy, right? A China-centric company financing users around the world? What about TikTok shopping? That makes sense, doesn’t it? How can short videos provide a platform for expansion, more specifically, unchecked expansion?

Clicks.

I read “In 2021, the Internet Went for TikTok, Space and Beyond.” I have been around online services since Ellen Shedlarz, the information specialist at Booz, Allen & Hamilton, sat me down and walked me through commercial databases. I think that was in 1973, maybe 1974. I learned that traffic was a big deal. No clicks, no nothing.

What’s the king of clicks now? The data compiled by a trade association like the old American Petroleum Institute, Chem Abstracts, or Medline? Nope. What about Facebook, Google, or Yahoo? Nope.

The big dog is TikTok, the outfit that, for a short time, Microsoft or Oracle was going to buy. How did that work out? Right, it didn’t.

The write up explains:

Google [was] dethroned by the young ‘padawan’ TikTok. Let’s start with our Top Domains Ranking and 2021 brought us a very interesting duel for the Number 1 spot in our global ranking. Google.com (which includes Maps, Translate, Photos, Flights, Books, and News, among others) ended 2020 as the undefeated leader in our ranking — from September to December of last year it was always on top. Back then TikTok.com was only ranked #7 or #8.

What outfit dominates the majority of counted Internet clicks, whether generated by humans, happy racks of mobile phones, or tireless bots? The answer is TikTok, the service mostly beyond the comprehension of people over the age of 15. (I am just joking, you art history majors who are now TikTok consultants. Humor. Chill.)

I have created a table based on the Cloudflare data, which, like all click stream numbers, requires handfuls of salt and a liter of soy sauce:

Rank  Top Domains 2021   Top Domains 2020
1     TikTok             The Google
2     The Google         The Zuck
3     The Zuck           Microsoft
4     Microsoft          Apple
5     Apple              Netflix
6     Amazon             Amazon
7     Netflix            TikTok
8     YouTube            YouTube
9     The Tweeter        Instagram
10    WhatsApp           The Tweeter

Three observations as you ponder the alleged loss of position by Googzilla and the worst company in America, according to a Yahoo poll:

  1. TikTok is number one. Where does that user data go? What can one do with such data? What has TikTok learned from its monitoring of the ageing and increasingly clumsy creature known to my research team as Googzilla? (Did you know Googzilla wants to date Snow White?)
  2. The data from clicks makes it possible for those with access to the data to build out their services; for example, Apple into financial services with the really classy metal credit card and App Store.
  3. Clicks translate into monopoly jets. Let me explain. The more clicks, the more data, and the more data, the clearer the signals about moving into a new stream of revenue.

Will the TikTok service remain at the top? Nah, nothing is forever in the one click world. But for now, the message sent to the Google and Zuck is one that is best expressed in a holiday card like this one from Etsy:

[Image: a holiday card from Etsy]

These products are explained in detail on TikTok. Eating disorders, kitchen envy, and gastrointestinal distress will make it easy to cook, eat, buy, and suffer in 30-second increments. Yep, it’s a TikTok “buy now” innovation. Neither the Google nor the Zuck has a “buy now” answer to the China-backed service. Where do those data go?

Stephen E Arnold, December 22, 2021

Red Kangaroos? Maybe a Nuisance. Online Trolls? Very Similar

December 16, 2021

It is arguable that trolls are the worst bullies in history, because online anonymity means they do not face repercussions. Trolls’ behavior has caused incalculable harm, including suicides, psychological problems, and real-life bullying. Local and international governments have taken measures to prevent cyber bullying, and now, ABC Australia reports, the country-continent is taking a stand: “Social Media Companies Could Be Forced To Give Out Names And Contact Details, Under New Anti-Troll Laws.”

Australia’s federal government is drafting laws that could force social media companies to reveal trolls’ identities. The new legislation aims to hold trolls accountable for their poor behavior by having social media companies collect user information and share it with courts in defamation cases. The new laws would also hold social media companies, rather than users and page managers, liable for hosted content. The article reports:

“Prime Minister Scott Morrison said he wanted to close the gap between real life and discourse online. ‘The rules that exist in the real world must exist in the digital and online world,’ he said. ‘The online world shouldn’t be a wild west, where bots and bigots and trolls and others can anonymously go around and harm people and hurt people.’”

The new law would require social media companies to have a complaints process for people who feel they have been defamed. The process would first ask the offending user to delete the defamatory material. If the user does not, the complaint could be escalated, with the user’s details shared so that court orders can be issued and the complainant can pursue defamation action.

One of the biggest issues facing the legislation is who is responsible for trolls’ content. The new law wants social media companies to be held culpable; however, operating the complaints system would give those companies a defense in defamation cases.

The article does not discuss what is deemed “defamatory” content. Anything and everything is offensive to someone, so the complaints system will be abused. What rules will be instituted to prevent abuse of the complaints system? Who will monitor it, and who will pay for it? An analogous example is YouTube’s system for deciding what constitutes “appropriate” children’s videos and for flagging videos for intellectual property theft as well as inappropriate content. In short, YouTube’s system is not doing well.

The social media companies should be culpable in some way, such as sharing user information when there is dangerous behavior, e.g., suicide, any kind of abuse, child pornography, planned shooting attacks, and other crimes. Sexist and abusive comments that are not an opinion, e.g., saying someone should die or is stupid for being a woman, should be monitored and users held accountable. It is a fine line, though, determining the dangers in many cases.

Whitney Grace, December 16, 2021

Monopolies Know Best: The Amazon Method Involves a Better Status Page

December 13, 2021

Here’s the fix for the Amazon AWS outage: An updated status page. “Amazon Web Services Explains Outage and Will Make It Easier to Track Future Ones” reports:

A major Amazon Web Services outage on Tuesday started after network devices got overloaded, the company said on Friday [December 10, 2021]. Amazon ran into issues updating the public and taking support inquiries, and now will revamp those systems.

Several questions arise:

  1. How are those two-pizza team technical methods working out?
  2. What about automatic regional load balancing and redundancy? (A sketch of the idea appears after this list.)
  3. What is up with replicating the mainframe single point of failure in a cloudy world?
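
On the second question, the idea is simple enough to sketch. Below is a minimal, hypothetical example of client-side regional failover in Python; the endpoints are invented, and AWS’s actual internal architecture is of course not public:

```python
import requests

# Hypothetical per-region endpoints for the same service.
REGION_ENDPOINTS = [
    "https://us-east-1.example-service.com/health",
    "https://us-west-2.example-service.com/health",
    "https://eu-west-1.example-service.com/health",
]

def call_with_failover(endpoints, timeout=2.0):
    """Return the first healthy region's response; raise if all regions fail."""
    errors = []
    for url in endpoints:
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            return resp  # First healthy region wins.
        except requests.RequestException as exc:
            errors.append((url, exc))  # Remember the failure, try the next region.
    raise RuntimeError(f"All regions failed: {errors}")
```

The point of the sketch: when no single region is a hard dependency, an overloaded device in one region degrades capacity rather than taking down the status page and the support queue with it.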

Neither the write up nor Amazon has answers. I have a thought, however. Monopolies see efficiency arising from:

  1. Streamlining by shifting human-intermediated work to smart software, which sort of works until it does not.
  2. Talking about technical prowess via marketing-centric content and letting the engineering sort of muddle along until it eventually, if ever, catches up to the Mad Ave prose, PowerPoints, and rah-rah speeches at bespoke conferences.
  3. Cutting costs where one can; for example, on robust network devices and infrastructure.

The AT&T approach was supposed to be a goner, but it seems to be back, just in the form of Baby Bell thinking applied to an online bookstore which dabbles in national security systems and methods, sells third-party products with mysterious origins, and promotes audio books to those who have cancelled the service due to endless email promotions.

Yep, outstanding, just from Wall Street’s point of view. From my vantage point, this is another sign of deep-seated issues. What outfit is up next? Google, Microsoft, or some back office provider of which most humans have never heard?

The new and improved approach to an AT&T type business is just juicy with wonderfulness. Two pizzas. Yummy.

Stephen E Arnold, December 13, 2021

A Xoogler May Question the Google about Responsible and Ethical Smart Software

December 2, 2021

Write a research paper. Get colleagues to provide input. Well, ask colleagues to do that work and what do you get? How about “Looks good.” Or “Add more zing to that chart.” Or “I’m snowed under, so it will be a while, but I will review it…” Then the paper wends its way to publication, and a senior manager type reads the paper on a flight from one whiz kid town to another whiz kid town and says, “This is bad. Really bad, because the paper points out that we fiddle with the outputs. And what we set up is biased to generate the most money possible from clueless humans under our span of control.” Finally, the paper is blocked from publication, and the offending PhD is fired or sent signals that her future lies elsewhere.

[Image: an arm wrestling match]

Will this be a classic arm wrestling match? The winner may control quite a bit of conceptual territory along with knobs and dials to shape information.

Could this happen? Oh, yeah.

“Ex Googler Timnit Gebru Starts Her Own AI Research Center” documents the next step, which may mean that some wizards’ undergarments will be sprayed with eau de poison oak for months, maybe years. Here’s one of the statements from the Wired article:

“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” says Gebru, who is founder and executive director of Distributed Artificial Intelligence Research (DAIR). The first part of the name is a reference to her aim to be more inclusive than most AI labs—which skew white, Western, and male—and to recruit people from parts of the world rarely represented in the tech industry. Gebru was ejected from Google after clashing with bosses over a research paper urging caution with new text-processing technology enthusiastically adopted by Google and other tech companies.

The main idea, which Wired and Dr. Gebru delicately sidestep, is that allegations of an artificial intelligence or machine learning cabal are drifting around some conference hall chatter. On one side is the push for what I call the SAIL approach. How this cost-effective, speedy, and clever shortcut approach works is illustrated in some of the work of Dr. Christopher Ré, the captain of the objective craft SAIL. Oh, is the acronym unfamiliar to you? SAIL is short for Stanford Artificial Intelligence Laboratory. SAIL fits on the Snorkel content diving gear, I think.

On the other side of the ocean are Dr. Timnit Gebru’s fellow travelers. The difference is that Dr. Gebru believes that smart software should not reflect the wit, wisdom, biases, and general bro-ness of the high school science club culture. This culture, in my opinion, has contributed to the fraying of the social fabric in the US, caused harm, and eroded behaviors that are supposed to be subordinated to “just what people do to make a social system function smoothly.”

Does the Wired write up identify the alleged cabal? Nope.

Does the write up explain that the Ré / Snorkel methods sacrifice some precision in the rush to generate good enough outputs? (Good enough can be framed in terms of ad revenue, reduced costs, and faster time to market testing in my opinion.) Nope.

Does Dr. Gebru explain how insidious the short cut training of models is and how it will create systems which actively harm those outside the 60 percent threshold of certain statistical yardsticks? Heck, no.
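
For readers who have not seen the technique, here is a minimal sketch of Snorkel-style weak supervision, assuming the open source library’s 0.9-era API. The labeling functions and toy data are invented for illustration; the point is that cheap, noisy heuristics replace hand labels, which is exactly where the “good enough” trade-off lives:

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, HAM, SPAM = -1, 0, 1

# Cheap heuristics stand in for hand-labeled training data.
@labeling_function()
def lf_promo_keyword(x):
    return SPAM if "check out my" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_comment(x):
    return HAM if len(x.text.split()) < 4 else ABSTAIN

df_train = pd.DataFrame({"text": [
    "check out my channel for free gift cards",
    "nice video",
    "great explanation, thanks",
    "check out my new crypto site",
]})

# Each labeling function votes (or abstains) on every example.
applier = PandasLFApplier(lfs=[lf_promo_keyword, lf_short_comment])
L_train = applier.apply(df=df_train)

# The label model weighs the noisy votes into probabilistic labels,
# which then train a downstream classifier. Fast and cheap, but only
# as precise as the heuristics themselves.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=100, seed=42)
print(label_model.predict_proba(L_train))
```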

Hopefully some bright researchers will explain what’s happening with a “deep dive.” Oh, right, Deep Dive is the name of a content access company which uses Dr. Ré’s methods. Ho, ho, ho. You didn’t know?

Beyond Search believes that Dr. Gebru has important contributions to make to applied smart software. Just hurry up already.

Stephen E Arnold, December 2, 2021

Apple Podcast Ratings: A Different Angle

November 24, 2021

I read “Apple Podcasts App Ratings Flip after the Company Starts Prompting Users.” The write up explains that Apple’s podcast application was receiving the rough equivalent of a D or D- from its users. How did Apple fix this? Some big monopolies would have just had an intern enter the desired number. This works with search results pages on some Web and enterprise search systems. Not Apple. The write up reports:

The iPhone maker told The Verge that iOS 15.1 started prompting users for ratings and reviews “just like most third-party apps.” However, many people thought they were rating the show they were listening to, not the app — and that led to a flood of scores and reviews for podcasts.

Two points:

  1. Users were confused.
  2. Prompts sparked ratings.

I interpreted this information to mean that users are not too swift, even though Apple’s high-priced products are supposed to appeal to the swift and sure. Second, the prompts caused an immediate user reaction, at least for some of the app’s users.

My takeaway: Online services can cause behaviors. Power in the hands of the just and true or evidence of the impact of digital nudges? Do higher ratings improve the app? Probably not.

Stephen E Arnold, November 24, 2021

OSINT: As Good as Government Intel

November 16, 2021

It is truly amazing how much information private citizens in the OSINT community can now glean from publicly available data. As The Economist puts it, “Open-Source Intelligence Challenges State Monopolies on Information.” Complete with intriguing examples, the extensive article details the growth of technologies and networks that have drastically changed the intelligence-gathering game over the last decade. We learn of Geo4Nonpro, a project of the James Martin Centre for Nonproliferation Studies (CNS) at the Middlebury Institute of International Studies at Monterey, California. The write-up reports:

“The CNS is a leader in gathering and analyzing open-source intelligence (OSINT). It has pulled off some dramatic coups with satellite pictures, including on one occasion actually catching the launch of a North Korean missile in an image provided by Planet, a company in San Francisco. Satellite data, though, is only one of the resources feeding a veritable boom in non-state OSINT. There are websites which track all sorts of useful goings-on, including the routes taken by aircraft and ships. There are vast searchable databases. Terabytes of footage from phones are uploaded to social-media sites every day, much of it handily tagged. … And it is not just the data. There are also tools and techniques for working with them—3D modeling packages, for example, which let you work out what sort of object might be throwing the shadow you see in a picture. And there are social media and institutional settings that let this be done collaboratively. Eclectic expertise and experience can easily be leveraged with less-well-versed enthusiasm and curiosity in the service of projects which link academics, activists, journalists and people who mix the attributes of all three groups.”

We recommend reading the whole article for more about those who make a hobby of painstakingly analyzing images and footage. Some of these projects have come to startling conclusions. Government intelligence agencies are understandably wary as capabilities that used to be their purview spread among private OSINT enthusiasts. Not so wary, though, that they will not utilize the results when they prove useful. In fact, the government is a big customer of companies that supply higher-resolution satellite images than one can pull from the Web for free—outfits like American satellite maker Maxar and European aerospace firm Airbus. The article is eye-opening, and we can only wonder what the long-term results of this phenomenon will be.
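
One of the techniques the article gestures at, working out what is casting a shadow, reduces to simple solar geometry once an image’s time and place are known. A minimal sketch follows; the sun’s elevation is supplied by hand here, whereas in practice an analyst would pull it from a solar ephemeris for the known time and location:

```python
import math

def height_from_shadow(shadow_length_m: float, sun_elevation_deg: float) -> float:
    """Estimate an object's height from its shadow and the sun's elevation angle.

    tan(elevation) = height / shadow_length, so height = shadow * tan(elevation).
    """
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 12 m shadow with the sun 35 degrees above the horizon implies roughly:
print(f"{height_from_shadow(12.0, 35.0):.1f} m")  # ~8.4 m
```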

Cynthia Murrell, November 16, 2021

Ampliganda: A Wonderful Word

October 13, 2021

Let’s try to create a meme. That sounds like fun. How about coining a word? The Atlantic has one to share. It’s ampliganda.

You can read about the word in “It’s Not Misinformation. It’s Amplified Propaganda.” The write up explains as only the Atlantic and the Stanford Internet Observatory can:

Perhaps the best word for this emergent bottom-up dynamic is one that doesn’t exist quite yet: ampliganda, the shaping of perception through amplification. It can originate from an online nobody or an onscreen celebrity. No single person or organization bears responsibility for its transmission. And it is having a profound effect on democracy and society.

Several observations:

  1. The Stanford Internet Observatory is definitely quick on the meme trigger. It has been a mere two decades since the search engine optimization crowd figured out how to erode relevance.
  2. A number of these ampliganda outfits have roots at Stanford. Isn’t that something?
  3. “Voting” for popularity is a thrilling concept. It works for middle school class officer elections. Algorithms can emulate popularity feedback mechanisms, as the sketch after this list suggests.
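
To see how algorithms can emulate popularity feedback, consider a toy rich-get-richer simulation (all numbers invented): each new vote goes to an item with probability proportional to the votes it already has, so small early leads compound:

```python
import random

random.seed(7)

# Ten items start nearly even; one has a tiny head start.
votes = [1] * 10
votes[0] = 2

# Each round, the system surfaces items in proportion to their current
# popularity, so popular items tend to collect still more votes.
for _ in range(10_000):
    winner = random.choices(range(len(votes)), weights=votes, k=1)[0]
    votes[winner] += 1

# The final distribution is skewed: early advantages persist rather
# than wash out, which is the feedback loop ampliganda rides.
print(sorted(votes, reverse=True))
```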

Who would have known unless Stanford was on the job? Yep, ampliganda. A word for the ages. Like Google maybe?

Stephen E Arnold, October 13, 2021

Stanford Google AI Bond?

October 12, 2021

I read “Peter Norvig: Today’s Most Pressing Questions in AI Are Human-Centered.” It appears, based on the interview, that Mr. Norvig will work at Stanford’s Institute for Human-Centered AI.

Here’s the quote I found interesting:

Now that we have a great set of algorithms and tools, the more pressing questions are human-centered: Exactly what do you want to optimize? Whose interests are you serving? Are you being fair to everyone? Is anyone being left out? Is the data you collected inclusive, or is it biased?

These are interesting questions, and ones to which I assume Dr. Timnit Gebru will offer answers.
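
Norvig’s questions have concrete, checkable forms. “Are you being fair to everyone?” often starts with something as simple as comparing positive-outcome rates across groups, a demographic parity check. A minimal sketch with invented decisions:

```python
from collections import defaultdict

# Hypothetical (group, model_decision) pairs: 1 = approved, 0 = denied.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap in approval rates is a prompt to ask Norvig's questions:
# whose interests does the objective serve, and is the data biased?
print("parity gap:", max(rates.values()) - min(rates.values()))
```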

Will Stanford’s approach to artificial intelligence advance its agenda and address such issues as bias in the Snorkel-type of approach to machine learning? Will Stanford and Google expand their efforts to provide the solutions which Mr. Norvig describes in this way?

You don’t get credit for choosing an especially clever or mathematically sophisticated model, you get credit for solving problems for your users.

Like ads, maybe? Like personnel problems? Like augmenting certain topics for teens? Maybe?

Stephen E Arnold, October 12, 2021

Mistaken Fools Versus Lying Schemers

October 4, 2021

We must distinguish between misinformation born of honest, if foolish, mistakes and deliberate disinformation. Writer Mike Masnick makes that point in, “The Role of Confirmation Bias In Spreading Misinformation” at TechDirt.

If a story supports our existing beliefs, we are more likely to believe it without checking the facts. This can be true even for professional journalists, as a recent Rolling Stone article illustrates. That venerable publication relied on a local TV report that made what turned out to be unverifiable claims. Both reported that gunshot victims were turned away from a certain emergency room because ivermectin overdose patients had taken all the beds. The story quickly spread, covered by The Guardian, the BBC, the Hill, and a wealth of foreign papers eager to scoff at the US. Ouch. According to the healthcare system overseeing that hospital, however, it had not treated a single case of ivermectin overdose and had not turned away any emergency-care patients. The original article was based on the word of a doctor who, they say, had not worked at that hospital in over two months. (And, we suspect, never again after all this.) This debacle should serve as a warning to all journalists to do their own fact-checking, no matter how plausible a story sounds to them.

Though such misinformation is a serious issue, Masnick writes, it is a different problem from that of deliberate disinformation. Conflating the two leads to even more problems. He observes:

“However, as we’ve discussed before, when you conflate a mistake with the deliberate bad faith pushing of false information, then that only serves to give more ammunition to those who wish to not just discredit all content from certain publications, but to then look to minimize complaints against ‘news’ organizations that specialize and focus on bad faith propaganda, by simply claiming it’s no different than what the mainstream media does in presenting ‘disinformation.’ But there is a major difference. A mistake is bad, and everyone who fell for this story looks silly for doing so. But without a clear pattern of deliberately pushing misleading or out of context information, it suggests a mere error, as opposed to deliberate bad faith activity. The same cannot be said for all ‘news’ organizations.”

An important distinction indeed.

Cynthia Murrell, October 4, 2021
