AI Algorithms: Dealing Straight?

May 4, 2021

Humans are easily influenced and it is not rocket science either. In order to influence humans, all it takes is a working understanding of psychology, human behavior, and appealing to their emotions. Con artists are master manipulators, but they are about to be one upped by AI algorithms. The Next Web shares how easy it is to influence human behavior in the article, “Study Shows How Dangerously Simple It Is To Manipulate Voters (And Daters) With AI.”

Researchers at the Spanish Universidad de Deusto published a study on how AI can easily influence humans:

“Up front: The basic takeaway from the work is that people tend to do what the algorithm says. Whether they’re being influenced to vote for a specific candidate based on an algorithmic recommendation or being funneled toward the perfect date on an app, we’re dangerously easy to influence with basic psychology and rudimentary AI.

The big deal: We like to think we’re agents of order making informed decisions in a somewhat chaotic universe. But, as Neural’s Thomas Maucalay recently pointed out…we’re unintentional cyborgs.”

Apparently humans are no longer homo sapiens—the literal translation of the human scientific name is “wise man.” Due to our larger brain capacity, humans reasoned their way through evolution to become the dominant species. The research argues that due to our dependence on computers to do our thinking, we have changed our evolutionary status.

The researchers used fake personality tests situated around political candidates and dating apps to determine how participants were influenced by algorithms. The tests showed that participants were easily manipulated by choices AI algorithms offered them. The concept is similar to how magicians lure their audiences to a specific outcome with a “magical force.”

The problem is that we are uneducated about AI algorithm’s power. Companies use them for advertising to earn bigger profits, politicians use them to manipulate votes, and bad actors can use them to take advantage of unsuspecting marks. Bad actors, companies, politicians (although they do not all fall into the same ethics category) work faster than academics can test new algorithmic science. While humans are smart enough to beat some AI algorithms, they need to be educated about them first and it will be a long time before AI manipulation tactics make their way into Google type public service announcements.

Whitney Grace, May 4, 2021

DarkCyber for May 4, 2021, Now Available

May 4, 2021

The 9th 2021 DarkCyber video is now available on the Beyond Search Web site. Will the link work? If it doesn’t, the Facebook link can assist you. The original version of this 9th program contained video content from an interesting Dark Web site selling malware and footage from the PR department of the university which developed the kid-friendly Snakebot. Got kids? You will definitely want a Snakebot, but the DarkCyber team thinks that US Navy Seals will be in line to get duffle of Snakebots too. These are good for surveillance and termination tasks.

Plus, this 9th program of 2021 addresses five other stories, not counting the Snakebot quick bite. These are: [1] Two notable take downs, [2] iPhone access via the Lightning Port, [3] Instant messaging apps may not be secure, [4] VPNs are now themselves targets of malware, and [5] Microsoft security with a gust of SolarWinds.

The complete program is available — believe it or not — on Tess Arnold’s Facebook page. You can view the video with video inserts of surfing a Dark Web site and the kindergarten swimmer friendly Snakebot at this link: https://bit.ly/2PLjOLz. If you want the YouTube approved version without the video inserts, navigate to this link.

DarkCyber is produced by Stephen E Arnold, publisher of Beyond Search. You can access the current video plus supplemental stories on the Beyond Search blog at www.arnoldit.com/wordpress.

We think smart filtering is the cat’s pajamas, particularly for videos intended for law enforcement, intelligence, and cyber security professionals. Smart software crafted in the Googleplex is on the job.

Kenny Toth, May 4, 2021

Making Processes Simple Is Tough Work: Just Add Features and Move On

May 3, 2021

I read “Science Shows Why Simplifying Is Hard and Complicating Is Easy.” I am generally suspicious of “science says” arguments. The reproducibility of the experiments, the statistical methods used to analyze data, and the integrity of those involved. (Remember MIT and the Jeffrey Epstein dalliance?) With these caveats in mind, let’s consider the information in the Japan Times’s article. (Note: You may have to pay to view the original article.)

The core of the write up is that making a procedure or explanation simple is not what humans do. The reasons are set forth in  a paper published in Nature by scientists from the University of Virginia. Yep, the honor system outfit. The write up states:

In eight observational studies and experiments, they found that people systematically overlook opportunities to improve things by subtracting
and default instead to adding.

One of the reported findings I noted was:

The more intriguing insight was that people became less likely to consider subtraction the more they felt “cognitive load.”

When I commuted on Highway 101 in the San Francisco area, I recall seeing wizards fiddling with computing devices whilst driving. Not a good idea, science says. Common sense? Not part of the science, gentle reader.

I noted this passage too: Then dare to dream what thoughtful subtraction could do for the real mother lodes of self-propagating complexity — the U.S. tax code springs to mind, or the European Union’s fiscal rules. We can simplify our lives, but we have to put in the work. That’s what the
philosopher Blaise Pascal captured when he apologized, “I would have written a shorter letter, but I did not have the time.”

I would have sworn that that snappy comment was the work of Mark Twain or a British Fancy Dan who allegedly said Common sense is the best sense.

Let’s add footnotes, a glossary, and marginalia. Keep stuff simple like the automatic record-the-meeting feature added to Microsoft Teams. I think this is called featuritis or what could go wrong?

Stephen E Arnold, May 3, 2021

Fixing Disinformation: Some Ideas

May 3, 2021

I read “2021 Rockman Award Winner Interview: Alison J. Head, Barbara Fister, and Margy MacMillan.” This was positioned as an interview, but it seems more like a report to a dean or provost to demonstrate “real” work in a critical academic niches. There were some interesting statements in the “interview.” For instance, here’s the passage I noted about helping students (people) think about the accuracy of information presented online:

Improve understanding of how a wider array of information, particularly the news, is produced and disseminated (beyond discussions of the peer review process), and (2) develop more reflective information habits that consider their roles as consumers, curators and creators of news, the relationships between news media and audiences, and the wider process through which media and society shape each other.

There were some statements in the article/interview which caught my attention. Here’s are a few examples:

  • They [students] didn’t believe their teachers had kept up with technological developments, including algorithms and tracking
  • … we [the researchers] were cheered by how deeply interested students were in learning more about what we have called the “age of algorithms”
  • Faculty interviews, too, suggested both a high degree of concern about algorithmic information systems and they had a desire for someone on campus to step up and take on a challenge they felt unequipped to tackle.
  • … students were often more aware of algorithms than faculty…

Net net: Neither students nor teachers are exactly sure about the online information disinformation, misinformation or info-reformation processes to use to figure out what’s accurate, what’s not, and what’s manipulation.

How does one guide students if faculty are adrift?

Stephen E Arnold, May 3, 2021

Great Moments in Censorship: Beethoven Bust

May 3, 2021

I noted a YouTube video called “Five Years on YouTube.” Well, not any longer. A highly suspect individual who has the gall to teach piano was deemed unacceptable. Was this a bikini haul by YouTube influencer/sensation Soph Mosca, who recently pointed out that reading a book was so, well, you know, ummm hard. Was it a pointer to stolen software like this outstanding creators’ contributions who seem little troubled by YouTube’s smart software monitoring system:

image

Nope, the obviously questionable piano teacher with 29,000 people who want to improve their piano skills is a copyright criminal type.

Watch the video. Notice the shifty eyes. Notice the prison complexion. Notice the poor grammar, enunciation, and use of bad actor argot.

Did you hear these vile words:

  • Beethoven
  • APRA_CS, ECAD_CS, SOCAN, VCPMC_CS
  • Upsetting.

And the music? I think Beethoven is active on Facebook, Instagram, Twitter, and other social media channels. He is protected by the stalwarts at Amazon, Apple, and Google. Did he really tweet: “Persecute piano teachers”?

What’s he have to say about this nefarious person’s use of notes from the Moonlight Sonata?

Asking Beethoven is similar to asking Alexa or Siri something. The truth will be acted upon.

I think smart software makes perfect decisions even though accuracy ranges from 30 percent to 90 percent for most well crafted and fiddled models.

Close enough for horse shoes. And piano teachers! Ban them. Lock them up. Destroy their pianos.

Furthermore the perpetrator of this crime against humanity ifs marina@thepianokeys.com. If you want to help her, please, contact her. Beyond Search remembers piano teachers, an evil brood. Ban them all, including Tiffany Poon and that equally despicable Dame Mitsuko Uchida who has brazenly performed Mozart’s Piano Concerto K. 271.

Cleanse the world of these spawn of Naamah.

Stephen E Arnold, May 3, 2021

Google Caught In Digital and Sticky Ethical Web

May 3, 2021

Google is described as an employee first company. Employees are affectionately dubbed “Googlers” and are treated to great benefits, perks, and work environment. Google, however, has a dark side. While the company culture is supposedly great, misogynistic, racist attitudes run rampant. Bloomberg via Medium highlights recent ethical violations in the article, “Google Ethical AI Group’s AI Turmoil Began Long Before Public Unraveling.”

One of the biggest ethical problems Google has dealt with is the lack of diverse information in their facial recognition datasets. This has led to facial recognition AI’s inability to recognize minority populations. If ethical problems within their technology were not enough, Google had created an Ethical AI research team headed by respected scientists Margaret Mitchell and Timnit Gebru.

Google had Gebru forcefully resign from the company in December 2020, when she refused to retract a research paper that criticized Google’s AI. Mitchell was also terminated in February 2021 on the grounds she was sending Google sensitive documents to personal accounts.

During their short tenure as Google’s Ethical AI team leads, Mitchell and Gebru witnessed a sexist and racist environment. They both noticed that women with more experience held lower job titles than men with less experience. When female employees were harassed and incidents were reported nothing was done.

Head of Google AI Jeff Dean appeared to be a supporter of Gebru and Mitchell, but while he voiced supported his actions spoke louder:

“Dean struck a skeptical and cautious note about the allegations of harassment, according to people familiar with the conversation. He said he hadn’t heard the claims and would look into the matter. He also disputed the notion that women were being systematically put in lower positions than they deserved and pushed back on the idea that Mitchell’s treatment was related to her gender. Dean and the women discussed how to create a more inclusive environment, and he said he would follow up on the other topics….

About a month after Gebru and Heller reported the claims of sexual harassment and after the lunch meeting with Gebru and Mitchell, Dean announced a significant new research initiative, and put the accused individual in charge of it, according to several people familiar with the situation. That rankled the internal whistleblowers, who feared the influence their newly empowered colleague could have on women under his tutelage.”

Google had purposely created the Ethical AI research team to bring attention to disparities and poor behavior to their attention. When Gebru and Mitchell did their job, their observations were dismissed.

Google shot itself in the foot when they fired Gebru and Mitchell, because the pair were doing their job. Because the pair questioned Google’s potential problems with Google technology and fought against sexism and racism, the company treated them as disruptive liabilities. Mitchell and Gebru’ treatment point to issues of self-regulation. Companies are internally biased, because they want to make money and not make mistakes. However, this attitude creates a lackadaisical attitude towards self-regulation and responsibility. Biased technology leads to poor consequences for minorities that could potentially ruin their lives. Is it not better to be aware of these issues, accept the problem, then fix it?

Google is only going to champion itself and not racial/gender equality.

Whitney Grace, May 3, 2021

Is the Freight Train of Responsibility Rerouting to Amazonia?

May 3, 2021

A Hoverboard Burst into Flames. It Could Change the Way Amazon Does Business” is from the consistently fascinating business wizards at the Los Angeles Times. (No, this is not a report about a new owner, newsroom turmoil, or the business school case of the future.) The newspaper reports that a three California “justices” decided that Amazon cannot pass the buck. I noted this statement in the write up:

“We are persuaded that Amazon’s own business practices make it a direct link in the vertical chain of distribution under California’s strict liability doctrine,” the justices ruled, rejecting Amazon’s claim that its site is merely a platform connecting buyers and sellers.

Yikes, do these legal professionals believe that a company, operating in an essentially unregulated environment, is responsible for the products it sells? The answer to this question seems to be “yes.” Remember, however, these factoids about modern legal practices:

  1. A company like Amazon lives and works in Amazonia. There are different rules and regulations in this digital country. As a result, the experts from Amazonia have considerable resources at their disposal. Legal processes will unfold in legal time, which is different from one click buyer time.
  2. Amazon has money and can hire individuals who can ensure that legal procedures are observed, considered, and subjected to applicable legal processes. That means drag out proceedings if possible.
  3. Amazon will keep on doing what has made the firm successful. Massive, quick change is possible, but in Amazonia there are Amazon time zones. California is just one time zone in the 24×7 world of the mom and pop online bookstore.

I want to be optimistic that after decades of ignoring the digital behemoths government may take meaningful action. But these are country-scale entities. Why not hire the lawyers working on this California matter and let them help shape Amazon’s response. The revolving door is effective when it delivers money, influence, and bonuses faster than the Great State of California loses population. Hopes for controlling Amazon could go up in flames too.

Stephen E Arnold, May 3, 2021

« Previous Page

  • Archives

  • Recent Posts

  • Meta