Deep Fakes Are Old

November 24, 2020

Better late than never, we suppose. The New York Post reports, “BBC Apologizes for Using Fake Bank Statements to Land Famous Princess Diana Interview.” Princess Diana being unavailable to receive the apology, the BBC apologized to her brother instead for luring her into the 1995 interview with counterfeit documentation. Writer Marisa Dellatto specifies:

“Network director-general Tim Davie wrote to Diana’s brother, Charles Spencer, to acknowledge the fraudulent actions of reporter Martin Bashir 25 years ago. Last month, the BBC finally admitted that Bashir showed Spencer bank statements doctored by a staff graphic designer. Spencer had alleged that Bashir told his sister ‘fantastical stories to win her trust’ and showed him fake bank records which reportedly helped land Bashir the interview. At the time, the princess was apparently deeply worried she was being spied on and that her staff was leaking information about her. Bashir’s ‘evidence’ allegedly made her confident to do the interview, one year after she and [Prince] Charles split.”

This is the interview in which Princess Di famously remarked that “there were three of us in this marriage, so it was a bit crowded,” and the couple filed for divorce in the weeks that followed. (For those who were not around or old enough to follow the story, her statement was a reference to Prince Charles’ ongoing relationship with Camilla Parker Bowles, whom he subsequently married.)

For what it is worth, a BBC spokesperson insists this sort of deception would not pass the organization’s more stringent editorial processes now in place. Apparently, Bashir also intimidated the Princess with fake claims her phones had been tapped by the British Intelligence Service. Though it did issue the apology, the BBC does not plan to press the issue further because Bashir is now in poor health.

Cynthia Murrell, November 24, 2020

Linear Math Textbook: For Class Room Use or Individual Study

October 30, 2020

Jim Hefferon’s Linear Algebra is a math textbook. You can get it for free by navigating to this page. From Mr. Hefferon’s Web page for the book, you can download a copy and access a range of supplementary materials. These include:

  • Classroom slides
  • Exercise sets
  • A “lab” manual that requires Sage
  • Videos

The book is designed for students who have completed one semester of calculus. Remember: Linear algebra is useful for poking around in search or neutralizing drones. Zaap. Highly recommended.
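The search quip is not just a joke: ranking documents against a query is a linear algebra exercise. Here is a minimal sketch of cosine similarity over term-frequency vectors; the vectors and vocabulary are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two term-frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy term-frequency vectors for a query and two documents
query = [1, 1, 0]
doc1 = [2, 1, 0]   # shares terms with the query
doc2 = [0, 0, 3]   # no overlap with the query
print(cosine_similarity(query, doc1))  # close to 1: similar
print(cosine_similarity(query, doc2))  # 0.0: orthogonal
```

A search engine does the same thing at scale, with weighted (e.g., tf-idf) vectors instead of raw counts.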

Stephen E Arnold, October 30, 2020

A Googley Book for the Google-Aspiring Person

October 29, 2020

Another free book? Yep, and it comes from the IBM-centric and Epstein-allied Massachusetts Institute of Technology. The other entity providing conceptual support is the Google, the online advertising company. MIT is an elite generator. Google is a lawsuit attractor. You will, however, look in vain through the 1,000-page volume for explanations of the numerical theorems explaining the amplification of value when generators and attractors interact.

The book, published in 2017, is “Mathematics for Computer Science.” The authors are a Googler named Eric Lehman, the MIT professors F Thomas Leighton and Albert R Meyer, and possibly a number of graduate students whose work helped inform the content.

The book’s numerical recipes, procedures, and explanations fall into five categories:

  • Proofs. You know, that’s Googley truth stuff for skeptical colleagues who don’t want to be in a meat space or a virtual meeting
  • Structures. These are the nuts and bolts of being able to solve problems the Googley way
  • Counting. Addition and such on steroids
  • Probability. This is the reality of the Google. And you thought Robinhood was the manifestation of winning a game. Ho ho ho.
  • Recurrences. Revisiting the Towers of Hanoi. This is a walk down memory lane.
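The Towers of Hanoi is the canonical recurrence: moving n disks takes T(n) = 2T(n−1) + 1 moves, which works out to the closed form 2^n − 1. A quick sketch of that walk down memory lane:

```python
def hanoi_moves(n):
    """Minimum moves for n disks: T(n) = 2*T(n-1) + 1 with T(0) = 0."""
    return 0 if n == 0 else 2 * hanoi_moves(n - 1) + 1

# The recurrence agrees with the closed form 2**n - 1
for n in range(9):
    assert hanoi_moves(n) == 2**n - 1

print(hanoi_moves(8))  # 255
```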

You can download your copy at this link. Will the MIT Press crank out 50,000 copies for those who lack access to an industrial strength laser printer?

Another IBM infusion of cash may be needed to make that happen. Mr. Epstein is no longer able to contribute money to the fascinating MIT. What’s the catch? Perhaps that will be a question on a reader’s Google interview?

Stephen E Arnold, October 29, 2020

The Bulldozer: Driver Accused of Reckless Driving

October 28, 2020

I don’t know if the story in the Sydney Morning Herald is true. You, as I did, will have to work through the “real” news report about Amazon’s commitment to its small sellers. With rumors of Jeff Bezos checking out the parking lots at CNN facilities, it is difficult to know where the big machine’s driver will steer the online bookstore next. Just navigate to “Ruined My Life: After Going All In on Amazon, a Merchant Says He Lost Everything.” The hook for the story is that a small online seller learned that Amazon asserted his product inventory consisted of knock-offs, what someone told me was a “fabulous fake.” Amazon wants to sell “real” products made by “real” companies with rights to the “real” product. A Rolex on Amazon, therefore, is “real,” unlike the fine devices available at the Paris street market Les Puces de Saint-Ouen.

What happened?

The Bezos bulldozer allegedly ground the inventory of the small merchant into recyclable materials. The write up explains in objective, actual factual “real” news rhetoric:

Stories like his [the small merchant with zero products and income] have swirled for years in online merchant forums and conferences. Amazon can suspend sellers at any time for any reason, cutting off their livelihoods and freezing their money for weeks or months. The merchants must navigate a largely automated, guilty-until-proven-innocent process in which Amazon serves as judge and jury. Their emails and calls can go unanswered, or Amazon’s replies are incomprehensible, making sellers suspect they’re at the mercy of algorithms with little human oversight.

Yikes, algorithms. What did those savvy math wonks do to alleged knock offs? What about the kidney transplant algorithms? Wait, that’s a different algorithm.

The small merchant was caught in the bulldozer’s blade. The write up explains:

Hoping to have his [the small merchant again] account reinstated and continue selling on the site, Govani [the small merchant] put off the decision. He received a total of 11 emails from Amazon each giving him different dates at which time his inventory would be destroyed if he hadn’t removed it. He sought clarity from Amazon about the conflicting dates. When he tried to submit an inventory removal order through Amazon’s web portal, it wouldn’t let him.

What’s happening now?

The small merchant is couch surfing and trying to figure out what’s next. One hopes that the Bezos bulldozer will not back over the small merchant. Taking Amazon to court is an option. There is the possibility of binding arbitration.

But it may be difficult to predict what the driver of the Bezos bulldozer will do. What’s a small merchant when the mission is larger? In the absence of meaningful regulation and a functioning compass on the big machine, maybe that renovation of CNN is more interesting than third party sellers? The Bezos bulldozer is a giant device with many moving parts. Can those driving it know what’s going on beneath the crawler treads? Is it break time yet?

Stephen E Arnold, October 28, 2020

Algorithm Tuning: Zeros and Ones Plus Human Judgment

October 23, 2020

This is the Korg OT-120 Orchestral Tuner. You can buy it on Amazon for $53. It is a chromatic tuner with an eight octave detection range that supports band and orchestra instruments. Physics tunes pianos, organs, and other instruments. Science!


This is the traditional piano tuner’s kit.


You will need ears, judgment, and patience. Richard Feynman wrote a letter to a piano tuner. The interesting point in Dr. Feynman’s note was information about how the non-zero stiffness of piano strings affects tuning. The implication? A piano tuner may have to factor in the harmonics of the human ear.

The Korg does hertz; the piano tuner does squishy human, wetware, and subjective things.
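Feynman’s stiffness point has a standard formula behind it: a stiff string’s n-th partial is sharp of the ideal harmonic n·f1 by a factor of sqrt(1 + B·n²), where B is the string’s inharmonicity coefficient. A sketch, with an illustrative (assumed) value for B:

```python
import math

def partial_frequency(n, f1, B):
    """n-th partial of a stiff string: f_n = n * f1 * sqrt(1 + B * n**2).
    B = 0 recovers the ideal harmonic series f_n = n * f1."""
    return n * f1 * math.sqrt(1 + B * n * n)

f1 = 440.0   # fundamental, Hz
B = 0.0004   # illustrative inharmonicity coefficient (assumed, not measured)
for n in (1, 2, 4, 8):
    ideal = n * f1
    stiff = partial_frequency(n, f1, B)
    print(f"partial {n}: ideal {ideal:.1f} Hz, stiff string {stiff:.1f} Hz")
```

The sharpened upper partials are one reason a tuner “stretches” octaves by ear rather than trusting a strict equal-tempered table.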

I thought about the boundary between algorithms and judgment in terms of piano tuning as I read “Facebook Manipulated the News You See to Appease Republicans, Insiders Say”, published by Mother Jones, an information service not happy with the notes generated by the Facebook really big organ. The main idea is that human judgment adjusted zeros, ones, and numerical recipes to obtain desirable results.

The write up reports:

In late 2017, Zuckerberg told his engineers and data scientists to design algorithmic “ranking changes” that would dial down the temperature.
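A ranking change of this kind can be as crude as a per-publisher multiplier on an engagement score. The sketch below is hypothetical, not Facebook’s code; the field names and weights are invented to show the mechanism:

```python
def rank_feed(posts, publisher_weights, default_weight=1.0):
    """Order posts by engagement score scaled by a per-publisher weight.
    Lowering a publisher's weight 'turns down the dial' on its reach.
    Hypothetical sketch; fields and weights are assumptions."""
    def adjusted(post):
        w = publisher_weights.get(post["publisher"], default_weight)
        return post["engagement"] * w
    return sorted(posts, key=adjusted, reverse=True)

posts = [
    {"publisher": "site_a", "engagement": 100},
    {"publisher": "site_b", "engagement": 80},
]
# Demote site_a: 100 * 0.5 = 50, so site_b now outranks it
print(rank_feed(posts, {"site_a": 0.5}))
```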

Piano tuners fool around to deliver the “sound” judged “right” for the venue, the score, and the musician. Facebook seems to be grabbing the old-fashioned tuner’s kit, not the nifty zeros and ones gizmos.

The article adds:

The code was tweaked, and executives were given a new presentation showing less impact on these conservative sites and more harm to progressive-leaning publishers

What happened?

We learn:

for more than two years, the news diets of Facebook audiences have been spiked with hyper conservative content—content that would have reached far fewer people had the company not deliberately tweaked the dials to keep it coming, even as it throttled independent journalism. For the former employee, the episode was emblematic of the false equivalencies and anti-democratic impulses that have characterized Facebook’s actions in the age of Trump, and it became “one of the many reasons I left Facebook.”

The specific impact on Mother Jones was, according to the article:

Average traffic from Facebook to our content decreased 37 percent between the six months prior to the change and the six months after.

Human judgment about tool use reveals that information tasks once sorted slowly by numerous gatekeepers can now be done more efficiently. The ones and zeros, however, resolve to what a human decides. With a big information lever like Facebook, the effort for change may be slight, but the impact significant. The problem is not ones and zeros; the problem is human judgment, intent, and understanding of context. Get it wrong and people’s teeth are set on edge. Unpleasant. Some maestros throw tantrums and seek another tuner.

Stephen E Arnold, October 23, 2020

Music and Moods: Research Verifies the Obvious

October 21, 2020

It has been proven that music can have positive or negative psychological impacts on people. Following this train of research, Business Line reports that playlists are a better reflection of mood than once thought, “Your Playlist Mirrors Your Mood, Confirms IIIT-Hyderabad Study.”

The newest study on music and its effect on mood, “Tag2Risk: Harnessing Social Music Tags for Characterizing Depression Risk,” comes from the International Institute of Information Technology in Hyderabad (IIIT-H) and covers over 500 individuals. The study discovered that people who listen to sad music can be thrown into depression. Vinoo Alluri and her students from IIIT-H’s cognitive science department investigated whether they could identify music listeners with depressive tendencies from their music listening habits.

Over five hundred people’s music listening histories were studied. The researchers discovered that repeatedly listening to sad music was used as an avoidance tool and a coping mechanism. These practices, however, also kept people in depressive moods. Music listeners in the study were also drawn to subgenres tagged with “sadness” and “tenderness.”

We noted:

“ ‘While it can be cathartic sometimes, repeatedly being in such states may be an indicator of potential underlying mental illness and this is reflected in their choice and usage of music,’ Vinoo Alluri points out. She feels that music listening habits can be changed. But, in order to do that, they need to be identified first by uncovering their listening habits. It is possible to break the pattern of “ruminative and repetitive music usage”, which will lead to a more positive outcome.”

Alluri’s study is an amazing investigation into the power and importance of music. Her research, however, only ratifies what music listeners and teenagers have known for decades.

Whitney Grace, October 21, 2020

Infohazards: Another 2020 Requirement

October 20, 2020

New technologies that become society staples have risks and require policies to rein in potential dangers. Artificial intelligence is a developing technology. Governing policies have yet to catch up with the emerging tool. Experts in computer science, government, and other controlling organizations need to discuss how to control AI, says Vanessa Kosoy in the Less Wrong blog post “Needed: AI Infohazard Policy.”

Kosoy approaches her discussion about the need for a controlling AI information policy with the standard science fiction warning argument: “AI risk is that AI is a danger, and therefore research into AI might be dangerous.” It is good to draw caution from science fiction to prevent real-world disaster. Experts must develop a governing body of AI guidelines to determine what learned information should be shared and how to handle results that are not published.

Individuals and single organizations cannot make these decisions alone, even if they do have their own governing policies. Governing organizations and people must coordinate their knowledge regarding AI and develop consensus policies to control AI information. Kosoy determines that any AI policy should consider the following:

• “Some results might have implications that shorten the AI timelines, but are still good to publish since the distribution of outcomes is improved.

• Usually we shouldn’t even start working on something which is in the should-not-be-published category, but sometimes the implications only become clear later, and sometimes dangerous knowledge might still be net positive as long as it’s contained.

• In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself. Therefore, safe AGI is a broad collective effort and we should expect most results to be published. In the endgame, it might become likely for a given group to make it all the way to safe AGI. In this case, incentives for secrecy become stronger.

• The policy should not fail to address extreme situations that we only expect to arise rarely, because those situations might have especially major consequences.”

She continues that any AI information policy should determine the criteria for what information is published, what channels should be consulted to determine publication, and how to handle potentially dangerous information.

These questions are universal for any type of technology and information that has potential hazards. However, specificity of technological policies weeds out any pedantic bickering and sets standards for everyone, individuals and organizations. The problem is getting everyone to agree on the policies.

Whitney Grace, October 20, 2020

Tickeron: The Commercial System Which Reveals What Some Intel Professionals Have Relied on for Years

October 16, 2020

Are you curious about the capabilities of intelware systems developed by specialized services firms? You can get a good idea about the type of information available to an authorized user:

  • Without doing much more than plugging in an entity with a name
  • Without running ad hoc queries like one does on free Web search systems unless there is a specific reason to move beyond the provided output
  • Without reading a bunch of stuff and trying to figure out what’s reliable and what’s made up by a human or a text robot
  • Without having to spend time decoding a table of numbers, a crazy-looking chart, or weird colored blobs that represent significant correlations.

Sound like magic?

Nope, it is the application of pattern matching and established statistical methods to streams of data.

The company delivering this system, tailored to Robinhood-types and small brokerages, has been assembled by Tickeron. There’s original software, some middleware, and some acquired technology. Data are ingested and outputs indicate what to buy or sell or to know, as a country western star crooned, “know when to hold ‘em.”
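Tickeron does not publish its models, but “established statistical methods” applied to price streams can be as plain as a moving-average crossover. The sketch below is a generic illustration of that class of signal, not Tickeron’s method; the prices are invented:

```python
def sma(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Toy buy/sell/hold signal: the short SMA crossing the long SMA."""
    if len(prices) < long + 1:
        return "hold"
    prev_short = sma(prices[:-1], short)
    prev_long = sma(prices[:-1], long)
    cur_short = sma(prices, short)
    cur_long = sma(prices, long)
    if prev_short <= prev_long and cur_short > cur_long:
        return "buy"   # short average just crossed above the long average
    if prev_short >= prev_long and cur_short < cur_long:
        return "sell"  # short average just crossed below the long average
    return "hold"

prices = [10, 10, 10, 10, 10, 10, 10, 13]  # sudden jump on the last tick
print(crossover_signal(prices))  # buy
```

A “card” wraps a signal like this in icons, charts, and hashtags; the statistics underneath are far older than the packaging.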

A rah rah review appeared in The Stock Dork. “Tickeron Review: An AI-Powered Trading Platform That’s Worth the Hype” provides a reasonably good overview of the system. If you want to check out the system, navigate to Tickeron’s Web site.

The basic unit of information output from the system is a “card.”


The key elements are:

  • Icon to signal “think about buying” the stock
  • A chart with red and green cues
  • A hot link to text
  • A game angle with the “odds” link
  • A “more” link
  • Hashtags (just like Twitter).

Now imagine this type of data presented to an intel officer monitoring a person of interest. Sound useful? The capability has been available for more than a decade. It’s interesting to see this type of intelware find its way to those who want to invest like the wizards at the former Bear Stearns (remember that company, the bridge players, the implosion?).

DarkCyber thinks that the providers of high-priced Wall Street information solutions may wonder about the $15-a-month fee for the Tickeron service.

Keep in mind that predictions, if right, can allow you to buy an exotic car, an island, and a nice house in a Covid-free location. If incorrect, there’s van life.

The good news is that the functionality of intelware is finally becoming more widely available.

Stephen E Arnold, October 16, 2020

A New Role for Facial Recognition

October 6, 2020

The travel industry is finding its way around COVID-provoked limitations. Where once travelers were promised a “seamless” experience, they are now promised a “touchless” one, we learn from PhocusWire’s piece, “Touchless Tech: The Simple—and Advanced—Ways Ground Transport Providers Are Encouraging Travel.” Some measures are low-tech, like pledges to clean thoroughly, glove and mask use, and single-passenger rides instead of traditional shuttles. However, others are more technically advanced. The role of facial recognition in “touchless tickets” caught our eye. Writer Jill Menze reports:

“On the rail front, Eurostar has tapped facial-verification technology provider iProov to enable contactless travel from United Kingdom to France. With the solution, passengers can be identified without a ticket or passport when boarding the train, as well as complete border exit processes, at St. Pancras International station without encountering people or hardware. ‘What we’re trying to facilitate for the first time ever is a seamless process of going through ticket and border exit checks contactlessly and more fluidly than it’s ever been possible before using face verification,’ iProov founder and CEO Andrew Bud says. ‘That means, instead of checking people’s ID when they arrive, you check their ID long before. The idea is that you move the process of checking IDs away from the boarding point to the booking point.’ During booking, Eurostar will offer travelers an accelerated pre-boarding option, which allows passengers to scan their identity documentation using Eurostar’s app before using iProov’s facial biometric check, which uses patented controlled illumination to authenticate the user’s identity against the ID document. After that, travelers would not have to show a ticket or passport until they reach their destination.”

Eurostar plans to deploy the technology next March, and Bud says other railway entities have expressed enthusiasm. This is an interesting use of facial recognition tech. It seems getting back to business is powerful motivation to innovate.

Cynthia Murrell, October 6, 2020

Twitter Photo Preview AI Suspected of Racial Bias

October 1, 2020

Is this yet another case of a misguided algorithm? BreakingNews.ie reports, “Twitter Investigating Photo Preview System After Racial Bias Claims.” Several Twitter users recently posted examples of the platform’s photo-preview function seeming to consider white people more important than black ones. Well, that is not good. We’re told:

“The tech giant uses a system called neural network to automatically crop photo previews before you can click on them to view the full image. This focuses on the area identified as the ‘salient’ image region, where it is likely a person would look when freely viewing an entire photo. But tests by a number of people on the platform suggest that the technology may treat white faces as the focal point more frequently than black faces. One example posted online shows American politician Mitch McConnell and Barack Obama, with the system favoring Mr. McConnell in its preview over the former US president. Meanwhile, another person tried with Simpson cartoon characters Lenny and Carl – the latter who is black – with Lenny appearing to take preference. A third user even tried with dogs, resulting in a white dog in the prime preview position over a black dog.”

That last example suggests this may be an issue of highlight and shadow rather than biased training data, but either way is problematic. The company’s chief design officer posted one test he performed that seemed to counter the accusations, but acknowledged his experiment is far from conclusive. Twitter continues to investigate.
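The cropping step itself is simple once saliency scores exist: slide a window over the saliency map and keep the position with the highest total score. A toy sketch follows; the map values are invented, and a real system predicts them with a neural network rather than taking them as input:

```python
def best_crop(saliency, crop_h, crop_w):
    """Return (top, left) of the crop window with the highest summed saliency.
    `saliency` is a 2D list of per-pixel scores (higher = more salient)."""
    rows, cols = len(saliency), len(saliency[0])
    best_score, best_pos = float("-inf"), (0, 0)
    for top in range(rows - crop_h + 1):
        for left in range(cols - crop_w + 1):
            total = sum(
                saliency[r][c]
                for r in range(top, top + crop_h)
                for c in range(left, left + crop_w)
            )
            if total > best_score:
                best_score, best_pos = total, (top, left)
    return best_pos

# A 4x4 map whose "bright" region sits bottom-right
saliency = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
print(best_crop(saliency, 2, 2))  # (2, 2)
```

Whether bias enters through the saliency model’s training data or through simple luminance contrast, the crop faithfully follows whatever the scores say, which is the article’s point.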

Cynthia Murrell, October 1, 2020
