FR Is Going Far

September 6, 2021

Law enforcement officials are using facial recognition software and the cameras that now cover much of the world to identify bad actors. The New York Times reports on how the technology was used to track down a couple accused of arson: “A Fire In Minnesota. An Arrest In Mexico. Cameras Everywhere.”

Mena Yousif is an Iranian refugee and Jose Felan is a felon. The couple were frustrated with the American law enforcement system and government, especially after George Floyd’s death. They set fire to buildings, including schools, stores, and gas stations; more than 1,500 properties were damaged. The ATF posted videos of the pair online, asking for any leads to their arrests, and received tips as Felan and Yousif traveled across the US to the Mexican border. They were on the run for two weeks before they were identified outside a motel in Texas.

Mexican authorities used a comprehensive facial recognition system, deployed in 2019, to find Felan and Yousif. The system was designed by Dahua Technology, a Chinese company that is one of the largest video surveillance vendors in the world and is partially owned by the Chinese government. The US Defense and Commerce departments blacklisted Dahua over China’s treatment of Uighur Muslims and the trade war. Dahua denies the allegations and says it cannot control how its technology is used. In the end, facial recognition did not catch Yousif and Felan; a tip did.

China is marketing surveillance technology to other countries, particularly in South America, Asia, and Africa, as a way to reduce crime and promote order. The technology is far from perfect, yet the US uses it despite the problems:

“In the United States, facial recognition technology is widely used by law enforcement officials, though poorly regulated. During a congressional hearing in July, lawmakers expressed surprise that 20 federal agencies were using it without having fully assessed the risks of misuse or bias — some algorithms have been found to work less accurately on women and people of color, and it has led to mistaken arrests. The technology can be a powerful and effective crime-solving tool, though, placing it, for now, at a tipping point. At the start of the hearing, Representative Sheila Jackson Lee, Democrat of Texas, highlighted the challenge for Congress — or anyone — in determining the benefits and downsides to using facial recognition: It’s not clear how well it works or how widely it’s used. As Ms. Jackson Lee said, “Information on how law enforcement agencies have adopted facial recognition technology remains underreported or nonexistent.”

Many governments around the world, including the US, seem poised to increase their use of facial recognition and tracking technology in the name of law and order. What is interesting is that China has been a pacesetter.

Whitney Grace, September 9, 2021

Not an Onion Report: Handwaving about Swizzled Data

August 24, 2021

I read, at the suggestion of a friend, “These Data Are Not Just Excessively Similar. They Are Impossibly Similar.” At first glance, I thought the write up was a column in an Onion-type publication. Nope: someone copied a data set and pasted it into itself.

Here’s what the write up says:

The paper’s Excel spreadsheet of the source data indicated mathematical malfeasance.

Malfeasance. Okay.

But what caught my interest was the inclusion of this name: Dan Ariley. If this is the Dan Ariely who wrote these books, that fact alone is suggestive. If it is a different person, then we are dealing with routine data dumbness or data dishonesty.


The write up contains what I call academic ducking and covering. You may enjoy this game, but I find it boring. Non-reproducible results, swizzled data, and massaged numerical recipes are the status quo.

Is there a fix? Nope, not as long as most people cannot make change or add up the cost of items in a grocery basket. Smart software depends on data. And if those data are like those referenced in this Metafilter article, well. Excitement.

Stephen E Arnold, August 24, 2021

Big Data, Algorithmic Bias, and Lots of Numbers Will Fix Everything (and Your Check Is in the Mail)

August 20, 2021

We must remember, “The check is in the mail” and “I will always respect you” and “You can trust me.” Ah, great moments in the University of Life’s chapbook of factoids.

I read “Moving Beyond Algorithmic Bias Is a Data Problem”. I was heartened by the essay. First, the document has a document object identifier and a link to make checking updates easy. Very good. Second, the focus of the write up is the inherent problem with most of the Fancy Dan, baloney-charged big data marketing to which I have been subjected over the last six or seven years. Very, very good.

I noted this statement in the essay:

Why, despite clear evidence to the contrary, does the myth of the impartial model still hold allure for so many within our research community? Algorithms are not impartial, and some design choices are better than others.

Notice the word “myth”. Notice the word “choices.” Yep, so much for the rock solid nature of big data, models, and predictive silliness based on drag-and-drop math functions.

I also starred this important statement by Donald Knuth:

Donald Knuth said that computers do exactly what they are told, no more and no less.

What’s the real world behavior of smart anti-phishing cyber security methods? What about the autonomous technology in some nifty military gear like the Avenger drone?

Google may not be thrilled with the information in this essay nor thrilled about the nailing of the frat bros’ tail to the wall; for example:

The belief that algorithmic bias is a dataset problem invites diffusion of responsibility. It absolves those of us that design and train algorithms from having to care about how our design choices can amplify or curb harm. However, this stance rests on the precarious assumption that bias can be fully addressed in the data pipeline. In a world where our datasets are far from perfect, overall harm is a product of both the data and our model design choices.

Perhaps this explains why certain researchers’ work is not zipping around Silicon Valley at the speed of routine algorithm tweaks? The statement could provide some useful insight into why Facebook does not want pesky researchers at NYU’s Ad Observatory digging into how Facebook manipulates perception and advertisers.

The methods for turning users and advertisers into puppets are not too difficult to figure out. That’s why certain companies obstruct researchers, manufacture baloney, crank up the fog machine, and offer free jargon stew to everyone, including researchers. These are the same entities which insist they are not monopolies. Do you believe that these are mom-and-pop shops with a part-time mathematician and data wrangler coming in on weekends? Gee, I do.
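
The essay’s claim that harm is a product of both the data and the model design choices is easy to demonstrate. Below is a small hypothetical sketch: the dataset is held fixed, only one design choice (class weighting) is changed, and the false negative rate shifts for a majority and a minority subgroup. The data, group sizes, and base rates are invented for illustration; this is not anyone’s production model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical illustration: same data, one changed design choice (class
# weighting), different error distribution across subgroups. All numbers
# below are invented.
rng = np.random.default_rng(0)

def make_group(n, pos_rate, shift):
    y = (rng.random(n) < pos_rate).astype(int)
    x = (y * 1.0 + shift + rng.normal(0.0, 1.0, size=n)).reshape(-1, 1)
    return x, y

# Majority group: plenty of rows. Minority group: fewer rows, lower base rate.
x_maj, y_maj = make_group(5000, pos_rate=0.30, shift=0.0)
x_min, y_min = make_group(300, pos_rate=0.10, shift=-0.5)
X = np.vstack([x_maj, x_min])
y = np.concatenate([y_maj, y_min])
group = np.array(["majority"] * len(y_maj) + ["minority"] * len(y_min))

for weighting in (None, "balanced"):  # the design choice under test
    model = LogisticRegression(class_weight=weighting).fit(X, y)
    pred = model.predict(X)
    for g in ("majority", "minority"):
        positives = (group == g) & (y == 1)
        fnr = 1.0 - pred[positives].mean()  # false negative rate within the group
        print(f"class_weight={str(weighting):8s} group={g:8s} FNR={fnr:.2f}")
```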

The “Moving beyond” article ends with a snappy quote:

As Lord Kelvin reflected, “If you cannot measure it, you cannot improve it.”

Several observations are warranted:

  1. More thinking about algorithmic bias is helpful. The task is to get people to understand what’s happening and what has been happening for decades.
  2. The interaction of math most people don’t understand with very simple objectives like “make more money” or “advance this agenda” is a destabilizing force in human behavior. Need an example? The Taliban’s use of WhatsApp is interesting, is it not?
  3. The fix for the problems associated with commercial companies using algorithms as monetary and social weapons requires control. The question is by whom and how.

Stephen E Arnold, August 20, 2021

Nifty Interactive Linear Algebra Text

August 17, 2021

Where was this text when I was an indifferent student in a one-cow town high school? I suggest you take a look at Dan Margalit and Joseph Rabinoff’s Interactive Linear Algebra. The text is available online and as a PDF. The information is presented clearly, and there are helpful illustrations; some of them wiggle and jump. This is a must-have in my opinion. Linear algebra in the age of whiz-bang smart methods? Yes. One comment: when we checked the online version, the hot links in the index did not resolve. Use the Next link instead.

Stephen E Arnold, August 17, 2021

Spreadsheet Fever: It Is Easy to Catch

August 9, 2021

Regression is useful. I regress to my mean with every tick of my bio clock. I read “A Simple Regression Problem.” I like these explainer-type articles.

This write up contains a paragraph amplifying model fitting techniques, and I found this passage thought provoking. Here it is:

If you use Excel, you can try various types of trend lines to approximate the blue curve, and even compute the regression coefficients and the R-squared for each tested model. You will find very quickly that the power trend line is the best model by far, that is, An is very well approximated (for large values of n) by An = b n^c. Here n^c stands for n at power c; also, b and c are the regression coefficients. In other words, log An = log b + c log n (approximately).

The bold face indicates the words and phrases I found suggestive. With this encouraged twiddling, one can get a sense of how fancy math can be converted into a nifty array of numbers which flow. Even better, a graphic can be generated with a click.
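
For readers who want to try the twiddling outside Excel, here is a minimal sketch of the same log-log trick in Python. The sequence, the true b and c values, and the noise level are assumptions for illustration; the point is only that a single regression on logs recovers the power trend line and its R-squared.

```python
import numpy as np

# Synthetic sequence that approximately follows A_n = b * n^c (values assumed).
rng = np.random.default_rng(42)
n = np.arange(1, 201)
b_true, c_true = 2.0, 1.5
A = b_true * n**c_true * np.exp(rng.normal(0.0, 0.05, size=n.size))  # multiplicative noise

# Fit log A_n = log b + c * log n with ordinary least squares.
c_hat, log_b_hat = np.polyfit(np.log(n), np.log(A), deg=1)
b_hat = np.exp(log_b_hat)

# R-squared in log space, which is what a spreadsheet reports for a power trend line.
pred = log_b_hat + c_hat * np.log(n)
resid = np.log(A) - pred
r_squared = 1.0 - resid.var() / np.log(A).var()

print(f"estimated b = {b_hat:.3f}, c = {c_hat:.3f}, R-squared = {r_squared:.4f}")
```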

What happens when data scientists and algorithm craftspeople assemble their confection of dozens, even hundreds, of similar procedures? Do you talk about Bayesian drift at the golf club? If yes, then toss in spreadsheet fever’s warning signs.

Stephen E Arnold, August 9, 2021

Why Some Outputs from Smart Software Are Wonky

July 26, 2021

Some models work like a champ. Utility rate models are reasonably reliable: when it is hot, use of electricity goes up, and rates are then “adjusted.” Perfect. Other models are less solid; for example, Bayesian systems which are not checked every hour, or large neural nets which are “assumed” to be honking along like a well-ordered flight of geese. Why do I offer such Negative Ned observations? Experience, for one thing, and the nifty little concepts tossed out by Ben Kuhn, a Twitter persona. You can locate this string of observations at this link. Well, you could as of July 26, 2021, at 6:30 am US Eastern time. Here’s a selection of what are apparently the highlights of Mr. Kuhn’s conversation with “a former roommate.” That’s provenance enough for me.

Item One:

Most big number theory results are apparently 50-100 page papers where deeply understanding them is ~as hard as a semester-long course. Because of this, ~nobody has time to understand all the results they use—instead they “black-box” many of them without deeply understanding.

Could this be true? How could newly minted, “become an expert with our $40 online course” professionals, who use models packaged in downloadable, easy-to-plug-in modules, be unfamiliar with the inner workings of said bundles of brilliance? Impossible? Really?

Item Two:

A lot of number theory is figuring out how to stitch together many different such black boxes to get some new big result. Roommate described this as “flailing around” but also highly effective and endorsed my analogy to copy-pasting code from many different Stack Overflow answers.

Oh, come on. Flailing around? Do developers flail, or do they “trust” the outfits who pretend to know how some multi-layered systems work? Fiddling with assumptions, thresholds, and (close your ears) the data themselves is never, ever a way to work around a glitch.

Item Three:

Roommate told a story of using a technique to calculate a number and having a high-powered prof go “wow, I didn’t know you could actually do that”

No kidding? That’s impossible in general, and that expression would never be uttered at Amazon-, Facebook-, and Google-type operations, would it?

Will Mr. Kuhn be banned for heresy? [Keep in mind how Wikipedia defines the term: “any belief or theory that is strongly at variance with established beliefs or customs, in particular the accepted beliefs of a church or religious organization.”] Once, just repeating such an idea would have warranted a close encounter with an Iron Maiden or a pile of firewood. Probably not today. Someone might emit a slightly critical tweet, however.

Stephen E Arnold, July 26, 2021

Elasticsearch Versus RocksDB: The Old Real Time Razzle Dazzle

July 22, 2021

Something happens. The “event” is captured and written to a file. Even if you are watching the “something” happen, there is latency between the event and the sensor or the human perceiving it. The calculus of real time mostly involves avoiding too much talk about latency. But real time is hot, because who wants to look at old data? Not TikTok fans and not the money-fueled lovers of Robinhood.

“Rockset CEO on Mission to Bring Real-Time Analytics to the Stack” uses lots of buzzwords, sidesteps inherent latency, and avoids commentary on other allegedly real-time analytics systems. Rockset is built on RocksDB, open source software. Nevertheless, there is some interesting information about Elasticsearch; for example:

  • Unsupported factoids like: “Every enterprise is now generating more data than what Google had to index in [year] 2000.”
  • No definition or baseline for “simple”: “The combination of the converged index along with the distributed SQL engine is what allows Rockset to be fast, scalable, and quite simple to operate.”
  • Different from Elasticsearch and RocksDB: “So the biggest difference between Elastic and RocksDB comes from the fact that we support full-featured SQL including JOINs, GROUP BY, ORDER BY, window functions, and everything you might expect from a SQL database. Rockset can do this. Elasticsearch cannot.”
  • Similarities with Rockset: “So Lucene and Elasticsearch have a few things in common with Rockset, such as the idea to use indexes for efficient data retrieval.”
  • Jargon and unique selling proposition (a toy sketch of the converged index idea appears after this list): “We use converged indexes, which deliver both what you might get from a database index and also what you might get from an inverted search index in the same data structure. Lucene gives you half of what a converged index would give you. A data warehouse or columnar database will give you the other half. Converged indexes are a very efficient way to build both.”
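
For readers wondering what packing a database index and an inverted search index into one structure might look like, here is a toy Python sketch of the general idea. It is an illustration only, not Rockset’s actual design; the class, its methods, and the sample rows are invented, and it assumes every document carries the same columns.

```python
from collections import defaultdict

# Toy "converged index": one ingest path feeds both an inverted index
# ((column, value) -> row ids, for search-style filters) and a column store
# (column -> values, for analytics-style aggregation).
class ConvergedIndex:
    def __init__(self):
        self.inverted = defaultdict(set)   # (column, value) -> set of row ids
        self.columns = defaultdict(list)   # column name -> values ordered by row id
        self.rows = 0

    def insert(self, doc: dict):
        # Assumes every document has the same columns, so positions stay aligned.
        row_id = self.rows
        self.rows += 1
        for col, value in doc.items():
            self.inverted[(col, value)].add(row_id)  # point lookup / filter path
            self.columns[col].append(value)          # scan / aggregate path
        return row_id

    def filter(self, col, value):
        """Search-index style: which rows match col == value?"""
        return self.inverted[(col, value)]

    def aggregate_sum(self, col, row_ids=None):
        """Column-store style: sum a column, optionally over a filtered row set."""
        values = self.columns[col]
        ids = range(len(values)) if row_ids is None else row_ids
        return sum(values[i] for i in ids)

idx = ConvergedIndex()
idx.insert({"city": "Austin", "latency_ms": 12})
idx.insert({"city": "Austin", "latency_ms": 30})
idx.insert({"city": "Boston", "latency_ms": 25})
print(idx.aggregate_sum("latency_ms", idx.filter("city", "Austin")))  # 42
```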

Amazon has rolled out its real time system, and there are a number of options available from vendors like Trendalyze.

Each of these vendors emphasizes real time. The problem, however, is that latency exists regardless of the system. Each has use cases which make its system seem to be the solution for real time data analysis. That’s what makes horse races interesting: they unfold in real time if one is at the track, and fractional delays have big consequences for those betting that their solution is the least latent.
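
To make the latency point concrete, here is a tiny sketch of the only honest measurement a “real-time” consumer can take: how far behind the event it is running. The event structure and the 250 millisecond head start are invented for the example.

```python
import time

# The consumer can only observe how stale an event already is when it arrives.
def handle(event):
    received_at = time.time()
    lag_ms = (received_at - event["event_time"]) * 1000.0
    # ... the actual analytics work would go here ...
    return lag_ms

# Simulate an event that happened 250 ms before the pipeline saw it.
event = {"event_time": time.time() - 0.250, "payload": "sensor reading"}
print(f"event-to-processing lag: {handle(event):.0f} ms")  # never zero
```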

Stephen E Arnold, July 22, 2021

A Theory: No Room for Shortcuts in Healthcare Datasets

July 1, 2021

The value of any machine learning algorithm depends on the data it was trained on, we are reminded in the article “Machine Learning Deserves Better Than This” at AAAS Science Mag. Writer Derek Lowe makes some good points that are, nevertheless, likely to make him unpopular with the rah-rah AI crowd. He is specifically concerned with the ways machine learning is currently being applied in healthcare. As an example, Lowe examines a paper on coronavirus pathology as revealed in lung X-ray data. He writes:

“Every single one of the studies falls into clear methodological errors that invalidate their conclusions. These range from failures to reveal key details about the training and experimental data sets, to not performing robustness or sensitivity analyses of their models, not performing any external validation work, not showing any confidence intervals around the final results (or not revealing the statistical methods used to compute any such), and many more. A very common problem was the (unacknowledged) risk of bias right up front. Many of these papers relied on public collections of radiological data, but these have not been checked to see if the scans marked as COVID-19 positive patients really were (or if the ones marked negative were as well). It also needs to be noted that many of these collections are very light on actual COVID scans compared to the whole database, which is not a good foundation to work from, either, even if everything actually is labeled correctly by some miracle. Some papers used the entire dataset in such cases, while others excluded images using criteria that were not revealed, which is naturally a further source of unexamined bias.”

As our regular readers are aware, any AI is only as good as the data it is trained upon. However, data scientists can be so eager to develop tools (or, to be less charitable, to get published) that they take shortcuts. Some, for example, accept all data from public databases without any verification. Others misapply data, like the collection of lung x-rays from patients under the age of five that was included in the all-ages pneumonia dataset. Then there are the datasets and algorithms that simply do not have enough documentation to be trusted. How was the imaging data pre-processed? How was the model trained? How was it selected and validated? Crickets.
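
By way of contrast, here is a minimal sketch, on synthetic stand-in data rather than radiology images, of two of the missing ingredients Lowe lists: scoring on a held-out external set the model never touched, and reporting a bootstrap confidence interval instead of a bare point estimate. The data, model, and split sizes are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data; labels depend on the first two features plus noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=2000) > 0).astype(int)

# Hold out an "external" set up front; never use it for training or model selection.
X_train, y_train = X[:1500], y[:1500]
X_ext, y_ext = X[1500:], y[1500:]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_ext)[:, 1]
auc = roc_auc_score(y_ext, scores)

# Bootstrap the external-set AUC rather than reporting a bare number.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_ext), len(y_ext))
    if len(np.unique(y_ext[idx])) < 2:  # a resample needs both classes present
        continue
    boot.append(roc_auc_score(y_ext[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"external AUC = {auc:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```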

We understand why people are excited about the potential of machine learning in healthcare, a high-stakes field where solutions can be frustratingly elusive. However, it benefits no one to rely on conclusions drawn from flawed data. In fact, doing so can be downright dangerous. Let us take the time to get machine learning right first.

Cynthia Murrell, July 1, 2021

TikTok: What Is the Problem? None to Sillycon Valley Pundits.

June 18, 2021

I remember making a comment in a DarkCyber video about the lack of risk TikTok posed to its users. I think I heard a couple of Sillycon Valley pundits suggest that TikTok is no big deal. Chinese links? Hey, so what. These are short videos. Harmless.

Individuals like this are lost in clouds of unknowing with a dusting of gold and silver naive sparkles.

“TikTok Has Started Collecting Your ‘Faceprints’ and ‘Voiceprints.’ Here’s What It Could Do With Them” provides some color for parents whose children are probably tracked, mapped, and imaged:

Recently, TikTok made a change to its U.S. privacy policy, allowing the company to “automatically” collect new types of biometric data, including what it describes as “faceprints” and “voiceprints.” TikTok’s unclear intent, the permanence of the biometric data and potential future uses for it have caused concern…

Well, gee whiz. The write up is pretty good, but a few uses of these types of data were left out:

  • Cross-correlate the images with other data about a minor, young adult, college student, or aging lurker
  • Feed the data into analytic systems so that predictions can be made about the “flexibility” of certain individuals
  • Cluster young people into egg cartons so fellow travelers and their weaknesses can be exploited for nefarious or really good purposes.

Will the Sillycon Valley real journalists get the message? Maybe if I convert this to a TikTok video.

Stephen E Arnold, June 18, 2021

Google Encourages Competition: Our Way or No Way. Seems Fair

June 4, 2021

I get a kick out of the Google. First, there was the really embarrassing matter of the diversity director who, a few years ago, output some spicy comments about a country. Here’s a rundown of what makes the Timnit Gebru affair seem like just another synthetic pearl in a long string of management jewelry at a flea market.

I found this story even more revealing. The context is that numerous legal eagles are slapping Googzilla with a wide range of legal documents, many of them related to alleged monopolistic practices. I am no lawyer, but I get the feeling that some people are concerned about Google’s ability to absorb online advertising revenues, control what information people can find via the universal search thing, and display Amazon-like arrogance. (Yep, Amazon is the new Big Dog, but you knew that, right?)

Here’s the key statement:

Today I Learned you can not advertise on @GoogleAds if you use @googleanalytics competitors like @matomo_org

This seems reasonable. An “if then” statement for organizations that want to tap into Google’s billions of “users.”

An entity called @HashNuke added:

This is easily identifiable as anti-competitive practice. Wouldn’t this be illegal in many countries?

If these statements are accurate, isn’t being Googley just the best way to inspire individuals and organizations? Some of those legal eagles may find the information worth checking out.

Stephen E Arnold, June 4, 2021
