99 Percent Accurate: Close Enough for PR Output

August 24, 2021

I am not entering a horse in this race, a dog in this fight, or a pigeon in this contest. I want to point to a write up in a newspaper tenuously connected to the former driver of the Bezos bulldozer. That write up is “Opinion: Apple’s New Child Safety Tool Comes with Privacy Trade-Offs — Just Like All the Others.”

Right off the bat I noted the word “all.” Okay, categorical affirmatives set my teeth on edge the same way Miss Blackburn’s fingernails scraping the blackboard in calculus class did. “All.” Very tidy.

The write up contains an interesting statement or two. I circled this one in Bezos bulldozer orange:

The practice of on-device flagging may sound unusually violative. Yet Apple has a strong argument that it’s actually more protective of privacy than the industry standard. The company will learn about the existence of CSAM only when the quantity of matches hits a certain threshold, indicating a collection.

The operative word is threshold. Like “all”, threshold sparks a few questions in my mind. Does it yours? Let me provide a hint: Who or what sets a threshold? And under what conditions is a threshold changed? There are others, but I want to make this post readable to my TikTok-like readers.
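For the curious, the public description reduces to a counter compared against a cutoff someone chooses. Here is a minimal sketch in Python; the threshold value is purely an assumption, which is exactly the point:

```python
# Minimal sketch of threshold-gated reporting, as publicly described.
# REPORT_THRESHOLD is hypothetical; Apple has not published the actual value,
# who sets it, or under what conditions it changes.

REPORT_THRESHOLD = 30  # assumption for illustration only

def should_report(match_count: int, threshold: int = REPORT_THRESHOLD) -> bool:
    """Flag an account only once the number of matches reaches the cutoff."""
    return match_count >= threshold

# The policy question in two lines: the same account is invisible at
# threshold=30 and reportable at threshold=5.
print(should_report(10, threshold=30))  # False
print(should_report(10, threshold=5))   # True
```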

I liked the conundrum angle too:

The benefit of nabbing abusers in this case may outweigh these hypothetical harms, especially if Apple holds itself to account — and the public keeps up the pressure. Yet the company’s conundrum emphasizes an unpleasant truth: Doing something to protect public safety in the Internet age is better than doing nothing — yet every “something” introduces issues of its own.

Fascinating. I am curious how Apple PR and marketing will respond. Hopefully with fewer unsupported assertions, some information about thresholds, and no sign of the logician’s bane: a categorical affirmative.

Stephen E Arnold, August 24, 2021

Does Google Play Protect and Serve—Ads?

August 20, 2021

We hope, gentle reader, that you have not relied on the built-in Google Play Protect to safeguard your Android devices when downloading content from the Play store. MakeUseOf cites a recent report from AV-Test in, “Report: Google Play Protect Sucks at Detecting Malware.” Writer Gavin Phillips summarizes:

“With a maximum of 18 points on offer across the three test sections of Protection, Performance, and Usability, Google Play Protect picked up just 6.0—a full ten points behind the next option, Ikarus. AV-TEST pits each of the antivirus tools against more than 20,000 malicious apps. In the endurance test running from January to June 2021, there were three rounds of testing. Each test involved 3,000 newly discovered malware samples in a real-time test, along with a reference set of malicious apps using malware samples in circulation for around four weeks. Google Play Protect detected 68.8 percent of the real-time malware samples and 76.7 percent of the reference malware samples. In addition, AV-TEST installs around 10,000 harmless apps from the Play Store on each device, aiming to detect any false positives. Again, Google’s Play Protect came bottom of the pile, marking 70 harmless apps as malware.”
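The arithmetic behind those percentages is easy to check. A quick back-of-the-envelope sketch using the sample counts quoted above; the per-round detection counts are inferred from the rates, not published:

```python
# Back-of-the-envelope check of the quoted AV-TEST rates.
# Sample counts come from the quoted methodology; exact per-round
# detection figures are not published, so we work backward from the rates.

realtime_samples = 3000          # newly discovered samples per test round
realtime_detection_rate = 0.688  # 68.8% quoted for Google Play Protect

detected = round(realtime_samples * realtime_detection_rate)
missed = realtime_samples - detected
print(f"Detected {detected} of {realtime_samples}; {missed} slipped through.")
# Detected 2064 of 3000; 936 slipped through.

# False positives: 70 of ~10,000 harmless apps flagged as malware.
fp_rate = 70 / 10_000
print(f"False positive rate: {fp_rate:.1%}")  # 0.7%
```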

A chart listing the test’s results for each security solution can be found in the write up or the report itself. More than half of the tools received the full 18 points, and the rest fell between 16 and 17.8 points. Except for Google: its measly 6 points set it apart as the worst option by far. Since Play Protect is the default security option for Android app downloads, this is great news for bad actors. The rest of us would do well to study the top half of that list. iOS users excepted.

Based in Magdeburg, Germany, research institute AV-Test pits the world’s cyber security solutions against its large collection of digital malware samples and makes results available to private users for free. The firm makes its money on consulting for companies and government institutions. AV-Test was founded in 2004 and was just acquired by Ufenau Capital Partners in February of this year.

Cynthia Murrell, August 20, 2021

DuckDuckGo Produces Privacy Income

August 10, 2021

DuckDuckGo advertises that it protects user privacy and does not place targeted ads in search results. Despite its small size, protecting user privacy makes DuckDuckGo a viable alternative to Google. TechRepublic delves into DuckDuckGo’s profits and how privacy is a big money maker in the article, “How DuckDuckGo Makes Money Selling Search, Not Privacy.” DuckDuckGo has had profitable margins since 2014 and made over $100 million in 2020.

Google, Bing, and other companies interested in selling personal data say that collecting it is a necessary evil if search and other services are to work. DuckDuckGo says that is not true. The company’s CEO Gabriel Weinberg said:

“It’s actually a big myth that search engines need to track your personal search history to make money or deliver quality search results. Almost all of the money search engines make (including Google) is based on the keywords you type in, without knowing anything about you, including your search history or the seemingly endless amounts of additional data points they have collected about registered and non-registered users alike. In fact, search advertisers buy search ads by bidding on keywords, not people….This keyword-based advertising is our primary business model.”

Weinberg continued that search engines do not need to track as much personal information as they do in order to personalize customer experiences or make money. Search engines and other online services could limit the amount of user data they track and still generate a profit.
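Weinberg’s model, matching ads to the query string rather than to a user profile, is simple enough to express directly. A minimal sketch with hypothetical keywords and bids; it is not DuckDuckGo’s actual auction logic:

```python
# Minimal sketch of keyword-based ad selection: the ad is chosen from the
# query text alone. No user ID, history, or profile appears anywhere.
# Keywords and bids are hypothetical.

keyword_bids = {
    "car insurance": [("AcmeInsure", 4.50), ("CheapCover", 3.10)],
    "running shoes": [("FleetFoot", 1.20)],
}

def select_ad(query: str):
    """Return the highest-bidding advertiser whose keyword appears in the query."""
    candidates = [
        (bid, advertiser)
        for keyword, bids in keyword_bids.items()
        if keyword in query.lower()
        for advertiser, bid in bids
    ]
    return max(candidates)[1] if candidates else None

print(select_ad("best car insurance rates"))  # AcmeInsure
```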

Google made over $147 billion in 2020, but DuckDuckGo’s $100 million is not a small number either. DuckDuckGo’s market share is greater than Bing’s and, in the US market, second only to Google’s. DuckDuckGo is like the Little Engine That Could: a hard working marketing operation that keeps chugging along while batting the privacy beach ball down the Madison Avenue sidewalk.

Whitney Grace, August 10, 2021

COVID Forces Google to Show Its Work and Cite Sources

August 10, 2021

Do you remember math class, where you were told to show your work, or essay writing, where you had to cite your sources? Google has decided to do the same thing with its search results, says Today Online in the article, “Google Is Starting To Tell You How It Found Search Results.” Google wants to share with users why they are shown particular results. Soon Google will display an option within search results that lets users see how results were matched to their query. Google wants users to know where their search results come from so they can better judge relevancy.

Google might not respect users’ privacy, but it does want to offer better transparency in search results. Google wants to explain itself and help its users make better decisions:

“Google has been making changes to give users more context about the results its search engine provides. Earlier this year it introduced panels to tell users about the sources of the information they are seeing. It has also started warning users when a topic is rapidly evolving and search results might not be reliable.”

Google search makes money by selling ads and sponsoring content in search results. Google labels any sponsored results with an “ad” tag. However, one can assume that Google pushes more sponsored content into search results than it tells users. Helping users understand content and make informed choices is a great way to educate them. Google is not being altruistic, though. Misinformation about vaccines and COVID-19 has spread like wildfire since the last US presidential administration. Users have demanded that Google, Facebook, and other tech companies be held accountable, since their platforms are used to spread misinformation. Google sharing the why behind search results is a start, but how many people will actually read these explanations?

Whitney Grace, August 10, 2021

Another Perturbation of the Intelware Market: Apple Cores Forbidden Fruit

August 6, 2021

It may be tempting for some to view Apple’s decision as implementing a classic man-in-the-middle process. If the information in “Apple Plans to Scan US iPhones for Child Abuse Imagery” is correct, the maker of the iPhone has encroached on the intelware service firms’ bailiwick. The paywalled newspaper reports:

Apple intends to install software on American iPhones to scan for child abuse imagery

The approach, dubbed “neuralMatch,” runs on the iPhone itself, providing functionality substantially similar to other intelware vendors’ methods for obtaining data about a user’s actions.

The article concludes:

According to people briefed on the plans, every photo uploaded to iCloud in the US will be given a “safety voucher” saying whether it is suspect or not. Once a certain number of photos are marked as suspect, Apple will enable all the suspect photos to be decrypted and, if apparently illegal, passed on to the relevant authorities.
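As reported, the flow is an on-device match against a set of known-image fingerprints, a voucher attached to each upload, and a server-side count that gates any review. A minimal sketch of that flow follows; every name and the threshold value are assumptions, and the actual neuralMatch hashing is a perceptual scheme Apple has not disclosed:

```python
# Minimal sketch of the reported voucher-and-threshold flow.
# All names and the threshold value are hypothetical; 'fingerprint' stands in
# for Apple's undisclosed perceptual hash ("neuralMatch").

KNOWN_CSAM_FINGERPRINTS = {"fp_a1", "fp_b2"}  # hypothetical fingerprint set
REVIEW_THRESHOLD = 30                          # hypothetical cutoff

def make_safety_voucher(photo_fingerprint: str) -> dict:
    """On-device step: attach a voucher saying whether the photo matched."""
    return {"fingerprint": photo_fingerprint,
            "suspect": photo_fingerprint in KNOWN_CSAM_FINGERPRINTS}

def review_account(vouchers: list[dict]) -> bool:
    """Server-side step: decryption and review are enabled only past the threshold."""
    return sum(v["suspect"] for v in vouchers) >= REVIEW_THRESHOLD
```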

Observations:

  1. The idea allows Apple to provide a function likely to be of interest to law enforcement and intelligence professionals; for example, responding to a request for a report about a phone whose filtered and flagged data amount to metadata about a user’s actions
  2. Specialized software companies may have an opportunity to refine existing intelware or develop a new category of specialized services to make sense of data about on-phone actions
  3. The proposal, if implemented, would create a PR opportunity for either Apple or its critics to try to leverage
  4. Questions about the on-phone filtering and metadata (if any) would add friction to some legal matters.

One question: How similar is this proposed Apple service to the operation of intelware like that allegedly available from the Hacking Team, NSO Group, and other vendors? Another question: Is this monitoring a trial balloon or has the system and method been implemented in test locations; for example, China or an Eastern European country?

Stephen E Arnold, August 6, 2021

About Privacy? You Ask

July 30, 2021

Though the issue of privacy was not central to the recent US Supreme Court case TransUnion v. Ramirez, the Court’s majority opinion may have far-reaching implications for privacy rights. The National Law Review considers, “Did the US Supreme Court Just Gut Privacy Law Enforcement?” At issue is the difference between causing provable harm and simply violating a law. Writer Theodore F. Claypoole explains:

“The relevant decision in Transunion involves standing to sue in federal court. The court found that to have Constitutional standing to sue in federal court, a plaintiff must show, among other things, that the plaintiff suffered concrete injury in fact, and central to assessing concreteness is whether the asserted harm has a close relationship to a harm traditionally recognized as providing a basis for a lawsuit in American courts. The court makes a separation between a plaintiff’s statutory cause of action to sue a defendant over the defendant’s violation of federal law, and a plaintiff’s suffering concrete harm because of the defendant’s violation of federal law. It claims that under the Constitution, an injury in law is not automatically an injury in fact. A risk of future harm may allow an injunction to prevent the future harm, but does not magically qualify the plaintiff to receive damages. … This would mean that some of the ‘injuries’ that privacy plaintiffs have claimed to establish standing, like increased anxiety over a data exposure or the possibility that their data may be abused by criminals in the future, are less likely to resonate in some future cases.”

The opinion directly affects only the ability to sue in federal court, not on the state level. However, California aside, states tend to follow SCOTUS’ lead. Since when do we require proof of concrete harm before punishing lawbreakers? “Never before,” according to dissenting Justice Clarence Thomas. It will be years before we see how this ruling affects privacy cases, but Claypoole predicts it will harm plaintiffs and privacy-rights lawyers alike. He notes it would take an act of Congress to counter the ruling, but (of course) Democrats and Republicans have different priorities regarding privacy laws.

Cynthia Murrell, July 30, 2021

Facial Recognition: More Than Faces

July 29, 2021

Facial recognition software is not just for law enforcement anymore. Israel-based firm AnyVision’s clients include retail stores, hospitals, casinos, sports stadiums, and banks. Even schools are using the software to track minors with, it appears, nary a concern for their privacy. We learn this and more from, “This Manual for a Popular Facial Recognition Tool Shows Just How Much the Software Tracks People” at The Markup. Writer Alfred Ng reports that AnyVision’s 2019 user guide reveals the software logs and analyzes all faces that appear on camera, not only those belonging to persons of interest. A representative boasted that, during a week-long pilot program at the Santa Fe Independent School District in Texas, the software logged over 164,000 detections and picked up one student 1100 times.

There are a couple of privacy features built in, but they are not turned on by default. “Privacy Mode” only logs faces of those on a watch list, and “GDPR Mode” blurs non-watch-listed faces on playbacks and downloads. (Of course, what is blurred can be unblurred.) Whether a client uses those options depends on its use case and, importantly, local privacy regulations. Ng observes:
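The difference between the default behavior and “Privacy Mode” boils down to where a watch-list filter is applied. A minimal sketch of the logging-side difference, with hypothetical identifiers; this is not AnyVision’s actual code:

```python
# Minimal sketch of the logging-side difference the manual describes.
# Identifiers are hypothetical.

WATCH_LIST = {"person_017", "person_042"}  # hypothetical watch-list IDs

def log_detection(face_id: str, privacy_mode: bool, log: list):
    """Default: log every face seen. Privacy Mode: log only watch-list hits."""
    if not privacy_mode or face_id in WATCH_LIST:
        log.append(face_id)

detections = ["person_001", "person_017", "person_002"]
default_log, private_log = [], []
for face in detections:
    log_detection(face, privacy_mode=False, log=default_log)  # logs all 3
    log_detection(face, privacy_mode=True, log=private_log)   # logs only 1
```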

“The growth of facial recognition has raised privacy and civil liberties concerns over the technology’s ability to constantly monitor people and track their movements. In June, the European Data Protection Board and the European Data Protection Supervisor called for a facial recognition ban in public spaces, warning that ‘deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places.’ Lawmakers, privacy advocates, and civil rights organizations have also pushed against facial recognition because of error rates that disproportionately hurt people of color. A 2018 research paper from Joy Buolamwini and Timnit Gebru highlighted how facial recognition technology from companies like Microsoft and IBM is consistently less accurate in identifying people of color and women. In December 2019, the National Institute of Standards and Technology also found that the majority of facial recognition algorithms exhibit more false positives against people of color. There have been at least three cases of a wrongful arrest of a Black man based on facial recognition.”

Schools that have implemented facial recognition software say it is an effort to prevent school shootings, a laudable goal. However, once in place it is tempting to use it for less urgent matters. Ng reports the Texas City Independent School District has used it to identify one student who was licking a security camera and to have another removed from his sister’s graduation because he had been expelled. As Georgetown University’s Clare Garvie points out:

“The mission creep issue is a real concern when you initially build out a system to find that one person who’s been suspended and is incredibly dangerous, and all of a sudden you’ve enrolled all student photos and can track them wherever they go. You’ve built a system that’s essentially like putting an ankle monitor on all your kids.”

Is this what we really want as a society? Never mind, it is probably a bit late for that discussion.

Cynthia Murrell, July 29, 2021

More TikTok Questions

June 30, 2021

I read “Dutch Group Launches Data Harvesting Claim against TikTok.” The write up states:

A Dutch consumer group is launching a 1.5 billion euro ($1.8 billion) claim against TikTok over what it alleges is unlawful harvesting of personal data from users of the popular video sharing platform.

Hey, TikTok is for young people and the young at heart. What’s the surveillance angle?

The write up adds:

“The conduct of TikTok is pure exploitation,” Consumentenbond director Sandra Molenaar said in a statement.

What’s TikTok say? Here you go:

TikTok responded in an emailed statement saying the company is “committed to engage with external experts and organizations to make sure we’re doing what we can to keep people on TikTok safe.” It added that “privacy and safety are top priorities for TikTok and we have robust policies, processes and technologies in place to help protect all users, and our teenage users in particular.”

Some Silicon Valley pundits agree with the China-linked, allegedly harmless app and content provider: no big deal. Are the Dutch overreacting or just acting in a responsible manner? I lean toward responsible.

Stephen E Arnold, June 30, 2021

Google Tracking: Not Too Obvious Angle, Right?

June 18, 2021

Apple is the privacy outfit. Remember? Google wants to do away with third party cookies, right? Yet Apple was apparently unaware that it was giving up users’ information. Now Google has added a new, super duper free service. I learned about this wonderful freebie in “Google Workspace Is Now Free for Everyone — Here’s How to Get It.” I noted this paragraph:

Anyone with a Google account can use the integrated platform (formerly known as G Suite) to collaborate on the search giant’s productivity apps.

Free. Register. Agree to the terms.

Bingo. Magical, stateful opportunities for any vendor using this unbeatable approach. Need more? The Google will have a premium experience on offer soon.

Cookies? Nope. A better method, I posit. And if there is some Fancy Dan tracking? Apple did not know some stuff, and I might wager Google won’t either.
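The snark has a technical core: a signed-in account is a stable first-party identifier, so nothing as fragile as a third-party cookie is required. A minimal sketch of the idea, with a hypothetical event log that is in no way Google’s actual telemetry:

```python
# Minimal sketch: once a user is signed in, every request carries a stable
# first-party identifier, so third-party cookies become unnecessary.
# Hypothetical event log for illustration only.

events = []

def log_event(account_id: str, action: str, doc: str):
    """Server-side logging keyed to the account, not to any cookie."""
    events.append({"account": account_id, "action": action, "doc": doc})

log_event("user@example.com", "edit", "Q3 plan")
log_event("user@example.com", "share", "Q3 plan")

# The full activity stream is reconstructable from the account key alone.
history = [e for e in events if e["account"] == "user@example.com"]
```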

Stephen E Arnold, June 18, 2021

TikTok: What Is the Problem? None to Sillycon Valley Pundits.

June 18, 2021

I remember making a comment in a DarkCyber video about the lack of risk TikTok posed to its users. I think I heard a couple of Sillycon Valley pundits suggest that TikTok is no big deal. Chinese links? Hey, so what. These are short videos. Harmless.

Individuals like this are lost in clouds of unknowing with a dusting of gold and silver naive sparkles.

“TikTok Has Started Collecting Your ‘Faceprints’ and ‘Voiceprints.’ Here’s What It Could Do With Them” provides some color for parents whose children are probably tracked, mapped, and imaged:

Recently, TikTok made a change to its U.S. privacy policy, allowing the company to “automatically” collect new types of biometric data, including what it describes as “faceprints” and “voiceprints.” TikTok’s unclear intent, the permanence of the biometric data and potential future uses for it have caused concern.

Well, gee whiz. The write up is pretty good, but it leaves out a couple of uses of these types of data (a minimal sketch follows the list):

  • Cross correlate the images with other data about a minor, young adult, college student, or aging lurker
  • Feed the data into analytic systems so that predictions can be made about the “flexibility” of certain individuals
  • Cluster young people into egg cartons so fellow travelers and their weaknesses can be exploited for nefarious or really good purposes.
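To make the first bullet concrete: cross-correlation is just a join on a shared identifier. A minimal sketch with entirely invented data:

```python
# Minimal sketch of the cross-correlation bullet above: linking biometric
# records to other data is an ordinary join on a shared identifier.
# All data here is invented for illustration.

faceprints = {"user_42": "embedding_bytes..."}
other_data = {"user_42": {"age_estimate": 15, "location": "Ohio"}}

profile = {
    uid: {"faceprint": fp, **other_data.get(uid, {})}
    for uid, fp in faceprints.items()
}
# One dictionary lookup now ties a face to an age and a place.
```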

Will the Sillycon Valley real journalists get the message? Maybe if I convert this to a TikTok video.

Stephen E Arnold, June 18, 2021
