About Privacy, You Ask?

July 30, 2021

Though the issue of privacy was not central to the recent US Supreme Court case TransUnion v. Ramirez, the Court’s majority opinion may have far-reaching implications for privacy rights. The National Law Review considers, “Did the US Supreme Court Just Gut Privacy Law Enforcement?” At issue is the difference between causing provable harm and simply violating a law. Writer Theodore F. Claypoole explains:

“The relevant decision in Transunion involves standing to sue in federal court. The court found that to have Constitutional standing to sue in federal court, a plaintiff must show, among other things, that the plaintiff suffered concrete injury in fact, and central to assessing concreteness is whether the asserted harm has a close relationship to a harm traditionally recognized as providing a basis for a lawsuit in American courts. The court makes a separation between a plaintiff’s statutory cause of action to sue a defendant over the defendant’s violation of federal law, and a plaintiff’s suffering concrete harm because of the defendant’s violation of federal law. It claims that under the Constitution, an injury in law is not automatically an injury in fact. A risk of future harm may allow an injunction to prevent the future harm, but does not magically qualify the plaintiff to receive damages. … This would mean that some of the ‘injuries’ that privacy plaintiffs have claimed to establish standing, like increased anxiety over a data exposure or the possibility that their data may be abused by criminals in the future, are less likely to resonate in some future cases.”

The opinion directly affects only the ability to sue in federal court, not at the state level. However, California aside, states tend to follow SCOTUS’ lead. Since when do we require proof of concrete harm before punishing lawbreakers? “Never before,” according to dissenting Justice Clarence Thomas. It will be years before we see how this ruling affects privacy cases, but Claypoole predicts it will harm plaintiffs and privacy-rights lawyers alike. He notes it would take an act of Congress to counter the ruling, but (of course) Democrats and Republicans have different priorities regarding privacy laws.

Cynthia Murrell, July 30, 2021

Facial Recognition: More Than Faces

July 29, 2021

Facial recognition software is not just for law enforcement anymore. Israel-based firm AnyVision’s clients include retail stores, hospitals, casinos, sports stadiums, and banks. Even schools are using the software to track minors with, it appears, nary a concern for their privacy. We learn this and more from “This Manual for a Popular Facial Recognition Tool Shows Just How Much the Software Tracks People” at The Markup. Writer Alfred Ng reports that AnyVision’s 2019 user guide reveals the software logs and analyzes all faces that appear on camera, not only those belonging to persons of interest. A representative boasted that, during a week-long pilot program at the Santa Fe Independent School District in Texas, the software logged over 164,000 detections and picked up one student 1,100 times.

There are a couple of privacy features built in, but they are not turned on by default. “Privacy Mode” only logs faces of those on a watch list, and “GDPR Mode” blurs non-watch-listed faces on playbacks and downloads. (Of course, what is blurred can be unblurred.) Whether a client uses those options depends on its use case and, importantly, local privacy regulations.
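
How might such a mode work? A minimal sketch, assuming OpenCV and its bundled Haar cascade face detector; the watch-list check is a stand-in for whatever matcher the vendor actually ships:

```python
# Sketch of a "GDPR Mode"-style blur: obscure every detected face that
# is not on a watch list. The watch-list test below is a placeholder.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_non_watchlist_faces(frame, is_on_watchlist):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face = frame[y:y + h, x:x + w]
        if not is_on_watchlist(face):
            # Heavy Gaussian blur applied to the face region only.
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 30)
    return frame
```

Note the blur is applied to pixels the system has already captured. If the original frames are retained, the redaction is cosmetic, which is the point of the parenthetical above.

Ng observes: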

“The growth of facial recognition has raised privacy and civil liberties concerns over the technology’s ability to constantly monitor people and track their movements. In June, the European Data Protection Board and the European Data Protection Supervisor called for a facial recognition ban in public spaces, warning that ‘deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places.’ Lawmakers, privacy advocates, and civil rights organizations have also pushed against facial recognition because of error rates that disproportionately hurt people of color. A 2018 research paper from Joy Buolamwini and Timnit Gebru highlighted how facial recognition technology from companies like Microsoft and IBM is consistently less accurate in identifying people of color and women. In December 2019, the National Institute of Standards and Technology also found that the majority of facial recognition algorithms exhibit more false positives against people of color. There have been at least three cases of a wrongful arrest of a Black man based on facial recognition.”

Schools that have implemented facial recognition software say it is an effort to prevent school shootings, a laudable goal. However, once in place it is tempting to use it for less urgent matters. Ng reports the Texas City Independent School District has used it to identify one student who was licking a security camera and to have another removed from his sister’s graduation because he had been expelled. As Georgetown University’s Clare Garvie points out:

“The mission creep issue is a real concern when you initially build out a system to find that one person who’s been suspended and is incredibly dangerous, and all of a sudden you’ve enrolled all student photos and can track them wherever they go. You’ve built a system that’s essentially like putting an ankle monitor on all your kids.”

Is this what we really want as a society? Never mind, it is probably a bit late for that discussion.

Cynthia Murrell, July 29, 2021

More TikTok Questions

June 30, 2021

I read “Dutch Group Launches Data Harvesting Claim against TikTok.” The write up states:

A Dutch consumer group is launching a 1.5 billion euro ($1.8 billion) claim against TikTok over what it alleges is unlawful harvesting of personal data from users of the popular video sharing platform.

Hey, TikTok is for young people and the young at heart. What’s the surveillance angle?

The write up adds:

“The conduct of TikTok is pure exploitation,” Consumentenbond director Sandra Molenaar said in a statement.

What’s TikTok say? Here you go:

TikTok responded in an emailed statement saying the company is “committed to engage with external experts and organizations to make sure we’re doing what we can to keep people on TikTok safe.” It added that “privacy and safety are top priorities for TikTok and we have robust policies, processes and technologies in place to help protect all users, and our teenage users in particular.”

Some Silicon Valley pundits side with the China-linked, allegedly harmless app and content provider: no big deal, they say. Are the Dutch overreacting or just acting in a responsible manner? I lean toward responsible.

Stephen E Arnold, June 30, 2021

Google Tracking: Not Too Obvious Angle, Right?

June 18, 2021

Apple is the privacy outfit. Remember? Google wants to do away with third-party cookies, right? Yet Apple was sufficiently unaware of its own systems that it did not know the company was providing a user’s information. Now Google has added a new, super duper free service. I learned about this wonderful freebie in “Google Workspace Is Now Free for Everyone — Here’s How to Get It.” I noted this paragraph:

Anyone with a Google account can use the integrated platform (formerly known as G Suite) to collaborate on the search giant’s productivity apps.

Free. Register. Agree to the terms.

Bingo. Magical, stateful opportunities for any vendor using this unbeatable approach. Need more? The Google will have a premium experience on offer soon.

Cookies? Nope. A better method, I posit. And if there is some Fancy Dan tracking? Apple did not know some stuff, and I would wager Google won’t either.

Stephen E Arnold, June 18, 2021

TikTok: What Is the Problem? None to Sillycon Valley Pundits.

June 18, 2021

I remember making a comment in a DarkCyber video about the lack of risk TikTok posed to its users. I think I heard a couple of Sillycon Valley pundits suggest that TikTok is no big deal. Chinese links? Hey, so what. These are short videos. Harmless.

Individuals like this are lost in clouds of unknowing with a dusting of gold and silver naive sparkles.

“TikTok Has Started Collecting Your ‘Faceprints’ and ‘Voiceprints.’ Here’s What It Could Do With Them” provides some color for parents whose children are probably tracked, mapped, and imaged:

Recently, TikTok made a change to its U.S. privacy policy, allowing the company to “automatically” collect new types of biometric data, including what it describes as “faceprints” and “voiceprints.” TikTok’s unclear intent, the permanence of the biometric data and potential future uses for it have caused concern.

Well, gee whiz. The write up is pretty good, but it leaves out a couple of uses for these types of data:

  • Cross-correlate the images with other data about a minor, young adult, college student, or aging lurker
  • Feed the data into analytic systems so that predictions can be made about the “flexibility” of certain individuals
  • Cluster young people into egg cartons so fellow travelers and their weaknesses can be exploited for nefarious or really good purposes (a toy sketch follows this list).
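
A toy sketch of that third item, assuming the biometric “faceprints” reduce to numeric embedding vectors; the data and cluster count here are invented:

```python
# Cluster stand-in face-embedding vectors into groups ("egg cartons").
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
embeddings = rng.normal(size=(500, 128))  # placeholder 128-dim faceprints

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)

# Each cluster groups users whose biometric signatures look alike,
# ready to be cross-referenced with other data sources.
for cluster_id in range(8):
    print(cluster_id, int((labels == cluster_id).sum()))
```

Swap the random vectors for real embeddings and the egg cartons fill themselves.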

Will the Sillycon Valley real journalists get the message? Maybe if I convert this to a TikTok video.

Stephen E Arnold, June 18, 2021

Google: The High School Science Club Management Method Cracks Location Privacy

June 2, 2021

How does one keep one’s location private? Good question. “Apple Is Eating Our Lunch: Google Employees Admit in Lawsuit That the Company Made It Nearly Impossible for Users to Keep Their Location Private” explains:

Google continued collecting location data even when users turned off various location-sharing settings, made popular privacy settings harder to find, and even pressured LG and other phone makers into hiding settings precisely because users liked them, according to the documents.

The fix? Enter random locations in order to baffle the high school science club whiz kids.
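
In crude form, that baffling amounts to jittering coordinates before any app sees them. A toy sketch; the noise scale is an arbitrary assumption, not a recommendation:

```python
# Offset real coordinates by random noise to muddy a location profile.
import random

def fuzz_location(lat: float, lon: float, radius_deg: float = 0.05):
    """Return coordinates shifted by up to radius_deg in each direction."""
    return (
        lat + random.uniform(-radius_deg, radius_deg),
        lon + random.uniform(-radius_deg, radius_deg),
    )

print(fuzz_location(38.2527, -85.7585))  # somewhere near Louisville
```

The write up explains: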

The unsealed versions of the documents paint an even more detailed picture of how Google obscured its data collection techniques, confusing not just its users but also its own employees. Google uses a variety of avenues to collect user location data, according to the documents, including WiFi and even third-party apps not affiliated with Google, forcing users to share their data in order to use those apps or, in some cases, even connect their phones to WiFi.

Interesting. The question is, “Why?”

My hunch is that geolocation is a darned useful item of data. Do a bit of sleuthing and check out the importance of geolocation and cross-correlation in policeware and intelware solutions. Marketing finds the information useful as well. Does Google have a master plan? Sure, make money. The high school science club wants to keep the data flowing for three reasons:

First, ever-increasing revenues are important. Without cash flow, Google’s tough-to-control costs could bring down the company. Geolocation data are valuable and provide a knitting needle to weave other items of information into a detailed just-for-you quilt.

Second, Amazon, Apple, and Facebook pose significant threats to the Google. Amazon is, well, doing its Bezos bulldozer thing. Apple is pushing its quasi privacy campaign to give “users” control. And Facebook is unpredictable and trying to out-Google Google in advertising and user engagement. These outfits may be monopolies, but monopolies have to compete, so high-value data become the weaponized drones of these business wars.

Third, Google’s current high school science club management is mostly unaware of how the company gathers data. The systems and methods were institutionalized years ago. What persists are the modules of code which just sort of mostly do their thing. Newbies use the components, and the data collection just functions. Why fix it if it isn’t broken? That assumes someone knows how to fiddle with legacy Google.

Net net: Confusion. What high school science club admits to not having the answers? I can’t name one, including my high school science club in 1958. Some informed methods are wonderful, and lesser beings should not meddle. I read the article and think, “If you don’t get it, get out.”

Stephen E Arnold, June 2, 2021

And about That Windows 10 Telemetry?

May 28, 2021

The article “How to Disable Telemetry and Data Collection in Windows 10” reveals an important fact. Most Windows telemetry is turned on by default. But the write up does not explain what analyses occur for data on the company’s cloud services or for the Outlook email program. I find this amusing, but Microsoft — despite the SolarWinds and Exchange Server missteps — is perceived as the good outfit among the collection of ethical exemplars of US big technology firms.
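
The article’s checklist varies by Windows edition, but the best-known lever is the documented AllowTelemetry policy value. A minimal sketch of setting it in Python (run as Administrator on Windows; this is one knob, not the article’s full procedure):

```python
# Set the AllowTelemetry policy value to its lowest level.
# 0 = Security (honored on Enterprise/Education); 1 = Basic elsewhere.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "AllowTelemetry", 0, winreg.REG_DWORD, 0)
```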

I read “Three Years Until We’re in Orwell’s 1984 AI Surveillance Panopticon, Warns Microsoft Boss.” Do the sentiments presented as those allegedly representing the actual factual views of the Microsoft executive Brad Smith reference the Windows 10 telemetry and data collection article mentioned above? Keep in mind that Mr. Smith believed at one time that 1,000 bad actors went after Microsoft and created the minor security lapses which affected a few minor US government agencies and sparked low profile US law enforcement entities into pre-emptive action on third-party computers to help address certain persistent threats.

I chortled when I read this passage:

Brad Smith warns the science fiction of a government knowing where we are at all times, and even what we’re feeling, is becoming reality in parts of the world. Smith says it’s “difficult to catch up” with ever-advancing AI, which was revealed is being used to scan prisoners’ emotions in China.

Now about the Microsoft telemetry and other interesting processes? What about the emotions of a Windows 10 user when the printer does not work after an update? Yeah.

Stephen E Arnold, May 28, 2021

Could the Google Cloud Secretly Hide Sensitive Information?

April 7, 2021

It is odd to see Google interested in protecting user information, but Alphabet Inc. follows dollar signs, and the high demand for digital security is practically flashing bright neon ones. It is not surprising Google is investing its talents in security development. TechRadar explains how simpler tooling could lead to better security in the article “Google Cloud Is Making It Easier For Developers To Smuggle ‘Secrets’ In Their Code.”

A big problem in application development is accidentally exposing sensitive information via the source code. Bad actors can get at an application’s code, then steal the sensitive information. Google Cloud has fused its Secret Manager service (a secure method to store private information) with its Cloud Code IDE extensions, which speed up cloud-based application development.

The benefits of the merged technologies are:

“The integration allows developers to replace hardcoded data with so-called Secrets, a type of global object available to applications at build or runtime. This way, cloud applications can make use of the sensitive data when needed, but without leaving it exposed in the codebase.

According to Google, the new integration will make it easier for developers to build secure applications, while also avoiding the complexities of securing sensitive data via alternative methods.”

In the past, developers hardcoded sensitive information into their codebases. That made the data easier to recall, but savvy bad actors could access it. Many development teams know that hardcoding sensitive information is a security risk, so they make users run the gauntlet of authentication services.
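
The alternative the quote describes looks roughly like this; a minimal sketch using the google-cloud-secret-manager client library, with the project and secret names invented:

```python
# Fetch a secret at runtime instead of hardcoding it in the codebase.
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
name = "projects/my-example-project/secrets/db-password/versions/latest"

response = client.access_secret_version(request={"name": name})
db_password = response.payload.data.decode("UTF-8")
# Only the reference string appears in source control, not the value.
```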

Secret Manager and the Cloud Code integration could ease that authentication hassle while protecting sensitive information.

Whitney Grace, April 7, 2021

Microsoft: Your Computer, Your Data. That Is a Good One

March 23, 2021

The online news stream is chock full of information about Microsoft’s swing-for-the-fences PR push for Discord. If you are not familiar with the service, I am not going to explain this conduit for those far more youthful than I. Like GitHub, Discord is going to be an interesting property if the Redmond crowd does the deal. If we anticipate Discord becoming part of the Xbox and Teams family, the alleged censorship of software posted to GitHub will be a glimpse of the content challenges in Microsoft’s future.

The more interesting development is the “real” news story “Microsoft Edge Could Soon Share Browsing Data with Windows 10.” The idea is that a person’s computer and the authorized users of the computing device will become one big, happy data family.

The article states:

Called “share browsing data with other Windows features,” it is designed to share data from Edge, such as Favorites or visited sites, with other Windows components. Search is a prime target, and highlighted by Microsoft at the time of writing. Basically, what this means is that users who run searches using the built-in search feature may get Edge results as well.

And what does Microsoft get? Possibilities include:

  • Federated, fine-grained user behavior data
  • Click stream data matched to content on the user’s personal computer
  • Real-time information flows
  • Opportunities to share data with certain entities.

What happens to the user’s computer if said user does not accept such integration? The options range from losing access to certain data to proactive interaction that alters the functioning of the user’s computing device.

Why is this such a good idea? Microsoft, like Amazon, Facebook, and Google, realizes that the days of the Wild West are coming to an end. There are new sheriffs with new ideas about right and wrong.

Thus, get what one can while the gittin’ is good, as the old timers used to say.

But “what about security and privacy,” you ask? One response is, “That’s a good one.” Why not try stand-up?

Stephen E Arnold, March 23, 2021

Dark Patterns: Fool Me Once, Fool Me Twice, Heck Fool Me for Decades

March 18, 2021

I find “dark patterns” interesting, whether it is the one T-Mobile uses to make it almost impossible to opt out of the firm’s data slurping and selling business or the Facebook trick of making “log out” tough to see. There are other, more ingenious methods; for example, to read a super duper white paper, just provide a name and email address. That action flips the switch on spam and scam operations. (Who doesn’t enjoy cybersecurity solutions which cannot detect SolarWinds and Exchange type breaches for three, six, 12 months, or more?)

I read “California Bans Dark Patterns That Trick Users into Giving Away Their Personal Data.” The write up reports:

The concept [Dark Patterns] was coined in 2010 but is slowly being addressed in US legislation, with California this week announcing that it is banning the use of dark patterns that stop users from opting out of the sale of their personal data.

The idea of tricking users began a long time ago. The fine P.T. Barnum said:

There’s a sucker born every minute.

How about those free online services?

The article makes clear that regulators are quick to act. Those using dark patterns have to mend their ways in 30 days.

Then what? Let’s use a free online service and free email to share ideas. I have a better idea. Let’s use Clubhouse to discuss options.

Stephen E Arnold, March 18, 2021
