DuckDuckGo Produces Privacy Income

August 10, 2021

DuckDuckGo advertises that it protects user privacy and does not serve targeted ads in search results. Despite its small size, that privacy stance makes DuckDuckGo a viable alternative to Google. TechRepublic delves into DuckDuckGo’s profits and how privacy is a big money maker in the article, “How DuckDuckGo Makes Money Selling Search, Not Privacy.” DuckDuckGo has been profitable since 2014 and made over $100 million in 2020.

Google, Bing, and other companies interested in selling personal data say that collecting it is a necessary evil; without it, search and other services would not work. DuckDuckGo says that is not true, and the company’s CEO Gabriel Weinberg said:

“It’s actually a big myth that search engines need to track your personal search history to make money or deliver quality search results. Almost all of the money search engines make (including Google) is based on the keywords you type in, without knowing anything about you, including your search history or the seemingly endless amounts of additional data points they have collected about registered and non-registered users alike. In fact, search advertisers buy search ads by bidding on keywords, not people….This keyword-based advertising is our primary business model.”

Weinberg continued that search engines do not need to track as much personal information as they do to personalize customer experiences or make money.  Search engines and other online services could limit the amount of user data they track and still generate a profit.
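Weinberg’s keyword-only model is simple enough to illustrate. Here is a minimal sketch in Python, with hypothetical advertisers, keywords, and bids, of an ad auction that sees nothing but the query terms: no profile, no search history. (Real auctions add second-price rules and quality scores; none of that is reproduced here.)

```python
# Minimal sketch of keyword-based ad selection. The auction sees only the
# query text, never a user profile. Advertisers, keywords, and bids are
# invented for illustration.
bids = {
    "car insurance": [("AcmeInsure", 4.10), ("CarCoverCo", 3.75)],
    "running shoes": [("FleetFoot", 1.20)],
}

def select_ad(query: str):
    """Return the highest-bidding advertiser whose keyword appears in the query."""
    candidates = [
        (advertiser, bid)
        for keyword, entries in bids.items()
        if keyword in query.lower()
        for advertiser, bid in entries
    ]
    return max(candidates, key=lambda c: c[1]) if candidates else None

print(select_ad("cheap car insurance quotes"))  # ('AcmeInsure', 4.1)
```

Nothing in that flow requires knowing who typed the query, which is Weinberg’s point.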

Google made over $147 billion in 2020, but DuckDuckGo’s $100 million is not a small number either. DuckDuckGo’s share of US mobile search has reportedly edged past Bing’s, making it second only to Google in that segment. DuckDuckGo is like the Little Engine That Could: a hard-working marketing operation that keeps chugging along while batting the privacy beach ball along the Madison Avenue sidewalk.

Whitney Grace, August 10, 2021

COVID Forces Google To Show Its Work And Cite Sources

August 10, 2021

Do you remember in math class when you were told to show your work, or when writing an essay you had to cite your sources? Google has decided to do the same thing with its search results, says Today Online in the article, “Google Is Starting To Tell You How It Found Search Results.” Google wants to share with users why they are shown particular results. Soon Google will display an option within search results that allows users to see how results were matched to their query. Google wants users to know where their search results come from to better determine relevancy.

Google might not respect users’ privacy, but it does want to offer better transparency in search results. Google wants to explain itself and help its users make better decisions:

“Google has been making changes to give users more context about the results its search engine provides. Earlier this year it introduced panels to tell users about the sources of the information they are seeing. It has also started warning users when a topic is rapidly evolving and search results might not be reliable.”

Google search makes money by selling ads and sponsoring content in search results. Google labels sponsored results with an “ad” tag. However, one can assume that Google pushes more sponsored content into search results than it tells users. Helping users understand content and make informed choices is a great way to educate them. Google isn’t being altruistic, though. Misinformation about vaccines and COVID-19 has spread like wildfire since the last US presidential administration. Users have demanded that Google, Facebook, and other tech companies be held accountable because their platforms are used to spread misinformation. Google sharing the why behind search results is a start, but how many people will actually read these explanations?

Whitney Grace, August 10, 2021

Another Perturbation of the Intelware Market: Apple Cores Forbidden Fruit

August 6, 2021

It may be tempting for some to view Apple’s decision as implementing a classic man-in-the-middle process. If the information in “Apple Plans to Scan US iPhones for Child Abuse Imagery” is correct, the maker of the iPhone has encroached on the intelware service firms’ bailiwick. The paywalled newspaper reports:

Apple intends to install software on American iPhones to scan for child abuse imagery

The approach — dubbed “neuralMatch” — runs on the iPhone itself, thus providing functionality substantially similar to other intelware vendors’ methods for obtaining data about a user’s actions.

The article concludes:

According to people briefed on the plans, every photo uploaded to iCloud in the US will be given a “safety voucher” saying whether it is suspect or not. Once a certain number of photos are marked as suspect, Apple will enable all the suspect photos to be decrypted and, if apparently illegal, passed on to the relevant authorities.
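The reported design, on-device hash matching plus a threshold before anything is reviewed, can be sketched in a few lines. This is a loose illustration under my own assumptions: the hash function, threshold value, and voucher structure are all hypothetical, and Apple’s system reportedly uses perceptual hashing and cryptographic threshold techniques, neither of which is reproduced here.

```python
import hashlib

# Stand-in for a database of hashes of known-illegal images (empty here).
KNOWN_HASHES: set[str] = set()
THRESHOLD = 30  # hypothetical; the real trigger count was not disclosed

def safety_voucher(image_bytes: bytes) -> dict:
    """Attach a voucher to an upload saying whether it matched the database.
    A real system would use a perceptual hash rather than SHA-256 so that
    resized or re-encoded copies of the same image still match."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return {"hash": digest, "suspect": digest in KNOWN_HASHES}

def ready_for_review(vouchers: list[dict]) -> bool:
    """Only when enough of an account's uploads are marked suspect does
    human review (and decryption) kick in."""
    return sum(v["suspect"] for v in vouchers) >= THRESHOLD
```

The threshold is what separates this from simple per-photo reporting; the interesting policy questions live in who sets it and who audits the hash list.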

Observations:

  1. The idea allows Apple to provide a function likely to be of interest to law enforcement and intelligence professionals; for example, responding to a request for a report about a phone based on the filtered and flagged data and associated metadata
  2. Specialized software companies may have an opportunity to refine existing intelware or develop a new category of specialized services to make sense of data about on-phone actions
  3. The proposal, if implemented, would create a PR opportunity for either Apple or its critics to try to leverage
  4. Legal issues about the on-phone filtering and metadata (if any) would add friction to some legal matters

One question: How similar is this proposed Apple service to the operation of intelware like that allegedly available from the Hacking Team, NSO Group, and other vendors? Another question: Is this monitoring a trial balloon or has the system and method been implemented in test locations; for example, China or an Eastern European country?

Stephen E Arnold, August 6, 2021

About Privacy? You Ask

July 30, 2021

Though the issue of privacy was not central to the recent US Supreme Court case Transunion v. Ramirez, the Court’s majority opinion may have far-reaching implications for privacy rights. The National Law Review considers, “Did the US Supreme Court Just Gut Privacy Law Enforcement?” At issue is the difference between causing provable harm and simply violating a law. Writer Theodore F. Claypoole explains:

“The relevant decision in Transunion involves standing to sue in federal court. The court found that to have Constitutional standing to sue in federal court, a plaintiff must show, among other things, that the plaintiff suffered concrete injury in fact, and central to assessing concreteness is whether the asserted harm has a close relationship to a harm traditionally recognized as providing a basis for a lawsuit in American courts. The court makes a separation between a plaintiff’s statutory cause of action to sue a defendant over the defendant’s violation of federal law, and a plaintiff’s suffering concrete harm because of the defendant’s violation of federal law. It claims that under the Constitution, an injury in law is not automatically an injury in fact. A risk of future harm may allow an injunction to prevent the future harm, but does not magically qualify the plaintiff to receive damages. … This would mean that some of the ‘injuries’ that privacy plaintiffs have claimed to establish standing, like increased anxiety over a data exposure or the possibility that their data may be abused by criminals in the future, are less likely to resonate in some future cases.”

The opinion directly affects only the ability to sue in federal court, not on the state level. However, California aside, states tend to follow SCOTUS’ lead. Since when do we require proof of concrete harm before punishing lawbreakers? “Never before,” according to dissenting Justice Clarence Thomas. It will be years before we see how this ruling affects privacy cases, but Claypoole predicts it will harm plaintiffs and privacy-rights lawyers alike. He notes it would take an act of Congress to counter the ruling, but (of course) Democrats and Republicans have different priorities regarding privacy laws.

Cynthia Murrell, July 30, 2021

Facial Recognition: More Than Faces

July 29, 2021

Facial recognition software is not just for law enforcement anymore. Israel-based firm AnyVision’s clients include retail stores, hospitals, casinos, sports stadiums, and banks. Even schools are using the software to track minors with, it appears, nary a concern for their privacy. We learn this and more from, “This Manual for a Popular Facial Recognition Tool Shows Just How Much the Software Tracks People” at The Markup. Writer Alfred Ng reports that AnyVision’s 2019 user guide reveals the software logs and analyzes all faces that appear on camera, not only those belonging to persons of interest. A representative boasted that, during a week-long pilot program at the Santa Fe Independent School District in Texas, the software logged over 164,000 detections and picked up one student 1100 times.

There are a couple of privacy features built in, but they are not turned on by default. “Privacy Mode” only logs faces of those on a watch list, and “GDPR Mode” blurs non-watch-listed faces on playbacks and downloads. (Of course, what is blurred can be unblurred.) Whether a client uses those options depends on its use case and, importantly, local privacy regulations. Ng observes:

“The growth of facial recognition has raised privacy and civil liberties concerns over the technology’s ability to constantly monitor people and track their movements. In June, the European Data Protection Board and the European Data Protection Supervisor called for a facial recognition ban in public spaces, warning that ‘deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places.’ Lawmakers, privacy advocates, and civil rights organizations have also pushed against facial recognition because of error rates that disproportionately hurt people of color. A 2018 research paper from Joy Buolamwini and Timnit Gebru highlighted how facial recognition technology from companies like Microsoft and IBM is consistently less accurate in identifying people of color and women. In December 2019, the National Institute of Standards and Technology also found that the majority of facial recognition algorithms exhibit more false positives against people of color. There have been at least three cases of a wrongful arrest of a Black man based on facial recognition.”
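Back to those two opt-in features: both boil down to a per-face filtering decision. A minimal sketch, with invented type and field names (the manual excerpts do not expose AnyVision’s actual API), might look like this:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    identity: str      # matched identity, or an anonymous track ID
    timestamp: float
    image: bytes

def blur(image: bytes) -> bytes:
    return b"<blurred>"  # placeholder; a real system would pixelate the face region

def handle_detection(d: Detection, watchlist: set[str],
                     privacy_mode: bool = False, gdpr_mode: bool = False):
    """Sketch of per-face handling under the two privacy settings described above."""
    on_watchlist = d.identity in watchlist
    if privacy_mode and not on_watchlist:
        return None  # Privacy Mode: faces off the watch list are never logged
    image = blur(d.image) if (gdpr_mode and not on_watchlist) else d.image
    return {"identity": d.identity, "time": d.timestamp, "image": image}
```

Note that with both flags false, the default per the manual, every face gets logged in full.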

Schools that have implemented facial recognition software say it is an effort to prevent school shootings, a laudable goal. However, once in place it is tempting to use it for less urgent matters. Ng reports the Texas City Independent School District has used it to identify one student who was licking a security camera and to have another removed from his sister’s graduation because he had been expelled. As Georgetown University’s Clare Garvie points out:

“The mission creep issue is a real concern when you initially build out a system to find that one person who’s been suspended and is incredibly dangerous, and all of a sudden you’ve enrolled all student photos and can track them wherever they go. You’ve built a system that’s essentially like putting an ankle monitor on all your kids.”

Is this what we really want as a society? Never mind, it is probably a bit late for that discussion.

Cynthia Murrell, July 29, 2021

More TikTok Questions

June 30, 2021

I read “Dutch Group Launches Data Harvesting Claim against TikTok.” The write up states:

A Dutch consumer group is launching a 1.5 billion euro ($1.8 billion) claim against TikTok over what it alleges is unlawful harvesting of personal data from users of the popular video sharing platform.

Hey, TikTok is for young people and the young at heart. What’s the surveillance angle?

The write up adds:

“The conduct of TikTok is pure exploitation,” Consumentenbond director Sandra Molenaar said in a statement.

What’s TikTok say? Here you go:

TikTok responded in an emailed statement saying the company is “committed to engage with external experts and organizations to make sure we’re doing what we can to keep people on TikTok safe.” It added that “privacy and safety are top priorities for TikTok and we have robust policies, processes and technologies in place to help protect all users, and our teenage users in particular.”

Some Silicon Valley pundits agree with the China-linked, allegedly harmless app and content provider: no big deal. Are the Dutch overreacting or just acting in a responsible manner? I lean toward responsible.

Stephen E Arnold, June 30, 2021

Google Tracking: Not Too Obvious Angle, Right?

June 18, 2021

Apple is the privacy outfit. Remember? Google wants to do away with third party cookies, right? Yet Apple was apparently unaware that it was providing a user’s information. Now Google has added a new, super duper free service. I learned about this wonderful freebie in “Google Workspace Is Now Free for Everyone — Here’s How to Get It.” I noted this paragraph:

Anyone with a Google account can use the integrated platform (formerly known as G Suite) to collaborate on the search giant’s productivity apps.

Free. Register. Agree to the terms.

Bingo. Magical, stateful opportunities for any vendor using this unbeatable approach. Need more? The Google will have a premium experience on offer soon.

Cookies? Nope. A better method, I posit. And if there is some Fancy Dan tracking? Apple did not know some stuff, and I might wager Google won’t either.

Stephen E Arnold, June 18, 2021

TikTok: What Is the Problem? None to Sillycon Valley Pundits.

June 18, 2021

I remember making a comment in a DarkCyber video about the lack of risk TikTok posed to its users. I think I heard a couple of Sillycon Valley pundits suggest that TikTok is no big deal. Chinese links? Hey, so what. These are short videos. Harmless.

Individuals like this are lost in clouds of unknowing with a dusting of gold and silver naive sparkles.

“TikTok Has Started Collecting Your ‘Faceprints’ and ‘Voiceprints.’ Here’s What It Could Do With Them” provides some color for parents whose children are probably tracked, mapped, and imaged:

Recently, TikTok made a change to its U.S. privacy policy, allowing the company to “automatically” collect new types of biometric data, including what it describes as “faceprints” and “voiceprints.” TikTok’s unclear intent, the permanence of the biometric data and potential future uses for it have caused concern.

Well, gee whiz. The write up is pretty good, but it leaves out a couple of uses for these types of data:

  • Cross-correlate the images with other data about a minor, young adult, college student, or aging lurker
  • Feed the data into analytic systems so that predictions can be made about the “flexibility” of certain individuals
  • Cluster young people into egg cartons so fellow travelers and their weaknesses can be exploited for nefarious or really good purposes (see the sketch below).
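The clustering idea in that last bullet is ordinary analytics, not science fiction. A toy sketch, using purely hypothetical two-dimensional “embedding” vectors in place of real faceprint or voiceprint data, shows how little code it takes:

```python
import random

def kmeans(points, k=2, iters=20):
    """Bare-bones k-means: group similar vectors into k clusters."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Hypothetical biometric embeddings; real ones would have hundreds of dimensions.
embeddings = [(random.random(), random.random()) for _ in range(50)]
for i, carton in enumerate(kmeans(embeddings)):
    print(f"egg carton {i}: {len(carton)} users")
```

Swap the random tuples for vectors derived from faceprints and voiceprints, and the egg cartons fill themselves.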

Will the Sillycon Valley real journalists get the message? Maybe if I convert this to a TikTok video.

Stephen E Arnold, June 18, 2021

Google: The High School Science Club Management Method Cracks Location Privacy

June 2, 2021

How does one keep one’s location private? Good question. “Apple Is Eating Our Lunch: Google Employees Admit in Lawsuit That the Company Made It Nearly Impossible for Users to Keep Their Location Private” explains:

Google continued collecting location data even when users turned off various location-sharing settings, made popular privacy settings harder to find, and even pressured LG and other phone makers into hiding settings precisely because users liked them, according to the documents.

The fix. Enter random locations in order to baffle the high school science club whiz kids. The write up explains:

The unsealed versions of the documents paint an even more detailed picture of how Google obscured its data collection techniques, confusing not just its users but also its own employees. Google uses a variety of avenues to collect user location data, according to the documents, including WiFi and even third-party apps not affiliated with Google, forcing users to share their data in order to use those apps or, in some cases, even connect their phones to WiFi.
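As for the random-location fix: the mechanics are trivial. A minimal sketch, with an arbitrary noise radius picked for illustration:

```python
import math
import random

def fuzz_location(lat: float, lon: float, radius_km: float = 25.0):
    """Shift a coordinate by a random offset of up to radius_km.
    The radius is arbitrary; choose one larger than the precision you want
    to deny. One degree of latitude is roughly 111 km."""
    distance = radius_km * math.sqrt(random.random())  # uniform over the disk
    bearing = random.uniform(0, 2 * math.pi)
    dlat = (distance / 111.0) * math.cos(bearing)
    dlon = (distance / (111.0 * math.cos(math.radians(lat)))) * math.sin(bearing)
    return lat + dlat, lon + dlon

print(fuzz_location(38.25, -85.76))  # somewhere within ~25 km of Louisville
```

Of course, this only helps against apps that accept the coordinates you hand them; per the unsealed documents, Google collects location through side channels like WiFi as well.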

Interesting. The question is, “Why?”

My hunch is that geolocation is a darned useful item of data. Do a bit of sleuthing and check out the importance of geolocation and cross-correlation in policeware and intelware solutions. Marketing finds the information useful as well. Does Google have a master plan? Sure, make money. The high school science club wants to keep the data flowing for three reasons:

First, ever increasing revenues are important. Without cash flow, Google’s tough-to-control costs could bring down the company. Geolocation data are valuable and provide a knitting needle to weave other items of information into a detailed just-for-you quilt.

Second, Amazon, Apple, and Facebook pose significant threats to the Google. Amazon is, well, doing its Bezos bulldozer thing. Apple is pushing its quasi privacy campaign to give “users” control. And Facebook is unpredictable, trying to out-Google Google in advertising and user engagement. These outfits may be monopolies, but monopolies have to compete, so high-value data become the weaponized drones of these business wars.

Third, Google’s current high school science club management is mostly unaware of how the company gathers data. The systems and methods were institutionalized years ago. What persists are the modules of code, which just sort of mostly do their thing. Newbies use the components, and the data collection just functions. Why fix it if it isn’t broken? That assumes someone knows how to fiddle with legacy Google.

Net net: Confusion. What high school science club admits to not having the answers? I can’t name one, including my high school science club in 1958. Some informed methods are wonderful, and lesser beings should not meddle. I read the article and think, “If you don’t get it, get out.”

Stephen E Arnold, June 2, 2021

And about That Windows 10 Telemetry?

May 28, 2021

The article “How to Disable Telemetry and Data Collection in Windows 10” reveals an important fact: most Windows telemetry is turned on by default. But the write up does not explain what analyses occur for the data flowing into the company’s cloud services or the Outlook email program. I find this amusing, but Microsoft — despite the SolarWinds and Exchange Server missteps — is perceived as the good outfit among the collection of ethical exemplars of US big technology firms.
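For the curious, guides like the one cited typically point at the DataCollection policy key. A sketch using Python’s standard winreg module follows; run it with administrator rights, and note that the lowest level, 0 (“Security”), is honored only on Enterprise and Education editions, so on Home or Pro this merely floors telemetry at “Basic.”

```python
import winreg  # Windows only; requires administrator rights

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

# AllowTelemetry levels: 0 = Security (Enterprise/Education only),
# 1 = Basic, 2 = Enhanced, 3 = Full.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "AllowTelemetry", 0, winreg.REG_DWORD, 0)
```

That registry value covers the operating system’s diagnostic stream; it says nothing about what happens to data already sitting in the company’s cloud services, which is the write up’s blind spot.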

I read “Three Years Until We’re in Orwell’s 1984 AI Surveillance Panopticon, Warns Microsoft Boss.” Do the sentiments presented as allegedly representing the actual factual views of Microsoft executive Brad Smith reference the Windows 10 telemetry and data collection article mentioned above? Keep in mind that Mr. Smith believed at one time that 1,000 bad actors went after Microsoft and created the minor security lapses which affected a few minor US government agencies and sparked low profile US law enforcement entities into pre-emptive action on third party computers to help address certain persistent threats.

I chortled when I read this passage:

Brad Smith warns the science fiction of a government knowing where we are at all times, and even what we’re feeling, is becoming reality in parts of the world. Smith says it’s “difficult to catch up” with ever-advancing AI, which was revealed is being used to scan prisoners’ emotions in China.

Now about the Microsoft telemetry and other interesting processes? What about the emotions of a Windows 10 user when the printer does not work after an update? Yeah.

Stephen E Arnold, May 28, 2021
