Satellites Are Upgraded Peeping Toms

December 28, 2020

Satellites have carried powerful cameras for decades, and the technology has advanced from reportedly being able to read credit card numbers in the early 2000s to peering inside buildings. Futurism shares details about this new (possible) invasion of privacy in: “A New Satellite Can Peer Inside Some Buildings, Day Or Night.”

Capella Space launched a new type of satellite with a state-of-the-art camera capable of taking clear radar images at precise resolution. It can even take photos inside buildings, such as airplane hangars. Capella Space assures people that while its satellite does take powerful, high-resolution photos, it can only “see” through lightweight structures. The new satellite camera cannot penetrate dense buildings, like residential houses and high rises.

The new satellite can also image either the daytime or nighttime side of Earth from space. Capella also released a new photo imaging platform that allows governments or private customers to request images of anything in the world. Most satellites orbiting the Earth use optical image sensors, which makes it hard to take photos when it’s cloudy. Capella’s new system uses synthetic aperture radar (SAR), which can peer through cloud cover and night skies.

The resolution for the SAR images is extraordinary:

“Another innovation, he says, is the resolution at which Capella’s satellites can collect imagery. Each pixel in one of the satellite’s images represents a 50-centimeter-by-50-centimeter square, while other SAR satellites on the market can only get down to around five meters. When it comes to actually discerning what you’re looking at from space, that makes a huge difference.

Cityscapes are particularly intriguing. Skyscrapers poke out of the Earth like ghostly, angular mushrooms — and, if you look carefully, you notice that you can see straight through some of them, though the company clarified that this is a visual distortion rather than truly seeing through the structures.”
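To see what the jump from five-meter to half-meter pixels means in practice, here is a quick back-of-the-envelope calculation. The pixel sizes come from the article; the hangar dimensions are invented purely for illustration:

```python
# Rough comparison of SAR ground resolution, using the figures
# quoted in the article: 0.5 m pixels for Capella vs ~5 m for
# other commercial SAR satellites.

capella_pixel_m = 0.5   # side of one Capella image pixel, in meters
typical_pixel_m = 5.0   # side of a typical commercial SAR pixel

# Detail scales with the square of the linear resolution ratio.
detail_factor = (typical_pixel_m / capella_pixel_m) ** 2
print(f"Capella packs {detail_factor:.0f}x more pixels into the same area")

# A hypothetical 30 m x 60 m airplane hangar:
hangar_area_m2 = 30 * 60
capella_pixels = hangar_area_m2 / capella_pixel_m ** 2
typical_pixels = hangar_area_m2 / typical_pixel_m ** 2
print(f"Hangar: {capella_pixels:.0f} pixels vs {typical_pixels:.0f} pixels")
# -> 7200 pixels vs 72 pixels
```

At 72 pixels the hangar is a blob; at 7,200 it has a recognizable shape, which is why the article calls the difference huge.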

Capella’s new satellite has a variety of uses. Governments can use it to track enemy armies, while scientists can use it to monitor fragile ecosystems like the Amazon rainforest. Capella has assured the public that its new satellite cannot spy into dense buildings, but if the technology improves, might that become possible? Hopefully bad actors will not use Capella’s new satellite.

Whitney Grace, December 28, 2020

Virtual Private Networks: Not What They Seem

November 17, 2020

Virtual private networks are supposed to provide a user with additional security. There are reports that Apple is surfing on this assumption in its Big Sur operating system. For more information, check out “Apple Apps on Big Sur Bypass Firewalls and VPNs — This Is Terrible.” Apple appears to be making privacy a key facet of its marketing and may be experiencing one of those slips betwixt cup and lip with regard to this tasty sales Twinkie?

Almost as interesting is the information in “40% of Free VPN Apps Found to Leak Data.” Note that the assertion involves no-charge virtual private networks. The write up reports:

ProPrivacy has researched the top 250 free VPN apps available on the Google Play Store and found that 40% failed to adequately protect users’ privacy.

Okay, security-conscious Google and its curated apps on its bulletproof Play store are under the microscope. The write up points out:

… A study by CSIRO discovered that more than 75% of free VPNs have at least one third-party tracker rooted in their software. These trackers collect information on customers’ online presence and forward that data to advertising agencies to optimize their ads.

Who is involved in the study? Possibly the provider of for-fee VPN services like NordVPN.

Marketing and privacy. Like peanut butter and honey.

Stephen E Arnold, November 17, 2020

Cybersecurity Lapse: Lawyers and Technology

October 29, 2020

“Hackers Steal Personal Data of Google Employees after Breaching US Law Firm” is an interesting article. First, it reminds the reader that security at law firms may not be a core competency. Second, it makes clear that personal information can be captured in order to learn about every elected official’s favorite company: Google.

The write up states:

Fragomen, Del Rey, Bernsen & Loewy LLP, a law firm that offers employment verification compliance services to Google in the United States, suffered unauthorized access into its computer systems in September that resulted in hackers accessing the personal information of present and former Google employees.

The article quotes a cybersecurity professional from ImmuniWeb. I learned:

According to Ilia Kolochenko, Founder & CEO of ImmuniWeb, the fact that hackers targeted a law firm that stores a large amount of data associated with present and former Google employees is not surprising as law firms possess a great wealth of the most confidential and sensitive data of their wealthy or politically-exposed clients, and habitually cannot afford the same state-of-the-art level of cybersecurity as the original data owners.

This law firm, according to the write up, handles not just the luminary Google. It also does work for Lady Gaga and Run DMC.

Stephen E Arnold, October 29, 2020

Music and Moods: Research Verifies the Obvious

October 21, 2020

Research has demonstrated that music can have positive or negative psychological impacts on people. Following this train of research, Business Line reports that playlists are a better reflection of mood than once thought, “Your Playlist Mirrors Your Mood, Confirms IIIT-Hyderabad Study.”

The newest study on music and its effect on mood, titled “Tag2risk: Harnessing Social Music Tags For Characterizing Depression Risk” and covering over 500 individuals, comes from the International Institute of Information Technology, Hyderabad (IIIT-H). The study discovered that people who listen to sad music can be thrown into depression. Vinoo Alluri and her students from IIIT-H’s cognitive science department investigated whether they could identify music listeners with depressive tendencies from their music listening habits.

Over five hundred people’s music listening histories were studied. The researchers discovered that repeated listening to sad music served as an avoidance tool and a coping mechanism. These practices, however, also kept people in depressive moods. Listeners in the study were also drawn to music subgenres tagged with “sadness” and “tenderness.”

We noted:

“ ‘While it can be cathartic sometimes, repeatedly being in such states may be an indicator of potential underlying mental illness and this is reflected in their choice and usage of music,’ Vinoo Alluri points out. She feels that music listening habits can be changed. But, in order to do that, they need to be identified first by uncovering their listening habits. It is possible to break the pattern of “ruminative and repetitive music usage”, which will lead to a more positive outcome.”

Alluri’s study is an intriguing investigation into the power and importance of music. Her research, however, only confirms what music listeners and teenagers have known for decades.

Whitney Grace, October 21, 2020

Infohazards: Another 2020 Requirement

October 20, 2020

New technologies that become society staples have risks and require policies to rein in potential dangers. Artificial intelligence is a developing technology, and governing policies have yet to catch up with the emerging tool. Experts in computer science, government, and other controlling organizations need to discuss how to control AI, says Vanessa Kosoy in the Less Wrong blog post “Needed: AI Infohazard Policy.”

Kosoy approaches her discussion of the need for an AI information policy with the standard science fiction warning argument: the premise of AI risk is that AI is dangerous, and therefore research into AI might be dangerous. It is good to draw caution from science fiction to prevent real-world disaster. Experts must develop a governing body of AI guidelines to determine what learned information should be shared and how to handle results that are not published.

Individuals and single organizations cannot make these decisions alone, even if they have their own governing policies. Governing organizations and people must coordinate their knowledge regarding AI and develop consensus policies to control AI information. Kosoy suggests that any AI policy should consider the following:

• “Some results might have implications that shorten the AI timelines, but are still good to publish since the distribution of outcomes is improved.

• Usually we shouldn’t even start working on something which is in the should-not-be-published category, but sometimes the implications only become clear later, and sometimes dangerous knowledge might still be net positive as long as it’s contained.

• In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself. Therefore, safe AGI is a broad collective effort and we should expect most results to be published. In the endgame, it might become likely for a given group to make it all the way to safe AGI. In this case, incentives for secrecy become stronger.

• The policy should not fail to address extreme situations that we only expect to arise rarely, because those situations might have especially major consequences.”

She continues that any AI information policy should define the criteria for what information is published, which channels should be consulted to decide on publication, and how to handle potentially dangerous information.

These questions are universal for any type of technology and information with potential hazards. However, specific technology policies weed out pedantic bickering and set standards for everyone, individuals and organizations alike. The problem is getting everyone to agree on the policies.

Whitney Grace, October 20, 2020

Covid Trackers Are Wheezing in Europe

October 19, 2020

COVID-19 continues to roar across the world. Health professionals and technologists have combined their intellects attempting to provide tools to the public. The Star Tribune explains how Europe wanted to use apps to track the virus: “As Europe Faces 2nd Wave Of Virus, Tracing Apps Lack Impact.”

Europe planned for mobile apps that track the locations of infected COVID-19 individuals to be integral to battling the virus. As 2020 nears its end, the apps have failed because of privacy concerns, lack of public interest, and technical problems. The latter is not a surprise given the demand for a rush job. The apps were supposed to notify people when they had been near infected people.

Health professionals predicted that 60% of European countries’ populations would download and use the apps, but adoption rates are low. Finns, however, reacted positively: one-third of the country downloaded Finland’s COVID-19 tracking app. Ironically, Finland’s population resists wearing masks in public.

The apps keep infected people’s identities secret. Their data remains anonymous, and the apps only alert others who come in contact with a virus carrier. Whether the information provides any help to medical professionals remains to be seen:

“We might never know for sure, said Stephen Farrell, a computer scientist at Trinity College Dublin who has studied tracing apps. That’s because most apps don’t require contact information from users, without which health authorities can’t follow up. That means it’s hard to assess how many contacts are being picked up only through apps, how their positive test rates compare with the average, and how many people who are being identified anyway are getting tested sooner and how quickly. ‘I’m not aware of any health authority measuring and publishing information about those things, and indeed they are likely hard to measure,’ Farrell said.”

Are these apps actually helpful? Maybe. But they require maintenance and constant updating. They could prevent some spread of the virus, but the tried and true methods of social distancing, wearing masks, and washing hands work better.

Whitney Grace, October 19, 2020

Apple and AWS: Security?

October 13, 2020

DarkCyber noted an essay-style report called “We Hacked Apple for 3 Months: Here’s What We Found.” The write up contains some interesting information. One particular item caught our attention:

AWS Secret Keys via PhantomJS iTune Banners and Book Title XSS

The data explorers located potential vulnerabilities that would allow such alleged actions as:

  • Obtain what are essentially keys to various internal and external employee applications
  • Disclose various secrets (database credentials, OAuth secrets, private keys) from the various design.apple.com applications
  • Likely compromise the various internal applications via the publicly exposed GSF portal
  • Execute arbitrary Vertica SQL queries and extract database information

Other issues are touched upon in the write up.

Net net: The emperor has some clothes; they are just filled with holes and poorly done stitching if the write up is correct.

Stephen E Arnold, October 13, 2020

Amazon: The Bulldozer Grinds Forward

October 7, 2020

It is hard to tell whether the company is shameless or clueless. Either way, SlashGear observes, “Amazon Has A Creepiness Problem.” The increasingly ubiquitous tech giant recently unveiled two products that will make privacy enthusiasts shiver. Writer Chris Davies reports:

“The Echo Show 10, for example, brings movement to Amazon’s smart displays, with a rotating base that promises to track you as you wander around the room. The result? A perfectly-centered video call, or a more attentive Alexa, whether you’re stood at the sink or raiding the refrigerator. Echo Show 10 seems positively pedestrian, though, in comparison to the Ring Always Home Cam. Part drone, part security camera, it launches out of a base station that resembles a fancy fragrance diffuser and then buzzes around your home to spot intruders or misbehaving pets. Never mind wondering whether the microphone on your Echo is disabled: now, the cameras themselves will be airborne.”

Naturally, Amazon offers reassurances that users are in complete control of what the devices observe and transmit. The Ring drone maintains a certain hum so one can hear it coming, and users can limit its flight area. Also, when it is docked, the camera is physically blocked. The Echo Show 10 relies on visual and audio cues to keep the user center stage, but we’re assured that data is processed locally and immediately deleted. But there is no easy way to verify the devices respect these restrictions. Users will just have to take Amazon’s word. Davies considers:

“Rationally, nothing Amazon has announced today is any more intrusive or dangerous to privacy than, well, any other smart speaker or connected camera the company has offered before. All the same, there’s a gulf between perception and reality. I could understand you being skeptical about Amazon’s intentions – and its technology – simply because it’s, well, Amazon. The company that knows so much about your shopping habits it can make pitch-perfect recommendations; the company that wants to put microphones and cameras all over your house, in your car, and in your hotel room. […]”

The man has a point. Apparently, many consumers do trust Amazon enough to place these potential spies in their homes and offices. Others, though, do not. Will a day come when it will be difficult to function in society without them? We think Amazon hopes so.

Cynthia Murrell, October 7, 2020

TikTok: Maybe Some Useful Information?

September 19, 2020

US President Donald Trump banned Americans from using TikTok because of potential information leaks to China. In an ironic twist, The Intercept explains “Leaked Documents Reveal What TikTok Shares With Authorities—In The U.S.” It is not a secret in the United States that social media platforms from TikTok to Facebook collect user data as a way to spy on users and sell products.

While the US monitors its citizens, it does not take the same censorship measures as China does with its people. The amount of data TikTok gathers for China is alarming, but leaked documents show that the US also accesses that data. Data privacy has been a controversial topic for years within the United States, and experts argue that TikTok collects the same type of information as Google, Amazon, and Facebook. The documents reveal that the FBI and the Department of Homeland Security monitored the platform, which is owned by TikTok’s Chinese parent company, ByteDance.

Law enforcement officials used TikTok to monitor social unrest related to the death of George Floyd, who suffocated when a police officer cut off his oxygen while restraining him during arrest. TikTok users post videos about Black Lives Matter, police protests, tips for disarming law enforcement, and even jokes about the US’s current upheaval. TikTok’s user agreement says it collects information and will share it with third parties, including law enforcement if TikTok believes there is imminent danger.

TikTok, however, also censors videos, particularly those the Chinese government dislikes. These videos include political views, the Hong Kong protests, Uyghur internment camps, and people considered poor, disabled, or ugly.

Trump might try to make the US appear as the better country, but:

“The common concern, whether we’re talking about TikTok or Huawei, isn’t the intentions of that company necessarily but the framework within which it operates,” said Elsa Kania, an expert on Chinese technology at the Center for a New American Security. “You could criticize American companies for having an opaque relationship to the U.S. government, but there definitely is a different character to the ecosystem.” At the same time, she added, the Trump administration’s actions, including a handling of Portland protests that brought to mind the police crackdown in Hong Kong, have undercut official critiques of Chinese practices: “At a moment when we’re seeing attempts by the administration to draw a contrast in terms of values and ideology with China, these eerie parallels that keep recurring do really undermine that.”

The issue is contentious. Information does not have to be used at the time of collection. The actions of youth can be used to exert pressure at a future time. That may be the larger risk.

Whitney Grace, September 19, 2020

Apple, Google Make it Easier for States to Adopt Virus Tracing App

September 12, 2020

Google and Apple created an app that would, with the cooperation of state governments, aid in tracing the spread of the coronavirus and notify citizens if they spent time around someone known to have tested positive. It is nice to see these rivals working together for the common good. So far, though, only a few states have adopted the technology. In order to encourage more states to join in, AP News reveals, “Apple, Google Build Virus-Tracing Tech Directly into Phones.” Reporter Matt O’Brien writes:

“Apple and Google are trying to get more U.S. states to adopt their phone-based approach for tracing and curbing the spread of the coronavirus by building more of the necessary technology directly into phone software. That could make it much easier for people to get the tool on their phone even if their local public health agency hasn’t built its own compatible app. The tech giants on Tuesday launched the second phase of their ‘exposure notification’ system, designed to automatically alert people if they might have been exposed to the coronavirus. Until now, only a handful of U.S. states have built pandemic apps using the tech companies’ framework, which has seen somewhat wider adoption in Europe and other parts of the world.”

In states that do adopt the system, iPhone users will be prompted for consent to run it on their phones. Android users will have to download the app, which Google will auto-generate for each public health agency that participates. Early adopters are expected to be Maryland, Nevada, Virginia, and Washington D.C. Virginia was the first to use the framework to launch a customized app in early August, followed by North Dakota, Wyoming, Alabama, and Nevada. O’Brien describes how it works:

“The technology relies on Bluetooth wireless signals to determine whether an individual has spent time near anyone else who has tested positive for the virus. Both people in this scenario must have signed up to use the Google-Apple technology. Instead of geographic location, the app relies on proximity. The companies say the app won’t reveal personal information either to them or public health officials.”
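The anonymous matching O’Brien describes can be sketched in miniature. This is a toy illustration, not the real Google-Apple Exposure Notification protocol: every function name here is hypothetical, and the actual system uses different cryptography, key schedules, and interval lengths. The core idea is that phones broadcast short-lived identifiers derived from a secret key, and matching happens on-device:

```python
# Toy sketch of anonymous proximity matching (NOT the real GAEN protocol).
import hashlib
import secrets

def daily_key() -> bytes:
    """Each phone generates its own random secret key."""
    return secrets.token_bytes(16)

def rolling_ids(key: bytes, intervals: int) -> list[str]:
    """Derive short-lived broadcast identifiers from the secret key."""
    return [hashlib.sha256(key + i.to_bytes(2, "big")).hexdigest()[:16]
            for i in range(intervals)]

# Phone A broadcasts its rolling IDs over Bluetooth; phone B records
# whatever IDs it happens to hear nearby.
key_a = daily_key()
heard_by_b = set(rolling_ids(key_a, 144))  # e.g. one ID per 10 minutes

# If A later tests positive, A uploads only its key. B re-derives the
# IDs locally and checks for overlap -- no names or locations involved.
published = rolling_ids(key_a, 144)
exposure = any(rid in heard_by_b for rid in published)
print("possible exposure:", exposure)
```

Because matching happens entirely on phone B, neither the companies nor health officials ever see who was near whom, which is the privacy property the article highlights.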

This all sounds helpful. However, the world being what it is today, we must ask: does this have surveillance applications? Perhaps. Note we’re promised the app won’t “reveal” personal data, but will it retain it? If it does, will agencies be able to resist this big, juicy pile of data? Promises about surveillance have a way of being broken, after all.

Cynthia Murrell, September 12, 2020
