Facebook WhatsApp, No Code Ecommerce, and Google: What Could Go Wrong?

March 5, 2021

The Dark Web continues to capture the attention of some individuals. The little secret few pursue is that much of the Dark Web action has shifted to encrypted messaging applications. Even Signal gets coverage in potboiler novels. Why? Encrypted messaging apps are quite robust convenience stores. Why go to Ikea when one can scoot into a lightweight mobile app and do "business"?

How hard is it to set up a store, make its products like malware or other questionable items available in WhatsApp, and start gathering customers? Not hard at all. In fact, there is a no-code wrapper available. With a few mouse clicks, a handful of images, and a product or service to sell, one can be in business. The developer, an outfit called Wati, provides exactly what the enterprising marketer requires. None of that Tor stuff. None of the Amazon police chasing down knockoff products from the world's most prolific manufacturers. New territory, so what could go wrong?

If you are interested in using WhatsApp as an ecommerce vehicle, you can point your browser to the Google Workspace Marketplace. You will need both a Google account and a WhatsApp account. Then you can use "a simple and powerful Google Sheet add-on to launch an online store from Google Sheets and take orders on WhatsApp." How much does this service cost? The developer asserts, "It's free forever." There is even a video explaining what one does to become a WhatsApp merchant.

Are there legitimate uses for this Google Sheets add-on? Sure. Will bad actors give this type of service a whirl? Sure. Will Google police the service? Sure. Will Facebook provide oversight? Sure. That's a lot of sures. Why not be optimistic? For me, the Wati wrapper is a flashing yellow light: a challenge to law enforcement is moving from the Dark Web to apps which are equally opaque. Progress? Nope.

Stephen E Arnold, March 5, 2021

Another Brilliant Maneuver from the Zuckbook: What Looks Like a Setback Is a Strategic Win

March 1, 2021

I read "Facebook Just Admitted It Has Lost the Battle with Apple over Privacy." The subtitle suggests that Facebook's scintillating managerial acumen is using a custom shibboleth to fool its adversaries at the fruit outfit. Here's the subtitle illustrating how the Zuckbook is feinting:

The company launched an ad campaign that shows just how worried it is about Apple’s upcoming privacy changes.

Yes, the “real” news outfit Inc. has been fooled. The write up continues:

The company [Facebook] released a pair of ads in three of the most widely-circulated newspapers in the country, accusing Apple of attacking small businesses and the open internet. Mark Zuckerberg also attacked Apple’s motivations during the company’s quarterly earnings report last month, and there are reports that he has been considering filing an antitrust lawsuit against the iPhone maker.

Red herrings have done their job. The dogs of privacy are going in circles. The write up reports:

Now, the company [Facebook] has launched a new campaign, including an ad titled “Good Ideas Deserve to be Found.” The new ad is a little hard to follow but is meant to show the value of personalized ads to small businesses. Facebook wants to make it very clear that personalized ads make for a better experience on Facebook and Instagram, which it also owns.

Confused? Don't be. The write up explains:

The company [Apple] won't stop Facebook from tracking you, but it will have to ask you for permission first. Why, then, is Facebook so worried? Because it knows what everyone else already knows–that when given a choice, most people will choose to not allow Facebook to track them.

Such a slick maneuver. Facebook's moves make Julius Caesar's tactics at the Battle of Alesia look like the Brazilian president's Covid-fighting campaign. MBAs will study this brilliant Facebook campaign. I am assuming that the MBA programs at certification institutions do more than collect money from eager students.

Stephen E Arnold, March 1, 2021

Satellites Are Upgraded Peeping Toms

December 28, 2020

Satellites have had powerful cameras for decades. Camera technology for satellites has advanced from being able to read credit card numbers in the early 2000s to peering inside buildings. Futurism shares details about the new (possible) invasion of privacy in: “A New Satellite Can Peer Inside Some Buildings, Day Or Night.”

Capella Space launched a new type of satellite with a state-of-the-art camera capable of taking clear radar images with precise resolution. It even has the ability to take photos inside buildings, such as airplane hangars. Capella Space assures people that, while its satellite does take powerful, high-resolution photos, it can only “see” through lightweight structures. The new satellite camera cannot penetrate dense buildings, like residential houses and high rises.

The new satellite can also image either the daytime or nighttime side of Earth. Capella also released a new photo imaging platform that allows governments or private customers to request images of anything in the world. Most satellites orbiting the Earth use optical image sensors, which make it hard to take photos when it’s cloudy. Capella’s new system uses synthetic aperture radar (SAR), which can peer through cloud cover and night skies.

The resolution for the SAR images is extraordinary:

“Another innovation, he says, is the resolution at which Capella’s satellites can collect imagery. Each pixel in one of the satellite’s images represents a 50-centimeter-by-50-centimeter square, while other SAR satellites on the market can only get down to around five meters. When it comes to actually discerning what you’re looking at from space, that makes a huge difference.

Cityscapes are particularly intriguing. Skyscrapers poke out of the Earth like ghostly, angular mushrooms — and, if you look carefully, you notice that you can see straight through some of them, though the company clarified that this is a visual distortion rather than truly seeing through the structures.”
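Some back-of-the-envelope arithmetic shows why the jump from 5-meter to 50-centimeter pixels matters so much. (The two resolution figures come from the quoted article; the 100-meter building footprint below is a hypothetical example.)

```python
# Compare how many pixels each SAR system devotes to the same ground area.
# Resolution figures (0.5 m vs 5 m per pixel) are from the quoted article;
# the 100 m x 100 m building footprint is a hypothetical example.

def pixels_for_footprint(side_m: float, pixel_m: float) -> int:
    """Number of pixels covering a square footprint side_m meters across."""
    pixels_per_side = side_m / pixel_m
    return int(pixels_per_side ** 2)

capella = pixels_for_footprint(100, 0.5)  # 200 x 200 = 40,000 pixels
typical = pixels_for_footprint(100, 5.0)  # 20 x 20 = 400 pixels

print(capella, typical, capella // typical)  # 40000 400 100
```

In other words, a ten-fold improvement in linear resolution yields one hundred times as many pixels over the same building, which is the difference between a smudge and a discernible structure.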

Capella’s new satellite has a variety of uses. Governments can use it to track enemy armies, while scientists can use it to monitor fragile ecosystems like the Amazon rainforest. Capella has assured the public that its new satellite cannot spy into dense buildings, but if the technology improves, that may change. Hopefully bad actors will not use Capella’s new satellite.

Whitney Grace, December 28, 2020

Virtual Private Networks: Not What They Seem

November 17, 2020

Virtual private networks are supposed to provide a user with additional security. There are reports that Apple is surfing on this assumption in its Big Sur operating system. For more information, check out “Apple Apps on Big Sur Bypass Firewalls and VPNs — This Is Terrible.” Apple appears to be making privacy a key facet of its marketing and may be experiencing one of those slips betwixt cup and lip with regard to this tasty sales Twinkie.

Almost as interesting is the information in “40% of Free VPN Apps Found to Leak Data.” Note that the assertion involves no-charge virtual private networks. The write up reports:

ProPrivacy has researched the top 250 free VPN apps available on Google Play Store and found that 40% failed to adequately protect users privacy.

Okay, security-conscious Google and its curated apps on its bulletproof Play Store are under the microscope. The write up points out:

… A study by CSIRO discovered that more than 75% of free VPNs have at least one third-party tracker rooted in their software. These trackers collect information on customers online presence and forward that data to advertising agencies to optimize their ads.

Who is involved in the study? Possibly a provider of for-fee VPN services like NordVPN.

Marketing and privacy. Like peanut butter and honey.

Stephen E Arnold, November 17, 2020

Cybersecurity Lapse: Lawyers and Technology

October 29, 2020

“Hackers Steal Personal Data of Google Employees after Breaching US Law Firm” is an interesting article. First, it reminds the reader that security at law firms may not be a core competency. Second, it makes clear that personal information can be captured in order to learn about every elected official’s favorite company, Google.

The write up states:

Fragomen, Del Rey, Bernsen & Loewy LLP, a law firm that offers employment verification compliance services to Google in the United States, suffered unauthorized access into its computer systems in September that resulted in hackers accessing the personal information of present and former Google employees.

The article quotes a cybersecurity professional from ImmuniWeb. I learned:

According to Ilia Kolochenko, Founder & CEO of ImmuniWeb, the fact that hackers targeted a law firm that stores a large amount of data associated with present and former Google employees is not surprising as law firms possess a great wealth of the most confidential and sensitive data of their wealthy or politically-exposed clients, and habitually cannot afford the same state-of-the-art level of cybersecurity as the original data owners.

This law firm, according to the write up, handles not just the luminary Google. It also does work for Lady Gaga and Run DMC.

Stephen E Arnold, October 29, 2020

Music and Moods: Research Verifies the Obvious

October 21, 2020

It has been proven that music can have positive or negative psychological impacts on people. Following this train of research, Business Line reports that playlists are a better reflection of mood than once thought, “Your Playlist Mirrors Your Mood, Confirms IIIT-Hyderabad Study.”

The newest study on music and its effect on mood, titled “Tag2Risk: Harnessing Social Music Tags for Characterizing Depression Risk,” comes from the International Institute of Information Technology in Hyderabad (IIIT-H) and covered over 500 individuals. The study discovered that people who listen to sad music can be thrown into depression. Vinoo Alluri and her students from IIIT-H’s cognitive science department investigated whether they could identify music listeners with depressive tendencies from their music listening habits.

Over five hundred people’s music listening histories were studied. The researchers discovered that repeatedly listening to sad music was used as an avoidance tool and a coping mechanism. These practices, however, also kept people in depressive moods. Music listeners in the study were also drawn to music subgenres tagged with “sadness” and “tenderness.”

We noted:

“ ‘While it can be cathartic sometimes, repeatedly being in such states may be an indicator of potential underlying mental illness and this is reflected in their choice and usage of music,’ Vinoo Alluri points out. She feels that music listening habits can be changed. But, in order to do that, they need to be identified first by uncovering their listening habits. It is possible to break the pattern of “ruminative and repetitive music usage”, which will lead to a more positive outcome.”

Alluri’s study is an amazing investigation into the power and importance of music. Her research, however, only confirms what music listeners and teenagers have known for decades.

Whitney Grace, October 21, 2020

Infohazards: Another 2020 Requirement

October 20, 2020

New technologies that become societal staples carry risks and require policies to rein in potential dangers. Artificial intelligence is a developing technology, and governing policies have yet to catch up with the emerging tool. Experts in computer science, government, and other controlling organizations need to discuss how to control AI, says Vanessa Kosoy in the Less Wrong blog post “Needed: AI Infohazard Policy.”

Kosoy approaches her discussion about the need for a controlling AI information policy with the standard science fiction warning argument: “AI risk is that AI is a danger, and therefore research into AI might be dangerous.” It is good to draw caution from science fiction to prevent real-world disaster. Experts must develop a governing body of AI guidelines to determine what learned information should be shared and how to handle results that are not published.

Individuals and single organizations cannot make these decisions alone, even if they have their own governing policies. Organizations and people must coordinate their knowledge regarding AI and develop consensual policies to control AI information. Kosoy suggests that any AI policy should consider the following:

• “Some results might have implications that shorten the AI timelines, but are still good to publish since the distribution of outcomes is improved.

• Usually we shouldn’t even start working on something which is in the should-not-be-published category, but sometimes the implications only become clear later, and sometimes dangerous knowledge might still be net positive as long as it’s contained.

• In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself. Therefore, safe AGI is a broad collective effort and we should expect most results to be published. In the endgame, it might become likely for a given group to make it all the way to safe AGI. In this case, incentives for secrecy become stronger.

• The policy should not fail to address extreme situations that we only expect to arise rarely, because those situations might have especially major consequences.”

She continues that any AI information policy should determine the criteria for what information is published, what channels should be consulted to determine publication, and how to handle potentially dangerous information.

These questions are universal for any type of technology and information that has potential hazards. However, the specificity of technological policies weeds out pedantic bickering and sets standards for everyone, individuals and organizations alike. The problem is getting everyone to agree on the policies.

Whitney Grace, October 20, 2020

Covid Trackers Are Wheezing in Europe

October 19, 2020

COVID-19 continues to roar across the world. Health professionals and technologists have combined their intellects attempting to provide tools to the public. The Star Tribune explains how Europe wanted to use apps to track the virus: “As Europe Faces 2nd Wave Of Virus, Tracing Apps Lack Impact.”

European governments planned for mobile apps that track the locations of individuals infected with COVID-19 to be integral to battling the virus. As 2020 nears its end, the apps have failed because of privacy concerns, lack of public interest, and technical problems. The latter is not a surprise given the demands of a rush job. The apps were supposed to notify people when they had been near infected people.

Health professionals predicted that 60% of European countries’ populations would download and use the apps, but adoption rates are low. The Finns, however, reacted positively: one-third of the country downloaded Finland’s COVID-19 tracking app. Ironically, Finland’s population resists wearing masks in public.

The apps keep infected people’s identities secret. Their data remains anonymous, and the apps only alert others if they come in contact with a virus carrier. Whether the information provides any help to medical professionals remains to be seen:

“We might never know for sure, said Stephen Farrell, a computer scientist at Trinity College Dublin who has studied tracing apps. That’s because most apps don’t require contact information from users, without which health authorities can’t follow up. That means it’s hard to assess how many contacts are being picked up only through apps, how their positive test rates compare with the average, and how many people who are being identified anyway are getting tested sooner and how quickly. ‘I’m not aware of any health authority measuring and publishing information about those things, and indeed they are likely hard to measure,’ Farrell said.”
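The anonymity that makes these apps hard to evaluate is by design. Most European apps follow the pattern of the Google/Apple Exposure Notification framework: phones broadcast short-lived pseudonymous identifiers derived from a secret daily key, so neither bystanders nor health authorities can link beacons back to a person. The sketch below is a simplified illustration of that idea using an HMAC, not the actual Exposure Notification key schedule:

```python
import hashlib
import hmac
import os

# Simplified sketch of rotating proximity identifiers (NOT the real
# Exposure Notification key schedule): a device keeps a secret daily
# key and derives a fresh pseudonymous ID for each broadcast interval.

def daily_key() -> bytes:
    """A random per-day secret that never leaves the device."""
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive the identifier broadcast during one ~10-minute interval."""
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

key = daily_key()
ids = [rolling_id(key, i) for i in range(3)]

# The identifiers look random and are unlinkable without the daily key...
assert len(set(ids)) == 3
# ...but anyone who later learns the key (say, an infected user uploads it)
# can recompute the IDs and check them against beacons they overheard.
assert rolling_id(key, 1) == ids[1]
```

This design is exactly why, as Farrell notes, health authorities cannot follow up: the system deliberately collects no contact information to measure.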

Are these apps actually helpful? Maybe. But they require maintenance and constant updating. They could prevent some spread of the virus, but sticking to the tried-and-true methods of social distancing, wearing masks, and washing hands works better.

Whitney Grace, October 19, 2020

Apple and AWS: Security?

October 13, 2020

DarkCyber noted an essay-style report called “We Hacked Apple for 3 Months: Here’s What We Found.” The write up contains some interesting information. One particular item caught our attention:

AWS Secret Keys via PhantomJS iTune Banners and Book Title XSS

The data explorers located potential vulnerabilities that would allow such alleged actions as:

  • Obtain what are essentially keys to various internal and external employee applications
  • Disclose various secrets (database credentials, OAuth secrets, private keys) from the various design.apple.com applications
  • Likely compromise the various internal applications via the publicly exposed GSF portal
  • Execute arbitrary Vertica SQL queries and extract database information

Other issues are touched upon in the write up.

Net net: The emperor has some clothes; they are just filled with holes and poorly done stitching if the write up is correct.

Stephen E Arnold, October 13, 2020

Amazon: The Bulldozer Grinds Forward

October 7, 2020

It is hard to tell whether the company is shameless or clueless. Either way, SlashGear observes, “Amazon Has a Creepiness Problem.” The increasingly ubiquitous tech giant recently unveiled two products that will make privacy enthusiasts shiver. Writer Chris Davies reports:

“The Echo Show 10, for example, brings movement to Amazon’s smart displays, with a rotating base that promises to track you as you wander around the room. The result? A perfectly-centered video call, or a more attentive Alexa, whether you’re stood at the sink or raiding the refrigerator. Echo Show 10 seems positively pedestrian, though, in comparison to the Ring Always Home Cam. Part drone, part security camera, it launches out of a base station that resembles a fancy fragrance diffuser and then buzzes around your home to spot intruders or misbehaving pets. Never mind wondering whether the microphone on your Echo is disabled: now, the cameras themselves will be airborne.”

Naturally, Amazon offers reassurances that users are in complete control of what the devices observe and transmit. The Ring drone maintains a certain hum so one can hear it coming, and users can limit its flight area. Also, when it is docked, the camera is physically blocked. The Echo Show 10 relies on visual and audio cues to keep the user center stage, but we’re assured that data is processed locally and immediately deleted. But there is no easy way to verify the devices respect these restrictions. Users will just have to take Amazon’s word. Davies considers:

“Rationally, nothing Amazon has announced today is any more intrusive or dangerous to privacy than, well, any other smart speaker or connected camera the company has offered before. All the same, there’s a gulf between perception and reality. I could understand you being skeptical about Amazon’s intentions – and its technology – simply because it’s, well, Amazon. The company that knows so much about your shopping habits it can make pitch-perfect recommendations; the company that wants to put microphones and cameras all over your house, in your car, and in your hotel room. […]”

The man has a point. Apparently, many consumers do trust Amazon enough to place these potential spies in their homes and offices. Others, though, do not. Will a day come when it will be difficult to function in society without them? We think Amazon hopes so.

Cynthia Murrell, October 7, 2020
