Killing Wickr

January 26, 2023

Encrypted messaging services are popular with privacy-conscious users as well as freedom fighters in authoritarian countries.  Tech companies consider these messaging services a wise investment, which is why Amazon purchased Wickr in 2021.  Wickr is an end-to-end encrypted messaging app, and it was made available to AWS users.  Gizmodo explains that Wickr Me will soon be nonexistent in the article, “Amazon Plans To Close Up Wickr’s User-Centric Encrypted Messaging App.”

Amazon no longer wants a consumer presence in encrypted messaging, a market that has become as saturated as the one for ugly Christmas sweaters.  Amazon is killing the Wickr Me app, limiting use to business and public sector customers through AWS Wickr and Wickr Enterprise.  New registrations end on December 31, and the app will shut down by the end of 2023.

Wickr was worth $60 million when Amazon purchased it.  Amazon, however, lost $1 trillion in stock value by November 2022, becoming the first company in history to claim that “honor.”  Amazon is laying off employees and working through company buyouts.  Changing Wickr’s target market could recoup some of the losses:

“But AWS apparently wants Wickr to focus on its business and government customers much more than its regular users. Among those public entities using Wickr is U.S. Customs and Border Protection. That contract was reportedly worth around $900,000 when first reported in September last year. Sure, the CBP wants encrypted communications, but Wickr can delete all messages sent via the app, which is an increasingly dangerous proposition for open government advocates.”

Wickr, like other encryption services, does not have a clean record.  It has been used for illegal drug sales and trafficking in other illicit items via the Dark Web.  At one time, Wickr might have been a source of useful metadata. Not now. Odd.

Whitney Grace, January 26, 2023

Encryption Will Be an Issue in 2023

January 18, 2023

The United Kingdom is making plans to weaken encryption within its borders. The UK government claims the country will become the safest place in the world to be online, but others fear free speech and privacy will be the ultimate victims. The Next Web explores the situation in “Privacy Advocates Are Aghast at UK’s Anti-Encryption Plans.” The UK’s plans are part of the Online Safety Bill currently in Parliament.

Privacy advocates are most worried about end-to-end encrypted (E2EE) messenger apps, because the new measures would force Internet providers to scan private messages for illegal content. Another clause would require Internet providers to use “accredited technology” to prevent people from encountering terrorist content or child pornography.

The bill would make everything viewable and exploitable by anyone with technical knowledge, including government surveillance and bad actors. Some UK lawmakers are worried about the bill’s implications:

“The proposals have also raised the eyebrows of legal experts. In November, barrister Matthew Ryder of Matrix Chambers, who was commissioned by the Index on Censorship campaign group to analyze the bill, asserted that the proposals could breach human rights laws.

‘No communications in the UK — whether between MPs, between whistleblowers and journalists, or between a victim and a victims support charity — would be secure or private,’ said Ryder. ‘In an era where Russia and China continue to work to undermine UK cybersecurity, we believe this could pose a critical threat to UK national security.’”

The lawmakers pointed out that it could encourage authoritarian governments to implement their own policies against E2EE.

The UK government wants to force big tech companies to create backdoors into their products and services. The US has been after big tech companies to do the same for years, especially after a terrorist attack or mass shooting. Governments want more access to information, but big tech wants to protect its technology. No one is concerned about privacy rights.

Whitney Grace, January 18, 2023

The Intelware Sector: In the News Again

January 13, 2023

It’s Friday the 13th. Bad luck day for Voyager Labs, an Israel-based intelware vendor. But maybe there is bad luck for Facebook or Meta or whatever the company calls itself. Will there be more bad luck for outfits chasing specialized software and services firms?

Maybe.

The number of people interested in the savvy software and systems which comprise Israel’s intelware industry is small. In fact, even among some of the law enforcement and intelligence professionals whom I have encountered over the years, awareness of the number of firms, their professional and social linkages, and the capabilities of these systems is modest. NSO Group became the poster company for how some of these systems can be used. Not long ago, the Brennan Center made available some documents obtained via legal means about a company called Voyager Labs.

Now the Guardian newspaper (now begging for dollars with blue and white pleas) has published “Meta Alleges Surveillance Firm Collected Data on 600,000 Users via Fake Accounts.” The main idea of the write up is that an intelware vendor created sock puppet accounts with phony names. Under these fake identities, the investigators gathered information. The write up refers to “fake accounts” and says:

The lawsuit in federal court in California details activities that Meta says it uncovered in July 2022, alleging that Voyager used surveillance software that relied on fake accounts to scrape data from Facebook and Instagram, as well as Twitter, YouTube, LinkedIn and Telegram. Voyager created and operated more than 38,000 fake Facebook accounts to collect information from more than 600,000 Facebook users, including posts, likes, friends lists, photos, comments and information from groups and pages, according to the complaint. The affected users included employees of non-profits, universities, media organizations, healthcare facilities, the US armed forces and local, state and federal government agencies, along with full-time parents, retirees and union members, Meta said in its filing.

Let’s think about this fake account thing. How difficult is it to create a fake account on a Facebook property? About eight years ago, as a test, my team created a fake account for a dog. Not once in those eight years was any attempt made to verify the humanness or the dogness of the animal. The researcher (a special librarian, in fact) set up the account and demonstrated to others on my research team how the Facebook sign up system worked, or did not work, in this particular example. Once logged in, faithful and trusting Facebook seemed to keep our super user logged into the test computer. For all I know, Tess is still logged in, with Facebook doggedly tracking her every move. Here’s Tess:

[Image: Tess, the research team’s canine Facebook user]

Tough to see that Tess is not a true Facebook type, isn’t it?

Is the accusation directed at Voyager Labs a big deal? From my point of view, no. The reason intelware companies use Facebook is that Facebook makes it easy to create a fake account, exercises minimal administrative review of registered users, and prioritizes other activities.

I personally don’t know what Voyager Labs did or did not do. I don’t care. I do know that other firms providing intelware have the capability of setting up, managing, and automating some actions of accounts for either a real human, an investigative team, or another software component or system. (Sorry, I am not at liberty to name these outfits.)

Grab your Tums bottle and consider these points:

  1. What other companies in Israel offer similar alleged capabilities?
  2. Where and when were these alleged capabilities developed?
  3. What entities funded start ups to implement alleged capabilities?
  4. What other companies offer software and services which deliver similar alleged capabilities?
  5. When did Facebook discover that its own sign up systems had become a go-to source of social action for these intelware systems?
  6. Why did Facebook ignore its sign up procedures failings?
  7. Are other countries developing and investing in similar systems with these alleged capabilities? If so, name a company in England, France, China, Germany, or the US?

These one-shot “intelware is bad” stories chop indiscriminately. The vendors get slashed. The social media companies look silly for having little interest in “real” identification of registrants. The licensees of intelware look bad because investigations are somehow “wrong.” I think the media outlets reporting on intelware look silly because the information on which they craft their stories strikes me as shallow.

I am pointing out that a bit more diligence is required to understand the who, what, why, when, and where of specialized software and services. Let’s do some heavy lifting, folks.

Stephen E Arnold, January 13, 2023

Google: Do Small Sites Need Anti Terrorism Help or Is the Issue Better Addressed Elsewhere?

January 3, 2023

Are “little sites” really in need of Google’s anti-terrorism tool? Oh, let me be clear. Google is — according to “Google Develops Free Terrorism-Moderation Tool for Smaller Websites” — in the process of creating Googley software. This software will be:

a free moderation tool that smaller websites can use to identify and remove terrorist material, as new legislation in the UK and the EU compels Internet companies to do more to tackle illegal content.

And what institutions are working with Google on this future software? The article reports:

The software is being developed in partnership with the search giant’s research and development unit Jigsaw and Tech Against Terrorism, a UN-backed initiative that helps tech companies police online terrorism.

What’s interesting to me is that this to-be software or filtering system is still in development. The software, it seems, does not exist.

Why would Google issue statements about vaporware?

The article provides a clue:

The move comes as Internet companies will be forced to remove extremist content from their platforms or face fines and other penalties under laws such as the Digital Services Act in the EU, which came into force in November, and the UK’s Online Safety bill, which is expected to become law this year.

I understand. Google’s management understands that regulation and fines are not going away in 2023. It is logical, therefore, to get in front of the problem. How does Google propose to do this?

Yep, vaporware. (I have a hunch there is a demonstration available.) Nevertheless, the genuine article is not available to small Web sites, which need help in coping with terrorism-related content.

How will the tool work? The article states:

Jigsaw’s tool aims to tackle the next step of the process and help human moderators make decisions on content flagged as dangerous and illegal. It will begin testing with two unnamed sites at the beginning of this year.

Everything sounds good when viewed from the top of Mount Public Relations, where the vistas are clear and the horizons are unlimited.

I want to make one modest observation: Small Web sites run on hosting services. These hosting services are, in my opinion, more suitable locations for filtering software. The problem is that hosting providers comprise a complex and diverse group of enterprises. In fact, I have yet to receive from my research team a count of service providers that is accurate and comprehensive.

Pushing the responsibility to the operator of a single Web site strikes me as a non-functional approach. Would it make sense for Google’s tool to be implemented at the service provider level? The content resides on the service providers’ equipment or co-located hardware and in the data streams of virtual private systems and virtual private servers. There, the terrorism-related content would be easier to block.

Let’s take a reasonable hosting service; for example, Hetzner in Germany or OVHCloud in France. The European Union could focus on these enabling nodes and implement either the Google system, if and when it becomes available and actually works, or an alternative filtering method devised by a European team. (I would suggest that Europol or similar entities can develop the needed filters, test them, and maintain them.) Google has a tendency to create or talk about solutions and then walk away after a period of time. Remember Google’s Web Accelerator?
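Filtering at the hosting layer would likely rest on hash matching against shared databases of known bad content, the technique already used by industry hash-sharing initiatives. A minimal sketch of the idea follows; the hash set and payloads here are hypothetical, and real systems use perceptual hashes so near-duplicates still match.

```python
# Minimal sketch of hash-based content matching at a hosting provider.
# The "known bad" set is hypothetical; production systems use shared
# industry databases and perceptual hashing, not exact SHA-256 matches.
import hashlib

KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example banned payload").hexdigest(),
}

def flag_upload(data: bytes) -> bool:
    """Return True if the uploaded bytes match a known-bad hash."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES

# A provider could run this check in the upload path before storage.
assert flag_upload(b"example banned payload") is True
assert flag_upload(b"holiday photos") is False
```

The design point is that the check sits in the data path the provider already controls, which is why the hosting layer is an attractive choke point.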

Based on our research for an upcoming presentation to a group of investigators focused on cyber crime, service providers (what I call enablers) should be the point of attention in an anti-terrorism action. Furthermore, these enablers are also pivotal in facilitating certain types of online crime. Examples abound. These range from right-wing climate activists using services in Romania to child pornography hosted on what we call “shadow ISPs.” These shadow enablers operate specialized services specifically to facilitate illegal activities within specialized software like The Onion Router and other obfuscation methods.

For 2023, I advocate ignoring PR-motivated “to be” software. I think the efforts of national and international law enforcement should be directed at the largely unregulated and often reluctant “enablers.” I agree that some small Web site operators could do more. But I think it is time to take a closer look at the enablers operating from vacant lots in the Seychelles and to hold responsible the service providers running cyber fraud operations.

Fixing the Internet requires consequences. Putting the focus on small Web sites is a useful idea. But turning up the enforcement and regulatory heat on the big outfits will deliver more heat where being “chill” has allowed criminal activity to flourish. I have not mentioned the US and Canada. I have not forgotten that there are enablers operating in plain sight in such places as Detroit and Québec City. Google’s PR play is a way to avoid further legal and financial hassles.

It is time to move from “to be” software to “taking purposeful, intentional action.”

Stephen E Arnold, January 3, 2023

Need a Human for Special Work? Just Buy One Maybe?

December 29, 2022

Is it possible to purchase a person? Judging from the rumors I have heard in rural Romania, outside the airport in Khartoum, and in a tavern in Tirana — I would suggest that the answer is “possibly.” The Times of London is not into possibilities if the information in “Maids Trafficked and Sold to Wealthy Saudis on Black Market” is accurate. Keep in mind that I am mindful of what I call open source information blind spots. Shaped, faked, and weaponized information is now rampant.

The article focuses on an ecommerce site called Haraj.sa. The article explains:

[The site] Saudi Arabia’s largest online marketplace, through which a Times investigation shows that hundreds of domestic workers are being illegally trafficked and sold to the highest bidders.

Furthermore, the Times adds:

The app, which had 2.5 million visits last year — more than Amazon or AliExpress within the kingdom — is still available on the Apple and Google Play stores despite being criticised by the UN’s Special Rapporteurs in 2020 for facilitating modern slavery.

If true, the article is likely to make for some uncomfortable days for several parties as the world swings into 2023; specifically:

  1. The Saudi government
  2. Apple
  3. Google
  4. Assorted law enforcement professionals.

If the information in the write up is accurate, several of the newspaper’s solicitors will be engaged in conversations with other parties’ solicitors. I assume that there will be some conversations in Mayfair and Riyadh about the article. Will Interpol become curious? Probably.

Let’s step back and ask some different questions. I am assuming that some of the information in the article is “correct”; that is, one can verify screenshots or chase down the source of the information. Maybe the lead journalist will consent to an interview on a true crime podcast. Whatever.

Consider these questions:

  1. Why release the story at the peak of some countries’ holiday season? Is the timing designed to minimize or emphasize the sensitive topic of alleged slavery, the Kingdom’s conventions, or the apparent slipshod app review process at controversial US high technology companies?
  2. What exactly did, or do, Apple and Google know about the app for the Haraj marketplace? If the Times’ story is accurate, what management issues exist at each of these large, but essential to some, companies?
  3. Is the ecommerce site operating within the Kingdom’s cultural norms or is the site itself breaking outside legal guidelines? What does Saudi Arabia say about this site?

To sum up, human trafficking is a concern for many individuals, government entities, and non-governmental organizations. I keep coming back to the question “Why now?” The article states:

Apple said: “We strictly prohibit the solicitation or promotion of illegal behaviour, including human trafficking and child exploitation, in the App Store and across every part of our business. We take any accusations or claims around this behaviour very seriously.” Google declined to comment. Haraj, Saudi Arabia’s human rights commission and the government have been contacted for a response.

Perhaps taking more time to obtain comments would have been useful? What’s the political backstory for the disclosure of the allegedly accurate information during the holiday season? Note that the story is behind a paywall which further limits its diffusion.

Net net: Many questions have I.

Stephen E Arnold, December 29, 2022

Palantir Makes Clear That Its Aggressively Marketed Systems May Not Work as Advertised

December 21, 2022

The real journalists at the Wall Street Journal have made painfully clear that Palantir’s smart software and sophisticated platform, marketed as functioning like the seeing stone in Lord of the Rings, do not work as advertised.

You can read the real news analysis in “Palantir Misfires on Revenue Tied to SPAC Deals.” The main point of the write up is that Palantir, equipped with proprietary technology and oodles of seeing stone experts, lost a great deal of money quickly.

The article says:

The bets have backfired.

So what? No big deal. Tens of millions gone, maybe hundreds of millions. The bigger loss is the exposure of the shortcomings of smart software. What did Palantir’s spokesperson say:

The market has turned and it is now clear that these investments were unsuccessful. It was a bet on a group of early stage companies that, with the benefit of hindsight, we wish we did not make.

But Palantir’s marketing since the firm opened for intelligence analysis work in 2003, almost two decades ago, has pitched the system’s ability to reveal what ordinary intelware cannot identify. In my files, I have some Palantir marketing material. Here’s an example:

[Image: Palantir marketing slide]

Who doesn’t want data sovereignty? ©Palantir Technologies

Several observations:

  1. The Palantir management team presumably had access to Gotham and other Palantir technology. But the Palantir system did deliver massive financial losses. Some seeing stone.
  2. In my opinion, Palantir made big bets in order to get a big payoff so that the company’s financial strength and the excellence of its smart software would be evident. What’s evident is that even Palantir’s software and its wizards cannot get the Palantir systems to be right about “bets.”
  3. Intelware and policeware vendors typically sell to government and selected financial services customers. Converting intelligence software tuned to the needs of a three letter agency into a commercial product has not worked in the past, and it is now evident Palantir may be failing in its commercial push as well.
  4. Intelware works because no matter how slick the intelware is, governments also rely on old fashioned methods before taking action.
  5. Palantir’s technology is almost 20 years old, based on open source, and highly derivative. There are better, faster, and cheaper options available from Palantir’s competitors.

Net net: Palantir has embraced full throttle marketing. The company has done some interesting things regarding the IBM Analyst’s Notebook file formats. Palantir’s investments were, in my opinion, investments which made it attractive for the recipients of Palantir’s funds to become Palantir customers. As I write this, Palantir’s marketing is chugging along, but Palantir’s share price is a stellar $6.43 a share. A blind seeing stone? Hmmmm. Good question.

Stephen E Arnold, December 21, 2022

Hello, Lawmakers in Greece. Have You Heard about Open Source Software?

December 15, 2022

I read a story from an outfit which makes quoting one of the stories risky business. The write up in question is “As Wiretap Claims Rattle Government, Greece Bans Spyware.” The article presents as real news — allegedly the old fashioned kind when newspapers were arbiters of truth via stringers — that Greece outlaws what it calls commercial spyware. For a number of years, I have used the term “intelware” to describe the specialized services and software provided to government agencies by commercial enterprises and open source developers.

The article does the normal handwaving associated with products and services which have been available since the mid 19th century. Those early systems chugged along within products from Bell, Systems Development Corporation, and others. I have found the bland names fascinating. Systems Development Corporation? What could be better? If you read Jill Lepore’s techno-noir history, you will know more than you ever wanted to know about Simulmatics. There’s a descriptive company name for you, right?

What happens when a government bans specialized services and software? Some interesting things; for example, it may be tough to know when warships from a friendly country are converging on a critical island. What if a country on Greece’s border gets frisky with its Soviet era tanks and artillery? The answer is, “License those specialized software and systems. Now!”

In terms of the ban on commercial intelware, what’s Greece going to do with the open source version of Maltego or one of dozens of other tools which can ingest digital content and output useful facts. What happens when one of those open source intelware tools requires an extension of functions?

The answer is to hire a consulting firm, hopefully not one affiliated with a certain jewelry store in Athens, to create bespoke code. Once that’s done, won’t government entities use these tools to protect citizens and monitor potential threats?

The answer is, “You bet your life.” The secret word is “politicians.” I am not sure Greece’s elected officials or the people reporting on the world of intelware understand the difference between handwaving and getting a particular job done.
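As a toy illustration of the ingest-and-extract step an open source intelware tool performs, consider pulling indicator-like facts out of raw text. The sample text and patterns below are hypothetical; tools like Maltego layer transforms, entity linking, and graph visualization on top of steps like this.

```python
# Toy OSINT-style extraction: ingest digital content, output facts.
# Hypothetical patterns and sample data for illustration only.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
DOMAIN = re.compile(r"\b(?:[\w-]+\.)+(?:com|net|org|io)\b")

def extract_facts(text: str) -> dict:
    """Return deduplicated, sorted indicators found in the text."""
    return {
        "emails": sorted(set(EMAIL.findall(text))),
        "domains": sorted(set(DOMAIN.findall(text))),
    }

sample = "Contact ops@example.com; traffic is routed via relay.example.net."
facts = extract_facts(sample)
# facts -> {"emails": ["ops@example.com"],
#           "domains": ["example.com", "relay.example.net"]}
```

The point is that the core capability is unremarkable and freely available; banning commercial packages does not make it go away.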

And the story? Oh, objective, and an example of publicizing the considered viewpoints of elected officials.

Stephen E Arnold, December 15, 2022

A Digital Schism: Is It the 16th Century All Over Again?

December 12, 2022

I noted “FBI Calls Apple’s Enhanced iCloud Encryption Deeply Concerning As Privacy Groups Hail It As a Victory for Users.” I am tempted to provide some historical color about Galileo, Jesuits, and infinitesimals. I won’t. I will point out that schisms appear to be evident today and may be as fraught as those when data flows were not ripping apart social norms. (How bad was it in the 16th century? Think in terms of toasting in fires those who did not go with the program. Quite toasty for some.)

The write up explains:

Apple yesterday [December 7, 2022] announced that end-to-end encryption is coming to even more sensitive types of iCloud data, including device backups, contacts, messages, photos, and more, meeting the longstanding demand of both users and privacy groups who have rallied for the company to take the significant step forward in user privacy.

Who is in favor of Apple’s E2EE push? The article says:

We [the Electronic Frontier Foundation] applaud Apple for listening to experts, child advocates, and users who want to protect their most sensitive data. Encryption is one of the most important tools we have for maintaining privacy and security online. That’s why we included the demand that Apple let users encrypt iCloud backups in the Fix It Already campaign that we launched in 2019.

Across the E2EE chess board is the FBI. The article points out:

In a statement to The Washington Post, the FBI, the largest intelligence agency in the world, said it’s “deeply concerned with the threat end-to-end and user-only-access encryption pose.” The bureau said that end-to-end encryption and Apple’s Advanced Data Protection make it harder for them to do their work and that they request “lawful access by design.”

I don’t have a dog in this commercial E2EE push, which is one component in Apple’s marketing of itself as the Superman/Superwoman of truth, justice, and the American way. (A 30 percent app store tariff is part of this mythic set up as well.) I understand the concern of the investigators, but I am retired and sitting on the sidelines as I watch the Grim Reaper’s Rivian creep closer.
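For readers wondering what “user-only-access” means in practice, here is a toy sketch of the core idea: the two endpoints derive a shared key the relay never sees. This is textbook Diffie-Hellman with illustrative numbers and a throwaway XOR cipher, not Apple’s actual protocol; real E2EE systems use vetted constructions such as the Signal protocol.

```python
# Toy end-to-end encryption sketch: the server relays public values and
# ciphertext only. Textbook Diffie-Hellman for illustration, NOT a real
# protocol; the XOR "cipher" is a stand-in for authenticated encryption.
import hashlib

P = 2**127 - 1   # prime modulus (Mersenne prime M127)
G = 3            # generator

def derive_key(shared_secret: int) -> bytes:
    return hashlib.sha256(str(shared_secret).encode()).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR keystream: encrypting and decrypting are the same operation.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

alice_secret, bob_secret = 271828182845, 314159265358
alice_public = pow(G, alice_secret, P)   # safe to send via the server
bob_public = pow(G, bob_secret, P)

# Each side combines its own secret with the other's public value.
alice_key = derive_key(pow(bob_public, alice_secret, P))
bob_key = derive_key(pow(alice_public, bob_secret, P))
assert alice_key == bob_key              # same key, never transmitted

ciphertext = xor_cipher(alice_key, b"backup contents")
plaintext = xor_cipher(bob_key, ciphertext)
# The server sees only alice_public, bob_public, and ciphertext.
```

This math is why “lawful access by design” requires weakening the scheme itself: there is no third key to hand over.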

Several observations:

  1. In the boundary between these two sides or factions, the emergent behavior will get around the rules. That emergent behavior is a consequence of apparently irreconcilable differences. The impact of this schism will reverberate for an unknown amount of time.
  2. Absolutism makes perfect sense in a social setting where one side enjoys near total control of behavior, access, thoughts, etc. However we live in a Silicon Valley environment partially fueled by phenomenological existentialism. Toss in the digital flows of information, and the resulting mixture is likely to be somewhat unpredictable.
  3. Compromise will be painful, but baby steps will be taken. Even Iran is reassigning morality police to less riot inducing activities. China has begun to respond to increasingly unhappy campers in lock down mode. Like I said, baby steps.

Net net: Security and privacy are a bit like love and Plato’s chair. Welcome to the digital Middle Ages. The emergent middle class may well be bad actors.

Stephen E Arnold, December 12, 2022

A Legal Information Truth Inconvenient, Expensive, and Dangerous

December 5, 2022

The Wall Street Journal published “Justice Department Prosecutors Swamped with Data As Cases Leave Long Digital Trails.”  The write up addressed a problematic reality without craziness.  The basic idea is that prosecutors struggle with digital information.  The consequences are higher costs and, in some cases, potentially problematic individuals left free to go to Burger King or corporate practices left to chug along with felicity.

The article states:

Federal prosecutors are swamped by data, as the way people communicate and engage in behavior scrutinized by investigators often leaves long and complicated digital trails that can outpace the Justice Department’s technology.

What’s the fix? This is a remarkable paragraph:

The Justice Department has been working on ways to address the problem, including by seeking additional funding for electronic-evidence technology and staffing for US attorney’s offices. It is also providing guidance in an annual training for prosecutors to at times collect less data.

Okay, more money which may or may not be spent in a way to address the big data issues, more lawyers (hopefully skilled in manipulating content processing systems functions), annual training, and gather less information germane to a legal matter. I want to mention that misinformation, reformation of data, and weaponized data are apparently not present in prosecutors’ data sets or not yet recognized as a problem by the Justice Department.

My response to this interesting article includes:

  1. This is news? The issue has been problematic for many years. The vendors of specialized systems to manage evidence, to index and make searchable content from disparate sources, and to output a record of which lawyer accessed what and when assert that their systems can handle this problem. Obviously licensees discover either that the systems don’t work like the demos or that they cannot handle large flows of disparate content.
  2. The legal industry is not associated with groundbreaking information innovation. I may be biased, but I think of lawyers knowing more about billing for their time than making use of appropriate, reliable technology for managing evidence. Excel timesheets are one thing. Dark Web forum content, telephone intercepts, and context free email and chat messages are quite different. Annual training won’t change the situation. The problem has to be addressed by law schools and lawyer certification systems. Licensing a super duper search system won’t deal with the problem no matter what consultants, vendors, and law professors say.
  3. The issue of “big data” is real, particularly when so many content objects are available to a legal team, its consultants, and the government professionals working on a case or a particular matter. It is just easier to gather everything and then try to make sense of the data. When the necessary information is not available, time or money runs out and everyone moves on. Big data becomes a process that derails some legal proceedings.

My view is that similar examples of “data failure” will surface. The meltdown of crypto? Yes, too much data. The downstream consequences of certain medical products? Yes, too much data, and possibly the subtle challenge of data shaping by certain commercial firms. The interlocks among suppliers of electrical components? Yes, too much data, and possibly information weaponization by parties to a legal matter.

When online meant keyword indexing and search, old school research skills and traditional data collection were abundant. Today, short cuts and techno magic are daily fare.

It is time to face reality. Some technology is useful, but human expertise and judgment remain essential. Perhaps that will be handled in annual training, possibly on a cruise ship with colleagues? A vendor conference offering continuing education credits might be a more workable solution than smart software with built in workflow.

Stephen E Arnold, December 5, 2022

Europol Take Down Despite a Bad Actor Haven, Encryption, and Modern Business Methods

November 28, 2022

First, Europol and a group of investigators shut down a drug operation. “Operation Desert Light: Europol Take Down Massive Cocaine Super Cartel” reported:

…49 people were arrested across six European countries, the EU’s police agency, Europol, said.

The somewhat terse news story referenced a couple of factoids that I found interesting:

  1. The article mentioned that there were six senior criminals running the operation. This to me suggests what I call in my lectures to law enforcement and intelligence professionals “industrialized crime.” The idea is that the precepts and methods are ones widely used by successful businesses. Just as the ideas about engineering efficiency and MBA profit maximization have diffused in legitimate enterprises, bad actors have been good students and implementers.
  2. One bad actor fled Europe and operated from Dubai. Dubai has, for some including this particular bad actor, become a destination of choice for those pursued by authorities representing other countries. What makes Dubai a possible safe haven? What additional steps are needed to reduce the appeal of Dubai and certain other countries?
  3. The article mentions “encrypted phones.” In my lectures, I discuss the Teflon effect of secure mobile devices and digital currencies such as Bitcoin and its variants.

Net net: More direct action by governments is necessary to [a] speed up investigations and [b] remove barriers to appropriate surveillance by direct and indirect methods. Crime is an emergent feature of online systems and services. To prevent criminal activity from becoming the dominant feature of online life, a rethink of systems and methods may prove fruitful.

Just my opinion.

Stephen E Arnold, November 28, 2022
