Health and Human Services Continues Palantir Contract
August 23, 2021
The US Department of Health and Human Services (HHS) renewed its contract with Palantir to continue using Tiberius. FedScoop shares the details about the renewal in the article, “HHS Renews, Expands Palantir’s Tiberius Contract To $31M.” Palantir designed Tiberius as a COVID-19 vaccine distribution platform. It has evolved from helping HHS employees understand the vaccine supply chain into the central information source for dosage programs.
HHS partnered with Palantir in mid-2020 under the Trump administration as part of the effort formerly known as Operation Warp Speed and now called the Countermeasure Acceleration Group. The renewed contract expands Palantir’s deal from $17 million to $31 million. Palantir will continue upgrading Tiberius, and agencies will now use the platform to inform policy decisions about additional doses, boosters, and international distribution.
When the platform was first implemented, it had not been designed to handle the Federal Retail Pharmacy or Long-Term Care Facility programs; these now provide additional data for analyzing vaccination gaps. Tiberius also serves a broad user base:
“Tiberius already has between 2,000 and 3,000 users including those at HHS, CDC, BARDA, the Countermeasure Acceleration Group, the Office of the Assistant Secretary for Preparedness and Response, the Federal Emergency Management Agency, the Pentagon, and other agencies involved in pandemic response. State and territory employees make up two-thirds of the user base, which also includes sub-state entities that receive vaccines like New York City and Chicago and commercial users including all retail pharmacies.”
Trump was supportive of Palantir; Biden’s team seems okay with the platform.
Whitney Grace, August 23, 2021
Big Data, Algorithmic Bias, and Lots of Numbers Will Fix Everything (and Your Check Is in the Mail)
August 20, 2021
We must remember, “The check is in the mail” and “I will always respect you” and “You can trust me.” Ah, great moments in the University of Life’s chapbook of factoids.
I read “Moving Beyond Algorithmic Bias Is a Data Problem”. I was heartened by the essay. First, the document has a digital object identifier and a link to make checking updates easy. Very good. Second, the focus of the write up is the inherent problem with most of the Fancy Dan, baloney-charged big data marketing to which I have been subjected in the last six or seven years. Very, very good.
I noted this statement in the essay:
Why, despite clear evidence to the contrary, does the myth of the impartial model still hold allure for so many within our research community? Algorithms are not impartial, and some design choices are better than others.
Notice the word “myth”. Notice the word “choices.” Yep, so much for the rock solid nature of big data, models, and predictive silliness based on drag-and-drop math functions.
I also starred this important statement by Donald Knuth:
Donald Knuth said that computers do exactly what they are told, no more and no less.
What’s the real world behavior of smart anti-phishing cyber security methods? What about the autonomous technology in some nifty military gear like the Avenger drone?
Google may not be thrilled with the information in this essay nor thrilled about the nailing of the frat bros’ tail to the wall; for example:
The belief that algorithmic bias is a dataset problem invites diffusion of responsibility. It absolves those of us that design and train algorithms from having to care about how our design choices can amplify or curb harm. However, this stance rests on the precarious assumption that bias can be fully addressed in the data pipeline. In a world where our datasets are far from perfect, overall harm is a product of both the data and our model design choices.
Perhaps this explains why certain researchers’ work is not zipping around Silicon Valley at the speed of routine algorithm tweaks? The statement could provide some useful insight into why Facebook does not want pesky researchers at NYU’s Ad Observatory digging into how Facebook manipulates perception and advertisers.
The methods for turning users and advertisers into puppets are not too difficult to figure out. That’s why certain companies obstruct researchers, manufacture baloney, crank up the fog machine, and offer free jargon stew to everyone including researchers. These are the same entities which insist they are not monopolies. Do you believe that these are mom-and-pop shops with a part-time mathematician and data wrangler coming in on weekends? Gee, I do.
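The essay’s point about design choices lends itself to a toy illustration. Below is a minimal sketch, using hypothetical scores and thresholds invented for this post rather than anything from the essay, of how one modeling decision, the classification threshold, changes which group absorbs the errors even when the dataset stays fixed:

```python
# Hypothetical numbers, not from the essay: the same model scores on the same
# fixed dataset, but two different decision thresholds (a pure design choice).
scores = {
    "group_a": [0.30, 0.45, 0.52, 0.61, 0.70],  # scores for examples that should NOT be flagged
    "group_b": [0.55, 0.62, 0.66, 0.71, 0.78],
}

def false_positive_rate(negative_scores, threshold):
    """Fraction of should-not-be-flagged examples the model flags anyway."""
    flagged = sum(1 for s in negative_scores if s >= threshold)
    return flagged / len(negative_scores)

for threshold in (0.50, 0.65):
    rates = {group: round(false_positive_rate(s, threshold), 2)
             for group, s in scores.items()}
    print(f"threshold={threshold}: false positive rates {rates}")
# threshold=0.50 -> group_a 0.6 vs group_b 1.0
# threshold=0.65 -> group_a 0.2 vs group_b 0.6
```

Same data, different threshold, different pattern of harm: exactly the kind of choice the essay says cannot be waved away by blaming the dataset.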
The “Moving beyond” article ends with a snappy quote:
As Lord Kelvin reflected, “If you cannot measure it, you cannot improve it.”
Several observations are warranted:
- More thinking about algorithmic bias is helpful. The task is to get people to understand what’s happening and has been happening for decades.
- The interaction of math most people don’t understand with very simple objectives like making more money or advancing an agenda is a destabilizing force in human behavior. Need an example? The Taliban’s use of WhatsApp is interesting, is it not?
- The fix to the problems associated with commercial companies using algorithms as monetary and social weapons requires control. The question is from whom and how.
Stephen E Arnold, August 20, 2021
Does Google Play Protect and Serve—Ads?
August 20, 2021
We hope, gentle reader, that you have not relied on the built-in Google Play Protect to safeguard your Android devices when downloading content from the Play store. MakeUseOf cites a recent report from AV-Test in, “Report: Google Play Protect Sucks at Detecting Malware.” Writer Gavin Phillips summarizes:
“With a maximum of 18 points on offer across the three test sections of Protection, Performance, and Usability, Google Play Protect picked up just 6.0—a full ten points behind the next option, Ikarus. AV-TEST pits each of the antivirus tools against more than 20,000 malicious apps. In the endurance test running from January to June 2021, there were three rounds of testing. Each test involved 3,000 newly discovered malware samples in a real-time test, along with a reference set of malicious apps using malware samples in circulation for around four weeks. Google Play Protect detected 68.8 percent of the real-time malware samples and 76.7 percent of the reference malware samples. In addition, AV-TEST installs around 10,000 harmless apps from the Play Store on each device, aiming to detect any false positives. Again, Google’s Play Protect came bottom of the pile, marking 70 harmless apps as malware.”
A chart listing the test’s results for each security solution can be found in the write-up or the report itself. More than half of the solutions received the full 18 points, while the rest fell between 16 and 17.8 points. Except for Google, that is: its measly 6 points set it apart as the worst option by far. Since Google “Protect” is the default security option for Android app downloads, this is great news for bad actors. The rest of us would do well to study the top half of that list. iOS users excepted.
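For a rough sense of scale, here is a back-of-the-envelope calculation based only on the figures quoted above; the per-round sample totals are my assumption, since the quote does not give the exact size of the reference set:

```python
# Rough arithmetic from the quoted AV-TEST figures; totals are approximate.
realtime_samples = 3 * 3000       # three rounds, ~3,000 new samples per round
reference_samples = 3 * 3000      # assumed comparable reference set per round (assumption)
harmless_apps = 10000             # clean apps installed to check for false positives

detected_realtime = 0.688 * realtime_samples
detected_reference = 0.767 * reference_samples
false_positives = 70

print(f"Real-time malware missed: ~{realtime_samples - detected_realtime:.0f} of {realtime_samples}")
print(f"Reference malware missed: ~{reference_samples - detected_reference:.0f} of {reference_samples}")
print(f"False positive rate: {false_positives / harmless_apps:.2%} of harmless apps flagged")
```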
Based in Magdeburg, Germany, research institute AV-Test pits the world’s cyber security solutions against its large collection of digital malware samples and makes results available to private users for free. The firm makes its money on consulting for companies and government institutions. AV-Test was founded in 2004 and was just acquired by Ufenau Capital Partners in February of this year.
Cynthia Murrell, August 20, 2021
Remember Who May Have Wanted to License Pegasus?
August 20, 2021
Cyber intelligence firm NSO, makers of Pegasus spyware, knows no bounds when it comes to enabling government clients to spy on citizens. Apparently, however, it draws the line at helping Facebook spy on its users. At his Daring Fireball blog, computer scientist John Gruber reports that “Facebook Wanted NSO Spyware to Monitor iOS Users.” We learn that NSO CEO Shalev Hulio has made a legal declaration stating he was approached in 2017 by Facebook reps looking to purchase certain Pegasus capabilities. Gruber quotes Motherboard’s Joseph Cox, who wrote:
“At the time, Facebook was in the early stages of deploying a VPN product called Onavo Protect, which, unbeknownst to some users, analyzed the web traffic of users who downloaded it to see what other apps they were using. According to the court documents, it seems the Facebook representatives were not interested in buying parts of Pegasus as a hacking tool to remotely break into phones, but more as a way to more effectively monitor phones of users who had already installed Onavo. ‘The Facebook representatives stated that Facebook was concerned that its method for gathering user data through Onavo Protect was less effective on Apple devices than on Android devices,’ the court filing reads. ‘The Facebook representatives also stated that Facebook wanted to use purported capabilities of Pegasus to monitor users on Apple devices and were willing to pay for the ability to monitor Onavo Protect users.’”
We are glad to learn NSO has boundaries of any sort. And score one for Apple security. As for Facebook, Gruber says this news supports his oft-stated assertion that Facebook is a criminal operation. He bluntly concludes:
“Facebook’s stated intention for this software was to use it for mass surveillance of its own honest users. That is profoundly [messed] up — sociopathic.”
Perhaps.
Cynthia Murrell, August 20, 2021
CISA Head Embraces Cooperation with Public-Private Task Force
August 20, 2021
Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly is wielding the power of cooperation in the fight against ransomware and other threats. Her agency will work with both other security agencies and big tech companies. This novel approach might just work. The article “Black Hat: New CISA Head Woos Crowd With Public-Private Task Force” at Threatpost reports on Easterly’s keynote presentation at this year’s Black Hat USA conference.
The partnership is logically named the Joint Cyber Defense Collaborative (JCDC) and had 20 corporate partners signed up by the end of July. Amazon, AT&T, Google Cloud, Microsoft, Verizon, and FireEye Mandiant are some of the biggest names participating. (Is FireEye, perhaps, trying to redeem itself?) Easterly also plans to work with other federal agencies like the DoD, NSA, and FBI to make sure their efforts align. We are told ransomware will be the team’s first priority. Writer Tom Spring reveals a bit about the new director:
“Easterly is a former NSA deputy for counterterrorism and has a long history within the U.S. intelligence community. She served for more than 20 years in the Army, where she is credited for creating the armed service’s first cyber battalion. More recently she worked at Morgan Stanley as global head of the company’s cybersecurity division. Easterly replaced CISA acting director Brandon Wales after the agency’s founder and former director Christopher Krebs was fired by former President Trump in 2020.”
But will the cybersecurity veteran be able to win over her corporate colleagues? The article notes one point in her favor:
“During a question-and-answer session, the CISA director scored points with the audience by stating that she supported strong encryption. ‘I realized that there are other points of view across the government, but I think strong encryption is absolutely fundamental for us to be able to do what we need to do,’ she said. … While acknowledging distrust within some segments of the cybersecurity community, Easterly urged the audience of security professionals to trust people first. ‘We know some people never want to trust an organization,’ she said. ‘In reality we trust people – you trust people. … When you work closely together with someone to solve problems, you can begin to create that trust.’”
Will the JCDC members and CISA’s fellow agencies be able to trust one another enough to make the partnership a success? We certainly hope so, because effective solutions are sorely needed.
Cynthia Murrell, August 20, 2021
Google Fiddled Its Magic Algorithm. What?
August 19, 2021
This story is a hoot. Google, as I recall, has a finely tuned algorithm. It is tweaked, tuned, and tailored to deliver on-point results. The users benefit from this intense interest the company has in relevance, precision, recall, and high-value results. Now a former Google engineer, or Xoogler in my lingo, has shattered my beliefs. Night falls.
Navigate to “Top Google Engineer Abandons Company, Reveals Big Tech Rewrote Algos To Target Trump.” (I love the word “algos”. So colloquial. So in.) I spotted this statement:
Google rewrote its algorithms for news searches in order to target #Trump, according to @Perpetualmaniac, #Google whistleblower and author of the new book, “Google Leaks: An Expose of Big Tech Censorship.”
The write up states:
“As a senior engineer at Google for many years, Zach was aware of their bias, but watched in horror as the 2016 election of Donald Trump seemed to drive them into dangerous territory. The American ideal of an honest, hard-fought battle of ideas — when the contest is over, shaking hands and working together to solve problems — was replaced by a different, darker ethic alien to this country’s history,” the description adds. Vorhies said he left Google in 2019 with 950 pages of internal documents and gave them to the Justice Department.
Wowza. Is this an admission of unauthorized removal of a commercial enterprise’s internal information?
The sources for this allegation of algorithm fiddling are interesting and possibly a little swizzly.
I am shocked.
The Google fiddling with precision, recall, objectivity, and who knows what else? Why? My goodness. What has happened to cause a former employee to offer such shocking assertions?
The algos are falling on my head and nothing seems to fit. Crying’s not for me. Nothing’s worrying me. Because Google.
Stephen E Arnold, August 19, 2021
Silicon Valley Neologisms: The Palantir Edition
August 19, 2021
Do you remember the Zuckerland metaverse? (Yes, I know he borrowed the word, but when you are president of a digital country, does anyone dare challenge Zuck the First, Le Roi Numérique?)
Palantir Technologies (the Seeing Stone outfit with the warm up jacket fashion bug) introduced a tasty bit of jargon-market speak in its Q2 2021 earnings call:
Palantir’s meta-constellation software harnesses the power of growing satellite constellations, deploying AI into space to provide insights to decision-makers here on Earth. Our meta-constellation integrates with existing satellites, optimizing hundreds of orbital sensors and AI models and allowing users to ask time-sensitive questions across the entire planet. Important questions like, where are the indicators of wildfires or how are climate changes affecting crop productivity? And when and where are naval fleets conducting operations? Meta-constellation pushes Palantir’s Edge AI technology to a new frontier.
I think meta-constellation is a positive contribution to the American Silicon Valley-Denver lingo.
One of the interesting factoids in the write up is that the average customer “invests” lots of money in the firm’s software and services. The average customer yields $7.9 million. Let’s assume there was a touch of spreadsheet fever whipping the accountants. Chop that down to a couple of million, and the cowboy outfit is doing okay. Now the job is to corral those customers so there is sustainable, recurring revenue and generous profits going forward like little doggies heading to the meat processing facility.
Also, deploying the Palantirians’ system is as easy as cooking some of Cowboy Ken’s beans in an iron pot over a wood fire. The transcript faithfully reports:
In just two days, we were able to deploy an entire solution for this customer, leveraging our out-of-the-box functionality built in foundry, a time line previously unthinkable in the eyes of the customer. And frankly, it would have been unthinkable to us even three years ago, where an equivalent project might have taken three months. This is only possible because of our product. Innovations from software-defined data integration are driving the marginal cost of data integration to 0, archetypes and our no-code technologies that are driving the marginal cost of application development to zero.
Those data cowboys are moving faster than a branded calf on a crisp April morning.
The most interesting factoid is contained in this statement:
Given our strong cash flow position, we repaid our outstanding $200 million term loan facility and are currently debt-free. After paying off the debt, we ended the quarter with $2.3 billion in cash and cash equivalents.
I don’t want to raise a touchy subject, but a Google Finance chart of Palantir’s results caught my attention: the yellow line suggests the company is losing money, if I am interpreting the graph correctly.
It may be helpful to consider that Palantir has never turned a profit. Let’s hope those Colorado transplants can convert expensive cows into hard cash after more than a decade grazing on the range. No digital cows, please. Leave those for the Facebook metaverse, which is less than a meta-constellation in JRR Tolkien fantasy space.
Stephen E Arnold, August 19, 2021
Stopping Disinformation At The Systemic Level
August 19, 2021
Disinformation has been a problem since humans created the first conspiracy theory, but the spread has gotten worse in the past few years during the Trump administration and the pandemic. TechDirt describes why stopping the spread of disinformation is more difficult than it looks in the article “Disentangling Disinformation: Not As Easy As It Looks.” Protestors are urging Facebook to ban disinformation super spreaders, and rightly so.
Disinformation about COVID-19 comes from a limited number of Facebook accounts as well as WhatsApp groups, news programs, local communities, and other social media platforms. Facebook does ban misinformation about COVID-19, but the company does not enforce its own rules. While it is easy to identify the misinformation super spreaders, it is difficult to stop them. Disinformation has infected the Internet at a systemic level, and that makes it hard to target.
It is hard to decide what actually qualifies as misinformation. What is deemed hard fact and what is dismissed as conspiracy theory changes all the time. For example, homosexuality used to be considered a mental illness, and the chronic illness ME/CFS was only recently deemed real. Another part of the issue is that giving authorities the power to determine what is disinformation has downsides, because authorities do not always agree with the public about what is truthful. It is also extremely difficult to enforce rules about disinformation:
“We know that enforcing terms of service and community standards is a difficult task even for the most resourced, even for those with the best of intentions—like, say, a well-respected, well-funded German newspaper. But if a newspaper, with layers of editors, doesn’t always get it right, how can content moderators—who by all accounts are low-wage workers who must moderate a certain amount of content per hour—be expected to do so? And more to the point, how can we expect automated technologies—which already make a staggering amount of errors in moderation—to get it right?”
In other words, companies could do a better job of moderating disinformation, but it is a nearly impossible task. Misinformation spreads around the globe in multiple languages, and there is no easy, universal way to stop it all. It is even worse when good content gets lost along with the misinformation.
Whitney Grace, August 19, 2021
The Google Wants to Be Sciencey
August 19, 2021
This write up is not about time crystals. This write up is about being sciencey or more sciencey than any other online advertising company is at this time. Freeze that thought, please.
The Next Web exclaims, “Google’s ‘Time Crystals’ Could Be the Greatest Scientific Achievement of our Lifetimes.” We are told Google researchers and their partners “may” have created time crystals, which were hypothesized nine years ago. We also learn the research has yet to survive a full peer-review process. At the very least, this represents quite a leap for the company’s marketing department, which has been trying to position the company as the quantum leader for years. To say writer Tristan Greene is excited about the (potential) triumph is an understatement. He declares:
“Eureka! A research team featuring dozens of scientists working in partnership with Google‘s quantum computing labs may have created the world’s first time crystal inside a quantum computer. … These scientists may have produced an entirely new phase of matter.”
Greene notes that it is difficult to understand exactly what time crystals are, but he tries his best to explain them to us. See the write-up for his attempt, or seek out alternate explanations elsewhere for more details. The quantum-computing enthusiast goes on to explain why he is so excited:
“Literally everyone should care. As I wrote back in 2018, time crystals could be the miracle quantum computing needs. Time crystals have always been theoretical. And by ‘always,’ I mean: since 2012 when they were first hypothesized. If Google‘s actually created time-crystals, it could accelerate the timeline for quantum computing breakthroughs from ‘maybe never’ to ‘maybe within a few decades.’ At the far-fetched, super-optimistic end of things – we could see the creation of a working warp drive in our lifetimes. Imagine taking a trip to Mars or the edge of our solar system, and being back home on Earth in time to catch the evening news. And, even on the conservative end with more realistic expectations, it’s not hard to imagine quantum computing-based chemical and drug discovery leading to universally-effective cancer treatments. This could be the big eureka we’ve all been waiting for. I can’t wait to see what happens in peer-review.”
Yes, we too would like to see the outcome of that process. Will Google be trumpeting the results from the rooftops? Or will it quietly move on as with some previous Google endeavors?
It’s more likely that Google wants to generate some sciencey stuff to muffle the antitrust investigations, the Timnit Gebru matter, and the company’s data collection services which support online advertising.
Freeze that with a time crystal, please.
Cynthia Murrell, August 19, 2021
Facebook Keeps E2EE Goodness Flowing
August 18, 2021
Facebook is a wonderful outfit. One possible example is the helpful company’s end-to-end encryption for Facebook Messenger. “Facebook Messenger now have End-to-End Encryption for Voice and Video Calls” reports:
The social media giant said that end-to-end encryption for group voice and video calls will soon be a part of Messenger. Encryption is already available in Messenger as Secret Conversation. But Secret Conversation makes many features disable and only can be done with individuals. Facebook is going to change it in the coming weeks. Users will be able to control who can reach your chat lists, who will stay in the requests folder, and who can’t message you at all. In the blog, Facebook also talked about that Instagram is also likely to get end-to-end encryption and one-to-one conversations.
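Neither the post nor the quote explains what end-to-end encryption actually buys the user. As a concept refresher only, and emphatically not Messenger’s real protocol, here is a minimal sketch using the Python cryptography package: the two endpoints derive a shared key from a public key exchange, so a relaying server handles nothing but ciphertext.

```python
# Generic end-to-end encryption sketch (NOT Messenger's actual protocol):
# the two endpoints agree on a key; anything in between sees only ciphertext.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint generates its own key pair; only public keys are exchanged.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

def derive_key(own_private, peer_public):
    """Derive a symmetric session key from an X25519 key exchange."""
    shared_secret = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"e2ee-demo").derive(shared_secret)

alice_key = derive_key(alice_private, bob_private.public_key())
bob_key = derive_key(bob_private, alice_private.public_key())
assert alice_key == bob_key  # both ends hold the same key; the server never does

# Alice encrypts; only Bob, holding the derived key, can decrypt.
nonce = os.urandom(12)
ciphertext = AESGCM(alice_key).encrypt(nonce, b"voice-frame-bytes", None)
print(AESGCM(bob_key).decrypt(nonce, ciphertext, None))  # b'voice-frame-bytes'
```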
Should Facebook be subject to special oversight?
Stephen E Arnold, August 18, 2021