Talking Down: A Specialty of High-Tech?
October 10, 2019
I am not angry nor am I annoyed. I am surprised. I read “Nadella Warns Government Conference Not to Betray User Trust.” “Nadella”, readers are supposed to know, is a big dog at Microsoft. The write up explains that attendees at an event called Microsoft Government Leaders Summit learned what Microsoft expects them to think and do.
TechCrunch (the article may be paywalled, require registration, or just disappear when you view it) reports:
He [Nadella] said it is essential to earn user trust, regardless of your business.
Now a direct quote from Mr. Nadella:
Now, of course, the power law here is all around trust because one of the keys for us, as providers of platforms and tools, trust is everything…. That means you need to also ensure that there is trust in the technology that you adopt, and the technology that you create and that’s what’s going to really define the power law on this equation. If you have trust, you will have exponential benefit. If you erode trust, it will exponentially decay.
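The “power law” talk is marketing language, but the underlying claim, that trust compounds and erodes exponentially, can be expressed as a toy model. The rate constant below is a made-up illustrative parameter, not anything Microsoft published:

```python
# Toy model of the "exponential" trust dynamics described in the quote.
# The per-period rate k is a hypothetical illustrative parameter.

def trust_after(initial: float, k: float, periods: int) -> float:
    """Compound trust by a factor of (1 + k) per period; negative k erodes it."""
    return initial * (1 + k) ** periods

# Earning trust: a 5% gain per period compounds quickly.
earned = trust_after(100.0, 0.05, 10)   # ~162.9

# Eroding trust: a 5% loss per period decays just as fast.
eroded = trust_after(100.0, -0.05, 10)  # ~59.9
```

The asymmetry Mr. Nadella implies is real in this model: ten bad periods undo far more than ten good periods gain.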
Ho, ho, ho.
Mr. Nadella seems indifferent to the problems updating the vaunted Windows 10 operating system is causing users, integrators, and information technology professionals.
What problems?
First, Microsoft has warned 800 million users to install a specific patch to avoid having one’s computer terminated with extreme prejudice. You can get more information from the capitalist tool here. Will I trust Microsoft after it killed my computer? Nope.
Second, Windows updates have in the last few weeks killed network adaptors, printers, USB functions, and some audio features. Will a user trust Microsoft’s updates? Nope.
Third, Microsoft is lecturing at a Microsoft-sponsored event for government-related people. Will these people trust Microsoft when computers in their department cannot print, connect, or play a video? Nope.
To sum up, those dishing out advice about trust may want to make certain that their products and services earn trust.
I suppose one could use Bing or the revised Fast Search & Transfer services to look for more information. But these search services can erode trust as well.
Arrogance, superiority complexes, and confidence — attributes to engender trust? Not in Harrod’s Creek.
Stephen E Arnold, October 10, 2019
Cloudera Bids to Be the Next Gen Anti Financial Crime Platform
October 10, 2019
DarkCyber read “Moving Towards the Next Gen Financial Crimes Platform.” The essay, which is two parts information and three parts marketing collateral, presents a diagram of the Cloudera anti financial crime platform. The phrase “financial crime platform” could be interpreted as the airfield for dispatching a range of malware attacks, a position in which some cloud vendors find themselves either wittingly or unwittingly. In this DarkCyber article, I will refer to the Cloudera vision as an anti financial crimes platform, hopefully to make clear that the cloud vendor is not a bad actor.
In DarkCyber’s view, there are three main points about Cloudera’s enterprise focused solution. Silos of information are a problem, and Cloudera will sweep across organizational data silos, at least that’s the idea. Here are points DarkCyber noted:
- The focus is on the enterprise, not on a wider scope; for example, a bank, not a number of FBI field offices, each of which operates more or less autonomously
- Smart software (artificial intelligence, machine learning, et al) is used at the edge to provide necessary signals about activity warranting further analysis by additional numerical recipes
- The solution can accommodate innovations either from Cloudera or from partners.
Cloudera includes a diagram of what the solution’s broad outlines are. Here’s the illustration from the Cloudera article:
Working from left to right, data are ingested by Cloudera. The content goes into an enterprise data store. A suite of financial crime “applications” operates on the data in the Enterprise Data Store and its modules. At the right-hand side of the diagram, analytical tools (maybe like TIBCO Spotfire?), business intelligence systems, and Cloudera’s Data Science Workbench allow authorized users to interact with the system.
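As described, the diagram is a linear pipeline: ingestion, an enterprise data store, crime applications, then analyst-facing tools. A minimal sketch of that staged flow follows. The function names are hypothetical labels for the diagram’s boxes, and the one-line “rule” is a stand-in; none of this is Cloudera’s actual API:

```python
# Schematic of the pipeline stages described above. Function names are
# hypothetical labels for the diagram's boxes, not Cloudera APIs.

def ingest(raw_records):
    """Ingestion: normalize incoming records."""
    return [r.strip().lower() for r in raw_records]

def store(records, data_store):
    """Enterprise data store: persist the normalized records."""
    data_store.extend(records)
    return data_store

def flag_suspicious(data_store):
    """'Financial crime applications': flag records matching a toy rule."""
    return [r for r in data_store if "wire transfer" in r]

def analyst_view(flags):
    """Analyst-facing tooling: present flagged items for review."""
    return {"flagged": flags, "count": len(flags)}

# The stages compose in sequence, matching the diagram's flow.
warehouse: list = []
records = ingest(["Wire Transfer $9,900 ", "payroll run"])
report = analyst_view(flag_suspicious(store(records, warehouse)))
# report["count"] == 1
```

The open question raised below, whether third-party policeware applications can slot into the middle stage, is exactly where this simple composition would break down.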
Cloudera’s article includes this statement:
With CDP as the foundation, intelligence gaps are mitigated by a holistic enterprise view of all customer and financial crime-related data (holistic KYC), systems, models and processes. You will also be able to tighten the loop between detecting and responding to new fraud patterns. CDP also supports open-source advances to ensure that your teams are able to experiment with and adopt the latest technologies and methods, which helps to mitigate technology and vendor lock-in. The diagram below illustrates the Cloudera Data Platform and its various components for enterprise management. [Emphasis in the original source]
Several observations are warranted:
- Vendor lock-in is an organic consequence of putting one’s eggs in one cloud-centric basket. Although it is possible to envision a system which accepts enhancements, the write-up and the diagram do not include a provision for this type of extension. DarkCyber posits that restrictions will apply.
- The diagram has “financial crime applications” without providing much “color” or detail about these policeware components. One key question is, “Will these policeware applications run ‘on Cloudera’ or on some other system; for example, the IBM cloud, which delivers Analyst Notebook functions?”
- The write up does not provide information about restrictions on data; for example, streaming data from telephone intercept systems.
- Information about functional components, application programming interfaces, and programmatic methods for the platform is not provided. DarkCyber understands the need for economy in writing, but a table or a list of suggested links would be helpful.
Why is Cloudera making this play?
DarkCyber hypothesizes that Cloudera realizes Amazon’s “as is” capabilities pose a substantial threat. Cloudera wants to stake out some territory before the Bezos bulldozer rolls through the policeware market.
Stephen E Arnold, October 9, 2019
Phishing: Intriguing Approach
October 10, 2019
I don’t want to work through what’s on target and what’s wonky about this “Credible Phishing Attempt.” Note that the approach makes use of voice and a reasonably coherent script. You will want to take a look at the comments in the thread. There are some interesting points along with a few comments which help explain why phishing is one of the go-to methods for bad actors.
Stephen E Arnold, October 10, 2019
Amazon Policeware: Getting Visible in Spite of Amazon
October 9, 2019
An enterprising reporter included some information from my Amazon research. You can find these open source factoids in “Meet America’s Newest Military Giant: Amazon.” Like good recipients of Jeffrey Epstein love, the publication will urge you to pay to read the recycled version of my research. Hey, that’s capitalism in action.
The write up does veer from “military giant” into policeware, a term I coined to make clear that there are platforms, applications, and tools purpose-built to support law enforcement, analysts, and investigators.
© Stephen E Arnold, 2016
You may want to read the article and take a look at the information I have published in this blog and on YouTube and Vimeo. The search systems struggle to highlight this content, but that’s the way life is in the world of ad-supported search. (Tip: To locate the information, use the search box on this Web site or you can explore these short videos at these links:
October 30, 2018 https://vimeo.com/297839909
November 6, 2018 https://vimeo.com/298831585
November 13, 2018 https://vimeo.com/300178710
November 20, 2018 https://vimeo.com/301440474.)
Another peek at Amazon’s activities is provided in a side mirror attached to a speeding Chevrolet Volt. “Ring’s Police Partnerships Must End, Say More Than 30 Civil Rights Groups” is an “open letter.” That document, according to CNet, “urges local lawmakers to cancel all existing police deals with Amazon’s video doorbell company.”
Good luck with that.
The CNet write up adds:
Ring has more than 500 police partnerships across the US, and a coalition of civil rights groups are calling for local governments to cancel them all. On Tuesday, tech-focused nonprofit Fight For the Future published an open letter to elected officials raising concerns about Ring’s police partnerships and its impacts on privacy and surveillance. The letter is signed by more than 30 civil rights groups, including the Center for Human Rights and Privacy, Color of Change and the Constitutional Alliance. Along with asking mayors and city councils to cancel existing Ring partnerships, the letter also asks for surveillance oversight ordinances to prevent police departments from making these deals in the future, and also requested members of Congress to investigate Ring’s practices.
Gender Bias in Old Books. Rewrite Them?
October 9, 2019
Here is an interesting use of machine learning. Salon tells us “What Reading 3.5 Million Books Tells Us About Gender Stereotypes.” Researchers led by University of Copenhagen’s Dr. Isabelle Augenstein analyzed 11 billion English words in literature published between 1900 and 2008. Not surprisingly, the results show that adjectives about appearance were most often applied to women (“beautiful” and “sexy” top the list), while men were more likely to be described by character traits (“righteous,” “rational,” and “brave” were most frequent). Writer Nicole Karlis describes how the team approached the analysis:
“Using machine learning, the researchers extracted adjectives and verbs connected to gender-specific nouns, like ‘daughter.’ Then the researchers analyzed whether the words had a positive, negative or neutral point of view. The analysis determined that negative verbs associated with appearance are used five times more for women than men. Likewise, positive and neutral adjectives relating to one’s body appearance occur twice as often in descriptions of women. The adjectives used to describe men in literature are more frequently ones that describe behavior and personal qualities.
“Researchers noted that, despite the fact that many of the analyzed books were published decades ago, they still play an active role in fomenting gender discrimination, particularly when it comes to machine learning sorting in a professional setting. ‘The algorithms work to identify patterns, and whenever one is observed, it is perceived that something is “true.” If any of these patterns refer to biased language, the result will also be biased,’ Augenstein said. ‘The systems adopt, so to speak, the language that we people use, and thus, our gender stereotypes and prejudices.’ Augenstein explained this can be problematic if, for example, machine learning is used to sift through employee recommendations for a promotion.”
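The extraction step the researchers describe, pulling descriptive words attached to gender-specific nouns and tallying their sentiment, can be approximated in a few lines. The word lists and the adjacency heuristic below are illustrative stand-ins for the study’s actual NLP pipeline (which used full parsing over 11 billion words; this sketch only inspects the word immediately preceding each noun):

```python
# Toy version of the study's extraction step. The lexicons are
# illustrative stand-ins, not the researchers' actual resources.

GENDERED_NOUNS = {
    "daughter": "female", "woman": "female", "wife": "female",
    "son": "male", "man": "male", "husband": "male",
}
SENTIMENT = {"beautiful": "positive", "brave": "positive", "shrill": "negative"}

def tally(text: str) -> dict:
    """Count sentiment-bearing words immediately preceding gendered nouns."""
    counts = {"female": {}, "male": {}}
    words = text.lower().replace(".", " ").split()
    for prev, word in zip(words, words[1:]):
        gender = GENDERED_NOUNS.get(word)
        if gender and prev in SENTIMENT:
            counts[gender][prev] = counts[gender].get(prev, 0) + 1
    return counts

sample = "The beautiful daughter met the brave son. A beautiful woman spoke."
result = tally(sample)
# result -> {'female': {'beautiful': 2}, 'male': {'brave': 1}}
```

Even this crude version shows how the skew emerges mechanically: once the pairs are extracted, the bias is simply what the counts say it is.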
Karlis does list some caveats to the study—it does not factor in who wrote the passages, what genre they were pulled from, or how much gender bias permeated society at the time. The research does affirm previous results, like the 2011 study that found 57% of central characters in children’s books are male.
Dr. Augenstein hopes her team’s analysis will raise awareness about the impact of gendered language and stereotypes on machine learning. If they choose, developers can train their algorithms on less biased materials or program them to either ignore or correct for biased language.
Cynthia Murrell, October 9, 2019
Cyber Security: Hand Waving Instead of Results?
October 9, 2019
Beta News published what DarkCyber views as a bit of an exposé. “Security Professionals Struggle to Measure Success within the Business” recycles information which appears to come from a services firm called Thycotic. (DarkCyber has not been able to locate the referenced report.)
Among the statements in the write up, DarkCyber noted these as particularly thought provoking:
- “Nearly half (44 percent) [of those in the Thycotic sample] say their organization struggles to align security initiatives with the business’s overall goals”
- “More [than] 35 percent aren’t clear what the business goals are”
- “The most commonly used metric is to count the number of security breaches (56 percent) followed by the time taken to resolve a breach (51 percent). It appears, however, that these criteria may not be terribly useful.”
- “Around two in five (39 percent) say they have no way of measuring what difference past security initiatives have made to the business.”
- “36 percent agree it’s not a priority for them to measure security success once initiatives have been rolled out.”
These are interesting results. If an information unit cannot demonstrate that its security efforts are useful, budgets will be cut or staff rotated. Vendors will be sucked into this negative atmosphere.
Are cyber security vendors delivering solutions which work? Are customers able to use these products? Will executives lose confidence in their staff and vendors because security challenges continue to bedevil the organization?
The big question, however, remains:
Do the hundreds of vendors have solutions that are useful?
Paying invoices for hand waving can be an issue in some organizations. Well-funded cyber security start-ups might run into choppy waters after several years of smooth sailing and the support of investors who believe that nothing can derail new cyber security solutions.
Stephen E Arnold, October 9, 2019
Robots: Not Animals?
October 9, 2019
The prevailing belief is that if Google declares something to be true, then it is considered a fact. The same can be said about YouTube: if someone sees it on YouTube, then, of course, it must be real. YouTube already has trouble determining what truly is questionable content. For example, YouTube has struggled to flag white supremacist and related videos for takedown. Another curious YouTube incident about flagged content concerns robot abuse, described in “YouTube Concedes Robot Fight Videos Are Not Actually Animal Cruelty After Removing Them By Mistake” from Gizmodo.
YouTube rules state that videos displaying animals suffering, such as dog and cock fights, cannot be posted on the streaming service. For some reason, videos and channels centered on robot fighting were cited and content was removed.
“…the takedowns were first noted by YouTube channel Maker’s Muse and affected several channels run by Battle Bots contenders, including Jamison Go of Team SawBlaze (who had nine videos taken down) and Sarah Pohorecky of Team Uppercut. Pohorecky told Motherboard she estimated some 10 to 15 builders had been affected, with some having had multiple videos removed. There didn’t appear to be any pattern in the titles of either the videos or the robots themselves, beyond some of the robots being named after animals, she added.”
YouTube’s algorithms make mistakes and robots knocking the gears and circuit boards out of each other was deemed violent, along the lines of “inflicting suffering.” YouTubers can appeal removal citations, so that content can be reviewed again.
Google humans doing human deciding. Interesting.
Whitney Grace, October 9, 2019
Higher Education: A Disconnect between Data Analysis and Behavior
October 9, 2019
DarkCyber worked through “Artificial Intelligence and Big Data in Higher Education: Promising or Perilous?” The write up tries to strike a balance between commercial practices and the job universities are supposed to do. Smart software can help an admissions officer determine which students are “really” interested in a particular institution. In view of the payments parents have made to get their children into prestigious universities, we’re just not buying this argument.
We noted this statement about other uses of smart software:
Other uses extend to student support, which for example, makes recommendations on courses and career paths based on how students with similar data profiles performed in the past. Traditionally this was a role of career service officers or guidance counselors, the data-based recommendation service arguably provides better solutions for students. Student support is further elevated by the use of predictive analytics and its potential to identify students who are at risk of failing or dropping out of university. Traditionally, institutions would rely on telltale signs of attendance or falling GPA to assess whether a student is at risk. AI systems allow for the analysis of more granular patterns of the student’s data profile. Real-time monitoring of the student’s risk allows for timely and effective action to be taken.
The indicator of student performance is grades. Maybe one can consider certain extracurricular activities as useful signals.
DarkCyber is not certain that today’s institutions of higher education are much more than loan agency middlemen.
The notion that today’s academic environment will improve because adjunct professors work for less than “regular” professors seems odd. Will poorly paid adjuncts chase down a student who has lousy grades and doesn’t attend lectures or complete the online work? Then will that adjunct or maybe a graduate assistant know what magic words to say to get the student on track?
DarkCyber doubts the present academic environment encourages this type of behavior. At a recent conference, a professor on the program asked me, “Do you think I need to contact an agent to get me more speaking engagements?”
Typical. Students are not a primary concern it seems.
Stephen E Arnold, October 9, 2019
Ah, the Cloud: The Risks of Subscription Software
October 9, 2019
DarkCyber is amused when articles about the wonders of cloud-provided subscription software are presented as a real benefit to users. The team was intrigued with the information in “Adobe to Deactivate Accounts for All Venezuelan Users Due to US Sanctions.”
The write up reported:
US software giant Adobe is canceling all subscriptions and deactivating all accounts for Venezuelan users as part of its efforts to become compliant with sanctions imposed by the Trump administration over the summer.
The accounts go dead, which means Photoshop, Illustrator, and other Adobe app users in Venezuela have to find alternatives.
ZDNet states: “Because of the White House’s sanctions, users aren’t eligible for refunds either.”
Developers of software which can be installed on a user’s computer may find this announcement heartening.
DarkCyber wants to point out:
- The cloud based service is a variation on old school time sharing. Users have limited control under certain circumstances.
- Government actions can have more impact when centralized services comply with mandates.
- Subscriptions’ benefits may not be tilted toward users.
The cloud has other attributes as well; for example, monitoring and control.
DarkCyber’s view is that the modern computing environment is becoming increasingly interesting. Those dependent on cloud based solutions may want to consider having a Plan B.
Stephen E Arnold, October 9, 2019
Unusual Source, Useful Information
October 8, 2019
I want to give a thumbs up to Cool Smart Phone and its write up “Lies Everywhere. The Truth Is Dead.” The article does a very good job of explaining the basic mechanism for planting misinformation in online channels. Plus the article contains a number of examples.
DarkCyber noted this statement in the write up:
So as a test, I replied to every single one of these replies. I even replied to the original tweet itself, stating that the official advice was indeed to do just this. I thought I’d get some sort of response from the several dozen tweets but no, not one. Not one reply, no one angry response. No blocks. Then, if you look into a lot of these accounts, it’s apparent they’re bots. However, to the casual Twitter [user], they just see a tweet has 1.4 thousand “Likes”, nearly a thousand retweets and lots of people agreeing with the core message. The bots start things off – next it’s time for the media to chip in. Who knows, the media themselves may have even “planted” some of these stories on social media – just to have a juicy news item to cover.
The one issue I had with the write up was its defeatist approach; specifically:
We’re all being lied to. Social engineering is rife and none of us have the time or the inclination to check and investigate whether that short video on Facebook is real or if the tweet we read this morning is untrue. Like our “sheep” instincts at airports, we just go where we’re told and believe what we’re shown.
DarkCyber’s perception is that increasingly restrictive laws, demands for encryption backdoors, and tighter Internet controls are a response and a potential solution. Note that the fix may be brutal. When societal and personal constraints are removed in our digital era, governments have limited tools to get civilized behavior back on track. The good old days are going to be imposed via a version of Chinafication.
That shift is underway in many countries, and it will become more visible and forceful. Will news cease being fake? Probably not.
Stephen E Arnold, October 8, 2019