Code Skills for Everyone? An Interesting Question
August 8, 2019
Amazon, Google, and Microsoft want “everyone” to code. Not so fast.
Necessity is the mother of invention, and prisoners are some of the most ingenious individuals when it comes to making food, tattoo machines, booze, and shanks. Prisoners also prove their dexterity in hiding contraband items and getting them into prisons. Books were being used to smuggle contraband, and the problem got so bad that many prisons have forbidden people from sending books to those behind bars. Specific books have also been banned by prisons because of their content, and now Oregon and other states are taking a stand by forbidding books that teach code. Vice’s Motherboard explains why in the article, “Prisons Are Banning Books That Teach Prisoners How To Code.”
Oregon’s Department of Corrections wants to set the record straight that not all technology-related books are banned, but each one that arrives through the mail room is assessed to see if it presents “a clear and present danger.” Some of the books deemed unsuitable include Microsoft Excel 2016 for Dummies, Google AdSense for Dummies, and Windows 10 For Dummies. It is not surprising that Black Hat Python by Justin Seitz is on the list; it does include hacking tricks, and black hat techniques are dubbed black hat for a reason.
However, basic programming languages are not inherently a clear and present danger. Some of the content in the books is outdated and poses no danger to a prison. Then again, prisons, like most government organizations, are notoriously under-budgeted and could still be running Windows 98 or, even worse, Windows ME. Not allowing prisoners to gain computer literacy skills is more harmful, because proficiency with computers is needed for even the most basic jobs. Without the proper skills, it is much easier to slip back into a life of crime.
But…
“Officials at the Oregon Department of Corrections (DOC) argue, however, that knowledge of even these basic programs can pose a threat to prisons. ‘Not only do we have to think about classic prison escape and riot efforts like digging holes, jumping fences and starting fires, modernity requires that we also protect our prisons and the public against data system breaches and malware,’ DOC spokesperson Jennifer Black said in an emailed statement. ‘It is a balancing act we are actively trying to achieve.’”
That is a good point, but…
“According to Rutgers law professor Todd Clear, security concerns are overblown because learning to hack can require more than reading a book (for example, unrestricted internet access and some savvy comrades), and prison staff can monitor prisoners’ activities. ‘They are different places, no doubt, but the security claim is often specious,’ he said.”
In Oregon’s defense, 98 percent of books and magazines sent into prisons are approved. Items are banned “based on IT experience, DOC technical architecture and DOC’s mandate to run safe and secure institutions for all.” Coding classes, where offered, are popular among inmates.
Should prisoners be given access to educational classes so they can improve their lives and break free of the prison system? Perhaps the “everyone” push needs a footnote?
Whitney Grace, August 8, 2019
Clickbait: Still Tasty After All Those Years
August 8, 2019
This will come as no surprise to many who consider the rise of smartphones a scourge on society. Haas Newsroom, a publication of UC Berkeley’s Haas School of Business, explains “How Information Is Like Snacks, Money, and Drugs—to Your Brain.” Writer Laura Counts reports on a study performed at Haas which found that, as with eating tasty food or receiving money, taking in information can produce a dopamine surge. So, clickbait junkies are real junkies, addicted to junk information instead of drugs, money, or junk food. Counts writes:
“‘To the brain, information is its own reward, above and beyond whether it’s useful,’ says Assoc. Prof. Ming Hsu, a neuroeconomist whose research employs functional magnetic imaging (fMRI), psychological theory, economic modeling, and machine learning. ‘And just as our brains like empty calories from junk food, they can overvalue information that makes us feel good but may not be useful—what some may call idle curiosity.’ The paper, ‘Common neural code for reward and information value,’ was published this month by the Proceedings of the National Academy of Sciences. Authored by Hsu and graduate student Kenji Kobayashi, now a post-doctoral researcher at the University of Pennsylvania, it demonstrates that the brain converts information into the same common scale as it does for money. It also lays the groundwork for unraveling the neuroscience behind how we consume information—and perhaps even digital addiction. ‘We were able to demonstrate for the first time the existence of a common neural code for information and money, which opens the door to a number of exciting questions about how people consume, and sometimes over-consume, information,’ Hsu says.”
See the write-up for the study’s methodology. Though researchers did not specifically examine the brain’s response to consuming information online, their results do indicate information prompts the reward response. That is why we find ourselves seeking out details that are not really helpful in any way. Except, of course, to get a shot of that sweet, sweet dopamine.
Hsu draws a parallel to junk food. Sugar was a rare treat for our distant ancestors, and the desire for sweetness drove them to eat healthy fruit whenever they could. Now, though, refined sugar is all around us and usually divorced from fruity nutrients. Similarly, we live in a time when unhelpful (and downright untrue) information pervades our environment. As with watching our diet, we must be careful which information we choose to consume.
Cynthia Murrell, August 8, 2019
More on Biases in Smart Software
August 7, 2019
Bias in machine learning strikes again. Citing a study performed by Facebook AI Research, The Verge reports, “AI Is Worse at Identifying Household Items from Lower-Income Countries.” Researchers tested the accuracy of five top object-recognition systems (Microsoft Azure, Clarifai, Google Cloud Vision, Amazon Rekognition, and IBM Watson) against a dataset of household objects from around the world. Writer James Vincent tells us:
“The researchers found that the object recognition algorithms made around 10 percent more errors when asked to identify items from a household with a $50 monthly income compared to those from a household making more than $3,500. The absolute difference in accuracy was even greater: the algorithms were 15 to 20 percent better at identifying items from the US compared to items from Somalia and Burkina Faso.”
Not surprisingly, researchers point to the usual suspect—the similar backgrounds and financial brackets of most engineers who create algorithms and datasets. Vincent continues:
“In the case of object recognition algorithms, the authors of this study say that there are a few likely causes for the errors: first, the training data used to create the systems is geographically constrained, and second, they fail to recognize cultural differences. Training data for vision algorithms, write the authors, is taken largely from Europe and North America and ‘severely under sample[s] visual scenes in a range of geographical regions with large populations, in particular, in Africa, India, China, and South-East Asia.’ Similarly, most image datasets use English nouns as their starting point and collect data accordingly. This might mean entire categories of items are missing or that the same items simply look different in different countries.”
Why does this matter? For one thing, it means object recognition performs better for certain audiences than others in systems as benign as photo storage services, as serious as security cameras, and as crucial as self-driving cars. Not only that, we’re told, the biases found here may be passed into other types of AI that will not receive similar scrutiny down the line. As AI products pick up speed throughout society, developers must pay more attention to the data on which they train their impressionable algorithms.
Cynthia Murrell, August 7, 2019
Embedded Search: A Baidu from ByteDance?
August 7, 2019
“Regular” search used to require four steps: [1] Navigate to Lycos.com or another Web search engine; [2] Enter query and review results; [3] Maybe enter another bunch of words and review results; [4] Snag some info. Done. Close enough for horseshoes.
“Modern” search presents an answer: Use a phone and see the information the system determines you want, even if you don’t know you want it. Pizza? Beer? KFC? React.
Is there a third way?
ByteDance thinks there is. According to India’s Economic Times:
ByteDance, an innovative Beijing startup that created the hit video app TikTok, is moving into search in a threat to the ad business that has fuelled Baidu’s profit. ByteDance, known for aggressively recruiting top tech talent, is turning its attention to one of the most lucrative businesses online. “From 0 to 1, we are building a general search engine for a more ideal user experience…”
What is the information retrieval method? DarkCyber noted this explanation:
ByteDance’s search will be embedded within its own apps, beginning with its Jinri Toutiao news service. That will allow users to quickly search for related news, information or products — and ByteDance will be able to profit from search and display advertising.
How is this different from “modern” search?
DarkCyber is not sure. What’s clear is that ByteDance knows how to attract users. Remember: this is the company with the TikTok video service. Users may not know or care about “regular” search; neither, it seems, do the Web search services.
Users want convenience, quick jolts of data which the users perceive as “relevant”, and ease of use.
Will ByteDance become the service of choice when an aspiring scientist seeks information about a specific technical topic?
No, but search is convenient, experiential, and easy. The future of search?
There is no search. But there is a Baidu.
Stephen E Arnold, August 7, 2019
Factualities for August 7, 2019
August 7, 2019
The summer doldrums have had no suppressing effect on those spreadsheet jockeys, wizards of pop-up surveys, and latte-charged predictors.
Here’s our fanciest number of the week. It comes from an outfit called The Next Web:
1 billion. The number of people who watch esports. Esports are video games. Do Amazon Twitch, Google YouTube, and Ninja’s new home report verifiable data? Yeah, sure. Source: TNW
There was a close race for craziest. We have recognized a runner-up; however, we marveled at this figure:
13. Percentage of apps on the Google Play app store which have more than 1,000 installs. And 13 apps have more than 10 million users. (How many Android phones are there in the world? More than 2 billion, if NewZoo data are “sort of correct.”) Source: ZDNet
Here’s our “normal” rundown of factualities:
(20). The percentage decrease in malware. Source: Computing UK
12. Minutes per hour devoted to TV commercials on the AT&T owned Turner television network. Source: Los Angeles Times
$5. The amount Google paid people for permission to scan their faces. Source: The Verge
33. The percentage of businesses running Windows XP, which was rolled out in 2001. Source: Slashdot
50. The percentage of companies which do not know if their security procedures are working. Source: IT Pro Portal
50. The percentage of the cloud market which Amazon has. Source: Marketwatch
50. The percentage of “workers” who spend half their time struggling with data. Source: ZDNet
82. Percentage of people who will connect to any free WiFi service available to them. Source: Slashdot
89. Percentage of Germans who think France is a trustworthy partner. Source: Reddit
100,000. Estimated number of staff IBM terminated. An unknown percentage of these professionals were too old to make IBM hip again. Source: Bloomberg
$8.6 million. Amount Cisco Systems had to pay for selling a security product which was not secure. Source: DarkReading
106 million. Number of people whose personal details were stolen in the Capital One breach of an Amazon AWS system. Source: Washington Post
250 million. Number of email accounts stolen by TrickBot. Source: Forbes
1 billion. Number of people who watch esports (online games). Source: Next Web
$4.769 trillion. The net worth of 13,650 Harvard grads. Source: MarketWatch
Stephen E Arnold, August 7, 2019
CafePress: Just 23 Million Customer Details May Have Slipped Away
August 6, 2019
I read “CafePress Hacked, 23M Accounts Compromised. Is Yours One Of Them?” Several years ago I participated in a meeting at which a senior officer of CafePress was in the group. The topic was a conference at which I was going to deliver a lecture about cyber security. I recall that the quite confident CafePress C-suite executive pointed out to me that the firm had first-rate security. Interesting, right?
The write up in the capitalist tool said:
According to that HIBP notification, the breach itself took place on Feb 20 and compromised a total of 23,205,290 accounts. The data was provided to Troy Hunt at HIBP from a source attributed as JimScott.Sec@protonmail.com.
I thought that an outfit with first-rate security would not fall to a bad actor. I also assumed that the company would have reported the issue to customers promptly. It seems as though the breach took place more than five months ago. (February 2019, and today is August 5, 2019.)
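For readers wondering how to answer the headline’s question, here is a minimal sketch of how one might check an email address against Troy Hunt’s Have I Been Pwned (HIBP) service, which received the CafePress data. This is an illustration only: the API key value, the user agent string, and the sample address are placeholders, not details from the write up.

```python
# A minimal sketch (not from the original post) of checking an address against
# the Have I Been Pwned v3 API. The key value and sample address are
# placeholders; the real service requires a registered API key.
from urllib.parse import quote

import requests

HIBP_API_KEY = "YOUR-HIBP-API-KEY"  # placeholder, not a real key


def breaches_for(email: str) -> list:
    """Return the breach records HIBP reports for an email address."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{quote(email)}",
        headers={
            "hibp-api-key": HIBP_API_KEY,
            "user-agent": "breach-check-sketch",  # HIBP requires a user agent
        },
        params={"truncateResponse": "false"},  # include full breach details
        timeout=10,
    )
    if resp.status_code == 404:
        return []  # 404 means the address was not found in any breach
    resp.raise_for_status()  # surfaces rate limiting (429) or a bad key (401)
    return resp.json()


if __name__ == "__main__":
    hits = breaches_for("someone@example.com")
    print([b.get("Name") for b in hits])  # e.g. ["CafePress"] if affected
```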
What’s DarkCyber’s take on this?
- The attitude of a CafePress executive makes clear that confidence and arrogance are poor substitutes for knowledge.
- The company looks like it needs a security and management health check.
- A failure to act more quickly suggests significant governance issues.
How about a T-shirt with the CafePress logo and the phrase “First Rate Security” printed on the front?
Stephen E Arnold, August 6, 2019
DarkCyber for August 6, 2019, Now Available
August 6, 2019
DarkCyber for August 6, 2019, is now available at www.arnoldit.com/wordpress and on Vimeo at https://www.vimeo.com/351872293. The program is a production of Stephen E Arnold. It is the only weekly video news show focusing on the Dark Web, cybercrime, and lesser known Internet services.
DarkCyber (August 6, 2019) explores reports about four high-profile leaks of confidential or secret information. Each “leak” has unique attributes, and some leaks may be nothing more than attempts to generate publicity, cause embarrassment to a firm, or cleverly repurpose publicly available but little known information. Lockheed Martin made available, in a blog about automobiles, data related to its innovative propulsion system. The fusion approach is better suited to military applications, and the audience for the “leak” may be US government officials. The second leak explains that the breach of a Russian contractor providing technical services to the Russian government may be politically motivated. The information could be part of an effort to criticize Vladimir Putin. The third example is the disclosure of “secret” Palantir Technologies documents. This information may create friction for the rumored Palantir initial public offering. The final secret is the startling but unverified assertion that the NSO Group, an Israeli cyber security firm, can compromise the security of major cloud providers like Amazon and Apple, among others. The DarkCyber conclusion from this spate of “leak” stories is that the motivations for each leak are different. In short, leaking secrets may be political, personal, or just marketing.
Other stories in this week’s DarkCyber include:
A report about Kazakhstan’s stepped-up surveillance activities. Monitoring of mobile devices is underway in the capital city. DarkCyber reports that the system may be deployed to other Kazakh cities. The approach appears to be influenced by China’s methods; namely, installing malware on mobile devices and manipulating Internet routing.
DarkCyber explains that F-Secure offers a free service to individuals who want to know about their personal information. The Data Discovery Portal makes it possible for a person to plug in an email address. The system will then display some of the personal information major online services have in their databases about that person.
DarkCyber’s final story points out that online drug merchants are using old-school identity verification methods. With postal services intercepting a larger number of drug packages sent via the mail, physical hand-offs of the contraband are necessary. The method used relies on the serial number on currency. When the recipient provides the number, the “drug mule” verifies it against the number printed on the bank note.
DarkCyber videos appear each week through September 30, 2019. A new series of videos will begin on November 1, 2019. Programs are available on Vimeo.com and YouTube.com.
Kenny Toth, August 6, 2019
Elsevier: A Fun House Mirror of the Google?
August 5, 2019
Is Elsevier like Google? My hunch is that most people would see few similarities. In Google: The Digital Gutenberg, the third monograph in my Google trilogy, I noted:
- Google is the world’s largest publisher. Each search results page output is a document. Those documents make Google a publisher of import.
- Google uses its technology to create a walled garden for content. Rules must be followed to access that content for certain classes of users; for example, advertisers. I know that this statement does not mean much, if anything, to most people, but think about AMP, its rules, and why it is important.
- Google is a content recycler. Original content on Google is usually limited to its own blog posts. The majority of content on Google is created by other people, and some of those people pay Google a variable, volatile fee to get that content in front of users (who, by the way, are indirect content generators).
Therefore, Google is the digital Gutenberg.
Now Elsevier:
- Elsevier publishes content, for a fee, from a specialized class of authors.
- Elsevier, like other professional publishers, relies for revenue on institutions that typically subscribe to services, an approach Google is slowly making publicly known and beginning to use.
- Elsevier is an artifact of the older Gutenberg world, which required control or gatekeepers to keep information out of the wrong hands.
What’s interesting is to consider whether Google is becoming more like Elsevier or, alternatively, whether Elsevier is trying to become more like Google.
The questions are artificial because both firms:
- See themselves as natural control points and arbiters of data access
- Evidence management via arrogance; that is, what’s good for the firm is good for those in the know
- Face revenue diversification as a central challenge.
I thought of my Digital Gutenberg work when I read “Elsevier Threatens Others for Linking to Sci-Hub But Does So Itself.” I noted this statement (which in an era of fake news may or may not be accurate):
I learned this morning that the largest scholarly publisher in the world, Elsevier, sent a legal threat to Citationsy for linking to Sci-Hub. There are different jurisdictional views on whether linking to copyright material is or is not a copyright violation. That said, the more entertaining fact is that scholarly publishers frequently end up linking to Sci-Hub. Here’s one I found on Elsevier’s own ScienceDirect site ….
Key point: We do what we want. We are the gatekeepers. Very Googley.
Stephen E Arnold, August 4, 2019
Deep Fake Round Up
August 5, 2019
DarkCyber spotted “8 Deepfake Examples That Terrified the Internet.” This type of article is interesting because it catalogs items which can be forgotten or which become difficult to locate even with the power of Bing, DuckDuckGo, or Google search at one’s fingertips. The DarkCyber team was not “terrified.” In fact, we were amused once again by item three: “Zuckerberg speaks frankly.”
Stephen E Arnold, August 5, 2019
Alphabet Google: Alleged Election Manipulation Goal
August 5, 2019
In the best tradition of 2019 news reporting, an opinion has become “real news.” I read “Google Wants Trump to Lose in 2020: Former Engineer for Tech Giant Says: That’s Their Agenda.” DarkCyber prefers watching Twitch’s live stream of the Hong Kong protests to the “experts” who appear on Fox News.
However, Fox issued an actual “real news” story with old school words. The write up reports the actual, non-fake words of Kevin Cernekee, a former Google engineer who allegedly departed Google in 2018. The reason? Rumors about misuse of equipment were one possible explanation, which strikes DarkCyber as unsubstantiated.
We noted this statement in the write up:
“They have very biased people running every level of the company,” Cernekee continued. “They have quite a bit of control over the political process. That’s something we should really worry about.”
The “they” appears to refer to individuals who work at Alphabet Google, although the free-floating pronouns create some ambiguity.
Here’s another sound bite, this time as text on a Web site:
“They really want Trump to lose in 2020. That’s their agenda. They have very biased people running every level of the company.”
If you want more, please, navigate to the “real news” story.
A few observations:
- A single source, particularly a person who no longer works at Google and who may have an interesting historical interaction with the firm, may or may not be delivering actual factual information. A second or third source would be helpful.
- The likelihood of a Google conspiracy to alter an election exists, of course. But Google relies on smart software. The allegations in the write up suggest that actual factual humanoids interact with the smart software to fiddle search results. DarkCyber thinks some data, sample searches, and supporting testimony would be useful. Sure, the other sources might be biased, but more than one voice plus some data would be helpful.
- Why is this former Google engineer now actualized? Is it a book deal? A desire for revenge? A way to get hired by a company who wants someone with a sharp edge to write code? Context and motive would be interesting to DarkCyber.
To sum up: Without more than one person’s headline-making statement, DarkCyber asks, “Is Google sufficiently organized to fiddle search results in a consistent, sustained manner over time?”
Example: Google killed its Hangouts service and just added a new feature to the marginalized service.
Example: Google continues to push the amusing Loon balloon as more adventurous innovators are moving to satellites.
DarkCyber asks, “Is Google capable of a manipulation on this scale?”
We need more than one de-hired Xoogler’s statements.
Stephen E Arnold, August 5, 2019