The Dark Web: Becoming Trendy? Yep

January 4, 2018

I track articles which suggest that the Dark Web is becoming trendy. My fave for 2017 was the supermodel who was to be auctioned as a sex slave via the Dark Web. Fake news? Who knows. One lucky person of interest faces a trial in Italy.

The leader in Dark Web trend-making is Wired Magazine with its story “An Interview With Darkside, Russia’s Favorite Dark Web Drug Lord.” The Dark Web Notebook team has no easy way to tell whether the write up about the interview with a Dark Web drug kingpin is real or more like the information distributed by some other “real” journalism outfits.

We did note three interesting comments in the write up. Let’s tally these and remind you to read the original story:

  • Darkside says that RAMP makes around $250,000 a year from its brisk drug trafficking business. [With the shuttering of Dark Web drug markets, the estimated revenue seems low. The dealer subscription service is a nice angle. The DarkCyber report about the economics of online drug trafficking suggests that RAMP is an outlier both in its longevity (five years of operation) and its approach to business.]
  • Darkside says that he favors human intermediated processes, not smart software. [One issue with humans is that they talk. Presumably Darkside has a way to zip the lips of his colleagues, subscribers, and customers. However, he did allegedly “talk” with Wired. No word on whether the information was obtained face to face, via a phone call, or a digital channel like encrypted email. Loose lips sink ships and Dark Web drug markets.]
  • Darkside does not “mess with the CIA.” [This is interesting. A number of enforcement agencies are working to shutter Dark Web contraband sites. Examples range from Interpol to the Dutch authorities, German and Czech Republic investigators, and, of course, US enforcement entities. How does Darkside know which investigator is from what country? Not even some of the parallel enforcement authorities know what other countries’ agents are doing on a daily basis. We have seen a list of more than 1,500 Dark Web sites operated by police. Maybe RAMP is such a spoofed site?]

Interesting information. Now about that “real” news thing.

Stephen E Arnold, January 4, 2018

Linguistic AI Research in China

January 4, 2018

How is linguistic AI faring in the country with some of the most complex languages in the world? The linguistics blog Language Log examines “Linguistic Science and Technology in China.” Upon attending the International Workshop on Language Resource Construction: Theory, Methodology and Applications (PDF), writer Mark Liberman seems impressed with Chinese researchers’ progress. He writes:

The growing strength of Chinese research in the various areas of linguistic science and technology has been clear for some time, and the presentations and discussions at this workshop made it clear that this work is poised for a further major increase in quantity and quality. That trend is obviously connected to what Will Knight called “China’s AI Awakening” (Technology Review, 10/10/2017).

Liberman shares a passage from Knight’s article that emphasizes the Chinese government’s promotion of AI technology and links to other recent articles on the subject. He continues:

The Chinese government’s plan is well worth reading — and Google Translate does a good job of making it accessible to those who can’t read Chinese.  Overall this plan strikes me as serious and well thought out, but there seems to me to be a potential tension between one aspect of the plan and the current reality. One of the plan’s four ‘basic principles’ is ‘Open Source.’ … This is very much like the approach followed in the U.S. over the past half century or so. But it’s increasingly difficult for Chinese researchers to ‘Actively participate in global R & D and management of artificial intelligence and optimize the allocation of innovative resources on a global scale,’ given the increasingly restrictive nature of the ‘Great Firewall.’

Hmm, he has a point there. The write-up compares China’s plan to Japan’s approach to AI in the 1980s but predicts China will succeed where Japan fell short. Liberman embeds links to several related articles within his, so check them out for more information.

Cynthia Murrell, January 4, 2018

Blurring the Line Between Employees and AI

January 4, 2018

Using artificial intelligence to monitor employees is a complicated business. While some employers aim to improve productivity and make work easier, others may have other motives. That’s the thesis of the recent Harvard Business Review story, “The Legal Risks of Monitoring Employees Online.”

According to the story:

Companies are increasingly adopting sophisticated technologies that can help prevent the intentional or inadvertent export of corporate IP and other sensitive and proprietary data.

 

Enter data loss prevention, or “DLP” solutions, that help companies detect anomalous patterns or behavior through keystroke logging, network traffic monitoring, natural language processing, and other methods, all while enforcing relevant workplace policies. And while there is a legitimate business case for deploying this technology, DLP tools may implicate a panoply of federal and state privacy laws, ranging from laws around employee monitoring, computer crime, wiretapping, and potentially data breach statutes. Given all of this, companies must consider the legal risks associated with DLP tools before they are implemented and plan accordingly.
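The quoted passage names the building blocks (keystroke logging, network traffic monitoring, natural language processing) without showing how an “anomalous pattern” is actually caught. Below is a minimal, purely illustrative sketch of the sort of baseline check a DLP tool might run over network traffic logs; the user names, transfer volumes, and threshold are hypothetical assumptions, and a real product would combine many such signals rather than a single rule.

```python
# Illustrative-only sketch of DLP-style anomaly detection on network traffic logs.
# The data, field names, and z-score threshold are hypothetical, not from any vendor.
from statistics import mean, stdev

# Hypothetical daily outbound transfer volumes (in MB) per employee.
baseline_mb = {
    "alice": [120, 95, 110, 130, 105, 115, 125],
    "bob":   [40, 55, 35, 50, 45, 60, 38],
}

todays_mb = {"alice": 118, "bob": 900}  # bob's spike should be flagged

def flag_anomalies(history, today, z_threshold=3.0):
    """Flag users whose transfer volume today deviates strongly from their own baseline."""
    flagged = []
    for user, volumes in history.items():
        mu, sigma = mean(volumes), stdev(volumes)
        z = (today[user] - mu) / sigma if sigma else 0.0
        if z > z_threshold:
            flagged.append((user, round(z, 1)))
    return flagged

if __name__ == "__main__":
    print(flag_anomalies(baseline_mb, todays_mb))  # expect bob to be flagged
```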

While it’s undeniable some companies will use technology monitor employees, this same machine learning and AI can help better employees. Like this story about how AI is forcing human intelligence to evolve and strengthen itself, not get worse. This is a story we’re watching closely because these two camps will likely only create a deeper divide.

Patrick Roland, January 4, 2018

Google Just Caught the Amazon Ad Disease

January 3, 2018

The ideas are good. Build up revenue from online sales. Diversify revenue and offset infrastructure costs, the bane of Alphabet Google. Open new channels with consumer hardware. Then look around for a competitor with a back injury or a wobbly knee and run plays at that weak spot.

Football American style?

Nope. Just Amazon’s apparent 2018 game plan.

I read “What It Means That Amazon Is Bringing Ads to Alexa.” (I must admit the wording of the title was interesting with the phrase “means that.”)

The point of the write up focuses on the consumer “experience.” Sigh. I learned from the write up:

Amazon is reportedly testing out various ad types, including videos and promoted paid search results (a la Google). CNBC reports that Amazon is preparing for a “serious run at the ad market” that could begin as soon as this year.

I understand the counter argument: Google’s ad revenue is “safe.” See, for example, the analyst think in “Amazon’s Advertising Push Will Not Threaten Google’s Search Business, Analyst Says.”

My view is that Google is dependent upon online advertising. In the company’s two decades of making relevance irrelevant, Google lacks Amazon’s revenue diversity.

I may be a simplistic hick living in rural Kentucky, but it seems to me that probing online ad revenues poses few risks for Amazon and offers comparatively cost-free opportunities for the digital behemoth.

Let’s assume that Amazon is only partially successful; that is, the company lands a few big advertisers and confines its efforts to ads in Amazon search results and to Alexa outputs.

Google will have to spend big or cut costs in order to make up for the loss of a handful of big advertisers. The problem is similar to the one Westlaw and LexisNexis face when a big law firm dies or merges with another firm. The revenues are expensive, time consuming, and difficult to replace.

Assume that Amazon is quite successful. The erosion of Google revenue may be modest at first and then map into one of those nifty diagrams for the spread of cancer. My recollection is that Sartwell’s Law may be germane. See “Sartwell’s Incubation Period Model Revisited in the Light of Dynamic Modeling.”

Amazon advertising may be a form of cancer. If it gains traction, the cancer will spread. Unpleasant metaphor, but it illustrates how Amazon can undermine Google and either [a] force Alphabet Google to spend more to remain healthy, [b] weaken Google so that it cannot resist other “infectious” incursions like governmental actions related to taxes and allegations of unfair practices, or [c] set Google up for gradual stagnation followed by a phase change (collapse).

In short, whether one is pro or anti Amazon, the testing of Amazon ads warrants watching.

Stephen E Arnold, January 3, 2018

Will China Overtake the US in AI?

January 3, 2018

Is the U.S. investing enough in AI technology? Not according to DefenseTech’s piece, “Google Exec: China to Outpace US in Artificial Intelligence by 2025.” Writer Matt Cox reports that Eric Schmidt, the Google executive who chairs the Defense Innovation Board, warned at November’s Artificial Intelligence and Global Security Summit at the Center for a New American Security that China is pursuing AI so vigorously that it will have caught up to the U.S. by 2020, will have surpassed us by 2025, and, by 2030, will “dominate” the field. Cox quotes Schmidt:

Just stop for a sec. The government said that. Weren’t we the ones who are in charge of AI dominance in our country? Weren’t we the ones that invented this stuff? Weren’t we the ones who were willing to go and exploit the benefits of all this technology for betterment of American exceptionalism and our own arrogant view?” Schmidt asked. “Trust me. These Chinese people are good,” he continued.

 

Currently, the United States does not have a national AI strategy, nor does it place a priority on funding basic research in AI and other science and technology endeavors, Schmidt said. “We need to get our act together as a country,” he said. “America is the country that leads in these areas; there is every reason we can continue that leadership.

Schmidt also argued that today’s immigration restrictions are counterproductive, noting:

Iran produces some of the smartest and top computer scientists in the world. I want them here. It’s crazy not to let these people in. Would you rather have them building AI somewhere else or would you rather have them building it here?

Schmidt asserts the real problem lies within the gears of bureaucracy but suspects interdepartmental cooperation would improve drastically if we happened to be at war with a “major adversary.” I hope we do not have the chance to confirm his suspicion anytime soon.

Cynthia Murrell, January 3, 2018

Big Data Logic Turning Government on Its Ear

January 3, 2018

Can the same startup spirit that powers so many big data companies disrupt the way the government operates? According to a lot of experts, that’s exactly what is happening. We discovered more in a recent Next Gov article, “This Company is Trying to Turn Federal Agencies into Startups.”

According to the story:

BMNT Partners, a Palo Alto-based company, is walking various government agencies through the process of identifying pressing problems and then creating teams that compete against each other to design the best solution. The best of those products might warrant future investments from the agency.

The process begins when an agency presents BMNT with an array of problems it faces internally; BMNT staff helps them narrow down the problem scope, conduct market research to identify the problems that could pique interest from commercial companies, and then track down experts within the agency who can evaluate the solutions. BMNT also helps agencies create various teams of three or four employees who can start building minimum viable products. Newell explained those employees often are selected from the pool within the chief information officer’s or chief technology officers’ staffs.

This seems like a very plausible avenue. Federal agencies are already embracing machine learning and AI, so why not move a little further in this direction? We are looking forward to seeing how this pans out, but chances are this is something the government cannot ignore.

Patrick Roland, January 3, 2018

Sisyphus Gets a Digital Task: Defining Hate Speech, Fake News, and Illegal Material

January 2, 2018

I read “Germany Starts Enforcing Hate Speech Law.” From my point of view in Harrod’s Creek, Kentucky, defining terms and words is tough. When I was a debate team member, our coach Kenneth Camp insisted that each of the “terms” in our arguments and counter arguments be defined. When I went to college and joined the debate team, our coach — a person named George Allen — added a new angle to the rounded corners of definitions. The idea was “framing.” As I recall, one not only defined terms, but one selected factoids, sources, and signs which would put our opponents in a hen house from which they could escape only with scratches and maybe a nasty cut or two.

The BBC, the author of the article, and, of course, the German lawmakers were not thinking about definitions (high school), framing (setting up the argument so winning was easier), or the nicks and bumps incurred when working free of that ramshackle structure.

The write up states:

Germany is set to start enforcing a law that demands social media sites move quickly to remove hate speech, fake news and illegal material.

So what counts as hate speech, fake news, and illegal material? The BBC does not raise this question.

I noted:

Germany’s justice ministry said it would make forms available on its site, which concerned citizens could use to report content that violates NetzDG or has not been taken down in time.

And what do the social media outfits have to do?

As well as forcing social media firms to act quickly, NetzDG requires them to put in place a comprehensive complaints structure so that posts can quickly be reported to staff.

Is a mini trend building in the small pond of clear thinking? The BBC states:

The German law is the most extreme example of efforts by governments and regulators to rein in social media firms. Many of them have come under much greater scrutiny this year as information about how they are used to spread propaganda and other sensitive material has come to light. In the UK, politicians have been sharply critical of social sites, calling them a “disgrace” and saying they were “shamefully far” from doing a good job of policing hate speech and other offensive content. The European Commission also published guidelines calling on social media sites to act faster to spot and remove hateful content.

Several observations:

  1. I am not sure if there are workable definitions for the concepts. I may be wrong, but point of view, political orientation, and motivation may be spray painting gray over already muddy concepts.
  2. Social media giants do not have the ability to move quickly. I would suggest that the largest of these targeted companies are not sure what is happening amidst their programmers, algorithms, and marketing professionals. How can one react quickly when one does not know who is acting, what is happening, or where an action occurs?
  3. Attempts to shut down free flowing information will force those digital streams into the murky underground of hidden networks with increasingly labyrinthine arabesques of obfuscation used to make life slow, expensive, and frustrating for enforcement authorities.

Net net: We know that the BBC does not think much about these issues; otherwise, a hint of the challenges would have filtered into the write up. We know that the legislators are interested in getting control of social media communications, and filtering looks like a good approach. We know that the social media “giants” are little more than giant, semi-organized ad machines designed to generate data and money. We know that those who allegedly create and disseminate “hate speech, fake news and illegal material” will find communication channels, including old fashioned methods like pinning notes on a launderette’s bulletin board or marking signs on walls.

Worth watching how these “factors” interact, morph, and innovate.

Stephen E Arnold, January 2, 2018

Neural Net Machine Translation May Increase Acceptance by Human Translators

January 2, 2018

Apparently, not all professional translators are fond of machine translation technology, with many feeling that it just gets in their way. A post from Trusted Translations’ blog examines, “Rage Against the Machine Translation: What’s All the Fuzz About?” Writer Cesarm thinks the big developers of MT tech, like Google and Amazon, have a blind spot: the emotional impact on all the humans involved in the process. From clients to linguists to end users, each has a stake in the results. Especially the linguists, who, after all, could theoretically lose their jobs altogether to the technology. We’re told, however, that (unspecified) studies indicate translators are more comfortable with software that incorporates neural networking/deep learning technology. It seems such tools produce a better linguistic flow, even if some accuracy is sacrificed. Cesarm writes:

That’s why I mention emotional investment in machine translation as a key element to reinventing the concept for users.  Understanding the latest changes that have been implemented in the process can help MT-using linguists get over their fears. It seems the classic, more standardized way of MT, (based solely on statistical comparison rather than artificial intelligence) is much better perceived by heavy users, considering the latter to be more efficient and easier to ‘fix’ whenever a Post-Editing task is being conducted, while Post Editing pre-translated text, with more classical technology has proven to be much more problematic, erratic, and what has probably nurtured the anger against MT in the first place, giving it a bad name. Most users (if not all of them) will take on pre-translated material processed with statistical MT rather that rule based MT any day. It seems Neural MT could be the best tool to bridge the way to an increased degree of acceptance by heavy users.

Perhaps. I suppose we will see whether linguists’ prejudice against MT technology ultimately hinders the process.

Cynthia Murrell, January 2, 2018

AI Makes Life-Saving Medical Advances

January 2, 2018

Too often we discuss the grey areas around AI and machine learning. While that discussion is important, it is not all this amazing technology can do. Saving lives, for instance. We learned a little more on that front from a recent Digital Journal story, “Algorithm Repairs Corrupted Digital Images.”

According to the story:

University of Maryland researchers have devised a technique that exploits the power of artificial neural networks to tackle multiple types of flaws and degradations in a single image in one go.

The researchers achieved image correction through the use of a new algorithm. The algorithm operates artificial neural networks simultaneously, so that the networks apply a range of different fixes to corrupted digital images. The algorithm was tested on thousands of damaged digital images, some with severe degradations. The algorithm was able to repair the damage and return each image to its original state.

The application of such technology crosses the business and consumer divide, taking in everything from everyday camera snapshots to lifesaving medical scans. The types of faults digital images can develop include blurriness, grainy noise, missing pixels and color corruption.
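The quoted description stays at a high level, so here is a minimal sketch of the general idea: train a neural network to map corrupted images back to clean ones. This is not the Maryland team’s actual algorithm; the tiny architecture, synthetic noise, and training loop are illustrative assumptions only (PyTorch assumed available).

```python
# Illustrative sketch of learning to repair corrupted images with a neural network.
# NOT the University of Maryland algorithm; architecture and noise model are assumptions.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """A very small convolutional network that learns to restore corrupted images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, clean_batch):
    """One training step: corrupt the clean images, then learn to undo the damage."""
    corrupted = clean_batch + 0.2 * torch.randn_like(clean_batch)  # synthetic noise
    restored = model(corrupted)
    loss = nn.functional.mse_loss(restored, clean_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = TinyDenoiser()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    fake_images = torch.rand(8, 3, 64, 64)  # stand-in for a real training set
    print(train_step(model, optimizer, fake_images))
```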

Very promising from a commercial and medical standpoint, especially the medical side. This news, coupled with the story in Forbes about AI disrupting healthcare norms in 2018, makes for a big promise. We are looking forward to seeing what the new year brings for medical AI.

Patrick Roland, January 2, 2018

Dark Cyber, January 2, 2018, Now Available

January 2, 2018

Dark Cyber, a weekly video news program about the Dark Web, is now available. The January 2, 2018, program can be viewed at www.arnoldit.com/wordpress and on Vimeo at https://vimeo.com/248961405.

Dark Cyber reveals the connection between zero day exploits and Tor de-anonymization. Specialist vendors like Gamma Group FinFisher, Hacking Team, and NSO Group provide technology to law enforcement and intelligence professionals. These software components make it possible to strip away some of the security the Onion Router software bundle implements for Dark Web access. One high profile vendor of these exploits is Zerodium. Dark Cyber reveals the million dollar price tag on new Tor exploits.

Viewers will learn about the new wave of take downs and seizures of Surface Web and Dark Web sites. With more than 20,000 sites affected, would-be scofflaws may be visiting Web sites operated by law enforcement agencies in the UK, the Netherlands, and dozens of other countries.

The program reports that Grams and its sister site Helix have been taken offline. Grams provided a “drug”-centric Dark Web search service for three years until it went dark in mid-December 2017. Helix offered digital currency laundering and mixing services. These also were shuttered. The Grams-Helix technology was offered with an application programming interface (API). The idea was that developers could include Grams and Helix services in third party applications. Dark Cyber reveals that the administrator of these sites and services stepped away from these Dark Web offerings because of the work required to deal with stepped up enforcement and technological change.

Dark Cyber is a weekly video program distributed via YouTube and Vimeo. The program provides information about the Dark Web and about the tools and technologies used to hide, obfuscate, and encrypt a wide range of online activities, products, and services.

You can view the video at www.arnoldit.com/wordpress.

Kenny Toth, January 2, 2018
