Infohazards: Another 2020 Requirement

October 20, 2020

New technologies that become society staples carry risks and require policies to rein in potential dangers. Artificial intelligence is a developing technology, and governing policies have yet to catch up with the emerging tool. Experts in computer science, government, and other controlling organizations need to discuss how to control AI, argues Vanessa Kosoy in the LessWrong blog post “Needed: AI Infohazard Policy.”

Kosoy approaches the need for a controlling AI information policy with the standard science fiction warning argument: “AI risk is that AI is a danger, and therefore research into AI might be dangerous.” It is good to draw caution from science fiction to prevent real-world disaster. Experts must develop a governing body of AI guidelines to determine what learned information should be shared and how to handle results that are not published.

Individuals and single organizations cannot make these decisions alone, even if they have their own governing policies. Governing organizations and people must coordinate their knowledge regarding AI and develop consensus policies to control AI information. Kosoy suggests that any AI policy should consider the following:

• “Some results might have implications that shorten the AI timelines, but are still good to publish since the distribution of outcomes is improved.

• Usually we shouldn’t even start working on something which is in the should-not-be-published category, but sometimes the implications only become clear later, and sometimes dangerous knowledge might still be net positive as long as it’s contained.

• In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself. Therefore, safe AGI is a broad collective effort and we should expect most results to be published. In the endgame, it might become likely for a given group to make it all the way to safe AGI. In this case, incentives for secrecy become stronger.

• The policy should not fail to address extreme situations that we only expect to arise rarely, because those situations might have especially major consequences.”

She continues that any AI information policy should determine the criteria for what information is published, what channels should be consulted to determine publication, and how to handle potentially dangerous information.

These questions are universal for any type of technology and information that has potential hazards. However, specificity of technological policies weeds out any pedantic bickering and sets standards for everyone, individuals and organizations. The problem is getting everyone to agree on the policies.

Whitney Grace, October 20, 2020

DarkCyber for October 20, 2020, Now Available

October 20, 2020

The October 20, 2020 DarkCyber video news program covers five stories. First, secure messaging apps have some vulnerabilities. These can be exploited, according to researchers in Europe. Second, QinetiQ’s most recent cyber report provides some eye-opening information about exploit techniques and methods. Third, a free phishing tool is available on GitHub. With it, a bad actor can automate phishing attacks. Fourth, mobile phones can be remotely activated to work like spy cameras and audio transmitters. The final story explains that swarms of drones can be controlled from a mobile phone and that a new crawling drone can deliver bio-weapons in a stealthy manner. DarkCyber is produced by Stephen E Arnold, author of CyberOSINT and the Dark Web Notebook. You can view the 11-minute program at this link. (The miniature centipede-like drone is a marvel.)

Kenny Toth, October 20, 2020

Gartner: Sweetening Its Data Confections

October 19, 2020

The dead tree edition of the Wall Street Journal (October 19, 2020) ran an interesting story. The ingredients made my mouth water. My interest in technology activated like yeast as well. Imagine the implements a confectioner requires: A spreader, piping nozzles, melt drippers, cookie cutters (templates), and sugar, spice, and everything nice.

The title of the write up is “Reboot. Career Reinvention: A Cordon Bleu Trained Pastry Chef Ditched Desserts to Become a Data Analyst at a Global Advisory Firm.” (Note: This hyperglycemic write up is locked in the cupboard, and one without a subscription must pay.) What was the name of the “global advisory firm”? Answer: Gartner Group, the chefs behind the hype cycle!

The write up states:

But after more than five years in the kitchen, he [Chris Pariso, the cordon bleu pâtissier] realized he wanted to do more with his life than bake cookies and brownies all day long.

Gartner’s human people people spotted the talented brownie expert and slotted him into “assessing cyber security risks” or “digging into concerns of possible fraud or internal waste” or developing “models to forecast the company’s expansion.”

Gartner is apparently a stressful employer. The write up notes:

Habits he learned as a chef, such as working calmly under stressful, time-sensitive situations, are useful in his job… Experience in open kitchens has given me some great interpersonal skills.

And those Gartner reports: Sweeter than ever. Empty calories? Absolutely not. Sugar frosted? Hmm. Good question.

If you are working in food service, consider Gartner, a global advisory firm.

Stephen E Arnold, October 19, 2020

Amazon Twitch: Inappropriate Behavior? Shocking

October 19, 2020

Gamers are stereotypically portrayed as immature, racist, sexist, and antisocial males. There is truth behind this stereotype, because many gamers are immature, racist, sexist, and antisocial males, but it does not speak for the entire community. The problem with this gamer “archetype” is that the industry does not stray far from this image.

The newest gaming company to be called out for inappropriate behavior is video streaming platform Twitch. GamesIndustry.biz has the scoop on Twitch’s poor behavior in the article: “Twitch Staff Call The Company Out On Sexual Assault, Racism, More.”

The Twitch CEO Emmett Shear denounced inappropriate behavior and demanded industry wide change. Despite this supportive bravado, Shear’s company has its own share of poor actions. GamesIndustry.biz interviewed former Twitch employees for the article on the condition they remain anonymous. The stories at Twitch echo many toxic workplace stories, but one of the saddest recollections comes from a former HR representative:

“ ‘I’d seen many people go to HR and HR ultimately would not resolve things in favor of the complainant,’ they said. ‘They weren’t a source of support for employees. If anything, they just worked to minimize the complaining person and their complaint. They were always in favor of and working for the person with the most power.’”

Since Twitch began as Justin.tv, abusive behavior has run rampant. Women were not the only victims; ethnic minorities were frequent targets, as were LGBTQA members. The problem resides in the typical bro-culture atmosphere, where misogyny and racism are deemed okay. Victim blaming is another aspect of Twitch’s toxic workplace, as is the demand to make more money.

Most, if not all, of these incidents were quashed because Twitch did not want to lose face or revenue opportunities. Many of the perpetrators were leaders or held important company roles, so they could get away with anything. The company as a whole is a black mark on the gaming industry, but individual employees demonstrated humanity:

“It should be noted that several people we talked to spoke highly of Twitch staffers helping vulnerable co-workers, streamers, or viewers, but all were seen to be acting as individuals going above and beyond rather than acting at the behest of the company or in their role as Twitch employees.”

Twitch’s company culture might have changed since its beginning, but many of the perpetrators still hold leadership roles.

Things might be changing slowly in Silicon Valley as people demand accountability and better work environments. In the meantime, potential victims please do what you can to stay safe. Twitch is Amazon after all.

Whitney Grace, October 19, 2020

Covid Trackers Are Wheezing in Europe

October 19, 2020

COVID-19 continues to roar across the world. Health professionals and technologists have combined their intellects attempting to provide tools to the public. The Star Tribune explains how Europe wanted to use apps to track the virus: “As Europe Faces 2nd Wave Of Virus, Tracing Apps Lack Impact.”

Europe planned for mobile apps that track where infected COVID-19 individuals are located to be integral to battling the virus. As 2020 nears its end, the apps have failed because of privacy concerns, lack of public interest, and technical problems. The latter is no surprise given the rush job demanded. The apps were supposed to notify people when they had been near infected people.
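Under the hood, most of these apps follow the same exposure-notification pattern: phones broadcast short-lived anonymous identifiers over Bluetooth, record the identifiers they hear, and later check them against identifiers derived from keys published by users who test positive. A minimal sketch of that idea, using a toy hash-based derivation; real protocols such as the Apple/Google framework use different cryptography, and the function names here are illustrative assumptions:

```python
import hashlib
import secrets

def daily_key() -> bytes:
    """Generate a random daily tracing key (kept on the device)."""
    return secrets.token_bytes(16)

def rolling_id(day_key: bytes, interval: int) -> str:
    """Derive a short-lived broadcast identifier from the daily key."""
    return hashlib.sha256(day_key + interval.to_bytes(4, "big")).hexdigest()[:16]

def check_exposure(observed_ids, infected_day_keys, intervals_per_day=144):
    """Re-derive identifiers from published keys of infected users and
    flag any that match identifiers this phone observed nearby."""
    matches = set()
    for key in infected_day_keys:
        for interval in range(intervals_per_day):
            rid = rolling_id(key, interval)
            if rid in observed_ids:
                matches.add(rid)
    return matches
```

Note that no location or name is exchanged in this scheme; a phone learns only that one of the identifiers it observed belongs to someone who later reported an infection.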

Health professionals predicted that 60% of European countries’ populations would download and use the apps, but adoption rates are low. The Finns, however, reacted positively, and one-third of the country downloaded its COVID-19 tracking app. Ironically, Finland’s population resists wearing masks in public.

The apps keep infected people’s identities secret. Their data remain anonymous, and the apps alert others only if they have come in contact with a virus carrier. Whether the information provides any help to medical professionals remains to be seen:

“We might never know for sure, said Stephen Farrell, a computer scientist at Trinity College Dublin who has studied tracing apps. That’s because most apps don’t require contact information from users, without which health authorities can’t follow up. That means it’s hard to assess how many contacts are being picked up only through apps, how their positive test rates compare with the average, and how many people who are being identified anyway are getting tested sooner and how quickly. ‘I’m not aware of any health authority measuring and publishing information about those things, and indeed they are likely hard to measure,’ Farrell said.”

Are these apps actually helpful? Maybe. But they require maintenance and constant updating. They could prevent some spread of the virus, but sticking to the tried and true methods of social distancing, wearing masks, and washing hands works better.

Whitney Grace, October 19, 2020

AI the New Battlefield in Cyberattack and Defense

October 19, 2020

It was inevitable—in the struggle between cybercrime and security, each side constantly strives to be a step ahead of the other. Now, both bad actors and protectors are turning to AI tools. Darktrace’s Max Heinemeyer describes the escalation in “War of the Algorithms: The Next Evolution of Cyber Attacks,” posted at Information Age. He explains:

“In recent years, thousands of organizations have embraced AI to understand what is ‘normal’ for their digital environment and identify behavior that is anomalous and potentially threatening. Many have even entrusted machine algorithms to autonomously interrupt fast-moving attacks. This active, defensive use of AI has changed the role of security teams fundamentally, freeing up humans to focus on higher level tasks. … In what is the attack landscape’s next evolution, hackers are taking advantage of machine learning themselves to deploy malicious algorithms that can adapt, learn, and continuously improve in order to evade detection, signaling the next paradigm shift in the cyber security landscape: AI-powered attacks. We can expect Offensive AI to be used throughout the attack life cycle – be it to use natural language processing to understand written language and to craft contextualized spear-phishing emails at scale or image classification to speed up the exfiltration of sensitive documents once an environment is compromised and the attackers are on the hunt for material they can profit from.”

Forrester recently found (pdf) that nearly 90% of the security pros they surveyed expect AI attacks to become common within the year. Tools already exist that can, for example, assess an organization’s juiciest targets based on their social media presence and then tailor phishing expeditions for the highest chance of success. On the other hand, defensive AI tools track what is normal activity for an organization’s network and work to block suspicious activity as soon as it begins. As each side in this digital arms race works to pull ahead of the other, the battles continue.
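The defensive approach, learning what is normal and flagging deviations, can be illustrated with a toy statistical baseline that marks traffic counts sitting far outside the historical distribution. Real products model many more features than a single count; the class and threshold below are illustrative assumptions, not any vendor’s method:

```python
import statistics

class TrafficBaseline:
    """Learn 'normal' activity from historical counts and flag outliers."""

    def __init__(self, history):
        # Summarize the historical distribution of, say, requests per minute.
        self.mean = statistics.fmean(history)
        self.stdev = statistics.stdev(history)

    def is_anomalous(self, value, threshold=3.0):
        """Flag values more than `threshold` standard deviations from the mean."""
        if self.stdev == 0:
            return value != self.mean
        return abs(value - self.mean) / self.stdev > threshold
```

A spike to 500 requests per minute against a baseline hovering around 100 would be flagged, while ordinary fluctuation would pass; production systems refine this with seasonality, per-host baselines, and learned feature sets.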

Cynthia Murrell, October 19, 2020

IBM Watson: Can AI Have Trouble Finding a True Friend?

October 19, 2020

It appears that IBM’s supercomputer Watson is dealing with loneliness during the global pandemic, because the Daily Mail shares: “Artificial Intelligence Can Detect How Lonely You Are With 94 Percent Accuracy Just By Analyzing Your Speech Patterns.”

Researchers at the UC San Diego School of Medicine studied the speech patterns of older adults when they discussed loneliness. Using AI that included IBM’s Watson, the researchers analyzed how participants spoke, including words, phrases, and silence gaps. They discovered that AI algorithms were almost as accurate as self-reports and questionnaires.

The researchers discovered that lonely people usually take long pauses when discussing loneliness and express more sadness in their responses. The problem with self-reports and questionnaires (also completed by the individuals themselves) is that they are often biased because of the stigma associated with loneliness.

To avoid bias, the researchers used natural language processing designed for quantitative assessment of expressed emotion and sentiment, combined with the usual loneliness diagnostic tools. The project did the following:

“Participants were also interviewed during personal conversations, which were taped and manually transcribed. Transcripts were then examined using natural language processing tools, including IBM’s Watson Natural Language Understanding (WNLU) software, to quantify sentiment and expressed emotions.  WNLU uses deep learning to extract metadata from keywords, categories, sentiment, emotion and syntax. ‘Natural language patterns and machine learning allow us to systematically examine long interviews from many individuals and explore how subtle speech features like emotions may indicate loneliness,’ said first author Varsha Badal at UCSD. ‘Similar emotion analyses by humans would be open to bias, lack consistency and require extensive training to standardize.’”
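As a rough illustration of the kind of cues such a pipeline quantifies, the toy extractor below counts pause markers and sadness-related terms in a transcript. The word list and function name are hypothetical stand-ins, not IBM’s WNLU lexicon or API:

```python
def speech_features(transcript, pause_marks=("...",), sad_words=None):
    """Toy feature extractor: count pause markers and sadness-related
    words in a transcript (hypothetical word list, not WNLU's)."""
    sad_words = sad_words or {"sad", "alone", "lonely", "empty", "miss"}
    tokens = transcript.lower().split()
    # Pauses in the audio often survive transcription as ellipses.
    pauses = sum(transcript.count(m) for m in pause_marks)
    # Strip trailing punctuation before matching against the word list.
    sadness = sum(1 for t in tokens if t.strip(".,!?") in sad_words)
    return {"pauses": pauses, "sadness_terms": sadness, "tokens": len(tokens)}
```

A real pipeline would feed hundreds of such features, plus learned embeddings, into a classifier; the point is only that pause and sentiment cues become numbers a model can weigh.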

The AI predicted self-acknowledged loneliness with 94% accuracy and quantitative loneliness with 76% accuracy. In the future, mental health professionals may use AI algorithms with natural language processing to diagnose and track loneliness. The approach would be more accurate without the self-bias and could lead to better treatment.

Whitney Grace, October 18, 2020

Another Crazy Enterprise Search Report

October 18, 2020

“Enterprise Search Market Investment Analysis | Dassault Systemes, Oracle, HP Autonomy, Expert System Inc.” may be a knockout report, but its presentation of the company’s nuanced understanding is like hitting an egg with a feather. The effort appears to be there, but the result is an intact egg.

You can learn about this omelet of a report at this link. The publisher is PRnewsleader, which seems to be one of the off-brand, SEO-centric content outputters.

The first thing I noticed about this report was the list of vendors in the document; to wit:

  • Coveo Corp.
  • Dassault Systèmes
  • Esker Software
  • Expert System
  • HP Autonomy
  • IBM Corp.
  • Lucidworks
  • Marklogic
  • Microsoft
  • Oracle
  • Perceptive Software
  • Polyspot and Sinequa
  • SAP

What jumped out at me was the inclusion of Polyspot and Sinequa. Polyspot was acquired years ago by an outfit called oppScience. The company offers Bee4Sense and lists information retrieval as a solution. As far as I know, oppScience is a company based in Paris, not on a street once known for fish sales. Sinequa is a separate company. True, it once positioned itself as an enterprise search developer. That core capability has since been wrapped in buzzwordery; for example, “insight platform.” Therefore, listing two separate companies as one illustrates a minor slip-up.

I also noticed the inclusion of Esker Software. This company is a process automation outfit, and it says that it has an artificial intelligence capability. (Doesn’t every company today?) Esker is into the cloud, and its search technology is a bullet point, not the white paper/journal article/rah rah approach used by Lucidworks.

And what about Elasticsearch? What about Algolia (former Dassault Exalead DNA I heard)? What about Voyager Search? What about Maxxcat? And there are other vendors.

What’s amusing is that the authors of this report are able to set forth:

forecasts for Enterprise Search investments till 2029.

Okay, that’s almost a decade in the Era of the Rona. I am not sure what’s going on tomorrow. Predicting search in 2029 is Snow Crash territory. But I am confident the authors of this report are intrepid researchers who just happened to overlook the Polyspot Sinequa mistake. What else has been overlooked?

Stephen E Arnold, October 18, 2020

Journalists Do More Than Report: The Covid Determination

October 17, 2020

One of the DarkCyber research team alerted me to “Facebook Greatest Source of Covid-19 Disinformation, Journalists Say.” That’s the factoid, according to the “real” journalists at a British newspaper.

The main point of the write up may be an interesting way to send this message, “Hey, we are not to blame for erroneous Rona info.” I hear the message.

The write up states:

The majority of journalists covering the pandemic say Facebook is the biggest spreader of disinformation, outstripping elected officials who are also a top source, according to an international survey of journalism and Covid-19.

The survey prompted another Guardian article in August 2020.

Let’s assume Facebook and the other social media high pressure data hoses are responsible for bad, weaponized, or just incorrect Rona info. Furthermore, let’s accept these assertions:

Journalism is one of the worst affected industries during the pandemic as hundreds of jobs have been lost and outlets closed in Australia alone. Ninety per cent of journalists surveyed said their media company had implemented austerity measures including job losses, salary cuts and outlet closures.

The impression the write up creates in the malleable Play-doh of my mind is that journalists are no longer reporting the news. “Real” journalists are making the news, and it is about time!

The sample probably reflects the respondents’ reactions to the questions on the survey, which remain unknown to me. The survey itself may have been structured as a dark pattern. What better way to explain that bad things are happening to “real” journalists?

What’s interesting is that “real” journalists know that Facebook and other social media systems are bad.

One question: How long has it taken “real” journalists to figure out the harsh realities of digital streams of users unfettered by internal or external constraints?

Maybe the news is: “It is too late.” Maybe the working hypothesis is that “better late than never”?

Stephen E Arnold, October 17, 2020

Tickeron: The Commercial System Which Reveals What Some Intel Professionals Have Relied on for Years

October 16, 2020

Are you curious about the capabilities of intelware systems developed by specialized services firms? You can get a good idea of the type of information available to an authorized user:

  • Without doing much more than plugging in an entity with a name
  • Without running ad hoc queries like one does on free Web search systems unless there is a specific reason to move beyond the provided output
  • Without reading a bunch of stuff and trying to figure out what’s reliable and what’s made up by a human or a text robot
  • Without having to spend time decoding a table of numbers, a crazy looking chart, or figuring out weird colored blobs which represent significant correlations.

Sound like magic?

Nope, it is the application of pattern matching and established statistical methods to streams of data.
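As a toy illustration of pattern matching over a price stream, consider the classic moving-average crossover rule: buy when a short-term average crosses above a long-term one, sell on the reverse cross. This is a sketch of the genre, not Tickeron’s actual algorithm; the window sizes are arbitrary assumptions:

```python
def sma(values, window):
    """Simple moving average of the trailing `window` values."""
    return sum(values[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Emit 'buy' when the short-term average crosses above the long-term
    average, 'sell' on the reverse cross, otherwise 'hold'."""
    if len(prices) < long + 1:
        return "hold"
    # Compare the averages before and after the latest price arrives.
    prev_short, prev_long = sma(prices[:-1], short), sma(prices[:-1], long)
    cur_short, cur_long = sma(prices, short), sma(prices, long)
    if prev_short <= prev_long and cur_short > cur_long:
        return "buy"
    if prev_short >= prev_long and cur_short < cur_long:
        return "sell"
    return "hold"
```

Commercial systems layer many such signals, backtested odds, and machine-learned patterns on top, but the underlying mechanics are established statistics applied to a data stream, as noted above.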

The system, tailored to Robinhood types and small brokerages, has been assembled by Tickeron from original software, some middleware, and some acquired technology. Data are ingested, and outputs indicate what to buy or sell or, as a country western star crooned, when to “hold ‘em.”

A rah rah review appeared in The Stock Dork. “Tickeron Review: An AI-Powered Trading Platform That’s Worth the Hype” provides a reasonably good overview of the system. If you want to check out the system, navigate to Tickeron’s Web site.

Here’s an example of a “card,” the basic unit of information output from the system:

[image: a sample Tickeron card]

The key elements are:

  • Icon to signal “think about buying” the stock
  • A chart with red and green cues
  • A hot link to text
  • A game angle with the “odds” link
  • A “more” link
  • Hashtags (just like Twitter).

Now imagine this type of data presented to an intel officer monitoring a person of interest. Sound useful? The capability has been available for more than a decade. It is interesting to see this type of intelware find its way to those who want to invest like the wizards at the former Bear Stearns (remember that company, the bridge players, the implosion?).

DarkCyber thinks that the high-priced Wall Street information providers may wonder about the $15-a-month fee for the Tickeron service.

Keep in mind that predictions, if right, can allow you to buy an exotic car, an island, and a nice house in a Covid-free location. If incorrect, there’s van life.

The good news is that the functionality of intelware is finally becoming more widely available.

Stephen E Arnold, October 16, 2020
