AI Visual Interpretations Bridge The Gap
August 21, 2018
By now, we are all well versed in the amazing advances in AI and machine learning. From increasing travel efficiency to managing your household, these dynamic programs are sweeping the globe. However, when a recent chatbot began looking at images and creating new works of art, it sparked some debate. We discovered more from a recent Alphr story, “Microsoft’s AI Can Create Decent Chinese Poetry from Just a Few Images.”
According to the story:
“Conversation isn’t all it’s able to do, with the AI creating better poetry than I managed in my university creative writing module. And, while Xiaoice isn’t the first AI to create poetry – just look at this one recreating Donald Trump’s famously erudite speeches – it is the first that’s able to interpret images as poetry.”
AI processing of visual data will move beyond converting images into poetry; the technology has already shown success in diagnosing skin cancer, another visual application. Still, where AI is concerned, hyperbole is often more common than tangible benefit.
Patrick Roland, August 21, 2018
Australia Considers Adding Fair Use Provisions to Copyright Laws
August 21, 2018
Australia’s existing copyright laws are too strict to allow for AI innovation, according to two of the most prominent companies in the field. Computerworld reports, “New Copyright Rules Needed for AI Era Argue Google, Microsoft.” The news comes as that nation prepares to reform those laws, which currently lack the sort of “fair use” provisions found in other countries. In March, the government issued a discussion paper asking for input on incorporating such provisions. Reporter Rohan Pearce tells us:
“The ‘extent to which AI will be able to be developed in Australia is in doubt’ because of the nation’s copyright regime, according to Google’s response to the discussion paper. AI depends ‘not only on having large sets of data and information to analyse, but also on making copies of those data sets as part of the process of training the algorithms,’ Google argued. ‘In many cases these data sets include material protected by copyright. This can pose significant barriers to the development of AI in countries like Australia which have only inflexible and prescriptive exceptions in their copyright laws.’ …
We noted:
“Microsoft in its submission argued that Australia should introduce an exception explicitly permitting text and data mining (TDM) of copyright works. The tech company said that it would support either an express exception for text and data mining or a broader fair use provision. ‘There is very little connection between copyright and TDM, just as copyright has never controlled how people read books and do research,’ Microsoft argued. ‘With TDM, it may be necessary to make copies of information to train the artificial intelligence and allow it to analyze this material to look for patterns, relationships, and insights. These copies are not read by humans, nor are they consumed or redistributed for their creative expression, so they don’t substitute for the original articles or subscriptions.’”
That is an interesting point. Pearce notes that an overhaul is being proposed as a way to boost Australia’s digital economy, currently projected to be worth $139 billion by 2020. Both the country’s Productivity Commission and the Australian Law Reform Commission support the introduction of fair use provisions.
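Microsoft’s point about copies that are “not read by humans” can be illustrated with a minimal text-and-data-mining sketch. The corpus below is hypothetical, and a real TDM pipeline would operate at vastly larger scale; this only shows how the analysis output is statistical rather than a redistribution of the expressive text:

```python
from collections import Counter
from itertools import combinations

# A hypothetical corpus; in practice these would be copies of
# copyrighted articles held transiently for analysis.
corpus = [
    "machine learning needs large data sets",
    "copyright law governs copies of data sets",
    "machine learning finds patterns in data",
]

# Text and data mining: count which word pairs co-occur within a
# document, a pattern no human "reads" in the ordinary sense.
pair_counts = Counter()
for doc in corpus:
    words = sorted(set(doc.split()))
    for a, b in combinations(words, 2):
        pair_counts[(a, b)] += 1

# The result is a statistic about the corpus, not the text itself.
print(pair_counts[("data", "sets")])  # prints 2
```

The copies exist only long enough to derive the counts, which is the distinction Microsoft’s submission draws.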
Cynthia Murrell, August 21, 2018
DarkCyber for August 21, 2018 Now Available
August 21, 2018
The DarkCyber video news program for August 21, 2018, is now available. You can view the nine-minute show at www.arnoldit.com/wordpress or on Vimeo at this link.
This week’s program reports about methods for hacking cryptocurrency, hijacking mobile phones via SIM swapping, TSMC hacked with an Eternal Blue variant, and leaked information about WikiLeaks.
The first story runs down more than nine ways to commit cybercrime in order to steal digital currency. A student assembled these data and published them on a personal page on the Medium information service. Prior to this step-by-step explanation, information about exploiting blockchain systems for criminal purposes was difficult to find. The DarkCyber video includes a link to the online version of this information.
The second story reviews the mobile phone hacking method called SIM swapping. This exploit makes it possible for a bad actor to take control of a mobile phone and then transfer digital currency from the phone owner’s account to the bad actor’s account. More “how to” explanations are finding their way into the Surface Web, a trend which has been gaining momentum in the last six months.
The third story reviews how a variant of the Eternal Blue exploit compromised the Taiwan Semiconductor Manufacturing Company, knocking three of the company’s production facilities offline. Eternal Blue is the software which enables a number of ransomware attacks; the code was allegedly developed by a government agency. The DarkCyber video provides links to repositories of some software developed by the US government. Stephen E Arnold, author of Dark Web Notebook, said: “Easier and easier access to specific methods for committing cybercrime makes it simple to attack individuals and organizations. On one hand, greater transparency may help some people take steps to protect their data. On the other hand, the actionable information may encourage individuals to try their hand at crime in order to obtain easy money. Once how-to information is available to hackers, the number of attacks, exploits, and crimes is likely to rise.”
The final story reports that WikiLeaks itself has had some of its messages leaked. These messages provide insight into the topics which capture WikiLeaks’ interest and reveal information about some of the sources of support the organization enjoys. The DarkCyber video provides a link to this collection of WikiLeaks messages.
Stephen E Arnold will be lecturing in Washington, DC, the week of September 6, 2018. If you want to meet or speak with him, please contact him via email: benkent2020 at yahoo dot com.
Kenny Toth, August 21, 2018
Google and Its Management Challenge: Not a Bug, a Feature
August 20, 2018
I read “China and the Moral Dilemma at Google.” The write up does a reasonable job of explaining why Google seems to be struggling with staff management. I highlighted several observations made in the article.
First, a statement attributed to Human Rights Watch senior internet researcher Cynthia Wong:
“Google wants to organize the world’s information; Facebook wants to connect everyone,” Wong said. “I think the engineers really do believe in those missions, and that accounts for some of the difference in how Silicon Valley reacts than, say, the oil sector.”
Second, I circled:
“Google has led with this strong culture, and now has its own employees calling it on hypocrisy,” attributed to Ann Skeet, senior director of leadership ethics at Santa Clara University.
Net net: Millennials believe they have the right to know more about their work and how that work will be used. Google and other technology companies have to adapt to workers who want to decide whether or not they work on certain projects. The management problem is baked into the organization, it seems.
Stephen E Arnold, August 20, 2018
A New Cyber Angle: Differential Traceability
August 20, 2018
Let’s start the week with a bit of jargon: differential traceability.
How do you separate the bad eggs from the good online? It’s a question we’ve been racking our brains over ever since the first email was sent. However, the stakes have grown considerably higher since those innocent days. Recently, some very bright minds have begun digging deeply into the idea of traceability as a way to track down internet offenders, and it is gaining traction, as we discovered from a Communications of the ACM editorial titled “Traceability.”
According to the story, it all comes down to differential traceability:
“The ability to trace bad actors to bring them to justice seems to me an important goal in a civilized society. The tension with privacy protection leads to the idea that only under appropriate conditions can privacy be violated. By way of example, consider license plates on cars. They are usually arbitrary identifiers and special authority is needed to match them with the car owners.”
Giving everyone a tag, much like a car, for Internet traffic is an interesting idea. However, much like real license plates, the only ones who will follow the rules will be the ones who aren’t trying to break them.
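The license-plate analogy can be sketched as a pseudonymous-identifier scheme: traffic carries an arbitrary tag, and only a separate, access-controlled escrow can map the tag back to an identity. This is an illustrative sketch, not an implementation of any proposed system; the class name, the identity string, and the `warrant` flag are all assumptions for the example:

```python
import secrets

class TraceabilityEscrow:
    """Holds the pseudonym -> identity mapping: the 'special
    authority' needed to match a plate with its owner."""

    def __init__(self):
        self._mapping = {}

    def issue_tag(self, identity: str) -> str:
        # An arbitrary identifier, like a license plate number.
        tag = secrets.token_hex(4)
        self._mapping[tag] = identity
        return tag

    def resolve(self, tag: str, warrant: bool) -> str:
        # Privacy yields only under appropriate conditions,
        # modeled here as a hypothetical warrant flag.
        if not warrant:
            raise PermissionError("authority required to unmask a tag")
        return self._mapping[tag]

escrow = TraceabilityEscrow()
tag = escrow.issue_tag("alice@example.com")
# Ordinary observers see only the tag; unmasking needs authority.
print(escrow.resolve(tag, warrant=True))  # prints alice@example.com
```

The design point is that traceability and privacy live in separate components, so routine traffic never exposes identity.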
This concept meshes nicely with Australia’s proposed legislation to attach fines to specific requests for companies to work around encryption. Cooperate and there is no fine. Fail to cooperate, and the company could be fined millions per incident.
Differential? A new concept.
Patrick Roland, August 20, 2018
More Administrative Action from Facebook
August 20, 2018
Rarely do we get a report from the front lines of the war on social spying and fake news. However, recently a story appeared that showcased Facebook’s heavy-handed tactics up close and personal. The article appeared in Gizmodo, titled: “Facebook Wanted to Kill This Investigative Tool.”
The story describes how a designer at Gizmodo created a program that collected users’ own Facebook data, in order to determine what the company used its data farms for. It did not go well: the social media giant attempted to gain access to the offending account almost instantly. The article states:
“We argued that we weren’t seeking access to users’ accounts or collecting any information from them; we had just given users a tool to log into their own accounts on their own behalf, to collect information they wanted collected, which was then stored on their own computers. Facebook disagreed and escalated the conversation to their head of policy for Facebook’s Platform…”
News such as this has been slowly leaking its way into the mainstream. In short, Facebook has been attempting to crack down on offenders, but in the process might be going a little too far—this is not unlike overcorrecting a car while skidding on ice. Wall Street is more than a little worried they won’t pull out of this wreck, but some experts say it’s all just growing pains.
We think this could be another example of management decisions fueled by high school science club thinking.
Patrick Roland, August 20, 2018
China Charts a Course in Cyber Space
August 19, 2018
I am not much of a political thinker. But even with the minimal knowledge I possess about world affairs, it seems to me that China has made its cyber technology objective clear. Of course, I am assuming that the information in “When China Rules the Web” is accurate. You will have to judge for yourself.
The write up states:
Chinese President Xi Jinping has outlined his plans to turn China into a “cyber-superpower.”
My reaction to this statement was to ask this question, “When US companies make changes in order to sell to China, does that mean those companies are helping to make the Chinese cyber space vision a reality?”
There are other questions swirling through my mind, and I need time to sort them out. Companies define the US to a large extent. If the companies go one way, will other components of the US follow?
Worth considering. A stated policy that is being implemented is different from a less purposeful approach.
Stephen E Arnold, August 19, 2018
High School Science Club Management: In the Dark with Tweets
August 18, 2018
I read what I found to be a somewhat bittersweet description of today’s Google management trajectory. The article is worth your time and has the title “The Tweets That Stopped Google in Its Tracks.” I think this is the title. That in itself throws some water on the idea that a reader should know a title, the source, the date, and the author, in that order. No more. Now it seems to be “provide your email address,” the title in smallish letters, the date, the title of the online publication in giant letters, and then the author. Yeah, disruptive.
The write up, despite its free form approach to the MLA and University of Chicago style suggestions, makes this point:
employees noticed that executives’ words were being transcribed in real time by the New York Times’ Kate Conger, who had a source inside [the Google company meeting].
The Google approach to this issue of leaking was interesting and definitely by the high school science club management handbook.
A Googler allegedly said:
#^!* you.
Outstanding.
One senior Googler explained that Google’s creating a search engine custom crafted to meet Chinese guidelines was “exploratory.”
Allegedly one of the founders of Google was not in the loop on the Chinese search system designed to get Google a piece of the large Chinese online market. I circled this statement:
Brin said he had only recently become aware of Dragonfly. On one level, this would seem to strain credulity: Brin’s upbringing in the Soviet Union shaped his views on censorship and informed the company’s decision to exit the Chinese market in 2010. Launching an initiative to re-enter China without Brin’s express approval would seem to be a firing offense, even if Google is now a subsidiary of Alphabet and operating with less direct oversight. (Counterpoint: this is Sergey Brin we’re talking about! One of the world’s most eccentric billionaires. Yesterday he described Dragonfly as a “kerfuffle.” If you told me Brin had recently delegated all of his decision-making authority to a stack of pancakes, I would believe it.)
Let’s assume that these statements are accurate.
What I took from the information provided in the Get Revue write up was:
- The notion of appropriate behavior in a company meeting is different from what was expected of me when I worked at Halliburton Nuclear and Booz, Allen & Hamilton. Although my colleagues were smart, maybe Google quality, discourse was civilized.
- I cannot recall a time when I worked at these firms when information from a confidential company meeting was disseminated outside of the company as quickly as humanly possible. Neither Halliburton nor Booz, Allen was a utopia, but there was an understanding of what was acceptable and what was not with regard to company information.
- I cannot recall a time when my boss at Halliburton or my boss at Booz, Allen was “surprised” by a major activity kept from him. Both of my superiors made it part of their job to know what was going on via established communication meetings, formal and informal meetings, and by wandering around and asking people what occupied their attention at that time. Mr. Brin may be checking out or is in the process of being checked out.
I will have to hunt around for the revised edition of the High School Science Club Management Handbook. I am definitely out of touch with how business works when a company pays an individual to perform work identified by the company as important. Also, who is in charge? Maybe employees are? Maybe management has segregated managers into those in the know and those outside the fence?
Worth monitoring.
Stephen E Arnold, August 18, 2018
Challenges to High School Science Club Management Methods
August 17, 2018
High school science club management methods involve individuals who often perceive other students as less capable. The result is an “I know better” mindset. When applied on a canvas somewhat larger than a public high school, the consequences are often fascinating.
I am confident that high school science club management methods are indeed effective. But it is useful to look at two recent examples which suggest that the confidence of the deciders may be greater than the benefit to the non-deciders.
The first example concerns Google. The company has had some employee pushback about its work on US government projects. I learned about this when I read “Google Employees Protest Secret Work on Censored Search Engine for China.” The newspaper of record, at least around 42nd Street and Park Avenue, said:
Hundreds of Google employees, upset at the company’s decision to secretly build a censored version of its search engine for China, have signed a letter demanding more transparency to understand the ethical consequences of their work. In the letter, which was obtained by The New York Times, employees wrote that the project and Google’s apparent willingness to abide by China’s censorship requirements “raise urgent moral and ethical issues.” They added, “Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment.”
High school management methods have created an interesting workplace problem: Employees want to pick and choose what the company does to generate revenue. Publicly traded companies have to generate revenue and a profit.
How will Google’s management deal with the apparent desire of senior management to make revenue headway in China as its employees appear to want to tell management what’s okay and what’s not okay? I assume that high school science club management methods will rise to this challenge.
The second example is provided by the article “Twitter Company Email Addresses Why It’s #BreakingMyTwitter.” Twitter management is making decisions which seem to illustrate the power of “I know better than you” what’s an appropriate course of action. Twitter has made unilateral changes which appear to have put developers and users in a sticky patch of asphalt. Plus, management has taken an oddly parental approach to the Alex Jones content problem.
I learned from the article:
It’s hard to be a fan of Twitter right now. The company is sticking up for conspiracy theorist Alex Jones, when nearly all other platforms have given him the boot, it’s overrun with bots, and now it’s breaking users’ favorite third-party Twitter clients like Tweetbot and Twitterific by shutting off APIs these apps relied on. Worse still, is that Twitter isn’t taking full responsibility for its decisions.
My takeaway is that high school management methods are more interesting than the dry and dusty notions of Peter Drucker or the old school consultants at the once untarnished blue chip consulting firms like McKinsey & Company and Booz, Allen type operations.
Business school curricula may need an update.
Stephen E Arnold, August 17, 2018
How Long Is the Artificial Intelligence Leash?
August 17, 2018
The merger of AI technology and facial recognition software has been on the minds of the industry’s brightest thinkers lately. With developments coming at a furious clip, it seems as though there is no shortage of benefits to this amazing prospect. We learned about just how seamless this software is getting after reading an Inquirer story, “IBM’s Watson AI Used to Develop Multi-Faced Tracking Algorithm.”
According to the piece:
“IBM Watson researcher Chung-Ching Lin led a team of scientists to develop the technology, using a method to spot different individuals in a video sequence…. The system is also able to recognize if people leave and then re-enter the video, even if they look very different.”
Seems positive, doesn’t it? The idea of facial recognition software sweeping across a crowd and pulling out every individual in the sequence has obvious appeal. However, it could become an issue if used by those with interesting intentions. For example, in some countries, excluding the US, law enforcement uses these systems for dragnet surveillance.
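The re-entry capability the researchers describe is commonly built on face embeddings: each detected face becomes a numeric vector, and a new detection is matched to a known person when its vector is close enough to one seen before. The vectors, threshold, and labels below are invented for illustration; IBM’s actual method is not public in this detail:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Gallery of embeddings for people already seen in the video (made up).
gallery = {"person_1": [0.9, 0.1, 0.3], "person_2": [0.1, 0.8, 0.5]}
THRESHOLD = 0.95  # similarity needed to declare a re-identification

def identify(embedding, gallery, threshold=THRESHOLD):
    """Return the best-matching known identity, or register a new
    label when no known embedding is similar enough."""
    best_id, best_sim = None, -1.0
    for pid, ref in gallery.items():
        sim = cosine(embedding, ref)
        if sim > best_sim:
            best_id, best_sim = pid, sim
    if best_sim >= threshold:
        return best_id  # person re-entering the frame
    new_id = f"person_{len(gallery) + 1}"
    gallery[new_id] = embedding
    return new_id

# A detection close to person_1's embedding is re-identified.
print(identify([0.88, 0.12, 0.3], gallery))  # prints person_1
```

Because matching is done in embedding space rather than on raw pixels, a person who leaves and re-enters looking somewhat different can still be linked, which is precisely what makes the technique double-edged.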
Will governments take steps to keep AI on a shorter leash? Some countries will use a short leash because their facial recognition systems are the equivalent of a dog with a bite.
Patrick Roland, August 17, 2018