Fragmented Data: Still a Problem?
January 28, 2019
Digital transitions are a major shift for organizations. The shift brings new technology and better ways to serve clients, but it also brings massive amounts of data. Every organization with a successful digital implementation relies on data. Too much data, however, can hinder an organization’s performance. IT Pro Portal explains why a phenomenon called mass data fragmentation is a major issue in the article “What Is Mass Data Fragmentation, and Why Are IT Leaders So Worried About It?”
The biggest question is: what exactly is mass data fragmentation? I learned:
“We believe one of the major culprits is a phenomenon called mass data fragmentation. This is essentially just a technical way of saying, ’data that is siloed, scattered and copied all over the place’ leading to an incomplete view of the data and an inability to extract real value from it. Most of the data in question is what’s called secondary data: data sets used for backups, archives, object stores, file shares, test and development, and analytics. Secondary data makes up the vast majority of an organization’s data (approximately 80 per cent).”
The article compares secondary data to an iceberg: most of it is hidden beneath the surface. That poor visibility leads to compliance and vulnerability risks; in other words, security issues that put the entire organization at risk. Most organizations, however, view their secondary data as a storage bill, a compliance risk (at least that much is recognized), and a giant headache.
When organizations were surveyed about the amount of secondary data they hold, it turned out that they had multiple copies of the same data spread across cloud and on-premises locations. IT teams are expected to manage the secondary data in all of these locations, but without the right tools and technology the task is unending, unmanageable, and the root of more problems.
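To make the problem concrete, here is a minimal sketch, not from the article, of how an IT team might begin to quantify the duplication: hash file contents across several storage locations and count the copies. The mount points below are hypothetical stand-ins for cloud and on-premises stores.

```python
# Minimal duplicate-file survey across several storage locations.
# The paths are hypothetical; files are read whole for simplicity.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(roots):
    """Group files by content hash and keep only hashes seen in more than one place."""
    copies = defaultdict(list)
    for root in roots:
        for path in Path(root).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                copies[digest].append(str(path))
    return {h: paths for h, paths in copies.items() if len(paths) > 1}

if __name__ == "__main__":
    duplicates = find_duplicates(["/mnt/backup", "/mnt/archive", "/mnt/fileshare"])
    for digest, paths in duplicates.items():
        print(f"{len(paths)} copies: {paths}")
```

Even a crude census like this tends to show how much of the secondary data estate is the same bytes stored again and again.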
If organizations managed their mass data fragmentation efficiently, they would improve their bottom line, reduce costs, and reduce security risks. The more unsecured access points there are to sensitive data, the greater the risk of hacking and stolen information.
Whitney Grace, January 28, 2019
Smart Software: Maybe Not What It Seems
January 27, 2019
Fast computers, memory, and bandwidth can make stupid software look smart. That’s one takeaway from Big Think’s AI debunker “Why A.I. Is a Big Fat Lie.” Marketers at the likes of IBM, Palantir Technologies, and similar companies are likely to take an opposing view. These firms’ software is magical, reduces the time required to make sense of information, and delivers the “fix” to the “find, fix, and finish” crowd.
Among the weak spots in the AI defenders’ suit of armor are:
- AI as a buzzword is “BS.” I assume this acronym does not mean Beyond Search.
- Machine learning is one thing, but it is not autonomous. Humans are needed.
- AI won’t terminate me.
The article tackles talking computers and fancy concepts like neural nets.
I learned:
There’s literally no meaningful definition whatsoever. AI poses as a field, but it’s actually just a fanciful brand. As a supposed field, AI has many competing definitions, most of which just boil down to “smart computer.” I must warn you, do not look up “self-referential” in the dictionary. You’ll get stuck in an infinite loop.
The problem is that venture capitalists desperately want a next big thing, lots of money, and opportunities to give talks at Davos. Therefore, smart software is, by golly, going down the bullet train’s rails.
The entrepreneurs who often believe that their innovation has cracked the AI problem have to tell the world. Enter marketers, PR people, biz dev types with actual suits or sport jackets. These folks cheer for the smart software team.
Finally, there are the overwhelmed, confused, and panicked software procurement teams who have to find a way to cut costs and improve efficiency, yada yada yada. The objective is to acquire something new, study it, realign, and repeat the process. Ah, complex smart software. A thing of beauty, right?
Take a look at this Big Think article. Interesting stuff.
Stephen E Arnold, January 27, 2019
Cyber Saturday, January 26, 2019
January 26, 2019
News from the world of government-centric information makes headlines. Usually one or two stories a week make it into the trade journals or onto the talking-head TV shows.
This morning was an exception.
If you are a follower of cloak-and-dagger, cat-and-mouse style adventures, you may be interested in these stories.
Kremlin Secrets Maybe?
DDoSecrets (short for Distributed Denial of Secrets) points to gigabytes of Kremlin-related data. You can find the links at this tweet for now. Once the data are taken down, you may have to do your own sleuthing. You will need to be wise in the ways of Tor, however.
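If you do go sleuthing, the usual approach is to route requests through a local Tor client. Here is a minimal sketch, not from the tweet or any article, assuming Tor is running on its default SOCKS port 9050 and that the requests library is installed with SOCKS support (pip install requests[socks]); the onion address is a placeholder.

```python
# Fetch a page through a locally running Tor client's SOCKS proxy.
# "socks5h" makes the proxy resolve hostnames, which .onion addresses require.
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"  # Tor Browser listens on 9150 instead

def fetch_over_tor(url: str) -> str:
    response = requests.get(
        url,
        proxies={"http": TOR_PROXY, "https": TOR_PROXY},
        timeout=60,
    )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # Placeholder address; substitute an onion service you are permitted to access.
    print(fetch_over_tor("http://exampleonionaddress.onion/")[:200])
```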
Facebook and Message Encryption
Worried about your Facebook Messenger and Instagram posts being viewed by someone other than the recipient? As with WhatsApp, the company will be rolling out end-to-end encryption before the end of 2019. Will this move make government authorities gathering information for an investigation happy? Will more countries adopt Australia-style backdoor regulations? This is an important development. Is Facebook sufficiently organized to make it happen? Details appear in the New York Times’ story “Zuckerberg Plans to Integrate WhatsApp, Instagram and Facebook Messenger,” which may also be paywalled.
Journalists Targeted
Writing real news, whether behind a paywall or not, may be risky. According to the Associated Press (an outfit which frightens me when I even consider quoting a sentence), some of the people at Citizen Lab have been under pressure as a result of their reporting. The subject? NSO, a cyber security firm, and the Khashoggi matter. Navigate to this link.
Better Filtering
Some may call Google’s YouTube recommendations censorship. I am not sure what to call Google’s actions. The company is a bit of a waffler on most things except selling online advertisements and chastising me because I disabled Google Play on one of my Android test mobile phones. According to the Guardian, YouTube will back off from suggesting conspiracy theory videos. What’s a conspiracy theory video? Good question, which Google assumes it can answer.
From my point of view, Dark cyber has become mainstream. Interesting.
Stephen E Arnold, January 26, 2019
Microsoft in China: Bing Back
January 25, 2019
Gone. Now back. For now.
I read “Microsoft’s Bing Accessible in China after Hours of Outages.” The source is the ever reliable, real news outfit Bloomberg. Yep, the group which runs the hardware compromise stories without sources.
Anyway I learned:
Posting on one of China’s biggest social networks, Weibo, multiple users commented that “Bing is back” and “Bing returns to normal.” Bloomberg was able to independently verify that access to the search engine in the country was once again possible.
Is the Bing system comprehensive?
Yeah, about that.
Stephen E Arnold, January 25, 2019
Bang. Bang. Ouch. We Shot Ourselves in the Feet
January 25, 2019
Silicon Valley and its many offshoots are comparable to the Wild West, says the Mashable article “The Real Wild West Actually Had A Lot In Common With The Tech Industry.” Senator Mark Warner made the comparison between the Wild West and the tech industry due to unregulated information, cyber attacks, political bias, online piracy, and other issues in the digital world.
The Wild West went from an unregulated outback to a controlled and lawful part of the United States, but how was that accomplished? Stanford University professor Richard White, an expert in the history of the American West as well as the history of capitalism, said that regulatory changes for the tech industry are good, but that Warner may have missed the mark with the comparison. White said:
“‘The West can serve as a cautionary tale, explaining why regulation is necessary,’ Prof. White told Mashable. ‘But also why you’d better keep a very close eye on the people who are actually shaping these regulations, and who they really benefit.’”
We also noted:
“The Westworld-esque ‘Wild West’ as we imagine it never existed. Instead, Professor White explained, the true story of taming the West tells the tale of regulating the groundbreaking technology of that time: the railroads. So reining in America’s Wild West in the mid-1800s had more to do with regulating runaway tech corporations than it did with capturing bandits. And the achievements, and mistakes, of that period in our country’s history, can teach us about what to expect from our technological and political moment today.”
According to White, the Wild West was not about gunslinging cowboys, saloons, or wrangling broncos. The true Wild West was racism against Native Americans and Mexicans, followed by unregulated railroad companies. The railroad corporations had more capital than the state governments and could do practically anything they wanted, including exploiting their customers. The people in the West wanted and needed the railroads, but they were tired of being exploited. The US government then stepped in and regulated the railroads. Tech companies are now facing regulation, and they do not like it.
The hope is that consumers will come out on top, tech companies will provide top services, and exploitation will stop.
Whitney Grace, January 25, 2019
Facebook: Will Its Artificial Intelligence Understand France, Russia, and the EU?
January 25, 2019
According to one prominent expert, Facebook has gone all in on AI. A write-up at Analytics India Magazine reports, “Yann LeCun Says Facebook is ‘Dust’ Without Deep Learning, and No One is Disagreeing.” Writer Abhijeet Katte reports on comments LeCun made in a recent CNN interview. Katte writes:
“The French AI expert who played a pivotal role in setting up Facebook’s lab in Paris shared in an interview to CNN, ‘If you take the deep learning out of Facebook today, Facebook’s dust. It’s entirely built around it now.’ The statement typifies the current stand on technology by Mark Zuckerberg led Menlo Park giant which has been embattled over its role in US elections. Even though statement has been largely dubbed controversial, LeCun summed up Facebook’s transformation around AI and ML succinctly. Over the last three to four years, Facebook has been slowly but surely transforming its business around intelligent systems technology. The change can be seen in features such as posts, translations and newsfeed algorithms which is the core of the social network platform. Facebook applied deep learning to combat hate speech and misinformation in countries like Myanmar. The social media giant was criticized when the platform was said to have fueled ethnic violence against the Rohingya population. The company also said AI application created by Facebook is now capable of flagging 52 percent of all content it gets rid of in Myanmar before it is reported by users.”
The article goes on to list several ways Facebook has incorporated deep learning—FB Learner Flow, the “backbone” of the company’s AI; the convolutional neural network system Building Perception; and text-comprehension-engine DeepText, which is said to go beyond traditional NLP and can work on multiple languages. Though LeCun’s observation seems to have stirred some controversy, would it be so surprising for Facebook to become almost entirely dependent on AI tech?
Facebook may have to be dependent upon human attorneys.
Cynthia Murrell, January 25, 2019
China Is the Winner: Bing Go
January 24, 2019
I read “China Appears to Block Microsoft’s Bing as Censorship Intensifies.” The write up explains that Bing is gone. Perhaps the Avis of search systems will return, but I think that some work may be required.
What’s interesting is that I understood Microsoft to be filtering certain results from the index used by those users firing queries from the Middle Kingdom.
The write up explains:
If the block proves to be permanent, it would suggest that Western companies can do little to persuade China to give them access to what has become the world’s largest Internet market by users, especially at a time of increased trade and economic tensions with the United States.
There may be some interesting implications; for example:
- Chinese nationals who are working for Microsoft may find themselves subject to scrutiny. That could bring bad tidings to the individuals and possibly their families.
- The Redmond giant has big plans for its cloud services. In China, the weather forecast could turn grim. I suppose one can think of the possible prohibitions against Microsoft technology as a form of raining on a parade.
- Google’s floundering in China and the more recent dust up about a special China-style search system may suggest that the online ad giant is not on the same wavelength as the government of China.
To sum up, this is significant if less interesting than having one’s mobile phone alert a user when a person of “low social credit score” is near.
Stephen E Arnold, January 24, 2019
That Good Old AI Transition
January 24, 2019
At a recent industry event, The Drum Future of Marketing, IBM’s Jeremy Waite discussed the use of AI services in business. “Digital Transformation Takes Around Four Years and 85% of Them Fail,” The Drum reports. IBM should know about transformation. And failure. Just ask Watson.
Writer Danielle Gibson does note that Waite acknowledges IBM has been bad at communicating what its AI can (and cannot) do. He also shared some insights into the process of transitioning into a company that embraces AI tech. Gibson writes:
“But before we realize this [AI revolution], AI and how it is used in businesses itself needs to mature, said Waite. He reminded that only about 3% of the industry is using AI. Looking ahead, within 18 months to three years, Waite expects this figure to rise a whopping 28%. ‘Particularly when you look at healthcare. It takes such a long time to mature and a lot of it is trying to educate the marketplace on what it is and isn’t,’ he said. There is a competitive element stopping players from talking about some of the incredible projects being driven by AI. Waite added: ‘Any digital transformation project is going to take around four years, and 85% of them fail. That’s the biggest challenge we have, trying to educate people about what it is that most people in the industry don’t want to share.’”
The article touches on what does and does not qualify as AI, and assures us the technology is expected to create more jobs than it eliminates by 2020. We shall see.
Cynthia Murrell, January 24, 2019
Google: Trolls and Love
January 24, 2019
Internet trolls are as old as the Internet. They are annoying, idiotic, and sad individuals. People are getting tired of them. While it is usually best to ignore trolls, some take things to the next level and need to be dealt with seriously. Google, Twitter, Facebook, and other technology companies are implementing AI to detect toxic comments and hate speech. Unfortunately, these AI systems are easy to undermine. The Next Web explains in “Google’s AI To Detect Toxic Comments Can Be Easily Fooled With ‘Love.’”
According to the article, Google’s Perspective AI is easily fooled by typos, extra spaces between words, and innocuous words added to sentences. Google is trying to make the Internet a nicer place:
“The AI project, which was started in 2016 by a Google offshoot called Jigsaw, assigns a toxicity score to a piece of text. Google defines a toxic comment as a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion. The researchers suggest that even a slight change in the sentence can change the toxicity score dramatically. They saw that changing “You are great” to “You are [obscenity] great”, made the score jump from a totally safe 0.03 to a fairly toxic 0.82.”
The AI uses words with negative meanings to build a toxicity score. The design is probably very simple, with negative words assigned a 1 and positive words a 0. Human speech and emotion are more complicated than what such an AI can detect, so sentiment analytics are needed. The only problem is that sentiment analytics are just as easily fooled as Google’s Jigsaw system. How can Google improve this? Time, money, and more trial and error.
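To see why padding and typos work, here is a toy version of the word-list scoring just described. It is an illustration of the weakness, not Perspective’s actual model (which is a trained classifier), and the word list is invented.

```python
# Toy toxicity scorer: the score is the fraction of tokens found on a
# negative-word list. Padding with harmless words or splitting a word
# with spaces drags the score down, which is the evasion described above.
NEGATIVE_WORDS = {"stupid", "idiot", "hate", "trash"}

def toxicity_score(comment: str) -> float:
    tokens = comment.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,!?") in NEGATIVE_WORDS)
    return hits / len(tokens)

print(toxicity_score("you are a stupid idiot"))        # 0.4 -> flagged
print(toxicity_score("you are a stupid idiot love love love love love love"))  # ~0.18 -> diluted
print(toxicity_score("you are a s t u p i d idiot"))   # 0.1 -> the split word is not matched
```

A scorer that averages over tokens, whether a lookup table or a learned model, has the same soft spot: add enough benign tokens and the overall score sinks.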
Whitney Grace, January 24, 2019
Aleph: Another Hidden Internet Indexing Service
January 23, 2019
Law enforcement and intelligence organizations have a new tool to navigate the Dark Web, the Mail & Guardian reports in “French Start-Up Offers ‘Dark Web’ Compass, but Not for Everyone.” The start-up, called Aleph Networks, has developed a way to navigate the Dark Web, but it wants the tool to be wielded only for good. In fact, reports writer Frederic Garlan, the company performs ethics reviews of potential clients and turns down 30 to 40 percent of the licensing requests it receives. We also learn:
“Over the past five years Aleph has indexed 1.4 billion links and 450 million documents across some 140,000 dark web sites. As of December its software had also found 3.9 million stolen credit card numbers. ‘Without a search engine, you can’t have a comprehensive view’ of all the hidden sites, Hernandez said. He and a childhood friend began their adventure by putting their hacking skills to work for free-speech advocates or anti-child abuse campaigners, while holding down day jobs as IT engineers. [Co-founder Celine] Haeri, at the time a teacher, asked for their help in merging blogs by her colleagues opposed to a government reform of the education system. The result became the basis of their mass data collection and indexing software, and the three created Aleph in 2012. They initially raised €200,000 ($228,000) but had several close calls with bankruptcy before finding a keen client in the French military’s weapon and technology procurement agency. ‘They asked us for a demonstration two days after the Charlie Hebdo attack,’ Hernandez said, referring to the 2015 massacre of 12 people at the satirical magazine’s Paris offices, later claimed by a branch of Al-Qaeda. ‘They were particularly receptive to our pitch which basically said, if you don’t know the territory — which is the case with the dark web — you can’t gain mastery of it,’ Haeri added.”
That is a good point. Garlan notes DARPA’s Memex program, which is based on the same principle. As for Aleph, it is now working to incorporate AI into its platform. While the company’s clients so far have mostly been government agencies, it plans to bring in more private-sector clients as it continues to attract investors. Based in Pommiers, France, Aleph Networks was launched in 2012.
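The stolen card numbers mentioned in the quote hint at the kind of pattern matching an index like this can support. Here is a hedged sketch, not Aleph’s actual method: flag digit runs that look like card numbers and keep those that pass the standard Luhn checksum. The regular expression and sample text are invented for illustration.

```python
# Flag card-number-like strings in scraped text: a digit-run pattern match
# followed by the Luhn checksum used to validate payment card numbers.
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")  # 13-19 digits, optional separators

def luhn_valid(number: str) -> bool:
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return len(digits) >= 13 and total % 10 == 0

def find_card_like_numbers(text: str):
    hits = (m.group().strip(" -") for m in CANDIDATE.finditer(text))
    return [h for h in hits if luhn_valid(h)]

# The value below is a well-known test number, not a real card.
print(find_card_like_numbers("dump: 4111 1111 1111 1111 contact elsewhere"))
```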
Cynthia Murrell, January 23, 2019