Change Is Hard, Especially in the User Interface

March 22, 2016

One of the most annoying things in life is when you go to the grocery store and notice the entire place has been rearranged since your last visit.  I always ask myself, “Why, grocery store people, did you do this to me?”  Half the reason is to improve the shopping experience and product exposure; the other half is to screw with customers (I cannot confirm the latter).  Fuzzy Notepad, the blog with the Pokémon Eevee mascot, argues in a post titled “We Have Always Been At War With UI” that programmers and users have always been at war with each other when it comes to the user interface.

Face it, Web sites (and other areas of life) need to change to maintain their relevancy.  The biggest problem with UI changes is the rollout of said changes.  The post points out that users get confused and spend hours trying to understand a change.  Sometimes the change is announced; other times it is only applied to a certain number of users.
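That only-some-users approach is typically implemented with a percentage-based feature flag that buckets users deterministically. Here is a minimal sketch of the idea in Python; the function, feature name, and user ID are hypothetical illustrations, not anything from the post:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically decide whether a user sees a staged UI change.

    Hashing the user ID together with the feature name assigns each
    user a stable bucket from 0 to 99, so the same user always sees
    the same interface while the rollout percentage is raised.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Show the redesigned UI to 10 percent of users; raise the number
# (and announce the change) as confidence grows.
if in_rollout("user-42", "new-checkout-ui", 10):
    print("render the new interface")
else:
    print("render the old interface")
```

The appeal of deterministic bucketing is that a user is never flipped back and forth between the old and new interfaces while the percentage ramps up.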

The post lists several UI changes, describing how each was handled and the programming behind it.  One constant thread runs through the post: users simply hate change.  But the inevitable question of “Why?” pops up.

“Ah, but why? I think too many developers trot this line out as an excuse to ignore all criticism of a change, which is very unhealthy. Complaints will always taper off over time, but that doesn’t mean people are happy, just that they’ve gone hoarse. Or, worse, they’ve quietly left, and your graphs won’t tell you why. People aren’t like computers and may not react instantly to change; they may stew for a while and drift away, or they may join a mass exodus when a suitable replacement comes along.”

Big data can measure anything and everything, but the numbers can be interpreted for or against a change.  Even worse, the analysts may not know exactly what they need to measure.  What can be done to avoid total confusion about changes is to have a plan, let users know in advance, and even create a tutorial about how to use the changes.  Worst comes to worst, the change can be rolled back, and then we move on.

Whitney Grace, March 22, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Infonomics and the Big Data Market Publishers Need to Consider

March 22, 2016

The article on Beyond the Book titled “Data Not Content Is Now Publishers’ Product” floats a new buzzword in its discussion of the future of information: infonomics, or the study of the creation and consumption of information. The article compares information to petroleum as the resource that will cause quite a stir in this century. Grace Hong, Vice-President of Strategic Markets & Development for Wolters Kluwer’s Tax & Accounting, weighs in:

“When it comes to big data – and especially when we think about organizations like traditional publishing organizations – data in and of itself is not valuable.  It’s really about the insights and the problems that you’re able to solve,”  Hong tells CCC’s Chris Kenneally. “From a product standpoint and from a customer standpoint, it’s about asking the right questions and then really deeply understanding how this information can provide value to the customer, not only just mining the data that currently exists.”

Hong points out that data itself is useless unless it is put to work correctly. That means asking the right questions and using the best technology available to find meaning in the massive collections of information it is now possible to gather. Hong suggests that it is time for publishers to seize on the market created by Big Data.

Chelsea Kerwin, March 22, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

How Many Types of Big Data Exist?

March 18, 2016

Navigate to “The Five Different Types of Big Data.” If you are a student of classification, you will find the categories set forth in this write up an absolute hoot. The author is an expert, I assume, in energy, transportation, food, and data. Oh, goodie. Food.

I have not thought too much about the types of Big Data. I usually think only when a client pays me to perform that function. An example is my analysis of the concept of “real time” information. You can find that write up at this link. “Big” requires me to understand the concept of “relative to what?” I find this type of thinking uninteresting, but obviously the editors at Forbes find the idea just another capitalist tool.

When I learned that an expert had chased down the types of Big Data, I was and remain confused. “Big” describes something that is relative. “Data” is the plural of “datum” and refers to more than one fact or statistic: quantities, characters, symbols, etc.

I am not sure what Big Data is, and like many marketing buzzwords, the phrase has become a catchall for vendors of all manner of computer-related products and services.

Here are the five types of Big Data.

  1. Big data. I like the Kurt Friedrich Gödel touch.
  2. Fast data. “Relative to what?” I ask.
  3. Dark data. “Darker than what? Is this secret versus un-secret or some other yardstick?” I wonder.
  4. Lost data. I pose to myself, “Lost as in unknown, known but unknown, or some other Rumsfeldesque state of understanding?”
  5. New data. I think, “I really don’t want to think about what ‘new’ means. Is this new as in never before seen, or Madison Avenue ‘new,’ like an improved Colgate Total toothpaste with whitener?”

I like the tag on the article “Recommended by Forbes.” Quite an endorsement from a fine example of capitalistic tool analysis.

Stephen E Arnold, March 18, 2016

A Thought Leader Embraces Mid Tier Consultant Thinking

March 16, 2016

I read “The Hype of Big Data Revisited: It’s About Extracting Value.” I am not particularly interested in “how big” discussions. What I found interesting was that a thought leader reproduced a mid tier consulting firm’s Hype Cycle for Emerging Technologies, 2015. I thought mid tier outfits were not too keen on having their proprietary charts reproduced. Obviously I am off the beam on this assumption.

I did note this statement:

In between 2013 and 2014, Big Data reached the Peak of Inflated Expectations in Gartner’s Hype Cycle for Emerging Technologies. By mid 2014, Big Data was sliding into the Trough of Disillusionment, and by 2015, the term was removed from the hype cycle altogether.

More mid tier goodness.

Here’s what I learned about the source of this write up:

Bob E. Hayes, PhD is the Chief Research Officer of Analytics Week and president of Business Over Broadway. At Analytics Week, he is responsible for directing research to identify organizational best practices in the areas of Big Data, data science and analytics. He is considered a thought leader in the field of customer experience management. He conducts research on analytics, customer feedback programs, customer experience / satisfaction / loyalty measurement and shares his insights through his talks, blogs and books.

Perhaps the notion of thought leadership recycling a mid tier consulting firm’s viewpoints is the future of deep insight and analysis. Wow, the mid tier consulting firm is a significant influence on some thought leaders.

Too bad the intellectual force does not reach my part of rural Kentucky. It obviously skips me and works its magic in Bowling Green, the home of the Corvette hole.

Stephen E Arnold, March 15, 2016

Big Data Adoption Rate

March 12, 2016

I read “More Companies Walking the Big Data Walk.” The highlight of the article was information from a survey by an outfit called CompTIA. There is some detail about the size of the sample, but not much about how the sample was selected. That’s not surprising in a world where Survey Monkey has to retool.

Here’s one passage which I highlighted with my trusty yellow marker:

…the fraction of companies embarking upon big data initiatives continues to accelerate. In its 2015 Big Data Insights and Opportunities study, the research organization found that 51 percent of survey respondents report having big data projects in place today, up from 42 percent in 2013. In a corresponding shift, just 36 percent report having a big data project in the planning stage, down from 46 percent two years ago.

The write up pointed out:

This groundswell of movement in big data analytics is being felt up and down the supply chain.

The system in favor is Hadoop. There is no hint of the challenges Hadoop presents, nor of the difficulties some organizations have recruiting competent folks with the technical, math, and analytic skills helpful in a Big Data whirlwind.

IBM Watson has a fix, however: the citizen data scientist. For more information about this initiative, point your browser at “IBM Expands Watson Analytics Program, Creates Citizen Data Scientists.” Everything but revenue growth, it seems.

Stephen E Arnold, March 7, 2016

Hershey Chocolate: Semi Sweet Analytics?

March 4, 2016

I am wrapping up my profile of Palantir Technologies. I located a couple of references to Palantir’s activities in the non-government markets. One of the outfits allegedly wooed by the Hobbits was Hershey, the chocolate outfit. A typical reference to the Hobbits and Kisses folks was “Hershey Turns Kisses and Hugs into Hard Data.”


When I read “The Hershey Company Partners with Infosys to Build Predictive Analytics Capability using Open Source Information Platform on Amazon Web Services,” I wondered why Palantir Technologies was not featured in the write up. Praescient Analytics, near Washington, DC, can plug industrial-strength predictive analytics like Recorded Future’s into a Palantir Metropolitan installation without much hassle.

The write up makes clear that the chocolate outfit is going a new way. The path leads through Amazon Web Services to the Infosys Information Platform.

I find this quite a surprise. I have no doubt that Infosys has some competent folks on its team. But the questions flashing through my mind are:

  • What’s up with the Palantir system?
  • Why jump to Infosys when there are darned good outfits available in Boston and Washington, DC?
  • What’s an outsourcing firm able to deliver that specialists with deep experience in making sense of data cannot?

I never understood Mars, and now I don’t understand the makers of the York Peppermint Patty.

Perhaps this is a “whopper” of a project?

Stephen E Arnold, March 4, 2016

Real Time: Maybe, Maybe Not

March 1, 2016

Years ago an outfit in Europe wanted me to look at claims made by search and content processing vendors about real time functions.

The goslings and I rounded up the systems, pumped our test corpus through, and tried to figure out what was real time.

The general buzzy Teddy Bear notion of real time is that when new data are available to the system, the system processes the data and makes them available to other software processes and users.

The Teddy Bear view is:

  1. Zero latency
  2. Works reliably
  3. No big deal for modern infrastructure
  4. No engineering required
  5. Any user connected to the system has immediate access to reports, including the new or changed data.

Well, guess what, Pilgrim?

We learned quickly that real time, like love and truth, is a darned slippery concept. Here’s one view of what we learned:

Types of Real Time Operations. © Stephen E Arnold, 2009

The main point of the chart is that there are six types of real time search and content processing. When someone says, “Real time,” there are a number of questions to ask. The major finding of the study was that for near real time processing at a financial trading outfit, the cost soars into seven figures and may keep rising as the volume of data to be processed goes up. The other big finding was that every real time system introduces latency. Seconds, minutes, hours, days, or weeks may pass before an update actually becomes available to other subsystems or to users.

If you think you are looking at real time info, you may want to shoot us an email. We can help you figure out which type of “real time” your real time system is delivering. Write benkent2020 @ yahoo dot com and put Real Time in the subject line, gentle reader.
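One practical way to pin down which type of “real time” a system delivers is to timestamp a probe record at ingest and poll until it becomes visible to queries; the gap is the latency. Here is a minimal sketch of that measurement in Python; the ingest and searchability calls are hypothetical stand-ins for whatever API the system under test exposes:

```python
import time

def ingest_to_visibility_latency(ingest, is_searchable, probe_id: str) -> float:
    """Measure how long a probe record takes to become visible to queries.

    `ingest` writes a record into the content processing system and
    `is_searchable` reports whether it has shown up in query results;
    both are stand-ins for the API of the system under test.
    """
    start = time.monotonic()
    ingest({"id": probe_id, "body": "latency probe"})
    while not is_searchable(probe_id):   # poll until the record is visible
        time.sleep(0.1)
    return time.monotonic() - start      # latency in seconds

# Demo against an in-memory "system" so the sketch runs as-is.
store = set()
print(ingest_to_visibility_latency(
    lambda record: store.add(record["id"]),
    lambda pid: pid in store,
    "probe-1",
))
```

Run against a production indexing pipeline, the same probe can report anything from sub-second to hours, which is exactly the spread the chart above tries to capture.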

I thought about this research project when I read “Why the Search Console Reporting Is not real time: Explains Google!” As you work through the write up, you will see that the latency in the system is essentially part of the woodwork. The data one accesses are stale. Figuring out how stale is a fairly big job. The Alphabet Google thing is dealing with budgets, infrastructure costs, and a new chief financial officer.

Real time. Not now and not unless something magic happens to eliminate latencies, marketing baloney, and user misunderstanding of real time.

Excitement in non real time.

Stephen E Arnold, March 1, 2016

Computational Demand: Not So Fast

February 19, 2016

Analytics, Big Data, and smart software. Today’s computer systems can handle the load, or so the marketing goes.

Moore’s Law, that is, the drive to make chips ever more capable, is dead. I just learned this. See “Moore’s Law Really Is Dead This Time.” If that is the case, too bad for some computations.

With the rise of mobile and the cloud, who worries about doing complex calculations?

As it turns out, some researchers do. Navigate to “New Finding May Explain Heat Loss in Fusion Reactors.”

Here’s the passage that underscores the need to innovate in computational systems:

it requires prodigious amounts of computer time to run simulations that encompass such widely disparate scales, explains Howard, who is the lead author on the paper detailing these simulations. Accomplishing each simulation required 15 million hours of computation, carried out by 17,000 processors over a period of 37 days at the National Energy Research Scientific Computing Center — making this team the biggest user of that facility for the year. Using an ordinary MacBook Pro to run the full set of six simulations that the team carried out…would have taken 3,000 years.
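The quoted figures roughly check out. A back-of-the-envelope verification, assuming 24-hour days, fully occupied processors, and a laptop with about four usable cores:

```python
# Back-of-the-envelope check of the quoted figures, assuming 24-hour
# days and all 17,000 processors busy for the full 37 days.
core_hours_per_simulation = 17_000 * 37 * 24      # 15,096,000, i.e. ~15 million

# Six simulations on a laptop, assuming roughly four usable cores.
total_core_hours = 6 * core_hours_per_simulation  # ~90.6 million
laptop_years = total_core_hours / 4 / (24 * 365)  # ~2,585 years, call it 3,000
print(core_hours_per_simulation, round(laptop_years))
```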

The next time you buy into the marketing baloney, keep in mind the analyses which require serious computational horsepower. Figuring out who bought what brand of candy on Valentine’s Day is a different animal from simulating heat loss in a fusion reactor.

Stephen E Arnold, February 19, 2016

Dark Web Crime Has Its Limits

February 12, 2016

The Dark Web is an intriguing and mysterious phenomenon, but rumors about what can be found there are exaggerated. Infomania examines what is and what is not readily available in that murky realm in “Murder-for-Hire on the Dark Web? It Can’t Be True!”

Anonymity is the key factor in whether certain types of criminals hang out their shingles on the TOR network. Crimes that can be more easily committed without risking identification include drug trafficking, fraud, and information leaks.  On the other hand, contract assassins, torture-as-entertainment, and human trafficking are not actually to be found, despite reports to the contrary. See the article for details on each of these, and more. The article cites independent researcher Chris Monteiro as it summarizes:

The dark web is rife with cyber crime. But it’s more rampant with sensationalized myths about assassination and torture schemes — which, as Chris can attest, simply aren’t true. “What’s interesting is so much of the coverage of these scam sites is taken at face value. Like, ‘There is a website. Therefore its contents must be true.’ Even when mainstream media picks it up, very few pick it up skeptically,” he says.

Take the Assassination Market, for example. When news outlets got wind of its alleged existence in 2013, they ran with the idea of “Murder-for-hire!!” on the Internet underground. Although Chris has finally demonstrated that these sites are not real, their legend lives on in Internet folklore. “Talking about the facts — this is how cybercrime works, this is how Tor and Bitcoin work — is a lot less sexy than saying, ‘If you click on the wrong link, you’ll be kidnapped, and you’ll end up in a room where you’ll be livestreamed, murdered, and you’re all over the internet!’” Chris says. “All I can do is point out what’s proven and what isn’t.”

So, next time someone spins a scary tale about killers-for-hire who are easily found online, you can point them to this article. Yes, drug trafficking, stolen data, and other infractions are big problems associated with the Dark Web, but let us not jump at shadows.

Cynthia Murrell, February 12, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

To Search the Dark Web

February 11, 2016

If you have wondered how, exactly, one searches for information on the Dark Web, take a gander at “The Best TOR Search Engines of 2016” at Cyberwarzone. Reporter CWZ writes:

“On the TOR network you can find various websites just like you find on the ‘normal web.’ The websites which are hosted on the TOR network are not indexed by search engines like Google, Bing and Yahoo, but the search engines which are listed below, do index the TOR websites which are hosted via the TOR network. It is important to remember that you do need the TOR client on your device in order to access the TOR network, if you cannot use a TOR client on your device, you can use one of the free TOR gateways which are listed below in the web TOR providers tab.”

The article warns about malicious TOR clients and strongly suggests readers download the client found at the official TOR website. Four search engines are listed: https://Ahmia.fi, https://Onion.cab, https://onion.link/, and http://thehiddenwiki.org/. CWZ also lists the Web TOR gateways, through which one can connect to TOR services with a standard Web browser instead of a TOR client. See the end of the article for that information.
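For the mechanically curious, here is a minimal Python sketch of what a TOR client actually provides: a local SOCKS5 proxy (port 9050 by default) through which ordinary HTTP requests can reach .onion hosts. The onion address below is a placeholder, and the sketch assumes the requests library is installed with SOCKS support (pip install requests[socks]):

```python
import requests  # assumes `pip install requests[socks]` for SOCKS support

# A locally running TOR client exposes a SOCKS5 proxy on 127.0.0.1:9050.
# The "socks5h" scheme makes the proxy resolve hostnames, which is what
# .onion addresses require (ordinary DNS cannot resolve them).
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_via_tor(url: str) -> str:
    """Fetch a URL through the local TOR SOCKS proxy and return the body."""
    response = requests.get(url, proxies=TOR_PROXIES, timeout=60)
    response.raise_for_status()
    return response.text

# Placeholder onion address; substitute a real hidden service URL.
# print(fetch_via_tor("http://exampleonionaddress.onion/")[:500])
```

The Web gateways the article mentions do the same proxying server-side, which is why they work from a standard browser but sacrifice the anonymity a local TOR client provides.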

 

Cynthia Murrell, February 11, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
