Google Faces Sanctions over Refusal to Embrace Right to Be Forgotten Ruling

October 2, 2015

The Reuters article titled “France Rejects Google Appeal on Cleaning Up Search Results Globally” explores the ramifications of Europe’s recently established Right to Be Forgotten ruling, under which search engines can be compelled, on request, to remove information from search results. Google has made some attempts to comply, granting 40% of the 320,000 requests it received to remove incorrect, irrelevant, or controversial information, but only on the European versions of its sites. The article delves into the current state of affairs:

“The French authority, the CNIL, in June ordered Google to de-list on request search results appearing under a person’s name from all its websites, including Google.com. The company refused in July and requested that the CNIL abandon its efforts, which the regulator officially refused to do on Monday…France is the first European country to open a legal process to punish Google for not applying the right to be forgotten globally.”

Google countered that while it was happy to meet French and European standards in Europe, it did not see how the European ruling could be enforced globally. The refusal will almost certainly be met with fines and sanctions, but those may be the least of Google’s troubles, considering the disapproval the company continues to face in Europe.
Chelsea Kerwin, October 2, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Harsh Criticism of Yahoo

September 24, 2015

Kill dear old Yahoo? IBTimes reports on some harsh words from an ivory-tower type in “NYU Professor: Yahoo Ought to Be ‘Euthanised’ and Marissa Mayer’s Pregnancy Saved her Job.” It seems marketing professor Scott Galloway recently criticized the company, and its famous CEO, in a televised Bloomberg interview. In his opinion, any website with Yahoo’s traffic should be rolling in dough, and the company’s struggles are the result of mismanagement. As for his claim that the “most overpaid CEO in history” only retains her position due to her pregnancy? Reporter Mary-Ann Russon writes:

“Galloway says that Yahoo would not be willing to face the public backlash that would come from firing a woman in such a position of power who has just announced she is pregnant.

“This is not a stretch since there are still far fewer women in leadership positions than men – as of March 2015, only 24 of the CEOs in Fortune 500 companies are women – and the issue with how companies perceive family planning remains a sore point for many career-minded women (Read: Gamechangers: Why multimillionaire ‘mom’ Marissa Mayer is damned if she does and damned if she doesn’t).

“However, Galloway also pointed the finger of blame for Yahoo’s woes at its board, which he said has been a ‘lesson in poor corporate governance,’ since there have been five CEOs in the last seven years.”

Though Yahoo was a great success around the turn of the millennium, it has fallen behind as users migrate to mobile devices (with that format’s smaller, cheaper ads). While many people still use its free apps, most of Yahoo’s revenue nowadays comes from its Alibaba investment.

So what does Galloway recommend? “It should be sold to Microsoft,” he declared. “We should put a bullet in this story called ‘Yahoo’.” Ouch. Can Yahoo reverse its fortunes, or is it too late for the veteran Internet company?

Cynthia Murrell, September 24, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


A Search Engine for College Students Purchasing Textbooks

August 27, 2015

The Lifehacker article titled “TUN’s Textbook Search Engine Compares Prices from Thousands of Sellers” reviews TUN, or the “Textbook Save Engine.” For college students, tuition and fees are only the beginning of the expenses. Textbook costs alone can skyrocket for students who have no choice but to buy the assigned books if they want to pass their classes. TUN shows students all of the options available from thousands of booksellers. The article says:

“The “Textbook Save Engine” can search by ISBN, author, or title, and you can even use the service to sell textbooks as well. According to the main search page…students who have used the service have saved over 80% on average buying textbooks. That’s a lot of savings when you normally have to spend hundreds of dollars on books every semester… TUN’s textbook search engine even scours other sites for finding and buying cheap textbooks; like Amazon, Chegg, and Abe Books.”

After typing in a book title, you get a list of editions. For example, when I entered Pride and Prejudice, which I had to read for two separate English courses, TUN listed an annotated version, several versions with different forewords (which are occasionally studied in the classroom as well), and Pride and Prejudice and Zombies. After you select an edition, you are brought to the results, laid out with shipping and total prices. It is a handy tool for students who leave themselves enough time to order their books before class begins.

Chelsea Kerwin, August 27, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Open Source Tools for IBM i2

August 17, 2015

IBM has made available two open source repositories for the IBM i2 intelligence platform: the Data-Acquisition-Accelerators and the Intelligence-Analysis-Platform, both found on the IBM-i2 page at GitHub. The IBM i2 suite of products includes many parts that work together to give law enforcement, intelligence organizations, and the military powerful data-analysis capabilities. For a glimpse of what these products can do, we recommend checking out the videos at the IBM i2 Analyst’s Notebook page. (You may have to refresh the page before the videos will play.)
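For readers who want to poke around the code themselves, both repositories can be pulled down with git. A minimal sketch, assuming GitHub’s standard clone URLs for the IBM-i2 organization mentioned above:

    # Fetch both IBM i2 open source repositories (URLs assume the
    # standard github.com/IBM-i2/<repository> naming scheme):
    git clone https://github.com/IBM-i2/Data-Acquisition-Accelerators.git
    git clone https://github.com/IBM-i2/Intelligence-Analysis-Platform.git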

The Analyst’s Notebook is but one piece, of course. For the suite’s full description, I turned to the product page, IBM i2 Intelligence Analysis Platform V3.0.11. The Highlights summary states:

“The IBM i2 Intelligence Analysis product portfolio comprises a suite of products specifically designed to bring clarity through the analysis of the mass of information available to complex investigations and scenarios to help enable analysts, investigators, and the wider operational team to identify, investigate, and uncover connections, patterns, and relationships hidden within high-volume, multi-source data to create and disseminate intelligence products in real time. The offerings target law enforcement, defense, government agencies, and private sector businesses to help them maximize the value of the mass of information that they collect to discover and disseminate actionable intelligence to help them in their pursuit of predicting, disrupting, and preventing criminal, terrorist, and fraudulent activities.”

The description goes on to summarize each piece, from the Intelligence Analysis Platform to the Information Exchange Visualizer. I recommend readers check out this page, and, especially, the videos mentioned above for better understanding of this software’s capabilities. It is an eye-opening experience.

Cynthia Murrell, August 17, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

IT Architecture Needs to Be More Seamless

August 7, 2015

IT architecture might appear to be the same across the board, but the standards change depending on the industry. Rupert Brown wrote “From BCBS To TOGAF: The Need For A Semantically Rigorous Business Architecture” for Bob’s Guide, in which he discusses how TOGAF has become the de facto standard for global enterprise architecture. He explains that while TOGAF has its strengths, its weaknesses include a reliance on diagrams and on PowerPoint to produce them.

Brown spends a large portion of the article stressing that the information content and model matter more, and that diagrams should only be rendered from them later. He goes on to argue that as industries have advanced, the tools have grown more complex, making a more universal approach to IT architecture all the more important.

What is Brown’s supposed solution? Semantics!

“The mechanism used to join the dots is Semantics: all the documents that are the key artifacts that capture how a business operates and evolves are nowadays stored by default in Microsoft or Open Office equivalents as XML and can have semantic linkages embedded within them. The result is that no business document can be considered an island any more – everything must have a reason to exist.”

The reason TOGAF has not been standardized using semantics is the lack of a mechanism to connect the various architecture models together. A standardized XBRL language for financial and regulatory reporting would help get the process started, but the biggest obstacle will be the people who make a decent living producing PowerPoint decks (so he claims).

Brown calls for a global reporting standard spanning all industries, but that is a pie-in-the-sky hope unless governments impose regulations or every industry has a meeting of the minds. Why? Different industries do not always mesh (think engineering firms vs. a publishing house), and each has its own list of needs and concerns. Why not focus on getting standards right within one industry rather than across the board?

Whitney Grace, August 7, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Humans Screw Up the Self-Driving Car Again

August 5, 2015

Google really, really wants its self-driving cars approved for consumer use. While the cars have been tested on actual roads, the testing has been accompanied by accidents. The Inquirer posted the article “Latest Self-Driving Car Crash Injures Three Google Employees,” about how the public might not be ready for self-driving vehicles. Google, not surprisingly, blames the crash on humans.

Google has been testing self-driving cars for over six years, and there have been a total of fourteen accidents involving the vehicles. The most recent is the first to result in injuries. Three Google employees were riding in the self-driving vehicle in Mountain View, California, during rush-hour traffic on July 1; afterward, each of the three was treated for whiplash. Google says its car was not at fault and that a distracted driver caused the accident, which the company says is also the reason for the other accidents.

While Google is upset, the accidents have not hindered its plans; if anything, they have motivated the company to push forward. Google explained:

“The most recent collision, during the evening rush hour on 1 July, is a perfect example. The light was green, but traffic was backed up on the far side, so three cars, including ours, braked and came to a stop so as not to get stuck in the middle of the intersection. After we’d stopped, a car slammed into the back of us at 17 mph, and it hadn’t braked at all.”

Google continues to insist that human error and inattention are ample reason to put self-driving cars on the road. While it is hard to trust a machine to pilot what amounts to a weapon moving at 50 miles per hour, why do we keep handing licenses to people who have proven to be poor drivers?

Whitney Grace, August 5, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Whither Unix Data

July 30, 2015

For anyone using open-source Unix to work with data, IT World has a few tips for you in “The Best Tools and Techniques for Finding Data on Unix Systems.” In her regular column, “Unix as a Second Language,” writer Sandra Henry-Stocker explains:

“Sometimes looking for information on a Unix system is like looking for needles in haystacks. Even important messages can be difficult to notice when they’re buried in huge piles of text. And so many of us are dealing with ‘big data’ these days — log files that are multiple gigabytes in size and huge record collections in any form that might be mined for business intelligence. Fortunately, there are only two times when you need to dig through piles of data to get your job done — when you know what you’re looking for and when you don’t. 😉 The best tools and techniques will depend on which of these two situations you’re facing.”

When you know just what to search for, Henry-Stocker suggests the “grep” command. She supplies a few variations, complete with a poetic example. Sometimes, like when tracking errors, you’re not sure what you will find but do know where to look. In those cases, she suggests using the “sed” command. For both approaches, Henry-Stocker supplies example code and troubleshooting tips. See the article for the juicy details.
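Her specific examples are behind the link, but a minimal sketch of the two approaches might look like this (the file paths and search strings here are invented for illustration):

    # Known target: grep for a fixed string, case-insensitive,
    # with line numbers, across a directory of logs:
    grep -in "connection timeout" /var/log/myapp/*.log

    # Recursive search with two lines of context around each match:
    grep -rn -C 2 "ERROR" /var/log/myapp/

    # Known location, unknown content: use sed to print only a
    # region of interest, such as lines 100 through 120:
    sed -n '100,120p' /var/log/myapp/app.log

    # Or print everything between two landmark patterns:
    sed -n '/BEGIN REQUEST/,/END REQUEST/p' /var/log/myapp/app.log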

Cynthia Murrell, July 30, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Quality Peer Reviews Are More Subjective Than Real Science

July 16, 2015

Peer-reviewed journals are supposed to carry an extra degree of authority because a team of experts has read and critiqued each work. Science 2.0 points out in the article “Peer Review Is Subjective And The Quality Is Highly Variable” that peer-reviewed journals might not be worth their weight in opinions.

Peer reviews are supposed to be objective criticisms of a work, but personal beliefs and political views have been working their way into the process for some time. This should not come as a surprise, since academia has been plagued by the problem for decades. The problem has been discussed, too, but it keeps getting brushed under the rug. In true academic fashion, someone has conducted a study to determine how reliable peer review comments are:

“A new paper on peer review discusses the weaknesses we all see – it is easy to hijack peer review when it is a volunteer effort that can drive out anyone who does not meet the political or cultural litmus test. Wikipedia is dominated by angry white men and climate science is dominated by different angry white men, but in both cases they were caught conspiring to block out anyone who dissented from their beliefs.  Then there is the fluctuating nature of guidelines. Some peer review is lax if you are a member, like at the National Academy of Sciences, while the most prominent open access journal is really editorial review, where they check off four boxes and it may never go to peer review or require any data, especially if it matches the aesthetic self-identification of the editor or they don’t want to be yelled at on Twitter.”

The peer review problem is getting worse in the digital landscape. There are suggested solutions, such as banning all fees associated with academic journals and databases or homogenizing review criteria across fields, but even these would leave the problems far from corrected. Reviewers are paid to review works, which likely involves kickbacks of some kind. And getting different academic journals, much less different fields, to standardize anything will take a huge amount of effort, if they can come to any sort of agreement at all.

Fixing the review system will not happen quickly, and anytime money is involved, the process slows even further. In short, academic journals are far from objective, which is why it pays to do your own research and take everything with a grain of salt.


Whitney Grace, July 16, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

How Not to Drive Users Away from a Website

July 15, 2015

Writer and web psychologist Liraz Margalit at the Next Web has some important advice for websites in “The Psychology Behind Web Browsing.” Apparently, paying attention to human behavioral tendencies can help webmasters avoid certain pitfalls that could damage their brands. Imagine that!

The article cites a problem an unspecified news site encountered when it tried to build interest in its videos by making them play automatically when a user navigated to its homepage. I suspect I know who they’re talking about, and I recall thinking at the time, “how rude!” I thought it was just because I didn’t want to be chastised by people near me for suddenly blaring a news video. According to Margalit, though, my problem goes much deeper: it’s an issue of control rooted in prehistory. She writes:

“The first humans had to be constantly on alert for changes in their environment, because unexpected sounds or sights meant only one thing: danger. When we click on a website hoping to read an article and instead are confronted with a loud, bright video, the automatic response is not so different from that of our prehistoric ancestors, walking in the forest and stumbling upon a bear or a saber-toothed hyena.”

This need for safety has morphed into a need for control; we do not like to be startled or lost. When browsing the Web, we want to encounter what we expect to encounter (perhaps not in terms of content, but certainly in terms of format). The name for this is the “expectation factor,” and an abrupt assault on the senses is not the only pitfall to be avoided. Getting lost in an endless scroll can also be disturbing; that is why those floating menus that follow you as you move down the page were invented. Margalit notes:

“Visitors like to think they are in charge of their actions. When a video plays without visitors initiating any interaction, they feel the opposite. If a visitor feels that a website is trying to ‘sell’ them something, or push them into viewing certain content without permission, they will resist by trying to take back the interaction and intentionally avoid that content.”

And that, of course, is the opposite of what websites want, so giving users the control they expect is a smart business move. Besides, it’s only polite to ask before engaging a visitor’s Adobe Flash or, especially, speakers.

Cynthia Murrell, July 15, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Researchers Glean Audio from Video

July 10, 2015

Now, this is fascinating. Scary, but fascinating. MIT News explains how a team of researchers from MIT, Microsoft, and Adobe are “Extracting Audio from Visual Information.” The article includes a video in which one can clearly hear the poem “Mary Had a Little Lamb” as extrapolated from video of a potato chip bag’s vibrations filmed through soundproof glass, among other amazing feats. I highly recommend you take four-and-a-half minutes to watch the video.

Writer Larry Hardesty lists some other surfaces from which the team was able to reproduce audio by filming vibrations: aluminum foil, water, and plant leaves. The researchers plan to present a paper on their results at this year’s Siggraph computer graphics conference. See the article for some details on the research, including camera specs and algorithm development.

So, will this tech have any non-spying-related applications? Hardesty cites MIT grad student Abe Davis, first author on the team’s paper, as he writes:

“The researchers’ technique has obvious applications in law enforcement and forensics, but Davis is more enthusiastic about the possibility of what he describes as a ‘new kind of imaging.’

“‘We’re recovering sounds from objects,’ he says. ‘That gives us a lot of information about the sound that’s going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways.’ In ongoing work, the researchers have begun trying to determine material and structural properties of objects from their visible response to short bursts of sound.”

That’s one idea. Researchers are confident other uses will emerge, ones no one has thought of yet. This is a technology to keep tabs on, and not just to decide when to start holding all private conversations in windowless rooms.

Cynthia Murrell, July 10, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
