Podcast Search Service

December 18, 2015

I read “Podcasting’s Search Problem Could be Solved by This Spanish Startup.” According to the write up:

Smab’s web app will automatically transcribe podcasts, giving listeners a way to scan and search their content.

What’s the method? I learned from the article:

The company takes audio files and generates text files. If those text files are hosted on Smab’s site, a person can click on a word in the transcript and it will take them directly to that part of the recording, because the transcript and the text are synced. In fact, a second program assesses the audio to determine where sentences begin, making it easier to find chunks of audio. Both functions are uneven, but it’s worth noting here that the company is in a very early stage.
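To make the syncing idea concrete, here is a minimal sketch in Python, assuming only that a speech-to-text pass emits a start offset for each recognized word. The transcript data and function names are hypothetical, not Smab's implementation: a click on a word becomes a lookup into a timestamped word list, and the player seeks to that offset.

from bisect import bisect_right

# Hypothetical output of a speech-to-text pass: (start_offset_seconds, word)
transcript = [
    (0.0, "welcome"),
    (0.6, "to"),
    (0.8, "the"),
    (1.0, "podcast"),
    (2.3, "today"),
    (2.9, "we"),
    (3.1, "discuss"),
    (3.6, "search"),
]

def seek_offset_for_word(index):
    """Playback position (in seconds) for the word the listener clicked."""
    return transcript[index][0]

def word_at_time(seconds):
    """Inverse lookup: which word is being spoken at a given moment."""
    starts = [start for start, _ in transcript]
    i = max(bisect_right(starts, seconds) - 1, 0)
    return transcript[i][1]

print(seek_offset_for_word(3))  # 1.0 -> the player jumps to "podcast"
print(word_at_time(3.2))        # "discuss"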

There are three challenges for automatic voice to text to indexing from audio and video sources:

First, there is a great deal of content. The computational cost to convert a large chunk of audio data to a searchable form and then offer a reasonably robust search engine is significant.

Second, selectivity requires an editorial policy. Business and government are likely paying customers, but the topics these folks chase change frequently. The risk is that a paying customer will be disappointed and drop the service. Thus, sustainable revenue may be an issue.

Third, indexing podcasts is work that Apple handles rather offhandedly, and indexing video is something YouTube performs as part of Google's massive investment in its search system. The fact that neither of these firms has pushed forward with more sophisticated search systems suggests that market demand may not be significant.

I hope the Smab service becomes available. Worth watching.

Stephen E Arnold, December 21, 2015

Topsy: Good Bye, Gentle Search Engine

December 18, 2015

I used Topsy as a way to search certain social content. No more. The service, she be dead.

The money-constrained Apple has shut down the public Topsy search system; see “Social Analytics Firm Topsy Shut Down by Apple Two Years After Purchase.”

If you want a recommendation for an alternative, sorry, I don’t have one. There are some solutions that are not free to the general public. The gateways to social media content require money and a bit of effort. If you cannot search content, maybe the content does not exist? That’s a comforting thought unless one knows that the content is available, just not searchable by a person with an Internet connection in a public library, at home, or from the local Apple store.

Stephen E Arnold, December 21, 2015

New Patent for a Google PageRank Methodology

December 18, 2015

Google recently acquired a patent for a different approach to page ranking, we learn from “Recalculating PageRank” at SEO by the Sea. Though the patent was just granted, the application was submitted back in 2006. Writer Bill Slawski informs us:

“Under this new patent, Google adds a diversified set of trusted pages to act as seed sites when calculating rankings for pages. Google would calculate a distance from the seed pages to the pages being ranked. A use of a trusted set of seed sites may sound a little like the TrustRank approach developed by Stanford and Yahoo a few years ago as described in Combating Web Spam with TrustRank (pdf). I don’t know what role, if any, the Yahoo paper had on the development of the approach in this patent application, but there seems to be some similarities. The new patent is: Producing a ranking for pages using distances in a Web-link graph.”

The theory behind trusted pages is that “good pages seldom point to bad ones.” The patent’s inventor, Nissan Hajaj, has been a Google senior engineer since 2004. See the write-up for the text of the patent, or navigate straight to the U.S. Patent and Trademark Office’s entry on the subject.
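As a rough illustration of the distance idea, here is a minimal sketch that assumes a toy web-link graph and plain hop counts; the patent describes weighted distances and a much larger seed set, so this is only the general shape of the technique, and the page names are invented.

from collections import deque

def distance_from_seeds(link_graph, seeds):
    """link_graph maps a page to the pages it links to; returns the shortest
    hop count from any trusted seed page to every reachable page."""
    dist = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:
        page = queue.popleft()
        for target in link_graph.get(page, ()):
            if target not in dist:          # first visit = shortest distance
                dist[target] = dist[page] + 1
                queue.append(target)
    return dist

# Hypothetical toy web graph; pages closer to the trusted seeds rank higher.
graph = {
    "trusted-news.example": ["blog-a.example", "blog-b.example"],
    "blog-b.example": ["blog-a.example"],
    "blog-a.example": ["spammy.example"],
}
print(distance_from_seeds(graph, ["trusted-news.example"]))
# {'trusted-news.example': 0, 'blog-a.example': 1, 'blog-b.example': 1, 'spammy.example': 2}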


Cynthia Murrell, December 18, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Old School Mainframes Still Key to Big Data

December 17, 2015

According to ZDNet, “The Ultimate Answer to the Handling of Big Data: The Mainframe.” Believe it or not, a recent survey of 187 IT pros by Syncsort found the mainframe to be important to their big data strategies. IBM has even created a Hadoop-capable mainframe. Reporter Ken Hess lists some of the survey’s findings:

*More than two-thirds of respondents (69 percent) ranked the use of the mainframe for performing large-scale transaction processing as very important

*More than two-thirds (67.4 percent) of respondents also pointed to integration with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of mainframe

*While the majority (79 percent) analyze real-time transactional data from the mainframe with a tool that resides directly on the mainframe, respondents are also turning to platforms such as Splunk (11.8 percent), Hadoop (8.6 percent), and Spark (1.6 percent) to supplement their real-time data analysis […]

*82.9 percent and 83.4 percent of respondents cited security and availability as key strengths of the mainframe, respectively

*In a weighted calculation, respondents ranked security and compliance as their top areas to improve over the next 12 months, followed by CPU usage and related costs and meeting Service Level Agreements (SLAs)

*A separate weighted calculation showed that respondents felt their CIOs would rank all of the same areas in their top three to improve

Hess goes on to note that most of us probably utilize mainframes without thinking about it: whenever we pull cash out of an ATM, for example. The mainframe’s security and scalability remain unequaled, he writes, by any other platform or platform cluster yet devised. He links to a couple of resources besides the Syncsort survey that support this position: a white paper from IBM’s Big Data & Analytics Hub and a report from research firm Forrester.


Cynthia Murrell, December 17, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


The Modern Law Firm and Data

December 16, 2015

We thought it was a problem when law enforcement officials did not understand how the Internet and Dark Web work, or the capabilities of eDiscovery tools. A law firm that does not know how to work with data-mining tools, much less grasp the importance of technology, is losing credibility, profit, and evidence for its cases. According to Information Week in “Data, Lawyers, And IT: How They’re Connected,” the modern law firm needs to be aware of how eDiscovery tools, predictive coding, and data science work and how they can benefit its cases.

It can be daunting to understand how new technology works, especially in a law firm. The article explains how these tools and more work in four key segments: what role data plays before trial, how it is changing the courtroom, how new tools pave the way for unprecedented approaches to law practice, and how data is improving the way law firms operate.

Data in pretrial amounts to one word: evidence. People live their lives via their computers and create a digital trail without realizing it. With a few eDiscovery tools, lawyers can assemble all necessary information within hours. Data tools in the courtroom make practicing law seem like a scenario out of a fantasy or science fiction novel. Lawyers are able to immediately pull up information to use as evidence for cross-examination or to validate facts. New eDiscovery tools are also useful because they allow lawyers to prepare their arguments based on the judge and jury pool. More data is available on individual cases rather than just big-name ones.

“The legal industry has historically been a technology laggard, but it is evolving rapidly to meet the requirements of a data-intensive world.

‘Years ago, document review was done by hand. Metadata didn’t exist. You didn’t know when a document was created, who authored it, or who changed it. eDiscovery and computers have made dealing with massive amounts of data easier,’ said Robb Helt, director of trial technology at Suann Ingle Associates.”
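As a small illustration of the metadata Helt mentions, here is a hedged, standard-library-only sketch (not an eDiscovery product) that pulls the author and creation date out of a .docx file, which is simply a zip archive containing a docProps/core.xml part; the file name below is hypothetical.

import zipfile
import xml.etree.ElementTree as ET
from datetime import datetime
from pathlib import Path

NS = {"dc": "http://purl.org/dc/elements/1.1/",
      "dcterms": "http://purl.org/dc/terms/"}

def document_metadata(path):
    """Author and creation date from a .docx (a zip holding docProps/core.xml),
    plus the last-modified time recorded by the filesystem."""
    info = {}
    p = Path(path)
    if p.exists():
        info["modified_on_disk"] = datetime.fromtimestamp(p.stat().st_mtime)
    try:
        with zipfile.ZipFile(path) as z:
            core = ET.fromstring(z.read("docProps/core.xml"))
            info["author"] = core.findtext("dc:creator", default="", namespaces=NS)
            info["created"] = core.findtext("dcterms:created", default="", namespaces=NS)
    except (OSError, KeyError, zipfile.BadZipFile):
        pass  # missing file or not a .docx; keep whatever we collected
    return info

print(document_metadata("contract_draft.docx"))  # hypothetical file name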

Legal eDiscovery is one of the branches of big data that has skyrocketed in the past decade. While the examples discussed here come from respected law firms, keep in mind that eDiscovery technology is still new. Ambulance chasers and other small firms probably do not have a full IT squad on staff, so when vetting a lawyer, ask about the firm’s eDiscovery capabilities.

Whitney Grace, December 16, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Google Timeline Knows Where You Have Been

December 16, 2015

We understand that to get the most out of the Internet, we sacrifice a bit of privacy; but do we all understand how far-reaching that sacrifice can be? The Intercept reveals “How Law Enforcement Can Use Google Timeline to Track Your Every Move.” For those who were not aware, Google helpfully stores all the places you (or your devices) have traveled, down to longitude and latitude, in Timeline. Now, with an expansion launched in July 2015, that information goes back years, instead of just six months. Android users must actively turn this feature off to avoid being tracked.
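For readers curious what that stored history looks like, here is a hedged sketch that assumes the older Google Takeout “Location History” JSON export, in which coordinates are stored as integers scaled by ten million (latitudeE7/longitudeE7) alongside millisecond timestamps; newer exports use different field names, and the file path below is hypothetical.

import json
from datetime import datetime, timezone

def summarize_location_history(path, limit=5):
    """Print the first few stored fixes from a Takeout Location History file."""
    with open(path) as f:
        records = json.load(f).get("locations", [])
    for rec in records[:limit]:
        when = datetime.fromtimestamp(int(rec["timestampMs"]) / 1000, tz=timezone.utc)
        lat = rec["latitudeE7"] / 1e7   # coordinates stored as integers * 10^7
        lon = rec["longitudeE7"] / 1e7
        print(f"{when:%Y-%m-%d %H:%M} UTC  {lat:.5f}, {lon:.5f}")

# summarize_location_history("Location History.json")  # hypothetical export path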

The article cites a report titled “Google Timelines: Location Investigations Involving Android Devices.” Written by a law-enforcement trainer, the report is a tool for investigators. To be fair, the document does give a brief nod to privacy concerns; at the same time, it calls it “unfortunate” that Google allows users to easily delete entries in their Timelines. Reporter Jana Winter writes:

“The 15-page document includes what information its author, an expert in mobile phone investigations, found being stored in his own Timeline: historic location data — extremely specific data — dating back to 2009, the first year he owned a phone with an Android operating system. Those six years of data, he writes, show the kind of information that law enforcement investigators can now obtain from Google….

“The ability of law enforcement to obtain data stored with privacy companies is similar — whether it’s in Dropbox or iCloud. What’s different about Google Timeline, however, is that it potentially allows law enforcement to access a treasure trove of data about someone’s individual movement over the course of years.”

For its part, Google admits it will “respond to valid legal requests” but insists the bar is high; a simple subpoena, the company maintains, has never been enough. That is some comfort, I suppose.

Cynthia Murrell, December 16, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Big Data Gets Emotional

December 15, 2015

Christmas is the biggest shopping time of the year, and retailers spend months studying consumer data. They want to understand consumer buying habits, popular trends in clothing, toys, and other products, physical versus online retail, and especially what the competition will be doing sale-wise to entice customers to buy more. Smart Data Collective recently wrote about the science of shopping in “Using Big Data To Track And Measure Emotion.”

Customer experience professionals study three things related to customer spending habits: ease, effectiveness, and emotion. Emotion is the biggest player and the strongest factor in spurring customer loyalty. If data specialists could figure out a reliable way to measure emotion, shopping, and the science behind it, would change as we know them.

“While it is impossible to ask customers how do they feel at every stage of their journey, there is a largely untapped source of data that can provide a hefty chunk of that information. Every day, enterprise servers store thousands of minutes of phone calls, during which customers are voicing their opinions, wishes and complaints about the brand, product or service, and sharing their feelings in their purest form.”

The article describes some of the methods by which emotional data is gathered: phone recordings, surveys, and, above all, analysis of the vocal and speech layers of calls. Analytics platforms measure the relationships between words and phrases to understand sentiment. Emotions are rated on a five-point scale, ranging from positive to negative, to discover the patterns that trigger reactions.
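A toy sketch of the five-point scoring idea, using a tiny hand-made lexicon rather than any vendor’s actual speech analytics; the word scores and example utterances are invented for illustration.

# Hypothetical mini-lexicon: -2 (very negative) through +2 (very positive)
LEXICON = {
    "love": 2, "great": 2, "good": 1, "okay": 0,
    "slow": -1, "problem": -1, "terrible": -2, "cancel": -2,
}

def five_point_score(utterance):
    """Average the word scores, then shift onto a 1-5 scale (3 = neutral)."""
    hits = [LEXICON[w] for w in utterance.lower().split() if w in LEXICON]
    average = sum(hits) / len(hits) if hits else 0
    return round(average) + 3

print(five_point_score("I love the product but support was terrible"))  # 3 (mixed)
print(five_point_score("great service good price"))                     # 5 (positive)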

Customer experience input is a data analyst’s dream as well as a nightmare, given the volume of data constantly coming in.

Whitney Grace, December 15, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Easy as 1, 2, 3: Common Mistakes Made with Data Lakes

December 15, 2015

The article titled “Avoiding Three Common Pitfalls of Data Lakes” on DataInformed explores several pitfalls that could negate the advantages of data lakes. The article begins with the perks, such as easier data access and, of course, the cost-effectiveness of keeping data in a single hub. The first pitfall is sustainability (or the lack thereof): the article emphasizes that data lakes actually require much more planning and management than conventional databases. The second pitfall is resource allocation:

“Another common pitfall of implementing data lakes arises when organizations need data scientists, who are notoriously scarce, to generate value from these hubs. Because data lakes store data in their native format, it is common for data scientists to spend as much as 80 percent of their time on basic data preparation. Consequently, many of the enterprise’s most valued resources are dedicated to mundane, time-consuming processes that considerably lengthen time to action on potentially time-sensitive big data.”

The third pitfall is technology contradiction: trying to use traditional approaches on a data lake that holds both big and unstructured data. Be not alarmed, however; the article goes into great detail about how to avoid these issues by developing the data lake with smart data technologies such as semantic tech.

Chelsea Kerwin, December 15, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Bill Legislation Is More Complicated than Sitting on Capitol Hill

December 14, 2015

When I was in civics class back in the day and learning about how a bill became an official law in the United States, my teacher played Schoolhouse Rock’s famous “I’m Just a Bill” song.  While that annoying retro earworm still makes the education rounds, the lyrics need to be updated to record some of the new digital “paperwork” that goes into tracking a bill.  Engaging Cities focuses on legislation data in “When Lobbyists Write Legislation, This Data Mining Tool Traces The Paper Trail.”

While the process of making a bill might seem simple according to Schoolhouse Rock, it is actually complicated, and it gets even crazier as technology pushes more bills through the legislative process. In 2014, there were 70,000 state bills introduced across the country, and no one has the time to read all of them. Technology can do a much better and faster job.

“A prototype tool, presented in September at Bloomberg’s Data for Good Exchange 2015 conference, mines the Sunlight Foundation’s database of more than 500,000 bills and 200,000 resolutions for the 50 states from 2007 to 2015. It also compares them to 1,500 pieces of “model legislation” written by a few lobbying groups that made their work available, such as the conservative group ALEC (American Legislative Exchange Council) and the liberal group the State Innovation Exchange (formerly called ALICE).”
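The write-up does not spell out the prototype’s matching method, but a hedged sketch of the general technique, comparing overlapping five-word phrases between a bill and a piece of model legislation, looks something like this; the text snippets are invented for illustration.

def ngrams(text, n=5):
    """All n-word phrases in a text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(bill_text, model_text):
    """Fraction of the model legislation's five-word phrases found in the bill."""
    model_phrases = ngrams(model_text)
    if not model_phrases:
        return 0.0
    return len(model_phrases & ngrams(bill_text)) / len(model_phrases)

# Invented snippets for illustration only.
model = "no employer shall be required to provide paid sick leave to any employee"
bill = "section 2 no employer shall be required to provide paid sick leave under this act"
print(round(overlap_score(bill, model), 2))  # a high score suggests borrowed language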

A data-mining tool for government legislation would increase government transparency. The software tracks earmarks in bills to show how Congressmen are benefiting their states with these projects. It analyzed earmarks as far back as 1995 and showed that there are more of them than anyone knew. The goal of the project is to scour the data that the US government makes available and help people interpret it, while also encouraging them to be active within the laws of the land.

The article uses the metaphor “needle in a haystack” to describe all of the government data. Government transparency is good, but overloading people with information simply overwhelms them.

Whitney Grace, December 14, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Censys Search Engine Used to Blow the Lid off Security Screw-Ups at Dell, Cisco

December 14, 2015

The article on Technology Review intriguingly titled “A Search Engine for the Internet’s Dirty Secrets” discusses the search engine Censys, which targets security flaws in devices hooked up to the Internet. The project has already caused some major waves, having been used by SEC Consult to uncover lazy device encryption practices among high-profile manufacturers such as Cisco and General Electric. The article also provides this revealing anecdote about Censys being used by Duo Security to investigate Dell:

“Dell had to apologize and rush out remediation tools after Duo showed that the company was putting rogue security certificates on its computers that could be used to remotely eavesdrop on a person’s encrypted Web traffic, for example to intercept passwords. Duo used Censys to find that a Kentucky water plant’s control system was affected, and the Department of Homeland Security stepped in.”

Censys harvests its data with software called ZMap, developed by Zakir Durumeric, who also directs the open-source project at the University of Michigan. The article also goes into detail on Censys’s main rival, Shodan. The two use different software, and Shodan is a commercial search engine while Censys is free to use. Additionally, the almighty Google has thrown its weight behind Censys by providing infrastructure.
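As a toy illustration of the kind of raw material such engines index (this is neither ZMap nor Censys’s pipeline), the sketch below connects to a single host and port and records whatever banner the service announces; the host name is hypothetical, and a real Internet-wide scanner automates this across vast address ranges and then makes the responses searchable.

import socket

def grab_banner(host, port, timeout=3.0):
    """Return whatever the service announces on connect, or '' on failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(1024).decode(errors="replace").strip()
    except OSError:
        return ""

# Hypothetical host; SSH and SMTP services typically announce a version string.
print(grab_banner("scanme.example", 22) or "no banner")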

Chelsea Kerwin, December 14, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

