Lexmark Upgrades Its Enterprise Search

September 30, 2016

Enterprise search has taken a back seat to news about Google’s next endeavor and the next big thing in big data.  It may have slipped down my news feed, but it remains a major component of enterprise systems.  One could even argue that without a search function, an enterprise system is useless.

Lexmark, one of the largest suppliers of printers and business solutions in the country, understands the importance of enterprise search.  This is why it recently updated the description of Perceptive Enterprise Search in the system’s technical specifications:

Perceptive Enterprise Search is a suite of enterprise applications that offer a choice of options for high performance search and mobile information access. The technical specifications in this document are specific to Perceptive Enterprise Search version 10.6…

A required amount of memory and disk space is provided. You must meet these requirements to support your Perceptive Enterprise Search system. These requirements specifically list the needs of Perceptive Enterprise Search and do not include any amount of memory or disk space you require for the operating system, environment, or other software that runs on the same machine.

Some technical specifications also provide recommendations. While requirements define the minimum system required to run Perceptive Enterprise Search, the recommended specifications serve as suggestions to improve the performance of your system. For maximum performance, review your specific environment, network, and platform capabilities and analyze your planned business usage of the system. Your specific system may require additional resources above these recommendations.

It is pretty standard fare for technical specifications; in other words, not that interesting but necessary to make the enterprise system work correctly.

Whitney Grace, September 30, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Google and the Future of Search Engine Optimization

September 30, 2016

Regular readers know that we are not big fans of SEO (search engine optimization) or its champions, so you will understand our tentative glee at the Fox News headline, “Is Google Trying to Kill SEO?” The article centers on a Florida court case whose plaintiff, e.ventures Worldwide LLC, is accused by Google of engaging in “search-engine manipulation”. As it turns out, that term is a little murky. That did not stop Google from unilaterally de-indexing “hundreds” of e.ventures’ websites. Writer Dan Blacharski observes:

The larger question here is chilling to virtually any small business which seeks a higher ranking, since Google’s own definition of search engine manipulation is vague and unpredictable. According to a brief filed by e-ventures’ attorney Alexis Arena at Flaster Greenberg PC, ‘Under Google’s definition, any website owner that attempts to cause its website to rank higher, in any manner, could be guilty of ‘pure spam’ and blocked from Google’s search results, without explanation or redress. …

We cannot share Blacharski’s alarm at this turn of events. In our humble opinion, if websites focus on providing quality content, the rest will follow. The article goes on to examine Google’s First Amendment-based stance, and considers whether SEO is even a legitimate strategy. See the article for its take on these considerations.

Cynthia Murrell, September 30, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

EasyAsk Has a Sticky Search

September 29, 2016

When I first began reading the EasyAsk article “Search Laboratory: Rock ‘n’ Roll Lab Rats,” it offered the typical story about search difficulties and the importance of an accurate, robust search engine.   It even includes a video featuring personified search engines and the troubles a user goes through to locate a simple item, although the video refers to Google Analytics.   The article pokes fun at EasyAsk employees and describes how they developed the Search Lab, where they work on improving search functions.

One of the experiments the Search Lab worked on is “sticky search.”  What is sticky search?  Do you throw a keyword reel covered in honey into the Web pool and see what returns?  Is it like Google’s “I’m Feeling Lucky” button?  None of these are correct.  The Search Lab ran an experiment in which the last search term was preloaded into the search box when a user revisited the site.  The Search Lab tracked the results and discovered:

As you can see, the sticky search feature was used by close-to one third of the people searching from the homepage, but by a smaller proportion of people on other types of page. Again, this makes sense as you’re more likely to use the homepage as a starting point when your intention is to return to a previously viewed product.  We had helped 30% of people searching from our homepage get to where they wanted to go more quickly, but added inconvenience to the other two thirds (and 75% of searchers across the site as a whole) because to perform their searches, rather than just tapping the search box and beginning to type they now had to erase the old (sticky) search term too.

In other words, it was annoying.  The Search Lab retracted the experiment, but it was a decent effort to try something new, even if the results could have been predicted.  Keep experimenting with search options, Search Lab, but keep the search box empty.
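The mechanism described above is simple to picture in code. Here is a minimal server-side sketch of the sticky-search idea: remember each visitor’s last query and preload it into the search box on their next visit. The store and function names are illustrative inventions, not EasyAsk’s implementation.

```python
# Sticky search sketch: remember the last query per visitor session and
# prefill the search box with it on the next visit.
last_query = {}  # session id -> most recent search term


def record_search(session_id, query):
    """Save the visitor's most recent search term."""
    last_query[session_id] = query


def prefill_for(session_id):
    """Return the term to preload into the search box, or '' for a fresh box."""
    return last_query.get(session_id, "")


record_search("visitor-42", "red trainers")
print(prefill_for("visitor-42"))  # sticky term from the earlier search
print(prefill_for("visitor-99"))  # no history, so an empty box
```

The experiment’s downside is visible right in `prefill_for`: every returning searcher gets the old term whether they want it or not, and most have to delete it before typing.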

Whitney Grace, September 29, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Googley Spin-Offs Underwhelm

September 29, 2016

One might think that starting out as a derivative of one of the most successful companies in the world would be a sure path to profits. Apparently one would be wrong. The Telegraph reports, “Alphabet’s Spin-Offs are Struggling to Repeat the Google Success Story.” Readers will recall that Alphabet was created last year as the holding company for Google and its derivatives, like Calico, Google Capital, Nest, Google Ventures, Verily, and X. Writer James Titcomb explains the logic behind the move:

The theory behind Alphabet, when Page laid it out in August, made sense. Google had become more than just an internet services and advertising company, even though the main internet business still made all the money. Google had set up units such as Calico, a life sciences division trying to eradicate death; Project Loon, which is trying to beam the internet to rural Asia with gigantic space balloons; and Boston Dynamics, which is trying to build humanoid robots.

These ‘moonshots’ weren’t able to realize their potential within the confines of a company focused on selling pay-per-click internet advertising, so they were separated from it. Page and Sergey Brin, Google’s two co-founders, left the everyday running of the internet business to their trusted lieutenant, Sundar Pichai, who had been effectively doing it anyway.

Being liberated from Google, the moonshots were supposed to thrive under the Alphabet umbrella. Have they? The early signs are not good.

The article concedes that Alphabet expected to lose money on some of these derivative projects, but notes that the loss has been more than expected—to the tune of some $3.6 billion. Titcomb examines Nest, Google’s smart-thermostat initiative, as an example; its once-bright future is not looking up at the moment. Meanwhile, we’re reminded, Apple is finding much success with its services division. See the article for more details on each company.

Will Alphabet continue to use Google Search’s stellar profits to prop up its pet projects? Consider that, from the beginning, one of the company’s winning strategies has been to try anything and run with what proves successful: repeated failure as a path to success. I predict Alphabet will never relinquish its experimental streak.

Cynthia Murrell, September 29, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Recent Developments in Deep Learning Architecture from AlexNet to ResNet

September 27, 2016

The article on GitHub titled “The 9 Deep Learning Papers You Need To Know About (Understanding CNNs Part 3)” is not about the global media giant but rather about advancements in computer vision and convolutional neural networks (CNNs). The article frames its discussion around the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), what it terms the “annual Olympics of computer vision…where teams compete to see who has the best computer vision model for tasks such as classification, localization, detection and more.” The article explains that the 2012 winners and their network (AlexNet) revolutionized the field.

This was the first time a model performed so well on a historically difficult ImageNet dataset. Utilizing techniques that are still used today, such as data augmentation and dropout, this paper really illustrated the benefits of CNNs and backed them up with record breaking performance in the competition.

In 2013, CNNs flooded in, and ZF Net won with an error rate of 11.2% (down from AlexNet’s 15.4%; prior to AlexNet, the lowest error rate had been 26.2%). The article also discusses other progress in general network architecture, including VGG Net, which emphasized the depth and simplicity CNNs need for hierarchical data representation, and GoogLeNet, which tossed the deep-and-simple rule out of the window and paved the way for future creative structuring using the Inception module.
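Of the AlexNet techniques the quoted passage mentions, dropout is the easiest to illustrate. Below is a generic pure-Python sketch of inverted dropout (the variant most modern libraries apply, which rescales at training time rather than at test time as the original paper did); it is an illustration of the technique, not code from any of the papers.

```python
import random


def dropout(xs, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p during training,
    rescaling survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training:
        return list(xs)  # at inference time, use every unit as-is
    return [0.0 if random.random() < p else x / (1.0 - p) for x in xs]


random.seed(0)
activations = [1.0] * 8
print(dropout(activations, p=0.5))        # mix of 0.0 and 2.0 values
print(dropout(activations, training=False))  # unchanged at inference
```

The regularizing effect comes from forcing the network not to rely on any single unit, since any unit may be absent on a given training pass.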

Chelsea Kerwin, September 27, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/

Open Source CRM Galore for Salespeople, Manufacturers, and Even Freelancers

September 26, 2016

The article titled “Top 10 Open Source CRM” on Datamation weighs the customer relationship management (CRM) options based on individual needs in addition to features and functions. It highlights certain key benefits and points of strength such as EspoCRM’s excellent website, SugarCRM’s competitive edge over Salesforce, and the low cost of Dolibarr. The typical entry reads like this,

EPESI – The last in this list of Linux compatible CRM options is called EPESI. What makes it unique is the ability to take the mail page of the CRM and rearrange how things are laid out visually…it’s pretty nice to have when customizing ones workflow. In addition to expected CRM functionality, this tool also offers ERP options as well. With its modular design and cloud, enterprise and DIY editions, odds are there is a CRM solution available for everyone.

What strikes one the most about this list is how few familiar names appear. The list is certainly worth consulting to gain insights about the landscape, particularly since it does at least allude now and then to the specialties of several of the CRM packages. For example, Dolibarr supports freelancers, Compiere is built around the needs of warehousing and manufacturing companies, and Zurmo was designed for salespeople. It is a good time to be in the market for CRM apps.

Chelsea Kerwin, September 26, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Geoparsing Is More Magical Than We Think

September 23, 2016

The term geoparsing sounds like it has something to do with cartography, but according to Directions Magazine in the article “Geoparsing Maps The Future Of Text Documents,” it is more like an alchemical spell.  Geoparsing refers to converting text documents into a geospatial database that supports entity extraction and disambiguation (also known as geotagging).  It relies on natural language processing and is generally used to analyze text document collections.

While geoparsing might appear magical, it is actually a complex technological process that relies on data to put information into context.  Places often share the same name, so disambiguation can struggle to assign the correct tags.  Geoparsing has important applications, such as:

Military users will not only want to exploit automatically geoparsed documents, they will require a capability to efficiently edit the results to certify that the place names in the document are all geotagged, and geotagged correctly. Just as cartographers review and validate map content prior to publication, geospatial analysts will review and validate geotagged text documents. Place checking, like spell checking, allows users to quickly and easily edit the content of their documents.

The article acts as a promo piece for the GeoDoc application; however, it does delve into the details of how geoparsing works and its benefits.
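The core steps the article describes, spotting place names in text and disambiguating duplicates, can be sketched at toy scale. The gazetteer and the single context rule below are illustrative inventions for this post, not how GeoDoc actually works.

```python
# Toy geoparser: look up place names in a tiny gazetteer and disambiguate
# duplicates ("Paris, France" vs. "Paris, Texas") using nearby context.
GAZETTEER = {
    "paris": [("Paris", "France", 48.857, 2.352),
              ("Paris", "Texas, USA", 33.661, -95.556)],
    "london": [("London", "UK", 51.507, -0.128)],
}


def geoparse(text):
    """Return (place, candidate) pairs for every gazetteer name in the text."""
    lowered = text.lower()
    words = lowered.replace(",", " ").split()
    tags = []
    for name, candidates in GAZETTEER.items():
        if name in words:
            # Naive disambiguation: prefer a candidate whose region is also
            # mentioned in the text; otherwise fall back to the first entry.
            best = next(
                (c for c in candidates
                 if any(tok in lowered for tok in c[1].lower().split(", "))),
                candidates[0],
            )
            tags.append((name, best))
    return tags


hits = geoparse("The team flew from London to Paris, Texas last week.")
for place, (city, region, lat, lon) in hits:
    print(f"{city} -> {region} ({lat}, {lon})")
```

Even this toy shows why the article stresses human review: the fallback rule silently picks a default whenever the text offers no disambiguating context.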

Whitney Grace, September 23, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Watch out for Falling Burritos

September 22, 2016

Amazon and Wal-Mart are already trying to deliver packages by drone, but now a Mexican restaurant wants in on the automated delivery game.  Bloomberg Technology tells the story in “Alphabet And Chipotle Are Bringing Burrito Delivery Drones To Campus.”  If you think you can now order a burrito and have it delivered to you via drone, sorry to tell you that the service is only available on the Virginia Tech campus.  Alphabet Inc. unit Project Wing has teamed up with Chipotle Mexican Grill for the food delivery service.

Self-guided hybrid drones will deliver the burritos.  The burritos will come from a nearby food truck, so the navigation will be accurate and also so the food will be fresh.  The best part is that when the drones are making the delivery, they will hover and lower the burritos with a winch.

While the drones will be automated, human pilots will be nearby to protect people on campus from falling burritos and to intervene if the drones veer from their flight pattern.  The FAA approved the burrito-delivery drone test, but the agency is hesitant to clear unmanned drones for bigger delivery routes.

…the experiment will not assess one of the major technology hurdles facing drone deliveries: creation of a low-level air-traffic system that can maintain order as the skies become more crowded with unmanned vehicles. NASA is working with Project Wing and other companies to develop the framework for such a system. Data from the tests will be provided to the FAA to help the agency develop new rules allowing deliveries…

The drone burrito delivery at Virginia Tech is believed to be the most complex delivery flight operation yet in the US.  It is a test for a not-too-distant future in which unmanned drones deliver packages and food.  That future will put more vehicles in the sky, but it will also put the delivery business in jeopardy.  Once more, things change and more jobs become obsolete.

Whitney Grace, September 22, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Open Source Log File Viewer Glogg

September 21, 2016

Here is an open source solution for those looking to dig up information within large and complex log files; BetaNews shares, “View and Search Huge Log Files with Glogg.”  The software reads directly from your drive, saving time and keeping memory free (or at least as free as it was before). Reviewer Mike Williams tells us:

Glogg’s interface is simple and uncluttered, allowing anyone to use it as a plain text viewer. Open a log, browse the file, and the program grabs and displays new log lines as they’re added. There’s also a search box. Enter a plain text keyword, a regular or extended regular expression and any matches are highlighted in the main window and displayed in a separate pane. Enable ‘auto-refresh’ and glogg reruns searches as lines are added, ensuring the matches are always up-to-date. Glogg also supports ‘filters’, essentially canned searches which change text color in the document window. You could have lines containing ‘error’ displayed as black on red, lines containing ‘success’ shown black on green, and as many others as you need.

Williams spotted some more noteworthy features, like a quick-text search, highlighted matches, and helpful Next and Previous buttons. He notes the program is not exactly chock-full of fancy features, but suggests that is probably just as well for this particular task. Glogg runs on 64-bit Windows 7 and later, and on Linux.
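The filter feature Williams describes boils down to regex rules mapped to display styles. A minimal sketch of that idea follows; the rule set and style names are invented examples for illustration, not glogg’s actual configuration format.

```python
import re

# glogg-style filters: regex rules that assign a display style to log lines.
FILTERS = [
    (re.compile(r"error", re.IGNORECASE), "black-on-red"),
    (re.compile(r"success", re.IGNORECASE), "black-on-green"),
]


def classify(line):
    """Return the style of the first matching filter, or None for plain text."""
    for pattern, style in FILTERS:
        if pattern.search(line):
            return style
    return None


log = [
    "2016-09-21 10:01 INFO  service started",
    "2016-09-21 10:02 ERROR disk quota exceeded",
    "2016-09-21 10:03 INFO  retry success",
]
for line in log:
    print(classify(line), line)
```

Rerunning the classification as new lines arrive is essentially what the ‘auto-refresh’ option does for searches.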

Cynthia Murrell, September 21, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Featurespace Raises Capital for Bank Fraud Monitoring Technology

September 21, 2016

Monitoring online fraud has become an increasingly popular application for machine learning and search technology. The Telegraph reported “Cambridge AI fraud detection group raises £6.2m.” The company, Featurespace, grew out of Cambridge University, and its ARIC technology goes beyond rule-based fraud detection. It scans all activity on a network and thus learns what registers as fraudulent or suspicious. The write-up tells us,

The company has now raised $9m (£6.2m), which it will use to open a US office after signing two big stateside deals. The funding is led by US fintech investor TTV Capital – the first time it has backed a UK company – and early stage investors Imperial Innovations and Nesta.

Mike Lynch, the renowned technology investor who founded software group Autonomy before its $11.7bn sale to Hewlett Packard, has previously invested in the company and sits on its board. Ms King said Featurespace had won a contract with a major US bank, as well as payments company TSYS, which processes MasterCard and Visa transactions.

Overall, the company aims to protect consumers from credit and debit card fraud. The article reminds us that millions of consumers have been affected by stolen credit and debit card information. Betfair, William Hill and VocaLink are current customers of Featurespace and several banks are using its technology too. Will this become a big ticket application for these machine learning technologies?
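The contrast between fixed rules and learned behavior can be illustrated with a deliberately simple example: score a transaction against a customer’s own spending history rather than against a hard-coded threshold. This z-score sketch is a generic illustration of behavior-based scoring, not ARIC’s actual method.

```python
import statistics


def is_suspicious(history, amount, z_threshold=3.0):
    """Flag a transaction that deviates sharply from this customer's history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
    return abs(amount - mean) / stdev > z_threshold


spending = [12.50, 8.00, 15.25, 9.99, 11.00, 14.75]
print(is_suspicious(spending, 13.00))   # in line with history -> False
print(is_suspicious(spending, 950.00))  # wildly out of pattern -> True
```

The appeal of learning per-customer baselines is visible even here: a £950 charge is routine for one account and a red flag for another, which no single global rule can capture.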

Megan Feil, September 21, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph