Latent Semantic Technology Tops Business Strategy List

July 26, 2012

2012 is going by quickly, but there is still time to implement business strategies that could gain your company a bigger presence on the Internet. VentureBeat reported on the “Top 10 Most Important SEO and Social Marketing Tactics of 2012.” Generally these top-ten lists yield information we already know: distribute content via social channels, display your social media buttons prominently on the page, enable content sharing, join Pinterest, and so on. Some of the ideas are newer: write guest blog posts, keep your own blog content fresh and interesting. But the number one suggestion is the one that caught our attention:

“Get an onsite SEO audit: an onsite SEO audit is the foundation of your SEO campaign. Getting one will help you answer questions like: Are your title and meta tags optimized? How’s your keyword density? Have you correlated certain pages with certain keywords? Is that evident in the copy? Have you done your LSI (latent semantic indexing) research and incorporated it into the copy? An onsite SEO audit is relatively cheap, and it’s a one-time payment that you shouldn’t need to address more than once a year.”

An SEO audit done by a professional company will work wonders; heck, if you do your research, you can provide the service yourself. One important aspect of the audit is latent semantic indexing, a powerful component of text and document analysis.
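To make the LSI piece concrete, here is a minimal, self-contained sketch of the idea (our own toy example, not part of the article): truncated SVD compresses a term-document matrix into a low-rank “concept” space, where documents about the same topic land close together even when their exact words differ.

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents).
# Docs 0 and 1 are about SEO; doc 2 is about image classification.
A = np.array([
    [1, 1, 0],   # "meta"
    [1, 1, 0],   # "tags"
    [1, 1, 0],   # "search"
    [1, 0, 0],   # "title"
    [0, 0, 1],   # "neural"
    [0, 0, 1],   # "images"
], dtype=float)

# LSI: truncated SVD keeps only the k strongest latent "concepts".
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in concept space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two SEO documents sit close together in the latent space;
# the unrelated document does not.
print(cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2]))
```

In a real audit the matrix would be a TF-IDF weighting of the site’s pages, but the mechanics are the same.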

Whitney Grace, July 26, 2012

Sponsored by PolySpot

 

SEO World Reacts to Google Knowledge Graph

July 23, 2012

Yikes, it’s a semantic invasion! Search Engine Watch declares, “Semantic Search: The Eagle Has Landed.” This article takes a look at Google’s newest Web search incarnation, the Knowledge Graph; the shift to the new system is already in progress.

Writer Jiyan Wei’s intended audience is made up of SEO pros, so most of the article focuses on what the development means for those who game results page rankings for a living. He does, however, give a good description of the service, using a search for “The Dark Knight Rises” as an example. He writes:

“Google’s goal is to infer that ‘The Dark Knight Rises’ is a specific entity type (a movie). Once this inference is made, they are then able to relate the entity with a set of associated entities (directors, actors, theatres, etc.). This relational understanding lays the foundation for a search experience that is far more consumer friendly, far more like ‘how humans understand the world.’. . .

“The right-column is almost entirely composed of content derived from semantic inference: it displays a list of people who have contributed to the movie as well as information about the movie pulled from Wikipedia.

“Semantic search is also currently influencing the organic search results by displaying people related to the movie, dates associated with the movie, and once the movie has been released and reviewed, ratings associated with the movie.”

Wei notes that the schema Google is using is publicly available at Schema.org, a collaborative project shepherded by Google, Microsoft, and Yahoo. If you’re interested in keeping up with the changing rules behind the search engine optimization game, see the second half of the article.
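As an illustration of what that publicly available schema looks like in practice, here is a hypothetical Schema.org “Movie” description expressed as JSON-LD and printed from Python. The property names (@type, name, director, actor, datePublished) come from the public Schema.org vocabulary; the example itself is ours, not from the article.

```python
import json

# Hypothetical Schema.org markup of the kind the Knowledge Graph can
# consume, for the article's own example search.
movie = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "The Dark Knight Rises",
    "director": {"@type": "Person", "name": "Christopher Nolan"},
    "actor": [{"@type": "Person", "name": "Christian Bale"}],
    "datePublished": "2012-07-20",
}

# Embedded in a page inside a <script type="application/ld+json"> tag,
# this tells a crawler that the page describes one specific movie entity.
print(json.dumps(movie, indent=2))
```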

Cynthia Murrell, July 23, 2012

Sponsored by PolySpot

Obedience School for Cross Domain Semantics

July 2, 2012

It is possible to teach an old dog new tricks, according to Semanticweb.com’s article, ‘FirstRain Spotlights Semantics Across Domains’. Semantic approaches work well within a targeted domain because one can train the NLP engine to recognize the key words that apply there. The downside is that today’s business world is vast, and training limited to specific domains cannot always scale.
FirstRain has opened a unique sort of semantic obedience school. As the article explains:

“Affinity scoring must be a breakthrough for classes of information where there is a lot of ambiguity, and the cool thing about it is that you can actually apply it in a way to create a virtuous self-improving spiral that works across massively different information domains. When you set up the correct feedback loop of affinity scoring and don’t encode to different domains, but let it swing across those you are trying to match things to, you can create a self-learning system.”

The new system FirstRain has devised is capable of retraining the most stubborn of semantics and inspiring new functionality. By making its semantics adaptable, the company has taught an already workable system to handle a wider variety of information even more efficiently. This semantic obedience school could very well be the next big thing in the business world if all goes as planned. The new routine seems feasible, so has FirstRain cracked the tough training nut of cross-domain semantics?

Jennifer Shockley, July 2, 2012

Sponsored by IKANOW

Semantic Technology with a Reverse Twist

June 29, 2012

Semantic technology just got a little twisted, but not in a bad way. Rolf Rolles explains multiple ways to look at semantics in his blog post, ‘RECON 2012 Keynote: The Case for Semantics-Based Methods in Reverse Engineering,’ on OpenRCE.org.

A bit of insight into Rolf’s view:

“The goal of my RECON 2012 keynote speech was to introduce methods in academic program analysis and demonstrate — intuitively, without drawing too much on formalism — how they can be used to solve practical problems that are interesting to industrial researchers in the real world. Given that it was the keynote speech, and my goal of making the material as accessible as possible, I attempted to make my points with pictures instead of dense technical explanations.”

The method behind the madness can be found in Rolf’s PDF of the slides, where he fine-tunes his argument with graphs and examples as visual aids. The presentation introduces binary program analysis as a practical discipline rather than a mathematical monograph, and clarifies the difference between semantics and syntax.

Setting the mathematical equations aside and putting the results in layman’s terms: semantic methods are slower, but far more precise, than syntactic methods. The referenced work uses the completeness of phase semantics in striving to prove the correctness of trace semantics. Reflecting on the concept overall, it indeed gives semantic technology an interesting new twist.
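The semantics-versus-syntax distinction can be sketched in miniature (a toy analogy of ours, not an example from the keynote): two routines that read differently but behave identically, compared first by their compiled text and then by their behavior.

```python
# Two semantically equivalent ways to produce zero, loosely analogous
# to the assembly idioms "mov eax, 0" and "xor eax, eax".
def zero_via_mov(x):
    return 0

def zero_via_xor(x):
    return x ^ x

# Syntactic comparison: the compiled instruction sequences differ.
syntactic = zero_via_mov.__code__.co_code == zero_via_xor.__code__.co_code

# Semantic comparison (approximate): behavior agrees on sampled inputs.
# Real semantic methods prove this for all inputs, which is why they
# are slower but more precise than pattern matching on syntax.
semantic = all(zero_via_mov(x) == zero_via_xor(x) for x in range(-100, 100))

print(syntactic, semantic)   # syntactically different, semantically equal
```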

Jennifer Shockley, June 29, 2012

Sponsored by IKANOW

Google and Latent Semantic Indexing: The Knowledge Graph Play

June 26, 2012

One thing that is always constant is Google changing itself. Not too long ago, Google introduced yet another new tool: the Knowledge Graph. Business2Community spoke highly about how this new application proves the concept of latent semantic indexing in “Keyword Density is Dead…Enter ‘Thing Density.’” Google’s claim to fame is providing the most relevant search results based on a user’s keywords, and every algorithm update is meant to keep relevancy up. The new Knowledge Graph breaks a search down by clustering related Web sites and finding the latent semantic connections among the results; from there it conducts a secondary search, and so on. Google does this to reflect the natural use of human language, i.e. to make its products user friendly.

But this change raises an important question:

“What does it mean for me!? Well first and foremost keyword density is dead, I like to consider the new term to be “Concept Density” or to coin Google’s title to this new development “Thing Density.” Which thankfully my High School English teachers would be happy about. They always told us to not use the same term over and over again but to switch it up throughout our papers. Which is a natural and proper style of writing, and we now know this is how Google is approaching it as well.”

The change means good content and good SEO will be rewarded. This does not change the fact, of course, that Google will probably change its algorithm again in a couple of months, but at least it now recognizes that LSI has value. Most vendors of latent semantic indexing, content, and text analytics, such as Content Analyst, have gone well beyond Google’s offering, using the latest LSI techniques to make data more findable and to discover new correlations.
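To see the difference between the old metric and the new one, here is a toy sketch of ours (the synonym set is hand-made for illustration): classic keyword density counts one exact term, while “concept density” or “thing density” counts any term that names the same thing.

```python
import re
from collections import Counter

text = ("Our film review covers the movie in depth. The picture opened "
        "Friday, and the motion picture drew large crowds.")

words = re.findall(r"[a-z]+", text.lower())
counts = Counter(words)

# Old metric: how often does the one target keyword appear?
keyword_density = counts["movie"] / len(words)

# "Thing density": how often does *any* term for the concept appear?
concept_terms = {"movie", "film", "picture"}   # assumed synonym set
concept_density = sum(counts[t] for t in concept_terms) / len(words)

# Varied wording scores well on the concept even when the exact
# keyword is rare, which is the article's point.
print(keyword_density < concept_density)
```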

Whitney Grace, June 26, 2012

Sponsored by Content Analyst

EasyAsk Product Announced As a Cool Vendor

June 11, 2012

A new “cool” vendor has been announced in the Cool Vendors in Analytics and Business Intelligence, 2012 report from Gartner, Inc.

According to the article, “EasyAsk Named ‘Cool Vendor’ by Leading Analyst Firm,” EasyAsk’s Siri-like mobile app for corporate data is one to note. The app, named Quiri, combines voice recognition and NLP to provide a usable, and apparently “cool,” user experience. A video demonstration of the product is available here. The article states:

“Quiri offers users Siri-like built-in speech recognition and natural language processing, allowing users to conveniently speak their business questions and get immediate answers to business questions. Users tap a microphone button, speak a request and Quiri retrieves the answer from existing corporate data.

EasyAsk eCommerce search and merchandising software – available on-premise or as a service (SaaS) – leads the industry in customer conversion by providing the right products on the first page, every time.”

We find this to be an interesting angle for a product spotlight. We aren’t sure if this is a pay-to-play write-up or an objective analysis. We also aren’t sure what “cool” means when referring to a product’s usability, but look forward to seeing more from EasyAsk.

Andrea Hayden, June 11, 2012

Sponsored by PolySpot

Semantic Duet Seems to Harmonize. Will 1+1=3?

June 10, 2012

The enterprise data crowd is being entertained by a new duet, according to “fluid Operations and Ontotext Team Up to Usher in the Next Generation of Enterprise Data Management.” The article states:

“Ontotext and fluid Operations (fluidOps) have teamed up to offer clients practical enterprise solutions for RDF data mining, access, publishing and search.”

Dr. Andreas Eberhart of fluid Operations predicts an evolution in data management, stating:

“This is really a pairing of best-in-class tools that we feel will usher in the next generation of enterprise data management. There are many players in the semantics space providing bits and pieces of a solution, but Ontotext and fluidOps have proven to deliver turnkey products that deliver a complete solution and have solved real world customer demands.”

Ontotext develops core semantic technology and text- and Web-mining solutions. The company specializes in semantics-based tools that optimize performance in data integration, analysis, evaluation, management, and publishing.

fluid Operations, headquartered in Walldorf, Germany, designs open-platform software. Its specialty is semantic integration of both structured and unstructured data entwined with business and IT stacks. It also provides infrastructure- and cloud-monitoring, management, and orchestration solutions.
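For readers unfamiliar with the RDF data the partners traffic in, here is a toy illustration of ours (real deployments use RDF stores and SPARQL, not a Python list): facts expressed as subject-predicate-object triples, plus a tiny wildcard pattern query. All the names below are made up.

```python
# RDF-style data: every fact is a (subject, predicate, object) triple.
triples = [
    ("acme:Order42", "rdf:type", "acme:Order"),
    ("acme:Order42", "acme:placedBy", "acme:CustomerA"),
    ("acme:CustomerA", "acme:locatedIn", "acme:Germany"),
]

def match(pattern, store):
    """Return the triples matching a pattern; None acts as a wildcard."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Which facts have acme:Order42 as their subject?"
for t in match(("acme:Order42", None, None), triples):
    print(t)
```

Because structured and unstructured sources alike can be mapped onto this one uniform shape, triple stores are a natural backbone for the kind of data integration the two companies describe.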

These two software designers harmonize well. The stage is set and this duet may very well write the next data management symphony. Ontotext and fluid are an interesting tie up. It makes us wonder, will 1+1=3?

Jennifer Shockley, June 10, 2012

The Semantic Sector. More Crowded Than a NY Subway

May 30, 2012

Yet another provider has moved into the semantic search house, according to “Ontology launches OSS/BSS intelligent semantic search app suite.” This field is becoming more crowded than a frat house during a party where everyone invites a friend. With multiple companies fielding their own search offerings, the market’s saturation is going to reach an all-time high.

Each company to join the party sees its product as more innovative than the last, and now:

“Ontology Systems, the semantic-search provider for enterprise data alignment, announced Ontology Intelligent 360, a suite of apps that deliver enterprise views, dashboards and fully featured operational solutions using Ontology 3 semantic search technology. The applications include real-time, revenue-prioritized service impact notification, customer care, end-to-end multi-vendor/technology topology, and margin management and benefit users in network operations, customer care, finance, sales and marketing.”

It’s the same old song and dance we see from each new company offering search services. The new Intelligent 360 has joined the line dance by utilizing state-of-the-art semantic search technology. Just like everyone else, the company plans to provide a single, accurate, enterprise-wide view of customers, services, and network assets.

Even a frat house has a maximum capacity limit and parties sometimes get busted for exceeding limitations. We have to wonder when this rapidly growing industry is going to reach full capacity. For now, it seems the semantic search sector is getting more crowded than the frat house party of the year.

Jennifer Shockley, May 30, 2012

Sponsored by PolySpot

Kyield Aims to Pilot a Big Data BI Revolution

May 29, 2012

Kyield has introduced a newly patented semantic enterprise platform. According to the article, “Kyield Announces Pilot Program for Advanced Analytics and Big Data with New Revolutionary BI Platform,” they’re inviting others to hop into the co-pilot’s seat and collaborate on the take-off.

Mark Montgomery, Founder and CEO of Kyield stated:

“We are inviting well-matched organizations to collaborate with us in piloting our breakthrough system to bring a higher level of performance to the information workplace. In addition to the significant competitive advantage exclusive to our pilot program, we are offering attractive long-term incentives free from lock-in, maintenance fees, and high service costs traditionally associated with the enterprise software industry.”

Kyield began as a consulting firm but evolved into a small private lab, founded by Mark Montgomery in 1995. Initially the company dealt with e-commerce, but it shifted its focus to advanced technology, with specific emphasis on the knowledge systems it is known for today.

The company conducted a CTO search, with strong response, in 2011, following up with a call for collaborative customers for its enterprise prototype technology. In early 2012, Forrester’s ‘The Future Of BI’ report, with its top 10 business intelligence predictions for 2012, recognized Kyield.

Their groundbreaking artificial intelligence system promises an almost holistic architecture. The plan is to extend advanced business intelligence and predictive analytics to all aspects of an organization using an adaptive approach to data optimization. Kyield is in the driver’s seat for the big data BI revolution, and they’re graciously accepting co-pilots.

Jennifer Shockley, May 29, 2012

Sponsored by PolySpot

Semantic Keyword Research

May 29, 2012

Keyword research is the time-tested, reliable way to locate information on the Internet and in databases. There have been many changes to the way people approach keyword research; some have stuck around, and others have disappeared into the invisible Web faster than a spambot hits a Web site. The Search Engine Journal has come up with “5 Tips for Conducting Semantic Keyword Research,” which argues that users “must recognize the semantic nature of the search engines’ indexing behaviors.”

For those without a dictionary handy, semantics refers to the meaning or interpretation of a word or phrase. When a user types a phrase into a search engine, it uses indexing (akin to browsing through a list of synonyms) to find other pertinent results.


A happy quack to http://languagelog.ldc.upenn.edu

So how do the tips measure up? Tip #1 has users create a list of “level 1” core keywords; that is, write a list of subject keywords. This is the first step in any research project, and most people will be familiar with it if they have completed elementary school. Pretty basic, but it builds the foundation for an entire project. Tip #2 delves further by having users expand the first list with supporting keywords that are not necessarily tied to the main keyword but are connected to others on the list. Again, an elementary research tip: reach out and expand.

Tip #3 moves us away from the keyword lists and tells users to peruse their results and see what questions they can answer. After users find what can be answered, they make another list detailing their findings (so we didn’t step that far away from lists).

Tip #4 explains how to combine tips #1–3, which allows users to outline their research and then write an article on the topic. Lastly, Tip #5 is a fare-thee-well, good luck, and write interesting content:

“One final tip for incorporating semantically-related keywords into your website’s content…  Building these varied phrases into your web articles should help eliminate the stilted, unpleasant content that results from trying to stuff a single target keyword into your text a certain number of times.

However, it’s still important to focus on using your new keyword lists to write content that’s as appealing to your readers as it is to the search engines.  If Google’s recent crackdowns on Web spam are any indication of its future intentions, it’s safe to say that the best long-term strategy is to use semantic keywords to enhance the value of your copy – without letting its optimization eclipse the quality of the information you deliver to your website visitors.”

What have we got here? Are the tips useful? Yes, they are, but they do not offer new material about keyword searching. As mentioned earlier, these steps are taught as the very basics of elementary research: make a keyword list about your topic, find associated terms, read what you got, then write the report. It is true that many schools and higher-education institutions do not teach the basics, so would-be researchers lack these fundamental skills. Also, people tend to forget the beginner’s steps. Those two common mishaps make articles like this necessary, but the more seasoned researcher will simply intone, “Duh!”
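For the record, the first two tips reduce to something like this in miniature (our sketch; the related-terms map is hand-made, where real research would draw on keyword tooling or search suggestions):

```python
core_keywords = ["semantic search", "keyword research"]          # tip #1

# Tip #2: expand each core keyword with semantically related terms.
related = {
    "semantic search": ["latent semantic indexing", "entity search"],
    "keyword research": ["search volume", "long-tail keywords"],
}

expanded = list(core_keywords)
for kw in core_keywords:
    expanded.extend(related.get(kw, []))

print(expanded)   # 2 core keywords plus 4 related terms
```

Tips #3 through #5 then operate on this expanded list: see which questions it can answer, outline, and write.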

Whitney Grace, May 29, 2012

Sponsored by PolySpot
