Sci Tech Content as Marketing Collateral
August 25, 2009
The notion of running a query across a collection of documents is part of research. Most users assume that the information indexed is going to be like mixed nuts. In my experience, a more general query against an index of Web logs is likely to return more of the lower grade nuts. A query passed against a corpus of electrical engineering or medical research reports will return a hit list with higher quality morsels. Those who teach information science often remind students to understand the source, the bias of the author, and the method of the indexing system. Many people perceive online information as more accurate than other types of research material. Go figure.
When I read “McGill Prof Caught in Ghostwriting Scandal”, I thought about a rather heated exchange at lunch on Friday, August 21. The topic was the perceived accuracy of online information. With some of the new Twitter tools, it is possible for a person to create a topical thread, invite comments, and create a mini conversation on a subject. These conversations can be directed. The person starting the thread defines the terms and the subject. Those adding comments follow the thread. The originator of the thread can add comments of his or her own, steering the presentation of information, suggesting links, and managing the information. Powerful stuff. Threads are becoming a big deal, and if you are not familiar with them, you may want to poke around to locate a thread service.
The McGill professor’s story triggered several ideas which may have some interesting implications for marketing and research. For example:
A scholarly paper may look more objective than a comment in a Web log. The Montreal Gazette reported:
Barbara Sherwin – a psychology professor whose expertise in researching how hormones influence memory and mood in humans – was listed as the sole author of an April 2000 article in the Journal of the American Geriatrics Society arguing that estrogen could help treat memory loss in older patients. In fact, the article was written by a freelance author hired by DesignWrite, a ghostwriting firm based in New Jersey. The company was paid by Wyeth to produce ghostwritten articles, which were then submitted to reputable scholars.
I would not have known that this ghostwritten article was a marketing piece. In fact, I don’t think I would have been able to figure it out by myself. That’s important. If I were a student or a researcher, I would see the marketing collateral as objective research. A search system would index the marketing document and possibly the Tweets about the document. Using Twitter hashtags, a concept space can be crafted. Run a query for McGill on Collecta, and you can see how the real time content picked up this ghostwriting story. How many hot topics are marketing plays? My hunch is that there will be more of this content shaping, not less, in the months ahead. Controlling the information flow is getting easier, not harder. More important, the method is low cost. When undiscovered, the use of disinformation may have more impact than other types of advertising.
What happens if a marketer combines a sci tech marketing piece like the Sherwin write up with a conversation directing Twitter tool? My initial thought is that a marketer can control and shape how information is positioned. With a little tinkering, the information in the marketing piece can be disseminated widely, used to start a conversation, and, with some nudges, steered through a Twitter thread.
I am going to do some more thinking about the manipulation of perception possible with marketing materials and Twitter threads. What can an information consumer do to identify these types of disinformation tactics? I don’t have an answer.
Stephen Arnold, August 25, 2009
SharePoint Video Explains SharePoint
August 24, 2009
I think I know what SharePoint is—a very complex collection of disparate services, functions, and subsystems. I know that SharePoint can be one or more of these systems: search, content management, collaboration, enterprise publishing, business intelligence, or knowledge management, among others. For those who don’t know what SharePoint is, a video is available to explain SharePoint. The article “New SharePoint 2007 Video” can be located on the Microsoft SharePoint Team Blog. What I found interesting was that SharePoint 2007 is getting a new video in 2009. Here’s a snippet from the write up:
This video takes a friendly approach to show how SharePoint can help you work together with others in more efficient ways. For example, SharePoint can help you plan and work on a project online instead of sending files back and forth in e-mail.
When someone tells me SharePoint is not complicated, I chuckle. Enjoy the video. Does the video touch upon search? I do not know. I fell asleep.
Stephen Arnold, August 24, 2009
When SharePoint Search Breaks
August 24, 2009
You can get some tips to revivify a dead SharePoint search system by reading “Troubleshooting SPSearch and Good Practices for Moving Large Files”. I worked through the write up and the code samples. The problem was resolved when the author realized that SharePoint pointed to the wrong place. Here is the key passage:
It turned out that during our disk reconfiguration, the path of F:\DATA\INDEX no longer existed. So I recreated the path specified in the registry (F:\DATA\INDEX) and copied the contents of the CONFIG folder from my fresh VM install. I then started the search service from Central Administration and… bingo! Search finally started successfully…Wohoo!
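The fix in the passage boils down to “make sure the directory the registry points at actually exists.” A minimal sketch of that sanity check, assuming the index location has already been read out of the registry (this is generic Python, not a SharePoint API; the path and function name are illustrative):

```python
import os

def ensure_index_path(index_path):
    """Return True if the configured index directory already exists;
    otherwise recreate it, mirroring the manual fix in the quoted passage.

    After a disk reconfiguration the registry may still point at a path
    (e.g. F:\\DATA\\INDEX) that is gone, and SPSearch will fail to start.
    """
    if os.path.isdir(index_path):
        return True
    # Recreate the directory tree the registry expects; the admin then
    # restores the CONFIG folder contents and restarts the search service.
    os.makedirs(index_path)
    return False
```

In the author’s case the check would have returned False: the path had vanished, and recreating it (plus copying back the CONFIG folder) brought search back to life.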
If you are experiencing missing data, check out this tip. Life would be easier if there were administrative tools for moving files that implement “best” practices. Obviously “good” practices are not enough.
Stephen Arnold, August 24, 2009
XML and Big Data
August 24, 2009
If you are struggling with big data, you may want to spend a few minutes reading “How XML Threatens Big Data”. Here in the wilds of Kentucky, we are awash in XML. Years ago we learned that XML is not the answer to some data problems. Others, however, have embraced XML only to discover that there are some gotchas in the reality of ASCII with lots of tags and the cute stuff programmers can do with XML. Michael Driscoll recounts some of his XML adventures. A few of these are quite interesting. One passage we enjoyed was:
XML’s complexity inflicts misery on both sides of the data divide: on the publishing side, developers struggle to comply with the latest edicts of a fussy standards group. While data suitors labor to quickly unravel that XML format into something they can use.
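A tiny illustration of the “unravelling” chore the quote describes: flattening a tagged record into something a data consumer can actually analyze. The sample document and field names below are assumptions made for the sketch, not anything from Mr. Driscoll’s piece:

```python
import xml.etree.ElementTree as ET

# Assumed sample: a small, deeply tagged record of the kind publishers emit.
doc = """
<orders>
  <order id="1">
    <customer><name>Ada</name></customer>
    <total currency="USD">19.99</total>
  </order>
</orders>
"""

def flatten(xml_text):
    """Unravel nested XML into flat dicts, the labor the quoted passage laments."""
    rows = []
    for order in ET.fromstring(xml_text).findall("order"):
        rows.append({
            "id": order.get("id"),
            "customer": order.findtext("customer/name"),
            "total": float(order.findtext("total")),
        })
    return rows
```

Every consumer of the feed ends up writing some version of this shim; that duplicated effort on the consuming side is the complexity tax the article describes.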
The author offers three recommendations. These are worth noting. I won’t spoil your fun by summarizing Mr. Driscoll’s observations. Enjoy those angle brackets!
Stephen Arnold, August 24, 2009
Grokker Status
August 24, 2009
This news stuff ruffles the addled goose’s feathers. My observation that Grokker was not answering its telephone brought a response to the Beyond Search Web log. The author is Randy Marcinko. Here is his response in full:
Let me clarify the purported mystery…. As many of you know, Groxis had gone through tumultuous times following the dot.com days. Having survived the dot.com generation as a company able to create glowing expenses, it needed to learn how to come to terms with revenue generation. My predecessor (Brian Chadbourne) and I attempted to right the ship and seek out the best path forward.
I took over as Groxis’ CEO in September of 2007 and it became almost immediately apparent that Groxis’ sweet spot was and is with content creators and aggregators–publishers large and small, traditional aggregators, syndicators and others of the content world. This is a group of clients who have a need, for whom Groxis is compelling and a “need-to-have,” not “nice-to-have” purchase. They are also a group of prospects with sales cycles that are manageable for a small company. So we moved down that path. With a great team we were able to make quick changes to the product, making it more vital and current. We jettisoned many old product lines in favor a short list to whom we had the resources to sell. The results were great and we were on track to a cash flow positive Q4 of 2009.
Unfortunately, in Q2 of 2008, we were also on track to close a Series D round of funding, necessary to allow Groxis to move quickly enough to succeed. The round was all but completed in Q3 along with the onset of the economic downturn. With the change in the economy our Series D investors decided that it was not feasible to continue with that financial plan. This was a reality, despite a rich pipeline and refurbished products.
Thanks to a diligent and hardworking team at Groxis, we did our best through 2008, but by the end of Q1 of 2009 the only feasible next step was to close down the current operation. We closed down the day-to-day operation in March 2009. Since that time I have been negotiating with possible acquirers and investors. We have had a great response; only time will tell whether a long term solution will emerge.
As information becomes available, I will post it.
Stephen Arnold, August 24, 2009
Somat Engineering: Delivering on the Obama Technology Vision
August 24, 2009
I fielded an email from an engaging engineer named Mark Crawford. Most of those who contact me get a shrill honk that means, “The addled goose does not want to talk with you.” Mr. Crawford, an expert in the technology required to make rockets reach orbit and vehicles avoid collisions, said, “One of the top demo companies in San Francisco listened.” I asked, “Was it TechCrunch?” Mr. Crawford said, “I cannot comment on that.” So with a SF demo showcase interested, I figured, “Why not get a WebEx myself?”
Quite a surprise. I wrote a dataspace report for IDC in September 2008. No one really cared. I then included dataspace material in my new Google monograph, Google: The Digital Gutenberg. No one cared. I was getting a bit gun shy about this dataspace technology. You can get a reasonable sense of the thinking behind dataspace technology by reading the chapter in Digital Gutenberg which is available without charge from my publisher in the UK. Click here to access this discussion of the concept of dataspaces.
Mr. Crawford’s briefing began, “We looked at how we could create a dataspace that brings together information for a government agency, an organization, or a small group of entrepreneurs. We took a clean sheet of paper and built a system that bridges the gap between what people want to do and the various islands of technology most enterprises use to get their knowledge sharing act together.”
Mr. Crawford, along with his partner Arpan Patel, and some of Somat Engineering’s information technology team, built a system that President Obama himself would love. Said Mr. Crawford, “We continue to talk with our government partners and people like the demo showcase. There seems to be quite a bit of excitement about our dataspace technology.”
I wanted to put screenshots and links in this write up, but when I followed up with Somat Engineering, a 60 person multi-office professional services firm headquartered in Detroit, Michigan, I was told, “Sit tight. You are one of the first to get a glimpse at our dataspace system.”
I challenged Mr. Crawford because Somat designs bridges, roads and other fungible entities. It is not a software company. Mr. Crawford chuckled:
Sure, we work with bridges and smart transportation systems. What we have learned is that engineers in our various offices build bridges among information items. Our dataspace technology was developed to build bridges across the gaps in data. Without our dataspace technology, we could not design the bridges you drive on. Unlike some software companies, our dataspace technology was a solution to a real problem. We did not develop software and then have to hunt for a problem to solve. Without our technology, we could not deliver the award winning engineering Somat puts out each and every day.
Good answer. A real software solution to a real world problem – bridging gaps among and between disparate information. Maybe that is what turned the crank at the analyst’s shop. Refreshing and pragmatic.
However, I did get the okay to provide some highlights for you and one piece of information that may be of considerable interest to my two or three readers who live in the Washington, DC area.
First, the functions:
- Somat has woven Microsoft and Google functions together into one seamless system. Other functions can be snapped in to make information sharing and access intuitive and dead simple.
- The service allows a team to create the type of sharing spaces that Lotus has been delivering in a very costly and complicated manner. The Somat approach chops down the cost per user and eliminates the trademarked complexity of the Lotus solutions.
- The system integrates with whatever security methods a licensing organization requires. This means that the informal security of the Active Directory works as well as the most exotic dongle based methods that are popular in some sectors.
The second piece of news is that the public demonstration of this Somat technology will take place in Washington, DC, at the National Press Club on September 23, 2009. I have only basic details at the moment. The program begins at 9 am sharp and ends promptly at 11 am. There will be a presentation by the president of Somat, a demonstration of the dataspace technology, a presentation by long-time open source champion Robert Steele, president of OSS Inc. and a technology review by Adhere Solutions, the US government’s contact point for Google technology. A question and answer session will be held. After my interrogation of Mr. Crawford, he extended an invitation to me to participate in that Q&A session.
Bridging information pathways. The key to Somat’s engineering method.
Somat’s choice of the National Press Club venue was, according to Mr. Crawford:
The logical choice for Somat. As a minority owned engineering and professional services company, we see our dataspace technology as one way to deliver on President Obama’s technical vision. We think that dataspaces can address many of the transparency challenges that the Obama administration is tackling head on. Furthermore, we know from our work on certain government projects, that US agencies can use our dataspace approach to reduce costs and chop out the “wait” time in knowledge centric projects.
Based on what I saw on Friday, August 21, 2009, the San Francisco tech analysts were right on the money. I believe that Somat’s dataspace solution will be one of the contenders in this year’s high profile demo event. My thought is that if you want to deal with integrated information in a way that works, you will want to navigate to the Somat registration link to attend this briefing. If you want to talk to me about my view, drop me an email at seaky2000 at yahoo dot com. (The NPC charges for coffee, etc., so there is a nominal fee to deal with this situation.)
Somat has a remarkable technology. It touches upon such important information requirements as access to disparate information, intuitive “no training required” interfaces, and real time intelligence.
For more information about Somat, visit the company’s Web site. The addled goose asserts, “Important technology that bridges the gaps between Google, Microsoft and other information systems.”
Stephen Arnold, August 24, 2009
Yahoo Leaving Google in the Dust!
August 24, 2009
My newsreader delivered a morsel to me with the title “Where Yahoo Leaves Google in the Dust.” The author is Randall Stross. His analysis left me confused. Selecting one Yahoo service like News or Finance is a useful analytic method. I use it myself. What surprises me is the leap that the success of a single service suggests other successes will follow. I like to step back and look at the overall picture, not the exception. I admit exceptions at Yahoo are interesting.
I am tired as I write this (Saturday, midnight, August 22), but I had to capture my thoughts about how a couple of successes distinguish Yahoo. Is Yahoo making the Google look silly or “leaving it in the dust”?
In my narrow view of the world, Yahoo has revenues that are smaller than Google’s. Yahoo has traffic, but the Google, according to some of the outfits that estimate traffic, has pulled ahead. Yahoo is a collection of services that I have a tough time keeping straight and sometimes finding. Mr. Stross writes:
It seems unlikely, however, that Google’s new tools — whose metrics include one called the Fast Stochastic Oscillator — will do as much for building traffic as a fluffy news story or a short video featuring talking heads. Yahoo understands that a free finance site prospers by drawing less from the world of mathematics and more from the world of entertainment, informing just enough to satisfy users without setting off an anxiety attack.
The business school professor, Mr. Stross, is talking about the popular Yahoo Finance service. That service is outperforming Google Finance when measured by the yardsticks of friendliness and traffic. I agree. Yahoo’s service does not cause me any stress. For me, Google’s service is useful. I quite like the link to the Relegence content on AOL.com for example. Also, Yahoo has a killer service with OMG. Few of my contacts know about OMG, a celebrity info service. Google does not have much of a product in this category but there is some celebrity info on Google News. Also, Yahoo has a deal with Microsoft. Google does not. Google is pretty much alone in its sail boat. For me, defining differences between Google and Yahoo that I note include:
- Google makes more revenue. Money is really important.
- Yahoo watched as Google aced the company in online advertising. Losing that original ad lead to Google remains important today.
- Yahoo has abandoned what may be the key online function: search. That withdrawal is important. Search is no longer just search; it is the way to access information. Without that access, it is indeed tough to do certain types of knowledge work.
- Yahoo has sought refuge with Microsoft. Yahoo needs a care giver. That’s important to me. Care givers are often in the catbird seat.
If anyone is left in the dust, it is Yahoo in my opinion. That is not a popular view. What is popular is making the Google look like a loser by selecting a narrow focus and setting up yardsticks that ignore the overall revenue capability of the organizations. Business school boils down to money. Seems like an omission of note in my opinion.
Stephen Arnold, August 24, 2009
Cloud Price War
August 23, 2009
Short honk: If you have been keeping an eye on the fees assessed for cloud computing, you will want to take a look at “Amazon Lowers Fees on Reserved EC2 Instances”. The article contains a useful table of prices. What was missing were the pre-discount prices, but the information is useful. When I read the write up, I thought, “Amazon is buying market share.” I wonder if Amazon will report revenues for its cloud product line. Probably not. Clarity in financial reporting is not part of the Amazon game plan in my opinion.
Stephen Arnold, August 22, 2009
Smart Wiki Search
August 23, 2009
A happy quack to the reader who sent me a link to Smart Wiki Search. There is a good description of what the system does to identify related queries. Click a related query and the system displays a page of Wikipedia information germane to your original query. I tested the query “Julius Caesar” and got useful links shown below:
The about section of the service explains the nuts and bolts of the system:
Smart Wiki Search uses the link structure of Wikipedia to calculate which concepts each page is associated with. It is easy to see why looking at links can help group pages by concepts. For example, pages about mathematics have a lot of links to (and from) other pages about mathematics. Pages about the Apollo moon landing have a lot of links to pages about NASA and pages about the moon, etc. More specifically, Smart Wiki Search uses the so-called eigendecomposition of the Wikipedia link transition matrix. Eigendecomposition provides a number of special vectors, called eigenvectors, and their corresponding eigenvalues. These vectors are special because even a relatively small number of eigenvectors having the largest eigenvalues can capture the most important properties of the link structure.
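The about section’s description can be sketched with a toy link graph. The six “pages” and their links below are invented for illustration; the point is that the leading eigenvectors of the row-normalized link matrix give pages in the same topical cluster near-identical coordinates:

```python
import numpy as np

# Toy link matrix for six "pages": rows link to columns.
# Pages 0-2 form a mathematics cluster; pages 3-5 an Apollo cluster.
# (Illustrative data; Smart Wiki Search works on Wikipedia's full graph.)
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Row-normalize to get the link transition matrix the about page mentions.
P = A / A.sum(axis=1, keepdims=True)

# Eigendecomposition: the eigenvectors with the largest-magnitude
# eigenvalues capture the dominant link structure.
vals, vecs = np.linalg.eig(P)
order = np.argsort(-np.abs(vals))
top = np.real(vecs[:, order[:2]])  # keep the two leading eigenvectors

# Pages in the same cluster end up with near-identical rows in `top`,
# so concept neighbors of a query can be found by comparing those rows.
```

On this toy graph the math pages (rows 0–2) land on one coordinate and the Apollo pages (rows 3–5) on another, which is the grouping-by-concept behavior the about page describes.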
Give the system a spin. Graduate students and those writing research papers are likely to find this content domain specific search system useful.
Stephen Arnold, August 23, 2009
Microsoft and SEO Optimization
August 23, 2009
Whilst poking around for the latest Microsoft search information, I came across a Web site called Internet Information Services at www.iis.net. I was curious because the write up on the Web site said:
The Site Analysis module allows users to analyze local and external Web sites with the purpose of optimizing the site’s content, structure, and URLs for search engine crawlers. In addition, the Site Analysis module can be used to discover common problems in the site content that negatively affects the site visitor experience. The Site Analysis module includes a large set of pre-built reports to analyze the sites compliance with SEO recommendations and to discover problems on the site, such as broken links, duplicate resources, or performance issues. The Site Analysis module also supports building custom queries against the data gathered during crawling.
The word “experience” did it. I zipped to whois and learned that the site is Microsoft’s. The registrar is an outfit called CSC Protect-a-Brand. Microsoft does not want to let this URL slip through its hands, I assume. You can download the tool here.
What interested me was that Microsoft has written the description of the tool without reference to its own Web indexing system. Furthermore, the language is generic which leads me to believe that this extension and the other nine in the category “Search Engine Optimization Toolkit” apply to Google as well.
If you are an SEO wizard and love the Microsoft way, you will want to download and experiment with these tools. Testing might be a good idea. If the tools work equally well for Bing.com and Google.com, has Microsoft emulated the Google Webmaster guidelines? If not, what will be the impact on a highly ranked Google site? With 75 to 85 percent of Web search traffic flowing to Google.com, an untoward tweak might yield interesting effects.
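For a sense of what a site-analysis crawl reports, here is a hypothetical miniature of one check the quoted description mentions: flagging duplicate resources, here simplified to duplicate link targets in a page. This is an illustration in Python, not part of the IIS toolkit:

```python
from html.parser import HTMLParser
from collections import Counter

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def duplicate_links(html):
    """Return hrefs that appear more than once -- a simplified stand-in
    for the "duplicate resources" report in a site analysis tool."""
    collector = LinkCollector()
    collector.feed(html)
    counts = Counter(collector.links)
    return sorted(href for href, n in counts.items() if n > 1)
```

A real crawler would fetch each page, follow the links, and also test them for broken targets and slow responses, but the reporting idea is the same.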
Stephen Arnold, August 23, 2009