Google and Alta Vista: Who Remembers?
September 9, 2015
A lifetime ago, I did some work for an outfit called Persimmon IT. We fooled around with ways to take advantage of memory, which was a tricky devil in my salad days. The gizmos we used were manufactured by Digital Equipment. The processors were called “hot”, “complex”, and AXP. You may know this foot warmer as the Alpha. Persimmon operated out of an office in North Carolina. We bumped into wizards from Cambridge University (yep, that outfit again), engineers housed on the second floor of a usually warm office in Palo Alto, and individuals whom I never met but whose email I had to slog through.
So what?
A person forwarded me a link to what seems to be an aged write up called “Why Did Alta Vista Search Engine Lose Ground so Quickly to Google?” The write up was penned by a UCLA professor. I don’t have too much to say about the post. I was lucky to finish grade school. I missed the entire fourth and fifth grades because my Calvert Course instructor in Brazil died of yellow jaundice after my second lesson.
I scanned the write up; note that you may need to register in order to read the article and the comments thereto. I love walled gardens. They are so special.
I did notice that one reason Alta Vista went south was not mentioned. Due to the brilliant management of the company by Hewlett Packard/Compaq, Alta Vista created some unhappy campers. Few at HP knew about Persimmon, and none of these MBAs had the motivation to learn anything about the use of Alta Vista as a demonstration of the toasty Alpha chips, the clever use of lots of memory, and the speed with which certain content operations could be completed.
Unhappy with the state of affairs, the Palo Alto Alta Vista workers began to sniff for new opportunities. One scented candle burning in the information access night was a fledgling outfit called Google, formerly Backrub. Keep in mind that intermingling of wizards was and remains a standard operating procedure in Plastic Fantastic (my name for Sillycon Valley).
The baby Google benefited from HP’s outstanding management methods. The result was the decampment from the HP Way. If my memory serves me, the Google snagged Jeff Dean, Simon Tong, Monika Henzinger, and others. Keep in mind that I am no “real” academic, but my research revealed to me and those who read my three monographs about Google that Google’s “speed” and “scaling” benefited significantly from the work of the Alta Vista folks.
I think this is important because few people in the search business pay much attention to the turbo boost HP unwittingly provided the Google.
The comments to the “Why Did Alta Vista…” post included some other observations I found stimulating.
- One commenter named Rajesh offered, “I do not remember the last time I searched for something and it did not end up in page 1.” My observation is, “Good for you.” Try this query and let me know how Google delivers on-point information: scram action. I did not see any hits to nuclear safety procedures. Did you, Rajesh? I assume your queries are different from mine. By the way, “scram local events” will produce a relevant hit halfway down the Google result page.
- Phillip observed that the “time stamp is irrelevant in this modern era, since sub second search is the norm.” I understand that “time” is not one of Google’s core competencies. Also, many results are returned from caches. The larger point is that Google remains time blind. Google invested in a company that does time well, but sophisticated temporal operations are out of reach for the Google.
- A number of commenting professionals emphasized that Google delivered clutter free, simple, clear results. The last time I looked at a Google results page for the query “katy perry,” the presentation was far from a tidy blue list of relevant results.
- Henry pointed out that the Alta Vista results were presented without logic. I recall that relevant results did appear when a query was appropriately formed.
- One comment pointed out that it was necessary to cut and paste results for the same query processed by multiple search engines. The individual reported that it took half an hour to do this manual work. I would point out that metasearch solutions, which automate exactly that chore (see the sketch after this list), became available in the early 1990s. Information is available here and here.
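A metasearch engine runs the same query against several engines, then merges and deduplicates the ranked lists. Here is a minimal sketch of the merge step; the engine names, URLs, and the reciprocal-rank scoring are my own illustrative choices, not any particular product’s method.

```python
from collections import defaultdict

def merge_results(result_lists):
    """Merge ranked result lists from several engines.

    result_lists: dict mapping engine name -> ordered list of (url, title).
    Scoring is a simple reciprocal-rank sum; ties broken by URL.
    """
    scores = defaultdict(float)
    titles = {}
    for engine, results in result_lists.items():
        for rank, (url, title) in enumerate(results, start=1):
            scores[url] += 1.0 / rank          # a high rank on any engine boosts the score
            titles.setdefault(url, title)
    merged = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [(url, titles[url], score) for url, score in merged]

if __name__ == "__main__":
    # Hypothetical results for the same query run on three engines.
    lists = {
        "engine_a": [("http://example.com/1", "Doc 1"), ("http://example.com/2", "Doc 2")],
        "engine_b": [("http://example.com/2", "Doc 2"), ("http://example.com/3", "Doc 3")],
        "engine_c": [("http://example.com/1", "Doc 1"), ("http://example.com/3", "Doc 3")],
    }
    for url, title, score in merge_results(lists):
        print(f"{score:.2f}  {title}  {url}")
```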
Enough of the walk down memory lane. Revisionism is alive and well. Little wonder that folks at Alphabet and other searchy type outfits continue to reinvent the wheel.
Isn’t a search app for a restaurant a “stored search”? Who cares? Very few.
Stephen E Arnold, September 9, 2015
Indexing Teen Messages?
September 7, 2015
If you are reading teens’ SMS messages, you may need a lexicon of young speak. The UK Department for Education has applied taxpayer funds to help you decode PAW and GNOC. The problem is that http://parentinfo.org/ does not provide a link to the word list. What is available is a link to Netlingo’s $20 list of Internet terms.
Maybe I am missing something in “P999: What Teenage Messages Really Mean?”
For a list of terms teens and the eternally young use, check out these free links:
- http://www.netlingo.com/acronyms.php
- http://www.webopedia.com/quick_ref/textmessageabbreviations.asp
- http://www.internetslang.com/SMS-meaning-definition.asp
I love it when “real journalists” do not follow the links about which they write. Some of these folks probably find turning on their turn signal too much work as well.
Stephen E Arnold, September 7, 2015
A Search Engine for College Students Purchasing Textbooks
August 27, 2015
The article on Lifehacker titled “TUN’s Textbook Search Engine Compares Prices from Thousands of Sellers” reviews TUN, or the “Textbook Save Engine.” It’s an ongoing issue for college students that tuition and fees are only the beginning of the expenses. Textbook costs alone can skyrocket for students who have no choice but to buy the assigned books if they want to pass their classes. TUN offers students all of the options available from thousands of booksellers. The article says,
“The “Textbook Save Engine” can search by ISBN, author, or title, and you can even use the service to sell textbooks as well. According to the main search page…students who have used the service have saved over 80% on average buying textbooks. That’s a lot of savings when you normally have to spend hundreds of dollars on books every semester… TUN’s textbook search engine even scours other sites for finding and buying cheap textbooks; like Amazon, Chegg, and Abe Books.”
After typing in the book title, you get a list of editions. For example, when I entered Pride and Prejudice, which I had to read for two separate English courses, TUN listed an annotated version, several versions with different forewords (which are occasionally studied in the classroom as well) and Pride and Prejudice and Zombies. After you select an edition, you are brought to the results, laid out with shipping and total prices. A handy tool for students who leave themselves enough time to order their books ahead of the beginning of the class.
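The comparison itself is simple arithmetic once the offers are collected: total cost is price plus shipping, and the list is sorted on that total. A toy sketch with invented sellers and prices; TUN’s actual data sources and ranking are not documented in the article.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    seller: str
    price: float      # book price in dollars
    shipping: float   # shipping cost in dollars

    @property
    def total(self) -> float:
        return self.price + self.shipping

# Hypothetical offers for one edition of a textbook.
offers = [
    Offer("Seller A", 42.50, 3.99),
    Offer("Seller B", 39.99, 7.00),
    Offer("Seller C", 45.00, 0.00),
]

# Sort by total cost, the way a price comparison page would present the list.
for offer in sorted(offers, key=lambda o: o.total):
    print(f"{offer.seller}: ${offer.price:.2f} + ${offer.shipping:.2f} shipping = ${offer.total:.2f}")
```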
Chelsea Kerwin, August 27, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Misinformation How To: You Can Build a False Identity
August 9, 2015
The Def Con hit talk is summarized in “Rush to Put Death Records Online Lets Anyone Be Killed.” The main idea is that one can fill out forms (inject) containing misinformation. Various “services” suck in the info and then make “smart” decisions about that information. Plug in the right combination of fake and shaped data, and a living human being can be declared “officially” dead. There are quite a few implications of this method, which is capturing the imaginations of real and would-be bad actors. Why swizzle through the so-called Dark Web when you can do it yourself? The question is, “How do search engines identify and filter this type of information?” Oh, right. The search engines do not. Quality and perceived accuracy are defined within the context of advertising and government grants.
Stephen E Arnold, August 9, 2015
Does America Want to Forget Some Items in the Google Index?
July 8, 2015
The idea that the Google sucks in data without much editorial control is just now grabbing brain cells in some folks. The Web indexing approach has traditionally allowed the crawlers to index what was available without too much latency. If there were servers which dropped a connection or returned an error, some Web crawlers would try again. Our Point crawler just kept on truckin’. I like the mantra, “Never go back.”
Google developed a more nuanced approach to Web indexing. The link thing, the popularity thing, and the hundred plus “factors” allowed the Google to figure out what to index, how often, and how deeply (no, grasshopper, not every page on a Web site is indexed with every crawl).
The notion of “right to be forgotten” amounts to a third party asking the GOOG to delete an index pointer in an index. This is sort of a hassle and can create some exciting moments for the programmers who have to manage the “forget me” function across distributed indexes and keep the eager beaver crawler from reindexing a content object.
The Google has to provide this type of third party editing for most of the requests from individuals who want one or more documents to be “forgotten”; that is, no longer in the Google index which the public users’ queries “hit” for results.
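Google does not publish the mechanics, but one plausible way to picture the “forget me” plumbing is a suppression list consulted in two places: at query time, to filter hits, and in the crawl frontier, to keep the eager beaver from re-fetching and re-indexing the content object. A minimal sketch under that assumption; the names and structures are mine, not Google’s.

```python
class ForgetList:
    """Suppression list for 'right to be forgotten' URLs (illustrative only)."""

    def __init__(self):
        self._suppressed = set()

    def add(self, url: str) -> None:
        self._suppressed.add(url)

    def is_suppressed(self, url: str) -> bool:
        return url in self._suppressed


def filter_results(hits, forget_list):
    """Drop suppressed URLs from a ranked result list at query time."""
    return [hit for hit in hits if not forget_list.is_suppressed(hit["url"])]


def should_crawl(url, forget_list):
    """Keep the crawler from re-fetching (and re-indexing) a forgotten URL."""
    return not forget_list.is_suppressed(url)


if __name__ == "__main__":
    forget = ForgetList()
    forget.add("http://example.com/old-story")
    hits = [{"url": "http://example.com/old-story", "title": "Old story"},
            {"url": "http://example.com/other", "title": "Other page"}]
    print(filter_results(hits, forget))                           # only the second hit survives
    print(should_crawl("http://example.com/old-story", forget))   # False
```

In a distributed deployment the same list has to reach every index shard and every crawler instance, which is where the exciting moments for the programmers come from.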
Now comes “Google Is Facing a Fight over Americans’ Right to Be Forgotten.” The write up states:
Consumer Watchdog’s privacy project director John Simpson wrote to the FTC yesterday, complaining that though Google claims to be dedicated to user privacy, its reluctance to allow Americans to remove ‘irrelevant’ search results is “unfair and deceptive.”
I am not sure how quickly the various political bodies will move to make being forgotten a real thing. My hunch is that it will become an issue with legs. Down the road, the third party editing is likely to be required. The First Amendment is a hurdle, but when it comes time to fund a campaign or deal with winning an election, there may be some flexibility in third party editing’s appeal.
From my point of view, an index is an index. I have seen some frisky analyses of my blog articles and my for fee essays. I am not sure I want criticism of my work to be forgotten. Without an editorial policy, third party, ad hoc deletion of index pointers distorts the results as much as, if not more than, results skewed by advertisers’ personal charm.
How about an editorial policy and then the application of that policy so that results are within applicable guidelines and representative of the information available on the public Internet?
Wow, that sounds old fashioned. The notion of an editorial policy is often confused with information governance. Nope. Editorial policies inform the database user of the rules of the game and what is included and excluded from an online service.
I like dinosaurs too. Like a cloned brontosaurus, is it time to clone the notion of editorial policies for corpus indices?
Stephen E Arnold, July 8, 2015
A Reminder about What Is Available to Search
July 5, 2015
Navigate to “Big Data, Big Problems: 4 Major Link Indexes Compared.” The write up explains why these link index services end up with different content in their indexes. The services referenced in the write up are:
- Ahrefs. A backlink index updated every 15 minutes.
- Majestic. A big data solution for marketers and others. The company says, “Majestic-12 has crawled the web again, and again, and again. We have seen 2.7 trillion URLs come and go, and in the last 90 days we have seen, checked, scored and categorized 715 billion URLs.”
- Moz. Products for in bound marketers.
- SEMrush. Search engine marketing for digital marketers.
Despite the marketing focus, there were some interesting comments based on the analysis of backlink services (who links to what). Here’s one point I highlighted:
Each organization has to create a crawl prioritization strategy.
The article points out:
The bigger the crawl, the more the crawl prioritization will cause disparities. This is not a deficiency; this is just the nature of the beast.
Yep, editorial choice. Inclusions and exclusions. The takeaway: when you run a query, chances are you are getting biased, incomplete information for that query.
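Stripped to its bones, a crawl prioritization strategy is a scoring function plus a priority queue: score each candidate URL (inbound links, freshness, depth), fetch the highest score next, and accept that everything below the cutoff simply is not in the index. A minimal sketch, with weights I invented for illustration rather than any vendor’s actual formula.

```python
import heapq

def priority(url_info):
    """Toy priority: more inbound links and fresher, shallower pages score higher.

    url_info: dict with 'inlinks' (int), 'days_since_change' (float), 'depth' (int).
    The weights are illustrative, not any vendor's real formula.
    """
    return (2.0 * url_info["inlinks"]
            - 0.5 * url_info["days_since_change"]
            - 1.0 * url_info["depth"])

class CrawlFrontier:
    """Priority queue of URLs to fetch; highest priority pops first."""

    def __init__(self):
        self._heap = []

    def add(self, url, url_info):
        # heapq is a min-heap, so negate the priority.
        heapq.heappush(self._heap, (-priority(url_info), url))

    def next_url(self):
        return heapq.heappop(self._heap)[1] if self._heap else None

if __name__ == "__main__":
    frontier = CrawlFrontier()
    frontier.add("http://example.com/popular", {"inlinks": 500, "days_since_change": 1, "depth": 1})
    frontier.add("http://example.com/deep-page", {"inlinks": 2, "days_since_change": 90, "depth": 6})
    print(frontier.next_url())  # the popular, fresh, shallow page comes out first
```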
The most important statement in the write up, in my opinion, is this one:
If anything rings true, it is that once again it makes sense to get data from as many sources as possible.
Good advice for search experts and sixth graders. Oh, MBAs may want to heed the statement as well.
But who cares? Probably not too many Internet users. It gets exciting when these searchers of “incomplete” information make decisions.
Stephen E Arnold, July 5, 2015
France: Annoying the GOOG. Do the French Change a Cheese Process?
June 15, 2015
I have no chien in this fight. I read “France Orders Google to Scrub Search Globally in Right to Be Forgotten Requests.” Since I had been in a far off land and then beavering away in a place where open carry enhances one’s machismo, the story may be old news to you. To me, it was like IBM innovation: looked fresh, probably recycled.
Nevertheless, the article reports that the folks who bedeviled Julius Caesar are now irritating the digital Roman Empire. I learned:
France’s Commission nationale de l’informatique et des libertés (CNIL), the country’s data protection authority, has ordered Google to apply delisting on all domain names of its search engine. CNIL said in its news release that it’s received hundreds of complaints following Google’s refusals to carry out delisting. According to its latest transparency report, last updated on Friday 12 June, Google had received a total of 269,314 removal requests, had evaluated 977,948 URLs, and had removed 41.3% of those URLs.
I had an over the transom email from a person who identified himself with two initials only. For some reason the person was unhappy with Google’s responsiveness. I pointed the person to the appropriate Google Web page. But the two initial person continues to ask me to help. Yo, dude, I am retired. Google does not perceive me as much more than a person who should be buying Adwords.
Apparently, folks like my two letter person feel similarly frustrated.
As I understand the issue, France, like some other countries, wants the Google to remove links to content identified by the person or entity filling in the form, and to do so quickly and with extreme prejudice.
We will see. The Google does not do sprints, even when the instructions come from a country with more than 200 varieties of cheese, a plethora of search and retrieval systems, and some unsullied landscapes.
My hunch is that it may be quicker to create a Le Châtelain Camembert than to modify Google’s internal work flows. Well, maybe Roquefort or a Tomme de Savoie. Should France stick with cheese and leave the Googling to Google?
Stephen E Arnold, June 15, 2015
Medical Tagging: No Slam Dunk
May 28, 2015
The taxonomy/ontology/indexing professionals have a challenge. I am not sure many of the companies pitching better, faster, cheaper—no, strike that—better automated indexing of medical information will become too vocal about a flubbed layup.
Navigate to “Coalition for ICD 10 Responds to AMA.” It seems as if indexing even a mostly closed corpus is a sticky ball of goo. The issue is the coding scheme required by everyone who wants to get reimbursed and retain certification.
The write up quotes a person who is supposed to be in the know:
“We’d see 13,000 diagnosis codes balloon into 68,000 – a five-fold increase.” [Dr. Robert Wah of the AMA]
The idea is that the controlled terms are becoming obese, weighty, and frankly sufficiently numerous to require legions of subject matter experts and software a heck of a lot more functional than Watson to apply “correctly.” I will let you select the definition of “correctly” which matches your viewpoint from this list of Beyond Search possibilities:
- Health care administrators: Get paid
- Physicians: Avoid scrutiny from any entity or boss
- Insurance companies: Pay the least possible amount yet have an opportunity for machine assisted claim identification for subrogation
- Patients: Oh, I forgot. The patients are of lesser importance.
You, gentle reader, are free to insert your own definition.
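To make the scale problem concrete, the crudest automated coder is keyword matching against the controlled vocabulary, and that approach degrades quickly as 13,000 codes become 68,000 and the clinical notes get vaguer. A toy sketch with a three-entry dictionary; the descriptions are paraphrased, the real ICD-10 set runs to tens of thousands of codes, and real coding systems use far more than term overlap.

```python
import re

# Toy subset of a controlled vocabulary (descriptions paraphrased; verify
# against the official ICD-10 code set before trusting any of this).
ICD10_TOY = {
    "E11.9": "type 2 diabetes mellitus without complications",
    "I10": "essential hypertension",
    "J45.909": "unspecified asthma, uncomplicated",
}

def tokens(text: str) -> set:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def naive_code(note: str):
    """Rank codes by crude keyword overlap -- 'good enough' tagging that can
    look fine in a demo and falls apart against 68,000 codes and messy notes."""
    note_words = tokens(note)
    hits = []
    for code, description in ICD10_TOY.items():
        overlap = len(note_words & tokens(description))
        if overlap:
            hits.append((overlap, code, description))
    return sorted(hits, reverse=True)

print(naive_code("patient with type 2 diabetes and hypertension, stable"))
```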
I circled this statement as mildly interesting:
As to whether ICD-10 will improve care, it would seem obvious that more precise data should lead to better identification of potential quality problems and assessment of provider performance. There are multiple provisions in current law that alter Medicare payments for providers with excess patient complications. Unfortunately, the ICD-9 codes available to identify complications are woefully inadequate. If a patient experiences a complication from a graft or device, there is no way to specify the type of graft or device nor the kind of problem that occurred. How can we as a nation assess hospital outcomes, pay fairly, ensure accurate performance reports, and embrace value-based care if our coded data doesn’t provide such basic information? Doesn’t the public have a right to know this kind of information?
Maybe. In my opinion, the public may rank below patients in the priorities of some health care delivery outfits, professionals, and advisers.
Indexing is necessary. Are the codes the ones needed? In an automatic indexing system, what’s more important: [a] Generating revenue for the vendor; [b] Reducing costs to the customer of the automated tagging system; [c] Making the indexing look okay and good enough?
Stephen E Arnold, May 28, 2015
Cerebrant Discovery Platform from Content Analyst
May 6, 2015
A new content analysis platform boasts the ability to find “non-obvious” relationships within unstructured data, we learn from a write-up hosted at PRWeb, “Content Analyst Announces Cerebrant, a Revolutionary SaaS Discovery Platform to Provide Rapid Insight into Big Content.” The press release explains what makes Cerebrant special:
“Users can identify and select disparate collections of public and premium unstructured content such as scientific research papers, industry reports, syndicated research, news, Wikipedia and other internal and external repositories.
“Unlike alternative solutions, Cerebrant is not dependent upon Boolean search strings, exhaustive taxonomies, or word libraries since it leverages the power of the company’s proprietary Latent Semantic Indexing (LSI)-based learning engine. Users simply take a selection of text ranging from a short phrase, sentence, paragraph, or entire document and Cerebrant identifies and ranks the most conceptually related documents, articles and terms across the selected content sets ranging from tens of thousands to millions of text items.”
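Latent Semantic Indexing itself is no secret sauce: build a term-document matrix, take a truncated singular value decomposition, and compare documents and queries in the reduced concept space instead of by shared keywords. A minimal sketch with scikit-learn, using toy documents and a toy component count; Content Analyst’s CAAT engine is, of course, more than this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "gene expression in cancer cells",
    "tumor growth and oncology research",
    "quarterly earnings for the retail sector",
    "consumer spending and store revenue",
]

# Term-document matrix, then truncated SVD = the LSI concept space.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)
svd = TruncatedSVD(n_components=2, random_state=0)
lsi = svd.fit_transform(tfidf)

# A query phrase is projected into the same space and ranked by cosine similarity.
query = svd.transform(vectorizer.transform(["oncology gene research"]))
scores = cosine_similarity(query, lsi)[0]
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Swap in a real collection and a realistic number of components (hundreds rather than two) and you have the skeleton of the concept-based ranking the press release describes.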
We’re told that Cerebrant is based on the company’s prominent CAAT machine learning engine. The write-up also notes that the platform is cloud-based, making it easy to implement and use. Content Analyst launched in 2004, and is based in Reston, Virginia, near Washington, DC. They also happen to be hiring, in case anyone here is interested.
Cynthia Murrell, May 6, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Microsoft Nudges English to Ideographs
May 5, 2015
Short honk: In my college days, I studied with a fellow who was the world’s expert in the morpheme burger. You are familiar with hamburger. Lev Soudek (I believe this was his name) set out to catalog every use of –burger he could find. Dr. Soudek was convinced that words had a future.
He is probably pondering the rise of ideographs like emoji. For insiders, a pictograph can be worth a thousand words. I suppose the morpheme burger is important to the emergence of the hamburger icon.
Microsoft is pushing into new territory according to “Microsoft Is First to Let You Flip the Middle Finger Emoji.” Attensity, Smartlogic, and other content processing systems will be quick to adapt. The new Microsoft is a pioneering outfit.
Is it possible to combine the hamburger icon with the middle finger emoji to convey a message without words?
Dr. Soudek, what do you think?
How would one express such a thought in words? Modern language? Classy!
Stephen E Arnold, May 5, 2015