Microsoft and Mikojo Trigger Semantic Winds across Search Landscape
January 28, 2010
Semantic technology is blowing across the search landscape again. The word “semantic” and its use in phrases like “semantic technology” have a certain trendiness. When I see the word, I think of smart software that understands information in the way a human does. I also think of computationally sluggish processes and the complexity of language, particularly in analytic languages like English. Google has considerable investment in semantic technology, but the company wisely tucks it away within larger systems and avoids the technical battles that rage among the different semantic technology factions. You can see Google’s semantic operations tucked within the Ramanathan Guha inventions disclosed in February 2007. Pay attention to the discussion of the system and method for “context”.
Gale force winds from semantic technology advocates. Image source: http://www.smh.com.au/ffximage/2008/11/08/paloma_wideweb__470x289,0.jpg
Microsoft’s Semantic Puff
Other companies are pushing the semantic shock troops forward. Yesterday I read Network World’s “Microsoft Talks Up Semantic Search Ambitions.” The article reminded me that Fast Search & Transfer ASA offered some semantic functionality, which I summarized in the 2006 version of the original Enterprise Search Report (the one with real beef, not tofu, inside). Microsoft also purchased Powerset, a company that used some of Xerox PARC’s technology and its own wizardry to “understand” queries and create a rich index. The Network World story reported:
With semantic technologies, which also are being referred to as Web 3.0, computers have a greater understanding of relationships between different information, rather than just forwarding links based on keyword searches. The end game for semantic search is “better, faster, cheaper, essentially,” said Prevost, who came over to Microsoft in the company’s 2008 acquisition of search engine vendor Powerset. Prevost is still general manager of Powerset. Semantic capabilities get users more relevant information and help them accomplish tasks and make decisions, said Prevost.
The payoff is that software understands humans. Sounds good, but it does little to alter the startling dominance of Google in general Web search and the rocket-like rise of social search systems like Facebook. In a social context, humans tell “friends” about meaning or, better yet, offer an answer or a relevant link. No search required.
I reported on the complexities of configuring the enterprise search system Microsoft offers for SharePoint in an earlier Web log post. The challenge is complexity plus the time and money required to make a “smart” software system perform at an acceptable level, both in content processing throughput and in what the user experiences. Users often prefer to ask someone or just use whatever appears at the top of a search results list.
The Blank Spaces in Social Media
January 25, 2010
For the last 14 months I have written a monthly column for Information World Review. I don’t recycle that information in this Web log. In fact, I try to steer clear of repeating information within and across my monthly columns and this Web log. I thought I would have a dearth of information with the writing demands these publications place upon me and the equally addled goslings.
I was wrong.
On February 1, 2010, we are going to create a second Web log with the very hot title of SSN. I won’t reveal what it is about. I can say that it will NOT discuss the social security numbering system. I am going to operate the information service as a test for several months. If we hit a comfortable stride, then we will shift from a public beta test to a full-scale operation.
Yes, we will accept advertising, advertorials, and other marketing tie ups. Some of the conventions of Beyond Search and the ArnoldIT.com services will be linked to the new Web log. No, we have not worked out the details, but one of the team is going to grab hold of this angle and manage this aspect of the new information service.
The broad topic area will fit between real time search (my Information World Review column), my Google write ups (the KMWorld column), and my area of expertise (large scale online search and systems). We will have the exact positioning hammered out by Wednesday of next week with the first content live online a few days later.
A Real Editor
The editor for this Monday through Friday Web log will be Jessica Bratcher, a former newspaper editor. She continues to instruct me in how “real” journalists work. I will never learn because I am a sales person with few skills and not much energy.
She has assembled a team of goslings who will follow the conventions of the Web log world with a heck of a lot more journalistic acumen than I bring to the write ups in this Web log.
The Content
The Web log will feature some new approaches to content germane to online information.
First, each week there will be a dialog about a particular online issue of interest to business professionals. The idea is to take a topic and look at it from different viewpoints. In Beyond Search, there is a single point of view, and we want to explore topics from different angles. The trope will be a semi-Socratic dialog involving my partners in this new, free online information service. Even though different people will be involved, you will recognize the dialog from its new icons:
Notice that both icons represent squawking and noisy birds. The idea is to have an edge and present information a person involved in business will find somewhat useful.
Second, there will be lists. A traditional Web log forces certain content into a stack with the most recent information at the top and the older information buried at the bottom of the pile. The new Web log will put certain information—such as lists and reference information—on pages that are static. We think you will find it easier to locate some of the special content we are gathering for this new information service.
Google and Its Security Woes
January 18, 2010
There are some practical issues that must be addressed when dealing with security. First, the people working on the security problem have to be vetted. This requires time and organization. Organizations in a hurry and not well organized are at greater risk than a plodding, more methodical outfit. Although troubling to some, the security people have to be subject to some type of monitoring as well. The idea is that layers of security methods and procedures are required. Again, this takes expertise and experience. Short cuts can increase risk.
Then when something bad happens, it is a good idea to look for indications that someone close to the matter is involved, intentionally or unintentionally. Some countries use clever methods to socially engineer an opportunity to exploit a weakness in security. I know that the idea of a team implies that everyone is going to run the game plan. Alas, that’s not always accurate.
In my experience, keeping an issue contained is a prudent first step. The idea that quick reaction or chatter helps may be an inaccurate one. Some outputs are necessary, but crazy talk is rarely helpful whether from pundits, poobahs, satraps, or azure chip consultants.
I was surprised to read several widely circulated news stories that provide some additional “information” or “disinformation” about the Google security matter. The word “attack” is attached to this issue, but I don’t know enough to be able to say whether this was an “attack” or one of those cute things that math club members perpetrate as a way to get attention, change grades for the football team, or transfer cafeteria money to a charity like Midnight Auto Supply.
The Great Wall of China was built for a reason. Some of those reasons exist for today’s Chinese governmental entities. Those who built the Great Wall were not concerned with its environmental or financial impact. Priorities may be different in China than in other geographic areas or nation states. Image source: http://www.globusjourneys.com/Common/Images/Destinations/great-wall.jpg
That’s the problem with lots of information or lots of disinformation. There is uncertainty, what I call a “cloud of unknowing”.
Here’s what’s caught my attention. (Keep in mind that I have no solid opinion on this matter because I only know what flops into my newsreader and that information or disinformation is suspect by definition.)
Search Vendors Working the Content Food Chain
January 13, 2010
In the last six months, I have noticed that three companies are making an effort to respond to ZyLAB’s success in the end-to-end content processing sector. There has been some uninformed and misleading discussion of search and content processing companies’ shift to vertical market solutions. I think this view distorts what some vendors are doing; namely, when one company finds a way to make sales, the other vendors pile into the Volkswagen. This is not so much “imitation as flattery”. What is happening is that sales are tough to make. When a company finds an angle, the stampede is on. In a short period of time, an underserved sector in search and content processing has more people stomping around than a Lady Gaga concert.
Let’s go back in history, a subject that most of the poobahs, azure chip consultants, and self-appointed experts avoid. The idea that certain actions have surfaced before is no fun. Identifying a “new” trend is easier, particularly when the trend spotter’s “history” extends no further back than his or her last Google query.
The Möbius strip is non-orientable, just like the vendors’ end-to-end search solutions. A path on a Möbius strip can be twice as long as the original strip of paper. That’s a good way for me to think about end-to-end search and content processing systems. Costs follow a similar trajectory as well.
In the dim mists of time, one of the first outfits to offer an end-to-end solution to content acquisition, indexing, and search was—believe it or not—Excalibur. The first demonstration I received of the Excalibur RetrievalWare technology included scanning, conversion of the scanned image’s text to ASCII, indexing of the ASCII for an image, and search. The information processed in that demonstration was a competitor’s marketing collateral. There were online search systems, but these were mostly small scale systems due to the brutal costs of indexing large domains of HTML. A number of companies were pushing forward with the idea of integrated scanning systems. Sure, in the 1990s you could buy a high end scanner and software. But in order to build a system that minimized the fiddly human touch, you had to build the missing components yourself. Excalibur hooked up with resellers of high end scanners from companies like Bell+Howell, Fujitsu, and others.

The notion of taking a scanned image, performing optical character recognition on the page image in memory, and then indexing that ASCII was a relatively new method. UMI (a unit of Bell+Howell) had a sophisticated production process to do this work. Big outfits like Thomson were interested in this type of process because lots of information in the early 1990s was still in hard copy form. To make a long story short, the Excalibur engineers were among the first to create a commercial product that mostly worked, well, sort of. The indexing was an issue. Excalibur embarked on a journey that required enhancing the RetrievalWare product and generating ready-to-use controlled vocabularies for specific business sectors like defense and banking.

As you may know, Excalibur’s original vision did not work out, so the company morphed into a search and content processing company with a focus on business intelligence. The firm renamed itself Convera. The origins of the company were mostly ignored as the Convera package of services chased government work and commercial accounts like Intel and the National Basketball Association (data center SaaS functions for the former and video searching for the hoopsters). When those changes did not work out too well, Convera refocused to become a for-fee version of the free Google custom search engine. That did not work out too well either, and the company has been semi-dissolved.
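Strip away the vendor history and the Excalibur-style pipeline reduces to four stages: scan, recognize, index, search. Here is a minimal sketch of that flow; I am assuming the Pillow and pytesseract libraries as modern stand-ins for the 1990s scanner-and-recognition gear, and the file name is hypothetical:

```python
from collections import defaultdict

from PIL import Image          # pip install pillow
import pytesseract             # pip install pytesseract (needs the tesseract binary)

inverted_index = defaultdict(set)   # term -> set of document identifiers

def ingest(doc_id, page_image_path):
    """OCR a scanned page image and index the resulting ASCII text."""
    text = pytesseract.image_to_string(Image.open(page_image_path))
    for term in text.lower().split():
        inverted_index[term].add(doc_id)

def search(term):
    """Return the documents whose OCR text contained the term."""
    return sorted(inverted_index.get(term.lower(), set()))

ingest("brochure-001", "competitor_brochure_page1.png")   # hypothetical scan
print(search("pricing"))
```

The hard part then, as now, was not the flow itself but OCR error rates and the quality of the resulting index.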
Why’s this important?
First, the history shows that end-to-end processing is not new. Like many of the hot search innovations, I find the discoveries of the azure chip crowd a “been there, done that” experience. Processing paper and making it searchable is a basic way to approach certain persistent problems.
Second, the synopsis of the Excalibur trajectory makes clear that senior managers of search and content processing companies scramble, following well worn paths. The constant repositioning and restating of what a technology allegedly does is a characteristic of search and content processing.
Third, the shifts and jolts in the path of the Excalibur / Convera entity are predictable. The template is:
- Start with a problem
- Integrate
- Sell
- Engineer fixes on the fly
- Fail
- Identify a new problem
- Rinse, repeat.
What has popped out of my Overflight intel system is that law firms are now looking for a solution to a persistent information problem; that is, when a legal matter fires up, most search systems work just fine with content in electronic form. The hitch is that a great deal of paper is produced. If something exists in digital form and one law firm must provide that information to another law firm, some law firms convert the digital information to paper, slap on a code, and have FedEx deliver boxes of paper. The law firm receiving this paper no longer has the luxury of paying minions to grind through it. The new spin on the problem is that the law firm’s information technology people want to buy a hardware-software combination that lets a box of paper be put in one end while the conversion from hard copy to a searchable, electronic instance of the documents is magically completed at the other.
Well, that’s the idea. Some of the arabesques that vendors slap on this quite difficult problem include:
- Audit records so a law firm knows who looked at what, when, and for how long (a minimal sketch of such a record appears after this list)
- A billing method. Law firms want to do invoices, of course
- A single point solution so there is “one throat to choke”.
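For the audit-record item above, the underlying data structure is not exotic. A minimal sketch, with field names I invented for illustration rather than any vendor’s actual schema:

```python
import time
from dataclasses import dataclass, field

@dataclass
class AuditRecord:
    """One 'who looked at what, when, and for how long' entry."""
    user: str
    document_id: str
    opened_at: float = field(default_factory=time.time)  # epoch seconds
    seconds_viewed: float = 0.0
    billable: bool = True   # feeds the invoicing function law firms want

audit_log = []
record = AuditRecord(user="associate7", document_id="box12/doc0448")
record.seconds_viewed = 95.0
audit_log.append(record)
print(audit_log[0])
```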
What the companies want is what Excalibur asserted it had almost 20 years ago.
ZyLAB, under the firm hand of Johannes Scholtes (a former Dutch naval officer), has made inroads in this market sector. You can read an interview with him in the Search Wizards Speak series, so I won’t recycle that information in this write up.
Autonomy was quick to move to build out its end-to-end solutions for law firms and other clients with a paper and digital content problem. In fact, Autonomy just received an award for its end-to-end eDiscovery platform.
Brainware offers a similar system. That company, a couple of years ago, told me that it had to add staff to handle the demand for its scanning and search solution. Among the firm’s largest customers were law firms and, not surprisingly, the Federal government. You can read an interview with a Brainware executive (who is an attorney) in the Search Wizards Speak series.
I learned that Recommind has inked a deal with Daeja Image Systems for its various document processing software components. The idea is to be able to provide an end-to-end solution to law firms, government agencies, and other outfits that need a system that provides access to paper based content and digital content.
Let’s step back.
What this addled goose sees in these recent announcements is that the “new” is little more than a rediscovery that law firms have not yet cracked the back of the paper-to-digital job or been able to get a search system that provides access to the source material. Sure, there were solutions 20 years ago, but those solutions don’t meet a continuing need. Notice that this problem has been around for a long time, and I don’t think the present crop of solutions will solve it fully.
Lazarus, Azure Chip Consultants, and Search
January 8, 2010
A person called me today to tell me that a consulting firm is not accepting my statement “Search is dead”. Then I received a spam email that said, “Search is back.” I thought, “Yo, Lazarus. There be lots of dead search vendors out there. Example: Convera.”
Who reports that search has risen? An azure chip consultant! Here’s what raced through my addled goose brain as I pondered the call and the “search is back” T shirt slogan:
In 2006, I was sitting on a pile of research about the search market sector. The data I collected included:
- Interviews with various procurement officers, search system managers, vendors, and financial analysts
- My own profiles of about 36 vendors of enterprise search systems plus the automated content files I generate using the Overflight system. A small scale version is available as a demo on ArnoldIT.com
- Information I had from my work as a systems engineering and technical advisor to several governments and their search system procurement teams
- My own experience licensing, testing, and evaluating search systems for clients. (I started doing this work after we created The Point (Top 5% of the Internet) in 1993 and sold it to Lycos, a unit of CMGI. I figured I should look into what Lycos was doing so I could speak with authority about its differences from BRS/Search, InQuire, Dialog (RECON), and IBM STAIRS III. I had familiarity with most of these systems through various projects in my pre-Point life.)
- My Google research funded by the now-defunct Bear Stearns outfit and a couple of other well heeled organizations.
What was clear in 2006 was the following:
First, most of the search system vendors shared quite a bit of similarity. Despite the marketing baloney, the key differentiators among the flagship systems in 2006 were minor, ranging from their basic architecture to their use of stemming and their methods of updating indexes. There were innovators, and I pointed out these companies in my talks and various writings, including the three editions of the Enterprise Search Report I wrote before I fell ill in February 2007 and quit doing that big encyclopedia type publication. These similarities made it very clear to me that innovation for enterprise search was shifting from the plain old key word indexing of structured records available since the advent of RECON and STAIRS to a more freeform approach with generally lousy relevance.
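Stemming, one of those “minor” differentiators, reduces inflected words to a common root so a query for “indexing” also matches “indexes” and “indexed”. A minimal sketch using NLTK’s Porter stemmer; the library choice is mine for illustration, since each vendor rolled its own variation:

```python
from nltk.stem import PorterStemmer   # pip install nltk

stemmer = PorterStemmer()
for word in ["indexing", "indexes", "indexed", "relevance", "relevant"]:
    print(word, "->", stemmer.stem(word))
# All three index* forms collapse to "index", so a single index entry
# serves every inflected form of the query term.
```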
Get information access wrong, and some folks may find a new career. Source: http://www.seeing-stars.com/Images/ScenesFromMovies/AmericanBeautyMrSmiley%28BIG%29.JPG
Second, the more innovative vendors were making an effort in 2006 to take a document and provide some sort of context for it. Without a human indexer to assign a classification code to a document that is about marketing but does not contain the word “marketing”, this was rocket science. But when I examined these systems, there were two basic approaches, both of which are still around today. The first was to use statistical methods to group documents together and make inferences; the other was a variation on human indexing but without humans doing most of the work. The idea was that the word list for a concept would contain synonyms. There were promising demonstrations of software methods that could “read” a document, but these were piggy and of use only where money was no object.
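A crude rendering of that second approach: the word list for a concept carries its synonyms, so a document about marketing that never says “marketing” can still be tagged. A minimal sketch with a toy vocabulary I made up:

```python
# Toy controlled vocabulary: concept -> synonym list (invented for illustration).
VOCABULARY = {
    "marketing": {"branding", "promotion", "advertising", "campaign", "outreach"},
    "finance": {"budget", "revenue", "earnings", "forecast"},
}

def classify(text):
    """Tag a document with every concept whose synonyms appear in it."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    return [concept for concept, synonyms in VOCABULARY.items() if words & synonyms]

document = "The Q3 campaign lifted branding reach and promotion spend."
print(classify(document))   # ['marketing'] despite no literal 'marketing'
```

The catch, then as now, is that someone has to build and maintain those word lists, which is where the money went.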
Third, the Google approach, which used social methods—that is, a human clicking on a link—was evident but not migrating to the enterprise world. Google was new, and to make its 2006 method hum, lots of clicks were needed. In the enterprise, most documents never get clicked, so the 2006 Google method was truly lousy there. Google has made improvements, mostly by implementing the older search methods, not by pushing the envelope as it has been doing with its Web search and dataspace efforts.
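To see why click data starves in the enterprise, consider a toy click-boosted relevance score (my own illustrative formula, not Google’s): with zero clicks the boost term goes silent, which is the normal case for an enterprise document.

```python
import math

def score(text_relevance, clicks):
    # Click evidence multiplies the base text score; zero clicks means
    # a multiplier of 1.0, i.e., no social signal at all.
    return text_relevance * (1.0 + math.log1p(clicks))

print(score(0.8, 0))     # 0.8   -- the typical never-clicked enterprise file
print(score(0.8, 500))   # ~5.77 -- a heavily clicked public Web page
```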
Fourth, most of the search vendors were trying like Dickens to get out of a “one size fits all” approach to enterprise search. Companies making sales were focusing on a specific niche or problem and selling a package of search and content processing that solved one problem. The failure of the boil-the-ocean approach was evident because user satisfaction data from my research, funded by a government agency and other clients, revealed that about two thirds of the users of an enterprise search system were dissatisfied or very dissatisfied with that search system. The solution, then, was to focus. My exemplary case was the use of the Endeca technology to allow Fidelity UK sales professionals to increase their productivity with content pushed to them by the Endeca system. The idea was that a broker could click on a link and the search results were displayed. No searching required. ClearForest got in the game by analyzing dealer warranty repair comments. Endeca and ClearForest were harbingers of focus. ClearForest is owned by Thomson Reuters and is in the open source software game too.
When I wrote the article in Online Magazine for Barbara Quint, one of my favorite editors, I explained these points in more detail. But it was clear that the financial pressures on Convera, for example, and the difficulty some of the more promising vendors like Entopia were having made the thin edge of survival glint in my desk lamp’s light. Autonomy by 2006 had shifted from search and organic growth to inorganic growth fueled by acquisitions that were adjacent to search.
IBM Jumps on a Bandwagon It Fell Off Earlier
January 6, 2010
Intelligent Enterprise reported that IBM is in the voice-of-the-customer game. I thought IBM was already in the voice-of-the-customer game. Goes to show what I know. The article “IBM Launches Voice-of-the-Customer Analytic Service” reveals all. The “all” is another of those glorious umbrella service offerings that made sense in the 1970s. Today, IBM is mostly a consulting firm with a backpack stuffed full of technology, open source programs, and consultants. In my opinion, the most interesting comment in the write up was:
VOCA has been in pilot deployment mode for nearly a year, according to English [IBM wizard], and tests have ranged from daily to monthly reporting scenarios. By year end, IBM plans to add text analytic and transcription and translation services for the major European languages and Arabic. In the first half of next year, the VOCA service will add speech-to-text technologies that will enable customers to mine customer support calls and other audio recordings.
Ah, the most recent customer support package will be forthcoming by the end of the year. To test this, I navigated to IBM.com and ran these queries:
- VOCA
- voice of the customer
- customer support systems
Here’s what I learned from these queries, but I urge you to run your own searches too:
VOCA
I received pointers to VOCA or voice of the customer analytics. The active link was to the December 16, 2009, news release which seemed to presage some of the Intelligent Enterprise comments. There were other links as well, including:
- A link to another news release
- Pointers to a “cross industry” news release here
- A link to Streamline Business Processes, which I did not understand, though it carried another link to a page of more news releases.
Okay, I get it. The search system indexes news releases. Not what I expected but I accept that marketing is more important than some other functions at IBM.
“Voice of the Customer” as a Bound Phrase
I got a different result set than I did for VOCA. The set was only 26 hits, and the first hit was a news release. The second and third hits pointed to an older news release and to the VOCA news release again. Still no substantive content.
“Customer Support System” as a Bound Phrase
I got one hit, to the Customer Support Newsletter dated 2007 Q2. I thought this VOCA stuff was new. Guess I was correct when I perceived the story in Intelligent Enterprise as another marketing attempt by IBM to look relevant. Obviously the PDF newsletter did not make any sales.
Hopefully, IBM will find a way to make its actual products and services findable on its Web site. The present method of putting out a news release, getting a publication to parrot the information, and then sending a goose like me to the IBM Web site looking for concrete information is not sufficient.
Stephen E. Arnold, January 6, 2010
Oyez, oyez, a freebie. When Washington DC returns to work, I will report this sad state of affairs to the Bureau of Labor Statistics, which cares about productivity such as that evidenced by IBM’s marketing and Web search teams.
Google Press, O’Reilly, and a Possible Info Discontinuity
January 4, 2010
Google’s book on HTML5 is moving along. Soon it will be available for sale. At that moment, a seismic shock will ripple through the already Jello-like world of traditional publishing. Oh, if you don’t know about the Google Press imprint, you can catch up on your reading by looking at:
- HTML5’s rel=”noreferrer”
- A version of the “book’s title page”
- Mr. Pilgrim’s own statement
For a more robust discussion of the tools Google will use as it solves the copyright problem for new, significant content, check out Google: The Digital Gutenberg, September 2009. Better yet, write me at seaky2000 at yahoo dot com and inquire about a 90 minute briefing on Google’s publishing technology and the disruptions these technologies are likely to let loose in 2010.
First, let me provide some context.
In Google: The Digital Gutenberg I pointed out that Google’s infrastructure works like a digital River Rouge. Put stuff in at one end and things come out the other. The steady progress of Google toward a clean, tidy solution to copyright hassles is for Google to become a publisher. What goes in at one end are content objects and what comes out the other can be just about anything Google can program its manufacturing system to produce.
Now I know that the publishers want Google to [a] quit being Google, which is tough since the Google is little more than a manifestation of technology anybody could have glued together 11 years ago, [b] subsidize publishers so the arbiters of what’s smart and what’s stupid can continue as museum curators of information, and [c] give publishers some of the profits from advertising so publishers can shop for white shoes and vintage motor yachts.
Google uses algorithms like a fishmonger to convert the beastie into tasty, easily sold fillets. Image source: http://www.fishingkites.co.nz/cleaning-fish/filleting_fish/fillet_2.jpg
The solution is simpler. When Google signs up an author, Google offers terms. The author takes the terms or leaves the terms. Now the Google does not go quickly into that good night. The Google takes baby steps. Google has a fondness for Tim O’Reilly, and it supports a number of O’Reilly ventures, including the somewhat interesting Government 2.0 conference.
An Original Aggregator Teeters on the Brink
December 28, 2009
I sat on this write up for about a week. I read the December 20, 2009, “Revised and Condensed” write up in the New York Times. I don’t know if the piece is available online because I don’t use traditional media’s online services. I am more interested in how the traditional print newspapers and magazines to which I subscribe present information about the challenges consumer publishing in the US faces.
For your information, I ran a quick query before scheduling this write up for release on Beyond Search on December 28, 2009, and, to my surprise, this link on the New York Times’s Web site worked. Glory be!
My plan for this write up is to highlight some of the more striking points set forth in the article with the subtitle “A Reader’s Digest That Grandma Never Dreamed Of.” I won’t point out that Ms. Sperling, my anti-Arnold English teacher in high school, would have given the headline writer an F and inked in red: “A Reader’s Digest about Which Grandma Never Dreamed.” But why fiddle around with the small stuff when the overall point of the article is of larger import? I will comment on that at the end of this short write up.
Now that you have the plan of attack, let’s look at the passages I found interesting.
This sentence captures exquisitely the decay, the loss of a future, and the end of a traditional information company:
Walking the hallways now, it’s hard to imagine the bustle. More than half of the building is empty, a ghostly warren of empty cubicles and unused bathrooms. You can walk for long stretches without seeing anyone. A stand-alone brick addition has been condemned because of mold, a company spokesman said.
Ms. Sperling would have inked a circle around “mold” and written, “You have confused a frame or model with a saprotrophic fungus.” Man, she was a picky one. She wanted the word spelled “mould”.
Image source: http://i.pbase.com/g5/61/391661/2/67960731.z5oyTWFv.jpg
Preliminary List of Beyond Search Evaluated Social Search Systems
December 23, 2009
The goslings and I had some disagreements about what to include and what to exclude. If you read my column in Incisive Media’s Information World Review, I have mentioned many of these systems. In London earlier this month a person asked me to run a table of the social search systems. I anticipate that a large number of azure chip consultants, poobahs, satraps, and SEO mavens will have a field day recycling these links. The addled goose is too old and too uninterested to honk much about short cuts.
As with our list of European enterprise search vendors, we will add to this list over time. I will not include my ratings for each system in this list. I have not decided about using my goose ratings as part of the Overflight service or one of the listings on my archive Web site. If you don’t agree with a site’s inclusion or if you have a site to suggest, use the comments section of the Web log. There will be some weird breaks and spacing issues. WordPress often baffles me with its handling of table code. If the breaks annoy you, the addled goose says, “Create your own list.” Honk.
Will Mr. Google Rustle the Adobe Cash Cow?
December 18, 2009
I think most business intelligence write ups are dull. Corporate catastrophes can be fun! Just ask Bain, Boston Consulting Group, and other blue chip firms. I want to give you a glimpse of another Google disruption that is not in the “Sergey and Larry eat pizza” books. The information in this write up comes from open sources. The difference between this analysis of a single Google invention and telling anecdotes about advertising is that the Google is poised to put some major pain on some large outfits in a business sector not generally associated with Google. In this article, I refer to Google as Mr. Google and Googzilla. I find that making light of what may be one of the more significant capabilities of this company is fun for me. Enjoy. Oh, if you are annoyed by my writing style, may I remind you that this is my personal Web log and it is available to you for free. Therefore, don’t write to complain about my approach; just go read something more appetizing to you.
Anyone remember Andy Hertzfeld? Earlier this year, the New York Times pointed out that Mr. Hertzfeld, “who helped develop the original Macintosh and now works at Google”, was a signal that Mr. Google was looking for different cash cows. Graphical interfaces and related software wizardry are nothing new to Mr. Google. But Mr. Hertzfeld is a bit like Vint Cerf or Jeff Dean. These are humans with brains that dwarf the addled goose’s pitiable gray matter. Mr. Hertzfeld is a wizard. In addition to the Macintosh work, he co-founded General Magic and then, in 1999, Eazel. He donned his Google T shirt in 2005. Not exactly an average Googler, but you get the idea that Mr. Hertzfeld has some graphics savvy amidst the Haskell crowd.
So, what’s Mr. Hertzfeld doing at Googzilla’s magic factory? Picasa? An in-browser image editor? I don’t know much, but I do know how to look at certain types of open source information. A recent example is US Patent 7631252, filed in July 2006. The title is “Selective Image Editing in a Browser”. To give you some context for Mr. Hertzfeld’s interests, he has a patent called “Graphical User Interface for Navigating between Levels Displaying Hallway and Room Metaphors.” After looking at these two documents, my hunch is that the Google wants to visit the feedlot where Adobe’s cash cow Photoshop is getting fat.
You can read these documents and draw your own conclusion, but I am going to snap this invention into my Google capabilities matrix under “Graphics Disruption”. Hey, I am an addled goose, so those folks with image editing systems that run on the desktop or in the cloud can tell me I am off base. No problemo.
But, just for fun, let’s look at what the crystal clear prose of US7631252 tries to communicate.
Here’s the abstract:
Methods, tools, and systems are provided for editing an image in a browser. One method provides editing an image in a browser including maintaining a list of transformations applied to the image including a last transformation, receiving a selection from a user to rollback a transformation, the selection not including the last transformation, generating a unique identifier associated with the edited image without the selection and requesting a page using the unique identifier.
Not too exciting, right?
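Translated out of patent-speak, the abstract describes an ordered edit history that can be selectively unwound, plus a content-derived identifier for each resulting image state. Here is a minimal sketch of that mechanism; every name in it is my invention, not Google’s code:

```python
import hashlib

class SelectiveEditor:
    """Illustrative only: an edit list with selective rollback, loosely
    following the abstract of US7631252."""

    def __init__(self, original_pixels):
        self.original = original_pixels   # the pristine source image
        self.edits = []                   # ordered (name, function) history

    def apply(self, name, fn):
        self.edits.append((name, fn))

    def rollback(self, name):
        # The claim language excludes rolling back the last transformation.
        if self.edits and self.edits[-1][0] == name:
            raise ValueError("selection must not include the last transformation")
        self.edits = [edit for edit in self.edits if edit[0] != name]

    def render(self):
        # Replay the surviving edits against the original, so an edit in
        # the middle of the history can vanish cleanly.
        pixels = self.original
        for _, fn in self.edits:
            pixels = fn(pixels)
        return pixels

    def unique_identifier(self):
        # A stable identifier for this edit state, usable in a page request.
        history = ",".join(name for name, _ in self.edits)
        return hashlib.sha1(history.encode()).hexdigest()[:12]

editor = SelectiveEditor(original_pixels=[0, 64, 128])
editor.apply("brighten", lambda px: [min(255, p + 40) for p in px])
editor.apply("invert", lambda px: [255 - p for p in px])
editor.rollback("brighten")            # undo an edit that is not the last one
print(editor.render(), editor.unique_identifier())
```

Note that replaying the whole history from the original on every selective rollback is exactly the kind of computationally intensive, storage-hungry behavior I flag below.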
Now Mr. Google employs a junior poobah named Cyrus. This bright lad insists that I create illustrations for my books and lectures using Photoshop. The reason for this interesting assertion is that Cyrus does not read patent documents. Here’s a Google illustration that supports the patent:
If you know about online image editing, you can figure out that the simplified interface supports a number of controls. The feature seems to be that behind the “simple” facade are some Photoshop-like functions.
What makes the patent interesting to me is that Mr. Google is supporting some computationally intensive and storage gobbling functions. Browser-based rollback is one example.
The other aspect of the invention I noted was that there is some smart software clanking around in the background. One quick example is the auto-recognition capability that invokes certain functions. Mr. Google provides 21 claims for this invention. Most of them till earth that other image editing outfits have trampled into hard packed clay. A couple of them are going to allow Mr. Google to exert some disruptive force in the image editing markets.
To put this in some perspective, Mr. Google has a vector capability. Mr. Google has a bitmap editing capability. Mr. Google has a plan for something. I wonder if there is a confection called the “creative sweet” in Mr. Google’s candy shop.
Stephen E. Arnold, December 18, 2009
Oyez, oyez, I want to report to the Jet Propulsion Lab that I was not paid to write about this invention, the Googler who does not read patents, or the coming pressure on the kids from Adobe. I would like to get paid for this type of serious patent analysis. I won’t even get a lump of coal for Christmas.