In-Q-Tel Pumps Cash into Visible Technologies

October 21, 2009

Overflight snagged a news item called “Visible Technologies Announces Strategic Partnership with In-Q-Tel”. In-Q-Tel is the investment arm of one unit of the US government. Visible Technologies is a content processing company that ingests Web logs, Tweets, and other social content and extracts information from these data.

The company said:

Through its comprehensive solution set, Visible Technologies helps organizations adopt new ways of gaining actionable insight from social media conversations. By using Visible Technologies’ platform, organizations both big and small can harness business intelligence derived from social media data to drive strategic direction and tactical implementation of marketing initiatives, improve the customer experience and grow business. Visible Technologies’ end-to-end suite, powered by the truCAST engine, encompasses global features that enable real-time visibility into online social conversations regardless of where dialogue is occurring. Additionally, the company’s truREPUTATION solution is a best-in-class online reputation management service that provides both individuals and brands an effective way to repair, protect and proactively promote their reputation in search engine results.

The company is no spring chicken. Founded in 2003, Visible Technologies has a range of monitoring, reputation, and content analysis tools. The firm’s social media monitoring system is a newer weapon in the company’s arsenal. With police and intelligence agencies struggling to deal with social media, an investment in a firm focusing on this type of content makes clear that the US government wants to keep pace with these content streams.
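Visible Technologies does not disclose how truCAST works, so treat the following as a toy sketch of the basic ingest-and-extract loop such systems perform: scan a batch of social posts for brand mentions and tally the words that co-occur. Every name and all sample data below are invented for illustration.

```python
import re
from collections import Counter

# Toy illustration of social content monitoring: find posts that
# mention a brand, then tally co-occurring terms. Real systems such
# as truCAST are far more sophisticated; nothing here reflects
# Visible Technologies' actual implementation.

STOPWORDS = {"the", "a", "is", "and", "to", "of", "my", "it"}

def mentions(posts, brand):
    """Return posts mentioning the brand and a count of nearby terms."""
    hits = [p for p in posts if brand.lower() in p.lower()]
    terms = Counter(
        word
        for p in hits
        for word in re.findall(r"[a-z']+", p.lower())
        if word not in STOPWORDS and word != brand.lower()
    )
    return hits, terms

posts = [
    "The new Acme widget is terrible",
    "Loving my Acme widget!",
    "Weather is nice today",
]
hits, terms = mentions(posts, "Acme")
print(len(hits), "mentions;", terms.most_common(3))
```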

Stephen Arnold, October 21, 2009

Guha and the Google Trust Method Patent

October 16, 2009

I am a fan of Ramanathan Guha. I had a conversation not long ago with a person who doubted the value of my paying attention to Google’s patent documents. I can’t explain why I find these turgid, chaotic, and cryptic writings of interest. I read stuff about cooling ducts and slugging ads into anything that can be digitized, and I yawn. Then, oh, happy day. One of Google’s core wizards works with attorneys and a meaningful patent document arrives in Harrod’s Creek goose nest.

Today is such a day. The invention is “Search Result Ranking Based on Trust” which you can read courtesy of the ever reliable USPTO by searching for US7,603,350 (filed in May 2006). Dr. Guha’s invention is described in this patent in this way:

A search engine system provides search results that are ranked according to a measure of the trust associated with entities that have provided labels for the documents in the search results. A search engine receives a query and selects documents relevant to the query. The search engine also determines labels associated with selected documents, and the trust ranks of the entities that provided the labels. The trust ranks are used to determine trust factors for the respective documents. The trust factors are used to adjust information retrieval scores of the documents. The search results are then ranked based on the adjusted information retrieval scores.
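The abstract compresses several steps into one paragraph. A minimal sketch of the flow it describes may help; note that the data structures, weights, and blending formula below are my inventions for illustration, not disclosures from the patent.

```python
# Sketch of the ranking flow in the abstract of US 7,603,350.
# Everything here (names, weights, the blending formula) is invented
# for illustration; the patent does not disclose these specifics.

trust_rank = {"mayoclinic": 0.9, "randomblog": 0.2}   # per labeling entity
labels = {  # document -> list of (label, labeling entity) pairs
    "doc1": [("reliable", "mayoclinic")],
    "doc2": [("reliable", "randomblog")],
}
ir_score = {"doc1": 0.55, "doc2": 0.70}  # base information-retrieval scores

def trust_factor(doc):
    """Average the trust ranks of the entities that labeled the document."""
    entities = [e for _, e in labels.get(doc, [])]
    if not entities:
        return 0.5  # neutral factor for unlabeled documents
    return sum(trust_rank[e] for e in entities) / len(entities)

def adjusted(doc):
    # One plausible blend: scale the IR score by the trust factor.
    return ir_score[doc] * (0.5 + trust_factor(doc))

ranked = sorted(ir_score, key=adjusted, reverse=True)
print(ranked)  # doc1 outranks doc2 despite a lower raw IR score
```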

Now before you email me and ask, “Say, what?”, let me make three observations:

  • The invention is a component of a far larger data management technology initiative at Google. The implications of the research program are significant and may disrupt the stressed world of traditional RDBMS vendors at some point.
  • The notion of providing a “score” that signals the “reliability” or lack thereof is important in consumer searches, but it has some interesting implications for other sectors; for example, health.
  • The plumbing to perform “trust” scoring on petascale data flows gives me confidence to assert that Microsoft and other Google challengers are going to have to get in the game. Google is playing 3D chess and other outfits are struggling with checkers.

You can read more about Dr. Guha in my Google Version 2.0. He gets an entire chapter (maybe 30 pages of 10 pt type) for a suite of inventions that make it possible for Google to be the “semantic Web”. Clever company, brilliant guy, Guha is.

Stephen Arnold, October 16, 2009

LexisNexis Jumps on Semantic Bandwagon

October 15, 2009

Pure Discovery, a Dallas-based search and content processing company, has landed a mid-sized tuna, LexisNexis. Owned by publishing giant Reed Elsevier, LexisNexis faces some strong downstream water. The $1 billion plus operation is paddling its dugout canoe upstream. Government agencies, outfits like Gov Resources, and the Google are offering products and services that address the squeals from law firms. What is the cause of the legal eagle squeaks? The cost of running searches on commercial online services like LexisNexis and Westlaw, and Questel, among others. Clients are putting caps on some law firm expenditures. Even white shoe outfits in New York and Chicago are feeling the pinch.

I saw one short news item about this tie up in an article in Search Engine Watch.

Patent searching is a particularly exciting field of investigation. If you click over to the responsive USPTO, you can search patents for free. Tip: Print out the search hints before you begin. I am not sure who is responsible for this wonderful search system, but it is a wonder.

Semantic technology along with other sophisticated content processing tools can make life a little – notice the word “little” – easier for those conducting patent research. Even the patent examiners have to use third party systems because the corpus of the USPTO is a bit like a buggy without a horse in my opinion.

The company that LexisNexis tapped to provide its semantic technology is Pure Discovery in Dallas, Texas. I had one reference to the firm in my Overflight service and that was to an individual named Adam Keys, Twitter name therealadam. Mr. Keys left Pure Discovery in 2006 after two years at the company. I had a handwritten note to the effect that venture funding was provided in part by Zon Capital Partners in Princeton, New Jersey. I have little detail about how the Pure Discovery system works.

Here’s a description of the company I pulled from Zon’s Web site:

Pure Discovery (Dallas, TX) has developed enterprise semantic web software. Its offering combines automated semantic discovery with a peer networking architecture to transform static networks into dynamic ecosystems for knowledge discovery.

I snagged a few items from the firm’s Web site.

The product line up consists of KnowledgeGraph products. This information is from the Pure Discovery Web site here. The line up includes:

  • PD BrainLibrary (“BrainLibrary is a breakthrough technology that harnesses the collective intelligence of organizations and their people in ways that have never been possible before.”)
  • PD Transparent Concept Search (“PD Concept Search has completely removed the top off the black box and for the first time ever, users are not only able to see what has been learned by the system, but also use our QueryCloud application to control it.”)
  • PD QueryCloud Visual Query Generator (“QueryCloud then lets users control what terms or phrases are used, not used, emphasized or de-emphasized. All with the simple click of a button.”)
  • PD Clustering (“PD Clustering dynamically orders similar documents into clusters enabling users to browse data by semantically related groups rather than looking at each individual document. PD Clustering is fast enough to cluster even the largest of document populations with a benchmark of over 80 million pages clustered in a 48 hr period on a single machine.”)
  • PD Near-Dupe Identification (“PureDiscovery’s Near-Dedupe Identification Engine provides instant value to any application by detecting and grouping near duplicate documents. Identifying documents with these slight variances results in dramatic savings in time wasted looking at the same document again and again.”)
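Pure Discovery publishes no algorithmic details, so treat the following as a generic illustration of near-duplicate grouping, a technique commonly built on word-shingle overlap. The threshold and sample documents are invented; this is not the PD engine.

```python
# Toy near-duplicate detector using word-shingle Jaccard similarity.
# Pure Discovery does not document its method; this is a generic
# illustration of the technique, not the PD engine.

def shingles(text, k=3):
    """Return the set of k-word shingles for a text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b)

docs = {
    "d1": "the quarterly report shows strong growth in revenue",
    "d2": "the quarterly report shows strong growth in profits",
    "d3": "an unrelated memo about the office picnic",
}
sh = {name: shingles(text) for name, text in docs.items()}
for a in docs:
    for b in docs:
        if a < b:
            score = jaccard(sh[a], sh[b])
            if score > 0.5:  # invented threshold
                print(a, b, "near-duplicates", round(score, 2))
```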

The company also offers its Transparent Concept Search Query Cloud.

The software is available for specific vertical markets and niches; for example, litigation support, “human capital management” (maybe human resources or knowledge management?), intellectual property, and homeland security and defense.

These are sophisticated functions. I look forward to examining the LexisNexis patent documents using this new tool. Perhaps LexisNexis has found a software bullet to kill the beasties chewing into its core business. If not, LexisNexis will face that rushing torrent without a paddle.

As more information flows to me, I will update this write up.

I wrote this short post without so much as a thank you from anyone.

Stephen Arnold, October 15, 2009

Exclusive Interview with CTO of BrightPlanet Now Available

October 13, 2009

William Bushee, BrightPlanet’s Vice President of Development and the company’s chief technologist, spoke with Stephen E. Arnold. The exclusive interview appears in the Search Wizards Speak series. Mr. Bushee was among the first search professionals to tackle Deep Web information harvesting. The “Deep Web” refers to content that traditional Web indexing systems cannot access. Deep Web sites include most major news archives as well as thousands of specialized sources. These sources typically represent the best, most definitive content sources for their subject areas. For example, in the health sciences field, the Centers for Disease Control, National Institutes of Health, PubMed, Mayo Clinic, and American Medical Association are all Deep Web sites, often inaccessible to conventional Web crawlers like Google’s and Yahoo’s. BrightPlanet supported the ArnoldIT.com analysis of the firm’s system. As a result of this investigation, the technology warranted an in depth discussion with Mr. Bushee.

The wide ranging interview focuses on BrightPlanet’s search, harvest, and OpenPlanet technology. Mr. Bushee told Search Wizards Speak: “As more information is being published directly to the Web, or published only on the Web, it is becoming critical that researchers and analysts have better ways of harvesting this content.”

Mr. Bushee told Search Wizards Speak:

There are two distinct problems that BrightPlanet focuses on for our customers. First we have the ability to harvest content from the Deep Web. And second, we can use our OpenPlanet framework to add enrichment, storage and visualization to harvested content. As more information is being published directly to the Web, or published only on the Web, it is becoming critical that researchers and analysts have better ways of harvesting this content. However, harvesting alone won’t solve the information overload problems researchers are faced with today. The answer to a research project cannot be simply finding 5,000 raw documents, no matter how good they are. Researchers are already overwhelmed with too many links from Google and too much information in general. The answer needs to be better harvested content (not search), better analytics, better enrichment and better visualization of intelligence within the content – this is where BrightPlanet’s OpenPlanet framework comes into play. While BrightPlanet has a solid reputation within the Intelligence Community helping to fight the “War on Terror”, our next mission is to be known as the commercial and academic leader in harvesting relevant, high quality content from the Deep Web for those who need content for research, business intelligence or analysis.
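Deep Web content sits behind query forms rather than static links, which is why conventional crawlers miss it. Here is a minimal sketch of form-based harvesting; the URL and parameter names are hypothetical placeholders, and BrightPlanet's actual harvester is far more elaborate than this.

```python
import urllib.parse
import urllib.request

# Minimal sketch of Deep Web harvesting: instead of following links,
# submit queries to a source's search form and collect the result
# pages. The URL and field name below are hypothetical placeholders,
# not BrightPlanet's technology or any real endpoint.

def harvest(base_url, query_field, queries):
    """Submit each query to the source's search form; return raw pages."""
    pages = []
    for q in queries:
        params = urllib.parse.urlencode({query_field: q})
        with urllib.request.urlopen(f"{base_url}?{params}", timeout=30) as resp:
            pages.append(resp.read().decode("utf-8", errors="replace"))
    return pages

# Example usage (placeholder URL; substitute a real search endpoint):
# pages = harvest("https://example.gov/search", "term", ["h1n1", "influenza"])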

You can read the full text of the interview at http://www.arnoldit.com/search-wizards-speak/brightplanet.html. More information about the company’s products and services is available at http://www.brightplanet.com. Mr. Bushee’s technology has gained solid support from some professional researchers and intelligence agencies. BrightPlanet has moved “beyond search” with its suite of content processing technology.

Stephen Arnold, October 13, 2009

Google on Path to Becoming the Internet

September 28, 2009

I thought I made Google’s intent clear in Google Version 2.0. The company provides a user with access to content within the Google index. The inventions reviewed briefly in The Google Legacy and in greater detail in Google Version 2.0 explain that information within the Google data management system can be sliced, diced, remixed, and output as new information objects. The process is similar to what an MBA does at Booz, McKinsey, or any other rental firm for semi-wizards. Intakes become high value outputs. I was delighted to read Erick Schonfeld’s “With Google Places, Concerns Rise that Google Just Wants to Link to Its Own Content.” The story makes clear that folks are now beginning to see that Google is a digital Gutenberg and a different type of information company. Mr. Schonfeld wrote:

The concerns arise, however, back on Google’s main search page, where Google is indexing these Places pages. Since Google controls its own search index, it can push Google Places more prominently if it so desires. There isn’t a heck of a lot of evidence that Google is doing this yet, but the mere fact that Google is indexing these Places pages has the SEO world in a tizzy. And Google is indexing them, despite assurances to the contrary. If you do a search for the Burdick Chocolate Cafe in Boston, for instance, the Google Places page is the sixth result, above results from Yelp, Yahoo Travel, and New York Times Travel. This wouldn’t be so bad if Google wasn’t already linking to itself in the top “one Box” result, which shows a detail from Google Maps. So within the top ten results, two of them link back to Google content.
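The “two of the top ten” observation is easy to reproduce mechanically. A toy tally over a list of result URLs follows; the URLs are illustrative, not a real results page.

```python
from urllib.parse import urlparse

# Toy check of "how many top results point back at Google content."
# The result list is illustrative, not an actual SERP.

results = [
    "http://maps.google.com/place/burdick",
    "http://www.yelp.com/biz/burdick",
    "http://www.google.com/places/burdick-chocolate",
    "http://travel.yahoo.com/boston",
]

def is_google(url):
    host = urlparse(url).netloc.lower()
    return host == "google.com" or host.endswith(".google.com")

self_links = sum(1 for u in results if is_google(u))
print(f"{self_links} of top {len(results)} results link to Google content")
```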

Directories are variants of vertical search. Google is much more than rich directory listings.

Let me give one example, and you are welcome to snag a copy of my three Google monographs for more examples.

Consider a deal between Google and a mobile telephone company. The users of the mobile telco’s service run a query. The deal makes it possible for the telco to use the content in the Google system. No query goes into the “world beyond Google”. The reason is that Google and the telco gain control over latency, content, and advertising. This makes sense. Let’s assume that this is a deal that Google crafts with an outfit like T Mobile. Remember: this is a hypothetical example. When I use my T Mobile device to get access to the T Mobile Internet service, the content comes from Google with its caches, distributed data centers, and proprietary methods for speeding results to a device. In this example, as a user, I just want fast access to content that is pretty routine; for example, traffic, weather, flight schedules. I don’t do much heavy lifting from my flakey BlackBerry or old person hostile iPhone / iTouch device. Google uses its magical ability to predict, slice, and dice to put what I want in my personal queue so it is ready before I know I need the info. Think “I am feeling doubly lucky”, a “real” patent application by the way. T Mobile wins. The user wins. The Google wins. The stuff not in the Google system loses.
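The “personal queue” idea is essentially predict-and-prefetch caching. Here is a toy sketch under the hypothetical deal described above; the class, data, and prediction rule are all invented for illustration and reflect no actual Google or telco architecture.

```python
import time

# Toy predict-and-prefetch cache for the hypothetical telco scenario:
# based on a user's past queries, fetch likely answers into a local
# cache before the user asks. Purely illustrative.

FAKE_CONTENT = {
    "traffic": "I-64 clear",
    "weather": "68F, cloudy",
    "flights": "SDF on time",
}

def fetch(topic):
    time.sleep(0.1)  # stand-in for a slow upstream fetch
    return FAKE_CONTENT.get(topic, "no data")

class PrefetchCache:
    def __init__(self, history):
        # Predict: assume the user repeats recent topics; warm the cache.
        self.cache = {topic: fetch(topic) for topic in set(history)}

    def lookup(self, topic):
        # A cache hit is instant; a miss falls back to the slow fetch.
        if topic not in self.cache:
            self.cache[topic] = fetch(topic)
        return self.cache[topic]

user = PrefetchCache(history=["traffic", "weather", "traffic"])
print(user.lookup("traffic"))  # served from the prefetched cache
```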

Interesting? I think so. But the system goes well beyond directory listings. I have been writing about Dr. Guha, Simon Tong, Jeff Dean, and the Halevy team for a while. The inventions, systems and methods from this group have revolutionized information access in ways that reach well beyond local directory listings.

The Google has been pecking away for 11 years, and I am pleased that some influential journalists / analysts are beginning to see the shape of the world’s first trans-national information access company. Google is the digital Gutenberg and well into the process of moving info and data into a hyper state. Google is becoming the Internet. If one is not “in” Google, one may not exist for a certain sector of the Google user community. Googleo ergo sum.

Stephen Arnold, September 28, 2009

Google Waves Build

September 24, 2009

I am a supporter of Wave. I wrote a column about Google WAC-ing the enterprise. W means Wave; A is Android; and C represents Chrome. I know that Google’s consumer focus is the pointy end of the Google WAC thrust, but more information about Wave is now splashing around my webbed feet here in rural Kentucky. You can take a look at some interesting screenshots plus commentary in “Google Wave Developer Preview: Screenshots.” Perhaps you will assert, “Hey, addled goose, this is not search.” I reply, “Oh, yes, it is.” The notion of eye candy is like lipstick on a pig. Wave is a new animal that will carry you part of the way into dataspace.

Stephen Arnold, September 24, 2009

What If Google Books Goes Away?

September 21, 2009

I had a talk with one of my partners this morning. The article in TechRadar “Google Books Smacked Down by US Government” was the trigger. This Web log post captures the consequences portion of our discussion. I am not sure Google, authors, or any other pundit embroiled in the dust up over Google Books will agree with these points. That’s okay. I am capturing highlights for myself. If you have forgotten this function of this Beyond Search Web log, quit reading or look at the editorial policy for this marketing / diary publication.

Let’s jump into the discussion in medias res. The battle is joined, and at this time, Google is on the defensive. Keep in mind that Google has been plugging away at this Google Book “project” since 2000 or 2001, when it made a key hire from Caere (now folded into Nuance) to add a turbo charge to the Books project.


Who is David? Who is Goliath?

With nine years of effort under its belt, Google will get a broken snout if the Google Books project stops. Now, let’s assume that the courts stop Google. What might happen?

First, Google could just keep on scanning. Google lawyers will do lawyer-type things. The wheels of justice will grind forward. With enough money and lawyers, Google can buy time. Let’s face it. Publishers could run out of enthusiasm or cash. If the Google keeps on scanning, discourse will deteriorate, but the acquisition of data for the Google knowledge base and for Google repurposing keeps on keeping on.

Second, Google might comply: shut up shop and go directly to authors with an offer to buy rights to their work. I have four or five publishers right now. I would toss them overboard for a chance to publish my next monograph on the Google system, let Google monetize it any way it sees fit, and give me a percentage of the revenue. Heck, if I get a couple of hundred a month from the Google, I am ahead of the game. Note this: none of my publishers are selling very many expensive studies right now. The for fee columns I write produce a pittance as well. One publisher cut my pay by 30 percent as part of a shift to a four day week and a trimmed publishing schedule. Heck, I love my publishers, but I love an outfit that pays money more. I think quite a few authors would find publishing on the Google Press most interesting. If that happens, the Google Books project has a gap, but going forward, Google has the info, and the publishers and non participating authors have a different type of competitive problem.

Third, Google cuts a new deal, adjusts the terms, and keeps on scanning books. Google’s management throws enough bird feed to the flock. Google is secure in its knowledge that the future belongs to a trans-national digital information platform stuffed with digital information of various types. No publisher or group of publishers has a comparable platform. Microsoft and Yahoo were in the book game and bailed out. Perhaps their platforms can at some point in the future match Google’s. But my hunch is that the critics of Google’s book project are not looking at the value of the information to Google’s knowledge base, Google’s repurposing technologies, and Google’s next generation dataspace applications. Because these are dark corners, the bright light of protest is illuminating the dust and mice only.

One theme runs through these three possibilities. Google gets information. In this game, the publishers have lost but have not recognized it. Without a better idea and without an alternative to the irreversible erosion of libraries, Google is not the miserable little worm that so many want the company to be. Just my opinion.

Stephen Arnold, September 21, 2009

Training Wheels for Business Intelligence?

September 17, 2009

Business intelligence is not like riding a bicycle. In fact, business intelligence requires quite a bit of statistical and mathematical sophistication. Some pundits and marketers believe that visualization will make the outputs of business intelligence systems “actionable”. I don’t agree. There’s another faction in business intelligence that sees search as the solution to the brutal costs and complexities of business intelligence. I am on the fence about this “solution” for three reasons. First, if the underlying data are lousy, the outputs are lousy, and the user is often none the wiser. Second, the notion of “search” is an interface spin. The user types a query, and the system transforms the query into something the system can understand. What if the transformation goes off the tracks? The user is often none the wiser. Third, the notion of visualization combined with search is a typical marketing play: take two undefined notions which sound really good and glue them together. The result is an even more slippery term which, of course, no one defines with mathematical or financial precision.

Now read Channel Web’s “Visualization, Search, Among Emerging Trends in BI”, and you will see how the trade press creates a sense of purpose, movement, and innovation without providing any substance. The source of the article is none other than azure chip consultancy, the Gartner Group. I wrote about the firm’s assertion that no one can “copy” its information. I know at least one reason: I find quite a few of the firm’s assertions off the tracks upon which this goose’s railroad runs.

Here’s the key passage in the Channel Web write up for me:

Schlegel identified seven emerging trends that will be key drivers for BI implementations, perhaps even down to the consumer level, in the future. The trends are: interactive visualization, in-memory analytics, BI integrated search, Software-as-a-Service, SOA/mash-ups, predictive modeling and social networking software. "A lot of technologies we’ll talk about to help build BI systems don’t even exist today, but some are right around the corner," he said. "Business intelligence can break out of the corporate world. Usually it’s consumer technology moving into the corporate world. I think it could be the other way around."

“Intelligence”, in my opinion, is an art or practice supported by human and machine-centric systems. Business intelligence remains a niche business because the vendors who market business intelligence systems rely on structured data, statistical routines taught in second and third year stats classes, and the programming tools from SAS and SPSS (now a unit of IBM). By the way, IBM now owns Cognos and SPSS, which seems to be a market share play, not a technology play in my opinion.

The end of enterprise libraries caused a vacuum in some organizations’ information access. The “regular” business intelligence unit focused on structured data and on generating reports that look pretty much like the green bar reports I obtained from stats routines in the mid 1960s. To say that business intelligence methods are anchored in tradition is a bit of an understatement.

The surge in end user access to information on the Internet has thrown a curve to the business intelligence establishment. In response, SAS, for example, licensed the Inxight tools to process information and then purchased Teragram to obtain more of the “unstructured text goodness” that was lacking in traditional SAS installations. New vendors such as Attivio and Clarabridge have exploited this gap in the traditional Business Objects (now part of SAP and owner of Inxight), Cognos, SAS, and SPSS product offerings. I am not sure how successful these “crossover” companies will be. Clarabridge seems to have an edge because its technology plays well with MicroStrategy’s Version 9 system. Attivio is in more of a “go it alone” mode.

With Google’s Fusion Tables and WolframAlpha’s “search” service, there is increasing pressure on business intelligence vendors to:

  1. Cut prices
  2. Improve return on investment
  3. Handle transformation and metatagging of unstructured information (see the sketch after this list)
  4. Deliver better for fee outputs than the math folks from Google and Wolfram deliver for free.
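Point three is the hard one. As a toy illustration of metatagging unstructured text, here is a regular expression approach; the pattern set is invented, and real BI-grade transformation pipelines use full entity extraction, not three regexes.

```python
import re

# Toy metatagging of unstructured text: pull a few structured fields
# out of free text with regular expressions. The patterns are invented
# for illustration only.

PATTERNS = {
    "money": r"\$\d[\d,.]*(?:\s?(?:million|billion))?",
    "date": r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{1,2},\s+\d{4}",
    "ticker": r"\b[A-Z]{2,5}\b(?=\s*\()",
}

def metatag(text):
    """Return a dict of field name -> matches found in the text."""
    return {name: re.findall(rx, text) for name, rx in PATTERNS.items()}

note = "IBM (IBM) paid $1.2 billion for SPSS, announced July 28, 2009."
print(metatag(note))
```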

My hunch is that the Gartner position reflects the traditional world of business intelligence and is designed to sell consulting services, maybe a conference or two.

Much can be done to enhance the usability of business intelligence. I think that in certain situations, visualization tools can clarify certain types of data. The notion of a search interface is a more complicated challenge. My research suggests that Google’s research into converting a query into a useful query that works across fact based information is light years ahead of what’s referenced in the trade publications and most consultants’ descriptions of next generation business intelligence.

When structured and unstructured content are processed in a meaningful way, new types of queries become possible. The outputs of these new types of queries deliver useful business intelligence. My view is that much of business intelligence is going to be disrupted when Google makes available some of its innovations.

In the meantime, the comfortable world of business intelligence will cruise along with incremental improvements until the Google disruption, if it takes place, reworks the landscape. Odds are 70 – 30 for Google to surprise the business intelligence world in the next six to nine months. Fusion Tables are baby steps.

Stephen Arnold, September 17, 2009

European Search Vendor Round Up

September 16, 2009

Updated at 8:29 am, September 17, 2009, to 23 vendors

I received a call from a very energetic, quite important investment wizard from a “big” financial firm yesterday. Based in Europe, the caller was having a bad hair day, and he seemed pushy, almost angry. I couldn’t figure out why he was out of sorts and why he was calling me. I asked him. He said, “I read your Web log and you annoy me with your poor coverage of European search vendors.”

I had to admit that I was baffled. I mentioned the companies that I tracked. But he wanted me to do more. I pointed out that the Web log is a marketing vehicle and he can pay me to cover his favorite investment in search. That really set him off. He wanted me to be a journalist (whatever that meant) and provide more detailed information about European vendors. And for free.

Right.

After the call, I took a moment and went through my files to see which European vendors I have mentioned and the general impression I have of each of these companies. The table below summarizes the companies I have either profiled in my for fee studies or mentioned in this diary / marketing Web log. You may disagree with my opinions. I know that the azure chip consultants at Gartner, Ovum, Forrester, and others certainly do. But that’s understandable. The addled geese here in Harrod’s Creek actually install systems and test them, a step for which most of the azure chip crowd just don’t have time because of their exciting work to generate enough revenue to keep the lights on, advise clients, and conduct social network marketing events. Just my opinion, folks. I am entitled to those despite the widespread belief that I should be in the Happy Geese Retirement Home.

Vendor | Function | Opinion
Autonomy | Search and eDiscovery | One of the key players in content processing; good marketing
Bitext | Semantic components | Impressive technology
Brox | Open source semantic tools | Energetic, marketing centric open source play
Empolis GmbH | Information management and business intel | No cash tie with Attensity
Exalead | Next generation application platform | The leader in search and content processing technology
Expert System | Semantic toolkit | Works; can be tricky to get working the way the goslings want
Fast ESP | Enterprise search, business intelligence, and everything else | Legacy of a police investigation hangs over the core technology
InfoFinder | Full featured enterprise search system | My contact in Europe reports that this is a European technology. Listed customers are mostly in Norway.
Interse Scan Jour | SharePoint enterprise search alternative | Based in Copenhagen, the Interse system adds useful access functions to SharePoint; sold in Dec 2008
Intellisearch | Enterprise search; closed US office | Basic search positioned as a one size fits all system
Lemur Consulting | Flax, a robust enterprise search system | I have written positively about this system. Continues to improve with each release of the open source engine.
Lexalytics | Sentiment analysis tools | A no cash merger with a US company and UK based Infonics
Linguamatics | Content processing focused on pharma | Insists that it does not have a price list
Living-e AG | Information management | No cash tie with Attensity
Mindbreeze | Another SharePoint snap in for search | Trying hard; interface confusing to some goslings
Neofonie | Vertical search | Founded in the late 1990s, created Fireball.de
Ontoprise GmbH | Semantic search | The firm’s semantic Web infrastructure product, OntoBroker, is at Version 5.3
Pertimm | Enterprise search | Now positioned as information management
PolySpot | Enterprise search with workflow | Now at Version 4.8, search, work flow, and faceted navigation
SAP Trex | Search tool in NetWeaver; works with R/3 content | Works; getting long in the tooth
Sinequa | Enterprise search with workflow | Now at Version 7, the system includes linguistic tools
Sowsoft | High speed desktop search | Excellent, lightweight desktop search
SurfRay | Now focused on SharePoint | Uncertain; emerging from some business uncertainties
Temis | Content processing and discovery | Original code and integrated components
Tesuji | Lucene enterprise search | Highly usable and speedy; recommended for open source installations


SurfRay Reloaded

September 14, 2009

A happy quack to the reader who alerted me to the news about the reappearance of SurfRay, a company that dropped off my radar. The firm has announced via PR Newswire a new version of Ontolica. You can read the news release at the PR Newswire Web site. Note that PR Newswire links can go dark, so if this SharePoint compatible product interests you, you may want to do some sleuthing. Asserted in “SurfRay Announces Availability of Ontolica 4.0 for SharePoint, With New Reporting and Analytics Module” are analytics features. Furthermore, existing customers can upgrade for free through October 20, 2009. The Beyond Search team has not had an opportunity to kick the tires of this product, although we did request information when rumors of the release reached us in Harrod’s Creek.

You can get more information about the company at its Web site or by running this Devilfinder metasearch string. The product appears to compete in the same sector as Interse (also based in Denmark) and BA Insight (US). Some of the functionality asserted by SurfRay may be found in Coveo’s and Exalead’s SharePoint compatible systems. Adhere Solutions (owned by a Beyond Search gosling) offers software that makes it possible to use the Google Search Appliance to search, slice, and dice SharePoint content.

With important announcements about Fast ESP (Microsoft’s enterprise search solution for large scale SharePoint installations), organizations with SharePoint have a large number of options to consider. The question that continues to flap around the goose pond is, “How can an organization determine which SharePoint solution is the appropriate one for that particular organization?” Marketing, not technology, seems to be the knife edge at the present time. Little wonder the geese at Beyond Search are addled. What a cornucopia of choices exists for the 100 million happy SharePoint license holders (if we accept the broad market size rumors bruited at conferences).

Stephen Arnold, September 14, 2009

