Mango Thrives in the Warmth of Solr

September 2, 2010

The Mango library catalog helps users search libraries for a particular book, video, or CD by keyword, title, author, location, ISBN, ISSN, or call number. Mango’s statistics measure end users’ interaction with the catalog in the Web browser; for example, text messaging, using folders, or searching for articles. The Florida Center for Library Automation Web site news item “Mango is now Solr-Powered!” states that the Mango catalogs now run on production servers and use the Solr software.

The news item introduces a new term, ‘Solango’, described as “the combination of the Mango discovery interface with the open source Solr indexing software published by the Apache Software Foundation”. Solango will replace the Endeca software and become a fully independent discovery platform able to ingest numerous data sources with no record limits, providing a powerful new open source indexing, faceting, and search engine for Mango.

Open source could put more of a squeeze on already strapped library vendors.

Leena Singh, September 2, 2010

Exclusive Interview: Charlie Hull, FLAX

September 1, 2010

Flax, based in Cambridge, England, has been among the leaders in the open source search sector. The firm offers the FLAX search system along with a range of professional services for clients and for those who wish to use FLAX. Mr. Hull will be one of the speakers at the upcoming Lucene Revolution Conference, and I sought his views about open source search.

Charlie Hull, FLAX

Two years ago, Mr. Hull participated in a spirited discussion about the future of enterprise search. I learned about the firm’s clients, which include Cambridge University, IBuildings, and MyDeco, among others. After our “debate,” I learned that Mr. Hull had worked on the Muscat team; Muscat was a search system that provided access to a wide range of European content in English and other languages. Dr. Martin Porter’s Muscat system was forward looking and broke new ground in my opinion. With the surge of interest in open source search, I found his comments quite interesting. The full text of the interview appears below:

Why are you interested in open source search?

I first became interested in search over a decade ago, while working on next-generation user interfaces for a Bayesian web search tool. Search is increasingly becoming a pervasive, ubiquitous feature – but it’s still being done so badly in many cases. I want to help change that.  With open source, I firmly believe we’re seeing a truly disruptive approach to the search market, and a coming of age of some excellent technologies. I’m also pretty sure that open source search can match and even surpass commercial solutions in terms of accuracy, scalability and performance. It’s an exciting time!

What is your take on the community aspect of open source search?

On the positive side, a collaborative, community-based development method can work very well and lead to stable, secure and high-performing software with excellent support. However it all depends on the ‘shape’ of the community, and the ability of those within it to work together in a constructive way – luckily the open source search engines I’m familiar with have healthy and vibrant communities.

Commercial companies are playing what I call the “open source card.” Won’t that confuse people?

There are some companies who have added a drop of open source to their largely closed source offering – for example, they might have an open source version with far fewer features as tempting bait. I think customers are cleverer than this and will usually realize what defines ‘true’ open source – the source code is available, all of it, for free.

Those who have done their research will have realized true open source can give unmatched freedom and flexibility, and will have found companies like ourselves and Lucid Imagination who can help with development and ongoing support, to give a solid commercial backing to the open source community. They’ll also find that companies like ourselves regularly contribute code we develop back to the community.

What’s your take on the Oracle-Google Java legal matter with regard to open source search?

Well, the Lucene engine is of course based on Java, but I can’t see any great risk to Lucene from this spat between Oracle and Google, which seems mainly to be about Oracle wanting a slice of Google’s Android operating system. I suspect that (as ever) the only real beneficiaries will be the lawyers…

What are the primary benefits of using open source search?

Freedom is the key one – freedom to choose how your search project is built, how it works and its future. Flexibility is important, as every project will need some amount of customization. The lack of ongoing license fees is an important economic consideration, although open source shouldn’t be seen as a ‘cheap and basic’ solution – these are solid, scalable and high performing technologies based on decades of experience. They’re mature and ready for action as well – we have implemented complete search solutions for our customers, scaling to millions of documents, in a matter of days.

When someone asks you why you don’t use a commercial search solution, what do you tell them?

The key tasks for any search solution are indexing the original data, providing search results and providing management tools. All of these will require custom development work in most cases, even with a closed source technology. So why pay license fees on top? The other thing to remember is anything could happen to the closed source technology – it could be bought up by another company, stuck on a shelf and you could be forced to ‘upgrade’ to something else, or a vital feature or supported platform could be discontinued…there’s too much risk. With open source you get the code, forever, to do what you want with. You can either develop it yourself, or engage experts like us to help.

What about integration? That’s a killer for many vendors in my experience.

Why so? Integrating search engines is what we do at Flax day-to-day – and since we’ve chosen highly flexible and adaptable open source technology, we can do this in a fraction of the time and cost. We don’t dictate to our customers how their systems will have to adapt to our search solution – we make our technology work for them. Whatever platform, programming language or framework you’re using, we can work with it.

How do people reach you?

Via our Web site at http://www.flax.co.uk – we’re based in Cambridge, England but we have customers worldwide. We’re always happy to speak to anyone with a search-related project or problem. You’ll also find me in Boston in October of course!

Thank you.

Stephen E Arnold, September 1, 2010

Freebie

Google and Its Yahoo Style Acquisitions

August 25, 2010

I don’t want to beat a dead Googzilla. But Google had a product search service that involved scanning catalogs. That went away, but I interpreted the effort as a way for the Google to learn about scanning, page fix-ups, and indexing a page image to an acceptable level of accuracy. Google rolled out Froogle, which I thought was pretty clever. Not as good as the services from Pricewatch.com or Amazon.com, but I found it helpful for certain types of product research. Froogle was deemed too frisky, so the shopping product was renamed Google Shopping. Along the path from Catalog to Shopping, the Google integrated a shopping cart, which I like more than Amazon’s “one click” approach. Poor Amazon keeps forgetting that I like the one click approach. I get the privilege of going through a bunch of screens, clicking and entering items of data, in order to turn on one click. Then, without further ado, Amazon turns off one click for me. How thoughtful. Google figured out how not to annoy me with its Checkout. In my three Google monographs, I mentioned other features of Google’s shopping capabilities. These ranged from the bar code function to the predictive cuteness disclosed in Google’s open source documents.

I believed, and still believe, that Google’s in-house technology is sufficient for the Google to convert the Google Shopping service into a world beater.

Apparently I am wrong.

Google bought Like.com, which is – care to guess – a shopping service. Point your browser thingy at “Google Buys Like.com” for the received wisdom about this deal from Fortune Magazine. Time Warner sure does understand online information, right? Here’s the key passage from the Fortune write-up:

I [Fortune’s author] think the ~$100 million+ Like.com pick-up is an even bigger indication that Google wants to be an eCommerce platform.  Google won’t be a fulfillment house but they’ll happily take an affiliate cut of links they send to vendors.  And, even if Google casts aside Like.com’s affiliate business, Google still stands to make a lot of money advertising against the (30%) higher CPC rates that shopping sites can pull in. From a technology standpoint, Like.com’s image recognition/comparison engine can not only power shopping, it can also help in its Image Search product, which just recently saw a significant update.  Google has other experimental products like Goggles that could also benefit from the technology.

Okay.

My take is different:

  1. Google seems to be buying companies in the hope that the technology, customers, and staff will give Google a turbo boost in a sector in which Apple and Amazon, among others, are doing a darned good job getting my money. I don’t turn to Google’s Shopping service as frequently as I used to. Am I alone? Well, this deal seems to hint that I am not the only person ignoring Mother Google for products.
  2. In the eCommerce sector, Google has not mounted much of a product offering. Google hired an expert in eCommerce, but so far not much has appeared except this acquisition. I have seen zero use of the product functionality disclosed in the Guha Programmable Search Engine patent documents. Lots of weapons, no attack of significance that I have experienced.
  3. Google’s in-house engineering teams may start to get the idea that their work can’t cut the mustard. Edmond Fallot’s black currant mustard, please! Google’s acquisitions seem to duplicate, not complement, technology Google has disclosed in its technical papers and patent applications. Maybe this stuff Google invented does not work, or does not work in today’s market? Scary.
  4. Google’s customers may be tired of waiting. I know that I don’t think of Google when I am looking for network cables. I go to Amazon, check out its prices, and then run a query across deal sites. A cable is a cable no matter what Monster insists is true.

Bottom line: Google has cash and has not yet diversified its revenue streams. The old saw no longer cuts for me. The notion that these acquisitions increase Google’s ad revenue does not get the job done. If the online ad market softens due to a bold action from Facebook or a less clumsy offering from Apple, the Google may have to do more than collect companies Yahoo style. Google has to do something. Microsoft and Yahoo are now up and jogging.

Maybe Google is a hybrid of the “old” Microsoft and the “pre-Semel” Yahoo? Interesting thought in my opinion.

Stephen E Arnold, August 25, 2010

Freebie

Exclusive Interview: Satish Gannu, Cisco Systems Inc.

August 24, 2010

I made my way to San Jose, California, to find out about Cisco Systems and its rich media initiatives. Once I located Cisco Way, a marker of the company’s influence in the heart of Silicon Valley, I knew I would be able to connect with Satish Gannu, a director of engineering in Cisco’s Media Experience and Analytics Business Unit. Mr. Gannu leads the development team responsible for Cisco Pulse, a method for harnessing the collective expertise of an organization’s workforce. The idea is to apply next generation technology to the work place in order to make it quick and easy for employees to find the people and information they need to get their work done “in an instant.”

I had heard that Mr. Gannu is exploring the impact of video proliferation in the enterprise. Rich media require industrial-strength, smart network devices and software, both business sectors in which Cisco is one of the world’s leading vendors. I met with Mr. Gannu in Cisco’s Building 17 cafeteria (appropriate because Mr. Gannu has worked at Cisco for 17 years). Before tackling rich media, he served as Director of Engineering in Cisco’s Security Technology Group. I did some poking around with my Overflight intelligence system and picked up signals that he is responsible for media transcoding, a technology that can bring some vendors’ network devices to their knees. Cisco’s high performance systems handle rich media. Mr. Gannu spearheads Cisco’s search and speech-to-text activities. He is giving a spotlight presentation at the October 7-8, 2010, Lucene Revolution Conference in Boston, Massachusetts. The conference is sponsored by Lucid Imagination.

Satish Gannu, Director of Engineering, Cisco Systems Inc.

The full text of my interview with Mr. Gannu appears below:

Thanks for taking the time to talk with me.

No problem.

I think of Cisco as a vendor of sophisticated networking and infrastructure systems and software. Why is Cisco interested in search?

We set off to do the Pulse project in order to turn people’s communications into a mechanism for finding the right people in your company. To find people, we asked: how do people communicate what they know? People communicate what they know through documents — a Web page, an email, a Word document, a PDF, and now video. Video is big for Cisco.

Videos are difficult to consume or even find. The question we wanted to answer was, “Could we build a business-savvy recommendation engine?” We wanted to develop a way to learn from user behavior and then recommend videos to people, not just in an organization but in other settings as well. We wanted to make videos more available for people to consume. Video is the next big thing in digital information, moving from YouTube to the enterprise world. In many ways, video represents a paradigm shift. Video content takes a lot of storage space. We think that video is also difficult to consume and difficult to find. In search, we’ve always worked from a document-based view. We are now expanding the idea of a document from text to rich media. We want to make video findable, browseable, and searchable. Obviously the network infrastructure must be up to the task. So rich media is a total indexing and search challenge.

Is there a publicly-accessible source of information about Cisco’s Pulse project?

Yes. I will email you the link and you may insert it in this interview. [Click here for the Pulse information.]

No problem. Are you using open source search technology at Cisco?

Yes, we believe a lot in the wisdom of the crowds. The idea that a community and some of the best minds can work together to develop and enhance search technology is appealing to us. We also like the principle that we should not invent something that is already available.

I know you acquired Jabber. Is it open source?

Yes, in late 2008 Cisco bought the company called Jabber. The engineers had developed a presence and messaging protocol and software. Cisco is also active in the Open Social Platform.

Would you briefly describe Open Social?

Sure. “Open Social” is a platform with a set of APIs, developed by a community of social networking developers and vendors at opensocial.org, to structure and expose social data over the network. We’ve adopted Open Social to expose the social data interfaces in our product for use by our customers, leveraging both the standardization and the innovation of this process to make corporate data available within organizations in a predictable, easy-to-use platform.

Why are you interested in Lucene/Solr?

We talked to multiple companies, and we decided that Lucene and Solr were the best search options. As I said, we didn’t want to reinvent the wheel. We looked at available Lucene builds. We read the books. Then we started working with Lucid. Our hands-on testing validated the software. We learned how mature it is. The road map for what is coming up was important to us.

What do you mean?

Well, we had some specific ideas in mind. For example, we wanted to do certain extensions on top of basic Lucene. With the road map, open source gives us an opportunity to build our own intellectual property on top of Lucene/Solr.

Like video?

Yes, but I don’t want to get into too much detail. Lucene for video search is different. With rich media sources we worry about how to transcribe the content, and then we have to work out how the system can implement relevancy and things like that.

One assumption we made is that people speak at a rate of two to three words per second. So when we were doing tagging, we could estimate the length of the transcript and the size of the document.
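
To make Mr. Gannu’s rule of thumb concrete, here is a back-of-the-envelope sketch. It is my illustration, not Cisco code, and the words-per-second and characters-per-word figures are assumptions made only for the arithmetic.

```python
# A back-of-the-envelope sketch (my illustration, not Cisco code) of the
# sizing rule mentioned above: people speak roughly two to three words per
# second, so a video's running time bounds its transcript size.
AVG_CHARS_PER_WORD = 6  # assumed rough English average, trailing space included

def estimated_transcript_size(duration_seconds, words_per_second=2.5):
    """Return (estimated word count, approximate size in bytes)."""
    words = int(duration_seconds * words_per_second)
    return words, words * AVG_CHARS_PER_WORD

if __name__ == "__main__":
    words, size_bytes = estimated_transcript_size(3600)  # a one-hour video
    print(f"~{words} words, ~{size_bytes / 1024:.0f} KB of transcript text")
```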

That’s helpful. What are the primary benefits of using Lucene/Solr?

One of our particular interests is figuring out how we can make it easy for people in an organization to find a person with expertise or information in a particular field. At Cisco, then, how our systems help users find people with specific expertise is core to our product.

So open source gives us the advantage of understanding what the software is doing. Then we can build on top of those capabilities. That is how we determined which technology to choose.

Does the Lucene/Solr community provide useful developments?

Yes, that’s the wisdom of the crowds. In fact, the community is one of the reasons open source is thriving. In my opinion, the community is a big positive for us. In our group, we use Open Social too. At Cisco, we are part of the enterprise Open Social consortium, and we play an active role in it. We also publish an open source API.

I encourage my team to be active participants in that community and to contribute. Many at Cisco are contributing extensions. We have added these on top of Open Social. We are giving our perspective to the community from our Pulse learnings. We are doing the same type of things for Lucene/Solr.

My view is that if useful open source code is out there, everyone can make the best utilization of it.  And if a developer is using open source, there is the opportunity for making some enhancement on top of the existing code. It is possible to create your own intellectual property around open source too.

How has Lucid Imagination contributed to your success in working with Solr/Lucene?

We are not Lucene experts. We needed to know what is possible, what is not possible, and what the caveats are. The insight we got from consulting with Lucid Imagination helped open our eyes to the possibilities. That clinical knowledge is essential.

What have you learned about open source?

That’s a good question. Open source doesn’t always come for free.  We need to keep that in mind. One can get open source software. Like other software, one needs to maintain it and keep it up to date.

Where does Lucid fit in?

Without Lucid, we would have to send an email to the community and wait for somebody to respond. Now I ping Lucid.

Can you give me an example?

Of course. If I have 20,000 users, I can have 100 million terms in one shard. If I need to scale this to 100,000 users and put up five shards, how do I handle these shards so that each is localized? What is the method for determining relevancy of hits in a result set? I get technical input from Lucid on these types of issues.
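
Sharding questions like the one Mr. Gannu describes come down to Solr’s distributed search, where one node fans a query out across the shards named in the shards parameter and merges the hits. Below is a minimal sketch of such a query; it is my illustration, not Cisco code, and the host names, core name, and field names are hypothetical.

```python
# A minimal sketch (my illustration, not Cisco code) of a distributed Solr
# query: one node receives the request and merges hits from all shards
# listed in the "shards" parameter. Hosts, core, and fields are hypothetical.
import requests

SHARDS = ",".join(f"solr{i}.example.com:8983/solr/pulse" for i in range(1, 6))

def distributed_search(query, rows=10):
    """Ask one Solr node to fan the query out and merge the results."""
    params = {
        "q": query,
        "rows": rows,
        "wt": "json",
        "shards": SHARDS,  # Solr merges and re-ranks hits across the shards
    }
    resp = requests.get(
        "http://solr1.example.com:8983/solr/pulse/select",
        params=params,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["response"]["docs"]

if __name__ == "__main__":
    for doc in distributed_search("video transcoding expertise"):
        print(doc.get("id"), doc.get("title"))
```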

When someone asks you why you don’t use a commercial search solution, what do you tell them?

I get this question a lot. In my opinion, the commercial search systems are often a black box. We occasionally want to use this type of system. In fact, we do have a couple of other related products which use commercial search technologies.

But for us, analysis of context is the core. Context is what the search is about. And when we looked at the code, we realized that how we use this functionality is central to our work. How we find people is one example of what we need. We need an open system. For a central function, the code cannot be a black box. Open source meets our need.

Thank you. How can a reader contact you?

My email is sgannu at cisco dot com.

Stephen E Arnold, August 24, 2010

Sponsored post

Exclusive Interview: Erik Arnold, Adhere Solutions

August 9, 2010

How does one consultant interview another? Cautiously. How does a father interview a son? Buy the Diet Coke and provide the questions before flipping on the digital recorder. I spoke with Erik Arnold, managing director of Adhere Solutions, in a charming Chicago eatery with a buzzing neon sign advertising “Free Refills” on Sunday. Today is Monday. Now some readers wonder if I write about my son and get paid for that work. Anyone who has a successful son knows that fathers get to pay. What’s my compensation? When you have a gosling flying circles around your goose pond, you will figure it out.

Erik Arnold, managing director of Adhere Solutions, will be giving a talk about the use of open source search technology for the White House’s USA.gov Web site.

Erik Arnold has over 15 years of experience in the search industry, divided between Web search and enterprise search. Adhere Solutions is a consulting firm that advises companies on improving their search systems. Prior to Adhere, Erik served as a subject matter expert for a government consulting company, where he primarily worked with the House of Representatives and the USA.gov Web portal. He started his career at Lycos, one of the first Internet search engines, where he was a product marketing manager. Erik then moved to the NBCi search engine (Snap.com), where he served as business development manager.

He will be giving a talk about the impact of open source search on certain US government initiatives at the October 2010 Lucene Revolution Conference.

The full text of the interview appears below:

Here we are again talking about search technology.

That’s right.

For readers who may not know about your company, what’s an Adhere Solutions?

Adhere Solutions offers products and services that help organizations with their search systems. We focus on Google and open source technologies. Adhere Solutions has been a trusted Google Enterprise Partner since 2007, with a client roster that includes Wal-Mart, Lexis-Nexis, and the Federal Trade Commission, among others.

You have worked on the USA.gov and related Federal projects. When did you get into this type of work?

A decade ago. I think I did my first Federal consulting job in 2000 for the Clinton Administration.

Scaling with Solr, Python and Django

August 5, 2010

Scaling is a tough problem. Gmail has had its share of hiccups. Reddit recently switched its search system to deal with latency. Twitter is embarking on an infrastructure project to cope with getting bigger. Toby White’s scaling tips are useful in my opinion. His Timetric blog includes a write-up called “Scaling Search to a Million Pages with Solr, Python, and Django.” The article references a slide deck, which contains code snippets and explanatory details. You can locate an instance of the file at http://dl.dropbox.com/u/1942316/SolrMillionsOfDocs.pdf. One of the key points in the Timetric write-up is the list below (a short sketch of the add/commit balance follows it):

On the large scale, each installation will have its own problems, but three things you’ll almost certainly need to pay attention to are:

  • Decoupling reading from and writing to the index. They have very different performance characteristics (and writing presents special problems if you’re updating documents as well as adding brand new documents).
  • Working out the right balance of adding/committing/optimizing data. This will be driven by the frequency with which you add data, and how soon you need to be able to serve results from newly-added data. Must it be immediate, or can you wait seconds/minutes/hours?
  • Fine-tuning your tokenizers/analyzers. Although small and fiddly, this is an issue which will bite you more and more as a corpus of data grows. You’ll need to tweak your indexing algorithms away from the defaults; extracting relevant results from a pile of a million documents is much harder than from a few thousand.
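
The second bullet, the add/commit/optimize balance, is the one most likely to bite a new installation. Here is a minimal sketch of batching adds and committing on a timer rather than per document. It is my own illustration under stated assumptions (a local Solr instance at the default port, an arbitrary batch size and commit interval), not code from the slide deck, and it talks to Solr’s standard XML update handler.

```python
# A minimal sketch (mine, not from the slide deck) of batching adds and
# committing on a timer instead of after every document. It posts to Solr's
# standard XML update handler; URL, batch size, and interval are assumptions.
import time
from xml.sax.saxutils import escape

import requests

SOLR_UPDATE_URL = "http://localhost:8983/solr/update"  # assumed local Solr
BATCH_SIZE = 500                # documents per <add> request
COMMIT_EVERY_SECONDS = 60       # how stale freshly added data may be

def post_xml(xml):
    resp = requests.post(
        SOLR_UPDATE_URL,
        data=xml.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
        timeout=30,
    )
    resp.raise_for_status()

def index_documents(docs):
    """docs is an iterable of {field_name: value} dicts."""
    batch, last_commit = [], time.time()
    for doc in docs:
        fields = "".join(
            f'<field name="{name}">{escape(str(value))}</field>'
            for name, value in doc.items()
        )
        batch.append(f"<doc>{fields}</doc>")
        if len(batch) >= BATCH_SIZE:
            post_xml("<add>" + "".join(batch) + "</add>")
            batch = []
        if time.time() - last_commit > COMMIT_EVERY_SECONDS:
            post_xml("<commit/>")  # make documents posted so far searchable
            last_commit = time.time()
    if batch:
        post_xml("<add>" + "".join(batch) + "</add>")
    post_xml("<commit/>")
    # Run <optimize/> sparingly (nightly, say); it forces an expensive merge.

if __name__ == "__main__":
    index_documents({"id": str(i), "title_t": f"Document {i}"} for i in range(1200))
```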

You may want to check out Toby White’s Python/Solr library sunburnt. Worth a look.

Stephen E Arnold, August 5, 2010

Ultrasaurus on Lucene/Solr

August 4, 2010

I quite like the image “ultrasaurus” evokes. A goose, in comparison, lacks oomph. Nevertheless, you will want to navigate to “Lucene/Solr Meet Up, July 28, 2010.” There are some interesting factoids in the thorough summary of the presentations and remarks.

Let me highlight four that struck me as interesting, and you can work your way through the original post to get the rest of this meetup’s flavor.

First, Salesforce.com seems to be sporting a Lucene/Solr T shirt under the firm’s business casual garb. Bill Press, according to Ultrasaurus, offered some metrics about the scale of the firm’s operation; for example, eight terabytes of searchable information. Incremental indexing zips along, with 70 percent of new content and deltas crunched in less than one minute.

Second, Lucid Imagination’s Grant Ingersoll provided some case examples. One sequence jumped out at me; that is, his suggested links for more information, which appear in the original post.

Lucid Imagination is the go-to outfit for Lucene/Solr engineering and professional services.

Finally, Jon Gifford from Loggly said:

Solr is awesome at what it does, but not so good for data mining. [So] plan to plug in Hadoop for large-volume analytics.

Possible logo for open source search solutions? Image source: http://wargames.spyz.org/convSALAMANDER.html

Will Lucene/Solr abandon their present logotypes and go for something along the lines of a Spinosaurus? With Lucene/Solr adoptions moving upwards, a Spinosaurus might have easy pickings from clients of somewhat marginalized commercial search systems in Austria, Denmark, Germany, and other European Commission member states. Snack time may be approaching. SharePoint nibbles, anyone?

Stephen E Arnold, August 4, 2010

Taxodiary: At Last a Taxonomy News Service

August 3, 2010

I have tried to write about taxonomies, ontologies, and controlled term lists. I will be the first to admit that my approach has been to comment on the faux pundits, the so-called experts, and the azurini (self-appointed experts in metatagging and indexing). The problem with the existing content flowing through the datasphere is that it is uninformed.

What makes commentary about tagging informed? Three attributes. First, I expect those who write about taxonomies to have built commercially successful systems for managing term lists, and I expect those term lists to be in wide use and to conform to standards from ISO, ANSI, and similar outfits. Second, I expect those running the company to have broad experience in tagging for serious subjects, not the baloney that smacks of search engine optimization and snookering humans and algorithms with alleged cleverness. Third, I expect the systems used to build taxonomies, manage classification schemes, and maintain term lists to work; that is, a user can figure out how to get information out of the system relevant to his or her query.

Splash page for the Taxodiary news and information service.

How rare are these attributes?

Darned rare. When I worked on ABI/INFORM, Business Dateline, and the other database products, I relied on two people to guide my team and me. The first was Betty Eddison, one of the leaders in indexing. May she rest in indexing heaven, where SEO is confined to Hell. Betty was one of the founders of InMagic, a company on whose board I served for several years. Top notch. Care to argue? Get ready for a rumble, gentle reader.

The second person was Margie Hlava. Ms. Hlava, like Ms. Eddison, is one of the top guns in indexing. In fact, I would assert that on my yardstick she either shares or holds the top spot in this discipline. Please keep in mind that my reference to Ms. Hlava includes her company Access Innovations and her partner Dr. Jay ven Eman. How good is Ms. Hlava? Very good, saith the goose.

Comparison Highlights Lucene

August 3, 2010

Vik Singh has posted a thorough and impartial comparative analysis of selected search engines. Singh used his own testing code, and kept the playing field level by not changing any numerical tuning parameters. He summarizes by saying:

Based on these preliminary results and anecdotal information I’ve collected from the web and people in the field (with more emphasis on the latter), I would probably recommend Lucene (which is an IR library – use a wrapper platform like Solr w/ Nutch if you need all the search dressings like snippets, crawlers, servlets) for many vertical search indexing applications – especially if you need something that runs decently well out of the box (as that’s what I’m mainly evaluating here) and community support.

Lucene earned a perfect 5/5 for support–highest of all tested platforms. (You can download Lucene/Solr at Lucid Imagination.)

As an IT professional, you are always on the lookout for ways to cut costs, and you also know that software licenses aren’t getting any cheaper, particularly for popular pro-sumer products such as Photoshop and Dreamweaver. http://www.osalt.com hosts a treasure trove of free, high-quality open source alternatives designed to save you time and money and still deliver a first-rate final product. By choosing an open source product, the user gains a number of advantages over commercial products. Besides the fact that open source is available for free, it is transparent: you are invited behind the scenes to view all of the source code and to suggest improvements to the product. Furthermore, every product is backed by a large, dedicated community that is more than willing to answer any questions you may have. http://www.osalt.com is definitely worth bookmarking.

Brett Quinn, August 3, 2010

Webnocular

July 27, 2010

I looked at this metasearch system a couple of weeks ago. I revisited it because a reader sent me a link to it, asking for my opinion. You can locate the site at http://www.webnocular.com/. Metasearch and mobile search are popular. The reason is that the cost of brute force Web indexing has made it impossible for smaller firms to compete. Exalead, now a unit of the French superstar services firm Dassault, has built an index of about eight billion Web pages. I use it first and then Google for my research. Google returns too many irrelevant results to keep this goose happy. Exalead’s method, on the other hand, does a much better job for the types of queries I routinely run. I also use Exalead to index Google’s own Web logs. I find that Google’s consumerist approach makes it tough to pinpoint some of Google’s own blog content. You can try the Exalead Google blog index at http://overflight.labs.exalead.com/.

Now what about Webnocular?

The system takes a query, performs some normal metasearch tricks, fires off the request, gets the results back, and performs some special magic. The idea is that metasearch systems do not have to brute force index the Web like Exalead, Google, and Microsoft do. Heck, it is expensive and more complicated than it looks to the home economics majors who end up working at the azurini (second and third tier consulting companies).
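
For the curious, a bare-bones metasearch loop looks something like the sketch below. This is my illustration, not Webnocular’s code; the engine endpoints and the JSON response shape are hypothetical placeholders, and a real system would parse each engine’s own result format and do far more result massaging.

```python
# A bare-bones metasearch sketch (my illustration, not Webnocular's code):
# fan one query out to several engines in parallel, then merge and
# de-duplicate the hits by URL. Endpoints and JSON shape are hypothetical.
import concurrent.futures
from urllib.parse import quote_plus

import requests

ENGINES = {
    "engine_a": "https://search-a.example.com/api?q={query}",
    "engine_b": "https://search-b.example.com/results?term={query}",
}

def query_engine(url_template, query):
    """Fetch raw hits from one engine; swallow failures so one dead
    engine does not sink the whole result set."""
    try:
        resp = requests.get(url_template.format(query=quote_plus(query)), timeout=10)
        resp.raise_for_status()
        return resp.json().get("results", [])  # hypothetical response shape
    except (requests.RequestException, ValueError):
        return []

def metasearch(query):
    merged = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_engine, url, query) for url in ENGINES.values()]
        for future in concurrent.futures.as_completed(futures):
            for hit in future.result():
                merged.setdefault(hit.get("url"), hit)  # keep first copy of each URL
    return list(merged.values())

if __name__ == "__main__":
    for hit in metasearch("enterprise search"):
        print(hit.get("title"), hit.get("url"))
```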

A query for “enterprise search” returned some results after some chugging. The results were okay, but not as useful to me as a query for the phrase on Exalead, Ixquick, or Red Tram, which is becoming one of my favorite current information indexing services.

I did not download the add-in toolbar. I find these invasive. I don’t tweet and I don’t post to Facebook. Who cares what an addled goose likes? If you are into toolbars and social media, you may want to give Webnocular a test drive. The company offers code “extenders” such as an Instant Messenger service which is “a full-featured chat program.” The company says:

[Webnocular Messenger] includes features such as Moderated chat, high load support, font/color/ customization, emoticons, private messaging, private chat room, profanity filtering, ignoring users, file Transfer, and many more!

Our take on the service is that it implements some good ideas, and it could catch fire among some user segments. According to Most Popular Websites, Webnocular is in the top million most popular Web sites.

Stephen E Arnold, July 27, 2010

Freebie
