A Google Plus Index Transition?

August 12, 2011

A few weeks after the Google Plus launch, Google Senior VP Vic Gundotra is addressing feedback and criticism of the new social platform. Google Plus itself is a transition for Google, a move away from its trademark keyword indexing and toward trendy social tagging. The article “Google Plus Is Being Changed This Week Based on User Feedback” discusses Google’s next move.

You may think Google could sit back and watch the Google Plus network grow, but that would be a mistake. The search company has realized it can’t just watch what happens; it needs to respond to users quickly in order to keep them happy and the network growing. While the general view of Google Plus is a positive one, there is also a lot of criticism and user feedback which Google is about to tackle.

Google is no doubt remembering failed ventures like Buzz and Wave while striving to make Google Plus a lasting service. Another possible motivation is worth considering. Does Google see the end of the era of indexing? With social media placing more and more importance on social meaning within a given context, perhaps tagging is becoming more relevant than keyword indexing. If this is indeed the case, Google no doubt hopes to ensure its dominance for the next generation through Google Plus.

Emily Rae Aldridge, August 11, 2011

Sponsored by Pandia.com, publishers of The New Landscape of Enterprise Search

The App Approach: A Dead End?

August 11, 2011

From the amazing statements department. Flash. Venture Beat’s “Nokia Exec: Android and iPhone Focus on the App Is ‘Outdated’” caught my attention. For this write up, let’s assume the fellow is dead wrong. I am okay with headlines written for Bing and Google indexing subsystems. I am also okay with wild and crazy statements from cash-strapped azure chip consultants, search vendors worried about making the next payment on the CEO’s company car, and unemployed English majors explaining that they are really social media experts. In an economic depression, words are worthless. When one has nothing to lose, is the approach “Go for broke”?

The assertion reported by Venture Beat’s Mobile Beat online publication was quite interesting. First, Nokia is not hitting any financial home runs. Say what you will about Apple, the outfit has a nifty balance sheet. Even the Google, which is a giant ad system, is able to “give away” a mobile operating system and make big waves. One example is the factoid that hundreds of thousands of Android-based devices are sold every time I check the weather in Harrod’s Creek.


A happy quack to http://zekjevets.blogspot.com/2010/02/alternative-racism.html 

Here’s the statement that snagged me. (This is a longer quote than we normally use, but I want to get the context right. Please, navigate to the Venture Beat original for the full story. Also, note that I have put in bold face the items upon which I wish to comment.)

Nokia’s future phones will merge the latest Microsoft Windows Phone software based on the Mango update (which Weber said has had great reviews) with Nokia’s hardware, which he said boasts reliability and phone call quality. Weber cited state-of-the-art imaging technology and battery performance as areas Nokia phones would excel in. Weber also said Nokia may beat competitors on pricing, thanks to the company’s significant global reach, which gives it economies of scale. Moreover, Weber said the company will launch its superphone portfolio with a focus on U.S. market, because he said winning in the U.S. market is what it takes to win globally. He also confirmed that Nokia will back the launch with the company’s largest marketing effort to date, though wouldn’t go into specifics. Weber called Android and the iOS phone platforms “outdated.” While Apple’s iPhone, and its underlying iOS operating system, set the standard for a modern user interface with “pinch and zoom,” Weber conceded, it also forces people to download multiple applications which they then have to navigate between. There’s a lot of touching involved as you press icons or buttons to activate application features. Android essentially “commoditized” this approach, Weber said.

Whew. Let me do an addled goose style run down.

  1. Reliability and call quality. In my experience the phone is only part of the reliability and call quality equation. There are networks involved. I have worked throughout the world, and reliability and call quality have more to do with where I am than with the handset. In the Arctic Circle my Treo 650 worked like a champ. In the hollow near my pond, I can’t get a coherent squawk from my BlackBerry. So how’s Nokia going to fix this? Nokia can’t. Baloney.
  2. Imaging and battery performance. Whoa, horsey. Putting a better camera in a phone is a question of economics and technical tradeoffs. The battery issue is a big deal. As crazy as Research in Motion’s present management setup is, the company does have good battery technology, as does Apple. Nokia? Better get that pony aimed at the battery corral is my advice.

Read more

Content Analyst and iCONECT Team Up

August 10, 2011

The Content Analyst folks bought me dinner once. Nevertheless, my memories of a Taco Bell event have faded, and we wanted to let you know about the announcement “iCONECT Development, LLC Announced an Integration with Content Analyst Company, LLC.” iCONECT provides law practitioners with litigation support and collaboration software. Content Analyst furnishes advanced search tools and indexing technology.

Content Analyst’s software eschews the keyword approach and, instead, recognizes concepts and categories. iCONECT also works with categories, and places related documents in nested folders. The company focuses on improving document review workflow, its clients’ most time-consuming process.
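Neither company publishes its internals, but here is a minimal sketch of what concept-level grouping can look like, assuming a latent semantic analysis approach of the sort concept-based engines typically use. The documents, cluster count, and library choice are our illustration, not Content Analyst’s actual method.

```python
# Sketch of concept-based document grouping via latent semantic analysis.
# Illustrative only; not Content Analyst's code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [  # hypothetical review-set snippets
    "deposition transcript regarding the merger agreement",
    "email thread discussing merger negotiation terms",
    "invoice for office supplies and printer toner",
    "purchase order for toner cartridges and paper",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
concepts = TruncatedSVD(n_components=2).fit_transform(tfidf)  # keywords -> concept space
labels = KMeans(n_clusters=2, n_init=10).fit_predict(concepts)

for label, doc in sorted(zip(labels, docs)):
    print(label, doc)  # documents about the same concept share a cluster id
```

Documents land in the same cluster when they share latent concepts (merger paperwork, toner purchases) even where the literal keywords differ; nested folders fall out of running the same grouping recursively within a cluster.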

The press release quotes Kurt Michel, President of Content Analyst, regarding the partnership:

We are excited that iCONECT has selected Content Analyst’s unique Dynamic Clustering and Categorization capabilities to help their partners and end-users reduce the time and cost of document review. We look forward to working with the iCONECT team and their industry leading partners to bring the power of Content Analyst’s advanced analytics to the thousands of attorneys using iCONECT software.

Our thoughts: Content Analyst is looking more like an original equipment manufacturer, providing technology as PLS did more than a decade ago. Nothing wrong with that approach.

Stephen E Arnold, August 10, 2011

Sponsored by Pandia.com, publishers of The New Landscape of Enterprise Search

Metadata Formally Recognized by Courts

August 7, 2011

Meta-cognition, meaning to think about thinking, is a term psychologists love to throw around to discuss intelligence and the capacity to learn. Now, it seems the legal community is going to jump aboard the thinking-ship with its own term – metadata: to think about data or, more precisely, data thinking about data. The article “Technology: Recent Cases Help Evolve Guidelines for Producing Metadata: Keeping ESI Load Files in a Forensically Sound Manner that Preserves Metadata is Key” on Inside Counsel examines the nature of metadata and tries to pin down a practical use for it.

The first part of the problem – what is metadata? – is universally agreed upon nowadays. Metadata is any non-visible data, such as author, word count, title (including changes), time/date stamps, etc., connected to documents or other Electronically Stored Information (ESI). Lawyers can use this valuable information to nail down timelines, prove who monkeyed with a document, and establish which custodians did what to ESI in general.
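To make the idea concrete, here is a minimal sketch, using only the Python standard library, of surfacing the non-visible data that rides along with an ordinary file. (Application-level fields such as author or word count live inside the file format itself and require format-specific parsers; the path below is illustrative.)

```python
# Sketch: reading file-system metadata that never appears in the
# visible document text. The path is illustrative.
import os
from datetime import datetime, timezone

def file_metadata(path: str) -> dict:
    st = os.stat(path)
    return {
        "file_name": os.path.basename(path),
        "size_bytes": st.st_size,
        "modified": datetime.fromtimestamp(st.st_mtime, tz=timezone.utc).isoformat(),
        "accessed": datetime.fromtimestamp(st.st_atime, tz=timezone.utc).isoformat(),
    }

print(file_metadata(__file__))  # run against this script itself
```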

As the legal community catches up with technology, more and more judges are ruling that metadata is not hearsay, but rather falls under the protection of ESI. Most recently, a judge set some practical guidelines for metadata:

“Earlier this year, in National Day Laborer Organizing Network v. United States Immigration and Customs Enforcement Agency, 2011 WL 381625 (S.D.N.Y. Feb. 7, 2011) (opinion withdrawn upon agreement of the parties), Judge Shira Scheindlin emphasized that metadata is an integral part of an electronic record. Although it is not legal precedent, her list is a reasonable set of guidelines for in-house counsel responding to ESI requests, as follows. The metadata that should accompany the production of any text-based ESI includes: File Name…Custodian…Source Device…Source Path…Production Path…Modified Date…Modified Time…Time Offset Value…Identifier.”
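Mapped onto a structure, the judge’s list amounts to a fixed schema for each produced item. A sketch, with field names taken straight from the opinion as quoted above and placeholder values:

```python
# Sketch of one load-file record carrying the metadata fields in Judge
# Scheindlin's guidelines. All values are placeholders.
from dataclasses import dataclass

@dataclass
class ProductionRecord:
    file_name: str
    custodian: str
    source_device: str
    source_path: str
    production_path: str
    modified_date: str      # e.g. "2011-02-07"
    modified_time: str      # e.g. "14:32:05"
    time_offset_value: str  # offset from UTC, e.g. "-05:00"
    identifier: str         # e.g. a hash or Bates number

rec = ProductionRecord(
    "memo.docx", "J. Doe", "laptop-042", r"C:\Users\jdoe\memo.docx",
    "PROD001/0001/memo.docx", "2011-02-07", "14:32:05", "-05:00", "DOC000001",
)
print(rec)
```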

Now that metadata is being recognized as a legitimate resource for information, indexing becomes more vital than ever.

Catherine Lamsfuss, August 7, 2011

Sponsored by Quasar CA, your source for informed financial advisory services

IBM Sets New File Scanning Record

August 5, 2011

IBM’s announcements fascinate us. The company releases information about products, services, and inventions and then we don’t hear too much about them. We still are waiting for a live demo of the search prowess of Watson. We think indexing Wikipedia would be a good start, but it seems that Watson has developed an interest in medicine. No problem. We’re patient. (No pun intended.)

We liked the write up “IBM System Scans 10 Billion Files in 43 Minutes” from TechEye.net. The run beats IBM’s own previous record, set back in 2007. Writer Matthew Finnegan elaborates:

“IBM has successfully scanned 10 billion files in just 43 minutes, opening the doors to access of zettabytes of information storage. This means a massive improvement on the previous record, a relatively sluggish one billion files scanned in three hours.”

Changes credited for the success include relying on a single platform data environment and management task simplification. Also, an algorithm was devised that maximized use of all ten eight-core systems in the General Parallel File System. Researchers expect this accomplishment to point the way to ever greater data management efficiency in the future.
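The raw arithmetic from the two figures in the write up works out as follows:

```python
# Back-of-the-envelope throughput from the figures in the article.
new_rate = 10_000_000_000 / (43 * 60)         # files per second, new record
old_rate = 1_000_000_000 / (3 * 60 * 60)      # files per second, 2007 record
print(f"new: {new_rate:,.0f} files/s")        # ~3,875,969 files/s
print(f"old: {old_rate:,.0f} files/s")        # ~92,593 files/s
print(f"speedup: {new_rate / old_rate:.0f}x") # ~42x
```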

Our view is that this seems like a lot of files, but without a comparison against other vendors of high-speed file access, we interpret the number much as we do Amazon’s reporting of how successful Amazon Web Services is. We think Amazon is successful, but the metrics are tough to anchor to something to which we can relate. IBM is, it appears, emulating Amazon’s approach to unanchored metrics.

Our question: when will we see these different and amazing technologies in Watson? When will we see a third party analysis of file scanning speed or better yet, an article from a customer detailing the method and payoff from IBM’s remarkable technology?

Cynthia Murrell, August 5, 2011

Sponsored by Pandia.com, publishers of The New Landscape of Enterprise Search

Make Metadata Useful. But What If the Tags Are Lousy?

July 28, 2011

I must be too old and too dense to understand why the noise about metadata gives me a headache. I came across a post or story on the CNBC.com Web site that was halfway between a commercial and a rough draft from an automated indexing vendor’s temp file stuffed with drafts created by a clever intern. The post hauled around this weighty title: “EMA and ASG Webinar: 7 Best Practices For Making Metadata Useful.” The first thing I did was look up EMA and ASG because I was unfamiliar with the acronyms.

I learned that EMA represents a firm called Enterprise Management Associates. The company does information technology and data management research, industry analysis, and consulting. Fair enough. I have done some of the fuzzy wuzzy work for a couple of reasonably competent outfits, including the once stellar Booz, Allen & Hamilton and a handful of large, allegedly successful companies.

ASG is an acronym for ASG Software Solutions. The parent company grows via acquisitions just like Progress Software and, more recently, Google. The focus of the company seems to be “the cloud in your hand.” I am okay with a metaphorical description.

I am confused about metadata. Source: http://www.thebusyfool.com/wp-content/uploads/2011/05/Decisions_clipart.jpg

What caught my attention is the focus on metadata, which, in my little world, is the domain of people with degrees in library and information science, years of experience in building ANSI standard controlled term lists, and hands-on time with automated and human-centric indexing, content processing, and related systems. An ANSI standard controlled term list is not management research, industry analysis, consulting, or the “cloud in your hand.” Controlled term lists which make life bearable for a person seeking information are quite difficult work, combining the vision of an architect and the nitty-gritty stamina of a Roman legionnaire building a road through Gaul.

Here’s the passage that caught my attention and earned a place in my “quotes to note” folder:

As data grows horizontally across the enterprise, businesses are faced with the urgent need to better define data and create an accurate, transparent and accessible view of their metadata. Metadata management and business glossary are foundational technologies that can help companies achieve this goal. EMA developed seven best practices that guide companies to get the most of their data management. All attendees receive the complimentary White Paper Managing Metadata for Accessibility, Transparency and Accountability authored by Shawn Rogers.

I am not sure what some of these words and phrases mean. For example, “better define data”. My question, “What data?” Next I struggled with “create an accurate, transparent, and accessible view of their metadata.” Now there are commercial systems which allow “views” of controlled term lists. One such vendor is Access Innovations, an outfit which visited me in rural Kentucky to talk about new approaches to indexing certain types of problematic content which is proliferating in organizations. Think in terms of social content without much context other than a “handle”, date, and time even within a buttoned up company.

What do users know? Image source: http://www.computersunplugged.com.au/images/angry-man.gif

Another phrase that caught my limited attention was “metadata management and business glossary are foundational”. Okay, but before one manages, one must do a modest amount of work. Even automated systems benefit from smart algorithms helped with a friendly human-crafted training document set or direct intervention by a professional information scientist. Some organizations use commercial controlled term lists to seed the automatic content tagging system. I am all for management, but I don’t think I want to jump from the hard work to “management” without going to the controlled vocabulary gym and doing some push ups. “Business glossary” baffled me, and I was not annoyed by what seems to be a high school grammar misstep. Nope. The “business glossary” is a good thing, but it must be constructed to match the language of the users, the corpus, and the accepted terminology. Indexing a document with the term “terminal” is not too helpful unless there is a “field code” that pegs the terminal as one where I find airplanes, trains, death, or computer stuff. A “business glossary” does not appear from thin air, although a “cloud” outfit may have that notion. I know better.
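A field code is nothing exotic. Here is a minimal sketch of the idea using my “terminal” example; the codes and vocabulary entries are invented for illustration:

```python
# Sketch: the same word indexed under different field codes resolves to
# different concepts. Codes and vocabulary entries are invented.
controlled_vocabulary = {
    ("terminal", "TRAN"): "airport/rail terminal",
    ("terminal", "MED"):  "terminal illness",
    ("terminal", "COMP"): "computer terminal",
}

def resolve(term: str, field_code: str) -> str:
    return controlled_vocabulary.get(
        (term.lower(), field_code), f"{term} (uncontrolled)"
    )

print(resolve("terminal", "COMP"))  # computer terminal
print(resolve("terminal", "TRAN"))  # airport/rail terminal
print(resolve("terminal", "XYZ"))   # terminal (uncontrolled) -- a lousy tag
```

Without the field code, or with a lousy one, the tag tells a user nothing; that is the hard, unglamorous work the webinar prose glides past.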

I did a quick Google search for “Shawn Rogers,” author of the white paper. Note: I don’t know what a white paper is. The first hit is to a document which is on what I think is a pay-to-play information service called “b-eye”. The second hit points to a LinkedIn profile. I don’t know if this is “the” Shawn Rogers whom I seek. I learned that he is:

[a professional who] has more than 19 years of hands-on IT experience with a focus on Internet-enabled technology. In 2004 he cofounded the BeyeNETWORK and held the position of Executive Vice President and Editorial Director. Shawn guided the company’s international growth strategy and helped the BeyeNETWORK grow to 18 web sites around the world making it the largest and most read community covering the business intelligence, data warehousing, performance management and data integration space. The BeyeNETWORK was sold to TechTarget in April 2010.

I concluded this was “the” Mr. Rogers I sought and that he or his organization is darned good at search engine optimization type work.

What clicked in my mind was a triple tap of hypotheses:

  1. A couple of services firms have teamed up to cash in on the taxonomy and metadata craze. I thought metadata had come and gone, but obviously these firms are, to use Google’s metaphor, putting more wood behind the metadata thing. So this is marketing in order to sell services. As I said, I am okay with that.
  2. These firms have found a way to address the core problem of indexing by people who do not have the faintest idea of what’s involved in metatagging that helps users. One hopes.
  3. The two companies are not sure what the outcome of the webinar and the white paper distribution will be. In short, this is a fishing trip or an exploration of the paths on an island owned by a cruise company. There’s not much at risk.

Okay, enough.

Here’s my view on metadata.

First, most organizations have zero editorial policy and zero willingness to do the hard work required to dedupe, normalize, and tag content in a way that allows a user to find a particular item without sticky notes, making phone calls, or clicking and scanning stuff for the needed items. I think vendors promise the sun and moon and deliver gravel. Don’t agree? Use the comments section, please. Don’t call me.

Second, most of the vendors who offer industrial strength indexing and content processing systems know what needs to be done to make content findable. But the licensees often want a silver bullet. So the vendors remain silent on certain key points such as the Roman legionnaire working in the snow part. The cost part is often pushed to the margin as well.

Third, the information technology professionals “know” best. Not surprisingly most content access in organizations is a pretty lukewarm activity. I received an email last week chastising me for pointing out that more than half of an organization’s search system users were dissatisfied with whatever system the company made available. Hey, I just report the facts. I know how to find information in my organization.

Fourth, no one pays real attention to the user of a system. The top brass, the IT experts, and the vendors talk about the users. The users don’t know anything and whatever input those folks provide is not germane to the smarties. Little wonder that in some organizations systems are just worked around. Tells range from a Google search appliance in marketing to sticky notes on monitors.

Will I attend the webinar? Nah. I don’t do webinars. Do I want to change the world and make every organization have a super duper controlled term list and findable content? Nah. Don’t care. Do I want outfits like CNBC to do a tiny bit of content curation before posting unusual write ups with possible grammatical errors? You bet.  What if those metadata and other tags are uncontrolled, improperly applied, and mismatched to the lingo? Status quo, I assert.

Enjoy the webinar. Good luck with your metadata and the “cloud in your hands” approach. Back to the goose pond. Honk.

Stephen E Arnold, July 29, 2011

Sponsored by Pandia.com, publishers of The New Landscape of Enterprise Search, which is not a white paper and it is not free. But at $20, such a deal.

Microsoft, Search, and Perseverance

July 27, 2011

I no longer spend much time thinking about Microsoft search. Frankly it is too confusing. In my different monographs, I have mentioned or discussed, in my typical superficial manner, Outlook search, Windows search, free SharePoint search, Fast Search, Cognition Technologies’ search add-in, Powerset search and content processing, various research search systems, SQL Server search, the multiple accounting systems’ search, and probably a few others. But now the focus is on Bing search, triggered in part by “Why Microsoft Won’t Dump Bing.” This screed was sparked by the New York Times spider-friendly article revealing that Bing was an expensive proposition.

Okay, be still my heart.

Search is expensive. One of the mysteries of online is the natural monopoly. So when an online service is expensive, the number of people in the game is modest. Why is search expensive? As the amount of digital data goes up, so do the plumbing costs. When Mr. Internet was in diapers, indexing content was less expensive. Now the only thing that reduces the cost of indexing is processing modest amounts of content.

Microsoft dumped book scanning and indexing due to cost. But Microsoft is not in a position to concede online search to the fun laddies and lassies at the Google. If anything, Microsoft will spend even more money on search in the foreseeable future.

Here’s why:

First, Microsoft is competing with Google and Google is a monopoly. Knocking off the big dog is tough.

Second, Microsoft bought Fast Search & Transfer, which *actually* had a workable Web indexing system. With $1.2 billion for a darned interesting product, Microsoft is like the guy in the yellow woods who tries to walk down two roads simultaneously. Tough to do without a couple of closely aligned paths. Bing and Fast Search’s Web system are like one path around Yellowstone and the other path running the other way to Jackson Hole. Arduous undertaking even in a sci fi novel.

Third, Microsoft, like Google, does not have a unified search strategy. Yikes! Heresy. Consider Google. There is the Google Search Appliance, Site Search, the Android search, and the other bits and pieces that look like the same thing but do vary in some fascinating ways. Microsoft’s approach is similar, but Google seems to be morphing into Microsoft. One of the goslings at lunch a moment ago pointed out that he thought the Google banning of companies from Google+ was the genius of a former Microsoft employee. True? False? No one really cares because neither Google nor Bing will be changing their informed approach to search.

Let’s look forward.

Google and Microsoft will face some search challenges because search is no longer the focal point of the teens we have been observing. Asking friends seems to be popular. Apps that deliver info, ready to gobble, no provenance required.

My view is that we will be using the decision engine longer than Microsoft buys Bing ads on Adam Carolla’s podcast.

Stephen E Arnold, July 27, 2011

Freebie just like Bing.

The Cost of Search: Bing Edition only $2.6 Billion

July 26, 2011

In July 2008, I wrote “Yahoo Cost Estimate”. Here’s the key passage:

I wanted to run through some of the cost data I have gathered over the years. The reason is this sentence in Miguel Helft’s “Yahoo Is Inviting Partners to Build on Its Search Power,” an essay that appeared in the Kentucky edition of the New York Times, July 10, 2008, page C5: “Yahoo estimates that it would cost $300 million to build a search service from scratch.”

At the time, the estimate was what I call a “crazy number.” How crazy was Yahoo’s guess-timate? According to “At Microsoft, Bing Too Costly to Keep” (New York Times, July 25, 2011, section B2) and the July 24, 2011 article “Bing Becomes a Distraction for Microsoft,” we get some numbers. Yahoo couldn’t do search even though it owned Inktomi and Fast Search’s Web search system. In 2008, Yahoo had $300 million and couldn’t do search then. Today Yahoo does not do search; Yahoo recycles Bing results. Now Bing is, if the New York Times is correct, too costly for Microsoft. Yikes. Search is expensive.

How does this sound? “Bing lost $2.6 billion in the latest fiscal year.” The Yahoo estimate was off by a factor of eight. Give Yahoo a break for inflation over the last 36 months, and the Yahoo number was wrong by a little less. On the other hand, when one adds up the total costs of Bing, the cost of Web search reaches Greek debt-scale levels.
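For those who like to check the arithmetic:

```python
# How far off was Yahoo's 2008 guess-timate?
bing_loss = 2_600_000_000     # dollars lost in the latest fiscal year
yahoo_estimate = 300_000_000  # dollars to build search from scratch, 2008
print(f"off by a factor of {bing_loss / yahoo_estimate:.1f}")  # 8.7
```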

What’s with the cost of Web search?

I have written extensively about the cost of search. You can get a run down of the various monographs which contain my thoughts about enterprise and Web search costs by running a query on this blog or looking at one of my books, monographs, and journal articles which explore search costs.

Here’s a rundown of the challenges.

First, on the Web information expands rapidly. Within the last three weeks, Google+ ran out of space. Search is not really tightly integrated with that service. If Google has problems scaling, other companies will too. And scaling is a black hole of search costs. As information expands, the infrastructure must be able to accommodate the data and the outputs of the indexing process. So lots of information to process equals lots of costs. In Microsoft’s case, we have a number which may or may not be accurate. But $2.6 billion is interesting. The New York Times does some fancy dancing around the $2.6 billion, but with the Web search landscape reduced pretty much to a handful of companies, the costs are a factor.

Second, I think that digital content is growing rapidly. No one knows exactly how fast, but I know from my test queries that certain content is either not indexed or no longer available. I think that indexing systems are becoming more selective and organizations, particularly government agencies, don’t have the money to make the information available as in the past. The result is that we may never know how much digital information is “on the Web,” and it’s a safe bet that finding non-indexed content is going to be more and more difficult. As a result, costs go up for what’s there, and there is neither money nor appetite for what is not indexed. Will commercial database producers pick up the slack? Nope. Too expensive. So the free Web index ride is over. Commercial services won’t move in. The economics of everything related to low-value information mean content is going to be less findable. Maybe a breakthrough will reverse this situation, but I predict a worsening of content access for the foreseeable future.

Third, users want systems which think for them. Transforming content and then making it available via next generation “search without search” methods is costly to “invent” and costly to maintain. As a result, the information which becomes available will be content that can be easily monetized. The only way to make the numbers work is to focus on what sells and then find a way to monetize that content. Getting the “what sells” part wrong will sink a search engine before the ink is dry on the VC’s first check. Getting the “monetize” part wrong means the company will be shut down, probably more quickly than in the past.

To sum up, the costs of search are interwoven. There’s the plumbing cost, the technology cost, the marketing cost, the sales cost, and the opportunity cost. When Microsoft cannot afford search, who can? Right now, there are just a few answers. Do you use Google, Blekko, Exalead Web Search, Yandex, Baidu, Jike, or Bing? How many will be financially viable in one year? History teaches us that there will be attrition in this cost battle.

And if information is indeed infinite, won’t the cost be infinite too or at least $2.6 billion?

Stephen E Arnold, July 26, 2011

Sponsored by Pandia.com, publishers of The New Landscape of Enterprise Search

Autonomy-Repsol Articles at E-Business

July 21, 2011

We’ve found an interesting roundup of Autonomy-related information on the Repsol deal at E-Business Library. What is notable is that the page looks as if it were assembled automatically. Does Panda have a way to discern auto-generated pages?

But automated or not, there’s a lot of information, and Autonomy should be quite happy with whoever created the Repsol page. Here’s an example from one of the documents snippetized by the service. The source is a press release which sums up the Autonomy Repsol agreement this way:

“Autonomy Corporation plc (LSE: AU. or AU.L), a global leader in infrastructure software for the enterprise, today announced that Repsol, Spain’s largest oil and gas company, has selected Autonomy’s cornerstone technology, IDOL (Intelligent Data Operating Layer) and Autonomy Virage for knowledge management across the enterprise.”

Repsol is a huge company with a LOT of infrastructure to manage. Autonomy provides expert tools for managing and analyzing information, including unstructured data, with their IDOL suite of products. In addition, Autonomy Virage is one of the leaders in video and audio search. Repsol employees will now be able to harness this power to manage their wealth of information and to share across their global operation. Sounds like a good choice.

Check out the roundup of articles at E-Business for more information. If you want to know what Autonomy is doing, you can navigate to Autonomy.com. The firm does a good job of posting information in a timely manner about its deals.

Programmers at Web indexing engines have their work cut out for them. Novices in search may have difficulty discerning the gems published by the addled goose from the pages generated by unknown methods.

Cynthia Murrell, July 21, 2011

Sponsored by Pandia.com, publishers of The New Landscape of Enterprise Search

Search and Security: Old Wine Rediscovered

July 20, 2011

There is nothing like the surprise on a user’s face when an indiscriminate content crawl allows a person to read confidential, health, or employment information. Overenthusiastic “search experts” often learn the hard way that conducting a thorough content audit *before* indexing content on an intranet is a really good idea.

Computerworld’s new article “Security Manager’s Journal: The perils of enterprise search” is an insight into the dangers of sloppy search parameters, or what we call old wine rediscovered.

The author does a good job of addressing the security concerns that can pop up if an enterprise search is not well thought out.


If security concerns aren’t addressed, this is what you can expect: The IT team does some research, makes a choice, deploys the infrastructure and begins pointing it to data repositories. Before you know it, someone conducts a search with a term like “M&A” and turns up a sensitive document naming a company that’s being considered for acquisition, or a search for the word “salary” reveals an employee salary list that was saved in an inappropriate directory. In other words, people will be able to find all manner of documents that they shouldn’t have access to.


Thurman cites the “rule of least privilege,” or the rule that information should be available only to those who need to know it. With enterprise search, it means that queries should return only information that is relevant to the search and that the user is allowed to see.
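In code, the least-privilege idea boils down to intersecting each hit’s access control list with the searcher’s groups before anything is displayed. A minimal sketch with invented users, groups, and documents:

```python
# Sketch of query-time security trimming: a hit is returned only if the
# user's groups intersect the document's ACL. All data is invented.
doc_acls = {
    "q3_salaries.xlsx": {"hr", "executives"},
    "m_and_a_target.docx": {"executives"},
    "cafeteria_menu.pdf": {"all_staff"},
}
user_groups = {"alice": {"hr", "all_staff"}, "bob": {"all_staff"}}

def trim(hits: list[str], user: str) -> list[str]:
    groups = user_groups.get(user, set())
    return [h for h in hits if doc_acls.get(h, set()) & groups]

hits = list(doc_acls)  # pretend every document matched the query
print(trim(hits, "alice"))  # ['q3_salaries.xlsx', 'cafeteria_menu.pdf']
print(trim(hits, "bob"))    # ['cafeteria_menu.pdf']
```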

All in all, a rather informative if redundant read that outlines a few security options and ideas.

What we find interesting is that such write ups have to be recommissioned. Not much sophistication in enterprise search land, we fear.

Stephen E Arnold, July 20, 2011

Sponsored by ArticleOnePartners.com, the source for patent research

