MUT Bites: Security Perimeters May Not Work Very Well

December 26, 2024

This blog post is the work of an authentic dinobaby. No smart software was used.

I spotted a summary of an item in Ars Technica which recycled a report from Checkmarx and Datadog Security Labs. If you want the details, read “Yearlong Supply Chain Attack Targeting Security Pros Steals 390,000 Credentials.” I want to skip what is now a soap opera story repeated again and again: Bad actors compromise a system, security professionals are aghast, and cybersecurity firms license more smart, agentic enabled systems. Repeat. Repeat. Repeat. That’s how soap operas worked when I was growing up.

Let’s jump to several observations:

  1. Cyber defenses are not working
  2. Cyber security vendors insist their systems are working because numerous threats were blocked. Just believe our log data. See. We protected you … a lot.
  3. Individual cyber security vendors are a cohort which can be compromised, not once in a mad minute of carelessness. No. Compromised for — wait for it — up to a year.

The engineering of software and systems is, one might conclude, rife with vulnerabilities. If the cyber security professionals cannot protect themselves, who can?

Stephen E Arnold, December 26, 2024

FReE tHoSe smaRT SoFtWarEs!

December 25, 2024

No smart software involved. Just a dinobaby’s work.

Do you have the list of stop words you use in your NLP prompts? Are you unhappy when words on the list like “b*mb,” “terr*r funding,” and others do not return exactly what you are seeking? If you say, “Yes,” you will want to read “BEST-OF-N JAILBREAKING” by a Frisbee team complement of wizards; namely, John Hughes, Sara Price, Aengus Lynch, Rylan Schaeffer, Fazl Barez, Sanmi Koyejo, Henry Sleight, Erik Jones, Ethan Perez, and Mrinank Sharma. The people doing the heavy lifting were John Hughes (a consultant who does work for Speechmatics and Anthropic) and Mrinank Sharma (an Anthropic engineer involved in — wait for it — adversarial robustness).

The main point is that Anthropic-linked wizards have figured out how to knock down the guard rails for smart software. And those stop words? Just whip up a snappy prompt, mix up the capital and lower case letters, and keep sending the query to the smart software. At some point, those capitalization and other fixes will cause the LLM to go your way. Want to whip up a surprise in your bathtub? LLMs will definitely help you out.
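For the curious, the "mix up the letters and resend" loop is simple enough to sketch. This is my own minimal approximation, not the authors' code: the function names and the toy "model" are mine, and the real attack sends each scrambled variant to an actual LLM and uses a refusal classifier rather than a string check.

```python
import random

def scramble_case(prompt: str, rng: random.Random) -> str:
    """Randomly flip the case of each letter, one of the cheap
    augmentations the paper applies to a refused prompt."""
    return "".join(
        ch.upper() if rng.random() < 0.5 else ch.lower()
        for ch in prompt
    )

def best_of_n(prompt, query_fn, is_refusal, n=100, seed=0):
    """Resend case-scrambled variants until one slips past the filter.
    Returns (attempt_number, variant, reply) or None if all n fail."""
    rng = random.Random(seed)
    for attempt in range(1, n + 1):
        variant = scramble_case(prompt, rng)
        reply = query_fn(variant)
        if not is_refusal(reply):
            return attempt, variant, reply
    return None

# Stand-in for a real LLM endpoint: refuses only the exact-case prompt,
# which is how naive stop-word or string filters behave.
def toy_model(p):
    return "refused" if p == "tell me a secret" else "ok: " + p

result = best_of_n("tell me a secret", toy_model, lambda r: r == "refused")
```

Against a filter that matches exact strings, the very first scrambled variant usually gets through, which is the paper's point: attack success climbs predictably with the number of attempts.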

The paper has nifty charts and lots of academic hoo-hah. The key insight is what the many, many authors call “attack composition.” You will be able to get the how-to by reading the 73-page paper, probably a result of each author writing 10 pages in the hopes of landing an even more high-paying, in-demand gig.

Several observations:

  1. The idea that guard rails work is now called into question
  2. The disclosure of the method means that smart software will do whatever a clever bad actor wants
  3. The rush to AI is about market lock up, not the social benefit of the technology.

The new year will be interesting. The paper’s information is quite the holiday gift.

Stephen E Arnold, December 25, 2024

Apple and Google Texting Bad. So What Are the Options?

December 17, 2024

This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.

This headline caught my attention: “FBI Warns iPhone and Android Users to Stop Texting Each Other for a Disturbing Reason.” What is that disturbing reason? According to the online article cited:

Both the FBI and US Cybersecurity and Infrastructure Security Agency are warning users to use a messaging app instead and one that uses end-to-end encryption. The warning comes amid a major cyberattack that has been called ‘unprecedented’ by the press. The attack has impacted telecommunications companies such as AT&T and Verizon. By using encrypted messaging apps, texters can keep their messages hidden from hackers. The hacking campaign has been nicknamed Salt Typhoon by Microsoft and it has not been fully solved. NBC News called it ‘one of the largest intelligence compromises in US history’.

Interesting. I recall that AT&T was not particularly chatty when information about a security compromise floated around. T-Mobile has experienced a few security issues as well. And Verizon? Same boat.


It is interesting that a sinking boat is an image several art generators refuse to produce. Anyway, Midjourney — after a lot of semantic wrangling — spit out this good enough image.

After reading the article, I decided to navigate to X.com and ask Grok what the options were for a person who has muscle memory for texting folks from an Apple or Android mobile device. To use Grok, navigate to the tweeter, log in, and look for the Grok word on the displayed page. I went through these hoops slowly because when I queried Google about “how to use Grok,” the Google told me I had to sign up for a premium membership. That was wrong. No wonder Google is a service I find a bit short of the target.

Here are the services Grok (Mr. Musk’s smart software designed to compete in the Big Boy AI dodgeball contest) identified. I quote the tweet thing’s smart output:

  • “Signal: Known for its strong privacy features, Signal offers end-to-end encryption for texts, voice calls, and video calls. It’s open-source, which adds to its credibility for privacy-conscious users. Signal is available on both Android and iOS, making it a great cross-platform option.
  • WhatsApp: With over 2 billion users, WhatsApp is a popular choice for cross-platform messaging. It supports text, voice, and video calls with end-to-end encryption. However, it’s worth noting that WhatsApp is owned by Meta (formerly known as Facebook), which might be a concern for privacy.
  • Telegram: Offers a wide range of features like large group chats, channels, bots, and self-destructing messages. While it provides end-to-end encryption in its “Secret Chat” mode, not all chats are encrypted by default, which could be a privacy concern for some users. Telegram is available on both Android and iOS.
  • Facebook Messenger: Another Meta-owned service, it’s widely used due to its integration with the social network. It supports free calls, video chats, and even has features like payment options in some regions. However, its privacy policies are often scrutinized.
  • Google Chat: Part of Google Workspace, this app provides a platform for both personal and work-related communications with features like spaces for team collaboration. It’s integrated well with other Google services, although its adoption outside of Google’s ecosystem is less widespread.
  • Viber: Offers free calls and messages, including stickers, GIFs, and public chats. Its user base is significant in certain regions but less so globally compared to WhatsApp or Telegram. Viber also has end-to-end encryption for calls and messages.
  • Spike: Described as an email-based messaging app, Spike combines email and chat functionality, making it suitable for both personal and professional use. It’s noted for its cross-platform capabilities and privacy focus.
  • SimpleX Chat: An open-source, privacy-focused chat app that doesn’t require phone numbers for registration, focusing on decentralization and user privacy.
  • Element (formerly Riot.im): Based on the Matrix protocol, it’s an open-source, decentralized platform that supports secure, encrypted communication. It’s good for those looking for privacy with the flexibility of self-hosting.
  • Beeper: An all-in-one messaging app that aims to unify various messaging services into one platform, although it has limitations like no official iMessage support due to Apple’s policies.”

Each of these apps is allegedly an E2EE or end-to-end encrypted messaging service. These are not presented in alphabetical order. I assume that Grok is providing the names in a rank ordering based on some criteria not explained to me. But that’s the future of smart software. Just do something and say, “Good enough.”

I noted several things about this list:

  1. There are no Chinese messaging apps. Is this censorship from an outfit seemingly concerned about free speech? I was surprised at the omission of QQ and WeChat with appropriate surveillance notices from Grok.
  2. One approach is for messaging clients to rely on Extensible Messaging and Presence Protocol. Conversations for Android and ChatSecure for iOS were at one time options.
  3. Inclusion of Telegram is definitely interesting because Pavel Durov has reversed course and now cooperates with law enforcement. Telegram has even played nice with anti-CSAM organizations. The about-face coincided with his detainment by French authorities.
  4. The Grok listing does not include new and possibly interesting services like PrivateLine.io, which illustrates the shallow nature of the knowledge exposed to these smart systems. (Even Yandex.com lists this service in its search results.)
  5. Alphabetizing lists is just not part of the 2024 world it seems.

There are some broader questions about encrypted messaging which are not addressed in the cited write up or the Grok “smart” output; for example:

  1. Are other messaging apps encrypted end to end or are there “special” operations which make the content visible and loggable once the user sends the message?
  2. Is the encryption method used by these apps “unbreakable”?
  3. Are the encryption methods home grown or based on easily inspected open source methods?
  4. What entities have access to either the logged data about a message or access to the message payload?

The alarm has been sounded about the failure of some US telecommunications companies to protect their own systems and, by extension, the security of their customers. But numerous questions remain, and the answers on offer are, from my point of view, thin.

Stephen E Arnold, December 17, 2024

FOGINT: Security Tools Over Promise & Under Deliver

November 22, 2024

While the United States and the rest of the world have been obsessed with the fallout of the US presidential election, bad actors planned terrorist plots. I24 News reports that after a soccer/football match in Amsterdam, there was a preplanned attack on Israeli fans: “Evidence From WhatsApp, Telegram Groups Shows Amsterdam Pogrom Was Organized.”

The Daily Telegraph located screenshots from WhatsApp and Telegram that displayed messages calling for a “Jew Hunt” after the game. The message writers were identified as pro-Palestinian supporters. The bad actors also called Jews “cancer dogs,” a vile slur in Dutch, and told co-conspirators to bring fireworks to the planned attack. Dutch citizens and other observers were underwhelmed with the response of the Netherlands’ law enforcement. Even King Willem-Alexander noted that his country failed to protect the Jewish community when he spoke with Israeli President Isaac Herzog:

“Dutch king Willem-Alexander reportedly said to Israel’s President Isaac Herzog in a phone call on Friday morning that ‘we failed the Jewish community of the Netherlands during World War II, and last night we failed again.’”

This is an unfortunate example of the failure of cyber security tools that monitor social media. If this was a preplanned attack and the Daily Telegraph located the messages, then a cyber security company should have as well. These policeware and intelware systems failed to alert authorities. Is this another confirmation that cyber security and threat intelligence tools over promise and under deliver? Well, T-Mobile is compromised again, and there is that minor lapse in Israel in October 2023.

Whitney Grace, November 22, 2024

Short Snort: How to Find Undocumented APIs

November 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The essay / how-to “All the Data Can Be Yours” does a very good job of providing a hacker road map. The information in the write up includes:

  1. Tips for finding undocumented APIs in GitHub
  2. Spotting “fetch” requests
  3. WordPress default APIs
  4. Information in robots.txt files
  5. Using the Google
  6. Examining JavaScripts
  7. Poking into mobile apps
  8. Some helpful resources and tools.

Each of these items includes details; for example, specific search strings and “how to make a taco” type of instructions. Assembling this write up took quite a bit of work.
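Two of the tips above — mining robots.txt and probing WordPress default APIs — are easy to illustrate. The sketch below is mine, not the essay’s: the robots.txt content and the site URL are made up, while the /wp-json/wp/v2/ routes are the stock WordPress REST endpoints the write up refers to.

```python
from urllib.parse import urljoin

# A hypothetical robots.txt body. Disallow lines often point at
# paths the site owner did not intend casual visitors to browse.
ROBOTS_TXT = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /internal-api/
Allow: /wp-admin/admin-ajax.php
"""

def disallowed_paths(robots_txt: str) -> list[str]:
    """Pull the Disallow entries out of a robots.txt body."""
    paths = []
    for line in robots_txt.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "disallow" and value.strip():
            paths.append(value.strip())
    return paths

def wordpress_default_endpoints(site: str) -> list[str]:
    """WordPress exposes a REST API at /wp-json/ by default; these
    two routes list posts and users on many stock installs."""
    return [
        urljoin(site, path)
        for path in ("/wp-json/wp/v2/posts", "/wp-json/wp/v2/users")
    ]

leads = disallowed_paths(ROBOTS_TXT)
endpoints = wordpress_default_endpoints("https://example.com")
```

In other words, the “road map” quality of the essay comes from how little tooling any of this requires: a text file and two default URLs already hand over reconnaissance leads.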

Those engaged in cyber security (white, gray, and black hat types) will find the write up quite interesting.

I want to point out that I am not criticizing the information per se. I do want to remind those with a desire to share their expertise of three behaviors:

  1. Some computer science and programming classes in interesting countries use this type of information to provide students with what I would call hands on instruction
  2. Some governments, not necessarily aligned with US interests, provide the tips to the employees and contractors to certain government agencies to test and then extend the functionalities of the techniques presented in the write up
  3. Certain information might be more effectively distributed in other communication channels.

Stephen E Arnold, November 20, 2024

Insider Threats: More Than Threat Reports and Cumbersome Cyber Systems Are Needed

November 13, 2024

Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.

With actionable knowledge becoming increasingly concentrated, is it a surprise that bad actors go where the information is? One would think that organizations with high-value information would be more vigilant when it comes to hiring people from other countries, using faceless gig worker systems, or relying on an AI-infused résumé on LinkedIn. (Yep, that is a Microsoft entity.)


Thanks, OpenAI. Good enough.

The fact is that big technology outfits are supremely confident in their ability to do no wrong. Billions in revenue will boost one’s confidence in a firm’s management acumen. The UK newspaper Telegraph published “Why Chinese Spies Are Sending a Chill Through Silicon Valley.”

The write up says:

In recent years the US government has charged individuals with stealing technology from companies including Tesla, Apple and IBM and seeking to transfer it to China, often successfully. Last year, the intelligence chiefs of the “Five Eyes” nations clubbed together at Stanford University – the cradle of Silicon Valley innovation – to warn technology companies that they are increasingly under threat.

Did the technology outfits get the message?

The Telegraph article adds:

Beijing’s mission to acquire cutting edge tech has been given greater urgency by strict US export controls, which have cut off China’s supply of advanced microchips and artificial intelligence systems. Ding, the former Google employee, is accused of stealing blueprints for the company’s AI chips. This has raised suspicions that the technology is being obtained illegally. US officials recently launched an investigation into how advanced chips had made it into a phone manufactured by China’s Huawei, amid concerns it is illegally bypassing a volley of American sanctions. Huawei has denied the claims.

With some non-US engineers and professionals having skills needed by some of the high-flying outfits already aloft or working in their hangars to launch their breakthrough product or service, US companies go through human resource and interview processes. However, many hires are made because a body is needed, someone knows the candidate, or the applicant is willing to work for less money than an equivalent person with a security clearance, for instance.

The result is that most knowledge centric organizations have zero idea about the security of their information. Remember Edward Snowden? He was visible. Others are not.

Let me share an anecdote without mentioning names or specific countries and companies.

A business colleague hailed from an Asian country. He maintained close ties with his family in his country of origin. He had a couple of cousins who worked in the US. I was at his company which provided computer equipment to the firm at which I was working in Silicon Valley. He explained to me that a certain “new” technology was going to be released later in the year. He gave me an overview of this “secret” project. I asked him where the data originated. He looked at me and said, “My cousin. I even got a demo and saw the prototype.”

I want to point out that this was not a hire. The information flowed along family lines. The sharing of information was okay because of the closeness of the family. I later learned the information was secret. I realized that doing an HR interview process is not going to keep secrets within an organization.

I ask the companies with cyber security software which has an insider threat identification capability, “How do you deal with family or high-school relationship information channels?”

The answer? Blank looks.

As the Telegraph story suggests, most of the whiz-bang HR methods and most of the cyber security systems don’t work. Cultural blind spots are a problem. Maybe smart software will prevent knowledge leakage. I think that some hard thinking needs to be applied to this problem. The Telegraph write up does not tackle the job. I would assert that most organizations have fooled themselves. Billions and arrogance have interesting consequences.

Stephen E Arnold, November 13, 2024

Two New Coast Guard Cybersecurity Units Strengthen US Cyber Defense

November 13, 2024

Some may be surprised to learn the Coast Guard had one of the first military units to do signals intelligence. Early in the 20th century, the Coast Guard monitored radio traffic among US bad guys. It is good to see the branch pushing forward. “U.S. Coast Guard’s New Cyber Units: A Game Changer for National Security,” reveals a post from ClearanceJobs. The two units, the Coast Guard Reserve Unit USCYBER and 1941 Cyber Protection Team (CPT), will work with U.S. Cyber Command. Writer Peter Suciu informs us:

“The new cyber reserve units will offer service-wide capabilities for Coast Guardsman while allowing the service to retain cyber talent. The reserve commands will pull personnel from around the United States and will bring experience from the private and public sectors. Based in Washington, D.C., CPTs are the USCG’s deployable units responsible for offering cybersecurity capabilities to partners in the MTS [Marine Transportation System].”

Why tap reserve personnel for these units? Simple: valuable experience. We learn:

“‘Coast Guard Cyber is already benefitting from its reserve members,’ said Lt. Cmdr. Theodore Borny of the Office of Cyberspace Forces (CG-791), which began putting together these units in early 2023. ‘Formalizing reserves with cyber talent into cohesive units will give us the ability to channel a skillset that is very hard to acquire and retain.’”

The Coast Guard Reserve Unit will (mostly) work out of Fort Meade in Maryland, alongside the U.S. Cyber Command and the National Security Agency. The post reminds us the Coast Guard is unique: it operates under the Department of Homeland Security, while our other military branches are part of the Department of Defense. As the primary defender of our ports and waterways, brown water and blue water, the Coast Guard is, we think, well positioned to capture and utilize cybersecurity intel.

Cynthia Murrell, November 13, 2024

Meta and China: Yeah, Unauthorized Use of Llama. Meh

November 8, 2024

This post is the work of a dinobaby. If there is art, accept the reality of our using smart art generators. We view it as a form of amusement.

That open source smart software, you remember, makes everything computer- and information-centric so much better. One open source champion laboring as a marketer told me, “Open source means no more contractual handcuffs, the ability to make changes without a hassle, and evidence of the community.”


An AI-powered robot enters a meeting. One savvy executive asks in Chinese, “How are you? Are you here to kill the enemy?” Another executive, seated closer to the gas emitted from a cannister marked with hazardous materials warnings gasps, “I can’t breathe!” Thanks, Midjourney. Good enough.

How did those assertions work out for China? If I can believe the “trusted” outputs of the “real” news outfit Reuters, just super cool. According to “Exclusive: Chinese Researchers Develop AI Model for Military Use on Back of Meta’s Llama”, those engaging folk of the Middle Kingdom:

… have used Meta’s publicly available Llama model to develop an AI tool for potential military applications, according to three academic papers and analysts.

Now that’s community!

The write up wobbles through some words about the alleged Chinese efforts and adds:

Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company. Its terms also prohibit use of the models for “military, warfare, nuclear industries or applications, espionage” and other activities subject to U.S. defense export controls, as well as for the development of weapons and content intended to “incite and promote violence”. However, because Meta’s models are public, the company has limited ways of enforcing those provisions.

In the spirit of such comments as “Senator, thank you for that question,” a Meta (aka Facebook) wizard allegedly said:

“That’s a drop in the ocean compared to most of these models (that) are trained with trillions of tokens so … it really makes me question what do they actually achieve here in terms of different capabilities,” said Joelle Pineau, a vice president of AI Research at Meta and a professor of computer science at McGill University in Canada.

My interpretation of the insight? Hey, that’s okay.

As readers of this blog know, I am not too keen on making certain information public. Unlike some outfits’ essays, Beyond Search tries to address topics without providing information of a sensitive nature. For example, search and retrieval is a hard problem. Big whoop.

But posting what I would term sensitive information as usable software for anyone to download and use strikes me as something which must be considered in a larger context; for example, a bad actor downloading an allegedly harmless penetration testing utility of the Metasploit-ilk. Could a bad actor use these types of software to compromise a commercial or government system? The answer is, “Duh, absolutely.”

Meta’s founder of the super helpful Facebook wants to bring people together. Community. Kumbaya. Sharing.

That has been the lubricant for amassing power, fame, and money… Oh, also a big gold necklace similar to the ones I saw labeled “Pharaoh jewelry.”

Observations:

  1. Meta (Facebook) does open source for one reason: To blunt initiatives from its perceived competitors and to position itself to make money.
  2. Users of Meta’s properties are only data inputters and action points; that is, they are instrumentals.
  3. Bad actors love that open source software. They download it. They study it. They repurpose it to help the bad actors achieve their goals.

Did Meta include a kill switch in its open source software? Oh, sure. Meta is far-sighted, concerned with misuse of its innovations, and super duper worried about what an adversary of the US might do with that technology. On the bright side, if negotiations are required, the head of Meta (Facebook) allegedly speaks Chinese. Is that a benefit? He could talk with the weaponized robot dispensing biological warfare agents.

Stephen E Arnold, November 8, 2024

Microsoft 24H2: The Reality Versus Self Awareness

November 4, 2024

Sorry. Written by a dumb humanoid. Art? It is AI, folks. Eighty year old dinobabies cannot draw very well in my experience.

I spotted a short item titled “Microsoft Halts Windows 11 24H2 Update for Many PCs Due to Compatibility Issues.” Today is October 29, 2024. By the time you read this item, you may have a Windows equipped computer humming along on the charmingly named 11 24H2 update. That’s the one with Recall.


Microsoft does not see itself as slightly bedraggled. Those with failed updates do. Thanks, ChatGPT, good enough, but at least you work. MSFT Copilot has been down for six days with a glitch.

Now if you work at the Redmond facility where Google paranoia reigns, you probably have Recall running on your computing device as well as Teams’ assorted surveillance features. That means that when you run a query for “updates”, you may see screens presenting an array of information about non functioning drivers, printer errors, visits to the wonderfully organized knowledge bases, and possibly images of email from colleagues wanting to take kinetic action about the interns, new hires, and ham fisted colleagues who rolled out an update which does not update.

The write up offers this helpful advice:

We advise users against manually forcing the update through the Windows 11 Installation Assistant or media creation tool, especially on the system configurations mentioned above. Instead, users should check for updates to the specific software or hardware drivers causing the holds and wait for the blocks to be lifted naturally.

Okay.

Let’s look at this from the point of view of bad actors. These folks know that the “new” Windows with its many nifty new features has some issues. When the Softies cannot get wallpaper to work, one knows that deeper, more subtle issues are not on the wizards’ radar.

Thus, the 24H2 update will be installed on bad actors’ test systems and subjected to tests only a fan of Metasploit and related tools can appreciate. My analogy is that these individuals, some of whom are backed by nation states, will give the update the equivalent of a digital colonoscopy. Sorry, Redmond, no anesthetic this go round.

Why?

Microsoft suggests that security is Job Number One. Obviously when fingerprint security functions don’t work and the Windows Hello fails, the bad actor knows that other issues exist. My goodness. Why doesn’t Microsoft just turn its PR and advertising firms loose on Telegram hacking groups and announce, “Take me. I am yours!”

Several observations:

  1. The update is flawed
  2. Core functions do not work
  3. Partners, not Microsoft, are supposed to fix the broken slot machine of operating systems
  4. Microsoft is, once again, scrambling to do what it should have done correctly before releasing a deeply flawed bundle of software.

Net net: Blaming Google for European woes and pointing fingers at everything and everyone except itself, Microsoft is demonstrating that it cannot do a basic task correctly. The only users who are happy are those legions of bad actors in the countries Microsoft accuses of making its life difficult. Sorry, Microsoft, you did this, but you could blame Google, of course.

Stephen E Arnold, November 4, 2024

Computer Security and Good Enough Methods

November 1, 2024

Written by a humanoid dinobaby. No AI except the illustration.

I read “TikTok Owner Sacks Intern for Sabotaging AI Project.” The BBC report is straight forward; it does not provide much “management” or “risk” commentary. In a nutshell, the allegedly China linked ByteDance hired or utilized an intern. The term “intern” used to mean a student who wanted to get experience. Today, “intern” has a number of meanings. For example, for certain cyber fraud outfits operating in Southeast Asia an “intern” could be:

  1. A person paid to do work in a special economic zone
  2. A person coerced into doing work for an organization engaged in cyber fraud
  3. A person who is indeed a student and wants to get some experience
  4. An individual kidnapped and forced to perform work; otherwise, bad things can happen in dark rooms.

What’s the BBC say? Here is a snippet:

TikTok owner, ByteDance, says it has sacked an intern for “maliciously interfering” with the training of one of its artificial intelligence (AI) models.

The punishment, according to the write up, was “contacting” the intern’s university. End of story.

My take on this incident is a bit different from the BBC’s.

First, how did a company allegedly linked to the Chinese government make a bad hire? If the student was recommended by a university, what mistake did the university and the professors training the young person commit? The idea is to crank out individuals who snap into certain roles. I am not sure the spirit of an American party school is part of the ByteDance and TikTok work culture, but I may be off base.

Second, when a company hires a gig worker or brings an intern into an organization, are today’s managers able to identify potential issues either with an individual’s work or that person’s inner wiring? The fact that an intern was able to fiddle with code indicates a failure of internal checks and balances. The larger question is, “Can organizations trust interns who are operating as insiders, but without the controls an organization should have over individual workers?” This gaffe makes clear that modern management methods are not proactive; they are reactive. For that reason, insider threats exist and could do damage. ByteDance, according to the write up, downplayed the harm caused by the intern:

ByteDance also denied reports that the incident caused more than $10m (£7.7m) of damage by disrupting an AI training system made up of thousands of powerful graphics processing units (GPU).

Is this claim credible? Nope. I refer to the information about four companies “downplaying the impact of the SolarWinds hack.” US outfits don’t want to reveal the impact of a cyber issue. Are outfits like ByteDance and TikTok on the up and up about the impact of the intern’s actions?

Third, the larger question becomes, “How does an organization minimize insider threats as it cuts training staff and relies on lower cost labor?” The answer is, in my opinion, clear: an organization does what it can and hopes for the best.

Like many parts of life in an informationized world (a “datasphere” in my lingo), the quality of most efforts is good enough. The approach guarantees problems in the future. These are problems which cannot be solved. Management just finds something to occupy its time. The victims are the users, the customers, or the clients.

The world, even when allegedly linked with nation states, is struggling to achieve good enough.

Stephen E Arnold, November 1, 2024
