So Much for Silicon Valley Solidarity

April 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I thought the entity called Benzinga was a press release service. Guess not. I received a link to what looked like a “real” news story written by a Benzinga Staff Writer named Jain Rounak: “Elon Musk Reacts As Marc Andreessen Says Google Is ‘Literally Run By Employee Mobs’ With ‘Chinese Spies’ Scooping Up AI Chip Designs.” The article is a short one, and it is not exactly what the title suggested to me. Nevertheless, let’s take a quick look at what seems to be some ripping of the Silicon Valley shibboleth of solidarity.

The members of the Happy Silicon Valley Social club are showing signs of dissension. Thanks, MSFT Copilot. How is your security today? Oh, really.

The hook for the story is another Google employee protest. The cause was a deal for Google to provide cloud services to Israel. I assume the Googlers split along ethno-political-religious lines: one group cheering for Hamas and another for Israel. (I don’t have any first-hand evidence, so I am leveraging the scant information in the Benzinga news story.)

Then what? Apparently Marc Andreessen of Netscape fame and AI polemics offered some thoughts. I am not sure where these assertions were made or if they are on the money. But, I grant to Benzinga, that the Andreessen emissions are intriguing. Let’s look at one:

“The company is literally overrun by employee mobs, Chinese spies are walking AI chip designs out the door, and they turn the Founding Fathers and the Nazis black.”

The idea that there are “Google mobs” running from the Foosball court to the vending machines and then to their quiet space and then to the parking lot is interesting. Where’s Charles Dickens of Tale of Two Cities fame when you need an observer to document a revolution? Are Googlers building barricades in the passageways? Are Prius and Tesla vehicles being set on fire?

In the midst of this chaotic environment, there are Chinese spies. I am not sure one has to walk chip designs anywhere. Emailing them or copying them from one Apple device to another works reasonably well in my experience. The reference to the Google art is a reminder that the high school management club approach to running a potentially trillion-dollar, alleged monopoly needs some upgrades.

Where’s the Elon in this? I think I am supposed to realize that Elon and Andreessen are on the same mental wavelength. The Google is not. Therefore, the happy family notion is shattered. Okay, Benzinga. Whatever. Drop those names. The facts? Well, drop those too.

Stephen E Arnold, April 23, 2024

Google AI: Who Is on First? I Do Not Know. No, No, He Is on Third

April 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A big reorg has rattled the Googlers. Not only are these wizards candidates for termination, the work groups are squished like the acrylic pour paintings thrilling YouTube crafters.

Image from Vizoli Art via YouTube at https://www.youtube.com/@VizoliArt

The image might be a representation of Google’s organization, but I am just a dinobaby without expertise in art or things Googley. Let me give you an example.

I read “Google Consolidates Its DeepMind and Research Teams Amid AI Push” (from the trust outfit itself, Thomson Reuters). The story presents the date as April 18, 2024. I learned:

The search engine giant had merged its research units Google Brain and DeepMind a year back to sharpen its focus on AI development and get ahead of rivals like Microsoft, a partner of ChatGPT and Sora maker OpenAI.

And who moves? The trust outfit says:

Google will relocate its Responsible AI teams – which focuses on safe AI development – from Research to DeepMind so that they are closer to where AI models are built and scaled, the company said in a blog post.

Ars Technica, which publishes articles without self-identifying with trust, ran “Google Merges the Android, Chrome, and Hardware Divisions.” That write up channels the acrylic pour approach to management, which Ars Technica describes this way:

Google Hardware SVP Rick Osterloh will lead the new “Platforms and Devices” division. Hiroshi Lockheimer, Google’s previous head of software platforms like Android and ChromeOS, will be headed to “some new projects” at Google.

Why? AI, of course.

But who runs this organizational mix up?

One answer appears in an odd little “real” news story from an outfit called Benzinga. “Google’s DeepMind to Lead Unified AI Charge as Company Seeks to Outpace Microsoft.” The write up asserts:

The reorganization will see all AI-related teams, including the development of the Gemini chatbot, consolidated under the DeepMind division led by Demis Hassabis. This consolidation encompasses research, model development, computing resources, and regulatory compliance teams…

I assume that the one big happy family of Googlers will sort out the intersections of AI, research, hardware, app software, smart software, lines of authority, P&L responsibility, and decision making. Based on my watching Google’s antics over the last 25 years, chaos seems to be part of the ethos of the company. One cannot forget that for the AI razzle dazzle, Code Red, and reorganizational acrylic pouring, advertising accounts for about 60 percent of the firm’s financial footstool.

Will Google’s management team be able to answer the question, “Who is on first?” Will the result of the company’s acrylic pour approach to organizational structures yield a YouTube video like this one? The creator Left Brained Artist explains why acrylic pours crack, come apart, and generally look pretty darned terrible.

Will Google’s pouring of units together yield a cracked result? Left Brained Artist’s suggestions may not apply to an online ad company trying to cope with difficult-to-predict competitors like the Zucker’s Meta or the Microsoft clump of AI stealth fighters: OpenAI, Mistral, et al.

Reviewing the information in these three write ups about Google, I will offer several of my unwanted and often irritating observations. Ready?

  1. Comparing the Microsoft AI re-organization to the Google AI re-organization, it seems to me that Microsoft has a more logical set up. Judging from the information to which I have access, Microsoft is closing deals for its AI technology with government entities and selected software companies. Microsoft is doing practical engineering drawings; Google is dumping acrylic paint, hoping it will be pretty and make sense.
  2. Google seems to be struggling from a management point of view. We have sit ins, we have police hauling off Googlers, and we have layoffs. We have re-organizations. We have numerous signals that the blue chip consulting approach to an online advertising outfit is a bit unpredictable. Hey, just sell ads and use AI to help you do it without creating 1960s’ style college sophomore sit ins.
  3. Get organized. Make an attempt to answer the question, “Who is on first?”

As Abbott and Costello explained:

Costello: Well, all I’m trying to find out is what’s the guy’s name on first base?

Abbott: Oh, no, no. What is on second base?

Costello: I’m not asking you who’s on second.

Abbott: Who’s on first.

Exactly. Just sell online ads.

Stephen E Arnold, April 23, 2024

More Inside Dope about McKinsey & Company

April 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

It appears that blue chip consultants are finding some choppy waters in the exclusive money pond at the knowledge country club.

“I Was a Consultant at McKinsey. Here’s the Frustrating Way They Pushed Me Out” reveals some interesting but essentially personal assertions about the blue chip consulting firm. McKinsey & Co. is associated in my mind with the pharmaceutical industry’s money maker, synthetic opioids. Living in Kentucky, I find evidence about the chemical compound fairly easy to spot. Drive east of my home. Check out Nitro, West Virginia, and you can gather more evidence.

ChatGPT captures an elite group pushing someone neither liked nor connected out the door. Good enough.

The main idea of the write up is that McKinsey is presented as an exclusive club. Being liked and having connections are more important than any other capability. A “best of the best” on the outs is left marooned in a cube. The only email comes from a consultant offering help related to finding one’s future elsewhere. Fun.

What’s the firm doing in the first quarter of 2024? If the information in the Business Insider article is on the money, McKinsey is reinventing itself. Here are some of the possibly accurate statements in the article:

  1. McKinsey & Co. has found easy consulting money drying up
  2. The firm is downsizing
  3. Work at McKinsey is mostly PowerPoint decks shaped to make the customer “look good”
  4. McKinsey does not follow its own high-value consulting advice when it comes to staffing.

What does the write up suggest? That is a question with different answers. For someone who has never worked at a blue chip consulting firm, the answer is, “Who cares?” For a person with some exposure to these outfits, the answer is, “So what’s new?” From an objective and reasonably well informed vantage point, the answer may be, “Are consulting firms a bunch of baloney?”

Change, however, is afoot. Competition for the blue-chip outfits was once narrowly defined. Now the competition is coming from unexpected places. I will offer one example to get your thought process rolling. Axios, a publishing company, is now positioning its journalists as “experts.” Instead of charging a couple of thousand dollars per hour, Axios will sell a “name brand expert,” video calls, and special news reports. Plus, Axios will jump into the always-exciting world of conferences in semi-nice places.

How will McKinsey and its ilk respond? Will these firms reveal that they are also publishing houses and have been since their inception? Will they morph into giants of artificial intelligence, possibly creating their own models from the reams of proprietary reports, memoranda, emails, and consultant notes? Will McKinsey buy an Axios-type outfit and morph into something the partners from the 1960s would never recognize? Will blue-chip firms go out of business as individuals low-ball engagements to cash-conscious clients?

Net net: When a firm like McKinsey finds itself pilloried for failure to follow its own advice, the future is uncertain. Perhaps McKinsey should call another blue chip outfit? Better yet, buy some help from GLG or Coleman.

Stephen E Arnold, April 23, 2024

Paranoia or Is it Parano-AI? Yes

April 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I get a kick out of the information about the future impact of smart software. If those writing about the downstream consequences of artificial intelligence were on the beam, those folks would be camping out in one of those salubrious Las Vegas casinos. They are not. Thus, the prognostications provide more insight into the authors’ fears in my opinion.

OpenAI produced this good enough image of a Top Dog reading reports about AI’s taking jobs from senior executives. Quite a messy desk, which is an indicator of an inferior executive mindset.

Here’s an example: “Even the Boss Is Worried! Hundreds of Chief Executives Fear AI Could Steal Their Jobs Too.” The write up is based on a study conducted by Censuswide for AND Digital. Here we go, fear lovers:

  1. A “jobs apocalypse”: “AI experts have predicted a 50-50 chance machines could take over all our jobs within a century.”
  2. Scared yet? “Nearly half – 43 per cent – of bosses polled admitted they too were worried AI could steal their job.”
  3. Ignorance is bliss: “44 per cent of global CEOs did not think their staff were ready to handle AI.”
  4. Die now? “A survey of over 2,700 AI researchers in January meanwhile suggested AI could well be ‘better and cheaper’ than humans in every profession by 2116.”

My view is that the diffusion of certain types of smart software will occur over time. If the technology proves it can cut costs and be good enough, then it will be applied where the benefits are easy to identify and monitor. When something goes off the rails, the smart software will suffer a setback. Changes will be made, and the “Let’s try again” approach will kick in. Can motivated individuals adapt? Sure. The top folks will adjust and continue to perform. The laggards will get an “Also Participated” ribbon and collect money by busking, cleaning houses, or painting houses. The good old Darwinian principles don’t change. A digital panther can kill you just as dead as a real panther.

Exciting? Not for a surviving dinobaby.

Stephen E Arnold, April 22, 2024

LinkedIn Content Ripple: Possible Wave Amplification

April 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Google continues to make headlines. This morning (April 19, 2024) I flicked through the information in my assorted newsreaders. The coverage of Google’s calling the police and having alleged non-Googley professionals chatted up by law enforcement sparked many comments. One of those comments about this most recent demonstration of management mastery was from Dr. Timnit Gebru. My understanding of the Gebru incident is that she called attention to the bias in Google’s smart software systems and methods. She wrote a paper. Big thinkers at Google did not like the paper. The paper appeared, and Dr. Gebru disappeared from the Google payroll. I have oversimplified this remarkable management maneuver, but like some of Google’s synthetic data, I think I am close enough for horseshoes.

Is change coming to a social media service which has been quite homogeneous? Thanks, MSFT Copilot. How’s the security work coming?

Dr. Gebru posted a short item on LinkedIn, which is Microsoft’s professional social media service. Here’s what Dr. Gebru made available to LinkedIn’s members:

Not even 24 hrs after making history as the first company to mass fire workers for pro-Palestine protests, by summarily firing 28 people, Google announced that the “(ir)responsible AI org,” the one they created in response to firing me, is now reporting up the Israeli office, through an SVP there. Seems like they want us to know how forcefully and clearly they are backing this genocide.

To provide context, Dr. Gebru linked to a Medium (a begging for dollars information service). That article brandished the title “STATEMENT from Google Workers with the No Tech for Apartheid Campaign on Google’s Mass, Retaliatory Firings of Workers: [sic].” This Medium article is at this link. I am not sure if [a] these stories are going to require registration or payment to view and [b] the items will remain online.

What’s interesting about the Dr. Gebru item and her link is the comments made by LinkedIn members. These suggest that [a] most LinkedIn members either did not see Dr. Gebru’s post or were not motivated to click one of the “response” icons or [b] topics like Google’s management mastery are not popular with the LinkedIn audience.

Several observations based on my experience:

  1. Dr. Gebru’s use of LinkedIn may be a one-time shot, but on the other hand, it might provide ideas for others with a specific point of view to use as a platform
  2. With Apple’s willingness to remove Meta apps from the Chinese iPhone app store, will LinkedIn follow with its own filtering of content? I don’t know the answer to the question, but clicking on Dr. Gebru’s link will make it easy to track
  3. Will LinkedIn begin to experience greater pressure to allow content not related to self-promotion and looking for business contacts? I have noticed an uptick in requests from what appear to be machine-generated images of preponderantly young females asking, “Will you be my contact?” I routinely click No, and I often add a comment along the lines of “I am 80 years old. Why do you want to interact with me?”

Net net: Change may be poised to test some of the professional social media service’s policies.

Stephen E Arnold, April 19, 2024

AI RIFing Financial Analysts (Juniors Only for Now). And Tomorrow?

April 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Bill Gates Worries AI Will Take His Job, Says, ‘Bill, Go Play Pickleball, I’ve Got Malaria Eradication’.” Mr. Gates is apparently intent on becoming a farmer. He is busy buying land. He took time out from his billionaire work today to point out that AI will nuke lots of jobs. What type of jobs will be most at risk? Amazon seems to be focused on using robots and smart software to clear out expensive, unreliable humans.

But the interesting profession facing what might be called an interesting future are financial analysts. “AI Is Coming for Wall Street: Banks Are Reportedly Weighing Cutting Analyst Hiring by Two-Thirds” asserts:

Incoming classes of junior investment-banking analysts could up being cut as much as two-thirds, some of the people suggested, while those brought on board could fetch lower salaries, on account of their work being assisted by artificial intelligence.

Okay, it is other people’s money, so no big deal if the smart software hallucinates as long as there is churn and percentage scrapes. But what happens when the “senior” analysts leave or get fired? Will smart software replace them, or is the idea that junior analysts who are “smart” will move up and add value “smart” software cannot?

Thanks, OpenAI. This is a good depiction of the “best of the best” at a major Wall Street financial institution after learning their future was elsewhere.

The article points out:

The consulting firm Accenture has an even more extreme outlook for industry disruption, forecasting that AI could end up replacing or supplementing nearly 75% of all working hours in the banking sector.

Let’s look at the financial sector’s focus on analysts. What other industrial sectors use analysts? Here are several my team and I track:

  1. Intelligence (business and military)
  2. Law enforcement
  3. Law
  4. Medical subrogation
  5. Consulting firms (niche, general, and technical)
  6. Publishing.

If the great trimming at McKinsey and the big New York banks deliver profits, how quickly will AI-anchored software and systems diffuse across organizations?

The answer to the question is, “Fast.”

Stephen E Arnold, April 19, 2024

ChatGPT’s Use Goes Up But Election Info, Not Trusted

April 19, 2024

ChatGPT was released more than a year ago, and Americans’ usage of the generative content engine keeps increasing. The Pew Research Center found that 23% of American adults have used ChatGPT, up from 18% in July 2023. While the number of people using ChatGPT continues to rise, many users are skeptical about the information it shares, particularly information related to elections. The Pew Research Center posted a press release about this topic: “Americans’ Use of ChatGPT Is Ticking Up, But Few Trust Its Election Information.”

The Pew Research Center conducted a survey in February 2024 about how Americans use ChatGPT, such as for fun, learning, or workplace tasks. The respondents said they use the AI chatbot for these activities, but they’re wary about trusting any information it spits out about the 2024 US presidential election. Four in ten adults have not too much or no trust in ChatGPT for accurate election information. Only 2% have a great deal or quite a bit of trust in the chatbot.

Pew found that younger adults (those under thirty years old) are the most likely to use ChatGPT, at 43%. That’s a ten-point increase from 2023. Other age groups are using the chatbot more, but the younger crowd remains the largest. Americans with more education are also more likely to use ChatGPT: 37% of those with postgraduate or other advanced degrees have used it.

It’s also interesting to see how Americans are using ChatGPT: for entertainment, learning, or work.

“The share of employed Americans who have used ChatGPT on the job increased from 8% in March 2023 to 20% in February 2024, including an 8-point increase since July. Turning to U.S. adults overall, about one-in-five have used ChatGPT to learn something new (17%) or for entertainment (17%). These shares have increased from about one-in-ten in March 2023. Use of ChatGPT for work, learning or entertainment has largely risen across age groups over the past year. Still, there are striking differences between these groups (those 18 to 29, 30 to 49, and 50 and older)."

When it comes to the 2024 election, 38% of Americans, or about four in ten, don’t trust ChatGPT’s information; more specifically, 18% don’t have too much trust and 20% have zero trust. The 2% outliers have a great deal or quite a bit of trust, while 10% of Americans have some trust. The other outlier groups are the 15% of Americans who aren’t sure if they should trust ChatGPT and the 34% who have never heard of the chatbot. Regardless of political party, four in ten Republicans and Democrats don’t trust ChatGPT. It’s also noteworthy that very few people have turned to ChatGPT for election information.

Tech companies have pledged to prevent AI from being misused, but talk is cheap. Chatbots and big tech are programmed to return information that will keep users’ eyes glued to screens in the same vein as clickbait. Information does need to be curated, verified, and controlled to prevent misinformation. However, there is a fine line between freedom of speech and suppression of information.

Whitney Grace, April 19, 2024

RIFed by AI? Do Not Give Hope Who Enter There

April 18, 2024

Rest assured, job seekers, it is not your imagination. Even those with impressive resumes are having trouble landing an interview, never mind a position. Case in point, Your Tango shares, “Former Google Employee Applies to 50 Jobs that He’s Overqualified For and Tracks the Alarming Number of Rejections.” Writer Nia Tipton summarizes a pair of experiments documented on TikTok by ex-Googler Jonathan Javier. He found prospective employers were not impressed with his roles at some of the biggest tech firms in the world. In fact, his years of experience may have harmed his chances: his first 50 applications were designed to see how he would fare as an overqualified candidate. Most companies either did not respond or rejected him outright. He was not surprised. Tipton writes:

“Javier explained that recruiters are seeing hundreds of applications daily. ‘For me, whenever I put a job break out, I get about 30 to 50 every single day,’ he said. ‘So again, everybody, it’s sometimes not your resume. It’s sometimes that there’s so many qualified candidates that you might just be candidate number two and number three.’”

So take heart, applicants, rejections do not necessarily mean you are not worthy. There are just not enough positions to go around. The write-up points to February numbers from the Bureau of Labor Statistics that show that, while the number of available jobs has been growing, so is the unemployment rate. Javier’s experimentation continued:

“In another TikTok video, Jonathan continued his experiment and explained that he applied to 50 jobs with two similar resumes. The first resume showed that he was overqualified, while the other showed that he was qualified. Jonathan quickly received 24 rejections for the overqualified resume, while he received 15 rejections for the qualified resume. Neither got him any interviews. Something interesting that Javier noted was how fast he was rejected with his overqualified resume. From this, he observed that overqualified candidates are often overlooked in favor of candidates that fit 100% of the qualities they are looking for. ‘That’s unfortunate because it creates a bias for people who might be older or who might have a lot more experience, but they’re trying to transition into a specific industry or a new position,’ he said.”

Ouch. It is unclear what, if anything, can be done about this specificity bias in hiring. It seems all one can do is keep trying. But, not that way.

Cynthia Murrell, April 18, 2024

Kagi Search Beat Down

April 17, 2024

This essay is the work of a dumb dinobaby. No smart software required.

People surprise me. It is difficult to craft a search engine. Sure, a recent compsci graduate will tell you, “Piece of cake.” It is not. Even with oodles of open source technology, easily gettable content, and a few valiant individuals who actually want relevant results — search and retrieval are tough to get right. The secret to good search, in my opinion, is to define a domain, preferably a technical field, identify the relevant content, obtain rights, if necessary, and then do the indexing and the other “stuff.”

In my experience, it is a good idea to have either a friend with deep pockets, a US government grant (hello, NSF, said Google decades ago), or a credit card with a hefty credit line. Failing these generally acceptable solutions, one can venture into the land of other people’s money. When that runs out or just does not work, one can become a pay-to-play outfit. We know what that business model delivers. But for a tiny percentage of online users, a subscription service makes perfect sense. The only problem is that selling subscriptions is expensive, and there is the problem of churn. Lose a customer and spend quite a bit of money replacing that individual. Lose a big customer and spend oodles and oodles of money replacing that big spender.

I read “Do Not Use Kagi.” This, in turn, directed me to “Why I Lost Faith in Kagi.” Okay, what’s up with the Kagi booing? The “Lost Faith” article runs about 4,000 words. The key passage for me is:

Between the absolute blasé attitude towards privacy, the 100% dedication to AI being the future of search, and the completely misguided use of the company’s limited funds, I honestly can’t see Kagi as something I could ever recommend to people.

I looked at Kagi when it first became available, and I wrote a short email to the “Vlad” persona. I am not sure if I followed up. I was curious about how the blend of artificial intelligence and metasearch was going to deal with such issues as:

  1. Deduplication of results
  2. Latency when a complex query in a metasearch system has to wait for a module to do its thing
  3. How the business model was going to work: expensive subscriptions, venture funding, collateral sales of the interface to law enforcement, advertising, etc.
  4. Controlling the cost of the pings, pipes, and power for the plumbing
  5. Spam control.
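
To make the first of those questions concrete, here is a minimal sketch of how a metasearch system might deduplicate results merged from several engines. This is my own illustration, not Kagi’s actual approach; the engine names and the result format are assumptions for the example.

```python
# Illustrative metasearch deduplication: normalize URLs so the same page
# returned by different engines collapses to one result. Hypothetical
# data shapes; no real search API is used.
from urllib.parse import urlsplit, urlunsplit

def normalize(url: str) -> str:
    """Reduce a URL to a canonical key so near-duplicates collide."""
    parts = urlsplit(url.lower())
    host = parts.netloc.removeprefix("www.")
    path = parts.path.rstrip("/")
    # Drop scheme differences, query strings, and fragments; tracking
    # parameters make identical pages look like distinct results.
    return urlunsplit(("https", host, path, "", ""))

def merge_results(result_lists):
    """Merge ranked lists from several engines, keeping first occurrence."""
    seen, merged = set(), []
    for results in result_lists:
        for item in results:
            key = normalize(item["url"])
            if key not in seen:
                seen.add(key)
                merged.append(item)
    return merged

bing = [{"url": "https://www.example.com/page/", "title": "Example"}]
brave = [{"url": "http://example.com/page?utm_source=x", "title": "Example"}]
print(merge_results([bing, brave]))  # one result, not two
```

The normalization step is where a metasearch engine earns its keep: trailing slashes, www prefixes, and tracker parameters make one page look like three different hits, and getting the canonical form wrong either drops distinct results or shows duplicates.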

I know from experience that those dabbling in the search game ignore some of my routine questions. The reasons range from “we are smarter than you” to “our approach just handles these issues.”

Thanks, MSFT Copilot. Recognize anyone in the image you created?

I still struggle with the business model of non-ad supported search and retrieval systems. Subscriptions work. Well, they worked out of the gate for ChatGPT, but how many smart search systems do I want to join? Answer: Zero.

Metasearch systems are simply sucker fish on the shark bodies of a Web search operator. Bing is in the metasearch game because it is a fraction of the Googzilla operation. It is doing what it can to boost its user base. Just look at the wonky Edge ads and the rumored minuscule gain the addition of smart search has delivered to Bing traffic. Poor Yandex is relocating and finds itself in a different world from the cheerful environment of Russia.

Web content indexing is expensive, difficult, and tricky.

But why pick on Kagi? Beats me. Why not write about dogpile.com, ask.com, the duck thing, or startpage.com (formerly ixquick.com)? Each embodies a certain subsonic vibe, right?

Maybe it is the AI flavor of Kagi? Maybe it is the amateur hour approach taken with some functions? Maybe it is just a disconnect between an informed user and an entrepreneurial outfit running a mile a minute with a sign that says, “Subscribe”?

I don’t know, but when Web search is essentially a massive disappointment, it is interesting that some bright GenX’er has not figured out a solution.

Stephen E Arnold, April 17, 2024

Meta: Innovating via Intentions

April 17, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Analytics India published “Meta Releases AI on WhatsApp, Looks Like Perplexity AI.” The headline caught my attention. I don’t pay much attention to the Zuckbook and the other Meta properties. The Analytics India story made this statement which caught my attention:

What users type in the search bar remains confidential and is not shared with Meta AI unless users intentionally send a query to the Meta AI chatbot.

I am okay with copying from Silicon Valley type outfits. That’s part of the game, which includes colors, shuffling staff, and providing jibber jabber instead of useful interfaces and documentation about policies. But think about the statement: “unless users intentionally send a query to the Meta AI chatbot.” Doesn’t that mean we don’t keep track of queries unless a user sends a query to the Zuckbook’s smart software? I love the “intention” because the user is making a choice between a search function which one of my team told me is not very useful and a “new” search system which will be better. If it is better, then user queries get piped into a smart search system for which the documentation is sparse. What happens to those data? How will those data be monetized? Will the data be shared with those who have a business relationship with Meta?

Thanks, MSFT Copilot. Good enough, but that’s what one might say about MSFT security, right?

So many questions.

The article states:

Users can still search their conversations for specific content without interacting with Meta AI, maintaining the same level of ease and privacy as before. Additionally, personal messages and calls remain end-to-end encrypted, ensuring neither WhatsApp nor Meta can access them, even with the Meta AI integration.

There is no substantiation of this assertion. Indeed, since the testimony of Frances Haugen, I am not certain what Meta does, and I am not willing to accept assertions about what is accessible to the firm’s employees and what is not. What about the metadata? Is that part of the chunk of data Meta cannot access?

Facebook, WhatsApp, and Instagram are interesting services. The information in the Meta services appears to me to be quite useful for a number of endeavors. Academic research groups are less helpful than they could be. Some have found data cut off or filtered. Imitating another AI outfit’s graphic design is the lowest on my list of Meta issues.

The company is profitable. It has considerable impact. The firm has oodles of data. But now a user’s intention gives permission to an interesting outfit to do whatever with that information. Unsettling? Nope, just part of the unregulated world of digital operations which some assert are having a somewhat negative impact on society. Yep, intentionally.

Stephen E Arnold, April 17, 2024
