The Evolution of Study Notes: From Lazy to Downright Slothful
April 22, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Study guides, CliffsNotes, movie versions, comic books, and bribing elder siblings or past students for their old homework and class notes were how kids used to work their way through classes. Then came the Internet, and over the years innovative people have perfected the study guide. Some have even built successful businesses from study guides for literature, science, math, foreign language, writing, history, and more.
The quality of these study guides ranges from poor to fantastic. PinkMonkey.com is one of the average study guide websites. It offers some book guides free, while others sit behind a paywall. There are also educational tips for different grades and advice for college applications. The information is a little dated, but when it is combined with other educational and homework help websites, it still has its uses.
PinkMonkey.com describes itself as:
“…a "G" rated study resource for junior high, high school, college students, teachers and home schoolers. What does PinkMonkey offer you? The World’s largest library of free online Literature Summaries, with over 460 Study Guides / Book Notes / Chapter Summaries online currently, and so much more. No more trips to the book store; no more fruitless searching for a booknote that no one ever has in stock! You’ll find it all here, online 24/7!”
YouTube, TikTok, and other platforms are also 24/7, and they are increasingly powered by AI. It won’t be long before AI is condensing these guides and turning them into consumable videos. There are already channels that make study guides, but homework still requires more than an AI answer.
ChatGPT and other generative AI algorithms are getting smarter by being trained on datasets pulled from the Internet. These datasets include books, videos, and more. In the future, students will rely on study guides in video format. The question to ask is: what will they look like? Will they summarize an entire book in fifteen seconds, take it chapter by chapter, or make movies powered by AI?
Whitney Grace, April 22, 2024
LinkedIn Content Ripple: Possible Wave Amplification
April 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Google continues to make headlines. This morning (April 19, 2024) I flicked through the information in my assorted newsreaders. The coverage of Google’s calling the police and having alleged non-Googley professionals chatted up by law enforcement sparked many comments. One of those comments about this most recent demonstration of management mastery was from Dr. Timnit Gebru. My understanding of the Gebru incident is that she called attention to the bias in Google’s smart software systems and methods. She wrote a paper. Big thinkers at Google did not like the paper. The paper appeared, and Dr. Gebru disappeared from the Google payroll. I have oversimplified this remarkable management maneuver, but like some of Google’s synthetic data, I think I am close enough for horseshoes.
Is change coming to a social media service which has been quite homogeneous? Thanks, MSFT Copilot. How’s the security work coming?
Dr. Gebru posted a short item on LinkedIn, which is Microsoft’s professional social media service. Here’s what Dr. Gebru made available to LinkedIn’s members:
Not even 24 hrs after making history as the first company to mass fire workers for pro-Palestine protests, by summarily firing 28 people, Google announced that the “(ir)responsible AI org,” the one they created in response to firing me, is now reporting up the Israeli office, through an SVP there. Seems like they want us to know how forcefully and clearly they are backing this genocide.
To provide context, Dr. Gebru linked to a Medium (a begging for dollars information service). That article brandished the title “STATEMENT from Google Workers with the No Tech for Apartheid Campaign on Google’s Mass, Retaliatory Firings of Workers: [sic].” This Medium article is at this link. I am not sure if [a] these stories are going to require registration or payment to view and [b] the items will remain online.
What’s interesting about the Dr. Gebru item and her link is the comments made by LinkedIn members. These suggest that [a] most LinkedIn members either did not see Dr. Gebru’s post or were not motivated to click one of the “response” icons or [b] topics like Google’s management mastery are not popular with the LinkedIn audience.
Several observations based on my experience:
- Dr. Gebru’s use of LinkedIn may be a one-time shot; on the other hand, it might give others with a specific point of view the idea of using the service as a platform
- With Apple’s willingness to remove Meta apps from the Chinese iPhone app store, will LinkedIn follow with its own filtering of content? I don’t know the answer to the question, but clicking on Dr. Gebru’s link will make it easy to track
- Will LinkedIn begin to experience greater pressure to allow content not related to self-promotion and the hunt for business contacts? I have noticed an uptick in requests from what appear to be machine-generated images of preponderantly young females asking, “Will you be my contact?” I routinely click No, and I often add a comment along the lines of “I am 80 years old. Why do you want to interact with me?”
Net net: Change may be poised to test some of the professional social media service’s policies.
Stephen E Arnold, April 19, 2024
AI RIFing Financial Analysts (Juniors Only for Now). And Tomorrow?
April 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Bill Gates Worries AI Will Take His Job, Says, ‘Bill, Go Play Pickleball, I’ve Got Malaria Eradication’.” Mr. Gates is apparently set on becoming a farmer. He is busy buying land. He took time out from his billionaire work today to point out that AI will nuke lots of jobs. What type of jobs will be most at risk? Amazon seems to be focused on using robots and smart software to clear out expensive, unreliable humans.
But the profession facing what might be called an interesting future is the financial analyst. “AI Is Coming for Wall Street: Banks Are Reportedly Weighing Cutting Analyst Hiring by Two-Thirds” asserts:
Incoming classes of junior investment-banking analysts could end up being cut as much as two-thirds, some of the people suggested, while those brought on board could fetch lower salaries, on account of their work being assisted by artificial intelligence.
Okay, it is other people’s money, so no big deal if the smart software hallucinates as long as there is churn and percentage scrapes. But what happens when the “senior” analysts leave or get fired? Will smart software replace them, or is the idea that junior analysts who are “smart” will move up and add value “smart” software cannot?
Thanks, OpenAI. This is a good depiction of the “best of the best” at a major Wall Street financial institution after learning their future was elsewhere.
The article points out:
The consulting firm Accenture has an even more extreme outlook for industry disruption, forecasting that AI could end up replacing or supplementing nearly 75% of all working hours in the banking sector.
Let’s look at the financial sector’s focus on analysts. What other industrial sectors use analysts? Here are several that my team and I track:
- Intelligence (business and military)
- Law enforcement
- Law
- Medical subrogation
- Consulting firms (niche, general, and technical)
- Publishing.
If the great trimming at McKinsey and the big New York banks deliver profits, how quickly will AI-anchored software and systems diffuse across organizations?
The answer to the question is, “Fast.”
Stephen E Arnold, April 19, 2024
ChatGPT Use Goes Up, But Election Info Not Trusted
April 19, 2024
ChatGPT was released more than a year ago, and Americans’ usage of the generative content engine keeps increasing. The Pew Research Center found that 23% of American adults have used ChatGPT, up from 18% in July 2023. While the number of people using ChatGPT continues to rise, many users are skeptical about the information it shares, particularly related to elections. The Pew Research Center posted a press release about this topic: “Americans’ Use of ChatGPT Is Ticking Up, But Few Trust Its Election Information.”
The Pew Research Center conducted a survey in February 2024 about how Americans use ChatGPT, such as for fun, learning, or workplace tasks. The respondents said they use the AI chatbot for these activities, but they are wary about trusting any information it spits out about the 2024 US presidential election. Four in ten adults have little or no trust in ChatGPT for accurate election information. Only 2% have a great deal or quite a bit of trust in the chatbot.
Pew found that younger adults (those under thirty years old) are the most likely to use ChatGPT: 43% have done so, a ten-point increase from 2023. Other age groups are using the chatbot more, but the younger crowd remains the largest. Americans with more education are also more likely to use ChatGPT: 37% of those with postgraduate or other advanced degrees have used it.
It’s also interesting to see how Americans are using ChatGPT: for entertainment, learning, or work.
“The share of employed Americans who have used ChatGPT on the job increased from 8% in March 2023 to 20% in February 2024, including an 8-point increase since July. Turning to U.S. adults overall, about one-in-five have used ChatGPT to learn something new (17%) or for entertainment (17%). These shares have increased from about one-in-ten in March 2023. Use of ChatGPT for work, learning or entertainment has largely risen across age groups over the past year. Still, there are striking differences between these groups (those 18 to 29, 30 to 49, and 50 and older).”
When it comes to the 2024 election, 38%, or about four in ten Americans, do not trust ChatGPT information: 18% have not much trust and 20% have none at all. The 2% outliers have a great deal or quite a bit of trust, while 10% of Americans have some trust. The remaining groups are the 15% of Americans who are not sure whether they should trust ChatGPT and the 34% who have never heard of the chatbot. Regardless of political party, four in ten Republicans and Democrats do not trust ChatGPT. It is also noteworthy that very few people have turned to ChatGPT for election information.
Tech companies have pledged to prevent AI from being misused, but talk is cheap. Chatbots and big tech are programmed to return information that keeps users’ eyes glued to screens, in the same vein as clickbait. Information does need to be curated, verified, and controlled to prevent misinformation. However, that walks a fine line between freedom of speech and suppression of information.
Whitney Grace, April 19, 2024
Google Gem: Arresting People Management
April 18, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I have worked for some well-managed outfits: Halliburton, Booz Allen, Ziff Communications, and others in my 55-year career. The idea that employees at Halliburton Nuclear (my assignment) would occupy the offices of a senior officer like Eugene Saltarelli was inconceivable. (Mr. Saltarelli sported a facial scar. When asked about the disfigurement, he would stare at the interlocutor and ask, “What scar?” Do you want to “take over” his office?) Another of my superiors at a firm in New York had a special method of shaping employee behavior. This professional did nothing to suppress rumors that two of his wives drowned during “storms” after falling off his sailboat. Did I entertain taking over his many-windowed office in Manhattan? Answer: Are you sure you internalized the anecdote?
Another Google management gem glitters in the public spot light.
But at the Google, life seems to be different, maybe a little more frisky absent psychological behavior controls. I read “Nine Google Workers Get Arrested After Sit-In Protest over $1.2B Cloud Deal with Israel.” The main idea seems to be that someone at Google sold cloud services to the Israeli government. Employees apparently viewed the contract as bad, wrong, stupid, or some combination of attributes. The fix involved a 1960s-style sit-in. After a period of time elapsed, someone at Google called the police. The employee-protesters were arrested.
I recall hearing years ago that Google faced a similar pushback about a contract with the US government. To be honest, Google has generated so many human resource moments, I have a tough time recalling each. A few are Mt. Everests of excellence; for example, the termination of Dr. Timnit Gebru. This Googler had the nerve to question the bias of Google’s smart software. She departed. I assume she enjoyed the images of biased signers of documents related to America’s independence and multi-ethnic soldiers in the World War II German army. Bias? Google thinks not, I guess.
The protest occurs as the Google tries to cope with increased market pressure and the tough-to-control costs of smart software. The quick fix is to nuke or RIF employees. “Google Lays Off Workers As Part of Pretty Large-Scale Restructuring” reports, citing Business Insider:
Ruth Porat, Google’s chief financial officer, sent an email to employees announcing that the company would create “growth hubs” in India, Mexico and Ireland. The unspecified number of layoffs will affect teams in the company’s finance department, including its treasury, business services and revenue cash operations units
That looks like off-shoring to me. The idea was a cookie-cutter solution spun up by blue-chip consulting companies 20, maybe 30 years ago. On paper, the math is more enticing than a new Land Rover and about as reliable. A state-side worker costs X, fully loaded with G&A, benefits, etc. An off-shore worker costs X minus Y. If the delta means cost savings, go for it. What’s not to like?
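The consultants’ spreadsheet logic fits in a few lines. Here is a minimal sketch of that math; every figure below is hypothetical, and the point is the delta, not the numbers:

```python
# Back-of-the-envelope offshoring math. All figures are hypothetical.
def fully_loaded_cost(base_salary: float, overhead_rate: float) -> float:
    """Salary plus G&A, benefits, and other overhead."""
    return base_salary * (1 + overhead_rate)

stateside = fully_loaded_cost(base_salary=120_000, overhead_rate=0.45)  # X
offshore = fully_loaded_cost(base_salary=40_000, overhead_rate=0.60)    # X minus Y
delta = stateside - offshore

print(f"Stateside: ${stateside:,.0f}  Offshore: ${offshore:,.0f}  Delta: ${delta:,.0f}")
# The spreadsheet does not model rework, time-zone friction, or attrition,
# which is why the math is about as reliable as that new Land Rover.
```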
According to a source cited in the New York Post:
“As we’ve said, we’re responsibly investing in our company’s biggest priorities and the significant opportunities ahead… To best position us for these opportunities, throughout the second half of 2023 and into 2024, a number of our teams made changes to become more efficient and work better, remove layers and align their resources to their biggest product priorities.”
Yep, align. That senior management team has a way with words.
Will those who are in fear of their jobs join in the increasingly routine Google employee protests? Will disgruntled staff sandbag products and code? Will those who are terminated write tell-alls about their experiences at an outfit operating under Code Red for more than a year?
Several observations:
- Microsoft’s quite effective push of its AI products and services continues. In certain key markets like New York City and the US government, Google is on the defensive. Hint: Microsoft has the advantage, and the Google is struggling to catch up.
- Google’s management of its personnel seems to create the wrong type of news. Example: Staff arrests. Is that part of Peter Drucker’s management advice?
- The Google leadership team appears to lack the ability to do its job in a quiet, effective, positive, and measured way.
Net net: The online ad money machine keeps running. But if the investigations into Google’s business practices get traction, Google will have additional challenges to face. The Sundar & Prabhakar Comedy team should make a TikTok-type, how-to video about human resource management. I would prefer a short video about the origin story for the online advertising method which allowed Google to become a fascinating outfit.
Stephen E Arnold, April 18, 2024
Will Google Fix Up On-the-Blink Israeli Intelligence Capability?
April 18, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Voyager Labs’ “value” may be slipping. The poster child for unwanted specialized software publicity (NSO Group) finds itself the focal point of some legal eagles. The specialized software systems that monitor, detect, and alert — quite frankly — seemed to be distracted before and during the October 2023 attack. What’s happening to Israel’s advanced intelligence capabilities with its secret units, mustered-out wizards creating intelligence solutions, and doing the Madison Avenue thing at conferences? What’s happening is that the hyperbole seems to be a bit more advanced than some of the systems themselves.
Government leaders and military intelligence professionals listen raptly as the young wizard explains how the online advertising company can shore up a country’s intelligence capabilities. Thanks, MidJourney. You are good enough, and the modified free MSFT Copilot is not.
What’s the fix? Let me share one wild idea with you: Let Google do it. Time (once the stablemate of the AI-road kill Sports Illustrated) published this write up with this title:
Exclusive: Google Contract Shows Deal With Israel Defense Ministry
The write up says:
Google provides cloud computing services to the Israeli Ministry of Defense, and the tech giant has negotiated deepening its partnership during Israel’s war in Gaza, a company document viewed by TIME shows. The Israeli Ministry of Defense, according to the document, has its own “landing zone” into Google Cloud—a secure entry point to Google-provided computing infrastructure, which would allow the ministry to store and process data, and access AI services. [The wonky capitalization is part of the style manual I assume. Nice, shouting with capital letters.]
The article then includes this paragraph:
Google recently described its work for the Israeli government as largely for civilian purposes. “We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education,” a Google spokesperson told TIME for a story published on April 8. “Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”
Does this mean that Google shaped or weaponized information about the work with Israel? Probably not: The intent strikes me as similar to the “Senator, thank you for the question” lingo offered at some US government hearings. That’s just the truth poorly understood by those who are not Googley.
I am not sure if the Time story has its “real” news lens in focus, but let’s look at this interesting statement:
The news comes after recent reports in the Israeli media have alleged the country’s military, controlled by the Ministry of Defense, is using an AI-powered system to select targets for air-strikes on Gaza. Such an AI system would likely require cloud computing infrastructure to function. The Google contract seen by TIME does not specify for what military applications, if any, the Ministry of Defense uses Google Cloud, and there is no evidence Google Cloud technology is being used for targeting purposes. But Google employees who spoke with TIME said the company has little ability to monitor what customers, especially sovereign nations like Israel, are doing on its cloud infrastructure.
The online story included an allegedly “real” photograph of a bunch of people who were allegedly unhappy with the Google deal with Israel. Google does have a cohort of wizards who seem to enjoy protesting Google’s work with a nation state. Are Google’s managers okay with this type of activity? Seems like it.
Net net: I think the core issue is that some of the Israeli intelligence capability is sputtering. Will Google fix it up? Sure, if one believes the intelware brochures and PowerPoints on display at specialized intelligence conferences, why not perceive Google as just what the country needs after the attack and amidst increasing tensions with other nation states not too far from Tel Aviv? Belief is good. Madison Avenue thinking is good. Cloud services are good. Failure is not just bad; it could mean zero warning for another action against Israel. Do brochures about intelware stop bullets and missiles?
Stephen E Arnold, April 18, 2024
RIFed by AI? Do Not Give Hope Who Enter There
April 18, 2024
Rest assured, job seekers, it is not your imagination. Even those with impressive resumes are having trouble landing an interview, never mind a position. Case in point, Your Tango shares, “Former Google Employee Applies to 50 Jobs that He’s Overqualified For and Tracks the Alarming Number of Rejections.” Writer Nia Tipton summarizes a pair of experiments documented on TikTok by ex-Googler Jonathan Javier. He found prospective employers were not impressed with his roles at some of the biggest tech firms in the world. In fact, his years of experience may have harmed his chances: his first 50 applications were designed to see how he would fare as an overqualified candidate. Most companies either did not respond or rejected him outright. He was not surprised. Tipton writes:
“Javier explained that recruiters are seeing hundreds of applications daily. ‘For me, whenever I put a job break out, I get about 30 to 50 every single day,’ he said. ‘So again, everybody, it’s sometimes not your resume. It’s sometimes that there’s so many qualified candidates that you might just be candidate number two and number three.’”
So take heart, applicants: rejections do not necessarily mean you are not worthy. There are just not enough positions to go around. The write-up points to February numbers from the Bureau of Labor Statistics that show that, while the number of available jobs has been growing, so has the unemployment rate. Javier’s experimentation continued:
“In another TikTok video, Jonathan continued his experiment and explained that he applied to 50 jobs with two similar resumes. The first resume showed that he was overqualified, while the other showed that he was qualified. Jonathan quickly received 24 rejections for the overqualified resume, while he received 15 rejections for the qualified resume. Neither got him any interviews. Something interesting that Javier noted was how fast he was rejected with his overqualified resume. From this, he observed that overqualified candidates are often overlooked in favor of candidates that fit 100% of the qualities they are looking for. ‘That’s unfortunate because it creates a bias for people who might be older or who might have a lot more experience, but they’re trying to transition into a specific industry or a new position,’ he said.”
Ouch. It is unclear what, if anything, can be done about this specificity bias in hiring. It seems all one can do is keep trying. But, not that way.
Cynthia Murrell, April 18, 2024
Kagi Search Beat Down
April 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
People surprise me. It is difficult to craft a search engine. Sure, a recent compsci graduate will tell you, “Piece of cake.” It is not. Even with oodles of open source technology, easily gettable content, and a few valiant individuals who actually want relevant results — search and retrieval are tough to get right. The secret to good search, in my opinion, is to define a domain, preferably a technical field, identify the relevant content, obtain rights, if necessary, and then do the indexing and the other “stuff.”
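For what it is worth, here is a minimal sketch of the “indexing and the other ‘stuff’” step for a domain-scoped corpus. The tokenizer, the two sample documents, and the AND-only query logic are illustrative assumptions, not anyone’s production system:

```python
# Toy inverted index for a small, domain-scoped corpus (illustrative only).
from collections import defaultdict

def tokenize(text: str) -> list[str]:
    # Crude tokenizer: lowercase, split on whitespace, keep alphanumeric tokens.
    return [t for t in text.lower().split() if t.isalnum()]

def build_index(docs: dict[str, str]) -> dict[str, set[str]]:
    # Map each term to the set of documents containing it.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in tokenize(text):
            index[token].add(doc_id)
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    # AND semantics: a document must contain every query term.
    terms = tokenize(query)
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

docs = {
    "doc1": "nuclear reactor cooling systems",
    "doc2": "reactor containment design",
}
index = build_index(docs)
print(search(index, "reactor cooling"))  # {'doc1'}
```

Real systems bolt ranking, stemming, and continuous updating onto this skeleton, which is where the difficulty, and the money, go.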
In my experience, it is a good idea to have either a friend with deep pockets, a US government grant (hello, NSF, said Google decades ago), or a credit card with a hefty credit line. Failing these generally acceptable solutions, one can venture into the land of other people’s money. When that runs out or just does not work, one can become a pay-to-play outfit. We know what that business model delivers. But for a tiny percentage of online users, a subscription service makes perfect sense. The only problem is that selling subscriptions is expensive, and there is the problem of churn. Lose a customer, and spend quite a bit of money replacing that individual. Lose a big customer, and spend oodles and oodles of money replacing that big spender.
I read “Do Not Use Kagi.” This, in turn, directed me to “Why I Lost Faith in Kagi.” Okay, what’s up with the Kagi booing? The “Lost Faith” article runs about 4,000 words. The key passage for me is:
Between the absolute blasé attitude towards privacy, the 100% dedication to AI being the future of search, and the completely misguided use of the company’s limited funds, I honestly can’t see Kagi as something I could ever recommend to people.
I looked at Kagi when it first became available, and I wrote a short email to the “Vlad” persona. I am not sure if I followed up. I was curious about how the blend of artificial intelligence and metasearch was going to deal with such issues as:
- Deduplication of results (a sketch of one common approach appears after this list)
- Latency when a complex query in a metasearch system has to wait for a module to do its thing
- How the business model was going to work: expensive subscriptions, venture funding, collateral sales of the interface to law enforcement, advertising, etc.
- Controlling the cost of the pings, pipes, and power for the plumbing
- Spam control.
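On the first of those points, deduplication, here is a minimal sketch of one common approach: normalize URLs, then collapse near-identical titles. The result fields, the 0.9 threshold, and the sample data are my assumptions, not Kagi’s method:

```python
# Illustrative metasearch result de-duplication (not Kagi's actual method).
from difflib import SequenceMatcher
from urllib.parse import urlparse

def normalize_url(url: str) -> str:
    # Drop scheme, "www.", query strings, fragments, and trailing slashes.
    p = urlparse(url.lower())
    return p.netloc.removeprefix("www.") + p.path.rstrip("/")

def dedupe(results: list[dict]) -> list[dict]:
    kept: list[dict] = []
    seen_urls: set[str] = set()
    for r in results:
        key = normalize_url(r["url"])
        if key in seen_urls:
            continue  # exact mirror of a result already kept
        # Fuzzy title match catches syndicated copies at different addresses.
        if any(SequenceMatcher(None, r["title"].lower(),
                               k["title"].lower()).ratio() > 0.9
               for k in kept):
            continue
        seen_urls.add(key)
        kept.append(r)
    return kept

results = [
    {"title": "Kagi Search Review", "url": "https://example.com/kagi/"},
    {"title": "Kagi search review", "url": "http://www.example.com/kagi"},
]
print(len(dedupe(results)))  # 1
```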
I know from experience that those dabbling in the search game ignore some of my routine questions. The reasons range from “we are smarter than you” to “our approach just handles these issues.”
Thanks, MSFT Copilot. Recognize anyone in the image you created?
I still struggle with the business model of non-ad supported search and retrieval systems. Subscriptions work. Well, they worked out of the gate for ChatGPT, but how many smart search systems do I want to join? Answer: Zero.
Metasearch systems are simply sucker fish on the shark bodies of a Web search operator. Bing is in the metasearch game because it is a fraction of the Googzilla operation. It is doing what it can to boost its user base. Just look at the wonky Edge ads and the rumored minuscule gain the addition of smart search has delivered to Bing traffic. Poor Yandex is relocating and finds itself in a different world from the cheerful environment of Russia.
Web content indexing is expensive, difficult, and tricky.
But why pick on Kagi? Beats me. Why not write about dogpile.com, ask.com, the duck thing, or startpage.com (formerly ixquick.com)? Each embodies a certain subsonic vibe, right?
Maybe it is the AI flavor of Kagi? Maybe it is the amateur hour approach taken with some functions? Maybe it is just a disconnect between an informed user and an entrepreneurial outfit running a mile a minute with a sign that says, “Subscribe”?
I don’t know, but it is interesting that, with Web search essentially a massive disappointment, some bright GenX’er has not figured out a solution.
Stephen E Arnold, April 17, 2024
The National Public Radio Entity Emulates Grandma
April 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I can hear my grandmother telling my cousin Larry: “Chew your food. Or… no television for you tonight.” The time was 6:30 pm. The date was March 3, 1956. My cousin and I were being “watched” while our parents were at a political rally and banquet. Grandmother was in charge, and my cousin was edging close to being sent to grandfather for a whack with his wooden paddle. Tough love, I suppose. I was a good boy. I chewed my food and worked to avoid the Wrath of Ma. I did the time travel thing when I read “NPR Suspends Veteran Editor As It Grapples with His Public Criticism.” I avoid begging-for-dollars outfits, so I had no idea what the issue is or was.
“Gea’t haspoy” which means in grandmother speak: “That’s it. No TV for you tonight. In the morning, both of you are going to help Grandpa mow the yard and rake up the grass.” Thanks, NPR. Oh, sorry, thanks MSFT Copilot. You do the censorship thing too, don’t you?
The write up explains:
NPR has formally punished Uri Berliner, the senior editor who publicly argued a week ago that the network had “lost America’s trust” by approaching news stories with a rigidly progressive mindset.
Oh, I get it. NPR allegedly shapes stories. A “real” journalist does not go along with the program. The progressive-leaning outfit ignores the free speech angle. The “real” journalist is punished with five days in a virtual hoosegow. An NPR “real” journalist published an essay critical of NPR and then vented on a podcast.
The article I have cited is an NPR article. I guess self-criticism is a progressive trait, maybe? Anyway, the article about the grandma action stated:
In rebuking Berliner, NPR said he had also publicly released proprietary information about audience demographics, which it considers confidential. He said those figures “were essentially marketing material. If they had been really good, they probably would have distributed them and sent them out to the world.”
There is no hint that this “real” journalist shares beliefs believed to be held by Julian Assange or that bold soul Edward Snowden, both of whom have danced with super interesting information.
Several observations:
- NPR’s suspending an employee reminds me of my grandmother punishing us for not following her wacky rules
- NPR is definitely implementing a type of information shaping; if it were not, what’s the big deal about a grousing employee? How many of these does Google have protesting in a year?
- Banning a person who is expressing an opinion strikes me as a tasty blend of X.com and that master motivator Joe Stalin. But that’s just my dinobaby mind having a walk-about.
Net net: What media are not censoring, muddling, and acting like grandma?
Stephen E Arnold, April 17, 2024
Meta: Innovating via Intentions
April 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Analytics India published “Meta Releases AI on WhatsApp, Looks Like Perplexity AI.” The headline caught my attention. I don’t pay much attention to the Zuckbook and the other Meta properties. The Analytics India story made this statement which caught my attention:
What users type in the search bar remains confidential and is not shared with Meta AI unless users intentionally send a query to the Meta AI chatbot.
I am okay with copying from Silicon Valley-type outfits. That’s part of the game, which includes colors, shuffling staff, and providing jibber jabber instead of useful interfaces and documentation about policies. But think about the statement: “unless users intentionally send a query to the Meta AI chatbot.” Doesn’t that mean “we don’t keep track of queries unless a user sends a query to the Zuckbook’s smart software”? I love the “intention” angle because the user is making a choice between a search function, which one of my team told me is not very useful, and a “new” search system which will be better. If it is better, then user queries get piped into a smart search system for which the documentation is sparse. What happens to those data? How will those data be monetized? Will the data be shared with those who have a business relationship with Meta?
Thanks, MSFT Copilot. Good enough, but that’s what one might say about MSFT security, right?
So many questions.
The article states:
Users can still search their conversations for specific content without interacting with Meta AI, maintaining the same level of ease and privacy as before. Additionally, personal messages and calls remain end-to-end encrypted, ensuring neither WhatsApp nor Meta can access them, even with the Meta AI integration.
There is no substantiation of this assertion. Indeed, since the testimony of Frances Haugen, I am not certain what Meta does, and I am not willing to accept assertions about what is accessible to the firm’s employees and what is not. What about the metadata? Is that part of the chunk of data Meta cannot access?
Facebook, WhatsApp, and Instagram are interesting services. The information in the Meta services appears to me to be quite useful for a number of endeavors. Meta is less helpful to academic research groups than it could be. Some have found data cut off or filtered. Imitating another AI outfit’s graphic design is the lowest item on my list of Meta issues.
The company is profitable. It has considerable impact. The firm has oodles of data. But now a user’s intention gives permission to an interesting outfit to do whatever with that information. Unsettling? Nope, just part of the unregulated world of digital operations which some assert are having a somewhat negative impact on society. Yep, intentionally.
Stephen E Arnold, April 17, 2024