News Flash about SEO: Just 20 Years Too Late but, Hey, Who Pays Attention?

June 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an article which would have been news a couple of decades ago. But I am a dinobaby (please see the anigif bouncing in an annoying manner), and I am hopelessly out of touch with what “real news” is.


An entrepreneur who has just learned that in order to get traffic to her business Web site, she will have to spend big bucks, do search engine optimization, make YouTube videos (long and short), and follow Google’s implicit and explicit rules. Sad. An MBA, I believe. The Moping Mistress of the Universe is a construct generated by the ever-innovative MidJourney and its delightful Discord interface.

The write up catching my attention is — hang on to your latte — “A Storefront for Robots: The SEO Arms Race Has Left Google and the Web Drowning in Garbage Text, with Customers and Businesses Flailing to Find Each Other.” I wondered if the word “flailing” is a typographic error or misspelling of “failing.” Failing strikes me as a more applicable word.

The thesis of the write up is that the destruction of precision and recall (the yardsticks of relevant online search and retrieval) is not part of the Google game plan.

The write up asserts:

The result is SEO chum produced at scale, faster and cheaper than ever before. The internet looks the way it does largely to feed an ever-changing, opaque Google Search algorithm. Now, as the company itself builds AI search bots, the business as it stands is poised to eat itself.

Ah, ha. Garbage in, garbage out! Brilliant. The write up is about 4,000 words and makes clear that ecommerce requires generating baloney for Google.

To sum up, if you want traffic, do search engine optimization. The problem with the write up is that it is incorrect.

Let me explain. Navigate to “Google Earned $10 Million by Allowing Misleading Anti-Abortion Ads from Fake Clinics, Report Says.” What’s the point of this report? The answer is, “Google ads.” And money from a controversial group of supporters and detractors. Yes! An arms race of advertising.

Of course, SEO won’t work. Why would it? Google’s business is selling advertising. If you don’t believe me, just go to a conference, find any Googler — including those wearing “Ivory Tower Worker” pins — and ask, “How important is Google’s ad business?” But you know what most Googlers will say, don’t you?

For decades, Google has cultivated the SEO ploy for one reason: failed SEO campaigns end up in one place, “Google Advertising.”

Why?

If you want traffic, like the abortion ad buyers, pony up the cash. The Google will punch the Pay to Play button, and traffic results. One change kicked in after 2006. And what was that change? The mom-and-pop ad buyers were no longer as important as the “brand” advertisers. Small advertisers were left to the SEO experts, who could then sell “small” ad campaigns once the hapless client learned that no one on the planet could locate the financial advisory firm named “Financial Specialist Advisors.” Ah, then there was Google Local, a Googley spin on the Yellow Pages. And there have been other innovations to make it possible for advertisers of any size to get traffic, though not much, because small advertisers spend small money. But ad dollars are what keep Googzilla alive.

Net net: Keep in mind that Google wants to be the Internet. (AMP that up, folks.) Google wants people to trust the friendly beastie. The Googzilla is into responsibility. The Google is truth, justice, and the digital way. Is the criticism of the Google warranted? Sure, constructive criticism is a positive for some. The problem I have is that it is 20 years too late. Who cares? The EU seems to have an interest.

Stephen E Arnold, June 21, 2023

Many Regulators, Many Countries Cannot Figure Out How to Regulate AI

June 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

American and European technology and trade leaders met in Sweden at the beginning of June for the Trade and Tech Council (TTC) summit to discuss their sector’s future. One of the main talking points was how to control AI. The one thing all the leaders agreed on was that they could not agree on anything. Politico tells the story in “The Struggle To Control AI.”

The main AI topic international leaders discussed was generative AI, such as Google’s Bard and ChatGPT from OpenAI, and its influence on humanity. The potential for generative AI is limitless, but there are worries that it poses threats to global security and could ruin the job market. The leaders want to prove to the world that democratic governments can advance as quickly as the technology does.


A group of regulators discuss regulating AI. The regulators are enjoying a largely unregulated lunch of fast food stuffed with chemicals. Some of these have interesting consequences. One regulator says, “Pass the salt.” Another says, “What about AI and ML?” A third says, “Are those toppings?” The scene was generated by the copyright maven MidJourney.

Leaders from Europe and the United States are anxious to make laws that regulate how AI works in conjunction with society. The TTC’s goal is to develop non-binding standards about AI transparency, risk audits, and technical details. The non-binding standards would police AI so it does not destroy humanity and the planet. The plan is to present the standards at the G7 in Fall 2023.

Europe and the United States need to agree on the standards, except they do not, and that leaves room for China to promote its authoritarian version of AI. The European Union has written the majority of the digital rulebook that Western societies follow. The US has other ideas:

“The U.S., on the other hand, prefers a more hands-off approach, relying on industry to come up with its own safeguards. Ongoing political divisions within Congress make it unlikely any AI-specific legislation will be passed before next year’s U.S. election. The Biden administration has made international collaboration on AI a policy priority, especially because a majority of the leading AI companies like Google, Microsoft and OpenAI, are headquartered in the U.S. For Washington, helping these companies compete against China’s rivals is also a national security priority.”

The European Union wants to do things one way; the United States has other ideas. It is all about talking heads speaking legalese mixed with ethics, while China pushes its own agenda.

Whitney Grace, June 21, 2023

Facebook: Alleged Management Methods to Improve the Firm

June 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I recall learning that some employees have been locked in their offices. The idea was that incarceration would improve productivity and reduce costs. Sounds like an adolescent or high school science club management method. I spotted a couple of other examples of 2023’s contributions to modern management principles. My sources, of course, are online, and I believe everything I read online. I try to emulate ChatGPT type systems because those constructs are just wonderful.

I have no idea if the information in the two articles I will cite is on the money. Just reading them made me giddy with newfound knowledge. I did not think of implementing these management tactics when I worked at an old-fashioned, eat-your-meat-raw company.


MidJourney captures the essence of modern management brilliance. Like a chess master, the here-and-now move prepares for the brilliant win at the end of the game.

The first write up is “Silicon Valley’s Shocking Substance Abuse: Facebook Managers Turned Blind Eye If They Thought It Boosted Productivity, Insider Claims, As Killing of Cash App Founder Bob Lee Exposes Hardcore Drug-Taking.” The write up in the “real news” service says:

Facebook managers turned a blind eye to substance abuse if they felt it boosted productivity, an insider has claimed, as Bob Lee’s killing shines a light on hardcore drug culture in Silicon Valley. Dave Marlon, who founded one of the largest addiction recovery centers in the US and has worked with several Facebook employees, alleges that managers at the tech giant knew about workers taking drugs in the office but accepted it as part of the culture. He told DailyMail.com that what he would describe as ‘severe substance abuse’ was referred to in the industry as the ‘quirks of being a tech employee’.

Facebook? Interesting.

The second write up points out that the payoff for management is what I call RIF’ing, or reduction-in-force methods. This is a variation of “you don’t belong here” or “let them go.” The write up, titled “Meta Lost a Third of Its AI Researchers Over the Last Year. Now It’s Struggling to Keep Up,” reports:

Zuckerberg dubbed 2023 the “year of efficiency” in a February earnings release. Meta laid off over 11,000 employees in November, and continued to shut down projects in the months that followed.

The efficiency tactic has worked. There are fewer people working on smart software. The downside? Nothing significant other than watching other companies zoom farther ahead on the Information Superhighway.

To recap: Facebook allegedly combined “looking the other way” with “efficiency.” Quite a management one-two. As a dinobaby, I find these innovative techniques difficult to comprehend. I hope that neither write up captures the essence of the Facebook way. Well, sort of hope.

Stephen E Arnold, June 20, 2023

Call 9-1-1. AI Will Say Hello Soon

June 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

My informal research suggests that every intelware and policeware vendor is working to infuse artificial intelligence, or in my lingo “smart software,” into their products and services. Most of these firms are not Chatty Cathies. Information about innovations is dribbled out in talks given at restricted-attendance events. This means that the information does not zip around like the #osint posts on the increasingly less used Twitter service.


Government officials talking about smart software which could reduce costs, though the current budget does not allow its licensing. Furthermore, time is required to rethink what to do with the humanoids who will be rendered surplus and ripe for RIF’ing. One of the attendees wisely asks, “Does anyone want dessert?” A wag of the dinobaby’s tail to MidJourney, which has generated an original illustration unrelated to any content object upon which the system inadvertently fed. Smart software has to gobble lunch just like government officials.

However, once in a while, some information becomes public, and “real news” outfits recognize the value of the information and make useful factoids available. That’s what happened in “A.I. Call Taker Will Begin Taking Over Police Non-Emergency Phone Lines Next Week: ‘Artificial Intelligence Is Kind of a Scary Word for Us,’ Admits Dispatch Director.”

Let me highlight a couple of statements in the cited article.

First, I circled this statement about Portland, Oregon’s new smart system:

“A automated attendant will answer the phone on nonemergency and based on the answers using artificial intelligence—and that’s kind of a scary word for us at times—will determine if that caller needs to speak to an actual call taker,” BOEC director Bob Cozzie told city commissioners yesterday.

I found this interesting and suggestive of how some government professionals will view the smart software-infused system.

Second, I underlined this passage:

The new AI system was one of several new initiatives that were either announced or proposed at yesterday’s 90-minute city “work session” where commissioners grilled officials and consultants about potential ways to address the crisis.

The “crisis,” as I understand it, boils down to staffing and budgets.

Several observations:

  1. The write up takes a cautious approach to smart software. What will this mean for adoption of even more sophisticated services included in intelware and policeware solutions?
  2. The message I derived from the write up is that governmental entities are not sure what to do. Will this cloud of unknowing have an impact on adoption of AI-infused intelware and policeware systems?
  3. The article did not include information from the vendor. Does this say something about the reporter’s research, or does it suggest the vendor was not cooperative? Intelware and policeware companies are not particularly cooperative, nor are some of the firms set up to respond to outside inquiries. Will those marketing decisions slow down adoption of smart software?

I will let you ponder the implications of this brief, and not particularly detailed article. I would suggest that intelware and policeware vendors put on their marketing hats and plug them into smart software. Some new hurdles for making sales may be on the horizon.

Stephen E Arnold, June 20, 2023

The Famous Google Paper about Attention, a Code Word for Transformer Methods

June 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Wow, many people are excited about a Bloomberg article called “The AI Boom Has Silicon Valley on Another Manic Quest to Change the World: A Guide to the New AI Technologies, Evangelists, Skeptics and Everyone Else Caught Up in the Flood of Cash and Enthusiasm Reshaping the Industry.”

In the tweets and LinkedIn posts, one small factoid is omitted from the secondhand content. If you want to read the famous DeepMind-centric paper which doomed the Google Brain folks to watch their future from the cheap seats, you can find “Attention Is All You Need,” branded with the imprimatur of the Neural Information Processing Systems Conference held in 2017. Here’s the link to the paper.

For those who read the paper, I would like to suggest several questions to consider (a toy sketch of the attention computation appears after the list):

  1. What economic gain does Google derive from proliferation of its transformer system and method; for example, the open sourcing of the code?
  2. What does “attention” mean for [a] the cost of training and [b] the ability to steer the system and method? (Please, consider the question from the point of view of the user’s attention, the system and method’s attention, and a third-party meta-monitoring system such as advertising.)
  3. What other tasks of humans, software, and systems can benefit from the use of the Transformer system and methods?
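
For readers who want to see what “attention” actually computes before tackling question 2, here is a minimal numpy sketch of the scaled dot-product attention the paper describes. The toy dimensions and random inputs are my own illustration, not anything from the paper or the Bloomberg article:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core operation of the 2017 paper."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # how strongly each query attends to each key
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # attention-weighted mix of the values

# Toy example: 3 tokens, each with a 4-dimensional query, key, and value vector.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V))      # one 4-vector per token
```

Note where the cost in question 2 lives: the Q Kᵀ score matrix grows with the square of the token count, which is a large part of the training bill.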

I am okay with excitement for a 2017 paper, but including a link to the foundation document might be helpful to some, not many, but some.

Net net: Think about Google’s use of the words “trust” and “responsibility” when you answer the three suggested questions.

Stephen E Arnold, June 20, 2023

Digital Belly Cutting: Reddit and Twitter on the Path of Silicon Honor?

June 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

For some reason, I remember my freshman year in high school. My family had returned from Brazil, and the US secondary school adventure was new. The first class of the day in 1958 parked me in front of Miss Dalton, the English teacher. She explained that we had to use the library to write a short essay about a foreign country. Brazil did not interest me, so with the wisdom of a freshman in high school, I chose Japan. My topic? Seppuku or hara-kiri. Yep, belly cutting.

What caught my teen attention was the idea that a warrior who was shamed, a loser in battle, or a fighter wanting to make a point would stab himself with his sword. (Females slit their throats.) The point of the exercise was to make clear that this warrior had moxie, commitment, and maybe a bit of psychological baggage. An inner wiring (maybe guilt) would motivate the warrior to kill himself in a hard-to-ignore way. Wild stuff for a 13 year old.


A digital samurai preparing to commit hara-kiri with a brand new, really sharp MacBook Air M2. Believe it or not, MidJourney objected to an instruction to depict a Japanese warrior committing seppuku with a laptop. Change the instruction, and the system happily depicts a digital warrior initiating the Reddit- and Twitter-type processes. Thanks, MidJourney, for the censorship.

I have watched a highly regarded innovator chop away at his innards with the management “enhancements” implemented at Twitter. I am not a Twitter user, but I think that an unarticulated motive is causing the service to be “improved.” “Twitter: Five Ways Elon Musk Has Changed the Platform for Users” summarizes a few of the most newsworthy modifications. For example, the rocket and EV wizard quickly reduced staff, modified access to third-party applications, and fooled around with check marks. The impact has been intriguing. Billions in value have been disappeared. Some who rose to fame by using Tweets as a personal promotional engine have amped up their efforts to be short-text influencers. The overall effect has been to reduce scrutiny because the impactful wounds are messy. Once-concerned regulators apparently have shifted their focus. Messy stuff.

Now a social media service called Reddit is taking some cues from the Musk Manual of Management. The gold standard in management excellence — that would be CNN, of course — documented some of Reddit’s actions in “Reddit’s Fight with Its Most Powerful Users Enters New Phase As Blackout Continues.” The esteemed news service stated:

The company also appears to be laying the groundwork for ejecting forum moderators committed to continuing the protests, a move that could force open some communities that currently remain closed to the public. In response, some moderators have vowed to put pressure on Reddit’s advertisers and investors.

Without users and moderators, will Reddit thrive? If the information in a recent Wired article is correct, the answer is, “Maybe not.” (See “The Reddit Blackout Is Breaking Reddit.”)

Why are two large US social media companies taking steps that will impair their ability to perform technically and financially or, worse, chop away at themselves?

My thought is that the managers of both firms know that regulators and stakeholders are closing in. Both companies want people who will die for the firm. The companies are at war with ideas, their users, and their supporters. But what’s the motivation?

Let’s do a thought experiment. Imagine that the senior managers at both companies know that they have lost the battle for the hearts and minds of regulators, some users, third-party developers, and those who would spend money to advertise. But, as with many people promoted to a senior position, the pressure of the promotion causes these individuals to doubt themselves. These people are the Peter Principle personified. Unconsciously they want to avoid their inner demons and possibly some financial stakeholders.

The senior managers are taking what they perceive as a strong way out — digital hara-kiri. Of course, it is messy. But the pain, the ugliness, and the courage are notable. Those who are not caught in the sticky Web of social media ignore the horrors. Silicon Valley “real” news professionals, many users dependent on the two platforms, and those who have surfed on the firms’ content have to watch.

Are the managers honorable? Do some respect their tough decisions? Are the senior managers’ inner demons and their sense of shame assuaged? I don’t know. But the action is messy on the path of honor via self-disembowelment.

For a different angle on what’s happened at Facebook and Google, take a look at “The Rot Economy.”

Stephen E Arnold, June 19, 2023

Intellectual Property: What Does That Mean, Samsung?

June 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Former Samsung Executive Accused of Trying to Copy an Entire Chip Plant in China.” I have no idea if [a] the story is straight and true, [b] it is a disinformation post aimed at China, [c] it is something a “real news” type just concocted with the help of a hallucinating chunk of smart software, or [d] it is a story emerging from a lunch meeting where “what if” ideas and “hypotheticals” flitted from Chinese takeout container to takeout container.

It does not matter. I find it bold, audacious, and almost believable.


A single engineer’s pile of schematics, process flow diagrams, and details of the third-party hardware required to build a Samsung-like outfit. The illustration comes from the fertile zeros and ones at MidJourney.

The write up reports:

Prosecutors in the Suwon District have indicted a former Samsung executive for allegedly stealing semiconductor plant blueprints and technology from the leading chipmaker, BusinessKorea reports. They didn’t name the 65-year-old defendant, who also previously served as vice president of another Korean chipmaker SK Hynix, but claimed he stole the information between 2018 and 2019. The leak reportedly cost Samsung about $230 million.

Why would someone steal information to duplicate a facility which is probably getting long in the tooth? That’s a good question. Why not steal from the departments of several companies which are planning facilities to be constructed in 2025? The write up states:

The defendant allegedly planned to build a semiconductor in Xi’an, China, less than a mile from an existing Samsung plant. He hired 200 employees from SK Hynix and Samsung to obtain their trade secrets while also teaming up with an unnamed Taiwanese electronics manufacturing company that pledged $6.2 billion to build the new semiconductor plant — the partnership fell through. However, the defendant was able to secure about $358 million from Chinese investors, which he used to create prototypes in a Chengdu, China-based plant. The plant was reportedly also built using stolen Samsung information, according to prosecutors.

Three countries identified. The alleged plant would be located in easy-to-reach Xi’an. (Take a look at the nifty entrance to the walled city. Does that look like a trap to you? It did to me.)

My hunch is that there is more to this story. But it does a great job of casting shade on the Middle Kingdom. Does anyone doubt the risk posed by insiders who get frisky? I want to ask Samsung’s human resources professional about that vetting process for new hires and what happens when a dinobaby leaves the company with some wrinkles, gray hair, and information. My hunch is that the answer will be, “Not much.”

Stephen E Arnold, June 19, 2023

Google: Smart Software Confusion

June 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I cannot understand. Not only am I old; I am a dinobaby. Furthermore, I am like one of William James’s straw men: easy to knock down or set on fire. Bear with me this morning.

I read “Google Skeptical of AI: Google Doesn’t Trust Its Own AI Chatbots, Asks Employees Not to Use Bard.” The write up asserts as “real” information:

It seems that Google doesn’t trust any AI chatbot, including its own Bard AI bot. In an update to its security measures, Alphabet Inc., Google’s parent company has asked its employees to keep sensitive data away from public AI chatbots, including their own Bard AI.

The go-to word for the Google in the last few weeks is “trust.” The quote points out that Google doesn’t “trust” its own smart software. Does this mean that Google does not “trust” that which it created and is making available to its “users”?


MidJourney, an interesting but possibly insecure and secret-filled smart software system, generated this image of Googzilla as a gatekeeper. Are gatekeepers in place to make money, control who does what, and record the comings and goings of people, data, and content objects?

As I said, I am a dinobaby, and I think I am dumb. I don’t follow the circular reasoning; for example:

Google is worried that human reviewers may have access to the chat logs that these chatbots generate. AI developers often use this data to train their LLMs more, which poses a risk of data leaks.

Now the ante has gone up. The issue is one of protecting itself from its own software. Furthermore, if the statement is accurate, I take the words to mean that Google’s Mandiant-infused, super duper, security trooper cannot protect Google from itself.

Can my interpretation be correct? I hope not.

Then I read “This Google Leader Says ML Infrastructure Is Conduit to Company’s AI Success.” The “this” refers to an entity called Nadav Eiron, a Stanford PhD and Googley wizard. The use of the word “conduit” baffles me because I thought “conduit” was a noun, not a verb. That goes to support my contention that I am a dumb humanoid.

Now let’s look at the text of this write up about Google’s smart software. I noted this passage:

The journey from a great idea to a great product is very, very long and complicated. It’s especially complicated and expensive when it’s not one product but like 25, or however many were announced that Google I/O. And with the complexity that comes with doing all that in a way that’s scalable, responsible, sustainable and maintainable.

I recall someone telling me when I worked at a Fancy Dan blue chip consulting firm, “Stephen, two objectives are zero objectives.” Obviously Google is orders of magnitude more capable than the bozos at the consulting company. Google can do 25 objectives. Impressive.

I noted this statement:

we created the OpenXLA [an open-source ML compiler ecosystem co-developed by AI/ML industry leaders to compile and optimize models from all leading ML frameworks] because the interface into the compiler in the middle is something that would benefit everybody if it’s commoditized and standardized.

I think this means that Google wants to be the gatekeeper or man in the middle.
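
For context on that “middle,” here is a minimal sketch of how the compiler layer looks from JAX, which lowers ordinary Python functions to XLA. The toy function, names, and shapes are my own illustration, not code from the interview:

```python
import jax
import jax.numpy as jnp

# jax.jit hands this Python function to the XLA compiler, which fuses and
# optimizes the operations for whatever backend is present (CPU, GPU, or TPU).
@jax.jit
def dense_layer(w, b, x):
    return jnp.tanh(x @ w + b)

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (4, 3))   # weights
b = jnp.zeros(3)                     # bias
x = jnp.ones((2, 4))                 # a batch of two inputs
print(dense_layer(w, b, x))          # first call compiles; later calls reuse the result
```

Whoever standardizes that compilation layer sits between every framework above it and every chip below it, which is why “commoditized and standardized” reads, to this dinobaby, like gatekeeping.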

Now let’s consider the first article cited. Google does not want its employees to use smart software because it cannot be trusted.

Is it logical to conclude that Google and its partners should use software which is not trusted? Should Google and its partners not use smart software because it is not secure? Given these constraints, how does Google make advances in smart software?

My perception is:

  1. Google is not sure what to do
  2. Google wants to position its untrusted and insecure software as the industry standard
  3. Google wants to preserve its position in a workflow to maximize its profit and influence in markets.

You may not agree. But when articles present messages which are alarming and clearly focused on market control, I turn my skeptic control knob. By the way, the headline should be “Google’s Nadav Eiron Says Machine Learning Infrastructure Is a Conduit to Facilitate Google’s Control of Smart Software.”

Stephen E Arnold, June 19, 2023

The Value of AI and the De-Valuing of Humanoids: Long Lines for Food Stamps Ahead?

June 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

AI, AI, AI-Yai-Ai. That could be a country-western lyric. Maybe it is? I am not a fan of Grand Ole Opry-type entertainment. I do enjoy what I call “Dark AI humor.” If the flow of amusing crAIziness continues, could it become a staple of comedy shows on Tubi or Pluto?

How many people live (theoretically) in the United States? The answer, according to an unimpeachable source, is 336,713,783. I love the precision of smart search software.

Consider the factoid in “300 Million Jobs Will Be Replaced, Diminished by Artificial Intelligence, Report Warns.” If we assume the population of the US is 337 million (sorry You.com), this works out to a trivial 37 million people who will have been promoted by smart software to the “Get Paycheck” social class. I may be overstating the “Paycheck Class,” but this is AI land, so numbers are fuzzified because you know… probability.
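
The back-of-envelope arithmetic, spelled out as a sketch (the subtraction mirrors the text; treating the report’s worldwide-sounding headline number as a US headcount is, I assume, part of the joke):

```python
us_population = 336_713_783   # the delightfully precise figure cited above
jobs_replaced = 300_000_000   # the report's headline number
paycheck_class = us_population - jobs_replaced
print(f"{paycheck_class:,}")  # 36,713,783, the "trivial 37 million" in the text
```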

The write up points out:

Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work.

Disruption rocks on.

Now consider the information in “People Who Work with AI Are More Likely to Be Lonely, Suffer from Insomnia and Booze after Work, Study Finds.” The write up asserts:

Analysis revealed employees who interacted more frequently with AI systems were more likely to experience loneliness, insomnia and increased after-work alcohol consumption. But they also found these employees were more likely to offer to help their coworkers – a response that may be triggered by the need for social contact, the team said. Other experiments in the US, Indonesia and Malaysia, involving property management companies and a tech company, yielded similar results.

Let’s assume both articles contain actual factual information. Imagine managing a group of individuals in the top tier. Now think about those who are in the lower tier. Any fancy management ideas? I have none.

Exciting for sure.

Stephen E Arnold, June 16, 2023

Newsflash: Common Sense Illuminates Friendly Fish for Phishers

June 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Here’s a quick insider threat and phishing victim test: [a] Are you a really friendly, fraternity-or-sorority-social-officer type of gregarious humanoid? [b] Are you a person who says “Yes” to almost any suggestion a friend or stranger makes to you? [c] Are you curious about emails offering big bucks, free prizes, or great deals on avocado slicers?

If you resonated with a, b, or c, researchers have some news for you.

“Younger, More Extroverted, and More Agreeable Individuals Are More Vulnerable to Email Phishing Scams” reports:

… the older you are, the less susceptible you are to phishing scams. In addition, highly extroverted and agreeable people are more susceptible to this style of cyber attack. This research holds the potential to provide valuable guidance for future cybersecurity training, considering the specific knowledge and skills required to address age and personality differences.

The research summary continues:

The results of the current study support the idea that people with poor self-control and impulsive tendencies are more likely to misclassify phishing emails as legitimate. Interestingly, impulsive individuals also tend to be less confident in their classifications, suggesting they are somewhat aware of their vulnerability.

It is good to be an old, irascible, skeptical dinobaby after all.

Stephen E Arnold, June 16, 2023
