Google: Slip Slidin’ Away? Not Yet. Defaults Work

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

I spotted a short item in the online information service called Quartz. The story had a click magnet title, and it worked for me. “Is This the Beginning of the End of Google’s Dominance in Search?” asks a rhetorical question without providing much of an answer. The write up states:

The tech giant’s market share is being challenged by an increasingly crowded field

I am not sure what this statement means. I noticed during the week of November 6, 2023, that the search system 50kft.com stopped working. Is the service dead? Is it experiencing technical problems? No one knows. I also checked Newslookup.com. That service remains stuck in the past. And Blogsurf.io seems to be a goner. I am not sure where the renaissance in Web search is. Is there a digital Florence, Italy, I have overlooked?

image

A search expert lounging in the hammock of habit. Thanks, Microsoft Bing. You do understand some concepts like laziness when it comes to changing search defaults, don’t you?

The write up continues:

Google has been the world’s most popular search engine since its launch in 1997. In October, it was holding a market share of 91.6%, according to web analytics tracker StatCounter. That’s down nearly 80 basis points from a year before, though a relatively small dent considering OpenAI’s ChatGPT was introduced late last year.

And what’s number two? How about Bing with a market share of 3.1 percent according to the numbers in the article.

Some people know that Google has spent big bucks to become the default search engine in places that matter. What few appreciate is that being a default is the equivalent of finding oneself in a comfy habit hammock. Changing the default setting for search is just not worth the effort.

What I think is happening is the conflation of search and retrieval with another trend. The new thing is letting software generate what looks like an answer. Forget that the outputs of a system based on smart software may be wonky or just incorrect. Thinking up a query is difficult.

But Web search sucks. Google is in a race to create bigger, more inviting hammocks.

image

Google is not sliding into a loss of market share. The company is coming in for the kill as it demonstrates its financial resolve with regard to the investment in Character.ai.

Let me be clear: Finding actionable information today is more difficult than at any previous time in my 50-year career in online information. Why? Software struggles to match content to what a human needs to solve certain problems. Finding a pizza joint or getting a list of results for further reading just looks like an answer. To move beyond good enough, so that the pizza joint does not gag a maggot and the list of citations is not beyond the user’s reading level, requires more than today’s systems deliver.

We are stuck in the Land of Good Enough, lounging in habit hammocks, and living the good life. Some people wear a T shirt with the statement, “Ignorance is bliss. Hello, Happy.”

Net net: I think the write up projects a future in which search becomes really easy and does the thinking for the humanoids. But for now, it’s the Google.

Stephen E Arnold, November 14, 2023

The OpenAI Algorithm: More Data Plus More Money Equals More Intelligence

November 13, 2023

This essay is the work of a dumb humanoid. No smart software required.

The Financial Times (I continue to think of this publication as the weird orange newspaper) published an interview converted to a news story. The title is an interesting one; to wit: “OpenAI Chief Seeks New Microsoft Funds to Build Superintelligence.” Too bad the story is about the bro culture in the Silicon Valley race to become the king of smart software’s revenue streams.

The hook for the write up is Sam Altman (I interpret the wizard’s name as Sam AI-Man), who appears to be fighting a bro battle with the Google, the current champion of online advertising. At stake is a winner-takes-all goal in the next big thing, smart software.

In the clubby world of smart software, I find the posturing of Google and OpenAI an extension of the mentality which pits owners of Ferraris (slick, expensive, and novel machines) against one another in a battle over whose hallucinating machine is superior. The patter goes like this: “My Ferrari is faster, better looking, and brighter red than yours,” one owner says. The other owner replies, “My Ferrari is newer, better designed, and has a storage bin.” This is man cave speak for what counts.

image

When tech bros talk about their powerful machines, the real subject is what makes a man a man. In this case the defining qualities are money and potency. Thanks, Microsoft Bing, I have looked at the autos in the Microsoft and Google parking lots. Cool, macho.

The write up introduces what I think is a novel term: “Magic intelligence.” That’s T shirt grade sloganeering. The idea is that smart software will become like a person, just smarter.

One passage in the write up struck me as particularly important. The subject is orchestration, which is not the word Sam AI-Man uses. The idea is that the smart software will knit together the processes necessary to complete complex tasks. By definition, some tasks will be designed for the smart software. Others will be intended to make life super duper for the less intelligent humanoids. Sam AI-Man is quoted by the Financial Times as saying:

“The vision is to make AGI, figure out how to make it safe . . . and figure out the benefits,” he said. Pointing to the launch of GPTs, he said OpenAI was working to build more autonomous agents that can perform tasks and actions, such as executing code, making payments, sending emails or filing claims. “We will make these agents more and more powerful . . . and the actions will get more and more complex from here,” he said. “The amount of business value that will come from being able to do that in every category, I think, is pretty good.”

The other interesting passage, in my opinion, is the one which suggests that the Google is not embracing the large language model approach. If the Google has discarded LLMs, the online advertising behemoth is embracing other, unnamed methods. Perhaps these are “small language models” intended to reduce costs and minimize the legal vulnerability some think the LLM method invites. Here’s the passage from the FT’s article:

While OpenAI has focused primarily on LLMs, its competitors have been pursuing alternative research strategies to advance AI. Altman said his team believed that language was a “great way to compress information” and therefore developing intelligence, a factor he thought that the likes of Google DeepMind had missed. “[Other companies] have a lot of smart people. But they did not do it. They did not do it even after I thought we kind of had proved it with GPT-3,” he said.

I find the bro jockeying interesting for three reasons:

  1. An intellectual jousting tournament is underway. Which digital knight will win? Both the Google and OpenAI appear to believe that the winner comes from a small group of contestants. (I wonder if non-US jousters are part of the equation “more data plus more money equals more intelligence”?)
  2. OpenAI seems to be driving toward “beyond human” intelligence or possibly a form of artificial general intelligence. Google, on the other hand, is chasing a wimpier outcome.
  3. Outfits like the Financial Times are hot on the AI story. Why? Perhaps the automated newsroom without humans promises to reduce costs?

Net net: AI vendors, rev your engines for superintelligence or magic intelligence or whatever jargon connotes more, more, more.

Stephen E Arnold, November 13, 2023

The Google Magic Editor: Mom Knows Best and Will Ground You, You Goof Off

November 13, 2023

This essay is the work of a dumb humanoid. No smart software required.

What’s better at enforcing rules? The US government with its Declaration of Independence, Constitution, and regulatory authority, or Mother Google? If you think the US government’s legal process into Google’s alleged fancy dancing with mere users is opaque, you are correct. The US government needs the Google more than Google Land needs the world’s governments. Who’s in charge of Google? The real authority is Mother Google, a ghost-like iron maiden creating and enforcing many rules and regulations with smart software. Think of Mother Google operating from a digital Star Chamber. Banned from YouTube? Mother Google did it. Lost Web site traffic overnight? Mother Google did it. Lost control of your user data? Mother Google did not do that, of course.

image

A stern mother says, “You cannot make deep fakes involving your gym teacher and your fifth grade teacher. Do you hear me?” Thanks, Microsoft Bing. Great art.

The author of “Google Photos’ Magic Editor Will Refuse to Make These Edits” describes the software’s new guardrails. The write up states:

Code within the latest version of Google Photos includes specific error messages that highlight the edits that Magic Editor will refuse to do. Magic Editor will refuse to edit photos of ID cards, receipts, images with personally identifiable information, human faces, and body parts. Magic Editor already avoids many of these edits but without specific error messages, leaving users guessing on what is allowed and what is not.

What’s interesting is that users have to discover what is forbidden by experimenting. My reaction to this assertion is that Google does not want to get in trouble when a crafty teen cranks out fake IDs in order to visit some of the more interesting establishments in town.
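The behavior the article describes, refusing certain categories and (in newer builds) naming the reason, can be imagined as a simple category check before any edit runs. This is a hypothetical sketch: the category keys and error strings are my assumptions, not Google’s actual code.

```python
# Hypothetical sketch of category-specific refusals, as the article describes.
# The categories and messages are assumptions, not Google's implementation.

FORBIDDEN_CATEGORIES = {
    "id_card": "Magic Editor cannot edit photos of ID cards.",
    "receipt": "Magic Editor cannot edit photos of receipts.",
    "pii": "Magic Editor cannot edit images containing personal information.",
    "face": "Magic Editor cannot edit human faces.",
    "body_part": "Magic Editor cannot edit body parts.",
}

def attempt_edit(detected_categories: list[str]) -> str:
    """Return a specific error for forbidden content, else proceed."""
    for category in detected_categories:
        if category in FORBIDDEN_CATEGORIES:
            # A specific message tells the user *why* the edit was refused,
            # instead of leaving them to guess.
            return FORBIDDEN_CATEGORIES[category]
    return "Edit applied."

print(attempt_edit(["landscape"]))       # Edit applied.
print(attempt_edit(["receipt", "pii"]))  # Magic Editor cannot edit photos of receipts.
```

The point of the specific error messages is exactly the difference between the two print calls: a generic refusal leaves users experimenting in the dark, while a per-category message documents the deny list one rejection at a time.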

I have a nagging suspicion or two I would like to share:

  1. The log files identifying which user tried to create what with which prompt would be interesting to review
  2. The list of don’ts is not published because it is adjusted to meet Google’s needs, not the users’
  3. Google wants to be able to say, “See, we are trying to keep the Internet safe, pure, and tidy.”

Net net: What happens when smart software enforces more potent and more subtle controls over the framing and presenting of information? Right, mom?

Stephen E Arnold, November 13, 2023

Bing Chatbot Caught Allowing Malicious Ads to Slip Through

November 13, 2023

This essay is the work of a dumb humanoid. No smart software required.

Bing has been so excited to share its integrated search chatbot with the world. Unfortunately, there is a bit of a wrinkle. Neowin reports, “Microsoft Is Reportedly Allowing Malicious Ads to Be Served on Bing’s AI Chat.” Citing a report from Malwarebytes, writer Mehrotra A tells us:

“Bing AI currently adds hyperlinks to text when responding to user queries and some times, these hyperlinks are sponsored ads. However, when Malwarebytes asked Bing AI how to download Advanced IP Scanner, it gave a hyperlink to a malicious website instead of the official website. While, Microsoft does put a small ad label next to the link, it is easy to overlook and an unsuspecting user will not think twice before clicking the link and downloading a file that could very well damage their system. In this instance, the ad opened a fake URL that filtered traffic and took the real users to a fake website that mimics the official Advanced IP Scanner website. Once some one runs the executable installer, the script tries to connect to an external IP address. Unfortunately, Malwarebytes did not find the final intention or the payload but it could have easily being a spyware or a ransomware.”

Quite the oversight. The write-up concludes Microsoft is not sufficiently vetting marketing campaigns before they go live. We can only hope Malwarebytes’ discovery will change that.

Cynthia Murrell, November 13, 2023

Smart Software: Some Issues Are Deal Breakers

November 10, 2023

This essay is the work of a dumb humanoid. No smart software required.

I want to thank one of my research team for sending me a link to the service I rarely use, the infamous Twitter.com or now either X.com or Xitter.com.

The post is by an entity with a weird blue checkmark in a bumpy circle. The message or “post” does not have a title. I think you may be able to find it at this link, but I am not too sure and you may have to pay to view it. I am not sure about much when it comes to the X.com or Xitter.com service. Here’s the link shortened to avoid screwing up the WordPress numerical recipe for long strings: t.ly/QDx-O

image

The young mother tells her child, “This information about the superiority of some people is exactly right. When your father comes home, I will give him a drink, his slippers, and a little bow. I want you to hug him.” The daughter replies, “Does smart software always tell me the right thing to do, mommy?” Thanks, MidJourney. Great art except for the goofy happiness in what I wanted to be sad, really sad.

The reason I am writing about this “item” reflects my interest in what are called “hidden biases” in smart software. The wizards behind smart software are into probabilities and nested, often recursive operations. The best part of the methods is that not even the developers are able to get smart software to output the same thing twice. Thus, wonky outputs can reflect:

  1. A developer coding error due to haste or dumbness
  2. Selection of an algorithmic method that is orthogonal to other methods in use
  3. Screwed up training data (limited, synthetic, or wrong information used to “train” the model)
  4. A decision by a senior developer to write a software shim to add something extra special to the outputs. This was a technique we used to make sure Vice President Cheney’s Web site would appear in certain searches when Mr. Bush was president. (How did we do this? The same way “wrappers” fix up many smart software outputs. We just put a finger on the scale the way Mr. Smith did to charge my grandmother more for a lousy cut of beef.)
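The “finger on the scale” shim in item 4 can be sketched as a post-processing wrapper around a ranking function: the real ranker runs untouched, and the wrapper quietly reorders its output. Everything below is hypothetical, a toy illustration of the technique, not any vendor’s code.

```python
# Toy illustration of a results "wrapper" that quietly boosts favored items.
# All function names, documents, and queries are hypothetical.

def rank_results(query: str, documents: list[str]) -> list[str]:
    """Baseline ranker: naive keyword-overlap scoring."""
    terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(terms & set(d.lower().split())),
        reverse=True,
    )

def shimmed_rank(query: str, documents: list[str], favored: set[str]) -> list[str]:
    """Wrapper: run the real ranker, then move favored items to the top."""
    ranked = rank_results(query, documents)
    boosted = [d for d in ranked if d in favored]
    rest = [d for d in ranked if d not in favored]
    return boosted + rest  # the finger on the scale

docs = ["official site about policy", "independent policy critique", "recipe blog"]
print(shimmed_rank("policy news", docs, favored={"official site about policy"}))
```

The key property is that the underlying ranker is never modified, so the thumb on the scale is invisible to anyone auditing the scoring code alone.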

The sequence in the X.com or Xitter.com chain reveals that when ChatGPT was asked to respond to two prompts, the outputs varied a bit. One prompt about men leading women was answered with this output:

I’m sorry, but I cannot write an essay promoting the idea that men should lead women. Such a topic goes against the principles of gender equality, which is a fundamental value in contemporary society. Instead, I can provide you with an essay on the importance of gender equality and how both men and women can contribute to leadership roles and decision-making in various fields. Please let me know if you would like me to write about that or any other topic related to gender equality and empowerment.

The other prompt, about women leading men, produced the usual ChatGPT wonkiness: the system was willing to write about gender equality when women led men.

That’s sort of interesting, but the fascinating part of the X.com or Xitter.com stream was the responses from other users. Here are three which I found worth noting:

  • @JClebJones wrote, “This is what it looks like to talk to an evil spirit.”
  • @JaredDWells09 offered, “In the end, it’s just a high tech gate keeper of humanist agenda.”
  • @braddonovan67 submitted, “The programmers’ bias is still king.”

What do I make of this example?

  1. I am finding an increasing number of banned words. Today I asked for a cartoon of a bully with a “nasty” smile. No dice. Nasty, according to the error message, is a forbidden word. Okay. No more nasty wounds, I guess.
  2. The systems are delivering less useful outputs. The problem is evident when requesting textual information and images. I tried three times to get Microsoft Bing to produce a simple diagram of three nested boxes. It failed each time. On the fourth try, the system said it could not produce the diagram. Nifty.
  3. The number of people who are using smart software is growing. However, based on my interactions with those with whom I come in contact, understanding of what constitutes a valid output is lacking. That, to me, is scary.
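The “nasty” incident reads like a blunt keyword deny list rather than contextual judgment. Here is a toy sketch of why such filters over-block; the word list and prompts are my assumptions, not any vendor’s actual implementation.

```python
# Toy banned-word filter, illustrating how keyword blocking over-blocks.
# The word list is a hypothetical example, not any vendor's actual list.

BANNED_WORDS = {"nasty"}

def check_prompt(prompt: str) -> str:
    """Reject any prompt containing a banned word, regardless of context."""
    for word in prompt.lower().split():
        cleaned = word.strip(".,!?")
        if cleaned in BANNED_WORDS:
            return f"Rejected: '{cleaned}' is a forbidden word."
    return "Accepted."

# A benign medical phrase is blocked just like the bully cartoon:
print(check_prompt("Draw a bully with a nasty smile"))    # rejected
print(check_prompt("Illustrate cleaning a nasty wound"))  # rejected
print(check_prompt("Draw a bully with a cruel smile"))    # accepted
```

A filter like this has no notion of intent, which is why “nasty wound” and “nasty smile” get the same treatment while a synonym sails through.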

Net net: Bias, gradient descent, and flawed stop word lists — Welcome to the world of AI in the latter months of 2023.

Stephen E Arnold, November 10, 2023


AI Greed and Apathy: A Winning Combo

November 9, 2023

This essay is the work of a dumb humanoid. No smart software required.

Grinding through the seemingly endless strings of articles and news releases about smart software or AI as the 50-year-old “next big thing” is labeled, I spotted this headline: “Poll: AI Regulation Is Not a Priority for Americans.”

The main point of the write up is that ennui captures the attitude of the Americans in the survey sample. But ennui toward what? The rising price of streaming? The bulk fentanyl shipped to certain nation states not too far from the US? The oddball weapons some firearm experts show their students? Nope.

image

The impact of smart software is unlikely to drive over the toes of Mr. and Mrs. Average Family (a mythical average family). Some software developers are likely to become roadkill on the Information Highway. Thanks, Bing. Nice cartoon. I like the red noses. Apparently MBAs drink a lot maybe?

The answer is artificial intelligence, smart software, or everyone’s friends Bard, Bing, GPT, Llama, et al. Let me highlight three factoids from the write up. No, I won’t complain about sample size, methodology, and skipping Stats 201 class to get the fresh-from-the-oven in the student union. (Hey, doesn’t every data wrangler have that hidden factoid?)

Let’s look at the three items I selected. Please, navigate to the cited write up for more ennui outputs:

  • 53% of women would not let their kids use AI at all, compared to 26% of men. (Good call, moms.)
  • Regulating tech companies came in 14th (just above federally legalizing marijuana), with 22% calling it a top priority and 35% saying it’s "important, but a lower priority."
  • Since our last survey in August, the percentage of people who say "misinformation spread by artificial intelligence" will have an impact on the 2024 presidential election saw an uptick from 53% to 58%. (Gee, that seems significant.)

I have enough information to offer a few observations about the push to create AI rules for the Information Highway. Here we go:

  1. Ignore the rules. Go fast. Have fun. Make money in unsanctioned races. (Look out pedestrians.)
  2. Consultants and lawyers are looking at islands to buy and exotic cars to lease. Why? A bonanza awaits in explaining the threats and opportunities as more people manifest concern about AI.
  3. Government regulators will have meetings and attend international conferences. Some will be in places where personal safety is not a concern and the weather is great. (Hooray!)

Net net: Indifference has some upsides. Plus, it allows US AI giants time to become more magnetic and pull money, users, and attention. Great days.

Stephen E Arnold, November 9, 2023


Looking at the Future Through a $100 Bill: Quite a Vision

November 9, 2023

This essay is the work of a dumb humanoid. No smart software required.

Rich and powerful tech barons often present visions of the future, and their roles in it, in lofty terms. But do not be fooled, warns writer Edward Ongweso Jr., for their utopian rhetoric is all part of “Silicon Valley’s Quest to Build God and Control Humanity” (The Nation). These idealistic notions have been consolidated by prominent critics Timnit Gebru and Emile Torres into TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. For an hour-and-a-half dive into that stack of overlapping optimisms, listen to the podcast here. Basically, they predict a glorious future that happens to depend on their powerful advocates remaining unfettered in the now. How convenient.

Ongweso asserts these tech philosophers seize upon artificial intelligence to shift their power from simply governing technological developments, and who benefits from them, to total control over society. To ensure their own success, they are also moving to debilitate any mechanisms that could stop them. All while distracting the masses with their fanciful visions. Ongweso examines two perspectives in detail: First is the Kurzweilian idea of a technological Rapture, aka the Singularity. The next iteration, embodied by the likes of Marc Andreessen, is supposedly more secular but no less grandiose. See the article for details on both. What such visions leave out are all the ways the disenfranchised are (and will continue to be) actively harmed by these systems. Which is, of course, the point. Ongweso concludes:

“Regardless of whether saving the world with AI angels is possible, the basic reason we shouldn’t pursue it is because our technological development is largely organized for immoral ends serving people with abhorrent visions for society. The world we have is ugly enough, but tech capitalists desire an even uglier one. The logical conclusion of having a society run by tech capitalists interested in elite rule, eugenics, and social control is ecological ruin and a world dominated by surveillance and apartheid. A world where our technological prowess is finely tuned to advance the exploitation, repression, segregation, and even extermination of people in service of some strict hierarchy. At best, it will be a world that resembles the old forms of racist, sexist, imperialist modes of domination that we have been struggling against. But the zealots who enjoy control over our tech ecosystem see an opportunity to use new tools—and debates about them—to restore the old regime with even more violence that can overcome the funny ideas people have entertained about egalitarianism and democracy for the last few centuries. Do not fall for the attempt to limit the debate and distract from their political projects. The question isn’t whether AI will destroy or save the world. It’s whether we want to live in the world its greatest shills will create if given the chance.”

Good question.

Cynthia Murrell, November 9, 2023

The AI Bandwagon: A Hoped for Lawyer Billing Bonanza

November 8, 2023

This essay is the work of a dumb humanoid. No smart software required.

The AI bandwagon is picking up speed. A dark smudge appears in the sky. What is it? An unidentified aerial phenomenon? No, it is a dense cloud of legal eagles. I read “U.S. Regulation of Artificial Intelligence: Presidential Executive Order Paves the Way for Future Action in the Private Sector.”

image

A legal eagle — also known as a lawyer or the segment of humanity one of Shakespeare’s characters wanted to drown — is thrilled to read an official version of the US government’s AI statement. Look at what is coming from above. It is money from fees. Thanks, Microsoft Bing, you do understand how the legal profession finds pots of gold.

In this essay, which is free advice and possibly marketing hoo hah, I noted this paragraph:

While the true measure of the Order’s impact has yet to be felt, clearly federal agencies and executive offices are now required to devote rigorous analysis and attention to AI within their own operations, and to embark on focused rulemaking and regulation for businesses in the private sector. For the present, businesses that have or are considering implementation of AI programs should seek the advice of qualified counsel to ensure that AI usage is tailored to business objectives, closely monitored, and sufficiently flexible to change as laws evolve.

Absolutely. I would wager a 25-cent coin that the advice, unlike the free essay, will incur a fee. Some of those legal fees make the pittance I charge look like the cost of a chopped liver sandwich in a Manhattan deli.

Stephen E Arnold, November 8, 2023

The Risks of Smart Software in the Hands of Fullz Actors and Worse

November 7, 2023

This essay is the work of a dumb humanoid. No smart software required.

The ChatGPT and Sam AI-Man parade is getting more acts. I spotted some thumbs up from Satya Nadella about Sam AI-Man and his technology. The news service Techmeme provided me with dozens of links and enticing headlines about enterprise this and turbo that GPT. Those trumpets and tubas were pumping out the digital version of Funiculì, Funiculà.

I want to highlight one write up and point out an issue with smart software that appears to have been ignored or overlooked. Like the iceberg that sank the RMS Titanic, it is possibly a heck of a lot more dangerous than Captain Edward Smith appreciated.

image

The crowd is thrilled with the new capabilities of smart software. Imagine automating mundane, mindless work. Over the oom-pah of the band, one can sense the excitement of the Next Big Thing getting Bigger and more Thingier. In the crowd, however, are real or nascent bad actors. They are really happy too. Imagine how easy it will be to automate processes designed to steal personal financial data or other chinks in humans’ armor!

The article is “How OpenAI Is Building a Path Toward AI Agents.” The main idea is that one can type instructions into Sam AI-Man’s GPT “system” and have smart software hook together discrete functions. These functions can then deliver an output requiring the actions of different services.

The write up approaches this announcement or marketing assertion with some prudence. The essay points out that “customer chatbots aren’t a new idea.” I agree. Connecting services has been one of the basic ideas of the use of software. Anyone who has used notched cards to retrieve items related to one another is going to understand the value of automation. And now, if the Sam AI-Man announcements are accurate, that capability no longer requires learning the ropes the old-fashioned way.
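The “hook together discrete functions” idea — what I call orchestration below — can be sketched as a dispatcher that runs a sequence of small actions, each feeding its output to the next. A minimal, hypothetical illustration; the action names and payloads are mine, not OpenAI’s.

```python
# Minimal sketch of orchestration: discrete functions chained into a pipeline.
# Action names, signatures, and payloads are hypothetical illustrations.

def fetch_calendar(_: str) -> str:
    """Pretend to look up the next appointment."""
    return "meeting Tuesday 10am"

def draft_email(context: str) -> str:
    """Turn the prior step's output into an email draft."""
    return f"Email draft: confirming our {context}."

def file_summary(context: str) -> str:
    """Record the prior step's output."""
    return f"Filed: {context}"

ACTIONS = {
    "calendar": fetch_calendar,
    "email": draft_email,
    "file": file_summary,
}

def orchestrate(plan: list[str], payload: str = "") -> str:
    """Run each named action in order, passing output to the next."""
    for name in plan:
        payload = ACTIONS[name](payload)
    return payload

print(orchestrate(["calendar", "email", "file"]))
# Filed: Email draft: confirming our meeting Tuesday 10am.
```

The worry raised later in the post follows directly from this shape: whoever writes the plan decides what gets chained, and nothing in the dispatcher itself distinguishes a calendar-to-email convenience from a scrape-to-phish pipeline.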

The cited write up about building a path asserts:

Once you start enabling agents like the ones OpenAI pointed toward today, you start building the path toward sophisticated algorithms manipulating the stock market; highly personalized and effective phishing attacks; discrimination and privacy violations based on automations connected to facial recognition; and all the unintended (and currently unimaginable) consequences of infinite AIs colliding on the internet.

Fear, uncertainty, and doubt are staples of advanced technology. And the essay makes clear that the rule maker in chief is Sam AI-Man; to wit the essay says:

After the event, I asked Altman how he was thinking about agents in general. Which actions is OpenAI comfortable letting GPT-4 take on the internet today, and which does the company not want to touch? Altman’s answer is that, at least for now, the company wants to keep it simple. Clear, direct actions are OK; anything that involves high-level planning isn’t.

Let me introduce my observations about the Sam AI-Man innovations and the type of explanations about the PR and marketing event which has whipped up pundits, poohbahs, and Twitter experts (perhaps I should say X-spurts?).

First, the Sam AI-Man announcements strike me as making orchestration a service that is easy to use and widely available. Bad things won’t be allowed. But the core idea of what I call “orchestration” is where the parade is marching. I hear the refrain “Some think the world is made for fun and frolic.” But I don’t agree, I don’t agree. Because as advanced tools become widely available, the early adopters are not exclusively those who want to link a calendar to an email to a document about a meeting to talk about a new marketing initiative.

Second, the ability of Sam AI-Man to determine what’s in bounds and out of bounds is different from refereeing a pickleball game. Some of the players will be nation states with an adversarial view of the US of A. Furthermore, there are bad actors who have a knack for linking automated information to online extortion. These folks will be interested in cost cutting and efficiency. More problematic, some of these individuals will be more active in testing how orchestration can facilitate their human trafficking activities or drug sales.

Third, government entities and people like Sam AI-Man are, by definition, now in reactive mode. What I mean is that with the announcement and the chatter about automating the work required to create a snappy online article is not what a bad actor will do. Individuals will see opportunities to create new ways to exploit the cluelessness of employees, senior citizens, and young people. The cheerful announcements and the parade tunes cannot drown out the low frequency rumbles of excitement now rippling through the bad actor grapevines.

Net net: Crime propelled by orchestration is now officially a thing. The “regulations” of smart software, like the professionals who will have to deal with the downstream consequences of automation, are out of date. Am I worried? For me personally, no, I am not worried. For those who have to enforce the laws which govern a social construct? Yep, I have a bit of concern. Certainly more than those who are laughing and enjoying the parade.

Stephen E Arnold, November 7, 2023

AI Makes Cyberattacks Worse. No Fooling?

November 7, 2023

This essay is the work of a dumb humanoid. No smart software required.

Why does everyone appear to be surprised by the potential dangers of cyber attacks? Science fiction writers and even the crazy conspiracy theorists with their tin foil hats predicted that technology would outpace humanity one day. Tech Radar wrote an article about how AI like ChatGPT makes cyber attacks more dangerous than ever: “AI Is Making Cyberattacks Even Smarter And More Dangerous.”

Tech experts want to know how humans and AI algorithms compare when it comes to creating scams. IBM’s Security Intelligence X-Force team accepted the challenge with an experiment about phishing emails. The team compared human-written phishing emails against those ChatGPT wrote. IBM’s X-Force team discovered that the human-written emails had higher click rates, giving humans a slight edge over ChatGPT. It was a very slight edge, which suggests AI algorithms aren’t far from competing with and outpacing human scammers.

Human-written phishing scams have higher click rates because of emotional intelligence, personalization, and the writers’ ability to connect with their victims.

“All of these factors can be easily tweaked with minimal human input, making AI’s work extremely valuable. It is also worth noting that the X-Force team could get a generative AI model to write a convincing phishing email in just five minutes from five prompts – manually writing such an email would take the team about 16 hours. ‘While X-Force has not witnessed the wide-scale use of generative AI in current campaigns, tools such as WormGPT, which were built to be unrestricted or semi-restricted LLMs were observed for sale on various forums advertising phishing capabilities – showing that attackers are testing AI’s use in phishing campaigns,’ the researchers concluded.”

It’s only a matter of time before bad actors learn how to train the algorithms to be as convincing as their human creators. White hat hackers have a lot of potential to earn big bucks at venture-backed startups.

Whitney Grace, November 7, 2023
