AI Makes Cyberattacks Worse. No Fooling?

November 7, 2023

This essay is the work of a dumb humanoid. No smart software required.

Why does everyone appear surprised by the potential dangers of cyberattacks? Science fiction writers and even the conspiracy theorists in their tin foil hats predicted that technology would one day outpace humanity. TechRadar wrote an article about how AI like ChatGPT makes cyberattacks more dangerous than ever: “AI Is Making Cyberattacks Even Smarter And More Dangerous.”

Tech experts want to know how humans and AI algorithms compare when it comes to creating scams. IBM’s Security Intelligence X-Force team accepted the challenge with an experiment about phishing emails. The team compared human-written phishing emails against those ChatGPT wrote and discovered that the human-written emails had higher click rates, giving people a slight edge over ChatGPT. It was a very slight edge, though, which suggests AI algorithms are not far from matching, and then outpacing, human scammers.

Human-written phishing scams have higher click rates because of emotional intelligence, personalization, and an ability to connect with victims.

“All of these factors can be easily tweaked with minimal human input, making AI’s work extremely valuable. It is also worth noting that the X-Force team could get a generative AI model to write a convincing phishing email in just five minutes from five prompts – manually writing such an email would take the team about 16 hours. ‘While X-Force has not witnessed the wide-scale use of generative AI in current campaigns, tools such as WormGPT, which were built to be unrestricted or semi-restricted LLMs were observed for sale on various forums advertising phishing capabilities – showing that attackers are testing AI’s use in phishing campaigns,’ the researchers concluded.”

It’s only a matter of time before the bad actors learn how to train the algorithms to be as convincing as their human creators.  White hat hackers have a lot of potential to earn big bucks as venture startups.

Whitney Grace, November 7, 2023

Tech Writer Overly Frustrated With Companies

November 7, 2023

This essay is the work of a dumb humanoid. No smart software required.

We all begin our adulthoods as wide-eyed, naïve go-getters who are out to change the world. It only takes a few years for our hopes and dreams to be dashed by the menial, insufferable behaviors that plague businesses. We all have stories about incompetence, wasted resources, passing the buck, and butt kissers. Ludicity is a blog written by a tech engineer who vents his frustrations and shares his observations about his chosen field. His first post in November 2023 highlights the stupidity of humanity and upper management: “What The Goddamn Hell Is Going On In The Tech Industry?”

In this specific post, the author reflects on a comment he received regarding how companies can save money by eliminating useless bodies and giving competent staff the freedom to do their jobs. The comment in question blamed the author for creating unnecessary stress and not being a team player. In turn, the author pointed out the comment’s faulty logic and subsequently dunked his head in water to muffle his screams. The author writes Ludicity for cathartic reasons, especially to commiserate with his fellow engineers.

The author turned 29 in 2023, so he’s ending his twenties with the same depression and dismal outlook we all share:

“There’s just some massive unwashed mass of utterly stupid companies where nothing makes any sense, and the only efficiencies exist in the department that generates the money to fund the other stupid stuff, and then a few places doing things halfway right. The places doing things right tend to be characterized by being small, not being obsessed with growth, and having calm, compassionate founders who still keep a hand on the wheel. And the people that work there tend not to know the people that work elsewhere. They’re just in some blessed bubble where the dysfunction still exists in serious quantities, but that quantity is like 1/10th the intensity of what it is elsewhere.”

The author, however, still possesses hope. He wants to connect with like-minded individuals who are tired of the same corporate shilling and want to work together at a company that actually gets work done.

We all want to do that. Unfortunately, the author might be better off starting his own company to attract his brethren and see what happens. It will be hard, but not as hard as going back to school or dealing with corporate echo chambers.

Whitney Grace, November 7, 2023

ACM Kills Print Publications But Dodges the Money Issue

November 6, 2023

This essay is the work of a dumb humanoid. No smart software required.

In January 2024, the Association for Computing Machinery will kill off its print publications. “Ceasing Print Publication of ACM Journals and Transactions” says goodbye to the hard copy instances of Communications of the ACM, ACM Inroads, and a couple of other publications. It is possible that ACM will continue to produce print versions of material for students. (I thought students were accustomed to digital content. Guess the ACM knows something I don’t. That’s not too difficult. I am a dinobaby who read ACM publications for the stories, not the pictures.)


The perspiring clerk asks, “But what about saving the whales?” The CFO, carrying the burden of talking to auditors, replies, “It’s money, stupid, not that PR baloney.” Thanks, Microsoft Bing. You understand perspiring accountants. Do you have experience answering IRS questions about some calculations related to Puerto Rico?

Why would a professional trade outfit dismiss paper? My immediate and uninformed answer to this question is, “Cost. Stuff like printing, storage, fulfillment, and design cost money.” I would be wrong, of course. The ACM gives these reasons:

  • Be environmentally friendly. (Don’t ACM supporters use power-sucking data centers often powered by coal?)
  • Electronic publications have more features. (One example is a way to charge a person who wants to read an article and to nip in the bud the daring soul pumping money into a photocopy machine for an article to read whilst taking a break from the coffee and mobile phone habit.)
  • Subscriptions are tanking.

I think the “subscriptions” bit is a way to say, “Print stuff is very expensive to produce and more expensive to sell.”

With the New York Times allegedly poised to use smart software to write its articles, when will the ACM dispense with member contributions?

Stephen E Arnold, November 6, 2023

Will Apple Weather Forecast Storms in Beijing?

November 6, 2023

This essay is the work of a dumb humanoid. No smart software required.

The stock markets in the US have been surfing on the wave skimmers owned by the “magnificent seven.” The phrase refers to the FAANG crowd plus that AI fave NVidia and everyone’s favorite auto from Tesla. Has something gone subtly amiss at Apple, the darling of the hip graphics and “I love Linux” crowd?


“My weather app said it would be warm and sunny. What happened to smart software?” says the disenchanted young person. Rain is a good thing, not a bummer. Thanks, MidJourney. This image reminds me of those weird illustrations of waifs with big eyes. Inspiration is where one finds it.

I don’t know. I would point to one faint signal contained in the online write up “Why Apple’s Weather App Is So Bad.” The article makes it clear that weather forecasting is tricky. Software is not yet up to the task of delivering accurate information about rain. Rain, I suppose, is one of those natural phenomena opaque to smart people, smart software, and smart acquisitions.

The statement in the write up which caught my attention was:

Over this time, this relentless weekend-only rain has also affirmed that Apple’s weather app is pretty much useless. Personally, I’ve learned that the app cannot distinguish between “light rain” and “rain,” that the percentages it spits out feel bogus, and to never trust it when it tells you what time the rain will stop. I’m not alone. My friends and coworkers also have various stories about how the app has let them down, or how sometimes it just won’t work. Some even talk about Dark Sky, a weather-forecasting app that Apple bought in 2020, with a mournful, wistful sadness, like a lost love. Apple says Dark Sky’s most beloved features have been integrated into its app, but Dark Sky fans aren’t convinced. Things were different then, they say. Things were better.

Did you spot the knife twist? Here it is, ripped from the heart of the paragraph:

sometimes it just won’t work

No big deal. A weather app. But Apple appeared to have ripped a page from the Google’s Management Handbook. Jon Stewart departed from Apple. The reasons are mysterious, a bit like the Dark Sky falling in Cupertino. I also noticed that Apple has a certain connection to China, particularly with regard to that most magical and almost unchanged candy bar phone. Granted, it revolutionized Apple’s financial position, but does the contractor who assists me require a device to thaw the hearts of Apple lovers on a ski slope? (No rain predicted, I assume.)

Net net: Rain, Mr. Stewart, and the supply chain to China. Are these signals worth monitoring? Probably not. When I need a weather forecast, this dinobaby just looks out a window, not at a mobile phone.

Stephen E Arnold, November 6, 2023

Social Media: A No-Limits Zone for Scammers

November 6, 2023

This essay is the work of a dumb humanoid. No smart software required.

Scams have plagued social media since its inception, and it’s only getting worse. The FTC described the current state of social media scams in “Social Media: A Golden Goose For Scammers.” Scammers and other bad actors are hiding in plain sight on popular social media platforms. The FTC’s Consumer Sentinel Network reported that one in four people lost money to scams that began on social media. In total, people reported losing $2.7 billion to social media scams, but the number could be greater because most cases aren’t reported.

It’s sobering the way bad actors target victims:

“Social media gives scammers an edge in several ways. They can easily manufacture a fake persona, or hack into your profile, pretend to be you, and con your friends. They can learn to tailor their approach from what you share on social media. And scammers who place ads can even use tools available to advertisers to methodically target you based on personal details, such as your age, interests, or past purchases. All of this costs them next to nothing to reach billions of people from anywhere in the world.”

Scammers don’t discriminate by age. Surprisingly, younger groups lost the most to bad actors. Forty-seven percent of people 18-19 were defrauded in the first six months of 2023, while only 38% of people 20-29 were hit. The numbers decrease with age, tracking older generations’ lighter use of social media.

The biggest reported scams were related to online shopping, usually from people who tried to buy something advertised on social media; these accounted for 44% of reported losses from January-June 2023. Fake investment opportunities, most of them cryptocurrency operations, grossed the largest share of scammers’ profits at 53%. Romance scams had the second highest losses for victims. These encounters start innocuously enough but always end with love bombing and money requests.

Take precautions: make your social media profiles private, investigate if friends suddenly ask you for money, don’t instantly fall in love with random strangers, and research companies before you invest. It’s all old, yet sagacious, advice for the digital age.

Whitney Grace, November 6, 2023

Is Utah a Step Behind As Meta Threads Picks Up Steam?

November 3, 2023

This essay is the work of a dumb humanoid. No smart software required.

Now that TikTok has become firmly embedded in US culture, regulators are finally getting around to addressing its purported harms. Utah joins Arkansas and Indiana in suing parent company ByteDance even as the US Supreme Court considers whether social-media regulation violates the US Constitution. No, it is not the threat of Chinese spying that has Utah’s Division of Consumer Protection taking action this time. Rather, Digital Trends reports, “TikTok Sued by Utah Over Alleged Child Addiction Harm.” Yes, that’s a big concern too. Writer Trevor Mogg tells us:

“Utah’s filing focuses on the app’s alleged negative impact on children, claiming that TikTok ‘surreptitiously designed and deployed addictive features to hook young users into endlessly scrolling through the company’s app.’ It accused TikTok of wanting Utah citizens to ‘spend as much time on its app as possible so it can place advertisements in front of them more often,’ and alleges that the company ‘misled young users and their parents about the app’s dangers.’ In damning comments shared in a statement on Tuesday, Utah Attorney General Sean D. Reyes said: ‘I’m tired of TikTok lying to Utah parents. I’m tired of our kids losing their innocence and even their lives addicted to the dark side of social media. TikTok will only change if put at legal risk — and ‘at risk’ is where they have left our youth in exchange for profit and greed. Immediate and pervasive threats require swift and bold responses. We have a compelling case against TikTok. Our kids are worth the fight.’”

Reyes is not bluffing. The state has already passed laws to limit minors’ social media usage, with measures such as verified parental consent required for sign-ups and even making accounts and messages accessible to parents. Though many are concerned the latter is a violation of kids’ privacy, the laws are scheduled to go into effect next year.

But what about the other social media apps? Elon is not dragging his heels. And the Zuck? Always the Zuck.

Cynthia Murrell, November 3, 2023

The Brin-A-Loon: A Lofty Idea Is Ready to Take Flight

November 3, 2023

This essay is the work of a dumb humanoid. No smart software required.

I read “Sergey Brin’s 400-Foot Airship Reportedly Cleared for Takeoff.” I am not sure how many people know about Mr. Brin’s fascination with a balloon larger than Vladimir Putin’s yacht. The article reports:

While the concept of rigid airships and the basic airframe design are a throwback to pre-Hindenburg times of the early 1900s, Pathfinder 1 uses a frame made from 96 welded titanium hubs, joined by some 289 reinforced carbon fiber tubes. These materials advances keep it light enough to fly using helium, rather than hydrogen as a lift gas.


A high technology balloon flies near the Stanford campus, heading toward the Paul Allen Building. Will the aspiring network wizards notice the balloon? Probably not. Thanks, MidJourney. A bit like the movie posters I saw as a kid, but close enough for horseshoes and the Brin-A-Loon.

High tech. Plus, helium (an increasingly scarce resource for the Brin-A-Loon and party balloons at Dollar General) does not explode. Remember that newsreel footage from New Jersey? Hydrogen, not helium.

The article continues:

According to IEEE Spectrum, the company has now been awarded the special airworthiness certificate required to fly this beast outdoors – at less than 1,500 ft (460 m) of altitude, and within the boundaries of Moffett Field and the neighboring Palo Alto Airport’s airspace.

Will there be UFO reports on TikTok and YouTube?

What’s the purpose of the Brin-A-Loon? The write up opines:

LTA says its chief focus is humanitarian aid; airships can get bulk cargo in and people out of disaster areas when roads and airstrips are destroyed and there’s no way for other large aircraft to get in and out. Secondary opportunities include slow point-to-point cargo operations, although the airships will be grounded if the weather doesn’t co-operate.

I remember the Loon balloons. The idea was to use Loon balloons to deliver Internet access in places like Sri Lanka, Puerto Rico, and Africa. Great idea. The hitch in the float along was that the weather was a bit of an issue. Oh, the software — like much of the Googley code floating around — was a bit problematic.

The Loon balloons are gone. But the Brin-A-Loon is ready to take to the air. The craft may find a home in Ohio. Good for Ohio. And the Brin-A-Loon will be filled with helium like birthday party balloons. Safer than hydrogen. Will the next innovation be the Brin-Train, a novel implementation of the 19th century Leland Stanford railroad engines?

Stephen E Arnold, November 3, 2023

How Generative Graphics AI Might Be Used to Embed Hidden Messages

November 3, 2023

This essay is the work of a dumb humanoid. No smart software required.

Subliminal advertising is back, now with an AI boost. At least that is the conclusion of one Tweeter (X-er?) who posted a few examples of the allegedly frightful possibilities. The Creative Bloq considers, “Should We Be Scared of Hidden Messages in AI Optical Illusions?” Writer Joseph Foley tells us:

“Some of the AI optical illusions we’ve seen recently have been slightly mesmerizing, but some people are concerned that they could also be dangerous. ‘Many talk about the dangers of “AGI” taking over humans. But you should worry more about humans using AI to control other humans,’ Cocktail Peanut wrote in a post on Twitter, providing the example of the McDonald’s logo embedded in an anime-style AI-generated illustration. The first example wasn’t very subtle. But Peanut followed up with less obvious optical illusions, all made using a Stable Diffusion-powered Hugging Face space called Diffusion Illusion HQ created by Angry PenguinPNG. The workflow for making the illusions, using Monster Labs QR Control Net, was apparently discovered by accident. The ControlNet technique allows users to specify inputs, for example specific images or words, to gain more control over AI image generations. Monster Labs’ tool was created to allow QR codes to be used as input so the AI would generate usable but artistic QR codes as an output, but users discovered that it could also be used to hide patterns or words in AI-generated scenes.”
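The ControlNet workflow described above conditions the diffusion model itself, which is more than a blog-sized example can demonstrate. But the perceptual trick these illusions exploit, a pattern embedded at low contrast that pops out when the image is viewed small or blurred, can be sketched in a few lines of plain Python. Everything below (the noisy base image, the half-and-half pattern, the strength value) is invented purely for illustration and has nothing to do with any real model:

```python
# Toy illustration (not actual ControlNet conditioning): nudge pixel brightness
# according to a hidden binary pattern, then show the pattern emerging when the
# image is downscaled -- a crude stand-in for squinting at an AI illusion.
import random

def embed_pattern(image, pattern, strength=6):
    """Shift each pixel up or down a little depending on the pattern bit."""
    return [
        [max(0, min(255, px + (strength if bit else -strength)))
         for px, bit in zip(img_row, pat_row)]
        for img_row, pat_row in zip(image, pattern)
    ]

def downscale(image, factor):
    """Average factor x factor blocks of pixels."""
    h, w = len(image) // factor, len(image[0]) // factor
    out = []
    for by in range(h):
        row = []
        for bx in range(w):
            block = [image[by * factor + y][bx * factor + x]
                     for y in range(factor) for x in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

random.seed(0)
size = 64
# Noisy mid-gray base "image" as a 2D list of 0-255 values.
base = [[random.randint(100, 156) for _ in range(size)] for _ in range(size)]
# Hidden pattern: left half on, right half off (stands in for a logo or word).
pattern = [[1 if x < size // 2 else 0 for x in range(size)] for _ in range(size)]

stego = embed_pattern(base, pattern)
small = downscale(stego, 16)  # 4x4 block averages
left_mean = sum(row[0] + row[1] for row in small) / (2 * len(small))
right_mean = sum(row[2] + row[3] for row in small) / (2 * len(small))
print(left_mean > right_mean + 6)  # prints True: averaging reveals the pattern
```

In the real workflow, the pattern image is the conditioning input to Monster Labs’ QR ControlNet, and Stable Diffusion paints a coherent scene whose light and dark regions follow it; the toy above only shows why blurring or shrinking the result makes the hidden structure visible.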

Hidden messages in ads have been around since 1957, though they are officially banned as “deceptive advertising” in the US. The concern here is that AI will make the technique much, much cheaper and easier. Interesting but not really surprising. Should we be concerned? Foley thinks not. He notes the few studies on subliminal advertising suggest it is not very effective. Will companies, and even some governments, try it anyway? Probably.

Cynthia Murrell, November 3, 2023

Knowledge Workers, AI Software Is Cheaper and Does Not Take Vacations. Worried Yet?

November 2, 2023

This essay is the work of a dumb humanoid. No smart software required.

I believe the 21st century is the era of good-enough or close-enough-for-horseshoes products and services. Excellence is a surprise, not a goal. At a talk I gave at CeBIT years ago, I explained that certain information-centric technologies had reached the “let’s give up” stage of development. Fresh in my mind were the lessons I learned writing a compendium of information access systems published as “The Enterprise Search Report” by a company lost to me in the mists of time.


“I just learned that our department will be replaced by smart software,” says the MBA from Harvard. The female MBA from Stanford emits a scream just like the one she let loose after scuffing her new Manuel Blahnik (Rodríguez) shoes. Thanks, MidJourney, you delivered an image with a bit of perspective. Good enough work.

I identified the flaws in implementations of knowledge management, information governance, and enterprise search products. The “good enough” comment was made to me during the Q-and-A session. The younger person pointed out that systems for finding information — regardless of the words I used to describe what most knowledge workers did — were “good enough.” I recall the simile the intense young person offered as I was leaving the lecture hall. Vivid now, years later, is the comment that improving information access was like making catalytic converters deliver zero emissions. Thus, information access can’t get where it should be. The technology is good enough.

I wonder if that person has read “AI Anxiety As Computers Get Super Smart.” Probably not. I believe that young person knew more than I did. As a dinobaby, I just smiled and listened. I am a smart dinobaby in some situations. I noted this passage in the cited article:

Generative AI, however, can take aim at white-collar jobs such as lawyers, doctors, teachers, journalists, and even computer programmers. A report from the McKinsey consulting firm estimates that by the end of this decade, as much as 30 percent of the hours worked in the United States could be automated in a trend accelerated by generative AI.

Executive orders and government proclamations are unlikely to have much effect on some people. The write up points out:

Generative AI makes it easier for scammers to create convincing phishing emails, perhaps even learning enough about targets to personalize approaches. Technology lets them copy a face or a voice, and thus trick people into falling for deceptions such as claims a loved one is in danger, for example.

What’s the fix? One that is good enough probably won’t have much effect.

Stephen E Arnold, November 2, 2023


Microsoft at Davos: Is Your Hair on Fire, Google?

November 2, 2023

This essay is the work of a dumb humanoid. No smart software required.

At the January 2023 Davos meeting, Microsoft said AI is the next big thing. The result? Google shifted into Code Red and delivered a wild and crazy demonstration of a deeply flawed AI system in February 2023. I think the phrase “Code Red” became associated with the state of panic within the comfy confines of Googzilla’s executive suites, real and virtual.

Sam AI-man made appearances, speaking to anyone who would listen words like “billion dollar investment,” “efficiency,” and “work processes.” The result? Googzilla itself found out that whether Microsoft’s brilliant marketing of AI worked or not, the Softies had just demonstrated that it — not the Google — was a “leader.” The new Microsoft could create revenue and credibility problems for the Versailles of technology companies.

Therefore, the Google tried to be nimble and make the myth of engineering prowess into reality, not a CGI version of Camelot. The PR Camelot featured Google as the Big Dog in the AI world. After all, Google had done the protein thing, an achievement which made absolutely no sense to 99 percent of the earth’s population. Some asked, “What the heck is a protein folder?” I want a Google Waze service that shows me where traffic cameras are.

The Google executives apparently went to meetings with their hair on fire.


A group of Google executives in a meeting with their hair on fire after Microsoft’s Davos AI announcement. Google wanted teams to manifest AI prowess everywhere, lickity split. Google reorganized. Google probed Anthropic and one Googler invested in the company. Dr. Prabhakar Raghavan demonstrated peculiar communication skills.

I had these thoughts after I read “Google Didn’t Rush Bard Chatbot to Beat Microsoft, Executive Says.” So what was this Code Red thing? Why has Google — the quantum supremacy claimant and global leader in online advertising and protein folding — been lagging behind Microsoft? What is it now? Oh, yeah. Almost a year, a reorganization of the Google’s smart software group, and one of Google’s own employees explaining that AI could have a negative impact on the world. Oh, yeah, that guy is one of the founders of Google’s DeepMind AI group. I won’t mention the Googler who thought his chatbot was alive and ended up with an opportunity to find his future elsewhere. Right. Code Red. I want to note Timnit Gebru and the stochastic parrot, the Jeff Dean lateral arabesque, and the significant investment in a competitor’s AI technology. Right. Standard operating procedure for an online advertising company with a fairly healthy self concept about its excellence and droit du seigneur.

The Bloomberg article reports what I am assuming is “real,” actual factual information:

A senior Google executive disputed suggestions that the company rushed to release its artificial intelligence-based chatbot Bard earlier this year to beat a similar offering from rival Microsoft Corp. Testifying in Google’s defense at the Justice Department’s antitrust trial against the search giant, Elizabeth Reid, a vice president of search, acknowledged that Bard gave “a wrong answer” during its public unveiling in February. But she rejected the contention by government lawyer David Dahlquist that Bard was “rushed” out after Microsoft announced it was integrating generative AI into its own Bing search engine.

The real news story pointed out:

Google’s public demonstration of Bard underwhelmed investors. In one instance, Bard was asked about new discoveries from the James Webb Space Telescope. The chatbot incorrectly stated the telescope was used to take the first pictures of a planet outside the Earth’s solar system. While the Webb telescope was the first to photograph one particular planet outside the Earth’s solar system, NASA first photographed a so-called exoplanet in 2004. The mistake led to a sharp fall in Alphabet’s stock. “It’s a very subtle language difference,” Reid said in explaining the error in her testimony Wednesday. “The amount of effort to ensure that a paragraph is correct is quite a lot of work.” “The challenges of fact-checking are hard,” she added.

Yes, facts are hard in Hallucinationville. I think the concept I take away from this statement is that PR is easier than making technology work. But today Google and similar firms are caught in what I call a “close enough for horseshoes” mind set. Smart software, in my experience, is like my dear, departed mother’s not-quite-done pineapple upside down cakes. Yikes, those were a mess. I could eat the maraschino cherries but nothing else. The rest was deposited in the trash bin.

And where are the “experts” in smart search? Prabhakar? Danny? I wonder if they are embarrassed by their loss of their thick lustrous hair. I think some of it may have been singed after the outstanding Paris demonstration and subsequent Mountain View baloney festivals. Was Google behaving like a child frantically searching for his mom at the AI carnival? I suppose when one is swathed in entitlements, cashing huge paychecks, and obfuscating exactly how the money is extracted from advertisers, reality is distorted.

Net net: Microsoft at Davos caused Google’s February 2023 Paris presentation. That mad scramble has caused me to conclude that talking about AI is a heck of a lot easier than delivering reliable, functional, and thought-out products. Is it possible to deliver such products when one’s hair is on fire? Some data say, “Nope.”

Stephen E Arnold, November 2, 2023
