Another Google AI PR Push from a British Googler
November 27, 2024
This write up is the work of a humanoid who admits he is a dinobaby; that is, deadwood too old to employ. By the way, the “dinobaby” lingo allegedly emerged from IBM during its housecleaning event years ago. The art, however, is from MidJourney and definitely AI fakery.
With the US Department of Justice suggesting a haircut for the Google, the company is ramping up its AI PR. As you may recall, a Googler suggested that Google should not be constrained because Google has to be Google to do Google AI. With AI a wonderful benefit to customer service cost reductions and delivering advertising to those who use Google search, Google wants to get the word out.
The “art” was output by OpenAI, and I am not sure if it is quantumly supreme. The reason, “OpenAI is not Google.”
Examples include:
- “Demis Hassabis, Nobel Prize winner in Chemistry: We Will Need a Handful of Breakthroughs Before We Reach Artificial General Intelligence” in El Pais
- Fast Company’s “The Future According to Google DeepMind CEO Demis Hassabis”
- “Google DeepMind AI Can Expertly Fix Errors in Quantum Computers” in New Scientist
The articles share several themes:
- Google’s AI is great and the company is working hard to make it greater
- Google’s research is pretty darn close to making AI smarter
- Google is doing good and wants to do more to make life even gooder.
From the razzmatazz world of quantum computers to the practical applications for Bill and Betty Average, Google is the driving force for smart software.
It has the transformer expertise. It has a Nobel prize winner. It has a building in London’s Knowledge Quarter.
What the write ups do not talk about is the suggestion that the Google needs a haircut; specifically, its Chrome browser has to be chopped off. The PR push has another goal in my opinion. Google must be seen as a prime mover in the technology that everyone absolutely must have: Googley AI.
With investors wondering if the money pumped into smart software will pay off, Google is doing what it can from what some might call its monopoly position to advance the agenda of Google’s technology. Microsoft, Amazon, and some Chinese outfits are spending billions to make sure they are part of the next big thing. Meta is chugging along with its open source approach. Apple is letting its AI fruit ripen which takes time.
Copyright hassles, electric power demands, and the alleged diminishing returns from high flier OpenAI mean that someone has to stand up and say, “AI is wonderful. Google is more wonderful.”
What’s interesting is that in each of the cited stories, notes of skepticism are evident; to wit:
El Pais says, “The CEO of Google DeepMind cools expectations about the progress of AGI…” Okay. Not exactly a rah rah statement.
Fast Company says, “That Google has had to apologize for glitches discovered by users underlines the urgency with which it’s been shipping features in the post-ChatGPT age.” Translation: Ooops.
New Scientist’s story comes down to an improvement of roughly six percent in error correction. Okay. Six percent. One method produces 100 error fixes; Google can fix 106 errors. Progress? Yep. Revolutionary? To some, sure. To others, not so much.
Each of these AI PR waves is little more than marketing. What’s interesting is that Google may be able to prevent significant changes to its operations if it can make Google the pivot point for the next big thing. I wonder if those involved in prosecuting the different cases about Google’s business behavior are convinced.
That chatter about selling Google’s browser is the background radiation against which these PR emissions are output. Will they be heard?
Stephen E Arnold, November 27, 2024
A New Frankie Bursts on the Music Scene
November 27, 2024
So here is a minor but unfortunate thing that just happened to our culture: As the BBC reports, “Zuckerberg Records ‘Romantic’ Cover of Explicit Rap Hit.” Let me advise you, dear reader, to avoid hearing even a portion of this track if you possibly can. That goes double if you are a fan of the original. I wish I could unhear it. Writer Paul Glynn tells us:
“Facebook founder Mark Zuckerberg has recorded his own version of rap track Get Low alongside US star T-Pain, in tribute to his wife Priscilla Chan for their ‘dating anniversary.’ Zuckerberg sings with the help of Auto-Tune on an acoustic guitar reworking of the filthy floor-filler, which was originally a hit for Lil Jon and the East Side Boyz in 2003. ‘Get Low was playing when I first met Priscilla at a college party, so every year we listen to it on our dating anniversary,’ the Meta boss explained on his own platform Instagram.”
How sweet. Would that Zuckerberg (or “Z-Pain,” as he has styled himself for this stunt) had left it at that. His “lyrical” treatment renders the raunchy lines surreal, and not in a good way. Fortunately for him, his wife welcomed the gesture as “so romantic.” But why subject the rest of us to this acoustic, Auto-Tuned abomination? Shouting one’s love from the rooftops is one thing. This is quite another.
For the morbidly curious, here is a link to Zuckerberg’s (not safe for work) creation. Don’t say we didn’t warn you. For comparison and/or a palate cleanser, here is the original (even less safe for work) Lil Jon & The East Side Boyz video on YouTube. What a time to be alive! A new Frank Sinatra is upon us.
Cynthia Murrell, November 27, 2024
Marketing Jobs Require More Than AI Know-How
November 26, 2024
Marketing remains a lucrative industry, but it has become even more complex with the advent of AI. McKinsey suggests marketers can find growth through portfolio management and by enhancing capabilities to improve performance. What does that mean? Christine Y. Chen explains in her article: “Connecting For Growth: A Makeover For Your Marketing Operating Model.”
Every year marketers must meet higher expectations to deliver strong brands and growth while maximizing effectiveness and efficiency. It sounds like a buzzword salad, but it is actually a big order: do better and do more with less time and fewer resources. Amid these mounting demands, AI is deployed in there somewhere and can be a useful tool:
“Marketing leaders are expected to apply new energy to identifying growth opportunities, bring their companies’ missions to life, build immersive and connected brand experiences, link purpose to business outcomes, capitalize on new technologies, and more. At the same time, CMOs are under increasing pressure to provide results and serve as responsible stewards of marketing resources to achieve growth agendas. They’re growth leaders whose remit continues to expand, with CMOs taking on more functional areas traditionally seen as outside the purview of marketing. Such areas include generative AI (gen AI), innovation, and sales and e-commerce.”
Marketers are rising to the challenge of their expanding roles, but they are finding that technology doesn’t meet their needs. Today’s marketers want to propel growth by organizing and connecting their teams, mobilizing beyond reporting lines, and concentrating on specific strategies to scale up. They are achieving these goals by establishing ground rules that provide incentives for agility, overinvesting in important matters, and connecting expertise to growth drivers. They believe AI is a usable tool but that it doesn’t meet all their needs:
“A 2023 McKinsey report on the economic potential of gen AI found that gen AI productivity in marketing could be worth about $463 billion annually (“How generative AI can boost consumer marketing,” McKinsey, December 5, 2023). CMOs say they embrace this promise, with 74 percent of our survey respondents viewing gen AI as more of an opportunity than a risk. However, a gap remains between the importance that marketing leaders place on gen AI and how far they feel their organizations have progressed in building relevant capabilities for it. Only 9 percent state that they have evaluated gen-AI-enabled automation opportunities, just 5 percent are building gen AI capabilities, and a mere 4 percent are scaling up gen AI use cases.”
High-skilled jobs still require an immense amount of education, people skills, business ingenuity, and knowledge about how AI can be used to scale up an operation. Humans aren’t obsolete yet, people!
Whitney Grace, November 26, 2024
Marketers, Deep Fakes Work
November 14, 2024
Bad actors use AI for pranks all the time, but this could be the first time AI pranked an entire Irish town of its own volition. KXAN reports on the folly: “AI Slop Site Sends Thousands In Ireland To Fake Halloween Parade.” The website MySpiritHalloween.com dubs itself the ultimate resource for all things Halloween. The site runs on AI-generated content, and one of its articles told the entire city of Dublin that there would be a parade.
If this were a small Irish village, there would be giggles, and the police would investigate criminal mischief before deciding to stop wasting their time. Dublin, however, is one of the country’s biggest cities, and folks showed up in the thousands to see the Halloween parade. They eventually figured out something was wrong when there were no barriers, no law enforcement, and (most importantly) no costumed people on floats!
MySpiritHalloween.com is owned by Naszir Ali, and he was embarrassed by the situation. The article explains:
“Per Ali’s explanation, his SEO agency creates websites and ranks them on Google. He says the company hired content writers who were in charge of adding and removing events all across the globe as they learned whether or not they were happening. He said the Dublin event went unreported as fake and that the website quickly corrected the listing to show it had been cancelled….
“Ali said that his website was built and helped along by the use of AI but that the technology only accounts for 10-20% of the website’s content. He added that, according to him, AI content won’t completely help a website get ranked on Google’s first page and that the reason so many people saw MySpiritHalloween.com was because it was ranked on Google’s first page — due to what he calls ‘80% involvement’ from actual humans.”
Ali claims his website is based in Illinois, but investigations found that it is hosted in Pakistan. Ali’s website is one among millions that use AI-generated content to manipulate Google’s algorithm. Ali is correct that real humans did make the parade rise to the top of Google’s search results, but he was responsible for the content.
Media around the globe took this as an opportunity to teach parade-goers and others to be on guard against AI-generated scams. Marketers, launch your AI-infused fakery.
Whitney Grace, November 14, 2024
Microsoft Does the Me Too with AI. Si, Wee Wee
November 8, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
True or false: “Windows Intelligence” or Microsoft dorkiness? The article “Windows Intelligence: Microsoft May Drop Copilot in Major AI Rebranding” contains an interesting subtitle. Here it is:
Copying Apple for marketing’s sake, as usual.
Is that a snarky subtitle or not? It follows up the idea of “rebranding” Satya Nadella’s next big thing with an assertion of possibly Copy Cat behavior. I like it.
The article says:
Copilot’s chatbot service, Windows Recall, and other AI-related functionalities could soon become known simply as “Windows Intelligence.” According to a recently surfaced reference included in a template file for the Group Policy Object Editor (AppPrivacy.adml), Windows Intelligence is the umbrella term Microsoft possibly chose to unify the many AI features currently being integrated into the PC operating system.
Whether such an attempt to capture the cleverness of Apple intelligence (AI for short) with the snappy WI is a stroke of genius or a clumsy me too move, time will reveal.
Let’s assume that the Softies take time out from their labors on the security issues bedeviling users and apply their considerable expertise to the task of moving from Copilot to WI. One question which will arise may be, “How does one pronounce WI?”
I personally like “whee,” as in the sound seven-year-olds make when sliding down an old-fashioned playground slide. I can envision some people using the WI sound in the word “whiff.” The sound of the first two letters of “weak” is a possibility as well. And one could emit the sound “wuh,” which rhymes with “Duh.”
My hunch is that this article’s assertion is unsubstantiated beyond a snippet of text in “code.” Nevertheless I enjoyed the write up. My hunch is that the author had some fun putting the article together as well.
Oh, Copilot image generation has been down for more than a weak, oh, sorry, week. That’s very close to the whiff sound for the alleged WI acronym. Oh, well, a whiff by any other name should reveal so much about a high technology giant.
Stephen E Arnold, November 8, 2024
Twenty Five Percent of How Much, Google?
November 6, 2024
The post is the work of a humanoid who happens to be a dinobaby. GenX, Y, and Z, read at your own risk. If art is included, smart software produces these banal images.
I read the encomia to Google’s quarterly report. In a nutshell, everything is coming up roses, even the hyperbole. One news hook which has snagged some “real” news professionals is that “more than a quarter of new code at Google is generated by AI.” The exclamation point is implicit. Google’s AI PR is different from that of some other firms; for example, Samsung blames its financial performance disappointments on some AI. Winners and losers in a game in which some think the oligopolies are automatic winners.
An AI believer sees the future which is arriving “soon, real soon.” Thanks, You.com. Good enough because I don’t have the energy to work around your guard rails.
The question is, “How much code and technical debt does Google have after a quarter century of its court-described monopolistic behavior?” Oh, that number is unknown. How many current Google engineers fool around with that legacy code? Oh, that number is unknown, and probably for very good reasons. The old crowd of wizards has been hit by retirement and by cashing in and cashing out, and “leadership” is nervous about fiddling with processes that are “good enough.” But 25 years. No worries.
The big news is that 25 percent of “new” code is written by smart software and then checked by the current and wizardly professionals. How much “new” code has been written each year for the last three years? What percentage of the total Google code base is “new” in the years between 2021 and 2024? My hunch is that “new” is relative. I also surmise that smart software doing 25 percent of the work is one of those PR and Wall Street targeted assertions specifically designed to make the Google stock go up. And it worked.
However, I noted this Washington Post article: “Meet the Super Users Who Tap AI to Get Ahead at Work.” Buried in that mostly rah rah AI “real” news write up, which ran coincident with Google’s AI-spinning quarterly reports, is one interesting comment:
Adoption of AI at work is still relatively nascent. About 67 percent of workers say they never use AI for their jobs compared to 4 percent who say they use it daily, according to a recent survey by Gallup.
One can interpret this as saying, “Imagine the growth that is coming from reduced costs. Get rid of most coders and just use Google’s and other firms’ smart programming tools.”
Another interpretation is, “The actual use is much less robust than the AI hyperbole machine suggests.”
Which is it?
Several observations:
- Many people want AI to pump some life into the economic fuel tank. By golly, AI is going to be the next big thing. I agree, but I think the Gallup data indicates that the go go view is like looking at a field of corn from a crop duster zipping along at 1,000 feet. The perspective from the airplane is different from the person walking amidst the stalks.
- The Google-type assertion about how much machine-generated code is in the Google mix sounds good, but where are the data? Google, aren’t you data driven? So where’s the backup data for the 25 percent assertion?
- Smart software seems to be something that is expensive and requires dreams of small nuclear reactors next to a data center adjacent to a hospital. Yeah, maybe once the impact statements, the nuclear waste, and the skilled worker issues have been addressed. Soon, as measured in environmental impact statement time, which is different from quarterly report time.
Net net: Google desperately wants to be the winner in smart software. The company is suggesting that if it were broken apart by crazed government officials, smart software would die. Insert the exclamation mark. Maybe two or three. That’s unlikely. The blurring of “as is” with “to be” is interesting and misleading.
Stephen E Arnold, November 6, 2024
Great Moments in Marketing: MSFT Copilot, the Salesforce Take
November 1, 2024
A humanoid wrote this essay. I tried to get MSFT Copilot to work, but it remains dead. That makes four days with weird messages about a glitch. That’s the standard: Good enough.
It’s not often I get a kick out of comments from myth-making billionaires. I read through the interview with the boy wonder turned company founder titled “An Interview with Salesforce CEO Marc Benioff about AI Abundance.” No paywall on this essay, unlike the New York Times’ downer about smart software which appears to have played a part in a teen’s suicide. Imagine when Perplexity can control a person’s computer. What exciting stories will appear. Here’s an example of what may be more common in 2025.
Great moments in Salesforce marketing. A senior Agentforce executive considers great marketing and brand ideas of the past. Inspiration strikes. In 2024, he will make fun of Clippy. Yes, a 1995 reference will resonate with young deciders in 2024. Thanks, Stable Diffusion. You are working; MSFT Copilot is not.
The focus today is a single statement in this interview with the big dog of Salesforce. Here’s the quote:
Well, I guess it wasn’t the AGI that we were expecting because I think that there has been a level of sell, including Microsoft Copilot, this thing is a complete disaster. It’s like, what is this thing on my computer? I don’t even understand why Microsoft is saying that Copilot is their vision of how you’re going to transform your company with AI, and you are going to become more productive. You’re going to augment your employees, you’re going to lower your cost, improve your customer relationships, and fundamentally expand all your KPIs with Copilot. I would say, “No, Copilot is the new Clippy”, I’m even playing with a paperclip right now.
Let’s think about this series of references and assertions.
First, there is the direct statement “Microsoft Copilot, this thing is a complete disaster.” Let’s assume the big dog of Salesforce is right. The large and much loved company — Yes, I am speaking about Microsoft — rolled out a number of implementations, applications, and assertions. The firm caught everyone’s favorite Web search engine with its figurative pants down like a hapless Russian trooper about to be dispatched by a Ukrainian drone equipped with a variant of RTX. (That stuff goes bang.) Microsoft “won” a marketing battle and gained the advantage of time. Google with its Sundar & Prabhakar Comedy Act created an audience. Microsoft seized the opportunity to talk to the audience. The audience applauded. Whether the technology worked, in my opinion was secondary. Microsoft wanted to be seen as the jazzy leader.
Second, the idea of a disaster is interesting. Since Microsoft relied on what may be the world’s weirdest organizational set up and supported the crumbling structure, other companies have created smart software which surfs on Google’s transformer ideas. Microsoft did not create a disaster; it had not done anything of note in the smart software world. Microsoft is a marketer. The technology is a second class citizen. The disaster is that Microsoft’s marketing seems to be out of sync with what the PowerPoint decks say. So what’s new? The answer is, “Nothing.” The problem is that some people don’t see Microsoft’s smart software as a disaster. One example is Palantir, which is Microsoft’s new best friend. The US government cannot rely on Microsoft enough. Those contract renewals keep on rolling. Furthermore the “certified” partners could not be more thrilled. Virtually every customer and prospect wants to do something with AI. When the blind lead the blind, a person with really bad eyesight has an advantage. That’s Microsoft. Like it or not.
Third, the pitch about “transforming your company” is baloney. But it sounds good. It helps a company do something “new” but within the really familiar confines of Microsoft software. In the good old days, it was IBM that provided the cover for doing something, anything, which could produce a marketing opportunity or a way to add a bit of pizzazz to a 1955 Chevrolet two-door 210 sedan. Thus, whether the AI works or does not work, one must not lose sight of the fact that Microsoft-centric outfits are going to go with Microsoft because most professionals need PowerPoint and the bean counters do not understand anything except Excel. What strikes me as important is that Microsoft can use modest, even inept, smart software and come out a winner. Who is complaining? The Fortune 1000, the US Federal government, the legions of MBA students who cannot do a class project without Excel, PowerPoint, and Word?
Finally, the ultimate reference in the quote is Clippy. Personally I think the big dog at Salesforce should have invoked both Bob and Clippy. Regardless of the “joke” hooked to these somewhat flawed concepts, the names “Bob” and “Clippy” have resonance. Bob rolled out in 1995. Clippy helped so many people beginning in the same year. Decades later Microsoft’s really odd software is going to cause a 20 something who was not born to turn away from Microsoft products and services? Nope.
Let’s sum up: Salesforce is working hard to get a marketing lift by making Microsoft look stupid. Believe me. Microsoft does not need any help. Perhaps the big dog should come up with a marketing approach that replicates or comes close to what Microsoft pulled off in 2023. Google still hasn’t recovered fully from that kung fu blow.
The big dog needs to up its marketing game. Say Salesforce and what’s the reaction? Maybe meh.
Stephen E Arnold, November 1, 2024
Online Search: The Old Function Is in Play
October 18, 2024
Just a humanoid processing information related to online services and information access.
We spotted an interesting marketing pitch from Kagi.com, the pay-to-play Web search service. The information is located on the Kagi.com Help page at this link. The approach is what I call “fact-centric marketing.” In the article, you will find facts like these:
In 2022 alone, search advertising spending reached a staggering 185.35 billion U.S. dollars worldwide, and this is forecast to grow by six percent annually until 2028, hitting nearly 261 billion U.S. dollars.
There is a bit of consultant-type analysis which explains the difference between Google’s approach labeled “ad-based search” and the Kagi.com approach called “user-centric search.” I don’t want to get into an argument about these somewhat stark bifurcations in the murky world of information access, search, and retrieval. Let’s just accept the assertion.
I noted more numbers. Here’s a sampling (not statistically valid, of course):
Google generated $76 billion in US ad revenue in 2023. Google had 274 million unique visitors in the US as of February 2023. To estimate the revenue per user, we can divide the 2023 US ad revenue by the 2023 number of users: $76 billion / 274 million = $277 revenue per user in the US or $23 USD per month, on average! That means there is someone, somewhere, a third party and a complete stranger, an advertiser, paying $23 per month for your searches.
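For what it is worth, the arithmetic in that passage checks out. Here is a minimal sketch of the back-of-the-envelope calculation, using only the figures quoted above; the variable names are mine, not Kagi’s:

```python
# Back-of-the-envelope check of the revenue-per-user figures quoted above.
# The two inputs come straight from the Kagi.com passage; nothing here is new data.
us_ad_revenue_2023 = 76_000_000_000   # Google US ad revenue, 2023 (USD)
us_unique_visitors = 274_000_000      # Google US unique visitors, February 2023

revenue_per_user_per_year = us_ad_revenue_2023 / us_unique_visitors
revenue_per_user_per_month = revenue_per_user_per_year / 12

print(f"Per user per year:  ${revenue_per_user_per_year:.0f}")   # ~ $277
print(f"Per user per month: ${revenue_per_user_per_month:.0f}")  # ~ $23
```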
The Kagi.com point is:
Choosing to subscribe to Kagi means that while you are now paying for your search you are getting a fair value for your money, you are getting more relevant results, are able to personalize your experience and take advantage of all the tools and features we built, all while protecting your and your family’s privacy and data.
Why am I highlighting this Kagi.com Help information? Leo Laporte on the October 13, 2024, This Week in Tech program talked about Kagi. He asserted that Kagi uses Bing, Google, and its own search index. I found this interesting. If true, Mr. Laporte is disseminating the idea that Kagi.com is a metasearch engine like Ixquick.com (now StartPage.com). The murkiness about what a Web search engine presents to a user is interesting.
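For readers unfamiliar with the term, “metasearch” simply means fanning a single query out to several underlying indexes and compiling one deduplicated result list. The sketch below is a generic illustration of that idea only; it is not Kagi’s (or StartPage’s) actual architecture, and the stand-in backends, scores, and merge heuristic are assumptions invented for the example:

```python
# Generic metasearch sketch: send one query to several backends, then merge.
# The backends here are hypothetical stand-ins; a real engine would call live search APIs.
from collections import defaultdict

def backend_a(query):
    # pretend index #1: (url, relevance score) pairs
    return [("https://example.com/one", 0.9), ("https://example.com/two", 0.7)]

def backend_b(query):
    # pretend index #2
    return [("https://example.com/two", 0.8), ("https://example.com/three", 0.6)]

def metasearch(query, backends):
    combined = defaultdict(float)
    for backend in backends:
        for url, score in backend(query):
            combined[url] += score  # naive merge: sum scores across indexes, dedupe by URL
    # single list, best combined score first
    return sorted(combined.items(), key=lambda item: item[1], reverse=True)

print(metasearch("web search economics", [backend_a, backend_b]))
```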
A smart person is explaining why paying for search and retrieval is a great idea. It may be, but Google has other ideas. Thanks, You.com. Good enough.
In the last couple of days I received an invitation to join a webinar about a search system called Swirl, which connotes mixing content perhaps? I also received a spam message from a fund called TheStreet explaining that the firm has purchased a block of Elastic B.V. shares. A company called provided an interesting explanation of what struck me as a useful way to present search results.
Everywhere companies are circling back to the idea that one cannot “find” needed information.
With Google facing actual consequences for its business practices, that company is now suggesting this angle: “Hey, you can’t break us up. Innovation in AI will suffer.”
So what is the future? Will vendors get a chance to use the Google search index for free? Will alternative Web search solutions become financial wins? Will metasearch triumph, using multiple indexes and compiling a single list of results? Will new-fangled solutions like Glean dominate enterprise information access and then move into the mainstream? Will visual approaches to information access kick “words” to the curb?
Here are some questions I like to ask those who assert that they are online experts, and I include those in the OSINT specialist clan as well:
- Finding information is an unsolved problem. Can you, for example, easily locate a specific frame from a video your mobile device captured a year ago?
- Can you locate the specific expression in a book about linear algebra germane to the question you have about its application to an AI procedure?
- Are you able to find quickly the telephone number (valid at the time of the query) for a colleague you met three years ago at an international conference?
As 2024 rushes to what is likely to be a tumultuous conclusion, I want to point out that finding information is a very difficult job. Most people tell themselves they can find the information needed to address a specific question or task. In reality, these folks are living in a cloud of unknowing. Smart software has not made keyword search obsolete. For many users, ChatGPT or other smart software is a variant of search. If it is easy to use and looks okay, the output is outstanding.
So what? I am not sure the problem of finding the right information at the right time has been solved. Free or for fee, ad supported or open sourced, dumb string matching or Fancy Dan probabilistic pattern identification — none delivers the on-point, relevant, timely information so many people believe they are getting. Don’t even get me started on the issue of “correct” or “accurate.”
Marketers, stand down. Your assertions, webinars, advertisements, special promotions, jargon, and buzzwords do not deliver findability to users who don’t want to expend effort to move beyond good enough. I know one thing for certain, however: Finding relevant information is now more difficult than it was a year ago. I have a hunch the task is only becoming harder.
Stephen E Arnold, October 18, 2024
Gee, Will the Gartner Group Consultants Require Upskilling?
October 16, 2024
The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.
I have a steady stream of baloney crossing my screen each day. I want to call attention to one of the most remarkable and unsupported statements I have seen in months. The PR document “Gartner Says Generative AI Will Require 80% of Engineering Workforce to Upskill Through 2027” contains a number of remarkable statements. Let’s look at a couple.
How an allegedly big time consultant is received in a secure artificial intelligence laboratory. Thanks, MSFT Copilot, good enough.
How about this one?
Through 2027, generative AI (GenAI) will spawn new roles in software engineering and operations, requiring 80% of the engineering workforce to upskill, according to Gartner, Inc.
My thought is that the virtual band of wizards which comprise Gartner cook up data the way I microwave a burrito when I am hungry. Pick a common number like the 80-20 Pareto figure. It is familiar and just use it. Personally I was disappointed that Gartner did not use 67 percent, but that’s just an old former blue chip consultant pointing out that round numbers are inherently suspicious. But does Gartner care? My hunch is that whoever reviewed the news release was happy with 80 percent. Did anyone question this number? Obviously not: There are zero supporting data, no information about how it was derived, and no hint of the methodology used by the incredible Gartner wizards. That’s a clue that these are microwaved burritos from a bulk purchase discount grocery.
How about this statement which cites a … wait for it … Gartner wizard as the source of the information?
“In the AI-native era, software engineers will adopt an ‘AI-first’ mindset, where they primarily focus on steering AI agents toward the most relevant context and constraints for a given task,” said Walsh. This will make natural-language prompt engineering and retrieval-augmented generation (RAG) skills essential for software engineers.
I love the phrase “AI native,” which apparently dubs the period beginning in January 2023, when Microsoft demonstrated its marketing acumen by announcing the semi-tie-up with OpenAI. The code generation systems help exactly which “engineer”? One has to know quite a bit to craft a query, examine the outputs, and do any touch-ups to get the outputs working as marketed. The notion of “steering” ignores what may be an AI problem no one at Gartner has considered; for example, emergent patterns in the generated code. This means, “Surprise.” My hunch is that the idea of multi-layered neural networks behaving in a way that produces hitherto unnoticed patterns is of little interest to Gartner. That outfit wants to sell consulting work, not noodle about the notion of emergence, which is a biased suite of computations. Steering is good for those who know what’s cooking and have a seat at the table in the kitchen. Is Gartner given access to the oven, the fridge, and the utensils? Nope.
Finally, how about this statement?
According to a Gartner survey conducted in the fourth quarter of 2023 among 300 U.S. and U.K. organizations, 56% of software engineering leaders rated AI/machine learning (ML) engineer as the most in-demand role for 2024, and they rated applying AI/ML to applications as the biggest skills gap.
Okay, this is late 2024 (October to be exact). The study data are a year old. So far the outputs of smart coding systems remain a work in progress. In fact, Dr. Sabine Hossenfelder has a short video which explains why the smart AI programmer in a box may be more disappointing than the hyperbole artists claim. If you want Dr. Hossenfelder’s view, click here. In a nutshell, she explains in a very nice way the giant bologna slice plopped on many diners’ plates. The study Dr. Hossenfelder cites suggests that productivity boosts are another slice of bologna. The 41 percent increase in bugs provides a hint of the problems the good doctor notes.
Net net: I wish the cited article WERE generated by smart software. What makes me nervous is that I think real, live humans cooked up something similar to a boiled shoe. Let me ask a more significant question. Will Gartner experts require upskilling for the new world of smart software? The answer is, “Yes.” Even today’s sketchy AI outputs information that is often more believable than this Gartner 80 percent confection.
Stephen E Arnold, October 16, 2024
The GoldenJackals Are Running Free
October 11, 2024
The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.
Remember the joke about security? An unplugged computer in a locked room. Ho ho ho. “Mind the (Air) Gap: GoldenJackal Gooses Government Guardrails” reports that security is getting more difficult. The write up says:
GoldenJackal used a custom toolset to target air-gapped systems at a South Asian embassy in Belarus since at least August 2019… These toolsets provide GoldenJackal a wide set of capabilities for compromising and persisting in targeted networks. Victimized systems are abused to collect interesting information, process the information, exfiltrate files, and distribute files, configurations and commands to other systems. The ultimate goal of GoldenJackal seems to be stealing confidential information, especially from high-profile machines that might not be connected to the internet.
What’s interesting is that the sporty folks at GoldenJackal can access the equivalent of the unplugged computer in a locked room. Not exactly, of course, but allegedly darned close.
Microsoft Copilot does a great job of presenting an easy to use cyber security system and console. Good work.
The cyber experts revealing this exploit learned of it in 2020. I think that is more than three years ago. I noted the story in October 2024. My initial question was, “What took so long to provide some information which is designed to spark fear and ESET sales?”
The write up does not tackle this question, but it does reveal that the vector of compromise was a USB drive (thumb drive). The write up provides some detail about how the exploit works, including a code snippet and screen shots. One of the interesting points in the write up is that Kaspersky, a recently banned vendor in the US, documented some of the tools a year earlier.
The conclusion of the article is interesting; to wit:
Managing to deploy two separate toolsets for breaching air-gapped networks in only five years shows that GoldenJackal is a sophisticated threat actor aware of network segmentation used by its targets.
Several observations come to mind:
- Repackaging and enhancing existing malware into tool bundles demonstrates the value of blending old and new methods.
- The 60-month time lag suggests that the GoldenJackal crowd is organized and willing to invest time in crafting a headache inducer for government cyber security professionals.
- With the plethora of cyber alert firms monitoring everything from secure “work use only” laptops to useful outputs from a range of devices, systems, and apps, why was only one company sufficiently alert or skilled to explain the droppings of the GoldenJackal?
I learn about new exploits every couple of days. What is now clear to me is that a cyber security firm which discovers something novel does so by accident. This leads me to formulate the hypothesis that most cyber security services are not particularly good at spotting what I would call “repackaged systems and methods.” With a bit of lipstick, bad actors are able to operate for what appears to be significant periods of time without detection.
If this hypothesis is correct, US government memoranda, cyber security white papers, and academic type articles may be little more than puffery. “Puffery,” as we have learned, is no big deal. Perhaps that is what expensive cyber security systems and services are to bad actors: no big deal.
Stephen E Arnold, October 11, 2024