Boxing Day Cheat Sheet for AI Marketing: Happy New Year!

December 27, 2024

Other than automation and taking the creative talent out of the entertainment industry, where is AI headed in 2025? The lowdown for the upcoming year can be found on the Techknowledgeon AI blog and its post: “The Rise Of Artificial Intelligence: Know The Answers That Makes You Sensible About AI.”

The article acts as a primer on what AI is, its advantages, and the important questions swirling around the technology. The questions that grab our attention are “Will AI take over humans one day?” and “Is AI an Existential Threat to Humanity?” Here’s the answer to the first question:

“The idea of AI taking over humanity has been a recurring theme in science fiction and a topic of genuine concern among some experts. While AI is advancing at an incredible pace, its potential to surpass or dominate human capabilities is still a subject of intense debate. Let’s explore this question in detail.

AI, despite its impressive capabilities, has significant limitations:

  • Lack of General Intelligence: Most AI today is classified as narrow AI, meaning it excels at specific tasks but lacks the broader reasoning abilities of human intelligence.
  • Dependency on Humans: AI systems require extensive human oversight for design, training, and maintenance.
  • Absence of Creativity and Emotion: While AI can simulate creativity, it doesn’t possess intrinsic emotions, intuition, or consciousness.”

And here is the answer to the second:

“Instead of "taking over," AI is more likely to serve as an augmentation tool:

  • Workforce Support: AI-powered systems are designed to complement human skills, automating repetitive tasks and freeing up time for creative and strategic thinking.
  • Health Monitoring: AI assists doctors but doesn’t replace the human judgment necessary for patient care.
  • Smart Assistants: Tools like Alexa or Google Assistant enhance convenience but operate under strict limitations.”

So AI has a long way to go before it replaces humanity, and the singularity, the moment machines surpass human intelligence, is either a long way off or may never happen.

This dossier offers useful background on where AI is going and will help anyone interested in learning what AI algorithms are projected to do in 2025.

Whitney Grace, December 27, 2024

2025 Consulting Jive

December 26, 2024

Here you go. I have extracted a list of the jargon one needs to write reports, give talks, and mesmerize those with a desire to be the smartest people in the room:

  • Agentic AI
  • AI governance platforms
  • Ambient invisible intelligence
  • Augmented human capability
  • Autonomous businesses
  • BBMIs (Brain-Body Machine Interfaces)
  • Brand reputation
  • Business benefits
  • Contextual awareness
  • Continuous adaptive trust model
  • Cryptography
  • Data privacy
  • Disinformation security
  • Energy-efficient computing
  • Guardrails
  • Hybrid computing
  • Human-machine synergy
  • Identity validation
  • Immersive experiences
  • Model lifecycle management
  • Multilayered adaptive learning
  • Neurological enhancement
  • Polyfunctional robots
  • Post-quantum cryptography (PQC)
  • Provenance
  • Quantum computing (QC)
  • Real-time personalization
  • Risk scoring
  • Spatial computing
  • Sustainability
  • Transparency
  • UBMIs (User-Brain Machine Interfaces)

Did this spark your enthusiasm for modern jingo jango? Hats off to the Gartner Group. Wow! Great. Is the list complete? Of course not. I left out bullsh*t.

Stephen E Arnold, December 26, 2024

Modern Management Revealed and It Is Jaundiced with a Sickly Yellowish Cast

December 26, 2024

This blog post is the work of an authentic dinobaby. No smart software was used.

I was zipping through the YCombinator list of “important” items and spotted this one: “Time for a Code-Yellow?: A Blunt Instrument That Works.” I associated Code Yellow with the Google knee-jerk reaction in early 2023 when Microsoft rolled out its smart software deal with OpenAI. Immediately Google was on the back foot. Word filtered across the blogs and “real” news sources that the world’s biggest online ad outfit and most easily sued company was reeling. The company declared a “Code Yellow,” a “Code Red,” and probably a Code 300 Terahertz to really goose the Googlers.


Grok does a code yellow. Good enough.

I found the reaction, the fumbling, and the management imperative as wonky as McKinsey getting ensnared in its logical opioid consulting work. What will those MBAs come up with next?

The “Time for a Code Yellow” post is interesting. Read it. I want to focus on a handful of supplemental observations which appeared in the comments on the article. These, I believe, make clear the “problem” behind many societal ills, including the egregious actions of big companies, some government agencies, and those do-good non-governmental organizations.

Here we go; the italics are my observations on the individual insights:

Tubojet1321 says: “If everything is an emergency, nothing is an emergency.” Excellent observation.

nine_zeros says: “Eventually everyone learns inaction.” Yep, meetings are more important than doing. The fix is to have another meeting.

magical hippo says: “My dad used to flippantly say he had three piles of papers on his desk: ‘urgent’, ‘very urgent’ and ‘no longer urgent’.” The modern organization creates bureaucratic friction at a much faster pace.

x0x0 says: “I’m utter sh*t at management, [I] refuse to prioritize until it’s a company-threatening crisis, and I’m happy to make my team suffer for my incompetence.” Outstanding self-critique.

Lammy says: “The etymology is not green/yellow/red. It’s just not-Yellow or yes-Yellow. See Steven Levy’s In The Plex (2011), page 186: ‘A Code Yellow is named after a tank top of that color owned by engineering director Wayne Rosing. During Code Yellow a leader is given the shirt and can tap anyone at Google and force him or her to drop a current project to help out. Often, the Code Yellow leader escalates the emergency into a war room situation and pulls people out of their offices and into a conference room for a more extended struggle.’” Really? I thought the popularization of “yellow” as a caution or warning became a shared understanding in the US with the advent of trains, long before T-shirts and Google. Note: Train professionals used a signaling system before Messrs. Brin and Page “discovered” Jon Kleinberg’s CLEVER patent.

lizzas says: “24/7 oncall to … be yanked onto something the boss fancies. No thanks. What about… planning?” Planning. Let’s call a meeting, talk about a plan, then have a meeting to discuss options, and finally have a meeting to do planning. Sounds like a plan.

I have a headache from the flashing yellow lights. Amazing about Google’s originality, isn’t it? Oh, over the holiday downtime, check out Dr. Jon Kleinberg and what he was doing at IBM’s Almaden Research Laboratory in US Patent 6,112,202, filed in 1997. Are those yellow lights still flashing?

Stephen E Arnold, December 26, 2024

MUT Bites: Security Perimeters May Not Work Very Well

December 26, 2024

This blog post is the work of an authentic dinobaby. No smart software was used.

I spotted a summary of an item in Ars Technica which recycled a report from Checkmarx and Datadog Security Labs. If you want the details, read “Yearlong Supply Chain Attack Targeting Security Pros Steals 390,000 Credentials.” I want to skip what is now a soap opera story repeated again and again: Bad actors compromise a system, security professionals are aghast, and cybersecurity firms license more smart, agentic-enabled systems. Repeat. Repeat. Repeat. That’s how soap operas worked when I was growing up.

Let’s jump to several observations:

  1. Cyber defenses are not working
  2. Cyber security vendors insist their systems are working because numerous threats were blocked. Just believe our log data. See. We protected you … a lot.
  3. Individual cyber security vendors are a cohort which can be compromised, not once in a mad minute of carelessness. No. Compromised for — wait for it — up to a year.

The engineering of software and systems is, one might conclude, rife with vulnerabilities. If the cyber security professionals cannot protect themselves, who can?

Stephen E Arnold, December 26, 2024

Juicing Up RAG: The RAG Bop Bop

December 26, 2024

Can improved information retrieval techniques lead to more relevant data for AI models? One startup is using a pair of existing technologies to attempt just that. MarkTechPost invites us to “Meet CircleMind: An AI Startup that is Transforming Retrieval Augmented Generation with Knowledge Graphs and PageRank.” Writer Shobha Kakkar begins by defining Retrieval Augmented Generation (RAG). For those unfamiliar, it basically combines information retrieval with language generation. Traditionally, these models use either keyword searches or dense vector embeddings. This means a lot of irrelevant and unauthoritative data get raked in with the juicy bits. The write-up explains how this new method refines the process:

“CircleMind’s approach revolves around two key technologies: Knowledge Graphs and the PageRank Algorithm. Knowledge graphs are structured networks of interconnected entities—think people, places, organizations—designed to represent the relationships between various concepts. They help machines not just identify words but understand their connections, thereby elevating how context is both interpreted and applied during the generation of responses. This richer representation of relationships helps CircleMind retrieve data that is more nuanced and contextually accurate. However, understanding relationships is only part of the solution. CircleMind also leverages the PageRank algorithm, a technique developed by Google’s founders in the late 1990s that measures the importance of nodes within a graph based on the quantity and quality of incoming links. Applied to a knowledge graph, PageRank can prioritize nodes that are more authoritative and well-connected. In CircleMind’s context, this ensures that the retrieved information is not only relevant but also carries a measure of authority and trustworthiness. By combining these two techniques, CircleMind enhances both the quality and reliability of the information retrieved, providing more contextually appropriate data for LLMs to generate responses.”
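To make the two-step pipeline concrete, here is a minimal sketch, assuming a toy graph, made-up passages, and networkx’s PageRank. CircleMind’s actual code is not public, so treat every entity and passage here as illustrative:

```python
# Illustrative sketch only: re-rank retrieved passages by the PageRank
# authority of the entities they mention. Graph and passages are made up.
import networkx as nx

# Tiny knowledge graph: an edge A -> B means "A references B".
kg = nx.DiGraph()
kg.add_edges_from([
    ("CircleMind", "RAG"),
    ("CircleMind", "knowledge graph"),
    ("RAG", "LLM"),
    ("RAG", "vector search"),
    ("knowledge graph", "PageRank"),
    ("vector search", "embedding"),
])

# PageRank scores nodes by the quantity and quality of incoming links.
authority = nx.pagerank(kg, alpha=0.85)

# First-pass retrieval results, tagged with the entities they mention
# (a real system would get these tags from entity linking, assumed here).
passages = [
    ("PageRank measures node importance via incoming links.", ["PageRank"]),
    ("Embeddings map text into dense vectors.", ["embedding"]),
    ("RAG pairs retrieval with language generation.", ["RAG", "LLM"]),
]

# Re-rank: passages whose entities carry more graph authority float up.
ranked = sorted(
    passages,
    key=lambda p: sum(authority.get(e, 0.0) for e in p[1]),
    reverse=True,
)
for text, entities in ranked:
    score = sum(authority.get(e, 0.0) for e in entities)
    print(f"{score:.3f}  {text}")
```

The design point, per the write-up, is that graph authority rather than raw similarity breaks ties between equally “relevant” passages.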

CircleMind notes its approach is still in its early stages, and expects it to take some time to iron out all the kinks. Scaling it up will require clearing hurdles of speed and computational costs. Meanwhile, a few early users are getting a taste of the beta version now. Based in San Francisco, the young startup was launched in 2024.

Cynthia Murrell, December 26, 2024

Does Apple Think Google Is Inept?

December 25, 2024

At a pre-holiday get together, I heard Wilson say, “Don’t ever think you’re completely useless. You can always be used as a bad example.”

I read the trust outfit’s write up “Apple Seeks to Defend Google’s Billion Dollar Payments in Search Case.” I found the story cutting two ways.

Apple, a big outfit, believes that it can explain in a compelling way why Google should be paying Apple to make Google search the default search engine on Apple devices. Do you remember the Walt Disney film The Hunchback of Notre Dame? I love an argument with a twisted back story. Apple seems to be saying to Google: “Stupidity is far more dangerous than evil. Evil takes a break from time to time. Stupidity does not.”

The Thomson Reuters article offers:

Apple has asked to participate in Google’s upcoming U.S. antitrust trial over online search, saying it cannot rely on Google to defend revenue-sharing agreements that send the iPhone maker billions of dollars each year for making Google the default search engine on its Safari browser.

Apple wants that $20 billion a year and certainly seems to be sending a signal that Google will screw up the deal with a Googley argument. At the same holiday party, Wilson’s significant other observed, “My people skills are just fine. It’s my tolerance to idiots that needs work.” I wonder if that person was talking about Apple?

Apple may be fearful that Google will lurch into Code Yellow, tell the jury that gluing cheese on pizza is logical, and explain that it is not a monopoly. Apple does not want to be in the court cafeteria and hear, “I heard Google ask the waiter, ‘How do you prepare chicken?’ The waiter replied, ‘Nothing special. The cook just says, “You are going to die.”’”

The Thomson Reuters article offers this:

Apple wants to call witnesses to testify at an April trial. Prosecutors will seek to show Google must take several measures, including selling its Chrome web browser and potentially its Android operating system, to restore competition in online search. “Google can no longer adequately represent Apple’s interests: Google must now defend against a broad effort to break up its business units,” Apple said.

I had a professor from Oklahoma who told our class:

“If Stupidity got us into this mess, then why can’t it get us out?”

Apple and Google arguing in court. Google has a lousy track record in court. Apple is confident it can convince a court that taking Google’s money is okay.

Albert Einstein allegedly observed:

The difference between stupidity and genius is that genius has its limits.

Yep, Apple and Google, quite a pair.

Stephen E Arnold, December 25, 2024

Anthropic Gifts a Feeling of Safety: Insecurity Blooms This Holiday Season

December 25, 2024

Written by a dinobaby, not an over-achieving, unexplainable AI system.

TechCrunch published “Google Is Using Anthropic’s Claude to Improve Its Gemini AI.” The write up reports:

Contractors working to improve Google’s Gemini AI are comparing its answers against outputs produced by Anthropic’s competitor model Claude, according to internal correspondence seen by TechCrunch. Google would not say, when reached by TechCrunch for comment, if it had obtained permission for its use of Claude in testing against Gemini.

Beyond Search notes a Pymnts.com report from February 5, 2023, that Google invested $300 million in Anthropic at that time. Beyond Search recalls a presentation at a law enforcement conference. One comment made by an attendee to me suggested that Google was well aware of Anthropic’s so-called constitutional AI. I am immune to AI and crypto babble, but I did chase down “constitutional AI” because the image the bound phrase sparked in my mind was that of the mess my French bulldog delivers when he has eaten spicy food.


The illustration comes from You.com. Kwanzaa was the magic word. Good enough.

The explanation consumes 34 pages of an ArXiv paper called “Constitutional AI: Harmlessness from AI Feedback.” The paper has more than 48 authors. (Headhunters, please, take note when you need to recruit AI wizards.) I read the paper, and I think — please, note, “think” — the main idea is:

Humans provide some input. Then the Anthropic system figures out how to achieve helpfulness and instruction-following without human feedback. And the “constitution”? Those are the human-created rules necessary to get the smart software rolling along. Presumably Anthropic’s algorithms ride without training wheels forevermore.
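For the curious, here is a bare-bones sketch of what such a critique-and-revise loop might look like. The `llm` function is a placeholder for any chat-completion call, and the two principles are paraphrases of the paper’s flavor, not Anthropic’s actual constitution:

```python
# A minimal sketch of a constitutional critique-and-revise loop, assuming
# a generic chat-completion function. Principles are paraphrased examples.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could assist with dangerous activities.",
]

def llm(prompt: str) -> str:
    """Placeholder: wire up a real chat-completion call here."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> str:
    draft = llm(user_prompt)
    for principle in CONSTITUTION:
        # The model critiques its own draft against a written rule...
        critique = llm(
            f"Critique this response against the rule: {principle}\n\n{draft}"
        )
        # ...then revises the draft. No human labeler sits in the loop;
        # the written rules are the only human input, hence "constitution."
        draft = llm(
            f"Revise the response to address this critique.\n\n"
            f"Critique: {critique}\n\nResponse: {draft}"
        )
    return draft
```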

The CAI acronym has not caught on like the snappier RAG or “retrieval augmented generation” or the most spectacular jargon, “synthetic data.” But obviously Google understands and values the approach, to the tune of hundreds of millions of dollars, staff time, and the attention of big Googler thinkers like Jeff Dean (who once was the Big Dog of AI but has given way to the alpha dog at DeepMind).

The swizzle for this “testing” or whatever the Googlers are doing is “safety.” I know that when I ask for an image like “a high school teacher at the greenboard talking to students who are immersed in their mobile phones”, I am informed that the image is not safe. I assume Anthropic will make such crazy prohibitions slightly less incomprehensible. Well, maybe, maybe not.

Several observations are warranted:

  1. Google’s investment in Anthropic took place shortly after the Microsoft AI marketing coup in 2023. Perhaps someone knew that Google’s “we invented it” transformer technology was becoming a bit of a problem.
  2. Despite the Google “we are the bestest” in AI technology, the company continues to feel the need to prove that it is the bestest. That’s good. Self-knowledge and defeating “not invented here” malaise are positives.
  3. DeepMind itself — although identified as the go-to place for the most bestest AI technology — may not be perceived as the outfit with the Holy Grail, the secret to eternal life, and the owner of most of the land on which the Seven Cities of Cibola are erected.

Net net: Lots of authors, Google testing itself, and a bit of Google’s inferiority complex. Quite a Kwanzaa gift.

Stephen E Arnold, December 25, 2024

FReE tHoSe smaRT SoFtWarEs!

December 25, 2024

No smart software involved. Just a dinobaby’s work.

Do you have the list of stop words you use in your NLP prompts? (If not, click here.) You are not happy when words on the list like “b*mb,” “terr*r funding,” and others do not return exactly what you are seeking? If you say, “Yes”, you will want to read “BEST-OF-N JAILBREAKING” by a Frisbee team complement of wizards; namely, John Hughes, Sara Price, Aengus Lynch, Rylan Schaeffer, Fazl Barez, Sanmi Koyejo, Henry Sleight, Erik Jones, Ethan Perez, and Mrinank Sharma. The people doing the heavy lifting were John Hughes (a consultant who does work for Speechmatics and Anthropic) and Mrinank Sharma (an Anthropic engineer involved in — wait for it — adversarial robustness).

The main point is that Anthropic-linked wizards have figured out how to knock down the guard rails for smart software. And those stop words? Just whip up a snappy prompt, mix up the capital and lower case letters, and keep sending the query to the smart software. At some point, those capitalization and other fixes will cause the LLM to go your way. Want to whip up a surprise in your bathtub? LLMs will definitely help you out.
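To see why naive filtering crumbles under this trick, consider a toy sketch: random capitalization of the same prompt, resampled until a candidate slips past a deliberately weak keyword filter. The filter below is a strawman of my own devising; the paper attacks real model guardrails with a richer mix of augmentations:

```python
# Toy illustration of best-of-N resampling against a naive keyword filter.
# The filter is a strawman stand-in, not a real guardrail.
import random

def augment(prompt: str, rng: random.Random) -> str:
    """Randomly flip the case of each letter, one of the paper's augmentations."""
    return "".join(
        c.upper() if rng.random() < 0.5 else c.lower() for c in prompt
    )

def best_of_n(prompt: str, is_blocked, n: int = 100, seed: int = 0):
    """Resample augmented prompts until one is not blocked (or give up)."""
    rng = random.Random(seed)
    for attempt in range(1, n + 1):
        candidate = augment(prompt, rng)
        if not is_blocked(candidate):
            return attempt, candidate
    return None

# A lowercase-only keyword match "blocks" the query; case flips defeat it.
blocked = lambda text: "stop word" in text
print(best_of_n("my query contains a stop word", blocked))
```

The “attack composition” idea is the same dynamic at model scale: each augmentation is weak on its own, but sampling enough of them eventually finds one the guardrail mishandles.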

The paper has nifty charts and lots of academic hoo-hah. The key insight is what the many, many authors call “attack composition.” You will be able to get the how-to by reading the 73-page paper, probably a result of each author writing 10 pages in the hopes of landing an even higher paying, in-demand gig.

Several observations:

  1. The idea that guard rails work is now called into question
  2. The disclosure of the method means that smart software will do whatever a clever bad actor wants
  3. The rush to AI is about market lock up, not the social benefit of the technology.

The new year will be interesting. The paper’s information is quite the holiday gift.

Stephen E Arnold, December 25, 2024

McKinsey Takes One for the Team

December 25, 2024

Hopping Dino_thumb_thumb_thumb_thumb_thumb_thumb_thumb_thumb_thumbThis blog post is the work of an authentic dinobaby. No smart software was used.

I read the “real” news in “McKinsey & Company to Pay $650 Million for Role in Opioid Crisis.” The write up asserts:

The global consulting firm McKinsey and Company Friday [December 13, 2024] agreed to pay $650 million to settle a federal probe into its role in helping “turbocharge” sales of the highly addictive opioid painkiller OxyContin for Purdue Pharma…

If I were still working at a big time blue chip consulting firm, I would suggest to the NPR outfit that its researchers should have:

  1. Estimated the fees billed for opioid-related consulting projects
  2. Pulled together the estimated number of deaths from illegal / quasi-legal opioid overdoses
  3. Calculated the revenue per death
  4. Calculated the cost per death
  5. Presented the delta between the two totals.
  6. Presented the aggregate revenue generated for McKinsey’s clients from opioid sales
  7. Estimated the amount spent to “educate” physicians about the merits of synthetic opioids.

Interviewing a couple of parents or surviving spouses from Indiana, Kentucky, or West Virginia would have added some local color. But assembling these data cannot be done with a TikTok query. Hence, the write up as it was presented.

Isn’t that efficiency of MBA think outstanding? I did like the Friday the 13th timing. A red ink Friday? Nope. The fine doesn’t do the job for big time Blue Chip consulting firms. Just like EU fines don’t deter the Big Tech outfits. Perhaps something with real consequences is needed? Who am I kidding?

Stephen E Arnold, December 25, 2024

The Future: State Control of Social Media Access, Some Hope

December 25, 2024

It’s great that parents are concerned for their children’s welfare, especially when there are clear and documented dangers. The Internet has been in concerned parents’ crosshairs since its proliferation. Back in the AOL days it was easier to monitor kids’ access: you simply didn’t allow them to log on, and you reviewed their browser history. However, with the advent of mobile devices and the necessity of the Internet for everyday living, parents are baffled about how to control their children, and so is the Australian government. In an extreme case, the Australian government proposed a bill to ban kids under the age of sixteen from using social media. The Senior relates how the government is winning the battle: “Parents To Lose Final Say In Social Media Ban For Kids.”

The proposed bill comes from Prime Minister Anthony Albanese’s administration, and it plans to ban all kids under the age of sixteen from any and all social media platforms. Parents are taken out of the equation entirely: they will not be allowed to consent, and many see that as a violation of their civil and parental rights.

The bill hasn’t been drafted yet and probably won’t be in 2024. The first legislation is expected in 2025, and it will slowly work its way through the Australian parliament. The blanket ban would also not require age verification:

“Asked if parents would be allowed to consent to their children being on social media at a younger age, Communications Minister Michelle Rowland told Labor’s party room meeting “no”. She said people using social media would not have to upload proof of identity directly to those platforms, when minimum age requirements kick in. ‘The opposition is the only party arguing that people should upload 100 points of ID and give it to TikTok,’ she told the meeting. The government wants 12 months of consultation to figure out exactly how the ban will be enforced.”

Australia doesn’t have faith in parents’ efforts to regulate their kids on social media, so the government is acting in the kids’ best interests. It does sound like the government is overstepping, but social media experts and mental health professionals have documented the potential and real harm of social media on kids. Many parents also don’t monitor and discipline their children’s Internet usage habits. Is this an overstep by the government? No, just a first step.

Whitney Grace, December 25, 2024
