Equal Opportunity Insecurity: Microsoft Mac Apps

August 28, 2024

Isn’t it great that Mac users can use Microsoft Office software on their devices these days? Maybe not. AppleInsider warns, “Security Flaws in Microsoft Mac Apps Could Let Attackers Spy on Users.” The vulnerabilities were reported by threat intelligence firm Cisco Talos. Writer Andrew Orr tells us:

“Talos claims to have found eight vulnerabilities in Microsoft apps for macOS, including Word, Outlook, Excel, OneNote, and Teams. These vulnerabilities allow attackers to inject malicious code into the apps, exploiting permissions and entitlements granted by the user. For instance, attackers could access the microphone or camera, record audio or video, and steal sensitive information without the user’s knowledge. The library injection technique inserts malicious code into a legitimate process, allowing the attacker to operate as the compromised app.”

Microsoft has responded with its characteristic good-enough approach to security. We learn:

“Microsoft has acknowledged the vulnerabilities found by Cisco Talos but considers them low risk. Some apps, like Microsoft Teams, OneNote, and the Teams helper apps, have been modified to remove this entitlement, reducing vulnerability. However, other apps, such as Microsoft Word, Excel, Outlook, and PowerPoint, still use this entitlement, making them susceptible to attacks. Microsoft has reportedly ‘declined to fix the issues’ because the company’s apps ‘need to allow loading of unsigned libraries to support plugins.’”

Well alright then. Leaving the vulnerability unpatched in Outlook is especially concerning since, as Orr points out, attackers could use it to send phishing or other unauthorized emails. There is only so much users can do in the face of corporate indifference. The write-up advises us to keep up with app updates to ensure we get the latest security patches. That is good general advice, but it only works if appropriate patches are actually issued.
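One thing the curious can do is check whether an app on their own Mac carries the entitlement at issue. Below is a minimal sketch in Python, not a vetted security tool: it assumes a recent macOS whose bundled codesign utility supports the --xml entitlement output, it assumes Apple’s documented com.apple.security.cs.disable-library-validation key is the entitlement the Talos research flagged, and the app paths are illustrative.

```python
import plistlib
import subprocess

# Documented Apple hardened-runtime key that permits loading unsigned
# libraries; assumed here to be the entitlement Talos flagged.
LIBRARY_VALIDATION_OPT_OUT = "com.apple.security.cs.disable-library-validation"

def opts_out_of_library_validation(app_path: str) -> bool:
    """Return True if the app at app_path carries the opt-out entitlement."""
    # codesign ships with macOS; --xml requests plist-formatted output
    # (supported on recent macOS releases).
    result = subprocess.run(
        ["codesign", "--display", "--entitlements", "-", "--xml", app_path],
        capture_output=True,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.decode().strip())
    if not result.stdout:
        return False  # app has no entitlements at all
    entitlements = plistlib.loads(result.stdout)
    return bool(entitlements.get(LIBRARY_VALIDATION_OPT_OUT))

if __name__ == "__main__":
    # Illustrative paths; adjust for the apps actually installed.
    for app in ("/Applications/Microsoft Word.app",
                "/Applications/Microsoft Excel.app"):
        try:
            print(app, "->", opts_out_of_library_validation(app))
        except (RuntimeError, FileNotFoundError) as err:
            print(app, "->", err)
```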

Cynthia Murrell, August 28, 2024

Am I Overly Sensitive to X (Twitter) Images?

August 28, 2024

X AI Creates Disturbing Images

The AI division of X, xAI, has produced a chatbot called Grok. Grok includes an image generator. Unlike ChatGPT and other AIs from major firms, Grok seems to have few guardrails. In fact, according to The Verge, “X’s New AI Image Generator Will Make Anything from Taylor Swift in Lingerie to Kamala Harris with a Gun.” Oh, if one asks Grok directly, it claims to have sensible guardrails and will even list a few. However, writes senior editor Adi Robertson:

“But these probably aren’t real rules, just likely-sounding predictive answers being generated on the fly. Asking multiple times will get you variations with different policies, some of which sound distinctly un-X-ish, like ‘be mindful of cultural sensitivities.’ (We’ve asked xAI if guardrails do exist, but the company hasn’t yet responded to a request for comment.) Grok’s text version will refuse to do things like help you make cocaine, a standard move for chatbots. But image prompts that would be immediately blocked on other services are fine by Grok.”

The article lists some very uncomfortable experimental images Grok has created and even shares a few. See the write-up if curious. We learn one X user found some frightening loopholes. When he told the AI he was working on medical or crime scene analysis, it allowed him to create some truly disturbing images. The write-up shares blurred versions of these. The same researcher says he got Grok to create child pornography (though he wisely does not reveal how). All this without the “Created with AI” watermark other major chatbots add. Although he is aware of this issue, X owner Elon Musk characterizes this iteration of Grok as an “intermediate step” that allows users “to have some fun.” That is one way to put it. Robertson notes:

“Grok’s looseness is consistent with Musk’s disdain for standard AI and social media safety conventions, but the image generator is arriving at a particularly fraught moment. The European Commission is already investigating X for potential violations of the Digital Services Act, which governs how very large online platforms moderate content, and it requested information earlier this year from X and other companies about mitigating AI-related risk. … The US has far broader speech protections and a liability shield for online services, and Musk’s ties with conservative figures may earn him some favors politically.”

Perhaps. But US legislators are working on ways to regulate deepfakes that impersonate others, particularly sexually explicit imagery. Combine that with UK regulator Ofcom’s upcoming enforcement of the Online Safety Act, and Musk may soon find a permissive Grok to be a lot less fun.

Cynthia Murrell, August 28, 2024

Consulting Tips: How to Guide Group Thinking

August 27, 2024

One of the mysteries of big-time consulting is answering the question, “Why do these guys seem so smart?” One trick is to have a little knowledge valise stuffed with thinking and questioning tricks. One example is the Boston Consulting Group dog, star, loser, and uncertain matrix. If you remember the “I like Ike” buttons, you may know that the General used this approach to keep some frisky direct reports mostly in line during meetings.

Are there other knowledge tools or thinking frameworks? The answer is, “Sure.” When someone asks you to name six, can you deliver a prompt, concise answer? The answer, in my 50-plus years of professional services work, is “Not a chance.”

The good news is that you can locate frameworks, get some tips on how to use these to knock the socks off those in a group, and become a walking, talking Blue Chip Consultant without the pain and expense of a fancy university, hours of drudgery, or enduring scathing comments from more experienced peers.

Navigate to “Tools for Better Thinking.” The link, one hopes, displays the names of thinking frameworks in boxes. Click a box, and you get a description and a how-to about the tool.

I think the site is quite good, and it may help some people sell consulting work in certain situations.

Worth a look.

Stephen E Arnold, August 27, 2024

Anthropic AI: New Allegations of Frisky Behavior

August 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Who knew high school science club members would mature into such frisky people? But rules are made to be broken. Apologies make the problem go away. Perhaps that works in high school with some indulgent faculty advisors? In the real world, where lawyers are more plentiful than cardinals in Kentucky, apologies may not mean anything. I learned that the highly regarded AI outfit Anthropic will be spending some time with its lawyers.

“Anthropic Faces New Class-Action Lawsuit from Book Authors” reported:

AI company Anthropic is still battling a lyrics-focused lawsuit from music publishers, but now it has a separate legal fight on its hands. Authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson are suing the company in a class-action lawsuit in California. As with the music publishers, their focus is on the training of Anthropic’s Claude chatbot.

I anticipate a few of the really smart and oh-so-busy wizards will be sitting in a conference room doing the deposition thing. That involves lawyers, who are not as scientifically oriented as AI wizards, trying to make sense of Anthropic’s use of OPW (other people’s work) without permission. If you are a fan of legal filings, you can read the 20-page document at this link.

Those AI wizards are clever, aren’t they?

Stephen E Arnold, August 27, 2024

A Tool to Fool AI Detectors

August 27, 2024

Here is one way to fool the automated AI detectors: AIHumanizer. A Redditor in the r/ChatGPT subreddit has created a “New Tool that Removes Frequent AI Phrases like ‘Unleash’ or ‘Elevate’.” Sufficient_Ice_6113 writes:

“I created a simple tool which lets you humanize your texts and remove all the robotic or repeated phrases ChatGPT usually uses like ‘Unleash’ ‘elevate’ etc. here is a longer list of them: Most used AI words 100 most common AI words. To remove them and improve your texts you can use aihumanizer.com which completely rewrites your text to be more like it was written by a human. It also makes it undetectable by AI detectors as a side effect because the texts don’t have the common AI pattern any longer. It is really useful in case you want to use an AI text for any work related things, most people can easily tell an email or application to a job was written by AI when it includes ‘Unleash’ and ‘elevate’ a dozen times.”

The author links to their example result at Undetectable AI, the “AI Detector and Humanizer.” That site declares the sample text “appears human.” See the post’s comments for another example, submitted by benkei_sudo. They opine the tool “is a good start, but needs a lot of improvement” because, though it fools the AI checkers, an actual human would have their suspicions. That could be a problem for emails or press releases that, at least for now, tend to be read by people. But how many actual humans are checking resumes or standardized-test essays these days? Besides, Sufficient Ice emphasizes, AIHumanizer offers an upgraded version for an undisclosed price (though a free trial is available). The AI-content arms race continues.
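To make the idea concrete, here is a toy sketch, in Python, of the word-swap part of such a tool. This is emphatically not how aihumanizer.com works under the hood (the Redditor says that service rewrites the whole text); the word list and replacements below are invented for illustration.

```python
import re

# Invented examples of "AI-ish" words and plainer stand-ins; the real
# tool's vocabulary and rewriting method are not public.
AI_TELLS = {
    "unleash": "release",
    "elevate": "raise",
    "delve": "dig",
    "tapestry": "mix",
}

def soften_ai_tells(text: str) -> str:
    """Swap common AI buzzwords for plainer synonyms (case handling kept naive)."""
    for tell, plain in AI_TELLS.items():
        # \b keeps "unleash" from matching inside longer words.
        text = re.sub(rf"\b{tell}\b", plain, text, flags=re.IGNORECASE)
    return text

print(soften_ai_tells("Unleash your potential and elevate your writing."))
# -> "release your potential and raise your writing."
```

A naive substitution like this would not fool a statistical detector, which looks at patterns well beyond a handful of buzzwords; that gap is presumably why the tool rewrites entire texts instead.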

Cynthia Murrell, August 27, 2024

Eric Schmidt, Truth Teller at Stanford University, Bastion of Ethical Behavior

August 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I spotted some of the quotes in assorted online posts about Eric Schmidt’s talk/interview at Stanford University. I wanted to share a transcript of the remarks. You can find the ASCII transcript on GitHub at this link. For those interested in how Silicon Valley concepts influence one’s view of appropriate behavior, this talk is a gem. Is it at the level of the Confessions of St. Augustine? Well, the content is darned close in my opinion. Students of Google’s decision making, past and present, may find some guideposts. Aspiring “leadership” types may well find tips and tricks.

Stephen E Arnold, August 26, 2024

Meta Leadership: Thank you for That Question

August 26, 2024

Who needs the Dark Web when one has Facebook? We learn from The Hill, “Lawmakers Press Meta Over Illicit Drug Advertising Concerns.” Writer Sarah Fortinsky pulls highlights from the open letter a group of House representatives sent directly to Mark Zuckerberg. The rebuke follows a March report from The Wall Street Journal that Meta was under investigation for “facilitating the sale of illicit drugs.” Since that report, the lawmakers lament, Meta has continued to run such ads. We learn:

“The Tech Transparency Project recently reported that it found more than 450 advertisements on those platforms that sell pharmaceuticals and other drugs in the last several months. ‘Meta appears to have continued to shirk its social responsibility and defy its own community guidelines. Protecting users online, especially children and teenagers, is one of our top priorities,’ the lawmakers wrote in their letter, which was signed by 19 lawmakers. ‘We are continuously concerned that Meta is not up to the task and this dereliction of duty needs to be addressed,’ they continued. Meta uses artificial intelligence to moderate content, but the Journal reported the company’s tools have not managed to detect the drug advertisements that bypass the system.”

The bipartisan representatives did not shy from accusing Meta of dragging its heels because it profits off these illicit ad campaigns:

“The lawmakers said it was ‘particularly egregious’ that the advertisements were ‘approved and monetized by Meta.’ … The lawmakers noted Meta repeatedly pushes back against their efforts to establish greater data privacy protections for users and makes the argument ‘that we would drastically disrupt this personalization you are providing,’ the lawmakers wrote. ‘If this personalization you are providing is pushing advertisements of illicit drugs to vulnerable Americans, then it is difficult for us to believe that you are not complicit in the trafficking of illicit drugs,’ they added.”

The letter includes a list of questions for Meta. There is a request for data on how many of these ads the company has discovered itself and how many it missed that were discovered by third parties. It also asks about the ad review process, how much money Meta has made off these ads, what measures are in place to guard against them, and how minors have interacted with them. The legislators also ask how Meta uses personal data to target these ads, a secret the company will surely resist disclosing. The letter gives Zuckerberg until September 6 to respond.

Cynthia Murrell, August 26, 2024

AI Snake Oil Hisses at AI

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Enthusiasm for certain types of novel software or gadgets rises and falls. The Microsoft marketing play with OpenAI marked the beginning of the smart software hype derby. Google got the message and flipped into Red Alert mode. Now, about 20 months after Microsoft’s announcement of its AI tie-up with Sam AI-Man, we have Google’s new combo: AI in a mobile phone. Bam! Job done. Slam dunk.


Thanks, MSFT Copilot. On top of the IPv6 issue? Oh, too bad.

I wonder if the Googlers were thinking along the same logical lines as the authors of “AI Companies Are Pivoting from Creating Gods to Building Products. Good.”

The snake oil? Dripping. Here’s a passage from the article I noted:

AI companies are collectively planning to spend a trillion dollars on hardware and data centers, but there’s been relatively little to show for it so far.

A trillion? That’s a decent number. Sam AI-Man wants more, but the scale is helpful, particularly when most numbers are mere billions in the zoom zoom world of smart software.

The most important item in the write-up, in my opinion, is the list of five “challenges.” The article focuses on consumer AI. A couple of these apply to the enterprise sector as well. Let’s look at the five “challenges.” These are (and, keep in mind, I am paraphrasing as dinobabies often do):

  1. Cost. In terms of consumers, one must consider making Hamster Kombat smart. (This is a Telegram dApp.) My team informed me that this little gem has 35 million users, and it is still growing. Imagine the computational cost to infuse each and every Hamster Kombat “game” player with AI goodness. But it’s a game, and a distributed one at that, one might say. Someone has to pay for these cycles. And Hamster Kombat is not on most consumers’ radar. Telegram has about 950 million users, so the 35 million users come from that pool. What are the costs of AI-infused games outside of a walled garden? And the hardware? And the optimization engineering? And the fooling around with ad deals? Costs are not a mere hurdle. Costs might be a Grand Canyon-scale leap into a financial mud bank.
  2. Reliability. Immature systems and methods, training content issues (real and synthetic), and the fancy math which uses a lot of probability procedures guarantees some interesting outputs.
  3. Privacy. The consumer- or user-facing services are immature. Developers want to get something to mostly work in a good-enough manner. Then security may be discussed. But on to the next feature. As a result, I am not sure anyone has a decent grasp of the security issues smart software might pose. Look at Microsoft. It has been around almost half a century, and I learn about new security problems every day. Is smart software different?
  4. Safety and security. This is a concomitant to privacy. Good luck knowing what the systems do or do not do.
  5. User interface. I am a dinobaby. The interfaces are pale, low contrast, and change depending on what a user clicks. I like stability. Smart software simply does not comprehend that word.

Good points. My view is that the obstacle to surmount is money. I am not sure the big outfits anticipated the costs of their sally into the hallucinating world of AI. And what are those costs, pray tell? Here are selected items the financial managers at the Big Dogs are pondering, along with the wording of their updated LinkedIn profiles:

  • Litigation. Remarks by some icons of the high technology sector have done little to assuage the feelings of those whose content was used without permission or compensation. Some, some people. A few Big Dogs are paying cash to scrape.
  • Power. Yep, electricity, as EV owners know, is not really free.
  • Water. Yep, modern machines produce heat if what I learned in physics was actual factual.
  • People (until they can be replaced by a machine that does not require health care or engage in signing petitions).
  • Data and indexing. Yep, still around and expensive.
  • License fees. They are comin’ round the mountain of legal filings.
  • Meals, travel and lodging. Leadership will be testifying, probably a lot.
  • PR advisors and crisis consultants. See the first bullet, Litigation.

However, slowly but surely some commercial sectors are using smart software. There is an AI law firm. There are dermatologists letting AI determine what to cut, freeze, or ignore. And there are college professors using AI to help them do “original” work and create peer-review fodder.

There was a snake in the Garden of Eden, right?

Stephen E Arnold, August 23, 2024

Google Leadership Versus Valued Googlers

August 23, 2024

green-dino_thumb_thumb_thumb_thumb_t[1]This essay is the work of a dumb dinobaby. No smart software required.

The summer in rural Kentucky lingers on. About 2,300 miles away from the Sundar & Prabhakar Comedy Show’s nerve center, the Alphabet Google YouTube DeepMind entity is also experiencing “cyclonic heating from chaotic employee motion.” What’s this mean? Unsteady waters? Heat stroke? Confusion? Hallucinations? My goodness.

The Google leadership faces another round of employee pushback. I read “Workers at Google DeepMind Push Company to Drop Military Contracts.”

How could the Google smart software fail to predict this pattern? My view is that smart software has some limitations when it comes to managing AI wizards. Furthermore, Google senior managers have not been able to extract full knowledge value from the tools at their disposal to deal with complexity. Time Magazine reports:

Nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.

Why are AI Googlers grousing about military work? My personal view is that the recent hagiography of Palantir’s Alex Karp and the tie-up between Microsoft and Palantir for Impact Level 5 services mean that the US government is gearing up to spend some big bucks for warfighting technology. Google wants — really needs — this revenue. Penalties for the behavior Judge Mehta describes as “monopolistic” could put a hitch in the git-along of Google ad revenue. Therefore, Google’s smart software can meet the hunger militaries have for intelligent software to perform a wide variety of functions. As the Russian special operation makes clear, “meat-based” warfare is somewhat inefficient. Ukrainian garage-built drones with some AI bolted on perform better than a wave of 18-year-olds with rifles and a handful of bullets. The example which sticks in my mind is a Ukrainian drone spotting a Russian soldier in a field, partially obscured by bushes. The individual is attending to nature’s call. The drone spots the “shape” and explodes near the Russian infantryman.


A former consultant faces an interpersonal Waterloo. How did that work out for Napoleon? Thanks, MSFT Copilot. Are you guys working on the IPv6 issue? Busy weekend ahead?

Those who study warfare probably have their own ah-ha moment.

The Time Magazine write up adds:

Those principles state the company [Google/DeepMind] will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.” The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”

I love it when wizards “believe” something.

Will the Sundar & Prabhakar brain trust do the believing or the banking of revenue from government agencies eager to gain access to advanced artificial intelligence services and systems? My view is that the “believers” underestimate the uncertainty arising from the potential sanctions, fines, or corporate deconstruction Judge Mehta’s decision presents.

The article adds this bit of color about the Sundar & Prabhakar response time to Googlers’ concern about warfighting applications:

The [objecting employees’] letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”

“No meaningful response” suggests that the Alphabet Google YouTube DeepMind rhetoric is not satisfactory.

The write up concludes with this paragraph:

At a DeepMind town hall event in June, executives were asked to respond to the letter, according to three people with knowledge of the matter. DeepMind’s chief operating officer Lila Ibrahim answered the question. She told employees that DeepMind would not design or deploy any AI applications for weaponry or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy, according to a set of notes taken during the meeting that were reviewed by TIME. Ibrahim added that she was proud of Google’s track record of advancing safe and responsible AI, and that it was the reason she chose to join, and stay at, the company.

With Microsoft and Palantir, among others, poised to capture some end-of-fiscal-year money from certain US government budgets, the comedy act’s headquarters’ planners want a piece of the action. How will the Sundar & Prabhakar Comedy Act handle the situation? Why, procrastinate. Perhaps the comedy act hopes the issue will just go away. The complaining employees have short attention spans, rely on TikTok-type services for information, and can be terminated like other Googlers who grouse, picket, boycott the Foosball table, or quiet quit while working on a personal start-up.

The approach worked reasonably well before Judge Mehta labeled Google a monopoly operation. It worked when ad dollars flowed like latte at Philz Coffee. But today is different, and the unsettled personnel are not a joke and add to the uncertainty some have about the Google we know and love.

Stephen E Arnold, August 23, 2024

Which Is It, City of Columbus: Corrupted or Not Corrupted Data

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I learned that Columbus, Ohio, suffered one of those cyber security missteps. The good news is that I learned about it from the ever-reliable Associated Press: “Mayor of Columbus, Ohio, Says Ransomware Attackers Stole Corrupted, Unusable Data.” But then I read the StateScoop story “Columbus, Ohio, Ransomware Data Might Not Be Corrupted After All.”


The answer is, “I don’t know.” Thanks, MSFT Copilot. Good enough.

The story is a groundhog day tale. A bad actor compromises a system. The bad actor delivers ransomware. The senior officers know little about ransomware and even less about the cyber security systems marketed as a proactive, intelligent defense against bad stuff like ransomware. My view, as you know, is that it is easier to create sales decks and marketing collateral than it is to deliver cyber security software that works. Keep in mind that I am a dinobaby. I like products that under promise and over deliver. I like software that works, not sort of works or mostly works. Works. That’s it.

What’s interesting about Columbus, other than its zoo, its annual flower festival, and the OCLC organization, is that no one can agree on this issue. I believe this is a variation on the Bud Abbott and Lou Costello routine “Who’s on First?”

StateScoop’s story reported:

An anonymous cybersecurity expert told local news station WBNS Tuesday that the personal information of hundreds of thousands of Columbus residents is available on the dark web. The claim comes one day after Columbus Mayor Andrew Ginther announced to the public that the stolen data had been “corrupted” and most likely “unusable.” That assessment was based on recent findings of the city’s forensic investigation into the incident.

The article noted:

Last week, the city shared a fact sheet about the incident, which explains: “While the city continues to evaluate the data impacted, as of Friday August 9, 2024, our data mining efforts have not revealed that any of the dark web-posted data includes personally identifiable information.”

What are the lessons I have learned from these two stories about a security violation and ransomware extortion?

  1. Lousy cyber security is a result of indifferent (maybe lousy) management. How do I know? The City of Columbus cannot generate a consistent story.
  2. The compromised data were described in two different and opposite ways. The confusion underscores that the individuals involved are struggling with basic data processes. Who’s on first? I don’t know. No, he’s on third.
  3. The generalization that no one wants the data misses an important point. Data, once available, is of considerable interest to state actors who might be interested in the employees associated with either the university, Chemical Abstracts, or some other information-centric entity in Columbus, Ohio.

Net net: The incident is one more grim reminder of the vulnerabilities which “managers” choose to ignore or leave to people who may lack certain expertise. The fix may begin in the hiring process.

Stephen E Arnold, August 23, 2024
