More on Intelligence and Social Media: Birds Versus Humans

May 10, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I have zero clue if the two stories about which I will write a short essay are accurate. I don’t care because the two news items are in delicious tension. Do you feel the frisson? The first story is “Parrots in Captivity Seem to Enjoy Video-Chatting with Their Friends on Messenger.” The core of the story strikes me as:

A new (very small) study led by researchers at the University of Glasgow and Northeastern University compared parrots’ responses when given the option to video chat with other birds via Meta’s Messenger versus watching pre-recorded videos. And it seems they’ve got a preference for real-time conversations.

Thanks, MSFT Copilot. Is your security a type of deep fake?

Let me make this somewhat over-burdened sentence more direct. Birds like to talk to other birds, live and in real time. Birds are not keen on the pre-recorded video humans gobble up.

Now the second story. It has the click-licious headline “Gen Z Mostly Doesn’t Care If Influencers Are Actual Humans, New Study Shows.” The main idea of this “real” news story is, in my opinion:

The report [from Sprout Social] notes that 46 percent of Gen Z respondents, specifically, said they would be more interested in a brand that worked with an influencer generated with AI.

The idea is that humans are okay with fake video which aims to influence them through fake humans.

My atrophied dinobaby brain interprets the information in each cited article this way: Birds prefer to interact with real birds. Humans are okay with fake humans. I will have to recalibrate my understanding of the bird brain.

Let’s assume both write ups are chock full of statistically valid data. The assorted data processes conformed to the requirements of Statistics 101. The researchers suggest humans are okay with fakes. Birds, specifically parrots, prefer the real doo-dah.

Observations:

  1. Humans may not be able to differentiate real from fake. When presented with fakes, humans may prefer the bogus.
  2. I may need to reassess the meaning of the phrase “bird brain.”
  3. Researchers demonstrate the results of years of study in these findings.

Net net: The frisson I referenced in the first paragraph of this essay I now recognize as fear.

Stephen E Arnold, May 10, 2024

Google Stomps into the Threat Intelligence Sector: AI and More

May 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Before commenting on Google’s threat services news, I want to remind you of the link to the list of Google initiatives which did not survive. You can find the list at Killed by Google. I want to mention this resource because Google’s product innovation and management methods are interesting, to say the least. Operating in Code Red or Yellow Alert or whatever the Google crisis buzzword is, generating sustainable revenue beyond online advertising has proven to be a bit of a challenge. Google is more comfortable using such methods as [a] buying a company and trying to scale it, [b] imitating another firm’s innovation, and [c] dumping big money into secret projects in the hopes that what comes out will not result in the firm’s getting its “glass” kicked to the curb.

Google makes a big entrance at the RSA Conference. Thanks, MSFT Copilot. Have you considered purchasing Google’s threat intelligence service?

With that as background, Google has introduced an “unmatched” cyber security service. The information was described at the RSA security conference and in a quite Googley blog post “Introducing Google Threat Intelligence: Actionable threat intelligence at Google Scale.” Please note the operative word “scale.” If the service does not make money, Google will “not put wood behind” the effort. People won’t work on the project, and it will be left to dangle in the wind or just shot like Cricket, a now famous example of animal husbandry. (Google’s Cricket was the Google Appliance. Remember that? Take over the enterprise search market. Nope. Bang, hasta la vista.)

Google’s new service aims squarely at the comparatively well-established and now maturing cyber security market. I had to check to see who owns what. Venture firms and others with money have been buying promising cyber security firms. Google owned a piece of Recorded Future. Now Recorded Future is owned by a third party outfit called Insight. Darktrace has been or will be purchased by Thoma Bravo. Consolidation is underway. Thus, it makes sense for Google to enter the threat intelligence market, using its Mandiant unit as a springboard: one of those home diving boards, not the cliff diving platform at Acapulco.

The write up says:

we are announcing Google Threat Intelligence, a new offering that combines the unmatched depth of our Mandiant frontline expertise, the global reach of the VirusTotal community, and the breadth of visibility only Google can deliver, based on billions of signals across devices and emails. Google Threat Intelligence includes Gemini in Threat Intelligence, our AI-powered agent that provides conversational search across our vast repository of threat intelligence, enabling customers to gain insights and protect themselves from threats faster than ever before.

Google to its credit did not trot out the “quantum supremacy” lingo, but the marketers did assert that the service offers “unmatched visibility in threats.” I like the “unmatched.” Not supreme, just unmatched. The graphic below illustrates the elements of the unmatchedness:

Credit: Google, 2024

But where is artificial intelligence in the diagram? Don’t worry. The blog explains that Gemini (Google’s AI “system”) delivers

AI-driven operationalization

But the foundation of the new service is Gemini, which does not appear in the diagram. That does not matter, the Code Red crowd explains:

Gemini 1.5 Pro offers the world’s longest context window, with support for up to 1 million tokens. It can dramatically simplify the technical and labor-intensive process of reverse engineering malware — one of the most advanced malware-analysis techniques available to cybersecurity professionals. In fact, it was able to process the entire decompiled code of the malware file for WannaCry in a single pass, taking 34 seconds to deliver its analysis and identify the kill switch. We also offer a Gemini-driven entity extraction tool to automate data fusion and enrichment. It can automatically crawl the web for relevant open source intelligence (OSINT), and classify online industry threat reporting. It then converts this information to knowledge collections, with corresponding hunting and response packs pulled from motivations, targets, tactics, techniques, and procedures (TTPs), actors, toolkits, and Indicators of Compromise (IoCs). Google Threat Intelligence can distill more than a decade of threat reports to produce comprehensive, custom summaries in seconds.

I like the “indicators of compromise.”
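
To make the long-context claim concrete, here is a minimal sketch of the underlying pattern, assuming the public google-generativeai Python SDK, a valid API key, and an illustrative model name and file name; it is not Google’s Threat Intelligence product API, just one way to hand an entire decompiled binary to a large-context model:

```python
# A sketch of the long-context pattern described above, not Google's actual
# Threat Intelligence API. The SDK is the public google-generativeai package;
# the model name, file name, and prompt wording are illustrative assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: key supplied out of band
model = genai.GenerativeModel("gemini-1.5-pro")  # large-context model

# Hypothetical decompiler output; a 1M-token window can hold very large files.
with open("wannacry_decompiled.c", "r", errors="ignore") as f:
    decompiled = f.read()

prompt = (
    "You are a malware analyst. Review the decompiled code below, summarize "
    "its behavior, flag any kill-switch logic, and list likely indicators of "
    "compromise (IoCs).\n\n" + decompiled
)

response = model.generate_content(prompt)
print(response.text)
```

Whether that single-pass analysis matches what a human reverse engineer produces is exactly the kind of claim buyers will want to test for themselves.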

Several observations:

  1. Will this service be another Google Appliance-type play for the enterprise market? It is too soon to tell, but with the pressure mounting from regulators, staff management issues, competitors, and savvy marketers in Redmond, “indicators” of success will be known in the next six to 12 months.
  2. Is this a business or just another item on a punch list? The answer to the question may be provided by what the established players in the threat intelligence market do and what actions Amazon and Microsoft take. Is a new round of big money acquisitions going to begin?
  3. Will enterprise customers “just buy Google”? Chief security officers have demonstrated that buying multiple security systems is a “safe” approach to a job which is difficult: protecting their employers from deeply flawed software and years of ignoring online security.

Net net: In a maturing market, three factors may signal how the big, new Google service will develop. These are [a] price, [b] perceived efficacy, and [c] avoidance of a major issue like the SolarWinds matter. I am rooting for Googzilla, but I still wonder why Google shifted from Recorded Future to acquisitions and me-too methods. Oh, well. I am a dinobaby and cannot be expected to understand.

Stephen E Arnold, May 7, 2024

Buffeting AI: A Dinobaby Is Nervous

May 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I am not sure the “go fast” folks are going to be thrilled with a dinobaby rich guy’s view of smart software. I read “Warren Buffett’s Warning about AI.” The write up included several interesting observations. The only problem is that smart software is out of the bag. Outfits like Meta are pushing the open source AI ball forward. Other outfits are pushing, but Meta has big bucks. Big bucks matter in AI Land.

Yes, dinobaby. You are on the right wavelength. Do you think anyone will listen? I don’t. Thanks, MSFT Copilot. Keep up the good work on security.

Let’s look at a handful of statements from the write up and do some observing while some in the Commonwealth of Kentucky recover from the Derby.

First, the oracle of Omaha allegedly said:

“When you think about the potential for scamming people… Scamming has always been part of the American scene. If I was interested in investing in scamming— it’s gonna be the growth industry of all time.”

Mr. Buffett has nailed the scamming angle. I particularly liked the “always.” Imagine a country built upon scamming. That makes one feel warm and fuzzy about America. Imagine how those who are hostile to US interests interpret the comment. Ill will toward the US can now be based on the premise that “scamming has always been part of the American scene.” Trust us? Just ignore the oracle of Omaha? Unlikely.

Second, the wise, frugal icon allegedly communicated that:

the technology would affect “anything that’s labor sensitive” and that for workers it could “create an enormous amount of leisure time.”

What will those individuals do with that “leisure time”? Gobbling down social media? Working on volunteer projects like picking up trash from streets and highways?

The final item I will cite is his 2018 statement:

“Cyber is uncharted territory. It’s going to get worse, not better.”

Is that a bit negative?

Stephen E Arnold, May 7, 2024

The Everything About AI Report

May 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read the Stanford Artificial Intelligence Report. If you have not seen the 500 page document, click here. I spotted an interesting summary of the document. “Things Everyone Should Understand About the Stanford AI Index Report” is the work of Logan Thorneloe, an author previously unknown to me. I want to highlight three points I carried away from Mr. Thorneloe’s essay. These may make more sense after you have worked through the beefy Stanford document, which, due to its size, makes clear that Stanford wants to be linked to the AI spaceship. (Does Stanford’s AI effort look like Mr. Musk’s or Mr. Bezos’ rocket? I am leaning toward the Bezos design.)

An amazed student absorbs information about the Stanford AI Index Report. Thanks, MSFT. Good enough.

The summary of the 500 page document makes clear that Stanford wants to track the progress of smart software, provide a policy document so that Stanford can obviously influence policy decisions made by people who are not AI experts, and then “highlight ethical considerations.” The assumption by Mr. Thorneloe and by the AI report itself is that Stanford is equipped to pronounce on ethics. The president of Stanford departed under a cloud for acting in an unethical manner. Plus, some of the AI firms have a number of Stanford graduates on their AI teams. Are those teams responsible for inaccurate depictions of historical personages? Okay, that’s enough about ethics. My hunch is that Stanford wants to be perceived as a leader. Mr. Thorneloe seems to accept this idea as a-okay.

The second point for me in the summary is that Mr. Thorneloe goes along with the idea that the Stanford report is unbiased. Writing about AI is, in my opinion of course, inherently biased. That’s the reason there are AI cheerleaders and AI doomsayers. AI is probability. How the software gets smart is biased by [a] how the thresholds are rigged up when a smart system is built, [b] the humans who do the training of the system and then “fine tune” or “calibrate” the smart software to produce acceptable results, and [c] the information used to train the system. More recently, human developers have been creating wrappers which effectively prevent the smart software from generating pornography or other “improper” or “unacceptable” outputs. I think the “bias” angle needs some critical thinking. Stanford’s report wants to cover the AI waterfront as Stanford maps and presents the geography of AI.
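
To illustrate the wrapper idea in the crudest possible terms, here is a toy sketch; the blocklist, the stand-in model_call function, and the refusal text are all assumptions for illustration, not any vendor’s actual guardrail code:

```python
# A toy "wrapper" that sits between a generative model and the user and blocks
# outputs the developers deem unacceptable. Purely illustrative: the blocklist,
# model_call stand-in, and refusal message are assumptions, not real guardrails.
BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2"}  # hypothetical policy list


def model_call(prompt: str) -> str:
    # Stand-in for whatever generative model sits behind the wrapper.
    return f"model output for: {prompt}"


def wrapped_generate(prompt: str) -> str:
    raw = model_call(prompt)
    # The wrapper, not the model, decides what the user is allowed to see.
    if any(term in raw.lower() for term in BLOCKED_TERMS):
        return "I can't help with that."
    return raw


print(wrapped_generate("summarize the report's chapter on responsible AI"))
```

The point is only that the filter is one more place where human choices shape what the system appears to “know.”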

The final point is the rundown of Mr. Thorneloe’s take-aways from the report. He presents ten. I think there may just be three. First, the AI work is very expensive. That leads to the conclusion that only certain firms can be in the AI game and expect to win and win big. To me, this means that Stanford wants the good old days of Silicon Valley to come back again. I am not sure that this approach to an important, yet immature technology is a particularly good idea. One does not fix up problems with technology. Technology creates some problems, and like social media, what AI generates may have a dark side. With big money controlling the game, what’s that mean? That’s a tough question to answer. The US wants China and Russia to promise not to use AI in their nuclear weapons systems. Yeah, that will work.

Another take-away which seems important is the assumption that workers will be more productive. This is an interesting assertion. I understand that one can use AI to eliminate call centers. However, has Stanford made a case that the benefits outweigh the drawbacks of AI? Mr. Thorneloe seems to be okay with the assumption underlying the good old consultant-type of magic.

The general take-away from the list of ten take-aways is that AI is fueled by “industry.” What happened to the Stanford Artificial Intelligence Lab, synthetic data, and the high-confidence outputs? Nothing has happened. AI hallucinates. AI gets facts wrong. AI is a collection of technologies looking for problems to solve.

Net net: Mr. Thorneloe’s summary is useful. The Stanford report is useful. Some AI is useful. Writing 500 pages about a fast moving collection of technologies is interesting. I cannot wait for the next edition. I assume “everyone” will understand AI PR.

Stephen E Arnold, May 7, 2024

Microsoft Security Messaging: Which Is What?

May 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I am a dinobaby. I am easily confused. I read two “real” news items and came away confused. The first story is “Microsoft Overhaul Treats Security As Top Priority after a Series of Failures.” The subtitle is interesting too because it links “security” to monetary compensation. That’s an incentive, but why isn’t security just part of the work on an alleged monopoly’s products and services? I surmise the answer is, “Because security costs money, a lot of money.” That article asserts:

After a scathing report from the US Cyber Safety Review Board recently concluded that “Microsoft’s security culture was inadequate and requires an overhaul,” it’s doing just that by outlining a set of security principles and goals that are tied to compensation packages for Microsoft’s senior leadership team.

Okay. But security emerges from basic engineering decisions; for instance, does a developer spend time figuring out and resolving security when dependencies are unknown or documented only by a grousing user in a comment posted on a technical forum? Or does the developer include a new feature and move on to the next task, assuming that someone else or an automated process will make sure everything works without opening the door to the curious bad actor? I think that Microsoft assumes it deploys secure systems and that its customers have the responsibility to ensure their systems’ security.

The cyber raccoons found the secure picnic basket was easily opened. The well-fed, previously content humans seem dismayed that their goodies were stolen. Thanks, MSFT Copilot. Definitely good enough.

The write up adds that Microsoft has three security principles and six security pillars. I won’t list these because the words chosen strike me as the kind produced by a lawyer, an MBA, and a large language model. Remember. I am a dinobaby. Six plus three is nine things. Some car executive said a long time ago, “Two objectives is no objective.” I would add that nine generalizations are not a culture of security. Nine is like Microsoft Word features. No one can keep track of them because most users use Word to produce words. The other stuff is usually confusing, in the way, or presented in a way that makes finding a specific feature an exercise in frustration. Is Word secure? Sure, just download some nifty documents from a frisky Telegram group or the Dark Web.

The write up concludes with a weird statement. Let me quote it:

I reported last month that inside Microsoft there is concern that the recent security attacks could seriously undermine trust in the company. “Ultimately, Microsoft runs on trust and this trust must be earned and maintained,” says Bell. “As a global provider of software, infrastructure and cloud services, we feel a deep responsibility to do our part to keep the world safe and secure. Our promise is to continually improve and adapt to the evolving needs of cybersecurity. This is job #1 for us.”

First, there is the notion of trust. Perhaps Edge’s persistence and advertising in the start menu, SolarWinds, and the legions of Chinese and Russian bad actors undermine whatever trust exists. Most users are clueless about security issues baked into certain systems. They assume; they don’t trust. Cyber security professionals buy third party security solutions like shopping at a grocery store. Big companies’ senior executives don’t understand why the problem exists. Lawyers and accountants understand many things. Digital security is often not a core competency. “Let the cloud handle it” sounds pretty good when the fourth IT manager or the third security officer quit this year.

Now the second write up. “Microsoft’s Responsible AI Chief Worries about the Open Web.” First, recall that Microsoft owns GitHub, a very convenient source for individuals looking to perform interesting tasks. Some are good tasks like snagging a script to perform a specific function for a church’s database. Other software does interesting things in order to help a user shore up security. Rapid7’s metasploit-framework is an interesting example. Almost anyone can find quite a bit of useful software on GitHub. When I lectured at a central European country’s main technical university, the students were familiar with GitHub. Oh, boy, were they.

In this second write up I learned that Microsoft has released a 39 page “report” which looks a lot like a PowerPoint presentation created by a blue-chip consulting firm. You can download the document at this link, at least you could as of May 6, 2024. “Security” appears 78 times in the document. There are “security reviews.” There is “cybersecurity development” and a reference to something called “Our Aether Security Engineering Guidance.” There is “red teaming” for biosecurity and cybersecurity. There is security in Azure AI. There are security reviews. There is the use of Copilot for security. There is something called PyRIT which “enables security professionals and machine learning engineers to proactively find risks in their generative applications.” There is partnering with MITRE for security guidance. And there are four footnotes to the document about security.

What strikes me is that security is definitely a popular concept in the document. But the principles and pillars apparently require AI context. As I worked through the PowerPoint, I formed the opinion that a committee worked with a small group of wordsmiths and crafted a rather elaborate word salad about going all in with Microsoft AI. Then the group added “security” the way my mother would chop up a red pepper and put it in a salad for color.

I want to offer several observations:

  1. Both documents suggest to me that Microsoft is now pushing “security” as Job One, a slogan used by the Ford Motor Co. (How are those Fords faring in the reliability ratings?) Saying words and doing are two different things.
  2. The rhetoric of the two documents reminds me of Gertrude’s statement, “The lady doth protest too much, methinks.” (Hamlet? Remember?)
  3. The US government, most large organizations, and many individuals “assume” that Microsoft has taken security seriously for decades. The jargon-and-blather PowerPoint makes clear that Microsoft is trying to find a nice way to say, “We are saying we will do better already. Just listen, people.”

Net net: Bandying about the word trust or the word security puts everyone on notice that Microsoft knows it has a security problem. But the key point is that bad actors know it, exploit the security issues, and believe that Microsoft software and services will be a reliable source of opportunities for mischief. Ransomware? Absolutely. Exposed data? You bet your life. Free hacking tools? Let’s go. Does Microsoft have a security problem? The word form is incorrect. Does Microsoft have security problems? You know the answer. Aether.

Stephen E Arnold, May 6, 2024

Generative AI Means Big Money…Maybe

May 6, 2024

Whenever new technology appears on the horizon, there are always optimistic venture capitalists who jump on the idea that it will be a gold mine. While this is occasionally true, other times it’s a bust. Anything can sound feasible on paper, but reality often proves that brilliant ideas don’t work. Medium published Ashish Kakran’s article, “Generative AI: A New Gold Rush For Software Engineering.”

Kakran opens his article asserting the brilliant simplicity of Einstein’s E=mc² formula to inspire readers. He implies that generative AI will revolutionize industries like Einstein’s formula changed physics. He also says that white collar jobs stand to be automated for the first time in history. White collar jobs have been automated or made obsolete for centuries.

Kakran then runs numbers complete with charts and explanations about how generative AI is going to change the world. His diagrams and explanations probably mean something, but they read like white paper gibberish. This part makes sense:

“If you rewind to the year 2008, you will suddenly hear a lot of skepticism about the cloud. Would it ever make sense to move your apps and data from private or colo [cated] data centers to cloud thereby losing fine-grained control. But the development of multi-cloud and devops technologies made it possible for enterprises to not only feel comfortable but accelerate their move to the cloud. Generative AI today might be comparable to cloud in 2008. It means a lot of innovative large companies are still to be founded. For founders, this is an enormous opportunity to create impactful products as the entire stack is currently getting built.”

The author is correct that there are business opportunities to leverage generative AI. Is it a California gold rush? Nobody knows. If you have the funding, expertise, and a good idea, then follow it. If not, maybe focusing on a more attainable career is better.

Whitney Grace, May 6, 2024

Microsoft: Security Debt and a Cooked Goose

May 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Microsoft has a deputy security officer. Who is it? For reasons of security, I don’t know. What I do know is that our test VPNs no longer work. That’s a good way to enforce reduced security: Just break Windows 11. (Oh, the pushed messages work just fine.)

Is Microsoft’s security goose cooked? Thanks, MSFT Copilot. Keep following your security recipe.

I read “At Microsoft, Years of Security Debt Come Crashing Down.” The idea is that technical debt has little hidden chambers, in this case, security debt. The write up says:

…negligence, misguided investments and hubris have left the enterprise giant on its back foot.

How has Microsoft responded? A great financial report and this type of news:

… in early April, the federal Cyber Safety Review Board released a long-anticipated report which showed the company failed to prevent a massive 2023 hack of its Microsoft Exchange Online environment. The hack by a People’s Republic of China-linked espionage actor led to the theft of 60,000 State Department emails and gained access to other high-profile officials.

Bad? Not as bad as this reminder that there are some concerning issues.

What is interesting is that big outfits, government agencies, and start-ups just use Windows. It’s ubiquitous, relatively cheap, and good enough. Apple’s software is fine, but it is different. Linux has its fans, but it is work. Therefore, hello Windows and Microsoft.

The article states:

Just weeks ago, the Cybersecurity and Infrastructure Security Agency issued an emergency directive, which orders federal civilian agencies to mitigate vulnerabilities in their networks, analyze the content of stolen emails, reset credentials and take additional steps to secure Microsoft Azure accounts.

The problem is that Microsoft has been successful in becoming, for many government and commercial entities, the only game in town. This warrants several observations:

  1. The Microsoft software ecosystem may be impossible to secure due to its size and complexity
  2. Government entities from America to Zimbabwe find the software “good enough”
  3. Security — despite the chit chat — is expensive and often given cursory attention by system architects, programmers, and clients.

The hope is that smart software will identify, mitigate, and choke off the cyber threats. At cyber security conferences, I wonder if the attendees are paying attention to Emily Dickinson (the sporty nun of Amherst), who wrote:

Hope is the thing with feathers
That perches in the soul
And sings the tune without the words
And never stops at all.

My thought is that more than hope may be necessary. Hope in AI is the cute security trick of the day. Instead of a happy bird, we may end up with a cooked goose.

Stephen E Arnold, May 3, 2024

Amazon: Big Bucks from Bogus Books

May 3, 2024

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Anyone who shops for books on Amazon should proceed with caution now that “Fake AI-Generated Books Swarm Amazon.” Good e-Reader’s Navkiran Dhaliwal cites an article from Wired as she describes one author’s somewhat ironic experience:

“In 2019, AI researcher Melanie Mitchell wrote a book called ‘Artificial Intelligence: A Guide for Thinking Humans’. The book explains how AI affects us. ChatGPT sparked a new interest in AI a few years later, but something unexpected happened. A fake version of Melanie’s book showed up on Amazon. People were trying to make money by copying her work. … Melanie Mitchell found out that when she looked for her book on Amazon, another ebook with the same title was released last September. This other book was much shorter, only 45 pages. This book copied Melanie’s ideas but in a weird and not-so-good way. The author listed was ‘Shumaila Majid,’ but there was no information about them – no bio, picture, or anything online. You’ll see many similar books summarizing recently published titles when you click on that name. The worst part is she could not find a solution to this problem.”

It took intervention from Wired to get Amazon to remove the algorithmic copycat. The magazine had Reality Defender confirm there was a 99% chance it was fake, then contacted Amazon. That finally did the trick. Still, it is unclear whether it is illegal to vend AI-generated “summaries” of existing works under the original title. Regardless, asserts Mitchell, Amazon should take steps to prevent the practice. Seems reasonable.

And Amazon cares. No, really. Really it does.

Cynthia Murrell, April 29, 2024

AI: Strip Mining Life Itself

May 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I may be — like an AI system — hallucinating. I think I am seeing more philosophical essays and medieval ratio recently. A candidate expository writing is “To Understand the Risks Posed by AI, Follow the Money.” After reading the write up, I did not get a sense that the focus was on following the money. Nevertheless, I circled several statements which caught my attention.

Let’s look at these, and you may want to navigate to the original essay to get each statement’s context.

First, the authors focus on what they as academic thinkers call “an extractive business model.” When I saw the term, I thought of the strip mines in Illinois. Giant draglines stripped the earth to expose coal. Once the coal was extracted, the scarred earth was bulldozed into what looked like regular prairie. It was not. Weeds grew. But to get corn or soy beans, the farmer had to spend big bucks to get chemicals and some Fancy Dan equipment to coax the trashed landscape to utility. Nice.

The essay does not make the downside of extractive practices clear. I will. Take a look at a group of teens in a fast food restaurant or at a public event. The group is a consequence of the online environment in which the individual spends hours each day. I am not sure how well the chemicals and equipment used to rehabilitate the strip mined prairie apply to humans, but I assume someone will do a study and report.

The second statement warranting a blue exclamation mark is:

Algorithms have become market gatekeepers and value allocators, and are now becoming producers and arbiters of knowledge.

From my perspective, the algorithms are expressions of human intent. The algorithms are not the gatekeepers and allocators. The algorithms express the intent, goals, and desires of the individuals who create them. The “users” knowingly or unknowingly give up certain thought methods and procedures in exchange for what appears to scratch a Maslow’s Hierarchy of Needs itch. I think in terms of the medieval Great Chain of Being. The people at the top own the companies. Their instrument of control is their service. The rest of the hierarchy reflects a skewed social order. A fish understands only the environment of the fish bowl. The rest of the “world” is tough to perceive and understand. In short, the fish is trapped. Online users (addicts?) are trapped.

The third statement I marked is:

The limits we place on algorithms and AI models will be instrumental to directing economic activity and human attention towards productive ends.

Okay, who exactly is going to place limits? The farmer who leased his land to the strip mining outfit made a decision. He traded the land for money. Who is to blame? The mining outfit? The farmer? The system which allowed the transaction?

The situation at this moment is that yip yap about open source AI and the other handwaving cannot alter the fact that a handful of large US companies and a number of motivated nation states are going to spend what’s necessary to obtain control.

Net net: Houston, we have a problem. Money buys power. AI is a next generation way to get it.

Stephen E Arnold, May 2, 2024

Using AI But For Avoiding Dumb Stuff One Hopes

May 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting essay called “How I Use AI To Help With TechDirt (And, No, It’s Not Writing Articles).” The main point of the write up is that artificial intelligence or smart software (my preferred phrase) can be useful for certain use cases. The article states:

I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles. It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.

Thanks, MSFT Copilot. Bad grammar and an incorrect use of the apostrophe. Also, I was much dumber-looking in the 9th grade. But good enough, the motto of some big software outfits, right?

The idea is that an AI system can function as a partner, research assistant, editor, and interlocutor. That sounds like what Microsoft calls a “copilot.” The article continues:

I initially couldn’t think of anything to ask the AI, so I asked people in Lex’s Discord how they used it. One user sent back a “scorecard” that he had created, which he asked Lex to use to review everything he wrote.
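
A minimal sketch of that scorecard workflow, assuming the public openai Python SDK (v1 or later), an API key in the environment, and an illustrative rubric, model name, and sample draft rather than the Discord user’s actual scorecard, might look like this:

```python
# A sketch of the "scorecard" review loop described above. Assumes the public
# openai SDK (v1+) with OPENAI_API_KEY set; the rubric, model name, and sample
# draft are illustrative assumptions, not the Discord user's actual scorecard.
from openai import OpenAI

client = OpenAI()

SCORECARD = (
    "Rate the draft from 1 to 5 on clarity of the main argument, strength of "
    "evidence, and logical flow. Then suggest three concrete improvements."
)


def critique(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a blunt editorial reviewer."},
            {"role": "user", "content": SCORECARD + "\n\nDRAFT:\n" + draft},
        ],
    )
    return response.choices[0].message.content


print(critique("Telegram is a platform tailored to obfuscated communications..."))
```

The model never writes the article; it only grades a draft against a rubric, which is the brainstorming-and-critique use the essay describes.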

The use case is that smart software functions like Miss Dalton, my English composition teacher at Woodruff High School in 1958. She was a firm believer in diagramming sentences, following the precepts of the Tressler & Christ textbook, and arcane rules such as capitalizing the first word following a colon (correctly used, of course).

I think her approach was intended to force students in 1958 to perform these word and text manipulations automatically. Then when we trooped to the library every month to do “research” on a topic she assigned, we could focus on the content, the logic, and the structural presentation of the information. If you attend one of my lectures, you can see that I am struggling to live up to her ideals.

However, when I plugged in my comments about Telegram as a platform tailored to obfuscated communications, the delivery of malware and X-rated content, and the enforcement of a myth that the entity known as Mr. Durov does not cooperate with certain entities to filter content, AI systems failed miserably. Not only were the systems lacking content; one — Microsoft Copilot, to be specific — had no functional content on the topic at all. Two other systems balked at discussing the delivery of CSAM within a Group’s Channel devoted to paying customers of what is either illegal or extremely unpleasant content.

Several observations are warranted:

  1. For certain types of content, the systems lack sufficient data to know what the heck I am talking about.
  2. For illegal activities, the systems are either pretending to be really stupid, or the developers have added STOP words to the filters to make darned sure no improper output would be presented.
  3. The systems are not up to date; for example, Mr. Durov was interviewed by Tucker Carlson a week before Mr. Durov blocked Ukraine Telegram Groups’ content for Telegram users in Russia.

Is it, therefore, reasonable to depend on a smart software system to provide input on a “newish” topic? Is it possible the smart software systems are fiddled by the developers so that no useful information is delivered to the user (free or paying)?

Net net: I am delighted people are finding smart software useful. For my lectures to law enforcement officers and cyber investigators, smart software is, as of May 1, 2024, not ready for prime time. My concern is that some individuals may not discern the problems with the outputs. Writing about the law and its interpretation is an area about which I am not qualified to comment. But perhaps legal content is different from garden variety criminal operations. No, I won’t ask, “What’s criminal?” I would rather rely on what Miss Dalton taught in 1958. Why? I am a dinobaby and deeply skeptical of probabilistic-based systems which do not incorporate Kolmogorov-Arnold methods. Hey, that’s my relative’s approach.

Stephen E Arnold, May 1, 2024
