AI Coding: Better, Faster, Cheaper. Just Pick Two, Please

January 29, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Visual Studio Magazine is not on my must-read list. Nevertheless, one of my research team told me that I needed to read “New GitHub Copilot Research Finds “Downward Pressure on Code Quality.” I had no idea what “downward pressure” means. I read the article trying to figure out what the plain English meaning of this tortured phrase meant. Was it the downward pressure on the metatarsals when a person is running to a job interview? Was it the deadly downward pressure exerted on the OceanGate submersible? Was it the force illustrated in the YouTube “Hydraulic Press Channel”?

image

A partner at a venture firms wants his open source recipients to produce more code better, faster, and cheaper. (He does not explain that one must pick two.) Thanks MSFT Copilot Bing thing. Good enough. But the green? Wow.

Wrong.

The writeup is a content marketing piece for a research report. That’s okay. I think a human may have written most of the article. Despite the frippery in the article, I spotted several factoids. If these are indeed verifiable, excitement in the world of machine generated open source software will ensue. Why does this matter? Well, in the words of the SmartNews content engine, “Read on.”

Here are the items of interest to me:

  1. Bad code is being created and added to the GitHub repositories.
  2. Code is recycled, despite smart efforts to reduce the copy-paste approach to programming.
  3. AI is preparing a field in which lousy, flawed, and possible worse software will flourish.

Stephen E Arnold, January 29, 2024

Modern Poison: Models, Data, and Outputs. Worry? Nah.

January 29, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

One bad apple does not a failed harvest make. Let’s hope. I read “Poisoned AI Went Rogue During Training and Couldn’t Be Taught to Behave Again in Legitimately Scary Study.” In several of my lectures in 2023 I included a section about poisoned data. When I described the method and provided some examples of content injection, the audience was mostly indifferent. When I delivered a similar talk in October 2023, those in my audience were attentive. The concept of intentionally fooling around with model thresholds, data used for training, and exploiting large language model developers’ efforts to process more current or what some call “real time” data hit home. For each of these lectures, my audience was composed of investigators and intelligence analysts.

image

How many bad apples are in the spectrum of smart software? Give up. Don’t feel bad. No one knows. Perhaps it is better to ignore the poisoned data problem? There is money to be made and innovators to chase the gold rush. Thanks, MSFT Copilot Bing thing. How is your email security? Oh, good enough, like the illustration with lots of bugs.

Write ups like “Poisoned AI Went Rogue…” add a twist to my tales. Specifically a function chunk of smart software began acting in a manner not only surprising but potentially harmful. The write up in LiveScience asserted:

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

Interesting. The article noted:

Artificial intelligence (AI) systems that were trained to be secretly malicious resisted state-of-the-art safety methods designed to "purge" them of dishonesty …  Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously. Then, they tried to remove this behavior by applying several safety training techniques designed to root out deception and ill intent. They found that regardless of the training technique or size of the model, the LLMs continued to misbehave.

Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, is quoted as saying:

"I think our results indicate that we don’t currently have a good defense against deception in AI systems — either via model poisoning or emergent deception — other than hoping it won’t happen…  And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defense against it. So I think our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."

If you want to read the research paper, you can find it at this link. Note that one of the authors is affiliated with the Amazon- and Google-supported Anthropic AI company.

Net net: We do not have at this time a “good defense” against this type of LLM poisoning. Do I have a clever observation, some words of reassurance, or any ideas for remediation?

Nope.

Stephen E Arnold, January 29, 2024

AI Will Take Whose Job, Ms. Newscaster?

January 29, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Will AI take jobs? Abso-frickin-lutely. Why? Cost savings. Period. In an era of “good enough” is the new mark of excellence, hallucinating software is going to speed up some really annoying commercial functions and reduce costs. What if the customers object to being called dorks? Too bad. The company will apologize, take down the wonky system, and put up another smart service. Better? No, good enough. Faster? Yep. Cheaper? Bet your bippy on that, pilgrim. (See, for a chuckle, AI Chatbot At Delivery Firm DPD Goes Rogue, Insults Customer And Criticizes Company.)

image

Hey, MSFT Bing thing, good enough. How is that MSFT email security today, kiddo?

I found this Fox write up fascinating: “Two-Thirds of Americans Say AI Could Do Their Job.” That works out to about 67 percent of an estimated workforce of 120 million to a couple of Costco parking lots of people. Give or take a few, of course.

The write up says:

A recent survey conducted by Spokeo found that despite seeing the potential benefits of AI, 66.6% of the 1,027 respondents admitted AI could carry out their workplace duties, and 74.8% said they were concerned about the technology’s impact on their industry as a whole.

Oh, oh. Now it is 75 percent. Add a few more Costco parking lots of people holding signs like “Will broadcast for food”, “Will think for food,” or “Will hold a sign for Happy Pollo Tacos.” (Didn’t some wizard at Davos suggest that five percent of jobs would be affected? Yeah, that’s on the money.)

The write up adds:

“Whether it’s because people realize that a lot of work can be easily automated, or they believe the hype in the media that AI is more advanced and powerful than it is, the AI box has now been opened. … The vast majority of those surveyed, 79.1%, said they think employers should offer training for ChatGPT and other AI tools.

Yep, take those free training courses advertised by some of the tech feudalists. You too can become an AI sales person just like “search experts” morphed into search engine optimization specialists. How is that working out? Good for the Google. For some others, a way station on the bus ride to the unemployment bureau perhaps?

Several observations:

  1. Smart software can generate the fake personas and the content. What’s the outlook for talking heads who are not celebrities or influencers as “real” journalists?
  2. Most people overestimate their value. Now the jobs for which these individuals compete, will go to the top one percent. Welcome to the feudal world of 21st century.
  3. More than holding signs and looking sad will be needed to generate revenue for some people.

And what about Fox news reports like the one on which this short essay is based? AI, baby, just like Sports Illustrated and the estimable SmartNews.

Stephen E Arnold, January 29, 2024

Why Stuff Does Not Work: Airplane Doors, Health Care Services, and Cyber Security Systems, Among Others

January 26, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The Downward Spiral of Technology” stuck a chord with me. Think about building monuments in the reign of Cleopatra. The workers can check out the sphinx and giant stone blocks in the pyramids and ask, “What happened to the technology? We are banging with bronze and crappy metal compounds and those ancient dudes were zipping along with snappier tech.? That conversation is imaginary, of course.

The author of “The Downward Spiral” is focusing on less dusty technology, the theme might resonate with my made up stone workers. Modern technology lacks some of the zing of the older methods. The essay by Thomas Klaffke hit on some themes my team has shared whilst stuffing Five Guys’s burgers in their shark-like mouths.

Here are several points I want to highlight. In closing, I will offer some of my team’s observations on the outcome of the Icarus emulators.

First, let’s think about search. One cannot do anything unless one can find electronic content. (Lawyers, please, don’t tell me you have associates work through the mostly-for-show books in your offices. You use online services. Your opponents in court print stuff out to make life miserable. But electronic content is the cat’s pajamas in my opinion.)

Here’s a table from the Mr. Klaffke essay:

image

Two things are important in this comparison of the “old” tech and the “new” tech deployed by the estimable Google outfit. Number one: Search in Google’s early days made an attempt to provide content relevant to the query. The system was reasonably good, but it was not perfect. Messrs. Brin and Page fancy danced around issues like disambiguation, date and time data, date and time of crawl, and forward and rearward truncation. Flash forward to the present day, the massive contributions of Prabhakar Raghavan and other “in charge of search” deliver irrelevant information. To find useful material, navigate to a Google Dorks service and use those tips and tricks. Otherwise, forget it and give Swisscows.com, StartPage.com, or Yandex.com a whirl. You are correct. I don’t use the smart Web search engines. I am a dinobaby, and I don’t want thresholds set by a 20 year old filtering information for me. Thanks but no thanks.

The second point is that search today is a monopoly. It takes specialized expertise to find useful, actionable, and accurate information. Most people — even those with law degrees, MBAs, and the ability to copy and paste code — cannot cope with provenance, verification, validation, and informed filtering performed by a subject matter expert. Baloney does not work in my corner of the world. Baloney is not a favorite food group for me or those who are on my team. Kudos to Mr. Klaffke to make this point. Let’s hope someone listens. I have given up trying to communicate the intellectual issues lousy search and retrieval creates. Good enough. Nope.

image

Yep, some of today’s tools are less effective than modern gizmos. Hey, how about those new mobile phones? Thanks, MSFT Copilot Bing thing. Good enough. How’s the MSFT email security today? Oh, I asked that already.

Second, Mr Klaffke gently reminds his reader that most people do not know snow cones from Shinola when it comes to information. Most people assume that a computer output is correct. This is just plain stupid. He provides some useful examples of problems with hardware and user behavior. Are his examples ones that will change behaviors. Nope. It is, in my opinion, too late. Information is an undifferentiated haze of words, phrases, ideas, facts, and opinions. Living in a haze and letting signals from online emitters guide one is a good way to run a tiny boat into a big reef. Enjoy the swim.

Third, Mr. Klaffke introduces the plumbing of the good-enough mentality. He is accurate. Some major social functions are broken. At lunch today, I mentioned the writings about ethics by Thomas Dewey and William James. My point was that these fellows wrote about behavior associated with a world long gone. It would be trendy to wear a top hat and ride in a horse drawn carriage. It would not be trendy to expect that a person would work and do his or her best to do a good job for the agreed-upon wage. Today I watched a worker who played with his mobile phone instead of stocking the shelves in the local grocery store. That’s the norm. Good enough is plenty good. Why work? Just pay me, and I will check out Instagram.

I do not agree with Mr. Klaffke’s closing statement; to wit:

The problem is not that the “machine” of humanity, of earth is broken and therefore needs an upgrade. The problem is that we think of it as a “machine”.

The problem is that worldwide shared values and cultural norms are eroding. Once the glue gives way, we are in deep doo doo.

Here are my observations:

  1. No entity, including governments, can do anything to reverse thousands of years of cultural accretion of norms, standards, and shared beliefs.
  2. The vast majority of people alive today are reverting back to some fascinating behaviors. “Fascinating” is not a positive in the sense in which I am using the word.
  3. Online has accelerated the stress on social glue; smart software is the turbocharger of abrupt, hard-to-understand change.

Net net: Please, read Mr. Klaffke’s essay. You may have an idea for remediating one or more of today’s challenges.

Stephen E Arnold, January 25, 2024

Education on the Cheap: No AI Required

January 26, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I don’t write about education too often. I do like to mention the plagiarizing methods of some academics. What fun! I located a true research gem (probably non-reproducible, hallucinogenic, or just synthetic but I don’t care). “Emergency-Hired Teachers Do Just as Well as Those Who Go Through Normal Training” states:

New research from Massachusetts and New Jersey suggests maybe not. In both states, teachers who entered the profession without completing the full requirements performed no worse than their normally trained peers.

image

A sanitation worker with a high school diploma is teaching advanced seventh graders about linear equations. The students are engaged… with their mobile phones. Hey, good enough, MSFT Copilot Bing thing. Good enough.

Then a modest question:

The better question now is why these temporary waivers aren’t being made permanent.

And what’s the write up say? I quote:

In other words, making it harder to become a teacher will reduce the supply but offers no guarantee that those who meet the bar will actually be effective in the classroom.

Huh?

Using people who did not slog through college and learned something (one hopes) is expensive. Think of the cost savings when using those who are untrained and unencumbered with expectations of big money! When good enough is the benchmark of excellence, embrace those without an comprehensive four-year or more education. Ooops. Who wants that?

I thought that I once heard that the best, most educated teaching professionals should work with the youngest students. I must have been doing some of that AI-addled thinking common among some in the old age home. When’s lunch?

Stephen E Arnold, January 26, 2024

Apple, Now Number One, But Maybe Not in Mobile Security?

January 26, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

MIT Professor Stuart E. Madnick allegedly discovered that iPhone data breaches tripled between 2013-2022. Venture Beat explains more in the article “Why Attackers Love To Target Misconfigured Clouds And Phones.”

Hackers use every method to benefit from misconfiguration, but ransomware is their favorite technique. Madnick discovered a near 50% increase in ransomware attacks in organizations in the first six months of 2023 compared to 2022. After finding the breach, hackers then attack organizations’ mobile phone fleets. They freeze all communications until the ransom is paid.

Bad actors want to find the easiest ways into clouds. Unfortunately organizations are unaware that attacks happen when they don’t monitor their networks:

Merritt Baer, Field CISO at Lacework, says that bad actors look first for an easy front door to access misconfigured clouds, the identities and access to entire fleets of mobile devices. “Novel exploits (zero-days) or even new uses of existing exploits are expensive to research and discover. Why burn an expensive zero-day when you don’t need to? Most bad actors can find a way in through the “front door”– that is, using legitimate credentials (in unauthorized ways).”

Baer added, ‘This avenue works because most permissions are overprovisioned (they aren’t pruned down/least privileged as much as they could be), and because with legitimate credentials, it’s hard to tell which calls are authorized/ done by a real user versus malicious/ done by a bad actor.’”

Almost 99% of cloud security breaches are due to incorrectly set manual controls. Also nearly 50% of organizations unintentionally exposed storage, APIs, network scents, and applications. These breaches cost an average of $4 million to solve.

Organizations need to rely on more than encryption to protect their infrastructures. Most attacks occur because bad actors use authenticate credentials. Unified endpoint management, passwordless multi-factor authentication, and mobile device management housed on a single platform is the best defense.

How about these possibly true revelations about Apple?

Whitney Grace, January 26, 2024

AI and Web Search: A Meh-crosoft and Google Mismatch

January 25, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read a shocking report summary. Is the report like one of those Harvard Medical scholarly articles or an essay from the former president of Stanford University? I don’t know. Nevertheless, let’s look at the assertions in “Report: ChatGPT Hasn’t Helped Bing Compete With Google.” I am not sure if the information provides convincing proof that Googzilla is a big, healthy market dominator or if Microsoft has been fooling itself about the power of the artificial intelligence revolution.

image

The young inventor presents his next big thing to a savvy senior executive at a techno-feudal company. The senior executive is impressed. Are you? I know I am. Thanks, MSFT Copilot Bing thing. Too bad you timed out and told me, “I apologize for the confusion. I’ll try to create a more cartoon-style illustration this time.” Then you crashed. Good enough, right?

Let’s look at the write up. I noted this passage which is coming to me third, maybe fourth hand, but I am a dinobaby and I go with the online flow:

Microsoft added the generative artificial intelligence (AI) tool to its search engine early last year after investing $10 billion in ChatGPT creator OpenAI. But according to a recent Bloomberg News report — which cited data analytics company StatCounter — Bing ended 2023 with just 3.4% of the worldwide search market, compared to Google’s 91.6% share. That’s up less than 1 percentage point since the company announced the ChatGPT integration last January.

I am okay with the $10 billion. Why not bet big? The tactics works for some each year at the Kentucky Derby. I don’t know about the 91.6 number, however. The point six is troubling. What’s with precision when dealing with a result that makes clear that of 100 random people on line at the ever efficient BWI Airport, only eight will know how to retrieve information from another Web search system; for example, the busy Bing or the super reliable Yandex.ru service.

If we assume that the Bing information of modest user uptake, those $10 billion were not enough to do much more than get the management experts at Alphabet to press the Red Alert fire alarm. One could reason: Google is a monopoly in spirit if not in actual fact. If we accept the market share of Bing, Microsoft is putting life preservers manufactured with marketing foam and bricks on its Paul Allen-esque super yacht.

The write up says via what looks like recycled information:

“We are at the gold rush moment when it comes to AI and search,” Shane Greenstein, an economist and professor at Harvard Business School, told Bloomberg. “At the moment, I doubt AI will move the needle because, in search, you need a flywheel: the more searches you have, the better answers are. Google is the only firm who has this dynamic well-established.”

Yeah, Harvard. Oh, well, the sweatshirts are recognized the world over. Accuracy, trust, and integrity implied too.

Net net: What’s next? Will Microsoft make it even more difficult to use another outfit’s search system. Swisscows.com, you may be headed for the abattoir. StartPage.com, you will face your end.

Stephen E Arnold, January 25, 2024

Content Mastication: A Controversial Business Tactic

January 25, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

In the midst of the unfolding copyright issues, I found this post quite interesting. Torrent Freak published a story titled “Meta Admits Use of ‘Pirated’ Book Dataset to Train AI.” Is the story spot on? I sure don’t know. Nevertheless, the headline is a magnetic one. The story reports:

The cases allege that tech companies, including Meta and OpenAI, used the controversial Books3 dataset to train their models. The Books3 dataset has a clear piracy angle. It was created by AI researcher Shawn Presser in 2020, who scraped the library of ‘pirate’ site Bibliotik. This book archive was publicly hosted by digital archiving collective ‘The Eye‘ at the time, alongside various other data sources.

image

A combination of old-fashioned content collection and smart systems move information from Point A (a copyright owner’s night table) to a smart software system. MSFT’s second class Copilot Bing thing created this cartoon. Sigh. Not even good enough now in my opinion.

What was in the Books3 data collection? The TF story elucidates:

The general vision was that the plaintext collection of more than 195,000 books, which is nearly 37GB…

What did Meta allegedly do to make its Llama smarter than the average member of the Camelidae family? Let’s roll the TF quote:

Responding to a lawsuit from writer/comedian Sarah Silverman, author Richard Kadrey, and other rights holders, the tech giant admits that “portions of Books3” were used to train the Llama AI model before its public release. “Meta admits that it used portions of the Books3 dataset, among many other materials, to train Llama 1 and Llama 2,” Meta writes in its answer [to a court].

The article does not include any statements like “Thank you for the question” or “I don’t know. My team will provide the answer at the earliest possible moment.” Nope. Just an alleged admission.

How will the Meta and parallel copyright legal matter evolve? Beyond Search has zero clue. The US judicial system has deep and mysterious logic. One thing is certain: Senior executives do not like uncertainty and risk. The copyright litigation seems tailored to cause some techno feudalists to imagine a world in which laws, annoying regulators, and people yapping about intellectual property were nudged into a different line of work. One example which comes to mind is building secure bunkers or taking care of the lawn.

Stephen E Arnold, January 25, 2024

Fujitsu: Good Enough Software, Pretty Good Swizzling

January 25, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The USPS is often interesting. But the UK’s postal system, however, is much worse. I think we can thank the public private US postal construct for not screwing over those who manage branch offices. Computer Weekly details how the UK postal system’s leaders knowingly had an IT problem and blamed employees: “Fujitsu Bosses Knew About Post Office Horizon IT Flaws, Says Insider.”

The UK postal system used the Post Office Horizon IT system supplied by Fujitsu. The Fujitsu bosses allowed it to be knowingly installed despite massive problems. Hundreds of UK subpostmasters were accused of fraud and false accounting. They were held liable. Many were imprisoned, had their finances ruined, and lost jobs. Many of the UK subpostmasters fought the accusations. It wasn’t until 2019 that the UK High Court proved it was Horizon IT’s fault.

The Fujitsu that “designed” the postal IT system didn’t have the correct education and experience for the project. It was built on a project that didn’t properly record and process payments. A developer on the project shared with Computer Weekly:

“‘To my knowledge, no one on the team had a computer science degree or any degree-level qualifications in the right field. They might have had lower-level qualifications or certifications, but none of them had any experience in big development projects, or knew how to do any of this stuff properly. They didn’t know how to do it.’”

The Post Office Horizon It system was the largest commercial system in Europe and it didn’t work. The software was bloated, transcribed gibberish, and was held together with the digital equivalent of Scotch tape. This case is the largest miscarriage of justice in current UK history. Thankfully the truth has come out and the subpostmasters will be compensated. The compensation doesn’t return stolen time but it will ease their current burdens.

Fujitsu is getting some scrutiny. Does the company manufacture grocery self check out stations? If so, more outstanding work.

Whitney Grace, January 25, 2024

The NSO Group Back in the News: Is That a Good Thing?

January 24, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Some outfits struggle to get PR, not the NSO Group. The situation is no “dream.” I spotted this write up in 9 to 5 Mac: “Apple Wins Early Battle against NSO after Suing Spyware Mercenaries for Attacking iPhone Users.” For me, the main point of the article is:

Judge Donato ruled that NSO Group’s request for dismissal in the US in favor of a trial in Israel didn’t meet the bar. Instead, Judge Donato suggested that Apple would face the same challenges in Israel that NSO faces in the US.

image

A senior manager who is an attorney skilled in government processes looks at the desk in his new office. Wow, that looks untidy. Thanks, MSFT Copilot Bing thing. How’s that email security issue coming along? Ah, good enough, you say?

I think this means that the legal spat will be fought in the US of A. Here’s the sentence quoted by 9 to 5 Mac which allegedly appeared in a court document:

NSO has not demonstrated otherwise. NSO also overlooks the fact that the challenges will be amenable to a number of mitigating practices.

The write up includes this passage:

An Apple spokesperson tells 9to5Mac that the company will continue to protect users against 21st century mercenaries like the NSO Group. Litigation against the Pegasus spyware maker is part of a larger effort to protect users…

From my point of view, the techno feudal outfit has surfed on the PR magnetism of the NSO Group. Furthermore, the management team at NSO Group faces what seems to be a bit of a legal hassle. Some may believe that the often ineffective Israeli cyber security technology which failed to signal, thwart, or disrupt the October 2023 dust up requires more intense scrutiny. NSO Group, therefore, is in the spotlight.

More interesting from my vantage point is the question, “How can NSO Group’s lawyering-savvy senior management not demonstrate its case in such a way to, in effect, kill some of the PR magnetism. Take it from me. This is not a “dream” assignment for NSO Group’s legal eagles. I would also be remiss if I did not mention that Apple has quite a bit of spare cash with which to feather the nest of legal eagles. Apple wants to be perceived as the user’s privacy advocate and BFF. When it comes to spending money and rounding up those who love their Apple devices, the estimable Cupertino outfit may be a bit of a challenge, even to attorneys with NSA and DHS experience.

As someone said about publicity, any publicity is good publicity. I am not sure the categorical affirmative is shared by everyone involved with NSO Group. And where is Hulio? He’s down by the school yard. He doesn’t know where he’s going, but Hulio is going the other way. (A tip of the hat to Paul Simon and his 1972 hit.)

Stephen E Arnold, January 24, 2024

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta