Ho-Hum Write Up with Some Golden Nuggets
January 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Anthropic Confirms It Suffered a Data Leak.” I know. I know. Another security breach involving an outfit working with the Bezos bulldozer and Googzilla. Snore. But in the write up, tucked away were a couple of statements I found interesting.
“Hey, pardner, I found an inconsistency.” Two tries for a prospector and a horse. Good enough, MSFT Copilot Bing thing. I won’t ask about your secure email.
Here these items are:
- Microsoft, Amazon and others are being asked by a US government agency “to provide agreements and rationale for collaborations and their implications; analysis of competitive impact; and information on any other government entities requesting information or performing investigations.” Regulatory scrutiny of the techno feudal champions?
- The write up asserts: “Anthropic has made a “long-term commitment” to provide AWS customers with “future generations” of its models through Amazon Bedrock, and will allow them early access to unique features for model customization and fine-tuning purposes.” Love at first sight?
- And a fascinating quote from a Googler. Note: I have put in bold some key words which I found interesting:
“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” Google Cloud CEO Thomas Kurian said in a statement on their relationship. “This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”
Yeah, but the article is called “Anthropic Confirms It Suffered a Data Leak.” What’s with the securely?
Ah, regulatory scrutiny and obvious inconsistency. Ho-hum with a good enough tossed in for spice.
Stephen E Arnold, January 30, 2024
AI Coding: Better, Faster, Cheaper. Just Pick Two, Please
January 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Visual Studio Magazine is not on my must-read list. Nevertheless, one of my research team told me that I needed to read “New GitHub Copilot Research Finds “Downward Pressure on Code Quality.” I had no idea what “downward pressure” means. I read the article trying to figure out what the plain English meaning of this tortured phrase meant. Was it the downward pressure on the metatarsals when a person is running to a job interview? Was it the deadly downward pressure exerted on the OceanGate submersible? Was it the force illustrated in the YouTube “Hydraulic Press Channel”?
A partner at a venture firms wants his open source recipients to produce more code better, faster, and cheaper. (He does not explain that one must pick two.) Thanks MSFT Copilot Bing thing. Good enough. But the green? Wow.
Wrong.
The writeup is a content marketing piece for a research report. That’s okay. I think a human may have written most of the article. Despite the frippery in the article, I spotted several factoids. If these are indeed verifiable, excitement in the world of machine generated open source software will ensue. Why does this matter? Well, in the words of the SmartNews content engine, “Read on.”
Here are the items of interest to me:
- Bad code is being created and added to the GitHub repositories.
- Code is recycled, despite smart efforts to reduce the copy-paste approach to programming.
- AI is preparing a field in which lousy, flawed, and possible worse software will flourish.
Stephen E Arnold, January 29, 2024
Modern Poison: Models, Data, and Outputs. Worry? Nah.
January 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
One bad apple does not a failed harvest make. Let’s hope. I read “Poisoned AI Went Rogue During Training and Couldn’t Be Taught to Behave Again in Legitimately Scary Study.” In several of my lectures in 2023 I included a section about poisoned data. When I described the method and provided some examples of content injection, the audience was mostly indifferent. When I delivered a similar talk in October 2023, those in my audience were attentive. The concept of intentionally fooling around with model thresholds, data used for training, and exploiting large language model developers’ efforts to process more current or what some call “real time” data hit home. For each of these lectures, my audience was composed of investigators and intelligence analysts.
How many bad apples are in the spectrum of smart software? Give up. Don’t feel bad. No one knows. Perhaps it is better to ignore the poisoned data problem? There is money to be made and innovators to chase the gold rush. Thanks, MSFT Copilot Bing thing. How is your email security? Oh, good enough, like the illustration with lots of bugs.
Write ups like “Poisoned AI Went Rogue…” add a twist to my tales. Specifically a function chunk of smart software began acting in a manner not only surprising but potentially harmful. The write up in LiveScience asserted:
AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.
Interesting. The article noted:
Artificial intelligence (AI) systems that were trained to be secretly malicious resisted state-of-the-art safety methods designed to "purge" them of dishonesty … Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously. Then, they tried to remove this behavior by applying several safety training techniques designed to root out deception and ill intent. They found that regardless of the training technique or size of the model, the LLMs continued to misbehave.
Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, is quoted as saying:
"I think our results indicate that we don’t currently have a good defense against deception in AI systems — either via model poisoning or emergent deception — other than hoping it won’t happen… And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defense against it. So I think our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."
If you want to read the research paper, you can find it at this link. Note that one of the authors is affiliated with the Amazon- and Google-supported Anthropic AI company.
Net net: We do not have at this time a “good defense” against this type of LLM poisoning. Do I have a clever observation, some words of reassurance, or any ideas for remediation?
Nope.
Stephen E Arnold, January 29, 2024
AI Will Take Whose Job, Ms. Newscaster?
January 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Will AI take jobs? Abso-frickin-lutely. Why? Cost savings. Period. In an era of “good enough” is the new mark of excellence, hallucinating software is going to speed up some really annoying commercial functions and reduce costs. What if the customers object to being called dorks? Too bad. The company will apologize, take down the wonky system, and put up another smart service. Better? No, good enough. Faster? Yep. Cheaper? Bet your bippy on that, pilgrim. (See, for a chuckle, AI Chatbot At Delivery Firm DPD Goes Rogue, Insults Customer And Criticizes Company.)
Hey, MSFT Bing thing, good enough. How is that MSFT email security today, kiddo?
I found this Fox write up fascinating: “Two-Thirds of Americans Say AI Could Do Their Job.” That works out to about 67 percent of an estimated workforce of 120 million to a couple of Costco parking lots of people. Give or take a few, of course.
The write up says:
A recent survey conducted by Spokeo found that despite seeing the potential benefits of AI, 66.6% of the 1,027 respondents admitted AI could carry out their workplace duties, and 74.8% said they were concerned about the technology’s impact on their industry as a whole.
Oh, oh. Now it is 75 percent. Add a few more Costco parking lots of people holding signs like “Will broadcast for food”, “Will think for food,” or “Will hold a sign for Happy Pollo Tacos.” (Didn’t some wizard at Davos suggest that five percent of jobs would be affected? Yeah, that’s on the money.)
The write up adds:
“Whether it’s because people realize that a lot of work can be easily automated, or they believe the hype in the media that AI is more advanced and powerful than it is, the AI box has now been opened. … The vast majority of those surveyed, 79.1%, said they think employers should offer training for ChatGPT and other AI tools.
Yep, take those free training courses advertised by some of the tech feudalists. You too can become an AI sales person just like “search experts” morphed into search engine optimization specialists. How is that working out? Good for the Google. For some others, a way station on the bus ride to the unemployment bureau perhaps?
Several observations:
- Smart software can generate the fake personas and the content. What’s the outlook for talking heads who are not celebrities or influencers as “real” journalists?
- Most people overestimate their value. Now the jobs for which these individuals compete, will go to the top one percent. Welcome to the feudal world of 21st century.
- More than holding signs and looking sad will be needed to generate revenue for some people.
And what about Fox news reports like the one on which this short essay is based? AI, baby, just like Sports Illustrated and the estimable SmartNews.
Stephen E Arnold, January 29, 2024
AI and Web Search: A Meh-crosoft and Google Mismatch
January 25, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read a shocking report summary. Is the report like one of those Harvard Medical scholarly articles or an essay from the former president of Stanford University? I don’t know. Nevertheless, let’s look at the assertions in “Report: ChatGPT Hasn’t Helped Bing Compete With Google.” I am not sure if the information provides convincing proof that Googzilla is a big, healthy market dominator or if Microsoft has been fooling itself about the power of the artificial intelligence revolution.
The young inventor presents his next big thing to a savvy senior executive at a techno-feudal company. The senior executive is impressed. Are you? I know I am. Thanks, MSFT Copilot Bing thing. Too bad you timed out and told me, “I apologize for the confusion. I’ll try to create a more cartoon-style illustration this time.” Then you crashed. Good enough, right?
Let’s look at the write up. I noted this passage which is coming to me third, maybe fourth hand, but I am a dinobaby and I go with the online flow:
Microsoft added the generative artificial intelligence (AI) tool to its search engine early last year after investing $10 billion in ChatGPT creator OpenAI. But according to a recent Bloomberg News report — which cited data analytics company StatCounter — Bing ended 2023 with just 3.4% of the worldwide search market, compared to Google’s 91.6% share. That’s up less than 1 percentage point since the company announced the ChatGPT integration last January.
I am okay with the $10 billion. Why not bet big? The tactics works for some each year at the Kentucky Derby. I don’t know about the 91.6 number, however. The point six is troubling. What’s with precision when dealing with a result that makes clear that of 100 random people on line at the ever efficient BWI Airport, only eight will know how to retrieve information from another Web search system; for example, the busy Bing or the super reliable Yandex.ru service.
If we assume that the Bing information of modest user uptake, those $10 billion were not enough to do much more than get the management experts at Alphabet to press the Red Alert fire alarm. One could reason: Google is a monopoly in spirit if not in actual fact. If we accept the market share of Bing, Microsoft is putting life preservers manufactured with marketing foam and bricks on its Paul Allen-esque super yacht.
The write up says via what looks like recycled information:
“We are at the gold rush moment when it comes to AI and search,” Shane Greenstein, an economist and professor at Harvard Business School, told Bloomberg. “At the moment, I doubt AI will move the needle because, in search, you need a flywheel: the more searches you have, the better answers are. Google is the only firm who has this dynamic well-established.”
Yeah, Harvard. Oh, well, the sweatshirts are recognized the world over. Accuracy, trust, and integrity implied too.
Net net: What’s next? Will Microsoft make it even more difficult to use another outfit’s search system. Swisscows.com, you may be headed for the abattoir. StartPage.com, you will face your end.
Stephen E Arnold, January 25, 2024
Content Mastication: A Controversial Business Tactic
January 25, 2024
This essay is the work of a dumb dinobaby. No smart software required.
In the midst of the unfolding copyright issues, I found this post quite interesting. Torrent Freak published a story titled “Meta Admits Use of ‘Pirated’ Book Dataset to Train AI.” Is the story spot on? I sure don’t know. Nevertheless, the headline is a magnetic one. The story reports:
The cases allege that tech companies, including Meta and OpenAI, used the controversial Books3 dataset to train their models. The Books3 dataset has a clear piracy angle. It was created by AI researcher Shawn Presser in 2020, who scraped the library of ‘pirate’ site Bibliotik. This book archive was publicly hosted by digital archiving collective ‘The Eye‘ at the time, alongside various other data sources.
A combination of old-fashioned content collection and smart systems move information from Point A (a copyright owner’s night table) to a smart software system. MSFT’s second class Copilot Bing thing created this cartoon. Sigh. Not even good enough now in my opinion.
What was in the Books3 data collection? The TF story elucidates:
The general vision was that the plaintext collection of more than 195,000 books, which is nearly 37GB…
What did Meta allegedly do to make its Llama smarter than the average member of the Camelidae family? Let’s roll the TF quote:
Responding to a lawsuit from writer/comedian Sarah Silverman, author Richard Kadrey, and other rights holders, the tech giant admits that “portions of Books3” were used to train the Llama AI model before its public release. “Meta admits that it used portions of the Books3 dataset, among many other materials, to train Llama 1 and Llama 2,” Meta writes in its answer [to a court].
The article does not include any statements like “Thank you for the question” or “I don’t know. My team will provide the answer at the earliest possible moment.” Nope. Just an alleged admission.
How will the Meta and parallel copyright legal matter evolve? Beyond Search has zero clue. The US judicial system has deep and mysterious logic. One thing is certain: Senior executives do not like uncertainty and risk. The copyright litigation seems tailored to cause some techno feudalists to imagine a world in which laws, annoying regulators, and people yapping about intellectual property were nudged into a different line of work. One example which comes to mind is building secure bunkers or taking care of the lawn.
Stephen E Arnold, January 25, 2024
Goat Trading: AI at Davos
January 21, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The AI supercars are racing along the Information Superhighway. Nikkei Asia published what I thought was the equivalent of archaeologists translating a Babylonian clay table about goat trading. Interesting but a bit out of sync with what was happening in a souk. Goat trading, if my understanding of Babylonian commerce, was a combination of a Filene’s basement sale and a hot rod parts swap meet. The article which evoked this thought was “Generative AI Regulation Dominates the Conversation at Davos.” No kidding? Really? I thought some at Davos were into money. I mean everything in Switzerland comes back to money in my experience.
Here’s a passage I found with a nod to the clay tablets of yore:
U.N. Secretary-General Antonio Guterres, during a speech at Davos, flagged risks that AI poses to human rights, personal privacy and societies, calling on the private sector to join a multi-stakeholder effort to develop a "networked and adaptive" governance model for AI.
Now visualize a market at which middlemen, buyers of goats, sellers of goats, funders of goat transactions, and the goats themselves are in the air. Heady. Bold. Like the hot air filling a balloon, an unlikely construct takes flight. Can anyone govern a goat market or the trajectory of the hot air balloons floated by avid outputters?
Intense discussions can cause a number of balloons to float with hot air power. Talk is input to AI, isn’t it? Thanks, MSFT Copilot Bing thing. Good enough.
The world of AI reminds me the ultimate outcome of intense discussions about the buying and selling of goats, horses, and AI companies. The official chatter and the “what ifs” are irrelevant in what is going on with smart software. Here’s another quote from the Nikkei write up:
In December, the European Union became the first to provisionally pass AI legislation. Countries around the world have been exploring regulation and governance around AI. Many sessions in Davos explored governance and regulations and why global leaders and tech companies should collaborate.
How are those official documents’ content changing the world of artificial intelligence? I think one can spot a hot air balloon held aloft on the heated emissions from the officials, important personages, and the individuals who are “experts” in all things “smart.”
Another quote, possibly applicable to goat trading in Babylon:
Vera Jourova, European Commission vice president for values and transparency, said during a panel discussion in Davos, that "legislation is much slower than the world of technologies, but that’s law." "We suddenly saw the generative AI at the foundation models of Chat GPT," she continued. "And it moved us to draft, together with local legislators, the new chapter in the AI act. We tried to react on the new real reality. The result is there. The fine tuning is still ongoing, but I believe that the AI act will come into force."
I am confident that there are laws regulating goat trading. I believe that some people follow those laws. On the other hand, when I was in a far off dusty land, I watched how goats were bought and sold. What does goat trading have to do with regulating, governing, or creating some global consensus about AI?
The marketplace is roaring along. You wanna buy a goat? There is a smart software vendor who will help you.
Stephen E Arnold, January xx, 2024
Regulators Shift into Gear to Investigate an AI Tie Up
January 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Solicitors, lawyers, and avocats want to mark the anniversary of the AI big bang. About one year ago, Microsoft pushed Google into hitting its Code Red button. Investment firms, developers, and wild-eyed entrepreneurs knew smart software was the real deal, not a digital file of a cartoon like that NFT baloney. In the last 12 months, AI went from jargon and eliciting yawns to the treasure map to the fabled city of El Dorado (even if it was a suburb of Grants, New Mexico. Google got the message quickly. The lawyers. Well, not too quickly.
Regulators look through the technological pile of 2023 gadgets. Despite being last year’s big thing, the law makers and justice deciders move into action mode. Exciting. Thanks, MSFT Copilot Bing thing. Good enough.
“EU Joins UK in Scrutinizing OpenAI’s Relationship with Microsoft” documents what happens when lawyers — after decades of inaction — wake to do something constructive. Social media gutted the fabric of many cultural norms. AI isn’t going to be given a 20 year free pass. No way.
The write up reports:
Antitrust regulators in the EU have joined their British counterparts in scrutinizing Microsoft’s alliance with OpenAI.
What will happen now? Here’s my short list of actions:
- Legal eagles on both sides of the Atlantic will begin grooming their feathers in order to be selected to deal with the assorted forms, filings, hearings, and advisory meetings. Some of the lawyers will call Ferrari to make sure they are eligible to buy a supercar; others may cast an eye on an impounded oligarch-linked yacht. Yep, big bucks ahead.
- Microsoft and OpenAI will let loose an platoon of humanoid art history and business administration majors. These professionals will create a wide range of informative explainers. Smart software will be pressed into duty, and I anticipate some smart automation to provide Teflon the the flow of digital documentation.
- Firms — possibly some based in the EU and a few bold souls in the US — will present information making clear that competition is a good thing. Governments must regulate smart software
- Entities hostile to the EU and the US will also output information or disinformation. Which is what depends on one’s perspective.
In short, 2024 will be an interesting year because one of the major threat to the Google could be converted to the digital equivalent of a eunuch in an Assyrian ruler’s court. What will this mean? Google wins. Unanticipated consequence? Absolutely.
Stephen E Arnold, January 19, 2024
Information Voids for Vacuous Intellects
January 18, 2024
This essay is the work of a dumb dinobaby. No smart software required.
In countries around the world, 2024 is a critical election year, and the problem of online mis- and disinformation is worse than ever. Nature emphasizes the seriousness of the issue as it describes “How Online Misinformation Exploits ‘Information Voids’—and What to Do About It.” Apparently we humans are so bad at considering the source that advising us to do our own research just makes the situation worse. Citing a recent Nature study, the article states:
“According to the ‘illusory truth effect’, people perceive something to be true the more they are exposed to it, regardless of its veracity. This phenomenon pre-dates the digital age and now manifests itself through search engines and social media. In their recent study, Kevin Aslett, a political scientist at the University of Central Florida in Orlando, and his colleagues found that people who used Google Search to evaluate the accuracy of news stories — stories that the authors but not the participants knew to be inaccurate — ended up trusting those stories more. This is because their attempts to search for such news made them more likely to be shown sources that corroborated an inaccurate story.”
Doesn’t Google bear some responsibility for this phenomenon? Apparently the company believes it is already doing enough by deprioritizing unsubstantiated news, posting content warnings, and including its “about this result” tab. But it is all too easy to wander right past those measures into a “data void,” a virtual space full of specious content. The first impulse when confronted with questionable information is to copy the claim and paste it straight into a search bar. But that is the worst approach. We learn:
“When [participants] entered terms used in inaccurate news stories, such as ‘engineered famine’, to get information, they were more likely to find sources uncritically reporting an engineered famine. The results also held when participants used search terms to describe other unsubstantiated claims about SARS-CoV-2: for example, that it rarely spreads between asymptomatic people, or that it surges among people even after they are vaccinated. Clearly, copying terms from inaccurate news stories into a search engine reinforces misinformation, making it a poor method for verifying accuracy.”
But what to do instead? The article notes Google steadfastly refuses to moderate content, as social media platforms do, preferring to rely on its (opaque) automated methods. Aslett and company suggest inserting human judgement into the process could help, but apparently that is too old fashioned for Google. Could educating people on better research methods help? Sure, if they would only take the time to apply them. We are left with this conclusion: instead of researching claims from untrustworthy sources, one should just ignore them. But that brings us full circle: one must be willing and able to discern trustworthy from untrustworthy sources. Is that too much to ask?
Cynthia Murrell, January 18, 2024
Two Surveys. One Message. Too Bad
January 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Generative Artificial Intelligence Will Lead to Job Cuts This Year, CEOs Say.” The data come from a consulting/accounting outfit’s survey of executives at the oh-so-exclusive World Economic Forum meeting in the Piscataway, New Jersey, of Switzerland. The company running the survey is PwC (once an acronym for Price Waterhouse Coopers. The moniker has embraced a number of interesting investigations. For details, navigate to this link.)
Survey says, “Economic gain is the meaning of life.” Thanks, MidJourney, good enough.
The big finding from my point of view is:
A quarter of global chief executives expect the deployment of generative artificial intelligence to lead to headcount reductions of at least 5 per cent this year
Good, reassuring number from big gun world leaders.
However, the International Monetary Fund also did a survey. The percentage of jobs affected range from 26 percent in low income countries, 40 percent for emerging markets, and 60 percent for advanced economies.
What can one make of these numbers; specifically, the five percent to the 60 percent? My team’s thoughts are:
- The gap is interesting, but the CEOs appear to be either downplaying, displaying PR output, or working to avoid getting caught in sticky wicket.
- The methodology and the sample of each survey are different, but both are skewed. The IMF taps analysts, bankers, and politicians. PwC goes to those who are prospects for PwC professional services.
- Each survey suggests that government efforts to manage smart software are likely to be futile. On one hand, CEOs will say, “No big deal.” Some will point to the PwC survey and say, “Here’s proof.” The financial types will hold up the IMF results and say, “We need to move fast or we risk losing out on the efficiency payback.”
What does Bill Gates think about smart software? In “Microsoft Co-Founder Bill Gates on AI’s Impact on Jobs: It’s Great for White-Collar Workers, Coders” the genius for our time says:
I have found it’s a real productivity increase. Likewise, for coders, you’re seeing 40%, 50% productivity improvements which means you can get programs [done] sooner. You can make them higher quality and make them better. So mostly what we’ll see is that the productivity of white-collar [workers] will go up
Happy days for sure! What’s next? Smart software will move forward. Potential payouts are too juicy. The World Economic Forum and the IMF share one key core tenet: Money. (Tip: Be young.)
Stephen E Arnold, January 17, 2024