Can Your Job Be Orchestrated? Yes? Okay, It Will Be Smartified
March 13, 2024
This essay is the work of a dumb dinobaby. No smart software required.
My work career over the last 60 years has been filled with luck. I have been in the right place at the right time. I have been in companies which have been acquired, reassigned, and exposed to opportunities which just seemed to appear. Unlike today’s young college graduate, I never thought once about being able to get a “job.” I just bumbled along. In an interview for something called Singularity, the interviewer asked me, “What’s been the key to your success?” I answered, “Luck.” (Please, keep in mind that the interviewer assumed I was a success, but he had no idea that I did not want to be a success. I just wanted to do interesting work.)
Thanks, MSFT Copilot. Will smart software do your server security? Ho ho ho.
Would I be able to get a job today if I were 20 years old? Believe it or not, I told my son in one of our conversations about smart software: “Probably not.” I thought about this comment when I read today (March 13, 2024) the essay “Devin AI Can Write Complete Source Code.” The main idea of the article is that artificial intelligence, properly trained and appropriately resourced, can do what only humans could do in 1966 (when I graduated with a BA degree from a so-so university in flyover country). The write up states:
Devin is a Generative AI Coding Assistant developed by Cognition that can write and deploy codes of up to hundreds of lines with just a single prompt. Although there are some similar tools for the same purpose such as Microsoft’s Copilot, Devin is quite the advancement as it not only generates the source code for software or website but it debugs the end-to-end before the final execution.
Let’s assume the write up is mostly accurate. It does not matter. Smart software will be shaped to deliver what I call orchestrated solutions either today, tomorrow, or next month. Jobs already nuked by smartification include customer service rep, boilerplate writer (hello, McKinsey), and translator. Some footloose and fancy-free gig workers without AI skills may face dilemmas about whether to pursue begging, YouTubing the van life, or doing some spelunking in the Chemical Abstracts database for molecular recipes in a Walmart restroom.
The trajectory of applied AI is reasonably clear to me. Once “programming” gets swept into the Prada bag of AI, what other professions will be smartified? Once again, the likely path is lit by dim but visible Alibaba solar lights for the garden:
- Legal tasks which are repetitive. Even though the cases differ, the work flow is something an average law school graduate can master and learn to loathe
- Forensic accounting. Accountants are essentially Groundhog Day people, because every tax cycle is the same old same old
- Routine one-day surgeries. Sorry, dermatologists, cataract shops, and kidney stone crunchers. Robots will do the job and not screw up the DRG codes too much.
- Marketers. I know marketing requires creative thinking. Okay, but based on the Super Bowl ads this year, I think some clients will be willing to give smart software a whirl. Too bad about filming a horse galloping along the beach in Half Moon Bay though. Oh, well.
That’s enough of the professionals who will be affected by orchestrated work flows surfing on smartified software.
Why am I bothering to write down what seems painfully obvious to my research team?
I just wanted another reason to say, “I am glad I am old.” What many young college graduates will discover is that, despite my “luck” over the course of my work career, smartified software will not only kill some types of work; it will also remove the surprise in a serendipitous life journey.
To reiterate my point: I am glad I am old and understand efficiency, smartification, and the value of having been lucky.
Stephen E Arnold, March 13, 2024
AI Bubble Gum Cards
March 13, 2024
This essay is the work of a dumb dinobaby. No smart software required.
A publication for electrical engineers has created a new mechanism for making AI into a collectible. Navigate to “The AI Apocalypse: A Scorecard.” Scroll down to the part of the post which looks like the gems from the 1950s:
The idea is to pick 22 experts and gather their big ideas about AI’s potential to destroy humanity. Here’s one example of an IEEE bubble gum card:
© by the estimable IEEE.
The information on the cards is eclectic. It is clear that some people think smart software will kill me and you. Others are not worried.
My thought is that IEEE should expand upon this concept; for example, here are some bubble gum card ideas:
- Do the NFT play? These might be easier to sell than IEEE memberships and subscriptions to the magazine
- Offer actual, fungible packs of trading cards with throw-back bubble gum
- Create an AI movie about AI experts with opposing ideas doing battle in a video game type world. Zap. You lose, you doubter.
But the old-fashioned approach to selling trading cards to grade school kids won’t work. First, there are very few corner stores near schools in many cities. Second, a special interest group will agitate to block the sale of cards about AI because the included chewing gum will damage children’s teeth. And, third, kids today want TikToks, at least until the service is banned by a fast-acting group of elected officials.
I think the IEEE will go in a different direction; for example, micro USB drives with AI images and source code on them. Or, the IEEE should just advance to the 21st century and start producing short-form AI videos.
The IEEE does have an opportunity. AI collectibles.
Stephen E Arnold, March 13, 2024
Want Clicks: Do Sad, Really, Really Sorrowful
March 13, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The US is a hotbed of negative news. It’s what drives the media and perpetuates the culture of fear that (arguably) has plagued the country since colonial times. US citizens and now the rest of the world are so addicted to bad news that a research team got the brilliant idea to study what words people click. Nieman Lab wrote about the study in “Negative Words in News Headlines Generate More Clicks, But Sad Words Are More Effective Than Angry or Scary Ones.”
Thanks, MSFT Copilot. One of Redmond’s security professionals I surmise?
Negative words are prevalent in headlines because they sell clicks. The journal Nature Human Behaviour published a study called “Negativity Drives Online News Consumption.” The study analyzed the effect of negative and emotional words on news consumption, and the research team discovered that negativity increased clickability. These findings also confirm humans’ well-documented tendency to seek out negative information.
This coincides with humanity’s instinct to stay vigilant of danger and avoid it. While humans instinctively gravitate toward negative headlines, certain negative words are more popular than others. Humans apparently are driven to click on sadness-related words and to avoid anything resembling joy or fear, while angry words have no measurable effect. It all goes back to survival:
“And if we are to believe “Bad is stronger than good” derives from evolutionary psychology — that it arose as a useful heuristic to detect threats in our environment — why would fear-related words reduce likelihood to click? (The authors hypothesize that fear and anger might be more important in generating sharing behavior — which is public-facing — than clicks, which are private.)
In any event, this study puts some hard numbers to what, in most newsrooms, has been more of an editorial hunch: Readers are more drawn to negativity than to positivity. But thankfully, the effect size is small — and I’d wager that it’d be even smaller for any outlet that decided to lean too far in one direction or the other.”
It could also be a strict diet of danger-filled media.
Whitney Grace, March 13, 2024
AI to AI Program for March 12, 2024, Now Available
March 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Erik Arnold, with some assistance from Stephen E Arnold (the father), has produced another installment of “AI to AI: Smart Software for Government Use Cases.” The program presents news and analysis about the use of artificial intelligence (smart software) in government agencies.
The ad-free program features Erik S. Arnold, Managing Director of Govwizely, a Washington, DC consulting and engineering services firm. Arnold has extensive experience working on technology projects for the US Congress, the Capitol Police, the Department of Commerce, and the White House. Stephen E Arnold, an adviser to Govwizely, also participates in the program. The current episode is a father-and-son exploration of five important, yet rarely discussed subjects. These include the analysis of law enforcement body camera video by smart software, the appointment of an AI information czar by the US Department of Justice, copyright issues faced by UK artificial intelligence projects, the role of the US Marines in the Department of Defense’s smart software projects, and the potential use of artificial intelligence in the US Patent Office.
The video is available on YouTube at https://youtu.be/nsKki5P3PkA. The Apple audio podcast is at this link.
Stephen E Arnold, March 12, 2024
AI Hermeneutics: The Fire Fights of Interpretation Flame
March 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
My hunch is that not too many of the thumb-typing, TikTok generation know what hermeneutics means. Furthermore, like most of their parents, these future masters of the phone-iverse don’t care. “Let software think for me” would make a nifty T-shirt slogan at a technology conference.
This morning (March 12, 2024) I read three quite different write ups. Let me highlight each and then link the content of those documents to the problem of interpretation of religious texts.
Thanks, MSFT Copilot. I am confident your security team is up to this task.
The first write up is a news story called “Elon Musk’s AI to Open Source Grok This Week.” The main point for me is that Mr. Musk will put the label “open source” on his Grok artificial intelligence software. The write up includes an interesting quote; to wit:
Musk further adds that the whole idea of him founding OpenAI was about open sourcing AI. He highlighted his discussion with Larry Page, the former CEO of Google, who was Musk’s friend then. “I sat in his house and talked about AI safety, and Larry did not care about AI safety at all.”
The implication is that Mr. Musk does care about safety. Okay, let’s accept that.
The second story is an ArXiv paper called “Stealing Part of a Production Language Model.” The authors are nine Googlers, two ETH wizards, one University of Washington professor, one OpenAI researcher, and one McGill University smart software luminary. In short, the big outfits are making clear that closed or open, software is rising to the task of revealing some of the inner workings of these “next big things.” The paper states:
We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2…. For under $20 USD, our attack extracts the entire projection matrix of OpenAI’s ada and babbage language models.
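The mechanics matter here. As I understand the paper, the giveaway is that a transformer’s final projection layer maps a narrow hidden vector to a wide vocabulary, so every logit vector the API returns lies in a low-dimensional subspace. Here is a back-of-napkin Python sketch of that observation, not the authors’ actual attack; the query_logits stand-in and the dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, vocab_size = 64, 1000

# Secret final projection layer of a toy "black box" language model.
W = rng.normal(size=(vocab_size, hidden_dim))

def query_logits(prompt_seed: int) -> np.ndarray:
    """Stand-in for an API call that returns one full logit vector."""
    h = np.random.default_rng(prompt_seed).normal(size=hidden_dim)
    return W @ h  # every response lies in the column space of W

# Attack sketch: stack logits from many prompts and inspect the spectrum.
Q = np.stack([query_logits(i) for i in range(2 * hidden_dim)])
s = np.linalg.svd(Q, compute_uv=False)
print(int((s > 1e-6 * s[0]).sum()))  # prints 64: the hidden width leaks out
```

Counting the significant singular values reveals the hidden width; the paper goes further and recovers the projection matrix itself up to a linear transformation, all from ordinary API queries.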
The third item is “How Do Neural Networks Learn? A Mathematical Formula Explains How They Detect Relevant Patterns.” The main idea of this write up is that software can perform an X-ray type analysis of a black box and present some useful data about the inner workings of numerical recipes about which many AI “experts” feign total ignorance.
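If I read the coverage correctly, the formula in question averages gradient outer products over many inputs. Here is a toy Python sketch of that idea, with a hand-built function standing in for a trained network (my invention, for illustration only); the resulting matrix lights up on the input features the model actually uses:

```python
import numpy as np

def f(x):
    """Toy stand-in for a trained network: only x[0] and x[1] matter."""
    return np.tanh(2.0 * x[0] - x[1])

def grad_f(x, eps=1e-5):
    """Numerical gradient of f at x via central differences."""
    g = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
dim, n = 8, 500

# Average gradient outer product: M = mean of grad f(x) grad f(x)^T.
M = np.zeros((dim, dim))
for _ in range(n):
    g = grad_f(rng.normal(size=dim))
    M += np.outer(g, g)
M /= n

print(np.round(np.diag(M), 3))  # big entries only for the two relevant features
```

Nothing about the function’s internals is consulted, only its gradients, which is why this kind of X-ray works on a black box.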
Several observations:
- Open source software is available to download largely without encumbrances. Good actors and bad actors can use this software and its components to let users put on a happy face or bedevil the world’s cyber security experts. Either way, smart software is out of the bag.
- In the event that someone or some organization has secrets buried in its software, those secrets can be exposed. Once the secret is known, the good actors and the bad actors can surf on that information.
- The notion of an attack surface for smart software now includes the numerical recipes and the model itself. Toss in data poisoning, and vulnerability must be recast from a specific attack to a much larger type of exploitation.
Net net: I assume the many committees, NGOs, and government entities discussing AI have considered these points and incorporated these articles into informed policies. In the meantime, the AI parade continues to attract participants. Who has time to fool around with the hermeneutics of smart software?
Stephen E Arnold, March 12, 2024
Microsoft and Security: A Rerun with the Same Worn-Out Script
March 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The Marvel cinematic universe has spawned two dozen sequels. Microsoft’s security circus is moving up fast in the reprise business. Unfortunately, there is no superhero who comes to the rescue of the giant American firm. The villains in these big screen stunners are a bit like those in the James Bond films. Microsoft seems to prefer to wrestle with the allegedly Russian Cozy Bear or at least to convert a cartoon animal into the personification of evil.
Thanks, MSFT, you have nailed security theater and reruns of the same tired story.
What’s interesting about these security blockbusters is that each follows a Hollywood-style “you’ve seen this before, nudge nudge” approach to the entertainment. The sequence begins with a belated announcement that Microsoft security has been breached. The evil bad actors have stolen data, corrupted software, and by brute force foiled the norm cores in Microsoft World. Then come announcements about fixes the Microsoft customer must implement, along with admonitions to keep that MSFT software updated and warnings about using “old” computers, etc., etc.
“Russian Hackers Accessed Microsoft Source Code” is the equivalent of a New York Times film review. The write up reports:
In January, Microsoft disclosed that Russian hackers had breached the company’s systems and managed to read emails belonging to senior executives. Now, the company has revealed that the breach was worse than initially understood and that the Russian hackers accessed Microsoft source code. Friday’s revelation — made in a blog post and a filing with the Securities and Exchange Commission — is the latest in a string of breaches affecting the company that have raised major questions in Washington about Microsoft’s security posture.
Well, that’s harsh. No mention that the estimable alleged monopoly released the information on March 7, 2024. I am capturing my thoughts on March 8, 2024. But with college basketball moving toward tournament time, who cares? I am not really sure any more. And Washington? Does the name evoke a person, a committee, a committee consisting of the heads of security committees, someone in the White House, an “expert” at the suddenly famous National Bureau of Standards, or absolutely no one?
The write up asserts:
The company is concerned, however, that “Midnight Blizzard is attempting to use secrets of different types it has found,” including in emails between customers and Microsoft. “As we discover them in our exfiltrated email, we have been and are reaching out to these customers to assist them in taking mitigating measures,” the company said in its blog post. The company describes the incident as an example of “what has become more broadly an unprecedented global threat landscape, especially in terms of sophisticated nation-state attacks.” In response, the company has said it is increasing the resources and attention devoted to securing its systems.
Microsoft is “reaching out.” I can reach for a donut, but I do not grasp it and gobble it down. “Reach” is not the same as fixing the problems Microsoft caused.
Several observations:
- Microsoft is an alleged monopoly, and it is allowing its digital trains to set fire to the fields, homes, and businesses which have to use its tracks. Isn’t it time for purposeful action from the US government agencies with direct responsibility for cyber security and appropriate business conduct?
- Can Microsoft remediate its problems? My answer is, “No.” Vulnerabilities are engineered in because no one has the time, energy, or interest to chase down problems and fix them. There is an ageing programmer named Steve Gibson. His approach to software is the exact opposite of Microsoft’s. Mr. Gibson will never be a trillion dollar operation, but his software works. Perhaps Microsoft should consider adopting some of Mr. Gibson’s methods.
- Customers have to take a close look at the security breaches endlessly reported by cyber security companies. Some outfits’ software is on the list most of the time. Other companies’ software is an infrequent visitor to these breach parties. Is it time for customers to be looking for an alternative to what Microsoft provides?
Net net: A new security release will be coming to a computer near you. Don’t fail to miss it.
Stephen E Arnold, March 12, 2024
Another Small Victory for OpenAI Against Authors
March 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
For those following the fight between human content creators and AI firms, score one for the algorithm engineers. TorrentFreak reports, “Court Dismisses Authors’ Copyright Infringement Claims Against OpenAI.” At issue is generative AI’s practice of feeding on humans’ work, without compensation, in order to mimic it. Multiple suits have been filed by record labels, writers, and visual artists. Reporter Ernesto Van der Sar writes:
“Several of the lawsuits filed by book authors include a piracy component. The cases allege that tech companies, including Meta and OpenAI, used the controversial Books3 dataset to train their models. The Books3 dataset was created by AI researcher Shawn Presser in 2020, who scraped the library of ‘pirate’ site Bibliotik. The general vision was that the plaintext collection of more than 195,000 books, which is nearly 37GB in size, could help AI enthusiasts build better models. The vision wasn’t wrong; large text archives are great training material for Large Language Models, but many authors disapprove of their works being used in this manner, without permission or compensation.”
A large group of rights holders has a football team. Those big folks are chasing the small but feisty opponent down the field. Which team will score? Thanks, MSFT Copilot. Keep up the good enough work.
Is that so unreasonable? Maybe not, but existing copyright law did not foresee this situation. We learn:
“After reviewing input from both sides, California District Judge Araceli Martínez-Olguín ruled on the matter. In her order, she largely sides with OpenAI. The vicarious copyright infringement claim fails because the court doesn’t agree that all output produced by OpenAI’s models can be seen as a derivative work. To survive, the infringement claim has to be more concrete.”
The plaintiffs are not out of moves, however. They can still file an amended complaint. But unless updated legislation is passed in the meantime, they may just be rebuffed again. So all they need is for Congress to act quickly to protect artists from tech firms. Any day now.
Cynthia Murrell, March 12, 2024
Thomson Reuters Is Going to Do AI: Run Faster
March 11, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Thomson Reuters, a mostly low profile outfit, is going to do AI. Why’s this interesting to law schools, lawyers, accountants, special librarians, libraries, and others who “pay” for “real” information? There are three reasons:
- Money
- Markets
- Mania.
Thomson Reuters has been a tech talker for decades. The company created skunk works. It hired quirky MIT wizards. It bought businesses with information technology. But underneath the professional publishing clear coat, the firm is the creation of Lord Thomson of Fleet. The firm has a track record of being able to turn a profit on its $7 billion in revenues. But the future, if news reports are accurate, is artificial intelligence or smart software.
The young publishing executive says, “I have got to get ahead of this AI bus before it runs over me.” Thanks, MSFT Copilot. Working on security today?
But wait! What makes Thomson Reuters different from the New York Times or (heaven forbid the question) Rupert Murdoch’s confections? The answer is, in my opinion: Thomson Reuters does the trust thing and is a professional publisher. I don’t want to explain that in the world of Lord Thomson of Fleet, publishing is publishing. Nope. Not going there. Thomson Reuters is a custom made billiard cue, not one of those bar pool cheapos.
As appropriate to today’s Thomson Reuters, the news appeared first in Thomson’s own news releases; for example, “Thomson Reuters Profit Beats Estimates Amid AI Push.” Yep, AI drives profits. That’s the “m” in money. Plus, late last year this Thomson article found its way to the law firm market (yep, that’s the second “m”): “Morgan Lewis and Thomson Reuters Enter into Partnership to Put Law Firms’ Needs at the Heart of AI Development.”
Now the third “m” or mania. Here’s a representative story, “Thomson Reuters to Invest US$8 billion in a Substantial AI-Focused Spending Initiative.” You can also check out the Financial Times’s report at this link.
Thomson Reuters is a $7 billion corporation. If the $8 billion number is on the money, the venerable news outfit is going to spend the equivalent of one year’s revenue acquiring and investing in smart software. In terms of professional publishing, this chunk of change is roughly the equivalent of Sam AI-Man’s need for trillions of dollars for his smart software business.
Several thoughts struck me as I was reading about the $8 billion investment in smart software:
- In terms of publishing or more narrowly professional publishing, $8 billion will take some time to spend. But time is not on the side of publishing decision making processes. When the check is written for an AI investment, there may be some who ask, “Is this the correct investment? After all, aren’t we professional publishers serving lawyers, accountants, and researchers?”
- The US legal processes are interesting. But the minor challenge of Crown copyright adds a bit of spice to certain investments. The UK government itself is reluctant to push into some AI areas due to concerns that certain information may not be available unless the red tape about copyright has been trimmed, rolled, and put on the shelf. Without being disrespectful, Thomson Reuters could find that some of the $8 billion heads into its clients’ pockets as legal challenges make their way through courts in Britain, Canada, and the US, and probably some frisky EU states.
- The game for AI seems to be breaking into two parts, what a former Greek minister calls the techno-feudal set up. On one hand, there are giant technology-centric companies (of which Thomson Reuters is not one of the club members). These are Google- and Microsoft-scale outfits with infrastructure, data, customers, and multiple business models. On the other hand, there are the Product Watch outfits which are using open source and APIs to create “new” and “important” AI businesses, applications, and solutions. In short, there are some barons and a whole grab-bag of lesser folk. Is Thomson Reuters going to be able to run with the barons? Remember, please, the barons are riding stallions. Thomson Reuters-type firms either walk or ride donkeys.
Net net: If Thomson Reuters spends $8 billion on smart software, how many lawyers, accountants, and researchers will be put out of work? The risks are not just bad AI investments. The threat may be to gut the billing power of the paying customers for Thomson Reuters’ content. This will be entertaining to watch.
PS. The third “m”? It is mania, AI mania.
Stephen E Arnold, March 11, 2024
Palantir: The UK Wants a Silver Bullet
March 11, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The UK is an interesting nation state. On one hand, one has upmarket, high-class activities taking place not too far from the squatters in Bristol. Fancy lingo and nifty arguments (Hear, hear!) match up nicely with some wonky computer decisions. The British government seems to have a keen interest in finding silver bullets; that is, solutions which will make problems go away. How did that work for the postal service?
I read “Health Data – It Isn’t Just Palantir or Bust,” written by lawyer, pundit, novelist, and wizard Cory Doctorow. The essay focuses on a tender offer captured by Palantir Technologies. The idea is that the British National Health Service has lots of data. The NHS has done some wild and crazy things to make those exposed to the NHS safer. Sorry, I can’t explain one taxonomy-centric project which went exactly nowhere despite the press releases generated by the vendors, speeches, presentations, and assurances that, by gad, these health data will be managed. Yeah, and Bristol’s nasty areas will be fixed up soon.
The British government professional is struggling with software that was described as a single solution. Thanks, MSFT Copilot. How is your security perimeter working today? Oh, that’s too bad. Good enough.
What is interesting about the write up is not the somewhat repetitive retelling of the NHS’ computer challenges. I want to highlight the lawyer-novelist’s comments about the American intelware outfit Palantir Technologies. What do we learn about Palantir?
Here is the first quote from the essay:
But handing it all over to companies like Palantir isn’t the only option
The chance that a person munching on fish and chips in Swindon will know about Palantir is effectively zero. But it is clear that “like Palantir” suggests something interesting, maybe fascinating.
Here’s another reference to Palantir:
Even more bizarre are the plans to flog NHS data to foreign military surveillance giants like Palantir, with the promise that anonymization will somehow keep Britons safe from a company that is literally named after an evil, all-seeing magic talisman employed by the principal villain of Lord of the Rings (“Sauron, are we the baddies?”).
The word choice paints a picture of an American intelware company and focuses on conveying a negative message; for instance, the words safe, evil, all-seeing, villain, baddies, etc. What’s going on? Here is one more passage:
The British Medical Association and the conference of England LMC Representatives have endorsed OpenSAFELY and condemned Palantir. The idea that we must either let Palantir make off with every Briton’s most intimate health secrets or doom millions to suffer and die of preventable illness is a provably false choice.
It seems that the American company is known to the BMA, and an NGO has figured out Palantir is a bit of a sticky wicket.
Several observations:
- My view is that Palantir promised a silver bullet to solve some of the NHS data challenges. The British government accepted the argument, so full steam ahead. Thus, the problem, I would suggest, is the procurement process
- The agenda in the write up is to associate Palantir with some relatively negative concepts. Is this fair? Probably not, but it is typical of certain “real” analysts and journalists to mix up complex issues in order to create doubt about vendors of specialized software. These outfits are not perfect, but their products are a response to quite difficult problems.
- I think the write up is a mash up of anger about tender offers, the ineptitude of British government computer skills, the use of cross correlation as a symbol of Satan, and social outrage about the Britain that is versus the Britain some wish it were.
Net net: Will Palantir change because of this negative characterization of its products and services? Nope. Will the NHS change? Are you kidding me, of course not. Will the British government’s quest for silver bullet solutions stop? Let’s tackle this last question this way: “Why not write it in a snail mail letter and drop it in the post?”
Intelware is just so versatile at least in the marketing collateral.
Stephen E Arnold, March 11, 2024
In Tech We Mistrust
March 11, 2024
While tech firms were dumping billions into AI, they may have overlooked one key component: consumer faith. The Hill reports, “Trust in AI Companies Drops to 35 Percent in New Study.” We note that the 35% figure is for the US only, while the global drop was a mere 8%. Still, that is the wrong direction for anyone with a stake in the market. So what is happening? Writer Filip Timotija tells us:
“Multiple factors contributed to the decline in trust toward the companies polled in the data, according to Justin Westcott, Edelman’s chair of global technology. ‘Key among these are fears related to privacy invasion, the potential for AI to devalue human contributions, and apprehensions about unregulated technological leaps outpacing ethical considerations,’ Westcott said, adding ‘the data points to a perceived lack of transparency and accountability in how AI companies operate and engage with societal impacts.’ Technology as a whole is losing its lead in trust among sectors, Edelman said, highlighting the key findings from the study. ‘Eight years ago, technology was the leading industry in trust in 90 percent of the countries we study,’ researchers wrote, referring to the 28 countries. ‘Now it is most trusted only in half.’”
So it is not just AI we mistrust, it is tech companies as a whole. That tracks. The study polled 32,000 people across 28 countries. Timotija reminds us regulators in the US and abroad are scrambling to catch up. Will fear of consumer rejection do what neither lagging lawmakers nor common decency can? The write-up notes:
“Westcott argued the findings should be a ‘wake up call’ for AI companies to ‘build back credibility through ethical innovation, genuine community engagement and partnerships that place people and their concerns at the heart of AI developments.’ As for the impacts on the future for the industry as a whole, ‘societal acceptance of the technology is now at a crossroads,’ he said, adding that trust in AI and the companies producing it should be seen ‘not just as a challenge, but an opportunity.’”
Yes, an opportunity. All AI companies must do is emphasize ethics, transparency, and societal benefits over profits. Surely big tech firms will get right on that.
Cynthia Murrell, March 11, 2024