Blue Chip Consultants Embrace Smart Software: Some Possible But Fanciful Reasons Offered
June 7, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
VentureBeat published an interesting essay about the blue-chip consulting firm, McKinsey & Co. Some may associate the firm with its work for the pharmaceutical industry. Others may pull up a memory of a corporate restructuring guided by McKinsey consultants which caused employees to have an opportunity to find their futures elsewhere. A select few can pull up a memory of a McKinsey recruiting pitch at one of the business schools known to produce cheerful Type As who desperately sought the approval of their employer. I recall doing a minuscule project related to a mathematical technique productized by a company almost no one remembers. I think the name of the firm was CrossZ. Ah, memories.
“McKinsey Says About Half of Its Employees Are Using Generative AI” reports via a quote from Ben Ellencweig (a McKinsey blue chip consultant):
About half of [our employees] are using those services with McKinsey’s permission.
Is this half “regular” McKinsey wizards? That’s ambiguous. McKinsey has set up QuantumBlack, a unit focused on consulting about artificial intelligence.
The article included a statement which reminded me of what I call the “vernacular of the entitled”; to wit:
Ellencweig emphasized that McKinsey had guardrails for employees using generative AI, including “guidelines and principles” about what information the workers could input into these services. “We do not upload confidential information,” Ellencweig said.
A senior consultant from an unknown consulting firm explains how a hypothetical smart fire hydrant disguised as a beverage dispenser can distribute variants of Hydrocodone Bitartrate or an analog to “users.” The illustration was created by the smart bytes at MidJourney.
Yep, guardrails. Guidelines. Principles. I wonder if McKinsey and Google are using the same communications consulting firm. The lingo is designed to reassure, to suggest an ethical compass in good working order.
Another McKinsey expert said, according to the write up:
McKinsey was testing most of the leading generative AI services: “For all the major players, our tech folks have them all in a sandbox, [and are] playing with them every day,” he said.
But what about the “half”? If half means those in QuantumBlack, McKinsey is in the Wright Bros. stage of technological application. However, if the half applies to the entire McKinsey work force, that raises a number of interesting questions about what information is where and how those factoids are being used.
If I were not a dinobaby with a few spins in the blue chip consulting machine, I would track down what Bain, BCG, Booz, et al were saying about their AI practice areas. I am a dinobaby.
What catches my attention is the use of smart software in these firms; for example, here are a couple of questions I have:
- Will McKinsey and similar firms use the technology to reduce the number of expensive consultants and analysts while maintaining or increasing the costs of projects?
- Will McKinsey and similar firms maintain their present staffing levels and boost the requirements for a bonus or promotion as measured by billability and profit?
- Will McKinsey and similar firms use the technology, increase the number of staff who can integrate smart software into their work, and force out the Luddites who do not get with the AI program?
- Will McKinsey cherry pick ways to use the technology to maximize partner profits and scramble to deal with glitches in the new fabric of being smarter than the average client?
My instinct is that more money will be spent on marketing the use of smart software. Luddites will be allowed to find their future at an azure chip firm (lower tier consulting company) or return to their parents’ home. New hires with AI smarts will ride the leather seats in McKinsey’s carpetland. Decisions will be directed at [a] maximizing revenue, [b] beating out other blue chip outfits for juicy jobs, and [c] chasing terminated high tech professionals who own a suit and don’t eat enhanced gummies during an interview.
And for the clients? Hey, check out the way McKinsey produces payoff for its clients in “When McKinsey Comes to Town: The Hidden Influence of the World’s Most Powerful Consulting Firm.”
Stephen E Arnold, June 7, 2023
Google: Responsible and Trustworthy Chrome Extensions with a Dab of Respect the User
June 7, 2023
“More Malicious Extensions in Chrome Web Store” documents some Chrome extensions (add-ins) which allegedly compromise a user’s computer. Google has been using words like responsible and trust with increasing frequency. With Chrome in use by more than half of those with computing devices, what’s the dividing line between trust and responsibility for Google smart software and stupid but market-leading software like Chrome? If a non-Google third party can spot allegedly problematic extensions, why can’t Google? Is part of the answer, “Talk is cheap. Fixing software is expensive”? That’s a good question.
The cited article states:
… we are at 18 malicious extensions with a combined user count of 55 million. The most popular of these extensions are Autoskip for Youtube, Crystal Ad block and Brisk VPN: nine, six and five million users respectively.
The write up crawfishes, stating:
Mind you: just because these extensions monetized by redirecting search pages two years ago, it doesn’t mean that they still limit themselves to it now. There are way more dangerous things one can do with the power to inject arbitrary JavaScript code into each and every website.
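To make the quoted warning concrete, here is a minimal sketch of my own showing why a content script is such a powerful hook. It is not code from the write up or from any extension named there; the file name, function, and manifest snippet are hypothetical.

```typescript
// content-script.ts: a hypothetical Chrome extension content script.
// With "matches": ["<all_urls>"] in manifest.json, this code runs inside
// every page the user visits. Nothing below is malicious; the point is
// the access level: the same hook that lets a benign extension tweak a
// page lets a hostile one rewrite links or siphon form data.

function auditPage(): void {
  // The script sees the full DOM of whatever site is currently open.
  const links = document.querySelectorAll<HTMLAnchorElement>("a[href]");
  console.log(`Extension can read ${links.length} links on ${location.hostname}`);
  // A search-redirect scheme like the one described above would simply
  // reassign link.href values here before the user clicks anything.
}

auditPage();
```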
My reaction: why were these allegedly malicious components in the Google “store” in the first place?
I think the answer is obvious: Talk is cheap. Fixing software is expensive. You may disagree, but I hold fast to my opinion.
Stephen E Arnold, June 7, 2023
Old School Book Reviewers, BookTok Is Eating Your Lunch Now
June 7, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Perhaps to counter recent aspersions on its character, TikTok seems eager to transfer prestige from one of its popular forums to itself. Mashable reports, “TikTok Is Launching its Own Book Awards.” The BookTok community has grown so influential it apparently boosts book sales and inspires TV and movie producers. Writer Meera Navlakha reports:
“TikTok knows the power of this community, and is expanding on it. First, a TikTok Book Club was launched on the platform in July 2022; a partnership with Penguin Random House followed in September. Now, the app is officially launching the TikTok Book Awards: a first-of-its-kind celebration of the BookTok community, specifically in the UK and Ireland. The 2023 TikTok Book Awards will honour favourite authors, books, and creators across nine categories. These range from ‘Creator of the Year’ to ‘Best BookTok Revival’ to ‘Best Book I Wish I Could Read Again For The First Time’. Those within the BookTok ecosystem, including creators and fans, will help curate the nominees, using the hashtag #TikTokBookAwards. The long-list will then be judged by experts, including author Candice Brathwaite, creators Coco and Ben, and Trâm-Anh Doan, the head of social media at Bloomsbury Publishing. Finally, the TikTok community within the UK and Ireland will vote on the short-list in July, through an in-app hub.”
What an efficient plan. This single, geographically limited initiative may not be enough to outweigh concerns about TikTok’s security. But if the platform can appropriate more of its communities’ deliberations, perhaps it can gain the prestige of a digital newspaper of record. All with nearly no effort on its part.
Cynthia Murrell, June 7, 2023
Microsoft: Just a Minor Thing
June 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Several years ago, I was asked to be a technical advisor to a UK group focused on improper actions directed toward children. Since then, I have paid some attention to the information about young people that some online services collect. One of the more troubling facets of improper actions intended to compromise the privacy, security, and possibly the safety of minors is the role data aggregators play. Whether gathering information from “harmless” apps favored by young people or surreptitiously collecting and cross-correlating young users’ online travels, these actions of people and their systems trouble me.
The “anything goes” approach of some organizations is often masked by public statements and the use of words like “trust” when explaining how information “hoovering” operations are set up, implemented, and used to generate revenue or other outcomes. I am not comfortable identifying some of these, however.
A regulator and a big company representative talking about a satisfactory resolution to the regrettable collection of kiddie data. Both appear to be satisfied with another job well done. The image was generated by the MidJourney smart software.
Instead, let me direct your attention to the BBC report “Microsoft to Pay $20m for Child Privacy Violations.” The write up states as “real news”:
Microsoft will pay $20m (£16m) to US federal regulators after it was found to have illegally collected data on children who had started Xbox accounts.
The write up states:
From 2015 to 2020 Microsoft retained data “sometimes for years” from the account set up, even when a parent failed to complete the process …The company also failed to inform parents about all the data it was collecting, including the user’s profile picture and that data was being distributed to third parties.
Will the leader in smart software and clever marketing have an explanation? Of course. That’s what advisory firms and lawyers help their clients deliver; for example:
“Regrettably, we did not meet customer expectations and are committed to complying with the order to continue improving upon our safety measures,” Microsoft’s Dave McCarthy, CVP of Xbox Player Services, wrote in an Xbox blog post. “We believe that we can and should do more, and we’ll remain steadfast in our commitment to safety, privacy, and security for our community.”
Sounds good.
From my point of view, something is out of alignment. Perhaps it is my old-fashioned idea that young people’s online activities require a more thoughtful approach by large companies, data aggregators, and click capturing systems. The thought, it seems, is directed at finding ways to take advantage of weak regulation, inattentive parents and guardians, and often-uninformed young people.
Like other ethical black holes in certain organizations, surfing on children for fun or money seems inappropriate. Does $20 million have an impact on a giant company? Nope. The ethical and moral foundation of its decision making enables these data collection activities, and $20 million causes little or no pain. Therefore, why not continue these practices and do a better job of keeping the procedures secret?
Pragmatism is the name of the game it seems. And kiddie data? Fair game to some adrift in an ethical swamp. Just a minor thing.
Stephen E Arnold, June 6, 2023
Software Cannot Process Numbers Derived from Getty Pix, Honks Getty Legal Eagle
June 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Getty Asks London Court to Stop UK Sales of Stability AI System.” The write up comes from a service which, like Google, bandies about the word trust with considerable confidence. The main idea is that software is processing images available in the form of Web content, converting these to numbers, and using the zeros and ones to create pictures.
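Since “converting these to numbers” is doing a lot of work in that sentence, here is a toy sketch of the principle. Real pipelines (Stable Diffusion’s included) use image decoders and GPU tensors; this stripped-down version, with a hypothetical function name and a made-up two-by-two image, shows only the idea: pixel bytes in, normalized floating-point values out, ready for a model to consume.

```typescript
// A toy version of the "images become numbers" step described above.

function pixelsToTensor(rgbBytes: Uint8Array): Float32Array {
  const tensor = new Float32Array(rgbBytes.length);
  for (let i = 0; i < rgbBytes.length; i++) {
    // Map each 0-255 channel value into the [-1, 1] range many models expect.
    tensor[i] = (rgbBytes[i] / 255) * 2 - 1;
  }
  return tensor;
}

// A hypothetical 2x2 all-red image, flattened as [R,G,B, R,G,B, ...].
const toyImage = new Uint8Array([255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0]);
console.log(pixelsToTensor(toyImage)); // Float32Array [1, -1, -1, 1, -1, -1, ...]
```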
The write up states:
The Seattle-based company [Getty] accuses the company of breaching its copyright by using its images to “train” its Stable Diffusion system, according to the filing dated May 12, [2023].
I found this statement in the trusted write up fascinating:
Getty is seeking as-yet unspecified damages. It is also asking the High Court to order Stability AI to hand over or destroy all versions of Stable Diffusion that may infringe Getty’s intellectual property rights.
When I read this, I wondered if the scribes, upon learning about the threat Gutenberg’s printing press represented, experienced their “Getty moment.” The advanced technology of the adapted olive press and hand-carved wooden letters meant that the quill pen champions had to adapt or find their future emptying garderobes (aka chamber pots).
Scribes prepare to throw a Gutenberg printing press and the evil innovator Gutenberg into the Rhine River. Image was produced by the evil incarnate code of MidJourney. Getty is not impressed (like letters on paper) with the outputs of Beelzebub-inspired innovations.
How did that rebellion against technology work out? Yeah. Disruption.
What happens if the legal systems in the UK and possibly the US jump on the no-innovation train? Japan’s decision points to one option: using what’s on the Web is just fine. And China? Yep, those folks in the Middle Kingdom will definitely conform to the UK and maybe US rules and regulations. What about outposts of innovation in Armenia? Johnnies on the spot (not pot, please). But what about those computer science students at Cambridge University? Jail and fines are too good for them. To the gibbet.
Stephen E Arnold, June 6, 2023
India and Its Management Secrets: Under Utilized Staff Motivation Technique
June 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I am not sure if the information in the article is accurate, but it sure is entertaining. If true, I think I have identified one of those management secrets which makes wizards from India such outstanding managers. Navigate to “Company Blocks Employees from Leaving Office: Now, a Clarification.” The write up states:
Coding Ninjas, a Gurugram-based edtech institute, has issued clarification on a recent incident that saw its employees being ‘locked’ inside the office so that they cannot exit ‘without permission.’
And what was the clarification? Let me guess. Heck, no. Just a misunderstanding. The write up explains:
… the company [Coding Ninjas, remember?], while acknowledging the incident, attributed it to a ‘regrettable’ action by an employee. The ‘action,’ it noted, was ‘immediately rectified within minutes,’ and the individual in question acknowledged his ‘mistake’ and apologized for it. Further saying that the founders had expressed their ‘regret’ and apologized to the staff, Coding Ninjas described this as an ‘isolated’ incident.
For another take on this interesting management approach to ensuring productivity, check out “Coding Ninjas’ Senior Executive Gets Gate Locked to Stop Employees from Leaving Office; Company Says Action ‘Regrettable’.”
What if you were to look for a link to this story on Reddit? I located a page which showed a door being locked. Exciting footage was available at this link on June 6, 2023. (If the information has been deleted, you have learned something about Reddit.com in my opinion.)
My interpretation of this enjoyable incident (if indeed true) is:
- Something to keep in mind when accepting a job in Mumbai or a similar technology hot spot
- Why graduates of the Indian Institutes of Technology are in such demand; those folks are indeed practical and focused on maximizing employee productivity as measured in minutes in a facility
- A solution to employees who want to work from home. When employees want a paycheck, make them come to the office and lock them in. It works well; the effectiveness is evident in prisons and re-education facilities in a number of countries.
And regrettable? Yes, in terms of PR. No, in terms of getting snagged in what may be fake news. Is this a precept of the high school science club management method? Yep. Yep.
Stephen E Arnold, June 6, 2023
The Google AI Way: EEAT or Video Injection?
June 5, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Over the weekend, I spotted a couple of signals from the Google marketing factory. The first is the cheerleading by that great champion of objective search results, Danny Sullivan, who wrote with Chris Nelson “Rewarding High Quality Content, However It Is Produced.” The authors pointed out that their essay is on behalf of the Google Search Quality team. This “team” speaks loudly to me when we run test queries on Google.com. Once in a while — not often, mind you — a relevant result will appear in the first page or two of results.
The subject of this essay by Messrs. Sullivan and Nelson is EEAT. My research team and I think that the fascinating acronym is pronounced like the word “eat” in the sense of ingesting gummy cannabinoids. (One hopes these are not the prohibited compounds such as Delta-9 THC.) The idea is to pop something in your mouth and chew. As the compound (fact and fiction, GPT-generated content and factoids) dissolves and makes its way into one’s system, the psychoactive reaction is greater perceived dependence on the Google products. You may not agree, but that’s how I interpret the essay.
So what’s EEAT? I am not sure my team and I are getting with the Google script. The correct and Googley answer is:
Expertise, experience, authoritativeness, and trustworthiness.
The write up says:
Focusing on rewarding quality content has been core to Google since we began. It continues today, including through our ranking systems designed to surface reliable information and our helpful content system. The helpful content system was introduced last year to better ensure those searching get content created primarily for people, rather than for search ranking purposes.
I wonder if this text has been incorporated in the Sundar and Prabhakar Comedy Show? I would suggest that it replace the words about meeting users’ needs.
The meat of the synthetic turkey burger strikes me as:
it’s important to recognize that not all use of automation, including AI generation, is spam. Automation has long been used to generate helpful content, such as sports scores, weather forecasts, and transcripts. AI has the ability to power new levels of expression and creativity, and to serve as a critical tool to help people create great content for the web.
Synthetic or manufactured information, content objects, data, and other outputs are okay with us. We’re Google, of course, and we are equipped with expertise, experience, authoritativeness, and trustworthiness to decide what is quality and what is not.
I can almost visualize a T shirt with the phrase “EEAT It” silkscreened on the back with a cheerful Google logo on the front. Catchy. EEAT It. I want one. Perhaps a pop tune can be sampled and used to generate a synthetic song similar to Michael Jackson’s “Beat It”? Google AI would dodge the Weird Al Yankovic version of the 1983 hit. Google’s version might include the refrain:
Just EEAT it (EEAT it, EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it, ha, ha, ha, ha)
EEAT it (EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it)
If chowing down on this Google information is not to your liking, one can get with the Google program via a direct video injection. Google has been publicizing its free video training program from India to LinkedIn (a Microsoft property to give the social media service its due). Navigate to “Master Generative AI for Free from Google’s Courses.” The free, free courses are obviously advertisements for the Google way of smart software. Remember the key sequence: Expertise, experience, authoritativeness, and trustworthiness.
The courses are:
- Introduction to Generative AI
- Introduction to Large Language Models
- Attention Mechanism
- Transformer Models and BERT Model
- Introduction to Image Generation
- Create Image Captioning Models
- Encoder-Decoder Architecture
- Introduction to Responsible AI (remember the phrase “Expertise, experience, authoritativeness, and trustworthiness.”)
- Introduction to Generative AI Studio
- Generative AI Explorer (Vertex AI).
Why is Google offering free infomercials about its approach to AI?
The cited article answers the question this way:
By 2030, experts anticipate the generative AI market to reach an impressive $109.3 billion, signifying a promising outlook that is captivating investors across the board. [Emphasis added.]
How will Microsoft respond to the EEAT It positioning?
Just EEAT it (EEAT it, EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it, ha, ha, ha, ha)
EEAT it (EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it)
Stephen E Arnold, June 5, 2023
IBM Dino Baby Unhappy about Being Outed as Dinobaby in the Baby Wizards Sandbox
June 5, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I learned the term “dinobaby” reading blog posts about IBM workers who alleged Big Blue wanted younger workers. After thinking about the term, I embraced it. This blog post features an animated GIF of me dancing in my home office. I try to avoid the following: [a] Millennials, GenX, GenZ, and GenY super wizards; [b] former IBM workers who grouse about growing old and not liking a world without CICS; and [c] individuals with advanced degrees who want to talk with me about “smart software.” I have to admit that I have not been particularly successful in this effort in 2023: Conferences, Zooms, face-to-face meetings, lunches, yada yada. Either I am the most magnetic dinobaby in Harrod’s Creek, or these jejune world changers are clueless. (Maybe I should live in a cave on a mountain and accept acolytes?)
I read “Laid-Off 60-Year-Old Kyndryl Exec Says He Was Told IT Giant Wanted New Blood.” The write up includes a number of interesting statements. Here’s one:
IBM has been sued numerous times for age discrimination since 2018 when it was reported that company leadership carried out a plan to de-age its workforce – charges IBM has consistently denied, despite US Equal Employment Opportunity Commission (EEOC) findings to the contrary and confidential settlements.
Would IBM deny allegations of age discrimination? There are so many ways to terminate employees today. Why use the “you are old, so you are RIF’ed” ploy? In my opinion, it is an example of the lack of management finesse evident in many once high-flying companies today. I term the methods apparently in use at outfits like Twitter, Google, Facebook, and others “high school science club management methods” or H2S2M2. The acronym has not caught on, but I assume that someone with a subscription to ChatGPT will use AI to write a book on the subject soon.
The write up also includes this statement:
Liss-Riordan [an attorney representing the dinobaby] said she has also been told that an algorithm was used to identify those who would lose their jobs, but had no further details to provide with regard to that allegation.
Several observations are warranted:
- Discrimination is nothing new. Oldsters will be nuked. No question about it. Why? Old people like me (I am 78) make younger folks nervous because we belong in warehouses for the soon dead, not giving lectures to the leaders of today and tomorrow.
- Younger folks do not know what they do not know. Consequently, opportunities exist to [a] make fun of young wizards, as I have done in this blog Monday through Friday since 2008, and [b] charge these “masters of the universe” money to talk about that which is part of their great unknowing. Billing is rejuvenating.
- No one cares. One can sue. One can rage. One can find solace in chemicals, fast cars, or climbing a mountain. But it is important to keep one thing in mind: No one cares.
Net net: Does IBM practice dark arts to rid the firm of those who slow down Zoom meetings, raise questions to which no one knows the answers, and burden benefits plans? My hunch is that IBM-type outfits will do what’s necessary to keep the campground free of old timers. Who wouldn’t?
Stephen E Arnold, June 5, 2023
Smart Software and a Re-Run of Paradise Lost in Progress
June 5, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I picked up two not-so-faint and definitely not-encrypted signals about the goals of Google and Microsoft for smart software.
Which company will emerge as the one true force in smart software? MidJourney did not pick a winner, just what the top dog will wear to the next quarterly sales report delivered via a neutral Zoom call.
Navigate to the visually thrilling podcast hosted by Lex Fridman, an American MIT wizard. He interviewed the voluble Google wizard Chris Lattner. The subject was the Future of Programming and AI. After listening to the interview, I concluded the following:
- Google wants to define and control the “meta” framework for artificial intelligence. What’s this mean? Think a digital version of a happy family: Vishnu, Brahma, and Shiva, among others.
- Google has an advantage when it comes to doing smart software because its humanoids have learned what works, what to do, and how to do certain things.
- The complexity of Google’s multi-pronged smart software methods, its home-brew programming languages, and its proprietary hardware is nothing more than innovation. Simple? Innovation means no one outside of the Google AI cortex can possibly duplicate, understand, or outperform Googzilla.
- Google has money and will continue to spend it to deliver the Vishnu, Brahma, and Shiva experience in my interpretation of programmer speak.
How’s that sound? I assume that the fruit fly start ups are going to ignore the vibrations emitted from Chris Lattner, the voluble Chris Lattner, I want to emphasize. But like those short-lived Diptera, one can derive some insights from the efforts of less well-informed, dependent, and less-well-funded lab experiments.
Okay, that’s signal number one.
Signal number two appears in “Microsoft Signs Deal for AI Computing Power with Nvidia-Backed CoreWeave That Could Be Worth Billions.” This “real news” story asserts:
… Microsoft has agreed to spend potentially billions of dollars over multiple years on cloud computing infrastructure from startup CoreWeave …
CoreWeave? Yep, the company “sells simplified access to Nvidia’s graphics processing units, or GPUs, which are considered the best available on the market for running AI models.” By the way, Nvidia has invested in this outfit. What’s this signal mean to me? Here are the flickering lines on my oscilloscope:
- Microsoft wants to put smart software into its widely used enterprise applications in order to make them the one true religion of smart software. The idea, of course, is to pass the collection plate and convert dead dog software into racing greyhounds.
- Microsoft has an advantage because when an MBA does calculations and probably letters to significant others, Excel is the go-to solution. Some people create art in Excel and then sell it. MBAs just get spreadsheet fever and do leveraged buyouts. With smart software the Microsoft alleged monopoly does the billing.
- The wild and wonderful world of Azure is going to become smarter because… well, Microsoft does smart things. Imagine the demand for training courses, certification for Microsoft engineers, and how-to YouTube videos.
- Microsoft has money and will continue to achieve compulsory attendance at the Church of Redmond.
Net net: Two titans will compete. I am thinking about the battle between John Milton’s protagonist and antagonist in “Paradise Lost.” This will be fun to watch whilst eating chicken korma.
Stephen E Arnold, June 5, 2023
AI Allegedly Doing Its Thing: Let Fake News Fly Free
June 2, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I cannot resist this short item about smart software. Stories have appeared in my newsfeeds about AI which allegedly concluded that to complete its mission, it had to remove an obstacle: the human operator.
A number of news sources reported as actual factual that a human operator of a smart weapon system was annoying the smart software. The smart software decided that the humanoid was causing a mission to fail. The smart software concluded that the humanoid had to be killed so the smart software could go kill more humanoids.
I collect examples of thought provoking fake news. It’s my new hobby and provides useful material for my “OSINT Blindspots” lectures. (The next big one will be in October 2023 after I return from Europe in late September 2023.)
However, the write up “US Air Force Denies AI Drone Attacked Operator in Test” presents a different angle on the story about evil software. I noted this passage from an informed observer:
Steve Wright, professor of aerospace engineering at the University of the West of England, and an expert in unmanned aerial vehicles, told me jokingly that he had “always been a fan of the Terminator films” when I asked him for his thoughts about the story. “In aircraft control computers there are two things to worry about: ‘do the right thing’ and ‘don’t do the wrong thing’, so this is a classic example of the second,” he said. “In reality we address this by always including a second computer that has been programmed using old-style techniques, and this can pull the plug as soon as the first one does something strange.”
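Professor Wright’s “second computer” is a classic watchdog arrangement: a dumb, auditable monitor beside a clever, opaque controller, with the power to pull the plug. Here is a minimal sketch of that pattern under my own assumptions; every name and limit below is hypothetical, invented for illustration, and taken from no real flight system.

```typescript
// A toy watchdog: old-style rules vetoing a complex controller's output.

type Command = { pitchDeg: number; throttle: number };

function aiController(): Command {
  // Stand-in for the clever-but-opaque system; imagine anything in here.
  return { pitchDeg: Math.random() * 90, throttle: Math.random() };
}

function watchdog(cmd: Command): Command | null {
  // Simple, auditable rules: reject any command outside a hard envelope.
  const safe = Math.abs(cmd.pitchDeg) <= 30 && cmd.throttle <= 0.9;
  return safe ? cmd : null; // null means "pull the plug"
}

const approved = watchdog(aiController());
console.log(approved ?? "Watchdog vetoed the command; fallback engaged.");
```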
Now the question: Did the smart software do the right thing? Did it go after its humanoid partner? In a hypothetical discussion, perhaps. In real life, nope. My hunch is that the US Air Force anecdote is anchored in confusing “what if” thinking with reality. That’s easy for some younger than me to do, in my experience.
I want to point out that in August 2020, a Heron Systems AI (based on Google technology) killed an Air Force “top gun” in a simulated aerial dogfight. How long did it take the smart software to neutralize the annoying humanoid? About a minute, maybe a minute and a half. See this Janes news item for more information.
My view is that smart software has some interesting capabilities. One scenario of interest to me is a hacked AI-infused weapons system. Pondering this idea opens the door to some intriguing “what if” scenarios.
Stephen E Arnold, June 2, 2023