The Authority of a Parent: In Question?
August 3, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
If we cannot scan the kids, let us scan the guardians. That is what the ESRB, digital identity firm Yoti, and kiddie marketing firm SuperAwesome are asking the Federal Trade Commission, according to The Register’s piece, “Watchdog Mulls Online Facial Age-Verification Tech—For Kids’ Parents.” The Children’s Online Privacy Protection Act (COPPA) requires websites and apps to make kids under 13 get a parent’s permission before they can harvest that sweet, early-stage personal data. It is during that permission step that the petitioners would like to employ age-verification software on the grown-ups. As writer Jessica Lyons Hardcastle describes, the proposed process relies on several assumptions. She outlines the steps:
“1. First, a child visits a website and hits an age gate. The operator then asks the kid for their parent’s email, sends a note to the parent letting them know that they need to verify that they’re an adult for the child to proceed, and offers the facial-age scanning estimation as a possible verification method.
2. (Yes, let’s assume for a moment that the kid doesn’t do what every 10-year-old online does and lie about their age, or let’s assume the website or app has a way of recognizing it’s dealing with a kid, such as asking for some kind of ID.)
3. If the parent consents to having their face scanned, their system then takes a selfie and the software provides an age estimate.
4. If the age guesstimate indicates the parent is an adult, the kid can then proceed to the website. But if it determines they are not an adult, a couple of things happen.
5. If ‘there is some other uncertainty about whether the person is an adult’ then the person can choose an alternative verification method, such as a credit card, driver’s license, or social security number.
6. But if the method flat out decides they are not an adult, it’s a no go for access. We’re also going to assume here that the adult is actually the parent or legal guardian.”
Sure, why not? The tech works by converting one’s face into a set of numbers and feeding those numbers to an AI model trained to estimate age from them. According to the ESRB, the actual facial scans are not saved for AI training, marketing, or any other purpose. But taking them, and their data-hungry partners, at their word is yet another assumption.
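For readers who want to picture the mechanics, here is a minimal sketch of that estimate-then-gate flow. It assumes a hypothetical embedding model, age regressor, and thresholds; none of it is Yoti’s or the ESRB’s actual implementation.

```python
# Minimal sketch of a facial age-estimation gate (illustrative only).
# The model stubs, thresholds, and status labels are assumptions; they do
# not describe Yoti's or the ESRB's real system.

from dataclasses import dataclass

CLEARLY_ADULT = 25.0   # estimated age comfortably above 18
MAYBE_ADULT = 18.0     # borderline zone: ask for another proof

@dataclass
class GateDecision:
    status: str          # "allow", "alternative_verification", or "deny"
    estimated_age: float

def embed_face(selfie_pixels):
    """Convert the selfie into a vector of numbers (a face embedding)."""
    raise NotImplementedError  # stand-in for a trained neural network

def estimate_age(embedding) -> float:
    """Model trained to map face embeddings to an age estimate."""
    raise NotImplementedError  # stand-in for the vendor's regressor

def parental_age_gate(selfie_pixels) -> GateDecision:
    age = estimate_age(embed_face(selfie_pixels))
    if age >= CLEARLY_ADULT:
        return GateDecision("allow", age)                     # kid may proceed
    if age >= MAYBE_ADULT:
        # "some other uncertainty": offer credit card, driver's license, SSN
        return GateDecision("alternative_verification", age)
    return GateDecision("deny", age)                          # flat out not an adult
```

Every branch above leans on the same assumptions the article lists: the person in front of the camera really is the parent, the estimate is trustworthy near the 18-year line, and the scan is discarded afterward.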
Cynthia Murrell, August 3, 2023
AI Commitments: But What about Chipmunks and the Bunny Rabbits?
July 23, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI sent executives to a meeting held at “the White House” to agree on some ground rules for “artificial intelligence.” AI is available from a number of companies and as free downloads as open source. Rumors have reached me suggesting that active research and development are underway in government agencies, universities, and companies located in a number of countries other than the U.S. Some believe the U.S. is the Zeus of AI, assisted by Naiads. Okay, but you know those Greek gods can be unpredictable.
Thus, what’s a commitment? I am not sure what the word means today. I asked You.com, a smart search system, to define the term for me. The system dutifully returned this explanation:
commitment is defined as “an agreement or pledge to do something in the future; the state or an instance of being obligated or emotionally impelled; the act of committing, especially the act of committing a crime.” In general, commitment refers to a promise or pledge to do something, often with a strong sense of dedication or obligation. It can also refer to a state of being emotionally invested in something or someone, or to the act of carrying out a particular action or decision.
Several words and phrases jumped out at me; namely, “do something in the future.” What does “do” mean? What is “the future”? Next week, next month, a decade from a specific point in time, etc.? “Obligated” is an intriguing word. What compels the obligation? A threat, a sense of duty, an understanding of a shared ethical fabric? “Promise” evokes a young person’s statement to a parent when caught drinking daddy’s beer; for example, “Mom, I promise I won’t do that again.” The “emotional” investment is an angle that reminds me that 40 to 50 percent of first marriages end in divorce. Commitments — even when bound by social values — are flimsy things for some. Would I fly on a commercial airline whose crash rate was 40 to 50 percent? Would you?
“Okay, we broke the window. Now what do we do?” asks the leader of the pack. “Run,” says the brightest of the group. “If we are caught, we just say, ‘Okay, we will fix it.’” “Will we?” asks the smallest of the gang. “Of course not,” replies the leader. Thanks, MidJourney, you create original kid images well.
Why make any noise about commitment?
I read “How Do the White House’s A.I. Commitments Stack Up?” The write up is a personal opinion about an agreement between “the White House” and the big US players in artificial intelligence. The focus was understandable because those in attendance are wrapped in the red, white, and blue; presumably pay taxes; and want to do what’s right, save the rain forest, and be green.
Some of the companies participating in the meeting have testified before Congress. I recall at least one of the firms’ senior managers saying, “Senator, thank you for that question. I don’t know the answer. I will have my team provide that information to you…” My hunch is that a few of the companies in attendance at the White House meeting could use the phrase or a similar one at some point in the “future.”
The table below lists most of the commitments to which the AI leaders showed some receptivity. The table presents the commitments in the left-hand column, and the right-hand column offers some hypothesized reactions from a nation state quite opposed to the United States, the US dollar, the hegemony of US technology, baseball, apple pie, etc.
Commitments | Gamed Responses |
Security testing before release | Based on historical security activities, not to worry |
Sharing AI information | Let’s order pizza and plan a front company based in Walnut Creek |
Protect IP about models | Let’s canvas our AI coders and pick some to get jobs at these outfits |
Permit pentesting | Yes, pentesting. Order some white hats with happy faces |
Tell users when AI content is produced | Yes, let’s become registered users. Who has a cousin in Mountain View? |
Report about use of the AI technologies | Make sure we are on the mailing list for these reports |
Research AI social risks | Do we own a research firm? Can we buy the research firm assisting these US companies? |
Use AI to fix up social ills | What is a social ill? Call the general, please, and ask. |
The PR angle is obvious. I wonder if commitments will work. The firms have one objective; that is, meet the expectations of their stakeholders. In order to do that, the firms must operate from the baseline of self-interest.
Net net: The techno-landscape now has a few big outfits working and thinking hard about how to buy up the best plots. What about zoning, government regulations, and doing good things for small animals and wild flowers? Yeah. No problem.
Stephen E Arnold, July 23, 2023
Sam the AI-Man Explains His Favorite Song, My Way, to the European Union
July 18, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
It seems someone is uncomfortable with AI regulation despite asking for regulation. TIME posts this “Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation.” OpenAI insists AI must be regulated posthaste. CEO Sam Altman even testified to Congress about it. But when push comes to legislative action, the AI-man balks. At least when it affects his company. Reporter Billy Perrigo tells us:
“The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation. But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company.”
What, to Altman’s mind, makes OpenAI exempt from the much-needed regulation? Their product is a general-purpose AI, as opposed to a high-risk one. So it contributes to benign projects as well as consequential ones. How’s that for logic? Apparently it was good enough for EU regulators. Or maybe they just caved to OpenAI’s empty threat to pull out of Europe.
Is it true that Mr. AI-Man only follows the rules he promulgates? Thanks for the Leonardo-like image of students violating a university’s Keep Off the Grass rule.
We learn:
“The final draft of the Act approved by E.U. lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk. Instead, the agreed law called for providers of so-called ‘foundation models,’ or powerful AI systems trained on large quantities of data, to comply with a smaller handful of requirements including preventing the generation of illegal content, disclosing whether a system was trained on copyrighted material, and carrying out risk assessments.”
Of course, all of this may be a moot point given the catch-22 of asking legislators to regulate technologies they do not understand. Tech companies’ lobbying dollars seem to provide the most clarity.
Cynthia Murrell, July 18, 2023
On Twitter a Personal Endorsement Has Value
July 11, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The high school science club managers are engaged in a somewhat amusing dust-up. First, there was a challenge to a physical fight, a modern joust in which two wizards would ride their egos into glory in Las Vegas, a physical metaphor for modern America. Then the two captains of industry would battle in court because… you know… you cannot hire people another company fired. Yesterday, real journalists crowed from many low-rise apartment rooftops that a new social media service was growing allegedly at the expense of another social media company. The numbers prove that one company is better at providing a platform to erode cultural values than another. Victory!
Twitter… endorsed by those who know. Thanks, MidJourney, you output an image in spite of your inappropriate content filter. Good work.
Now I learn that one social media outfit is the bestie of an interesting organization. I think that organization has been known to cast aspersions on the United States. The phrase “the great Satan” sticks in my mind, but I am easily confused. I want to turn to a real news outfit which is itself the subject of some financial minds’ attention — Vice Motherboard.
The article title makes the point: “Taliban Endorses Twitter over Threads.” Now that is quite an accolade. The Facebook Zucker service, according to the article, is “intolerant.” Okay. Is the Taliban associated with lenient and tolerant behavior? I don’t know, but I recall some anecdotes about being careful about what to wear when pow-wowing with the Taliban. Maybe that’s incorrect.
The write up adds:
Anas Haqqani, a Taliban thought-leader with family connections to leadership, has officially endorsed Twitter over Facebook-owned competitor Threads. “Twitter has two important advantages over other social media platforms,” Haqqani said in an English post on Twitter. “The first privilege is the freedom of speech. The second privilege is the public nature & credibility of Twitter. Twitter doesn’t have an intolerant policy like Meta. Other platforms cannot replace it.”
What group will endorse Threads directly and the Zuck implicitly? No, I don’t have any suggestions to offer. Why? This adolescent behavior can manifest itself in quite dramatic ways. As a dinobaby, I am not into drama. I am definitely interested in how those in adult bodies act out their adolescent thought processes. Thumbs up for Mr. Musk. Rocket thrusters, Teslas, and the Taliban. That’s the guts of an impressive LinkedIn résumé.
Stephen E Arnold, July 11, 2023
Googzilla Annoyed: No Longer to Stomp Around Scaring People
July 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“Sweden Orders Four Companies to Stop Using Google Tool” reports that the Swedish government “has ordered four companies to stop using a Google tool that measures and analyzes Web traffic.” The idea informing the Swedish decision is to control the rapacious creature’s desire for “personal data.” Is the lovable Googzilla slurping data and allegedly violating privacy? I have no idea.
In this MidJourney visual confection, it appears that a Tyrannosaurus Rex named Googzilla is watching children. Is Googzilla displaying abnormal and possibly illegal behavior, particularly with regard to personal data?
The write up states:
The IMY said it considers the data sent to Google Analytics in the United States by the four companies to be personal data and that “the technical security measures that the companies have taken are not sufficient to ensure a level of protection that essentially corresponds to that guaranteed within the EU…”
Net net: Sweden is not afraid of the Google. Will other countries try their hand at influencing the lovable beastie?
Stephen E Arnold, July 6, 2023
Crackdown on Fake Reviews: That Is a Hoot!
July 3, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “The FTC Wants to Put a Ban on Fake Reviews.” My first reaction was, “Shouldn’t the ever-so-confident Verge poobah have insisted on the word ‘impose’; specifically, ‘The FTC wants to impose a ban on fake reviews’ or maybe ‘The FTC wants to rein in fake reviews’?” But who cares? The Verge is the digital New York Times and go-to source of “real” Silicon Valley type news.
The write up states:
If you, too, are so very tired of not knowing which reviews to trust on the internet, we may eventually get some peace of mind. That’s because the Federal Trade Commission now wants to penalize companies for engaging in shady review practices. Under the terms of a new rule proposed by the FTC, businesses could face fines for buying fake reviews — to the tune of up to $50,000 for each time a customer sees one.
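The “each time a customer sees one” clause is the part worth lingering on. A quick, purely illustrative bit of arithmetic (my numbers, not the FTC’s) shows how fast the exposure compounds:

```python
# Back-of-the-envelope exposure for the proposed per-view penalty
# (illustrative figures; the proposal sets only the per-view ceiling).
fine_per_view = 50_000   # proposed maximum per customer view
views = 1_000            # a modest audience for a single fake review

print(f"Potential exposure: ${fine_per_view * views:,}")  # Potential exposure: $50,000,000
```

In other words, the headline number is not a flat $50,000 fine; a single well-trafficked fake review could, in principle, add up to an eight-figure liability.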
For more than 30 years, I worked with an individual named Robert David Steele, who was an interesting figure in the intelligence world. He wrote and posted on Amazon more than 5,000 reviews. He wrote these himself, often in down times with me between meetings. At breakfast one morning in the Hague, Steele was writing at the breakfast table, and he knocked over his orange juice. He said, “Give me your napkin.” He used it to jot down a note; I sopped up the orange juice.
“That’s a hoot,” says a person who wrote a product review to make a competitor’s offering look bad. A $50,000 fine. Legal eagles take flight. The laughing man is an image flowing from the creative engine at MidJourney.
He wrote what I call humanoid reviews.
Now reviews of any type are readily available from services like Fiverr.com, an Israel-based outfit with gig workers from many countries and free time on their hands.
How many of these reviews will be written by a humanoid? How many will be spat out via a ChatGPT-type system?
What about reviews written by someone with a bone to pick, reviews shaded so that the product or the book or whatever is presented in a questionable way? Did Mr. Steele write a review of an intelligence-related book and point out that the author was misinformed about the “real” intel world?
Several observations:
- Who or what is going to identify fake reviews?
- What’s the difference between a Fiverr-type review and a review written by a humanoid motivated by doing good or making the author or product look bad?
- As machine-generated text improves, how will software written to identify machine-generated reviews keep up with advances in the machine-generating software itself?
Net net: External editorial and ethical controls may be impractical. In my opinion, a failure of ethical controls within social structures creates a greenhouse in which fakery, baloney, misinformation, and corrupted content thrive. In this context, who cares about the headline? It too is a reflection of the pickle barrel in which we soak.
Stephen E Arnold, July 3, 2023
Canada Bill C-18 Delivers a Victory: How Long Will the Triumph Pay Off in Cash Money?
June 23, 2023
News outlets make or made most of their money selling advertising. The idea was — when I worked at a couple of big news publishing companies — that the content would attract an audience, and that audience would attract those who wanted to reach it. I worked at the Courier-Journal & Louisville Times Co. before it dissolved into a Gannett marvel. If a used car dealer wanted to sell a 1980 Corvette, the choice was the newspaper or a free ad in what was called AutoTrader. This was a localized, printed collection of autos for sale. Some dealers advertised, but in the 1980s, individuals looking for a cheap or free way to pitch a vehicle loved AutoTrader. Despite a free option, the size of the readership and the sports news, comics, and obituaries made the Courier-Journal the must-have for a motivated seller.
Hannibal and his war elephant Zuckster survey the field of battle after Bill C-18 passes. MidJourney was the digital wonder responsible for this confection.
When I worked at the Ziffer in Manhattan, we published Computer Shopper. The biggest Computer Shopper had about 800 pages. It could have been bigger, but there were paper and press constraints, if I recall correctly. But I smile when I remember that 85 percent of those pages were paid advertisements. We had an audience, and those in the burgeoning computer and software business wanted to reach our audience. How many Ziffers remember the way publishing used to work?
When I read the National Post article titled “Meta Says It’s Blocking News on Facebook, Instagram after Government Passes Online News Bill,” I thought about the Battle of Cannae. The Romans had the troops, the weapons, and the psychological advantage. But Hannibal showed up and, if historical records are as accurate as a tweet, killed Romans and mercenaries. I think it may have been estimated that Roman whiz kids lost 40,000 troops and 5,000 cavalry along with the Roman strategic wizards Paulus, Servilius, and Atilius.
My hunch is that those who survived paid with labor or money to be allowed to survive. Being a slave in peak Rome was a dicey gig. Having a fungible skill like painting zowie murals was good. Having minimal skills? Well, someone has to work for nothing in the fields or quarries.
What’s the connection? The publishers are similar to the Roman generals. The bad guys are the digital rebels who are like Hannibal and his followers.
Back to the cited National Post article:
After the Senate passed the Online News Act Thursday, Meta confirmed it will remove news content from Facebook and Instagram for all Canadian users, but it remained unclear whether Google would follow suit for its platforms. The act, which was known as Bill C-18, is designed to force Google and Facebook to share revenues with publishers for news stories that appear on their platforms. By removing news altogether, companies would be exempt from the legislation.
The idea is that US online services which touch most online users (maybe 90 or 95 percent in North America) will block news content. This means:
- The cash-gushing Facebook- and Google-type companies will not pay for news content. (This has some interesting downstream consequences, but for this short essay, I want to focus on the “not paying” for news.)
- The publishers will experience a decline in traffic. Why? Without a “finding and pointing” mechanism, how would I find this “real news” article published by the National Post? (FYI: I think of this newspaper as Canada’s USA Today, which was a Gannett crown jewel. How is that working out for Gannett today?)
- Rome triumphed only to fizzle out again. And Hannibal? He’s remembered for the elephants-through-the-Alps trick. Are man’s efforts ultimately futile?
When the clicks stop accruing to the publishers’ Web sites, how will the publishers generate traffic? SEO. Yeah, good luck with that.
Is there an alternative?
Yes, buy Facebook and Google advertising. I call this pay to play.
The Canadian news outlets will have to pay for traffic. I suppose companies like Tyler Technologies, which has an office in Vancouver I think, could sell ads for the National Post’s stories, but that seems to be a stretch. Similarly, the National Post could buy ads on the Embroidery Classics & Promotions (Calgary) Web site, but that may not produce too many clicks for the Canadian news outfits. I estimate one or two a month.
Bill C-18 may not have the desired effect. Facebook and Facebook-type outfits will want to sell advertising to the Canadian publishers in my opinion. And without high-impact, consistent and relevant online advertising, state-of-art marketing, and juicy content, the publishers may find themselves either impaled on their digital hopes or placed in servitude to the Zuck and his fellow travelers.
Are these publishers able to pony up the cash and make the appropriate decisions to generate revenues like the good old days?
Sure, there’s a chance.
But it’s a long shot. I estimate the chances as similar to King Charles’ horse winning the King George V Stakes in 2024; that is, 18 to 1. But Desert Hero pulled it off. Who is rooting for the Canadian publishers?
Stephen E Arnold, June 23, 2023
Many Regulators, Many Countries Cannot Figure Out How to Regulate AI
June 21, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
American and European technology and trade leaders met in Sweden at the beginning of June for the Trade and Tech Council (TTC) summit to discuss their sector’s future. One of the main talking points was how to control AI. The one thing all the leaders agreed on was that they could not agree on anything. Politico tells more of the story in “The Struggle To Control AI.”
The main AI topic international leaders discussed was generative AI, such as Google’s Bard and ChatGPT from OpenAI, and its influence on humanity. The potential for generative AI is limitless, but there are worries that it poses threats to global security and would ruin the job market. The leaders want to prove to the world that democratic governments can advance as quickly as technology does.
A group of regulators discuss regulating AI. The regulators are enjoying a largely unregulated lunch of fast food stuffed with chemicals. Some of these have interesting consequences. One regulator says, “Pass the salt.” Another says, “What about AI and ML?” A third says, “Are those toppings?” The scene was generated by the copyright maven MidJourney.
Leaders from Europe and the United States are anxious to make laws that regulate how AI works in conjunction with society. The TTC’s goal is to develop non-binding standards about AI transparency, risk audits, and technical details. The non-binding standards would police AI so it does not destroy humanity and the planet. The plan is to present the standards at the G7 in Fall 2023.
Europe and the United States need to agree on the standards, except they do not, and that leaves room for China to promote its authoritarian version of AI. The European Union has written the majority of the digital rulebook that Western societies follow. The US has other ideas:
“The U.S., on the other hand, prefers a more hands-off approach, relying on industry to come up with its own safeguards. Ongoing political divisions within Congress make it unlikely any AI-specific legislation will be passed before next year’s U.S. election. The Biden administration has made international collaboration on AI a policy priority, especially because a majority of the leading AI companies like Google, Microsoft and OpenAI, are headquartered in the U.S. For Washington, helping these companies compete against China’s rivals is also a national security priority.”
The European Union wants to do things one way; the United States has other ideas. It is all about talking heads speaking legalese mixed with ethics, while China is pushing its own agenda.
Whitney Grace, June 21, 2023
Call 9-1-1. AI Will Say Hello Soon
June 20, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
My informal research suggests that every intelware and policeware vendor is working to infuse artificial intelligence, or in my lingo “smart software,” into their products and services. Most of these firms are not Chatty Cathies. The information about innovations is dribbled out in talks given at restricted-attendance events. This means that information does not zip around like the #osint posts on the increasingly less-used Twitter service.
Government officials talk about smart software which could reduce costs, but the current budget does not allow its licensing. Furthermore, time is required to rethink what to do with the humanoids who will be rendered surplus and ripe for RIF’ing. One of the attendees wisely asks, “Does anyone want dessert?” A wag of the dinobaby’s tail to MidJourney, which has generated an original illustration unrelated to any content object upon which the system inadvertently fed. Smart software has to gobble lunch just like government officials.
However, once in a while, some information becomes public, and “real news” outfits recognize the value of the information and make useful factoids available. That’s what happened in “A.I. Call Taker Will Begin Taking Over Police Non-Emergency Phone Lines Next Week: ‘Artificial Intelligence Is Kind of a Scary Word for Us,’ Admits Dispatch Director.”
Let me highlight a couple of statements in the cited article.
First, I circled this statement about Portland, Oregon’s new smart system:
“An automated attendant will answer the phone on nonemergency and based on the answers using artificial intelligence—and that’s kind of a scary word for us at times—will determine if that caller needs to speak to an actual call taker,” BOEC director Bob Cozzie told city commissioners yesterday.
I found this interesting and suggestive of how some government professionals will view the smart software-infused system.
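For readers wondering what “based on the answers using artificial intelligence” might look like under the hood, here is a minimal sketch of a triage step. The categories, keywords, and routing defaults are my assumptions for illustration; the article gives no technical detail about the Portland BOEC system.

```python
# Illustrative triage for a non-emergency line (not the BOEC's actual design).
# A production system would use speech-to-text plus a trained intent
# classifier; this stub uses keyword matching to show the decision shape.

ROUTE_TO_HUMAN = "transfer_to_call_taker"
SELF_SERVICE = "automated_handling"

URGENT_HINTS = ("injury", "weapon", "in progress", "threat")
ROUTINE_HINTS = ("abandoned vehicle", "noise complaint", "report online", "parking")

def triage(transcribed_answers: str) -> str:
    text = transcribed_answers.lower()
    if any(hint in text for hint in URGENT_HINTS):
        return ROUTE_TO_HUMAN          # anything urgent-sounding goes to a person
    if any(hint in text for hint in ROUTINE_HINTS):
        return SELF_SERVICE            # routine reports can be filed automatically
    return ROUTE_TO_HUMAN              # when unsure, default to a human call taker

print(triage("I want to report an abandoned vehicle on my street"))  # automated_handling
```

The interesting design question, and presumably part of what makes the word “scary” for the dispatch director, is where that default-to-human line gets drawn.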
Second, I underlined this passage:
The new AI system was one of several new initiatives that were either announced or proposed at yesterday’s 90-minute city “work session” where commissioners grilled officials and consultants about potential ways to address the crisis.
The “crisis”, as I understand it, boils down to staffing and budgets.
Several observations:
- The write up takes a cautious approach to smart software. What will this mean for adoption of even more sophisticated services included in intelware and policeware solutions?
- The message I derived from the write up is that governmental entities are not sure what to do. Will this cloud of unknowing have an impact on adoption of AI-infused intelware and policeware systems?
- The article did not include information from the vendor. Does this fact say something about the reporter’s research, or does it suggest the vendor was not cooperative? Intelware and policeware companies are not particularly cooperative, nor are some of the firms set up to respond to outside inquiries. Will those marketing decisions slow down adoption of smart software?
I will let you ponder the implications of this brief and not particularly detailed article. I would suggest that intelware and policeware vendors put on their marketing hats and plug them into smart software. Some new hurdles for making sales may be on the horizon.
Stephen E Arnold, June 20, 2023
Intellectual Property: What Does That Mean, Samsung?
June 19, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Former Samsung Executive Accused of Trying to Copy an Entire Chip Plant in China.” I have no idea if the story is [a] straight and true, [b] a disinformation post aimed at China, [c] something a “real news” type just concocted with the help of a hallucinating chunk of smart software, or [d] a story emerging from a lunch meeting where “what if” ideas and “hypotheticals” were flitting from Chinese takeout container to takeout container.
It does not matter. I find it bold, audacious, and almost believable.
A single engineer’s pile of schematics, process flow diagrams, and details of third-party hardware required to build a Samsung-like outfit. The illustration comes from the fertile zeros and ones at MidJourney.
The write up reports:
Prosecutors in the Suwon District have indicted a former Samsung executive for allegedly stealing semiconductor plant blueprints and technology from the leading chipmaker, BusinessKorea reports. They didn’t name the 65-year-old defendant, who also previously served as vice president of another Korean chipmaker SK Hynix, but claimed he stole the information between 2018 and 2019. The leak reportedly cost Samsung about $230 million.
Why would someone steal information to duplicate a facility which is probably getting long in the tooth? That’s a good question. Why not steal from the departments of several companies which are planning facilities to be constructed in 2025? The write up states:
The defendant allegedly planned to build a semiconductor in Xi’an, China, less than a mile from an existing Samsung plant. He hired 200 employees from SK Hynix and Samsung to obtain their trade secrets while also teaming up with an unnamed Taiwanese electronics manufacturing company that pledged $6.2 billion to build the new semiconductor plant — the partnership fell through. However, the defendant was able to secure about $358 million from Chinese investors, which he used to create prototypes in a Chengdu, China-based plant. The plant was reportedly also built using stolen Samsung information, according to prosecutors.
Three countries identified. The alleged plant would be located in easy-to-reach Xi’an. (Take a look at the nifty entrance to the walled city. Does that look like a trap to you? It did to me.)
My hunch is that there is more to this story. But it does a great job of casting shade on the Middle Kingdom. Does anyone doubt the risk posed by insiders who get frisky? I want to ask Samsung’s human resources professional about that vetting process for new hires and what happens when a dinobaby leaves the company with some wrinkles, gray hair, and information. My hunch is that the answer will be, “Not much.”
Stephen E Arnold, June 19, 2023