AI Legislation: Can the US Regulate What It Does Understand Like a Dull Normal Student?

April 20, 2023

I read an essay by publishing and technology luminary Tim O’Reilly. If you don’t know the individual, you may recognize the distinctive art used on many of his books. Here’s what I call the parrot book’s cover:

You can get a copy at this link.

The essay to which I referred in the first sentence of this post is “You Can’t Regulate What You Don’t Understand.” The subtitle of the write up is “Or, Why AI Regulations Should Begin with Mandated Disclosures.” The idea is an interesting one.

Here’s a passage I found worth circling:

But if we are to create GAAP for AI, there is a lesson to be learned from the evolution of GAAP itself. The systems of accounting that we take for granted today and use to hold companies accountable were originally developed by medieval merchants for their own use. They were not imposed from without, but were adopted because they allowed merchants to track and manage their own trading ventures. They are universally used by businesses today for the same reason.

The idea is that those without first-hand knowledge of something cannot make effective regulations.

The essay makes it clear that government regulators may be better off:

formalizing and requiring detailed disclosure about the measurement and control methods already used by those developing and operating advanced AI systems. [Emphasis in the original.]

The essay states:

Companies creating advanced AI should work together to formulate a comprehensive set of operating metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.

The conclusion is warranted by the arguments offered in the essay:

We shouldn’t wait to regulate these systems until they have run amok. But nor should regulators overreact to AI alarmism in the press. Regulations should first focus on disclosure of current monitoring and best practices. In that way, companies, regulators, and guardians of the public interest can learn together how these systems work, how best they can be managed, and what the systemic risks really might be.

My thought is that it may be useful to look at what generalities and self-regulation deliver in real life. As examples, I would point out:

  1. The report “Independent Oversight of the Auditing Professionals: Lessons from US History.” To keep it short and sweet: Self-regulation has failed. I will leave you to work through the somewhat academic argument. I have burrowed through the document and largely agree with the conclusion.
  2. The US Securities & Exchange Commission’s decision to accept $1.1 billion in penalties as a result of 16 Wall Street firms’ failure to comply with record keeping requirements.
  3. The hollowness of the points set forth in “The Role of Self-Regulation in the Cryptocurrency Industry: Where Do We Go from Here?” in the wake of the Sam Bankman Fried FTX problem.
  4. The MBA-infused “ethical compass” of outfits operating with a McKinsey-type pivot point.

My view is that the potential payoff from pushing forward with smart software is sufficient incentive to create a Wild West, anything-goes environment. Those companies with the most to gain and the resources to win at any cost can overwhelm US government professionals with flights of legal eagles.

With innovations in smart software arriving quickly, possibly as quickly as new Web pages in the early days of the Internet, firms that don’t move quickly, act expediently, and push toward autonomous artificial intelligence will be unable to catch up with firms that move with alacrity.

Net net: No regulation, imposed or self-generated, will alter the rocket launch of new services. The US economy is not set up to encourage snail-speed innovation. The objective is met by generating money. Money, not guard rails, common sense, or actions which harm a company’s self interest, makes the system work… for some. Losers are the exhaust from an economic machine. One doesn’t drive a Model T Ford. Today those who can, drive a Tesla Plaid or a McLaren. The “pet” is a French bulldog, not a parrot.

Stephen E Arnold, April 20, 2023

Google, Does Quantum Supremacy Imply That Former Staff Grouse in Public?

April 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am not sure if this story is spot on. I am writing about “Report: A Google AI Researcher Resigned after Learning Google’s Bard Uses Data from ChatGPT.” I am skeptical because today is All Fools’ Day. Being careful is sometimes a useful policy. An exception might be when a certain online advertising company is losing bigly to the marketing tactics of [a] Microsoft, the AI in Word and Azure Security outfit, [b] OpenAI and its little language model that could, and [c] Midjourney, which just rolled out its own camera with a chip called Bionzicle. (Is this perhaps pronounced “bio-cycle” like a washing machine cycle or “bion zickle” like bio pickle? I go with the pickle sound; it seems appropriate.)

The cited article reports as actual factual real news:

ChatGPT AI is often accused of leveraging “stolen” data from websites and artists to build its AI models, but this is the first time another AI firm has been accused of stealing from ChatGPT.  ChatGPT is powering Bing Chat search features, owing to an exclusive contract between Microsoft and OpenAI. It’s something of a major coup, given that Bing leap-frogged long-time search powerhouse Google in adding AI to its setup first, leading to a dip in Google’s share price.

This is im port’ANT as the word is pronounced on a certain podcast.

More interesting to me is that recycled Silicon Valley-type real news verifies this remarkable assertion as the knowledge output of a PROM’inANT researcher, allegedly named Jacob Devlin. Mr. Devlin has found his future at – wait for it – OpenAI. Wasn’t OpenAI the company that wanted to do good and save the planet and then discovered Microsoft backing, thirsty trapped AI investors, and the American way of wealth?

Net net: I wish I could say, April’s fool, but I can’t. I have an unsubstantiated hunch that Google’s governance relies on the whims of high school science club members arguing about what pizza topping to order after winning the local math competition. Did the team cheat? My goodness no. The team has an ethical compass modeled on the triangulations of William McCloundy, also known as I.O.U. O’Brien, the fellow who sold the Brooklyn Bridge in the early 20th century.

Stephen E Arnold, April 5, 2023

FAA Software: Good Enough?

January 11, 2023

Is today’s software good enough? For many, the answer is, “Absolutely.” I read “The FAA Grounded Every Single Domestic Flight in the U.S. While It Fixed Its Computers.” The article states what many people in affected airports know:

The FAA alerted the public to a problem with the system at 6:29 a.m. ET on Twitter and announced that it had grounded flights at 7:19 a.m. ET. While the agency didn’t provide details on what had gone wrong with the system, known as NOTAM, Reuters reported that it had apparently stopped processing updated information. As explained by the FAA, pilots use the NOTAM system before they take off to learn about “closed runways, equipment outages, and other potential hazards along a flight route or at a location that could affect the flight.” As of 8:05 a.m. ET, there were 3,578 delays within, out, and into the U.S., according to flight-tracking website FlightAware.

NOTAM, for those not into government speak, means “Notice to Air Missions.”

Let’s go back in history. In the 1990s I think I was on the Board of the National Technical Information Service. One of our meetings was in a facility shared with the FAA. I wanted to move my rental car from the direct sunlight to a portion of the parking lot which would be shaded. I left the NTIS meeting, moved my vehicle, and entered through a side door. Guess what? I still remember my surprise when I was not asked for my admission key card. The door just opened and I was in an area which housed some FAA computer systems. I opened one of those doors and poked my nose in and saw no one. I shut the door, made sure it was locked, and returned to the NTIS meeting.

I recall thinking, “I hope these folks do software better than they do security.”

Today’s (January 11, 2023) FAA story reminded me that security procedures provide a glimpse of such technical aspects of a government agency as software. I had an engagement for the blue chip consulting firm for which I worked in the 1970s and early 1980s to observe air traffic control procedures and systems at one of the busy US airports. I noticed that incoming aircraft were monitored by printing out tail numbers and details of the flight, using a rubber band to affix these data to wooden blocks which were stacked in a holder on the air traffic control tower’s wall. A controller knew the next flight to handle by taking the bottommost block, using the data, and putting the used block back in a box on a table near the bowl of antacid tablets.
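
For the software-minded, that wooden-block routine is simply a first-in, first-out queue. Here is a toy model, purely illustrative and assuming nothing about any real FAA system; the names are invented for the sketch:

```python
from collections import deque

# Toy model of the flight-strip workflow described above: each inbound
# flight's details were banded to a wooden block and stacked in a holder;
# the controller always worked the bottommost block first. In software
# terms, that is a FIFO queue. Invented names; not any FAA interface.

strips = deque()  # the wall-mounted holder

def receive_flight(tail_number: str, details: str) -> None:
    """A new block goes on top of the stack (the back of the queue)."""
    strips.append((tail_number, details))

def handle_next_flight():
    """The controller takes the bottommost block (the front of the queue)."""
    return strips.popleft() if strips else None

receive_flight("N12345", "inbound, runway 27L")
receive_flight("N67890", "inbound, runway 27R")
print(handle_next_flight())  # first in, first out: N12345 is handled first
```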

I recall that discussions were held about upgrading certain US government systems; for example, the IRS and the FAA computer systems. I am not sure if these systems were upgraded. My hunch is that legacy machines are still chugging along in facilities which hopefully are more secure than the door to the building referenced above.

My point is that “good enough” or “close enough for government work” is not a new concept. Many administrations have tried to address legacy systems and their propensity to [a] fail, like the Social Security Administration’s mainframe-to-Web system; [b] not work as advertised, that is, output data that just doesn’t jibe with other records of certain activities (sorry, I am not comfortable naming that agency); or [c] become unstable because funds for training staff, money for qualified contractors, or investments in infrastructure to keep the as-is systems working in an acceptable manner are lacking.

I think someone other than a 78-year-old should be thinking about how to build technology infrastructure that, unlike Southwest Airlines’ systems or the FAA’s system, does not fail.

Why are these core systems failing? Here’s my list of thoughts. Note: Some of these will make anyone between 23 and 45 unhappy. Here goes:

  1. The people running agencies and their technology units don’t know what to do.
  2. The consultants hired to do the work agency personnel should do don’t deliver top-quality work. The objective may be a scope change or a new contract, not a healthy system.
  3. The programmers don’t know what to do with IBM-type mainframe systems or other legacy hardware. These are not zippy mobile phones which run apps. These are specialized systems whose quirks and characteristics often have to be learned with hands on interaction. YouTube videos or a TikTok instructional video won’t do the job.

Net net: Failures are baked into commercial and government systems. The simultaneous failure of several core systems will generate more than annoyed airline passengers. Time to shift from “good enough” to “do the job right the first time.” See. I told you I would annoy some people with my observations. Well, reality is different from thinking that smart software will write itself.

Stephen E Arnold, January 11, 2023

Ah, Emergent Behavior: Tough to Predict, Right?

December 28, 2022

Super manager Jeff (I manage people well) Dean and a gam of Googlers published “Emergent Abilities of Large Language Models.” The idea is that those smart software systems informed by ingesting large volumes of content demonstrate behaviors the developers did not expect. Surprise!

Also, Google published a slightly less turgid discussion of the paper, which has 16 authors, in a blog post called “Characterizing Emergent Phenomena in Large Language Models.” This post went live in November 2022, but the time required to grind through the 30-page “technical” excursion was not available to me until this weekend. (Hey, being retired and working on my new lectures for 2023 is time-consuming. Plus, disentangling Google’s techy content marketing from the often tough-to-figure-out text and tiny graphs is not easy for my 78-year-old eyes.)

Helpful, right? Source: https://openreview.net/pdf?id=yzkSU5zdwD

In a nutshell, the smart software does things the wizards had not anticipated. According to the blog post:

The existence of emergent abilities has a range of implications. For example, because emergent few-shot prompted abilities and strategies are not explicitly encoded in pre-training, researchers may not know the full scope of few-shot prompted abilities of current language models. Moreover, the emergence of new abilities as a function of model scale raises the question of whether further scaling will potentially endow even larger models with new emergent abilities. Identifying emergent abilities in large language models is a first step in understanding such phenomena and their potential impact on future model capabilities. Why does scaling unlock emergent abilities? Because computational resources are expensive, can emergent abilities be unlocked via other methods without increased scaling (e.g., better model architectures or training techniques)? Will new real-world applications of language models become unlocked when certain abilities emerge? Analyzing and understanding the behaviors of language models, including emergent behaviors that arise from scaling, is an important research question as the field of NLP continues to grow.
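
The mechanism behind “few-shot prompted abilities” is easy to make concrete. Below is a minimal sketch, assuming only a generic text-completion endpoint; the complete() function is a hypothetical stand-in, not Google’s or anyone else’s actual API, and the unscramble task merely echoes the sort of word-manipulation benchmarks the paper discusses:

```python
# Zero-shot vs. few-shot prompting. Nothing is trained or fine-tuned;
# the worked examples live entirely in the prompt. The paper's claim is
# that the ability to exploit such examples appears abruptly once a
# model passes a certain scale, i.e., it is "emergent."

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a large language model endpoint."""
    raise NotImplementedError("wire this to a real model to experiment")

# Zero-shot: the model sees only the task instance.
zero_shot = "Q: Unscramble the letters 'tca' into an English word.\nA:"

# Few-shot: the same task, preceded by two worked examples in the prompt.
few_shot = (
    "Q: Unscramble the letters 'dgo' into an English word.\nA: dog\n"
    "Q: Unscramble the letters 'atr' into an English word.\nA: rat\n"
    "Q: Unscramble the letters 'tca' into an English word.\nA:"
)

for name, prompt in [("zero-shot", zero_shot), ("few-shot", few_shot)]:
    print(f"--- {name} prompt ---\n{prompt}\n")
```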

The write up emulates other Googlers’ technical write ups. I noted several facets of the topic not included in the paper on OpenReview.net’s version of the paper. (Note: Snag this document now because many Google papers, particularly research papers, have a tendency to become unfindable for the casual online search expert.)

First, emergent behavior means humans were able to observe unexpected outputs or actions. The question is, “What less obvious emergent behaviors are operating within the code edifice?” Is it possible the wizards are blind to more substantive but subtle processes? Could some of these processes be negative? If so, which are they, and how does the observer identify them before an undesirable or harmful outcome is discovered?

Second, emergent behavior, in my view of bio-emulating systems, evokes the metaphor of cancer. If we assume the emergent behavior is cancerous, what’s the mechanism for communicating these behaviors to others working in the field in a responsible way? Writing a 30 page technical paper takes time, even for super duper Googlers. Perhaps the “emergent” angle requires a bit more pedal to the metal?

Third, how does the emergent behavior fit into the Google plan to make its approach to smart software the de facto standard? There is big money at stake because more and more organizations will want smart software. But will these outfits sign up with a system that demonstrates what might be called “off the reservation” behavior? One example is the use of Google methods for war fighting. Will smart software write a sympathy note to those affected by an emergent behavior or just a plain incorrect answer buried in a subsystem?

Net net: I discuss emergent behavior in my lecture about shadow online services. I cover what the software does and what use humans make of these little understood yet rapidly diffusing methods.

Stephen E Arnold, December 28, 2022

Fried Dorsey: Soggy, Not Crispy

December 15, 2022

I noted an odd shift in Big Tech acceptance of responsibility. For now, I will call this the Fried Dorsey Anomaly.

First, CNBC reported on a letter the MIT graduate and top dog at FTX wrote to employees. The article has the snappy title “Here’s the Apology Letter Sam Bankman-Fried Sent to FTX Employees: When Sh—y Things Happen to Us, We All Tend to Make Irrational Decisions.” The logic in this victim argument and the use of a categorical affirmative are probably interesting to someone who loved Psychology 101. Here’s the sentence which caught my eye:

“I lost track of the most important things in the commotion of company growth. I care deeply about you all, and you were my family, and I’m sorry…”

This is the “Fried” side of making or not making certain decisions. Then there’s the apology.

Now let’s shift to the Dorsey facet of the anomaly. The estimable Wall Street Journal published “Dorsey Calls Twitter Controls Too Great.” The write up appeared in the December 15, 2022, dead tree version of the Murdoch output. The online, paywalled article is at this link.  Here’s the statement I noted:

If you want to blame, direct it at me and my actions.

These quotes are somewhat different from the “Senator, thank you for the question” and “We will improve…” statements from what we can think of as the pre-Covid era of Big Tech.

Now we have individuals accepting blame and demonstrating a soupçon of remorse, regret, or some related mental posture.

Thus, the post-Covid era of Big Tech is now into mea culpa suggestions and acceptance of blame.

Will the Fried Dorsey Anomaly persist? Will the tactic work as the penitents anticipate? Wow, I am convinced already.

Stephen E Arnold, December 15, 2022

Google Knocks NSO Group Off the PR Cat-Bird Seat

June 14, 2022

My hunch is that the executives at NSO Group are tickled that a knowledge warrior at Alphabet Google YouTube DeepMind rang the PR bell.

Google is in the news. Every. Single. Day. One government or another is investigating the company, fining the company, or denying Google access to something or another.

“Google Engineer Put on Leave after Saying AI Chatbot Has Become Sentient” is typical of the tsunami of commentary about this assertion. The UK newspaper’s write up states:

Lemoine, an engineer for Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express thoughts and feelings that was equivalent to a human child.

Is this a Googler buying into the Google view that it is the smartest outfit in the world, capable of solving death, achieving quantum supremacy, and avoiding the subject of online ad fraud? Or is it the viewpoint of a smart person who is lost in the Google metaverse, flush with the insight that software is by golly alive?

The article goes on:

The exchange is eerily reminiscent of a scene from the 1968 science fiction movie 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with human operators because it fears it is about to be switched off.

Yep, Mary had a little lamb, Dave.

The talkative Googler was parked somewhere. The article notes:

Brad Gabriel, a Google spokesperson, also strongly denied Lemoine’s claims that LaMDA possessed any sentient capability. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)…”

Quantum supremacy is okay to talk about. Smart software chatter appears to lead Waymo drivers to a rest stop.

TechMeme today (Monday, June 13, 2022) has links to many observers, pundits, poobahs, self appointed experts, and Twitter junkies.

Perhaps a few questions may help me think through how an online ad company knocked NSO Group off its perch as the most discussed specialized software company in the world. Let’s consider several:

  1. Why’s Google so intent on silencing people like this AI fellow and the researcher Timnit Gebru? My hunch is that the senior managers of Alphabet Google YouTube DeepMind (hereinafter AGYD) have concerns about chatty Cathies or loose lipped Lemoines. Why? Fear?
  2. Has AGYD’s management approach fallen short of the mark when it comes to creating a work environment in which employees know what to talk about, how to address certain subjects, and when to release information? If Lemoine’s information is accurate, is Google about to experience its Vault 7 moment?
  3. Where are the AGYD enablers and their defense of the company’s true AI capability? I look to Snorkel and maybe Dr. Christopher Ré or a helpful defense of Google reality from DeepDyve? Will Dr. Gebru rush to Google’s defense and suggest Lemoine was out of bounds? (Yeah, probably not.)

To sum up: NSO Group has been in the news for quite a while: The Facebook dust up, the allegations about the end point for Jamal Khashoggi, and Israel’s clamp down on certain specialized software outfits whose executives order take away from Sebastian’s restaurant in Herzliya.

Worth watching this AGYD race after the Twitter clown car for media coverage.

Stephen E Arnold, June 14, 2022

NSO Group Knock On: Live from Madrid

May 10, 2022

The NSO Group fan Paz Esteban has been gored (metaphorically speaking, of course). “Spain’s Spy Chief Sacked after Pegasus Spyware Revelations” reports that “Paz Esteban reportedly loses job after Catalan independence figures were said to have been targeted.” How about those hedging Latinate structures? The write up alleges:

Esteban reportedly confirmed last week to a congressional committee that 18 members of the Catalan independence movement were spied on with judicial approval by Spain’s National Intelligence Centre (CNI), leaving the Catalan regional government demanding answers.

I suppose spying on the Barcelona football team makes sense if one roots for Real Madrid. It is more of a stretch to monitor 18 individuals who want to do a 180-degree turn away from Madrid’s approach to maintaining law, order, health, peace, prosperity, etc. etc.

Yep, the action was approved. Life would have been more like a late dinner than a burger from a fantastic American fast food restaurant. That’s the problem. The gobbling of the fries was approved by lawyers.

That’s a crisis. Making the spry 64-year-old Ms. Esteban López the beard is unfortunate. My hunch is that some youthful whiz kids found the NSO Group’s Pegasus a fun digital horse to ride. The idea floated upwards for approval and ended up in front of the “judiciary.” That mysterious entity thought letting the kids ride the Pegasus was a perfectly okay idea.

Now a crisis is brewing. The gored Ms. Esteban López may only be one of the first in the intelligence, law enforcement, and judiciary to feel the prick of the digital bull’s horns and the knock from the beastie’s hooves.

Several observations:

  1. Who else will be implicated in this interesting matter? Who will be tossed aloft only to crash to the albero del ruedo, the sand of the bullring?
  2. Will a parliamentary inquiry move forward? What will that become? A romp with Don Quixote and Sancho?
  3. Is a new Spanish inquisition about to begin?

Excitement in the Plaza de Toros de Las Ventas perhaps?

Stephen E Arnold, May 10, 2022

UAE Earns a Spot on Global Gray List

April 26, 2022

Forget Darkmatter. This is a gray matter.

Where is the best place to stash ill-gotten gains? The Cayman Islands and Switzerland come to mind, and we have to admit the US is also in the running. But there is another big contender—the United Arab Emirates. The StarTribune reports, “Anti-Money-Laundering Body Puts UAE on Global ‘Gray’ List.” Writer Jon Gambrell tells us:

“A global body focused on fighting money laundering has placed the United Arab Emirates on its so-called ‘gray list’ over concerns that the global trade hub isn’t doing enough to stop criminals and militants from hiding wealth there. The decision late Friday night by the Paris-based Financial Action Task Force [FATF] puts the UAE, home to Dubai and oil-rich Abu Dhabi, on a list of 23 countries including fellow Mideast nations Jordan, Syria and Yemen.”

Will the official censure grievously wound business in the country? Not by a long shot, though it might slightly tarnish its image and even affect interest rates. The FATF admits the UAE has made significant progress in fighting the problem but insists more must be done. Admittedly, the task was monumental from the start. We learn:

“The UAE long has been known as a place where bags of cash, diamonds, gold and other valuables can be moved into and through. In recent years, the State Department had described ‘bulk cash smuggling’ as ‘a significant problem’ in the Emirates. A 2018 report by the Washington-based Center for Advanced Defense Studies, relying on leaked Dubai property data, found that war profiteers, terror financiers and drug traffickers sanctioned by the U.S. had used the city-state’s boom-and-bust real estate market as a safe haven for their money.”

Is the government motivated to change its country’s ways? Yes, according to a statement from the Emirates’ Executive Office of Anti-Money Laundering and Countering the Financing of Terrorism. That ponderously named body promises to continue its efforts to thwart and punish the bad actors. The country’s senior diplomat also chimed in on Twitter, pledging ever stronger cooperation with global partners to address the issue.

Cynthia Murrell, April 26, 2022

NSO Group, the PR Champ of Intelware Does It Again: This Time Jordan

April 11, 2022

I hope this write up “NSO Hacked New Pegasus Victims Weeks after Apple Sought Injunction” is one of those confections which prove to be plastic. You know: like the plastic sushi in restaurant windows in Osaka. The news report, based on a report from Citizen Lab and an outfit called Front Line Defenders, delineates how a Jordanian journalist’s mobile device was tapped.

The article reports:

The NSO-built Pegasus spyware gives its government customers near-complete access to a target’s device, including their personal data, photos, messages and precise location. Many victims have received text messages with malicious links, but Pegasus has more recently been able to silently hack iPhones without any user interaction, or so-called “zero-click” attacks. Apple last year bolstered iPhone security by introducing BlastDoor, a new but unseen security feature designed to filter out malicious payloads sent over iMessage that could compromise a device. But NSO was found to have circumvented the security measure with a new exploit, which researchers named ForcedEntry for its ability to break through BlastDoor’s protections. Apple fixed BlastDoor in September after the NSO exploit was found to affect iPads, Macs, and Apple Watches, not just iPhones.

This is “old news.” The incident dates from 2021, and since that time the MBA-infused, cowboy software outfit has sparked a rethinking of how software from a faithful US ally can be sold and to whom. Prior to the NSO Group’s becoming the poster child for mobile surveillance, the intelware industry was chugging along in relative obscurity. Those who knew about specialized software and services conducted low-profile briefings and talks out of the public eye. What better place to chat than at a classified or restricted-attendance conference? Certainly not in the pages of online blogs, estimable “real news” organs, or in official government statements.

Apple, the big tech company which cares about most of its customers and some of its employees (exceptions are leakers and those who want to expose certain Apple administrative procedures related to personnel), continues to fix its software. These fixes, as Microsoft’s security professionals have learned, can be handled by downplaying the attack surface its systems present to bad actors. Other tactics include trying to get assorted governments to help blunt the actions of bad actors and certain nation states which buy intelware for legitimate purposes. How this is to be accomplished remains a mystery to me, but Apple wanted an injunction to slow down the NSO Group’s exploit capability. How did that work out? Yeah. Other tactics include rolling out products in snazzy online events, making huge buyout plays, and pointing fingers at everyone except those who created the buggy and security-lax software.

I am not sure where my sympathies lie. Yes, I understand the discomfort the Jordanian target has experienced, but mobile devices are surveilled 24×7 now. I understand that. Do you? I am not sure I resonate with NSO Group’s efforts to build its business either. I know I don’t vibrate like the leaves in the apple orchard.

The context for these intelware issues is a loss of social responsibility which I think begins at an early age. Without consequences, what exactly happens? My answer is, “Lots of real news, outrage, and not much else.” Without consequences, why should ethics, responsible behavior, and appropriate regulatory controls come into play?

Stephen E Arnold, April 11, 2022

MIT: Censorship and the New Approach to Learning

October 27, 2021

MIT is one of the top science and technology universities in the world. Like many universities in the United States, MIT has had its share of controversial issues related to cancel culture. The Atlantic discusses the most recent incident in the article, “Why The Latest Campus Cancellation Is Different.”

MIT invited geophysicist Dorian Abbot to deliver the yearly John Carlson Lecture about his new climate science research. When MIT students heard Abbot was invited to speak, they campaigned to disinvite him. MIT’s administration caved and Abbot’s invitation was rescinded. Unlike other cancel culture issues, when MIT disinvited Abbot it was not because he denied climate change or committed a crime. Instead, he gave his opinion about affirmative action and other ways minorities have advantages in college admission.

Abbot criticized affirmative action as well as legacy and athletic admissions, which favor white applicants. He then compared these admission processes to 1930s Germany, and that is a big no-no:

“Abbot seemingly meant to highlight the dangers of thinking about individuals primarily in terms of their ethnic identity. But any comparison between today’s practices on American college campuses and the genocidal policies of the Nazi regime is facile and incendiary.

Even so, it is patently absurd to cancel a lecture on climate change because of Abbot’s article in Newsweek. If every cringe worthy analogy to the Third Reich were grounds for canceling talks, hundreds of professors—and thousands of op-ed columnists—would no longer be welcome on campus.”

Pew Research shows that the majority of the United States believes merit-based admissions or hiring is the best system. Even the liberal state of California voted to uphold a ban on affirmative action.

MIT’s termination of the Abbot lecture may be an example of how leading universities define learning, information, and discussion. People are no longer allowed to have opposing or controversial beliefs if they offend someone. This harms not only an academic setting, especially at a research-heavy university like MIT, but all of society.

It is also funny that MIT was quick to cancel Abbot but happily accepted money from Jeffrey Epstein. Interesting.

Whitney Grace, October 27, 2021
