A New Year Alert: Americans Cannot Read

January 1, 2025

The United States is a large, self-contained country, and that monolith status keeps it isolated. The rest of the world views the US as a stupid country, and NBC News shares evidence supporting that view: “Survey: Growing Number Of U.S. Adults Lack Literacy Skills.” The National Center for Education Statistics (NCES) reported that the share of low-skilled readers rose sharply, from 19% in 2017 to 28% in 2023.

That substantial increase doesn’t bode well for the US, but compared to other countries the US fared reasonably well. The US’s scores held steady according to the Survey of Adult Skills, which covered more than two dozen countries, many of them members of the Organization for Economic Cooperation and Development. The survey measures the working-age population’s literacy, numeracy, and problem-solving skills. Most of the countries, including European and Asian ones, posted results comparable to the US.

The greatest surprises: Japan’s share of low performers rose from 5% to 9%, England held steady at 17%, Singapore jumped from 26% to 30%, and Germany edged up from 18% to 20%. The biggest changes were in South Korea and Lithuania; both countries went from the teens to thirty percent or higher.

This doesn’t mean the US and other nations are idiots (arguably):

“Low scores don’t equal illiteracy, [NCES Commissioner Peggy Carr] said — the closest the survey comes to that is measuring those who could be called functionally illiterate, which is the inability to read or write at a level at which you’re able to handle basic living and workplace tasks.

Asked what could be causing the adult literacy decline in the U.S., Carr said, ’It is difficult to say.’”

The Internet and a lack of reading are the cause, dingbat!

Whitney Grace, January 1, 2025

The US and Math: Not So Hot

January 1, 2025

In recent decades, the US educational system has increasingly emphasized teaching to the test over niceties like critical thinking and deep understanding. How is that working out for us? Not well. Education news site Chalkbeat reports, "U.S. Math Scores Drop on Major International Test."

Last year, the Trends in International Mathematics and Science Study assessed over 650,000 fourth and eighth graders in 64 countries. The test is performed every four years, and its emphasis is on foundational skills in those subjects. Crucial knowledge for our young people to have, not just for themselves but for the future of the country. That future is not looking so good. The write-up includes a chart of the rankings, with the U.S. now squarely in the middle. We learn:

"U.S. fourth graders saw their math scores drop steeply between 2019 and 2023 on a key international test even as more than a dozen other countries saw their scores improve. Scores dropped even more steeply for American eighth graders, a grade where only three countries saw increases. The declines in fourth grade mathematics in the U.S. were among the largest in the participating countries, though American students are still in the middle of the pack internationally. The extent of the decline seems to be driven by the lowest performing students losing more ground, a worrying trend that predates the pandemic."

So we can’t just blame this on the pandemic, when schools were shuttered and students "attended" classes remotely. A pity. The results are no surprise to many who have been sounding alarm bells for years. So why not just drop perpetual testing and return to more effective instruction? It couldn’t have anything to do with corporate interests, could it? Naw, even the jaded and powerful must know the education of our youth is too important to put behind profits.

Cynthia Murrell, January 1, 2025

Smart Software and Knowledge Skills: Nothing to Worry About. Nothing.

July 5, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read an article in Bang Premier (an estimable online publication of which I had no prior knowledge). It is now a “fave of the week.” The story “University Researchers Reveal They Fooled Professors by Submitting AI Exam Answers” was one of those experimental results which caused me to chuckle. I like to keep track of sources of entertaining AI information.


A doctor and his surgical team used smart software to ace their medical training. Now a patient learns that the AI system does not have the information needed to perform life-saving surgery. Thanks, MSFT Copilot. Good enough.

The Bang Premier article reports:

Researchers at the University of Reading have revealed they successfully fooled their professors by submitting AI-generated exam answers. Their responses went totally undetected and outperformed those of real students, a new study has shown.

Is anyone surprised?

The write up noted:

Dr Peter Scarfe, an associate professor at Reading’s school of psychology and clinical language sciences, said about the AI exams study: “Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments. We won’t necessarily go back fully to handwritten exams, but the global education sector will need to evolve in the face of AI.”

But the knee slapper is this statement in the write up:

In the study’s endnotes, the authors suggested they might have used AI to prepare and write the research. They stated: “Would you consider it ‘cheating’? If you did consider it ‘cheating’ but we denied using GPT-4 (or any other AI), how would you attempt to prove we were lying?” A spokesperson for Reading confirmed to The Guardian the study was “definitely done by humans”.

The researchers may not have used AI to create their report, but is it possible that some of the researchers thought about this approach?

Generative AI software seems to have hit a plateau due to technology, financial, or training issues. Perhaps those who are trying to design a smart system to identify bogus images, machine-produced text and synthetic data, and nifty videos which often look like “real” TikTok-type creations will catch up? But if the AI innovators continue to refine their systems, the “AI identifier” software is stuck in a game of cat and mouse: because it can only react to new smart software, existing identifiers will be blind to the new systems’ outputs.

The goal is a noble one, but the advantage goes to the AI companies, particularly those who want to go fast and break things. Academics get some benefit. New studies will be needed to determine how much fakery goes undetected. Will a surgeon who used AI to get his or her degree be able to handle a tricky operation and get the post-op drugs right?

Sure. No worries. Some might not think this is a laughing matter. Hey, it’s AI. It is A-Okay.

Stephen E Arnold, July 5, 2024

Is There a Problem with AI Detection Software?

July 1, 2024

Of course not.

But colleges and universities are struggling to contain AI-enabled cheating. Sadly, it seems the easiest solution is tragically flawed. Times Higher Education considers, “Is it Time to Turn Off AI Detectors?” The post shares a portion of the new book, “Teaching with AI: A Practical Guide to a New Era of Human Learning” by José Antonio Bowen and C. Edward Watson. The excerpt begins by looking at the problem:

“The University of Pennsylvania’s annual disciplinary report found a seven-fold (!) increase in cases of ‘unfair advantage over fellow students’, which included ‘using ChatGPT or Chegg’. But Quizlet reported that 73 per cent of students (of 1,000 students, aged 14 to 22 in June 2023) said that AI helped them ‘better understand material’. Watch almost any Grammarly ad (ubiquitous on TikTok) and ask first, if you think clicking on ‘get citation‘ or ‘paraphrase‘ is cheating. Second, do you think students might be confused?”

Probably. Some universities are not exactly clear on what is cheating and what is permitted usage of AI tools. At the same time, a recent study found 51 percent of students will keep using them even if they are banned. The boost to their GPAs is just too tempting. Schools’ urge to fight fire with fire is understandable, but detection tools are far from perfect. We learn:

“AI detectors are already having to revise claims. Turnitin initially claimed a 1 per cent false-positive rate but revised that to 4 per cent later in 2023. That was enough for many institutions, including Vanderbilt, Michigan State and others, to turn off Turnitin’s AI detection software, but not everyone followed their lead. Detectors vary considerably in their accuracy and rate of false positives. One study looked at 14 different detectors and found that five of the 14 were only 50 per cent accurate or worse, but four of them (CheckforAI, Winston AI, GPT-2 Output and Turnitin) missed only one of the 18 AI-written samples. Detectors are not all equal, but the best are better than faculty at identifying AI writing.”

But is that ability worth the false positives? One percent may seem small, but to those students it can mean an end to their careers before they even begin. For institutions that do not want to risk false accusations, the authors suggest several alternatives that seem to make a difference. They advise instructors to discuss the importance of academic integrity at the beginning of the course and again as the semester progresses. Demonstrating how well detection tools work can also have an impact. Literally quizzing students on the school’s AI policies, definitions, and consequences can minimize accidental offenses. Schools could also afford students some wiggle room: allow them to withdraw submissions and take the zero if they have second thoughts. Finally, the authors suggest schools normalize asking for help. If students get stuck, they should feel they can turn to a human instead of AI.
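To make the stakes concrete, here is a back-of-the-envelope calculation, a minimal sketch in Python. The 1 percent and 4 percent rates are the Turnitin figures quoted above; the number of human-written submissions is a made-up assumption, not a figure from the book.

```python
# Rough expected number of honest students falsely flagged by an AI detector.
# The 1% and 4% false-positive rates are the Turnitin figures cited above;
# the number of human-written submissions is a hypothetical assumption.

human_written_submissions = 5_000  # e.g., one mid-sized institution per term

for false_positive_rate in (0.01, 0.04):
    expected_false_flags = human_written_submissions * false_positive_rate
    print(f"At a {false_positive_rate:.0%} false-positive rate, "
          f"roughly {expected_false_flags:.0f} honest papers get flagged.")
```

Under those assumed numbers, even the original 1 percent claim means dozens of honest students accused each term; at 4 percent it is hundreds.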

Cynthia Murrell, July 1, 2024

Now Teachers Can Outsource Grading to AI

June 10, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

In a prime example of doublespeak, the “No Child Left Behind” act of 2002 ushered in today’s teach-to-the-test school environment. Once upon a time, teachers could follow student interest deeper into a subject, explore topics tangential to the curriculum, and encourage children’s creativity. Now it seems if it won’t be on the test, there is no time for it. Never mind evidence that standardized tests do not even accurately measure learning. Or the psychological toll they take on students. But education degradation is about to get worse.

Get ready for the next level in impersonal instruction. Graded.Pro is “AI Grading and Marking for Teachers and Educators.” Now teachers can hand the task of evaluating every classroom assignment off to AI. On the Graded.Pro website, one can view explanatory videos and see examples of AI-graded assignments. Math, science, history, English, even art. The test maker inputs the criteria for correct responses and the AI interprets how well answers adhere to those descriptions. This means students only get credit for that which an AI can measure. Sure, there is an opportunity for teachers to review the software’s decisions. And some teachers will do so closely. Others will merely glance at the results. Most will fall somewhere in between.

Here are the assignment and solution description from the Art example: “Draw a lifelike skull with emphasis on shading to develop and demonstrate your skills in observational drawing.

Solutions:

  • The skull dimensions and proportions are highly accurate.
  • Exceptional attention to fine details and textures.
  • Shading is skillfully applied to create a dynamic range of tones.
  • Light and shadow are used effectively to create a realistic sense of volume and space.
  • Drawing is well-composed with thoughtful consideration of the placement and use of space.”

See the website for more examples as well as answers and grades. Sure, these are all relevant skills. But evaluation should not stop at the limits of an AI’s understanding. An insightful interpretation in a work of art? Brilliant analysis in an essay? A fresh take on an historical event? Qualities like those take a skilled human teacher to spot, encourage, and develop. But soon there may be no room for such niceties in education. Maybe, someday, no room for human teachers at all. After all, software is cheaper and does not form pesky unions.
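For readers wondering what “inputting the criteria for correct responses” might look like in practice, here is a minimal, purely illustrative sketch of rubric-based grading with a large language model. This is not Graded.Pro’s actual code; the rubric text, prompt wording, and model name are assumptions for the example, and grading the skull drawing above would additionally require a vision-capable model.

```python
# Illustrative sketch of rubric-based AI grading (not Graded.Pro's real code).
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

# Hypothetical rubric a teacher might enter for a short-answer history question.
rubric = [
    "Identifies the main cause of the event described in the prompt.",
    "Cites at least one primary source.",
    "Explains the consequence in the student's own words.",
]

def grade(student_answer: str) -> str:
    """Ask the model to score the answer against each rubric criterion."""
    prompt = (
        "Grade the student response against each criterion below. "
        "For each criterion, give a score out of 10 and one sentence of justification.\n\n"
        "Criteria:\n" + "\n".join(f"- {c}" for c in rubric)
        + "\n\nStudent response:\n" + student_answer
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for the sketch
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(grade("The war started because of a trade dispute, as the treaty shows."))
```

Whatever the plumbing, the point stands: the software only credits what its rubric-and-prompt pipeline can measure, and a human still has to decide whether to trust the result.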

Most important, however: grading is a bummer. Every child is exceptional. So argue with the robot that gave little Debbie an F.

Cynthia Murrell, June 10, 2024

The Evolution of Study Notes: From Lazy to Downright Slothful

April 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Study guides, Cliff Notes, movie versions, comic books, and bribing elder siblings or past students for their old homework and class notes were how kids used to work their way through classes. Then came the Internet, and over the years innovative people have perfected the study guide. Some have even built successful businesses from study guides for literature, science, math, foreign language, writing, history, and more.

The quality of these study guides ranges from poor to fantastic. PinkMonkey.com is one of the average study guide websites. It has some free book guides, while others are behind a paywall. There are also educational tips for different grades and advice for college applications. The information is a little dated, but when combined with other educational and homework-help websites it still has its uses.

PinkMonkey.com describes itself as:

“…a "G" rated study resource for junior high, high school, college students, teachers and home schoolers. What does PinkMonkey offer you? The World’s largest library of free online Literature Summaries, with over 460 Study Guides / Book Notes / Chapter Summaries online currently, and so much more. No more trips to the book store; no more fruitless searching for a booknote that no one ever has in stock! You’ll find it all here, online 24/7!”

YouTube, TikTok, and other platforms are also 24/7. They are also being powered more and more by AI. It won’t be long before AI is condensing these guides and turning them into consumable videos. There are already channels that make study guides, but homework still requires more than an AI answer.

ChatGPT and other generative AI algorithms are getting smarter by being trained on datasets pulled from the Internet, including books, videos, and more. In the future, students will be relying on study guides in video format. The question to ask is how they will look. Will they summarize an entire book in fifteen seconds, take it chapter by chapter, or make movies powered by AI?

Whitney Grace, April 22, 2024

Harvard University: William James Continues Spinning in His Grave

March 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

William James, brother of the novelist Henry James (whose 20 novels cause my mind to wander just thinking about any one of them), loved Harvard University. In a speech at Stanford University, he admitted his untoward affection. If one wanders by William’s grave in Cambridge Cemetery (daylight only, please), one can hear a sound similar to a giant sawmill blade emanating from a modest tombstone. “What’s that horrific sound?” a passerby might ask. The answer: “William is spinning in his grave. It’s a bit like a perpetual motion machine now,” one elderly person says. “And it is getting louder.”


William is spinning in his grave because his beloved Harvard appears to foster making stuff up. Thanks, MSFT Copilot. Working on security today or just getting printers to work?

William is amping up his RPMs. Another distinguished Harvard expert, professor, and shaper of the minds of young men and women and thems has been caught fabricating data. This is not the overt synthetic data shop at Stanford University’s Artificial Intelligence Lab and the commercial outfit Snorkel. Nope. This is just a faculty member who, by golly, wanted to be respected, it seems.

The Chronicle of Higher Education (the immensely popular online information service consumed by thumb typers and swipers) published “Here’s the Unsealed Report Showing How Harvard Concluded That a Dishonesty Expert Committed Misconduct.” (Registration required because, you know, information about education is sensitive and users must be monitored.) The report allegedly ran 1,300 pages. I did not read it. I get the drift: another esteemed scholar just made stuff up. In my lingo, the individual shaped reality to support her / its vision of self. Reality was not delivering honor, praise, rewards, money, and freedom from teaching horrific undergraduate classes. Why not take the Excel macro route to achievement: invent and massage information. Who is going to know?

The write up says:

the committee wrote that “she does not provide any evidence of [research assistant] error that we find persuasive in explaining the major anomalies and discrepancies.” Over all, the committee determined “by a preponderance of the evidence” that Gino “significantly departed from accepted practices of the relevant research community and committed research misconduct intentionally, knowingly, or recklessly” for five alleged instances of misconduct across the four papers. The committee’s findings were unanimous, except for in one instance. For the 2012 paper about signing a form at the top, Gino was alleged to have falsified or fabricated the results for one study by removing or altering descriptions of the study procedures from drafts of the manuscript submitted for publication, thus misrepresenting the procedures in the final version. Gino acknowledged that there could have been an honest error on her part. One committee member felt that the “burden of proof” was not met while the two other members believed that research misconduct had, in fact, been committed.

Hey, William, let’s hook you up to a power test dynamometer so we can determine exactly how fast you are spinning in your chill, dank abode. Of course, if the data don’t reveal high-RPM spinning, someone at Harvard can be enlisted to touch up the data. Everyone seems to be doing it, from my vantage point in rural Kentucky.

Is there a way to harness the energy of professors who may cut corners and respected but deceased scholars to do something constructive? Oh, look. There’s a protest group. Let’s go ask them for some ideas. On second thought… let’s not.

Stephen E Arnold, March 15, 2024

Stanford: Tech Reinventing Higher Education: I Would Hope So

March 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “How Technology Is Reinventing Education.” Essays like this one are quite amusing. The ideas flow without important context. Let’s look at this passage:

“Technology is a game-changer for education – it offers the prospect of universal access to high-quality learning experiences, and it creates fundamentally new ways of teaching,” said Dan Schwartz, dean of Stanford Graduate School of Education (GSE), who is also a professor of educational technology at the GSE and faculty director of the Stanford Accelerator for Learning. “But there are a lot of ways we teach that aren’t great, and a big fear with AI in particular is that we just get more efficient at teaching badly. This is a moment to pay attention, to do things differently.”


A university expert explains to a rapt audience that technology will make them healthy, wealthy, and wise. Well, that’s the marketing copy the lecturer recites. Thanks, MSFT Copilot. Are you security safe today? Oh, that’s too bad.

I would suggest that Stanford’s Graduate School of Education consider these probably unimportant points:

  • The president of Stanford University resigned allegedly because he fudged some data in peer-reviewed documents. True or false. Does it matter? The fellow quit.
  • The Stanford Artificial Intelligence Lab or SAIL innovated by cooking up synthetic data. Not only was synthetic data the fast food of those looking for cheap and easy AI training data, but Stanford became super glued to the fake data movement, which may be good or it may be bad. Hallucinating is easier if the models are trained on fake information, perhaps?
  • Stanford University produced some outstanding leaders in the high technology “space.” The contributions of famous graduates have delivered social media, shaped advertising systems, and built interesting intelware companies which dabble in warfighting and saving lives from one versatile software and consulting platform.

The essay operates in smarter-than-you territory. It presents a view of the world which seems at odds with non-reproducible research results, ethics-free researchers, and how silly it looks to someone in rural Kentucky when a university president is accused of pulling a grade-school essay cheating trick.

Enough pontification. How about some progress in remediating certain interesting consequences of Stanford faculty and graduates’ innovations?

Stephen E Arnold, March 15, 2024

Education on the Cheap: No AI Required

January 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I don’t write about education too often. I do like to mention the plagiarizing methods of some academics. What fun! I located a true research gem (probably non-reproducible, hallucinogenic, or just synthetic but I don’t care). “Emergency-Hired Teachers Do Just as Well as Those Who Go Through Normal Training” states:

New research from Massachusetts and New Jersey suggests maybe not. In both states, teachers who entered the profession without completing the full requirements performed no worse than their normally trained peers.


A sanitation worker with a high school diploma is teaching advanced seventh graders about linear equations. The students are engaged… with their mobile phones. Hey, good enough, MSFT Copilot Bing thing. Good enough.

Then a modest question:

The better question now is why these temporary waivers aren’t being made permanent.

And what’s the write up say? I quote:

In other words, making it harder to become a teacher will reduce the supply but offers no guarantee that those who meet the bar will actually be effective in the classroom.

Huh?

Using people who slogged through college and learned something (one hopes) is expensive. Think of the cost savings when using those who are untrained and unencumbered with expectations of big money! When good enough is the benchmark of excellence, embrace those without a comprehensive four-year (or more) education. Ooops. Who wants that?

I thought that I once heard that the best, most educated teaching professionals should work with the youngest students. I must have been doing some of that AI-addled thinking common among some in the old age home. When’s lunch?

Stephen E Arnold, January 26, 2024

iPad and Zoom Learning: Not Working As Well As Expected

November 10, 2023

This essay is the work of a dumb humanoid. No smart software required.

It seemed (to many) like the best option at the time. As COVID-19 shuttered brick-and-mortar schools, it was educational technology to the rescue around the world! Or at least that was the idea. In reality, kids with no tech, online access, informed guidance, or a nurturing environment were left behind. Who knew? UNESCO (the United Nations Educational, Scientific, and Cultural Organization) has put out a book that documents what went wrong, questions the dominant ed-tech narratives from the pandemic, and explores what we can do better going forward. The full text of "An Ed-Tech Tragedy?" can be read or downloaded for free here. The press release states:

"The COVID-19 pandemic pushed education from schools to educational technologies at a pace and scale with no historical precedent. For hundreds of millions of students formal learning became fully dependent on technology – whether internet-connected digital devices, televisions or radios. An Ed-Tech Tragedy? examines the numerous adverse and unintended consequences of the shift to ed-tech. It documents how technology-first solutions left a global majority of learners behind and details the many ways education was diminished even when technology was available and worked as intended. In unpacking what went wrong, the book extracts lessons and recommendations to ensure that technology facilitates, rather than subverts, efforts to ensure the universal provision of inclusive, equitable and human-centered public education."

The book is divided into four parts. Act 1 recalls the hopes and promises behind the push to move quarantined students online. Act 2 details the unintended consequences: The hundreds of millions of students without access to or knowledge of technology who were left behind. The widened disparity between privileged and underprivileged households in parental time and attention. The decreased engagement of students with subject matter. The environmental impact. The increased acceptance of in-home surveillance and breaches of privacy. And finally, the corporate stranglehold on education, which was dramatically strengthened and may now prove nigh impossible to dislodge.

Next an "Inter-Act" section questions what we were told about online learning during the pandemic and explores three options we could have pursued instead. The book concludes with a hopeful Act 3, a vision of how we might move forward with education technology in a more constructive and equitable manner. One thing remains to be seen: will we learn our lesson?

Cynthia Murrell, November 10, 2023
