Academic Excellence: Easy to Say, Tough to Deliver, It Seems

February 21, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A recent report from Columbia Journalism Review examines “Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena.” Many words from admirals watching the Titanic steam toward the iceberg. The executive summary explains:

“Insufficient attention has also been paid to the implications of the news industry’s dependence on technology companies for AI. Drawing on 134 interviews with news workers at 35 news organizations in the United States, the United Kingdom, and Germany — including outlets such as The Guardian, Bayerischer Rundfunk, the Washington Post, The Sun, and the Financial Times — and 36 international experts from industry, academia, technology, and policy, this report examines the use of AI across editorial, commercial, and technological domains with an eye to the structural implications of AI in news organizations for the public arena. In a second step, it considers how a retooling of the news through AI stands to reinforce news organizations’ existing dependency on the technology sector and the implications of this.”

The first chapter examines how AI is changing news production and distribution. It is divided into three parts: news organizations’ motives for using AI, how they are doing so, and what expectations they have for the technology. Chapter two examines why news organizations now rely on tech companies and what this could mean for the future of news. Here’s a guess: perhaps criticism of big tech firms will soon fail to see the light of day.

See the report (or download the PDF) for all the details. After analyzing the data, author Felix M. Simon hesitates to draw any firm conclusions about the future of AI and news organizations—there are too many factors in flux. For now, the technology is mostly being used to refine existing news practices rather than to transform them altogether. But that could soon change. If it does, public discourse as a whole will shift, too. Simon notes:

“As news organizations get reshaped by AI, so too will the public arena that is so vital to democracy and for which news organizations play a gatekeeper role. Depending on how it is used, AI has the potential to structurally strengthen news organizations’ position as gatekeepers to an information environment that provides ‘people with relatively accurate, accessible, diverse, relevant, and timely independently produced information about public affairs’ which they can use to make decisions about their lives. … This, however, is not a foregone conclusion. Instead, it will depend on decisions made by the set of actors who wield control over the conditions of news work — executives, managers, and journalists, but also increasingly technology companies, regulatory bodies, and the public.”

That is a lot of players. Which ones hold the most power in this equation? Hint: it is not the last entry in the list.

Cynthia Murrell, February 21, 2024

Generative AI and College Application Essays: College Presidents Cheat Too

February 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The first college application season since ChatGPT hit it big is in full swing. How are admissions departments coping with essays that may or may not have been written with AI? It depends on which college one asks. Forbes describes various policies in “Did You Use ChatGPT on your School Applications? These Words May Tip Off Admissions.” The outlet asked more than 20 public and private schools about the issue. Many dared not reveal their practices: as a spokesperson for Emory put it, “it’s too soon for our admissions folks to offer any clear observations.” But the academic calendar will not wait for clarity, so schools must navigate these murky waters as best they can.

Reporters Rashi Shrivastava and Alexandra S. Levine describe the responses they did receive. From “zero tolerance” policies to a little wiggle room, approaches vary widely. Though most refused to reveal whether they use AI detection software, a few specified they do not. A wise choice at this early stage. See the article for details from school to school.

Shrivastava and Levine share a few words considered most suspicious: Tapestry. Beacon. Comprehensive curriculum. Esteemed faculty. Vibrant academic community. Gee, I think I used one or two of those on my college essays, and I wrote them before the World Wide Web even existed. On a typewriter. (Yes, I am ancient.) Will earnest, if unoriginal, students who never touched AI get caught up in the dragnets? At least one admissions official seems confident they can tell the difference. We learn:

“Ben Toll, the dean of undergraduate admissions at George Washington University, explained just how easy it is for admissions officers to sniff out AI-written applications. ‘When you’ve read thousands of essays over the years, AI-influenced essays stick out,’ Toll told Forbes. ‘They may not raise flags to the casual reader, but from the standpoint of an admissions application review, they are often ineffective and a missed opportunity by the student.’ In fact, GWU’s admissions staff trained this year on sample essays that included one penned with the assistance of ChatGPT, Toll said—and it took less than a minute for a committee member to spot it. The words were ‘thin, hollow, and flat,’ he said. ‘While the essay filled the page and responded to the prompt, it didn’t give the admissions team any information to help move the application towards an admit decision.’”
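As a purely hypothetical illustration (not GWU’s or any school’s actual screening method), a naive flagged-phrase check built from the words Shrivastava and Levine list might look like this:

```python
# Naive flagged-phrase counter: a toy illustration only, not any
# admissions office's actual method. Phrases come from the Forbes list.
SUSPECT_PHRASES = [
    "tapestry",
    "beacon",
    "comprehensive curriculum",
    "esteemed faculty",
    "vibrant academic community",
]

def flag_suspect_phrases(essay: str) -> list[str]:
    """Return the suspect phrases that appear in the essay (case-insensitive)."""
    text = essay.lower()
    return [phrase for phrase in SUSPECT_PHRASES if phrase in text]

essay = "I hope to join your vibrant academic community and learn from esteemed faculty."
print(flag_suspect_phrases(essay))
# prints ['esteemed faculty', 'vibrant academic community']
```

A checker this crude would, of course, flag the earnest typewriter-era applicant just as readily as the bot, which is exactly the dragnet worry.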

That may be the key point here—even if an admissions worker fails to catch an AI-generated essay, they may reject it for being just plain bad. Students would be wise to write their own essays rather than leave their fates in algorithmic hands. As Toll put it:

“By the time a student is filling out their application, most of the materials will have already been solidified. The applicants can’t change their grades. They can’t go back in time and change the activities they’ve been involved in. But the essay is the one place they remain in control until the minute they press submit on the application. I want students to understand how much we value getting to know them through their writing and how tools like generative AI end up stripping their voice from their admission application.”

Disqualified or underwhelming—either way, relying on AI to write one’s application essay could spell rejection. Best to buckle down and write it the old-fashioned way. (But one can skip the typewriter.)

Cynthia Murrell, February 19, 2024

Gen Z and Retro Tech

March 7, 2023

I read an interesting write up about people who are younger than I. Keep in mind, please, that I am a dinobaby. “Gen Z Apparently Baffled by Basic Technology.” The write up says:

But when it comes to using a scanner or printer — or even a file system on a computer — things become a lot more challenging to a generation that has spent much of their lives online

Does this mean that a younger employee will not be able to make a photocopy of a receipt for an alleged business expense?

I learned that a 25-year-old wizard was unable to get the photocopier to produce something other than a blank page.

Okay, the idea of turning over the page eluded the budding captain of social media.

Will these future leaders ask for assistance? Nah, there’s something called tech shame. Who wants to look stupid and not get promoted?

Need another example? No? Well, too bad. The write up points out that these world beaters cannot schedule meetings. Like time is hard. Follow-ups are almost like work.

I am glad I am old.

Stephen E Arnold, March 7, 2023

Predicting the Future MIT Grads and Profs Helped Invent

July 19, 2021

Good news Monday!

MIT, the outfit that found Jeffrey Epstein a wonderful human and an inspiration to students and scholars, shares its brilliant insights into the future of humankind. Motherboard reports, “MIT Predicted in 1972 that Society Will Collapse This Century. New Research Shows We’re on Schedule.” Oh goodie. Reporter Nafeez Ahmed begins with a little background:

“In 1972, a team of MIT scientists got together to study the risks of civilizational collapse. Their system dynamics model published by the Club of Rome identified impending ‘limits to growth’ (LtG) that meant industrial civilization was on track to collapse sometime within the 21st century, due to overexploitation of planetary resources. The controversial MIT analysis generated heated debate, and was widely derided at the time by pundits who misrepresented its findings and methods. … The [new] study was published in the Yale Journal of Industrial Ecology in November 2020 and is available on the KPMG website. It concludes that the current business-as-usual trajectory of global civilization is heading toward the terminal decline of economic growth within the coming decade—and at worst, could trigger societal collapse by around 2040.”

The study’s author, Gaya Herrington, serves as Sustainability and Dynamic System Analysis Lead at accounting giant KPMG but makes clear she pursued this on her own as part of her Harvard University master’s thesis. The study examines data across 10 key variables: population, fertility rates, mortality rates, industrial output, food production, services, non-renewable resources, persistent pollution, human welfare, and ecological footprint. Herrington found recent data aligns most closely with two scenarios she calls “business-as-usual” and “comprehensive technology.” The most desirable outcome, “stabilized world,” is unfortunately the least likely. See the article for its explanation of each of these, including the related graphs.
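The World3 model tracks those variables through coupled stock-and-flow equations. The flavor of a system-dynamics “limits to growth” run can be conveyed with a toy two-stock sketch; every parameter below is hypothetical and purely illustrative, not drawn from the model itself:

```python
# Toy "limits to growth" run in the system-dynamics style: population
# grows while a finite, non-renewable resource stock lasts, then output
# falls and population declines. All parameters are hypothetical; this
# is an illustration of the modeling style, not the World3 model.
def run_toy_ltg(steps: int = 120) -> list[tuple[float, float]]:
    population = 1.0    # arbitrary units
    resources = 100.0   # finite non-renewable stock
    history = []
    for _ in range(steps):
        r = resources / 100.0                  # remaining resource fraction
        output = population * r                # output throttled by scarcity
        births = 0.05 * population * r         # prosperity raises births
        deaths = 0.03 * population * (1.0 - r) + 0.01 * population
        resources = max(resources - 0.5 * output, 0.0)   # depletion
        population = max(population + births - deaths, 0.0)
        history.append((population, resources))
    return history

history = run_toy_ltg()
populations = [p for p, _ in history]
peak_step = populations.index(max(populations))
print(f"population peaks at step {peak_step}, then declines")
```

Even this cartoon reproduces the overshoot-and-decline shape the 1972 team warned about: growth looks healthy right up until the resource base can no longer support it.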

The upshot: If we keep doing what we have been doing, we are in for dire food shortages, drastically reduced standards of living, and more chaos by 2040. There is hope, however, if we take drastic action within the next decade. Take one look at today’s Congress and assess the likelihood of that. Ahmed concludes:

“The best available data suggests that what we decide over the next 10 years will determine the long-term fate of human civilization. Although the odds are on a knife-edge, Herrington pointed to a ‘rapid rise’ in environmental, social and good governance priorities as a basis for optimism, signaling the change in thinking taking place in both governments and businesses. She told me that perhaps the most important implication of her research is that it’s not too late to create a truly sustainable civilization that works for all.”

Ah, optimism. Let us enjoy a sliver of it while we can.

Cynthia Murrell, July 19, 2021

Another Stanford University Insight: Captain Obvious Himself Knocked Out

November 27, 2020

I read “Researchers Link Poor Memory to Attention Lapses and Media Multitasking.” What was I doing before I read this article? Oh, right. I was watching TV, surfing the Tweeter, having a bagel, pumping my legs on an under-the-desk exercycle, and talking on a landline phone. Imagine, a landline.

The article, which I had to reread multiple times because, well, I just don’t remember why, states:

A new study reveals a correlation between multimedia multitasking, memory loss, and difficulties in maintaining attention.

Well, there’s an insight. What? Multi-tasking does not work so well? Who knew?

The write up clarifies:

Differences in people’s ability to sustain attention were also measured by studying how well subjects were able to identify a gradual change in an image, while media multitasking was assessed by having individuals report how well they could engage with multiple media sources, like texting and watching television, within a given hour. The scientists then compared memory performance between individuals and found that those with lower sustained attention ability and heavier media multitaskers both performed worse on memory tasks.

Wow. Concentration may be an indicator of a person who is not dumb enough to watch TikTok video gems while driving a smart auto to a Covid testing facility and listening to a podcast about how wonderfully intelligent Vox real news people are.

There’s good news from the Stanford experts; for example:

“We have an opportunity now,” Wagner [one of Captain Obvious’ detractors] said, “to explore and understand how interactions between the brain’s networks that support attention, the use of goals and memory relate to individual differences in memory in older adults both independent of, and in relation to, Alzheimer’s disease.”

What was I doing? I forget.

Stephen E Arnold, November 27, 2020

Confidence in US Education: 46 Companies Have Doubts

November 3, 2020

I read “Top 48 US Companies Files Legal Challenge to Block H-1B Visa Changes.” The write up states:

Nearly 46 leading US companies and business organizations, including tech giants Apple, Google, Twitter and Facebook, representing and working with key sectors of the US economy, have filed an amicus brief that supports a legal challenge to block upcoming rule changes to H-1B visa eligibility.

Another interesting factoid:

The companies said that the new DHS rules will dramatically reduce US businesses’ ability to hire these skilled foreign workers—one senior DHS official estimated that they will render ineligible more than one-third of petitions for H-1B visas.

What does this suggest about the flow of talent from the US education system? How are those online classes working out?

Stephen E Arnold, November 3, 2020

The Online Cohorts: A Potential Blind Spot

April 15, 2020

In a conversation last week, a teacher told me, “We are not prepared to teach classes online.” I sympathized. What appears trivial to a person who routinely uses a range of technology (automatic teller machines, a mobile phone, an Alexa device) may befuddle someone who does not. Add to the burden of learning new procedures the challenge of adapting in-person skills to instructing students via a different medium; for example, Google Hangouts, Zoom, and other video conferencing services. How is that shift going? There are anecdotal reports that the shift is not going smoothly.

That’s understandable. More data will become available as researchers and hopefully some teachers report the efficacy of the great shift from a high touch classroom to a no touch digital setting.

I noted “Students Often Do Not Question Online Information.” The article provides a summary of research that suggests:

students struggle to critically assess information from the Internet and are often influenced by unreliable sources.

Again, understandable.

The article points out a related issue:

“Having a critical attitude alone is not enough. Instead, Internet users need skills that enable them to distinguish reliable from incorrect and manipulative information. It is therefore particularly important for students to question and critically examine online information so they can build their own knowledge and expertise on reliable information,” stated Zlatkin-Troitschanskaia. [Professor Olga Zlatkin-Troitschanskaia from JGU. The study was carried out as part of the Rhine-Main Universities (RMU) alliance.]

Online is a catalyst. Mix it into the original compound, traditional classroom teaching methodologies, and the reaction appears to raise the possibility of a loss of certain thinking skills.

Net net: A long period of adaptation may be ahead. The problem of humans who cannot do math or think in a manner that allows certain statements to be classified as bunk and others as not bunk is likely to have a number of downstream consequences.

In short, certain types of thinking and critical analysis may become quite rare. Informed decisions are of little help if the information on which a choice is based rests on a shaky fact base.

Maybe not so good?

Stephen E Arnold, April 15, 2020

Google Told to Rein in Profits

December 5, 2017

Google makes a lot of money with their advertising algorithms. Every quarter their profit looms higher and higher, but the San Francisco Gate reports that this might change in the article, “Google Is Flying High, But Regulatory Threats Loom.” Google and Facebook are being told they need to hold back their hyper-efficient advertising machines. Why? Possible Russian interference in the 2016 elections and the widespread dissemination of fake news.

New regulations would require Google and Facebook to add more human oversight to their algorithms. Congress already has a new bill on the floor with regulations for online political ads to allow more transparency. Social media sites like Twitter and Facebook are already making changes, but Google has not done anything and will not get a free pass.

‘It’s hard to know whether Congress or regulators will actually step up and regulate the company, but there seems to be a newfound willingness to consider such action,’ says Daniel Stevens, executive director of the Campaign for Accountability, a nonprofit watchdog that tracks Google spending on lobbyists and academics. ‘Google, like every other industry, should not be left to its own devices.’

Google has remained mostly silent, but has stated that it will increase “efforts to improve transparency, enhance disclosures, and reduce foreign abuse.” Google is out for profit like any other company in the world. The question is whether they have the conscience to comply or will find a way around it.

Whitney Grace, December 5, 2017


Semantic Scholar Expanding with Biomedical Lit

November 29, 2017

Academic publishing is the black hole of the publishing world. While it is a prestigious honor to have your work published by a scholarly press or journal, it will not have a high circulation. One reason is that academic material is blocked behind expensive paywalls; another is that papers are not indexed well. Tech Crunch has some good news for researchers: “Allen Institute For AI’s Semantic Scholar Adds Biomedical Papers To Its AI-Sorted Corpus.”

The Allen Institute for AI started Semantic Scholar as an effort to index scientific literature with NLP and other AI algorithms. Semantic Scholar will now include biomedical texts in the index. There is far too much content available for individuals to read and index by hand. AI helps catalog papers and create keywords by scanning an entire text, pulling key themes, and adding the paper to the right topic.
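Semantic Scholar’s pipeline is far more sophisticated, but the basic move of pulling key terms out of a text can be sketched with a simple frequency count. A toy stand-in, not the Allen Institute’s actual method:

```python
import re
from collections import Counter

# Toy keyword extraction: count content words after dropping common
# stopwords. A stand-in illustration, not Semantic Scholar's pipeline.
STOPWORDS = {
    "the", "a", "an", "of", "in", "on", "and", "to", "is", "are",
    "for", "with", "that", "this", "by", "it", "as", "was", "we",
}

def top_keywords(text: str, n: int = 5) -> list[str]:
    """Return the n most frequent non-stopword terms in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(n)]

abstract = (
    "Norepinephrine production was reduced by the drug byproduct. "
    "The drug byproduct effect on norepinephrine was slight but measurable."
)
print(top_keywords(abstract, 3))
# prints ['norepinephrine', 'drug', 'byproduct']
```

Real systems replace the raw counts with term weighting, phrase detection, and learned topic models, but the goal is the same: surface the handful of terms a human indexer would have chosen.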

There’s so much literature being published now, and it stretches back so far, that it’s practically impossible for a single researcher or even a team to adequately review it. What if a paper from six years ago happened to note a slight effect of a drug byproduct on norepinephrine production, but it wasn’t a main finding, or was in a journal from a different discipline?

Scientific studies are being called into question, especially when the tests are funded by corporate entities. It is important to separate truth from false information as we consume more and more each day. Tools like Semantic Scholar are key to uncovering the truth. It is too bad it does not receive more attention.

Whitney Grace, November 29, 2017


Dark Web Predator Awaits Sentencing

November 15, 2017

Here we have one of the darker corners of the Dark Web. A brief but disturbing article at the UK’s Birmingham Mail reports, “Birmingham University Academic Dr Matthew Falder Led Horrific Dark Web Double Life as ‘666devil’.” The 28-year-old academic in question has pleaded guilty to 137 charges, most if not all of them, it seems, vile crimes against children. Reporter James Cartledge writes:

Since 2010, the geophysicist, who worked at Birmingham University till September, had degraded and humiliated more than 50 victims online using the names ‘666devil’ and ‘evilmind’. … He admitted the offences at a hearing at Birmingham Crown Court on Monday. He was arrested on June 21 this year and has been held in custody since that date. Falder, of Edgbaston, Birmingham, posed as a woman on sites such as Gumtree to trick his victims into sending him naked or partially-clothed images of themselves. The disgraced geophysicist then threatened to expose his victims if they did not send severe and depraved abuse images of themselves. He then distributed the images.

It gets worse from there. We’re told this is the first time the UK’s National Crime Agency had delved into the Dark Web’s hidden forums that share and discuss such “dark” material. Falder is scheduled to be sentenced on December 7 and shall remain in custody in the meantime.

Cynthia Murrell, November 15, 2017
