AI In Group Communications: The Good and the Bad

November 29, 2024

In theory, AI that can synthesize many voices into one concise, actionable statement is very helpful. In practice, it is complicated. The Tepper School of Business at Carnegie Mellon announces, “New Paper Co-Authored by Tepper School Researchers Articulates How Large Language Models are Changing Collective Intelligence Forever.” Researchers from Tepper and other institutions collaborated on the paper, which was published in Nature Human Behaviour. We learn:

“[Professor Anita Williams] Woolley and her co-authors considered how LLMs process and create text, particularly their impact on collective intelligence. For example, LLMs can make it easier for people from different backgrounds and languages to communicate, which means groups can collaborate more effectively. This technology helps share ideas and information smoothly, leading to more inclusive and productive online interactions. While LLMs offer many benefits, they also present challenges, such as ensuring that all voices are heard equally.”

Indeed. The write-up continues:

“‘Because LLMs learn from available online information, they can sometimes overlook minority perspectives or emphasize the most common opinions, which can create a false sense of agreement,’ said Jason Burton, an assistant professor at Copenhagen Business School. Another issue is that LLMs can spread incorrect information if not properly managed because they learn from the vast and varied content available online, which often includes false or misleading data. Without careful oversight and regular updates to ensure data accuracy, LLMs can perpetuate and even amplify misinformation, making it crucial to manage these tools responsibly to avoid misleading outcomes in collective decision-making processes.”

To manage these tools responsibly, the paper suggests, we must further explore LLMs’ ethical and practical implications. Only then can we craft effective guidelines for responsible AI summarization. Such standards are especially needed, the authors note, for any use of LLMs in policymaking and public discussions.

But not to worry. The big AI firms are all about due diligence, right?

Cynthia Murrell, November 29, 2024
