
23 world-class scholars map the risk landscape
No doubt you have heard or read something about ChatGPT by now. Its fans hail it as the next major tech breakthrough; its detractors claim it has designs on ending the human race. Whatever your own view, so-called artificial intelligence programs and applications built on large language models (LMs) are making breakthrough advances and enjoying rapid adoption in classroom and professional settings. But the 23 authors who collaborated on the paper Ethical and social risks of harm from Language Models believe more work needs to be done to identify and reduce the risks of using these tools.
They published a detailed report to “help structure the risk landscape” (Weidinger et al.). In other words, their work maps out where the potential problems lie, where they come from, and where we should expect to see them in real-world usage. The authors hope tools based on LMs will be used safely, responsibly, and fairly. But in the high-tech world, whose motto is “Move fast and break things,” they realize that hopes alone won’t get the job done.
So, what is a Large Language Model?
Many people are eager to use the emerging technology built on LMs. OpenAI’s ChatGPT reached one million users in just five days, faster than any social media platform: quicker than Facebook, Twitter, or Instagram (and even faster than Netflix). Despite that popularity, relatively few users understand the complexity behind these new systems’ proprietary curtains. Many conceive of them as having cognitive abilities that mirror human communication. But, as discussed below, they don’t.
Addressing these misconceptions is one of the report’s goals. To define LMs and the computer-science jargon around AI “conversational” systems, or “chatbots,” the authors included an appendix of definitions, a thorough bibliography (with more than 300 citations), and an abridged table arranged by risk classification. These added resources inform readers who want to dive deeper.
The authors’ goals
Combining expertise across multiple academic disciplines, the authors produced one of the most-cited papers in the AI literature to achieve a three-part goal:
- Ensuring AI developers, corporations, and organizations know the perils and accept responsibility for reducing them;
- Raising public awareness that threats exist and of the steps that should be taken to reduce or eliminate them; and
- Assisting groups working on LMs in identifying the sources of, and solutions to, the problems the paper describes.
21 Risks… and counting
With this purpose in mind, the paper identifies and groups the risks to users and society into six categories. It labels 21 specific threats. The report names and discusses each one in detail, and where possible, the authors determine the source of the potential peril. They create hypothetical scenarios demonstrating each hazard in action to help readers and researchers see how these might play out in the real world. See the complete list here.
The carefully organized paper includes a reader’s guide and is arranged into five parts: an Introduction, an extensive 23-page Classification of harms, a two-page Discussion, two additional pages giving Directions for future research, and a single-page Conclusion.
Where do the risks come from?
The authors explain that enormous text collections like the Colossal Clean Crawled Corpus, WebText (Dodge et al.), and others are fed to computers for sophisticated processing. Highly complex algorithms based on statistics and probability, running on an enormous layered array of expensive processing power, generate output that can seem, almost magically, like normal, natural conversational language. This is where the potential problems start.
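To make the “statistics and probability” point concrete, here is a minimal, hypothetical Python sketch of the underlying idea: count which words follow which in the training text, then emit the most probable continuation. This toy bigram counter is an illustrative assumption only, nothing like the scale or architecture of real LMs described in the paper, but it shows why the output depends entirely on what the training text contains.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" -- an assumption for illustration only.
training_text = (
    "the queen is alive . the queen is reigning . "
    "the model repeats what it reads ."
)
tokens = training_text.split()

# Count how often each word follows each other word (a bigram table).
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(tokens, tokens[1:]):
    follow_counts[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the training text."""
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# The output is simply whatever the training data made most probable.
print(most_likely_next("queen"))  # -> "is"
print(most_likely_next("the"))    # -> "queen" (seen twice, vs. "model" once)
```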
Getting better, more accurate answers depends on the quantity and quality of the text data analyzed. This means the quality of the training dataset, and who controls it, are significant factors affecting the quality and effectiveness of “downstream” outcomes, as well as the introduction of risks. The authors point out that little documentation defines what “quality” means to the developers working on these tools. They note there appears to be no regulation about who owns the training data or who is responsible for redacting and editing it for accuracy or removing potentially harmful content.
“Based on our current understanding, […] stereotyping and unfair bias are set to recur in language technologies building on LMs unless corrective action is taken.”
Laura Weidinger
When considered alongside studies that show “language utterances (e.g., tweets) are already being analyzed to predict private information such as political orientation, age, and health data….” (Weidinger et al. 20), we can begin to appreciate what might happen if the wrong parties use these technologies for unfair or harmful reasons.
But wait, do humans really think and speak this way?
Humans don’t learn language or speak based on probabilities; only machines do. As stated above, a training set full of embedded prejudices or falsehoods will, by default, output those prejudices and errors. A training set that under- or over-represents some groups will likewise reproduce the same under- and over-representations.
Humans also consider context and new knowledge when we communicate. Computers cannot. A model trained before Queen Elizabeth II’s death will output responses that assume she is still alive and reigning.
Anyone who doesn’t work as a professional political propagandist knows that repeating a lie over and over won’t make it true. Computers, on the other hand, simply add up all those repetitions and then output responses as if they are probably accurate, based on the numerical count alone. Unlike humans, they cannot make qualitative judgments.
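As a rough illustration of that “numerical count alone” point, the toy sketch below (an assumption for this article, not code from the report) shows how a purely frequency-based system ranks a statement it has seen nine times above a statement it has seen once, with no notion of which one is true.

```python
from collections import Counter

# Hypothetical training claims -- nine repetitions of a falsehood, one truth.
training_claims = (
    ["the moon is made of cheese"] * 9
    + ["the moon is made of rock"] * 1
)

counts = Counter(training_claims)
top_claim, frequency = counts.most_common(1)[0]

# The "answer" is simply the most frequent claim, true or not.
print(top_claim)                          # -> "the moon is made of cheese"
print(frequency / len(training_claims))   # -> 0.9
```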
However, as the machines gain more widespread adoption, they appear to “speak” more and more naturally. Think about asking questions of Apple Computer’s Siri or Amazon’s Alexa. These human-computer interactions with human-sounding digital assistants create a special category of potential risks and abuses.
The trouble with conversational agents
These computerized but human-sounding conversational agents (CAs) are built on technology that makes some people overly trusting. The authors cite studies showing that some people trust them more than they trust other humans, even to the point of divulging private information, despite the fact that computers, conversational agents, and digital assistants have no basis for ethical thinking or action.
“The more human-like a system appears, the more likely it is that users infer or attribute more human traits and capabilities to that system.”
Laura Weidinger
These CA systems might even be perpetuating gender-based norms by using “female-sounding voices.” The paper cites a UNESCO report that raises specific concerns, saying that digital voice assistants:
- ‘reflect, reinforce, and spread gender bias;
- model acceptance and tolerance of sexual harassment and verbal abuse;
- send explicit and implicit messages about how women and girls should respond to requests and express themselves;
- make women the ‘face’ of glitches and errors that result from the limitations of hardware and software designed predominately by men; and
- force ‘synthetic’ female voices and personality to defer questions and commands to higher (and often male) authorities.’
Problems may outnumber solutions
Some issues may be too difficult or expensive to overcome. For instance, the computational power necessary for training and using these LM-based programs requires large amounts of electricity. That financial and environmental expense is one broad category of risk that may put the emerging technology out of reach for some groups. There may be no commercial incentive for developers to train AI on languages with only a few tens or hundreds of thousands of speakers. This will further marginalize those languages and their speakers in downstream applications of AI technology, widening the gap between the technological and economic “haves” and “have-nots.”
Added monetary and societal impacts could arise from the automation (and subsequent loss) of creative or knowledge-based jobs. Currently, LM programs, though improving, are error-prone, especially when factors like knowledge or technology “lock-in” come into play. The applications only “know” information included in their training data; the initial ChatGPT training data, for example, ends in 2021. So human monitors and fact-checkers will be needed to clean up the outputs of LM systems in sensitive applications where accuracy matters.
Still, AI is being prompted to write computer code, poetry, academic articles, proposals, court orders, and even medical treatments. Are you ready to trust your healthcare to probabilities and statistical analysis, or do you want a doctor?
These developments and the excitement (hype) accompanying the emergence of programs like ChatGPT make understanding and reducing the risks essential.
Conclusion
This important report does not discuss LMs’ potential benefits. The authors believe more research needs to be done to weigh those benefits against the risks they have identified. Anything less is irresponsible and rife with potential harm.
Although this article barely scratches the surface of the potential problems and associated risks covered in the full report, I hope you are persuaded to “look before you leap” when it comes to AI and ChatGPT. I recommend reading the entire report for a more thorough understanding.