Category: Communication

Communication is the dissemination of information from source to receiver. More importantly, it is the ability of a source of information to create understanding in the receiver. Communicating well explores the techniques, perspectives, and mindsets a source deploys to achieve a high level of understanding on the part of the receiver. Communication is used to obtain knowledge and understanding and to confer them.

  • So You Think There Are No Dangers to Using AI Technology Like ChatGPT? …Better Look Before You Leap!


    Photo by Sammie Chaffin on Unsplash

    23 world-class scholars map the risk landscape

No doubt you have heard or read something about ChatGPT by now. Its fans hail and hype it as the next major tech breakthrough. Its detractors claim it has designs on ending the human race. Regardless of your own view, so-called Artificial Intelligence programs and applications built on Large Language Models (LMs) are making breakthrough advances and enjoying rapid adoption in classroom and professional settings. But 23 authors who collaborated on the paper, Ethical and social risks of harm from Language Models, believe more work needs to be done to identify and reduce the risks of using these tools.

    They published a detailed report to “help structure the risk landscape” (Weidinger et al.). In other words, their work maps out where the potential problems lie, where they come from, and where we should expect to see them in real-world usage. The authors hope tools based on LMs will be used safely, responsibly, and fairly. But in the high-tech world, whose motto is “Move fast and break things,” they realize that hopes alone won’t get the job done.

    So, what is a Large Language Model?

Many people are eager to use the emerging technology based on LMs. OpenAI’s ChatGPT reached the million-user milestone in just 5 days!—faster than any social media platform—quicker than Facebook, Twitter, or Instagram (even faster than Netflix!). Despite the popularity, relatively few users understand the complexity behind these new systems’ proprietary curtains. Many conceive of them as having cognitive abilities reflecting human communication. But, as discussed below, they don’t.

Addressing these misconceptions is one of the report’s goals. To define LMs and computer scientists’ jargon about A.I. “conversational” systems, or “chatbots,” the authors included an appendix with definitions, a thorough bibliography (more than 300 citations), and an abridged table arranged by risk classification. These added resources inform readers who want to dive deeper.

The authors’ goals

    Combining their expertise across multiple academic disciplines, they presented one of the most-cited papers in the AI literature to achieve the three-part goal of:

1. Ensuring AI developers, corporations, and organizations know the perils and accept responsibility for reducing them;
2. Raising public awareness that threats exist and what steps should be taken to reduce or eliminate them; and
3. Assisting groups working on LMs to identify the sources of and solutions to the problems they’ve identified.

    21 Risks… and counting

    With this purpose in mind, the paper identifies and groups the risks to users and society into six categories. It labels 21 specific threats. The report names and discusses each one in detail, and where possible, the authors determine the source of the potential peril. They create hypothetical scenarios demonstrating each hazard in action to help readers and researchers see how these might play out in the real world. See the complete list here.

The carefully organized paper includes a reader’s guide and is arranged into five parts: an Introduction, an extensive 23-page Classification of harms, a two-page Discussion, two additional pages giving Directions for future research, and a single-page Conclusion.

    Where do the risks come from?

The authors explain that large language models are trained on enormous text datasets such as the Colossal Clean Crawled Corpus and WebText (Dodge et al.). Highly complex algorithms based on statistics and probability, running on an enormous layered array of expensive processing power, generate output that magically seems like normal, natural conversational language. This is where the potential problems start.

    Getting better, more accurate answers depends on the mass and caliber of text data analyzed. This means the quality of the training dataset and who controls it are significant factors affecting the quality and effectiveness of “downstream” outcomes—and the introduction of risks. The authors point out that little documentation defines what constitutes “quality” to the developers working on these tech tools. They note there seems to be no regulation about who owns the training data or who is responsible for redacting and editing it for accuracy or removing potentially harmful content. 

    “Based on our current understanding, […] stereotyping and unfair bias are set to recur in language technologies building on LMs unless corrective action is taken.”

    Laura Weidinger

    When considered alongside studies that show “language utterances (e.g., tweets) are already being analyzed to predict private information such as political orientation, age, and health data….” (Weidinger et al. 20), we can begin to appreciate what might happen if the wrong parties use these technologies for unfair or harmful reasons.

    But wait, do humans really think and speak this way?

Humans don’t learn language or speak based on probabilities. Only machines do. As stated above, a training set full of embedded prejudices or falsehoods will, by default, output those prejudices and errors. A training set that under- or over-represents some groups will likewise reproduce the same under- and over-representations.
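To make this concrete, here is a minimal sketch (my own illustration, not anything from the Weidinger paper): a toy bigram model that predicts the next word purely from frequency counts over its training text. The tiny skewed corpus and the `next_word` helper are invented for demonstration; real LMs are vastly more sophisticated, but the principle that the output mirrors the statistics of the input is the same.

```python
import random
from collections import Counter, defaultdict

# Toy "training set" with a built-in imbalance: the nurse is always
# "she" and the engineer is always "he". This stands in for a skewed corpus.
corpus = (
    "the nurse said she was tired . "
    "the nurse said she was busy . "
    "the engineer said he was tired ."
).split()

# Count how often each word follows another (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to its frequency after `prev`."""
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights)[0]

# The model can only echo its counts: "she" appears twice after "said"
# and "he" once, so its "beliefs" are just the imbalance of its data.
print(follows["said"])
```

Nothing in the model weighs truth or fairness; it only tallies. Scale the counting up by billions of parameters and documents and you have, in caricature, the mechanism the authors worry about.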

Humans also consider context and new knowledge when we communicate. Computers cannot. A computer trained before Queen Elizabeth’s death will output responses that assume she is still alive and reigning.

People who don’t work as professional political propagandists know that repeating a lie an infinite number of times won’t make it true. Computers, on the other hand, will simply add up all those lies—then output responses as if they’re probably accurate, based on the numerical count alone. Unlike humans, they cannot make qualitative judgments.

However, as the machines gain more widespread adoption, they appear to “speak” more and more naturally. Think about asking questions of Apple’s Siri or Amazon’s Alexa. These interactions with human-sounding digital assistants create a special category of potential risks and abuses.

    Remember the “guy” below?

    By Lyman Hansel Gerona on Unsplash

    The trouble with conversational agents

These computerized but human-sounding CAs are based on technology that makes some people overly trusting. The authors cite studies showing that some people trust them more than other people—even willing to divulge private information—despite computers, conversational agents, and digital assistants having no basis for ethical thinking or action.

    “The more human-like a system appears, the more likely it is that users infer or attribute more human traits and capabilities to that system.”

    Laura Weidinger

    These CA systems might even be perpetuating gender-based norms by utilizing “female-sounding voices.” The paper cites a report by UNESCO that raises specific concerns, saying, digital voice assistants:

    • ‘reflect, reinforce, and spread gender bias;
    • model acceptance and tolerance of sexual harassment and verbal abuse;
    • send explicit and implicit messages about how women and girls should respond to requests and express themselves;
    • make women the ‘face’ of glitches and errors that result from the limitations of hardware and software designed predominately by men; and
    • force ‘synthetic’ female voices and personality to defer questions and commands to higher (and often male) authorities. 

    Problems may outnumber solutions

Some issues may be too difficult or expensive to overcome. For instance, the computational power necessary for training and using these LM-based programs requires large amounts of electricity. The financial and environmental expense is one broad category of risk that may make it impossible for some groups to access emerging technology effectively. There may be no commercial incentive for developers to train AI on languages with only a few tens or hundreds of thousands of speakers. This effect will further marginalize these languages and speakers from downstream applications of AI technology, widening the gap between the technological and economic “haves” and “have-nots.”

Added monetary and societal impacts could arise from the automation (and subsequent loss) of creative or knowledge-based jobs. Currently, LM programs, though improving, are error-prone, especially when considering factors like knowledge or technology “lock-in.” The applications only “know” information included in their training data; ChatGPT’s initial training data ended in 2021. So human monitors and fact-checkers will be needed to clean up the outputs of LM systems in sensitive applications where accuracy matters.

    Still, AI is being prompted to write computer code, poetry, academic articles, proposals, court orders, and even medical treatments. Are you ready to trust your healthcare to probabilities and statistical analysis, or do you want a doctor?

    These developments and the excitement (hype) accompanying the emergence of programs like ChatGPT make understanding and reducing the risks essential.

    Conclusion

    This important report does not discuss LMs’ potential benefits. The authors believe more research needs to be done to evaluate the benefits considering the risks they have earmarked. Anything less is irresponsible and rife with potential harm.

    Although this article barely scratches the surface of the potential problems and associated risks in the full report, my hope is that you are persuaded to “look before you leap” when it comes to AI and ChatGPT. We recommend reading the entire report for a more thorough understanding.

  • Do Not Borrow Problems Not Yours To Solve


    # 97 on my 99 Life Tips–A List is: Do not borrow problems not yours to solve. Life will give you enough to do.

    Do you know what the biggest problem facing most people is? Their biggest problem is that they don’t know what their biggest problem is. Maybe this is you.

We all know people who make it their life’s work to stick their noses into other people’s business. We’ve all got friends, family, and co-workers coming out our ying-yangs telling us how to do this or that, or how to fix something or other, when we know good and well they have neither experience nor expertise we can rely on.

    Don’t do that. Don’t borrow problems that aren’t yours to solve. You just make yourself a royal pain-in-the-ass. Life will give you enough to do just focusing on your own shit.

    When you receive unsolicited advice from someone telling you what you should do about whatever, and you can see they are drowning in a cesspool of their own unsolved problems, how does that feel? Do you consider them a trusted source? Do you appreciate their concern and rush to incorporate their advice?

    No Poseurs Allowed!

    Hell no! You don’t want to be that guy/girl/non-binary poseur either.

    Leave other people’s problems alone. Leave them alone until they ask you. The invitation to pitch in with help and advice in someone else’s affairs is a sacred trust. Don’t neglect it and don’t abuse it. Be the person who gets asked your opinion, not the kind who never gets asked yet can’t stop giving it.

One day soon, I will write a story titled, How To Know If You Are A Good Parent. The story will comprise one question and two follow-up comments.

    The question: Do your kids ask for your advice?

    The comments: If yes, you are a good parent. If not, you need some improvement.

    Now, like chord positions on a guitar neck, this story can be transposed to play in different keys. We can change it from the key of Parenting to the key of Friendship, say. We can then change the title substituting Friend for Parent and keep the content of the story exactly the same. See how nice that works?

    Is this too simplified? Maybe. But then, I’m a simple guy. Let’s keep things real, shall we?

    Don’t borrow problems not yours to solve. Go to work on your biggest problem. Start by figuring out exactly what that is.

    We good?

  • The Right Word?

    Fireflies in, and outside of, a bottle

One of the worst things about writing is striving to capture with words the ineffable ephemera of a truly good life. There are times when naming a thing destroys it. Being familiar with both the phrase “le mot juste” and the tradition it represents, I nonetheless find myself swayed by the concept of linguistic relativism, which makes me doubt whether any two people actually hear the same word the same way, especially when phenomena or ideas don’t yield to a simple definition.

I also recognize the cultural fiction which allows verbal fluency to masquerade as intelligence. Language skill makes one a good labeler. It is to words and concepts what a young child’s mason jars with hole-punched lids are to insects and reptiles. Our cultural institutions promote the idea that a thing is real only if it can be placed in a jar of words. We kid ourselves into thinking the better the description, the more real. But a bug in a jar isn’t the same as a bug in the wild, no matter how much grass you pack in.

So what if it is the other way around? What if the more bounded a thing becomes by the straitjacket of having been defined and classified, the less the thing IS, in its real essence?

    I’ve found the surest way to defile the most precious experiences of life is with hyper-verbal attempts to describe and label them. Saying too much is as bad as saying too little. It is sandpaper that dulls the shine of the truly sublime. Then you’re left only with the memories of what you called it, how you described it, the stories you tell about it, and not the thing itself. This is a kind of curse.

Our certainties, clothed in words, are the worst of us, not the best of us. It would be much better for us to leave some things undefined, pure, whole, unencumbered by the clumsiness and inadequacies of language. This is an inconvenient, uncomfortable truth.

Sometimes, a smile and an “Aaaahhhhh” is the best that can be said.

  • A Narrative About Narratives

    Narratives are everywhere! Pssst! You’re living one, right now!

    The word narrative is a noun meaning: a spoken or written account of connected events, a story.

That’s it. That’s the whole definition. There is no lurking subterfuge. There is no attempted brainwashing. There is nothing nefarious about the word.

Are there some narratives that do those things? Undoubtedly. The purpose of some narratives is persuasion. The objective of others is merely revelation. But those who use the word narrative as a pejorative do a disservice to language, which is the coin of the realm when it comes to communication.

    We all listen to narratives, if only the one in our heads that assigns reasons and meaning to the things that happen in our lives. Some of those inner narratives are devoid of rationale, betraying our own neuroses and biases and fears. 

External narratives are all around us. They make up the lyrics of your favorite song. They are buried in visual ads that tell the story of how much sex appeal you will instantly invoke if you buy this brand of deodorant or shampoo. Certainly, they are present in media “stories.” How could they not be? A narrative is, after all, nothing but a “story.”

The trick is to recognize both the point of view of a story (narrative) and its object. Is the narrator attempting to show you something, or trying to get you to believe something? If you hear a story presented in the format, People like us believe X, Y, and Z, I advise you to proceed with caution; someone is selling something.

All stories fall apart unless they are told from a point of view, and unless they have a point to make (even when the point is entertainment). Objectivity is impossible for a storyteller. The best storytellers can even change points of view so skillfully you don’t know it’s happening. (For a sample, try reading the excellent Sometimes A Great Notion, by Ken Kesey. He’ll put you right inside the head of a Canada goose dropping through fog to land on a wind-tossed Oregon river.)

    I find the following lines from a song to be insightful regarding the role of storytellers. 

“The storyteller makes no choice

soon you will not hear his voice.

His job is to shed light

Not to master.”

    ~ Grateful Dead, Terrapin Station

    Narratives are only scary if:

1. you’re unskilled at determining the perspective of the storyteller,
    2. you find it difficult to differentiate between statements of opinion and statements of fact,
    3. you struggle with recognizing what the story is meant to do, and finally,
    4. you believe everything you’re told.

If that describes you, perhaps earmuffs and blinders are a solution while you learn these skills.

    In case you have followed along to this point and missed the clues I’ve dropped:

This essay is a narrative told from my perspective. It is my opinion (except for the definition above, which is a provable fact). The point is to rescue the word ‘narrative’ from disrepute, so that we may disarm both it and those who misuse it against us. Finally, I could be wrong, so evaluate my statements carefully and appropriate them at your own risk.

You have no doubt heard the wise and oft-repeated maxim, “Consider the source.” Which we should all do, all the time. Even when, or perhaps especially when, evaluating the narrative playing in our own heads.

So the next time someone tries to bludgeon you with the claim that you are just listening to “So-and-So’s Narrative” about a particular topic, you can smile, nod, and know that they are listening to someone else’s narrative, too.

    Thus endeth the story…er, narrative.

    That wasn’t so scary was it?

  • Reality Can Be Limited By Perspective

    One of my favorite lines in a Grateful Dead song comes from the tune, Scarlet Begonias.

    “Once in a while you can get shown the light,

    In the strangest of places if you look at it right.”

    This has been true for me. All that it sometimes takes to see a previously hidden truth is my own willingness to look at the subject a different way. 

This act of taking another look at something is what is colloquially referred to as “open-mindedness.” I find a lot of people are afraid of this term. I find they are afraid of it because they misunderstand it. Being “open-minded” doesn’t mean abandoning anchors of belief or intellectual boundaries, putting you in danger that your brain will fall out. It means accepting the possibility that there may be more than one valid viewpoint on a particular issue.

    Ideally, this would be a universally applied truth. But, before any truth can be applied, it must first be known. Here then, is my attempt to say, 

“Hey, here’s something cool. There’s more than one way to see a lot of issues. Have you tried looking at it from another perspective? Have you tried putting yourself in the other guy’s shoes, for instance?”

A few months ago, I was sitting on the front porch with my seventeen-year-old. We were discussing a problem he was facing. His ability to solve the problem was limited by two things. One, he had only seventeen years of experience to draw from. Two, this lack of experience forced him into a very narrow perspective, which blew the problem out of all proportion.

I was sitting in my normal spot on the front porch. It is wide enough to accommodate my frame. He was sitting in a chair to my left. A cloud moved in the sky, the sun peered from behind it, illuminating a perfectly crafted and quite large spider web just as I glanced up to notice it. The web had been there the whole time we had been talking, but I couldn’t see it against the gray overcast. It took the light hitting it just right for it to come into view. What had been real the whole morning was now real to me.

    I asked my son, sitting to my left at the end of the porch and at an acute angle to the web, if he could see it. He shook his head. Interesting, I thought. Nature has provided the perfect metaphor. 

“Come look at this,” I said.

    He got up, came over a few steps and looked up at the intricate web. 

“Wow!” he said. He was amazed by both the intricacy of the web and that something so large had been completely hidden from view.

    All he had to do was look at it right.