The State of Artificial Intelligence – final comments – for the moment

Further on ChatGPT

Much has been made of the fact that interest in and use of ChatGPT have grown rapidly, even compared to the early histories of some of the social media giants. Further experimentation has not changed my estimate of this facile wordsmith: it is an error-prone mess. As noted in earlier comments here (The State of Artificial Intelligence – OpenAI’s ChatGPT – 12.17.2022 and Is OpenAI’s ChatGPT AI app a deceptive, perhaps dangerous tool? – 12.18.2022), ChatGPT provides no hint that it understands, in any meaningful manner, what it is doing. It is a probabilistic guesser of prodigious scale and speed. It offers no hint of the logic it uses to assemble its responses, nor of the resources it tapped into in its guessing. There is no logic except the program’s search for the most probable word to follow the ones preceding. Sometimes, just for fun, the program doesn’t choose the word with the highest probability but, say, the third most probable. In the techno-babble of AI this is referred to as changing the “temperature” (a measure of randomness).[1]

I admit that I have decided not to explore the actual workings of AI further. I had a fairly lengthy conversation with a friend who has a Ph.D. in abstract mathematics (if I recall the field correctly) and has actively worked in the AI field for a decade. It became clear that there was simply too much for me to learn to even begin to understand the inner workings.
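To make the “temperature” idea concrete, here is a toy sketch of my own (not ChatGPT’s actual code): sampling a next word from a set of scores, where the candidate words and scores are invented for illustration.

```python
import math
import random

def sample_next_word(word_scores, temperature=1.0):
    """Pick a next word from raw model scores.

    Low temperature: almost always the top-ranked word.
    High temperature: lower-ranked words get chosen more often.
    """
    words = list(word_scores)
    # Scale scores by temperature, then turn them into probabilities (softmax).
    scaled = [word_scores[w] / temperature for w in words]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(words, weights=probs, k=1)[0]

# Invented scores for words that might follow "The cat sat on the ..."
scores = {"mat": 3.0, "sofa": 2.5, "roof": 2.0, "moon": 0.5}
print(sample_next_word(scores, temperature=0.2))  # nearly always "mat"
print(sample_next_word(scores, temperature=1.5))  # more variety, occasionally "moon"
```

At a temperature near zero the sampler behaves like “always pick the most probable word”; raising it lets the third- or fourth-ranked word turn up now and then, which is exactly the randomness the term describes.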

[Screen grab from the ChatGPT website listing its “Limitations”]

The key takeaway for me about ChatGPT is that we need to impose some standards of behavior and transparency on these artificial intelligence packages. It is outrageous that OpenAI (a non-profit corporation with a limited-profit corporation embedded in it) should release this AI into the wild. At the very least, they should have watermarked every response with warnings that the text is unreliable and possibly harmful. It is not satisfactory merely to have a list of “Limitations” (see the screen grab above) on the page after login. And to use the phrase “may occasionally” to describe the output is simply a falsehood. Anyone who uses this tool will notice almost immediately that this is so.

On a recent visit to the new MIT Museum I paused at an exhibit, “AI: Mind The Gap”. On one placard, Chelsea Barabas, a research scientist at MIT, wrote:

“… However, it’s not always clear what’s going on inside “the black box,” and it’s not enough to enter the input and analyze the output. What’s required is understanding how variables are processed and even developing separate systems that can explain how the original algorithm is behaving.”

We should not expect that we, as users, will be able to understand all of the mechanisms that make our technologies work. We all drive cars, yet very few of us have more than a thin veneer of knowledge of the technologies in play. Nevertheless, we do expect cars to behave consistently and reliably, and when things go wrong, we expect to be able to easily find someone who can bring the car back to order. We should have the same expectation here: that the design and functioning of these technologies be reliable, safe, accessible, and transparent.

There are efforts underway to establish some performance standards. Google states the following seven principles in its “AI at Google: our principles”:

    1. Be socially beneficial.
    2. Avoid creating or reinforcing unfair bias.
    3. Be built and tested for safety.
    4. Be accountable to people.
    5. Incorporate privacy design principles.
    6. Uphold high standards of scientific excellence.
    7. Be made available for uses that accord with these principles.

The World Health Organization provides guiding principles too (see Ethics and governance of artificial intelligence for health). Here are the six ethical principles it cites for the use of medical AI:

    1. Protect autonomy
    2. Promote human well-being, human safety, and the public interest
    3. Ensure transparency, explainability, and intelligibility
    4. Foster responsibility and accountability
    5. Ensure inclusiveness and equity
    6. Promote artificial intelligence that is responsive and sustainable

Other AI Encountered Recently –

There is another AI engine similar to ChatGPT. Perplexity offers the same ask-a-question-get-an-answer interface but, unlike ChatGPT, provides source references and suggests related questions that might be pursued. It too is prone to errors of fact and logic. In the domain of speech recognition and transcription, there is OpenAI’s Whisper. This AI provides near-real-time transcription in many languages. It can be run on a Mac via MacWhisper. It is really pretty slick.
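For the curious, here is a minimal sketch of running Whisper locally with OpenAI’s open-source whisper Python package; the audio file name is a placeholder, and ffmpeg must be installed on the system.

```python
# Requires: pip install openai-whisper  (plus ffmpeg on the system)
import whisper

# Load one of the pretrained models; "base" trades some accuracy for speed.
model = whisper.load_model("base")

# Transcribe an audio file; Whisper detects the spoken language automatically.
result = model.transcribe("interview.mp3")  # placeholder file name

print("Detected language:", result["language"])
print(result["text"])
```

Larger models (“small”, “medium”, “large”) are more accurate but slower to run.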

In the world of research, there are many search engines that seek out bibliographic citations from books, magazines, and academic journals. Google Scholar is handy, and AI-assisted search is a great addition. Elicit is one that I think has lots of useful features. Type in a question, a list of keywords, whatever. It then presents a list of articles with brief summaries, and you can click to read longer summaries of the ones that interest you. By “starring” articles you like, you can refresh the search with more items similar to the “starred” ones. When you think you have enough to work with, you can download the citations to add to your bibliography engine. Very neat.

Litmaps provides maps showing interconnections between authors and articles on a topic. The size of the colored circles represents the number of times that work has been cited by others.
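The underlying picture is just a citation graph with nodes scaled by citation count. Here is a rough sketch of the idea (the papers and links are invented for illustration, not Litmaps’ actual data or code):

```python
import networkx as nx
import matplotlib.pyplot as plt

# Invented citation data: each paper points to the papers it cites.
citations = {
    "Smith 2019": ["Lee 2015", "Chen 2012"],
    "Jones 2021": ["Smith 2019", "Lee 2015"],
    "Park 2022": ["Smith 2019", "Jones 2021", "Lee 2015"],
}

G = nx.DiGraph()
for paper, cited in citations.items():
    for c in cited:
        G.add_edge(paper, c)

# Scale each node by how often it is cited (its in-degree),
# mirroring the bigger-circle-means-more-citations convention.
sizes = [300 + 600 * G.in_degree(n) for n in G.nodes()]
nx.draw(G, with_labels=True, node_size=sizes, node_color="lightblue", font_size=8)
plt.show()
```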

Let me know about AI that you use and find useful. Use the comment box below, or write directly to mark(at)markorton.com.

Footnotes

  1. See this long post for more: Stephen Wolfram, “What Is ChatGPT Doing … and Why Does It Work?,” February 14, 2023, https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/.

2 Comments

  1. I only wish that this new tool came with these shortcomings stated clearly up front. A handbook would be helpful for teachers at all levels of instruction.

  2. These shortcomings should be as well publicized as the ads for this new instrument. Teachers at all levels of instruction should be issued handbooks relating to its drawbacks for critical thinking, especially writing.
