Further experiments with ChatGPT (following yesterday’s post) significantly undermine its usefulness and even suggest that it is a potentially dangerous, unreliable source of information.
Experiment Six (continuing from the previous post) – Query: When will ChatGPT provide bibliographic references for its answers?
As an artificial intelligence, I am not able to provide bibliographic references for my answers. I do not have access to the internet and I do not have the ability to browse or search for specific information. Instead, I rely on the knowledge that has been programmed into me by my creators at OpenAI. I do my best to provide accurate and reliable information, but I am not a definitive source of information and it is always a good idea to verify any information that you receive, including information that you receive from me. If you have specific questions or concerns about the reliability of the information that I provide, please let me know and I will do my best to address them.
Experiment Seven – Query: Can’t you provide references from the information that OpenAI has programmed into you?
I apologize for any confusion. As an artificial intelligence, I do not have the ability to provide bibliographic references for the information that I provide. My knowledge is based on the data that has been programmed into me by my creators at OpenAI, and I do not have the ability to access or cite specific sources of information. …
Experiment Eight – Query: How can I evaluate your responses if I cannot see the sources you are using?
As an artificial intelligence, I do not have access to specific sources of information and I am not able to provide bibliographic references for the information that I provide. However, I have been trained on a large dataset of text and my responses are generated based on patterns in that data. While I do my best to provide accurate and reliable information, it is always a good idea to verify any information that you receive, including information that you receive from me.
There are many ways to evaluate the credibility and reliability of a source of information. Here are a few factors to consider:
- Expertise: Is the source of the information an expert in the field or topic being discussed?
- Evidence: Does the source provide evidence to support its claims?
- Bias: Is the source objective or does it have a particular bias or agenda?
- Credentials: Does the source have relevant credentials or qualifications?
- Timeliness: Is the information up-to-date and relevant to the current context?
By considering these factors, you can get a sense of the credibility and reliability of a source of information. If you have specific questions or concerns about the information that I provide, please let me know and I will do my best to address them.
One conclusion here is that the concept of intelligence (briefly: the ability to acquire and apply knowledge and skills) cannot be applied to ChatGPT. It describes its own working process as “I have been trained on a large dataset of text and my responses are generated based on patterns in that data”. As I pointed out in the previous post, there is clearly no sense in which ChatGPT possesses any bodies of knowledge, nor any awareness of context outside of the question posed to it and the patterns (whatever that actually refers to?) in its closed dataset. So it might be better to think of what is going on here as a giant sifting and sorting processor. This sifting process leads it to state five key issues to consider in evaluating a source of information – see the response in Experiment Eight immediately above. Unfortunately, ChatGPT provides no way of applying these tests to its own output. Pretty useless. At this point, we should ban the use of the word intelligence from the world of software and computing. Meanwhile, OpenAI, the creator of ChatGPT, should do us the favor of dropping the obnoxious personification of having the software respond in the first person.
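To make the “sifting, sorting processor” idea concrete, here is a deliberately tiny sketch of what generating text “based on patterns in data” can mean. This is not ChatGPT’s actual mechanism (which is a vastly larger neural network), and the corpus below is invented for illustration; the point is only that text can be produced from recorded co-occurrence statistics with no knowledge, sources, or context whatsoever:

```python
import random
from collections import defaultdict

# A toy "training dataset" (hypothetical text, purely for illustration).
corpus = "the model learns patterns . the model generates text . the text follows patterns ."

# Record the simplest possible "pattern": which word follows which.
follows = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(start, length=8, seed=0):
    """Emit words by repeatedly sampling a recorded successor.
    There is no understanding here -- only statistics of the corpus."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

print(generate("the"))
```

Every word the sketch emits was seen in the corpus, and every transition copies a pattern found there; nothing can be cited, because nothing was ever known.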
If AI expects to achieve acceptance in my world, it needs to conform to the standards I expect good scholarship to deliver: show your data, show your methods, provide context for the problems at hand, explain your conclusions clearly, and disclose any significant external conflicts of interest (chiefly, where the money is coming from). Or, to use ChatGPT’s own evaluation factors above: expertise, evidence, bias, credentials, and timeliness (credentials are not important to me).
I will ask around amongst people I know about the actual processes that underlie so-called artificial intelligence. I don’t have the bandwidth to research the topic directly, so I will lean on others’ expertise. If they can provide an explanation that doesn’t sound like one for financial derivatives, I will add to this line of posts.