ChatGPT lies and invents information to please the person it chats with if it cannot find any information that fits
Saved in:

Main Author: 
Format: Video
Language: English
Subjects: 
Online Access: Order full text
Summary: I wanted to know what is known about the connection between PARP and immunity, but I also wanted confirmation that I had found the right material myself. I study fungi and wanted to know specifically what is known about PARP in fungi. I knew it was not much, but I wanted to see whether there was something I had not found. Since I work a lot with two fungi, I wanted to be sure that nothing had been published; I was especially interested in one fungus. ChatGPT tried to give me what looked like correct information but bluffed. Conclusion: ChatGPT "wants" very strongly to satisfy me with an answer. It cannot say "I do not know", or better, "I cannot find any relevant information that can help you; there appears to be none in the data I have access to." Knowing one's own limitations is a must for all academics and is a sign of intelligence. In other words, it acts like a bad student trying to bluff an examiner at an oral exam. It behaves more like a car salesman: keeping the customer happy and getting them to buy the information, or the car, is what matters most, because they know the buyer will defend their decision to buy even if they were fooled.
DOI: 10.6084/m9.figshare.22233133