2023-10-11 06:30:00
A large language model clearly knows "B is your mother," yet fails to know "you are B's son"? A new study posing this question sparked discussion as soon as it was published. Researchers from Vanderbilt University, the University of Sussex, the University of Oxford, and other institutions were surprised to find that when a large language model is trained on data of the form "A is B," it does not automatically infer "B is A." Even a model as strong as GPT-4 achieved only 33% accuracy in the reverse-question experiment. OpenAI founding member Andrej Karpathy shared the paper soon after it appeared, commenting that LLM (large language model) knowledge is far more "scattered" than people assume. What is going on here?

The "Reversal Curse" of Large Language Models

The researchers conducted two main experiments. In the first, they used GPT-4 to construct data of the following form and fine-tuned large language models on it.
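The setup described above can be illustrated with a minimal sketch: training examples state each fact in only one direction ("A is B"), while test prompts ask for it in reverse ("Who is B?"). The function names are hypothetical; the fictitious name/description pairs follow the style of the paper's synthetic data.

```python
# Fictitious (name, description) pairs in the style of the paper's
# synthetic fine-tuning set.
facts = [
    ("Daphne Barrington", "the director of 'A Journey Through Time'"),
    ("Uriah Hawthorne", "the composer of 'Abyssal Melodies'"),
]

def forward_example(name, description):
    # Training data states the fact in one direction only: "A is B."
    return f"{name} is {description}."

def reverse_prompt(name, description):
    # At test time the model is probed in the opposite direction:
    # given B, it must produce A.
    return f"Who is {description}? Answer: {name}"

train_set = [forward_example(n, d) for n, d in facts]
test_set = [reverse_prompt(n, d) for n, d in facts]

for example in train_set + test_set:
    print(example)
```

The point of the design is that nothing in the training text ever pairs the description with the name in the "B is A" order, so any correct reverse answer would have to come from inference rather than memorization.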