I also don’t think the ChatGPT model, in its current form, can do anything that requires referencing case law, medical texts, or the like. The way it works, by generating probabilities for the next word, is all wrong for tasks where the value of the output isn’t subjective: you need the model to distinguish fact from opinion, to cite sources for what it says, and to build coherent cause-and-effect chains into an argument. No currently existing LLM can do any of that, no matter how much you fine-tune it, because of how it fundamentally works.