The study, published today in the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the premier international conference in natural language processing – reveals that LLMs have a superficial ability to follow instructions and excel at language proficiency, but no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe.
Came here to say the same. It’s an interesting question what in-context learning can do, but the title is silly. They’re kind of predicting the past: we already know we’re still alive. So, sure, past models didn’t have the ability to pose an existential threat. At the same time, I’d argue they haven’t been intelligent enough to do serious harm anyway, so that doesn’t really add anything. The existential question is: will AI be able to progress to that point in the future? We have some reason to think so.