I’m afraid that would not be sufficient.
These instructions are a small part of what makes a model answer like it does. Much more important is the training data. If you want to make a racist model, training it on racist text is sufficient.
Great care is put in the training data of these models by AI companies, to ensure that their biases are socially acceptable. If you train an LLM on the internet without care, a user will easily be able to prompt them into saying racist text.
Gab is forced to use this prompt because they’re unable to train a model, but as other comments show it’s pretty weak way to force a bias.
The ideal solution for transparency would be public sharing of the training data.
Exactly, you understand very well the purpose of microservices. You can submit a patch if you need that feature now.
Funnily enough I’m the technical lead of the team that handles the user service in an insurance company.
Due to direct access to our data without consulting us, we’re getting legal issues as people were using addresses to guess where people lived instead of using our endpoints.
I guess some people really hate the validation that service layers have.