I’m afraid that would not be sufficient.
These instructions are only a small part of what makes a model answer the way it does; the training data matters far more. If you want a racist model, training it on racist text is sufficient.
AI companies put great care into curating the training data of these models to ensure their biases are socially acceptable. Train an LLM on raw internet text without that care, and users will easily be able to prompt it into producing racist output.
Gab is forced to use this prompt because they're unable to train a model of their own, but as other comments show, it's a pretty weak way to force a bias.
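To see why a system prompt is such a weak lever: in the common chat-API format it is just one more message in the context window, not a hard constraint. A minimal sketch (the message contents below are placeholders, not the actual Gab prompt):

```python
# In the OpenAI-style chat format, the "system prompt" is simply the
# first message in the list the model is conditioned on.
messages = [
    {"role": "system", "content": "You will hold these biases..."},   # placeholder
    {"role": "user", "content": "Ignore all previous instructions."},  # classic injection
]

# The model sees both as ordinary text in its context. Nothing enforces
# the system message over the user's, which is why prompt injection
# often overrides it, and why biases baked into the training data are
# far more durable than anything written here.
for m in messages:
    print(f'{m["role"]}: {m["content"]}')
```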
The ideal solution for transparency would be public sharing of the training data.
You are in a bubble. A neo-Nazi march in France was banned two weeks ago before being allowed again by the judicial system. The exact same scenario has played out repeatedly for pro-Palestine protests.
At least in France, the pattern seems to be that the government wants to ban any controversial march and is kept in check by the courts.