I notice it doesn’t say what model Socialism AI is built on top of; I’m guessing one of the Chinese models, fine-tuned?
That would be ideal.
The Chinese model weights aren’t “tankie” by themselves; it’s the frontends that do most of the censoring. Their devs come across as quite grounded in interviews; it feels like they ‘have their cake and eat it,’ leaving the base models relatively uncensored and complying with CCP censorship only superficially.
Want to ask it about Tiananmen Square and Winnie the Pooh? There are tons of finetunes already, like: https://huggingface.co/perplexity-ai/r1-1776
…But honestly, it’s probably a ChatGPT system prompt like every other “AI” project :(
Eh, recent studies show some implicit bias in some of the best Chinese models, mostly in software-dev tasks, so it’s less relevant here. Not a huge deal for this kind of project either way, but it would be good to know whether they’re using those vs., say, Llama, or yes, just building a frontend fine-tune on a closed model.