Many, if not most, of the open-source models are Chinese, because that seems to be the CCP's prescribed strategy for undermining US Big Tech, a playbook they learned from Zuck, who popularized it with Llama.
What's interesting, though, is that although Sam Altman threw a fit about DeepSeek stealing his tech, the current consensus at least is that DeepSeek R1 didn't, and that they really did beat OpenAI at their own game on roughly 1/100th of the budget: #1228843 is a peer-reviewed paper (though that doesn't mean it's ultimately true, of course).
However, it's extremely likely that current models have all been trained specifically on benchmarks and/or benchmark methodologies. This means that regardless of who made a model, claims that one is better than another are most likely gamed, because the models get aligned to pass the benchmarks. So don't believe anyone's claims, Chinese, American, or otherwise. Treat them as if they were sitting at your poker table.
TBH I haven't used LLMs very much so far. Some of the questions I've asked haven't really lived up to the hype. That could also be down to my inexperience with prompting.
So I don't really have a preference as to whether they are Chinese or American models. I DO have a concern about privacy, and at some point, after testing different models, I'd like to find one that I think works well and run it locally.
Anything hosted is a privacy problem, except services running in secure enclaves, and even then only if you check the attestations AND it runs on platforms whose efuses haven't been extracted yet. (For example, I saw an allegation last week that all AMD secure-enclave master keys up to Zen 5 are leaked because they reused the key, whereas Intel is allegedly doing better because they use fresh keys per model.)
Thus it's always better to host your own. I currently use InternLM 3.5 14b (also Chinese-made) as a local chatbot, which runs rather fast even on older hardware; on my old M1 MacBook I've tested the 8b variant and it performs well too, and quickly. I'll try out this model.
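For anyone who wants to go the self-hosted route, here's a minimal sketch of talking to a locally running model over the OpenAI-compatible HTTP API that llama.cpp's server, Ollama, and LM Studio all expose. The endpoint URL, port, and model name below are assumptions (Ollama's defaults), so adjust them for your own setup:

```python
# Sketch: querying a self-hosted model through an OpenAI-compatible endpoint.
# Nothing here leaves your machine, which is the whole point.
import json
import urllib.request

# Ollama's default port; llama.cpp's server defaults to 8080. Assumption, adjust as needed.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"


def build_request(prompt: str, model: str = "internlm2:7b") -> dict:
    """Build the chat-completion JSON payload; the model name is a placeholder."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask_local(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (requires a model server actually running locally):
#   print(ask_local("Summarize the tradeoffs of local vs hosted LLMs."))
```

Because the API shape is the same as the hosted providers', you can point most existing client code at localhost and keep your prompts to yourself.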