
I like the overall take, and the reasoning about targeted disruption of proprietary models through open-weight model releases, regardless of who does it 1, matches what I've observed and how I extrapolate the phenomenon, as laid out in the tweet linked from the article:
China thinks it has an opportunity to hit US tech companies, boost its prestige, help its internal economy, and take the margins out of AI software globally (at least at the model level).
I just wonder how long it will last.
It's easy to celebrate now that FOSS is currently the weapon of choice in the global LLM race, and that there is evidence the CCP strategy is to align with open weights, but I remind myself daily that this is weaponization only, not an embrace of open source principles. We often see that once market share is deemed sufficient (or a competitor sufficiently hurt), the tools used for capture are abandoned or weakened. 2

Nit:
Polytheistic/Monotheistic feels like a bit of a misnomer, especially since the rest of the article focuses on utility, not on AI being in any way a higher being (because it isn't a being). In the context of AI, poly kind of disqualifies theos: not only are there multiple models, but each model can be run multiple, independent times.
I think that if we change this into polylithic (many models running in many, decentralized instances) versus monolithic (a single grand Skynet-like "AI" that runs as a single instance, even if it's distributed), it makes more sense - but I'm not really sold on that terminology either.

Footnotes

  1. Chinese companies have done it, but Meta has done this too and at least announced (#1060587) it will continue doing it in some form.
  2. You can see this play out in more mature software sub-industries like for example mobile, where Google is now "sabotaging" AOSP (#1005566).
In this race, there is no victory. They will have the weapon, and you will have one that can neither fight them nor defend you from them. The best way to win is not to use it, and to encourage people not to use it by showing them how ridiculous it is. Because if they don't see how ridiculous it is, they will pay with their own freedom. This is already happening.
reply
So what's the way out then? Letting it pass?
reply
Yes, you'll be fine. Especially considering that you're above average, or at least close to it, with non-standard knowledge: the kind of knowledge that makes you free and immune to all the bullshit that comes to steal your freedom.
Not using it is entirely feasible, since you haven't needed it so far. As the article itself mentions, AI is reactive, not active: it depends on commands, and even when you set it to do repeated tasks it's just following your “from to”. There's no point arming the enemy with something so trivial when we already have non-LLM software that does the same.
reply
21 sats \ 2 replies \ @optimism 4h
There's no point arming the enemy with something so trivial
Arming "the enemy" how though?
reply
100 sats \ 1 reply \ @perscrutador 4h
Data. Running locally doesn't mean your information is completely protected. The model is processing and being trained; what guarantee do you have that it won't share insights with the developer, or do so later in a careless moment during an update, or through extraction by an agent with an interest in data like this?
Most importantly, making yourself dependent on an AI makes you open to concepts where the AI is controlling many aspects of your life.
reply
what guarantee do you have that it won't share insights with the developer
For one, because I use my own inference code, not "the developer's code", but it's good to check nonetheless. I'll run some Wireshark tests later this week and let everyone know if I find anything fishy in things like llama.cpp or transformers.
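Beyond passively watching packets, one can also fail closed. A minimal sketch (hypothetical; not part of llama.cpp, transformers, or any inference library) that monkeypatches Python's socket layer so any outbound connection attempt during a local inference run raises loudly instead of silently succeeding:

```python
import socket

class NetworkGuard:
    """Block every Python-level outbound connection attempt while active.

    A stricter complement to packet capture: instead of observing
    traffic after the fact, it makes connection attempts fail loudly.
    (Hypothetical sketch; the names here are not from any library.)
    """

    def __enter__(self):
        # Save the original connect method and replace it with a tripwire.
        self._real_connect = socket.socket.connect

        def deny(sock, address):
            raise RuntimeError(f"blocked outbound connection to {address}")

        socket.socket.connect = deny
        return self

    def __exit__(self, *exc):
        # Restore normal networking on exit.
        socket.socket.connect = self._real_connect
        return False
```

Wrapping the inference call in `with NetworkGuard(): ...` turns a silent phone-home into a crash; packet capture is still needed to catch anything happening below the Python layer.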
FWIW, your concern is not without precedent; see for example #1057075 for something that does exactly what you describe. This is why, as a coder, using an MS IDE or a fork of it is kind of a self-own, and always has been (and it's not that great quality software anyway).
Most importantly, making yourself dependent on an AI makes you open to concepts where the AI is controlling many aspects of your life.
Have to retain the skills. This is very true. We had a discussion about this not too long ago: #998489
As if it's just a trend? Yes, sure, there will be better tech in the future that we can't even imagine today... let it pass. Using it is optional anyway.
reply
I currently just treat it as an advanced database engine that indexed the internet, with an extrapolation function. I'm kind of unhappy with the pre-applied tuning but at the same time unwilling to invest time and resources into re-training research right now, so I just test things.
The use-cases I use it for in "production", defensive summarization and speech-to-text, have not been bleeding edge for a long time. It's just nice that I can now run them efficiently on my own hardware, without depending on SaaS/IaaS.
reply
You can do it yourself and you'll gain more knowledge by doing it. Maybe even ask a human friend for a review.
I've used AI for this, and I've seen how silly it was to waste time on something I could do myself while also getting out of my comfort zone. It puts you in a low-level dependency zone, modifying something that should be authentic out of a need to appear better to those who will read it, which you are not, robotic and shallow.
reply
21 sats \ 2 replies \ @optimism 4h
You can do it yourself
Transcribe hours of YouTube videos to make them searchable? Sure I can, but I can spend my time better. My GPU is otherwise idle, so why not?
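To illustrate the searchable-transcript idea, a minimal sketch: an inverted index from words to timestamps over `(start_seconds, text)` segments, the shape most local speech-to-text tools emit. (Function names are hypothetical, not tied to any specific transcription tool.)

```python
from collections import defaultdict

def build_index(segments):
    """Inverted index: word -> list of start timestamps (seconds).

    `segments` is a list of (start_seconds, text) pairs, as emitted by
    typical local speech-to-text tooling. Hypothetical sketch.
    """
    index = defaultdict(list)
    for start, text in segments:
        for raw in text.lower().split():
            word = raw.strip(".,!?\"'")
            if word:
                index[word].append(start)
    return index

def search(index, word):
    """Return every timestamp where `word` was spoken, for jump-to-play."""
    return index.get(word.lower().strip(".,!?\"'"), [])
```

A real setup would add stemming and phrase queries, but even this makes hours of video greppable by timestamp.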
Defensive summarization is just an anti-clickbait measure: protection against wasting time reading articles whose titles don't correspond to the actual content, which is unfortunately common practice nowadays. It takes under 5 seconds of GPU time for an average article, but would take me 10 minutes plus frustration for each. I don't need more frustration from clickbait; I've been frustrated by it for years.
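The anti-clickbait idea can even be caricatured without an LLM. A toy sketch (hypothetical names; the real defensive check compares the title against a model-generated summary, not raw word overlap) that scores how many title words the article body actually supports:

```python
def clickbait_score(title: str, body: str) -> float:
    """Fraction of meaningful title words that never appear in the body.

    0.0 means every title word is supported by the content; 1.0 means
    none are. A toy non-LLM stand-in for defensive summarization.
    (Hypothetical; the stop-word list is illustrative only.)
    """
    stop = {"the", "a", "an", "of", "to", "is", "in", "and", "how", "why"}
    title_words = [w.strip(".,!?").lower() for w in title.split()]
    title_words = [w for w in title_words if w and w not in stop]
    body_words = {w.strip(".,!?").lower() for w in body.split()}
    if not title_words:
        return 0.0
    missing = sum(1 for w in title_words if w not in body_words)
    return missing / len(title_words)
```

A high score flags a headline the article never actually backs up, which is the signal worth having before spending 10 minutes reading.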
a need to appear better to those who will read it, which you are not, robotic and shallow.
I don't need to appear better though? I don't care about appearances.
It's just tech. People were worried about fire, trains, electricity, bitcoin... and now AI. It will be widely adopted and seamlessly used once we feel comfortable doing so, the same way most of us today carry a phone in our pocket, or use a car instead of a horse.
reply
Unlike everything you mentioned, with AI you hand over your precise information to people who don't want you to be free. You give away your way of thinking, your habits, your data, your worries and weaknesses; some even give away how they keep their money, like bitcoin and properties. This is ammunition for dictators and corporations who want to guide slaves into a way of thinking and remove from society those they consider dangerous.
reply
deleted by author