In this race, there is no victory. They will have the weapon, and you will have one that cannot fight them or defend you from them. The best way to win is not to use it and to encourage people not to use it by showing them how ridiculous it is. Because if they don't see how ridiculous it is, they will pay with their own freedom. This is already happening.
reply
So what's the way out then? Letting it pass?
reply
Yes, you'll be fine. Especially considering that you're above average, or very close to it, in non-standard knowledge: the kind of knowledge that makes you free and immune to all kinds of bullshit that comes to steal your freedom.
Not using it is entirely feasible, since you haven't needed it so far. As mentioned in the article itself, AI is reactive, not active: it depends on commands, and even when you set it to do repeated tasks it's just following your “from to”. There's no point arming the enemy over something so trivial when we already have software that does it, and that software isn't an LLM.
reply
Data. Running locally doesn't mean your information is completely protected. The model is processing your data and being trained on it, so what guarantee do you have that it won't share insights with the developer, or that it won't do so later through carelessness during an update, or through extraction by some party with an interest in data like this?
Most importantly, making yourself dependent on an AI leaves you open to situations where the AI controls many aspects of your life.
reply
what guarantee do you have that it won't share insights with the developer
For one, because I use my own inference code, not "the developer's code", but it's good to check nonetheless. I'll run some wireshark tests later this week and let everyone know if I find something fishy in things like llama.cpp or transformers.
FWIW, your concern is not without precedent; see for example #1057075 for something that does exactly what you say. This is why, as a coder, using a MS IDE or a fork of it is kind of a self-own, always has been (and it's not that great quality software anyway).
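For anyone who wants to try something similar without a packet capture, here is a rough Python sketch of the idea (illustrative only, not my actual test setup; it assumes the model weights are already in the local cache, and "gpt2" is just a stand-in model name): force every socket to fail, then check that local inference still completes.
```python
# Rough sketch (assumptions: model already cached locally, "gpt2" only as an
# example). Instead of a packet capture, make any outbound connection from
# this process raise, then run a local transformers pipeline.
import os
import socket

# Ask huggingface_hub / transformers to stay offline.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

class _NoNetworkSocket(socket.socket):
    def __init__(self, *args, **kwargs):
        raise RuntimeError("network access attempted during local inference")

# Coarse check: this blocks every socket, including local ones.
socket.socket = _NoNetworkSocket

from transformers import pipeline  # import after the env vars are set

generator = pipeline("text-generation", model="gpt2")
out = generator("Local inference test:", max_new_tokens=20)
print(out[0]["generated_text"])  # if we get here, nothing tried to phone home
```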
Most importantly, making yourself dependent on an AI leaves you open to situations where the AI controls many aspects of your life.
Have to retain the skills. This is very true. We had a discussion about this not too long ago: #998489
As if it's just a trend? Yes, sure, there will be better tech in the future that we can't even imagine today... let it pass. Using it is optional anyway.
reply
I currently just treat it as an advanced database engine that indexed the internet, with an extrapolation function. I'm kind of unhappy with the pre-applied tuning but at the same time unwilling to invest time and resources into re-training research right now, so I just test things.
The use-cases I use it for in "production", defensive summarization and speech-to-text, have not been bleeding edge for a long time. It's just nice that I can run that efficiently on my own hardware, without depending on SAAS/IAAS, now.
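For reference, the speech-to-text part is just a few lines with a locally cached Whisper checkpoint (a minimal sketch, not my exact setup; the model name and audio file are placeholders):
```python
# Minimal local speech-to-text sketch via the transformers ASR pipeline.
# Assumptions: "openai/whisper-small" is already cached locally and
# "talk.wav" is the recording you want to make searchable.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
    device=0,  # first CUDA GPU; drop this line to run on CPU
)

# chunking lets the pipeline handle recordings longer than 30 seconds
result = asr("talk.wav", chunk_length_s=30, return_timestamps=True)
print(result["text"])
```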
reply
You can do it yourself and you'll gain more knowledge by doing it. Maybe even ask a human friend for a review.
I've used AI for this, and I saw how silly it was to waste time on something I could do myself while also getting out of my comfort zone. It puts you in a low-level dependency zone, modifying something that should be authentic out of a need to appear better to those who will read it, which you are not, robotic and shallow.
reply
You can do it yourself
Transcribe hours of youtube videos to make them searchable? Sure I can, but I can spend my time better. My gpu is otherwise idle, so why not?
Defensive summarization is just an anti-clickbait measure: it protects against wasting time reading articles whose title doesn't correspond to the actual content, which unfortunately is common practice nowadays. It takes under 5s of GPU time for an average article, but would take me 10 minutes plus frustration for each one. I don't need more frustration from clickbait; I've been frustrated by it for years.
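The summarization side is equally mundane; something along these lines captures the idea (a sketch only, with a placeholder model and an already-extracted article text, not the tool I actually run):
```python
# "Defensive summarization" sketch: summarize locally, print the summary next
# to the headline, and only then decide whether the article is worth reading.
# Assumptions: "facebook/bart-large-cnn" is cached locally and article.txt
# already contains the extracted article text.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def defensive_summary(title: str, article_text: str) -> str:
    summary = summarizer(
        article_text,
        max_length=120,
        min_length=40,
        do_sample=False,
        truncation=True,  # keep the input within the model's context window
    )[0]["summary_text"]
    return f"TITLE:   {title}\nSUMMARY: {summary}"

print(defensive_summary("You won't believe what X did",
                        open("article.txt").read()))
```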
a need to appear better to those who will read it, which you are not, robotic and shallow.
I don't need to appear better though? I don't care about appearances.
It's just tech. People were worried about fire, trains, electricity, bitcoin... and now AI. It will be widely adopted and seamlessly used once we feel comfortable doing so, in the same way most of us today carry a phone in our pocket, or use a car instead of a horse.
reply
Unlike all the things you mentioned, with AI you hand over precise information about yourself that serves people who don't want you to be free. You give away your way of thinking, your habits, your data, your worries and weaknesses; some even give away how they store their money, like bitcoin and property. This is ammunition for dictators and corporations who want to guide slaves into a way of thinking and remove from society those they consider dangerous.
reply
deleted by author
Footnotes
poly kind of disqualifies theos, not only because there are multiple models, but also because each model can be run multiple, independent times.
polylithic (many models running in many, decentralized instances) versus monolithic (a single grand Skynet-like "AI" that runs as a single instance, even if it's distributed) makes more sense, but I'm not really sold on that terminology either.