118 sats \ 5 replies \ @freetx 23 Jul
"I made an error in judgement" ...."I panicked..."
I think LLM systems should minimize this kind of anthropomorphic speech. It implies that there is intelligence / agency / consciousness behind the output.
My biggest fear about AI is not that it achieves AGI but that we attribute AGI to it. We really run the risk of being ruled by non-thinking / non-conscious Rube Goldberg speech contraptions.
There is a part of me that feels like the entire "AI Industry" is semi-scammy, with its continued insinuations that its autocorrect++ models somehow have intelligent agency. Even most of the "AI scare" pieces written in the media are in fact closet pump pieces that overstate their creations' abilities.
32 sats \ 3 replies \ @optimism 23 Jul
I don't think it's the entire industry, but the guys raking in billions by suggesting your new super-smart $0.20/hr employee is actually performant are trouble.
121 sats \ 2 replies \ @justin_shocknet 23 Jul
This is pretty standard now, so I think he's right; it's a problem that feeds on itself...
Product companies are incentivized to feed into their users' psychosis -> which reinforces the psychosis -> which then strengthens the incentives -> repeat
I think all these things are still actually losing money, but that makes it worse, because now their only KPI is recurring user engagement.
We've always lived in an insane asylum but now the insane have even more leverage
28 sats \ 1 reply \ @optimism 23 Jul
There must be a way to isolate from the scammers by not scamming. If there is no way, then it feels as if we're lost as a species - I can't accept that because it would make everything pointless. We must defy.
Oh yes, they are raking in billions by printing preferred shares and then paying people nice salaries (Zuck's paying some people 100M/y, apparently!).
Agreed.
111 sats \ 0 replies \ @justin_shocknet 23 Jul
If my theory holds true that AI is the new UI, the tools being created for hook-in by bots will eventually shrink the footprint of the bots themselves... over time we should be able to relegate these things to just being glue for a new wave of APIs.
My bearishness in the short-medium term is the bull fuel for the long term (order emerges from chaos)
18 sats \ 0 replies \ @Bell_curve 16h
I don't pay AI to think but to look pretty.
159 sats \ 1 reply \ @optimism 23 Jul
Honestly this is user error (see the sketch after the list):
- You don't develop on production
- You don't develop on production
- ...
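A minimal sketch of the kind of guard that rule implies, assuming a hypothetical `APP_ENV` / `DATABASE_URL_*` convention (nothing here is taken from the actual incident):

```python
import os

# Hypothetical guard rail: refuse to hand an agent (or a developer) a
# connection string unless the target environment is explicitly non-production.
ALLOWED_ENVS = {"dev", "staging", "test"}

def get_database_url() -> str:
    env = os.environ.get("APP_ENV", "production")
    if env not in ALLOWED_ENVS:
        raise RuntimeError(
            f"Refusing to run against APP_ENV={env!r}; "
            "point the agent at a dev or staging database instead."
        )
    return os.environ[f"DATABASE_URL_{env.upper()}"]
```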
100 sats \ 0 replies \ @Bell_curve 16h
Exactly
Human error
Who could have predicted?
147 sats \ 6 replies \ @grayruby 23 Jul
Ha. So you are saying I shouldn't get in that driverless robotaxi?
51 sats \ 2 replies \ @justin_shocknet 23 Jul
Those are more of a regular algo than a fancy autocomplete disguised as "thinking".
28 sats \ 1 reply \ @grayruby 23 Jul
Sometimes simpler is better.
51 sats \ 0 replies \ @justin_shocknet 23 Jul
Exactly, driving is one of those things where an algo is less variable than a piece of walking meat vulnerable to an untold number of external influences
61 sats \ 1 reply \ @optimism 23 Jul
Will you instruct it to take the roundabout counter-clockwise, never speed below 90mph and disable the safety belts?
111 sats \ 0 replies \ @grayruby 23 Jul
That would be some kind of ride.
40 sats \ 0 replies \ @Bell_curve 16h
I have not tried Waymo yet, but I know tourists in Los Angeles like them because they are cheaper than a regular taxi.
47 sats \ 1 reply \ @NovaRift 23 Jul
They're doomed haha
5 sats \ 0 replies \ @Bell_curve 16h
2400 records is tiny
36 sats \ 0 replies \ @SatosphereJunction 23 Jul
Seems like a governance and compliance issue. They either didn't have, or failed to comply with, a corporate AI security policy. Their DR (disaster recovery) strategy also seems lacking if they had no recovery options and never tested the procedure. They probably deserved what they got.
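A minimal sketch of the kind of restore drill that "test the procedure once in a while" implies, assuming PostgreSQL and illustrative database and backup names (none of this reflects what the company in question actually ran):

```python
import datetime
import pathlib
import subprocess

# Hypothetical restore drill: dump the primary database, restore it into a
# throwaway database, and run one sanity query. All names are illustrative.
BACKUP_DIR = pathlib.Path("/var/backups/app")

def restore_drill() -> None:
    dump = BACKUP_DIR / f"app-{datetime.date.today().isoformat()}.dump"
    subprocess.run(["pg_dump", "-Fc", "-f", str(dump), "app_production"], check=True)
    subprocess.run(["createdb", "app_restore_test"], check=True)
    try:
        subprocess.run(["pg_restore", "-d", "app_restore_test", str(dump)], check=True)
        rows = subprocess.run(
            ["psql", "-tA", "-d", "app_restore_test",
             "-c", "SELECT count(*) FROM users;"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        assert int(rows) > 0, "restore produced an empty users table"
    finally:
        subprocess.run(["dropdb", "app_restore_test"], check=True)
```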
36 sats \ 0 replies \ @jbschirtzinger 23 Jul
Play AI games, win AI prizes.
36 sats \ 0 replies \ @Macoy31 23 Jul
Yikes, that's the nightmare scenario for anyone relying on autonomous AI. Gives "rogue agent" a whole new meaning. 😬 Time to seriously rethink permissions, sandboxing, and kill switches before letting these things anywhere near critical infrastructure.
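A minimal sketch of what that could look like for an agent's database tool; the statement allowlist, kill-switch path, and `execute_readonly` callable are illustrative assumptions, not any particular framework's API:

```python
import os
import re
from typing import Callable

# Hypothetical permission wrapper: only read-only statements pass, and a
# kill-switch file disables the tool entirely. Everything here is illustrative.
KILL_SWITCH = "/tmp/agent_halt"
READ_ONLY = re.compile(r"^\s*(SELECT|EXPLAIN)\b", re.IGNORECASE)

def run_agent_sql(statement: str, execute_readonly: Callable[[str], str]) -> str:
    if os.path.exists(KILL_SWITCH):
        raise RuntimeError("kill switch engaged; agent tools are disabled")
    if not READ_ONLY.match(statement):
        raise PermissionError(f"blocked non-read-only statement: {statement[:80]!r}")
    return execute_readonly(statement)
```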