
159 sats \ 1 reply \ @optimism 23 Jul

Honestly this is user error:

  1. You don't develop on production
  2. You don't develop on production
  3. ...
reply

Exactly

Human error. Who could have predicted?

reply
118 sats \ 5 replies \ @freetx 23 Jul

"I made an error in judgement" ...."I panicked..."

I think LLM systems should minimize this kind of anthropomorphic speech, since it implies that there is intelligence / agency / consciousness at work.

My biggest fear about AI is not that these systems achieve AGI, but that we attribute it to them. We really run the risk of being ruled by non-thinking / non-conscious Rube Goldberg speech contraptions.

There is a part of me that feels like the entire "AI industry" is semi-scammy, with its continued insinuation that their autocorrect++ models somehow have intelligent agency. Even most of the "AI scare" pieces written in the media are in fact closet pump pieces that overstate their creations' abilities.

reply

I don't think it's the entire industry, but the guys raking in billions by suggesting your new super-smart $0.20/hr employee is actually performant are trouble.

reply
anthropomorphic speech

This is pretty standard now, so I think he's right; it's a problem that feeds on itself...

Product companies are incentivized to feed into their users' psychosis -> which reinforces the psychosis -> which then strengthens the incentives -> repeat

raking in billions

I think all these things are actually losing money still, but that makes it worse, because now their only KPI is recurring user engagement

We really run the risk of being ruled by non-thinking / non-conscious Rube Goldberg speech contraptions.

We've always lived in an insane asylum but now the insane have even more leverage

reply
Product companies are incentivized to feed into their users' psychosis

There must be a way to isolate from the scammers by not scamming. If there is no way, then it feels as if we're lost as a species. I can't accept that, because it would make everything pointless. We must defy.

I think all these things are actually losing money still

Oh yes, they are raking in billions by printing preferred shares and then paying people nice salaries (Zuck's paying some people $100M/year, apparently!).

We've always lived in an insane asylum but now the insane have even more leverage

Agreed.

reply
isolate from the scammers by not scamming

If my theory holds true that AI is the new UI, the tools being created for bots to hook into will eventually shrink the footprint of the bots themselves... over time we should be able to relegate these things to just being glue for a new wave of APIs.

My bearishness in the short-to-medium term is the bull fuel for the long term (order emerges from chaos).

reply

I don’t pay AI to think but to look pretty

reply

Ha. So you are saying I shouldn't get in that driverless robotaxi?

reply

Will you instruct it to take the roundabout counter-clockwise, never drop below 90 mph, and disable the seat belts?

reply

That would be some kind of ride.

reply

Those are more of a regular algo than a fancy autocomplete disguised as "thinking".

reply

Sometimes simpler is better.

reply

Exactly. Driving is one of those things where an algo is less variable than a piece of walking meat vulnerable to an untold number of external influences.

reply

I have not tried Waymo yet, but I know tourists in Los Angeles like them because they are cheaper than a regular taxi.

reply
with an AI coding assistant admitting to a "catastrophic failure" after wiping out an entire company database containing over 2,400 business records

They're doomed haha

reply

2,400 records is tiny

reply

Seems like a governance and compliance issue. They either didn't have or failed to comply with their corporate AI Security Policy. Also, their DR strategy seems lacking if they had no recovery options and never tested the procedure once in a while. They probably deserved what they got.
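
For what it's worth, the "test the procedure once in a while" part can be as small as a scheduled restore drill: pull the latest backup into a scratch copy and sanity-check it. A minimal sketch, assuming a SQLite backing store with made-up paths and a made-up table name (none of which come from the article):

```python
import os
import shutil
import sqlite3
import tempfile

# Hypothetical backup location, just to make the drill concrete.
BACKUP_DB = "backups/business_records.bak"

def restore_drill(backup_path: str, expected_min_rows: int = 2400) -> bool:
    """Restore the latest backup into a scratch copy and sanity-check it."""
    with tempfile.TemporaryDirectory() as tmp:
        scratch = os.path.join(tmp, "restore_check.db")
        shutil.copy(backup_path, scratch)  # stand-in for a real restore step
        con = sqlite3.connect(scratch)
        try:
            (count,) = con.execute(
                "SELECT COUNT(*) FROM business_records"
            ).fetchone()
        finally:
            con.close()
        # A backup that restores but is missing most of its rows still fails.
        return count >= expected_min_rows

if __name__ == "__main__":
    ok = restore_drill(BACKUP_DB)
    print("restore drill passed" if ok else "restore drill FAILED")
```

The point isn't the ten lines, it's the habit: a backup you've never restored is a hope, not a recovery option.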

reply

Play AI games, win AI prizes.

reply

Yikes, that's the nightmare scenario for anyone relying on autonomous AI. Gives "rogue agent" a whole new meaning. 😬 Time to seriously rethink permissions, sandboxing, and kill switches before letting them anywhere near critical infrastructure.
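
A minimal sketch of what "rethink permissions" could look like at the data layer: the agent never holds credentials that can write, so its worst case is a bad SELECT. Everything here (SQLite, the wrapper class, the table name) is assumed for illustration; in a real production database the equivalent would be a role granted only read access:

```python
import re
import sqlite3

# Statements the agent must never run on its own (illustrative, not exhaustive).
DESTRUCTIVE = re.compile(
    r"^\s*(drop|delete|truncate|alter|update|insert)\b", re.IGNORECASE
)

class ReadOnlyAgentConnection:
    """The only database handle the agent ever sees: SELECTs and nothing else."""

    def __init__(self, path: str):
        # mode=ro makes SQLite itself refuse writes, a second layer under the regex check.
        self._con = sqlite3.connect(f"file:{path}?mode=ro", uri=True)

    def query(self, sql: str):
        if DESTRUCTIVE.match(sql) or not sql.lstrip().lower().startswith("select"):
            raise PermissionError("agent is not allowed to run this statement")
        return self._con.execute(sql).fetchall()

# Anything destructive goes through a separate, human-approved path; the "kill switch"
# is simply that the agent never has credentials that can write.
agent_db = ReadOnlyAgentConnection("business_records.db")
print(agent_db.query("SELECT COUNT(*) FROM business_records"))
```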

reply