I appreciate this presentation; it makes the issue with OS-provided agentic AI tooling palpable. But I have one huge issue:
In her conclusion on how to defend in the short term, MW says there must be "developer opt-in and out" of OS-provided AI functions. I think that is dangerous.
We need to control this at the user level. If we don't, then by the math of failure presented in the same talk, my security as a Signal user now also depends on Signal getting it right: my overall success rate gets multiplied by theirs, so it can only go down. That is totally unacceptable for anyone who deals with sensitive information, and IMHO anyone who has a phone deals with sensitive information (such as their location).
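To make the compounding concrete, here is a minimal sketch in Python. The probabilities are invented for illustration (they are not from the talk), and the independence of the layers is my assumption:

    # Minimal sketch, assuming independent layers and made-up probabilities.
    # If every layer must hold for my security to hold, success rates multiply.
    p_my_opsec = 0.99        # assumed: chance my own practices hold up
    p_signal_opt_out = 0.95  # assumed: chance Signal's opt-out actually holds

    p_overall = p_my_opsec * p_signal_opt_out
    print(f"overall success rate: {p_overall:.2%}")  # ~94%, lower than either factor

The product is always below the smallest factor, and every additional party whose opt-out I have to rely on multiplies in another number below 1.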
We shall not "trust me bro". Sorry, MW.