However, modern healthcare (at least in wealthier countries) has been using expert systems for a long time already, and that's really the OG application of AI proper.
Ultimately though, if you have to trust it, it means it's no good. It ought to be deterministic, which is why you should never trust a chatbot; those are anti-deterministic.
Thanks very much for your explanation, but humans shouldn't be fully confident in AI. It can sometimes malfunction without us knowing in time, so there should be constant observation of it; otherwise it will end up doing the opposite of what it's supposed to do.
Only if docs use it as an assistant; I mean, someone needs to supervise it. Until we have AGI, I won't trust an AI diagnosing things solely on its own.
Ha! 😂 It's the time difference, and I've been awake before you, I think. I'm a freelancer from West Africa. Thanks for your concern and contribution once again, boss.
I’d trust an AI doctor for a first pass, like catching patterns in symptoms or lab results that humans might miss. But I’d want a human doc to double-check the diagnosis.
I don’t even trust humans; I could trust exams and data.
I don’t like AI in general, so if an insurance company uses AI as a UX to tell me what medical science has to say about my health data, I’d prefer not to use it. In the not-too-distant future, humans will use AI to cross-reference your exams with other people’s data, like big data with no privacy at all. Not for me, thanks.