ADELAIDE, July 3 — Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found. Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation...