A Case Study of Privacy Protection Challenges and Risks in an AI-Enabled Healthcare App

Artificial intelligence (AI) is increasingly used in healthcare systems and applications (apps), raising questions and debate about ethical issues and privacy risks. This study explores and discusses the ethical challenges, privacy risks, and possible solutions related to protecting user data privacy in AI-enabled healthcare apps. The study is based on a healthcare app named Charlie, featured in one of the fictional case studies designed by Princeton University to stimulate critical thinking and discussion of emerging ethical issues surrounding AI.
