Ordo AI Chao
- Jon Cili
- Jul 19, 2021
- 2 min read
Updated: Nov 15, 2022
In these unprecedented times of rapid AI development and burgeoning ethical dilemmas in patient care, using AI can feel like navigating the murky waters of a foggy swamp. The alligators and mosquitoes lurking in that swamp are the perilous misjudgments AI can make. Just last week, I discussed an algorithm that incorrectly interpreted data and unintentionally yielded biased results. Without rigorous standards for development and post-deployment testing, an AI company's learning algorithms can easily go awry. And given that societal expectations and government regulations must also be met, it's important that such standards be built into AI development from the start.
A great resource for AI in healthcare turns out to be a recent report from the World Health Organization. On June 28, 2021, WHO released its first global report advising on the ethical use of AI in healthcare. This 148-plus-page report, aptly titled Ethics and governance of artificial intelligence for health, is the culmination of two years of discussions held by WHO-appointed experts. Although by no means the only resource available, the report stands out for its global stance on AI use and for the six core principles that follow:
- AI should preserve human autonomy in healthcare, thereby allowing physicians to continue making medical decisions and patients to continue providing informed consent.
- AI should promote human well-being, human safety, and the public interest.
- AI should be transparent and explainable, thereby allowing for continued public discussion about how its algorithms function.
- AI should foster responsibility and accountability, thereby allowing redress for those who are adversely affected by it.
- AI should ensure inclusiveness and equity, thereby protecting the characteristics listed in human rights codes.
- AI should be responsive and environmentally sustainable.

In addition to these principles, the report also highlights the importance of governments and companies anticipating job losses caused by AI and of training healthcare professionals to use AI-empowered equipment. Check out Ethics and governance of artificial intelligence for health (who.int) to see the full report.
Whereas WHO provides concrete guidance to governments about AI usage through its detailed report, other organizations, such as the AI Policy Forum (AIPF), have taken more interactive approaches to AI guidance. Convened by the Massachusetts Institute of Technology (MIT) Schwarzman College of Computing, AIPF is a global effort that operates in yearlong cycles, allowing scientists to engage with key officials at all levels of government (check out the MIT Schwarzman College of Computing AIPF Overview for more information). As MIT provost Martin A. Schmidt noted in a video, one of the key goals of AIPF “is to bridge two worlds that are too often distinct: the world of those designing the next generation of AI systems...and the world of those formally charged with insuring their responsible use” (AI Policy Forum Symposium Day 1 Video).
This “bridge” between policymakers and AI developers may well be the lighthouse that guides us through the fog surrounding AI ethics.
- Jon Cili