Tomorrow’s medicine is today’s research. That is why the question of how we allocate resources to research is at least as important as the question of how we allocate resources to health care itself. — Tony Hope

Privacy and Confidentiality

  1. Historical Context

    The Hippocratic Oath (Edelstein, 1943) emphasizes confidentiality as a sacred trust between physician and patient. Contemporary reiterations, such as the World Medical Association’s Declaration of Geneva, echo these sentiments, framing patient privacy as non-negotiable.

    The system must also implement Ann Cavoukian’s Privacy by Design framework, which emphasizes positive-sum rather than zero-sum outcomes (Cavoukian, 2009).

  2. Philosophical Considerations

    Enlightenment thinkers such as Kant upheld human autonomy as a fundamental moral imperative. Kantian ethics suggests that using patient data solely as a means to an end, without informed consent, is ethically problematic (Kant, 1785). John Stuart Mill’s harm principle further supports protecting private information to prevent harm and maintain trust.

  3. Contemporary Implications

    In the age of data capitalism (Zuboff, 2019), health data is a valuable commodity. The AI system must therefore strictly follow Ontario’s Personal Health Information Protection Act (PHIPA) and the federal Personal Information Protection and Electronic Documents Act (PIPEDA) (Information and Privacy Commissioner of Ontario, 2004; Office of the Privacy Commissioner of Canada, 2024). Measures such as data minimization, differential privacy, encryption, and strict access controls must be in place. The framework should also include ongoing compliance checks and audits to ensure data-handling practices remain in line with evolving legal standards and community expectations.
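
    To make one of these measures concrete, the sketch below applies differential privacy via the Laplace mechanism to an aggregate count query. The query, count, and epsilon value are hypothetical, and a production system would rely on a vetted privacy library rather than hand-rolled noise.

    ```python
    import numpy as np

    def laplace_count(true_count: int, epsilon: float, rng: np.random.Generator) -> float:
        """Release a count with epsilon-differential privacy via the Laplace mechanism.

        A counting query has L1 sensitivity 1 (adding or removing one patient
        changes the count by at most 1), so noise is drawn from Laplace(1/epsilon).
        """
        sensitivity = 1.0
        return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    rng = np.random.default_rng(seed=42)
    true_count = 128  # hypothetical: number of patients with a given diagnosis
    private_count = laplace_count(true_count, epsilon=0.5, rng=rng)
    print(f"true: {true_count}, privately released: {private_count:.1f}")
    ```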

  4. Guidelines

    • Minimum necessary data collection principle
    • End-to-end encryption for all health data
    • Strict access controls and audit trails (see the sketch after this list)
    • Data localization within Canada to comply with PHIPA
    • Regular privacy impact assessments
    • Clear data retention and disposal policies
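
    As a minimal sketch of the access-control and audit-trail guideline above, the following pairs a role-based permission check with an audit log. The roles, permissions, and record identifiers are invented placeholders, not PHIPA-prescribed categories.

    ```python
    import logging
    from datetime import datetime, timezone

    # Minimal role-based access check that records every decision in an audit log.
    logging.basicConfig(level=logging.INFO, format="%(message)s")
    audit_log = logging.getLogger("phi_audit")

    ROLE_PERMISSIONS = {
        "clinician": {"read", "write"},
        "researcher": {"read"},  # in practice, de-identified access only
        "billing": set(),
    }

    def access_record(user: str, role: str, record_id: str, action: str) -> bool:
        allowed = action in ROLE_PERMISSIONS.get(role, set())
        audit_log.info(
            "%s | user=%s role=%s record=%s action=%s allowed=%s",
            datetime.now(timezone.utc).isoformat(), user, role, record_id, action, allowed,
        )
        return allowed

    access_record("dr_smith", "clinician", "rec-001", "read")  # allowed, logged
    access_record("analyst_1", "billing", "rec-001", "read")   # denied, logged
    ```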

Algorithmic Fairness and Bias Mitigation

  1. Historical Context

    In the mid-2000s, healthcare analytics began shifting from purely statistical methods to more complex machine learning models as computational power and large Medicare claims datasets became more accessible. ML models were developed to predict patient frailty, identify fraud and abuse, and forecast patient outcomes such as hospital readmission or mortality (Obermeyer & Emanuel, 2016; Raghupathi & Raghupathi, 2014).

    However, researchers found that such Medicare data, reflecting decades of social inequality, could lead to predictive models that inadvertently disadvantaged some patients. For example, models predicting healthcare utilization might assign lower risk scores to communities with historically reduced access to care, not because those communities were healthier, but because they had fewer recorded encounters with the health system (Obermeyer et al., 2019).

    Early mitigation attempts focused primarily on “fairness through awareness”—identifying and documenting biases. Health services researchers and policymakers began calling for the inclusion of demographic and social determinants of health data to correct for skewed historical patterns (Rajkomar et al., 2018). Some efforts were made to reweight training samples or stratify predictions by race, ethnicity, or income to detect differential performance.
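
    On synthetic data, these two early mitigation steps might look like the sketch below: stratifying error rates by group to detect differential performance, then deriving inverse-frequency sample weights for retraining. The group labels, sizes, and injected error rates are fabricated for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    groups = np.array(["urban"] * 800 + ["rural"] * 200)
    y_true = rng.integers(0, 2, size=1000)
    y_pred = y_true.copy()
    # Inject a higher error rate for the smaller group to mimic skewed data.
    rural_idx = np.where(groups == "rural")[0]
    flipped = rng.choice(rural_idx, size=60, replace=False)
    y_pred[flipped] = 1 - y_pred[flipped]

    # Step 1: stratify performance by group to expose differential error rates.
    for g in np.unique(groups):
        mask = groups == g
        print(f"{g}: error rate = {np.mean(y_pred[mask] != y_true[mask]):.2%}")

    # Step 2: inverse-frequency sample weights for reweighted retraining.
    freq = {g: np.mean(groups == g) for g in np.unique(groups)}
    weights = np.array([1.0 / freq[g] for g in groups])
    weights /= weights.mean()  # normalize so the average weight is 1
    ```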

  2. Philosophical Considerations

    John Rawls’ veil of ignorance, and his principles of justice more generally, encourage designing systems that benefit all segments of society fairly, without bias toward any particular group (Rawls, 1999). Additionally, Nussbaum and Sen’s capabilities approach suggests that technologies should expand human capabilities and agency (health, longevity, quality of life), especially for marginalized communities (Robeyns, 2020).

    Notably, the AI system should also draw on Kimberlé Crenshaw’s theory of intersectionality to address fairness in healthcare disparities (Crenshaw, 1991).

  3. Contemporary Implications

    Modern scholarship in data ethics (Noble, 2018) and public health frameworks stress the importance of addressing algorithmic bias. Recency bias in training data (Crawford, 2021) can disproportionately harm smaller rural communities, Indigenous populations, or minority groups who may not be well-represented in the data.

  4. Guidelines

    • Rigorous bias audits of training datasets.
    • Engaging local communities (e.g., Northern Ontario Indigenous communities, diverse communities in Hamilton) in the development and testing phases.
    • Regularly updating and retraining models on more representative datasets.
    • Incorporating Kimberlé Crenshaw’s intersectionality framework to ensure that multiple axes of identity (e.g., Indigenous identity, rural location, age, disability) are considered.
    • Continual monitoring and transparent reporting on equity metrics over time (a minimal intersectional reporting sketch follows this list).
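
    One way to operationalize intersectional monitoring is to report metrics over every combination of identity axes rather than one axis at a time, so gaps that appear only at intersections become visible. In the synthetic sketch below, the axes, categories, and predictions are placeholders.

    ```python
    from itertools import product

    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    location = rng.choice(["urban", "rural"], size=n, p=[0.8, 0.2])
    indigenous = rng.choice([False, True], size=n, p=[0.9, 0.1])
    y_pred = rng.integers(0, 2, size=n)  # stand-in for model outputs

    # Report the positive-prediction rate for each intersectional subgroup.
    for loc, ind in product(["urban", "rural"], [False, True]):
        mask = (location == loc) & (indigenous == ind)
        rate = y_pred[mask].mean() if mask.any() else float("nan")
        print(f"location={loc:5s} indigenous={ind!s:5s} n={mask.sum():4d} "
              f"positive rate={rate:.2%}")
    ```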

Interpretability and Transparency

  1. Historical Context

    During the 1970s and 1980s, some of the earliest applications of AI were expert systems designed to replicate the decision-making abilities of human specialists—most notably in the medical domain (Haugeland, 1997). One of the pioneering systems, MYCIN, developed at Stanford University in the 1970s, diagnosed and recommended treatments for blood infections (Shortliffe, 1974). MYCIN’s developers recognized the importance of justifying recommendations, implementing what were known as “rule traces” to explain the system’s reasoning in human-understandable terms. Although these explanations were rudimentary, they established the principle that AI systems, especially those used in high-stakes domains like healthcare, should provide comprehensible justifications.
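
    A toy reconstruction of the rule-trace idea (the rules and facts below are invented for illustration and are not MYCIN’s) might look like the following: a forward-chaining engine that records each rule as it fires, so the trace itself serves as the human-readable justification.

    ```python
    # Each rule: (name, premises that must all hold, conclusion to assert).
    RULES = [
        ("R1", {"gram_negative", "rod_shaped"}, "likely_enterobacteriaceae"),
        ("R2", {"likely_enterobacteriaceae", "hospital_acquired"}, "consider_broad_spectrum"),
    ]

    def infer(facts: set[str]) -> tuple[set[str], list[str]]:
        trace = []
        changed = True
        while changed:  # keep applying rules until nothing new is derived
            changed = False
            for name, premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(f"{name}: {sorted(premises)} => {conclusion}")
                    changed = True
        return facts, trace

    facts, trace = infer({"gram_negative", "rod_shaped", "hospital_acquired"})
    print("\n".join(trace))  # the trace doubles as the explanation
    ```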

  2. Philosophical Considerations

    Hans-Georg Gadamer’s work on hermeneutics highlights the importance of interpretation and understanding in human communication, including the relationship between patient, physician, and medical knowledge (Gadamer, 1977). Minimizing the opacity of AI models aligns with respecting patient autonomy and informed consent, as patients should understand how their health data influences recommendations.

  3. Contemporary Implications

    The AI system must be equipped with user-friendly explanations of AI-driven recommendations. Rudin argues that for high-stakes decisions it is not merely desirable but often morally imperative to use interpretable models rather than post-hoc explanations of black boxes (Rudin, 2019). Thus, the AI system must be implemented with transparent algorithms. Additionally, Floridi suggests a unified principle whereby we must incorporate “both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’)” when building the AI system (Floridi, 2019).
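
    As a small illustration of Rudin’s distinction, the sketch below fits an inherently interpretable model whose coefficients are the explanation, with no post-hoc explainer required. The features and data are synthetic placeholders, not a clinical model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["age_over_65", "prior_admissions", "on_anticoagulants"]
    rng = np.random.default_rng(7)
    X = rng.integers(0, 2, size=(500, 3)).astype(float)
    # Synthetic label loosely driven by the first two features.
    logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 2.0
    y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

    model = LogisticRegression().fit(X, y)
    for name, coef in zip(feature_names, model.coef_[0]):
        print(f"{name}: weight = {coef:+.2f}")  # each weight is directly inspectable
    ```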

    In Ontario and across Canada, healthcare data falls under stringent privacy and confidentiality laws. PHIPA in Ontario and PIPEDA at the federal level mandate careful stewardship of personal health information. While these laws do not explicitly require explainable AI, their emphasis on accountability and trust indirectly encourages the use of interpretable models in AI systems.

    The emerging Artificial Intelligence and Data Act (AIDA), proposed under Bill C-27 at the federal level, signals Canada’s intention to regulate high-impact AI systems. This trajectory suggests that future regulatory frameworks, particularly in healthcare, may explicitly require automated decision-making tools such as our AI system to provide understandable rationales for their outputs (Innovation, Science and Economic Development Canada, 2024).

  4. Guidelines