
Research Article

Navigating AI Governance in Health & Human Services: Principles and Implementation Strategy


Abstract

The Health and Human Services (HHS) industry is gradually adopting artificial intelligence (AI), making AI governance increasingly crucial. Regulatory authorities for the HHS industry have outlined governance principles, yet social agencies struggle to apply them in practice. This paper covers the six core principles outlined in the Trustworthy AI (TAI) playbook and provides strategic guidance on implementing AI governance across the stages of the AI lifecycle.

 

Keywords: Health and Human Services (HHS), Artificial Intelligence (AI), AI Governance, Trustworthy AI (TAI), AI life cycle

1. Introduction

Artificial Intelligence (AI) is a transformative force reshaping the health and human services landscape, presenting opportunities to streamline operations, enhance outcomes, and revolutionize service delivery. AI applications such as bots that support caseworkers in application processing, assist clients via calls, and facilitate online application submissions, along with intelligent document processing that reduces caseworkers’ burden, are a few examples of AI’s impact in this sector.

AI Governance ensures responsible and ethical AI adoption in the Health and Human Services Industry. This paper covers how social agencies can effectively embed these principles into the AI lifecycle.

2. Principles of AI Governance in Human Services and Practical Implementation
Principles and guidelines provided by entities like the Federal Government, the Science and Technology Policy Institute (STPI), and other regulatory authorities are essential in shaping AI governance in the HHS industry. For instance, the STPI conducted a comprehensive analysis in late 2021 to enhance the trustworthiness of AI systems.

The Trustworthy AI (TAI) playbook is a guiding framework that outlines six fundamental principles. These principles provide the foundation for ethical and responsible AI development and deployment.

The six fundamental principles of AI Governance as per the Trustworthy AI (TAI) playbook are:
Fair/Impartial
Review by both external and internal stakeholders helps achieve fair and impartial implementation of AI projects. This approach ensures that all participants’ needs and perspectives are considered.

The presence of a governance body within the social agency plays a vital role in the ethical deployment of AI technologies. This body monitors and oversees AI projects across different departments throughout their lifecycle.

Furthermore, review by external stakeholders is essential to validate that AI initiatives are ethical and compliant. For instance, before implementing AI technology for the Supplemental Nutrition Assistance Program (SNAP), social agencies must adhere to established review processes. Any proposed AI use case for SNAP program delivery must undergo an approval process, including submission of a Major Change form to the Food and Nutrition Service (FNS), before the project is implemented. This allows FNS to review the change’s overall impact on people and operations.
Transparent/Explainable
Transparency and explainability in data usage and decision-making processes within AI systems are fundamental requirements for building trustworthy AI practices. Some practices that will help ensure the implementation of this principle are engaging stakeholders early on in the project, clearly documenting the solution, and having a well-validated system.

Engaging stakeholders early in the AI project lifecycle will help build trustworthy AI practices. This early engagement is critical in aligning with the agency's and stakeholders' goals and values.

All relevant individuals must be able to understand how AI systems make decisions. Stakeholders should be able to gain insight into the workings of AI, i.e., what algorithms, attributes, and correlations are used in the respective AI system. Detailed and precise design documentation of the AI system, including information on how data is collected, processed, and used to make decisions, should help to achieve this goal.

AI use cases should be validated to promote the system's reliability. Testing should be conducted using diverse datasets and scenarios to assess the system's robustness and accuracy.
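The validation practice described above can be sketched in code. The following minimal Python example, with a hypothetical `model` callable and record format chosen purely for illustration, checks a model’s accuracy separately for each subgroup in a dataset and flags any group that falls below a chosen threshold:

```python
# Illustrative sketch: validating a model's accuracy across subgroups.
# The model interface and record fields are hypothetical placeholders.

def accuracy(model, records):
    """Fraction of records the model classifies correctly."""
    correct = sum(1 for r in records if model(r["features"]) == r["label"])
    return correct / len(records)

def validate_by_subgroup(model, records, group_key, threshold=0.9):
    """Compute per-subgroup accuracy and flag groups below the threshold."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    report = {g: accuracy(model, rs) for g, rs in groups.items()}
    failures = {g: a for g, a in report.items() if a < threshold}
    return report, failures
```

A failing subgroup would prompt further investigation, such as collecting more representative training data for that group before deployment.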

The outputs generated by the AI system should be explainable and interpretable, allowing stakeholders to understand how decisions are made.
Responsible/Accountable
Responsibilities and accountability must be defined for the governance body, the AI implementation team, and digital workers.

Social agencies must establish a governance structure to oversee every aspect of the AI solution lifecycle, i.e., design, development, deployment, and maintenance.

Social agencies must identify digital identities (the unique digital profiles of AI systems and their components) and manage them to support the ethical use of AI technologies. This involves assigning clear roles and responsibilities to each digital identity, implementing access controls to prevent unauthorized use, and regularly updating and monitoring these identities to ensure their integrity and trustworthiness.
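One way to make this concrete is a small registry that records each digital identity, its assigned role, and its permitted actions. The sketch below is a minimal Python illustration with hypothetical names, not a prescribed implementation; a real agency would back this with its identity and access management platform:

```python
# Illustrative sketch: a minimal registry of digital identities for AI
# components, with assigned roles and deny-by-default access checks.

from dataclasses import dataclass, field

@dataclass
class DigitalIdentity:
    name: str                            # e.g. "intake-bot" (hypothetical)
    role: str                            # assigned responsibility
    permissions: set = field(default_factory=set)

class IdentityRegistry:
    def __init__(self):
        self._identities = {}

    def register(self, identity: DigitalIdentity):
        self._identities[identity.name] = identity

    def authorize(self, name: str, action: str) -> bool:
        """Deny by default: unknown identities or actions are rejected."""
        identity = self._identities.get(name)
        return identity is not None and action in identity.permissions
```

The deny-by-default check reflects the access-control goal above: a digital worker can perform only the actions explicitly assigned to its identity.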
Safe/Secure
The safe and secure principle helps manage potential risks in AI systems, like cyber threats, data breaches, or algorithmic biases, that can cause physical or digital harm to individuals, groups, or entities.

To effectively implement the safe/secure AI governance principle, social agencies must develop and implement a comprehensive security plan outlining proactive measures to protect AI systems from potential risks. This security plan should cover strategies for identifying vulnerabilities, assessing threats, and implementing appropriate safeguards to mitigate risks effectively, such as specific security controls, data encryption, and user authentication mechanisms.
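As a small illustration of one such safeguard, user authentication, the sketch below uses Python’s standard library to hash passwords with a salt and verify them with a constant-time comparison. The parameter choices (iteration count, salt length) are examples only, not agency policy:

```python
# Illustrative sketch: salted password hashing for user authentication,
# one of the safeguards a security plan might specify.
# Iteration count and salt length are example values, not policy.

import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a key from the password with PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison to resist timing attacks."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

Storing only the salt and derived digest, never the password itself, limits the damage of a data breach, which is exactly the kind of risk this principle targets.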
 
Privacy
The privacy of individuals, groups, or entities must be respected, and their data must be used strictly for its intended and specified purposes, with approval from the data owner. Data must be used only within these agreed-upon boundaries to protect trust and confidentiality.

Social agencies must evaluate the sensitivity of the data they employ within AI systems. This can be done through detailed and well-documented impact assessments that evaluate data usage’s potential risks and implications. This measure will help safeguard privacy and address any identified vulnerabilities.

Social agencies must adhere to all applicable privacy regulations and laws, and continuously incorporate any changes in the law into their implementation policies and strategies.
Robust/Reliable
AI systems should consistently produce accurate and dependable outputs that align with their original design objectives.

AI systems must improve over time through continuous learning. This learning process should be comprehensive, covering various data sources and scenarios to ensure the AI can effectively handle diverse and unforeseen situations.

With robustness and reliability practices, social agencies can ensure that AI systems meet their original design goals and provide value and trustworthiness.

3. AI Governance and AI Life Cycle
Governing body roles and responsibilities
Social agencies should identify the governance body and create an actionable AI implementation and maintenance framework aligned with the fundamental principles of AI governance. A vital responsibility of the governance body is to define a comprehensive set of best practices and procedures in accordance with the laws and regulations of the HHS industry.

The governance body comprises two main groups: governing and key working members. Governing members, such as the Chief AI Officer and Chief Compliance Officer, may not be involved in the day-to-day implementation and maintenance of AI systems but provide oversight and strategic direction for all AI projects within the agency. On the other hand, working members, like the AI Infrastructure and Operations Lead and the AI Development Lead, are directly involved in the technical and operational aspects of AI system development and deployment.

The AI governance body helps manage AI initiatives responsibly, ethically, and in compliance with applicable principles and regulations.


Figure 1: AI Governance Body Roles & Responsibilities.

AI Principles in Action

The governance body must furnish a definitive and actionable framework that guides the implementation of AI systems and facilitates ongoing monitoring and refinement processes.

A structured reference table can establish correlations between the principles and each AI project’s actions, fostering a harmonious integration of ethical considerations into the operational fabric of artificial intelligence initiatives.
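Such a reference table can be represented as a simple lookup structure. In the Python sketch below, the deliverable names are illustrative placeholders of my own choosing, not items prescribed by the TAI playbook; an agency would substitute the artifacts its own processes require:

```python
# Illustrative sketch: a structured reference table mapping each TAI
# principle to example lifecycle deliverables. Deliverable names are
# hypothetical placeholders, not prescribed by the TAI playbook.

PRINCIPLE_DELIVERABLES = {
    "Fair/Impartial":          ["stakeholder review record", "bias assessment"],
    "Transparent/Explainable": ["design documentation", "model explanation report"],
    "Responsible/Accountable": ["roles-and-responsibilities matrix"],
    "Safe/Secure":             ["security plan", "vulnerability assessment"],
    "Privacy":                 ["privacy impact assessment"],
    "Robust/Reliable":         ["validation test results", "monitoring plan"],
}

def deliverables_for(principle: str) -> list:
    """Look up the deliverables for a principle (empty list if unknown)."""
    return PRINCIPLE_DELIVERABLES.get(principle, [])
```

A table like this lets the governance body audit each project by checking that every principle has its corresponding deliverable at the expected lifecycle stage.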


Figure 2: AI Principles mapped with AI life cycle deliverables.

 

4. Conclusion

In conclusion, effective AI governance is vital to successfully implementing AI systems in the health and human services industry. Adopting a structured approach that aligns the principles with specific deliverables throughout the AI lifecycle will help social agencies overcome the challenge of creating an actionable governance framework.

5. References

  1. https://arxiv.org/pdf/2206.00335
  2. https://www.acf.hhs.gov/opre/report/options-opportunities-address-mitigate-existing-potential-risks-promote-benefits
  3. https://www.newamerica.org/pit/blog/need-regulate-ai-implementation-public-assistance-programs/
  4. https://www.ecfr.gov/current/title-7/subtitle-B/chapter-II/subchapter-C/part-272/section-272.15
  5. https://www.hhs.gov/sites/default/files/hhs-trustworthy-ai-playbook.pdf      
  6. https://dualitytech.com/blog/ai-governance-framework/