Perspective: Global Index on Responsible AI Report
- Rod Crowder
- Jul 15
- 4 min read
Updated: Jul 16

The Rapid Pace of AI vs. The Slow March of Responsibility
Artificial Intelligence (AI) is transforming our world at an unprecedented pace, permeating nearly every aspect of daily life, from policing and healthcare to commerce and communication. This rapid evolution, however, has outstripped efforts to ensure its responsible development and deployment.
The inaugural Global Index on Responsible AI (GIRAI) 2024 report, a monumental data collection effort covering 138 countries, serves as a stark reminder of this growing "AI responsibility gap". As a Responsible AI consultant, I find that this report provides crucial insights into the current global landscape, highlighting both promising areas and significant deficiencies.
Defining Responsible AI: More Than Just Technology
The GIRAI defines Responsible AI not merely as a technical undertaking but as a holistic approach to the "design, development, deployment and governance of AI in a way that respects and protects all human rights and upholds the principles of AI ethics through every stage of the AI lifecycle and value chain". It emphasises that all actors within the national AI ecosystem must bear responsibility for the human, social, and environmental impacts of their decisions. The index measures 19 thematic areas clustered into three dimensions: Human Rights and AI, Responsible AI Governance, and Responsible AI Capacities. Each thematic area is assessed across three pillars: Government Frameworks, Government Actions, and Non-state Actors’ initiatives.
Key Observations: A Sobering Reality
The overarching finding of the GIRAI is clear: global progress toward responsible AI is significantly behind the curve of AI development and adoption. The majority of countries globally lack adequate measures to protect or promote human rights in the context of AI, impacting nearly 6 billion people.
Here are some key observations from the report:
- AI Governance vs. Responsible AI: While 39% of assessed countries have national AI strategies, the mere existence of these frameworks does not equate to effective responsible AI promotion. Many strategies lack enforceability and fail to embed a comprehensive range of responsible AI principles. Countries scoring lower in the GIRAI often possess national AI strategies but struggle to demonstrate the capacity for responsible AI development and use.
- Limited Human Rights Protection: Mechanisms to safeguard human rights from AI risks are largely absent in most countries. For instance, only 43 countries have government frameworks for AI impact assessments, and a mere 35 provide frameworks for redress and remedy in cases of AI-related harm. Furthermore, a significant gap exists in public procurement guidelines that ensure rights-respecting AI adoption by the public sector, with only 24 countries having such frameworks.
- International Cooperation as a Cornerstone: Interestingly, international cooperation emerged as the highest-scoring thematic area across all regions, indicating a strong foundation for global solidarity in responsible AI. The adoption of the UNESCO Recommendation on the Ethics of Artificial Intelligence by most countries is a testament to its significance in strengthening country-level capacity in AI ethics. Regional initiatives, such as the Santiago Declaration and the ASEAN Guide on AI Governance and Ethics, also demonstrate rising collaborative efforts.
- Persistent Gaps in Inclusion and Equality: The report underscores critical shortcomings in addressing gender equality, labour protections, and cultural and linguistic diversity in AI. Gender equality, in particular, was one of the lowest-performing thematic areas, with only 24 countries having government frameworks addressing this intersection. Similarly, few countries are adequately protecting labour rights in the evolving AI-driven economy. The promotion of cultural and linguistic diversity in AI also remains largely unaddressed.
- The Crucial Role of Non-State Actors: Universities and civil society organisations are playing a vital role in advancing responsible AI where governments fall short. These actors are actively engaged in filling critical gaps, particularly within the Human Rights and AI dimension, through research, advocacy, and initiatives focused on areas like gender equality, labour protections, bias, and cultural diversity.
- AI Safety Concerns: A deeply concerning finding is that only 38 countries have measures in place to ensure the safety, security, reliability, and accuracy of AI systems. This poses a significant risk to the technical integrity of AI on a global scale.
Recommendations: Diverse Pathways to Responsibility
The GIRAI emphasises that there are "many pathways to achieving responsible AI" and countries should tailor their efforts based on their current performance. The report provides targeted recommendations based on a country's score range:
- For high-scoring countries (above 75): These nations should leverage their influence to advance international cooperation, helping to bridge the AI divide, and adopt specific, legally enforceable frameworks that address key areas of AI and human rights.
- For mid-to-high scoring countries (above 50 and up to 75): Focus areas include advancing government actions and frameworks for women's rights and gender equality in AI, implementing mechanisms for access to redress and remedy for AI-related harms, and incentivising non-state actors to promote inclusion. Additionally, they should ensure the adoption of technical standards for AI safety and encourage competition commissions to address AI-related issues.
- For mid-to-low scoring countries (above 25 and up to 50): These countries need to prioritise action on children's rights, strengthen the role of civil society in responsible AI ecosystems, support cultural and linguistic diversity initiatives, ensure government frameworks protect workers' rights in AI contexts, and adopt technical standards for AI system safety.
- For low-scoring countries (between 0 and 25): The fundamental steps involve prioritising the adoption or updating of data protection and privacy laws, ensuring the adoption of AI impact assessments, developing public sector skills in responsible AI, encouraging non-state actor engagement, and developing standards for responsible public procurement of AI.
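The score bands above can be sketched as a simple classifier. This is purely illustrative: the function name and band labels are mine, not the report's, though the numeric cut-offs follow the ranges the GIRAI recommendations use.

```python
def girai_band(score: float) -> str:
    """Map a GIRAI country score (0-100) to its recommendation band.

    Cut-offs mirror the ranges stated in the report: "between 0 and 25",
    "above 25 and up to 50", "above 50 and up to 75", and "above 75".
    The labels themselves are illustrative shorthand.
    """
    if not 0 <= score <= 100:
        raise ValueError("GIRAI scores range from 0 to 100")
    if score > 75:
        return "high"
    if score > 50:
        return "mid-to-high"
    if score > 25:
        return "mid-to-low"
    return "low"
```

Note that the boundaries are upper-inclusive: a score of exactly 50 falls in the mid-to-low band, matching the report's "up to 50" phrasing.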
Looking Ahead: A Shared Global Agenda
The Global Index on Responsible AI 2024 paints a realistic, yet challenging, picture of the current state of responsible AI globally. While the path ahead is long, the report also highlights bright spots and the strong commitment to international cooperation as a crucial foundation for building a shared agenda on AI governance.
The upcoming second edition, with its focus on generative AI and the inclusion of AI and persons with disabilities, will further refine our understanding and provide ongoing benchmarks for this critical journey towards a human-centric and rights-based AI future. It is imperative that all stakeholders – governments, private sector, academia, and civil society – collaborate to bridge the AI responsibility gap and ensure that AI truly serves as a force for good for all of humanity.
Download a copy of the report below.