AI Governance in Big Consulting: Navigating the Ethical Frontier (III)
- stefandulman
- Jul 23, 2024
- 2 min read
As artificial intelligence continues to reshape industries, major consulting firms are at the forefront of addressing the complex governance challenges that arise with AI implementation. This series explores how leading consultancies are developing frameworks, strategies, and best practices to ensure responsible and ethical AI deployment.
McKinsey has identified several key governance topics related to AI, particularly generative AI (gen AI) (see here). The three most discussed are:
Addressing Potential Risks of Generative AI - Governments are focusing on how to manage the potential risks associated with gen AI, which include unpredictability, inaccuracy, bias, and cybersecurity threats. Specific risks for governments also encompass the loss of confidential data and national security compromises. To mitigate these risks, frameworks and guidelines are being developed, such as the voluntary AI ethical framework in Australia and international collaborations like the Bletchley Declaration (see here).
Transforming Public Service Delivery - Generative AI offers significant opportunities to enhance public service delivery. Governments are exploring how gen AI can automate and improve various services, such as document drafting, claims management, and citizen engagement through chatbots. For instance, the city of Heidelberg in Germany has implemented a digital citizen assistant to help residents navigate government services. This transformation aims to increase efficiency, reduce repetitive work, and improve the accuracy and speed of service delivery.
Foundation Models - Governments are debating whether to develop their own national gen AI foundation models, the core models on which gen AI applications are built. This involves balancing innovation against regulation and ensuring that these models are developed responsibly and securely. The Biden administration's executive order on AI governance emphasizes ethics, safety, and security, and mandates private sector accountability in AI development.
These topics highlight the balance governments must strike between leveraging the benefits of gen AI and managing its associated risks.
Building on lessons learned from managing cyber risks, applying these principles to AI governance is particularly relevant in the Australian economic context. Australia's diverse economy, spanning sectors such as mining, agriculture, finance, and technology, presents unique challenges and opportunities for AI implementation. In the mining sector, for instance, AI can optimize operations and improve safety, but it also raises concerns about job displacement and data security. The financial services industry, a cornerstone of the Australian economy, is leveraging AI for fraud detection and customer service, which necessitates robust governance frameworks to ensure compliance with stringent regulatory requirements and to maintain consumer trust.
For small and medium enterprises (SMEs) in Australia, which make up about 99% of all businesses, AI governance presents both challenges and opportunities. Many SMEs lack the resources for comprehensive AI governance frameworks, yet they are increasingly adopting AI technologies to remain competitive. Australian SMEs in sectors like retail and professional services are using AI for tasks such as inventory management and customer analytics. To address this, initiatives like the AI Action Plan by the Australian Government aim to support SMEs in responsibly adopting AI technologies. Additionally, industry associations and government bodies are developing simplified AI governance toolkits tailored for SMEs, helping them navigate the complexities of AI implementation while managing associated risks (see here).