Online Article

Responsible AI by Design: It’s Not Compliance, It’s Culture

By Mohammed Rafee Tarafdar

10/15/2024


As the dust kicked up by generative artificial intelligence (AI) settles, it is becoming clear that the technology could provide enormous benefits if it is used responsibly. Essentially, responsible AI requires enterprises to keep their data and algorithms within regulatory guardrails that mitigate data management risks, such as violations of security, privacy, and intellectual property, as well as AI model risks, such as the propagation of false, inaccurate, or discriminatory outcomes.

But aside from compliance requirements, boards and management teams should consider embracing a culture that promotes the ethical use of AI for the simple reason that the choice between right and wrong also requires human judgment at an individual and collective level. The following are a few examples of how boards can work with management to promote a culture of ethical and responsible AI usage. 

Move Beyond Compliance

To practice ethical AI usage in the truest sense, organizations should start from the top by articulating a responsible-by-design vision and execution framework. The vision should factor in considerations such as scope—local sentiments, cultural mores, human rights, and regional coverage—and fair representation of stakeholder interests. Additionally, creating an AI vision and implementation framework is not a one-and-done exercise but a journey of several iterations. Below is Infosys’ responsible-by-design framework, which guides organizations toward a holistic approach to establishing and following ethical AI practices across the enterprise.

Responsible AI by Design Principles and Tenets

  • Human and AI
  • Safeguard human rights
  • Ethical innovation
  • Fairness
  • Transparency
  • Inclusivity and equal access
  • Global responsible AI adoption

The framework, underpinned by attention to fairness and bias, privacy, security, safety, and explainability, as well as responsible talent and culture, involves the following three guardrails:

  • Technical guardrails (responsible by design). Ensure that responsible AI principles are followed from inception to launch and build technical guardrails across the range of use cases for a variety of threat vectors (see the sketch after this list).
  • Legal guardrails (regulatory compliance). Ensure codified regulatory compliance across all geographies of operations, as well as intellectual property protection.
  • Process guardrails (AI governance). Continuously scan for risks, perform audits, and scale across the organization from a “siloed set of practices” to the usual ways of working.
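To make the idea of a technical guardrail concrete, the following is a minimal, purely illustrative Python sketch (not Infosys’ actual tooling) of a privacy check applied before and after a model call; the pattern list, the violates_privacy_guardrail helper, and the stand-in model call are all hypothetical assumptions.

import re

# Hypothetical policy rules; a real guardrail layer would cover many more
# threat vectors (prompt injection, unsafe content, IP leakage, etc.).
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US Social Security number
    re.compile(r"\b\d{16}\b"),                  # bare 16-digit card number
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),     # email address
]

def violates_privacy_guardrail(text: str) -> bool:
    """Return True if the text appears to contain personal data."""
    return any(p.search(text) for p in PII_PATTERNS)

def guarded_generate(prompt: str, model_call) -> str:
    """Run a model call only if the prompt passes the privacy guardrail,
    and screen the output the same way before returning it."""
    if violates_privacy_guardrail(prompt):
        return "Request blocked: prompt appears to contain personal data."
    response = model_call(prompt)
    if violates_privacy_guardrail(response):
        return "Response withheld: output appears to contain personal data."
    return response

if __name__ == "__main__":
    # Stand-in for a real model call, for demonstration only
    fake_model = lambda p: f"Echo: {p}"
    print(guarded_generate("Summarize our Q3 revenue drivers.", fake_model))
    print(guarded_generate("Email jane.doe@example.com her SSN 123-45-6789.", fake_model))

In practice, a check of this kind would be only one layer in a broader set of controls spanning the full development life cycle.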

Teach People to Make the Right Decisions 

Like any powerful capability, AI can be used for good or for ill. That said, even well-intentioned AI usage can produce unintended consequences unless it is backed by sound judgment. As the technology is democratized, employees across the organization will start making decisions in the course of developing, modifying, or simply using various AI solutions. Apart from being trained in how to use AI for maximum impact, it is critical that they are also taught to consider the ethical impact of their decisions (e.g., how the choice of words in a prompt influences a model’s output). Google’s People + AI team, which employs human-centric design to eliminate bias in the teams that train AI, offers one example of the impact of employee training. Apple also spreads awareness of the various aspects of AI ethics among its employees through a series of podcasts hosted on its platform.

Emphasize Responsible Innovation

A culture of responsible innovation will help keep the adverse effects of AI, such as deepfakes, propagation of bias, or privacy violations, in check. Apart from sensitizing employees to these issues, organizations should enforce traceability and accountability of AI-related decisions and review AI models and systems regularly to root out bias and other flaws. Responsible AI should be driven from the top, with boards engaging with C-suite leaders, who in turn engage with employees, partners, and other stakeholders on the importance of balancing experimentation and ethics. When we started our own transformation to become “AI first” at Infosys, we established the Responsible AI Council to evolve the existing practices, processes, tooling, and talent required to weave generative AI into our ways of working.

Use AI in the Right Manner and for the Right Reasons

AI’s virtually unlimited range of applications can distract organizations from prioritizing the right use cases, leading to choices whose benefits do not justify their ethical consequences, such as job displacement. When AI is leveraged to automate routine functions, there should be a provision to reskill and redeploy the impacted workers in higher-skilled or more creative roles. One barometer of whether AI is being used for the right reasons is employee interest: Does AI help employees become more productive, work in safer conditions, or enjoy their work? Other considerations when choosing use cases include the strength of the business case and alignment with organizational vision and goals.

Take a World View

Global companies must recognize that different cultures can have different ethical perspectives, reflecting religious and cultural traditions or ingrained beliefs about what is right and wrong. They should also account for differences in AI compliance requirements from one region to another. This may call for rethinking the current approach to AI (and AI ethics), which is largely influenced by a Western perspective.

Leverage Technology to Operationalize AI Ethics

Implementing change of any kind can be extremely challenging for an organization. The good news is that technology can come to a company’s assistance by translating AI ethics, which can be subjective, abstract, or open to interpretation, into technical, process, and legal guardrails with objective and configurable parameters, such as gender- or race-related metrics. These parameters can be used to refine AI ethics policies and to evaluate and improve the ethical performance of AI models.
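As an illustration of how such a configurable parameter might look in practice, here is a minimal Python sketch that computes a demographic parity gap and compares it against a policy threshold; the choice of metric, the FAIRNESS_THRESHOLD value, and the sample data are assumptions for illustration only, not a prescribed standard.

from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    """Difference between the highest and lowest positive-outcome rates
    across groups (0.0 means perfectly equal rates)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical policy parameter: flag models whose gap exceeds 5 percentage points.
FAIRNESS_THRESHOLD = 0.05

if __name__ == "__main__":
    # 1 = favorable decision (e.g., loan approved), grouped by a protected attribute
    decisions = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
    gender    = ["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"]
    gap = demographic_parity_gap(decisions, gender)
    print(f"Demographic parity gap: {gap:.2f}")
    print("PASS" if gap <= FAIRNESS_THRESHOLD else "FLAG FOR REVIEW")

Expressed this way, an ethics policy becomes something that can be measured, monitored, and tuned over time rather than debated in the abstract.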

AI brings great power, and with it comes great responsibility. Implementing AI responsibly cannot be an afterthought. Therefore, all AI initiatives should be responsible by design, which should be a key point of discussion between the board and management. It cannot be a mere compliance obligation; it should be a core value of organizational culture.

Infosys is an NACD partner, providing directors with critical and timely information and perspectives. Infosys is a financial supporter of the NACD.


Mohammed Rafee Tarafdar is the chief technology officer at Infosys, focused on building next-generation platforms, capabilities, and solutions.