Who’s responsible for responsible AI? (It’s a trick question.)
August 20, 2024 / Suzanne Taylor
Short on time? Read the key takeaways:
- Everyone in an organization, from C-suite executives to operational staff, is responsible for ensuring AI is implemented and used ethically, safely, and securely.
- Individuals need to educate themselves on responsible AI practices, and organizations need to develop and implement clear guidelines for AI usage.
- AI should be deployed to enhance human capabilities rather than replace them, aiming for a balance that leads to high user adoption, increased efficiency, and ethical integrity.
- Be an advocate for responsible AI within your organization, actively participating in the creation and adoption of AI policies and guidelines.
Part two of a three-part blog series on responsible AI that focuses on "who." Read part one, which focuses on "why." Coming soon: part three, which focuses on "how."
The push for responsible AI is gaining traction across industries. But who is responsible for responsible AI? Psst: This is a trick question.
Many organizations are focused on operationalizing generative AI to achieve business objectives. In fact, a whopping 73% of organizations expect generative AI to be significantly or extremely valuable, according to a recent Harvard Business Review survey sponsored by Unisys. But with the power to achieve more comes the responsibility to use that power ethically.
So who’s accountable for ensuring AI is implemented and used responsibly? We all are. From those in the C-suite formulating the AI strategy to professionals operationalizing the use of AI, everyone has to act as a guardian of integrity, making sure AI is safe, ethical and secure – across all roles.
It’s no coincidence that people are at the heart of AI success. Organizations seek to deploy AI tools in a way that complements human skills, creativity and intelligence rather than replaces them. Achieving this balance can lead to high user adoption, increased operational efficiency and ethical integrity – all of which require a committed team effort.
Every person working with AI is accountable and bears a professional responsibility to ensure safety, ethical usage and security. You can demonstrate your commitment to responsible AI by taking several actions.
#1: Educate yourself on responsible AI
You and every other professional can champion responsible AI practices within your organizations and have a voice in the ongoing conversation about balancing innovation and creativity with the secure and ethical use of AI.
With any technology, there’s a big learning curve. If the concept of responsible AI is new to you or you want a refresher, seek out resources, including the blog post, “Five reasons to prioritize responsible AI: Your key to success in the age of AI.” A growing number of training webinars and classes on AI can give you insights into the challenges that make responsible AI necessary. In fact, your organization may have added AI to its employee training schedule as usage of tools like ChatGPT has soared. And the National Institute of Standards and Technology (NIST) offers a succinct overview of responsible AI principles.
And if you end up contributing to the development of – or compliance with – your organization’s responsible AI guidelines, consider how to reduce the risks. NIST also develops frameworks like the AI Risk Management Framework (AI RMF) and playbooks like the NIST AI RMF Playbook for inspiration.
#2: Know your organization’s responsible AI guidelines
To effectively govern AI use in your organization, evaluate your security, ethics and privacy policies. Identify areas where these policies already cover AI and where they fall short. Then, either bolster them to address AI-specific concerns or create dedicated responsible AI guidelines. Don’t overlook your ethics policies; update them to include AI considerations in general and responsible AI principles in particular.
Among other things, you can encourage policy creation and adoption by sharing how guidelines can protect your organization from potential liability, lost public credibility and diminished reputation. Trust is critical to strengthening your relationship with those you most want to impress – your customers, partners, employees and prospects.
If you’re interested, reach out to those in charge of this effort in your organization to learn how you can participate in creating guidelines and promoting adoption. Both are important responsibilities, with training and education as core components of greater user adoption. This process is a cross-functional responsibility and involves people in legal, IT, HR, security, individual business units and more. The guidelines you come up with can be a core part of your compliance efforts.
Once your organization has established guidelines, policies and best practices for responsible AI usage, you can show your support. Ways to do this include:
- Studying them carefully and taking training so you understand the concepts thoroughly
- Taking care to follow them and ensure your teams follow them, reaching out to in-house experts if you have any questions
- Spreading the word among coworkers so they understand them (more on training in a future blog post)
#3: Advocate for responsible AI
It’s easier to follow guidelines if you respect them. To contribute to responsible AI, people within your organization must understand the challenge of operationalizing AI in a way that is trustworthy, responsible and enhances human expertise. But motivation is key, and it starts with recognizing the importance of using AI responsibly.
This recognition includes appreciating the synergy between AI and human talent. Together, they drive organizational success and encourage collaborative growth and ethical compliance. The ultimate goal is leveraging AI to bolster human capabilities and to build a foundation of trust and dependability in AI outcomes. It’s a team effort, and when done right, AI functions like a supportive teammate.
Do your part for responsible AI
Responsible AI is critical, and all of us play a pivotal role in making it a reality. You can demonstrate your dedication to this value by acknowledging the importance of responsible AI in today’s world, where AI tools are becoming commonplace. Recent headlines demonstrate that even the largest companies can be sued for copyright or privacy infringement and that, without precautions, employees can inadvertently release proprietary code. Beyond generative AI, bias can creep into healthcare, recruitment and other algorithms.
To gain insights into how global executives view the operationalization of AI, read the Harvard Business Review Analytic Services report, sponsored by Unisys. If you see the value in AI initiatives for your organization, read about how a leading provider of automated test equipment and virtual instrumentation software streamlined operations and gained business agility with the Unisys Core AI framework and explore the AI solutions offered by Unisys.