Key Takeaways:
– Companies anticipate significant returns from generative AI (GenAI) investments: a Google Cloud study found that 86% of GenAI adopters reported annual company revenue growth of 6% or more.
– However, ethical concerns like governance, security, privacy, and regulations pose challenges to broader GenAI adoption.
– Thomson Reuters, an early leader in responsible AI, uses a centralized, standardized approach to ensure ethical AI through principles, policies, and a constant human presence in the loop.
– Human involvement at every stage – design, development, deployment, and post-deployment monitoring – supports responsible AI integration and boosts worker morale.
The global generative AI (GenAI) rollout continues to unfold, bringing companies face-to-face with ethical challenges and governance issues. While companies invest trillions in GenAI, the question of how to implement it while meeting ethical, governance, security, and privacy standards tempers any celebration of GenAI success.
The Boom of GenAI
OpenAI’s launch of ChatGPT sparked a significant leap for generative AI, with companies collectively investing trillions in the hope of high returns. Despite a slight recent pullback, many organizations are banking on considerable return on investment (ROI). For instance, a recent Google Cloud study revealed that 86% of GenAI adopters are seeing annual company revenue growth of 6% or more.
The Ethical Dilemma
Despite the technology’s promise, the primary deterrent to broader GenAI adoption is the knot of issues around ethics, governance, security, privacy, and regulation. GenAI can be implemented effectively; the harder questions are whether it should be, and how to do so while meeting standards for ethics, governance, security, and privacy. On top of that, companies must navigate new regulations such as the EU AI Act.
Thomson Reuters Pioneering Responsible AI
Thomson Reuters offers insight into how such dilemmas can be resolved. Carter Cousineau, vice president of data and model governance at Thomson Reuters, has been leading the company’s responsible AI practice. The company’s objective was to standardize and centralize the building of ethical AI models, starting with a set of principles for AI and data that were implemented through a range of policies and procedures. This groundwork left the company well prepared for generative AI when ChatGPT arrived.
Thomson Reuters employed data impact assessments (DIAs) to monitor potential AI risk. Cousineau’s team conducted thorough risk analyses of proposed AI use cases, then applied control measures accordingly. The firm also built internal tools, including a centralized model repository and a Responsible AI Hub, to maintain governance and manage the associated risks. The most influential means of ensuring responsible AI, however, was keeping humans in the loop.
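To make the idea of a centralized, risk-tiered model repository concrete, here is a minimal sketch in Python. Every name, field, and control measure below is an illustrative assumption, not Thomson Reuters’ actual tooling or DIA schema.

```python
# Hypothetical sketch of a centralized model registry with DIA-style
# risk tiering. Fields and controls are assumptions for illustration.
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class ModelRecord:
    """One entry in a centralized model repository."""
    model_id: str
    use_case: str                       # proposed AI use case under review
    risk_level: RiskLevel               # outcome of the risk analysis
    controls: list[str] = field(default_factory=list)  # mitigations applied
    human_review_required: bool = True  # humans stay in the loop by default


def apply_controls(record: ModelRecord) -> ModelRecord:
    """Attach illustrative control measures based on the assessed risk."""
    if record.risk_level is RiskLevel.HIGH:
        record.controls += ["expert vetting of outputs", "restricted deployment"]
    elif record.risk_level is RiskLevel.MEDIUM:
        record.controls.append("sampled human review of outputs")
    return record


# The "centralized repository": a single place to look up any model's status.
registry: dict[str, ModelRecord] = {}
entry = apply_controls(
    ModelRecord("contract-summarizer-v1", "legal document summarization", RiskLevel.HIGH)
)
registry[entry.model_id] = entry
```

The design point such a registry captures is that governance decisions live in one queryable place rather than in scattered documents, so every deployed model carries its risk level and controls with it.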
The Power of Human Oversight
Thomson Reuters advocates a multi-pronged approach that ensures human intervention at every stage of AI development. From guiding clients’ use of its products to training internal teams, human interaction was pivotal. In post-deployment monitoring, human involvement was crucial for tracking model performance and vetting the output of AI systems. This human-in-the-loop approach not only improves the accuracy of AI systems but also reassures workers of their importance to the organization.
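The post-deployment vetting step can be pictured as a simple review gate: outputs the model is less confident about are escalated to a human reviewer instead of being released automatically. The threshold, names, and queue below are assumptions for illustration, not a description of Thomson Reuters’ systems.

```python
# Minimal human-in-the-loop sketch: low-confidence outputs are routed to a
# human review queue before release. All names here are hypothetical.
from typing import NamedTuple, Optional


class Prediction(NamedTuple):
    text: str
    confidence: float  # model's self-reported confidence in [0, 1]


review_queue: list[Prediction] = []  # items awaiting human vetting


def gate(pred: Prediction, threshold: float = 0.85) -> Optional[str]:
    """Release high-confidence output; escalate the rest to a human."""
    if pred.confidence >= threshold:
        return pred.text           # auto-approved (still logged for audit)
    review_queue.append(pred)      # a human vets this before release
    return None


result = gate(Prediction("Summary: ...", confidence=0.62))
if result is None:
    print(f"{len(review_queue)} output(s) awaiting human review")
```

In practice the threshold would be tuned per use case, and the reviewer’s verdicts could feed back into model evaluation, which is what makes the loop a loop.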
Engaging the Human Element
Empowering human involvement in AI does more than improve system performance through greater accuracy or fewer hallucinations. It fosters engagement and reassures personnel of their critical role within the organization. Business leaders now need to strike a balance between human intervention and AI to maximize the advantages, anticipate the limitations, and build readiness for a human-in-the-loop approach.