Working closely with our clients on their generative AI journeys, we ensure that a robust governance model is adopted. As we have observed since the release of ChatGPT, there are thousands of ways to apply generative AI and foundation models to maximise efficiency and drive competitive advantage. However, an enterprise-wide strategy must account for every variant of AI and every associated technology the company intends to use, not only generative AI and large language models.
ChatGPT raises essential questions about the responsible use of AI. The speed at which the technology is evolving and being adopted requires companies to pay close attention to the legal, ethical and reputational risks they may be incurring. It is therefore critical that generative AI technologies, including ChatGPT, are responsible and compliant by design, and that models and applications do not create unacceptable risks for the business.
Since its inception in 2020, DataClue has set out a responsible-use-of-technology policy, including the responsible use of AI, in its Code of Business Ethics. Responsible AI is the practice of designing, building, and deploying AI according to clear principles that empower businesses, respect people, and benefit society, allowing companies to engender trust in AI and to adopt and scale it with confidence.
Therefore, it is imperative that all AI systems be trained on a diverse and inclusive set of inputs so that they reflect the broader business and societal norms of responsibility, fairness, and transparency. When AI is designed and implemented within an ethical framework, it accelerates the potential for responsible, collaborative intelligence, where human ingenuity converges with intelligent technology. In practice, this means every organisation should be able to answer questions such as the following:
How does the business intend to protect its own IP? And how will it prevent the inadvertent breach of third-party copyright when using pre-trained foundation models? (A minimal screening sketch follows this list of questions.)
How will upcoming laws such as the EU AI Act be reflected in the way data is handled, processed, protected, secured and used?
Is the company using or creating tools that must factor in anti-discrimination or anti-bias considerations?
What health and safety mechanisms will need to be put in place before a generative AI-based product is taken to market?
What level of transparency should be provided to consumers and employees? How can the business ensure the accuracy of generative AI outputs and maintain user confidence and the integrity of its data? (A simple groundedness check, sketched after these questions, illustrates one starting point.)
How will verification methods be strengthened when establishing proof of personhood depends on voice or facial recognition? And what will be the consequences of their misuse?
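On the copyright question above, one starting point teams sometimes prototype is a release gate that screens generated text for verbatim overlap with a registry of known protected content. The Python sketch below is illustrative only: the n-gram size, the 5% threshold, and the protected_registry contents are hypothetical placeholders, not a recommended policy or a DataClue product.

```python
# Illustrative sketch only: screen model output for verbatim overlap with
# a registry of protected text before release. All names and thresholds
# here are hypothetical assumptions for the purpose of the example.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word-level n-grams in `text`."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate: str, protected: str, n: int = 8) -> float:
    """Fraction of the candidate's n-grams appearing verbatim in the
    protected text; a crude proxy for potential copying."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(protected, n)) / len(cand)

def release_gate(candidate: str, registry: list[str],
                 threshold: float = 0.05) -> bool:
    """Block release if any protected document overlaps above the threshold."""
    return all(overlap_ratio(candidate, doc) < threshold for doc in registry)

if __name__ == "__main__":
    protected_registry = ["example proprietary passage kept on file ..."]  # hypothetical
    draft = "model-generated text to be screened before publication"
    print("cleared for release:", release_gate(draft, protected_registry))
```

A gate like this would sit alongside, not replace, legal review; it only catches near-verbatim reuse, which is why the governance question above still needs a policy answer.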
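On the accuracy question, a common first safeguard is a "groundedness" check: flagging generated sentences that share little lexical overlap with the source documents they are meant to reflect, and routing them to human review. The sketch below is again a hypothetical illustration; the tokenisation and the 0.5 overlap threshold are assumptions, and production systems would use far more robust semantic methods.

```python
# Illustrative sketch only: flag generated sentences with weak lexical
# overlap against their source documents for human review. Thresholds
# and tokenisation are hypothetical assumptions.

import re

def tokens(text: str) -> set[str]:
    """Lowercased word tokens of `text`."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def grounded(sentence: str, sources: list[str],
             min_overlap: float = 0.5) -> bool:
    """A sentence counts as grounded if at least `min_overlap` of its
    words appear in some source document."""
    sent = tokens(sentence)
    if not sent:
        return True
    return any(len(sent & tokens(src)) / len(sent) >= min_overlap
               for src in sources)

def review_queue(generated: str, sources: list[str]) -> list[str]:
    """Return the generated sentences that fail the check."""
    sentences = re.split(r"(?<=[.!?])\s+", generated.strip())
    return [s for s in sentences if s and not grounded(s, sources)]
```

The point of such a check is not to guarantee accuracy but to make human oversight systematic, which is the transparency commitment the question above asks the business to define.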