Striking the Balance Between Innovation and Responsibility

Since the phenomenal debut of ChatGPT late last year, innovation in the generative AI space has accelerated at breakneck speed, transforming the AI landscape in real time.

The benefits generative AI brings to businesses and communities are clear, and immense. Used as a “co-pilot” to human intelligence, it could turbo-charge human creativity and productivity. GitHub Copilot, for instance, turns natural language into coding suggestions, a game changer in software development; a simple illustration of the idea follows this paragraph. However, the risks generative AI creates are equally significant, from misinformation and bias to intellectual property violations and privacy leaks. Several companies, including Apple, Amazon, Deutsche Bank and Goldman Sachs, have banned the internal use of ChatGPT and similar products this year over privacy and security concerns.
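
As a purely illustrative sketch (not actual GitHub Copilot output, which varies with the model and surrounding context), a developer might write a plain-English comment and receive a suggestion along these lines; the prompt comment and function below are hypothetical:

```python
# Prompt a developer might type as a comment:
# "Return the n most common words in a text, ignoring case."

from collections import Counter


def most_common_words(text: str, n: int) -> list[tuple[str, int]]:
    # A code assistant could plausibly suggest a completion like this.
    words = text.lower().split()
    return Counter(words).most_common(n)


print(most_common_words("the cat and the hat and the bat", 2))
# [('the', 3), ('and', 2)]
```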

As the pace of innovation continues, we stand at an inflection point that will determine how AI is developed and used.

Prioritising Objective Verification

We cannot ignore the potential of generative AI to add value to our ecosystems and catalyse the development of talent, products and solutions. The rapid evolution in its power and capabilities can greatly accelerate digital transformation. Already, it is disrupting industries from content creation and product design to logistics, education and health. A PwC study estimates that AI will add US$15.7 trillion to global GDP by 2030, driven by the proliferation of data and improved algorithms.

As a global investment firm, Temasek views AI as a powerful enabler and an effective productivity tool. Over the last few years, we have invested in and built big data and AI infrastructure. Most recently, we rolled out an internal, enterprise version of ChatGPT firm-wide, and are committed to building our capabilities internally, as well as catalysing human-centred and responsible AI products and solutions across the technology stack. 

However, it is critical that innovation does not outpace accountability. 

There have been worrying instances of misinformation, or “confabulations”, presented logically and extremely convincingly. As AI becomes more powerful and autonomous, there are also concerns about the ethical implications of its use, from bias in AI algorithms to job displacement.

Equally concerning is the potential for breaches of privacy, confidentiality and the permitted uses of data. With the right query, it is technically possible to extract individual data from the large data sets that generative AI models are trained on. Cybersecurity, too, will become more challenging, with far more sophisticated phishing scams, more effective password prediction and malware that can evade detection. Users will also need to understand how risks and liabilities should be allocated with any vendors of these AI tools.

There is a raft of other concerns, and as more people use, and rely on, generative AI, the risks will increase in tandem. Consider that ChatGPT is the fastest-growing consumer app in history, reaching more than 100 million active users within two months of its launch. By contrast, it took TikTok nine months after its global launch to reach 100 million users.

With more pervasive adoption, it is imperative that there is an objective and verifiable way to validate the performance and test the trustworthiness of the AI systems being developed. 

Driving Responsible Innovation

While many governments and organisations have assumed responsibility for creating guardrails, AI creators and developers, too, must ensure that they are developing an ethical and responsible product, one that can stand up to scrutiny when assessed, verified and checked against known true sources. The problem is that we do not yet have the tools to build this layer of data and oversight into generative AI models.

We believe that voluntary self-assessment is an important first step. To that end, we support the AI Verify Foundation, which aims to bring AI owners, solution providers, users, and policymakers together in an open-source community to build trustworthy AI. Importantly, it will support the development and use of AI Verify, an AI validation system launched by the Infocomm Media Development Authority (IMDA) as a minimum viable product (MVP) last year, and rolled out earlier this year. 

AI Verify comprises two components: a governance testing framework aligned with internationally accepted AI ethics principles and guidelines, and a software toolkit developed in consultation with companies. Together, these provide standardised and objective tests through which AI system owners and other third parties can verify the performance of their systems against AI ethics principles such as safety, fairness and robustness; a simplified sketch of one such test follows this paragraph.
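
To make the idea concrete, here is a minimal sketch of the kind of fairness check such a toolkit automates. This is not AI Verify's actual API; the demographic parity metric shown is simply one widely used fairness measure, and the data, column names and model outputs below are hypothetical.

```python
import pandas as pd


def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Gap between the highest and lowest positive-prediction rates
    across the groups in `group_col`; 0.0 means perfectly even rates."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())


# Hypothetical outputs from a loan-approval model.
preds = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M"],
    "approved": [1,   0,   1,   1,   0,   1],
})

print(f"Demographic parity gap: {demographic_parity_gap(preds, 'gender', 'approved'):.2f}")
# 0.67 here, since 100% of M applicants but only 33% of F applicants are approved.
```

A testing toolkit would typically run many such metrics across protected attributes and report them alongside robustness and safety checks, rather than relying on any single measure.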

It can also help red-team AI systems and uncover potential biases that the developer may have overlooked. As momentum for greater governance of AI systems picks up, it will provide AI system owners with an independent evaluation that they can share with their stakeholders, demonstrating transparency and building trust.

The Power of the Collective

Since the middle of this year, AI Verify has been available to the global open-source community. The MVP for last year’s international pilot attracted the interest of over 50 local and multinational companies, with users acknowledging the utility of its robust testing framework checklist for conducting self-assessments of their AI systems.

It is hoped that the uptake of AI Verify will foster creativity, collaboration, and continuous improvement, while also demonstrating transparency. This is key in contributing to the development of international standards and industry benchmarks.

While governments can encourage ethical AI through regulation, a voluntary governance framework with robust community involvement inspires trust and confidence, making such governance sustainable for the long term.

Risk management strategies for AI are trying to keep pace with both innovation and the excitement around this technology. However, the responsible implementation of generative AI lies in the collective efforts of all stakeholders. It will require all of us – academia, industry and policymakers – working collaboratively to address the challenges, and to harness AI’s full potential. 

By Dr Michael Zeller, Head, AI Strategy & Solutions, Temasek
