ESG fund managers who have turned to big tech as a low-carbon, high-return bet are increasingly worried about experimentation with artificial intelligence in the sector.
Exposure to AI now represents a “near-term risk for investors,” said Marcel Stotzel, a London-based portfolio manager at Fidelity International.
Stotzel said he is “worried about an AI backlash,” which he describes as a situation in which something unexpected triggers a significant market decline. “It only takes one incident for something to go wrong and the material impact could be significant,” he said.
Among the examples Stotzel says are worth worrying about are fighter jets with self-learning AI systems. Fidelity is now among the fund managers talking to the companies developing such technologies to discuss safety features, such as a “kill switch” that can be activated if the world ever wakes up to “AI systems going rogue in dramatic ways,” he said.
The ESG investing industry may be more exposed to such risks than most, having shifted heavily into technology. Funds registered as having genuine environmental, social and governance objectives hold more tech assets than any other sector, according to Bloomberg Intelligence. And the world’s largest ESG exchange-traded fund is dominated by technology, led by Apple Inc., Microsoft Corp., Amazon.com Inc. and Nvidia Corp.
These companies are now at the forefront of AI development. Tensions over where the industry should go – and how quickly it should move – have recently burst into public view. This month, OpenAI, the company that shook the world a year ago with the launch of ChatGPT, fired and then quickly rehired its chief executive officer, Sam Altman, triggering a frenzy of speculation.
The episode exposed internal disagreements over OpenAI’s ambitions in light of the technology’s potential societal risks. Altman’s reinstatement puts the company on track to continue its growth plans, including faster commercialization of AI.
Apple has said it intends to move carefully in the AI space, with CEO Tim Cook saying in May that there were “a number of issues that need to be addressed” with the technology. And companies including Microsoft, Amazon, Alphabet Inc. and Meta Platforms Inc. have agreed to adopt voluntary safeguards to minimize abuse of and bias within AI.
Stotzel said he was less concerned about the risks associated with small-scale AI startups than those lurking at global tech giants. “The biggest companies could do the most damage,” he said.
Other investors share these concerns. The New York City Employees’ Retirement System, one of the largest U.S. public pension plans, said it is “actively monitoring” how portfolio companies use AI, according to a spokeswoman for the $248 billion plan. Generation Investment Management, the firm co-founded by former US Vice President Al Gore, told clients it is ramping up research into generative AI and talking daily with the companies it invests in about the risks – as well as the opportunities – the technology represents.
And Norway’s $1.4 trillion sovereign wealth fund has said that boards and companies must take seriously the “serious and unexplored” risks posed by AI.
When OpenAI launched ChatGPT last November, it quickly became the fastest-growing Internet application in history, reaching 13 million daily users by January, according to estimates by analysts at UBS Group AG. Against this backdrop, tech giants developing or supporting similar technologies have seen their stock prices soar this year.
But the lack of regulation, or of meaningful historical data on how AI assets might perform over time, is concerning, according to Crystal Geng, an ESG analyst at BNP Paribas Asset Management in Hong Kong.
“We don’t have the tools or methodology to quantify the risk,” she said. One way BNP tries to estimate the potential social impact of AI is by asking portfolio companies how many jobs could arise from the emergence of technologies like ChatGPT. “I haven’t seen any company that can give me a useful number,” Geng said.
Jonas Kron, director of advocacy at Boston-based Trillium Asset Management, who helped push Apple and Facebook parent Meta to include privacy in their board charters, has pressed tech companies to better explain their work in AI. Earlier this year, Trillium filed a shareholder resolution with Google parent Alphabet, asking it to provide more details about its AI algorithms.
Kron said AI represents a governance risk for investors and noted that even insiders, including OpenAI’s Altman, have urged lawmakers to impose regulations.
The concern is that, if left unchecked, AI may reinforce discrimination in areas such as healthcare. And beyond AI’s potential to amplify racial and gender biases, there is concern about its propensity to enable misuse of personal data.
Meanwhile, the number of AI-related incidents and controversies has increased 26-fold since 2012, according to a database that tracks misuse of technology.
Investors have filed resolutions with Microsoft, Apple and Google parent Alphabet demanding greater transparency on AI algorithms. The AFL-CIO Equity Index Fund, which oversees $12 billion in union pensions, has asked companies including Netflix Inc. and Walt Disney Co. to indicate whether they have adopted guidelines to protect workers, customers and the public from the harms of AI.
Areas of concern include discrimination or bias against employees, misinformation in political elections and mass layoffs resulting from automation, said Carin Zelenko, director of capital strategies at AFL-CIO in Washington. She added that concerns among Hollywood actors and writers about AI played a role in their high-profile strikes this year.
“It just raised awareness of how important this problem is in probably every business,” she said.