Everyone and their mother is talking about AI these days. From Congress to conferences to almost every start-up, AI is here and here to stay. There is tremendous power in these tools, and we need to learn to use them in ways that advance society. The fears I hear and read are real: AI will displace jobs; it can be used to destabilize political systems; it could deepen inequities and concentrate enormous power in the hands of a few. We have a real window right now to harness this power, ensure the benefits accrue to all rather than a few, and create standards for responsibility. This is where AI and ESG collide.
Like AI, ESG is also among the buzziest of acronyms these days. Like AI, ESG is also often misunderstood.
ESG is essentially business intelligence data, with less emphasis on financial data. It is a way to organize and understand data that can improve decision making, create opportunities, and reduce risk. ESG metrics can include things like a company's employee safety rates, whether it pays living wages, how it manages its fleet of vehicles, and its impact on the surrounding communities.
AI can inform ESG in a few different ways. First, ESG is the lens through which companies should manage, or govern, the use of AI in their products and services. Does a company have a policy on how its employees can or can't use AI for their work? I, for one, use AI every day and it speeds up my work; however, I also know that it is imperfect and that there is no substitute for my own human review and analysis of the quality of its output. If a company is embedding AI into its products and services, how is it mitigating bias and ensuring fairness? Transparency and accountability are crucial components of governing AI.
Second, AI could be used to enhance the quality and rigor of data in the ESG space. From computer vision tracking methane leaks in satellite imagery to natural language processing "coding" tremendous amounts of unstructured qualitative data, the applications for deepening our understanding of a company's environmental, social, and governance impacts are endless.
I think the real question people are grappling with today is how to integrate AI in a way that is ethical and sustainable, given the lack of regulatory frameworks and the limited understanding many have of the opportunities and threats. This year we have seen a few important efforts to address these systemic issues, including the open letter "Pause Giant AI Experiments," signed by hundreds of leading researchers, which calls for time to establish standards of practice, guardrails, and regulations to ensure AI is developed responsibly.
In the world of social research, this reminds me of the development of Institutional Review Boards (IRBs) for the protection of human subjects. IRBs emerged in academia because researchers had, on numerous occasions, committed atrocities with the justification that the learnings from the research were worth endangering human subjects. Recall the Tuskegee Study, in which doctors withheld treatment from Black men with syphilis and watched them die of the disease. There will always be people who push the limits of ethics and responsibility; a self-governing body for the responsible use of AI can therefore set standards for all to follow.
One of the resources shared with me recently is The AI Incident Database, dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. The open-source project is “managed in a participatory manner,” with a board of directors, submissions from the public, and a team of reviewers guided by a set of definitions and decision criteria. So far, it has cataloged 500+ incidents and 2,500+ reports. This type of resource is needed to provide transparency and accountability to corporate actors.
While AI and ESG align in the business arena, all stakeholders, from governments to nonprofits to individuals, have a role to play in shaping the coming transitions. In both, we need transparency.
We have seen seismic economic shifts like this before, such as industrialization. With it came demographic shifts: people moved into cities because that was where the jobs were. Certain jobs disappeared and others were retooled.
The rapid pace of development we are seeing now is analogous. We can predict what some of those challenges and shifts will be and put tools in place so that the most vulnerable do not suffer even more as a result of this transition. If we do not learn from past mistakes, we are destined to repeat them. We must also study the failures we are already seeing, because if we wait too long to put up guardrails, AI will fall into the wrong hands and be used to heighten the very problems ESG is trying to address.