Interest in ESG data is on the rise, and many vendors are producing data sets, each with its own view and depth, built around the ESG pillars. Firms adding ESG to their data assets have begun single- or multi-sourcing data, which creates a significant challenge in normalizing attributes and values across different vendor feeds. But how can they best manage this data when the usual notions of alignment or normalization don’t really apply? Creating gold copies and versions of truth from disparate versions of data for common records, a GoldenSource staple, doesn’t work in this case, but firms still need their ESG data to deliver four things: transparency, reliability, materiality and verifiability.
To identify how best to manage the data, firms must understand how, and by whom, the data is being used.
How is the data being used?
This data is used within the organization for two main purposes:
- Reporting
- Investment decision making
Reporting requires the data to answer qualitative questions – questions with simple yes or no answers. Firms want to see, for example, whether a certain company is adhering to the statements it makes about its governance and environmental policies, or to commitments it has made around climate.
Investment decision making seeks to answer quantitative questions, such as how a company arrives at a certain score or data point, and requires raw data to do so. From here, two important concepts come into play:
The materiality of a data point identifies whether it has any investment impact, while validity identifies how reliable the produced point is.
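As a rough illustration of these two concepts, the sketch below tags each data point with materiality and validity and filters out points that fall short on either. The field names, thresholds and scoring scale are all hypothetical; real assessments are far richer than a single pair of numbers.

```python
from dataclasses import dataclass

# Illustrative model only: real materiality and validity assessments
# are qualitative, vendor-specific and far more nuanced.
@dataclass
class ESGDataPoint:
    name: str
    value: float
    materiality: float  # 0-1: estimated investment impact of this point
    validity: float     # 0-1: how reliable the produced point is

def usable_for_decisions(point: ESGDataPoint,
                         min_materiality: float = 0.5,
                         min_validity: float = 0.7) -> bool:
    """Keep only points that are both material and reliable enough."""
    return point.materiality >= min_materiality and point.validity >= min_validity

points = [
    ESGDataPoint("scope1_emissions", 120.5, materiality=0.8, validity=0.9),
    ESGDataPoint("office_recycling", 0.4, materiality=0.1, validity=0.95),
]
print([p.name for p in points if usable_for_decisions(p)])  # ['scope1_emissions']
```

A point that is highly reliable but immaterial (like the recycling figure above) is filtered out just as readily as an unreliable one, which is the crux of treating the two concepts separately.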
Who is using this data?
ESG data comes under a lot of interrogation, with different groups within a firm assessing whether the data is reliable enough and whether it offers the depth they need. What they are trying to produce are transparent, comparable outcomes – comparing ESG-based investment strategies at the issuer and portfolio level, assessing ROI, and then driving decisions on that basis.
Data governance and technology teams own the task of setting up scalable, reliable pipelines for ingestion, normalization and distribution.
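The normalization step these teams own can be sketched as a mapping from each vendor's attribute names onto a canonical schema. The vendor names and field names below are invented for illustration; real feeds, schemas and transformations will differ.

```python
from typing import Any

# Hypothetical vendor-to-canonical field mappings; real feeds differ.
VENDOR_FIELD_MAP = {
    "vendor_a": {"co2_scope1": "scope1_emissions", "esg_total": "esg_score"},
    "vendor_b": {"Scope1CO2": "scope1_emissions", "OverallESG": "esg_score"},
}

def normalize_record(vendor: str, record: dict[str, Any]) -> dict[str, Any]:
    """Map a vendor-specific record onto the canonical schema.

    Fields with no canonical mapping are kept under a vendor-qualified
    key so no raw data is lost for downstream quant analysis.
    """
    mapping = VENDOR_FIELD_MAP[vendor]
    normalized: dict[str, Any] = {}
    for field, value in record.items():
        canonical = mapping.get(field)
        if canonical:
            normalized[canonical] = value
        else:
            normalized[f"{vendor}.{field}"] = value  # preserve unmapped raw fields
    return normalized

print(normalize_record("vendor_a",
                       {"co2_scope1": 120.5, "esg_total": 71, "sector": "Utilities"}))
# {'scope1_emissions': 120.5, 'esg_score': 71, 'vendor_a.sector': 'Utilities'}
```

Preserving unmapped raw fields rather than dropping them matters here, because the quant teams described below need the raw points, not just the harmonized view.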
Once the data is consumed, internal quant analytics teams will want access to the raw data points to assess materiality and validity. From there, they will continue with their own algorithmic extrapolation and validation. This work is research-oriented and will drive investment decisions.
Finally, the front office will need the data for reporting. They are the face of the data, providing transparency around scores and investment allocation so investors can hold them accountable. This is becoming more important as we sit on the cusp of a $35 trillion transfer of wealth from the baby boomer generation to Gen X, millennials and other younger generations, a cohort in which one-third of millennials often or exclusively use investments that take ESG factors into account.
How do I manage my ESG data?
The infrastructure needed to address these requirements is similar to conventional data management, but some of the workflows that run validations on the data don’t apply in the same ways because of the nature of the data. A reliable service must be created so data can be captured at all the levels relevant to the investments. At the entity level, for example, the data is robust, but at the listing level it thins out, and firms miss data points simply because they were never captured in the vendor data sets.
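One common way to handle the thinner listing-level coverage is to fall back to the issuing entity's data when a listing-level point is missing, while flagging where each value came from. The identifiers and data below are invented for illustration, and a production service would handle many more levels and edge cases.

```python
# Hypothetical data: listing-level ESG coverage is sparser than entity-level.
ENTITY_DATA = {"ACME_CORP": {"esg_score": 71, "scope1_emissions": 120.5}}
LISTING_DATA = {"ACME_CORP.XNYS": {"esg_score": 70}}  # thinner listing-level feed
LISTING_TO_ENTITY = {"ACME_CORP.XNYS": "ACME_CORP"}

def lookup(listing_id: str, field: str):
    """Prefer listing-level data; fall back to entity level, flagging the source."""
    listing = LISTING_DATA.get(listing_id, {})
    if field in listing:
        return listing[field], "listing"
    entity = ENTITY_DATA.get(LISTING_TO_ENTITY.get(listing_id, ""), {})
    if field in entity:
        return entity[field], "entity"  # inherited, not observed at listing level
    return None, "missing"

print(lookup("ACME_CORP.XNYS", "esg_score"))         # (70, 'listing')
print(lookup("ACME_CORP.XNYS", "scope1_emissions"))  # (120.5, 'entity')
```

Returning the source alongside the value keeps the fallback transparent, so downstream users can see which points were genuinely observed at the listing level and which were inherited.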
Fundamentally these are data-driven issues. Scalable infrastructure that creates transparency, reliability, materiality and verifiability, while optimizing the technology spend on managing such pipelines, is essential.