2020 showed how important it is to model your risk quickly and accurately. Headline-grabbing market moves became the norm during the second quarter of the year as volatility soared.
Having data you can trust is key to building more accurate risk models and gaining enhanced visibility across your firm. Together, these give you the agility you need to react quickly to changing market conditions.
Backtesting is a vital step in the process. As Chetan Joshi, COO of Margin Reform, explains:
“Applying your model to historical data allows you to determine its accuracy. Without this step, you have no way of confirming whether or not your model is sufficiently conservative.
“Sometimes this is best practice, but in some regions there are regulations that specify a need for risk models to be backtested in order to demonstrate that they are fit for purpose.
“For instance, the Uncleared Margin Rules require firms to meet strict validation and ongoing requirements to objectively demonstrate the performance of the model via backtesting. The backtesting methodology must be well documented and include an objective way of linking backtesting exceptions to model performance.”
Here’s why data integrity is key to meeting the higher standards and expectations brought about by the events of 2020.
Backtesting requires a lot of trade/position, pricing and market data (including historical) to generate the P&L for the testing period.
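To illustrate the mechanics, here is a minimal sketch of one common form of backtesting: counting the days on which the realised loss was worse than the model's one-day VaR forecast. All figures and names below are hypothetical, for illustration only.

```python
# Minimal backtesting sketch: count "exceptions", i.e. days where the
# realised loss exceeded the one-day VaR forecast. Hypothetical figures.

def count_exceptions(daily_pnl, var_forecasts):
    """Return the number of backtesting exceptions: days on which the
    loss (negative P&L) was worse than the VaR forecast for that day."""
    return sum(1 for pnl, var in zip(daily_pnl, var_forecasts) if pnl < -var)

# Hypothetical testing period: daily P&L and the corresponding VaR forecasts
daily_pnl = [120.0, -80.0, -310.0, 45.0, -150.0, -505.0, 90.0]
var_forecasts = [300.0, 300.0, 300.0, 320.0, 320.0, 350.0, 350.0]

exceptions = count_exceptions(daily_pnl, var_forecasts)
print(exceptions)  # prints 2: days 3 and 6 breached the forecast
```

Too many exceptions over the testing period suggests the model is not conservative enough; too few can suggest it is overly conservative, tying up more capital than necessary. Either way, the comparison is only meaningful if the underlying P&L data is complete and accurate.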
New tools provide unprecedented control over how you process this data. Old barriers, such as inconsistent formats or the sheer size and shape of the data, no longer apply. It’s never been easier to transform your risk data to create P&L you can trust.
The traditional way of handling a variety of risk data is reactive. Every time a new data format becomes popular you have to adapt your existing processes or create new ones.
With legacy on-premises systems that run on fixed schemas, this means researching, developing and testing a new process. These processes are designed specifically to handle data in the new format – and only in the new format.
If that’s too time-consuming or costly, manual processes such as those involving spreadsheets are often used to plug the gaps.
But tools now exist that can ingest data in any format without the need for complex extract, transform and load (ETL) processes. Inflexible legacy technology can be replaced with something faster and more agile. Manual processes can be automated. The challenge of multiple data formats is removed.
Having confidence in your data means you can have confidence in the accuracy of your backtested risk models.
Risk data should be reconciled back to the source to ensure accuracy. Without doing this, you run the risk of using incorrect or incomplete data when testing your models. This undermines the validity of your backtests.
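A reconciliation of this kind boils down to comparing records in the risk system against the source, keyed on a shared identifier, and flagging the breaks. The sketch below shows the idea with plain Python dictionaries; all trade IDs and figures are hypothetical.

```python
# Minimal reconciliation sketch: compare risk-system records against the
# source system, keyed by trade ID. Values here are (quantity, price)
# tuples; all identifiers and figures are hypothetical.

def reconcile(source, risk_system):
    """Return a list of breaks: trades missing from either side,
    or present on both sides but with mismatched values."""
    breaks = []
    for trade_id, src_row in source.items():
        if trade_id not in risk_system:
            breaks.append((trade_id, "missing from risk system"))
        elif risk_system[trade_id] != src_row:
            breaks.append((trade_id, "value mismatch"))
    for trade_id in risk_system:
        if trade_id not in source:
            breaks.append((trade_id, "not in source"))
    return breaks

source = {"T1001": (500, 101.25), "T1002": (-200, 99.80), "T1003": (750, 98.10)}
risk = {"T1001": (500, 101.25), "T1002": (-200, 99.75)}

for trade_id, reason in reconcile(source, risk):
    print(trade_id, reason)
# T1002 value mismatch
# T1003 missing from risk system
```

Real reconciliations must also cope with differing formats, tolerances on numeric fields and fuzzy matching of identifiers, which is where purpose-built tooling earns its keep; the principle, however, is the same.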
But even when data is being reconciled, the traditional tools used to do so often complicate, rather than expedite, the process. Again, schema-heavy legacy technology and labour-intensive manual processes have been the main ways of handling this.
Thanks to the power of the cloud, no-code solutions and machine learning, your teams are empowered to take control of your data. You can ditch expensive, complex and inflexible systems, and bring all of your reconciliations onto one system. This eliminates the need for the error-prone manual processes that are used to plug the gaps between systems.
When your data is accurate, so are your backtests. Exceptions caused by bad data can undermine the results of your backtest. In the worst-case scenario, they could cause you to reject a working model or accept a faulty one.
Starting with a single source of truth for your data means you can avoid wasting time investigating false breaks between records that actually match. New tools give you a granular view of your data, pinpointing genuine breaks. Streamlined workflows make it easy to assign exceptions to the right person or team for resolution.
This helps you quickly and accurately identify common problems, so you spend less time resolving simple breaks and more time fixing true exceptions. It gives you the chance to focus on what’s important.
Depending on where you operate, backtesting is either a regulatory requirement or simply best practice.
Either way, having confidence in the accuracy of your risk models drives investment decisions and helps you meet reporting requirements. It can also help shield you from the financial consequences of regulatory fines, or the burden of larger mandated capital buffers.
A trend towards flexible new tools allows you to embrace a new approach to managing your risk data.
No-code solutions empower you to quickly build processes, without the need for lengthy development work.
Machine learning can handle repetitive tasks, freeing up more time to focus on your top priorities when testing your models.
And cloud-based solutions unlock speed and scalability, enabling you to handle increasingly complex data in larger volumes.
Combined, this allows you to build risk models, test them, and create reports faster and more accurately. This helps you meet regulatory requirements, stay on top of changing market conditions, and protect your firm. Find out more with Duco’s Risk Data Integrity guide.