Frequently Asked Questions
Take a look through our latest list of Frequently Asked Questions.
- The calculation of taxes.
- Technical accounting.
- First Notice of Loss/Validation of policies for claims.
- The requirement for Lloyd’s to create five returns for regulators.
The next phase of the CDR will expand it by class of business and geography, which will require changes to increase its scope. Other changes may also be needed, for example to reflect changes to Lloyd’s licences, or to tax and regulatory requirements. We will communicate clearly which iteration of the CDR contains all the data items required to deliver Blueprint Two.
Further information will follow as to when the systems and processes that will enable the Future at Lloyd’s will become live, and the point at which market participants need to adopt the CDR.
As we did in the development of the first iteration of the CDR, we will be sharing the data model with the Beta Group to gather and incorporate initial input from market participants. We will publish an updated draft version when we share further information in September on the development of the baseline Open Market North American Property insurance CDR.
Market participants and suppliers will regularly be invited to contribute to and attend review sessions, and will have access to the model as it develops, to support adoption.
We are working with ACORD to define the certification framework and liaising with market participants to agree an approach. Where data provided to us is not in the standardised format but can easily be transformed, we will facilitate this; however, it will be the responsibility of the data provider to validate any data transformations. Data providers will also be responsible for validating any enriched data before it is used for further downstream processing. The process for validating data is in the design phase.
At each stage in the data journey where the CDR is acquired or transformed, we will embed automated data quality checks to ensure that the data driving the end-to-end process is acceptable and complete. We will work with the market to trial the process using test data, and ensure these checks are at the appropriate level to generate good quality data while minimising the burden on data suppliers.
An optional pre-submission validation tool will be built to allow users to confirm data quality prior to submission. Any data quality issues identified will be raised as warnings to the user, giving them the opportunity to remediate any issues prior to submission.
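As a minimal sketch of how such a pre-submission check could surface warnings rather than rejections, the snippet below validates a candidate record before submission. The field names, rules and record shape are illustrative assumptions, not the actual tool's design.

```python
# Hypothetical pre-submission validation sketch: field names and rules
# are invented for illustration and do not reflect the real CDR schema.

REQUIRED_FIELDS = ["insured_name", "inception_date", "country", "insured_item"]

def validate_record(record: dict) -> list:
    """Return a list of warnings; an empty list means no issues were found.

    Warnings give the submitter a chance to remediate before submission;
    they do not block the submission itself.
    """
    warnings = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            warnings.append(f"Missing or empty field: {field}")
    # Example cross-field rule (assumption): US risks need a state for
    # downstream tax and regulatory processing.
    if record.get("country") == "US" and not record.get("state"):
        warnings.append("US risks require a state for downstream processing")
    return warnings

record = {"insured_name": "Example Co", "inception_date": "2024-01-01",
          "country": "US", "insured_item": "Property"}
print(validate_record(record))
```

Returning warnings as data, rather than raising errors, matches the stated intent that issues are flagged to the user with an opportunity to remediate prior to submission.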
Data quality dashboards will be produced to inform periodic quality monitoring reviews, identifying the areas of the CDR that have the lowest-quality data and are critical to accurate downstream processing. Engagement with the market is critical to remediating any data quality issues and ensuring the robustness of the CDR.
We are currently undertaking research to understand how data may be shared between market participants in the event of changes in roles, renewals to contracts and other circumstances in which different parties will be required to share data. Further details on the proposed design will be shared in the coming months.
We will be transparent about any default settings and decisions we make or are taking that are driven by data and will be able to explain these to market participants, auditors and regulators in an appropriate manner.
Customers will be able to access an appropriate description of how they are interacting with AI or other automated systems to ensure transparency around the use of automation along customer journeys.
We will respect the customer’s choice to have data deleted when requested; however, a completed review form must be received, and a full record of the data that has been destroyed or retained for a further period will be kept. Any information deleted from operational systems will still be present in back-up systems.
• the gap between the CDR template and the data the market currently captures for a policy;
• data availability, formats, standards, where the data is stored, and the granularity of the data stored;
• whether placement support services can produce required outcomes based on fields provided by participants and enriched fields sourced, e.g. tax calculations; and
• enrichment opportunities within placement support services and in the Digital Gateway.
The colour wheel below shows the sort of information that needs to be captured to enable this new method of classification.
Lloyd’s recognises that risk codes are currently used in many systems and processes and plans to enrich the data in the CDR with the appropriate code(s) using the key facts provided about the risk.
• Where the insured item is Fine Art, only one risk code, FA, can apply.
• Where the insured item is Bloodstock or Livestock, applying a code is a little more complicated. Where the business is written under Excess of Loss, risk code NX applies; otherwise, if the insured item is Bloodstock the code is NB, and if it is Livestock the code is N.
• Where the insured item is property, many different codes may apply. Typically, we will need to consider the:
- coverage, e.g. difference in conditions or property damage;
- perils, e.g. fire, terrorism or war on land;
- method of placement, e.g. open market or binder; and
- location of the property.
This diagram shows how logic might be used to derive risk codes for Open Market North American Property Insurance business.
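The branching described in the bullets above can be sketched as simple rule-based logic. The FA, NX, NB and N outcomes follow this section directly; the function signature, field names and the property branch are illustrative assumptions, since property requires the fuller coverage/peril/placement/location logic shown in the diagram.

```python
# Sketch of risk-code derivation from key facts about the risk.
# Rule structure follows the bullets above; names are illustrative.

def derive_risk_code(insured_item: str, excess_of_loss: bool = False) -> str:
    if insured_item == "Fine Art":
        return "FA"                      # only one code can apply
    if insured_item in ("Bloodstock", "Livestock"):
        if excess_of_loss:
            return "NX"                  # written under Excess of Loss
        return "NB" if insured_item == "Bloodstock" else "N"
    if insured_item == "Property":
        # Many codes may apply: coverage, perils, method of placement and
        # location of the property would all need to be considered here.
        raise NotImplementedError(
            "property needs coverage/peril/placement/location logic")
    raise ValueError(f"No rule for insured item: {insured_item}")

print(derive_risk_code("Bloodstock", excess_of_loss=True))  # NX
```

In practice this enrichment would run within Lloyd’s systems, so market participants supply the key facts rather than the codes themselves.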
Foreign Insurance Legislation (FIL) codes are currently generated manually and are used by Lloyd's and market participants to group transactions and drive multiple downstream processes including regulatory reporting. There is a set of key data fields which we aim to collect through the CDR to enable the calculation of FIL codes automatically. This sort of logic is already being used in Lloyd's Direct Reporting (LDR). The example below illustrates how FIL codes will be calculated within the scope of Open Market North American Property insurance.
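At its simplest, automated FIL-code derivation of the kind described above amounts to a lookup from a small set of key data fields to a code. The sketch below shows that shape only: the field names, table keys and FIL code values are all invented for illustration, as the real mapping is defined by Lloyd’s.

```python
# Hypothetical FIL-code lookup: keys and code values below are invented
# placeholders, not real Lloyd's FIL codes.

FIL_TABLE = {
    # (country, state, basis) -> FIL code
    ("US", "NY", "direct"): "US-NY-D",
    ("US", "NY", "reinsurance"): "US-NY-R",
    ("CA", "ON", "direct"): "CA-ON-D",
}

def derive_fil_code(country: str, state: str, basis: str) -> str:
    """Look up the FIL code for a transaction from its key data fields."""
    try:
        return FIL_TABLE[(country, state, basis)]
    except KeyError:
        # Unmapped combinations are surfaced for manual review rather
        # than silently defaulted.
        raise ValueError(f"No FIL mapping for {(country, state, basis)}")
```

Collecting the key fields through the CDR makes this lookup deterministic, which is what allows the manual step to be removed.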