Overcoming a Disjointed Approach Assures That No Risks Fall through the Cracks
An insurer's numerous intricate reinsurance contracts and special pool arrangements, countless policies and arrays of transactions create a massive risk of unintended exposure. The inability to ensure that each risk has the appropriate reinsurance program associated with it is a recipe for disaster.
Disjointed systems (a combination of a policy administration system, or PAS, and spreadsheets, for example) or systems working in silos are a sure way to have risks fall through the cracks. The question is not whether it will happen but when and by how much.
Beyond excessive risk exposure, the risks are many: claims leakage, poor management of aging recoverables and a lack of business intelligence capabilities. There's also the likelihood of not being able to track out-of-compliance reinsurance contracts. For instance, if a reinsurer requires a certain exclusion in the policies it reinsures and the direct writer issues a policy without that exclusion, then the policy is out of compliance, and the reinsurer may deny liability.
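This kind of compliance check lends itself to automation. As a minimal sketch (all treaty IDs, field names and clause names below are hypothetical), a system can flag any ceded policy that lacks an exclusion its treaty requires:

```python
# Hypothetical rule table: treaty_id -> exclusion clauses the reinsurer mandates.
REQUIRED_EXCLUSIONS = {
    "XL-2024-01": {"terrorism", "cyber"},
}

def out_of_compliance(policies):
    """Return (policy_id, missing_clauses) pairs for non-compliant cessions."""
    findings = []
    for p in policies:
        required = REQUIRED_EXCLUSIONS.get(p["treaty_id"], set())
        missing = required - set(p["exclusions"])
        if missing:
            findings.append((p["policy_id"], sorted(missing)))
    return findings

policies = [
    {"policy_id": "P-1", "treaty_id": "XL-2024-01", "exclusions": ["terrorism", "cyber"]},
    {"policy_id": "P-2", "treaty_id": "XL-2024-01", "exclusions": ["terrorism"]},
]
print(out_of_compliance(policies))  # [('P-2', ['cyber'])]
```

A dedicated reinsurance system runs checks of this kind continuously, rather than leaving them to periodic manual audits.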
The result is unreliable financial information for trend analysis, profitability analysis and exposure measurement, to name a few.
Having fragmented solutions and manual processes is the worst formula when it comes to audit trails. This is particularly troubling in an age of stringent standards in an increasingly internationally regulated industry.
Integrating the right solution will help reduce the aforementioned risks to an absolute minimum.
Consider vendors offering dedicated and comprehensive systems as opposed to policy administration system vendors, which may simply offer "reinsurance modules" as part of all-encompassing systems.
Failing to pick the right solution will cost the insurer frustration and delays as it attempts to "right" the solution through a series of customizations to add the missing functions. This will surely lead to cost overruns, a lengthy implementation and an uncertain outcome.
Common system features a carrier should look for are:
> Cession treaties and facultative management
> Claims and events management
> Policy management
> Technical accounting (billing)
> Internal retrocession
> Assumed and retrocession operations
> Financial accounting
> Regulatory reporting
> Statistical reports
> Business intelligence
Study Before Implementing
Picking the right solution is just the start. Implementing a new solution still has many pitfalls. Therefore, the first priority is to perform a thorough and meticulous preliminary study.
The study is directed by the vendor and conducted much like an audit, through a series of meetings and interviews with the different stakeholders: IT, business, etc. It typically lasts one to three weeks, depending on the complexity of the project. A good approach is to spend half of each day conducting the scheduled meetings and the other half drafting the findings and submitting them for review the following day.
The study should at least contain the following:
> A detailed report on the company's current reinsurance management processes
> A determination of potential gaps between the carrier's reinsurance processes and the target solution
> A list of contracts and financial data required for going live
> Specifications for the interfaces
> Definitions of the data conversion and migration strategy
> Reporting requirements and strategy
> Detailed project planning and identification of potential risks
> Repository requirements
> Assessment and revision of overall project costs
The preliminary study report must be submitted to each stakeholder for review and validation, as well as to the head of the insurance company's steering committee for endorsement, prior to the start of the project. If necessary, it should be revised until all the components are adequately defined. Ideally, the report should be used as a road map by the carrier and vendor.
All project risks and issues identified at this stage will be incorporated into the project planning. It saves much time and money to discover them before the implementation phase. One of the main reasons projects fail is poor communication. Key people on different teams need to actively communicate with each other, and at least one person from each invested area (IT, business and upper management) must be part of a well-defined steering committee.
A clear-cut escalation process must be in place to tackle any foreseeable issues and address them in a timely manner.
A Successful Implementation Process
Let us focus on key areas and related guidelines that are essential to successfully carry out a project.
Prior to migration, an in-depth data scrubbing or cleansing is recommended. This is the process of amending or removing data derived from the existing applications that is erroneous, incomplete, inadequately formatted or replicated. The discrepancies discovered or deleted may have been originally produced by user-entry errors or by corruption in transmission or storage.
Data cleansing may also include actions such as harmonization of data, which relates to identifying commonalities in data sets and combining them into a single data component, as well as standardization of data, which is a means of changing a reference data set to a new standard—in other words, use of standard codes.
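As a minimal illustration of such a cleansing pass (the code table, field names and date format below are assumed for the example), a script can drop replicated rows, normalize dates and map legacy line-of-business codes to the new standard code set, routing unknown codes to review rather than guessing:

```python
# Hypothetical standardization table: legacy lob_code -> standard code.
LEGACY_TO_STANDARD = {"FIRE": "PROP-FIRE", "AUTO": "AUTO-PHYS"}

def cleanse(records):
    """Deduplicate, normalize dates to ISO form and standardize codes."""
    seen, clean, rejected = set(), [], []
    for r in records:
        key = (r["policy_id"], r["effective_date"])
        if key in seen:                      # drop replicated rows
            continue
        seen.add(key)
        std = LEGACY_TO_STANDARD.get(r["lob_code"], r["lob_code"])
        if std not in LEGACY_TO_STANDARD.values():
            rejected.append(r)               # route unknown codes to manual review
            continue
        clean.append({**r,
                      "lob_code": std,
                      "effective_date": r["effective_date"].replace("/", "-")})
    return clean, rejected
```

In practice this work is done with profiling and ETL tooling rather than hand-rolled scripts, but the logic (detect, standardize or reject) is the same.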
Data migration pertains to moving data from the existing system (or systems) to the target application, along with all the measures required for migrating and validating the data throughout the entire cycle. Before the automatic migration can take place, the data must be converted so that it's compatible with the reinsurance system: each data element is mapped to the target model, with the relevant business rules and codes attached to it.
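A conversion step of this kind can be sketched as follows (the field map, currency rule and field names are hypothetical, chosen only to show the shape of the mapping):

```python
# Hypothetical mapping: source field -> target field.
FIELD_MAP = {"pol_no": "policy_id", "eff_dt": "effective_date", "prem": "gross_premium"}
# Assumed business rule: the target system stores premiums in USD.
FX_TO_USD = {"USD": 1.0, "CAD": 0.75}

def convert(source_row):
    """Map one source record to the target schema and apply business rules."""
    target = {FIELD_MAP[k]: v for k, v in source_row.items() if k in FIELD_MAP}
    target["gross_premium"] = round(
        source_row["prem"] * FX_TO_USD[source_row["ccy"]], 2)
    return target
```

Each mapping and rule in the real effort comes out of the preliminary study, not the migration scripts themselves.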
An effective and efficient data migration effort involves anticipating potential issues, threats and opportunities, determining the most suitable data-migration methodology early in the project and taking appropriate measures to mitigate potential risks. The most suitable methodology differs from one carrier to another, based on each carrier's particular business model.
Analyze and understand the business requirements before gathering and working on the actual data. Thereafter, the carrier must delineate what needs to be migrated and how far back. In the case of long-tail business, such as asbestos coverage, all the historical data must be migrated. This is because it may take several years or decades to identify and assess claims.
Conversely, for short-tail lines, such as property fire or physical auto damage, for which losses are usually known and paid shortly after the loss occurs, only the applicable business data is to be singled out for migration.
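The scoping rule described above can be expressed very simply (the line names and the three-year short-tail cutoff here are assumptions for illustration; the actual cutoff is a business decision from the preliminary study):

```python
# Lines treated as long-tail for migration purposes (assumed list).
LONG_TAIL = {"asbestos", "environmental"}

def in_migration_scope(record, current_year=2014, short_tail_years=3):
    """Long-tail lines migrate full history; short-tail lines only recent years."""
    if record["line"] in LONG_TAIL:
        return True                      # claims may develop for decades
    return current_year - record["year"] <= short_tail_years
```
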
A detailed mapping of the existing data and system architecture must be drafted in order to isolate any issues related to the conversion early on. Most likely, workarounds will be required so as to overcome the specificities or constraints of the new application. As a result, it will be crucial to establish checks and balances or guidelines to validate the quality and accuracy of the data to be loaded.
Identifying subject-matter experts who are thoroughly acquainted with the source data will lessen the risk of missing undocumented data snags and help ensure the success of the project. Therefore, proper planning for accessibility to qualified resources at both the vendor and insurer is critical. You'll also need experts in the existing systems, the new application and other tools.
Interfacing in a reinsurance context relates to connecting the data residing in the upstream system, or PAS, to the reinsurance management system, plus integrating the reinsurance data with other applications, such as the general ledger, the claims system and business intelligence tools.
Integration and interfaces are typically achieved by exchanging data between the two applications but can include tighter mechanisms such as direct function calls: synchronous requests made to the target system, usually for information retrieval.
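A synchronous direct call can be pictured like this (both classes and all method names are hypothetical stand-ins for the real systems): the PAS asks the reinsurance system which treaty covers a policy and blocks until the answer comes back.

```python
class ReinsuranceSystem:
    """Stand-in for the reinsurance management system."""
    def __init__(self, cessions):
        self._cessions = cessions            # policy_id -> treaty_id

    def treaty_for(self, policy_id):
        """Synchronous lookup: returns immediately to the caller."""
        return self._cessions.get(policy_id)

class PolicyAdminSystem:
    """Stand-in for the upstream PAS."""
    def __init__(self, reins):
        self._reins = reins

    def display_coverage(self, policy_id):
        treaty = self._reins.treaty_for(policy_id)   # blocking direct call
        return treaty or "UNCEDED"
```

File- or message-based exchange, by contrast, is asynchronous: each side processes batches on its own schedule, which suits accounting feeds better than on-screen lookups.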
Again, choosing the right partner will be critical. A provider with extensive experience in developing interfaces between primary insurance systems, general ledgers, BI suites and reinsurance solutions most likely has already developed such interfaces for the most popular packages and will have the know-how and best practices to develop new ones if needed. This will ensure that the process will proceed as smoothly as possible.
Once the vendor (primarily) and the carrier have carried out all essential implementation work, consolidating the process automation and integrations required to deliver the system, the goal is a fully deployable, testable solution ready for user acceptance testing in the reinsurance system's test environment.
Formal user training must take place beforehand. It needs to include a role-based program and ought not to be a "one-size-fits-all" training course. Each user group needs to have a specific training program that relates to its particular job functions.
The next step is to prepare for deployment in production. You'll need to perform a number of parallel runs of the existing reinsurance solutions and the new reinsurance system, replicating each transaction in both and verifying that they reach the same outcome before going live.
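A parallel-run check reduces to diffing the two systems' results on the same inputs. As a hedged sketch (the calculators and tolerance below are placeholders for the real cession calculations), anything outside tolerance becomes an item to reconcile before go-live:

```python
def parallel_run_diffs(transactions, legacy_calc, new_calc, tol=0.01):
    """Return transactions where the two systems disagree beyond `tol`."""
    diffs = []
    for tx in transactions:
        old, new = legacy_calc(tx), new_calc(tx)
        if abs(old - new) > tol:
            diffs.append((tx["id"], old, new))
    return diffs
```

Running this over several closing periods, and driving every diff to zero or to a documented explanation, is what gives the steering committee confidence to retire the legacy process.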
Now that you've installed a modern, comprehensive reinsurance management system, you'll have straight-through automated processing with all the checks and balances in place. You will be able to reap the benefits of a well-thought-out strategy paired with an appropriate reinsurance system that will lead to superior controls, reduced risk and better financials. You'll no longer have any dangerous hidden "cracks" in your reinsurance program.
By Joseph Sebbag
Copyright © 2014 by Carrier Management. All Rights Reserved. http://www.carriermanagement.com/features/2014/02/04/118275.htm