Bordereaux Processing
Last reviewed April 2026
Every month, thousands of spreadsheets flow between cedants and reinsurers carrying the data that underpins billions in premium and claims. Column headers differ. Date formats vary. Currency fields contain text. Errors caught months late trigger reserve adjustments that swing quarterly results. Bordereaux processing is the unsexy plumbing of reinsurance, and its failures are expensive enough that data governance in this space has become a board-level concern.
What is bordereaux processing?
A bordereau is a periodic report, typically monthly or quarterly, that a cedant sends to its reinsurers detailing the individual risks, premiums, and claims within a reinsurance treaty. Premium bordereaux list the policies written under the treaty with their characteristics and premiums. Claims bordereaux list individual claims with their incurred amounts, paid amounts, and reserve movements. These reports are the primary data exchange mechanism between cedants and reinsurers, and they feed directly into the reinsurer's financial reporting, reserving, and pricing processes.
The volume is substantial. A mid-sized reinsurer receives thousands of bordereaux per year from hundreds of cedants, each in a different format. Some arrive as Excel files. Others as CSV extracts from legacy systems. A few as PDF tables that must be re-keyed. The lack of standardisation is not a technology failure; it reflects the reality that every cedant's policy administration system produces data in its own structure, and no two cedants describe the same risk in the same way.
Reconciliation is where the cost accumulates. The premium bordereau must reconcile to the treaty's premium statements. The claims bordereau must reconcile to the loss advices and the reinsurer's own claims records. Discrepancies (a policy appearing in one month's bordereau but not the next, a claim amount that changes without explanation, a currency that shifts between reporting periods) require manual investigation. A typical reinsurance operations team spends 60 to 70 per cent of its time on data ingestion and reconciliation rather than analysis.
The landscape
ACORD standards define data formats for insurance and reinsurance data exchange, including bordereaux. ACORD's Global Reinsurance and Large Commercial (GRLC) standards provide a common vocabulary and structure for bordereaux data. Adoption is growing but far from universal. Many cedants, particularly smaller ones or those in markets outside North America and London, continue to use proprietary formats. Even among ACORD-compliant bordereaux, interpretation differences mean that true plug-and-play interoperability remains elusive.
Lloyd's has mandated structured data submission for certain reporting requirements through its Core Data Record (CDR) initiative, pushing the market toward standardised, machine-readable data exchange. The CDR captures key risk and premium data at the point of placement, reducing the need for retrospective bordereaux reconciliation. However, CDR covers Lloyd's market business only, and the broader reinsurance market remains dependent on traditional bordereaux exchange.
EIOPA's Solvency II reporting requirements demand granular, validated data from reinsurers about their assumed business. The quality of bordereaux data directly affects the reinsurer's ability to complete regulatory reporting accurately and on time. Errors in bordereaux that are not caught until the regulatory reporting cycle create time pressure and restatement risk that supervisors view unfavourably.
How AI changes this
Automated ingestion and mapping is the foundational application. AI systems that can ingest a bordereau in any format, identify the column structure, map it to a canonical data model, and flag anomalies represent a step change from the current process of maintaining manual mapping templates for each cedant. Machine learning models trained on thousands of historical bordereaux can recognise common patterns and variations, handling format changes without manual remapping.
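A minimal sketch of the header-mapping step, using simple normalisation and fuzzy matching rather than a trained model. The canonical field names and header variants here are illustrative, not drawn from any standard; a production system would learn variants from historical bordereaux and route unmapped headers to a human.

```python
from difflib import get_close_matches

# Illustrative canonical schema with known header variants per field.
# In practice these variants would be harvested from historical bordereaux.
CANONICAL = {
    "policy_number": ["policy no", "pol ref", "policy ref", "certificate no"],
    "inception_date": ["inception", "start date", "effective date"],
    "gross_premium": ["gwp", "gross prem", "premium gross"],
    "currency": ["ccy", "curr", "orig currency"],
}


def normalise(header: str) -> str:
    """Lower-case and strip punctuation so 'Policy No.' and 'policy no' compare equal."""
    return "".join(ch for ch in header.lower() if ch.isalnum() or ch == " ").strip()


def map_headers(raw_headers, cutoff=0.75):
    """Map raw bordereau headers to canonical fields; unmapped headers are flagged."""
    variants = {normalise(v): field for field, vs in CANONICAL.items() for v in vs}
    variants.update({normalise(f.replace("_", " ")): f for f in CANONICAL})
    mapping, unmapped = {}, []
    for h in raw_headers:
        key = normalise(h)
        if key in variants:
            mapping[h] = variants[key]
            continue
        # Fall back to fuzzy matching for near-miss spellings
        close = get_close_matches(key, variants.keys(), n=1, cutoff=cutoff)
        if close:
            mapping[h] = variants[close[0]]
        else:
            unmapped.append(h)  # route to a human for review
    return mapping, unmapped
```

The design point is the fallback path: headers the system cannot map confidently are surfaced for review rather than guessed, so each human decision can be fed back into the variant dictionary.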
Data validation at the point of ingestion catches errors that would otherwise propagate into downstream systems. AI-powered validation rules check for logical consistency (a claim paid amount that exceeds the incurred amount), temporal consistency (a policy inception date that falls outside the treaty period), and cross-reference consistency (a policy number that appears in the claims bordereau but not in the premium bordereau). Errors flagged at ingestion, rather than during quarterly reconciliation, are cheaper to resolve and less disruptive to financial reporting.
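The three consistency checks above can be sketched as plain rules over a claims row. The row structure and field names here are assumptions for illustration; real bordereaux carry many more fields and the rules would be configurable per treaty.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ClaimRow:
    claim_number: str
    policy_number: str
    paid: float
    incurred: float
    loss_date: date


def validate_claim(row, treaty_start, treaty_end, premium_policy_numbers):
    """Return a list of validation failures for one claims-bordereau row."""
    errors = []
    # Logical consistency: paid cannot exceed incurred (paid plus outstanding)
    if row.paid > row.incurred:
        errors.append(f"{row.claim_number}: paid {row.paid} exceeds incurred {row.incurred}")
    # Temporal consistency: the loss date must fall inside the treaty period
    if not (treaty_start <= row.loss_date <= treaty_end):
        errors.append(f"{row.claim_number}: loss date {row.loss_date} outside treaty period")
    # Cross-reference consistency: the policy must exist in the premium bordereau
    if row.policy_number not in premium_policy_numbers:
        errors.append(f"{row.claim_number}: policy {row.policy_number} not in premium bordereau")
    return errors
```

Running these rules at ingestion means each failure is attached to a specific row and cedant while the submission is still fresh, rather than surfacing as an unexplained break at quarter end.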
Automated reconciliation between bordereaux, treaty statements, and the reinsurer's own records reduces the manual effort that currently dominates reinsurance operations. AI systems that can match records across multiple data sources, handling differences in naming conventions, date formats, and currency representation, reduce reconciliation time from days to hours for a typical treaty.
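A simplified sketch of the matching core: normalise the key fields (policy reference, inception date) on both sides, then compare. The date formats and field names are illustrative assumptions; a real reconciliation would also compare amounts within tolerance and handle currency conversion.

```python
from datetime import datetime


def norm_date(value):
    """Parse a handful of date formats cedants commonly use (illustrative list)."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%d-%b-%Y"):
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognised date: {value}")


def norm_ref(value):
    """Strip punctuation and case so 'POL-001' and 'pol001' match."""
    return "".join(ch for ch in str(value).upper() if ch.isalnum())


def reconcile(bordereau_rows, ledger_rows):
    """Match rows on a normalised (policy ref, inception date) key; report breaks."""
    def key(row):
        return (norm_ref(row["policy"]), norm_date(row["inception"]))

    bdx = {key(r): r for r in bordereau_rows}
    ledger = {key(r): r for r in ledger_rows}
    matched = bdx.keys() & ledger.keys()
    only_bdx = bdx.keys() - ledger.keys()      # in the bordereau, not our records
    only_ledger = ledger.keys() - bdx.keys()   # in our records, not the bordereau
    return matched, only_bdx, only_ledger
```

The two "only" sets are the breaks that today consume analyst time; automating the normalisation and matching leaves humans to investigate only the genuine discrepancies.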
The deeper value is in analytics that become possible once bordereaux data is clean and structured. Portfolio monitoring, loss trend analysis, pricing validation, and exposure accumulation tracking all depend on accurate, timely bordereaux data. AI that fixes the data quality problem at the source enables analytics that were previously impractical because the data preparation consumed all available resource.
What to know before you start
The problem is not extraction; it is interpretation. Extracting data from a spreadsheet is technically simple. Understanding that "GWP" in one cedant's bordereau means gross written premium inclusive of taxes while another cedant's "GWP" excludes taxes is a semantic problem that requires domain knowledge. Build your data mapping with experienced reinsurance operations staff, not just data engineers.
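One way to make that domain knowledge explicit is a per-cedant semantics table maintained with operations staff. Everything below is a hypothetical illustration (the cedant names, flags, and tax rate are invented): the point is that each cedant's meaning of "GWP" is recorded once, then applied mechanically.

```python
# Hypothetical per-cedant field semantics, captured with operations staff.
# The flags make the meaning of each cedant's "GWP" explicit instead of implicit.
CEDANT_SEMANTICS = {
    "cedant_a": {"gwp_includes_taxes": True, "tax_rate": 0.12},
    "cedant_b": {"gwp_includes_taxes": False, "tax_rate": 0.12},
}


def gross_premium_ex_tax(cedant: str, reported_gwp: float) -> float:
    """Convert each cedant's reported GWP to a single canonical basis (ex-tax)."""
    semantics = CEDANT_SEMANTICS[cedant]
    if semantics["gwp_includes_taxes"]:
        return reported_gwp / (1 + semantics["tax_rate"])
    return reported_gwp
```

Encoding semantics as data rather than burying them in transformation code also gives reinsurance operations staff something they can review and correct directly.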
Data quality at source matters more than downstream processing. If a cedant's policy administration system produces incorrect premium allocations, no amount of AI processing will fix the error. The most effective investment is working with key cedants to improve the quality of the data they produce, through clearer specifications, validation rules at their end, and feedback loops when errors are detected. This is a relationship investment, not a technology one.
Do not underestimate the long tail of formats. Your largest twenty cedants may represent 80 per cent of premium but only 30 per cent of the format variations you encounter. The remaining cedants, each with their own quirks, are where automated ingestion is most valuable and most difficult. Design your system for the full distribution of formats, not just the common ones.
Start with a single treaty type, typically property proportional, where bordereaux formats are relatively standardised and the reconciliation rules are well-defined. Build the ingestion, validation, and reconciliation pipeline for this treaty type, demonstrate the operational savings, and then extend to more complex treaty types (casualty excess of loss, multi-year programmes) where the data challenges are greater. The process automation infrastructure you build for the simple case will serve as the foundation for the complex ones.
Exploring AI for your organisation? Book fifteen minutes on the calendar.
Let’s build AI together