
databricks-industry-solutions / smart-claims


Use Databricks to improve the Claims Management process for faster claims settlement, lower claims processing costs and quicker identification of possible fraud

Home Page: https://www.databricks.com/blog/2023/04/03/claims-automation-databricks-lakehouse.html

License: Other

Python 100.00%
databricks-industry-solutions databricks-industry-solutions-fsi dbsql dlt fsi insurance

smart-claims's Introduction


  • Domain : Insurance
  • Challenge :
    • Insurance companies have to constantly innovate to beat the competition
    • Customer Retention & Loyalty can be a challenge as people are always shopping for more competitive rates leading to churn
    • Fraudulent transactions can erode profit margins
    • Processing Claims can be very time consuming
    • How can the Claims Management process be improved for faster claims settlement, lower claims processing costs, and quicker identification of possible fraud?
  • Solution: Smart Claims!
    • A Databricks Solution Accelerator that uses the Lakehouse paradigm to automate certain components of this process that aids human investigation
    • Please refer to the getting-started guide in the docs folder

Every claim is different; the following steps capture a typical workflow:

  1. The Insured contacts the Broker, who is the primary contact with respect to the policy
  2. The Broker examines the data to ensure that the relevant details of the claim situation have been captured
  3. The Adjuster takes over the investigation and may collaborate with internal/external experts to determine the amount of loss or damages covered by the insurance policy
  4. The Claims Investigator does due diligence on the paperwork
  5. The Compliance Officer checks eligibility of coverage and ensures no foul play is involved
  6. The Appraiser conducts a damage evaluation to determine the severity of the claim
  7. The Adjuster ensures payment is approved and released, and communicates back to the Insured

Details

  • What
    • How to manage operational costs so as to offer lower premiums, be competitive & yet remain profitable?
    • How can customer loyalty & retention be improved to reduce churn?
    • How to improve process efficiencies to reduce the response time to customers on the status/decision on their claims?
    • How can funds and resources be released in a timely manner to deserving parties?
    • How can suspicious activities be flagged for further investigation?
  • Why
    • Faster approvals lead to better customer NPS scores and lower operating expenses
    • Detecting & preventing fraudulent scenarios leads to a lower leakage ratio
    • Improving customer satisfaction leads to a lower loss ratio
  • How: Claims Automation
    • Automating certain aspects of the claims processing pipeline to reduce dependence on human personnel, especially for mundane, predictable tasks
    • Augmenting existing claims data with additional info/insights to aid/expedite human investigation, e.g. recommending the Next Best Action
    • Providing greater explainability of the situation/case for better decision making in the human workflow
    • Serving as a sounding board to avoid human error/bias, as well as providing an audit trail for personnel in claims roles

Emerging trends in Insurance

  • According to EY: 'It is given that the future of insurance will be data-driven and analytics-enabled. But tomorrow’s top-performing insurers will also excel at making human connections and applying the personal touch at the right time.'
  • Deloitte in its '2023 Insurance outlook' states 'Technology infrastructure has improved, but focus needs to shift to value realization, and broaden historical focus from risk and cost reduction to prioritize greater levels of experimentation and risk-taking that drives ongoing innovation, competitive differentiation, and profitable growth.' with increased focus on ESG as value differentiator & DEI to broaden offerings.
  • Nationwide CTO Jim Fowler, in a podcast on the 'Future of Insurance', summarized it around innovation.
  • Each individual's needs are different. Hence personalization, and delivering relevant value to the individual concerned, is an important ingredient of innovation. Personalization is not about bothering the customer with multiple touchpoints but wowing them with relevant insights that suit their needs in a timely manner.
  • Apart from courage and conviction, Innovation requires patience because no worthy change is delivered overnight. Hence the need to be on a platform that enables fast paced innovation and an architecture that is open, extensible and pluggable so that technology is never a constraint nor a hindrance to execution of novel ideas.

Insurance Terminology

Insurance Reference Architecture

Smart Claims Reference Architecture & Data Flow

Claims flow typically involves some orchestration between an operational system such as Guidewire and an analytic system such as Databricks, as shown in the diagram above. End users often use a smart app to file claims and check the status of their case. Either via the app or via an IoT device embedded in their vehicle, telematics data is constantly streaming into one of these two systems, providing a lot of information about their driving patterns. In some cases this data is used, alongside other credit scores, to assign a risk score for the driver, which has a direct consequence on their premiums. In some ways, it can be argued that this type of insurance risk score is a better indicator of a person's safety track record than a generic financial credit score, which is determined primarily by their financial track record.

  1. Policy data ingestion
  2. Claims and telematics data ingestion
  3. Ingest all data sources to the cloud storage
  4. Incrementally load the raw data into a Delta bronze table (see the ingestion sketch after this list)
  5. Transform and manipulate the data
  6. Model scoring (and model training in the training pipeline)
  7. Load predictions to a gold table and perform aggregations
  8. Dashboard visualization
  9. Feed the results back to the operational system
  10. Claims routing based on decision
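
As a rough illustration of steps 3 and 4, the snippet below incrementally loads raw claims files from cloud storage into a bronze Delta table using Auto Loader. The paths, file format, and table name are placeholders for illustration, not the accelerator's actual configuration.

```python
# Illustrative sketch only: paths and names are placeholders, not the accelerator's configuration.
# `spark` is the SparkSession provided by the Databricks notebook environment.
raw_claims_path = "s3://<bucket>/smart_claims/raw/claims/"           # cloud landing zone (step 3)
checkpoint_path = "s3://<bucket>/smart_claims/_checkpoints/bronze_claim/"

bronze_claims = (
    spark.readStream.format("cloudFiles")                            # Auto Loader
    .option("cloudFiles.format", "json")                             # raw claims assumed to arrive as JSON
    .option("cloudFiles.schemaLocation", checkpoint_path)            # schema inference/evolution state
    .load(raw_claims_path)
)

(bronze_claims.writeStream
    .option("checkpointLocation", checkpoint_path)
    .trigger(availableNow=True)                                      # process new files, then stop
    .toTable("smart_claims.bronze_claim"))                           # bronze Delta table (step 4)
```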

Datasets

  • All the data is synthetically generated, including the images and geo-locations

  • Typical datasets include the above; some of these are slow-moving while others are fast-moving.
  • Some are structured/semi-structured while others are unstructured.
  • Some of these are additive and are appended, while others are incremental updates and are treated as slowly changing dimensions (see the upsert sketch after this list).
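
As a sketch of how the incrementally updated, slowly changing sources could be applied to a Delta table, the example below uses a MERGE-based upsert. The table names and the policy_no business key are assumptions for illustration only.

```python
from delta.tables import DeltaTable

# Hypothetical SCD-style upsert: apply an incremental batch of policy updates to the silver policy table.
# The update feed and the `policy_no` key are illustrative assumptions.
updates = spark.table("smart_claims.policy_updates")              # placeholder incremental feed

(DeltaTable.forName(spark, "smart_claims.silver_policy")
    .alias("t")
    .merge(updates.alias("s"), "t.policy_no = s.policy_no")       # match on the assumed business key
    .whenMatchedUpdateAll()                                       # changed policies are updated in place
    .whenNotMatchedInsertAll()                                    # new policies are appended
    .execute())
```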

Domain Model

  • There are several industry-prescribed data domain models, e.g. OMG (https://www.omg.org/)
  • The above diagram is a simplified domain model capturing some of the relevant data points for the use case.
  • For more details, refer to the P&C entity definitions, terminology & logical model: https://www.omg.org/spec/PC/1.0/PDF

Insight Generation using ML & Rule Engine

  • A pre-trained ML Model is used to score the image attached in the claims record to assess the severity of damage.
  • A Rule Engine is a flexible way to define known operational static checks that can be applied without requiring a human in the loop, thereby speeding up 'routine' cases. When the reported data does not match the auto-detected info, flags are raised to trigger additional human investigation.
  • This additional info helps a claims investigator by narrowing down the number of cases that need intervention, as well as the specific areas that need additional follow-up and scrutiny
  • Some common checks include (see the rule-engine sketch after this list):
    • Claim date should be within coverage period
    • Reported Severity should match ML predicted severity
    • Accident Location as reported by telematics data should match the location as reported in claim
    • Speed as reported by telematics should be within the speed limits of that region, if there is a dispute about who was at fault
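
A minimal sketch of how such checks could be expressed as data and applied dynamically with PySpark is shown below. The rule expressions and column names are hypothetical and only mirror the checks listed above; they are not the accelerator's actual claim_rules definitions.

```python
from pyspark.sql import functions as F

# Hypothetical rules expressed as SQL predicates over the joined claim/policy/telematics data.
# All column names here are illustrative assumptions.
rules = [
    ("valid_date",     "claim_date BETWEEN policy_effective_date AND policy_expiry_date"),
    ("severity_match", "reported_severity = predicted_severity"),
    ("location_match", "abs(claim_latitude - telematics_latitude) < 0.01"),
    ("speed_check",    "telematics_speed <= posted_speed_limit"),
]

claims = spark.table("smart_claims.silver_claim_policy_telematics")   # joined input (see table list below)

# Evaluate every rule as a boolean column, then flag claims that fail any check for human review.
for name, predicate in rules:
    claims = claims.withColumn(name, F.expr(predicate))

insights = claims.withColumn(
    "needs_investigation",
    ~F.expr(" AND ".join(name for name, _ in rules))
)
```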

Workflow

  • Different data sources flow in at their own pace, some independent, some with dependencies
  • We will use Databricks multi-task Workflows to put the process in auto-pilot mode to demonstrate the Lakehouse paradigm.
  • Some nodes are Delta Live Table nodes which employ the medallion architecture to refine and curate data, while others are notebooks which use a Model to score the data while still others are SQL workflows to refresh a dashboard with newly generated insights.

1. Setup involves all the work needed to prepare the environment (schema, paths, and configuration)
2. Ingest claims, policy & accident data using a DLT pipeline
3. Ingest telematics data
4. Augment claims data with latitude/longitude using the zipcode
5. Apply an ML model to incoming image data to auto-infer severity (see the scoring sketch after this list)
6. Join telematics data with claims data to recreate the scene of the accident, e.g. location and speed. This is where other 3rd-party data can be layered in, e.g. road conditions, weather data, etc.
7. Apply pre-determined rules dynamically to assess the merit of the claim; if it is a 'normal' case, the release of funds can be expedited
8. The Claims Dashboard is refreshed to aid claims investigators with additional insights inferred through the data and AI pipeline
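
A sketch of step 5, scoring incoming images with a pre-trained severity model served through MLflow, might look like the following. The registered model name, stage, and input/output columns are assumptions for illustration, not the accelerator's actual artifacts.

```python
import mlflow.pyfunc
from pyspark.sql import functions as F

# Hypothetical: wrap a registered damage-severity model as a Spark UDF for batch scoring.
severity_udf = mlflow.pyfunc.spark_udf(
    spark,
    model_uri="models:/damage_severity/Production",   # placeholder registered-model URI
    result_type="string",
)

claim_images = spark.table("smart_claims.silver_claim")            # assumed to carry image features
scored = claim_images.withColumn(
    "predicted_severity",
    severity_udf(F.col("image_features"))                          # hypothetical input column
)
scored.write.mode("overwrite").saveAsTable("smart_claims.silver_claim_scored")  # illustrative output table
```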

Using DLT for ETL helps simplify and operationalize the pipeline with its support for Auto Loader, data quality via constraints, efficient auto-scaling for streaming workloads, resiliency via restart on failure, and execution of administrative operations, among others.
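
For illustration, a DLT table declaration with quality constraints could look like the sketch below; the expectations and column names are hypothetical and are not the accelerator's actual pipeline code.

```python
import dlt
from pyspark.sql import functions as F

# Hypothetical silver-layer declaration: refine the bronze claims feed and enforce data quality constraints.
@dlt.table(name="silver_claim", comment="Cleansed, validated claims")
@dlt.expect_or_drop("valid_claim_id", "claim_id IS NOT NULL")      # drop rows that fail the constraint
@dlt.expect("claim_date_present", "claim_date IS NOT NULL")        # record violations without dropping rows
def silver_claim():
    return (
        dlt.read_stream("bronze_claim")                            # bronze table declared elsewhere in the pipeline
        .withColumn("claim_date", F.to_date("claim_date"))         # assumed column
    )
```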

  • Schema: smart_claims
  • Tables:
    • Bronze: bronze_claim, bronze_policy, bronze_accident
    • Silver: silver_claim, silver_policy, silver_claim_policy, silver_telematics, silver_accident, silver_claim_policy_accident, silver_claim_policy_telematics, silver_claim_policy_location
    • Gold: claim_rules, gold_insights

Insight visualization using Dashboards

A Loss Summary dashboard gives a bird's-eye view of overall business operations

  • Loss Ratio is computed as insurance claims paid plus adjustment expenses, divided by total earned premiums (see the query sketch after this list).
    • For example, if a company pays $80 in claims for every $160 in collected premiums, the loss ratio would be 50%.
    • The lower the ratio, the more profitable the insurance company. Each insurance company has its target loss ratio. A typical range is between 40%-60%.
    • Damage is captured in 2 categories - property & liability - their loss ratios are tracked separately
    • The 80/20 Rule generally requires insurance companies to spend at least 80% of the money they take in from premiums on care costs and quality improvement activities. The other 20% can go to administrative, overhead, and marketing costs.
  • Summary visualization captures count of incident type by severity
    • Incident type refers to damage on account of
      • theft, collision at rest, and collision in motion (single- or multiple-vehicle collision)
    • Damage Severity is categorized as trivial, minor, major, total loss
  • Analyzing recent trends helps to prepare for handling similar claims in the near future, e.g.:
    • What is the frequency of incidents/damage amounts by hour of day?
      • Are there certain times of day, such as peak hours, that are more prone to incidents?
    • Is there a correlation between incidents and the age of the driver (normalized by the number of drivers in each age group)?
      • Note there are very few drivers below or above a certain age threshold
    • How does the number of incidents correlate with the age/make of the vehicle?
    • Which areas of the city have a higher incidence rate (construction, congestion, layout, density, etc.)?
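
The loss-ratio tile described above could be computed with a query along these lines; the gold_insights column names (claim_amount_paid, adjustment_expenses, earned_premium, incident_type) are assumptions for illustration, not the accelerator's actual schema.

```python
# Illustrative dashboard query; the gold_insights column names are assumed, not the accelerator's schema.
loss_ratio_df = spark.sql("""
    SELECT
        incident_type,
        SUM(claim_amount_paid + adjustment_expenses) / SUM(earned_premium) AS loss_ratio
    FROM smart_claims.gold_insights
    GROUP BY incident_type
""")
display(loss_ratio_df)   # e.g. $80 paid against $160 of earned premium gives a loss ratio of 0.50
```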

A per-claim Investigation dashboard provides additional detail: a claims officer picks a claim number and drills into its various facets

  • The first panel uses counter widgets to provide rolling counts of the number of
    • Claims filed and of those how many were flagged as
      • suspicious or
      • had expired policies or
      • had a severity assessment mismatch or
      • claims amount exceeded the policy limits
  • The next widget uses a table view to show recent claims that are auto-scored in the pipeline using ML inferencing and the rule engine
    • A green tick is used to denote that the auto-assessment matches the claims description
    • A red cross indicates a mismatch that warrants further manual investigation
  • Drill down to a specific claim to see
    • Images of the damaged vehicle
    • Claim, Policy & Driver details
    • Telematic data draws the path taken by the vehicle
    • Reported data is contrasted with assessed data insights

Databricks value proposition in Smart Claims

  • Databricks features used
    • Delta, DLT, Multitask-workflows, ML & MLFlow, DBSQL Queries & Dashboards
  • Unified Lakehouse architecture for
    • All data personas to work collaboratively on a single platform contributing to a single pipeline
    • All big data architecture paradigms including streaming, ML, BI, DE & Ops
  • Workflow Pipelines are easier to create, monitor and maintain
    • Multi-task Workflows accommodate multiple node types (notebooks, DLT, ML tasks, SQL dashboards) and support repair & run and compute sharing
    • DLT pipelines offer quality constraints and faster path to flip dev workloads to production
    • Robust, scalable and fully automated via REST APIs, thereby improving team agility and productivity (see the REST sketch after this list)
  • BI & AI workloads
    • Created, managed with MLFlow for easy reproducibility and auditability
    • Supports any model either created or ported
    • Parameterized dashboards that can access all data in the Lakehouse and can be set up in minutes
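
As an example of the REST automation mentioned above, a multi-task workflow can be triggered through the Databricks Jobs API; the workspace URL, token, and job id below are placeholders.

```python
import requests

# Hypothetical example of triggering the smart-claims multi-task workflow via the Databricks Jobs REST API.
host = "https://<your-workspace>.cloud.databricks.com"    # placeholder workspace URL
token = "<personal-access-token>"                         # placeholder credential
job_id = 12345                                            # placeholder job id of the accelerator workflow

resp = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": job_id},
)
resp.raise_for_status()
print(resp.json()["run_id"])                              # id of the triggered pipeline run
```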

How best to use this demo?

  • Ideal time: 1 hour (see recorded demo, deck, field-demo link)
  • Ideal audience: Mix of tech and business folks (Basic Databricks knowhow is assumed)
  • For optimum experience, reduce cluster startup times by having a running ML Runtime Interactive cluster, DBSQL Warehouse, DLT in dev mode
  • Ideal Flow:
    • Explain need for claims automation via 'smart claims' & how Lakehouse aids the process
    • Deck: based on this Readme, set the flow of the story (15 min)
    • Discovery of where they are (10 min)
    • Demo (25 min)
      • Data sources & EDA notebooks
      • DE: Workflow & DLT Pipeline (5 min)
      • ML: Model management & inferencing (5 min)
      • BI: Loss summary & Claims Investigation (10 min)
  • Next steps (5 min)

[email protected]
[email protected]
[email protected]


© 2022 Databricks, Inc. All rights reserved. The source in this notebook is provided subject to the Databricks License [https://databricks.com/db-license-source]. All included or referenced third party libraries are subject to the licenses set forth below.

library | description | license | source
geopy | A Python client for geocoding | MIT | https://github.com/geopy/geopy
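
For reference, here is a minimal example of how geopy could be used to resolve a zipcode to latitude/longitude (as in workflow step 4). The Nominatim user agent and zipcode are placeholders, and a production pipeline should respect the geocoding service's usage limits.

```python
from geopy.geocoders import Nominatim

# Illustrative only: resolve a US zipcode to coordinates with the free Nominatim service.
geolocator = Nominatim(user_agent="smart_claims_demo")     # placeholder user agent
location = geolocator.geocode({"postalcode": "94105", "country": "United States"})
if location is not None:
    print(location.latitude, location.longitude)           # centroid of the zipcode
```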

Getting started

Although specific solutions can be downloaded as .dbc archives from our websites, we recommend cloning these repositories into your Databricks environment. Not only will you get access to the latest code, but you will be part of a community of experts driving industry best practices and reusable solutions, influencing our respective industries.


To start using a solution accelerator in Databricks simply follow these steps:

  1. Clone solution accelerator repository in Databricks using Databricks Repos
  2. Attach the RUNME notebook to any cluster and execute the notebook via Run-All. A multi-step job describing the accelerator pipeline will be created, and the link will be provided. The job configuration is written in the RUNME notebook in JSON format.
  3. Execute the multi-step-job to see how the pipeline runs.
  4. You might want to modify the samples in the solution accelerator to your needs, collaborate with other users, and run the code samples against your own data. To do so, start by changing the Git remote of your repository to your organization's repository instead of our samples repository (learn more). You can now commit and push code, collaborate with other users via Git, and follow your organization's processes for code development.

The cost associated with running the accelerator is the user's responsibility.

Project support

Please note the code in this project is provided for your exploration only and is not formally supported by Databricks with Service Level Agreements (SLAs). It is provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects. The source in this project is provided subject to the Databricks License. All included or referenced third-party libraries are subject to the licenses set forth below.

Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.

smart-claims's People

Contributors

anindita-databricks, dbbnicole, mrasooli


smart-claims's Issues

Can't make directory for path for s3a/... since it is a file

org.apache.hadoop.fs.FileAlreadyExistsException: Can't make directory for path 's3a://e2-demo-field-eng-dbfs/oregon-prod/1444828305810485/tmp' since it is a file.


ExecutionError Traceback (most recent call last)
in
29
30 tear_down()
---> 31 setup()

in setup()
21 # Similar to database, we will store actual content on a given path
22 dbutils.fs.mkdirs(home_directory)
---> 23 dbutils.fs.mkdirs(temp_directory)
24
25 # # Where we might stored temporary data on local disk

/databricks/python_shell/dbruntime/dbutils.py in f_with_exception_handling(*args, **kwargs)
387 exc.context = None
388 exc.cause = None
--> 389 raise exc
390
391 return f_with_exception_handling

ExecutionError: An error occurred while calling z:com.databricks.backend.daemon.dbutils.FSUtils.mkdirs.
: org.apache.hadoop.fs.FileAlreadyExistsException: Can't make directory for path 's3a://e2-demo-field-eng-dbfs/oregon-prod/1444828305810485/tmp' since it is a file.
at shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:3149)
at shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:3099)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.$anonfun$mkdirs$3(DatabricksFileSystemV2.scala:852)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at com.databricks.s3a.S3AExceptionUtils$.convertAWSExceptionToJavaIOException(DatabricksStreamUtils.scala:70)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.$anonfun$mkdirs$2(DatabricksFileSystemV2.scala:850)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.$anonfun$withUserContextRecorded$2(DatabricksFileSystemV2.scala:1051)
at com.databricks.logging.Log4jUsageLoggingShim$.$anonfun$withAttributionContext$1(Log4jUsageLoggingShim.scala:33)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:94)
at com.databricks.logging.Log4jUsageLoggingShim$.withAttributionContext(Log4jUsageLoggingShim.scala:31)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:205)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:204)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withAttributionContext(DatabricksFileSystemV2.scala:510)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:240)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:225)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withAttributionTags(DatabricksFileSystemV2.scala:510)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withUserContextRecorded(DatabricksFileSystemV2.scala:1024)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.$anonfun$mkdirs$1(DatabricksFileSystemV2.scala:849)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at com.databricks.logging.UsageLogging.$anonfun$recordOperation$1(UsageLogging.scala:330)
at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:424)
at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:444)
at com.databricks.logging.Log4jUsageLoggingShim$.$anonfun$withAttributionContext$1(Log4jUsageLoggingShim.scala:33)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:94)
at com.databricks.logging.Log4jUsageLoggingShim$.withAttributionContext(Log4jUsageLoggingShim.scala:31)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:205)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:204)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withAttributionContext(DatabricksFileSystemV2.scala:510)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:240)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:225)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withAttributionTags(DatabricksFileSystemV2.scala:510)
at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:419)
at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:339)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.recordOperationWithResultTags(DatabricksFileSystemV2.scala:510)
at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:330)
at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:302)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.recordOperation(DatabricksFileSystemV2.scala:510)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.mkdirs(DatabricksFileSystemV2.scala:849)
at com.databricks.backend.daemon.data.client.DatabricksFileSystem.mkdirs(DatabricksFileSystem.scala:212)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2388)
at com.databricks.backend.daemon.dbutils.FSUtils$.$anonfun$mkdirs$1(DBUtilsCore.scala:182)
at com.databricks.backend.daemon.dbutils.FSUtils$.withFsSafetyCheck(DBUtilsCore.scala:91)
at com.databricks.backend.daemon.dbutils.FSUtils$.mkdirs(DBUtilsCore.scala:180)
at com.databricks.backend.daemon.dbutils.FSUtils.mkdirs(DBUtilsCore.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:750)
