
Interventional Bag Multi-Instance Learning On Whole-Slide Pathological Images

PyTorch implementation of the multiple instance learning model described in the paper Interventional Bag Multi-Instance Learning On Whole-Slide Pathological Images (CVPR 2023, selected as a highlight).

Installation

a. Create a conda virtual environment and activate it.

conda create -n ibmil python=3.7 -y
conda activate ibmil

b. Install PyTorch and torchvision following the official instructions, e.g.,

conda install pytorch torchvision -c pytorch

c. Install other third-party libraries.

Stage 1: Data pre-processing and computing features

Please refer to dsmil for these steps.

  • Data pre-processing: download the raw WSI data and prepare the patches.
  • Computing features: train the feature extractor and use it to compute instance-level features. Note that the default feature extractor is ResNet, which can be replaced by other networks, e.g., ViT and CTransPath. Download the MoCo v3 pretrained ViT and the SRCL pretrained CTransPath from https://github.com/Xiyue-Wang/TransPath. The pre-computed features (over 100 GB) will be released. A minimal feature-extraction sketch is shown below this list.
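
For reference, the sketch below shows minimal instance-level feature extraction with an ImageNet-pretrained ResNet-18 (512-d output, matching the --feats_size 512 used in the examples below). It is not the dsmil pipeline; the patch folder layout and file names are hypothetical.

# Minimal sketch (not the dsmil pipeline): compute 512-d instance features for
# one bag with an ImageNet-pretrained ResNet-18. Paths are hypothetical.
import glob
import numpy as np
import torch
import torchvision.transforms as T
from torchvision import models
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

backbone = models.resnet18(pretrained=True)   # older torchvision API, matching the py3.7 env
backbone.fc = torch.nn.Identity()             # drop the classifier -> 512-d features
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def bag_features(patch_dir):
    feats = []
    for path in sorted(glob.glob(f"{patch_dir}/*.png")):
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        feats.append(backbone(x).squeeze(0).cpu().numpy())
    return np.stack(feats)                    # shape: [num_patches, 512]

# np.save("features/slide_001.npy", bag_features("patches/slide_001"))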

Stage 2: Training the aggregator and generating confounders

The aggregator is first trained end to end with bag-level labels. A conceptual sketch of an attention-based aggregator is shown after the command list below.

  • For abmil and dsmil:
    python train_tcga.py --num_classes [according to your dataset] --dataset [C16/tcga] --agg no --feats_size [size of pre-computed features] --model [abmil/dsmil]
    
  • For TransMIL:
    python train_tcga_transmil.py --num_classes [according to your dataset] --dataset [C16/tcga] --agg no --feats_size [size of pre-computed features] --model transmil
    
  • For DTFD-MIL:
    python train_tcga_DTFD.py --num_classes [according to your dataset] --dataset [C16/tcga] --agg no --feats_size [size of pre-computed features] --model DTFD
    
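For orientation, the following is a conceptual sketch of an ABMIL-style attention aggregator operating on pre-computed instance features. It is a simplification, not the module defined in this repository; feats_size=512 and num_classes=1 mirror the Camelyon16 example below.

# Conceptual sketch of an ABMIL-style attention aggregator; not the exact
# module used by train_tcga.py. feats_size=512 / num_classes=1 are assumptions.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feats_size=512, hidden=128, num_classes=1):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feats_size, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.classifier = nn.Linear(feats_size, num_classes)

    def forward(self, bag):                            # bag: [num_instances, feats_size]
        a = torch.softmax(self.attention(bag), dim=0)  # attention weights over instances
        z = (a * bag).sum(dim=0)                       # weighted bag-level representation
        return self.classifier(z), a

# bag = torch.randn(1000, 512)        # one bag of 1000 instance features
# logits, attn = AttentionMIL()(bag)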

The confounders are then generated with the pre-trained aggregator. A conceptual clustering sketch is shown after the command list below.

  • For abmil, dsmil and TransMIL:
    python clustering.py --num_classes [according to your dataset] --dataset [C16/tcga] --feats_size [size of pre-computed features] --model [abmil/transmil/dsmil] --load_path [path of pre-trained aggregator]
    
  • For DTFD-MIL:
    python clustering_DTFD.py --num_classes [according to your dataset] --dataset [C16/tcga] --feats_size [size of pre-computed features] --model DTFD --load_path [path of pre-trained aggregator]
    
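Conceptually, the confounder dictionary is a small set of prototypes obtained by clustering features produced with the pre-trained aggregator. The sketch below illustrates this with scikit-learn k-means; it is not the code in clustering.py, and the input path is hypothetical (k=8 mirrors the *_proto_8.npy example below).

# Conceptual sketch only: build k confounder prototypes by k-means clustering
# of features obtained with the pre-trained aggregator. The input .npy path is
# hypothetical; k=8 mirrors the *_proto_8.npy example below.
import numpy as np
from sklearn.cluster import KMeans

bag_feats = np.load("datasets_deconf/Camelyon16_Img_nor/train_bag_feats.npy")  # [num_bags, feats_size]
kmeans = KMeans(n_clusters=8, random_state=0).fit(bag_feats)
confounders = kmeans.cluster_centers_                                          # [8, feats_size]
np.save("datasets_deconf/Camelyon16_Img_nor/train_bag_cls_agnostic_feats_proto_8.npy", confounders)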

An example with an ImageNet-pretrained ResNet-18 feature extractor, the abmil MIL model, the Camelyon16 dataset, and a load_path of pretrained_weights/agg.pth:

python train_tcga.py --num_classes 1 --dataset Camelyon16_Img_nor --agg no --feats_size 512 --model abmil
python clustering.py --num_classes 1 --dataset Camelyon16_Img_nor --feats_size 512 --model abmil --load_path pretrained_weights/agg.pth

Stage 3: Interventional training

MIL models are then retrained with the proposed interventional training, which is activated whenever `--c_path` is specified. A conceptual sketch of how the confounders enter the model is shown after the command list below.

  • For abmil and dsmil:
    python train_tcga.py --num_classes [according to your dataset] --dataset [C16/tcga] --agg no --feats_size [size of pre-computed features] --model [abmil/dsmil] --c_path [path of the generated confounders]
    
  • For TransMIL:
    python train_tcga_transmil.py --num_classes [according to your dataset] --dataset [C16/tcga] --agg no --feats_size [size of pre-computed features] --model transmil --c_path [path of the generated confounders]
    
  • For DTFD-MIL:
    python train_tcga_DTFD.py --num_classes [according to your dataset] --dataset [C16/tcga] --agg no --feats_size [size of pre-computed features] --model DTFD --c_path [path of the generated confounders]
    
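To make the role of `--c_path` concrete, the sketch below shows one way a confounder dictionary could be folded into the classifier via attention, in the spirit of backdoor adjustment. It is not the implementation in the training scripts; the module and variable names are hypothetical.

# Conceptual sketch, not the repository's implementation: attend over the
# confounder dictionary loaded from --c_path and combine it with the bag
# representation before classification (backdoor-adjustment flavour).
import numpy as np
import torch
import torch.nn as nn

class DeconfoundedHead(nn.Module):
    def __init__(self, feats_size, confounder_path, num_classes=1):
        super().__init__()
        conf = torch.from_numpy(np.load(confounder_path)).float()  # [k, feats_size]
        self.register_buffer("confounders", conf)
        self.query = nn.Linear(feats_size, feats_size)
        self.classifier = nn.Linear(2 * feats_size, num_classes)

    def forward(self, bag_repr):                         # bag_repr: [feats_size]
        q = self.query(bag_repr)
        scores = self.confounders @ q / q.shape[0] ** 0.5
        attn = torch.softmax(scores, dim=0)              # weights over the k confounders
        conf_ctx = attn @ self.confounders               # weighted confounder context
        return self.classifier(torch.cat([bag_repr, conf_ctx], dim=-1))

# head = DeconfoundedHead(512, "datasets_deconf/Camelyon16_Img_nor/train_bag_cls_agnostic_feats_proto_8.npy")
# logits = head(torch.randn(512))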

An example with an ImageNet-pretrained ResNet-18 feature extractor, the abmil MIL model, the Camelyon16 dataset, and a c_path of datasets_deconf/Camelyon16_Img_nor/train_bag_cls_agnostic_feats_proto_8.npy:

python train_tcga.py --num_classes 1 --dataset Camelyon16_Img_nor --agg no --feats_size 512 --model abmil --c_path datasets_deconf/Camelyon16_Img_nor/train_bag_cls_agnostic_feats_proto_8.npy

TODO

  • Code refactoring
  • Improve documentation and optimize project setup procedures
