
doctorai's People

Contributors

jeremykid, mp2893, wkkmike

doctorai's Issues

Softmax for multi-label classification?

Hi, since the RNN is used to perform a multi-label classification task, shouldn't a Sigmoid layer be used instead of a Softmax layer at the end to calculate the probabilities? Wouldn't Softmax be better suited to a multi-class (single-label) classification problem? Please let me know your thoughts!
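
For context, a minimal NumPy sketch of the distinction being raised (my own illustration, not code from this repo): softmax normalizes the scores into a single distribution over classes, while sigmoid scores each label independently.

    import numpy as np

    def softmax(z):
        # Scores compete: outputs sum to 1, so exactly one class is
        # favored -- natural for single-label (multi-class) problems.
        e = np.exp(z - z.max())
        return e / e.sum()

    def sigmoid(z):
        # Each label is scored independently in (0, 1), so several
        # labels can be "on" at once -- natural for multi-label problems.
        return 1.0 / (1.0 + np.exp(-z))

    logits = np.array([2.0, 1.5, -1.0])
    print(softmax(logits))   # sums to 1: pick one class
    print(sigmoid(logits))   # threshold each label separately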

How to reproduce your reported recall

Hi Edward,

I tuned hyperparameters such as the number of RNN layers, the RNN size, and the embedding size. However, the results are disappointing: the recall@30 is about 0.35.
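
For reference, recall@k in the paper is per-visit top-k recall: the fraction of the true next-visit codes that appear among the k highest-scoring predictions, averaged over visits. A minimal sketch of the metric for one visit (my own illustration, not code from this repo):

    import numpy as np

    def recall_at_k(true_codes, scores, k=30):
        # Fraction of the true next-visit codes that land in the
        # model's top-k predictions for that visit.
        topk = set(np.argsort(scores)[-k:].tolist())
        return len(set(true_codes) & topk) / float(len(true_codes))

    scores = np.random.rand(1000)          # stand-in for one visit's code scores
    print(recall_at_k([3, 17, 512], scores))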

Therefore, first of all I want to reproduce your result.
Could you share your pre-trained model, the one trained on the Sutter database?
I don't have access to the Sutter database, but I have MIMIC-III (MIMIC-II is a part of MIMIC-III), so I could apply your pre-trained model to MIMIC-III. Maybe I can get the same result as in your paper.

thank you very much!
jianglong

A question about doctorai

Dr. Edward Choi, I am very interested in your work on disease risk prediction, and I have a question about the experimental setting. You described in your paper that you tested the model's performance on different prediction windows. As I understand it, the model is trained on the data setting of "using the 18-6 month records to predict the risk for the future 6 months".

Would you use the data in the prediction window for evaluation/testing?

Another question is about the negative examples (controls):

"Up to 10 eligible primary care clinic-, sex-, and age-matched (in 5-year intervals) controls were selected for each incident HF case, yielding an overall ratio of 9 controls per case. Each control was also assigned an index date, which was the HFDx timepoint of the matched case. Primary care patients were eligible to be controls if they did not meet the operational criteria for HF diagnosis prior to the HFDx timepoint plus 182 days of their corresponding case. Other details on matching are described in the supplementary section."

As I understand it, the negative examples are patients who have no heart failure in their records. Suppose we have a patient with records from 1990.1 to 1991.12, a heart failure diagnosis in 1991.6, and an age of 50 in 1991; one possible control could be a patient with records from 1985.1 to 1986.12, without heart failure, who was also 50 years old in 1986.

How do we use the HFDx timepoint of the matched case?

In addition, do you have any requirements on the frequency of diagnoses/number of codes in the observation periods? Training data that are too sparse (few codes) could make the model hard to converge, and so little information could be insufficient for prediction.

Theano: We didn't implemented yet the case where scan do 0 iteration

Hi Dr. Edward Choi,
I have been following your work which I think is quite fascinating and impactful.

I am trying to implement Doctor AI using ICD codes on large-scale, population-level data. However, I am currently facing the following error:

Loading data ... done
Optimization start !!
Traceback (most recent call last):
  File "doctorAI.py", line 522, in <module>
    verbose=args.verbose
  File "doctorAI.py", line 448, in train_doctorAI
    cost = f_grad_shared(x, y, mask, lengths)
  File "/home/cvc2gpu16/anaconda3/envs/gct/lib/python2.7/site-packages/theano/compile/function_module.py", line 917, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/home/cvc2gpu16/anaconda3/envs/gct/lib/python2.7/site-packages/theano/gof/link.py", line 325, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/home/cvc2gpu16/anaconda3/envs/gct/lib/python2.7/site-packages/theano/compile/function_module.py", line 903, in __call__
    self.fn() if output_subset is None else\
  File "/home/cvc2gpu16/anaconda3/envs/gct/lib/python2.7/site-packages/theano/scan_module/scan_op.py", line 963, in rval
    r = p(n, [x[0] for x in i], o)
  File "/home/cvc2gpu16/anaconda3/envs/gct/lib/python2.7/site-packages/theano/scan_module/scan_op.py", line 952, in p
    self, node)
  File "scan_perform.pyx", line 215, in theano.scan_module.scan_perform.perform
NotImplementedError: We didn't implemented yet the case where scan do 0 iteration
Apply node that caused the error: forall_inplace,cpu,gru_layer0}(Shape_i{0}.0, Elemwise{sub,no_inplace}.0, InplaceDimShuffle{0,1,x}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, IncSubtensor{InplaceSet;:int64:}.0, U_z_001, U_r_001, U_001, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{x,0}.0)
Toposort index: 328
Inputs types: [TensorType(int64, scalar), TensorType(float64, (False, False, True)), TensorType(float64, (False, False, True)), TensorType(float64, 3D), TensorType(float64, 3D), TensorType(float64, 3D), TensorType(float64, 3D), TensorType(float64, matrix), TensorType(float64, matrix), TensorType(float64, matrix), TensorType(float64, row), TensorType(float64, row), TensorType(float64, row)]
Inputs shapes: [(), (0, 100, 1), (0, 100, 1), (0, 100, 200), (0, 100, 200), (0, 100, 200), (2, 100, 200), (200, 200), (200, 200), (200, 200), (1, 200), (1, 200), (1, 200)]
Inputs strides: [(), (800, 8, 8), (800, 8, 8), (160000, 1600, 8), (160000, 1600, 8), (160000, 1600, 8), (160000, 1600, 8), (1600, 8), (1600, 8), (1600, 8), (1600, 8), (1600, 8), (1600, 8)]
Inputs values: [array(0), array([], shape=(0, 100, 1), dtype=float64), array([], shape=(0, 100, 1), dtype=float64), array([], shape=(0, 100, 200), dtype=float64), array([], shape=(0, 100, 200), dtype=float64), array([], shape=(0, 100, 200), dtype=float64), 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown']
Outputs clients: [[Subtensor{int64:int64:int8}(forall_inplace,cpu,gru_layer0}.0, ScalarFromTensor.0, ScalarFromTensor.0, Constant{1}), Subtensor{int64:int64:int64}(forall_inplace,cpu,gru_layer0}.0, ScalarFromTensor.0, ScalarFromTensor.0, Constant{-1})]]

I understand from the Theano forum that the problem is that scan runs for 0 iterations during the recurrence, and Theano does not support that. So I wonder what in the Doctor AI code or in my dataset is causing it to end up with 0 iterations. I think I have formatted the input files as described on the README page. Some patients have only one event. Is that the issue here?
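
If single-visit patients are indeed the cause (an assumption on my part: Doctor AI predicts the next visit, so a patient with only one visit yields a zero-length training sequence, and a minibatch made up entirely of such patients would give scan nothing to iterate over), a quick way to test that hypothesis is to drop them before training. A sketch, with placeholder file names:

    import pickle

    # The .seqs file is the pickled list of patients; each patient is a
    # list of visits; each visit is a list of medical-code integers.
    with open('visit.seqs', 'rb') as f:
        seqs = pickle.load(f)
    with open('label.seqs', 'rb') as f:
        labels = pickle.load(f)

    # Keep only patients with at least two visits, so every training
    # sequence contains at least one (input visit -> next visit) pair.
    keep = [i for i, patient in enumerate(seqs) if len(patient) >= 2]
    seqs = [seqs[i] for i in keep]
    labels = [labels[i] for i in keep]

    with open('visit.filtered.seqs', 'wb') as f:
        pickle.dump(seqs, f, -1)
    with open('label.filtered.seqs', 'wb') as f:
        pickle.dump(labels, f, -1)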

Your advice will be greatly appreciated.

OS: Ubuntu 16.04
Python 2.7
Theano 1.0.4
CUDA Version 10.1.243

Thank you,
Sunil

Pre-trained concept embeddings seem to barely improve model performance?

Hi Edward,

I have been reading some NLP papers showing that pre-trained embeddings can greatly improve downstream NLP tasks, such as NMT. I am wondering whether it is still true that pre-trained concept embeddings can improve healthcare tasks, like the disease prediction in your paper.

It seems that in your experiment the pre-trained concept embeddings barely improve model performance (only a 1% increase in the recall rate), while the RNN pre-trained on the Sutter dataset clearly and substantially improves performance on the MIMIC dataset.

So, why do the pre-trained concept embeddings seem to barely improve model performance?

Is it because:

  1. The quality of the pre-trained embeddings is not good enough, since the data source is relatively small and skip-gram suffers under Zipf's law.
  2. There is an aggregation operation at the visit level, which sums all the medical code embeddings and hence downplays the influence of any individual code embedding (see the sketch after this list).
  3. The characteristic of the task itself: there are too many new diseases that are unpredictable, so a 70% recall rate is already close to the maximum achievable performance.
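
To make point 2 concrete, a minimal NumPy sketch of visit-level sum aggregation (my own illustration; the vocabulary and embedding sizes are made up):

    import numpy as np

    vocab_size, emb_dim = 5000, 200
    W_emb = np.random.randn(vocab_size, emb_dim)   # stand-in for pre-trained code embeddings

    def visit_vector(code_ids):
        # Summing every code embedding in a visit collapses them into
        # one vector, so the contribution of any single pre-trained
        # embedding is diluted as visits grow longer.
        return W_emb[code_ids].sum(axis=0)

    print(visit_vector([12, 340, 1999]).shape)     # (200,)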

thank you so much!
Xianlong

Questions about calculate_r_squared

Hello Dr. Edward Choi

I am reading the code and trying to follow your work. I have questions in calculate_r_squared function.

Should the mean_duration used in R² (the coefficient of determination) be adjusted according to the use_log_time argument in testDoctorAI.py?

In my understanding, R² (the coefficient of determination) is

    R² = 1 − SS_res / SS_tot = 1 − Σᵢ (yᵢ − ŷᵢ)² / Σᵢ (yᵢ − ȳ)²

where ȳ is the mean of the observed values y.

In the calculate_r_squared function in testDoctorAI.py, I assume mean_duration is ȳ, the mean of y, in R².

The help text says mean_duration is the mean value of the durations.

But when use_log_time = 1, mean_duration is still computed from the number of days instead of log(days).
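
In other words, the suggestion is that the mean in the SS_tot term should be on the same scale as the residuals. A hypothetical sketch (my own code, not the repo's; the function name merely mirrors the one under discussion):

    import numpy as np

    def calculate_r_squared_sketch(durations_true, durations_pred, use_log_time=False):
        # When durations are modeled in log space, transform the targets
        # and predictions, and take the mean over log(days) -- not raw days.
        y = np.log(durations_true) if use_log_time else np.asarray(durations_true, dtype=float)
        y_hat = np.log(durations_pred) if use_log_time else np.asarray(durations_pred, dtype=float)
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)   # y.mean() is mean(log(days)) when use_log_time=True
        return 1.0 - ss_res / ss_tot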
