
timecorr-paper's Introduction

What is the neural code?


This repository contains data and code used to produce the paper High-level cognition during story listening is reflected in high-order dynamic correlations in neural activity patterns by Lucy L.W. Owen, Thomas H. Chang, and Jeremy R. Manning. You may also be interested in our timecorr Python toolbox for calculating high-order dynamic correlations in timeseries data; the methods implemented in our timecorr toolbox feature prominently in our paper.
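For orientation, here is a minimal sketch of computing dynamic correlations with the timecorr toolbox. The data are random placeholders, and the kernel choice and width are illustrative assumptions; check the toolbox documentation for the exact parameter format.

```python
import numpy as np
import timecorr as tc

data = np.random.randn(300, 10)  # T timepoints x K features

# moment-by-moment correlations, computed with a Gaussian kernel
# (the width below is an arbitrary illustrative choice)
dynamic_corrs = tc.timecorr(data,
                            weights_function=tc.gaussian_weights,
                            weights_params={'var': 100})
```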

This repository is organized as follows:

root
├── code : all code used in the paper
│   ├── notebooks : Jupyter notebooks for the paper's analyses, plus instructions for downloading the data
│   ├── scripts : Python scripts used to run analyses on a computing cluster
│   └── figs : PDF and PNG copies of figures
├── data : create this folder by extracting the following zip archive into your clone of this repository's folder: https://drive.google.com/file/d/1CZYe8eyAkZFuLqfwwlKoeijgkjdW6vFs
└── paper : all files used to generate the paper
    └── figs : PDF copies of each figure

The contents of the data folder are available at the download link above. We also include a Dockerfile to reproduce our computational environment; instructions for using it are below.

Docker setup

  1. Install Docker on your computer (follow the official installation guide for your operating system)
  2. Launch Docker and adjust its preferences to allocate sufficient resources (e.g., > 4 GB of RAM)
  3. Build the Docker image by opening a terminal in this repository's folder and running docker build -t timecorr_paper .
  4. Use the image to create a new container
    • The command below creates a new container that maps your local copy of the repository to /mnt inside the container, so that location is shared between your host OS and the container. It also shares port 9999 with your host computer, so any Jupyter notebooks launched from within the container are accessible in your web browser.
    • docker run -it -p 9999:9999 --name Timecorr_paper -v $PWD:/mnt timecorr_paper
    • If you see the root@ prefix in your terminal, you've successfully created the container and are running a shell inside it!
  5. To launch any of the notebooks, run jupyter notebook from inside the container

Using the container after setup

  1. You can always fire up the container by typing the following into a terminal:
    • docker start --attach Timecorr_paper
    • When you see the root@ prefix, you're inside the container
  2. Stop a running Jupyter notebook server with ctrl + c
  3. Close a running container with ctrl + d or exit from the same terminal window you used to launch it, or with docker stop Timecorr_paper from any other terminal window

timecorr-paper's People

Contributors: jeremymanning, lucywowen, paxtonfitzpatrick

timecorr-paper's Issues

new pca analyses

Analyses:

  • For each condition, run PCA for each subject and reduce to N components (starting with N = 10)
  • Using the resulting timepoints × N components matrices, divide into 2 groups and run corrmean_combine
  • For each cumulative set of components (component 1; components 1–2; components 1–3; etc.), test how well we can decode
  • Redo the optimized weighted-level analysis with PCA as the reduction technique
  • To do this, keep track of the groups when running ISFC, but combine the groups for the reduction step
    • Set rfun to None and write a wrapper function that takes the in-fold and out-of-fold correlations, combines them, reduces the combined data using PCA, and then returns the reduced, smoothed in-fold and out-of-fold data (see the sketch below)
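A hypothetical sketch of that wrapper; the function name, argument shapes, and component count are illustrative, not the repo's actual implementation:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_reduce_combined(infold_corrs, outfold_corrs, n_components=10):
    """Combine in-fold and out-of-fold correlations (timepoints x features),
    fit a single PCA on the combined data, and return both groups projected
    into the shared low-dimensional space."""
    t_in = infold_corrs.shape[0]
    combined = np.vstack([infold_corrs, outfold_corrs])
    reduced = PCA(n_components=n_components).fit_transform(combined)
    return reduced[:t_in], reduced[t_in:]
```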

Sanity check 1) Check using eigenvector centrality - should return the same values as before
Sanity check 2) Scramble the time points for each person - should get chance decoding

figure 2 fonts

Please make the kernel figure (Fig. 2) fonts bigger to match the other figures.

Create timecorr kernels figure

three panels showing weights over time (with times shown for t = 0...100 and the weights function evaluated at t = 50):

  • delta function
  • gaussian weights (with \sigma = 10, 25, 50, 100, 1000 each in a different color, plus a legend)
  • laplace weights (with \lambda = 10, 25, 50, 100, 1000, each in a different color, plus a legend)
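A rough sketch of that figure; the kernel normalizations here come from scipy and may differ from timecorr's own implementations:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm, laplace

ts = np.arange(101)  # t = 0, ..., 100
t0 = 50              # evaluate each weights function at t = 50

fig, axes = plt.subplots(1, 3, figsize=(12, 3))

# delta function: all weight on t = 50
delta = np.zeros(len(ts))
delta[t0] = 1.0
axes[0].plot(ts, delta)
axes[0].set_title('Delta')

# gaussian weights, one curve per width
for sigma in (10, 25, 50, 100, 1000):
    axes[1].plot(ts, norm.pdf(ts, loc=t0, scale=sigma), label=rf'$\sigma = {sigma}$')
axes[1].set_title('Gaussian')
axes[1].legend()

# laplace weights, one curve per scale
for lam in (10, 25, 50, 100, 1000):
    axes[2].plot(ts, laplace.pdf(ts, loc=t0, scale=lam), label=rf'$\lambda = {lam}$')
axes[2].set_title('Laplace')
axes[2].legend()

plt.tight_layout()
plt.show()
```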

prepare pieman datasets

  • use HTFA, PCA, ICA, k-means clustering (or similar) to reduce dimensionality (hopefully HTFA, re-using fits from Manning et al. 2018)
  • need per-subject T by K matrices (I think these are already stored on discovery)
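If the HTFA fits aren't available, a minimal PCA fallback for producing the per-subject T by K matrices might look like this (the function name and the choice of K are hypothetical):

```python
from sklearn.decomposition import PCA

def reduce_subjects(subject_data, k=100):
    """subject_data: list of (T x V) voxel activity arrays, one per subject.
    Returns a list of (T x K) matrices, reduced independently per subject."""
    return [PCA(n_components=k).fit_transform(x) for x in subject_data]
```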

add some brain results

some ideas:

  • for each level, characterize the proportion of connections (for the intact condition) that are left-to-right, dorsal-to-ventral, and/or anterior-to-posterior. Do this separately for positive vs. negative connections.
  • see which lobes, brain areas, and/or "networks" (defined by Yeo et al. or another paper) tend to exhibit strong connections
  • threshold connections (at some strength) and see how long connections persist (expressed as a proportion of the story)
  • color different nodes according to how many strong connections involve that node (or sum up the duration proportions of all connections involving that node), then use this to generate a .nii plot and display it in some nice-looking way (surface plot using pycortex?); a rough sketch of the per-node computation follows below
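A rough sketch of the thresholding and node-summing step; the array layout and threshold value are assumptions:

```python
import numpy as np

def node_connection_strengths(corrs, threshold=0.5):
    """corrs: (T x K x K) array of dynamic correlation matrices.
    For each node, sums the proportions of the story during which each of
    its connections exceeds the threshold."""
    strong = np.abs(corrs) > threshold   # T x K x K boolean mask
    persistence = strong.mean(axis=0)    # K x K: proportion of timepoints
    np.fill_diagonal(persistence, 0.0)   # ignore self-connections
    return persistence.sum(axis=1)       # one value per node
```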

debugging timecorr

I'm using this notebook to debug timecorr and show how it can recover parameters from synthetic data.

However, in some cases it's hard to tell if the parameters are being recovered correctly, or under what circumstances it should be possible to recover them. Need to explore further...
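One simple version of that check, sketched here with placeholder kernel parameters (the exact layout of timecorr's output should be verified against the toolbox docs):

```python
import numpy as np
import timecorr as tc

# synthetic data with a known, time-invariant correlation structure
K = 5
truth = 0.5 * np.eye(K) + 0.5          # known K x K correlation matrix
data = np.random.multivariate_normal(np.zeros(K), truth, size=500)

recovered = tc.timecorr(data,
                        weights_function=tc.gaussian_weights,
                        weights_params={'var': 100})

# recovered holds one vectorized correlation matrix per timepoint; its
# time-average should approximate the entries of `truth`
print(data.shape, recovered.shape)
```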

final NatComm edits to do:

  • Include the LaTeX source with the final submission
  • Upload the updated editorial checklist (signed)
  • Upload the revised reporting summary (signed)
  • Remove bold from display items (so I'm assuming that includes captions too)
  • Make sure all math conforms to the guidelines
  • Link the repo to Zenodo
  • Check that all figures are referenced in order
  • Put supplementary references at the end of the supplementary file (J help please!)
  • Figure out the source data

make results figures

  • methods figure (cartoon)
  • kernels figure showing different weights functions (DONE)
  • synthetic data recovery (correlations)
  • timepoint decoding on synthetic and real data by level (real data: break down by condition)
    • add hypertools trajectory plots for each pieman condition/level?
  • pieman: future prediction by level/condition

create supplemental materials

Figures:
  • S1: t-test matrices (heatmap)
  • S2: intact condition: detailed (order 15) brain plots and terms
  • S3: paragraph condition
  • S4: word condition
  • S5: rest condition

papers to cite

  • Allen et al., 2014
  • Barttfeld et al., 2015
  • Chen et al., 2016a
  • Handwerker et al., 2012
  • Yang et al., 2014
  • Marusak et al., 2016
  • Miller et al., 2014
  • Damaraju et al., 2014
  • Zalesky et al., 2014
  • Rashid et al., 2014
  • Shakil et al., 2016
  • Sourty et al., 2016a
  • Sourty et al., 2016b
  • Yaesoubi et al., 2015b
  • Betzel et al., 2016
  • Leonardi and Van De Ville, 2015
  • Hutchison et al., 2013a
  • Zalesky and Breakspear, 2015
  • Chang and Glover, 2010
  • Yaesoubi et al., 2015a
  • Cribben et al., 2012
  • Cribben et al., 2013
  • Xu and Lindquist, 2015

new plot idea: decoding benefit of each level

another way to plot the optimization results:
1.) after optimizing the weights, compute the decoding accuracy (or error, or rank)
2.) for each level in turn, set that level's weight to 0 (leaving all others fixed at their optimized values), re-normalize the weights to sum to 1, and compute the new decoding performance (if the optimization code worked, it should be lower than the performance with every level included)
3.) for each level, plot the improvement in performance when that level is included vs. not (see the sketch below)
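A sketch of that computation; decoding_accuracy is a hypothetical stand-in for the repo's actual decoder:

```python
import numpy as np

def level_benefits(opt_weights, decoding_accuracy):
    """opt_weights: optimized per-level mixing weights, summing to 1.
    decoding_accuracy: callable mapping a weight vector to performance.
    Returns the drop in performance when each level is zeroed out."""
    base = decoding_accuracy(opt_weights)
    benefits = np.zeros(len(opt_weights))
    for i in range(len(opt_weights)):
        w = np.array(opt_weights, dtype=float)
        w[i] = 0.0
        if w.sum() > 0:
            w /= w.sum()                 # re-normalize to sum to 1
        benefits[i] = base - decoding_accuracy(w)
    return benefits
```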
