Name: Greg Kiar
Type: User
Company: Child Mind Institute
Bio: openly-reproducible-and-accessibly-scalable-science-enthusiast.
Ph.D. in Biomedical Engineering, Research Scientist @ChildMindInstitute
Twitter: g_kiar
Location: Montreal, Quebec
Blog: gkiar.me
Greg Kiar's Projects
3D Convolutional Siamese AutoEncoder for learning fMRI parametric maps
Sandbox for scripts and data-grooming utilities developed to simplify working with ABCD data
Figures and flowcharts for aces
Sandbox for actigraphy data analysis
Evaluating various aggregation methods for MCA-perturbed datasets
the alzhippos are real
Notes, presentations, plans, miscellaneous code, and links relating to the Analytical Stability project with Camille Maumet and Elisa Fromont at Inria.
A Python package to read/write/query/validate BIDS datasets.
APPIAN is an open-source automated software pipeline for analyzing PET images in conjunction with MRI. The goal of APPIAN is to make PET tracer kinetic data analysis easy for users with moderate computing skills and to facilitate reproducible research.
:atom: The hackable text editor
Utility to crawl and convert files in the B7 Google Drive
this is my example repo
BIDS in JSON
This is a prototype exploring the possibility of creating a self-contained tool to perform the data aggregation necessary to go from events.tsv files, model.json, and other time-series metadata to a design matrix (or an appropriate sparse precursor of one).
Website for the lab
Scripts used to download, tile, and ingest BigBrain into the Boss
JSON schema and tools for boutiques application descriptors.
A report describing the Boutiques system.
Boutiques website
for @gkiar
Art made by brains for brains about brains
Brain Hacking 101 Course for the 2017 HBM Hackathon in Vancouver, British Columbia
Brainhack events webpage
New frontend for brainspell-neo
Scripts and commands run for generating connectomes using the DKT parcellation from Budha's data
Configurable Pipeline for the Analysis of Connectomes
CBRAIN is a flexible Ruby on Rails framework for accessing and processing large data on high-performance computing infrastructures.