jsherfey / dnsim
Dynamic Neural Simulator - a modular modeling tool for large ODE systems.
While trying to tackle #66 (I've got the coder working on the cluster, but not locally yet), I've been battle-testing simulations that use the coder to make sure I have it working. However, using the coder (via the default MATLAB 2013a version) doesn't seem to work for anything larger than roughly 400 (+/- 100) minimal Hodgkin-Huxley cells. This seems to be due to a hard upper limit (that of intmax) on the number of elements in any one MATLAB matrix, a limit that the generated code is hitting. The error output is further below.
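For scale, here is a back-of-envelope element count for this batch. This is a sketch only: it assumes 4 state variables per minimal HH cell and a single steps-by-states history matrix, which may not match DNSIM's actual storage; all variable names are illustrative.

```matlab
% Rough element count for 500 minimal HH cells over this batch's tspan.
% Assumes 4 state variables per cell (V, m, h, n) stored in one
% steps-by-states double matrix; names here are illustrative.
ncells  = 500;
nstates = 4;
dt      = 0.01;                  % ms
tspan   = [0 8000];              % ms
nsteps  = diff(tspan)/dt + 1;    % 800001 integration steps
nelems  = nsteps * ncells * nstates;
fprintf('elements: %d, intmax: %d\n', nelems, intmax);
% ~1.6e9 elements is already within a factor of two of intmax (2^31-1),
% so a transient copy made while growing the matrix via cat can exceed it.
```

That puts this problem size right in the regime where "Cannot grow a matrix with greater than intmax() elements" becomes plausible.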
I have tried running cluster jobs under newer versions of MATLAB by replacing the central MATLAB call in dnsim/csh/qmatjob with /usr/local/apps/matlab-2014b/bin/matlab, but for some reason the cluster still runs the job with 2013a, the default version. This table seems to indicate that the maximum matrix size changed between "MATLAB 7.4 and 7.5", but that switch was apparently in 2007, so I don't know whether changes in the last two years will have any effect.
Here is a simple script to run 500 Hodgkin-Huxley neurons with just Na and K channels and a tonic stimulus. @jsherfey and @kupiqu may be the only ones with code that uses the coder and works; if anyone has spare time (I know it's SFN) and their codegen working, can you test it? It should work with any DNSIM code that has the default mechanisms.
cd '~/x010-dnsim/dnsim/'; % where my DNSIM code is
%% Demo: DNSim batch simulations
% Purpose: demonstrate how to run simulation batches varying model parameters
% on local machine or cluster with or without codegen.
% Created by Jason Sherfey (27-Mar-2015)
% -------------------------------------------------------------------------
% model specification
numcells = 500;
spec=[];
spec.nodes(1).label = 'HHA'; % name of this node population
spec.nodes(1).multiplicity = numcells; % size of this node population
spec.nodes(1).dynamics = {'V''=current/c'}; % node dynamics for state variables that can be "seen" by other nodes
spec.nodes(1).mechanisms = {'iNa','iK','itonic'}; % note: corresponding mechanism files (e.g., iNa.txt) should exist in matlab path
spec.nodes(1).parameters = {'V_IC',-65+65*rand(1,numcells),'stim',7,'c',1}; % user-specified parameter values (note: these override default values set in mechanism files)
% simulation controls
tspan=[0 8000]; % [beg end], ms, simulation time limits
SOLVER='euler'; % numerical integration method
dt=.01; % integration time step [ms]
dsfact=1; % downsample factor (applied to simulated data)
% again, where my code lies - this is sent to 'addpath' in the simstudy call
codepath='~/x010-dnsim/dnsim'; % path to dnsim toolbox
%% SIMULATION BATCHES
% output controls
% where the data will be output
rootdir = '/projectnb/crc-nak/asoplata/dnsim_data/HH_testbed'; % where to save outputs
% don't save anything
savedata_flag = 0; % 0 or 1, whether to save the simulated data (after downsampling)
saveplot_flag = 0; % 0 or 1, whether to save plots
plotvars_flag = 0; % 0 or 1, whether to plot state variables
overwrite_flag = 1; % 0 or 1, whether to overwrite existing outputs
coder = 1; % 0 or 1, whether to compile the ODE function with codegen
cluster_flag=1; % 0 or 1, whether to submit jobs to a cluster (requires: running on a cluster node that recognizes the command 'qsub')
% memlimit = '8G';
memlimit = '254G';
% what to vary across batch
scope = {'(HHA)'}; % node or connection label whose parameter you want to vary
variable = {'stim'}; % parameter to vary (e.g., capacitance)
values = {'[2]'}; % values for each simulation in batch (varied across batches)
%{
Prerequisites for submitting jobs to cluster:
1. add path-to-dnsim/csh to your environment.
e.g., add to .bashrc: "export PATH=$PATH:$HOME/dnsim/csh"
2. run matlab on cluster node that recognizes the command 'qsub'
%}
% % -------------------------------------------------------------------------
% % LOCAL BATCHES (run serial jobs on current machine)
% cluster_flag=0; % 0 or 1, whether to submit jobs to a cluster (requires: running on a cluster node that recognizes the command 'qsub')
%
% % without codegen
% values = '[1 1.5]';
% [~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
% 'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
% 'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
% 'cluster_flag',cluster_flag,'coder',0);
%
% % with codegen
% values = '[2 2.5]';
% [~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
% 'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
% 'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
% 'cluster_flag',cluster_flag,'coder',1);
% -------------------------------------------------------------------------
% CLUSTER BATCHES (submit to queue for running parallel jobs on cluster nodes)
% cluster_flag=1; % 0 or 1, whether to submit jobs to a cluster (requires: running on a cluster node that recognizes the command 'qsub')
% without codegen
% values = '[3 3.5]';
% [~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
[~,~,outdir]=simstudy(spec,scope,variable,values,...
'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
'cluster_flag',cluster_flag,'coder',coder,'overwrite_flag',overwrite_flag,'memlimit',memlimit);
% % with codegen
% values = '[4 4.5]';
% [~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
% 'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
% 'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
% 'cluster_flag',cluster_flag,'coder',1);
% exit
The relevant part of the output log is:
[t,y]=ode23(ODEFUN,[0 100],IC); % numerical integration
figure; plot(t,y); % plot all variables/functions
try legend('HHA\_V','HHA\_iNa\_mNaf','HHA\_iNa\_hNaf','HHA\_iK\_nKf'); end
%-----------------------------------------------------------
Starting the simulation...
Using previous mex file: odefun_20151016_182724_141_mex
Simulation interval: 0-8000
Starting integration (euler, dt=0.01)
Cannot grow a matrix with greater than intmax() elements.
in cat (line 26)
in odefun (line 33)
Error: Cannot grow a matrix with greater than intmax() elements.
in cat (line 26)
in odefun (line 33)
saving cluster log (/usr3/graduate/asoplata/batchdirs/B20151016-182722/pbsout/job1.out) to /projectnb/crc-nak/asoplata/dnsim_data/HH_testbed/HHA-stim/20151016/logs/job0001_HHA-stim2_time0-8000_job1.out.
cp: cannot create regular file `/projectnb/crc-nak/asoplata/dnsim_data/HH_testbed/HHA-stim/20151016/logs/job0001_HHA-stim2_time0-8000_job1.out': No such file or directory
cp: cannot create regular file `/projectnb/crc-nak/asoplata/dnsim_data/HH_testbed/HHA-stim/20151016/logs/job0001_HHA-stim2_time0-8000_job1.err': No such file or directory
finished
This is a really weird one, since I can't figure out how I haven't run across it before, nor why it hasn't occurred for anyone else. Note that everything I say applies to running a cluster job; local sims are a different story. I first noticed this when trying to run my own cells, but I gradually simplified things until the error occurred even on the Example Batch Script that Jason has sent some people (sent to me on 2015/03/27).
Can other people PLEASE try to recreate this bug (e.g. by just running the script below, changing only codepath) using an up-to-date copy of the GitHub version of DNSIM, or whichever altered version of DNSIM (e.g. one you got from Jason directly) you actually use on the cluster?
If I take this minimal/extremely simple MATLAB script, where the only simulation run is the one sent to the cluster without using the coder, and I run it with multiplicity set to > 1, then I get an Artifex Ghostscript system error, and the system is unable to save any voltage trace plots (or other plots). Edit: note that you have to change where your DNSIM code is installed, but other than that, if you've ever run a sim on the cluster successfully before, this should work without you having to change anything else.
%% Demo: DNSim batch simulations
% Purpose: demonstrate how to run simulation batches varying model parameters
% on local machine or cluster with or without codegen.
% Created by Jason Sherfey (27-Mar-2015)
% -------------------------------------------------------------------------
% model specification
spec=[];
spec.nodes(1).label = 'HH'; % name of this node population
spec.nodes(1).multiplicity = 2; % size of this node population
spec.nodes(1).dynamics = {'V''=current/c'}; % node dynamics for state variables that can be "seen" by other nodes
spec.nodes(1).mechanisms = {'iNa','iK','itonic'}; % note: corresponding mechanism files (e.g., iNa.txt) should exist in matlab path
spec.nodes(1).parameters = {'stim',7,'c',1}; % user-specified parameter values (note: these override default values set in mechanism files)
% simulation controls
tspan=[0 3000]; % [beg end], ms, simulation time limits
SOLVER='euler'; % numerical integration method
dt=.01; % integration time step [ms]
dsfact=10; % downsample factor (applied to simulated data)
% THIS IS THE ONLY THING YOU NEED TO CHANGE
% THIS IS THE ONLY THING YOU NEED TO CHANGE
codepath='~/x010-dnsim/dnsim'; % path to dnsim toolbox
% THIS IS THE ONLY THING YOU NEED TO CHANGE
% THIS IS THE ONLY THING YOU NEED TO CHANGE
% %% SINGLE SIMULATION
% % local simulation with codegen
% data = runsim(spec,'timelimits',tspan,'dt',dt,'SOLVER',SOLVER,'dsfact',dsfact,...
% 'coder',1,'verbose',0);
% plotv(data,spec,'varlabel','V');
%% SIMULATION BATCHES
% output controls
rootdir = pwd; % where to save outputs
savedata_flag = 0; % 0 or 1, whether to save the simulated data (after downsampling)
saveplot_flag = 1; % 0 or 1, whether to save plots
plotvars_flag = 1; % 0 or 1, whether to plot state variables
% what to vary across batch
scope = 'HH'; % node or connection label whose parameter you want to vary
variable = 'c'; % parameter to vary (e.g., capacitance)
values = '[1]'; % values for each simulation in batch (varied across batches)
%{
Prerequisites for submitting jobs to cluster:
1. add path-to-dnsim/csh to your environment.
e.g., add to .bashrc: "export PATH=$PATH:$HOME/dnsim/csh"
2. run matlab on cluster node that recognizes the command 'qsub'
%}
% % -------------------------------------------------------------------------
% % LOCAL BATCHES (run serial jobs on current machine)
% cluster_flag=0; % 0 or 1, whether to submit jobs to a cluster (requires: running on a cluster node that recognizes the command 'qsub')
%
% % without codegen
% values = '[1 1.5]';
% [~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
% 'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
% 'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
% 'cluster_flag',cluster_flag,'coder',0);
%
% % with codegen
% values = '[2 2.5]';
% [~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
% 'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
% 'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
% 'cluster_flag',cluster_flag,'coder',1);
% -------------------------------------------------------------------------
% CLUSTER BATCHES (submit to queue for running parallel jobs on cluster nodes)
cluster_flag=1; % 0 or 1, whether to submit jobs to a cluster (requires: running on a cluster node that recognizes the command 'qsub')
% without codegen
% values = '[3 3.5]';
[~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
'cluster_flag',cluster_flag,'coder',0);
% % with codegen
% values = '[4 4.5]';
% [~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
% 'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
% 'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
% 'cluster_flag',cluster_flag,'coder',1);
This is the job1.out output log, which shows the error message:
command = job1
cwd = /usr3/graduate/asoplata/batchdirs/B20150723-145627
HOSTNAME = scc-ba3.scc.bu.edu
JOB_NAME = 5880217
JOB_NAME = B20150723-145627_job1
< M A T L A B (R) >
Copyright 1984-2013 The MathWorks, Inc.
R2013a (8.1.0.604) 64-bit (glnxa64)
February 15, 2013
To get started, type one of these: helpwin, helpdesk, or demo.
For product information, visit www.mathworks.com.
Looking in known mech list for iNa
Found mech file. Using: ~/research/modeling/database/iNa.txt
Looking in known mech list for iK
Found mech file. Using: ~/research/modeling/database/iK.txt
Looking in known mech list for itonic
Found mech file. Using: ~/research/modeling/database/itonic.txt
Model Description
----------------------------------
Specification files:
Populations:
HH (n=2):
dynamics: V' = ((-HH_iNa_INaf(HH_V,HH_iNa_mNaf,HH_iNa_hNaf))+((-HH_iK_IKf(HH_V,HH_iK_nKf))+((HH_itonic_Itonic(t))+0)))/(1)
mechanisms: iNa, iK, itonic
parameters: stim=7, c=1
Connections:
HH
HH -
Connection parameters:
Model Equations:
----------------------------------
ODEs:
HH_V' = ((-((120).*HH_iNa_mNaf.^3.*HH_iNa_hNaf.*(HH_V-(50))))+((-((36).*HH_iK_nKf.^4.*(HH_V-(-77))))+((((7)*(t>(0) & t<HH_itonic_offset)))+0)))/(1)
HH_iNa_mNaf' = ((2.5-.1*(HH_V+65))./(exp(2.5-.1*(HH_V+65))-1)).*(1-HH_iNa_mNaf)-(4*exp(-(HH_V+65)/18)).*HH_iNa_mNaf;
HH_iNa_hNaf' = (.07*exp(-(HH_V+65)/20)).*(1-HH_iNa_hNaf)-(1./(exp(3-.1*(HH_V+65))+1)).*HH_iNa_hNaf;
HH_iK_nKf' = ((.1-.01*(HH_V+65))./(exp(1-.1*(HH_V+65))-1)).*(1-HH_iK_nKf)-(.125*exp(-(HH_V+65)/80)).*HH_iK_nKf;
Initial Conditions:
HH_V(0) =[0, 0, ]
HH_iNa_mNaf(0) =[0, 0, ]
HH_iNa_hNaf(0) =[0, 0, ]
HH_iK_nKf(0) =[0, 0, ]
Matlab-formatted (copy and paste to repeat simulation):
%-----------------------------------------------------------
% Auxiliary variables:
HH_itonic_offset = inf;
% Anonymous functions:
HH_iNa_aM = @(IN) (2.5-.1*(IN+65))./(exp(2.5-.1*(IN+65))-1);
HH_iNa_bM = @(IN) 4*exp(-(IN+65)/18);
HH_iNa_aH = @(IN) .07*exp(-(IN+65)/20);
HH_iNa_bH = @(IN) 1./(exp(3-.1*(IN+65))+1);
HH_iNa_INaf = @(IN,m,h) (120).*m.^3.*h.*(IN-(50));
HH_iK_aN = @(IN) (.1-.01*(IN+65))./(exp(1-.1*(IN+65))-1);
HH_iK_bN = @(IN) .125*exp(-(IN+65)/80);
HH_iK_IKf = @(IN,n) (36).*n.^4.*(IN-(-77));
HH_itonic_Itonic = @(t) (7)*(t>(0) & t<HH_itonic_offset);
% ODE Handle, ICs, integration, and plotting:
ODEFUN = @(t,X) [((-((120).*X(3:4).^3.*X(5:6).*(X(1:2)-(50))))+((-((36).*X(7:8).^4.*(X(1:2)-(-77))))+((((7)*(t>(0) & t<HH_itonic_offset)))+0)))/(1);((2.5-.1*(X(1:2)+65))./(exp(2.5-.1*(X(1:2)+65))-1)).*(1-X(3:4))-(4*exp(-(X(1:2)+65)/18)).*X(3:4);(.07*exp(-(X(1:2)+65)/20)).*(1-X(5:6))-(1./(exp(3-.1*(X(1:2)+65))+1)).*X(5:6);((.1-.01*(X(1:2)+65))./(exp(1-.1*(X(1:2)+65))-1)).*(1-X(7:8))-(.125*exp(-(X(1:2)+65)/80)).*X(7:8);];
IC = [0 0 0 0 0 0 0 0];
[t,y]=ode23(ODEFUN,[0 100],IC); % numerical integration
figure; plot(t,y); % plot all variables/functions
try legend('HH\_V','HH\_iNa\_mNaf','HH\_iNa\_hNaf','HH\_iK\_nKf'); end
%-----------------------------------------------------------
odefun_file =
odefun_20150723_145703_364_job0001
Simulation interval: 0-3000
Starting integration (euler, dt=0.01)
Processed 600 of 3000 ms (elapsed time: 0.715 s)
Processed 1200 of 3000 ms (elapsed time: 1.798 s)
Processed 1800 of 3000 ms (elapsed time: 2.876 s)
Processed 2400 of 3000 ms (elapsed time: 3.956 s)
Processed 3000 of 3000 ms (elapsed time: 5.034 s)
saving plots - voltage traces: /usr3/graduate/asoplata/x010-dnsim/dnsim/HH-c/20150723/images/rawV/job0001_HH-c1_time0-3000_rawV
Error: Problem converting PostScript. System returned error: -1.Failed to convert to output format; Ghostscript status: -100.Error: /rangecheck in --string--heck
Operand stack:
picstr 90003
Execution stack:
%interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop 1 3 %oparray_pop 1 3 %oparray_pop .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval--
Dictionary stack:
--dict:1121/1686(ro)(G)-- --dict:0/20(G)-- --dict:71/200(L)-- --dict:94/160(L)-- --dict:8/76(L)--
Current allocation mode is local
Last OS error: 2
Current file position is 6579
00
Artifex Ghostscript 8.54: Unrecoverable error, exit code 1
in ghostscript (line 186)
in LocalPrint (line 314)
in print (line 240)
in biosimdriver (line 196)
in job1 (line 1)
saving cluster log (/usr3/graduate/asoplata/batchdirs/B20150723-145627/pbsout/job1.out) to /usr3/graduate/asoplata/x010-dnsim/dnsim/HH-c/20150723/logs/job0001_HH-c1_time0-3000_job1.out.
sim_data and spec assigned to base workspace.
finished
I have tried submitting the jobs/running the above script through the CLI both with and without X forwarding, and even from the MATLAB command window inside a VNC session, and the error always occurs, so I'm pretty sure it's not due to 'how' I'm submitting it to the cluster. I've run it with both the regular MATLAB 2013a and the MATLAB 2014b available on the cluster (thanks for showing me that, Mike), so if it's a MATLAB version problem, then it's been around for a long time and it's still around.
What I'm most concerned about is that no one else has reported or mentioned this issue, even though Jason, and AFAIK at least Julia and Ben, have been running cluster batches where some celltype multiplicities are greater than 1. Yes, this does in part sound like a screw-up on the cluster's (or MATLAB's) part, in that MATLAB's cluster handling isn't doing the graphics printing/saving correctly (apparently MATLAB first prints figures to PostScript, which requires live graphics support from the OS, before converting that to, say, a PNG). But judging from the previous sentence, we have working DNSIM code in which this error is not encountered. It is therefore possible that some difference between the DNSIM code on GitHub and the altered DNSIM code people are using in the 'wild' is responsible for this bug. If that is the cause, then it's going to be one NASTY fix.
It could also just be some difference between my 'setup' and everyone else's, which is why I really need someone else to test at least the GitHub-default ('dev' branch) version of the code for this bug on their end! Based on the simplicity of the simulation I'm running, though, I REALLY don't think it's an issue with my specific pipeline.
Just to reiterate: until we figure this out, or I switch to a 'wild'/non-GitHub version of the DNSIM code that doesn't have this problem, people using the GitHub version CANNOT run sims on the cluster with a multiplicity > 1, i.e. a simulation that uses more than one instance of a given celltype.
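If the failure really is in the PostScript-to-Ghostscript conversion step, one possible workaround, assuming biosimdriver's print call can be changed, is to force a bitmap renderer so that print rasterizes in-process instead of producing PostScript first. This is an untested sketch; the filename is illustrative.

```matlab
% Workaround sketch: render straight to PNG with the Z-buffer renderer.
% Old MATLAB releases convert vector (painters) output to image formats
% via Ghostscript -- the step that dies here -- while the bitmap
% renderers ('-zbuffer'/'-opengl') rasterize natively.
fig = figure('Visible', 'off');      % headless/cluster-friendly
plot(rand(1, 100));
print(fig, '-dpng', '-zbuffer', '-r150', 'rawV_job0001.png');
close(fig);
```

If this avoids the Ghostscript path, it would at least unblock saving voltage trace plots on the cluster while the real cause is tracked down.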
Thanks for any help.
This issue is about two small things, both concerning only the "real" DNSIM code in dnsim/matlab/functions. The following functions do not seem to be used or called by anything else (some are interdependent, not that it matters), and I've never seen them used in any capacity, e.g. in instructional "how-to" scripts. If they are necessary, we will have to update them when we make the post-SFN changes to the data structure, model structure, or main control flow of the program; but if they are unnecessary or deprecated, we should delete them:
comparemodels.m
extractfeatures.m
getfeatures.m
prepare_spec.m (actually just a script, not a function)
studydriver.m
StudyDriverUI.m
In addition, after looking through the files in this subdirectory for functions with a significant amount of commented-out code (especially at the end), I've found that several files carry a large amount of this "graveyard code", as I call it. Given that versions of these files containing the commented-out code will still be present in the Git history, and we can clearly mark the commit where it is removed, I think it's best to go ahead and remove most of the commented-out code in these files:
biosimdriver.m
dnsim2xpp.m
MechanismBrowser.m
modeler.m
plotpow.m
rk.m
runsim.m
write_dnsim_script.m
SO, my ask for other people is to answer: which files in the first group do we NOT want to delete (and therefore will have to port to the new formats when we make the big architecture changes), and is there any good reason, on a file-by-file basis, why the commented-out code in the second group shouldn't be deleted?
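To put numbers behind the "graveyard code" claim before deleting anything, something like this small sketch counts comment-only lines; the helper and the demo snippet are illustrative, and in practice you would feed it fileread('plotpow.m') etc.

```matlab
% Sketch: tally comment-only lines in a file's text, to quantify how much
% commented-out "graveyard code" each candidate carries before deleting.
count_comments = @(txt) sum(~cellfun(@isempty, ...
    regexp(regexp(txt, '\r?\n', 'split'), '^\s*%', 'once')));
% Demo on an inline snippet; real use: count_comments(fileread('rk.m'))
snippet = sprintf('x = 1;\n%% old code\n%% more old code\ny = 2;\n');
fprintf('%d comment lines\n', count_comments(snippet));
```

A per-file tally like this would make it easy to prioritize which files to clean first.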
Currently there is no way to access the values of mechanism variables other than state variables. But it would be very useful to have an option to save those variables/input currents/etc. that are of interest, such as the Poisson input.
Check what others do, e.g. Brian.
I have recently run into two cases where I introduced instructions that were not correct, but no error was raised (one related to the GABA connectivity, the other to changing values in buildmodel). Of course, it was my fault for using incorrect instructions, and it is true that one can use verbose mode and check the equations and initial conditions one by one, but that is better suited to debugging by us than to an average user of a stable version (a user just wants to get her work done and, unless an error happens, not worry much).
Therefore, I would highly encourage adding an instruction monitor, so that these sorts of errors are not silent but are reported as errors (i.e., stopping the simulation).
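A minimal version of such a monitor could simply cross-check user-supplied parameter names against the names the model actually knows. The field contents below are assumptions for illustration, not DNSIM's real internals:

```matlab
% Sketch of an "instruction monitor": flag parameter names the model
% does not recognize instead of silently ignoring them.
% known_params and user_pairs are illustrative stand-ins.
known_params = {'stim', 'c', 'g_m', 'V_IC'};
user_pairs   = {'stim', 7, 'g_M', 0.8};     % note the case typo 'g_M'
names = user_pairs(1:2:end);
bad   = names(~ismember(names, known_params));
if ~isempty(bad)
  % In DNSIM proper this should be error(...) so the simulation stops.
  fprintf('Unknown parameter(s): %s\n', strjoin(bad, ', '));
end
```

The same check could run on override names in buildmodel, which would have caught both of the silent failures described above.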
So I've taken a certain DNSIM script (below), and I find that when I run it a couple of times, one of the simulations is incorrectly saved under a DIFFERENT, wrong simulation output plot file. These are basic Hodgkin-Huxley models of the TC and RE cells available here, based on the Na, K, and Leak currents, under a sinusoidally varying applied current. If I run all four cosine frequencies, the 0.8 Hz simulation does NOT have a 0.8 Hz current applied to it; rather, it appears to be 0.6 Hz.
Thus, when a batch of jobs is run on the cluster, sweeping over some parameter dimension, the simulations do NOT necessarily get saved to their corresponding output files, including picture/plot files.
0.6 Hz -->
0.8 Hz wrong -->
1.0 Hz -->
1.3 Hz -->
However, when I run the 0.8 Hz alone, it works fine
0.8 Hz -->
Here is the script
% Model: TC-RE
spec=[];
spec.nodes(1).label = 'TC';
spec.nodes(1).multiplicity = 1;
spec.nodes(1).dynamics = {'V''=current/c'};
spec.nodes(1).mechanisms = {'iK_TC','iNa_TC','iLeak_TC','icosine'};
% spec.nodes(1).mechanisms = {'Ca_TC','iH_TC','iK_TC','iKLeak_TC','iLeak_TC','iNa_TC','iT_TC','itonic','icosine'};
% spec.nodes(1).mechanisms = {'Ca_TC','iH_TC','iK_TC','iKLeak_TC','iLeak_TC','iNa_TC','iT_TC','itonic_switch','icosine'};
spec.nodes(1).parameters = {'c',1,'V_IC',-65,'cos_ampl',0.5,'cos_freq',1.0};
spec.nodes(2).label = 'RE';
spec.nodes(2).multiplicity = 1;
spec.nodes(2).dynamics = {'V''=current/c'};
% spec.nodes(2).mechanisms = {'iK_RE','iLeak_RE','iNa_RE','iT_RE','itonic','icosine'};
spec.nodes(2).mechanisms = {'iNa_RE','iK_RE','iLeak_RE','icosine'};
% spec.nodes(2).mechanisms = {'iK_RE','iLeak_RE','iNa_RE','iT_RE','itonic_switch','icosine'};
spec.nodes(2).parameters = {'c',1,'V_IC',-85,'cos_ampl',0.5,'cos_freq',1.0};
% spec.connections(1,2).label = 'TC-RE';
% spec.connections(1,2).mechanisms = {'iAMPA'};
% spec.connections(1,2).parameters = {'gAMPA',0.1};
% spec.connections(2,1).label = 'RE-TC';
% spec.connections(2,1).mechanisms = {'iGABAA'};
% spec.connections(2,1).parameters = {'gGABAA_base',0.04,'spm',2}
% % spec.connections(2,1).mechanisms = {'iGABAA','iGABAB'};
% % spec.connections(2,1).parameters = {'gGABAA_base',0.04,'spm',2,'gGABAB',0.069}
% spec.connections(2,2).label = 'RE-RE';
% % spec.connections(2,2).mechanisms = {'iGABAA'};
% spec.connections(2,2).mechanisms = {'iAMPA_PY_faux'};
% spec.connections(2,2).parameters = {'gAMPA',0.1,'PY_square_freq',0.0};
% spec.connections(1,1).label = 'TC-TC';
% spec.connections(1,1).mechanisms = {'iAMPA_PY_faux'};
% spec.connections(1,1).parameters = {'gAMPA',0.1,'PY_square_freq',0.0};
codepath='~/x010-dnsim/dnsim';
plotvars_flag = 1;
plot_flag = 1;
plotpower_flag = 1;
plotpacoupling_flag = 1;
saveplot_flag = 1;
overwrite_flag = 1;
timelimits = [0 4000];
SOLVER = 'euler';
rootdir = '/projectnb/crc-nak/asoplata/dnsim_data/p1d1q023c11_cosfreq_0pt8';
addpath = codepath;
cluster_flag = 0;
% cluster_flag = 1;
savedata_flag = 1;
[specs,timestamp,rootoutdir]=simstudy(spec,{'(TC,RE)'},{'cos_freq'},{'[0.6,0.8,1.0,1.3]'},'plotvars_flag',plotvars_flag,'plot_flag',plot_flag,'plotpower_flag',plotpower_flag,'plotpacoupling_flag',plotpacoupling_flag,'saveplot_flag',saveplot_flag,'overwrite_flag',overwrite_flag,'timelimits',timelimits,'SOLVER',SOLVER,'rootdir',rootdir,'addpath',addpath,'cluster_flag',cluster_flag,'savedata_flag',savedata_flag);
% run this just to do the 0.8 Hz sim by itself
% [specs,timestamp,rootoutdir]=simstudy(spec,{'(TC,RE)'},{'cos_freq'},{'[0.8]'},'plotvars_flag',plotvars_flag,'plot_flag',plot_flag,'plotpower_flag',plotpower_flag,'plotpacoupling_flag',plotpacoupling_flag,'saveplot_flag',saveplot_flag,'overwrite_flag',overwrite_flag,'timelimits',timelimits,'SOLVER',SOLVER,'rootdir',rootdir,'addpath',addpath,'cluster_flag',cluster_flag,'savedata_flag',savedata_flag);
exit
Using the MATLAB code generator accelerates simulation in MATLAB significantly, by automatically generating a mex file. However:
To make the codegen implementation more efficient, we will have to think about how to reuse previous mex files (assuming the same model is in place, but with tolerance for different parameter values).
We also need to somehow 'predict' when codegen is not going to work (for instance, by checking against a list of unsupported functions), and then disable it and/or inform the user about possible solutions, if any.
There is a new branch dnsim_codegen explicitly about this feature in the code.
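One way to make mex reuse concrete would be to key cached mex files on a hash of the generated ODE source, so that runs with the same model structure (but different runtime parameter values) hit the cache. This is only a sketch; DNSIM's actual cache layout is not this, and the inline source string stands in for fileread of the generated odefun:

```matlab
% Sketch: key cached mex files on an MD5 of the generated odefun source,
% computed via Java (available in desktop MATLAB). src is an inline
% stand-in for the real generated code.
src = 'dy = [v_prime; m_prime; h_prime; n_prime];';   % illustrative
md  = java.security.MessageDigest.getInstance('MD5');
md.update(uint8(src));
key = sprintf('%02x', typecast(md.digest(), 'uint8'));
mexname = ['odefun_' key '_mex'];
if exist(mexname, 'file') == 3      % 3 == mex-file on the path
  fprintf('reusing cached %s\n', mexname);
else
  fprintf('would run codegen into %s\n', mexname);
end
```

Since parameter values are passed at runtime rather than baked into the generated code, identical model structure would always produce an identical key.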
Processed 5600 of 5600 ms (elapsed time: 8.236 s)
Simulated data saved to: /usr3/graduate/asoplata/Documents/MATLAB/TC-multiplicity/20150321/data/job0001_TC-multiplicity1_time0-5600_sim_data.mat
Error: X and Y must be the same length
in plotpow (line 144)
in biosimdriver (line 119)
in job1 (line 1)
LFP data saved to /usr3/graduate/asoplata/Documents/MATLAB/TC-multiplicity/20150321/data/lfp/job0001_TC-multiplicity1_time0-5600_sim_data_LFPs.mat
This refers to the following line of dnsim/matlab/functions/plotpow.m:
hh=text(maxf+.05*range(xlim),min(ylim)+.9*range(ylim),num2str(maxf));
ylim also doesn't appear to be set before this line, so maybe its inference is failing in this smaller-screensize environment.

I can modify a parameter value within the parameters field; e.g., the value of g_m defined in the mechanism is 1.2 and I can change it to 0.8 by:
g_m = 0.8;
MSNpop_spec.nodes(1).parameters = {'g_m',g_m};
However, I cannot change it again later via buildmodel, e.g.:
g_m =1.2;
models{1} = buildmodel(MSNpop_spec,'override', {'MSN','g_m',g_m},'verbose',1);
The verbose output about the model shows that g_m is still 0.8; beyond that, no apparent error occurs and no information about the issue is reported.
Action: the current naming convention for mechanisms is to use the name from the ORIGINAL model it is inherited from. E.g., if we have (Sherfey, 1996)'s "iNa" current implemented, it will be in dnsim/database/iNa.txt rather than follow whatever naming convention we choose. I believe we should include the previous implementation's name in a comment in the file, but name all mechanisms according to an agreed-upon scheme. I'm open to ideas about the scheme itself.
Reason: this prevents duplicates (including the "different-case duplicates" that exist now, which interfere with Dropbox) and makes the list of mechanisms clearer and more self-explanatory.
voltage plot, run via cluster
voltage plot, run via GUI
coupling plot, run via cluster
coupling plot, run via GUI
So, in trying to map out the control flow of the program, I was looking at this scenario: using the coder in a batch job on the cluster. However, it kept completely crashing (as in, MATLAB itself was crashing), and while debugging I was getting a
"cp cannot stat /$PATHTODNSIM/dnsim/$PATHTODNSIM/dnsim/odefun/default_8/odefun_20151009_<whatever>_mex.mexa64: no such file"
error. I did some digging, and now I'm really confused, because I'm not sure how the code currently in simstudy can run properly in this situation (more in a sec). I also think it's possible that the system/stdout error was enough (because it was a system error) to make MATLAB halt entirely, but I don't know much about the interface between MATLAB and the OS.
@jsherfey @kupiqu Do either of you run simulations, successfully, using the current dev
version of the code, with the coder activated, on the cluster?
(All of this applies to both the mex-file and the m-file handling in simstudy.m.) Currently, in the try block that starts on Line 63 of simstudy (https://github.com/jsherfey/dnsim/blob/dev/matlab/functions/simstudy.m#L63), "filemex" contains the full path up to and including odefun_subdir, plus the actual mexfile filename. On Line 230, the code tries to copy "fullfile(cwd,filemex)" somewhere (cwd is set to pwd on Line 180), but this concatenates the current working directory onto the string of the entire mexfile filepath, creating the bogus mega-filepath mentioned in the error above. Maybe this was affected by the July changes (there were a lot) to "dnsimulator.m", but that would have meant big changes to what "odefun_subdir" represents, which I think is unlikely (but possible).
Maybe this scenario can still work if your code and your rootdir are both in the right place, but I don't know what that combination is; it's certainly not mine, and I've tried using DNSIM code directly from my personal account, with both a pwd rootdir and a /projectnb rootdir. Going through the filemex code, though, I just can't see how it would ever work. Is this the way you use the coder on the cluster? Does it work for anyone? I think Salva doesn't use simstudy, but don't you use it, Jason? If not, it's possible that using the coder on the cluster is broken in this way.
If not, can either of you test this scenario, using the dev
version of the code?
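The path concatenation itself is easy to reproduce in isolation (the directory names below are illustrative, not my actual paths):

```matlab
% Repro sketch of the mega-filepath: fullfile happily appends an
% already-absolute second argument onto cwd, yielding the doubled path
% seen in the "cp cannot stat" error.
cwd     = '/usr3/graduate/me/batchdirs/B20151009';
filemex = '/home/me/dnsim/odefun/default_8/odefun_mex.mexa64';
bad = fullfile(cwd, filemex);     % cwd + full path = bogus mega-path
% One possible fix: copy by basename instead of by full path.
[~, name, ext] = fileparts(filemex);
good = fullfile(cwd, [name ext]);
fprintf('bad:  %s\ngood: %s\n', bad, good);
```

Stripping filemex down to its basename before the fullfile(cwd,...) copy on Line 230 is one candidate fix, assuming the mex file really does live in odefun_subdir at that point.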
As I was trying to come up with the most complicated final output data structure I could (to test my R data-analysis read-in code), I came across a breaking case. From the DNSIM GUI, I loaded a base HH cell, then added two more. I connected them haphazardly with iSYN and iNMDA and changed the multiplicities of the celltypes, and discovered that when an iNMDA mechanism from 'iNMDA.txt' projects to a celltype with more than one instance, the simulation DOES NOT RUN; it errors out and stops. This happens even when the conductance is zero! I've tested this with the most recent commit. Below is what's printed when this happens:
<private>/002/HH-multiplicity/20150210: job0001_HH-multiplicity1_time0-400
processing simulation...
Model Description
----------------------------------
Specification files:
<private>/dnsim/database/iNMDA.txt
Populations:
HH (n=1):
dynamics: v' = ((-HH_K_ik(HH_v,HH_K_n))+((-HH_Na_ina(HH_v,HH_Na_m,HH_Na_h))+((HH_input_I(t))+((-HH_leak_ileak(HH_v))+0))))/(1)
mechanisms: K, Na, input, leak
parameters: c=1, v_IC=-65
HH2 (n=2):
dynamics: v' = ((-HH2_K_ik(HH2_v,HH2_K_n))+((-HH2_Na_ina(HH2_v,HH2_Na_m,HH2_Na_h))+((HH2_input_I(t))+((-HH2_leak_ileak(HH2_v))+((-HH_HH2_iNMDA_INMDA(HH2_v,HH_HH2_iNMDA_s))+0)))))/(1)
mechanisms: K, Na, input, leak
parameters: c=1, v_IC=-65
Connections:
HH HH2
HH - iNMDA
HH2 - -
Connection parameters:
Model Equations:
----------------------------------
ODEs:
HH_v' = ((-((36)*(HH_K_n.^4).*(HH_v-(-77))))+((-((120)*HH_Na_h.*(HH_Na_m.^3).*(HH_v-(50))))+(((10))+((-((0.3)*(HH_v-(-54.4))))+0))))/(1)
HH_K_n' = (.01*(HH_v+55)./(1-exp(-(HH_v+55)/10))).*(1-HH_K_n)-(.125*exp(-(HH_v+65)/80)).*HH_K_n;
HH_Na_m' = (.1*(HH_v+40)./(1-exp(-(HH_v+40)/10))).*(1-HH_Na_m)-(4*exp(-(HH_v+65)/18)).*HH_Na_m;
HH_Na_h' = (.07*exp(-(HH_v+65)/20)).*(1-HH_Na_h)-(1./(1+exp(-(HH_v+35)/10))).*HH_Na_h;
HH2_v' = ((-((36)*(HH2_K_n.^4).*(HH2_v-(-77))))+((-((120)*HH2_Na_h.*(HH2_Na_m.^3).*(HH2_v-(50))))+(((10))+((-((0.3)*(HH2_v-(-54.4))))+((-((0).*(1./(1+exp(-.062*HH2_v)*(1.5)/3.57)).*(HH_HH2_iNMDA_NMkernel*HH_HH2_iNMDA_s).*(HH2_v-(0))))+0)))))/(1)
HH2_K_n' = (.01*(HH2_v+55)./(1-exp(-(HH2_v+55)/10))).*(1-HH2_K_n)-(.125*exp(-(HH2_v+65)/80)).*HH2_K_n;
HH2_Na_m' = (.1*(HH2_v+40)./(1-exp(-(HH2_v+40)/10))).*(1-HH2_Na_m)-(4*exp(-(HH2_v+65)/18)).*HH2_Na_m;
HH2_Na_h' = (.07*exp(-(HH2_v+65)/20)).*(1-HH2_Na_h)-(1./(1+exp(-(HH2_v+35)/10))).*HH2_Na_h;
HH_HH2_iNMDA_s' = HH_HH2_iNMDA_alphar*((0.001)./(1+exp(-(HH_v-(2))/(5)))).*(1-HH_HH2_iNMDA_s)-HH_HH2_iNMDA_s/(100);
Initial Conditions:
HH_v(0) =[-65, ]
HH_K_n(0) =[0.317, ]
HH_Na_m(0) =[0.052, ]
HH_Na_h(0) =[0.596, ]
HH2_v(0) =[-65, -65, ]
HH2_K_n(0) =[0.317, 0.317, ]
HH2_Na_m(0) =[0.052, 0.052, ]
HH2_Na_h(0) =[0.596, 0.596, ]
HH_HH2_iNMDA_s(0) =[0.194, ]
Matlab-formatted (copy and paste to repeat simulation):
%-----------------------------------------------------------
% Auxiliary variables:
HH_HH2_iNMDA_alphar = 7.2E4;
HH_HH2_iNMDA_Nmax = max((1),(2));
HH_HH2_iNMDA_Xpre = linspace(1,HH_HH2_iNMDA_Nmax,(1))'*ones(1,(2));
HH_HH2_iNMDA_Xpost = (linspace(1,HH_HH2_iNMDA_Nmax,(2))'*ones(1,(1)))';
HH_HH2_iNMDA_NMsigma = ((0.1)*HH_HH2_iNMDA_Nmax);
HH_HH2_iNMDA_NMkernel = (1)*exp(-(HH_HH2_iNMDA_Xpre-HH_HH2_iNMDA_Xpost).^2/HH_HH2_iNMDA_NMsigma^2);
% Anonymous functions:
HH_K_an = @(HH_v) .01*(HH_v+55)./(1-exp(-(HH_v+55)/10));
HH_K_bn = @(HH_v) .125*exp(-(HH_v+65)/80);
HH_K_ik = @(HH_v,HH_K_n) (36)*(HH_K_n.^4).*(HH_v-(-77));
HH_Na_am = @(HH_v) .1*(HH_v+40)./(1-exp(-(HH_v+40)/10));
HH_Na_bm = @(HH_v) 4*exp(-(HH_v+65)/18);
HH_Na_ah = @(HH_v) .07*exp(-(HH_v+65)/20);
HH_Na_bh = @(HH_v) 1./(1+exp(-(HH_v+35)/10));
HH_Na_ina = @(HH_v,HH_Na_m,HH_Na_h) (120)*HH_Na_h.*(HH_Na_m.^3).*(HH_v-(50));
HH_input_I = @(t) 10;
HH_leak_ileak = @(HH_v) (0.3)*(HH_v-(-54.4));
HH2_K_an = @(HH2_v) .01*(HH2_v+55)./(1-exp(-(HH2_v+55)/10));
HH2_K_bn = @(HH2_v) .125*exp(-(HH2_v+65)/80);
HH2_K_ik = @(HH2_v,HH2_K_n) (36)*(HH2_K_n.^4).*(HH2_v-(-77));
HH2_Na_am = @(HH2_v) .1*(HH2_v+40)./(1-exp(-(HH2_v+40)/10));
HH2_Na_bm = @(HH2_v) 4*exp(-(HH2_v+65)/18);
HH2_Na_ah = @(HH2_v) .07*exp(-(HH2_v+65)/20);
HH2_Na_bh = @(HH2_v) 1./(1+exp(-(HH2_v+35)/10));
HH2_Na_ina = @(HH2_v,HH2_Na_m,HH2_Na_h) (120)*HH2_Na_h.*(HH2_Na_m.^3).*(HH2_v-(50));
HH2_input_I = @(t) 10;
HH2_leak_ileak = @(HH2_v) (0.3)*(HH2_v-(-54.4));
HH_HH2_iNMDA_BMg = @(V) 1./(1+exp(-.062*V)*(1.5)/3.57);
HH_HH2_iNMDA_NT = @(V) (0.001)./(1+exp(-(V-(2))/(5)));
HH_HH2_iNMDA_INMDA = @(V,HH_HH2_iNMDA_s) (0).*(1./(1+exp(-.062*V)*(1.5)/3.57)).*(HH_HH2_iNMDA_NMkernel*HH_HH2_iNMDA_s).*(V-(0));
% ODE Handle, ICs, integration, and plotting:
ODEFUN = @(t,X) [((-((36)*(X(2:2).^4).*(X(1:1)-(-77))))+((-((120)*X(4:4).*(X(3:3).^3).*(X(1:1)-(50))))+(((10))+((-((0.3)*(X(1:1)-(-54.4))))+0))))/(1);(.01*(X(1:1)+55)./(1-exp(-(X(1:1)+55)/10))).*(1-X(2:2))-(.125*exp(-(X(1:1)+65)/80)).*X(2:2);(.1*(X(1:1)+40)./(1-exp(-(X(1:1)+40)/10))).*(1-X(3:3))-(4*exp(-(X(1:1)+65)/18)).*X(3:3);(.07*exp(-(X(1:1)+65)/20)).*(1-X(4:4))-(1./(1+exp(-(X(1:1)+35)/10))).*X(4:4);((-((36)*(X(7:8).^4).*(X(5:6)-(-77))))+((-((120)*X(11:12).*(X(9:10).^3).*(X(5:6)-(50))))+(((10))+((-((0.3)*(X(5:6)-(-54.4))))+((-((0).*(1./(1+exp(-.062*X(5:6))*(1.5)/3.57)).*(HH_HH2_iNMDA_NMkernel*X(13:13)).*(X(5:6)-(0))))+0)))))/(1);(.01*(X(5:6)+55)./(1-exp(-(X(5:6)+55)/10))).*(1-X(7:8))-(.125*exp(-(X(5:6)+65)/80)).*X(7:8);(.1*(X(5:6)+40)./(1-exp(-(X(5:6)+40)/10))).*(1-X(9:10))-(4*exp(-(X(5:6)+65)/18)).*X(9:10);(.07*exp(-(X(5:6)+65)/20)).*(1-X(11:12))-(1./(1+exp(-(X(5:6)+35)/10))).*X(11:12);HH_HH2_iNMDA_alphar*((0.001)./(1+exp(-(X(1:1)-(2))/(5)))).*(1-X(13:13))-X(13:13)/(100);];
IC = [-65 0.317 0.052 0.596 -65 -65 0.317 0.317 0.052 0.052 0.596 0.596 0.193981];
[t,y]=ode23(ODEFUN,[0 100],IC); % numerical integration
figure; plot(t,y); % plot all variables/functions
try legend('HH\_v','HH\_K\_n','HH\_Na\_m','HH\_Na\_h','HH2\_v','HH2\_K\_n','HH2\_Na\_m','HH2\_Na\_h','HH\_HH2\_iNMDA\_s'); end
%-----------------------------------------------------------
Simulation interval: 0-400
Starting integration (euler, dt=0.01)
Simulation interval: 0-400
Starting integration (euler, dt=0.01)
Error: Matrix dimensions must agree.
in biosimulator/@(t,X)[((-((36)*(X(2:2).^4).*(X(1:1)-(-77))))+((-((120)*X(4:4).*(X(3:3).^3).*(X(1:1)-(50))))+(((10))+((-((0.3)*(X(1:1)-(-54.4))))+0))))/(1);(.01*(X(1:1)+55)./(1-exp(-(X(1:1)+55)/10))).*(1-X(2:2))-(.125*exp(-(X(1:1)+65)/80)).*X(2:2);(.1*(X(1:1)+40)./(1-exp(-(X(1:1)+40)/10))).*(1-X(3:3))-(4*exp(-(X(1:1)+65)/18)).*X(3:3);(.07*exp(-(X(1:1)+65)/20)).*(1-X(4:4))-(1./(1+exp(-(X(1:1)+35)/10))).*X(4:4);((-((36)*(X(7:8).^4).*(X(5:6)-(-77))))+((-((120)*X(11:12).*(X(9:10).^3).*(X(5:6)-(50))))+(((10))+((-((0.3)*(X(5:6)-(-54.4))))+((-((0).*(1./(1+exp(-.062*X(5:6))*(1.5)/3.57)).*(HH_HH2_iNMDA_NMkernel*X(13:13)).*(X(5:6)-(0))))+0)))))/(1);(.01*(X(5:6)+55)./(1-exp(-(X(5:6)+55)/10))).*(1-X(7:8))-(.125*exp(-(X(5:6)+65)/80)).*X(7:8);(.1*(X(5:6)+40)./(1-exp(-(X(5:6)+40)/10))).*(1-X(9:10))-(4*exp(-(X(5:6)+65)/18)).*X(9:10);(.07*exp(-(X(5:6)+65)/20)).*(1-X(11:12))-(1./(1+exp(-(X(5:6)+35)/10))).*X(11:12);HH_HH2_iNMDA_alphar*((0.001)./(1+exp(-(X(1:1)-(2))/(5)))).*(1-X(13:13))-X(13:13)/(100);] (line 0)
in biosimulator (line 55)
in runsim (line 109)
in biosimdriver (line 66)
in simstudy (line 198)
in RunSimStudy (line 2791)
done (1 of 1)
>>
(Before I begin: I only later realized I could use {('TC,RE')},{'square_amp''},{'[0.1,0.3]'} to run the same simulations, as you said in your chat - so that IS working, but the below is still a bug.) To get "groups" working, I set a bunch of sims running yesterday, like:
[specs,timestamp,rootoutdir]=simstudy(spec, {'TC','RE'},{'square_amp','square_amp'},{'[0.1]','[0.1]'},'plotvars_flag',plotvars_flag,'plot_flag',plot_flag,'plotpower_flag',plotpower_flag,'plotpacoupling_flag',plotpacoupling_flag,'saveplot_flag',saveplot_flag,'overwrite_flag',overwrite_flag,'timelimits',timelimits,'SOLVER',SOLVER,'rootdir',rootdir,'addpath',addpath,'cluster_flag',cluster_flag,'savedata_flag',savedata_flag);
[specs,timestamp,rootoutdir]=simstudy(spec, {'TC','RE'},{'square_amp','square_amp'},{'[0.3]','[0.3]'},'plotvars_flag',plotvars_flag,'plot_flag',plot_flag,'plotpower_flag',plotpower_flag,'plotpacoupling_flag',plotpacoupling_flag,'saveplot_flag',saveplot_flag,'overwrite_flag',overwrite_flag,'timelimits',timelimits,'SOLVER',SOLVER,'rootdir',rootdir,'addpath',addpath,'cluster_flag',cluster_flag,'savedata_flag',savedata_flag);
<change some synaptic conductances or whatever, along with the rootdir, so that it ultimately saves to a different folder>
[specs,timestamp,rootoutdir]=simstudy(spec, {'TC','RE'},{'square_amp','square_amp'},{'[0.1]','[0.1]'},'plotvars_flag',plotvars_flag,'plot_flag',plot_flag,'plotpower_flag',plotpower_flag,'plotpacoupling_flag',plotpacoupling_flag,'saveplot_flag',saveplot_flag,'overwrite_flag',overwrite_flag,'timelimits',timelimits,'SOLVER',SOLVER,'rootdir',rootdir,'addpath',addpath,'cluster_flag',cluster_flag,'savedata_flag',savedata_flag);
[specs,timestamp,rootoutdir]=simstudy(spec, {'TC','RE'},{'square_amp','square_amp'},{'[0.3]','[0.3]'},'plotvars_flag',plotvars_flag,'plot_flag',plot_flag,'plotpower_flag',plotpower_flag,'plotpacoupling_flag',plotpacoupling_flag,'saveplot_flag',saveplot_flag,'overwrite_flag',overwrite_flag,'timelimits',timelimits,'SOLVER',SOLVER,'rootdir',rootdir,'addpath',addpath,'cluster_flag',cluster_flag,'savedata_flag',savedata_flag);
I wake up this morning to find that only about 50-66% of them ran/were saved. Seriously. It's almost as if only every other simulation was run, although that pattern isn't always exact.
So I look at the '~/batchdirs' entries, and the missing simulations are straight-up missing - as if they were never sent to the cluster. Then I look at my standard output from when I was submitting them all, and I notice something I hadn't paid much attention to at the time: sometimes I would do 'matlab -r run_script' with a bunch of the above simulations, go in and change something like a synaptic conductance, and do 'matlab -r run_script' again. In the second script run, I saw, for example:
...
/project/.../20150401:: job0001_RE-squareamp0pt1__TC-squareamp0pt1_time0-2000
executing: "qmatjobs_memlimit B20150401-100547 8G" on cluster scc2.bu.edu
1 jobs submitted.
/project/.../20150401:: job0001_RE-squareamp0pt1__TC-squareamp0pt1_time0-2000
Warning: Directory already exists.
>In simstudy at 191
>In <run_script> at 86
executing: "qmatjobs_memlimit B20150401-100547 8G" on cluster scc2.bu.edu
1 jobs submitted.
Note the job number: whenever this warning appeared, the output would show the same job number executing twice, and the second submission appears to overwrite the first. So the [0.1] job would be overwritten by the [0.3] job, and the [0.1] job would never properly run (since the time to write and submit a job is far less than the time until the job finishes). Apparently you can overwrite your cluster jobs this way, because that's what I think is going on here. You can test it yourself by making 4 changes to something like a synaptic conductance, interspersed with running the exact same simstudy call, one right after another. EVEN if you change 'rootdir' so that each run's data is saved into a different folder, this STILL happens.
Therefore it appears that, regardless of the absolute data directory like rootdir, if you submit a bunch of jobs whose resulting filenames are identical, like job0001_RE-squareamp0pt3__TC-squareamp0pt3_time0-15000_rawV.png (EVEN if in different directories), then sometimes (there seems to be some randomness involved) the previous job is overwritten.
Very cheap solution I've been testing out: simply waiting between the simstudy calls, via a pause(5), seems to make the problem go away! That said, this bug should still be fixed.
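My guess at the root cause: the batch name in the log (B20150401-100547) is a timestamp with one-second resolution, so two simstudy calls landing in the same second produce identical batch/job names and the second silently clobbers the first - which would be exactly why pause(5) makes the problem vanish. A sketch of a fix (in Python, names hypothetical) is to salt the timestamp with a random suffix so back-to-back submissions can never collide:

```python
import time
import uuid

def batch_id():
    # Timestamp for human readability, random suffix for uniqueness even
    # when two calls land inside the same second.
    stamp = time.strftime("B%Y%m%d-%H%M%S")
    return "%s-%s" % (stamp, uuid.uuid4().hex[:6])

a = batch_id()
b = batch_id()   # same second as a, but still a distinct name
```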
I would like to do something like:
V_IC = -63
alpha_m_ic = 0.32*(V_IC+54)/(1-exp(-(V_IC+54)/4))
beta_m_ic = 0.28*(V_IC+27)/(exp((V_IC+27)/5)-1)
m(0) = alpha_m_ic/(alpha_m_ic+beta_m_ic)*ones(Npop,1)
but then I get the following error:
Error using eval
Undefined function or variable 'alpha_m_ic'.
Error in buildmodel (line 949)
ic=eval(ic);
Error in runsim (line 89)
[model,ic,functions,auxvars,spec,readable,StateIndex] =
buildmodel(spec,args{:});
but that doesn't currently work, so I have to use:
m(0) = 0.32*(V_IC+54)/(1-exp(-(V_IC+54)/4))/(0.32*(V_IC+54)/(1-exp(-(V_IC+54)/4))+0.28*(V_IC+27)/(exp((V_IC+27)/5)-1))*ones(Npop,1)
which is far uglier and harder to maintain.
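For reference, here's a sketch (in Python rather than MATLAB, and purely illustrative) of how buildmodel's single ic=eval(ic) could be generalized: evaluate each assignment line in order into a shared namespace, so later lines can see alpha_m_ic and beta_m_ic:

```python
import math

# Assumed IC spec, split into assignment lines plus a final expression.
lines = [
    "V_IC = -63",
    "alpha_m_ic = 0.32*(V_IC+54)/(1-math.exp(-(V_IC+54)/4))",
    "beta_m_ic = 0.28*(V_IC+27)/(math.exp((V_IC+27)/5)-1)",
]
ns = {"math": math}
for line in lines:
    exec(line, ns)   # names defined earlier are visible to later lines

# The final IC expression can now use the intermediate variables:
m0 = eval("alpha_m_ic/(alpha_m_ic+beta_m_ic)", ns)
```

The MATLAB equivalent would be a loop of eval calls over the assignment lines before the final ic=eval(ic), rather than a single eval of the last expression.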
I have tried multiple times on the internet-enabled 'scc2' MGHPCC cluster to log into my InfiniteBrain account via the DNSIM GUI, so that I can upload my own mechanisms and models under my account name, but each time I get a popup warning "LOGIN FAILED. You can upload models as ANONYMOUS user (File -> Upload)". In the MATLAB Command Window, when I first start DNSIM, it prints:
>> dnsim
Matlab<->MySQL connector v1.36
Current database is "modulator"
<some path warnings>
Database query failed.
authentication failed: there was an error opening the database for user authentication.
I AM able to log in to the infinitebrain.org website using this same login information (copy/pasted), so the problem isn't a wrong password. I have also confirmed that this machine has internet access. So I do not seem to have any way to upload mechanisms/models tied to my user account.
(I will only be discussing code in dnsim/matlab/functions here.) With the exception of MechanismBrowser.m, which has a LARGE number of GUI-related functions independent of the rest of the program, I've tried to identify all the functions currently in DNSIM, both those that have their own file and those that don't. (Yes, I know there are automated ways to do this, but I'm partially doing it to learn the codebase.)
I've identified a number of cases where the same function is present, sometimes in slightly different forms, in multiple files, occasionally including a file of its own. I.e., many functions are duplicated. This is confusing for several reasons - e.g., which version is MATLAB actually using? (Probably the "closest" one, i.e. a subfunction inside a calling function's file shadows the standalone version.) Most importantly, it's super confusing for developers.
For a project like this, I think it's generally a good idea to put each function in its own file, containing only that function - with an exception for big GUI "function" definitions, which can include a ton of little callback functions that only make sense in their own context (MechanismBrowser.m and modeler.m are good examples). Even with that GUI exception, however, I don't think any two functions should share a name yet do different things in different contexts. Yes, that's roughly the definition of polymorphism (I think), but since we're restricted from using classes - one of the goals is to satisfy the MCR MATLAB compiler to make standalone versions (as @jsherfey just told me) - and given the size of the codebase, I don't think we should bring that into the mix. In simpler terms, I think the entire program should just be a bunch of files with single functions in them, with exceptions for complicated GUI behavior.
IMHO this is far easier for developers, makes no difference to users, and gives developers more reason to re-use each other's internal functions.
I will go through these duplicated functions below and try to spell out what I think is best where the versions differ - though @jsherfey is really the only person who understands the history of the different versions, if he can remember it. Where the versions are identical, I think we should just collapse them into single files, each holding its own function.
Format: function_name (same or different) [solution: do thingy]
- download_get_models (different) [solution: use standalone version, since it is the newest of the three? it is the only one with an IP address]
- download_models [solution: rename, since it is a wrapper for download_get_models?]
- downloadmodel [solution: just rename, since it just calls modeler]
- combine_models (and accompanying function combine_models_pair) (different) [solution: use standalone version, since it seems to have more code / be more developed]
- load_get_models (same)
- mech_spec2str (same except for MechanismBrowser) [solution: use the standalone/modeler/write_dnsim_script version, since it seems to have a couple more lines than the MechanismBrowser copy]
- dat2str (same)
- disperror (same)
- standardize_model_spec (different; combine_models version newer?) [solution: merge?]
- correct_loadjson_arrstr (seem to be the same) [solution: use the get_search_space.m version, since it has an extra ischar check]
- copyfields (different) [solution: use the json2spec version, since it seems to have more code?]
- simulate (modeler) (name conflicts with, well, everything) [solution: rename]
- RunSimStudy (modeler) (name confusing) [solution: rename]

When I was test-driving DNSIM under GNU Octave about a month ago, one of the discrepancies I encountered was with two of the regular expressions in the subfunction parse_spec of matlab/functions/get_search_space.m. Octave complained about:
^([-+]?\[)[\d\s:-+]*\]$
^\[[\w\s,-+*/\.]*\]$
in both instances saying that there was an improper range inside the regex. I pasted both into online regular-expression checkers (there are a million, like https://regex101.com/ and http://regexr.com/ ) and they all said the same thing:
- the :-+ range in the "numeric_array" regex is backwards; the : character is greater than +
- the ,-+ range in the "bracketed_strings" regex is backwards; the , character is greater than +
I also hypothesized that "maybe MATLAB does regular expressions differently than everyone else", but could not find a shred of evidence for that. So what I think is happening is: MATLAB throws only a warning (or nothing at all) when it processes a malformed regular expression, perhaps interpreting it as what it thinks you meant, whereas "everyone else" (Octave, online regex checkers) is louder/stricter about errors. IIRC, reversing the ranges above seemed to fix the problem, but this will need extensive testing - for the same reason that regular-expression errors should be taken seriously.
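As a sanity check outside MATLAB and Octave: Python's re engine rejects the same reversed ranges. A quick illustration (the fixed patterns below are my guess at the intent, with the hyphen moved to the end of each character class so it's a literal '-'):

```python
import re

# The original "numeric_array" pattern: ':' (0x3A) > '+' (0x2B), so the
# class range ":-+" is invalid and re.compile refuses it.
try:
    re.compile(r'^([-+]?\[)[\d\s:-+]*\]$')
    raised = False
except re.error:
    raised = True

# With the hyphen moved to the end of each class it is a literal '-':
numeric_array = re.compile(r'^([-+]?\[)[\d\s:+-]*\]$')
bracketed_strings = re.compile(r'^\[[\w\s,+*/\.-]*\]$')
```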
On Windows, the files in the csh directory don't actually form valid links, which prevents DNSim scripts from running when submitted to the cluster. Solution for now: don't use Windows. (I'm PC trash, sorry.)
Take a look at this, it might be useful: https://github.com/catch22/doctest-for-matlab
Setting timesurfer_flag to 0 gives different results than 1 (the default); in particular I saw this happening on the membrane potential. It might be that some mechanism is not correctly parsed (although no error is raised; see issue #7), leaving the resulting model incomplete.
As of simulations using the latest dev commit e9a23e4 , some people (Ben and Austin) report that, when DNSIM is run via a CLI script, savedata_flag must be set to 1 for data to be saved, while others report that their data is saved even without this flag.
We need to sort out cluster_flag and sim_cluster_flag (only the former is supposed to be used by the user). Examples given:
External Poisson has variables from elsewhere:
MSN_extPoissonInputCoderEnabled_cm
MSN_extPoissonInputCoderEnabled_g_m
All other mechanisms also have external Poisson variables:
MSN_kCurrentMSN_g_ext
MSN_kCurrentMSN_rate_ext
MSN_kCurrentMSN_tau_ext
This seems to happen for the parameters whose value is replaced in the main script.
otherwise diff returns an empty string but the associated mex is not found
Example: the integrate-and-fire model needs support for conditional statements:
ex) if (V>=Vthresh) { V=Vreset; g=g+b }
in parse_mech_spec(): add a mech.conditional field holding { condition, response }
in buildmodel(): do all substitutions into condition and response
in dnsimulator(): fprintf the following after the ODE updates:
sel=var(:,k)>parm % i.e., sel=(condition)
if any(sel)
vars(sel,k)=f(vars(sel,:),params) % vars(sel,k)=(response)
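Here's a runnable sketch of the proposed conditional update (in Python/NumPy rather than the generated MATLAB, and with made-up leaky integrate-and-fire parameters), showing the sel = (condition); vars(sel,k) = (response) pattern applied after the ODE step:

```python
import numpy as np

def step_lif(V, g, dt, Vthresh=-50.0, Vreset=-65.0, b=0.5,
             tau=10.0, EL=-65.0, R=10.0, I=2.0):
    # ODE update (leaky membrane plus decaying adaptation conductance)
    V = V + dt * ((EL - V) + R * I) / tau
    g = g + dt * (-g / tau)
    # conditional update, vectorized over the population
    sel = V >= Vthresh        # sel = (condition)
    V[sel] = Vreset           # V = Vreset  (response)
    g[sel] = g[sel] + b       # g = g + b   (response)
    return V, g

V = np.full(5, -50.0)   # all cells start at threshold
g = np.zeros(5)
V, g = step_lif(V, g, dt=0.01)
```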
I built my model for coupled oscillators using seconds, for instance like this:
tspan = [0,8];
dt = 1e-3;
then time appears as 0:1e-6:0.008 in the output data, which is totally unexpected (both tspan and dt appear to have been divided by 1000, as if they were given in milliseconds).
Problem: After loading a simulation data file into a struct variable foo, when I try to export foo to JSON using JSONlab's savejson() function, it is unable to export the struct. This appears to be because it doesn't know how to convert foo.parms.biosim.Iext (that's an I as in "eye"), which is a function handle, into a JSON-acceptable type. If you remove it, e.g. via foo.parms.biosim = rmfield(foo.parms.biosim, 'Iext'), then the export succeeds and you can save it into a JSON file with savejson('',foo,'foo-file.json').
Suggested solution: simply remove all mention of Iext from the code. It does NOT appear to be used AT ALL in any functioning code, other than being passed around as a parameter in the dnsim/matlab/functions files biosimdriver.m, biosim.m, runsim.m, and simdriver.m. Apart from being passed, I can't find any instance where it is actually used. This implies that it is either archaic and can be deleted, or that its functionality has lapsed and is currently not in the code. Either way, since the commented-out sections that used it are preserved in the git history and thus recoverable, all usage of it could be deleted right now, which would enable full JSON export of new simulation data files.
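As an alternative to deleting Iext outright, an exporter could also just skip fields it can't serialize. A generic sketch of that idea (Python's json here, not JSONlab; the foo structure is a stand-in):

```python
import json

def strip_unserializable(obj):
    # Recursively drop callable entries (the analogue of function handles)
    # so the remainder is JSON-serializable.
    if isinstance(obj, dict):
        return {k: strip_unserializable(v) for k, v in obj.items()
                if not callable(v)}
    if isinstance(obj, list):
        return [strip_unserializable(v) for v in obj]
    return obj

foo = {"parms": {"biosim": {"Iext": lambda t: 0.0, "dt": 0.01}},
       "data": [1, 2]}
clean = strip_unserializable(foo)
out = json.dumps(clean)   # no longer chokes on the function handle
```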
In order to use the "new" data format, one must set timesurfer_flag to 0; it currently defaults to 1, where the timesurfer part of the name denotes the old format. This option is implemented in matlab/functions/runsim.m, but it cannot yet be passed through matlab/functions/simstudy.m; therefore, if you want to submit batch jobs and also use the new data format, the option must be added to simstudy().
So, I just noticed that none of my larger simulations (e.g. 100 cells x 2 cell types, for anything over 3 seconds at a dt of 0.01 milliseconds) actually save sim_data, probably due to this very simple error:
"Warning: Variable 'sim_data' cannot be saved to a MAT-file whose version is older than 7.3. To save this variable, use the -v7.3 switch. Skipping..." as discussed in this MATLAB Central thread. Never mind that I question why such a thing is a warning instead of an error: in matlab/functions/biosimdriver.m there are a few places (lines 81 and 143) where an optional -v7.3 flag was previously present but has been commented out on the dev branch for a while. In my testing, putting that flag back into the main sim_data save line (81) solves the problem, probably for the reason mentioned in the thread (a variable above a certain size, near 2 GB, MUST be saved into a version 7.3 MAT-file).
tl;dr I'm marking this as a "bug" because it's something that needs fixing relative to what's in the dev branch right now, but I'm mainly opening this issue to ask: are there any good reasons NOT to add this flag to all DNSIM saves of sim_data?
Please work directly on the dnsim_codegen branch to fix this bug.
I found a bug in the script below (it's directly related to the CLI workflow): if I clone a fresh dnsim copy to ~/dnsim and run cluster sims via this script with matlab -r dnsim_batch_example, it works. BUT when I add a new mechanism to ~/dnsim/database and try to run a cluster sim using that new mechanism, it can't be found (yes, I have $DNSIM exported correctly in my .bashrc, as in the https://github.com/jsherfey/dnsim README). In ~/batchdirs/whatever/pbsout/job1.out it says:
Found mech file. Using ~/research/modeling/database/iNa.txt
Looking in known mech list for iLeak_TC <the new one>
Error: Failed to find...
In dnsim/matlab/functions/get_mechlist.m it appears that DBPATH, which defaults to ~/research/modeling/database (a path that of course doesn't necessarily exist!), does not get a BIOSIMROOT to pull from when run on the cluster. DBPATH appears to be the database directory that is ultimately used.
This wouldn't be a problem if I could just set BIOSIMROOT=~/dnsim in my run script or startup.m or whatever, but even though BIOSIMROOT is a global variable, it apparently can't be set from just anywhere: I tried the above, and I still get this problem. Weirdly, when I run the script over SSH and print BIOSIMROOT in the main session where I'm calling/running the script, it is my current directory (unsure if this is always the case), but in the cluster logs it is empty (BIOSIMROOT = []) in the MATLAB output for the cluster run! During the initial SSH run, the script successfully finds all the files because it's looking in the current working directory, and BIOSIMROOT is set to it (in this case, ~/dnsim). Is BIOSIMROOT supposed to work as a bash environment variable? That could get complicated, since the cluster isn't necessarily using the same ~/.bashrc as your account's desktop/VM.
What works:
- load('~/dnsim/database/TC-RE-debug.mat') in the run script DOES work.
- Editing dnsim/matlab/functions/get_mechlist.m and replacing ~/research/modeling/database with whatever my new database is, like ~/dnsim/database, or hardcoding my BIOSIMROOT in there. But this IS hardcoding.

script:
%% Demo: DNSim batch simulations
% Purpose: demonstrate how to run simulation batches varying model parameters
% on local machine or cluster with or without codegen.
% Created by Jason Sherfey (27-Mar-2015)
% -------------------------------------------------------------------------
% model specification
spec=[];
spec.nodes(1).label = 'HH'; % name of this node population
spec.nodes(1).multiplicity = 1; % size of this node population
spec.nodes(1).dynamics = {'V''=current/c'}; % node dynamics for state variables that can be "seen" by other nodes
spec.nodes(1).mechanisms = {'iNa','iK','itonic'}; % note: corresponding mechanism files (e.g., iNa.txt) should exist in matlab path
spec.nodes(1).parameters = {'stim',7,'c',1}; % user-specified parameter values (note: these override default values set in mechanism files)
% simulation controls
tspan=[0 1000]; % [beg end], ms, simulation time limits
SOLVER='euler'; % numerical integration method
dt=.01; % integration time step [ms]
dsfact=10; % downsample factor (applied to simulated data)
codepath='~/dnsim'; % path to dnsim toolbox
%% SINGLE SIMULATION
% local simulation with codegen
data = runsim(spec,'timelimits',tspan,'dt',dt,'SOLVER',SOLVER,'dsfact',dsfact,...
'coder',1,'verbose',0);
plotv(data,spec,'varlabel','V');
%% SIMULATION BATCHES
% output controls
rootdir = pwd; % where to save outputs
savedata_flag = 0; % 0 or 1, whether to save the simulated data (after downsampling)
saveplot_flag = 1; % 0 or 1, whether to save plots
plotvars_flag = 1; % 0 or 1, whether to plot state variables
% what to vary across batch
scope = 'HH'; % node or connection label whose parameter you want to vary
variable = 'c'; % parameter to vary (e.g., capacitance)
values = '[1 1.5]'; % values for each simulation in batch (varied across batches)
%{
Prerequisites for submitting jobs to cluster:
1. add path-to-dnsim/csh to your environment.
e.g., add to .bashrc: "export PATH=$PATH:$HOME/dnsim/csh"
2. run matlab on cluster node that recognizes the command 'qsub'
%}
% -------------------------------------------------------------------------
% LOCAL BATCHES (run serial jobs on current machine)
cluster_flag=0; % 0 or 1, whether to submit jobs to a cluster (requires: running on a cluster node that recognizes the command 'qsub')
% without codegen
values = '[1 1.5]';
[~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
'cluster_flag',cluster_flag,'coder',0);
% with codegen
values = '[2 2.5]';
[~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
'cluster_flag',cluster_flag,'coder',1);
% -------------------------------------------------------------------------
% CLUSTER BATCHES (submit to queue for running parallel jobs on cluster nodes)
cluster_flag=1; % 0 or 1, whether to submit jobs to a cluster (requires: running on a cluster node that recognizes the command 'qsub')
% without codegen
values = '[3 3.5]';
[~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
'cluster_flag',cluster_flag,'coder',0);
% with codegen
values = '[4 4.5]';
[~,~,outdir]=simstudy(spec,{scope},{variable},{values},...
'dt',dt,'SOLVER',SOLVER,'rootdir',rootdir,'timelimits',tspan,'dsfact',dsfact,...
'savedata_flag',savedata_flag,'saveplot_flag',saveplot_flag,'plotvars_flag',plotvars_flag,'addpath',codepath,...
'cluster_flag',cluster_flag,'coder',1);
There are calls to the deprecated buildmodel2 function in both dnsim/matlab/functions/browse_dnsim.m and dnsim/matlab/functions/modeldiff.m, and there are still references to it in comments in dnsim/matlab/functions/dnsim.m and dnsim/matlab/functions/buildmodel.m.
I'm making an issue out of this because I don't know the differences between the interfaces and bodies of buildmodel and buildmodel2, nor in which commit they were merged/deleted/whatever.
This will automatically allow a dt adjustment according to the solver when codegen is not used (as currently happens in dev when codegen is enabled). It will also clean up the code by removing the duplication of the same logic: one version parameterized for codegen and a second version with explicit values for when codegen is not used/available.
I am trying to implement synapses with an exponential form, instead of using ODEs. These are typically described as a synaptic response kernel convolved with a train of delta functions representing presynaptic spikes.
I've found that the iRhythmic.txt mechanism attempts to do this, but it seems to be broken. Advice would be appreciated.
If this type of mechanism is outside the scope of the MATLAB ODE solver, it might be possible to hack it using ODEs.
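For what it's worth, a single-exponential synapse doesn't actually need the convolution machinery: convolving exp(-t/tau) with a spike train is exactly the solution of s' = -s/tau with s incremented by 1 at each spike time, so it fits the ODE framework directly. A numerical sketch (Python/NumPy, with made-up parameters, not DNSIM code) comparing the two routes:

```python
import numpy as np

tau, dt, T = 5.0, 0.01, 50.0
t = np.arange(0.0, T, dt)
spikes = [10.0, 12.0, 30.0]   # presynaptic spike times

# ODE route: Euler integration of s' = -s/tau with jumps at spike times
s = np.zeros_like(t)
spike_idx = {int(round(tk / dt)) for tk in spikes}
for i in range(1, len(t)):
    s[i] = s[i-1] + dt * (-s[i-1] / tau)
    if i in spike_idx:
        s[i] += 1.0

# Convolution route: sum of exponential kernels, one per spike
s_conv = np.zeros_like(t)
for tk in spikes:
    s_conv += np.where(t >= tk, np.exp(-(t - tk) / tau), 0.0)
```

The two traces agree up to Euler discretization error, so a broken convolution-based mechanism could plausibly be replaced by the ODE form (double-exponential kernels work the same way with a second state variable).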
(Prescript: I highly doubt there will ever be legal issues with DNSIM, but if it does get in trouble we will want to be sure where it stands, and the sooner we pick a license, the better - specifically for solidifying authorship.)
TL;DR I think we should use the GNU General Public License (v2 or v3). This would just mean adding a LICENSE file and, ideally, putting a brief blurb at the top of every source code file. I could do this easily.
As per the meeting with BU's Office of Technology Development (OTD) that Jason (Sherfey), Jason Climer, and I attended, there are two key legal components of software: Copyright and License.
The copyright owner, given that Jason probably built DNSIM on BU's dime (and, from a legal point of view, thereby almost definitely not in his spare time in his garage), is the Trustees of Boston University. This isn't really an issue though, since...
We can choose whatever license we want, as free/open-source as we desire; OTD has informed us that BU generally likes open software. BU will likely support any licensing as long as we have the authorship squared away (which is abundantly clear in this case: Jason is the main author). It's possible that anyone who has ever contributed code counts as an author, and theoretically, if we don't license now, someone may contribute code (if we accept their changes) while disagreeing with whichever license we eventually pick, which becomes problematic.
GitHub and others have gone a long way to trying to inform the developer public about licensing options, including making http://choosealicense.com/ which tells you how to pick one and lists the most popular options.
As presented on that site, the Apache, MIT, and GNU General Public License v2 (GPL) licenses are probably the most common free and/or open source licenses.
This is just one example, but the MIT license says anyone can do whatever they want with the code - including taking it, selling it, and not distributing their further changes back to us ("upstream") - as long as they attribute the author. The GPL prevents this scenario by requiring that anyone who distributes a copy of, or changes to, the code use the same (or a "compatible") license; those people would themselves HAVE to give up their source code, including all their changes, for free if anyone asked.
"Open Source Software" (OSS) licenses (as opposed to Free and Open Source Software/FOSS) like the MIT license use the definition of "free" whereby the person receiving the software can do WHATEVER they want with it: make it private, hide it, change it, sell it, etc. AFAIK this is equivalent to "free as in beer", and no more than that.
The key difference is the use of the word "free". BOTH OSS and FOSS "give away their source code for free", but idiomatically, "free software" generally means the GPL kind discussed below, where the term is closely tied to specific notions of freedom and the future of the source code.
"Free and Open Source Software" (FOSS) licenses like the GNU GPL versions 2 and 3, and the Apache License (Apache is "compatible" with GPLv3 but NOT GPLv2), use a stronger, more philosophical definition of "free": one that is as much about giving away source code for free as it is about "forcing" the same freedoms onto other people who obtain or modify your code. The GPL in particular uses "copyleft" licensing mechanics, so anyone who builds on your GPL'd code must release that NEW code under the same (or compatible) terms too. That person is held to account: they must give the code away for free, keep it available, and allow local changes to happen. So yes, it's more "restricting" than the "free as in beer" OSS under one definition of freedom, but only because it forces a different type of freedom to propagate (you must make everything built with this code available, and you must not put limitations on other people making changes -- you must let them tinker!!! and their tinkerings must then be made available too...). While, as a side effect, you CAN always download GPL'd source code for free, the GPL is really about "free as in freedom".
In some ways the copyleft FOSS licenses are as much about making code available as they are about a social movement to sway the field against proprietary, hidden code that prevents people from developing the code themselves. I think most modern developers would say this has been a hugely good thing for software development on the whole. With the exception of (IIRC) a few proprietary binary blobs for drivers (which are easy to avoid), Linux is GPLv2, and the world would be much worse off if its license were more restrictive. It's really quite incredible: some student's pet project from the 90s harnessed open community development to such a degree that it's ubiquitous now.
Not that I'm saying DNSIM is going to be the next Linux.
I'm just saying that the GPL is awesome, and going with it means the software will "always be free": any direct development on it has to be publicly released, which we can then merge back into the "master original" version of the code. This dynamic is EXACTLY why things like GitHub are so successful (never mind that Google Code is shutting down, but then again, it wasn't really a business on its own). People see the GPL and know exactly what it means, and how open it and its descendants will be.
Note that the GPL doesn't mean you can't make money on the software; there are plenty of business models that give the software away for free but offer paid support services (e.g. Red Hat Linux), or perhaps keep private, unreleased software that works alongside GPL'd software. Arguably, anyone who does private software development on Linux does that anyway. There is definitely no limit on making money with GPL'd software; you just can't prevent people from getting a copy of the code under any circumstances.
I personally would strongly prefer a GPL license (though if Apache v2 is "compatibly free" with GPLv3, it must be good too), and I'm open to the idea of "just" an Open Source license (rather than a "Free" one as well), but obviously Jason should have the main say. Thankfully, given that the code is already public, assigning a very limiting license would be ineffectual anyway. Hopefully we will never even need to worry about licensing, and I SERIOUSLY doubt we ever will, but GPLing DNSIM would foster collaboration (in part by forcing anyone's changes or forks into the open, so we could merge them back in ourselves), protect DNSIM in the event someone tries to "steal" it by selling limited access to their own version or passing it off as theirs, and other things I haven't thought of.
What do you all think?
That is not what "inhibitory" means, and now I am very sad that all this work I did was not with an I-to-I network at all. I will fix it; I just wanted to complain to you guys first (and alert people in case they were making the same mistake).