
brainpy's Introduction

Header image of BrainPy - brain dynamics programming in Python.


BrainPy is a flexible, efficient, and extensible framework for computational neuroscience and brain-inspired computation, based on Just-In-Time (JIT) compilation (built on top of JAX, Taichi, Numba, and others). It provides an integrative ecosystem for brain dynamics programming, covering model building, simulation, training, analysis, and more.
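As a quick taste, here is a minimal sketch of a BrainPy simulation, assuming the bp.neurons.LIF, bp.dyn.DSRunner, and bp.visualize APIs that appear in the examples later on this page (parameter values are illustrative only):

import brainpy as bp

# a small leaky integrate-and-fire population driven by a constant current
lif = bp.neurons.LIF(100, V_rest=-60., V_th=-50., V_reset=-60., tau=20.)
runner = bp.dyn.DSRunner(lif, monitors=['spike'], inputs=[('input', 20.)])
runner.run(200.)  # simulate 200 ms

bp.visualize.raster_plot(runner.mon.ts, runner.mon['spike'], show=True)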

Installation

BrainPy is based on Python (>=3.8) and can be installed on Linux (Ubuntu 16.04 or later), macOS (10.12 or later), and Windows platforms.

For detailed installation instructions, please refer to the documentation: Quickstart/Installation

Using BrainPy with Docker

We provide a Docker image for BrainPy. You can use the following command to pull the image:

$ docker pull brainpy/brainpy:latest

Then, you can run the image with the following command:

$ docker run -it --platform linux/amd64 brainpy/brainpy:latest

Using BrainPy with Binder

We provide a Binder environment for BrainPy. You can use the following button to launch the environment:

Binder


Citing

BrainPy is developed by a team at the Neural Information Processing Lab at Peking University, China. Our team is committed to the long-term maintenance and development of the project.

If you are using BrainPy, please consider citing the corresponding papers.

Ongoing development plans

We highlight the key features and functionalities that are currently under active development.

We also welcome your contributions (see Contributing to BrainPy).

  • model and data parallelization on multiple devices for dense connection models
  • model parallelization on multiple devices for sparse spiking network models
  • data parallelization on multiple devices for sparse spiking network models
  • pipeline parallelization on multiple devices for sparse spiking network models
  • multi-compartment modeling
  • measurements, analysis, and visualization methods for large-scale spiking data
  • online learning methods for large-scale spiking network models
  • classical plasticity rules for large-scale spiking network models

brainpy's People

Contributors

adaliubc, akitsufaye, c-xy17, chaoming0625, charlielam0615, cloudydory, dependabot[bot], github-actions[bot], grysgreat, hoshinokoji, mamiezhu, routhleck, shangyangli, terrypang, yqianjiang, yygf123, zhangzeqingcn, ztqakita


brainpy's Issues

brainpy.init.DOGDecay returns the weight matrix with incorrect structure (values)

import brainpy as bp
import brainpy.math as bm
import numpy as np
import matplotlib.pyplot as plt

bp.math.set_platform('cpu')

# visualization
def mat_visualize(matrix, cmap=plt.cm.get_cmap('coolwarm')):
    im = plt.matshow(matrix, cmap=cmap)
    plt.colorbar(mappable=im, shrink=0.8, aspect=15)
    plt.show()

size = (10, 12)
dog_init = bp.init.DOGDecay(sigmas=(1., 3.), max_ws=(10., 5.), min_w=0.1, include_self=True)
weights = dog_init(size)
print('shape of weights: {}'.format(weights.shape))
# out: shape of weights: (120, 120)

# visualize neuron(3, 4)
mat_visualize(weights[3*12+4].reshape((10, 12)), cmap=plt.cm.get_cmap('Reds'))

According to the output, the shape of weights is (120, 120). Reshaping each row to (10, 12) gives a wrong structure, but reshaping it to (12, 10) gives the correct output, so I suspect that row and col are swapped inside the __call__ function.

(attached image: weight init)

Support automatic exponential Euler (exp_auto) for SDE integration


There is an exp_auto numerical method for ODEs that relies on autodiff (brainpy.math.vector_grad). I think we also need exp_auto for SDEs.
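For reference, a minimal sketch of how exp_auto is used with ODEs today; the bp.sdeint line in the comment only illustrates what the requested SDE counterpart might look like and is not yet supported:

import brainpy as bp

# ODEs: the exponential Euler step is derived automatically via autodiff
@bp.odeint(method='exp_auto')
def dV(V, t, tau=10.):
    return -V / tau

# the request: the same automatic derivation for the drift term of SDEs, e.g.
#   bp.sdeint(f=drift, g=diffusion, method='exp_auto')   # not yet supported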

Monitors should record the initial state of each variable

Attached is the simulation of two HH models and an Alpha synapse model. The initial membrane potential (V) of the HH model is about -65 mV, but in the image the line starts at around 0. This is because the monitor does not record the initial state, so the starting point is the value at the first integration time step. I think the initial state of each variable should be included in the monitor so that the result is more intuitive for users.

(attached image: AlphaCOBA_output)

A certain 'delay_step' setting in synapses can cause an error

When creating a brain simulation network, suppose neuron groups A and B form a synaptic connection whose delay_step is set to 1, and then neuron groups A and C form a synaptic connection whose delay_step is set to None.

self.A2B = bp.synapses.Exponential(self.A, self.B,
                                   bp.conn.FixedProb(0.02),
                                   output=bp.synouts.COBA(E=0.), g_max=we,
                                   tau=5.,
                                   method=method,
                                   delay_step=1)
self.A2C = bp.synapses.Exponential(self.A, self.C,
                                   bp.conn.FixedProb(0.02),
                                   output=bp.synouts.COBA(E=0.), g_max=we,
                                   tau=5.,
                                   method=method,
                                   delay_step=None)

While building the synaptic connection between A and C, __init__ calls the method register_delay, and the entry of global_delay_data corresponding to neuron group A's identifier is replaced by None. Before the A-to-C connection is built, that entry is an instance of bm.LengthDelay.

In this case, when self.A2B updates, it calls the method get_delay_data and the error "'NoneType' object is not callable" is raised.

A much easier way to write control flows


Control flow is not easy in BrainPy because it relies on JAX. I think there may be a more convenient way to write control flows. For example:

import brainpy as bp
import brainpy.math as bm

def f(a):
  return bm.ifelse(a > 10., lambda x: 10.,
                   bm.logical_and(a > 5., a <= 10.), lambda x: x * 2,
                   lambda x: x + 2, a)

Or,

def f(a):
  return bm.ifelse(conditions=[a > 10., bm.logical_and(a > 5., a <= 10.)],
                   functions=[lambda x: 10., lambda x: x * 2, lambda x: x + 2],
                   operands=a,
                   dyn_vars=...)
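For comparison, here is a minimal sketch of what currently has to be written with raw JAX primitives (nested jax.lax.cond); the proposed bm.ifelse would flatten this nesting:

import jax
import jax.numpy as jnp

def f(a):
    # three branches expressed as two nested two-way conditionals
    return jax.lax.cond(
        a > 10.,
        lambda x: jnp.full_like(x, 10.),          # a > 10
        lambda x: jax.lax.cond(x > 5.,
                               lambda y: y * 2.,  # 5 < a <= 10
                               lambda y: y + 2.,  # a <= 5
                               x),
        a,
    )

print(f(jnp.float32(7.)))  # 14.0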

Support "shared_kwargs" when using "brainpy.nn.RNNTrainer" like "train=True" in dropout

Please:

  • Check for duplicate requests.
  • Describe your goal, and if possible provide a code snippet with a motivating example.

Currently, after applying JIT, shared_kwargs has no effect when using the structural trainers in the brainpy.nn module.

It would be great if we could set shared_kwargs such as train=True in these structural trainers:

import brainpy as bp

trainer = bp.nn.BPTT(...)
trainer.fit(train_data, shared_args=dict(train=True))

Documentation of control flows

Please:

  • Check for duplicate requests.
  • Describe your goal, and if possible provide a code snippet with a motivating example.

Control flows should be clarified in the BDP tutorial section. See #184 for more information.

How to build chained neuron-group relationships through synapse models in code

We want to build a neural circuit from multiple neuron groups and the corresponding excitatory/inhibitory synapse models. We have prepared HH-model neuron group A (5 neurons), group B (5 neurons), group C (5 neurons), and group D (5 neurons). We would like a network where A->B uses excitatory synapses, B->C uses inhibitory synapses, and C->D uses inhibitory synapses, with a constant DC or AC potential applied at A, so that the result of A->B is available in B->C, and so on.
We have the following questions; it would be great if the authors could explain them.

1. How do we pass the relationship along the chain? In the network we directly write code like A=netrou(monitor='U'), B=netrou(monitor='U'), synab=(A,B), network1=(A,synab,B), C=netrou(monitor='U'), synbc=(B,C), network2=(B,synbc,C), network1.run(500, input={input, 5}) (does this 5 represent the current?). But we find that the time course of the potential U in the monitors of A/B/C does not match the expected postsynaptic potential changes after excitatory and inhibitory synapses.

2. If we need to operate on individual neurons within a group, for example, A has 5 neurons and B has 5 neurons, and we want 3 neurons of A to connect to 3 neurons of B through excitatory synapses and 2 neurons of A to connect to 2 neurons of B through inhibitory synapses, how can this be implemented in your framework?

Since our biology background is limited, these questions may be rather basic. If the authors have time, we would appreciate an answer; relevant literature would be even better.
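A sketch of one possible answer, reusing the synapse APIs shown elsewhere on this page. bp.neurons.HH, bp.conn.All2All, and bp.conn.IJConn are assumptions about the installed BrainPy version; with COBA outputs, E=0. gives an excitatory synapse and E=-80. an inhibitory one:

import brainpy as bp

A, B, C, D = (bp.neurons.HH(5) for _ in range(4))

# Question 1: chain the groups inside one Network, so that B's input at each
# step reflects A's output, C's reflects B's, and so on.
A2B = bp.synapses.Exponential(A, B, bp.conn.All2All(),
                              output=bp.synouts.COBA(E=0.))    # excitatory
B2C = bp.synapses.Exponential(B, C, bp.conn.All2All(),
                              output=bp.synouts.COBA(E=-80.))  # inhibitory
C2D = bp.synapses.Exponential(C, D, bp.conn.All2All(),
                              output=bp.synouts.COBA(E=-80.))  # inhibitory
net = bp.dyn.Network(A2B, B2C, C2D, A=A, B=B, C=C, D=D)

# a constant current of 5. applied to A; 'A.input' is the external input term
runner = bp.dyn.DSRunner(net, monitors=['A.V', 'B.V', 'C.V', 'D.V'],
                         inputs=[('A.input', 5.)])
runner.run(500.)

# Question 2: wire chosen neuron pairs explicitly with an index-based connector
exc = bp.synapses.Exponential(A, B, bp.conn.IJConn(i=[0, 1, 2], j=[0, 1, 2]),
                              output=bp.synouts.COBA(E=0.))
inh = bp.synapses.Exponential(A, B, bp.conn.IJConn(i=[3, 4], j=[3, 4]),
                              output=bp.synouts.COBA(E=-80.))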

Strange Results by ReportRunner

In runners.ipynb:

import brainpy as bp

bp.math.set_platform('cpu')

class EINet(bp.Network):
  def __init__(self, num_exc=3200, num_inh=800, method='exp_auto'):
    # neurons
    pars = dict(V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.)
    E = bp.dyn.LIF(num_exc, **pars, method=method)
    I = bp.dyn.LIF(num_inh, **pars, method=method)
    E.V[:] = bp.math.random.randn(num_exc) * 2 - 55.
    I.V[:] = bp.math.random.randn(num_inh) * 2 - 55.

    # synapses
    E2E = bp.dyn.ExpCOBA(E, E, bp.conn.FixedProb(prob=0.02),
                         E=0., g_max=0.6, tau=5., method=method)
    E2I = bp.dyn.ExpCOBA(E, I, bp.conn.FixedProb(prob=0.02),
                         E=0., g_max=0.6, tau=5., method=method)
    I2E = bp.dyn.ExpCOBA(I, E, bp.conn.FixedProb(prob=0.02),
                         E=-80., g_max=6.7, tau=10., method=method)
    I2I = bp.dyn.ExpCOBA(I, I, bp.conn.FixedProb(prob=0.02),
                         E=-80., g_max=6.7, tau=10., method=method)

    super(EINet, self).__init__(E2E, E2I, I2E, I2I, E=E, I=I)


net = EINet()

runner = bp.ReportRunner(net, 
                         monitors=['E.spike'],
                         inputs=[('E.input', 20.), ('I.input', 20.)],
                         jit=True)
runner.run(100.)

bp.visualize.raster_plot(runner.mon.ts, runner.mon['E.spike'], show=True)

(attached image: ReportRunner_EINet)

However, the same code run with DSRunner generates the correct results. The reason is unclear.
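For reference, the working variant reportedly only swaps the runner class; a sketch assuming the bp.dyn.DSRunner interface used elsewhere on this page:

runner = bp.dyn.DSRunner(net,
                         monitors=['E.spike'],
                         inputs=[('E.input', 20.), ('I.input', 20.)],
                         jit=True)
runner.run(100.)
bp.visualize.raster_plot(runner.mon.ts, runner.mon['E.spike'], show=True)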

Library "npbrain" can't be found

In README.rst / Define a Hodgkin–Huxley neuron model, the statement import npbrain as nb cannot be executed because no npbrain package can be found. Maybe it should be import brainpy as nb?

Linear interpolation in "brainpy.math.FixedLenDelay" is expensive when simulation time is long


Compared with brainpy.dyn.ConstantDelay, brainpy.math.FixedLenDelay uses linear interpolation to obtain the delayed data value, which is expensive and slow when the simulation time is long.

However, sometimes we cannot directly apply brainpy.dyn.ConstantDelay in our models.

I think it is necessary to support the same method as brainpy.dyn.ConstantDelay for getting delayed data, because it is computationally cheap and can still handle most delay cases, such as the constant delay $x(t-\tau)$.
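For clarity, here is a minimal sketch of the cheaper lookup scheme being requested (my assumption of how brainpy.dyn.ConstantDelay works internally): round the delay to an integer number of steps and index a ring buffer, with no interpolation at all:

import numpy as np

dt, tau = 0.1, 2.0
steps = int(round(tau / dt))   # delay length in integer steps
buf = np.zeros(steps + 1)      # ring buffer of past values (zeros until filled)
head = 0

def push_and_read(x_t):
    """Store x(t) and return x(t - tau) in O(1), with no interpolation."""
    global head
    buf[head] = x_t                # store the current value
    head = (head + 1) % len(buf)   # advance the write pointer
    return buf[head]               # the oldest entry, i.e. x(t - steps * dt)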

Documentation of operator customization


It's exciting to see the recent PR #122, which enables low-level operator customization in BrainPy using only Numba. However, how do we use it? The documentation and tutorial are still missing.

brainpy.analysis.LowDimAnalyzer cannot plot a nullcline that is perfectly horizontal

import brainpy as bp
import matplotlib.pyplot as plt

bp.math.enable_x64()


def ppa2d(group, title, v_range=None, w_range=None, Iext=65., duration=400):
    v_range = [-70., -40.] if not v_range else v_range
    w_range = [-10., 50.] if not w_range else w_range

    # use BrainPy's phase plane analysis tool
    phase_plane_analyzer = bp.analysis.PhasePlane2D(
        model=group,
        target_vars={'V': v_range, 'w': w_range},  # variables to analyze
        pars_update={'Iext': Iext},  # parameters to update
        resolutions=0.05
    )

    # plot the nullclines of V and w
    phase_plane_analyzer.plot_nullcline()
    # plot the fixed points
    phase_plane_analyzer.plot_fixed_point()
    # plot the vector field
    phase_plane_analyzer.plot_vector_field(plot_style=dict(color='lightgrey', density=1.))

    # ... (some code is not shown)

    plt.xlim(v_range)
    plt.ylim(w_range)
    plt.title(title)
    plt.show()

(attached image: ppa_tonic_spiking_bugs)

Delay variable updating in synapse models seems weird


Ideally, we should update the delay variables in neurons after the neuron model itself has been updated. Currently, however, we update them in the synapse models, before the neuron models are updated.

It's easy to forget to mark dynamically changed variables as ``brainpy.math.Variable``


In BrainPy, JIT compilation and other JAX transformations are only supported on instances of brainpy.math.Variable. However, in a large model, we often forget to mark the dynamically changed variables as brainpy.math.Variable.

Therefore, is there a better way to find bugs when a non-Variable tensor is used to store updated values?
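To make the pitfall concrete, a minimal sketch using bp.Base and bm.Variable as elsewhere on this page; the silent-loss behavior in the comments is the one reported in this issue:

import brainpy as bp
import brainpy.math as bm

class Counter(bp.Base):
  def __init__(self):
    super(Counter, self).__init__()
    self.good = bm.Variable(bm.zeros(1))  # tracked: updates survive JIT
    self.bad = bm.zeros(1)                # plain tensor: treated as a constant
                                          # under JIT, so updates are lost

  def update(self):
    self.good += 1.
    self.bad += 1.  # no error is raised, which is exactly the problem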

Support offline learning with multiple trials

import brainpy as bp

i = bp.nn.Input(1)
r = bp.nn.Reservoir(100)
o = bp.nn.Dense(1)

model = i >> r >> o

model.initialize(num_batch=1)

Currently, we only support offline learning with a batch size of 1. This cannot satisfy the needs of real data. We should extend it to learning from multiple trials.

Support for hyperparameter exploration


Hyperparameter tuning is burdensome in computational modeling. We should provide more support for easy hyperparameter exploration.

Cannot set default float/int types

We cannot set the default int or float types by using brainpy.math module:

>>> import brainpy.math as bm
>>> bm.float_
jax._src.numpy.lax_numpy.float32
>>> bm.set_float_(bm.float64)
>>> bm.float_
jax._src.numpy.lax_numpy.float32

Support training of SNN networks


Please see the following papers:

  • Neftci E O, Mostafa H, Zenke F. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks[J]. IEEE Signal Processing Magazine, 2019, 36(6): 51-63.
  • Nicola W, Clopath C. Supervised learning in spiking neural networks with FORCE training[J]. Nature communications, 2017, 8(1): 1-15.
  • Bellec G, Scherr F, Subramoney A, et al. A solution to the learning dilemma for recurrent networks of spiking neurons[J]. Nature communications, 2020, 11(1): 1-15.
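To illustrate the first approach, a minimal surrogate-gradient sketch in raw JAX, following the idea in Neftci et al. (2019); this is not an existing BrainPy API. The forward pass is a Heaviside step, and the backward pass substitutes a fast-sigmoid pseudo-derivative:

import jax
import jax.numpy as jnp

@jax.custom_vjp
def spike(v):
    # forward: non-differentiable Heaviside step
    return (v >= 0.).astype(jnp.float32)

def spike_fwd(v):
    return spike(v), v  # save v for the backward pass

def spike_bwd(v, g):
    # backward: smooth fast-sigmoid surrogate replaces the zero gradient
    surrogate = 1. / (1. + 10. * jnp.abs(v)) ** 2
    return (g * surrogate,)

spike.defvjp(spike_fwd, spike_bwd)

# gradients now flow through the spike nonlinearity
print(jax.grad(lambda v: spike(v).sum())(jnp.array([-0.2, 0.3])))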

Compatible interface documentation


We are missing important documentation for interfaces that are compatible with previous BrainPy releases.

Provide more nodes in artificial neural networks


Currently, BrainPy provides excellent nodes for reservoir computing and recurrent neural networks. However, there are many more layers and models that are useful for constructing trainable dynamics models, such as convolution kernels, transformers, batch normalization, etc.

Using the official example "Training a spiking neural network", BrainPy cannot use GPU acceleration

CODE:

import brainpy as bp
import brainpy.math as bm

bm.set_platform('gpu')

E = bp.neurons.LIF(3200, V_rest=-60., V_th=-50., V_reset=-60.,
                   tau=20., tau_ref=5., method='exp_auto',
                   V_initializer=bp.init.Normal(-60., 2.))

I = bp.neurons.LIF(800, V_rest=-60., V_th=-50., V_reset=-60.,
                   tau=20., tau_ref=5., method='exp_auto',
                   V_initializer=bp.init.Normal(-60., 2.))

E2E = bp.synapses.Exponential(E, E, bp.conn.FixedProb(prob=0.02), g_max=0.6,
                              tau=5., output=bp.synouts.COBA(E=0.),
                              method='exp_auto')

E2I = bp.synapses.Exponential(E, I, bp.conn.FixedProb(prob=0.02), g_max=0.6,
                              tau=5., output=bp.synouts.COBA(E=0.),
                              method='exp_auto')

I2E = bp.synapses.Exponential(I, E, bp.conn.FixedProb(prob=0.02), g_max=6.7,
                              tau=10., output=bp.synouts.COBA(E=-80.),
                              method='exp_auto')

I2I = bp.synapses.Exponential(I, I, bp.conn.FixedProb(prob=0.02), g_max=6.7,
                              tau=10., output=bp.synouts.COBA(E=-80.),
                              method='exp_auto')

net = bp.dyn.Network(E2E, E2I, I2E, I2I, E=E, I=I)

runner = bp.dyn.DSRunner(net,
                         monitors=['E.spike', 'I.spike'],
                         inputs=[('E.input', 20.), ('I.input', 20.)],
                         dt=0.1)

runner.run(10000)
import matplotlib.pyplot as plt

plt.figure(figsize=(12, 4.5))

plt.subplot(121)

bp.visualize.raster_plot(runner.mon.ts, runner.mon['E.spike'], show=False)
plt.subplot(122)
bp.visualize.raster_plot(runner.mon.ts, runner.mon['I.spike'], show=True)

ERROR
"""
Traceback (most recent call last):
  File "D:\SNN_MidBrain_Project\SNN_Sound source location\Midbrain_simulation_practice\brainpy\test2.py", line 37, in <module>
    runner.run(10000)
  File "C:\Users\Link\Anaconda3\lib\site-packages\brainpy\dyn\runners.py", line 500, in run
    return self.predict(*args, **kwargs)
  File "C:\Users\Link\Anaconda3\lib\site-packages\brainpy\dyn\runners.py", line 420, in predict
    outputs, hists = self._predict(xs=(times, indices, xs), shared_args=shared_args)
  File "C:\Users\Link\Anaconda3\lib\site-packages\brainpy\dyn\runners.py", line 464, in _predict
    outputs, hists = _predict_func(xs)
  File "C:\Users\Link\Anaconda3\lib\site-packages\brainpy\dyn\runners.py", line 590, in <lambda>
    run_func = lambda all_inputs: f(all_inputs)[1]
  File "C:\Users\Link\Anaconda3\lib\site-packages\brainpy\math\controls.py", line 167, in call
    dyn_values, (out_values, results) = lax.scan(
  File "C:\Users\Link\Anaconda3\lib\site-packages\brainpylib\event_sum.py", line 109, in _event_sum_translation
    raise ValueError('Cannot find compiled gpu wheels.')
ValueError: Cannot find compiled gpu wheels.
"""

I have tested JAX, and it can run on the GPU successfully, but BrainPy cannot. Could the team provide an example to verify whether GPU acceleration is being used successfully? I really need this; otherwise it is too slow to use the CPU.

How to customize the ``brainpy.nn.Node`` feedforward output when it relies on the feedback inputs?

Currently, brainpy.nn.Node initializes the feedforward connections and feedback connections in two separate functions:

  • init_ff()
  • init_fb()

When implementing the init_ff() function, we do not know the feedback information. However, when the feedback inputs are needed as sources to initialize the node's feedforward output, we cannot initialize this node correctly.

There may be a way to remove this drawback.

How to Use Synaptic Weights

Hello, I don't understand the synaptic weights in the official documentation. The code example only shows how to set synaptic weights in isolation, unrelated to an actual network simulation, so I don't know how to set the weights in a network.
I tried to modify the official E/I network by adding only these lines:

E2E.init_weights(weight=0.0, comp_method='dense')
E2I.init_weights(weight=0.0, comp_method='dense')
I2E.init_weights(weight=0.0, comp_method='dense')
I2I.init_weights(weight=0.0, comp_method='dense')

With the weights at 0, the final result should show no spikes, but the resulting plot is the same as without these lines. I hope you can give me an example of how to set synaptic weights in a network.

import brainpy as bp
import brainpy.math as bm
bm.set_platform('cpu')
E = bp.neurons.LIF(3200, V_rest=-60., V_th=-50., V_reset=-60.,
                   tau=20., tau_ref=5., method='exp_auto',
                   V_initializer=bp.init.Normal(-60., 2.))
I = bp.neurons.LIF(800, V_rest=-60., V_th=-50., V_reset=-60.,
                   tau=20., tau_ref=5., method='exp_auto',
                   V_initializer=bp.init.Normal(-60., 2.))
E2E = bp.synapses.Exponential(E, E, bp.conn.FixedProb(prob=0.02), g_max=0.6,
                              tau=5., output=bp.synouts.COBA(E=0.),
                              method='exp_auto')
E2I = bp.synapses.Exponential(E, I, bp.conn.FixedProb(prob=0.02), g_max=0.6,
                              tau=5., output=bp.synouts.COBA(E=0.),
                              method='exp_auto')
I2E = bp.synapses.Exponential(I, E, bp.conn.FixedProb(prob=0.02), g_max=6.7,
                              tau=10., output=bp.synouts.COBA(E=-80.),
                              method='exp_auto')
I2I = bp.synapses.Exponential(I, I, bp.conn.FixedProb(prob=0.02), g_max=6.7,
                              tau=10., output=bp.synouts.COBA(E=-80.),
                              method='exp_auto')
# Want to try to change the weight to 0
E2E.init_weights(weight=0.0, comp_method='dense')
E2I.init_weights(weight=0.0, comp_method='dense')
I2E.init_weights(weight=0.0, comp_method='dense')
I2I.init_weights(weight=0.0, comp_method='dense')
net = bp.dyn.Network(E2E, E2I, I2E, I2I, E=E, I=I)
runner = bp.dyn.DSRunner(net,
                         monitors=['E.spike', 'I.spike'],
                         inputs=[('E.input', 20.), ('I.input', 20.)],
                         dt=0.1)
runner.run(100)

import matplotlib.pyplot as plt
plt.figure(figsize=(12, 4.5))
plt.subplot(121)
bp.visualize.raster_plot(runner.mon.ts, runner.mon['E.spike'], show=False)
plt.subplot(122)
bp.visualize.raster_plot(runner.mon.ts, runner.mon['I.spike'], show=True)
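A hedged suggestion (not verified against this BrainPy version): since bp.synapses.Exponential already accepts the weights through its g_max argument, passing g_max=0. at construction time should guarantee the zero weights are actually used, e.g.:

E2E = bp.synapses.Exponential(E, E, bp.conn.FixedProb(prob=0.02), g_max=0.,
                              tau=5., output=bp.synouts.COBA(E=0.),
                              method='exp_auto')
# ... and likewise for E2I, I2E, I2I. Note also that the constant 20. input
# alone can drive these LIF neurons above threshold, so an unchanged raster
# plot does not by itself prove the zero weights were ignored.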

Random sampling is still different from that in ``numpy.random``


Maybe we should provide more examples of how to perform numpy.random-style sampling. Currently, there are still significant differences between brainpy.math.random and numpy.random.
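For example, here is a pair of calls that users would expect to behave identically; the brainpy.math.random.normal signature is assumed to mirror numpy's:

import numpy as np
import brainpy.math as bm

x_np = np.random.normal(loc=0., scale=1., size=(3, 4))
x_bm = bm.random.normal(loc=0., scale=1., size=(3, 4))  # assumed counterpart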

More examples about rate-based whole brain modeling


Rate-based whole-brain modeling is attractive for brain modeling users. However, its support in BrainPy (including documentation and examples) is still limited. Maybe we should provide more support on this topic.

Runner for dynamical systems using adaptive integrators


Currently, brainpy.dyn.DSRunner and brainpy.integrators.IntegratorRunner support non-adaptive numerical integrators well. However, they fail when applied to models built with adaptive integration methods.
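A minimal sketch of the difference; the adaptive usage in the comment is my assumption of the brainpy.integrators interface, in which adaptive methods also return an updated step size that the current runners cannot consume:

import brainpy as bp

# works today: fixed-step integrator, directly usable in the runners
fixed = bp.odeint(lambda v, t: -v, method='rk4', dt=0.01)

# the request: adaptive-step methods return a new dt each call as well,
# e.g. bp.odeint(lambda v, t: -v, method='rkf45', adaptive=True),
# which DSRunner / IntegratorRunner do not yet handle.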

A bug when importing numba

In npbrain\tools\functions.py, numba.core.dispatcher is not available in my numba package, so I changed it to numba.dispatcher and then it worked.

High-dimensional analysis ``brainpy.analysis.FixedPointFinder`` should be generalized to multiple variables


Currently, brainpy.analysis.FixedPointFinder only accepts a function f_cell that takes a single argument. This restricts the method from being applied to more general cases, because multiple variables usually exist in a dynamical model. Therefore, brainpy.analysis.FixedPointFinder should be generalized to multiple variables.
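A common workaround sketch for now: pack all state variables into a single vector so the one-argument f_cell interface still applies. Here num, dV, and dw are hypothetical stand-ins for a model's state sizes and derivative functions:

import brainpy.math as bm

num = 2                           # hypothetical size of the first variable
dV = lambda V, w: -V + w          # hypothetical derivative of V
dw = lambda V, w: 0.1 * (V - w)   # hypothetical derivative of w

def f_cell(h):
    V, w = h[:num], h[num:]                      # unpack the packed state
    return bm.concatenate([dV(V, w), dw(V, w)])  # repack the derivatives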

Support vectorization and parallelization for class objects


If the brainpy.math.Variable instances in a class are already mapped (batched), there is no need to write a vmap rule for the class object, because jax.vmap can be applied directly.

However, suppose a user defines a class object whose state has no batch axis, for example:

import brainpy as bp
import brainpy.math as bm

class A(bp.Base):
  def __init__(self):
    super(A, self).__init__()
    self.v = bm.Variable(bm.zeros(1))

  def add(self, inp):
    self.v += inp

Later, if users feed in a batch of inputs, directly applying jax.vmap may cause errors.

For this kind of model, we need to process the vmapped self.v variable and make a summation over the mapped copies of v. However, if the operation were, say, a subtraction, the result of this summation would also be wrong. So what do we really need for vectorization and parallelization of class objects?
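A raw-JAX sketch of that point (names here are illustrative, not a BrainPy API): for an additive update, the vmapped per-input deltas can be folded back into the state by summation, but there is no such simple fold for non-additive updates, which is exactly the open design question:

import jax
import jax.numpy as jnp

v = jnp.zeros(1)
add = lambda v, inp: v + inp  # functional form of A.add above

batch = jnp.arange(4.)
deltas = jax.vmap(lambda inp: add(v, inp) - v)(batch)  # per-input changes, (4, 1)
v_new = v + deltas.sum(axis=0)  # valid only because the update is additive;
                                # a multiplicative update (v *= inp) breaks this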
