holoviz-topics / neuro

HoloViz+Bokeh for Neuroscience
License: BSD 3-Clause "New" or "Revised" License
Create a util to convert between binned, ragged, and tabular forms of spike times.
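A sketch of what such a utility could look like (function names are hypothetical; assumes spike times arrive as a per-unit dict of lists):

```python
import numpy as np

def ragged_to_tabular(spikes_per_unit):
    """Convert {unit_id: [spike times]} (ragged) to a flat, time-ordered (time, unit) table."""
    rows = [(t, unit) for unit, times in spikes_per_unit.items() for t in times]
    rows.sort()  # order by spike time
    return rows

def ragged_to_binned(spikes_per_unit, bin_size, t_stop):
    """Convert ragged spike times to a (units x bins) spike-count matrix."""
    edges = np.arange(0.0, t_stop + bin_size, bin_size)
    units = sorted(spikes_per_unit)
    counts = np.stack([np.histogram(spikes_per_unit[u], bins=edges)[0] for u in units])
    return units, counts

spikes = {"u0": [0.1, 0.4, 1.2], "u1": [0.3]}
table = ragged_to_tabular(spikes)
units, counts = ragged_to_binned(spikes, bin_size=0.5, t_stop=1.5)
```

The tabular form suits hv.Spikes-style plotting, while the binned matrix suits rasterized heatmap display.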
channel-type-grouping
Support channel-type grouping with different sampling and amplitude range (normalization).
grouped-y-scaling
: Manual scaling/zooming of Y axis per channel group
channel-group-yticks
Switch y-ticks to group values when zoomed out enough that channel-based y-ticks are cluttered. So a zoomed-out view might show something like the following y-ticks (instead of having a tick per row).
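One way to compute such group-level ticks (a hypothetical helper, not existing code; assumes one evenly spaced row per channel and contiguous groups):

```python
def group_yticks(channel_groups):
    """Given an ordered list of group labels (one per channel row),
    return [(y_position, label)] with one tick per contiguous group,
    centered on that group's rows."""
    ticks, start = [], 0
    for i in range(1, len(channel_groups) + 1):
        # close out a group when the label changes or the list ends
        if i == len(channel_groups) or channel_groups[i] != channel_groups[start]:
            ticks.append(((start + i - 1) / 2, channel_groups[start]))
            start = i
    return ticks

# e.g. 3 LFP rows then 2 EMG rows
ticks = group_yticks(["LFP", "LFP", "LFP", "EMG", "EMG"])
```

The resulting (position, label) pairs could be passed to `.opts(yticks=...)` once the zoom level crosses a clutter threshold.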
See #87 for general motivation. In contrast, the goal of this current issue is to focus on multi-scale large image volumes, rather than downscaling in the time dimension.
Also:
UPDATE: This initiative has been superseded by other, more targeted efforts
large-data-handling
(lead: ): Develop a first-pass solution for the various workflow types. Note: each domain section below starts with some important 'Context'.
While the continuous raw data is streamed and viewed during acquisition, it is not critical to look at the full-band 30 kHz version during processing/analysis. Instead, the raw-ish displays are of the low-pass filtered (<1000 Hz) continuous data (like a filtered version of the ephys viewer workflow), a stacked 'spike raster' of action potential events (see the spike raster workflow), and a view of the spike waveforms (see the waveform workflow). These three workflows present different challenges for large-data handling and may require specific approaches.
Additionally, although electrophysiology is heterogeneous in technique and equipment, below we focus on Allen Institute data. This is advantageous because they have a well-funded group maintaining their SDK, they use Neuropixels probes, which are relatively high channel-count (and therefore represent a more difficult use case), and their data are available in the NWB 2.0 file format (fancy HDF5), which is becoming increasingly common in neuroscience. Demetris has some contacts at the Allen Institute, but we haven't yet engaged them for feedback/collaboration; this will happen once we have something to show them that is demonstrably better than their current approach. We are also collaborating with one of Jim's former colleagues, who works primarily with relatively smaller spike-time datasets (some real, some synthetic) and is mainly interested in spike-raster-type workflows, so the work below will benefit his group as well even though we focus on Allen Institute data.
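As a rough sketch of the low-pass step described above (a generic zero-phase Butterworth filter, not the project's actual pipeline; the 30 kHz rate and 1000 Hz cutoff come from the text):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 30_000     # raw ephys sampling rate (Hz)
CUTOFF = 1_000  # low-pass cutoff (Hz) for the raw-ish display

def lowpass(raw, fs=FS, cutoff=CUTOFF, order=4):
    """Zero-phase low-pass filter a signal (channels along the last axis)."""
    sos = butter(order, cutoff, btype='low', fs=fs, output='sos')
    return sosfiltfilt(sos, raw, axis=-1)

# synthetic test signal: a 10 Hz component plus high-frequency content
t = np.arange(0, 1, 1 / FS)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 5_000 * t)
filtered = lowpass(raw)  # 10 Hz survives, 5 kHz is strongly attenuated
```

In a viewer this would typically be applied lazily per chunk rather than to the full recording at once.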
Primarily regarding the Miniscope device and associated Minian software
Primarily regarding the MNE software
annotation
(lead: @hoxbro)
# MNE-Annotations
# orig_time : 2009-08-12 16:15:00
# onset, duration, description
0.0,4.2,T0
4.2,4.1,T2
8.3,4.2,T0
12.5,4.1,T1
16.6,4.2,T0
20.8,4.1,T1
24.9,4.2,T0
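The annotation format above is plain CSV (onset, duration, description), so a minimal reader, sketched here without MNE itself, could turn it into (start, end, label) spans suitable for overlaying on a signal plot:

```python
import csv
import io

ANNOTATIONS = """\
# MNE-Annotations
# orig_time : 2009-08-12 16:15:00
# onset, duration, description
0.0,4.2,T0
4.2,4.1,T2
8.3,4.2,T0
12.5,4.1,T1
"""

def read_spans(text):
    """Parse MNE-style annotation CSV into (start, end, label) tuples,
    skipping the '#' header lines."""
    rows = csv.reader(line for line in io.StringIO(text) if not line.startswith('#'))
    return [(float(onset), float(onset) + float(dur), label)
            for onset, dur, label in rows]

spans = read_spans(ANNOTATIONS)
```

Each span could then be drawn as, e.g., an hv.VSpan colored by its label. (In practice mne.read_annotations handles this format directly.)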
subcoordinates
(lead: ): Stacked traces on Y-axis sub-coordinates
scale-bar
(lead: @mattpap for Bokeh, @hoxbro for HoloViews)
Resulting Issues and PRs:
benchmark
: Benchmark speed of initial display and interaction (zoom, pan)
hvneuro
module.
intensity-hist
: The gap (vertical distance) between the two components of the EEG viewer (signal browser and overview bar) is too large on my machine:
I started the viewer with panel serve workflow_eeg-viewer.ipynb --show. Here's the output of import holoviews as hv; hv.extension("bokeh"); hv.show_versions():
Python : 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:38:11)
[Clang 14.0.6 ]
Operating system : macOS-13.4.1-arm64-arm-64bit
holoviews : 1.16.2
IPython : 8.14.0
bokeh : 3.2.0
colorcet : 3.0.1
cudf : -
dask : 2023.7.0
datashader : 0.15.1
geoviews : -
hvplot : 0.8.4
ibis : -
jupyter_bokeh : -
jupyterlab : 4.0.3
matplotlib : 3.7.2
networkx : 3.1
notebook : 7.0.0
numba : 0.57.1
numpy : 1.24.4
pandas : 2.0.3
panel : 1.2.0
param : 1.13.0
pillow : -
plotly : -
pyarrow : -
scipy : 1.11.1
skimage : 0.21.0
spatialpandas : -
streamz : -
xarray : 2023.7.0
datagen
(lead: @droumis)
The current idea is to demonstrate inspecting a large electron microscopy image.
Features include utilizing max_interval + a datashaded minimap + viewport-specific rendering to limit the data being sent to the browser while inspecting a region at full resolution.
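The viewport-specific-rendering part can be sketched in plain NumPy (a hypothetical helper; real use would hook this to HoloViews range streams): slice the full-resolution array to the current viewport, then decimate to at most the canvas resolution so only screen-sized data reaches the browser.

```python
import numpy as np

def viewport_slice(image, x_range, y_range, screen_w, screen_h):
    """Return only the pixels needed to fill the screen for the current viewport.

    image: full-resolution 2D array; x_range/y_range: viewport bounds in
    pixel coordinates; screen_w/screen_h: canvas size in pixels."""
    x0, x1 = x_range
    y0, y1 = y_range
    view = image[y0:y1, x0:x1]
    # stride so the result has at most ~screen_h x screen_w samples
    step_y = max(1, view.shape[0] // screen_h)
    step_x = max(1, view.shape[1] // screen_w)
    return view[::step_y, ::step_x]

em = np.random.rand(1_000, 1_000)  # stand-in for a large EM image
overview = viewport_slice(em, (0, 1_000), (0, 1_000), 100, 100)   # decimated
detail = viewport_slice(em, (400, 480), (400, 480), 100, 100)     # full resolution
```

Zoomed out, the view is decimated; zoomed in past one sample per pixel, the data passes through at full resolution.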
Think about how to handle overlapping channels (e.g. for large deflections, a channel might invade the space of other channels). This is usually a good thing, but an option to clip signals would be helpful.
On their own, our current methods like Datashader and downsampling are insufficient for data that cannot be fully loaded into memory.
This project aims to enable effective processing and visualization of biological datasets that exceed available memory limits. The task is to develop a proof of concept for an xarray-datatree-based multi-resolution generator and dynamic accessor. This involves generating and storing incrementally downsampled versions of a large dataset, and then accessing the appropriate resolution copy based on viewport and screen parameters. We want to leverage existing work and standards as much as possible, aligning with the geo and bio communities.
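A minimal sketch of the generator/accessor pair in plain NumPy, standing in for the xarray-datatree version (1D for brevity; powers-of-two block-mean downsampling is an assumption, not a settled design):

```python
import numpy as np

def build_pyramid(data, n_levels):
    """Generate incrementally 2x-downsampled copies (block mean) of a 1D signal."""
    levels = [np.asarray(data, dtype=float)]
    for _ in range(n_levels - 1):
        d = levels[-1]
        d = d[: len(d) // 2 * 2].reshape(-1, 2).mean(axis=1)  # pairwise block mean
        levels.append(d)
    return levels

def pick_level(n_levels, viewport_samples, screen_pixels):
    """Choose the coarsest level that still supplies >= 1 sample per screen pixel."""
    lvl = 0
    while lvl + 1 < n_levels and viewport_samples / 2 ** (lvl + 1) >= screen_pixels:
        lvl += 1
    return lvl

levels = build_pyramid(np.arange(16), 3)  # lengths 16, 8, 4
```

In the real system the levels would live in a zarr-backed datatree and pick_level would be driven by the plot's range and size streams.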
from scipy.stats import zscore
import h5py
import holoviews as hv; hv.extension('bokeh')
from holoviews.plotting.links import RangeToolLink
from holoviews.operation.datashader import rasterize
from bokeh.models import HoverTool
filename = 'recording_neuropixels_10s_384ch.h5'
f = h5py.File(filename, "r")
n_sample_chans = 40
n_sample_times = 25000 # sampling frequency is 25 kHz
clim_mul = 2
# main plot
hover = HoverTool(tooltips=[
("Channel", "@channel"),
("Time", "$x s"),
("Amplitude", "$y µV")])
time = f['timestamps'][:n_sample_times]
data = f['recordings'][:n_sample_times,:n_sample_chans].T
f.close()
channels = [f'ch{i}' for i in range(n_sample_chans)]
channel_curves = []
for i, channel in enumerate(channels):
ds = hv.Dataset((time, data[i,:], channel), ["Time", "Amplitude", "channel"])
curve = hv.Curve(ds, "Time", ["Amplitude", "channel"], label=f'{channel}')
curve.opts(color="black", line_width=1, subcoordinate_y=True, subcoordinate_scale=3, tools=[hover])
channel_curves.append(curve)
curves = hv.Overlay(channel_curves, kdims="Channel")
curves = curves.opts(
xlabel="Time (s)", ylabel="Channel", show_legend=False,
padding=0, aspect=1.5, responsive=True, shared_axes=False, framewise=False)
# minimap
y_positions = range(len(channels))
yticks = [(i, ich) for i, ich in enumerate(channels)]
z_data = zscore(data, axis=1)
minimap = rasterize(hv.Image((time, y_positions, z_data), ["Time (s)", "Channel"], "Amplitude (uV)"))
minimap = minimap.opts(
cmap="RdBu_r", colorbar=False, xlabel='', yticks=[yticks[0], yticks[-1]], toolbar='disable',
height=120, responsive=True, clim=(-z_data.std()*clim_mul, z_data.std()*clim_mul))
RangeToolLink(minimap, curves, axes=["x", "y"],
boundsx=(.1, .3),
boundsy=(10, 30))
(curves + minimap).cols(1)
Note: I recommend working through the notebook Ian created on accessing ephys HDF5 datasets in xarray via Kerchunk and Zarr. I can imagine a situation in which the multi-resolution access just utilizes kerchunk references instead of downsampled data copies, although I'm not sure how that would work with xarray-datatree; maybe it would have to be either kerchunk or xarray-datatree, but not both. Maybe we could consult Martin.
import xarray as xr
import panel as pn; pn.extension()
import holoviews as hv; hv.extension('bokeh')
import hvplot.xarray
DATA_ARRAY = '1000frames'
DATA_PATH = f"<miniscope_sim_{DATA_ARRAY}.zarr>"  # placeholder: point at the actual zarr store
ldataset = xr.open_dataset(DATA_PATH, engine='zarr', chunks='auto')
data = ldataset[DATA_ARRAY]
# data.hvplot.image(groupby="frame", cmap="Viridis", height=400, width=400, colorbar=False)
FRAMES_PER_SECOND = 30
FRAMES = data.coords["frame"].values
# Create a video player widget
video_player = pn.widgets.Player(
length=len(data.coords["frame"]),
interval=1000 // FRAMES_PER_SECOND, # ms
value=int(FRAMES.min()),
max_width=400,
max_height=90,
loop_policy="loop",
sizing_mode="stretch_width",
)
# Create the main plot
main_plot = data.hvplot.image(
groupby="frame",
cmap="Viridis",
frame_height=400,
frame_width=400,
colorbar=False,
widgets={"frame": video_player},
)
# frame indicator lines on side plots
line_opts = dict(color='red', alpha=.6, line_width=3)
dmap_hline = hv.DynamicMap(pn.bind(lambda value: hv.HLine(value), video_player)).opts(**line_opts)
dmap_vline = hv.DynamicMap(pn.bind(lambda value: hv.VLine(value), video_player)).opts(**line_opts)
# height side view
right_plot = data.mean(['width']).hvplot.image(x='frame',
cmap="Viridis",
frame_height=400,
frame_width=200,
colorbar=False,
rasterize=True,
title='_', # TODO: Fix this. See https://github.com/bokeh/bokeh/issues/13225#issuecomment-1611172355
) * dmap_vline
# width side view
bottom_plot = data.mean(['height']).hvplot.image(y='frame',
cmap="Viridis",
frame_height=200,
frame_width=400,
colorbar=False,
rasterize=True,
) * dmap_hline
video_player.margin = (20, 20, 20, 70) # center widget over main
sim_app = pn.Column(
video_player,
pn.Row(main_plot[0], right_plot),
bottom_plot)
sim_app
2D-annotation
(lead: @hoxbro)
gpu-image
(lead: @ianthomas23)
channel-type-grouping
Support channel-type grouping with different sampling and amplitude range.
channel-group-yticks
Switch y-ticks to group values when zoomed out enough that channel-based y-ticks are cluttered. So a zoomed-out view might show something like the following y-ticks (instead of having a tick per row).
benchmark
: Benchmark speed of initial display and interaction
minimap
(lead: @droumis and @hoxbro)