API reference

spykeutils package

class SpykeException[source]

Exception thrown when a function in spykeutils encounters a problem that is not covered by standard exceptions.

When using Spyke Viewer, these exceptions will be caught and shown in the GUI, while general exceptions will not be caught (and therefore be visible in the console) for easier debugging.

conversions Module

analog_signal_array_to_analog_signals(signal_array)[source]

Return a list of analog signals for an analog signal array.

If signal_array is attached to a recording channel group with exactly as many channels as there are channels in signal_array, each created signal will be assigned the corresponding channel. If the attached recording channel group has only one recording channel, all created signals will be assigned to this channel. In all other cases, the created signals will not have a reference to a recording channel.

Note that while the created signals may have references to a segment and channels, the relationships in the other direction are not automatically created (the signals are not attached to the recording channel or segment). Other properties like annotations are not copied or referenced in the created analog signals.

Parameters:signal_array (neo.core.AnalogSignalArray) – An analog signal array from which the neo.core.AnalogSignal objects are constructed.
Returns:A list of analog signals, one for every channel in signal_array.
Return type:list
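
A minimal usage sketch, assuming a neo version that still provides AnalogSignalArray; the two-channel array below is built directly from a NumPy array for illustration:

    import numpy as np
    import quantities as pq
    import neo
    from spykeutils import conversions

    # Hypothetical two-channel signal array: 1000 samples at 1 kHz.
    signal_array = neo.AnalogSignalArray(
        np.random.randn(1000, 2) * pq.mV, sampling_rate=1 * pq.kHz)

    signals = conversions.analog_signal_array_to_analog_signals(signal_array)
    print(len(signals))               # 2, one AnalogSignal per channel
    print(signals[0].sampling_rate)   # inherited from the signal array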
epoch_array_to_epochs(epoch_array)[source]

Return a list of epochs for an epoch array.

Note that while the created epochs may have references to a segment, the relationships in the other direction are not automatically created (the epochs are not attached to the segment). Other properties like annotations are not copied or referenced in the created epochs.

Parameters:epoch_array (neo.core.EpochArray) – An epoch array from which the neo.core.Epoch objects are constructed.
Returns:A list of epochs, one for each epoch in epoch_array.
Return type:list
event_array_to_events(event_array)[source]

Return a list of events for an event array.

Note that while the created events may have references to a segment, the relationships in the other direction are not automatically created (the events are not attached to the segment). Other properties like annotations are not copied or referenced in the created events.

Parameters:event_array (neo.core.EventArray) – An event array from which the Event objects are constructed.
Returns:A list of events, one for each event in event_array.
Return type:list
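
A minimal sketch of this conversion, assuming a neo version whose EventArray accepts times and labels directly:

    import numpy as np
    import quantities as pq
    import neo
    from spykeutils import conversions

    # Hypothetical event array with three labelled events.
    event_array = neo.EventArray(
        times=np.array([0.5, 1.5, 2.5]) * pq.s,
        labels=np.array(['stim', 'stim', 'stim'], dtype='S'))

    events = conversions.event_array_to_events(event_array)
    for event in events:
        print(event.time, event.label)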
spike_train_to_spikes(spike_train, include_waveforms=True)[source]

Return a list of spikes for a spike train.

Note that while the created spikes have references to the same segment and unit as the spike train, the relationships in the other direction are not automatically created (the spikes are not attached to the unit or segment). Other properties like annotations are not copied or referenced in the created spikes.

Parameters:
  • spike_train (neo.core.SpikeTrain) – A spike train from which the neo.core.Spike objects are constructed.
  • include_waveforms (bool) – Determines if the waveforms property is converted to the spike waveforms. If waveforms is None, this parameter has no effect.
Returns:

A list of neo.core.Spike objects, one for every spike in spike_train.

Return type:

list

spikes_to_spike_train(spikes, include_waveforms=True)[source]

Return a spike train for a list of spikes.

All spikes must have an identical left sweep, the same unit and the same segment, otherwise a SpykeException is raised.

Note that while the created spike train has references to the same segment and unit as the spikes, the relationships in the other direction are not automatically created (the spike train is not attached to the unit or segment). Other properties like annotations are not copied or referenced in the created spike train.

Parameters:
  • spikes (sequence) – A sequence of neo.core.Spike objects from which the spike train is constructed.
  • include_waveforms (bool) – Determines if the waveforms from the spike objects are used to fill the waveforms property of the resulting spike train. If True, all spikes need a waveform property with the same shape or a SpykeException is raised (or the waveform property needs to be None for all spikes).
Returns:

All elements of spikes as spike train.

Return type:

neo.core.SpikeTrain
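
A sketch of a round trip between the two representations, using a toy spike train without waveforms:

    import quantities as pq
    import neo
    from spykeutils import conversions

    train = neo.SpikeTrain([0.1, 0.5, 1.2] * pq.s, t_stop=2.0 * pq.s)

    # One neo.core.Spike per spike time; no waveforms are attached here.
    spikes = conversions.spike_train_to_spikes(train, include_waveforms=False)

    # Rebuild a spike train from the individual spikes.
    rebuilt = conversions.spikes_to_spike_train(spikes, include_waveforms=False)
    print(rebuilt.times)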

correlations Module

correlogram(trains, bin_size, max_lag=500 ms, border_correction=True, unit=ms, progress=None)[source]

Return (cross-)correlograms from a dictionary of spike train lists for different units.

Parameters:
  • trains (dict) – Dictionary of neo.core.SpikeTrain lists.
  • bin_size (Quantity scalar) – Bin size (time).
  • max_lag (Quantity scalar) – Cut off (end time of calculated correlogram).
  • border_correction (bool) – Apply a correction for the smaller amount of data at higher time lags. Not exact for bin_size != 1 * unit, especially when max_lag is large compared to the length of the spike trains.
  • unit (Quantity) – Unit of X-Axis.
  • progress (spykeutils.progress_indicator.ProgressIndicator) – A ProgressIndicator object for the operation.
Returns:

Two values:

  • An ordered dictionary, indexed by the keys of trains, of ordered dictionaries indexed by the same keys. Entries of the inner dictionaries are the resulting (cross-)correlograms as numpy arrays. Each cross-correlogram can be indexed in two equivalent ways: c[index1][index2] and c[index2][index1].
  • The bins used for the correlogram calculation.

Return type:

dict, Quantity 1D
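
A sketch of computing cross-correlograms for two hypothetical units with one trial each; the unit keys 1 and 2 are arbitrary:

    import quantities as pq
    import neo
    from spykeutils import correlations

    trains = {
        1: [neo.SpikeTrain([0.10, 0.30, 0.50] * pq.s, t_stop=1 * pq.s)],
        2: [neo.SpikeTrain([0.12, 0.33, 0.70] * pq.s, t_stop=1 * pq.s)]}

    correlograms, bins = correlations.correlogram(
        trains, bin_size=10 * pq.ms, max_lag=200 * pq.ms, unit=pq.ms)

    # The cross-correlogram between the two units is the same array
    # regardless of the indexing order.
    print(correlograms[1][2].shape, bins.shape)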

progress_indicator Module

exception CancelException[source]

Bases: exceptions.Exception

Raised when a user cancels an operation that reports its progress. It is used by ProgressIndicator and its descendants.

class ProgressIndicator[source]

Bases: object

Base class for classes indicating progress of a long operation.

This class does not implement any of the methods and can be used as a dummy if no progress indication is needed.

begin(title='')[source]

Signal that the operation starts.

Parameters:title (string) – The name of the whole operation.
done()[source]

Signal that the operation is done.

set_status(new_status)[source]

Set status description.

Parameters:new_status (string) – A description of the current status.
set_ticks(ticks)[source]

Set the required number of ticks before the operation is done.

Parameters:ticks (int) – The number of steps that the operation will take.
step(num_steps=1)[source]

Signal that one or more steps of the operation were completed.

Parameters:num_steps (int) – The number of steps that have been completed.
ignores_cancel(function)[source]

Decorator for functions that should ignore a raised CancelException and simply return nothing in that case.
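
As an illustration, a minimal console-based indicator could subclass ProgressIndicator and override the documented methods; the class and function names below are hypothetical:

    from spykeutils.progress_indicator import ProgressIndicator, ignores_cancel

    class ConsoleProgress(ProgressIndicator):
        """ Hypothetical indicator that prints progress to stdout. """
        def begin(self, title=''):
            self._total = 0
            self._done = 0
            print('Starting: %s' % title)

        def set_ticks(self, ticks):
            self._total = ticks

        def set_status(self, new_status):
            print(new_status)

        def step(self, num_steps=1):
            self._done += num_steps
            print('%d / %d steps done' % (self._done, self._total))

        def done(self):
            print('Done.')

    @ignores_cancel
    def long_running_analysis(progress):
        # A CancelException raised inside this function would be swallowed
        # by the decorator and the function would simply return None.
        progress.begin('Analysis')
        progress.set_ticks(3)
        for _ in range(3):
            progress.step()
        progress.done()

    long_running_analysis(ConsoleProgress())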

rate_estimation Module

binned_spike_trains(trains, bin_size, start=0 ms, stop=None)[source]

Return a dictionary of binned spike counts for a dictionary of spike train lists.

Parameters:
  • trains (dict) – A dictionary of neo.core.SpikeTrain lists.
  • bin_size (Quantity scalar) – The desired bin size (as a time quantity).
  • start (Quantity scalar) – The desired time for the start of the first bin. It will be recalculated if there are spike trains which start later than this time.
  • stop (Quantity scalar) – The desired time for the end of the last bin. It will be recalculated if there are spike trains which end earlier than this time.
Returns:

A dictionary (with the same indices as trains) of lists of spike train counts and the bin borders.

Return type:

dict, Quantity 1D
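
A usage sketch with a single hypothetical unit and two trials; the key 'unit_a' is arbitrary:

    import quantities as pq
    import neo
    from spykeutils import rate_estimation

    trains = {
        'unit_a': [neo.SpikeTrain([0.1, 0.4, 0.9] * pq.s, t_stop=2 * pq.s),
                   neo.SpikeTrain([0.2, 1.1] * pq.s, t_stop=2 * pq.s)]}

    counts, bins = rate_estimation.binned_spike_trains(
        trains, bin_size=100 * pq.ms)
    print(len(counts['unit_a']))  # one count array per spike train
    print(bins)                   # shared bin borders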

psth(trains, bin_size, rate_correction=True, start=0 ms, stop=None)[source]

Return dictionary of peri stimulus time histograms for a dictionary of spike train lists.

Parameters:
  • trains (dict) – A dictionary of lists of neo.core.SpikeTrain objects.
  • bin_size (Quantity scalar) – The desired bin size (as a time quantity).
  • rate_correction (bool) – Determines if rates (True) or counts (False) are returned.
  • start (Quantity scalar) – The desired time for the start of the first bin. It will be recalculated if there are spike trains which start later than this time.
  • stop (Quantity scalar) – The desired time for the end of the last bin. It will be recalculated if there are spike trains which end earlier than this time.
Returns:

A dictionary (with the same indices as trains) of arrays containing counts (or rates if rate_correction is True) and the bin borders.

Return type:

dict, Quantity 1D
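
A sketch of computing a PSTH for the same kind of input; the key 'unit_a' is arbitrary:

    import quantities as pq
    import neo
    from spykeutils import rate_estimation

    trains = {
        'unit_a': [neo.SpikeTrain([0.1, 0.4, 0.9] * pq.s, t_stop=2 * pq.s),
                   neo.SpikeTrain([0.2, 1.1] * pq.s, t_stop=2 * pq.s)]}

    # With rate_correction=True (the default) the values are rates,
    # averaged over the two trials; with False they would be raw counts.
    rates, bins = rate_estimation.psth(trains, bin_size=200 * pq.ms)
    print(rates['unit_a'])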

spike_density_estimation(trains, start=0 ms, stop=None, evaluation_points=None, kernel=gauss_kernel, kernel_size=100 ms, optimize_steps=None, progress=None)[source]

Create a spike density estimation from a dictionary of lists of spike trains.

The spike density estimations give an estimate of the instantaneous rate. The density estimation is evaluated at 1024 equally spaced points covering the range of the input spike trains. Optionally finds optimal kernel size for given data using the algorithm from (Shimazaki, Shinomoto. Journal of Computational Neuroscience. 2010).

Parameters:
  • trains (dict) – A dictionary of neo.core.SpikeTrain lists.
  • start (Quantity scalar) – The desired time for the start of the estimation. It will be recalculated if there are spike trains which start later than this time. This parameter can be negative (which could be useful when aligning on events).
  • stop (Quantity scalar) – The desired time for the end of the estimation. It will be recalculated if there are spike trains which end earlier than this time.
  • kernel (func) – The kernel function to use; it should accept two parameters: an ndarray of distances and a kernel size. The total area under the kernel function should be 1. Default: Gaussian kernel
  • kernel_size (Quantity scalar) – A uniform kernel size for all spike trains. Only used if optimization of kernel sizes is not used.
  • optimize_steps (Quantity 1D) – An array of time lengths that will be considered in the kernel width optimization. Note that the optimization assumes a Gaussian kernel and will most likely not give the optimal kernel size if another kernel is used. If None, kernel_size will be used.
  • progress (spykeutils.progress_indicator.ProgressIndicator) – Set this parameter to report progress.
Returns:

Three values:

  • A dictionary of the spike density estimations (Quantity 1D in Hz). Indexed the same as trains.
  • A dictionary of kernel sizes (Quantity scalars). Indexed the same as trains.
  • The used evaluation points.

Return type:

dict, dict, Quantity 1D
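
A sketch that lets the kernel-width optimization choose among a few candidate widths; the candidate values and the key 'unit_a' are arbitrary:

    import quantities as pq
    import neo
    from spykeutils import rate_estimation

    trains = {
        'unit_a': [neo.SpikeTrain([0.1, 0.4, 0.9] * pq.s, t_stop=2 * pq.s)]}

    # Let the optimization choose among a few candidate kernel widths.
    steps = pq.Quantity([20, 50, 100, 200], 'ms')
    densities, kernel_sizes, eval_points = \
        rate_estimation.spike_density_estimation(trains, optimize_steps=steps)

    print(kernel_sizes['unit_a'])     # the chosen kernel width
    print(densities['unit_a'].units)  # Hz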

aligned_spike_trains(trains, events, copy=True)[source]

Return a list of spike trains aligned to an event (the event will be time 0 on the returned trains).

Parameters:
  • trains (dict) – A list of neo.core.SpikeTrain objects.
  • events (dict) – A dictionary of Event objects, indexed by segment. These events (in case of lists, always the first element in the list) will be used to align the spike trains and will be at time 0 for the aligned spike trains.
  • copy (bool) – Determines if aligned copies of the original spike trains are returned. If False, every spike train needs exactly one corresponding event, otherwise a ValueError is raised. If True, entries without an event are ignored.
collapsed_spike_trains(trains)[source]

Return a superposition of a list of spike trains.

Parameters:trains (iterable) – A list of neo.core.SpikeTrain objects
Returns:A spike train object containing all spikes of the given spike trains.
Return type:neo.core.SpikeTrain
minimum_spike_train_interval(trains)[source]

Computes the maximum starting time and minimum end time that all given spike trains share.

Parameters:trains (dict) – A dictionary of sequences of neo.core.SpikeTrain objects.
Returns:Maximum shared start time and minimum shared stop time.
Return type:Quantity scalar, Quantity scalar
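
A short sketch combining collapsed_spike_trains() and minimum_spike_train_interval() on two toy spike trains:

    import quantities as pq
    import neo
    from spykeutils import rate_estimation

    trains = [neo.SpikeTrain([0.2, 0.8] * pq.s, t_stop=1 * pq.s),
              neo.SpikeTrain([0.5, 1.5] * pq.s, t_stop=2 * pq.s)]

    # Superposition of both trains in a single SpikeTrain.
    merged = rate_estimation.collapsed_spike_trains(trains)
    print(merged.times)

    # Shared interval: latest common start and earliest common stop.
    start, stop = rate_estimation.minimum_spike_train_interval({0: trains})
    print(start, stop)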
optimal_gauss_kernel_size(train, optimize_steps, progress=None)[source]

Return the optimal kernel size for a spike density estimation of a spike train using a Gaussian kernel. This function takes a single spike train, which can be a superposition of multiple spike trains (created with collapsed_spike_trains()) that should be included in a spike density estimation.

Implements the algorithm from (Shimazaki, Shinomoto. Journal of Computational Neuroscience. 2010).

Parameters:
  • train (neo.core.SpikeTrain) – The spike train for which the kernel size should be optimized.
  • optimize_steps (Quantity 1D) – Array of kernel sizes to try (the best of these sizes will be returned).
  • progress (spykeutils.progress_indicator.ProgressIndicator) – Set this parameter to report progress. Will be advanced by len(optimize_steps) steps.
Returns:

Best of the given kernel sizes

Return type:

Quantity scalar
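
A usage sketch with arbitrary candidate kernel widths:

    import quantities as pq
    import neo
    from spykeutils import rate_estimation

    train = neo.SpikeTrain([0.10, 0.15, 0.40, 0.42, 0.90] * pq.s,
                           t_stop=1 * pq.s)
    steps = pq.Quantity([10, 30, 100, 300], 'ms')

    best = rate_estimation.optimal_gauss_kernel_size(train, steps)
    print(best)  # one of the four candidate widths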

sorting_quality_assesment Module

Functions for estimating the quality of spike sorting results. These functions estimate false positive and false negative fractions.

calculate_overlap_fp_fn(means, spikes)[source]

Return a dict of tuples (false positive rate, false negative rate) indexed by unit.

Deprecated since version 0.2.1.

Use overlap_fp_fn() instead.

Details for the calculation can be found in (Hill et al. The Journal of Neuroscience. 2011). This function works on prewhitened data, which means it assumes that all clusters have a uniform normal distribution. Data can be prewhitened using the noise covariance matrix.

The calculation for total false positive and false negative rates does not follow (Hill et al. The Journal of Neuroscience. 2011), where a simple addition of pairwise probabilities is proposed. Instead, the total error probabilities are estimated using all clusters at once.

Parameters:
  • means (dict) – Dictionary of prewhitened cluster means (e.g. unit templates) indexed by unit as neo.core.Spike objects or numpy arrays for all units.
  • spikes (dict) – Dictionary, indexed by unit, of lists of prewhitened spike waveforms as neo.core.Spike objects or numpy arrays for all units.
Returns:

Two values:

  • A dictionary (indexed by unit) of total (false positives, false negatives) tuples.
  • A dictionary of dictionaries, both indexed by units, of pairwise (false positives, false negatives) tuples.

Return type:

dict, dict

calculate_refperiod_fp(num_spikes, refperiod, violations, total_time)[source]

Return the rate of false positives calculated from refractory period calculations for each unit. The equation used is described in (Hill et al. The Journal of Neuroscience. 2011).

Parameters:
  • num_spikes (dict) – Dictionary of total number of spikes, indexed by unit.
  • refperiod (Quantity scalar) – The refractory period (time). If the spike sorting algorithm includes a censored period (a time after a spike during which no new spikes can be found), subtract it from the refractory period before passing it to this function.
  • violations (dict) – Dictionary of total number of violations, indexed the same as num_spikes.
  • total_time (Quantity scalar) – The total time in which violations could have occurred.
Returns:

A dictionary of false positive rates indexed by unit. Note that values above 0.5 can not be directly interpreted as a false positive rate! These very high values can e.g. indicate that the generating processes are not independent.

get_refperiod_violations(spike_trains, refperiod, progress=None)[source]

Return the refractory period violations in the given spike trains for the specified refractory period.

Parameters:
  • spike_trains (dict) – Dictionary of lists of neo.core.SpikeTrain objects.
  • refperiod (Quantity scalar) – The refractory period (time).
  • progress (spykeutils.progress_indicator.ProgressIndicator) – Set this parameter to report progress.
Returns:

Two values:

  • The total number of violations.
  • A dictionary (with the same indices as spike_trains) of arrays with violation times (Quantity 1D with the same unit as refperiod) for each spike train.

Return type:

int, dict
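
A sketch combining get_refperiod_violations() and calculate_refperiod_fp() for a single hypothetical unit; the unit key, spike times and refractory period are arbitrary:

    import quantities as pq
    import neo
    from spykeutils import sorting_quality_assesment as sqa

    spike_trains = {
        1: [neo.SpikeTrain([0.010, 0.011, 0.250, 0.600] * pq.s,
                           t_stop=1 * pq.s)]}

    # Count inter-spike intervals shorter than the refractory period.
    total, violation_times = sqa.get_refperiod_violations(
        spike_trains, refperiod=2 * pq.ms)

    # Estimate the false positive rate from those violations.
    fp = sqa.calculate_refperiod_fp(
        num_spikes={1: 4}, refperiod=2 * pq.ms,
        violations={1: total}, total_time=1 * pq.s)
    print(fp[1])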

overlap_fp_fn(spikes, means=None, covariances=None)[source]

Return dicts of tuples (false positive rate, false negative rate) indexed by unit. This function needs sklearn if covariances is not set to 'white'.

This function estimates the pairwise and total false positive and false negative rates for a number of waveform clusters. The results can be interpreted as follows: false positives are the fraction of spikes in a cluster that are estimated to belong to a different cluster (a specific cluster for pairwise results or any other cluster for total results). False negatives are the number of spikes from other clusters that are estimated to belong to a given cluster (also expressed as a fraction; this number can be larger than 1 in extreme cases).

Details for the calculation can be found in (Hill et al. The Journal of Neuroscience. 2011). The calculation for total false positive and false negative rates does not follow Hill et al., who propose a simple addition of pairwise probabilities. Instead, the total error probabilities are estimated using all clusters at once.

Parameters:
  • spikes (dict) – Dictionary, indexed by unit, of lists of spike waveforms as neo.core.Spike objects or numpy arrays. If the waveforms have multiple channels, they will be reshaped automatically. All waveforms need to have the same number of samples.
  • means (dict) – Dictionary, indexed by unit, of cluster means (e.g. unit templates) as neo.core.Spike objects or numpy arrays. Means for units that are not in this dictionary will be estimated using the spikes. Note that if you pass 'white' for covariances and you want to provide means, they have to be whitened in the same way as the spikes. Default: None, means will be estimated from data.
  • covariances (dict or str) – Dictionary, indexed by unit, of lists of covariance matrices. Covariances for units that are not in this dictionary will be estimated using the spikes. It is useful to give a covariance matrix if few spikes are present - consider using the noise covariance. If you use prewhitened spikes (i.e. all clusters are normally distributed, so their covariance matrix is the identity), you can pass 'white' here. The calculation will be much faster in this case and the sklearn package is not required. Default: None, covariances will be estimated from data.
Returns:

Two values:

  • A dictionary (indexed by unit) of total (false positive rate, false negative rate) tuples.
  • A dictionary of dictionaries, both indexed by units, of pairwise (false positive rate, false negative rate) tuples.

Return type:

dict, dict
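
A sketch using randomly generated waveforms that are assumed to be prewhitened, so covariances='white' applies; the unit keys and waveform length are arbitrary:

    import numpy as np
    from spykeutils import sorting_quality_assesment as sqa

    # Two toy clusters of 50 prewhitened single-channel waveforms each,
    # 20 samples per waveform.
    spikes = {
        1: [np.random.randn(20) for _ in range(50)],
        2: [np.random.randn(20) + 2.0 for _ in range(50)]}

    # 'white' skips covariance estimation and does not require sklearn.
    totals, pairwise = sqa.overlap_fp_fn(spikes, covariances='white')
    print(totals[1])       # (false positive rate, false negative rate), unit 1
    print(pairwise[1][2])  # pairwise rates between unit 1 and unit 2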

stationarity Module

spike_amplitude_histogram(trains, num_bins, uniform_y_scale=True, unit=uV, progress=None)[source]

Return a spike amplitude histogram.

The resulting histogram is useful to assess the drift in spike amplitude over a longer recording. It shows histograms (one for each trains entry, e.g. segment) of maximum and minimum spike amplitudes.

Parameters:
  • trains (list) – A list of lists of neo.core.SpikeTrain objects. Each entry of the outer list will be one point on the x-axis (they could correspond to segments); all amplitude occurrences of spikes contained in the inner list will be added up.
  • num_bins (int) – Number of bins for the histograms.
  • uniform_y_scale (bool) – If True, the histogram for each channel will use the same bins. Otherwise, the minimum bin range is computed separately for each channel.
  • unit (Quantity) – Unit of Y-Axis.
  • progress (spykeutils.progress_indicator.ProgressIndicator) – Set this parameter to report progress.
Returns:

A tuple with three values:

  • A three-dimensional histogram matrix, where the first dimension corresponds to bins, the second dimension to the entries of trains (e.g. segments) and the third dimension to channels.
  • A list of the minimum amplitude value for each channel (all values will be equal if uniform_y_scale is true).
  • A list of the maximum amplitude value for each channel (all values will be equal if uniform_y_scale is true).

Return type:

(ndarray, list, list)
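
A sketch with two toy "segments"; it assumes the spike trains carry waveforms in neo's (spike, channel, sample) layout, since amplitudes are taken from the waveforms, and the helper toy_train is hypothetical:

    import numpy as np
    import quantities as pq
    import neo
    from spykeutils import stationarity

    def toy_train(offset):
        train = neo.SpikeTrain([0.1, 0.5] * pq.s, t_stop=1 * pq.s)
        # Waveforms in neo's (spike, channel, sample) layout.
        train.waveforms = (np.random.randn(2, 1, 20) + offset) * pq.uV
        return train

    # Two "segments", each represented by one spike train with waveforms.
    trains = [[toy_train(0.0)], [toy_train(1.0)]]

    histogram, lower, upper = stationarity.spike_amplitude_histogram(
        trains, num_bins=30)
    print(histogram.shape)  # (bins, len(trains), channels)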
