API reference

spykeutils package

class SpykeException[source]

Exception thrown when a function in spykeutils encounters a problem that is not covered by standard exceptions.

When using Spyke Viewer, these exceptions are caught and shown in the GUI, while general exceptions are not caught (and therefore remain visible in the console) for easier debugging.

conversions Module

analog_signal_array_to_analog_signals(signal_array)[source]

Return a list of analog signals for an analog signal array.

If signal_array is attached to a recording channel group with exactly as many channels as there are channels in signal_array, each created signal will be assigned the corresponding channel. If the attached recording channel group has only one recording channel, all created signals will be assigned to this channel. In all other cases, the created signals will not have a reference to a recording channel.

Note that while the created signals may have references to a segment and channels, the relationships in the other direction are not automatically created (the signals are not attached to the recording channel or segment). Other properties like annotations are not copied or referenced in the created analog signals.

Parameters:signal_array (neo.core.AnalogSignalArray) – An analog signal array from which the AnalogSignal objects are constructed.
Returns:A list of analog signals, one for every channel in signal_array.
Return type:list
epoch_array_to_epochs(epoch_array)[source]

Return a list of epochs for an epoch array.

Note that while the created epochs may have references to a segment, the relationships in the other direction are not automatically created (the events are not attached to the segment). Other properties like annotations are not copied or referenced in the created epochs.

Parameters:epoch_array (neo.core.EpochArray) – An epoch array from which the Epoch objects are constructed.
Returns:A list of epochs, one for each epoch in epoch_array.
Return type:list
event_array_to_events(event_array)[source]

Return a list of events for an event array.

Note that while the created events may have references to a segment, the relationships in the other direction are not automatically created (the events are not attached to the segment). Other properties like annotations are not copied or referenced in the created events.

Parameters:event_array (neo.core.EventArray) – An event array from which the Event objects are constructed.
Returns:A list of events, one for each event in event_array.
Return type:list
spike_train_to_spikes(spike_train, include_waveforms=True)[source]

Return a list of spikes for a spike train.

Note that while the created spikes have references to the same segment and unit as the spike train, the relationships in the other direction are not automatically created (the spikes are not attached to the unit or segment). Other properties like annotations are not copied or referenced in the created spikes.

Parameters:
  • spike_train (SpikeTrain) – A spike train from which the Spike objects are constructed.
  • include_waveforms (bool) – Determines if the waveforms property is converted to the spike waveforms. If waveforms is None, this parameter has no effect.
Returns:

A list of Spike objects, one for every spike in spike_train.

Return type:

list

spikes_to_spike_train(spikes, include_waveforms=True)[source]

Return a spike train for a list of spikes.

All spikes must have an identical left sweep, the same unit and the same segment, otherwise a SpykeException is raised.

Note that while the created spike train has references to the same segment and unit as the spikes, the relationships in the other direction are not automatically created (the spike train is not attached to the unit or segment). Other properties like annotations are not copied or referenced in the created spike train.

Parameters:
  • spikes – A list of spike objects from which the SpikeTrain object is constructed.
  • include_waveforms (bool) – Determines if the waveforms from the Spike objects are used to fill the waveforms property of the resulting spike train. If True, all spikes need a waveform property with the same shape (or the property needs to be None for all spikes), otherwise a SpykeException is raised.
Returns:

A SpikeTrain object including all elements of spikes.

Return type:

neo.core.SpikeTrain

correlations Module

correlogram(trains, bin_size, max_lag=500 ms, border_correction=True, unit=ms, progress=None)[source]

Return (cross-)correlograms from a dictionary of SpikeTrain lists for different units.

Parameters:
  • trains (dict) – Dictionary of SpikeTrain lists.
  • bin_size (Quantity scalar) – Bin size (time).
  • max_lag (Quantity scalar) – Cut off (end time of calculated correlogram).
  • border_correction (bool) – Apply a correction for the smaller amount of data at higher time lags. Not exact for bin_size != 1 * unit, especially when max_lag is large compared to the length of the spike trains.
  • unit (Quantity) – Unit of X-Axis.
  • progress (spykeutils.progress_indicator.ProgressIndicator) – A ProgressIndicator object for the operation.
Returns:

Two values:

  • An ordered dictionary, indexed with the indices of trains, of ordered dictionaries indexed with the same indices. Entries of the inner dictionaries are the resulting (cross-)correlograms as numpy arrays. All cross-correlograms can be indexed in two different ways: c[index1][index2] and c[index2][index1].
  • The bins used for the correlogram calculation.

Return type:

dict, Quantity 1D
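As a rough illustration of what a cross-correlogram contains, the following pure-NumPy sketch histograms all pairwise spike time differences between two trains. The function simple_correlogram is hypothetical and omits the border correction, unit handling, and dictionary bookkeeping of correlogram():

```python
import numpy as np

def simple_correlogram(train_a, train_b, bin_size, max_lag):
    """Illustrative cross-correlogram: histogram of all pairwise
    time differences t_b - t_a within +/- max_lag (times in ms)."""
    diffs = np.subtract.outer(train_b, train_a).ravel()
    diffs = diffs[np.abs(diffs) <= max_lag]
    bins = np.arange(-max_lag, max_lag + bin_size, bin_size)
    counts, edges = np.histogram(diffs, bins=bins)
    return counts, edges

# Two toy spike trains (ms); train_b fires 2 ms after train_a.
a = np.array([10.0, 50.0, 90.0])
b = a + 2.0
counts, edges = simple_correlogram(a, b, bin_size=1.0, max_lag=5.0)
# The peak sits in the bin covering a +2 ms lag.
```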

progress_indicator Module

exception CancelException[source]

Bases: exceptions.Exception

This is raised when a user cancels a progress process. It is used by ProgressIndicator and its descendants.

class ProgressIndicator[source]

Bases: object

Base class for classes indicating progress of a long operation.

This class does not implement any of the methods and can be used as a dummy if no progress indication is needed.

begin(title='')[source]

Signal that the operation starts.

Parameters:title (string) – The name of the whole operation.
done()[source]

Signal that the operation is done.

set_status(new_status)[source]

Set status description.

Parameters:new_status (string) – A description of the current status.
set_ticks(ticks)[source]

Set the required number of ticks before the operation is done.

Parameters:ticks (int) – The number of steps that the operation will take.
step(num_steps=1)[source]

Signal that one or more steps of the operation were completed.

Parameters:num_steps (int) – The number of steps that have been completed.
ignores_cancel(function)[source]

Decorator for functions that should ignore a raised CancelException and simply return nothing in that case.
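A minimal sketch of how such a decorator can work. The local CancelException and ignores_cancel below are illustrative stand-ins, not the spykeutils implementations:

```python
import functools

class CancelException(Exception):
    """Stand-in for spykeutils.progress_indicator.CancelException."""

def ignores_cancel(function):
    """Sketch of the decorator: swallow CancelException, return None."""
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        try:
            return function(*args, **kwargs)
        except CancelException:
            return None
    return wrapper

@ignores_cancel
def long_operation(cancel_at, ticks=10):
    done = 0
    for i in range(ticks):
        if i == cancel_at:
            raise CancelException()  # e.g. the user pressed "Cancel"
        done += 1
    return done

# A completed run returns the tick count; a cancelled run returns None.
```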

rate_estimation Module

binned_spike_trains(trains, bin_size, start=0 ms, stop=None)[source]

Return dictionary of binned rates for a dictionary of SpikeTrain lists.

Parameters:
  • trains (dict) – A dictionary of SpikeTrain lists.
  • bin_size (Quantity scalar) – The desired bin size (as a time quantity).
  • start (Quantity scalar) – The desired time for the start of the first bin. It will be recalculated if there are spike trains which start later than this time.
  • stop (Quantity scalar) – The desired time for the end of the last bin. It will be recalculated if there are spike trains which end earlier than this time.
Returns:

A dictionary (with the same indices as trains) of lists of spike train counts and the bin borders.

Return type:

dict, Quantity 1D
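The binning step can be sketched in pure NumPy. The function bin_trains below is hypothetical and works on plain arrays of spike times in ms rather than SpikeTrain objects with quantities:

```python
import numpy as np

def bin_trains(trains, bin_size, start=0.0, stop=100.0):
    """Illustrative binning (times in ms): for each key, one count
    array per spike train, using shared bin borders."""
    bins = np.arange(start, stop + bin_size, bin_size)
    counts = {key: [np.histogram(np.asarray(t), bins=bins)[0]
                    for t in train_list]
              for key, train_list in trains.items()}
    return counts, bins

trains = {'unit1': [[5.0, 15.0, 15.5], [40.0]]}
counts, bins = bin_trains(trains, bin_size=10.0, start=0.0, stop=50.0)
# counts['unit1'][0] holds the per-bin spike counts of the first train.
```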

psth(trains, bin_size, rate_correction=True, start=0 ms, stop=None)[source]

Return dictionary of peri stimulus time histograms for a dictionary of SpikeTrain lists.

Parameters:
  • trains (dict) – A dictionary of lists of SpikeTrain objects.
  • bin_size (Quantity scalar) – The desired bin size (as a time quantity).
  • rate_correction (bool) – Determines if rates (True) or counts (False) are returned.
  • start (Quantity scalar) – The desired time for the start of the first bin. It will be recalculated if there are spike trains which start later than this time.
  • stop (Quantity scalar) – The desired time for the end of the last bin. It will be recalculated if there are spike trains which end earlier than this time.
Returns:

A dictionary (with the same indices as trains) of arrays containing counts (or rates if rate_correction is True) and the bin borders.

Return type:

dict, Quantity 1D
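The effect of rate_correction can be sketched as follows. The function simple_psth is hypothetical, works on a single list of trials with spike times in ms, and converts counts to a rate by dividing by the number of trials and the bin width:

```python
import numpy as np

def simple_psth(train_list, bin_size, start=0.0, stop=100.0,
                rate_correction=True):
    """Illustrative PSTH (times in ms): summed spike counts over
    trials; with rate_correction, converted to a rate in spikes/s."""
    bins = np.arange(start, stop + bin_size, bin_size)
    counts = sum(np.histogram(np.asarray(t), bins=bins)[0]
                 for t in train_list)
    if rate_correction:
        # counts per trial per bin, divided by the bin width in seconds
        return counts / (len(train_list) * bin_size / 1000.0), bins
    return counts, bins

# Three trials, each with one spike in the first 10 ms bin:
# 3 spikes / (3 trials * 0.01 s) = 100 spikes/s in that bin.
trials = [[2.0], [5.0], [7.0]]
rates, bins = simple_psth(trials, bin_size=10.0, stop=30.0)
```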

spike_density_estimation(trains, start=0 ms, stop=None, evaluation_points=None, kernel=gauss_kernel, kernel_size=100 ms, optimize_steps=None, progress=None)[source]

Create a spike density estimation from a dictionary of lists of SpikeTrain objects. The spike density estimation gives an estimate of the instantaneous firing rate. Optionally finds the optimal kernel size for the given data.

Parameters:
  • trains (dict) – A dictionary of SpikeTrain lists.
  • start (Quantity scalar) – The desired time for the start of the first bin. It will be recalculated if there are spike trains which start later than this time. This parameter can be negative (which could be useful when aligning on events).
  • stop (Quantity scalar) – The desired time for the end of the last bin. It will be recalculated if there are spike trains which end earlier than this time.
  • evaluation_points (Quantity 1D) – An array of time points at which the density estimation is evaluated to produce the data. If this is None, 1000 equally spaced points covering the range of the input spike trains will be used.
  • kernel (func) – The kernel function to use. It should accept two parameters: an ndarray of distances and a kernel size. The total area under the kernel function should be 1. Default: Gaussian kernel.
  • kernel_size (Quantity scalar) – A uniform kernel size for all spike trains. Only used if optimization of kernel sizes is not used.
  • optimize_steps (Quantity 1D) – An array of time lengths that will be considered in the kernel width optimization. Note that the optimization assumes a Gaussian kernel and will most likely not give the optimal kernel size if another kernel is used. If None, kernel_size will be used.
  • progress (spykeutils.progress_indicator.ProgressIndicator) – Set this parameter to report progress.
Returns:

Three values:

  • A dictionary of the spike density estimations (Quantity 1D in Hz). Indexed the same as trains.
  • A dictionary of kernel sizes (Quantity scalars). Indexed the same as trains.
  • The used evaluation points.

Return type:

dict, dict, Quantity 1D
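The core of a kernel density estimate can be sketched in a few lines. The functions gauss_kernel and simple_sde below are hypothetical simplifications working on plain arrays of spike times in seconds, without the dictionary handling or kernel size optimization of spike_density_estimation():

```python
import numpy as np

def gauss_kernel(x, size):
    """Gaussian kernel with standard deviation `size`; total area 1."""
    return np.exp(-0.5 * (x / size) ** 2) / (size * np.sqrt(2 * np.pi))

def simple_sde(spike_times, evaluation_points, kernel_size):
    """Illustrative spike density estimation (times in s): sum a
    kernel centered on each spike, evaluated at the given points."""
    dists = np.subtract.outer(evaluation_points, np.asarray(spike_times))
    return gauss_kernel(dists, kernel_size).sum(axis=1)

# One spike at 0.5 s: the estimate peaks at the spike time, and its
# integral over time is approximately the number of spikes.
pts = np.linspace(0.0, 1.0, 101)
rate = simple_sde([0.5], pts, kernel_size=0.1)
```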

aligned_spike_trains(trains, events, copy=True)[source]

Return a list of spike trains aligned to an event (the event will be time 0 on the returned trains).

Parameters:
  • trains (sequence) – A list of SpikeTrain objects.
  • events (dict) – A dictionary of Event objects, indexed by segment. These events (in case of lists, always the first element in the list) will be used to align the spike trains and will be at time 0 for the aligned spike trains.
  • copy (bool) – Determines if aligned copies of the original spike trains are returned. If False, every spike train needs exactly one corresponding event, otherwise a ValueError is raised. If True, spike trains without a corresponding event are ignored.
collapsed_spike_trains(trains)[source]

Return a superposition of a list of spike trains.

Parameters:trains (iterable) – A list of SpikeTrain objects
Returns:A SpikeTrain object containing all spikes of the given SpikeTrain objects.
minimum_spike_train_interval(trains)[source]

Computes the minimum starting time and maximum end time that all given spike trains share.

Parameters:trains (dict) – A dictionary of sequences of SpikeTrain objects.
Returns:Maximum shared start time and minimum shared stop time.
Return type:Quantity scalar, Quantity scalar
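The shared interval is simply the latest start and earliest stop over all trains. A minimal sketch, using plain (t_start, t_stop) tuples instead of SpikeTrain objects (the function shared_interval is hypothetical):

```python
def shared_interval(trains):
    """Illustrative version of minimum_spike_train_interval():
    latest start and earliest stop shared by all spike trains,
    given as (t_start, t_stop) pairs per unit."""
    starts = [t0 for seq in trains.values() for t0, _ in seq]
    stops = [t1 for seq in trains.values() for _, t1 in seq]
    return max(starts), min(stops)

trains = {'u1': [(0.0, 10.0), (1.0, 12.0)], 'u2': [(0.5, 9.0)]}
# All three trains overlap only between 1.0 and 9.0.
```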
optimal_gauss_kernel_size(train, optimize_steps, progress=None)[source]

Return the optimal kernel size for a spike density estimation of a SpikeTrain for a Gaussian kernel. This function takes a single spike train, which can be a superposition of multiple spike trains (created with collapsed_spike_trains()) that should be included in a spike density estimation. See (Shimazaki, Shinomoto. Journal of Computational Neuroscience. 2010).

Parameters:
  • train (SpikeTrain) – The spike train for which the kernel size should be optimized.
  • optimize_steps (Quantity 1D) – Array of kernel sizes to try (the best of these sizes will be returned).
  • progress (spykeutils.progress_indicator.ProgressIndicator) – Set this parameter to report progress. Will be advanced by len(optimize_steps) steps.
Returns:

Best of the given kernel sizes

Return type:

Quantity scalar

sorting_quality_assesment Module

Functions for estimating the quality of spike sorting results. These functions estimate false positive and false negative fractions.

calculate_overlap_fp_fn(means, spikes)[source]

Return a dict of tuples (false positive rate, false negative rate) indexed by unit.

Details for the calculation can be found in (Hill et al. The Journal of Neuroscience. 2011). This function works on prewhitened data, which means it assumes that all clusters have a uniform normal distribution. Data can be prewhitened using the noise covariance matrix.

The calculation for total false positive and false negative rates does not follow (Hill et al. The Journal of Neuroscience. 2011), where a simple addition of pairwise probabilities is proposed. Instead, the total error probabilities are estimated using all clusters at once.

Parameters:
  • means (dict) – Dictionary of prewhitened cluster means (e.g. unit templates) indexed by unit as Spike objects or numpy arrays for all units.
  • spikes (dict) – Dictionary, indexed by unit, of lists of prewhitened spike waveforms as Spike objects or numpy arrays for all units.
Returns:

Two values:

  • A dictionary (indexed by unit) of total (false positives, false negatives) tuples.
  • A dictionary of dictionaries, both indexed by units, of pairwise (false positives, false negatives) tuples.

Return type:

dict, dict

calculate_refperiod_fp(num_spikes, refperiod, violations, total_time)[source]

Return the rate of false positives calculated from refractory period calculations for each unit. The equation used is described in (Hill et al. The Journal of Neuroscience. 2011).

Parameters:
  • num_spikes (dict) – Dictionary of total number of spikes, indexed by unit.
  • refperiod (Quantity scalar) – The refractory period (time). If the spike sorting algorithm includes a censored period (a time after a spike during which no new spikes can be found), subtract it from the refractory period before passing it to this function.
  • violations (dict) – Dictionary of total number of violations, indexed the same as num_spikes.
  • total_time (Quantity scalar) – The total time in which violations could have occurred.
Returns:

A dictionary of false positive rates indexed by unit. Note that values above 0.5 can not be directly interpreted as a false positive rate! These very high values can e.g. indicate that the chosen refractory period was too large.
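The Hill et al. estimate relates the violation count to the false positive fraction through a quadratic equation. The sketch below is an assumption about the calculation, not the library implementation: it works with plain floats (times in seconds) for a single unit, and returns 0.5 when no real solution exists, whereas the library flags such cases with values above 0.5:

```python
import math

def refperiod_fp(num_spikes, refperiod, violations, total_time):
    """Illustrative Hill et al. (2011) estimate: the false positive
    fraction f solves f * (1 - f) = V * T / (2 * tau * N**2), where
    V is the violation count, T the total time, tau the refractory
    period and N the number of spikes (times in seconds)."""
    k = violations * total_time / (2.0 * refperiod * num_spikes ** 2)
    disc = 1.0 - 4.0 * k
    if disc < 0.0:
        return 0.5  # no real solution; not interpretable as a rate
    return 0.5 * (1.0 - math.sqrt(disc))

# 10000 spikes in an hour with 10 violations and a 2 ms refractory
# period gives an estimated false positive fraction of 0.1.
fp = refperiod_fp(10000, 0.002, 10, 3600.0)
```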

get_refperiod_violations(spike_trains, refperiod, progress=None)[source]

Return the refractory period violations in the given spike trains for the specified refractory period.

Parameters:
  • spike_trains (dict) – A dictionary of lists of SpikeTrain objects.
  • refperiod (Quantity scalar) – The refractory period (time).
  • progress (spykeutils.progress_indicator.ProgressIndicator) – Set this parameter to report progress.
Returns:

Two values:

  • The total number of violations.
  • A dictionary (with the same indices as spike_trains) of arrays with violation times (Quantity 1D with the same unit as refperiod) for each spike train.

Return type:

int, dict
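Counting violations amounts to finding inter-spike intervals shorter than the refractory period. A pure-NumPy sketch working on plain arrays of spike times in ms (the function refperiod_violations is hypothetical):

```python
import numpy as np

def refperiod_violations(spike_trains, refperiod):
    """Illustrative violation count (times in ms): for each train,
    collect the spike times that follow the previous spike by less
    than the refractory period."""
    total = 0
    times = {}
    for key, train in spike_trains.items():
        t = np.sort(np.asarray(train))
        isi = np.diff(t)                  # inter-spike intervals
        viol = t[1:][isi < refperiod]     # spikes ending a violation
        times[key] = viol
        total += len(viol)
    return total, times

# Two violations: 0.5 ms and 0.8 ms intervals with a 1 ms period.
trains = {'u1': [0.0, 0.5, 10.0, 10.8, 30.0]}
total, times = refperiod_violations(trains, refperiod=1.0)
```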

stationarity Module

spike_amplitude_histogram(trains, num_bins, uniform_y_scale=True, unit=uV, progress=None)[source]

Return a spike amplitude histogram.

The resulting histogram is useful to assess the drift in spike amplitude over a longer recording. It shows histograms (one for each trains entry, e.g. segment) of maximum and minimum spike amplitudes.

Parameters:
  • trains (list) – A list of lists of SpikeTrain objects. Each entry of the outer list will be one point on the x-axis (they could correspond to segments); all amplitude occurrences of spikes in the inner list are added up.
  • num_bins (int) – Number of bins for the histograms.
  • uniform_y_scale (bool) – If True, the histogram for each channel will use the same bins. Otherwise, the minimum bin range is computed separately for each channel.
  • unit (Quantity) – Unit of Y-Axis.
  • progress (spykeutils.progress_indicator.ProgressIndicator) – Set this parameter to report progress.
Returns:

A tuple with three values:

  • A three-dimensional histogram matrix, where the first dimension corresponds to bins, the second dimension to the entries of trains (e.g. segments) and the third dimension to channels.
  • A list of the minimum amplitude value for each channel (all values will be equal if uniform_y_scale is true).
  • A list of the maximum amplitude value for each channel (all values will be equal if uniform_y_scale is true).

Return type:

(ndarray, list, list)
