NIR API Documentation#
This page lists functions and classes exposed by nir and their corresponding documentation strings.
Reading and writing from/to HDF5 files#
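The classes documented below are serialized to and from HDF5 files. This page does not list the helpers explicitly, but a minimal round-trip sketch, assuming the top-level nir.write and nir.read functions exposed by the package (file name and parameter values are placeholders):

```python
import numpy as np
import nir

# Build a small two-layer graph (placeholder weights and neuron parameters).
graph = nir.ir.NIRGraph.from_list(
    nir.ir.Affine(weight=np.random.randn(2, 3), bias=np.zeros(2)),
    nir.ir.LIF(
        tau=np.full(2, 1e-2),
        r=np.ones(2),
        v_leak=np.zeros(2),
        v_threshold=np.ones(2),
    ),
)

nir.write("network.nir", graph)  # write the graph to an HDF5 file
graph = nir.read("network.nir")  # read it back as a NIRGraph
```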
NIR nodes#
- class nir.ir.Affine(weight: ~numpy.ndarray, bias: ~numpy.ndarray, input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Affine transform that linearly maps and translates the input signal.
This is equivalent to the affine transformation below and assumes a one-dimensional input vector of shape (N,).
\[y(t) = W*x(t) + b\]
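A minimal construction sketch (weight and bias values are purely illustrative), mapping a 3-dimensional input to a 2-dimensional output:

```python
import numpy as np
import nir

# y(t) = W x(t) + b with W of shape (2, 3) and b of shape (2,)
affine = nir.ir.Affine(
    weight=np.array([[1.0, 0.0, -1.0], [0.5, 2.0, 0.0]]),
    bias=np.array([0.1, -0.2]),
)
```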
- class nir.ir.AvgPool2d(kernel_size: ndarray, stride: ndarray, padding: ndarray)#
Average pooling layer in 2d.
- class nir.ir.Conv1d(input_shape: int | None, weight: ~numpy.ndarray, stride: int, padding: int | str, dilation: int, groups: int, bias: ~numpy.ndarray, input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Convolutional layer in 1d.
Note that the input_shape argument is required to disambiguate the shape, and is used to infer the exact output shape along with the other parameters. If the input_shape is None, the output shape will also be None.
The NIRGraph.infer_types method may be used to automatically infer the input and output types on the graph level.
- Parameters:
input_shape (Optional[int]) – Shape of spatial input (N,)
weight (np.ndarray) – Weight, shape (C_out, C_in, N)
stride (int) – Stride
padding (int | str) – Padding, if string must be ‘same’ or ‘valid’
dilation (int) – Dilation
groups (int) – Groups
bias (np.ndarray) – Bias array of shape (C_out,)
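A construction sketch following the parameter shapes listed above (all values are illustrative):

```python
import numpy as np
import nir

conv1d = nir.ir.Conv1d(
    input_shape=128,                  # spatial input length N
    weight=np.random.randn(8, 2, 3),  # (C_out, C_in, kernel size)
    stride=1,
    padding="same",
    dilation=1,
    groups=1,
    bias=np.zeros(8),                 # (C_out,)
)
```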
- class nir.ir.Conv2d(input_shape: Tuple[int, int] | None, weight: ndarray, stride: int | Tuple[int, int], padding: int | Tuple[int, int] | str, dilation: int | Tuple[int, int], groups: int, bias: ndarray)#
Convolutional layer in 2d.
Note that the input_shape argument is required to disambiguate the shape, and is used to infer the exact output shape along with the other parameters. If the input_shape is None, the output shape will also be None.
The NIRGraph.infer_types method may be used to automatically infer the input and output types on the graph level.
- Parameters:
input_shape (Optional[tuple[int, int]]) – Shape of spatial input (N_x, N_y)
weight (np.ndarray) – Weight, shape (C_out, C_in, N_x, N_y)
stride (int | Tuple[int, int]) – Stride
padding (int | Tuple[int, int] | str) – Padding, if string must be ‘same’ or ‘valid’
dilation (int | Tuple[int, int]) – Dilation
groups (int) – Groups
bias (np.ndarray) – Bias array of shape (C_out,)
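The 2d variant is constructed analogously; a sketch with illustrative values:

```python
import numpy as np
import nir

conv2d = nir.ir.Conv2d(
    input_shape=(32, 32),                 # spatial input size (N_x, N_y)
    weight=np.random.randn(16, 3, 5, 5),  # (C_out, C_in, kernel_x, kernel_y)
    stride=(1, 1),
    padding=(2, 2),
    dilation=(1, 1),
    groups=1,
    bias=np.zeros(16),                    # (C_out,)
)
```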
- class nir.ir.CubaLIF(tau_syn: ~numpy.ndarray, tau_mem: ~numpy.ndarray, r: ~numpy.ndarray, v_leak: ~numpy.ndarray, v_threshold: ~numpy.ndarray, w_in: ~numpy.ndarray = 1.0, input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Current-based leaky integrate-and-fire neuron model.
The current based leaky integrate-and-fire neuron model is defined by the following equations:
\[\tau_{syn} \dot{I} = -I + w_{in} S\]
\[\tau_{mem} \dot{v} = (v_{leak} - v) + R I\]
\[\begin{split}z = \begin{cases} 1 & v > v_{threshold} \\ 0 & else \end{cases}\end{split}\]
\[\begin{split}v = \begin{cases} v-v_{threshold} & z=1 \\ v & else \end{cases}\end{split}\]
Where \(\tau_{syn}\) is the synaptic time constant, \(\tau_{mem}\) is the membrane time constant, \(R\) is the resistance, \(v_{leak}\) is the leak voltage, \(v_{threshold}\) is the firing threshold, \(w_{in}\) is the input current weight (elementwise), and \(S\) is the input spike.
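A construction sketch for a population of four neurons (time constants and thresholds are placeholders):

```python
import numpy as np
import nir

n = 4  # number of neurons
cuba_lif = nir.ir.CubaLIF(
    tau_syn=np.full(n, 5e-3),    # synaptic time constants
    tau_mem=np.full(n, 1e-2),    # membrane time constants
    r=np.ones(n),                # resistances
    v_leak=np.zeros(n),          # leak voltages
    v_threshold=np.ones(n),      # firing thresholds
    w_in=np.ones(n),             # elementwise input current weights
)
```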
- class nir.ir.Delay(delay: ~numpy.ndarray, input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Simple delay node.
This node implements a simple delay:
\[y(t) = x(t - \tau)\]
- class nir.ir.Flatten(input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, start_dim: int = 1, end_dim: int = -1, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Flatten node.
This node flattens its input tensor. input_type must be a dict with one key: “input”.
- to_dict() Dict[str, Any] #
Serialize into a dictionary.
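A construction sketch that flattens dimensions 1 through -1 of a (2, 3, 4) input; the shape and the convention that the input_type value holds the incoming tensor shape are illustrative assumptions:

```python
import numpy as np
import nir

flatten = nir.ir.Flatten(
    input_type={"input": np.array([2, 3, 4])},  # shape of the incoming tensor
    start_dim=1,
    end_dim=-1,
)
```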
- class nir.ir.I(r: ~numpy.ndarray, input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Integrator.
The integrator neuron model is defined by the following equation:
\[\dot{v} = R I\]
- class nir.ir.IF(r: ~numpy.ndarray, v_threshold: ~numpy.ndarray, input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Integrate-and-fire neuron model.
The integrate-and-fire neuron model is defined by the following equations:
\[\dot{v} = R I\]
\[\begin{split}z = \begin{cases} 1 & v > v_{thr} \\ 0 & else \end{cases}\end{split}\]
\[\begin{split}v = \begin{cases} v-v_{thr} & z=1 \\ v & else \end{cases}\end{split}\]
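A minimal construction sketch for three neurons with unit resistance and unit threshold:

```python
import numpy as np
import nir

if_node = nir.ir.IF(
    r=np.ones(3),            # resistances
    v_threshold=np.ones(3),  # firing thresholds
)
```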
- class nir.ir.Input(input_type: Dict[str, ndarray])#
Input Node.
This is a virtual node, which allows feeding in data into the graph.
- to_dict() Dict[str, Any] #
Serialize into a dictionary.
- class nir.ir.LI(tau: ~numpy.ndarray, r: ~numpy.ndarray, v_leak: ~numpy.ndarray, input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Leaky integrator neuron model.
The leaky integrator neuron model is defined by the following equation:
\[\tau \dot{v} = (v_{leak} - v) + R I\]
Where \(\tau\) is the time constant, \(v\) is the membrane potential, \(v_{leak}\) is the leak voltage, \(R\) is the resistance, and \(I\) is the input current.
- class nir.ir.LIF(tau: ~numpy.ndarray, r: ~numpy.ndarray, v_leak: ~numpy.ndarray, v_threshold: ~numpy.ndarray, input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Leaky integrate-and-fire neuron model.
The leaky integrate-and-fire neuron model is defined by the following equations:
\[\tau \dot{v} = (v_{leak} - v) + R I\]
\[\begin{split}z = \begin{cases} 1 & v > v_{thr} \\ 0 & else \end{cases}\end{split}\]
\[\begin{split}v = \begin{cases} v-v_{thr} & z=1 \\ v & else \end{cases}\end{split}\]
Where \(\tau\) is the time constant, \(v\) is the membrane potential, \(v_{leak}\) is the leak voltage, \(R\) is the resistance, \(v_{threshold}\) is the firing threshold, and \(I\) is the input current.
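A construction sketch for a population of two neurons (parameter values are placeholders):

```python
import numpy as np
import nir

lif = nir.ir.LIF(
    tau=np.full(2, 1e-2),    # membrane time constants
    r=np.ones(2),            # resistances
    v_leak=np.zeros(2),      # leak voltages
    v_threshold=np.ones(2),  # firing thresholds
)
```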
- class nir.ir.Linear(weight: ndarray)#
Linear transform without bias:
\[y(t) = W*x(t)\]
- class nir.ir.NIRGraph(nodes: ~typing.Dict[str, ~nir.ir.node.NIRNode], edges: ~typing.List[~typing.Tuple[str, str]], input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Neural Intermediate Representation (NIR) Graph containing a number of nodes and edges.
A graph of computational nodes and identity edges.
- static from_list(*nodes: NIRNode) NIRGraph #
Create a sequential graph from a list of nodes, labelling each node by its index in the list.
- infer_types()#
Infer the shapes of all nodes in this graph. Will modify the input_type and output_type of all nodes in the graph.
Assumes that either the input type or the output type of the graph is set, and that if A->B is an edge, then A.output_type.values() = B.input_type.values().
- to_dict() Dict[str, Any] #
Serialize into a dictionary.
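Two equivalent ways to build a small Affine -> LIF graph, sketched with placeholder weights: the from_list shortcut, and an explicit nodes/edges construction with Input and Output nodes followed by infer_types:

```python
import numpy as np
import nir

w, b = np.random.randn(2, 3), np.zeros(2)
lif_kwargs = dict(
    tau=np.full(2, 1e-2), r=np.ones(2), v_leak=np.zeros(2), v_threshold=np.ones(2)
)

# Sequential graph; nodes are labelled by their index in the list
graph = nir.ir.NIRGraph.from_list(
    nir.ir.Affine(weight=w, bias=b),
    nir.ir.LIF(**lif_kwargs),
)

# Explicit construction with named nodes and identity edges
graph = nir.ir.NIRGraph(
    nodes={
        "input": nir.ir.Input(input_type={"input": np.array([3])}),
        "affine": nir.ir.Affine(weight=w, bias=b),
        "lif": nir.ir.LIF(**lif_kwargs),
        "output": nir.ir.Output(output_type={"output": np.array([2])}),
    },
    edges=[("input", "affine"), ("affine", "lif"), ("lif", "output")],
)
graph.infer_types()  # fill in input_type/output_type on all nodes
```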
- class nir.ir.NIRNode#
Base superclass of all Neural Intermediate Representation (NIR) units.
All NIR primitives inherit from this class, but NIRNode itself should never be instantiated.
- to_dict() Dict[str, Any] #
Serialize into a dictionary.
- class nir.ir.Output(output_type: Dict[str, ndarray])#
Output Node.
Defines an output of the graph.
- to_dict() Dict[str, Any] #
Serialize into a dictionary.
- class nir.ir.Scale(scale: ndarray)#
Scales a signal elementwise by a set of values.
This node is equivalent to the Hadamard (elementwise) product.
\[y(t) = x(t) \odot s\]
- class nir.ir.SumPool2d(kernel_size: ~numpy.ndarray, stride: ~numpy.ndarray, padding: ~numpy.ndarray, input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Sum pooling layer in 2d.
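A construction sketch for 2x2 sum pooling with stride 2 and no padding:

```python
import numpy as np
import nir

sum_pool = nir.ir.SumPool2d(
    kernel_size=np.array([2, 2]),
    stride=np.array([2, 2]),
    padding=np.array([0, 0]),
)
```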
- class nir.ir.Threshold(threshold: ~numpy.ndarray, input_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, output_type: ~typing.Dict[str, ~numpy.ndarray] | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)#
Threshold node.
This node implements the Heaviside step function:
\[\begin{split}z = \begin{cases} 1 & v > v_{thr} \\ 0 & else \end{cases}\end{split}\]
- nir.ir.dict2NIRNode(data_dict: Dict[str, Any]) NIRNode #
Assumes data_dict[“type”] exists and corresponds to a subclass of NIRNode.
Other items should match the fields of the corresponding NIRNode subclass, unless the subclass provides its own from_dict. Any extra items will be rejected and should be removed before calling this function.
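A round-trip sketch, assuming that to_dict emits the “type” entry that dict2NIRNode expects:

```python
import numpy as np
import nir

scale = nir.ir.Scale(scale=np.array([0.5, 2.0]))
data = scale.to_dict()                # dictionary containing a "type" entry
restored = nir.ir.dict2NIRNode(data)  # reconstructs an equivalent Scale node
```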