Math#

This submodule contains various mathematical functions. Most are re-exported directly from pytensor.tensor and pytensor.tensor.linalg (see there for full signatures and details). Doing any kind of math with PyMC random variables, or defining custom likelihoods or priors, requires you to use these PyTensor expressions rather than NumPy or Python code.

pymc.math.argmax(x, axis=None, keepdims=False)[source]#

Returns indices of maximum elements obtained by iterating over given axis.

When axis is None (the default value), the argmax is performed over the flattened tensor.

Parameters:
x: TensorLike

Array on which to compute argmax

axis:

Axis along which to compute argmax. Unlike numpy, multiple partial axes are supported.

keepdims: bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.

Returns:
TensorVariable

TensorVariable representing the argmax operation

pymc.math.argmin(x, axis=None, keepdims=False)[source]#

Returns indices of minimum elements obtained by iterating over given axis.

When axis is None (the default value), the argmin is performed over the flattened tensor.

Parameters:
keepdims: bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.

pymc.math.argsort(a, axis=-1, kind=None, order=None, stable=None)[source]#

Returns the indices that would sort an array.

Perform an indirect sort along the given axis using the algorithm specified by the kind keyword. It returns an array of indices of the same shape as a that index data along the given axis in sorted order.

pymc.math.as_tensor(x, name=None, ndim=None, **kwargs)#

Convert x into an equivalent TensorVariable.

This function can be used to turn ndarrays, numbers, ScalarType instances, Apply instances and TensorVariable instances into valid input list elements.

See pytensor.as_symbolic for a more general conversion function.

Parameters:
x

The object to be converted into a Variable type. A numpy.ndarray argument will not be copied, but a list of numbers will be copied to make a numpy.ndarray.

name

If a new Variable instance is created, it will be named with this string.

ndim

Return a Variable with this many dimensions.

dtype

The dtype to use for the resulting Variable. If x is already a Variable type, then the dtype will not be changed.

Raises:
TypeError

If x cannot be converted to a TensorVariable.

pymc.math.block_diagonal(matrices, sparse=False, format='csr')[source]#

See pt.linalg.block_diag or pytensor.sparse.basic.block_diag for reference.

Parameters:
matrices: tensors
format: str (default ‘csr’)

must be one of: ‘csr’, ‘csc’

sparse: bool (default False)

if True return sparse format

Returns:
matrix
pymc.math.broadcast_arrays(*args)[source]#

Broadcast any number of arrays against each other.

Parameters:
*args

The arrays to broadcast.

pymc.math.broadcast_to(x, shape)[source]#

Broadcast an array to a new shape.

Parameters:
array

The array to broadcast.

shape

The shape of the desired array.

Returns:
broadcast

A readonly view on the original array with the given shape. It is typically not contiguous. Furthermore, more than one element of a broadcasted array may refer to a single memory location.

pymc.math.cartesian(*arrays)[source]#

Make the Cartesian product of arrays.

Parameters:
arrays: N-D array-like

N-D arrays where earlier arrays loop more slowly than later ones
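
The ordering ("earlier arrays loop more slowly") can be sketched in NumPy with meshgrid (a sketch of the semantics for 1-D inputs, not the symbolic op itself):

```python
import numpy as np

def cartesian_sketch(*arrays):
    """NumPy sketch of pymc.math.cartesian for 1-D inputs."""
    grids = np.meshgrid(*arrays, indexing="ij")  # 'ij' keeps earlier arrays slowest
    return np.stack([g.ravel() for g in grids], axis=-1)

out = cartesian_sketch([1, 2], [10, 20, 30])
# The first array's values change slowest, the last array's fastest.
assert out.tolist() == [[1, 10], [1, 20], [1, 30], [2, 10], [2, 20], [2, 30]]
```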

pymc.math.cho_solve(c_and_lower, b, *, b_ndim=None)[source]#

Solve the linear equations A x = b, given the Cholesky factorization of A.

Parameters:
c_and_lower: tuple of (TensorLike, bool)

Cholesky factorization of a, as given by cho_factor

b: TensorLike

Right-hand side

check_finite: bool

Unused by PyTensor. PyTensor will return nan if the operation fails.

b_ndim: int

Whether the core case of b is a vector (1) or matrix (2). This will influence how batched dimensions are interpreted.
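
The numerics can be sketched in NumPy (this illustrates the semantics, not the symbolic Op): given the lower Cholesky factor L of A, solving A x = b amounts to two triangular solves:

```python
import numpy as np

# Hypothetical small SPD system for illustration.
A = np.array([[4.0, 2.0], [2.0, 3.0]])
b = np.array([2.0, 5.0])

L = np.linalg.cholesky(A)     # A == L @ L.T, L lower triangular
y = np.linalg.solve(L, b)     # forward solve:  L y = b
x = np.linalg.solve(L.T, y)   # backward solve: L.T x = y

assert np.allclose(A @ x, b)
```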

pymc.math.cholesky(x, lower=True, *, check_finite=True, overwrite_a=False, on_error='nan')[source]#

Return a triangular matrix square root of positive semi-definite x.

L = cholesky(X, lower=True) implies dot(L, L.T) == X.

Parameters:
x: tensor_like
lower: bool, default=True

Whether to return the lower or upper cholesky factor

check_finite: bool

Unused by PyTensor. PyTensor will return nan if the operation fails.

overwrite_a: bool, ignored

Whether to use the same memory for the output as a. This argument is ignored, and is present here only for consistency with scipy.linalg.cholesky.

on_error: {‘raise’, ‘nan’}

If on_error is set to ‘raise’, this Op will raise a scipy.linalg.LinAlgError if the matrix is not positive definite. If on_error is set to ‘nan’, it will return a matrix containing nans instead.

Returns:
TensorVariable

Lower or upper triangular Cholesky factor of x
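
A NumPy analogue makes the defining property concrete (a sketch of the numerics, not the symbolic Op):

```python
import numpy as np

# A small positive-definite example matrix (hypothetical values).
X = np.array([[4.0, 2.0], [2.0, 3.0]])

L = np.linalg.cholesky(X)       # lower factor, matching lower=True
assert np.allclose(L @ L.T, X)  # dot(L, L.T) == X, as stated above

U = L.T                         # the upper factor is the transpose
assert np.allclose(U.T @ U, X)
```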

pymc.math.concatenate(tensor_list, axis=0)[source]#

Alias for `join`(axis, *tensor_list).

This function is similar to join, but uses the signature of numpy’s concatenate function.

Raises:
TypeError

The tensor_list must be a tuple or list.

pymc.math.constant(x, name=None, ndim=None, dtype=None)[source]#

Return a TensorConstant with value x.

Raises:
TypeError

x could not be converted to a numpy.ndarray.

ValueError

x could not be expanded to have ndim dimensions.

pymc.math.cumprod(x, axis=None)[source]#

Return the cumulative product of the elements along a given axis.

This wraps numpy.cumprod.

Parameters:
x

Input tensor variable.

axis

The axis along which the cumulative product is computed. The default (None) is to compute the cumprod over the flattened array.

.. versionadded:: 0.7
pymc.math.cumsum(x, axis=None)[source]#

Return the cumulative sum of the elements along a given axis.

This wraps numpy.cumsum.

Parameters:
x

Input tensor variable.

axis

The axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.

.. versionadded:: 0.7
pymc.math.diag(v, k=0)[source]#

A helper function for two ops: ExtractDiag and AllocDiag. The name diag keeps it consistent with numpy. It accepts both a tensor vector and a tensor matrix. If the passed tensor variable v has v.ndim == 2, it builds an ExtractDiag instance and returns a vector whose entries equal v's main diagonal; if v.ndim is 1, it builds an AllocDiag instance and returns a matrix with v on its k-th diagonal.

Parameters:
v: symbolic tensor
k: int

offset

Returns:
tensor: symbolic tensor
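
The dual behaviour mirrors numpy.diag, which serves as a quick sketch of the semantics:

```python
import numpy as np

M = np.arange(9).reshape(3, 3)
# 2-D input: extract the main diagonal (ExtractDiag-like behaviour).
assert np.diag(M).tolist() == [0, 4, 8]

v = np.array([1, 2, 3])
# 1-D input: allocate a matrix with v on the k-th diagonal (AllocDiag-like).
D = np.diag(v, k=1)
assert D.shape == (4, 4) and D[0, 1] == 1 and D[2, 3] == 3
```
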
pymc.math.diff(x, n=1, axis=-1)[source]#

Calculate the n-th order discrete difference along the given axis.

The first order difference is given by out[i] = a[i + 1] - a[i] along the given axis, higher order differences are calculated by using diff recursively. This is heavily inspired by numpy.diff.

Parameters:
x

Input tensor variable.

n

The number of times values are differenced, default is 1.

axis

The axis along which the difference is taken, default is the last axis.

.. versionadded:: 0.6
pymc.math.dot(l, r)[source]#

Return a symbolic dot product.

This is designed to work with both sparse and dense tensors types.

pymc.math.eigh(a, b=None, lower=True, UPLO=None, driver='evr')[source]#

Return the eigenvalues and eigenvectors of a symmetric/Hermitian matrix.

Parameters:
a: TensorLike

Symmetric/Hermitian matrix (or batch thereof).

b: TensorLike, optional

Second matrix for the generalized eigenvalue problem A v = w B v. Must be positive-definite. If None, the standard eigenvalue problem is solved.

lower: bool

Whether to use the lower or upper triangle of a (and b, if provided). Default is True

UPLO: {‘L’, ‘U’}, optional

Whether to use the lower or upper triangle of a (and b, if provided). Default is ‘L’ (lower). UPLO is deprecated and will be removed in a future version. Use the lower argument instead.

driver: {‘evr’, ‘evd’}, optional

LAPACK driver to use. 'evr' (default) uses the MRRR algorithm, the fastest general-purpose driver. This is the default used by Scipy. 'evd' uses divide-and-conquer, matching NumPy, JAX, and MLX.

Returns:
w: Variable

Eigenvalues of the system, in ascending order.

v: Variable

Eigenvectors of the system, ordered to correspond with the eigenvalues.

pymc.math.expand_dims(a, axis)[source]#

Expand the shape of an array.

Insert a new axis that will appear at the axis position in the expanded array shape.

Parameters:
a

The input array.

axis

Position in the expanded axes where the new axis is placed. If axis is empty, a will be returned immediately.

Returns:
a with a new axis at the axis position.
pymc.math.expand_packed_triangular(n, packed, lower=True, diagonal_only=False)[source]#

Convert a packed triangular matrix into a two dimensional array.

Triangular matrices can be stored with better space efficiency by storing the non-zero values in a one-dimensional array. We number the elements by row like this (for lower or upper triangular matrices):

[[0 - - -]      [[0 1 2 3]
 [1 2 - -]       [- 4 5 6]
 [3 4 5 -]       [- - 7 8]
 [6 7 8 9]]      [- - - 9]]

Parameters:
n: int

The number of rows of the triangular matrix.

packed: pytensor.vector

The matrix in packed format.

lower: bool, default=True

If true, assume that the matrix is lower triangular.

diagonal_only: bool

If true, return only the diagonal of the matrix.
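
The row-wise packing above can be sketched with NumPy's tril_indices, whose row-major ordering matches the numbering shown (a sketch of the layout, not the symbolic Op):

```python
import numpy as np

n = 4
packed = np.arange(n * (n + 1) // 2)   # entries numbered by row, as above
L = np.zeros((n, n), dtype=int)
L[np.tril_indices(n)] = packed         # fills row by row: 0; 1 2; 3 4 5; ...

assert L[1, 0] == 1 and L[2, 2] == 5 and L[3, 3] == 9
assert np.all(L[np.triu_indices(n, k=1)] == 0)  # strict upper triangle stays zero
```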

pymc.math.eye(n, m=None, k=0, dtype=None)[source]#

Return a 2-D array with ones on the diagonal and zeros elsewhere.

Parameters:
n: int

Number of rows in the output.

m: int, optional

Number of columns in the output. If None, defaults to N.

k: int, optional

Index of the diagonal: 0 (the default) refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal.

dtype: data-type, optional

Data-type of the returned array.

Returns:
ndarray of shape (N,M)

An array where all elements are equal to zero, except for the k-th diagonal, whose values are equal to one.

pymc.math.flatten(x, ndim=1)[source]#

Return a copy of the array collapsed into one dimension.

Reshapes the variable x by keeping the first ndim-1 dimension sizes of x the same, and making the last dimension size of x equal to the product of its remaining dimension sizes.

Parameters:
x: pytensor.tensor.var.TensorVariable

The variable to be reshaped.

ndim: int

The number of dimensions of the returned variable. The default value is 1.

Returns:
pytensor.tensor.var.TensorVariable

The flattened variable with dimensionality ndim.

pymc.math.full(shape, fill_value, dtype=None)[source]#

Return a new array of given shape and type, filled with fill_value.

See numpy.full.

Parameters:
shape: int or sequence of ints

Shape of the new array, e.g., (2, 3) or 2.

fill_value: scalar or array_like

Fill value.

dtype: data-type, optional

The desired data-type for the array. The default, None, means np.array(fill_value).dtype.

pymc.math.full_like(a, fill_value, dtype=None)[source]#

Equivalent of numpy.full_like.

Returns:
tensor

A tensor with the shape of a, filled with fill_value of type dtype.

pymc.math.iv(v, x)[source]#

Modified Bessel function of the first kind of order v (real).

Computed as ive(v, x) * exp(abs(x)) for numerical consistency with ive. For large x, prefer working in log-space: log(iv(v, x)) == log(ive(v, x)) + abs(x) to avoid overflow.

pymc.math.kron(*Ks)#

Return the Kronecker product of arguments.

\(K_1 \otimes K_2 \otimes \dots \otimes K_D\)

Parameters:
Ks: Iterable of 2D array_like

Arrays of which to take the product.

Returns:
np.ndarray

Block matrix Kronecker product of the argument matrices.

pymc.math.kron_diag(*diags)[source]#

Return diagonal of a kronecker product.

Parameters:
diags: 1D arrays

The diagonals of matrices that are to be Kroneckered

pymc.math.kron_dot(krons, m, *, op=<function dot>)#

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m).

Parameters:
krons: list of square 2D array_like objects

D square matrices \([A_1, A_2, ..., A_D]\) to be Kronecker’ed \(A = A_1 \otimes A_2 \otimes ... \otimes A_D\) Product of column dimensions must be \(N\)

m: NxM array or 1D array (treated as Nx1)

Object that krons act upon

Returns:
numpy array
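
kron_dot exists so that the Kronecker product never has to be materialized. The underlying identity for two factors, (A ⊗ B) vec(V) == vec(B V A^T) with column-major vec, can be checked in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(4, 4))
V = rng.normal(size=(4, 3))                 # the right-hand side, reshaped

v = V.flatten(order="F")                    # column-stacking vec(V)
direct = np.kron(A, B) @ v                  # materializes the 12x12 product
clever = (B @ V @ A.T).flatten(order="F")   # never forms the Kronecker product

assert np.allclose(direct, clever)
```
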
pymc.math.kron_solve_lower(krons, m, *, op=functools.partial(<function solve_triangular>, lower=True))#

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m).

Parameters:
krons: list of square 2D array_like objects

D square matrices \([A_1, A_2, ..., A_D]\) to be Kronecker’ed \(A = A_1 \otimes A_2 \otimes ... \otimes A_D\) Product of column dimensions must be \(N\)

m: NxM array or 1D array (treated as Nx1)

Object that krons act upon

Returns:
numpy array
pymc.math.kron_solve_upper(krons, m, *, op=functools.partial(<function solve_triangular>, lower=False))#

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m).

Parameters:
krons: list of square 2D array_like objects

D square matrices \([A_1, A_2, ..., A_D]\) to be Kronecker’ed \(A = A_1 \otimes A_2 \otimes ... \otimes A_D\) Product of column dimensions must be \(N\)

m: NxM array or 1D array (treated as Nx1)

Object that krons act upon

Returns:
numpy array
pymc.math.kronecker(*Ks)[source]#

Return the Kronecker product of arguments.

\(K_1 \otimes K_2 \otimes \dots \otimes K_D\)

Parameters:
Ks: Iterable of 2D array_like

Arrays of which to take the product.

Returns:
np.ndarray

Block matrix Kronecker product of the argument matrices.

pymc.math.kv(v, x)[source]#

Modified Bessel function of the second kind of real order v.

pymc.math.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis=0, end=None, steps=None)[source]#

Return evenly spaced numbers over a specified interval.

Returns num evenly spaced samples, calculated over the interval [start, stop].

The endpoint of the interval can optionally be excluded.

Parameters:
start: int, float, or TensorVariable

The starting value of the sequence.

stop: int, float or TensorVariable

The end value of the sequence, unless endpoint is set to False. In that case, the sequence consists of all but the last of num + 1 evenly spaced samples, such that stop is excluded.

num: int

Number of samples to generate. Must be non-negative.

endpoint: bool

Whether to include the endpoint in the range.

retstep: bool

If true, returns both the samples and an array of steps between samples.

dtype: str, optional

dtype of the output tensor(s). If None, the dtype is inferred from that of the values provided to the start and stop arguments.

axis: int

Axis along which to generate samples. Ignored if both start and end have dimension 0. By default, axis=0 will insert the samples on a new left-most dimension. To insert samples on a right-most dimension, use axis=-1.

end: int, float or TensorVariable

Warning

The “end” parameter is deprecated and will be removed in a future version. Use “stop” instead.

The end value of the sequence, unless endpoint is set to False. In that case, the sequence consists of all but the last of num + 1 evenly spaced samples, such that end is excluded.

steps: float, int, or TensorVariable

Warning

The “steps” parameter is deprecated and will be removed in a future version. Use “num” instead.

Number of samples to generate. Must be non-negative

Returns:
samples: TensorVariable

Tensor containing num evenly-spaced values between [start, stop]. The range is inclusive if endpoint is True.

step: TensorVariable

Tensor containing the spacing between samples. Only returned if retstep is True.

pymc.math.log1mexp(x, *, negative_input=UNSET)[source]#

Return log(1 - exp(-x)).

This function is numerically more stable than the naive approach.

For details, see https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf

References

[Machler2012]

Martin Mächler (2012). “Accurately computing log(1-exp(-|a|)) assessed by the Rmpfr package”
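
The numerical issue is easy to demonstrate: for large x, 1 - exp(-x) rounds to exactly 1 in floating point. A scalar sketch of the stable computation, using Mächler's cutoff at log(2) (an assumption based on the referenced note, not the exact PyTensor implementation):

```python
import math

def log1mexp_sketch(x):
    """Stable log(1 - exp(-x)) for x > 0, following Machler (2012)."""
    if x < math.log(2):
        return math.log(-math.expm1(-x))  # accurate when exp(-x) is close to 1
    return math.log1p(-math.exp(-x))      # accurate when exp(-x) is tiny

# The naive formula loses all precision long before the stable one does:
assert math.log(1 - math.exp(-40.0)) == 0.0   # naive: rounds to log(1.0)
assert log1mexp_sketch(40.0) < 0.0            # stable: keeps the tiny negative value
```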

pymc.math.logaddexp(*xs)[source]#

Logarithm of the sum of exponentiations of the inputs.

See numpy.logaddexp.

Parameters:
xssymbolic tensors

Input

Returns:
TensorVariable
pymc.math.logdiffexp(a, b)[source]#

Return log(exp(a) - exp(b)).

pymc.math.logsumexp(x, axis=None, keepdims=False)[source]#

Compute the log of the sum of exponentials of input elements.

See scipy.special.logsumexp.

Parameters:
x: symbolic tensor

Input

axis: None or int or tuple of ints, optional

Axis or axes over which the sum is taken. By default axis is None, and all elements are summed.

keepdims: bool, optional

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array.

Returns:
TensorVariable
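
The max-shift trick behind a stable logsumexp can be sketched in NumPy (this illustrates the numerics, not the symbolic graph):

```python
import numpy as np

def logsumexp_sketch(x, axis=None, keepdims=False):
    """Stable log(sum(exp(x))) via subtracting the maximum first."""
    m = np.max(x, axis=axis, keepdims=True)
    out = m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))
    return out if keepdims else np.squeeze(out, axis=axis)

x = np.array([1000.0, 1000.0])
with np.errstate(over="ignore"):
    naive = np.log(np.sum(np.exp(x)))  # exp(1000) overflows to inf
assert np.isinf(naive)
assert np.isclose(logsumexp_sketch(x), 1000.0 + np.log(2.0))
```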
pymc.math.matmul(x1, x2, dtype=None)[source]#

Compute the matrix product of two tensor variables.

Parameters:
x1, x2

Input arrays, scalars not allowed.

dtype

The desired data-type for the array. If not given, then the type will be determined as the minimum type required to hold the objects in the sequence.

Returns:
out: ndarray

The matrix product of the inputs. This is a scalar only when both x1, x2 are 1-d vectors.

Raises:
ValueError

If the last dimension of x1 is not the same size as the second-to-last dimension of x2. If a scalar value is passed in.

Notes

The behavior depends on the arguments in the following way.

  • If both arguments are 2-D they are multiplied like conventional matrices.

  • If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.

  • If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.

  • If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.

matmul differs from dot in two important ways:

  • Multiplication by scalars is not allowed, use mul instead.

  • Stacks of matrices are broadcast together as if the matrices were elements, respecting the signature (n, k), (k, m) -> (n, m).
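
These rules match NumPy's @ operator, which can serve as a quick reference for the shapes involved:

```python
import numpy as np

A = np.ones((2, 3, 4))   # a stack of two 3x4 matrices
B = np.ones((4, 5))      # broadcast against every matrix in the stack
assert (A @ B).shape == (2, 3, 5)

v = np.ones(4)           # 1-D: promoted to a matrix, then the dummy axis dropped
assert (A @ v).shape == (2, 3)

raised = False
try:
    A @ 2                # multiplication by scalars is rejected, unlike dot
except (TypeError, ValueError):
    raised = True
assert raised
```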

pymc.math.max(x, axis=None, keepdims=False)[source]#

Returns maximum elements obtained by iterating over given axis.

When axis is None (the default value), the max is performed over the flattened tensor.

Parameters:
keepdims: bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.

Notes

Like numpy, we raise an error when reducing a dimension with a shape of 0.

pymc.math.mean(input, axis=None, dtype=None, keepdims=False, acc_dtype=None)[source]#

Computes the mean value along the given axis(es) of a tensor input.

Parameters:
axis: None or int or list of int (see Sum)

Compute the mean along this axis of the tensor. None means all axes (like numpy).

dtype: None or string

Dtype to cast the result of the inner summation into. For instance, by default, a sum of a float32 tensor will be done in float64 (acc_dtype would be float64 by default), but that result will be cast back to float32.

keepdims: bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.

acc_dtype: None or string

Dtype to use for the inner summation. This will not necessarily be the dtype of the output (in particular if it is a discrete (int/uint) dtype, the output will be in a float type). If None, then we use the same rules as sum().

pymc.math.min(x, axis=None, keepdims=False)[source]#

Returns minimum elements obtained by iterating over given axis.

When axis is None (the default value), the min is performed over the flattened tensor.

Parameters:
keepdims: bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.

pymc.math.moveaxis(a, source, destination)[source]#

Move axes of a TensorVariable to new positions.

Other axes remain in their original order.

Parameters:
a

The TensorVariable whose axes should be reordered.

source

Original positions of the axes to move. These must be unique.

destination

Destination positions for each of the original axes. These must also be unique.

Returns:
result

TensorVariable with moved axes.

pymc.math.norm(x, ord=None, axis=None, keepdims=False)[source]#

Matrix or vector norm.

Parameters:
x: TensorVariable

Tensor to take norm of.

ord: float, str or int, optional
Order of norm. If ord is a str, it must be one of the following:
  • ‘fro’ or ‘f’ : Frobenius norm

  • ‘nuc’ : nuclear norm

  • ‘inf’ : Infinity norm

  • ‘-inf’ : Negative infinity norm

If an integer, order can be one of -2, -1, 0, 1, or 2. Otherwise ord must be a float.

Default is the Frobenius (L2) norm.

axis: tuple of int, optional

Axes over which to compute the norm. If None, norm of entire matrix (or vector) is computed. Row or column norms can be computed by passing a single integer; this will treat a matrix like a batch of vectors.

keepdims: bool

If True, dummy axes will be inserted into the output so that norm.ndim == x.ndim. Default is False.

Returns:
TensorVariable

Norm of x along axes specified by axis.

Notes

Batched dimensions are supported to the left of the core dimensions. For example, if x is a 3D tensor with shape (2, 3, 4), then norm(x) will compute the norm of each 3x4 matrix in the batch.

If the input is a 2D tensor and should be treated as a batch of vectors, the axis argument must be specified.

pymc.math.ones(shape, dtype=None)[source]#

Create a TensorVariable filled with ones, closer to NumPy’s syntax than alloc.

pymc.math.ones_like(model, dtype=None, opt=False)[source]#

Equivalent of numpy.ones_like.

Parameters:
model: tensor
dtype: data-type, optional
opt: bool

If True, return a constant instead of a graph when possible. Useful for PyTensor optimization, not for the user building a graph, as this has the consequence that model isn't always in the graph.

Returns:
tensor

tensor the shape of model containing ones of the type of dtype.

pymc.math.prod(input, axis=None, dtype=None, keepdims=False, acc_dtype=None, no_zeros_in_input=False)[source]#

Computes the product along the given axis(es) of a tensor input.

When axis is None (the default value), the product is performed over the flattened tensor.

For full documentation see tensor.elemwise.Prod.

Parameters:
keepdims: bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.

pymc.math.repeat(a, repeats, axis=None)[source]#

Repeat elements of a tensor.

See numpy.repeat() for more information.

Parameters:
a: tensor_like

Input tensor

repeats: tensor_like

The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis.

axis: int, optional

The axis along which to repeat values. By default, use the flattened input array, and return a flat output array.

Returns:
repeated_tensor: TensorVariable

Output tensor which has the same shape as a, except along the given axis.

Examples

When axis is None, the array is first flattened and then repeated.

Added in version 0.6.

pymc.math.round(a, mode=None)[source]#

Round a using the given mode, one of ‘half_away_from_zero’ or ‘half_to_even’. Defaults to ‘half_to_even’.

pymc.math.sgn(a)[source]#

Return the sign of a.

pymc.math.slogdet(x)[source]#

Compute the sign and (natural) logarithm of the determinant of an array.

Returns a naive graph which is optimized later using rewrites with the det operation.

Parameters:
x: (…, M, M) tensor or tensor_like

Input tensor, has to be square.

Returns:
A tuple with the following attributes:
sign: (…) tensor_like

A number representing the sign of the determinant. For a real matrix, this is 1, 0, or -1.

logabsdet: (…) tensor_like

The natural log of the absolute value of the determinant.

If the determinant is zero, then sign will be 0 and logabsdet will be -inf. In all cases, the determinant is equal to sign * exp(logabsdet).
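
The stated identity can be checked against the NumPy equivalent:

```python
import numpy as np

M = np.array([[2.0, 0.0], [0.0, -3.0]])   # determinant is -6
sign, logabsdet = np.linalg.slogdet(M)

assert sign == -1.0
assert np.isclose(logabsdet, np.log(6.0))
assert np.isclose(sign * np.exp(logabsdet), np.linalg.det(M))
```
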
pymc.math.solve(a, b, *, lower=False, overwrite_a=False, overwrite_b=False, check_finite=True, assume_a='gen', transposed=False, b_ndim=None)[source]#

Solve the linear equation set a * x = b for the unknown x, where a is a square matrix.

If the data matrix is known to be of a particular type, supplying the corresponding string to the assume_a key chooses the dedicated solver. The available options are:

  • diagonal: ‘diagonal’

  • tridiagonal: ‘tridiagonal’

  • banded: ‘banded’

  • upper triangular: ‘upper triangular’

  • lower triangular: ‘lower triangular’

  • symmetric: ‘symmetric’ (or ‘sym’)

  • hermitian: ‘hermitian’ (or ‘her’)

  • positive definite: ‘positive definite’ (or ‘pos’)

  • general: ‘general’ (or ‘gen’)

If omitted, 'general' is the default structure.

The datatype of the arrays defines which solver is called regardless of the values. In other words, even when the complex array entries have precisely zero imaginary parts, the complex solver will be called based on the data type of the array.

Parameters:
a: (…, N, N) array_like

Square input data

b: (…, N, NRHS) array_like

Input data for the right hand side.

lower: bool, default False

Ignored unless assume_a is one of 'sym', 'her', or 'pos'. If True, the calculation uses only the data in the lower triangle of a; entries above the diagonal are ignored. If False (default), the calculation uses only the data in the upper triangle of a; entries below the diagonal are ignored.

overwrite_a: bool

Unused by PyTensor. PyTensor will always perform the operation in-place if possible.

overwrite_b: bool

Unused by PyTensor. PyTensor will always perform the operation in-place if possible.

check_finite: bool

Unused by PyTensor. PyTensor returns nan if the operation fails.

assume_a: str, optional

Valid entries are explained above.

transposed: bool, default False

If True, solves the system A^T x = b. Default is False.

b_ndim: int

Whether the core case of b is a vector (1) or matrix (2). This will influence how batched dimensions are interpreted. By default, b_ndim is assumed to be 2 if b.ndim > 1, else 1.

pymc.math.solve_triangular(a, b, *, trans=0, lower=False, unit_diagonal=False, check_finite=True, b_ndim=None)[source]#

Solve the equation a x = b for x, assuming a is a triangular matrix.

Parameters:
a: TensorVariable

Square input data

b: TensorVariable

Input data for the right hand side.

lower: bool, optional

Use only data contained in the lower triangle of a. Default is to use upper triangle.

trans: {0, 1, 2, ‘N’, ‘T’, ‘C’}, optional

Type of system to solve:

  • 0 or ‘N’: a x = b

  • 1 or ‘T’: a^T x = b

  • 2 or ‘C’: a^H x = b

unit_diagonal: bool, optional

If True, diagonal elements of a are assumed to be 1 and will not be referenced.

check_finite: bool, optional

Unused by PyTensor. PyTensor will return nan if the operation fails.

b_ndim: int

Whether the core case of b is a vector (1) or matrix (2). This will influence how batched dimensions are interpreted.

pymc.math.sort(a, axis=-1, kind=None, order=None, *, stable=None)[source]#

Return a sorted copy of an array.

Parameters:
a: TensorVariable

Tensor to be sorted

axis: int, optional

Axis along which to sort. If None, the array is flattened before sorting.

kind: {‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional

Sorting algorithm. Default is ‘quicksort’ unless stable is defined.

order: list, optional

For compatibility with numpy sort signature. Cannot be specified.

stable: bool, optional

Same as specifying kind = ‘stable’. Cannot be specified at the same time as kind

Returns:
array

A sorted copy of an array.

pymc.math.squeeze(x, axis=None)[source]#

Remove broadcastable (length 1) dimensions from the shape of an array.

It returns the input array, but with the broadcastable dimensions removed. This is always x itself or a view into x.

Added in version 0.6.

Parameters:
x

Input data, tensor variable.

axis: None or int or tuple of ints, optional

Selects a subset of broadcastable dimensions to be removed. If a non broadcastable dimension is selected, an error is raised. If axis is None, all broadcastable dimensions will be removed.

Returns:
x without axis dimensions.

Notes

The behavior can differ from that of NumPy in two ways:

1. If an axis is chosen for a dimension that is not known to be broadcastable, an error is raised, even if this dimension would be broadcastable when the variable is evaluated.

2. Similarly, if axis is None, only dimensions known to be broadcastable will be removed, even if there are more dimensions that happen to be broadcastable when the variable is evaluated.

pymc.math.stack(tensors, axis=0)[source]#

Stack tensors in sequence on given axis (default is 0).

Take a sequence of tensors or tensor-like constant and stack them on given axis to make a single tensor. The size in dimension axis of the result will be equal to the number of tensors passed.

Parameters:
tensors: Sequence[TensorLike]

A list of tensors or tensor-like constants to be stacked.

axis: int

The index of the new axis. Default value is 0.

Examples

>>> a = pytensor.tensor.type.scalar()
>>> b = pytensor.tensor.type.scalar()
>>> c = pytensor.tensor.type.scalar()
>>> x = pytensor.tensor.stack([a, b, c])
>>> x.ndim  # x is a vector of length 3.
1
>>> a = pytensor.tensor.type.tensor4()
>>> b = pytensor.tensor.type.tensor4()
>>> c = pytensor.tensor.type.tensor4()
>>> x = pytensor.tensor.stack([a, b, c])
>>> x.ndim  # x is a 5d tensor.
5
>>> rval = x.eval(dict((t, np.zeros((2, 2, 2, 2))) for t in [a, b, c]))
>>> rval.shape  # 3 tensors are stacked on axis 0
(3, 2, 2, 2, 2)
>>> x = pytensor.tensor.stack([a, b, c], axis=3)
>>> x.ndim
5
>>> rval = x.eval(dict((t, np.zeros((2, 2, 2, 2))) for t in [a, b, c]))
>>> rval.shape  # 3 tensors are stacked on axis 3
(2, 2, 2, 3, 2)
>>> x = pytensor.tensor.stack([a, b, c], axis=-2)
>>> x.ndim
5
>>> rval = x.eval(dict((t, np.zeros((2, 2, 2, 2))) for t in [a, b, c]))
>>> rval.shape  # 3 tensors are stacked on axis -2
(2, 2, 2, 3, 2)
pymc.math.std(input, axis=None, ddof=0, keepdims=False, corrected=False)[source]#

Computes the standard deviation along the given axis(es) of a tensor input.

Parameters:
axis: None or int or (list of int) (see `Sum`)

Compute the standard deviation along this axis of the tensor. None means all axes (like numpy).

ddof: int

Degrees of freedom; 0 would compute the ML estimate, 1 would compute the unbiased estimate.

keepdims: bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.

corrected: bool

If this is set to True, the ‘corrected_two_pass’ algorithm is used to compute the variance. Refer : http://www.cs.yale.edu/publications/techreports/tr222.pdf

Notes

std() calls var(), which uses the two-pass algorithm: https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Two-pass_algorithm. var() also supports the numerically more stable ‘corrected_two_pass’ algorithm (enabled via the corrected flag). Other implementations offer better stability still, but are probably slower.

pymc.math.sum(input, axis=None, dtype=None, keepdims=False, acc_dtype=None)[source]#

Computes the sum along the given axis(es) of a tensor input.

When axis is None (the default value), the sum is performed over the flattened tensor.

For full documentation see Sum. In particular please pay attention to the important warning when using a custom acc_dtype.

Parameters:
keepdims: bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.

pymc.math.swapaxes(y, axis1, axis2)[source]#

Swap the axes of a tensor.

pymc.math.take(a, indices, axis=None, mode='raise')[source]#

Take elements from an array along an axis.

When axis is not None, this function does the same thing as “fancy” indexing (indexing arrays using arrays); however, it can be easier to use if you need elements along a given axis. A call such as np.take(arr, indices, axis=3) is equivalent to arr[:,:,:,indices,...].

See np.take

Parameters:
a: TensorVariable

The source array.

indices: TensorVariable, ndarray, list, tuple

The indices of the values to extract.

axis: int, optional

The axis over which to select values. By default, the flattened input array is used.

pymc.math.tile(A, reps)[source]#

Tile input tensor A according to reps.

See the docstring of numpy.tile for details.

If reps is a PyTensor vector, its length must be statically known. You can use specify_shape to set the length.

Examples

Reps can be a sequence of constants and/or symbolic integer variables.

Reps can also be a single integer vector, in which case its length must be statically known; specify_shape is one valid way to set the length.

pymc.math.trace(a, offset=0, axis1=0, axis2=1)[source]#

Returns the sum along diagonals of the array.

Equivalent to numpy.trace

pymc.math.transpose(x, axes=None)[source]#

Reorder the dimensions of x. (Default: reverse them)

This is a macro around dimshuffle that matches the numpy.transpose function.

pymc.math.tril(m, k=0)[source]#

Lower triangle of an array.

Return a copy of an array with elements above the k-th diagonal zeroed. For arrays with ndim exceeding 2, tril will apply to the final two axes.

Parameters:
m: array_like, shape (…, M, N)

Input array.

k: int, optional

Diagonal above which to zero elements. k = 0 (the default) is the main diagonal, k < 0 is below it and k > 0 is above.

Returns:
tril: ndarray, shape (…, M, N)

Lower triangle of m, of same shape and data-type as m.

See also

triu

Same thing, only for the upper triangle.

Examples

>>> import pytensor.tensor as pt
>>> pt.tril(pt.arange(1, 13).reshape((4, 3)), -1).eval()
array([[ 0,  0,  0],
       [ 4,  0,  0],
       [ 7,  8,  0],
       [10, 11, 12]])
>>> pt.tril(pt.arange(3 * 4 * 5).reshape((3, 4, 5))).eval()
array([[[ 0,  0,  0,  0,  0],
        [ 5,  6,  0,  0,  0],
        [10, 11, 12,  0,  0],
        [15, 16, 17, 18,  0]],

       [[20,  0,  0,  0,  0],
        [25, 26,  0,  0,  0],
        [30, 31, 32,  0,  0],
        [35, 36, 37, 38,  0]],

       [[40,  0,  0,  0,  0],
        [45, 46,  0,  0,  0],
        [50, 51, 52,  0,  0],
        [55, 56, 57, 58,  0]]])
pymc.math.triu(m, k=0)[source]#

Upper triangle of an array.

Return a copy of an array with the elements below the k-th diagonal zeroed. For arrays with ndim exceeding 2, triu will apply to the final two axes.

Please refer to the documentation for tril for further details.

See also

tril

Lower triangle of an array.

Examples

>>> import pytensor.tensor as pt
>>> pt.triu(pt.arange(1, 13).reshape((4, 3)), -1).eval()
array([[ 1,  2,  3],
       [ 4,  5,  6],
       [ 0,  8,  9],
       [ 0,  0, 12]])
>>> pt.triu(pt.arange(3 * 4 * 5).reshape((3, 4, 5))).eval()
array([[[ 0,  1,  2,  3,  4],
        [ 0,  6,  7,  8,  9],
        [ 0,  0, 12, 13, 14],
        [ 0,  0,  0, 18, 19]],

       [[20, 21, 22, 23, 24],
        [ 0, 26, 27, 28, 29],
        [ 0,  0, 32, 33, 34],
        [ 0,  0,  0, 38, 39]],

       [[40, 41, 42, 43, 44],
        [ 0, 46, 47, 48, 49],
        [ 0,  0, 52, 53, 54],
        [ 0,  0,  0, 58, 59]]])
pymc.math.unique(ar, return_index=False, return_inverse=False, return_counts=False, axis=None)[source]#

Find the unique elements of an array.

Returns the sorted unique elements of an array. There are three optional outputs in addition to the unique elements:

  • the indices of the input array that give the unique values

  • the indices of the unique array that reconstruct the input array

  • the number of times each unique value comes up in the input array

pymc.math.var(input, axis=None, ddof=0, keepdims=False, corrected=False)[source]#

Computes the variance along the given axis(es) of a tensor input.

Parameters:
axis: None or int or (list of int) (see `Sum`)

Compute the variance along this axis of the tensor. None means all axes (like numpy).

ddof: int

Degrees of freedom; 0 computes the ML estimate, 1 computes the unbiased estimate.

keepdims: bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.

corrected: bool

If this is set to True, the ‘corrected_two_pass’ algorithm is used to compute the variance. Reference: http://www.cs.yale.edu/publications/techreports/tr222.pdf

Notes

By default this uses the two-pass algorithm (https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Two-pass_algorithm). The ‘corrected_two_pass’ algorithm (enabled via the ‘corrected’ flag) is numerically more stable. Other implementations offer still better stability, but are probably slower.

pymc.math.where(condition[, ift, iff])[source]#

Return elements chosen from ift or iff depending on condition.

Note: When only condition is provided, this function is a shorthand for as_tensor(condition).nonzero().

Parameters:
condition: tensor_like, bool

Where True, yield ift, otherwise yield iff.

ift, iff: tensor_like

Values from which to choose.

Returns:
outTensorVariable

A tensor with elements from ift where condition is True, and elements from iff elsewhere.

pymc.math.zeros(shape, dtype=None)[source]#

Create a TensorVariable filled with zeros, closer to NumPy’s syntax than alloc.

pymc.math.zeros_like(model, dtype=None, opt=False)[source]#

Equivalent of numpy.zeros_like.

Parameters:
model: tensor

dtype: data-type, optional

opt: bool

If True, return a constant instead of a graph when possible. Useful for PyTensor optimization, not for users building a graph, as this means model is not always in the graph.

Returns:
tensor

A tensor with the shape of model, containing zeros of type dtype.