Tensors are the core data structure of TensorFlow.js. They are a generalization of vectors and matrices to potentially higher dimensions.
We have utility functions for common cases like scalar, 1D, 2D, 3D and 4D tensors, as well as a number of functions to initialize tensors in ways useful for machine learning.
Creates a tf.Tensor with the provided values, shape and dtype.
// Pass an array of values to create a vector.
tf.tensor([1, 2, 3, 4]).print();
// Pass a nested array of values to make a matrix or a higher
// dimensional tensor.
tf.tensor([[1, 2], [3, 4]]).print();
// Pass a flat array and specify a shape yourself.
tf.tensor([1, 2, 3, 4], [2, 2]).print();
- values (TypedArray|Array) The values of the tensor. Can be nested array of numbers, or a flat array, or a TypedArray. If the values are strings, they will be encoded as utf-8 and kept as Uint8Array[].
- shape (number[]) The shape of the tensor. If not provided, it is inferred from values. Optional
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data type. Optional
Creates rank-0 tf.Tensor (scalar) with the provided value and dtype.
The same functionality can be achieved with tf.tensor(), but in general we recommend using tf.scalar() as it makes the code more readable.
tf.scalar(3.14).print();
- value (number|boolean|string|Uint8Array) The value of the scalar.
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data type. Optional
Creates rank-1 tf.Tensor with the provided values, shape and dtype.
The same functionality can be achieved with tf.tensor(), but in general we recommend using tf.tensor1d() as it makes the code more readable.
tf.tensor1d([1, 2, 3]).print();
- values (TypedArray|Array) The values of the tensor. Can be array of numbers, or a TypedArray.
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data type. Optional
Creates rank-2 tf.Tensor with the provided values, shape and dtype.
The same functionality can be achieved with tf.tensor(), but in general we recommend using tf.tensor2d() as it makes the code more readable.
// Pass a nested array.
tf.tensor2d([[1, 2], [3, 4]]).print();
// Pass a flat array and specify a shape.
tf.tensor2d([1, 2, 3, 4], [2, 2]).print();
- values (TypedArray|Array) The values of the tensor. Can be nested array of numbers, or a flat array, or a TypedArray.
- shape ([number, number]) The shape of the tensor. If not provided, it is inferred from values. Optional
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data type. Optional
Creates rank-3 tf.Tensor with the provided values, shape and dtype.
The same functionality can be achieved with tf.tensor(), but in general we recommend using tf.tensor3d() as it makes the code more readable.
// Pass a nested array.
tf.tensor3d([[[1], [2]], [[3], [4]]]).print();
// Pass a flat array and specify a shape.
tf.tensor3d([1, 2, 3, 4], [2, 2, 1]).print();
- values (TypedArray|Array) The values of the tensor. Can be nested array of numbers, or a flat array, or a TypedArray.
- shape ([number, number, number]) The shape of the tensor. If not provided, it is inferred from values. Optional
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data type. Optional
Creates rank-4 tf.Tensor with the provided values, shape and dtype.
The same functionality can be achieved with tf.tensor(), but in general we recommend using tf.tensor4d() as it makes the code more readable.
// Pass a nested array.
tf.tensor4d([[[[1], [2]], [[3], [4]]]]).print();
// Pass a flat array and specify a shape.
tf.tensor4d([1, 2, 3, 4], [1, 2, 2, 1]).print();
- values (TypedArray|Array) The values of the tensor. Can be nested array of numbers, or a flat array, or a TypedArray.
- shape ([number, number, number, number]) The shape of the tensor. If not provided, it is inferred from values. Optional
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data type. Optional
Creates rank-5 tf.Tensor with the provided values, shape and dtype.
The same functionality can be achieved with tf.tensor(), but in general we recommend using tf.tensor5d() as it makes the code more readable.
// Pass a nested array.
tf.tensor5d([[[[[1],[2]],[[3],[4]]],[[[5],[6]],[[7],[8]]]]]).print();
// Pass a flat array and specify a shape.
tf.tensor5d([1, 2, 3, 4, 5, 6, 7, 8], [1, 2, 2, 2, 1]).print();
- values (TypedArray|Array) The values of the tensor. Can be nested array of numbers, or a flat array, or a TypedArray.
- shape ([number, number, number, number, number]) The shape of the tensor. If not provided, it is inferred from values. Optional
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data type. Optional
Creates rank-6 tf.Tensor with the provided values, shape and dtype.
The same functionality can be achieved with tf.tensor(), but in general we recommend using tf.tensor6d() as it makes the code more readable.
// Pass a nested array.
tf.tensor6d([[[[[[1],[2]],[[3],[4]]],[[[5],[6]],[[7],[8]]]]]]).print();
// Pass a flat array and specify a shape.
tf.tensor6d([1, 2, 3, 4, 5, 6, 7, 8], [1, 1, 2, 2, 2, 1]).print();
- values (TypedArray|Array) The values of the tensor. Can be nested array of numbers, or a flat array, or a TypedArray.
- shape ([number, number, number, number, number, number]) The shape of the tensor. If not provided, it is inferred from values. Optional
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data type. Optional
Creates an empty tf.TensorBuffer with the specified shape and dtype.
The values are stored in CPU as TypedArray. Fill the buffer using buffer.set(), or by modifying buffer.values directly.
When done, call buffer.toTensor() to get an immutable tf.Tensor with those values.
// Create a buffer and set values at particular indices.
const buffer = tf.buffer([2, 2]);
buffer.set(3, 0, 0);
buffer.set(5, 1, 0);
// Convert the buffer back to a tensor.
buffer.toTensor().print();
- shape (number[]) An array of integers defining the output tensor shape.
- dtype ('float32') The dtype of the buffer. Defaults to 'float32'. Optional
- values (DataTypeMap['float32']) The values of the buffer as TypedArray. Defaults to zeros. Optional
Creates a new tensor with the same values and shape as the specified tensor.
const x = tf.tensor([1, 2]);
x.clone().print();
- x (tf.Tensor|TypedArray|Array) The tensor to clone.
Converts two real numbers to a complex number.
Given a tensor real representing the real part of a complex number, and a tensor imag representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form [r0, i0, r1, i1], where r represents the real part and i represents the imag part.
The input tensors real and imag must have the same shape.
const real = tf.tensor1d([2.25, 3.25]);
const imag = tf.tensor1d([4.75, 5.75]);
const complex = tf.complex(real, imag);
complex.print();
- real (tf.Tensor|TypedArray|Array)
- imag (tf.Tensor|TypedArray|Array)
Create an identity matrix.
- numRows (number) Number of rows.
- numColumns (number) Number of columns. Defaults to numRows. Optional
- batchShape ([number]|[number, number]|[number, number, number]|[number, number, number, number]) If provided, will add the batch shape to the beginning of the shape of the returned tf.Tensor by repeating the identity matrix. Optional
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') Data type. Optional
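For illustration, a brief usage sketch of how these parameters combine (the batched call assumes the batchShape parameter described above):
// A plain 3x3 identity matrix.
tf.eye(3).print();
// A batch of three 2x2 identity matrices; output shape is [3, 2, 2].
tf.eye(2, 2, [3]).print();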
Creates a tf.Tensor filled with a scalar value.
tf.fill([2, 2], 4).print();
- shape (number[]) An array of integers defining the output tensor shape.
- value (number|string) The scalar value to fill the tensor with.
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The type of an element in the resulting tensor. Defaults to 'float32'. Optional
Returns the imaginary part of a complex (or real) tensor.
Given a tensor input, this operation returns a tensor of type float that is the imaginary part of each element in input considered as a complex number. If input is real, a tensor of all zeros is returned.
const x = tf.complex([-2.25, 3.25], [4.75, 5.75]);
tf.imag(x).print();
- input (tf.Tensor|TypedArray|Array)
Return an evenly spaced sequence of numbers over the given interval.
tf.linspace(0, 9, 10).print();
- start (number) The start value of the sequence.
- stop (number) The end value of the sequence.
- num (number) The number of values to generate.
Creates a one-hot tf.Tensor. The locations represented by indices take value onValue (defaults to 1), while all other locations take value offValue (defaults to 0). If indices is rank R, the output has rank R+1 with the last axis of size depth.
tf.oneHot(tf.tensor1d([0, 1], 'int32'), 3).print();
- indices (tf.Tensor|TypedArray|Array) tf.Tensor of indices with dtype int32.
- depth (number) The depth of the one hot dimension.
- onValue (number) A number used to fill in the output when the index matches the location. Optional
- offValue (number) A number used to fill in the output when the index does not match the location. Optional
Creates a tf.Tensor with all elements set to 1.
tf.ones([2, 2]).print();
- shape (number[]) An array of integers defining the output tensor shape.
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The type of an element in the resulting tensor. Defaults to 'float32'. Optional
Creates a tf.Tensor with all elements set to 1 with the same shape as the given tensor.
const x = tf.tensor([1, 2]);
tf.onesLike(x).print();
- x (tf.Tensor|TypedArray|Array) A tensor.
Prints information about the tf.Tensor including its data.
const verbose = true;
tf.tensor2d([1, 2, 3, 4], [2, 2]).print(verbose);
- x (tf.Tensor) The tensor to be printed.
- verbose (boolean) Whether to print verbose information about the Tensor, including dtype and size. Optional
Creates a new tf.Tensor1D filled with the numbers in the range provided.
The tensor is a half-open interval, meaning it includes start but excludes stop. Decrementing ranges and negative step values are also supported.
tf.range(0, 9, 2).print();
- start (number) An integer start value
- stop (number) An integer stop value
- step (number) An integer increment (will default to 1 or -1) Optional
- dtype ('float32'|'int32') The data type of the output tensor. Defaults to 'float32'. Optional
Returns the real part of a complex (or real) tensor.
Given a tensor input, this operation returns a tensor of type float that is the real part of each element in input considered as a complex number.
If the input is real, it simply makes a clone.
const x = tf.complex([-2.25, 3.25], [4.75, 5.75]);
tf.real(x).print();
- input (tf.Tensor|TypedArray|Array)
Creates a tf.Tensor with values sampled from a truncated normal distribution.
tf.truncatedNormal([2, 2]).print();
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
- shape (number[]) An array of integers defining the output tensor shape.
- mean (number) The mean of the normal distribution. Optional
- stdDev (number) The standard deviation of the normal distribution. Optional
- dtype ('float32'|'int32') The data type of the output tensor. Optional
- seed (number) The seed for the random number generator. Optional
Creates a new variable with the provided initial value.
const x = tf.variable(tf.tensor([1, 2, 3]));
x.assign(tf.tensor([4, 5, 6]));
x.print();
- initialValue (tf.Tensor) Initial value for the tensor.
- trainable (boolean) If true, optimizers are allowed to update it. Optional
- name (string) Name of the variable. Defaults to a unique id. Optional
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') If set, initialValue will be converted to the given type. Optional
Creates a tf.Tensor with all elements set to 0.
tf.zeros([2, 2]).print();
- shape (number[]) An array of integers defining the output tensor shape.
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The type of an element in the resulting tensor. Can be 'float32', 'int32' or 'bool'. Defaults to 'float32'. Optional
Creates a tf.Tensor with all elements set to 0 with the same shape as the given tensor.
const x = tf.tensor([1, 2]);
tf.zerosLike(x).print();
- x (tf.Tensor|TypedArray|Array) The tensor of required shape.
This section shows the main Tensor related classes in TensorFlow.js and the methods we expose on them.
A tf.Tensor object represents an immutable, multidimensional array of numbers that has a shape and a data type.
See tf.tensor() for details on how to create a tf.Tensor.
Returns a promise of tf.TensorBuffer that holds the underlying data.
Returns a tf.TensorBuffer that holds the underlying data.
Returns the tensor data as a nested array. The transfer of data is done asynchronously.
Returns the tensor data as a nested array. The transfer of data is done synchronously.
Asynchronously downloads the values from the tf.Tensor. Returns a promise of TypedArray that resolves when the computation has finished.
Synchronously downloads the values from the tf.Tensor. This blocks the UI thread until the values are ready, which can cause performance issues.
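A minimal sketch contrasting the two download paths (the values shown assume this small tensor):
const t = tf.tensor1d([1, 2, 3]);
// Asynchronous download; does not block the UI thread.
t.data().then(values => console.log(values)); // Float32Array [1, 2, 3]
// Synchronous download; blocks until the values are available.
console.log(t.dataSync()); // Float32Array [1, 2, 3]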
Prints the tf.Tensor. See tf.print() for details.
- verbose (boolean) Whether to print verbose information about the tensor, including dtype and size. Optional
A mutable tf.Tensor, useful for persisting state, e.g. for training.
A mutable object, similar to tf.Tensor, that allows users to set values at locations before converting to an immutable tf.Tensor.
See tf.buffer() for creating a tensor buffer.
Sets a value in the buffer at a given location.
- value (SingleValueMap[D]) The value to set.
- ...locs (number[]) The location indices.
Returns the value in the buffer at the provided location.
- ...locs (number[]) The location indices.
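A short sketch of set() and get() together (the indices here are illustrative):
const buffer = tf.buffer([2, 2]);
// Write the value 7 at row 0, column 1.
buffer.set(7, 0, 1);
console.log(buffer.get(0, 1)); // 7
console.log(buffer.get(1, 0)); // 0 (unset entries default to zero)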
This section describes some common Tensor transformations for reshaping and type-casting.
This operation reshapes the "batch" dimension 0 into M + 1 dimensions of shape blockShape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse of tf.spaceToBatchND(). See below for a precise description.
const x = tf.tensor4d([1, 2, 3, 4], [4, 1, 1, 1]);
const blockShape = [2, 2];
const crops = [[0, 0], [0, 0]];
x.batchToSpaceND(blockShape, crops).print();
- x (tf.Tensor|TypedArray|Array) A tf.Tensor. N-D with x.shape = [batch] + spatialShape + remainingShape, where spatialShape has M dimensions.
- blockShape (number[]) A 1-D array. Must have shape [M], all values must be >= 1.
- crops (number[][]) A 2-D array. Must have shape [M, 2], all values must be >= 0. crops[i] = [cropStart, cropEnd] specifies the amount to crop from input dimension i + 1, which corresponds to spatial dimension i. It is required that cropStart[i] + cropEnd[i] <= blockShape[i] * inputShape[i + 1].
This operation is equivalent to the following steps:
- Reshape x to reshaped of shape: [blockShape[0], ..., blockShape[M-1], batch / prod(blockShape), x.shape[1], ..., x.shape[N-1]]
- Permute dimensions of reshaped to produce permuted of shape: [batch / prod(blockShape), x.shape[1], blockShape[0], ..., x.shape[M], blockShape[M-1], x.shape[M+1], ..., x.shape[N-1]]
- Reshape permuted to produce reshapedPermuted of shape: [batch / prod(blockShape), x.shape[1] * blockShape[0], ..., x.shape[M] * blockShape[M-1], x.shape[M+1], ..., x.shape[N-1]]
- Crop the start and end of dimensions [1, ..., M] of reshapedPermuted according to crops to produce the output of shape: [batch / prod(blockShape), x.shape[1] * blockShape[0] - crops[0,0] - crops[0,1], ..., x.shape[M] * blockShape[M-1] - crops[M-1,0] - crops[M-1,1], x.shape[M+1], ..., x.shape[N-1]]
Broadcast an array to a compatible shape NumPy-style.
The tensor's shape is compared to the broadcast shape from end to beginning. Ones are prepended to the tensor's shape until it has the same length as the broadcast shape. If input.shape[i]==shape[i], the (i+1)-th axis is already broadcast-compatible. If input.shape[i]==1 and shape[i]==N, then the input tensor is tiled N times along that axis (using tf.tile).
- x (tf.Tensor|TypedArray|Array)
- shape (number[]) The input is to be broadcast to this shape.
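A minimal sketch of the behavior described above (a [3] tensor broadcast to [2, 3]):
const x = tf.tensor1d([1, 2, 3]);
// A dimension of size 1 is implicitly prepended, then tiled twice.
tf.broadcastTo(x, [2, 3]).print(); // [[1, 2, 3], [1, 2, 3]]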
Casts a tf.Tensor to a new dtype.
const x = tf.tensor1d([1.5, 2.5, 3]);
tf.cast(x, 'int32').print();
- x (tf.Tensor|TypedArray|Array) The input tensor to be cast.
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The dtype to cast the input tensor to.
Rearranges data from depth into blocks of spatial data. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. The attr blockSize indicates the input block size and how the data is moved.
- Chunks of data of size blockSize * blockSize from depth are rearranged into non-overlapping blocks of size blockSize x blockSize
- The width of the output tensor is inputWidth * blockSize, whereas the height is inputHeight * blockSize
- The Y, X coordinates within each block of the output image are determined by the high order component of the input channel index
- The depth of the input tensor must be divisible by blockSize * blockSize
The dataFormat attr specifies the layout of the input and output tensors with the following options: "NHWC": [batch, height, width, channels]; "NCHW": [batch, channels, height, width].
const x = tf.tensor4d([1, 2, 3, 4], [1, 1, 1, 4]);
const blockSize = 2;
const dataFormat = "NHWC";
tf.depthToSpace(x, blockSize, dataFormat).print();
- x (tf.Tensor4D|TypedArray|Array) The input tensor of rank 4
- blockSize (number)
- dataFormat ('NHWC'|'NCHW') An optional string from: "NHWC", "NCHW". Defaults to "NHWC" Optional
Returns a tf.Tensor that has expanded rank, by inserting a dimension into the tensor's shape.
const x = tf.tensor1d([1, 2, 3, 4]);
const axis = 1;
x.expandDims(axis).print();
- x (tf.Tensor|TypedArray|Array) The input tensor whose dimensions are to be expanded.
- axis (number) The dimension index at which to insert a shape of 1. Defaults to 0 (the first dimension). Optional
Pads a tf.Tensor using mirror padding.
This operation implements the REFLECT and SYMMETRIC modes of pad.
const x = tf.range(0, 9).reshape([1, 1, 3, 3]);
x.mirrorPad([[0, 0], [0, 0], [2, 2], [2, 2]], 'reflect').print();
- x (tf.Tensor|TypedArray|Array) The tensor to pad.
- paddings (Array) An array of length R (the rank of the tensor), where each element is a length-2 tuple of ints [padBefore, padAfter], specifying how much to pad along each dimension of the tensor. In "reflect" mode, the padded regions do not include the borders, while in "symmetric" mode the padded regions do include the borders. For example, if the input is [1, 2, 3] and paddings is [0, 2], then the output is [1, 2, 3, 2, 1] in "reflect" mode, and [1, 2, 3, 3, 2] in "symmetric" mode. If mode is "reflect" then both paddings[D, 0] and paddings[D, 1] must be no greater than x.shape[D] - 1. If mode is "symmetric" then both paddings[D, 0] and paddings[D, 1] must be no greater than x.shape[D].
- mode ('reflect'|'symmetric') String to specify padding mode. Can be 'reflect' | 'symmetric'.
Pads a tf.Tensor with a given value and paddings.
This operation implements CONSTANT mode. For REFLECT and SYMMETRIC, refer to tf.mirrorPad().
Also available are stricter rank-specific methods with the same signature as this method that assert that paddings is of given length:
tf.pad1d
tf.pad2d
tf.pad3d
tf.pad4d
const x = tf.tensor1d([1, 2, 3, 4]);
x.pad([[1, 2]]).print();
- x (tf.Tensor|TypedArray|Array) The tensor to pad.
- paddings (Array) An array of length R (the rank of the tensor), where each element is a length-2 tuple of ints [padBefore, padAfter], specifying how much to pad along each dimension of the tensor.
- constantValue (number) The pad value to use. Defaults to 0. Optional
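A small follow-up sketch showing the constantValue parameter (the fill value 9 is illustrative):
const y = tf.tensor1d([1, 2, 3, 4]);
// Pad one element before and two after, filling with 9 instead of 0.
y.pad([[1, 2]], 9).print(); // [9, 1, 2, 3, 4, 9, 9]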
Reshapes a tf.Tensor to a given shape.
Given an input tensor, returns a new tensor with the same values as the input tensor with shape shape.
If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape can be -1.
If shape is 1-D or higher, then the operation returns a tensor with shape shape filled with the values of tensor. In this case, the number of elements implied by shape must be the same as the number of elements in tensor.
const x = tf.tensor1d([1, 2, 3, 4]);
x.reshape([2, 2]).print();
- x (tf.Tensor|TypedArray|Array) The input tensor to be reshaped.
- shape (number[]) An array of integers defining the output tensor shape.
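A brief sketch of the -1 behavior described above (the sizes here are illustrative):
const x = tf.tensor1d([1, 2, 3, 4, 5, 6]);
// One component may be -1; it is inferred as 6 / 2 = 3.
x.reshape([2, -1]).print(); // shape [2, 3]
// A shape of [-1] flattens back into 1-D.
x.reshape([2, -1]).reshape([-1]).print();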
Computes the difference between two lists of numbers.
Given a Tensor x and a Tensor y, this operation returns a Tensor out that represents all values that are in x but not in y. The returned Tensor out is sorted in the same order that the numbers appear in x (duplicates are preserved). This operation also returns a Tensor indices that represents the position of each out element in x. In other words:
out[i] = x[idx[i]] for i in [0, 1, ..., out.length - 1]
const x = [1, 2, 3, 4, 5, 6];
const y = [1, 3, 5];
const [out, indices] = await tf.setdiff1dAsync(x, y);
out.print(); // [2, 4, 6]
indices.print(); // [1, 3, 5]
- x (tf.Tensor|TypedArray|Array) 1-D Tensor. Values to keep.
- y (tf.Tensor|TypedArray|Array) 1-D Tensor. Must have the same type as x. Values to exclude in the output.
This operation divides "spatial" dimensions [1, ..., M] of the input into a grid of blocks of shape blockShape, and interleaves these blocks with the "batch" dimension (0) such that in the output, the spatial dimensions [1, ..., M] correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to paddings. See below for a precise description.
const x = tf.tensor4d([1, 2, 3, 4], [1, 2, 2, 1]);
const blockShape = [2, 2];
const paddings = [[0, 0], [0, 0]];
x.spaceToBatchND(blockShape, paddings).print();
- x (tf.Tensor|TypedArray|Array) A tf.Tensor. N-D with x.shape = [batch] + spatialShape + remainingShape, where spatialShape has M dimensions.
- blockShape (number[]) A 1-D array. Must have shape [M], all values must be >= 1.
- paddings (number[][]) A 2-D array. Must have shape [M, 2], all values must be >= 0. paddings[i] = [padStart, padEnd] specifies the amount to zero-pad from input dimension i + 1, which corresponds to spatial dimension i. It is required that (inputShape[i + 1] + padStart + padEnd) % blockShape[i] === 0.
This operation is equivalent to the following steps:
- Zero-pad the start and end of dimensions [1, ..., M] of the input according to paddings to produce padded of shape paddedShape.
- Reshape padded to reshapedPadded of shape: [batch] + [paddedShape[1] / blockShape[0], blockShape[0], ..., paddedShape[M] / blockShape[M-1], blockShape[M-1]] + remainingShape
- Permute dimensions of reshapedPadded to produce permutedReshapedPadded of shape: blockShape + [batch] + [paddedShape[1] / blockShape[0], ..., paddedShape[M] / blockShape[M-1]] + remainingShape
- Reshape permutedReshapedPadded to flatten blockShape into the batch dimension, producing an output tensor of shape: [batch * prod(blockShape)] + [paddedShape[1] / blockShape[0], ..., paddedShape[M] / blockShape[M-1]] + remainingShape
Removes dimensions of size 1 from the shape of a tf.Tensor.
const x = tf.tensor([1, 2, 3, 4], [1, 1, 4]);
x.squeeze().print();
- x (tf.Tensor|TypedArray|Array) The input tensor to be squeezed.
- axis (number[]) An optional list of numbers. If specified, only squeezes the dimensions listed. The dimension index starts at 0. It is an error to squeeze a dimension that is not 1. Optional
TensorFlow.js provides several operations to slice or extract parts of a tensor, or join multiple tensors together.
Apply boolean mask to tensor.
const tensor = tf.tensor2d([1, 2, 3, 4, 5, 6], [3, 2]);
const mask = tf.tensor1d([1, 0, 1], 'bool');
const result = await tf.booleanMaskAsync(tensor, mask);
result.print();
- tensor (tf.Tensor|TypedArray|Array) N-D tensor.
- mask (tf.Tensor|TypedArray|Array) K-D boolean tensor, K <= N and K must be known statically.
- axis (number) A 0-D int Tensor representing the axis in tensor to mask from. By default, axis is 0 which will mask from the first dimension. Otherwise K + axis <= N. Optional
Concatenates a list of tf.Tensors along a given axis.
The tensors' ranks and types must match, and their sizes must match in all dimensions except axis.
Also available are stricter rank-specific methods that assert that tensors are of the given rank:
tf.concat1d
tf.concat2d
tf.concat3d
tf.concat4d
Except tf.concat1d (which does not have an axis param), all methods have the same signature as this method.
const a = tf.tensor1d([1, 2]);
const b = tf.tensor1d([3, 4]);
a.concat(b).print(); // or tf.concat([a, b])
const a = tf.tensor1d([1, 2]);
const b = tf.tensor1d([3, 4]);
const c = tf.tensor1d([5, 6]);
tf.concat([a, b, c]).print();
const a = tf.tensor2d([[1, 2], [10, 20]]);
const b = tf.tensor2d([[3, 4], [30, 40]]);
const axis = 1;
tf.concat([a, b], axis).print();
- tensors (Array) A list of tensors to concatenate.
- axis (number) The axis to concatenate along. Defaults to 0 (the first dim). Optional
Gather slices from tensor x's axis axis according to indices.
const x = tf.tensor1d([1, 2, 3, 4]);
const indices = tf.tensor1d([1, 3, 3], 'int32');
x.gather(indices).print();
const x = tf.tensor2d([1, 2, 3, 4], [2, 2]);
const indices = tf.tensor1d([1, 1, 0], 'int32');
x.gather(indices).print();
- x (tf.Tensor|TypedArray|Array) The input tensor whose slices to be gathered.
- indices (tf.Tensor|TypedArray|Array) The indices of the values to extract.
- axis (number) The axis over which to select values. Defaults to 0. Optional
- batchDims (number) The number of batch dimensions. It must be less than or equal to rank(indices). Defaults to 0. The output tensor will have shape of x.shape[:axis] + indices.shape[batchDims:] + x.shape[axis + 1:]. Optional
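A hedged sketch of the batchDims parameter (assuming the shape rule above; the values are illustrative):
const x = tf.tensor2d([[1, 2, 3], [4, 5, 6]]);                  // shape [2, 3]
const indices = tf.tensor2d([[0, 2], [1, 1]], [2, 2], 'int32'); // shape [2, 2]
// With axis = 1 and batchDims = 1, each batch row gathers its own indices.
// Output shape: x.shape[:1] + indices.shape[1:] = [2, 2] -> [[1, 3], [5, 5]]
tf.gather(x, indices, 1, 1).print();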
Reverses a tf.Tensor along a specified axis.
Also available are stricter rank-specific methods that assert that x is of the given rank:
tf.reverse1d
tf.reverse2d
tf.reverse3d
tf.reverse4d
Except tf.reverse1d (which does not have an axis param), all methods have the same signature as this method.
const x = tf.tensor1d([1, 2, 3, 4]);
x.reverse().print();
const x = tf.tensor2d([1, 2, 3, 4], [2, 2]);
const axis = 1;
x.reverse(axis).print();
- x (tf.Tensor|TypedArray|Array) The input tensor to be reversed.
- axis (number|number[]) The set of dimensions to reverse. Must be in the range [-rank(x), rank(x)). Defaults to all axes. Optional
Extracts a slice from a tf.Tensor starting at coordinates begin and is of size size.
Also available are stricter rank-specific methods with the same signature as this method that assert that x is of the given rank:
tf.slice1d
tf.slice2d
tf.slice3d
tf.slice4d
const x = tf.tensor1d([1, 2, 3, 4]);
x.slice([1], [2]).print();
const x = tf.tensor2d([1, 2, 3, 4], [2, 2]);
x.slice([1, 0], [1, 2]).print();
- x (tf.Tensor|TypedArray|Array) The input tf.Tensor to slice from.
- begin (number|number[]) The coordinates to start the slice from. The length can be less than the rank of x - the rest of the axes will have implicit 0 as start. Can also be a single number, in which case it specifies the first axis.
- size (number|number[]) The size of the slice. The length can be less than the rank of x - the rest of the axes will have implicit -1. A value of -1 requests the rest of the dimensions in the axis. Can also be a single number, in which case it specifies the size of the first axis. Optional
Splits a tf.Tensor into sub tensors.
If numOrSizeSplits is a number, splits x along dimension axis into numOrSizeSplits smaller tensors. Requires that numOrSizeSplits evenly divides x.shape[axis].
If numOrSizeSplits is a number array, splits x into numOrSizeSplits.length pieces. The shape of the i-th piece has the same size as x except along dimension axis where the size is numOrSizeSplits[i].
const x = tf.tensor2d([1, 2, 3, 4, 5, 6, 7, 8], [2, 4]);
const [a, b] = tf.split(x, 2, 1);
a.print();
b.print();
const [c, d, e] = tf.split(x, [1, 2, 1], 1);
c.print();
d.print();
e.print();
- x (tf.Tensor|TypedArray|Array) The input tensor to split.
- numOrSizeSplits (number[]|number) Either an integer indicating the number of splits along the axis or an array of integers containing the sizes of each output tensor along the axis. If a number then it must evenly divide x.shape[axis]; otherwise the sum of sizes must match x.shape[axis]. Can contain one -1 indicating that dimension is to be inferred.
- axis (number) The dimension along which to split. Defaults to 0 (the first dim). Optional
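A short sketch of the -1 inference mentioned above (reusing a [2, 4] tensor):
const x = tf.tensor2d([1, 2, 3, 4, 5, 6, 7, 8], [2, 4]);
// The -1 is inferred so the sizes sum to x.shape[1]: [1, -1] becomes [1, 3].
const [first, rest] = tf.split(x, [1, -1], 1);
first.print(); // shape [2, 1]
rest.print();  // shape [2, 3]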
Stacks a list of rank-R tf.Tensors into one rank-(R+1) tf.Tensor.
const a = tf.tensor1d([1, 2]);
const b = tf.tensor1d([3, 4]);
const c = tf.tensor1d([5, 6]);
tf.stack([a, b, c]).print();
- tensors (Array) A list of tensor objects with the same shape and dtype.
- axis (number) The axis to stack along. Defaults to 0 (the first dim). Optional
Construct a tensor by repeating it the number of times given by reps.
This operation creates a new tensor by replicating input reps times. The output tensor's i'th dimension has input.shape[i] * reps[i] elements, and the values of input are replicated reps[i] times along the i'th dimension. For example, tiling [a, b, c, d] by [2] produces [a, b, c, d, a, b, c, d].
const a = tf.tensor1d([1, 2]);
a.tile([2]).print(); // or tf.tile(a, [2])
const a = tf.tensor2d([1, 2, 3, 4], [2, 2]);
a.tile([1, 2]).print(); // or tf.tile(a, [1, 2])
- x (tf.Tensor|TypedArray|Array) The tensor to tile.
- reps (number[]) Determines the number of replications per dimension.
Unstacks a tf.Tensor of rank-R into a list of rank-(R-1) tf.Tensors.
const a = tf.tensor2d([1, 2, 3, 4], [2, 2]);
tf.unstack(a).forEach(tensor => tensor.print());
- x (tf.Tensor|TypedArray|Array) A tensor object.
- axis (number) The axis to unstack along. Defaults to 0 (the first dim). Optional
Creates a tf.Tensor with values drawn from a multinomial distribution.
const probs = tf.tensor([.75, .25]);
tf.multinomial(probs, 3).print();
- logits (tf.Tensor1D|tf.Tensor2D|TypedArray|Array) 1D array with unnormalized log-probabilities, or 2D array of shape [batchSize, numOutcomes]. See the normalized parameter.
- numSamples (number) Number of samples to draw for each row slice.
- seed (number) The seed number. Optional
- normalized (boolean) Whether the provided logits are normalized true probabilities (sum to 1). Defaults to false. Optional
Creates a tf.Tensor with values sampled from a gamma distribution.
tf.randomGamma([2, 2], 1).print();
- shape (number[]) An array of integers defining the output tensor shape.
- alpha (number) The shape parameter of the gamma distribution.
- beta (number) The inverse scale parameter of the gamma distribution. Defaults to 1. Optional
- dtype ('float32'|'int32') The data type of the output. Defaults to float32. Optional
- seed (number) The seed for the random number generator. Optional
Creates a tf.Tensor with values sampled from a normal distribution.
tf.randomNormal([2, 2]).print();
- shape (number[]) An array of integers defining the output tensor shape.
- mean (number) The mean of the normal distribution. Optional
- stdDev (number) The standard deviation of the normal distribution. Optional
- dtype ('float32'|'int32') The data type of the output. Optional
- seed (number) The seed for the random number generator. Optional
Creates a tf.Tensor with values sampled from a uniform distribution.
The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.
tf.randomUniform([2, 2]).print();
- shape (number[]) An array of integers defining the output tensor shape.
- minval (number) The lower bound on the range of random values to generate. Defaults to 0. Optional
- maxval (number) The upper bound on the range of random values to generate. Defaults to 1. Optional
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data type of the output tensor. Defaults to 'float32'. Optional
- seed (number|string) Optional
Models are one of the primary abstractions used in TensorFlow.js Layers. Models can be trained, evaluated, and used for prediction. A model's state (topology, and optionally, trained weights) can be restored from various formats.
Models are a collection of Layers, see Model Creation for details about how Layers can be connected.
There are two primary ways of creating models.
- Sequential — Easiest, works if the model is a simple stack of each layer's input resting on top of the previous layer's output.
- Model — Offers more control if the layers need to be wired together in graph-like ways — multiple 'towers', layers that skip a layer, etc.
Creates a tf.Sequential model. A sequential model is any model where the outputs of one layer are the inputs to the next layer, i.e. the model topology is a simple 'stack' of layers, with no branching or skipping.
This means that the first layer passed to a tf.Sequential model should have a defined input shape. What that means is that it should have received an inputShape or batchInputShape argument, or for some type of layers (recurrent, Dense...) an inputDim argument.
The key difference between tf.model() and tf.sequential() is that tf.sequential() is less generic, supporting only a linear stack of layers. tf.model() is more generic and supports an arbitrary graph (without cycles) of layers.
Examples:
const model = tf.sequential();
// First layer must have an input shape defined.
model.add(tf.layers.dense({units: 32, inputShape: [50]}));
// Afterwards, TF.js does automatic shape inference.
model.add(tf.layers.dense({units: 4}));
// Inspect the inferred shape of the model's output, which equals
// `[null, 4]`. The 1st dimension is the undetermined batch dimension; the
// 2nd is the output size of the model's last layer.
console.log(JSON.stringify(model.outputs[0].shape));
It is also possible to specify a batch size (with potentially undetermined batch dimension, denoted by "null") for the first layer using the batchInputShape key. The following example is equivalent to the above:
const model = tf.sequential();
// First layer must have a defined input shape
model.add(tf.layers.dense({units: 32, batchInputShape: [null, 50]}));
// Afterwards, TF.js does automatic shape inference.
model.add(tf.layers.dense({units: 4}));
// Inspect the inferred shape of the model's output.
console.log(JSON.stringify(model.outputs[0].shape));
You can also use an Array of already-constructed Layers to create a tf.Sequential model:
const model = tf.sequential({
layers: [tf.layers.dense({units: 32, inputShape: [50]}),
tf.layers.dense({units: 4})]
});
console.log(JSON.stringify(model.outputs[0].shape));
- config (Object) Optional
- layers (tf.layers.Layer[]) Stack of layers for the model.
- name (string) The name of this model.
A model is a data structure that consists of Layers and defines inputs and outputs.
The key difference between tf.model() and tf.sequential() is that tf.model() is more generic, supporting an arbitrary graph (without cycles) of layers. tf.sequential() is less generic and supports only a linear stack of layers.
When creating a tf.LayersModel, specify its input(s) and output(s). Layers are used to wire input(s) to output(s).
For example, the following code snippet defines a model consisting of two dense layers, with 10 and 4 units, respectively.
// Define input, which has a size of 5 (not including batch dimension).
const input = tf.input({shape: [5]});
// First dense layer uses relu activation.
const denseLayer1 = tf.layers.dense({units: 10, activation: 'relu'});
// Second dense layer uses softmax activation.
const denseLayer2 = tf.layers.dense({units: 4, activation: 'softmax'});
// Obtain the output symbolic tensor by applying the layers on the input.
const output = denseLayer2.apply(denseLayer1.apply(input));
// Create the model based on the inputs.
const model = tf.model({inputs: input, outputs: output});
// The model can be used for training, evaluation and prediction.
// For example, the following line runs prediction with the model on
// some fake data.
model.predict(tf.ones([2, 5])).print();
See also: tf.sequential(), tf.loadLayersModel().
- args (Object)
- inputs (tf.SymbolicTensor|tf.SymbolicTensor[])
- outputs (tf.SymbolicTensor|tf.SymbolicTensor[])
- name (string)
Used to instantiate an input to a model as a tf.SymbolicTensor.
Users should call the input factory function for consistency with other generator functions.
Example:
// Defines a simple logistic regression model with 32 dimensional input
// and 3 dimensional output.
const x = tf.input({shape: [32]});
const y = tf.layers.dense({units: 3, activation: 'softmax'}).apply(x);
const model = tf.model({inputs: x, outputs: y});
model.predict(tf.ones([2, 32])).print();
Note: input is only necessary when using model. When using sequential, specify inputShape for the first layer or use inputLayer as the first layer.
- config (Object)
- shape ((null | number)[]) A shape, not including the batch size. For instance, shape=[32] indicates that the expected input will be batches of 32-dimensional vectors.
- batchShape ((null | number)[]) A shape tuple (integer), including the batch size. For instance, batchShape=[10, 32] indicates that the expected input will be batches of 10 32-dimensional vectors. batchShape=[null, 32] indicates batches of an arbitrary number of 32-dimensional vectors.
- name (string) An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
- dtype ('float32'|'int32'|'bool'|'complex64'|'string')
- sparse (boolean) A boolean specifying whether the placeholder to be created is sparse.
Load a graph model given a URL to the model definition.
Example of loading MobileNetV2 from a URL and making a prediction with a zeros input:
const modelUrl =
'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
const model = await tf.loadGraphModel(modelUrl);
const zeros = tf.zeros([1, 224, 224, 3]);
model.predict(zeros).print();
Example of loading MobileNetV2 from a TF Hub URL and making a prediction with a zeros input:
const modelUrl =
'https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/classification/2';
const model = await tf.loadGraphModel(modelUrl, {fromTFHub: true});
const zeros = tf.zeros([1, 224, 224, 3]);
model.predict(zeros).print();
- modelUrl (string|io.IOHandler) The url or an io.IOHandler that loads the model.
- options (Object) Options for the HTTP request, which allows sending credentials and custom headers. Optional
- requestInit (RequestInit) RequestInit (options) for HTTP requests. For detailed information on the supported fields, see https://developer.mozilla.org/en-US/docs/Web/API/Request/Request
- onProgress (OnProgressCallback) Progress callback.
- fetchFunc (Function) A function used to override the window.fetch function.
- strict (boolean) Strict loading model: whether extraneous weights or missing weights should trigger an Error. If true, require that the provided weights exactly match those required by the layers. false means that both extra weights and missing weights will be silently ignored. Default: true.
- weightPathPrefix (string) Path prefix for weight files; by default this is calculated from the path of the model JSON file. For instance, if the path to the model JSON file is http://localhost/foo/model.json, then the default path prefix will be http://localhost/foo/. If a weight file has the path value group1-shard1of2 in the weight manifest, then the weight file will be loaded from http://localhost/foo/group1-shard1of2 by default. However, if you provide a weightPathPrefix value of http://localhost/foo/alt-weights, then the weight file will be loaded from the path http://localhost/foo/alt-weights/group1-shard1of2 instead.
- fromTFHub (boolean) Whether the module or model is to be loaded from TF Hub. Setting this to true allows passing a TF-Hub module URL, omitting the standard model file name and the query parameters. Default: false.
- weightUrlConverter ((weightFileName: string) => Promise<string>) An async function to convert weight file name to URL. The weight file names are stored in model.json's weightsManifest.paths field. By default we consider weight files are colocated with the model.json file. For example: model.json URL: https://www.google.com/models/1/model.json; group1-shard1of1.bin URL: https://www.google.com/models/1/group1-shard1of1.bin. With this func you can convert the weight file name to any URL.
Load a model composed of Layer objects, including its topology and optionally weights. See the Tutorial named "How to import a Keras Model" for usage examples.
This method is applicable to:
- Models created with the tf.layers.*, tf.sequential(), and tf.model() APIs of TensorFlow.js and later saved with the tf.LayersModel.save() method.
- Models converted from Keras or TensorFlow tf.keras using the tensorflowjs_converter.
This method is not applicable to TensorFlow SavedModels or their converted forms. For those models, use tf.loadGraphModel().
Example 1. Load a model from an HTTP server.
const model = await tf.loadLayersModel(
'https://storage.googleapis.com/tfjs-models/tfjs/iris_v1/model.json');
model.summary();
Example 2: Save model's topology and weights to browser local storage; then load it back.
const model = tf.sequential(
{layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
console.log('Prediction from original model:');
model.predict(tf.ones([1, 3])).print();
const saveResults = await model.save('localstorage://my-model-1');
const loadedModel = await tf.loadLayersModel('localstorage://my-model-1');
console.log('Prediction from loaded model:');
loadedModel.predict(tf.ones([1, 3])).print();
Example 3. Saving model's topology and weights to browser IndexedDB; then load it back.
const model = tf.sequential(
{layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
console.log('Prediction from original model:');
model.predict(tf.ones([1, 3])).print();
const saveResults = await model.save('indexeddb://my-model-1');
const loadedModel = await tf.loadLayersModel('indexeddb://my-model-1');
console.log('Prediction from loaded model:');
loadedModel.predict(tf.ones([1, 3])).print();
Example 4. Load a model from user-selected files from HTML file input elements.
// Note: this code snippet will not work without the HTML elements in the
// page
const jsonUpload = document.getElementById('json-upload');
const weightsUpload = document.getElementById('weights-upload');
const model = await tf.loadLayersModel(
tf.io.browserFiles([jsonUpload.files[0], weightsUpload.files[0]]));
- pathOrIOHandler (string|io.IOHandler) Can be either of the two formats:
  - A string path to the ModelAndWeightsConfig JSON describing the model in the canonical TensorFlow.js format. For file:// (tfjs-node-only), http:// and https:// schemas, the path can be either absolute or relative.
  - A tf.io.IOHandler object that loads model artifacts with its load method.
- options (Object) Optional configuration arguments for the model loading, including:
  - strict: Require that the provided weights exactly match those required by the layers. Default true. Passing false means that both extra weights and missing weights will be silently ignored.
  - onProgress: A function of the signature `(fraction: number) => void` that can be used as the progress callback for the model loading.
- requestInit (RequestInit) RequestInit (options) for HTTP requests. For detailed information on the supported fields, see https://developer.mozilla.org/en-US/docs/Web/API/Request/Request
- onProgress (OnProgressCallback) Progress callback.
- fetchFunc (Function) A function used to override the window.fetch function.
- strict (boolean) Strict loading model: whether extraneous weights or missing weights should trigger an Error. If true, require that the provided weights exactly match those required by the layers. false means that both extra weights and missing weights will be silently ignored. Default: true.
- weightPathPrefix (string) Path prefix for weight files; by default this is calculated from the path of the model JSON file. For instance, if the path to the model JSON file is http://localhost/foo/model.json, then the default path prefix will be http://localhost/foo/. If a weight file has the path value group1-shard1of2 in the weight manifest, then the weight file will be loaded from http://localhost/foo/group1-shard1of2 by default. However, if you provide a weightPathPrefix value of http://localhost/foo/alt-weights, then the weight file will be loaded from the path http://localhost/foo/alt-weights/group1-shard1of2 instead.
- fromTFHub (boolean) Whether the module or model is to be loaded from TF Hub. Setting this to true allows passing a TF-Hub module URL, omitting the standard model file name and the query parameters. Default: false.
- weightUrlConverter ((weightFileName: string) => Promise<string>) An async function to convert weight file name to URL. The weight file names are stored in model.json's weightsManifest.paths field. By default we consider weight files are colocated with the model.json file. For example: model.json URL: https://www.google.com/models/1/model.json; group1-shard1of1.bin URL: https://www.google.com/models/1/group1-shard1of1.bin. With this func you can convert the weight file name to any URL.
Creates an IOHandler that triggers file downloads from the browser.
The returned IOHandler instance can be used with model exporting methods such as tf.Model.save and supports only saving.
const model = tf.sequential();
model.add(tf.layers.dense(
{units: 1, inputShape: [10], activation: 'sigmoid'}));
const saveResult = await model.save('downloads://mymodel');
// This will trigger downloading of two files:
// 'mymodel.json' and 'mymodel.weights.bin'.
console.log(saveResult);
- fileNamePrefix (string) Prefix name of the files to be downloaded. For use with tf.Model, fileNamePrefix should follow either of the following two formats:
  - null or undefined, in which case the default file names will be used:
    - 'model.json' for the JSON file containing the model topology and weights manifest.
    - 'model.weights.bin' for the binary file containing the binary weight values.
  - A single string or an Array of a single string, as the file name prefix. For example, if 'foo' is provided, the downloaded JSON file and binary weights file will be named 'foo.json' and 'foo.weights.bin', respectively.
Creates an IOHandler that loads model artifacts from user-selected files.
This method can be used for loading from files such as user-selected files in the browser. When used in conjunction with tf.loadLayersModel(), an instance of tf.LayersModel (Keras-style) can be constructed from the loaded artifacts.
// Note: This code snippet won't run properly without the actual file input
// elements in the HTML DOM.
// Suppose there are two HTML file input (`<input type="file" ...>`)
// elements.
const uploadJSONInput = document.getElementById('upload-json');
const uploadWeightsInput = document.getElementById('upload-weights');
const model = await tf.loadLayersModel(tf.io.browserFiles(
[uploadJSONInput.files[0], uploadWeightsInput.files[0]]));
- files (File[]) Files to load from. Currently, this function supports only loading from files that contain Keras-style models (i.e., tf.Models), for which an Array of Files is expected (in that order):
  - A JSON file containing the model topology and weight manifest.
  - Optionally, one or more binary files containing the binary weights. These files must have names that match the paths in the weightsManifest contained by the aforementioned JSON file, or errors will be thrown during loading. These weights files have the same format as the ones generated by tensorflowjs_converter that comes with the tensorflowjs Python PIP package. If no weights files are provided, only the model topology will be loaded from the JSON file above.
Creates an IOHandler subtype that sends model artifacts to an HTTP server.
An HTTP request of the multipart/form-data mime type will be sent to the path URL. The form data includes artifacts that represent the topology and/or weights of the model. In the case of Keras-style tf.Model, two blobs (files) exist in form-data:
- A JSON file consisting of modelTopology and weightsManifest.
- A binary weights file consisting of the concatenated weight values. These files are in the same format as the one generated by tfjs_converter.
The following code snippet exemplifies the client-side code that uses this function:
const model = tf.sequential();
model.add(
tf.layers.dense({units: 1, inputShape: [100], activation: 'sigmoid'}));
const saveResult = await model.save(tf.io.http(
'http://model-server:5000/upload', {requestInit: {method: 'PUT'}}));
console.log(saveResult);
If the default POST method is to be used, without any custom parameters such as headers, you can simply pass an HTTP or HTTPS URL to model.save:
const saveResult = await model.save('http://model-server:5000/upload');
The following GitHub Gist https://gist.github.com/dsmilkov/1b6046fd6132d7408d5257b0976f7864 implements a server based on Flask that can receive the request. Upon receiving the model artifacts via the request, this particular server reconstitutes instances of Keras Models in memory.
- path (string) A URL path to the model. Can be an absolute HTTP path (e.g., 'http://localhost:8000/model-upload') or a relative path (e.g., './model-upload').
- loadOptions (LoadOptions) Optional configuration for the loading. It includes the following fields:
  - weightPathPrefix Optional, this specifies the path prefix for weight files; by default this is calculated from the path param.
  - fetchFunc Optional, custom fetch function. E.g., in Node.js, the fetch from node-fetch can be used here.
  - onProgress Optional, progress callback function, fired periodically before the load is completed.
Copy a model from one URL to another.
This function supports:
- Copying within a storage medium, e.g.,
tf.io.copyModel('localstorage://model-1', 'localstorage://model-2')
- Copying between two storage mediums, e.g.,
tf.io.copyModel('localstorage://model-1', 'indexeddb://model-1')
// First create and save a model.
const model = tf.sequential();
model.add(tf.layers.dense(
{units: 1, inputShape: [10], activation: 'sigmoid'}));
await model.save('localstorage://demo/management/model1');
// Then list existing models.
console.log(JSON.stringify(await tf.io.listModels()));
// Copy the model, from Local Storage to IndexedDB.
await tf.io.copyModel(
'localstorage://demo/management/model1',
'indexeddb://demo/management/model1');
// List models again.
console.log(JSON.stringify(await tf.io.listModels()));
// Remove both models.
await tf.io.removeModel('localstorage://demo/management/model1');
await tf.io.removeModel('indexeddb://demo/management/model1');
- sourceURL (string) Source URL of copying.
- destURL (string) Destination URL of copying.
List all models stored in registered storage mediums.
For a web browser environment, the registered mediums are Local Storage and IndexedDB.
// First create and save a model.
const model = tf.sequential();
model.add(tf.layers.dense(
{units: 1, inputShape: [10], activation: 'sigmoid'}));
await model.save('localstorage://demo/management/model1');
// Then list existing models.
console.log(JSON.stringify(await tf.io.listModels()));
// Delete the model.
await tf.io.removeModel('localstorage://demo/management/model1');
// List models again.
console.log(JSON.stringify(await tf.io.listModels()));
Move a model from one URL to another.
This function supports:
- Moving within a storage medium, e.g.,
tf.io.moveModel('localstorage://model-1', 'localstorage://model-2')
- Moving between two storage mediums, e.g.,
tf.io.moveModel('localstorage://model-1', 'indexeddb://model-1')
// First create and save a model.
const model = tf.sequential();
model.add(tf.layers.dense(
{units: 1, inputShape: [10], activation: 'sigmoid'}));
await model.save('localstorage://demo/management/model1');
// Then list existing models.
console.log(JSON.stringify(await tf.io.listModels()));
// Move the model, from Local Storage to IndexedDB.
await tf.io.moveModel(
'localstorage://demo/management/model1',
'indexeddb://demo/management/model1');
// List models again.
console.log(JSON.stringify(await tf.io.listModels()));
// Remove the moved model.
await tf.io.removeModel('indexeddb://demo/management/model1');
- sourceURL (string) Source URL of moving.
- destURL (string) Destination URL of moving.
Remove a model specified by URL from a registered storage medium.
// First create and save a model.
const model = tf.sequential();
model.add(tf.layers.dense(
{units: 1, inputShape: [10], activation: 'sigmoid'}));
await model.save('localstorage://demo/management/model1');
// Then list existing models.
console.log(JSON.stringify(await tf.io.listModels()));
// Delete the model.
await tf.io.removeModel('localstorage://demo/management/model1');
// List models again.
console.log(JSON.stringify(await tf.io.listModels()));
- url (string) A URL to a stored model, with a scheme prefix, e.g., 'localstorage://my-model-1', 'indexeddb://my/model/2'.
Register a class with the serialization map of TensorFlow.js.
This is often used for registering custom Layers, so they can be serialized and deserialized.
Example:
class MyCustomLayer extends tf.layers.Layer {
static className = 'MyCustomLayer';
constructor(config) {
super(config);
}
}
tf.serialization.registerClass(MyCustomLayer);
- cls (SerializableConstructor) The class to be registered. It must have a public static member called className defined and the value must be a non-empty string.
A tf.Functional is an alias to tf.LayersModel.
See also: tf.LayersModel, tf.Sequential, tf.loadLayersModel().
A tf.GraphModel is a directed, acyclic graph built from a SavedModel GraphDef and allows inference execution.
A tf.GraphModel can only be created by loading from a model converted from a TensorFlow SavedModel using the command line converter tool and loaded via tf.loadGraphModel().
Synchronously construct the in-memory weight map and compile the inference graph. Also initialize hashtables, if any.
- artifacts (io.ModelArtifacts)
Save the configuration and/or weights of the GraphModel.
An IOHandler is an object that has a save method of the proper signature defined. The save method manages the storing or transmission of serialized data ("artifacts") that represent the model's topology and weights onto or via a specific medium, such as file downloads, local storage, IndexedDB in the web browser and HTTP requests to a server. TensorFlow.js provides IOHandler implementations for a number of frequently used saving mediums, such as tf.io.browserDownloads() and tf.io.browserLocalStorage. See tf.io for more details.
This method also allows you to refer to certain types of IOHandlers as URL-like string shortcuts, such as 'localstorage://' and 'indexeddb://'.
Example 1: Save model's topology and weights to browser local storage; then load it back.
const modelUrl =
'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
const model = await tf.loadGraphModel(modelUrl);
const zeros = tf.zeros([1, 224, 224, 3]);
model.predict(zeros).print();
const saveResults = await model.save('localstorage://my-model-1');
const loadedModel = await tf.loadGraphModel('localstorage://my-model-1');
console.log('Prediction from loaded model:');
model.predict(zeros).print();
- handlerOrURL (io.IOHandler|string) An instance of IOHandler or a URL-like, scheme-based string shortcut for IOHandler.
- config (Object) Options for saving the model. Optional
- trainableOnly (boolean) Whether to save only the trainable weights of the model, ignoring the non-trainable ones.
- includeOptimizer (boolean) Whether the optimizer will be saved (if exists). Default: false.
Execute the inference for the input tensors.
- inputs (tf.Tensor|tf.Tensor[]|{[name: string]: tf.Tensor})
- config (Object) Prediction configuration for specifying the batch size and output node names. Currently the batch size option is ignored for graph model. Optional
- batchSize (number) Optional. Batch size (Integer). If unspecified, it will default to 32.
- verbose (boolean) Optional. Verbosity mode. Defaults to false.
Executes inference for the model for given input tensors.
- inputs (tf.Tensor|tf.Tensor[]|{[name: string]: tf.Tensor}) tensor, tensor array or tensor map of the inputs for the model, keyed by the input node names.
- outputs (string|string[]) output node name from the Tensorflow model, if no outputs are specified, the default outputs of the model would be used. You can inspect intermediate nodes of the model by adding them to the outputs array. Optional
Executes inference for the model for the given input tensors in an async fashion. Use this method when your model contains control flow ops.
- inputs (tf.Tensor|tf.Tensor[]|{[name: string]: tf.Tensor}) tensor, tensor array or tensor map of the inputs for the model, keyed by the input node names.
- outputs (string|string[]) output node name from the Tensorflow model, if no outputs are specified, the default outputs of the model would be used. You can inspect intermediate nodes of the model by adding them to the outputs array. Optional
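The following sketch (not part of the original examples) shows execute() and executeAsync() on the MobileNet model used in the save example above. Output node names are model-specific, so the commented-out name is only a placeholder.
const modelUrl =
    'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
const model = await tf.loadGraphModel(modelUrl);
const input = tf.zeros([1, 224, 224, 3]);
// Synchronous execution using the model's default outputs.
const logits = model.execute(input);
logits.print();
// Asynchronous execution; required when the graph contains control flow ops.
// To inspect an intermediate node, pass its name, e.g.
// await model.executeAsync(input, 'SomeIntermediateNodeName');
const asyncLogits = await model.executeAsync(input);
asyncLogits.print();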
A tf.LayersModel is a directed, acyclic graph of tf.Layers plus methods for training, evaluation, prediction and saving.
tf.LayersModel is the basic unit of training, inference and evaluation in TensorFlow.js. To create a tf.LayersModel, use tf.model().
See also: tf.Sequential, tf.loadLayersModel().
Print a text summary of the model's layers.
The summary includes
- Name and type of all layers that comprise the model.
- Output shape(s) of the layers
- Number of weight parameters of each layer
- If the model has non-sequential-like topology, the inputs each layer receives
- The total number of trainable and non-trainable parameters of the model.
const input1 = tf.input({shape: [10]});
const input2 = tf.input({shape: [20]});
const dense1 = tf.layers.dense({units: 4}).apply(input1);
const dense2 = tf.layers.dense({units: 8}).apply(input2);
const concat = tf.layers.concatenate().apply([dense1, dense2]);
const output =
tf.layers.dense({units: 3, activation: 'softmax'}).apply(concat);
const model = tf.model({inputs: [input1, input2], outputs: output});
model.summary();
- lineLength (number) Custom line length, in number of characters. Optional
-
positions
(number[])
Custom widths of each of the columns, as either
fractions of
lineLength
(e.g.,[0.5, 0.75, 1]
) or absolute number of characters (e.g.,[30, 50, 65]
). Each number corresponds to right-most (i.e., ending) position of a column. Optional -
printFn
((message?: tf.any(), ...optionalParams: tf.any()[]) => void)
Custom print function. Can be used to replace the default
console.log
. For example, you can usex => {}
to mute the printed messages in the console. Optional
Configures and prepares the model for training and evaluation. Compiling
outfits the model with an optimizer, loss, and/or metrics. Calling fit
or evaluate
on an un-compiled model will throw an error.
-
args
(Object)
a
ModelCompileArgs
specifying the loss, optimizer, and metrics to be used for fitting and evaluating this model. - optimizer (string|tf.train.Optimizer) An instance of tf.train.Optimizer or a string name for an Optimizer.
- loss (string|string[]|{[outputName: string]: string}|LossOrMetricFn| LossOrMetricFn[]|{[outputName: string]: LossOrMetricFn}) Objective function(s) or name(s) of objective function(s). If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or an Array of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.
-
metrics
(string|LossOrMetricFn|Array|
{[outputName: string]: string | LossOrMetricFn})
List of metrics to be evaluated by the model during training and testing.
Typically you will use
metrics=['accuracy']
. To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary.
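As a brief illustration (a sketch, not one of the original examples), compile() accepts either a string optimizer name or a tf.train.Optimizer instance, together with a loss and optional metrics:
const model = tf.sequential({
  layers: [tf.layers.dense({units: 3, activation: 'softmax', inputShape: [4]})]
});
// Optimizer given as an instance; loss and metric given by their string names.
model.compile({
  optimizer: tf.train.adam(0.01),
  loss: 'categoricalCrossentropy',
  metrics: ['accuracy']
});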
Returns the loss value & metrics values for the model in test mode.
Loss and metrics are specified during compile()
, which needs to happen
before calls to evaluate()
.
Computation is done in batches.
const model = tf.sequential({
layers: [tf.layers.dense({units: 1, inputShape: [10]})]
});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
const result = model.evaluate(
tf.ones([8, 10]), tf.ones([8, 1]), {batchSize: 4});
result.print();
-
x
(tf.Tensor|tf.Tensor[])
tf.Tensor of test data, or an
Array
of tf.Tensors if the model has multiple inputs. -
y
(tf.Tensor|tf.Tensor[])
tf.Tensor of target data, or an
Array
of tf.Tensors if the model has multiple outputs. -
args
(Object)
A
ModelEvaluateArgs
, containing optional fields. Optional - batchSize (number) Batch size (Integer). If unspecified, it will default to 32.
- verbose (ModelLoggingVerbosity) Verbosity mode.
- sampleWeight (tf.Tensor) Tensor of weights to weight the contribution of different samples to the loss and metrics.
-
steps
(number)
integer: total number of steps (batches of samples)
before declaring the evaluation round finished. Ignored with the default
value of
undefined
.
Evaluate model using a dataset object.
Note: Unlike evaluate(), this method is asynchronous (async).
-
dataset
(tf.data.Dataset)
A dataset object. Its
iterator()
method is expected to generate a dataset iterator object, thenext()
method of which is expected to produce data batches for evaluation. The return value of thenext()
call ought to contain a booleandone
field and avalue
field. Thevalue
field is expected to be an array of two tf.Tensors or an array of two nested tf.Tensor structures. The former case is for models with exactly one input and one output (e.g., a sequential model). The latter case is for models with multiple inputs and/or multiple outputs. Of the two items in the array, the first is the input feature(s) and the second is the output target(s). - args (Object) A configuration object for the dataset-based evaluation. Optional
- batches (number) Number of batches to draw from the dataset object before ending the evaluation.
- verbose (ModelLoggingVerbosity) Verbosity mode.
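A minimal sketch (synthetic data, not part of the original docs), using the same {xs, ys} dataset format as the fitDataset example further below:
const xDataset = tf.data.array([
  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
]);
const yDataset = tf.data.array([[1], [1], [1], [1]]);
const dataset = tf.data.zip({xs: xDataset, ys: yDataset}).batch(2);
const model = tf.sequential({
  layers: [tf.layers.dense({units: 1, inputShape: [10]})]
});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
const result = await model.evaluateDataset(dataset);
result.print();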
Generates output predictions for the input samples.
Computation is done in batches.
Note: the "step" mode of predict() is currently not supported. This is because the TensorFlow.js core backend is imperative only.
const model = tf.sequential({
layers: [tf.layers.dense({units: 1, inputShape: [10]})]
});
model.predict(tf.ones([8, 10]), {batchSize: 4}).print();
-
x
(tf.Tensor|tf.Tensor[])
The input data, as a Tensor, or an
Array
of tf.Tensors if the model has multiple inputs. -
args
(Object)
A
ModelPredictArgs
object containing optional fields. Optional - batchSize (number) Optional. Batch size (Integer). If unspecified, it will default to 32.
- verbose (boolean) Optional. Verbosity mode. Defaults to false.
Returns predictions for a single batch of samples.
const model = tf.sequential({
layers: [tf.layers.dense({units: 1, inputShape: [10]})]
});
model.predictOnBatch(tf.ones([8, 10])).print();
Trains the model for a fixed number of epochs (iterations on a dataset).
const model = tf.sequential({
layers: [tf.layers.dense({units: 1, inputShape: [10]})]
});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
for (let i = 1; i < 5 ; ++i) {
const h = await model.fit(tf.ones([8, 10]), tf.ones([8, 1]), {
batchSize: 4,
epochs: 3
});
console.log("Loss after Epoch " + i + " : " + h.history.loss[0]);
}
- x (tf.Tensor|tf.Tensor[]|{[inputName: string]: tf.Tensor}) tf.Tensor of training data, or an array of tf.Tensors if the model has multiple inputs. If all inputs in the model are named, you can also pass a dictionary mapping input names to tf.Tensors.
- y (tf.Tensor|tf.Tensor[]|{[inputName: string]: tf.Tensor}) tf.Tensor of target (label) data, or an array of tf.Tensors if the model has multiple outputs. If all outputs in the model are named, you can also pass a dictionary mapping output names to tf.Tensors.
-
args
(Object)
A
ModelFitArgs
, containing optional fields. Optional - batchSize (number) Number of samples per gradient update. If unspecified, it will default to 32.
- epochs (number) Integer number of times to iterate over the training data arrays.
-
verbose
(ModelLoggingVerbosity)
Verbosity level.
Expected to be 0, 1, or 2. Default: 1.
0 - No printed message during fit() call. 1 - In Node.js (tfjs-node), prints the progress bar, together with real-time updates of loss and metric values and training speed. In the browser: no action. This is the default. 2 - Not implemented yet.
-
callbacks
(BaseCallback[]|CustomCallbackArgs|CustomCallbackArgs[])
List of callbacks to be called during training.
Can have one or more of the following callbacks:
onTrainBegin(logs)
: called when training starts.onTrainEnd(logs)
: called when training ends.onEpochBegin(epoch, logs)
: called at the start of every epoch.onEpochEnd(epoch, logs)
: called at the end of every epoch.onBatchBegin(batch, logs)
: called at the start of every batch.onBatchEnd(batch, logs)
: called at the end of every batch.onYield(epoch, batch, logs)
: called everyyieldEvery
milliseconds with the current epoch, batch and logs. The logs are the same as inonBatchEnd()
. Note thatonYield
can skip batches or epochs. See also docs foryieldEvery
below.
-
validationSplit
(number)
Float between 0 and 1: fraction of the training data
to be used as validation data. The model will set apart this fraction of
the training data, will not train on it, and will evaluate the loss and
any model metrics on this data at the end of each epoch.
The validation data is selected from the last samples in the
x
andy
data provided, before shuffling. -
validationData
([
tf.Tensor|tf.Tensor[], tf.Tensor|tf.Tensor[]
]|[tf.Tensor | tf.Tensor[], tf.Tensor|tf.Tensor[], tf.Tensor|tf.Tensor[]])
Data on which to evaluate the loss and any model
metrics at the end of each epoch. The model will not be trained on this
data. This could be a tuple [xVal, yVal] or a tuple [xVal, yVal, valSampleWeights].
validationData
will overridevalidationSplit
. -
shuffle
(boolean)
Whether to shuffle the training data before each epoch. Has
no effect when
stepsPerEpoch
is notnull
. -
classWeight
(ClassWeight|ClassWeight[]|ClassWeightMap)
Optional object mapping class indices (integers) to
a weight (float) to apply to the model's loss for the samples from this
class during training. This can be useful to tell the model to "pay more
attention" to samples from an under-represented class.
If the model has multiple outputs, a class weight can be specified for each of the outputs by setting this field to an array of weight objects or an object that maps model output names (e.g.,
model.outputNames[0]
) to weight objects. - sampleWeight (tf.Tensor) Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequenceLength), to apply a different weight to every timestep of every sample. In this case you should make sure to specify sampleWeightMode="temporal" in compile().
-
initialEpoch
(number)
Epoch at which to start training (useful for resuming a previous training
run). When this is used,
epochs
is the index of the "final epoch". The model is not trained for a number of iterations given byepochs
, but merely until the epoch of indexepochs
is reached. -
stepsPerEpoch
(number)
Total number of steps (batches of samples) before
declaring one epoch finished and starting the next epoch. When training
with Input Tensors such as TensorFlow data tensors, the default
null
is equal to the number of unique samples in your dataset divided by the batch size, or 1 if that cannot be determined. -
validationSteps
(number)
Only relevant if
stepsPerEpoch
is specified. Total number of steps (batches of samples) to validate before stopping. -
yieldEvery
(YieldEveryOptions)
Configures the frequency of yielding the main thread to other tasks.
In the browser environment, yielding the main thread can improve the responsiveness of the page during training. In the Node.js environment, it can ensure tasks queued in the event loop can be handled in a timely manner.
The value can be one of the following:
'auto'
: The yielding happens at a certain frame rate (currently set at 125ms). This is the default.'batch'
: yield every batch.'epoch'
: yield every epoch.- any
number
: yield everynumber
milliseconds. 'never'
: never yield. (yielding can still happen throughawait nextFrame()
calls in custom callbacks.)
Trains the model using a dataset object.
-
dataset
(tf.data.Dataset)
A dataset object. Its
iterator()
method is expected to generate a dataset iterator object, thenext()
method of which is expected to produce data batches for training. The return value of thenext()
call ought to contain a booleandone
field and avalue
field. Thevalue
field is expected to be an array of two tf.Tensors or an array of two nested tf.Tensor structures. The former case is for models with exactly one input and one output (e.g., a sequential model). The latter case is for models with multiple inputs and/or multiple outputs. Of the two items in the array, the first is the input feature(s) and the second is the output target(s). -
args
(Object)
A
ModelFitDatasetArgs
, containing optional fields. -
batchesPerEpoch
(number)
(Optional) Total number of steps (batches of samples) before
declaring one epoch finished and starting the next epoch. It should
typically be equal to the number of samples of your dataset divided by
the batch size, so that
fitDataset
() call can utilize the entire dataset. If it is not provided, the done return value of iterator.next() is used as the signal to finish an epoch. - epochs (number) Integer number of times to iterate over the training dataset.
-
verbose
(ModelLoggingVerbosity)
Verbosity level.
Expected to be 0, 1, or 2. Default: 1.
0 - No printed message during fit() call. 1 - In Node.js (tfjs-node), prints the progress bar, together with real-time updates of loss and metric values and training speed. In the browser: no action. This is the default. 2 - Not implemented yet.
-
callbacks
(BaseCallback[]|CustomCallbackArgs|CustomCallbackArgs[])
List of callbacks to be called during training.
Can have one or more of the following callbacks:
onTrainBegin(logs)
: called when training starts.onTrainEnd(logs)
: called when training ends.onEpochBegin(epoch, logs)
: called at the start of every epoch.onEpochEnd(epoch, logs)
: called at the end of every epoch.onBatchBegin(batch, logs)
: called at the start of every batch.onBatchEnd(batch, logs)
: called at the end of every batch.onYield(epoch, batch, logs)
: called everyyieldEvery
milliseconds with the current epoch, batch and logs. The logs are the same as inonBatchEnd()
. Note thatonYield
can skip batches or epochs. See also docs foryieldEvery
below.
-
validationData
([
TensorOrArrayOrMap, TensorOrArrayOrMap
]|[TensorOrArrayOrMap, TensorOrArrayOrMap, TensorOrArrayOrMap]|tf.data.Dataset)
Data on which to evaluate the loss and any model
metrics at the end of each epoch. The model will not be trained on this
data. This could be any of the following:
- An array
[xVal, yVal]
, where the two values may be tf.Tensor, an array of Tensors, or a map of string to Tensor. - Similarly, an array
[xVal, yVal, valSampleWeights]
(not implemented yet). - a
Dataset
object with elements of the form{xs: xVal, ys: yVal}
, wherexs
andys
are the feature and label tensors, respectively.
If
validationData
is an Array of Tensor objects, each tf.Tensor will be sliced into batches during validation, using the parametervalidationBatchSize
(which defaults to 32). The entirety of the tf.Tensor objects will be used in the validation.If
validationData
is a dataset object, and thevalidationBatches
parameter is specified, the validation will usevalidationBatches
batches drawn from the dataset object. IfvalidationBatches
parameter is not specified, the validation will stop when the dataset is exhausted.The model will not be trained on this data.
- An array
-
validationBatchSize
(number)
Optional batch size for validation.
Used only if
validationData
is an array of tf.Tensor objects, i.e., not a dataset object.If not specified, its value defaults to 32.
-
validationBatches
(number)
(Optional) Only relevant if
validationData
is specified and is a dataset object.Total number of batches of samples to draw from
validationData
for validation purpose before stopping at the end of every epoch. If not specified,evaluateDataset
will useiterator.next().done
as signal to stop validation. -
yieldEvery
(YieldEveryOptions)
Configures the frequency of yielding the main thread to other tasks.
In the browser environment, yielding the main thread can improve the responsiveness of the page during training. In the Node.js environment, it can ensure tasks queued in the event loop can be handled in a timely manner.
The value can be one of the following:
'auto'
: The yielding happens at a certain frame rate (currently set at 125ms). This is the default.'batch'
: yield every batch.'epoch'
: yield every epoch.- a
number
: Will yield everynumber
milliseconds. 'never'
: never yield. (But yielding can still happen throughawait nextFrame()
calls in custom callbacks.)
-
initialEpoch
(number)
Epoch at which to start training (useful for resuming a previous training
run). When this is used,
epochs
is the index of the "final epoch". The model is not trained for a number of iterations given byepochs
, but merely until the epoch of indexepochs
is reached. -
classWeight
(ClassWeight|ClassWeight[]|ClassWeightMap)
Optional object mapping class indices (integers) to
a weight (float) to apply to the model's loss for the samples from this
class during training. This can be useful to tell the model to "pay more
attention" to samples from an under-represented class.
If the model has multiple outputs, a class weight can be specified for each of the outputs by setting this field to an array of weight objects or an object that maps model output names (e.g.,
model.outputNames[0]
) to weight objects.
Runs a single gradient update on a single batch of data.
This method differs from fit()
and fitDataset()
in the following
regards:
- It operates on exactly one batch of data.
- It returns only the loss and metric values, instead of returning the batch-by-batch loss and metric values.
- It doesn't support fine-grained options such as verbosity and callbacks.
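A minimal sketch (not from the original examples) of a single update with trainOnBatch(); with no metrics configured, the returned value is the scalar loss for that batch.
const model = tf.sequential({
  layers: [tf.layers.dense({units: 1, inputShape: [10]})]
});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
// One gradient update on exactly one batch of data.
const loss = await model.trainOnBatch(tf.ones([4, 10]), tf.ones([4, 1]));
console.log('Loss after one batch:', loss);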
Save the configuration and/or weights of the LayersModel.
An IOHandler
is an object that has a save
method of the proper
signature defined. The save
method manages the storing or
transmission of serialized data ("artifacts") that represent the
model's topology and weights onto or via a specific medium, such as
file downloads, local storage, IndexedDB in the web browser and HTTP
requests to a server. TensorFlow.js provides IOHandler
implementations for a number of frequently used saving mediums, such as
tf.io.browserDownloads() and tf.io.browserLocalStorage
. See tf.io
for more details.
This method also allows you to refer to certain types of IOHandlers as URL-like string shortcuts, such as 'localstorage://' and 'indexeddb://'.
Example 1: Save the model's topology and weights to browser local storage; then load it back.
const model = tf.sequential(
{layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
console.log('Prediction from original model:');
model.predict(tf.ones([1, 3])).print();
const saveResults = await model.save('localstorage://my-model-1');
const loadedModel = await tf.loadLayersModel('localstorage://my-model-1');
console.log('Prediction from loaded model:');
loadedModel.predict(tf.ones([1, 3])).print();
Example 2. Saving the model's topology and weights to browser IndexedDB; then load it back.
const model = tf.sequential(
{layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
console.log('Prediction from original model:');
model.predict(tf.ones([1, 3])).print();
const saveResults = await model.save('indexeddb://my-model-1');
const loadedModel = await tf.loadLayersModel('indexeddb://my-model-1');
console.log('Prediction from loaded model:');
loadedModel.predict(tf.ones([1, 3])).print();
Example 3. Saving the model's topology and weights as two files (my-model-1.json and my-model-1.weights.bin) downloaded from the browser.
const model = tf.sequential(
{layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
const saveResults = await model.save('downloads://my-model-1');
Example 4. Send the model's topology and weights to an HTTP server. See the documentation of tf.io.http() for more details, including specifying request parameters and implementation of the server.
const model = tf.sequential(
{layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
const saveResults = await model.save('http://my-server/model/upload');
-
handlerOrURL
(io.IOHandler|string)
An instance of
IOHandler
or a URL-like, scheme-based string shortcut for IOHandler
. - config (Object) Options for saving the model. Optional
- trainableOnly (boolean) Whether to save only the trainable weights of the model, ignoring the non-trainable ones.
-
includeOptimizer
(boolean)
Whether the optimizer will be saved (if it exists).
Default:
false
.
Retrieves a layer based on either its name (unique) or index.
Indices are based on order of horizontal graph traversal (bottom-up).
If both name
and index
are specified, index
takes precedence.
- name (string) Name of layer. Optional
- index (number) Index of layer. Optional
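For illustration (a sketch, not from the original docs; the layer name 'hidden' is one we assign here), layers can be looked up by name or by index:
const model = tf.sequential();
model.add(tf.layers.dense({units: 4, inputShape: [3], name: 'hidden'}));
model.add(tf.layers.dense({units: 1}));
// Retrieve by name (unique) or by index; index takes precedence if both are given.
const byName = model.getLayer('hidden');
const byIndex = model.getLayer(undefined, 1);
console.log(byName.name, byIndex.name);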
A model with a stack of layers, feeding linearly from one to the next.
tf.sequential() is a factory function that creates an instance of tf.Sequential.
// Define a model for linear regression.
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
// Prepare the model for training: Specify the loss and the optimizer.
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
// Generate some synthetic data for training.
const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);
// Train the model using the data then do inference on a data point the
// model hasn't seen:
await model.fit(xs, ys);
model.predict(tf.tensor2d([5], [1, 1])).print();
Adds a layer instance on top of the layer stack.
const model = tf.sequential();
model.add(tf.layers.dense({units: 8, inputShape: [1]}));
model.add(tf.layers.dense({units: 4, activation: 'relu6'}));
model.add(tf.layers.dense({units: 1, activation: 'relu6'}));
// Note that the untrained model is random at this point.
model.predict(tf.randomNormal([10, 1])).print();
- layer (tf.layers.Layer) Layer instance.
Print a text summary of the Sequential model's layers.
The summary includes
- Name and type of all layers that comprise the model.
- Output shape(s) of the layers
- Number of weight parameters of each layer
- The total number of trainable and non-trainable parameters of the model.
const model = tf.sequential();
model.add(
tf.layers.dense({units: 100, inputShape: [10], activation: 'relu'}));
model.add(tf.layers.dense({units: 1, activation: 'sigmoid'}));
model.summary();
- lineLength (number) Custom line length, in number of characters. Optional
-
positions
(number[])
Custom widths of each of the columns, as either
fractions of
lineLength
(e.g.,[0.5, 0.75, 1]
) or absolute number of characters (e.g.,[30, 50, 65]
). Each number corresponds to right-most (i.e., ending) position of a column. Optional -
printFn
((message?: tf.any(), ...optionalParams: tf.any()[]) => void)
Custom print function. Can be used to replace the default
console.log
. For example, you can usex => {}
to mute the printed messages in the console. Optional
Returns the loss value & metrics values for the model in test mode.
Loss and metrics are specified during compile()
, which needs to happen
before calls to evaluate()
.
Computation is done in batches.
const model = tf.sequential({
layers: [tf.layers.dense({units: 1, inputShape: [10]})]
});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
const result = model.evaluate(tf.ones([8, 10]), tf.ones([8, 1]), {
batchSize: 4,
});
result.print();
-
x
(tf.Tensor|tf.Tensor[])
tf.Tensor of test data, or an
Array
of tf.Tensors if the model has multiple inputs. -
y
(tf.Tensor|tf.Tensor[])
tf.Tensor of target data, or an
Array
of tf.Tensors if the model has multiple outputs. -
args
(Object)
A
ModelEvaluateConfig
, containing optional fields. Optional - batchSize (number) Batch size (Integer). If unspecified, it will default to 32.
- verbose (ModelLoggingVerbosity) Verbosity mode.
- sampleWeight (tf.Tensor) Tensor of weights to weight the contribution of different samples to the loss and metrics.
-
steps
(number)
integer: total number of steps (batches of samples)
before declaring the evaluation round finished. Ignored with the default
value of
undefined
.
Evaluate model using a dataset object.
Note: Unlike evaluate(), this method is asynchronous (async).
-
dataset
(tf.data.Dataset)
A dataset object. Its
iterator()
method is expected to generate a dataset iterator object, thenext()
method of which is expected to produce data batches for evaluation. The return value of thenext()
call ought to contain a booleandone
field and avalue
field. Thevalue
field is expected to be an array of two tf.Tensors or an array of two nested tf.Tensor structures. The former case is for models with exactly one input and one output (e.g., a sequential model). The latter case is for models with multiple inputs and/or multiple outputs. Of the two items in the array, the first is the input feature(s) and the second is the output target(s). - args (Object) A configuration object for the dataset-based evaluation.
- batches (number) Number of batches to draw from the dataset object before ending the evaluation.
- verbose (ModelLoggingVerbosity) Verbosity mode.
Generates output predictions for the input samples.
Computation is done in batches.
Note: the "step" mode of predict() is currently not supported. This is because the TensorFow.js core backend is imperative only.
const model = tf.sequential({
layers: [tf.layers.dense({units: 1, inputShape: [10]})]
});
model.predict(tf.ones([2, 10])).print();
Trains the model for a fixed number of epochs (iterations on a dataset).
const model = tf.sequential({
layers: [tf.layers.dense({units: 1, inputShape: [10]})]
});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
const history = await model.fit(tf.ones([8, 10]), tf.ones([8, 1]), {
batchSize: 4,
epochs: 3
});
console.log(history.history.loss[0]);
- x (tf.Tensor|tf.Tensor[]|{[inputName: string]: tf.Tensor}) tf.Tensor of training data, or an array of tf.Tensors if the model has multiple inputs. If all inputs in the model are named, you can also pass a dictionary mapping input names to tf.Tensors.
- y (tf.Tensor|tf.Tensor[]|{[inputName: string]: tf.Tensor}) tf.Tensor of target (label) data, or an array of tf.Tensors if the model has multiple outputs. If all outputs in the model are named, you can also pass a dictionary mapping output names to tf.Tensors.
-
args
(Object)
A
ModelFitConfig
, containing optional fields. Optional - batchSize (number) Number of samples per gradient update. If unspecified, it will default to 32.
- epochs (number) Integer number of times to iterate over the training data arrays.
-
verbose
(ModelLoggingVerbosity)
Verbosity level.
Expected to be 0, 1, or 2. Default: 1.
0 - No printed message during fit() call. 1 - In Node.js (tfjs-node), prints the progress bar, together with real-time updates of loss and metric values and training speed. In the browser: no action. This is the default. 2 - Not implemented yet.
-
callbacks
(BaseCallback[]|CustomCallbackArgs|CustomCallbackArgs[])
List of callbacks to be called during training.
Can have one or more of the following callbacks:
onTrainBegin(logs)
: called when training starts.onTrainEnd(logs)
: called when training ends.onEpochBegin(epoch, logs)
: called at the start of every epoch.onEpochEnd(epoch, logs)
: called at the end of every epoch.onBatchBegin(batch, logs)
: called at the start of every batch.onBatchEnd(batch, logs)
: called at the end of every batch.onYield(epoch, batch, logs)
: called everyyieldEvery
milliseconds with the current epoch, batch and logs. The logs are the same as inonBatchEnd()
. Note thatonYield
can skip batches or epochs. See also docs foryieldEvery
below.
-
validationSplit
(number)
Float between 0 and 1: fraction of the training data
to be used as validation data. The model will set apart this fraction of
the training data, will not train on it, and will evaluate the loss and
any model metrics on this data at the end of each epoch.
The validation data is selected from the last samples in the
x
andy
data provided, before shuffling. -
validationData
([
tf.Tensor|tf.Tensor[], tf.Tensor|tf.Tensor[]
]|[tf.Tensor | tf.Tensor[], tf.Tensor|tf.Tensor[], tf.Tensor|tf.Tensor[]])
Data on which to evaluate the loss and any model
metrics at the end of each epoch. The model will not be trained on this
data. This could be a tuple [xVal, yVal] or a tuple [xVal, yVal, valSampleWeights].
validationData
will overridevalidationSplit
. -
shuffle
(boolean)
Whether to shuffle the training data before each epoch. Has
no effect when
stepsPerEpoch
is notnull
. -
classWeight
(ClassWeight|ClassWeight[]|ClassWeightMap)
Optional object mapping class indices (integers) to
a weight (float) to apply to the model's loss for the samples from this
class during training. This can be useful to tell the model to "pay more
attention" to samples from an under-represented class.
If the model has multiple outputs, a class weight can be specified for each of the outputs by setting this field to an array of weight objects or an object that maps model output names (e.g.,
model.outputNames[0]
) to weight objects. - sampleWeight (tf.Tensor) Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequenceLength), to apply a different weight to every timestep of every sample. In this case you should make sure to specify sampleWeightMode="temporal" in compile().
-
initialEpoch
(number)
Epoch at which to start training (useful for resuming a previous training
run). When this is used,
epochs
is the index of the "final epoch". The model is not trained for a number of iterations given byepochs
, but merely until the epoch of indexepochs
is reached. -
stepsPerEpoch
(number)
Total number of steps (batches of samples) before
declaring one epoch finished and starting the next epoch. When training
with Input Tensors such as TensorFlow data tensors, the default
null
is equal to the number of unique samples in your dataset divided by the batch size, or 1 if that cannot be determined. -
validationSteps
(number)
Only relevant if
stepsPerEpoch
is specified. Total number of steps (batches of samples) to validate before stopping. -
yieldEvery
(YieldEveryOptions)
Configures the frequency of yielding the main thread to other tasks.
In the browser environment, yielding the main thread can improve the responsiveness of the page during training. In the Node.js environment, it can ensure tasks queued in the event loop can be handled in a timely manner.
The value can be one of the following:
'auto'
: The yielding happens at a certain frame rate (currently set at 125ms). This is the default.'batch'
: yield every batch.'epoch'
: yield every epoch.- any
number
: yield everynumber
milliseconds. 'never'
: never yield. (yielding can still happen throughawait nextFrame()
calls in custom callbacks.)
Trains the model using a dataset object.
const xArray = [
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
];
const yArray = [1, 1, 1, 1];
// Create a dataset from the JavaScript array.
const xDataset = tf.data.array(xArray);
const yDataset = tf.data.array(yArray);
// Zip combines the `x` and `y` Datasets into a single Dataset, the
// iterator of which will return an object containing of two tensors,
// corresponding to `x` and `y`. The call to `batch(4)` will bundle
// four such samples into a single object, with the same keys now pointing
// to tensors that hold 4 examples, organized along the batch dimension.
// The call to `shuffle(4)` causes each iteration through the dataset to
// happen in a different order. The size of the shuffle window is 4.
const xyDataset = tf.data.zip({xs: xDataset, ys: yDataset})
.batch(4)
.shuffle(4);
const model = tf.sequential({
layers: [tf.layers.dense({units: 1, inputShape: [9]})]
});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
const history = await model.fitDataset(xyDataset, {
epochs: 4,
callbacks: {onEpochEnd: (epoch, logs) => console.log(logs.loss)}
});
-
dataset
(tf.data.Dataset)
A dataset object. Its
iterator()
method is expected to generate a dataset iterator object, thenext()
method of which is expected to produce data batches for evaluation. The return value of thenext()
call ought to contain a booleandone
field and avalue
field.The
value
field is expected to be an object with fields xs and ys, which point to the feature tensor and the target tensor, respectively. This case is for models with exactly one input and one output (e.g., a sequential model). For example: {value: {xs: xsTensor, ys: ysTensor}, done: false}
If the model has multiple inputs, the
xs
field ofvalue
should be an object mapping input names to their respective feature tensors. For example:{ value: { xs: { input_1: xsTensor1, input_2: xsTensor2 }, ys: ysTensor }, done: false }
If the model has multiple outputs, the
ys
field ofvalue
should be an object mapping output names to their respective target tensors. For example:{ value: { xs: xsTensor, ys: { output_1: ysTensor1, output_2: ysTensor2 }, }, done: false }
-
args
(Object)
A
ModelFitDatasetArgs
, containing optional fields. -
batchesPerEpoch
(number)
(Optional) Total number of steps (batches of samples) before
declaring one epoch finished and starting the next epoch. It should
typically be equal to the number of samples of your dataset divided by
the batch size, so that
fitDataset
() call can utilize the entire dataset. If it is not provided, the done return value of iterator.next() is used as the signal to finish an epoch. - epochs (number) Integer number of times to iterate over the training dataset.
-
verbose
(ModelLoggingVerbosity)
Verbosity level.
Expected to be 0, 1, or 2. Default: 1.
0 - No printed message during fit() call. 1 - In Node.js (tfjs-node), prints the progress bar, together with real-time updates of loss and metric values and training speed. In the browser: no action. This is the default. 2 - Not implemented yet.
-
callbacks
(BaseCallback[]|CustomCallbackArgs|CustomCallbackArgs[])
List of callbacks to be called during training.
Can have one or more of the following callbacks:
onTrainBegin(logs)
: called when training starts.onTrainEnd(logs)
: called when training ends.onEpochBegin(epoch, logs)
: called at the start of every epoch.onEpochEnd(epoch, logs)
: called at the end of every epoch.onBatchBegin(batch, logs)
: called at the start of every batch.onBatchEnd(batch, logs)
: called at the end of every batch.onYield(epoch, batch, logs)
: called everyyieldEvery
milliseconds with the current epoch, batch and logs. The logs are the same as inonBatchEnd()
. Note thatonYield
can skip batches or epochs. See also docs foryieldEvery
below.
-
validationData
([
TensorOrArrayOrMap, TensorOrArrayOrMap
]|[TensorOrArrayOrMap, TensorOrArrayOrMap, TensorOrArrayOrMap]|tf.data.Dataset)
Data on which to evaluate the loss and any model
metrics at the end of each epoch. The model will not be trained on this
data. This could be any of the following:
- An array
[xVal, yVal]
, where the two values may be tf.Tensor, an array of Tensors, or a map of string to Tensor. - Similarly, an array
[xVal, yVal, valSampleWeights]
(not implemented yet). - a
Dataset
object with elements of the form{xs: xVal, ys: yVal}
, wherexs
andys
are the feature and label tensors, respectively.
If
validationData
is an Array of Tensor objects, each tf.Tensor will be sliced into batches during validation, using the parametervalidationBatchSize
(which defaults to 32). The entirety of the tf.Tensor objects will be used in the validation.If
validationData
is a dataset object, and thevalidationBatches
parameter is specified, the validation will usevalidationBatches
batches drawn from the dataset object. IfvalidationBatches
parameter is not specified, the validation will stop when the dataset is exhausted.The model will not be trained on this data.
- An array
-
validationBatchSize
(number)
Optional batch size for validation.
Used only if
validationData
is an array of tf.Tensor objects, i.e., not a dataset object.If not specified, its value defaults to 32.
-
validationBatches
(number)
(Optional) Only relevant if
validationData
is specified and is a dataset object.Total number of batches of samples to draw from
validationData
for validation purpose before stopping at the end of every epoch. If not specified,evaluateDataset
will useiterator.next().done
as signal to stop validation. -
yieldEvery
(YieldEveryOptions)
Configures the frequency of yielding the main thread to other tasks.
In the browser environment, yielding the main thread can improve the responsiveness of the page during training. In the Node.js environment, it can ensure tasks queued in the event loop can be handled in a timely manner.
The value can be one of the following:
'auto'
: The yielding happens at a certain frame rate (currently set at 125ms). This is the default.'batch'
: yield every batch.'epoch'
: yield every epoch.- a
number
: Will yield everynumber
milliseconds. 'never'
: never yield. (But yielding can still happen throughawait nextFrame()
calls in custom callbacks.)
-
initialEpoch
(number)
Epoch at which to start training (useful for resuming a previous training
run). When this is used,
epochs
is the index of the "final epoch". The model is not trained for a number of iterations given byepochs
, but merely until the epoch of indexepochs
is reached. -
classWeight
(ClassWeight|ClassWeight[]|ClassWeightMap)
Optional object mapping class indices (integers) to
a weight (float) to apply to the model's loss for the samples from this
class during training. This can be useful to tell the model to "pay more
attention" to samples from an under-represented class.
If the model has multiple outputs, a class weight can be specified for each of the outputs by setting this field to an array of weight objects or an object that maps model output names (e.g.,
model.outputNames[0]
) to weight objects.
Runs a single gradient update on a single batch of data.
This method differs from fit()
and fitDataset()
in the following
regards:
- It operates on exactly one batch of data.
- It returns only the loss and metric values, instead of returning the batch-by-batch loss and metric values.
- It doesn't support fine-grained options such as verbosity and callbacks.
tf.SymbolicTensor is a placeholder for a Tensor without any concrete value.
They are most often encountered when building a graph of Layers for a tf.LayersModel, when the input data's shape, but not its values, is known.
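For example (a small sketch, not from the original docs), tf.input() returns a tf.SymbolicTensor whose shape is known but whose values are not:
const input = tf.input({shape: [8]});
console.log(input instanceof tf.SymbolicTensor);  // true
// Applying a layer to a SymbolicTensor yields another SymbolicTensor.
const output = tf.layers.dense({units: 2}).apply(input);
console.log(output.shape);  // [null, 2]; the batch dimension is unknown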
Deregister the Op for graph model executor.
- name (string) The Tensorflow Op name.
Retrieve the OpMapper object for the registered op.
- name (string) The Tensorflow Op name.
Register an Op for the graph model executor. This allows you to register a custom TensorFlow op or override an existing op.
Here is an example of registering a new MatMul Op.
const customMatmul = (node) =>
tf.matMul(
node.inputs[0], node.inputs[1],
node.attrs['transpose_a'], node.attrs['transpose_b']);
tf.registerOp('MatMul', customMatmul);
The inputs and attrs of the node object are based on the TensorFlow op registry.
- name (string) The Tensorflow Op name.
-
opFunc
(Object)
An op function which is called with the current graph node
during execution and needs to return a tensor or a list of tensors. The node
has the following attributes:
- attr: A map from attribute name to its value
- inputs: A list of input tensors
Layers are the primary building block for constructing a Model. Each layer will typically perform some computation to transform its input to its output.
Layers will automatically take care of creating and initializing the various internal variables/weights they need to function.
Exponential Linear Unit (ELU).
It follows:
f(x) = alpha * (exp(x) - 1.) for x < 0,
f(x) = x for x >= 0.
Input shape:
Arbitrary. Use the configuration inputShape
when using this layer as the
first layer in a model.
Output shape: Same shape as the input.
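A brief usage sketch (not from the original docs); the alpha shown is the documented default:
const model = tf.sequential();
model.add(tf.layers.dense({units: 4, inputShape: [3]}));
model.add(tf.layers.elu({alpha: 1.0}));
model.predict(tf.randomNormal([2, 3])).print();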
- args (Object) Optional
-
alpha
(number)
Float
>= 0
. Negative slope coefficient. Defaults to1.0
. -
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Leaky version of a rectified linear unit.
It allows a small gradient when the unit is not active:
f(x) = alpha * x for x < 0.
f(x) = x for x >= 0.
Input shape:
Arbitrary. Use the configuration inputShape
when using this layer as the
first layer in a model.
Output shape: Same shape as the input.
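A brief usage sketch (not from the original docs); the alpha shown is the documented default of 0.3:
const model = tf.sequential();
model.add(tf.layers.dense({units: 4, inputShape: [3]}));
model.add(tf.layers.leakyReLU({alpha: 0.3}));
model.predict(tf.randomNormal([2, 3])).print();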
- args (Object) Optional
-
alpha
(number)
Float
>= 0
. Negative slope coefficient. Defaults to0.3
. -
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Parameterized version of a leaky rectified linear unit.
It follows
f(x) = alpha * x for x < 0.
f(x) = x for x >= 0.
wherein alpha
is a trainable weight.
Input shape:
Arbitrary. Use the configuration inputShape
when using this layer as the
first layer in a model.
Output shape: Same shape as the input.
- args (Object) Optional
- alphaInitializer (tf.initializers.Initializer|'constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string) Initializer for the learnable alpha.
- alphaRegularizer (Regularizer) Regularizer for the learnable alpha.
- alphaConstraint (tf.constraints.Constraint) Constraint for the learnable alpha.
-
sharedAxes
(number|number[])
The axes along which to share learnable parameters for the activation
function. For example, if the incoming feature maps are from a 2D
convolution with output shape
[numExamples, height, width, channels]
, and you wish to share parameters across space (height and width) so that each filter channels has only one set of parameters, setshared_axes: [1, 2]
. -
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Rectified Linear Unit activation function.
Input shape:
Arbitrary. Use the config field inputShape
(Array of integers, does
not include the sample axis) when using this layer as the first layer
in a model.
Output shape: Same shape as the input.
- args (Object) Optional
- maxValue (number) Float, the maximum output value.
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Softmax activation layer.
Input shape:
Arbitrary. Use the configuration inputShape
when using this layer as the
first layer in a model.
Output shape: Same shape as the input.
- args (Object) Optional
-
axis
(number)
Integer, axis along which the softmax normalization is applied.
Defaults to
-1
(i.e., the last axis). -
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Thresholded Rectified Linear Unit.
It follows:
f(x) = x for x > theta,
f(x) = 0 otherwise.
Input shape:
Arbitrary. Use the configuration inputShape
when using this layer as the
first layer in a model.
Output shape: Same shape as the input.
- args (Object) Optional
- theta (number) Float >= 0. Threshold location of activation.
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Applies an activation function to an output.
This layer applies an element-wise activation function. Other layers, notably dense, can also apply activation functions. Use this isolated activation layer to extract the values before and after the activation. For instance:
const input = tf.input({shape: [5]});
const denseLayer = tf.layers.dense({units: 1});
const activationLayer = tf.layers.activation({activation: 'relu6'});
// Obtain the output symbolic tensors by applying the layers in order.
const denseOutput = denseLayer.apply(input);
const activationOutput = activationLayer.apply(denseOutput);
// Create the model based on the inputs.
const model = tf.model({
inputs: input,
outputs: [denseOutput, activationOutput]
});
// Collect both outputs and print separately.
const [denseOut, activationOut] = model.predict(tf.randomNormal([6, 5]));
denseOut.print();
activationOut.print();
- args (Object)
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Name of the activation function to use.
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Creates a dense (fully connected) layer.
This layer implements the operation:
output = activation(dot(input, kernel) + bias)
activation
is the element-wise activation function
passed as the activation
argument.
kernel
is a weights matrix created by the layer.
bias
is a bias vector created by the layer (only applicable if useBias
is true
).
Input shape: nD tf.Tensor with shape: (batchSize, ..., inputDim). The most common situation would be a 2D input with shape (batchSize, inputDim).
Output shape: nD tensor with shape: (batchSize, ..., units). For instance, for a 2D input with shape (batchSize, inputDim), the output would have shape (batchSize, units).
Note: if the input to the layer has a rank greater than 2, then it is flattened prior to the initial dot product with the kernel.
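As a small sketch of the shapes involved (not part of the original docs): a [2, 4] input to a dense layer with 3 units yields a [2, 3] output.
// output = activation(dot(input, kernel) + bias)
const model = tf.sequential({
  layers: [tf.layers.dense({units: 3, activation: 'relu', inputShape: [4]})]
});
const out = model.predict(tf.ones([2, 4]));
console.log(out.shape);  // [2, 3]
out.print();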
- args (Object)
- units (number) Positive integer, dimensionality of the output space.
-
activation
('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'|
'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh')
Activation function to use.
If unspecified, no activation is applied.
- useBias (boolean) Whether to apply a bias.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the dense kernel weights matrix.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
-
inputDim
(number)
If specified, defines inputShape as
[inputDim]
. - kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the kernel weights.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the dense kernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- activityRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the activation.
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Applies dropout to the input.
Dropout consists of randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting.
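A small sketch (not from the original docs): dropout is applied during training (e.g., inside fit()), but acts as the identity at inference time, so predict() passes values through unchanged.
const model = tf.sequential({
  layers: [tf.layers.dropout({rate: 0.5, inputShape: [4]})]
});
// At inference time the layer is a no-op, so this prints all ones.
model.predict(tf.ones([2, 4])).print();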
- args (Object)
- rate (number) Float between 0 and 1. Fraction of the input units to drop.
-
noiseShape
(number[])
Integer array representing the shape of the binary dropout mask that will
be multiplied with the input.
For instance, if your inputs have shape
(batchSize, timesteps, features)
and you want the dropout mask to be the same for all timesteps, you can use noiseShape: [batchSize, 1, features]
. - seed (number) An integer to use as random seed.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
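A minimal usage sketch for the dropout layer described above (the rate and inputShape values are illustrative):
const model = tf.sequential();
// Randomly zeroes 25% of the input units at each training update.
model.add(tf.layers.dropout({rate: 0.25, inputShape: [10]}));
console.log(JSON.stringify(model.outputShape));
// Dropout does not change the shape: [null, 10].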
Maps positive integers (indices) into dense vectors of fixed size, e.g., [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]].
Input shape: 2D tensor with shape [batchSize, sequenceLength].
Output shape: 3D tensor with shape [batchSize, sequenceLength, outputDim].
- args (Object)
- inputDim (number) Integer > 0. Size of the vocabulary, i.e. maximum integer index + 1.
- outputDim (number) Integer >= 0. Dimension of the dense embedding.
- embeddingsInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the embeddings matrix.
- embeddingsRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the embeddings matrix.
- activityRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the activation.
- embeddingsConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the embeddings matrix.
- maskZero (boolean) Whether the input value 0 is a special "padding" value that should be masked out. This is useful when using recurrent layers which may take variable-length input. If this is true, then all subsequent layers in the model need to support masking or an exception will be raised. If maskZero is set to true, as a consequence, index 0 cannot be used in the vocabulary (inputDim should equal the size of the vocabulary + 1).
- inputLength (number|number[]) Length of input sequences, when it is constant. This argument is required if you are going to connect flatten then dense layers upstream (without it, the shape of the dense outputs cannot be computed).
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
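A minimal usage sketch for the embedding layer described above (the vocabulary size, embedding dimension, and sequence length below are illustrative):
const model = tf.sequential();
// Maps each of 100 possible indices to an 8-dimensional vector,
// for input sequences of length 5.
model.add(tf.layers.embedding({inputDim: 100, outputDim: 8, inputShape: [5]}));
console.log(JSON.stringify(model.outputShape));
// Prints [null, 5, 8], i.e. [batchSize, sequenceLength, outputDim].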
Flattens the input. Does not affect the batch size.
A Flatten layer flattens each batch in its inputs to 1D (making the output 2D).
For example:
const input = tf.input({shape: [4, 3]});
const flattenLayer = tf.layers.flatten();
// Inspect the inferred output shape of the flatten layer, which
// equals `[null, 12]`. The 2nd dimension is 4 * 3, i.e., the result of the
// flattening. (The 1st dimension is the undetermined batch size.)
console.log(JSON.stringify(flattenLayer.apply(input).shape));
- args (Object) Optional
- dataFormat ('channelsFirst'|'channelsLast') Image data format: 'channelsLast' (default) or 'channelsFirst'.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Permutes the dimensions of the input according to a given pattern.
Useful for, e.g., connecting RNNs and convnets together.
Example:
const model = tf.sequential();
model.add(tf.layers.permute({
dims: [2, 1],
inputShape: [10, 64]
}));
console.log(model.outputShape);
// Now model's output shape is [null, 64, 10], where null is the
// unpermuted sample (batch) dimension.
Input shape: Arbitrary. Use the configuration field inputShape when using this layer as the first layer in a model.
Output shape: Same rank as the input shape, but with the dimensions re-ordered (i.e., permuted) according to the dims configuration of this layer.
- args (Object)
- dims (number[]) Array of integers. Permutation pattern. Does not include the sample (batch) dimension. Index starts at 1. For instance, [2, 1] permutes the first and second dimensions of the input.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Repeats the input n times in a new dimension.
const model = tf.sequential();
model.add(tf.layers.repeatVector({n: 4, inputShape: [2]}));
const x = tf.tensor2d([[10, 20]]);
// Use the model to do inference on a data point the model hasn't seen.
model.predict(x).print();
// The output shape is now [batch, 4, 2].
- args (Object)
- n (number) The integer number of times to repeat the input.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Reshapes an input to a certain shape.
const input = tf.input({shape: [4, 3]});
const reshapeLayer = tf.layers.reshape({targetShape: [2, 6]});
// Inspect the inferred output shape of the Reshape layer, which
// equals `[null, 2, 6]`. (The 1st dimension is the undetermined batch size.)
console.log(JSON.stringify(reshapeLayer.apply(input).shape));
Input shape: Arbitrary, although all dimensions in the input shape must be fixed. Use the configuration inputShape when using this layer as the first layer in a model.
Output shape: [batchSize, targetShape[0], targetShape[1], ..., targetShape[targetShape.length - 1]].
- args (Object)
- targetShape ((null | number)[]) The target shape. Does not include the batch axis.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Spatial 1D version of Dropout.
This layer type performs the same function as the Dropout layer, but it drops entire 1D feature maps instead of individual elements. For example, if an input example consists of 3 timesteps and the feature map for each timestep has a size of 4, a spatialDropout1d layer may zero out the feature maps of the 1st and 2nd timesteps completely while sparing all feature elements of the 3rd timestep.
If adjacent frames (timesteps) are strongly correlated (as is normally the case in early convolution layers), regular dropout will not regularize the activations and will merely result in an effective learning-rate decrease. In this case, spatialDropout1d will help promote independence among feature maps and should be used instead.
Arguments: rate: A floating-point number >= 0 and <= 1. Fraction of the input elements to drop.
Input shape: 3D tensor with shape [samples, timesteps, channels].
Output shape: Same as the input shape.
- args (Object)
- rate (number) Float between 0 and 1. Fraction of the input units to drop.
- seed (number) An integer to use as random seed.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
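A minimal usage sketch for spatialDropout1d (the rate and inputShape values are illustrative):
const model = tf.sequential();
// Drops entire feature maps of a [samples, timesteps, channels] input.
model.add(tf.layers.spatialDropout1d({rate: 0.3, inputShape: [4, 8]}));
console.log(JSON.stringify(model.outputShape));
// The output shape equals the input shape: [null, 4, 8].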
1D convolution layer (e.g., temporal convolution).
This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs.
If useBias is true, a bias vector is created and added to the outputs.
If activation is not null, it is applied to the outputs as well.
When using this layer as the first layer in a model, provide an inputShape argument (an Array of numbers or null entries).
For example, inputShape would be:
- [10, 128] for sequences of 10 vectors, each 128-dimensional
- [null, 128] for variable-length sequences of 128-dimensional vectors.
- args (Object)
- filters (number) The dimensionality of the output space (i.e. the number of filters in the convolution).
- kernelSize (number|number[]) The dimensions of the convolution window. If kernelSize is a number, the convolutional window will be square.
- strides (number|number[]) The strides of the convolution in each dimension. If strides is a number, strides in both dimensions are equal. Specifying any stride value != 1 is incompatible with specifying any dilationRate value != 1.
- padding ('valid'|'same'|'causal') Padding mode.
- dataFormat ('channelsFirst'|'channelsLast') Format of the data, which determines the ordering of the dimensions in the inputs. channelsLast corresponds to inputs with shape [batch, ..., channels]; channelsFirst corresponds to inputs with shape [batch, channels, ...]. Defaults to channelsLast.
- dilationRate (number|[number]|[number, number]|[number, number, number]) The dilation rate to use for the dilated convolution in each dimension. Should be an integer or an array of two or three integers. Currently, specifying any dilationRate value != 1 is incompatible with specifying any strides value != 1.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function of the layer. If you don't specify the activation, none is applied.
- useBias (boolean) Whether the layer uses a bias vector. Defaults to true.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the convolutional kernel weights matrix.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the convolutional kernel weights.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- activityRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the activation.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
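A minimal usage sketch for conv1d (the filters, kernelSize, and inputShape values are illustrative):
const model = tf.sequential();
// 8 filters over a window of 3 timesteps, applied to sequences of
// 10 steps with 16 features each.
model.add(tf.layers.conv1d({filters: 8, kernelSize: 3, inputShape: [10, 16]}));
console.log(JSON.stringify(model.outputShape));
// With the default 'valid' padding: [null, 8, 8].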
2D convolution layer (e.g. spatial convolution over images).
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs.
If useBias is true, a bias vector is created and added to the outputs.
If activation is not null, it is applied to the outputs as well.
When using this layer as the first layer in a model, provide the configuration inputShape (Array of integers, does not include the sample axis), e.g. inputShape: [128, 128, 3] for 128x128 RGB pictures with dataFormat: 'channelsLast'.
- args (Object)
- filters (number) The dimensionality of the output space (i.e. the number of filters in the convolution).
- kernelSize (number|number[]) The dimensions of the convolution window. If kernelSize is a number, the convolutional window will be square.
- strides (number|number[]) The strides of the convolution in each dimension. If strides is a number, strides in both dimensions are equal. Specifying any stride value != 1 is incompatible with specifying any dilationRate value != 1.
- padding ('valid'|'same'|'causal') Padding mode.
- dataFormat ('channelsFirst'|'channelsLast') Format of the data, which determines the ordering of the dimensions in the inputs. channelsLast corresponds to inputs with shape [batch, ..., channels]; channelsFirst corresponds to inputs with shape [batch, channels, ...]. Defaults to channelsLast.
- dilationRate (number|[number]|[number, number]|[number, number, number]) The dilation rate to use for the dilated convolution in each dimension. Should be an integer or an array of two or three integers. Currently, specifying any dilationRate value != 1 is incompatible with specifying any strides value != 1.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function of the layer. If you don't specify the activation, none is applied.
- useBias (boolean) Whether the layer uses a bias vector. Defaults to true.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the convolutional kernel weights matrix.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the convolutional kernel weights.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- activityRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the activation.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
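A minimal usage sketch for conv2d (the filters, kernelSize, and inputShape values are illustrative):
const model = tf.sequential();
// 16 filters with a 3x3 window over 28x28 single-channel images.
model.add(tf.layers.conv2d({filters: 16, kernelSize: 3, inputShape: [28, 28, 1]}));
console.log(JSON.stringify(model.outputShape));
// With the default 'valid' padding: [null, 26, 26, 16].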
Transposed convolutional layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
When using this layer as the first layer in a model, provide the configuration inputShape (Array of integers, does not include the sample axis), e.g., inputShape: [128, 128, 3] for 128x128 RGB pictures in dataFormat: 'channelsLast'.
Input shape: 4D tensor with shape [batch, channels, rows, cols] if dataFormat is 'channelsFirst', or 4D tensor with shape [batch, rows, cols, channels] if dataFormat is 'channelsLast'.
Output shape: 4D tensor with shape [batch, filters, newRows, newCols] if dataFormat is 'channelsFirst', or 4D tensor with shape [batch, newRows, newCols, filters] if dataFormat is 'channelsLast'.
- args (Object)
- filters (number) The dimensionality of the output space (i.e. the number of filters in the convolution).
- kernelSize (number|number[]) The dimensions of the convolution window. If kernelSize is a number, the convolutional window will be square.
- strides (number|number[]) The strides of the convolution in each dimension. If strides is a number, strides in both dimensions are equal. Specifying any stride value != 1 is incompatible with specifying any dilationRate value != 1.
- padding ('valid'|'same'|'causal') Padding mode.
- dataFormat ('channelsFirst'|'channelsLast') Format of the data, which determines the ordering of the dimensions in the inputs. channelsLast corresponds to inputs with shape [batch, ..., channels]; channelsFirst corresponds to inputs with shape [batch, channels, ...]. Defaults to channelsLast.
- dilationRate (number|[number]|[number, number]|[number, number, number]) The dilation rate to use for the dilated convolution in each dimension. Should be an integer or an array of two or three integers. Currently, specifying any dilationRate value != 1 is incompatible with specifying any strides value != 1.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function of the layer. If you don't specify the activation, none is applied.
- useBias (boolean) Whether the layer uses a bias vector. Defaults to true.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the convolutional kernel weights matrix.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the convolutional kernel weights.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- activityRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the activation.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
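A minimal usage sketch for conv2dTranspose (all configuration values below are illustrative):
const model = tf.sequential();
// Upsamples 8x8 feature maps with 16 channels to 16x16 maps with 4 filters.
model.add(tf.layers.conv2dTranspose(
    {filters: 4, kernelSize: 3, strides: 2, padding: 'same', inputShape: [8, 8, 16]}));
console.log(JSON.stringify(model.outputShape));
// Prints [null, 16, 16, 4].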
3D convolution layer (e.g. spatial convolution over volumes).
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs.
If useBias is true, a bias vector is created and added to the outputs.
If activation is not null, it is applied to the outputs as well.
When using this layer as the first layer in a model, provide the configuration inputShape (Array of integers, does not include the sample axis), e.g. inputShape: [128, 128, 128, 1] for 128x128x128 grayscale volumes with dataFormat: 'channelsLast'.
- args (Object)
- filters (number) The dimensionality of the output space (i.e. the number of filters in the convolution).
- kernelSize (number|number[]) The dimensions of the convolution window. If kernelSize is a number, the convolutional window will be square.
- strides (number|number[]) The strides of the convolution in each dimension. If strides is a number, strides in both dimensions are equal. Specifying any stride value != 1 is incompatible with specifying any dilationRate value != 1.
- padding ('valid'|'same'|'causal') Padding mode.
- dataFormat ('channelsFirst'|'channelsLast') Format of the data, which determines the ordering of the dimensions in the inputs. channelsLast corresponds to inputs with shape [batch, ..., channels]; channelsFirst corresponds to inputs with shape [batch, channels, ...]. Defaults to channelsLast.
- dilationRate (number|[number]|[number, number]|[number, number, number]) The dilation rate to use for the dilated convolution in each dimension. Should be an integer or an array of two or three integers. Currently, specifying any dilationRate value != 1 is incompatible with specifying any strides value != 1.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function of the layer. If you don't specify the activation, none is applied.
- useBias (boolean) Whether the layer uses a bias vector. Defaults to true.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the convolutional kernel weights matrix.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the convolutional kernel weights.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- activityRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the activation.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
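A minimal usage sketch for conv3d (the filters, kernelSize, and inputShape values are illustrative):
const model = tf.sequential();
// 8 filters with a 3x3x3 window over 16x16x16 single-channel volumes.
model.add(tf.layers.conv3d({filters: 8, kernelSize: 3, inputShape: [16, 16, 16, 1]}));
console.log(JSON.stringify(model.outputShape));
// With the default 'valid' padding: [null, 14, 14, 14, 8].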
Cropping layer for 2D input (e.g., image).
This layer can crop an input at the top, bottom, left and right side of an image tensor.
Input shape: 4D tensor with shape:
- If dataFormat is "channelsLast": [batch, rows, cols, channels]
- If dataFormat is "channelsFirst": [batch, channels, rows, cols]
Output shape: 4D tensor with shape:
- If dataFormat is "channelsLast": [batch, croppedRows, croppedCols, channels]
- If dataFormat is "channelsFirst": [batch, channels, croppedRows, croppedCols]
Examples
const model = tf.sequential();
model.add(tf.layers.cropping2D({cropping:[[2, 2], [2, 2]],
inputShape: [128, 128, 3]}));
// Now the output shape is [batch, 124, 124, 3].
- args (Object)
- cropping (number|[number, number]|[[number, number], [number, number]]) Dimension of the cropping along the width and the height.
  - If integer: the same symmetric cropping is applied to width and height.
  - If a list of 2 integers: interpreted as two different symmetric cropping values for height and width: [symmetric_height_crop, symmetric_width_crop].
  - If a list of 2 lists of 2 integers: interpreted as [[top_crop, bottom_crop], [left_crop, right_crop]].
- dataFormat ('channelsFirst'|'channelsLast') Format of the data, which determines the ordering of the dimensions in the inputs. channelsLast corresponds to inputs with shape [batch, ..., channels]; channelsFirst corresponds to inputs with shape [batch, channels, ...]. Defaults to channelsLast.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Depthwise separable 2D convolution.
Depthwise separable convolution consists of performing just the first step of a depthwise spatial convolution (which acts on each input channel separately). The depthMultiplier argument controls how many output channels are generated per input channel in the depthwise step.
- args (Object)
- kernelSize (number|[number, number]) An integer or Array of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
- depthMultiplier (number) The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to filtersIn * depthMultiplier. Default: 1.
- depthwiseInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the depthwise kernel matrix. Default: GlorotNormal.
- depthwiseConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the depthwise kernel matrix.
- depthwiseRegularizer ('l1l2'|string|Regularizer) Regularizer function for the depthwise kernel matrix.
- strides (number|number[]) The strides of the convolution in each dimension. If strides is a number, strides in both dimensions are equal. Specifying any stride value != 1 is incompatible with specifying any dilationRate value != 1.
- padding ('valid'|'same'|'causal') Padding mode.
- dataFormat ('channelsFirst'|'channelsLast') Format of the data, which determines the ordering of the dimensions in the inputs. channelsLast corresponds to inputs with shape [batch, ..., channels]; channelsFirst corresponds to inputs with shape [batch, channels, ...]. Defaults to channelsLast.
- dilationRate (number|[number]|[number, number]|[number, number, number]) The dilation rate to use for the dilated convolution in each dimension. Should be an integer or an array of two or three integers. Currently, specifying any dilationRate value != 1 is incompatible with specifying any strides value != 1.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function of the layer. If you don't specify the activation, none is applied.
- useBias (boolean) Whether the layer uses a bias vector. Defaults to true.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the convolutional kernel weights matrix.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the convolutional kernel weights.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- activityRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the activation.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
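A minimal usage sketch for depthwiseConv2d (the kernelSize, depthMultiplier, and inputShape values are illustrative):
const model = tf.sequential();
// Each of the 3 input channels is convolved with 2 separate 3x3 kernels.
model.add(tf.layers.depthwiseConv2d(
    {kernelSize: 3, depthMultiplier: 2, inputShape: [28, 28, 3]}));
console.log(JSON.stringify(model.outputShape));
// Prints [null, 26, 26, 6]: 3 input channels * depthMultiplier 2 = 6 output channels.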
Depthwise separable 2D convolution.
Separable convolution consists of first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes together the resulting output channels. The depthMultiplier argument controls how many output channels are generated per input channel in the depthwise step.
Intuitively, separable convolutions can be understood as a way to factorize a convolution kernel into two smaller kernels, or as an extreme version of an Inception block.
Input shape: 4D tensor with shape [batch, channels, rows, cols] if dataFormat is 'channelsFirst', or 4D tensor with shape [batch, rows, cols, channels] if dataFormat is 'channelsLast'.
Output shape: 4D tensor with shape [batch, filters, newRows, newCols] if dataFormat is 'channelsFirst', or 4D tensor with shape [batch, newRows, newCols, filters] if dataFormat is 'channelsLast'. The rows and cols values might have changed due to padding.
- args (Object)
- depthMultiplier (number) The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to filtersIn * depthMultiplier. Default: 1.
- depthwiseInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the depthwise kernel matrix.
- pointwiseInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the pointwise kernel matrix.
- depthwiseRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the depthwise kernel matrix.
- pointwiseRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the pointwise kernel matrix.
- depthwiseConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the depthwise kernel matrix.
- pointwiseConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the pointwise kernel matrix.
- filters (number) The dimensionality of the output space (i.e. the number of filters in the convolution).
- kernelSize (number|number[]) The dimensions of the convolution window. If kernelSize is a number, the convolutional window will be square.
- strides (number|number[]) The strides of the convolution in each dimension. If strides is a number, strides in both dimensions are equal. Specifying any stride value != 1 is incompatible with specifying any dilationRate value != 1.
- padding ('valid'|'same'|'causal') Padding mode.
- dataFormat ('channelsFirst'|'channelsLast') Format of the data, which determines the ordering of the dimensions in the inputs. channelsLast corresponds to inputs with shape [batch, ..., channels]; channelsFirst corresponds to inputs with shape [batch, channels, ...]. Defaults to channelsLast.
- dilationRate (number|[number]|[number, number]|[number, number, number]) The dilation rate to use for the dilated convolution in each dimension. Should be an integer or an array of two or three integers. Currently, specifying any dilationRate value != 1 is incompatible with specifying any strides value != 1.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function of the layer. If you don't specify the activation, none is applied.
- useBias (boolean) Whether the layer uses a bias vector. Defaults to true.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the convolutional kernel weights matrix.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the convolutional kernel weights.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- activityRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the activation.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
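A minimal usage sketch for separableConv2d (the filters, kernelSize, and inputShape values are illustrative):
const model = tf.sequential();
// A depthwise 3x3 convolution followed by a pointwise convolution to 8 filters.
model.add(tf.layers.separableConv2d(
    {filters: 8, kernelSize: 3, inputShape: [28, 28, 3]}));
console.log(JSON.stringify(model.outputShape));
// With the default 'valid' padding: [null, 26, 26, 8].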
Upsampling layer for 2D inputs.
Repeats the rows and columns of the data by size[0] and size[1] respectively.
Input shape: 4D tensor with shape:
- If dataFormat is "channelsLast": [batch, rows, cols, channels]
- If dataFormat is "channelsFirst": [batch, channels, rows, cols]
Output shape: 4D tensor with shape:
- If dataFormat is "channelsLast": [batch, upsampledRows, upsampledCols, channels]
- If dataFormat is "channelsFirst": [batch, channels, upsampledRows, upsampledCols]
- args (Object)
- size (number[]) The upsampling factors for rows and columns. Defaults to [2, 2].
- dataFormat ('channelsFirst'|'channelsLast') Format of the data, which determines the ordering of the dimensions in the inputs. "channelsLast" corresponds to inputs with shape [batch, ..., channels]; "channelsFirst" corresponds to inputs with shape [batch, channels, ...]. Defaults to "channelsLast".
- interpolation (InterpolationFormat) The interpolation mechanism, one of "nearest" or "bilinear", defaulting to "nearest".
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
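A minimal usage sketch for upSampling2d (the size and inputShape values are illustrative):
const model = tf.sequential();
// Doubles the rows and columns of 8x8 feature maps with 3 channels.
model.add(tf.layers.upSampling2d({size: [2, 2], inputShape: [8, 8, 3]}));
console.log(JSON.stringify(model.outputShape));
// Prints [null, 16, 16, 3].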
Layer that performs element-wise addition on an Array of inputs.
It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). The inputs are specified as an Array when the apply method of the Add layer instance is called. For example:
const input1 = tf.input({shape: [2, 2]});
const input2 = tf.input({shape: [2, 2]});
const addLayer = tf.layers.add();
const sum = addLayer.apply([input1, input2]);
console.log(JSON.stringify(sum.shape));
// You get [null, 2, 2], with the first dimension as the undetermined batch
// dimension.
- args (Object) Optional
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Layer that performs element-wise averaging on an Array of inputs.
It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). For example:
const input1 = tf.input({shape: [2, 2]});
const input2 = tf.input({shape: [2, 2]});
const averageLayer = tf.layers.average();
const average = averageLayer.apply([input1, input2]);
console.log(JSON.stringify(average.shape));
// You get [null, 2, 2], with the first dimension as the undetermined batch
// dimension.
- args (Object) Optional
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Layer that concatenates an Array of inputs.
It takes a list of tensors, all of the same shape except for the concatenation axis, and returns a single tensor, the concatenation of all inputs. For example:
const input1 = tf.input({shape: [2, 2]});
const input2 = tf.input({shape: [2, 3]});
const concatLayer = tf.layers.concatenate();
const output = concatLayer.apply([input1, input2]);
console.log(JSON.stringify(output.shape));
// You get [null, 2, 5], with the first dimension as the undetermined batch
// dimension. The last dimension (5) is the result of concatenating the
// last dimensions of the inputs (2 and 3).
- args (Object) Optional
- axis (number) Axis along which to concatenate.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Layer that computes a dot product between samples in two tensors.
E.g., if applied to a list of two tensors a and b, both of shape [batchSize, n], the output will be a tensor of shape [batchSize, 1], where each entry at index [i, 0] will be the dot product between a[i, :] and b[i, :].
Example:
const dotLayer = tf.layers.dot({axes: -1});
const x1 = tf.tensor2d([[10, 20], [30, 40]]);
const x2 = tf.tensor2d([[-1, -2], [-3, -4]]);
// Invoke the layer's apply() method in eager (imperative) mode.
const y = dotLayer.apply([x1, x2]);
y.print();
- args (Object)
- axes (number|[number, number]) Axis or axes along which the dot product will be taken. Integer or an Array of integers.
- normalize (boolean) Whether to L2-normalize samples along the dot product axis before taking the dot product. If set to true, the output of the dot product is the cosine proximity between the two samples.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Layer that computes the element-wise maximum of an Array of inputs.
It takes as input a list of tensors, all of the same shape and returns a single tensor (also of the same shape). For example:
const input1 = tf.input({shape: [2, 2]});
const input2 = tf.input({shape: [2, 2]});
const maxLayer = tf.layers.maximum();
const max = maxLayer.apply([input1, input2]);
console.log(JSON.stringify(max.shape));
// You get [null, 2, 2], with the first dimension as the undetermined batch
// dimension.
- args (Object) Optional
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Layer that computes the element-wise minimum of an Array of inputs.
It takes as input a list of tensors, all of the same shape and returns a single tensor (also of the same shape). For example:
const input1 = tf.input({shape: [2, 2]});
const input2 = tf.input({shape: [2, 2]});
const minLayer = tf.layers.minimum();
const min = minLayer.apply([input1, input2]);
console.log(JSON.stringify(min.shape));
// You get [null, 2, 2], with the first dimension as the undetermined batch
// dimension.
- args (Object) Optional
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Layer that multiplies (element-wise) an Array
of inputs.
It takes as input an Array of tensors, all of the same shape, and returns a single tensor (also of the same shape). For example:
const input1 = tf.input({shape: [2, 2]});
const input2 = tf.input({shape: [2, 2]});
const input3 = tf.input({shape: [2, 2]});
const multiplyLayer = tf.layers.multiply();
const product = multiplyLayer.apply([input1, input2, input3]);
console.log(product.shape);
// You get [null, 2, 2], with the first dimension as the undetermined batch
// dimension.
- args (Object) Optional
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Batch normalization layer (Ioffe and Szegedy, 2014).
Normalizes the activations of the previous layer at each batch, i.e. it applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.
Input shape:
Arbitrary. Use the keyword argument inputShape (Array of integers, does not include the sample axis) when calling the constructor of this class, if this layer is used as a first layer in a model.
Output shape: Same shape as input.
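For example (an illustrative sketch; the input shape here is an arbitrary choice):
const input = tf.input({shape: [4]});
const bnLayer = tf.layers.batchNormalization();
const output = bnLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 4]: batch normalization does not change the shape of its input.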
- args (Object) Optional
- axis (number) The integer axis that should be normalized (typically the features axis). Defaults to -1. For instance, after a Conv2D layer with data_format="channels_first", set axis=1 in batchNormalization.
- momentum (number) Momentum of the moving average. Defaults to 0.99.
- epsilon (number) Small float added to the variance to avoid dividing by zero. Defaults to 1e-3.
- center (boolean) If true, add offset of beta to normalized tensor. If false, beta is ignored. Defaults to true.
- scale (boolean) If true, multiply by gamma. If false, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer. Defaults to true.
- betaInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the beta weight. Defaults to 'zeros'.
- gammaInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the gamma weight. Defaults to 'ones'.
- movingMeanInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the moving mean. Defaults to 'zeros'.
- movingVarianceInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the moving variance. Defaults to 'ones'.
- betaConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for the beta weight.
- gammaConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint for gamma weight.
- betaRegularizer ('l1l2'|string|Regularizer) Regularizer for the beta weight.
- gammaRegularizer ('l1l2'|string|Regularizer) Regularizer for the gamma weight.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Layer-normalization layer (Ba et al., 2016).
Normalizes the activations of the previous layer for each given example in a batch independently, instead of across a batch like in batchNormalization. In other words, this layer applies a transformation that maintains the mean activation within each example close to 0 and the activation variance close to 1.
Input shape:
Arbitrary. Use the argument inputShape when using this layer as the first layer in a model.
Output shape: Same as input.
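For example (an illustrative sketch; the input shape here is an arbitrary choice):
const input = tf.input({shape: [10, 8]});
const lnLayer = tf.layers.layerNormalization();
const output = lnLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 10, 8]: layer normalization does not change the shape of its input.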
- args (Object) Optional
- axis (number|number[]) The axis or axes that should be normalized (typically, the feature axis.) Defaults to -1 (the last axis.)
- epsilon (number) A small positive float added to variance to avoid division by zero. Defaults to 1e-3.
- center (boolean) If true, add offset of beta to normalized tensor. If false, beta is ignored. Default: true.
- scale (boolean) If true, multiply output by gamma. If false, gamma is not used. When the next layer is linear, this can be disabled since scaling will be done by the next layer. Default: true.
- betaInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the beta weight. Default: 'zeros'.
- gammaInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the gamma weight. Default: 'ones'.
- betaRegularizer ('l1l2'|string|Regularizer) Regularizer for the beta weight.
- gammaRegularizer ('l1l2'|string|Regularizer) Regularizer for the gamma weight.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Average pooling operation for temporal data.
Input shape: [batchSize, inLength, channels]
Output shape: [batchSize, pooledLength, channels]
tf.avgPool1d is an alias.
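For example (an illustrative sketch assuming the tf.layers.averagePooling1d() factory; the input shape and pool size are arbitrary choices):
const input = tf.input({shape: [8, 3]});
const avgPoolLayer = tf.layers.averagePooling1d({poolSize: 2});
const output = avgPoolLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 4, 3]: with the default strides (= poolSize) and 'valid'
// padding, the 8 time steps are pooled down to 4.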
- args (Object)
- poolSize (number|[number]) Size of the window to pool over, should be an integer.
- strides (number|[number]) Period at which to sample the pooled values. If null, defaults to poolSize.
- padding ('valid'|'same'|'causal') How to fill in data that's not an integer multiple of poolSize.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Average pooling operation for spatial data.
Input shape:
- If dataFormat === CHANNEL_LAST: 4D tensor with shape: [batchSize, rows, cols, channels]
- If dataFormat === CHANNEL_FIRST: 4D tensor with shape: [batchSize, channels, rows, cols]
Output shape:
- If dataFormat === CHANNEL_LAST: 4D tensor with shape: [batchSize, pooledRows, pooledCols, channels]
- If dataFormat === CHANNEL_FIRST: 4D tensor with shape: [batchSize, channels, pooledRows, pooledCols]
tf.avgPool2d is an alias.
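For example (an illustrative sketch assuming the tf.layers.averagePooling2d() factory; the shapes here are arbitrary choices):
const input = tf.input({shape: [28, 28, 3]});
const avgPoolLayer = tf.layers.averagePooling2d({poolSize: [2, 2]});
const output = avgPoolLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 14, 14, 3] with the default channelsLast data format.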
- args (Object)
- poolSize (number|[number, number]) Factors by which to downscale in each dimension [vertical, horizontal]. Expects an integer or an array of 2 integers. For example, [2, 2] will halve the input in both spatial dimensions. If only one integer is specified, the same window length will be used for both dimensions.
- strides (number|[number, number]) The size of the stride in each dimension of the pooling window. Expects an integer or an array of 2 integers. If null, defaults to poolSize.
- padding ('valid'|'same'|'causal') The padding type to use for the pooling layer.
- dataFormat ('channelsFirst'|'channelsLast') The data format to use for the pooling layer.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Average pooling operation for 3D data.
Input shape:
- If dataFormat === channelsLast: 5D tensor with shape: [batchSize, depths, rows, cols, channels]
- If dataFormat === channelsFirst: 5D tensor with shape: [batchSize, channels, depths, rows, cols]
Output shape:
- If dataFormat === channelsLast: 5D tensor with shape: [batchSize, pooledDepths, pooledRows, pooledCols, channels]
- If dataFormat === channelsFirst: 5D tensor with shape: [batchSize, channels, pooledDepths, pooledRows, pooledCols]
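For example (an illustrative sketch assuming the tf.layers.averagePooling3d() factory; the shapes here are arbitrary choices):
const input = tf.input({shape: [8, 8, 8, 1]});
const avgPoolLayer = tf.layers.averagePooling3d({poolSize: [2, 2, 2]});
const output = avgPoolLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 4, 4, 4, 1] with the default channelsLast data format.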
- args (Object)
- poolSize (number|[number, number, number]) Factors by which to downscale in each dimension [depth, height, width]. Expects an integer or an array of 3 integers. For example, [2, 2, 2] will halve the input in three dimensions. If only one integer is specified, the same window length will be used for all dimensions.
- strides (number|[number, number, number]) The size of the stride in each dimension of the pooling window. Expects an integer or an array of 3 integers. If null, defaults to poolSize.
- padding ('valid'|'same'|'causal') The padding type to use for the pooling layer.
- dataFormat ('channelsFirst'|'channelsLast') The data format to use for the pooling layer.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Global average pooling operation for temporal data.
Input shape: 3D tensor with shape: [batchSize, steps, features].
Output shape: 2D tensor with shape: [batchSize, features].
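For example (an illustrative sketch; the input shape here is an arbitrary choice):
const input = tf.input({shape: [10, 4]});
const poolLayer = tf.layers.globalAveragePooling1d();
const output = poolLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 4]: the 10 steps are averaged away.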
- args (Object) Optional
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Global average pooling operation for spatial data.
Input shape:
- If dataFormat is CHANNEL_LAST: 4D tensor with shape: [batchSize, rows, cols, channels].
- If dataFormat is CHANNEL_FIRST: 4D tensor with shape: [batchSize, channels, rows, cols].
Output shape:
2D tensor with shape: [batchSize, channels].
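For example (an illustrative sketch; the input shape here is an arbitrary choice):
const input = tf.input({shape: [7, 7, 32]});
const poolLayer = tf.layers.globalAveragePooling2d({});
const output = poolLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 32]: the spatial rows and cols are averaged away.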
- args (Object)
- dataFormat ('channelsFirst'|'channelsLast') One of CHANNEL_LAST (default) or CHANNEL_FIRST. The ordering of the dimensions in the inputs. CHANNEL_LAST corresponds to inputs with shape [batch, height, width, channels] while CHANNEL_FIRST corresponds to inputs with shape [batch, channels, height, width].
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Global max pooling operation for temporal data.
Input shape: 3D tensor with shape: [batchSize, steps, features].
Output shape: 2D tensor with shape: [batchSize, features].
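For example (an illustrative sketch; the input shape here is an arbitrary choice):
const input = tf.input({shape: [10, 4]});
const poolLayer = tf.layers.globalMaxPooling1d();
const output = poolLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 4]: the maximum is taken over the 10 steps.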
- args (Object) Optional
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Global max pooling operation for spatial data.
Input shape:
- If dataFormat is CHANNEL_LAST: 4D tensor with shape: [batchSize, rows, cols, channels].
- If dataFormat is CHANNEL_FIRST: 4D tensor with shape: [batchSize, channels, rows, cols].
Output shape:
2D tensor with shape: [batchSize, channels].
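For example (an illustrative sketch; the input shape here is an arbitrary choice):
const input = tf.input({shape: [7, 7, 32]});
const poolLayer = tf.layers.globalMaxPooling2d({});
const output = poolLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 32]: the maximum is taken over the spatial rows and cols.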
- args (Object)
- dataFormat ('channelsFirst'|'channelsLast') One of CHANNEL_LAST (default) or CHANNEL_FIRST. The ordering of the dimensions in the inputs. CHANNEL_LAST corresponds to inputs with shape [batch, height, width, channels] while CHANNEL_FIRST corresponds to inputs with shape [batch, channels, height, width].
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Max pooling operation for temporal data.
Input shape: [batchSize, inLength, channels]
Output shape: [batchSize, pooledLength, channels]
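For example (an illustrative sketch assuming the tf.layers.maxPooling1d() factory; the input shape and pool size are arbitrary choices):
const input = tf.input({shape: [8, 3]});
const maxPoolLayer = tf.layers.maxPooling1d({poolSize: 2});
const output = maxPoolLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 4, 3]: with the default strides (= poolSize) and 'valid'
// padding, the 8 time steps are pooled down to 4.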
- args (Object)
- poolSize (number|[number]) Size of the window to pool over, should be an integer.
- strides (number|[number]) Period at which to sample the pooled values. If null, defaults to poolSize.
- padding ('valid'|'same'|'causal') How to fill in data that's not an integer multiple of poolSize.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Max pooling operation for spatial data.
Input shape:
- If dataFormat === CHANNEL_LAST: 4D tensor with shape: [batchSize, rows, cols, channels]
- If dataFormat === CHANNEL_FIRST: 4D tensor with shape: [batchSize, channels, rows, cols]
Output shape:
- If dataFormat === CHANNEL_LAST: 4D tensor with shape: [batchSize, pooledRows, pooledCols, channels]
- If dataFormat === CHANNEL_FIRST: 4D tensor with shape: [batchSize, channels, pooledRows, pooledCols]
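For example (an illustrative sketch assuming the tf.layers.maxPooling2d() factory; the shapes here are arbitrary choices):
const input = tf.input({shape: [28, 28, 3]});
const maxPoolLayer = tf.layers.maxPooling2d({poolSize: [2, 2], strides: [2, 2]});
const output = maxPoolLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 14, 14, 3] with the default channelsLast data format.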
- args (Object)
- poolSize (number|[number, number]) Factors by which to downscale in each dimension [vertical, horizontal]. Expects an integer or an array of 2 integers. For example, [2, 2] will halve the input in both spatial dimensions. If only one integer is specified, the same window length will be used for both dimensions.
- strides (number|[number, number]) The size of the stride in each dimension of the pooling window. Expects an integer or an array of 2 integers. If null, defaults to poolSize.
- padding ('valid'|'same'|'causal') The padding type to use for the pooling layer.
- dataFormat ('channelsFirst'|'channelsLast') The data format to use for the pooling layer.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Max pooling operation for 3D data.
Input shape:
- If dataFormat === channelsLast: 5D tensor with shape: [batchSize, depths, rows, cols, channels]
- If dataFormat === channelsFirst: 5D tensor with shape: [batchSize, channels, depths, rows, cols]
Output shape:
- If dataFormat === channelsLast: 5D tensor with shape: [batchSize, pooledDepths, pooledRows, pooledCols, channels]
- If dataFormat === channelsFirst: 5D tensor with shape: [batchSize, channels, pooledDepths, pooledRows, pooledCols]
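For example (an illustrative sketch assuming the tf.layers.maxPooling3d() factory; the shapes here are arbitrary choices):
const input = tf.input({shape: [8, 8, 8, 1]});
const maxPoolLayer = tf.layers.maxPooling3d({poolSize: [2, 2, 2]});
const output = maxPoolLayer.apply(input);
console.log(JSON.stringify(output.shape));
// You get [null, 4, 4, 4, 1] with the default channelsLast data format.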
- args (Object)
- poolSize (number|[number, number, number]) Factors by which to downscale in each dimension [depth, height, width]. Expects an integer or an array of 3 integers. For example, [2, 2, 2] will halve the input in three dimensions. If only one integer is specified, the same window length will be used for all dimensions.
- strides (number|[number, number, number]) The size of the stride in each dimension of the pooling window. Expects an integer or an array of 3 integers. If null, defaults to poolSize.
- padding ('valid'|'same'|'causal') The padding type to use for the pooling layer.
- dataFormat ('channelsFirst'|'channelsLast') The data format to use for the pooling layer.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Convolutional LSTM layer - Xingjian Shi 2015.
This is a ConvRNN2D layer consisting of one ConvLSTM2DCell. However, unlike the underlying ConvLSTM2DCell, the apply method of ConvLSTM2D operates on a sequence of inputs. The shape of the input (not including the first, batch dimension) needs to be 4-D, with the first dimension being time steps. For example:
const filters = 3;
const kernelSize = 3;
const batchSize = 4;
const sequenceLength = 2;
const size = 5;
const channels = 3;
const inputShape = [batchSize, sequenceLength, size, size, channels];
const input = tf.ones(inputShape);
const layer = tf.layers.convLstm2d({filters, kernelSize});
const output = layer.apply(input);
- args (Object)
- activation (tf.any()) Activation function to use. Defaults to hyperbolic tangent (tanh). If you pass null, no activation will be applied.
- useBias (tf.any()) Whether the layer uses a bias vector.
- kernelInitializer (tf.any()) Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
- recurrentInitializer (tf.any()) Initializer for the recurrentKernel weights matrix, used for linear transformation of the recurrent state.
- biasInitializer (tf.any()) Initializer for the bias vector.
- kernelRegularizer (tf.any()) Regularizer function applied to the kernel weights matrix.
- recurrentRegularizer (tf.any()) Regularizer function applied to the recurrentKernel weights matrix.
- biasRegularizer (tf.any()) Regularizer function applied to the bias vector.
- kernelConstraint (tf.any()) Constraint function applied to the kernel weights matrix.
- recurrentConstraint (tf.any()) Constraint function applied to the recurrentKernel weights matrix.
- biasConstraint (tf.any()) Constraint function applied to the bias vector.
- dropout (tf.any()) Number between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
- recurrentDropout (tf.any()) Number between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
- inputShape (tf.any()) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape (tf.any()) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (tf.any()) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype (tf.any()) The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (tf.any()) Name for this layer.
- trainable (tf.any()) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.any()) Initial weight values of the layer.
- inputDType (tf.any()) Legacy support. Do not use for new code.
- recurrentActivation (tf.any()) Activation function to use for the recurrent step. Defaults to hard sigmoid (hardSigmoid). If null, no activation is applied.
- unitForgetBias (tf.any()) If true, add 1 to the bias of the forget gate at initialization. Setting it to true will also force biasInitializer = 'zeros'. This is recommended in Jozefowicz et al.
- implementation (tf.any()) Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions, whereas mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications. Note: For superior performance, TensorFlow.js always uses implementation 2, regardless of the actual value of this config field.
- returnSequences (tf.any()) Whether to return the last output in the output sequence, or the full sequence.
- returnState (tf.any()) Whether to return the last state in addition to the output.
- goBackwards (tf.any()) If true, process the input sequence backwards and return the reversed sequence (default: false).
- stateful (tf.any()) If true, the last state for each sample at index i in a batch will be used as initial state of the sample of index i in the following batch (default: false). You can set RNN layers to be "stateful", which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches. To enable "statefulness":
  - specify stateful: true in the layer constructor.
  - specify a fixed batch size for your model, by passing
    - if sequential model: batchInputShape: [...] to the first layer in your model.
    - else for functional model with 1 or more Input layers: batchShape: [...] to all the first layers in your model. This is the expected shape of your inputs including the batch size. It should be a tuple of integers, e.g., [32, 10, 100].
  - specify shuffle: false when calling LayersModel.fit().
  To reset the state of your model, call resetStates() on either the specific layer or on the entire model.
- unroll (tf.any()) If true, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences (default: false). Porting Note: tfjs-layers has an imperative backend. RNNs are executed with normal TypeScript control flow. Hence this property is inapplicable and ignored in tfjs-layers.
- inputDim (tf.any()) Dimensionality of the input (integer). This option (or alternatively, the option inputShape) is required when this layer is used as the first layer in a model.
- inputLength (tf.any()) Length of the input sequences, to be specified when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed). Note that if the recurrent layer is not the first layer in your model, you would need to specify the input length at the level of the first layer (e.g., via the inputShape option).
- cell (tf.RNNCell|tf.RNNCell[]) A RNN cell instance. A RNN cell is a class that has:
  - a call() method, which takes [Tensor, Tensor] as the first input argument. The first item is the input at time t, and the second item is the cell state at time t. The call() method returns [outputAtT, statesAtTPlus1]. The call() method of the cell can also take the argument constants, see section "Note on passing external constants" below. Porting Note: PyKeras overrides the call() signature of RNN cells, which are Layer subtypes, to accept two arguments. tfjs-layers does not do such overriding. Instead we preserve the call() signature, which due to its Tensor|Tensor[] argument and return value, is flexible enough to handle the inputs and states.
  - a stateSize attribute. This can be a single integer (single state) in which case it is the size of the recurrent state (which should be the same as the size of the cell output). This can also be an Array of integers (one size per state). In this case, the first entry (stateSize[0]) should be the same as the size of the cell output. It is also possible for cell to be a list of RNN cell instances, in which case the cells get stacked one after the other in the RNN, implementing an efficient stacked RNN.
- filters (number) The dimensionality of the output space (i.e. the number of filters in the convolution).
- kernelSize (number|number[]) The dimensions of the convolution window. If kernelSize is a number, the convolutional window will be square.
- strides (number|number[]) The strides of the convolution in each dimension. If strides is a number, strides in both dimensions are equal. Specifying any stride value != 1 is incompatible with specifying any dilationRate value != 1.
- padding ('valid'|'same'|'causal') Padding mode.
- dataFormat ('channelsFirst'|'channelsLast') Format of the data, which determines the ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, ..., channels). channels_first corresponds to inputs with shape (batch, channels, ...). Defaults to channels_last.
- dilationRate (number|[number]|[number, number]) The dilation rate to use for the dilated convolution in each dimension. Should be an integer or array of two or three integers. Currently, specifying any dilationRate value != 1 is incompatible with specifying any strides value != 1.
Cell class for ConvLSTM2D.
ConvLSTM2DCell is distinct from the ConvRNN2D subclass ConvLSTM2D in that its call method takes the input data of only a single time step and returns the cell's output at the time step, while ConvLSTM2D takes the input data over a number of time steps. For example:
const filters = 3;
const kernelSize = 3;
const sequenceLength = 1;
const size = 5;
const channels = 3;
const inputShape = [sequenceLength, size, size, channels];
const input = tf.ones(inputShape);
const cell = tf.layers.convLstm2dCell({filters, kernelSize});
cell.build(input.shape);
const outputSize = size - kernelSize + 1;
const outShape = [sequenceLength, outputSize, outputSize, filters];
const initialH = tf.zeros(outShape);
const initialC = tf.zeros(outShape);
const [o, h, c] = cell.call([input, initialH, initialC], {});
- args (Object)
- activation (tf.any()) Activation function to use. Default: hyperbolic tangent ('tanh'). If you pass null, 'linear' activation will be applied.
- useBias (tf.any()) Whether the layer uses a bias vector.
- kernelInitializer (tf.any()) Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
- recurrentInitializer (tf.any()) Initializer for the recurrentKernel weights matrix, used for linear transformation of the recurrent state.
- biasInitializer (tf.any()) Initializer for the bias vector.
- kernelRegularizer (tf.any()) Regularizer function applied to the kernel weights matrix.
- recurrentRegularizer (tf.any()) Regularizer function applied to the recurrent_kernel weights matrix.
- biasRegularizer (tf.any()) Regularizer function applied to the bias vector.
- kernelConstraint (tf.any()) Constraint function applied to the kernel weights matrix.
- recurrentConstraint (tf.any()) Constraint function applied to the recurrentKernel weights matrix.
- biasConstraint (tf.any()) Constraint function applied to the bias vector.
- dropout (tf.any()) Float number between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
- recurrentDropout (tf.any()) Float number between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
- inputShape (tf.any()) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape (tf.any()) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (tf.any()) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype (tf.any()) The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (tf.any()) Name for this layer.
- trainable (tf.any()) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.any()) Initial weight values of the layer.
- inputDType (tf.any()) Legacy support. Do not use for new code.
- recurrentActivation (tf.any()) Activation function to use for the recurrent step. Defaults to hard sigmoid (hardSigmoid). If null, no activation is applied.
- unitForgetBias (tf.any()) If true, add 1 to the bias of the forget gate at initialization. Setting it to true will also force biasInitializer = 'zeros'. This is recommended in Jozefowicz et al.
- implementation (tf.any()) Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions. Mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications. Note: For superior performance, TensorFlow.js always uses implementation 2, regardless of the actual value of this configuration field.
- filters (number) The dimensionality of the output space (i.e. the number of filters in the convolution).
- kernelSize (number|number[]) The dimensions of the convolution window. If kernelSize is a number, the convolutional window will be square.
- strides (number|number[]) The strides of the convolution in each dimension. If strides is a number, strides in both dimensions are equal. Specifying any stride value != 1 is incompatible with specifying any dilationRate value != 1.
- padding ('valid'|'same'|'causal') Padding mode.
- dataFormat ('channelsFirst'|'channelsLast') Format of the data, which determines the ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, ..., channels). channels_first corresponds to inputs with shape (batch, channels, ...). Defaults to channels_last.
- dilationRate (number|[number]|[number, number]) The dilation rate to use for the dilated convolution in each dimension. Should be an integer or array of two or three integers. Currently, specifying any dilationRate value != 1 is incompatible with specifying any strides value != 1.
Gated Recurrent Unit - Cho et al. 2014.
This is an RNN layer consisting of one GRUCell. However, unlike the underlying GRUCell, the apply method of GRU operates on a sequence of inputs. The shape of the input (not including the first, batch dimension) needs to be at least 2-D, with the first dimension being time steps. For example:
const rnn = tf.layers.gru({units: 8, returnSequences: true});
// Create an input with 10 time steps.
const input = tf.input({shape: [10, 20]});
const output = rnn.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the
// same as the sequence length of `input`, due to `returnSequences`: `true`;
// 3rd dimension is the `GRUCell`'s number of units.
- args (Object)
- recurrentActivation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function to use for the recurrent step. Defaults to hard sigmoid (hardSigmoid). If null, no activation is applied.
- implementation (number) Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions. Mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications. Note: For superior performance, TensorFlow.js always uses implementation 2, regardless of the actual value of this configuration field.
- units (number) Positive integer, dimensionality of the output space.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function to use. Defaults to hyperbolic tangent (tanh). If you pass null, no activation will be applied.
- useBias (boolean) Whether the layer uses a bias vector.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
- recurrentInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the recurrentKernel weights matrix, used for linear transformation of the recurrent state.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- recurrentRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the recurrentKernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the kernel weights matrix.
- recurrentConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the recurrentKernel weights matrix.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the bias vector.
- dropout (number) Number between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
- recurrentDropout (number) Number between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
- cell (tf.RNNCell|tf.RNNCell[]) A RNN cell instance. A RNN cell is a class that has:
  - a call() method, which takes [Tensor, Tensor] as the first input argument. The first item is the input at time t, and the second item is the cell state at time t. The call() method returns [outputAtT, statesAtTPlus1]. The call() method of the cell can also take the argument constants, see section "Note on passing external constants" below. Porting Note: PyKeras overrides the call() signature of RNN cells, which are Layer subtypes, to accept two arguments. tfjs-layers does not do such overriding. Instead we preserve the call() signature, which due to its Tensor|Tensor[] argument and return value, is flexible enough to handle the inputs and states.
  - a stateSize attribute. This can be a single integer (single state) in which case it is the size of the recurrent state (which should be the same as the size of the cell output). This can also be an Array of integers (one size per state). In this case, the first entry (stateSize[0]) should be the same as the size of the cell output. It is also possible for cell to be a list of RNN cell instances, in which case the cells get stacked one after the other in the RNN, implementing an efficient stacked RNN.
- returnSequences (boolean) Whether to return the last output in the output sequence, or the full sequence.
- returnState (boolean) Whether to return the last state in addition to the output.
- goBackwards (boolean) If true, process the input sequence backwards and return the reversed sequence (default: false).
- stateful (boolean) If true, the last state for each sample at index i in a batch will be used as initial state of the sample of index i in the following batch (default: false). You can set RNN layers to be "stateful", which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches. To enable "statefulness":
  - specify stateful: true in the layer constructor.
  - specify a fixed batch size for your model, by passing
    - if sequential model: batchInputShape: [...] to the first layer in your model.
    - else for functional model with 1 or more Input layers: batchShape: [...] to all the first layers in your model. This is the expected shape of your inputs including the batch size. It should be a tuple of integers, e.g., [32, 10, 100].
  - specify shuffle: false when calling LayersModel.fit().
  To reset the state of your model, call resetStates() on either the specific layer or on the entire model.
- unroll (boolean) If true, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences (default: false). Porting Note: tfjs-layers has an imperative backend. RNNs are executed with normal TypeScript control flow. Hence this property is inapplicable and ignored in tfjs-layers.
- inputDim (number) Dimensionality of the input (integer). This option (or alternatively, the option inputShape) is required when this layer is used as the first layer in a model.
- inputLength (number) Length of the input sequences, to be specified when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed). Note that if the recurrent layer is not the first layer in your model, you would need to specify the input length at the level of the first layer (e.g., via the inputShape option).
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Cell class for GRU.
GRUCell is distinct from the RNN subclass GRU in that its apply method takes the input data of only a single time step and returns the cell's output at the time step, while GRU takes the input data over a number of time steps. For example:
const cell = tf.layers.gruCell({units: 2});
const input = tf.input({shape: [10]});
const output = cell.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10]: This is the cell's output at a single time step. The 1st
// dimension is the unknown batch size.
Instance(s) of GRUCell can be used to construct RNN layers. The most typical use of this workflow is to combine a number of cells into a stacked RNN cell (i.e., StackedRNNCell internally) and use it to create an RNN. For example:
const cells = [
tf.layers.gruCell({units: 4}),
tf.layers.gruCell({units: 8}),
];
const rnn = tf.layers.rnn({cell: cells, returnSequences: true});
// Create an input with 10 time steps and a length-20 vector at each step.
const input = tf.input({shape: [10, 20]});
const output = rnn.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the
// same as the sequence length of `input`, due to `returnSequences`: `true`;
// 3rd dimension is the last `gruCell`'s number of units.
To create an RNN consisting of only one GRUCell, use tf.layers.gru().
- args (Object)
- recurrentActivation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function to use for the recurrent step. Defaults to hard sigmoid (hardSigmoid). If null, no activation is applied.
- implementation (number) Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions. Mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications. Note: For superior performance, TensorFlow.js always uses implementation 2, regardless of the actual value of this configuration field.
- resetAfter (boolean) GRU convention (whether to apply reset gate after or before matrix multiplication). false = "before", true = "after" (only false is supported).
- units (number) units: Positive integer, dimensionality of the output space.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function to use. Default: hyperbolic tangent ('tanh'). If you pass null, 'linear' activation will be applied.
- useBias (boolean) Whether the layer uses a bias vector.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
- recurrentInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the recurrentKernel weights matrix, used for linear transformation of the recurrent state.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- recurrentRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the recurrent_kernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the kernel weights matrix.
- recurrentConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the recurrentKernel weights matrix.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the bias vector.
- dropout (number) Float number between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
- recurrentDropout (number) Float number between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Long-Short Term Memory layer - Hochreiter 1997.
This is an RNN
layer consisting of one LSTMCell
. However, unlike
the underlying LSTMCell
, the apply
method of LSTM
operates
on a sequence of inputs. The shape of the input (not including the first,
batch dimension) needs to be at least 2-D, with the first dimension being
time steps. For example:
const lstm = tf.layers.lstm({units: 8, returnSequences: true});
// Create an input with 10 time steps.
const input = tf.input({shape: [10, 20]});
const output = lstm.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the
// same as the sequence length of `input`, due to `returnSequences`: `true`;
// 3rd dimension is the `LSTMCell`'s number of units.
- args (Object)
- recurrentActivation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function to use for the recurrent step. Defaults to hard sigmoid (hardSigmoid). If null, no activation is applied.
- unitForgetBias (boolean) If true, add 1 to the bias of the forget gate at initialization. Setting it to true will also force biasInitializer = 'zeros'. This is recommended in Jozefowicz et al.
- implementation (number) Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions, whereas mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications. Note: For superior performance, TensorFlow.js always uses implementation 2, regardless of the actual value of this config field.
- units (number) Positive integer, dimensionality of the output space.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function to use. Defaults to hyperbolic tangent (tanh). If you pass null, no activation will be applied.
- useBias (boolean) Whether the layer uses a bias vector.
-
kernelInitializer
('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'|
'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'|
'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer)
Initializer for the
kernel
weights matrix, used for the linear transformation of the inputs. -
recurrentInitializer
('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'|
'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'|
'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer)
Initializer for the
recurrentKernel
weights matrix, used for linear transformation of the recurrent state. - biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- recurrentRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the recurrentKernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the kernel weights matrix.
- recurrentConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the recurrentKernel weights matrix.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the bias vector.
- dropout (number) Number between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
- recurrentDropout (number) Number between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
- cell (tf.RNNCell|tf.RNNCell[]) An RNN cell instance. An RNN cell is a class that has:
- a call() method, which takes [Tensor, Tensor] as the first input argument. The first item is the input at time t, and the second item is the cell state at time t. The call() method returns [outputAtT, statesAtTPlus1]. The call() method of the cell can also take the argument constants, see section "Note on passing external constants" below. Porting Note: PyKeras overrides the call() signature of RNN cells, which are Layer subtypes, to accept two arguments. tfjs-layers does not do such overriding. Instead we preserve the call() signature, which, due to its Tensor|Tensor[] argument and return value, is flexible enough to handle the inputs and states.
- a stateSize attribute. This can be a single integer (single state), in which case it is the size of the recurrent state (which should be the same as the size of the cell output). This can also be an Array of integers (one size per state). In this case, the first entry (stateSize[0]) should be the same as the size of the cell output.
It is also possible for cell to be a list of RNN cell instances, in which case the cells get stacked one after the other in the RNN, implementing an efficient stacked RNN.
- returnSequences (boolean) Whether to return the last output in the output sequence, or the full sequence.
- returnState (boolean) Whether to return the last state in addition to the output.
- goBackwards (boolean) If true, process the input sequence backwards and return the reversed sequence (default: false).
- stateful (boolean) If true, the last state for each sample at index i in a batch will be used as the initial state of the sample of index i in the following batch (default: false). You can set RNN layers to be "stateful", which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches.
To enable "statefulness":
- specify stateful: true in the layer constructor.
- specify a fixed batch size for your model: for a sequential model, pass batchInputShape: [...] to the first layer in your model; for a functional model with 1 or more Input layers, pass batchShape: [...] to all the first layers in your model. This is the expected shape of your inputs including the batch size. It should be a tuple of integers, e.g., [32, 10, 100].
- specify shuffle: false when calling LayersModel.fit().
To reset the state of your model, call resetStates() on either the specific layer or on the entire model.
- unroll (boolean) If true, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences (default: false). Porting Note: tfjs-layers has an imperative backend. RNNs are executed with normal TypeScript control flow. Hence this property is inapplicable and ignored in tfjs-layers.
- inputDim (number) Dimensionality of the input (integer). This option (or alternatively, the option inputShape) is required when this layer is used as the first layer in a model.
- inputLength (number) Length of the input sequences, to be specified when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed). Note that if the recurrent layer is not the first layer in your model, you would need to specify the input length at the level of the first layer (e.g., via the inputShape option).
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Cell class for LSTM
.
LSTMCell
is distinct from the RNN
subclass LSTM
in that its
apply
method takes the input data of only a single time step and returns
the cell's output at the time step, while LSTM
takes the input data
over a number of time steps. For example:
const cell = tf.layers.lstmCell({units: 2});
const input = tf.input({shape: [10]});
const output = cell.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10]: This is the cell's output at a single time step. The 1st
// dimension is the unknown batch size.
Instance(s) of LSTMCell
can be used to construct RNN
layers. The
most typical use of this workflow is to combine a number of cells into a
stacked RNN cell (i.e., StackedRNNCell
internally) and use it to create an
RNN. For example:
const cells = [
tf.layers.lstmCell({units: 4}),
tf.layers.lstmCell({units: 8}),
];
const rnn = tf.layers.rnn({cell: cells, returnSequences: true});
// Create an input with 10 time steps and a length-20 vector at each step.
const input = tf.input({shape: [10, 20]});
const output = rnn.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the
// same as the sequence length of `input`, due to `returnSequences`: `true`;
// 3rd dimension is the last `lstmCell`'s number of units.
To create an RNN
consisting of only one LSTMCell
, use the
tf.layers.lstm().
- args (Object)
- recurrentActivation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function to use for the recurrent step. Defaults to hard sigmoid (hardSigmoid). If null, no activation is applied.
- unitForgetBias (boolean) If true, add 1 to the bias of the forget gate at initialization. Setting it to true will also force biasInitializer = 'zeros'. This is recommended in Jozefowicz et al.
- implementation (number) Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions. Mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications. Note: For superior performance, TensorFlow.js always uses implementation 2, regardless of the actual value of this configuration field.
- units (number) Positive integer, dimensionality of the output space.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function to use. Default: hyperbolic tangent ('tanh'). If you pass null, 'linear' activation will be applied.
- useBias (boolean) Whether the layer uses a bias vector.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
- recurrentInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the recurrentKernel weights matrix, used for the linear transformation of the recurrent state.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- recurrentRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the recurrentKernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the kernel weights matrix.
- recurrentConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the recurrentKernel weights matrix.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the bias vector.
- dropout (number) Float number between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
- recurrentDropout (number) Float number between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Base class for recurrent layers.
Input shape:
3D tensor with shape [batchSize, timeSteps, inputDim]
.
Output shape:
- if returnState, an Array of tensors (i.e., tf.Tensors). The first tensor is the output. The remaining tensors are the states at the last time step, each with shape [batchSize, units].
- if returnSequences, the output will have shape [batchSize, timeSteps, units].
- else, the output will have shape [batchSize, units].
Masking:
This layer supports masking for input data with a variable number of timesteps. To introduce masks to your data, use an embedding layer with the maskZero option set to true.
Notes on using statefulness in RNNs: You can set RNN layers to be 'stateful', which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches.
To enable statefulness:
- specify stateful: true in the layer constructor.
- specify a fixed batch size for your model: for a sequential model, pass batchInputShape: [...] to the first layer in your model; for a functional model with 1 or more Input layers, pass batchShape: [...] to all the first layers in your model. This is the expected shape of your inputs including the batch size. It should be a tuple of integers, e.g., [32, 10, 100].
- specify shuffle: false when calling fit().
To reset the states of your model, call .resetStates() on either a specific layer, or on your entire model. For example:
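A minimal sketch of a stateful LSTM model (the layer sizes, batch size, and sequence length here are illustrative, not taken from the original docs):
const model = tf.sequential();
// A fixed batch size is required for stateful RNNs, so use batchInputShape.
model.add(tf.layers.lstm({
  units: 8,
  stateful: true,
  batchInputShape: [32, 10, 16]
}));
model.add(tf.layers.dense({units: 1}));
model.compile({optimizer: 'adam', loss: 'meanSquaredError'});
// When fitting, disable shuffling so successive batches stay aligned:
// await model.fit(xs, ys, {shuffle: false});
// Reset the accumulated states between independent sequences as needed.
model.resetStates();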
Note on specifying the initial state of RNNs
You can specify the initial state of RNN layers symbolically by calling them with the option initialState. The value of initialState should be a tensor or list of tensors representing the initial state of the RNN layer.
You can specify the initial state of RNN layers numerically by calling resetStates with the argument states. The value of states should be a tf.Tensor or an Array of tf.Tensors representing the initial state of the RNN layer.
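A minimal sketch of the symbolic form (the shapes and the kwargs key initialState are assumptions based on the note above, not a verbatim example from the original docs):
const input = tf.input({shape: [10, 16]});
const initH = tf.input({shape: [4]});   // initial hidden state
const initC = tf.input({shape: [4]});   // initial cell state
const lstm = tf.layers.lstm({units: 4});
// Start the LSTM from the provided symbolic state tensors.
const output = lstm.apply(input, {initialState: [initH, initC]});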
Note on passing external constants to RNNs
You can pass "external" constants to the cell using the constants argument of the RNN.call method. This requires that the cell.call method accepts the same argument constants. Such constants can be used to condition the cell transformation on additional static inputs (not changing over time), a.k.a. an attention mechanism.
- args (Object)
- cell (tf.RNNCell|tf.RNNCell[]) An RNN cell instance. An RNN cell is a class that has:
- a call() method, which takes [Tensor, Tensor] as the first input argument. The first item is the input at time t, and the second item is the cell state at time t. The call() method returns [outputAtT, statesAtTPlus1]. The call() method of the cell can also take the argument constants, see section "Note on passing external constants" below. Porting Note: PyKeras overrides the call() signature of RNN cells, which are Layer subtypes, to accept two arguments. tfjs-layers does not do such overriding. Instead we preserve the call() signature, which, due to its Tensor|Tensor[] argument and return value, is flexible enough to handle the inputs and states.
- a stateSize attribute. This can be a single integer (single state), in which case it is the size of the recurrent state (which should be the same as the size of the cell output). This can also be an Array of integers (one size per state). In this case, the first entry (stateSize[0]) should be the same as the size of the cell output.
It is also possible for cell to be a list of RNN cell instances, in which case the cells get stacked one after the other in the RNN, implementing an efficient stacked RNN.
- returnSequences (boolean) Whether to return the last output in the output sequence, or the full sequence.
- returnState (boolean) Whether to return the last state in addition to the output.
- goBackwards (boolean) If true, process the input sequence backwards and return the reversed sequence (default: false).
- stateful (boolean) If true, the last state for each sample at index i in a batch will be used as the initial state of the sample of index i in the following batch (default: false). You can set RNN layers to be "stateful", which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches.
To enable "statefulness":
- specify stateful: true in the layer constructor.
- specify a fixed batch size for your model: for a sequential model, pass batchInputShape: [...] to the first layer in your model; for a functional model with 1 or more Input layers, pass batchShape: [...] to all the first layers in your model. This is the expected shape of your inputs including the batch size. It should be a tuple of integers, e.g., [32, 10, 100].
- specify shuffle: false when calling LayersModel.fit().
To reset the state of your model, call resetStates() on either the specific layer or on the entire model.
- unroll (boolean) If true, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences (default: false). Porting Note: tfjs-layers has an imperative backend. RNNs are executed with normal TypeScript control flow. Hence this property is inapplicable and ignored in tfjs-layers.
- inputDim (number) Dimensionality of the input (integer). This option (or alternatively, the option inputShape) is required when this layer is used as the first layer in a model.
- inputLength (number) Length of the input sequences, to be specified when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed). Note that if the recurrent layer is not the first layer in your model, you would need to specify the input length at the level of the first layer (e.g., via the inputShape option).
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Fully-connected RNN where the output is to be fed back to input.
This is an RNN
layer consisting of one SimpleRNNCell
. However, unlike
the underlying SimpleRNNCell
, the apply
method of SimpleRNN
operates
on a sequence of inputs. The shape of the input (not including the first,
batch dimension) needs to be at least 2-D, with the first dimension being
time steps. For example:
const rnn = tf.layers.simpleRNN({units: 8, returnSequences: true});
// Create an input with 10 time steps.
const input = tf.input({shape: [10, 20]});
const output = rnn.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the
// same as the sequence length of `input`, due to `returnSequences`: `true`;
// 3rd dimension is the `SimpleRNNCell`'s number of units.
- args (Object)
- units (number) Positive integer, dimensionality of the output space.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function to use. Defaults to hyperbolic tangent (tanh). If you pass null, no activation will be applied.
- useBias (boolean) Whether the layer uses a bias vector.
-
kernelInitializer
('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'|
'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'|
'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer)
Initializer for the
kernel
weights matrix, used for the linear transformation of the inputs. -
recurrentInitializer
('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'|
'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'|
'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer)
Initializer for the
recurrentKernel
weights matrix, used for linear transformation of the recurrent state. - biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- recurrentRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the recurrentKernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the kernel weights matrix.
- recurrentConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the recurrentKernel weights matrix.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the bias vector.
- dropout (number) Number between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
- recurrentDropout (number) Number between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
- cell (tf.RNNCell|tf.RNNCell[]) An RNN cell instance. An RNN cell is a class that has:
- a call() method, which takes [Tensor, Tensor] as the first input argument. The first item is the input at time t, and the second item is the cell state at time t. The call() method returns [outputAtT, statesAtTPlus1]. The call() method of the cell can also take the argument constants, see section "Note on passing external constants" below. Porting Note: PyKeras overrides the call() signature of RNN cells, which are Layer subtypes, to accept two arguments. tfjs-layers does not do such overriding. Instead we preserve the call() signature, which, due to its Tensor|Tensor[] argument and return value, is flexible enough to handle the inputs and states.
- a stateSize attribute. This can be a single integer (single state), in which case it is the size of the recurrent state (which should be the same as the size of the cell output). This can also be an Array of integers (one size per state). In this case, the first entry (stateSize[0]) should be the same as the size of the cell output.
It is also possible for cell to be a list of RNN cell instances, in which case the cells get stacked one after the other in the RNN, implementing an efficient stacked RNN.
- returnSequences (boolean) Whether to return the last output in the output sequence, or the full sequence.
- returnState (boolean) Whether to return the last state in addition to the output.
- goBackwards (boolean) If true, process the input sequence backwards and return the reversed sequence (default: false).
- stateful (boolean) If true, the last state for each sample at index i in a batch will be used as the initial state of the sample of index i in the following batch (default: false). You can set RNN layers to be "stateful", which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches.
To enable "statefulness":
- specify stateful: true in the layer constructor.
- specify a fixed batch size for your model: for a sequential model, pass batchInputShape: [...] to the first layer in your model; for a functional model with 1 or more Input layers, pass batchShape: [...] to all the first layers in your model. This is the expected shape of your inputs including the batch size. It should be a tuple of integers, e.g., [32, 10, 100].
- specify shuffle: false when calling LayersModel.fit().
To reset the state of your model, call resetStates() on either the specific layer or on the entire model.
- unroll (boolean) If true, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences (default: false). Porting Note: tfjs-layers has an imperative backend. RNNs are executed with normal TypeScript control flow. Hence this property is inapplicable and ignored in tfjs-layers.
- inputDim (number) Dimensionality of the input (integer). This option (or alternatively, the option inputShape) is required when this layer is used as the first layer in a model.
- inputLength (number) Length of the input sequences, to be specified when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed). Note that if the recurrent layer is not the first layer in your model, you would need to specify the input length at the level of the first layer (e.g., via the inputShape option).
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Cell class for SimpleRNN
.
SimpleRNNCell
is distinct from the RNN
subclass SimpleRNN
in that its
apply
method takes the input data of only a single time step and returns
the cell's output at the time step, while SimpleRNN
takes the input data
over a number of time steps. For example:
const cell = tf.layers.simpleRNNCell({units: 2});
const input = tf.input({shape: [10]});
const output = cell.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10]: This is the cell's output at a single time step. The 1st
// dimension is the unknown batch size.
Instance(s) of SimpleRNNCell
can be used to construct RNN
layers. The
most typical use of this workflow is to combine a number of cells into a
stacked RNN cell (i.e., StackedRNNCell
internally) and use it to create an
RNN. For example:
const cells = [
tf.layers.simpleRNNCell({units: 4}),
tf.layers.simpleRNNCell({units: 8}),
];
const rnn = tf.layers.rnn({cell: cells, returnSequences: true});
// Create an input with 10 time steps and a length-20 vector at each step.
const input = tf.input({shape: [10, 20]});
const output = rnn.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the
// same as the sequence length of `input`, due to `returnSequences`: `true`;
// 3rd dimension is the last `SimpleRNNCell`'s number of units.
To create an RNN
consisting of only one SimpleRNNCell
, use the
tf.layers.simpleRNN().
- args (Object)
- units (number) Positive integer, dimensionality of the output space.
- activation ('elu'|'hardSigmoid'|'linear'|'relu'|'relu6'| 'selu'|'sigmoid'|'softmax'|'softplus'|'softsign'|'tanh') Activation function to use. Default: hyperbolic tangent ('tanh'). If you pass null, 'linear' activation will be applied.
- useBias (boolean) Whether the layer uses a bias vector.
- kernelInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
- recurrentInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the recurrentKernel weights matrix, used for the linear transformation of the recurrent state.
- biasInitializer ('constant'|'glorotNormal'|'glorotUniform'|'heNormal'|'heUniform'|'identity'| 'leCunNormal'|'leCunUniform'|'ones'|'orthogonal'|'randomNormal'| 'randomUniform'|'truncatedNormal'|'varianceScaling'|'zeros'|string|tf.initializers.Initializer) Initializer for the bias vector.
- kernelRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the kernel weights matrix.
- recurrentRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the recurrentKernel weights matrix.
- biasRegularizer ('l1l2'|string|Regularizer) Regularizer function applied to the bias vector.
- kernelConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the kernel weights matrix.
- recurrentConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the recurrentKernel weights matrix.
- biasConstraint ('maxNorm'|'minMaxNorm'|'nonNeg'|'unitNorm'|string|tf.constraints.Constraint) Constraint function applied to the bias vector.
- dropout (number) Float number between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
- recurrentDropout (number) Float number between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Wrapper allowing a stack of RNN cells to behave as a single cell.
Used to implement efficient stacked RNNs.
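For example, a minimal sketch using tf.layers.stackedRNNCells (the unit counts and shapes are illustrative):
const cells = [
  tf.layers.simpleRNNCell({units: 4}),
  tf.layers.simpleRNNCell({units: 8}),
];
// Wrap the cells so they behave as a single cell, then build an RNN from it.
const stackedCell = tf.layers.stackedRNNCells({cells});
const rnn = tf.layers.rnn({cell: stackedCell, returnSequences: true});
const input = tf.input({shape: [10, 20]});
const output = rnn.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10, 8]: the last cell's number of units determines the output size.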
- args (Object)
- cells (tf.RNNCell[]) An Array of RNNCell instances.
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
- args (Object)
-
layer
(RNN)
The instance of an
RNN
layer to be wrapped. -
mergeMode
('sum'|'mul'|'concat'|'ave')
Mode by which outputs of the forward and backward RNNs are
combined. If
null
orundefined
, the output will not be combined, they will be returned as anArray
.If
undefined
(i.e., not provided), defaults to'concat'
. -
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
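The arguments above configure the Bidirectional wrapper. A minimal sketch (the layer sizes and shapes are illustrative, not from the original docs):
// Wrap an LSTM so the sequence is processed forwards and backwards,
// and concatenate the two outputs at each time step.
const bidi = tf.layers.bidirectional({
  layer: tf.layers.lstm({units: 8, returnSequences: true}),
  mergeMode: 'concat'
});
const input = tf.input({shape: [10, 20]});
const output = bidi.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10, 16]: forward and backward outputs (8 units each) concatenated.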
This wrapper applies a layer to every temporal slice of an input.
The input should be at least 3D, and the dimension at index 1 will be considered to be the temporal dimension.
Consider a batch of 32 samples, where each sample is a sequence of 10 vectors of 16 dimensions. The batch input shape of the layer is then [32, 10, 16], and the inputShape, not including the sample dimension, is [10, 16].
You can then use TimeDistributed
to apply a Dense
layer to each of the 10
timesteps, independently:
const model = tf.sequential();
model.add(tf.layers.timeDistributed({
layer: tf.layers.dense({units: 8}),
inputShape: [10, 16],
}));
// Now model.outputShape = [null, 10, 8].
// The output will then have shape `[32, 10, 8]`.
// In subsequent layers, there is no need for `inputShape`:
model.add(tf.layers.timeDistributed({layer: tf.layers.dense({units: 32})}));
console.log(JSON.stringify(model.outputs[0].shape));
// Now model.outputShape = [null, 10, 32].
The output will then have shape [32, 10, 32]
.
TimeDistributed
can be used with arbitrary layers, not just Dense
, for
instance a Conv2D
layer.
const model = tf.sequential();
model.add(tf.layers.timeDistributed({
layer: tf.layers.conv2d({filters: 64, kernelSize: [3, 3]}),
inputShape: [10, 299, 299, 3],
}));
console.log(JSON.stringify(model.outputs[0].shape));
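// [null, 10, 297, 297, 64]: assuming the default 'valid' padding and stride 1,
// each 299x299 frame shrinks to 297x297 under the 3x3 convolution, with 64 filters.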
- args (Object)
- layer (tf.layers.Layer) The layer to be wrapped.
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
A layer is a grouping of operations and weights that can be composed to create a tf.LayersModel.
Layers are constructed by using the functions under the tf.layers namespace.
Builds or executes a Layer's logic.
When called with tf.Tensor(s), executes the Layer's computation and returns Tensor(s). For example:
const denseLayer = tf.layers.dense({
units: 1,
kernelInitializer: 'zeros',
useBias: false
});
// Invoke the layer's apply() method with a [tf.Tensor](#class:Tensor) (with concrete
// numeric values).
const input = tf.ones([2, 2]);
const output = denseLayer.apply(input);
// The output's value is expected to be [[0], [0]], due to the fact that
// the dense layer has a kernel initialized to all-zeros and does not have
// a bias.
output.print();
When called with tf.SymbolicTensor(s), this will prepare the layer for future execution. This entails internal book-keeping on shapes of expected Tensors, wiring layers together, and initializing weights.
Calling apply with tf.SymbolicTensors is typically used during the building of non-tf.Sequential models. For example:
const flattenLayer = tf.layers.flatten();
const denseLayer = tf.layers.dense({units: 1});
// Use tf.layers.input() to obtain a SymbolicTensor as input to apply().
const input = tf.input({shape: [2, 2]});
const output1 = flattenLayer.apply(input);
// output1.shape is [null, 4]. The first dimension is the undetermined
// batch size. The second dimension comes from flattening the [2, 2]
// shape.
console.log(JSON.stringify(output1.shape));
// The output SymbolicTensor of the flatten layer can be used to call
// the apply() of the dense layer:
const output2 = denseLayer.apply(output1);
// output2.shape is [null, 1]. The first dimension is the undetermined
// batch size. The second dimension matches the number of units of the
// dense layer.
console.log(JSON.stringify(output2.shape));
// The input and output can be used to construct a model that consists
// of the flatten and dense layers.
const model = tf.model({inputs: input, outputs: output2});
- inputs (tf.Tensor|tf.Tensor[]|tf.SymbolicTensor|tf.SymbolicTensor[]) a tf.Tensor or tf.SymbolicTensor or an Array of them.
-
kwargs
(Kwargs)
Additional keyword arguments to be passed to
call()
. Optional
Counts the total number of numbers (e.g., float32, int32) in the weights.
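For example, a minimal sketch (the layer sizes are illustrative):
const model = tf.sequential();
model.add(tf.layers.dense({units: 3, inputShape: [4]}));
// The dense layer holds a 4x3 kernel plus a length-3 bias vector.
console.log(model.layers[0].countParams());  // 15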
Creates the layer weights.
Must be implemented on all layers that have weights.
Called when apply() is called to construct the weights.
-
inputShape
((null | number)[]|(null | number)[][])
A
Shape
or array ofShape
(unused).
Returns the current values of the weights of the layer.
- trainableOnly (boolean) Whether to get the values of only trainable weights. Optional
Sets the weights of the layer, from Tensors.
-
weights
(tf.Tensor[])
a list of Tensors. The number of arrays and their shape
must match number of the dimensions of the weights of the layer (i.e.
it should match the output of
getWeights
).
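For example, a minimal sketch that copies the weights of one dense layer into another of the same shape (the shapes are illustrative):
const model = tf.sequential();
model.add(tf.layers.dense({units: 2, inputShape: [3]}));
const source = model.layers[0];
const target = tf.layers.dense({units: 2});
target.build([null, 3]);                 // create the target's weight variables
target.setWeights(source.getWeights());  // number of tensors and shapes must match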
Adds a weight variable to the layer.
- name (string) Name of the new weight variable.
- shape ((null | number)[]) The shape of the weight.
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The dtype of the weight. Optional
- initializer (tf.initializers.Initializer) An initializer instance. Optional
- regularizer (Regularizer) A regularizer instance. Optional
- trainable (boolean) Whether the weight should be trained via backprop or not (assuming that the layer itself is also trainable). Optional
- constraint (tf.constraints.Constraint) An optional constraint instance to apply to the weight. Optional
Add losses to the layer.
The loss may potentially be conditional on some input tensors, for instance activity losses are conditional on the layer's inputs.
- losses (RegularizerFn|RegularizerFn[])
Computes the output shape of the layer.
Assumes that the layer will be built to match that input shape provided.
- inputShape ((null | number)[]|(null | number)[][]) A shape (tuple of integers) or a list of shape tuples (one per output tensor of the layer). Shape tuples can include null for free dimensions, instead of an integer.
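For example, a minimal sketch:
const flatten = tf.layers.flatten();
// A [null, 2, 2] input flattens to [null, 4]; null marks the free batch dimension.
console.log(JSON.stringify(flatten.computeOutputShape([null, 2, 2])));  // [null, 4]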
Returns the config of the layer.
A layer config is a TS dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by 'Container' (one layer of abstraction above).
Porting Note: The TS dictionary follows TS naming standards for keys, and uses tfjs-layers type-safe Enums. Serialization methods should use a helper function to convert to the pythonic storage standard. (see serialization_utils.convertTsToPythonic)
Attempt to dispose layer's weights.
This method decreases the reference count of the Layer object by 1.
A Layer is reference-counted. Its reference count is incremented by 1 the first time its apply() method is called and when it becomes a part of a new Node (through calling the apply() method on a tf.SymbolicTensor).
If the reference count of a Layer becomes 0, all the weights will be disposed and the underlying memory (e.g., the textures allocated in WebGL) will be freed.
Note: If the reference count is greater than 0 after the decrement, the weights of the Layer will not be disposed.
After a Layer is disposed, it cannot be used in calls such as apply()
,
getWeights()
or setWeights()
anymore.
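For example, a minimal sketch:
const layer = tf.layers.dense({units: 1});
layer.apply(tf.input({shape: [4]}));  // builds the layer; its reference count becomes 1
layer.dispose();                      // count drops to 0, so the weights are freed
// Further calls such as layer.apply(...) or layer.getWeights() now throw.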
An input layer is an entry point into a tf.LayersModel.
An InputLayer is generated automatically for tf.Sequential models by specifying the inputShape or batchInputShape for the first layer. It should not be specified explicitly. However, it can sometimes be useful, e.g., when constructing a sequential model from a subset of another sequential model's layers, as the code snippet below shows.
// Define a model which simply adds two inputs.
const model1 = tf.sequential();
model1.add(tf.layers.dense({inputShape: [4], units: 3, activation: 'relu'}));
model1.add(tf.layers.dense({units: 1, activation: 'sigmoid'}));
model1.summary();
model1.predict(tf.zeros([1, 4])).print();
// Construct another model, reusing the second layer of `model1` while
// not using the first layer of `model1`. Note that you cannot add the second
// layer of `model` directly as the first layer of the new sequential model,
// because doing so will lead to an error related to the fact that the layer
// is not an input layer. Instead, you need to create an `inputLayer` and add
// it to the new sequential model before adding the reused layer.
const model2 = tf.sequential();
// Use an inputShape that matches the input shape of `model1`'s second
// layer.
model2.add(tf.layers.inputLayer({inputShape: [3]}));
model2.add(model1.layers[1]);
model2.summary();
model2.predict(tf.zeros([1, 3])).print();
- args (Object)
- inputShape ((null | number)[]) Input shape, not including the batch axis.
- batchSize (number) Optional input batch size (integer or null).
- batchInputShape ((null | number)[]) Batch input shape, including the batch axis.
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') Datatype of the input.
- sparse (boolean) Whether the placeholder created is meant to be sparse.
- name (string) Name of the layer.
Zero-padding layer for 2D input (e.g., image).
This layer can add rows and columns of zeros at the top, bottom, left and right side of an image tensor.
Input shape: 4D tensor with shape:
- If dataFormat is "channelsLast": [batch, rows, cols, channels]
- If dataFormat is "channelsFirst": [batch, channels, rows, cols].
Output shape: 4D tensor with shape:
- If dataFormat is "channelsLast": [batch, paddedRows, paddedCols, channels]
- If dataFormat is "channelsFirst": [batch, channels, paddedRows, paddedCols].
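For example, a minimal sketch (the shapes and padding amounts are illustrative):
const model = tf.sequential();
// Pad 1 row on top/bottom and 2 columns on left/right of a 4x4 RGB input.
model.add(tf.layers.zeroPadding2d({
  padding: [[1, 1], [2, 2]],
  inputShape: [4, 4, 3]
}));
console.log(JSON.stringify(model.outputs[0].shape));
// [null, 6, 8, 3]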
- args (Object) Optional
- padding (number|[number, number]|[[number, number], [number, number]]) Integer, or Array of 2 integers, or Array of 2 Arrays, each of which is an Array of 2 integers.
- If integer, the same symmetric padding is applied to width and height.
- If Array of 2 integers, interpreted as two different symmetric values for height and width: [symmetricHeightPad, symmetricWidthPad].
- If Array of 2 Arrays, interpreted as: [[topPad, bottomPad], [leftPad, rightPad]].
-
dataFormat
('channelsFirst'|'channelsLast')
One of
'channelsLast'
(default) and'channelsFirst'
.The ordering of the dimensions in the inputs.
channelsLast
corresponds to inputs with shape[batch, height, width, channels]
whilechannelsFirst
corresponds to inputs with shape[batch, channels, height, width]
. -
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Applies Alpha Dropout to the input.
As it is a regularization layer, it is only active at training time.
Alpha Dropout is a Dropout
that keeps mean and variance of inputs
to their original values, in order to ensure the self-normalizing property
even after this dropout.
Alpha Dropout fits well to Scaled Exponential Linear Units
by randomly setting activations to the negative saturation value.
Arguments:
rate
: float, drop probability (as withDropout
). The multiplicative noise will have standard deviationsqrt(rate / (1 - rate))
.noise_shape
: A 1-DTensor
of typeint32
, representing the shape for randomly generated keep/drop flags.
Input shape:
Arbitrary. Use the keyword argument inputShape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.
Output shape: Same shape as input.
- args (Object)
- rate (number) drop probability.
-
noiseShape
((null | number)[])
A 1-D
Tensor
of typeint32
, representing the shape for randomly generated keep/drop flags. -
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
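A minimal sketch of the Alpha Dropout layer described above, used after selu activations (the sizes and rate are illustrative):
const model = tf.sequential();
model.add(tf.layers.dense({units: 16, activation: 'selu', inputShape: [8]}));
// Alpha Dropout preserves the mean and variance of its selu-activated inputs.
model.add(tf.layers.alphaDropout({rate: 0.1}));
model.add(tf.layers.dense({units: 1}));
model.summary();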
Apply multiplicative 1-centered Gaussian noise.
As it is a regularization layer, it is only active at training time.
Arguments:
rate
: float, drop probability (as withDropout
). The multiplicative noise will have standard deviationsqrt(rate / (1 - rate))
.
Input shape:
Arbitrary. Use the keyword argument inputShape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.
Output shape: Same shape as input.
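For example, a minimal sketch (the sizes and rate are illustrative):
const model = tf.sequential();
// Multiplies activations by 1-centered Gaussian noise during training only.
model.add(tf.layers.gaussianDropout({rate: 0.2, inputShape: [8]}));
model.add(tf.layers.dense({units: 1}));
model.summary();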
- args (Object)
- rate (number) drop probability.
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Apply additive zero-centered Gaussian noise.
As it is a regularization layer, it is only active at training time.
This is useful to mitigate overfitting (you could see it as a form of random data augmentation). Gaussian Noise (GS) is a natural choice as corruption process for real valued inputs.
Arguments
stddev: float, standard deviation of the noise distribution.
Input shape
Arbitrary. Use the keyword argument inputShape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.
Output shape
Same shape as input.
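For example, a minimal sketch (the sizes and stddev are illustrative):
const model = tf.sequential();
// Adds zero-centered Gaussian noise with the given standard deviation at training time.
model.add(tf.layers.gaussianNoise({stddev: 0.1, inputShape: [8]}));
model.add(tf.layers.dense({units: 1}));
model.summary();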
- args (Object)
- stddev (number) Standard Deviation.
-
inputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchInputShape
((null | number)[])
If defined, will be used to create an input layer to insert before this
layer. If both
inputShape
andbatchInputShape
are defined,batchInputShape
will be used. This argument is only applicable to input layers (the first layer of a model). -
batchSize
(number)
If
inputShape
is specified andbatchInputShape
is not specified,batchSize
is used to construct thebatchInputShape
:[batchSize, ...inputShape]
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
-
trainable
(boolean)
Whether the weights of this layer are updatable by
fit
. Defaults to true. - weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
Masks a sequence by using a mask value to skip timesteps.
If all features for a given sample timestep are equal to maskValue, then the sample timestep will be masked (skipped) in all downstream layers
(as long as they support masking).
If any downstream layer does not support masking yet receives such an input mask, an exception will be raised.
Arguments:
maskValue
: Either None or mask value to skip.
Input shape:
Arbitrary. Use the keyword argument inputShape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.
Output shape: Same shape as input.
- args (Object) Optional
- maskValue (number) Masking value. Defaults to 0.0.
- inputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchInputShape ((null | number)[]) If defined, will be used to create an input layer to insert before this layer. If both inputShape and batchInputShape are defined, batchInputShape will be used. This argument is only applicable to input layers (the first layer of a model).
- batchSize (number) If inputShape is specified and batchInputShape is not specified, batchSize is used to construct the batchInputShape: [batchSize, ...inputShape].
- dtype ('float32'|'int32'|'bool'|'complex64'|'string') The data-type for this layer. Defaults to 'float32'. This argument is only applicable to input layers (the first layer of a model).
- name (string) Name for this layer.
- trainable (boolean) Whether the weights of this layer are updatable by fit. Defaults to true.
- weights (tf.Tensor[]) Initial weight values of the layer.
- inputDType ('float32'|'int32'|'bool'|'complex64'|'string') Legacy support. Do not use for new code.
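A minimal sketch of masking padded timesteps ahead of a recurrent layer, assuming the layer is exposed as tf.layers.masking (shapes and values are illustrative):
// Timesteps whose features all equal maskValue are skipped by the LSTM.
const model = tf.sequential();
model.add(tf.layers.masking({maskValue: 0, inputShape: [3, 2]}));
model.add(tf.layers.lstm({units: 4}));
// The second timestep of this sample is all zeros, so it is masked.
const x = tf.tensor3d([[[1, 2], [0, 0], [3, 4]]]);
model.predict(x).print();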
To perform mathematical computation on Tensors, we use operations. Tensors are immutable, so all operations always return new Tensors and never modify input Tensors.
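For example, adding to a tensor returns a new tensor and leaves the original values untouched (a minimal sketch):
const x = tf.tensor1d([1, 2, 3]);
const y = x.add(tf.scalar(1)); // returns a new tensor
x.print(); // [1, 2, 3], unchanged
y.print(); // [2, 3, 4]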
Adds two tf.Tensors element-wise, A + B. Supports broadcasting.
const a = tf.tensor1d([1, 2, 3, 4]);
const b = tf.tensor1d([10, 20, 30, 40]);
a.add(b).print(); // or tf.add(a, b)
// Broadcast add a with b.
const a = tf.scalar(5);
const b = tf.tensor1d([10, 20, 30, 40]);
a.add(b).print(); // or tf.add(a, b)
- a (tf.Tensor|TypedArray|Array) The first tf.Tensor to add.
- b (tf.Tensor|TypedArray|Array) The second tf.Tensor to add. Must have the same type as a.
Subtracts two tf.Tensors element-wise, A - B. Supports broadcasting.
const a = tf.tensor1d([10, 20, 30, 40]);
const b = tf.tensor1d([1, 2, 3, 4]);
a.sub(b).print(); // or tf.sub(a, b)
// Broadcast subtract a with b.
const a = tf.tensor1d([10, 20, 30, 40]);
const b = tf.scalar(5);
a.sub(b).print(); // or tf.sub(a, b)
- a (tf.Tensor|TypedArray|Array) The first tf.Tensor to subtract from.
- b (tf.Tensor|TypedArray|Array) The second tf.Tensor to be subtracted. Must have the same dtype as a.
Multiplies two tf.Tensors element-wise, A * B. Supports broadcasting.
We also expose tf.mulStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).
const a = tf.tensor1d([1, 2, 3, 4]);
const b = tf.tensor1d([2, 3, 4, 5]);
a.mul(b).print(); // or tf.mul(a, b)
// Broadcast mul a with b.
const a = tf.tensor1d([1, 2, 3, 4]);
const b = tf.scalar(5);
a.mul(b).print(); // or tf.mul(a, b)
- a (tf.Tensor|TypedArray|Array) The first tensor to multiply.
- b (tf.Tensor|TypedArray|Array) The second tensor to multiply. Must have the same dtype as a.
Divides two tf.Tensors element-wise, A / B. Supports broadcasting.
const a = tf.tensor1d([1, 4, 9, 16]);
const b = tf.tensor1d([1, 2, 3, 4]);
a.div(b).print(); // or tf.div(a, b)
// Broadcast div a with b.
const a = tf.tensor1d([2, 4, 6, 8]);
const b = tf.scalar(2);
a.div(b).print(); // or tf.div(a, b)
- a (tf.Tensor|TypedArray|Array) The first tensor as the numerator.
- b (tf.Tensor|TypedArray|Array) The second tensor as the denominator. Must have the same dtype as a.
Adds a list of tf.Tensors element-wise, each with the same shape and dtype.
const a = tf.tensor1d([1, 2]);
const b = tf.tensor1d([3, 4]);
const c = tf.tensor1d([5, 6]);
tf.addN([a, b, c]).print();
- tensors (Array) A list of tensors with the same shape and dtype.
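As a sanity check, tf.addN over a list should match chaining binary adds (a minimal sketch):
const a = tf.tensor1d([1, 2]);
const b = tf.tensor1d([3, 4]);
const c = tf.tensor1d([5, 6]);
tf.addN([a, b, c]).print(); // [9, 12]
a.add(b).add(c).print();    // [9, 12], same result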
Divides two tf.Tensors element-wise, A / B. Supports broadcasting. Return 0 if denominator is 0.
const a = tf.tensor1d([1, 4, 9, 16]);
const b = tf.tensor1d([1, 2, 3, 4]);
const c = tf.tensor1d([0, 0, 0, 0]);
a.divNoNan(b).print(); // or tf.divNoNan(a, b)
a.divNoNan(c).print(); // or tf.divNoNan(a, c)
// Broadcast div a with b.
const a = tf.tensor1d([2, 4, 6, 8]);
const b = tf.scalar(2);
const c = tf.scalar(0);
a.divNoNan(b).print(); // or tf.divNoNan(a, b)
a.divNoNan(c).print(); // or tf.divNoNan(a, c)
- a (tf.Tensor|TypedArray|Array) The first tensor as the numerator.
- b (tf.Tensor|TypedArray|Array) The second tensor as the denominator. Must have the same dtype as a.
Divides two tf.Tensors element-wise, A / B. Supports broadcasting. The result is rounded with floor function.
const a = tf.tensor1d([1, 4, 9, 16]);
const b = tf.tensor1d([1, 2, 3, 4]);
a.floorDiv(b).print(); // or tf.floorDiv(a, b)
// Broadcast div a with b.
const a = tf.tensor1d([2, 4, 6, 8]);
const b = tf.scalar(2);
a.floorDiv(b).print(); // or tf.floorDiv(a, b)
- a (tf.Tensor|TypedArray|Array) The first tensor as the numerator.
- b (tf.Tensor|TypedArray|Array) The second tensor as the denominator. Must have the same dtype as a.
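The floor rounding is what distinguishes this op from tf.div for negative results; a small comparison sketch:
const a = tf.tensor1d([-5, 5]);
const b = tf.scalar(2);
a.div(b).print();      // [-2.5, 2.5]
a.floorDiv(b).print(); // [-3, 2], rounded toward negative infinity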
Returns the max of a and b (a > b ? a : b) element-wise. Supports broadcasting.
We also expose tf.maximumStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).
const a = tf.tensor1d([1, 4, 3, 16]);
const b = tf.tensor1d([1, 2, 9, 4]);
a.maximum(b).print(); // or tf.maximum(a, b)
// Broadcast maximum a with b.
const a = tf.tensor1d([2, 4, 6, 8]);
const b = tf.scalar(5);
a.maximum(b).print(); // or tf.maximum(a, b)
- a (tf.Tensor|TypedArray|Array) The first tensor.
- b (tf.Tensor|TypedArray|Array) The second tensor. Must have the same type as a.
Returns the min of a and b (a < b ? a : b) element-wise. Supports broadcasting.
We also expose minimumStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).
const a = tf.tensor1d([1, 4, 3, 16]);
const b = tf.tensor1d([1, 2, 9, 4]);
a.minimum(b).print(); // or tf.minimum(a, b)
// Broadcast minimum a with b.
const a = tf.tensor1d([2, 4, 6, 8]);
const b = tf.scalar(5);
a.minimum(b).print(); // or tf.minimum(a, b)
- a (tf.Tensor|TypedArray|Array) The first tensor.
- b (tf.Tensor|TypedArray|Array) The second tensor. Must have the same type as a.
Returns the mod of a and b element-wise.
floor(x / y) * y + mod(x, y) = x
Supports broadcasting.
We also expose tf.modStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).
const a = tf.tensor1d([1, 4, 3, 16]);
const b = tf.tensor1d([1, 2, 9, 4]);
a.mod(b).print(); // or tf.mod(a, b)
// Broadcast a mod b.
const a = tf.tensor1d([2, 4, 6, 8]);
const b = tf.scalar(5);
a.mod(b).print(); // or tf.mod(a, b)
- a (tf.Tensor|TypedArray|Array) The first tensor.
- b (tf.Tensor|TypedArray|Array) The second tensor. Must have the same type as a.
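The identity above can be checked numerically, including for a negative input (a minimal sketch, assuming the floor-mod semantics implied by that identity):
const x = tf.tensor1d([-5, 5]);
const y = tf.scalar(3);
x.mod(y).print();                           // [1, 2]
x.floorDiv(y).mul(y).add(x.mod(y)).print(); // [-5, 5], recovers x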
Computes the power of one tf.Tensor to another. Supports broadcasting.
Given a tf.Tensor x and a tf.Tensor y, this operation computes x^y for
corresponding elements in x and y. The result's dtype will be the upcasted
type of the base and exp dtypes.
const a = tf.tensor([[2, 3], [4, 5]])
const b = tf.tensor([[1, 2], [3, 0]]).toInt();
a.pow(b).print(); // or tf.pow(a, b)
const a = tf.tensor([[1, 2], [3, 4]])
const b = tf.tensor(2).toInt();
a.pow(b).print(); // or tf.pow(a, b)
We also expose powStrict which has the same signature as this op and asserts that base and exp are the same shape (does not broadcast).
- base (tf.Tensor|TypedArray|Array) The base tf.Tensor to pow element-wise.
- exp (tf.Tensor|TypedArray|Array) The exponent tf.Tensor to pow element-wise.
Returns (a - b) * (a - b) element-wise. Supports broadcasting.
const a = tf.tensor1d([1, 4, 3, 16]);
const b = tf.tensor1d([1, 2, 9, 4]);
a.squaredDifference(b).print(); // or tf.squaredDifference(a, b)
// Broadcast squared difference a with b.
const a = tf.tensor1d([2, 4, 6, 8]);
const b = tf.scalar(5);
a.squaredDifference(b).print(); // or tf.squaredDifference(a, b)
- a (tf.Tensor|TypedArray|Array) The first tensor.
- b (tf.Tensor|TypedArray|Array) The second tensor. Must have the same type as a.
Computes absolute value element-wise: abs(x)
const x = tf.tensor1d([-1, 2, -3, 4]);
x.abs().print(); // or tf.abs(x)
- x (tf.Tensor|TypedArray|Array) The input tf.Tensor.
Computes acos of the input tf.Tensor element-wise: acos(x)
const x = tf.tensor1d([0, 1, -1, .7]);
x.acos().print(); // or tf.acos(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes the inverse hyperbolic cos of the input tf.Tensor element-wise:
acosh(x)
const x = tf.tensor1d([10, 1, 3, 5.7]);
x.acosh().print(); // or tf.acosh(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes asin of the input tf.Tensor element-wise: asin(x)
const x = tf.tensor1d([0, 1, -1, .7]);
x.asin().print(); // or tf.asin(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes inverse hyperbolic sin of the input tf.Tensor element-wise:
asinh(x)
const x = tf.tensor1d([0, 1, -1, .7]);
x.asinh().print(); // or tf.asinh(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes atan of the input tf.Tensor element-wise: atan(x)
const x = tf.tensor1d([0, 1, -1, .7]);
x.atan().print(); // or tf.atan(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes arctangent of tf.Tensors a / b element-wise: atan2(a, b). Supports broadcasting.
const a = tf.tensor1d([1.0, 1.0, -1.0, .7]);
const b = tf.tensor1d([2.0, 13.0, 3.5, .21]);
tf.atan2(a, b).print()
- a (tf.Tensor|TypedArray|Array) The first tensor.
- b (tf.Tensor|TypedArray|Array) The second tensor. Must have the same dtype as a.
Computes inverse hyperbolic tan of the input tf.Tensor element-wise:
atanh(x)
const x = tf.tensor1d([0, .1, -.1, .7]);
x.atanh().print(); // or tf.atanh(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes ceiling of input tf.Tensor element-wise: ceil(x)
const x = tf.tensor1d([.6, 1.1, -3.3]);
x.ceil().print(); // or tf.ceil(x)
- x (tf.Tensor|TypedArray|Array) The input Tensor.
Clips values element-wise. max(min(x, clipValueMax), clipValueMin)
const x = tf.tensor1d([-1, 2, -3, 4]);
x.clipByValue(-2, 3).print(); // or tf.clipByValue(x, -2, 3)
- x (tf.Tensor|TypedArray|Array) The input tensor.
- clipValueMin (number) Lower-bound of range to be clipped to.
- clipValueMax (number) Upper-bound of range to be clipped to.
Computes cos of the input tf.Tensor element-wise: cos(x)
const x = tf.tensor1d([0, Math.PI / 2, Math.PI * 3 / 4]);
x.cos().print(); // or tf.cos(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes hyperbolic cos of the input tf.Tensor element-wise: cosh(x)
const x = tf.tensor1d([0, 1, -1, .7]);
x.cosh().print(); // or tf.cosh(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes exponential linear element-wise: x > 0 ? x : (e ^ x) - 1.
const x = tf.tensor1d([-1, 1, -3, 2]);
x.elu().print(); // or tf.elu(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes the Gauss error function of the input tf.Tensor element-wise: erf(x)
const x = tf.tensor1d([0, .1, -.1, .7]);
x.erf().print(); // or tf.erf(x);
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes exponential of the input tf.Tensor element-wise. e ^ x
const x = tf.tensor1d([1, 2, -3]);
x.exp().print(); // or tf.exp(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes exponential of the input tf.Tensor minus one element-wise.
e ^ x - 1
const x = tf.tensor1d([1, 2, -3]);
x.expm1().print(); // or tf.expm1(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes floor of input tf.Tensor element-wise: floor(x).
const x = tf.tensor1d([.6, 1.1, -3.3]);
x.floor().print(); // or tf.floor(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Returns which elements of x are finite.
const x = tf.tensor1d([NaN, Infinity, -Infinity, 0, 1]);
x.isFinite().print(); // or tf.isFinite(x)
- x (tf.Tensor|TypedArray|Array) The input Tensor.
Returns which elements of x are Infinity or -Infinity.
const x = tf.tensor1d([NaN, Infinity, -Infinity, 0, 1]);
x.isInf().print(); // or tf.isInf(x)
- x (tf.Tensor|TypedArray|Array) The input Tensor.
Returns which elements of x are NaN.
const x = tf.tensor1d([NaN, Infinity, -Infinity, 0, 1]);
x.isNaN().print(); // or tf.isNaN(x)
- x (tf.Tensor|TypedArray|Array) The input Tensor.
Computes leaky rectified linear element-wise.
See http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf
const x = tf.tensor1d([-1, 2, -3, 4]);
x.leakyRelu(0.1).print(); // or tf.leakyRelu(x, 0.1)
- x (tf.Tensor|TypedArray|Array) The input tensor.
- alpha (number) The scaling factor for negative values, defaults to 0.2. Optional
Computes natural logarithm of the input tf.Tensor element-wise: ln(x)
const x = tf.tensor1d([1, 2, Math.E]);
x.log().print(); // or tf.log(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes natural logarithm of the input tf.Tensor plus one
element-wise: ln(1 + x)
const x = tf.tensor1d([1, 2, Math.E - 1]);
x.log1p().print(); // or tf.log1p(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes log sigmoid of the input tf.Tensor element-wise: logSigmoid(x). For numerical stability, we use -tf.softplus(-x).
const x = tf.tensor1d([0, 1, -1, .7]);
x.logSigmoid().print(); // or tf.logSigmoid(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
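Since tf.softplus is also exposed, the stability rewrite can be verified directly (a minimal sketch):
const x = tf.tensor1d([0, 1, -1, .7]);
x.logSigmoid().print();                 // log(1 / (1 + e^-x))
tf.neg(tf.softplus(tf.neg(x))).print(); // same values, computed as -softplus(-x)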
Computes -1 * x element-wise.
const x = tf.tensor2d([1, 2, -2, 0], [2, 2]);
x.neg().print(); // or tf.neg(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes leaky rectified linear element-wise with parametric alphas.
x < 0 ? alpha * x : x
const x = tf.tensor1d([-1, 2, -3, 4]);
const alpha = tf.scalar(0.1);
x.prelu(alpha).print(); // or tf.prelu(x, alpha)
- x (tf.Tensor|TypedArray|Array) The input tensor.
- alpha (tf.Tensor|TypedArray|Array) Scaling factor for negative values.
Computes reciprocal of x element-wise: 1 / x
const x = tf.tensor1d([0, 1, 2]);
x.reciprocal().print(); // or tf.reciprocal(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.
Computes rectified linear element-wise: max(x, 0).
const x = tf.tensor1d([-1, 2, -3, 4]);
x.relu().print(); // or tf.relu(x)
- x (tf.Tensor|TypedArray|Array) The input tensor.