Functions for visualizing training in TensorBoard

tf.node.summaryFileWriter (logdir, maxQueue?, flushMillis?, filenameSuffix?) function Source

Create a summary file writer for TensorBoard.

Example:

const tf = require('@tensorflow/tfjs-node');

const summaryWriter = tf.node.summaryFileWriter('/tmp/tfjs_tb_logdir');

for (let step = 0; step < 100; ++step) {
  summaryWriter.scalar('dummyValue', Math.sin(2 * Math.PI * step / 8), step);
}
Parameters:
  • logdir (string) Log directory in which the summary data will be written.
  • maxQueue (number) Maximum queue length (default: 10). Optional
  • flushMillis (number) Flush every __ milliseconds (default: 120e3, i.e., 120 seconds). Optional
  • filenameSuffix (string) Suffix of the protocol buffer file names to be written in the logdir (default: .v2). Optional
Returns: SummaryFileWriter
tf.node.tensorBoard (logdir?, args?) function Source

Callback for logging to TensorBoard during training.

Writes the loss and metric values (if any) to the specified log directory (logdir) which can be ingested and visualized by TensorBoard. This callback is usually passed as a callback to tf.Model.fit() or tf.Model.fitDataset() calls during model training. The frequency at which the values are logged can be controlled with the updateFreq field of the configuration object (2nd argument).

Usage example:

// Construct a toy multilayer-perceptron regressor for demo purposes.
const model = tf.sequential();
model.add(
    tf.layers.dense({units: 100, activation: 'relu', inputShape: [200]}));
model.add(tf.layers.dense({units: 1}));
model.compile({
  loss: 'meanSquaredError',
  optimizer: 'sgd',
  metrics: ['MAE']
});

// Generate some random fake data for demo purposes.
const xs = tf.randomUniform([10000, 200]);
const ys = tf.randomUniform([10000, 1]);
const valXs = tf.randomUniform([1000, 200]);
const valYs = tf.randomUniform([1000, 1]);

// Start model training process.
await model.fit(xs, ys, {
  epochs: 100,
  validationData: [valXs, valYs],
  // Add the tensorBoard callback here.
  callbacks: tf.node.tensorBoard('/tmp/fit_logs_1')
});

Then you can use the following commands to point TensorBoard at the logdir:

pip install tensorboard  # Unless you've already installed it.
tensorboard --logdir /tmp/fit_logs_1
Parameters:
  • logdir (string) Directory to which the logs will be written. Optional
  • args (Object) Optional configuration arguments. Optional
  • updateFreq ('batch'|'epoch') The frequency at which loss and metric values are written to logs.

    Currently supported options are:

    • 'batch': Write logs at the end of every batch of training, in addition to the end of every epoch of training.
    • 'epoch': Write logs at the end of every epoch of training.

    Note that writing logs too often slows down the training.

    Default: 'epoch'.

Returns: TensorBoardCallback
tf.node.decodeBmp (contents, channels?) function Source

Decode the first frame of a BMP-encoded image to a 3D Tensor of dtype int32.

Parameters:
  • contents (Uint8Array) The BMP-encoded image in a Uint8Array.
  • channels (number) An optional int. Defaults to 0. Accepted values are 0: use the number of channels in the BMP-encoded image. 3: output an RGB image. 4: output an RGBA image. Optional
Returns: Tensor3D
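
Example (a minimal sketch; the file path is a placeholder):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Read the raw BMP bytes; fs.readFileSync returns a Buffer, which is a Uint8Array.
const bmpBytes = new Uint8Array(fs.readFileSync('/tmp/image.bmp'));
// Decode into an int32 Tensor3D of shape [height, width, channels].
const imageTensor = tf.node.decodeBmp(bmpBytes);
console.log(imageTensor.shape);
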
tf.node.decodeGif (contents) function Source

Decode the frame(s) of a GIF-encoded image to a 4D Tensor of dtype int32.

Parameters:
  • contents (Uint8Array) The GIF-encoded image in a Uint8Array.
Returns: Tensor4D
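
Example (a minimal sketch; the file path is a placeholder):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Read the raw GIF bytes from disk.
const gifBytes = new Uint8Array(fs.readFileSync('/tmp/animation.gif'));
// Decode every frame into an int32 Tensor4D of shape
// [numFrames, height, width, channels].
const frames = tf.node.decodeGif(gifBytes);
console.log(frames.shape);
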
tf.node.decodeImage (content, channels?, dtype?, expandAnimations?) function Source

Given the encoded bytes of an image, it returns a 3D or 4D tensor of the decoded image. Supports BMP, GIF, JPEG and PNG formats.

Parameters:
  • content (Uint8Array) The encoded image in a Uint8Array.
  • channels (number) An optional int. Number of color channels for the decoded image. Defaults to 0 (use the number of channels in the image). It is used when the image is of type PNG, BMP, or JPEG. Optional
  • dtype (string) The data type of the result. Only int32 is supported at this time. Optional
  • expandAnimations (boolean) A boolean which controls the shape of the returned op's output. If True, the returned op will produce a 3-D tensor for PNG, JPEG, and BMP files, and a 4-D tensor for all GIFs, whether animated or not. If False, the returned op will produce a 3-D tensor for all file types and will truncate animated GIFs to the first frame. Optional
Returns: Tensor3D|Tensor4D
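
Example (a minimal sketch; the file path is a placeholder):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// decodeImage infers the format (BMP, GIF, JPEG or PNG) from the bytes.
const bytes = new Uint8Array(fs.readFileSync('/tmp/input.png'));
// With the defaults, a GIF decodes to a Tensor4D and the other formats
// decode to a Tensor3D of dtype int32.
const decoded = tf.node.decodeImage(bytes);
console.log(decoded.shape);
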
tf.node.decodeJpeg (contents, channels?, ratio?, fancyUpscaling?, tryRecoverTruncated?, acceptableFraction?, dctMethod?) function Source

Decode a JPEG-encoded image to a 3D Tensor of dtype int32.

Parameters:
  • contents (Uint8Array) The JPEG-encoded image in a Uint8Array.
  • channels (number) An optional int. Defaults to 0. Accepted values are 0: use the number of channels in the JPEG-encoded image. 1: output a grayscale image. 3: output an RGB image. Optional
  • ratio (number) An optional int. Defaults to 1. Downscaling ratio. It is used when image is type Jpeg. Optional
  • fancyUpscaling (boolean) An optional bool. Defaults to True. If true use a slower but nicer upscaling of the chroma planes. It is used when image is type Jpeg. Optional
  • tryRecoverTruncated (boolean) An optional bool. Defaults to False. If true try to recover an image from truncated input. It is used when image is type Jpeg. Optional
  • acceptableFraction (number) An optional float. Defaults to 1. The minimum required fraction of lines before a truncated input is accepted. It is used when image is type Jpeg. Optional
  • dctMethod (string) An optional string. Defaults to "". string specifying a hint about the algorithm used for decompression. Defaults to "" which maps to a system-specific default. Currently valid values are ["INTEGER_FAST", "INTEGER_ACCURATE"]. The hint may be ignored (e.g., the internal jpeg library changes to a version that does not have that specific option.) It is used when image is type Jpeg. Optional
Returns: Tensor3D
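
Example (a minimal sketch; the file path is a placeholder):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Decode a JPEG to an RGB tensor (channels = 3), downscaled by a ratio of 2.
const jpegBytes = new Uint8Array(fs.readFileSync('/tmp/photo.jpg'));
const rgb = tf.node.decodeJpeg(jpegBytes, 3, 2);
console.log(rgb.shape);  // Approximately [height / 2, width / 2, 3].
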
tf.node.decodePng (contents, channels?, dtype?) function Source

Decode a PNG-encoded image to a 3D Tensor of dtype int32.

Parameters:
  • contents (Uint8Array) The PNG-encoded image in a Uint8Array.
  • channels (number) An optional int. Defaults to 0. Accepted values are 0: use the number of channels in the PNG-encoded image. 1: output a grayscale image. 3: output an RGB image. 4: output an RGBA image. Optional
  • dtype (string) The data type of the result. Only int32 is supported at this time. Optional
Returns: Tensor3D
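
Example (a minimal sketch; the file path is a placeholder):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Decode a PNG, forcing a single grayscale channel.
const pngBytes = new Uint8Array(fs.readFileSync('/tmp/input.png'));
const gray = tf.node.decodePng(pngBytes, 1);
console.log(gray.shape);  // [height, width, 1]
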
tf.node.encodeJpeg (image, format?, quality?, progressive?, optimizeSize?, chromaDownsampling?, densityUnit?, xDensity?, yDensity?, xmpMetadata?) function Source

Encodes an image tensor to JPEG.

Parameters:
  • image (Tensor3D) A 3-D uint8 Tensor of shape [height, width, channels].
  • format (''|'grayscale'|'rgb') An optional string from: "", "grayscale", "rgb". Defaults to "". Per pixel image format.

    • '': Use a default format based on the number of channels in the image.
    • grayscale: Output a grayscale JPEG image. The channels dimension of image must be 1.
    • rgb: Output an RGB JPEG image. The channels dimension of image must be 3.
    Optional
  • quality (number) An optional int. Defaults to 95. Quality of the compression from 0 to 100 (higher is better and slower). Optional
  • progressive (boolean) An optional bool. Defaults to False. If True, create a JPEG that loads progressively (coarse to fine). Optional
  • optimizeSize (boolean) An optional bool. Defaults to False. If True, spend CPU/RAM to reduce size with no quality change. Optional
  • chromaDownsampling (boolean) An optional bool. Defaults to True. See http://en.wikipedia.org/wiki/Chroma_subsampling. Optional
  • densityUnit ('in'|'cm') An optional string from: "in", "cm". Defaults to "in". Unit used to specify x_density and y_density: pixels per inch ('in') or centimeter ('cm'). Optional
  • xDensity (number) An optional int. Defaults to 300. Horizontal pixels per density unit. Optional
  • yDensity (number) An optional int. Defaults to 300. Vertical pixels per density unit. Optional
  • xmpMetadata (string) An optional string. Defaults to "". If not empty, embed this XMP metadata in the image header. Optional
Returns: Promise<Uint8Array>
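
Example (a minimal sketch; the output path is a placeholder):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Build a tiny 2x2 RGB image tensor and encode it as a JPEG at quality 90.
const imageTensor = tf.tensor3d(
    [[[255, 0, 0], [0, 255, 0]], [[0, 0, 255], [255, 255, 255]]],
    [2, 2, 3], 'int32');
tf.node.encodeJpeg(imageTensor, '', 90).then((jpegBytes) => {
  fs.writeFileSync('/tmp/out.jpg', jpegBytes);
});
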
tf.node.encodePng (image, compression?) function Source

Encodes an image tensor to PNG.

Parameters:
  • image (Tensor3D) A 3-D uint8 Tensor of shape [height, width, channels].
  • compression (number) An optional int. Defaults to -1. Compression level. Optional
Returns: Promise<Uint8Array>
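
Example (a minimal sketch; the output path is a placeholder):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Build a tiny 2x2 RGB image tensor and encode it as a PNG with the
// default compression level.
const imageTensor = tf.tensor3d(
    [[[255, 0, 0], [0, 255, 0]], [[0, 0, 255], [255, 255, 255]]],
    [2, 2, 3], 'int32');
tf.node.encodePng(imageTensor).then((pngBytes) => {
  fs.writeFileSync('/tmp/out.png', pngBytes);
});
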
tf.node.TFSavedModel extends InferenceModel class Source

A tf.TFSavedModel is a signature loaded from a SavedModel metagraph, and allows inference execution.

dispose () method Source

Delete the SavedModel from nodeBackend and delete corresponding session in the C++ backend if the session is only used by this TFSavedModel.

Returns: void
predict (inputs, config?) method Source

Execute the inference for the input tensors.

Parameters:
  • inputs (Tensor|Tensor[]|NamedTensorMap)
  • config (ModelPredictConfig) Prediction configuration for specifying the batch size. Optional
Returns: Tensor|Tensor[]|NamedTensorMap
execute (inputs, outputs) method Source

Execute the inference for the input tensors and return activation values for specified output node names without batching.

Parameters:
  • inputs (Tensor|Tensor[]|NamedTensorMap)
  • outputs (string|string[]) List of output node names to retrieve activation from.
Returns: Tensor|Tensor[]

tf.node.getMetaGraphsFromSavedModel (path) function Source

Inspect the MetaGraphs of the SavedModel from the provided path. This function will return an array of MetaGraphInfo objects.

Parameters:
  • path (string) Path to SavedModel folder.
Returns: Promise<MetaGraph[]>
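
Example (a minimal sketch; the SavedModel path is a placeholder, and the tags and signatureDefs field names are assumed from the returned MetaGraph objects):

const tf = require('@tensorflow/tfjs-node');

// List the tags and SignatureDef names available in a SavedModel directory.
const metaGraphs = await tf.node.getMetaGraphsFromSavedModel('/tmp/my_saved_model');
for (const metaGraph of metaGraphs) {
  console.log(metaGraph.tags);
  console.log(Object.keys(metaGraph.signatureDefs));
}
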
tf.node.loadSavedModel (path, tags?, signature?) function Source

Load a TensorFlow SavedModel from disk. A TensorFlow SavedModel is different from the TensorFlow.js model format. A SavedModel is a directory containing serialized signatures and the state needed to run them. The directory has a saved_model.pb (or saved_model.pbtxt) file storing the actual TensorFlow program, or model, and a set of named signatures, each identifying a function. The directory also has a variables directory containing a standard training checkpoint, and may have an assets directory containing files used by the TensorFlow graph, for example text files used to initialize vocabulary tables. The supported data types are float32, int32, complex64, and string. For more information, see this guide: https://www.tensorflow.org/guide/saved_model.

Parameters:
  • path (string) The path to the SavedModel.
  • tags (string[]) The tags of the MetaGraph to load. The available tags of a SavedModel can be retrieved through the tf.node.getMetaGraphsFromSavedModel() API. Defaults to ['serve']. Optional
  • signature (string) The name of the SignatureDef to load. The available SignatureDefs of a SavedModel can be retrieved through the tf.node.getMetaGraphsFromSavedModel() API. Defaults to 'serving_default'. Optional
Returns: Promise<tf.node.TFSavedModel>
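
Example (a minimal sketch; the SavedModel path and input shape are placeholders), also showing the predict() and dispose() methods of the tf.node.TFSavedModel class above:

const tf = require('@tensorflow/tfjs-node');

// Load the 'serve' MetaGraph's 'serving_default' signature.
const model = await tf.node.loadSavedModel(
    '/tmp/my_saved_model', ['serve'], 'serving_default');

// Run inference; the input shape must match the loaded signature.
const input = tf.zeros([1, 224, 224, 3]);
const output = model.predict(input);
console.log(output);

// Release the underlying session when it is no longer needed.
model.dispose();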