Functions for visualizing training in TensorBoard
Create a summary file writer for TensorBoard.
Example:
const tf = require('@tensorflow/tfjs-node');
const summaryWriter = tf.node.summaryFileWriter('/tmp/tfjs_tb_logdir');
for (let step = 0; step < 100; ++step) {
summaryWriter.scalar('dummyValue', Math.sin(2 * Math.PI * step / 8), step);
}
- logdir (string) Log directory in which the summary data will be written.
- maxQueue (number) Maximum queue length (default: 10). Optional
- flushMillis (number) Flush every __ milliseconds (default: 120e3, i.e. 120 seconds). Optional
- filenameSuffix (string) Suffix of the protocol buffer file names to be written in the logdir (default: .v2). Optional
Callback for logging to TensorBoard during training.
Writes the loss and metric values (if any) to the specified log directory (logdir), which can be ingested and visualized by TensorBoard.
This callback is usually passed as a callback to tf.Model.fit() or tf.Model.fitDataset() calls during model training. The frequency at which the values are logged can be controlled with the updateFreq field of the configuration object (2nd argument).
Usage example:
// Construct a toy multilayer-perceptron regressor for demo purposes.
const model = tf.sequential();
model.add(
tf.layers.dense({units: 100, activation: 'relu', inputShape: [200]}));
model.add(tf.layers.dense({units: 1}));
model.compile({
loss: 'meanSquaredError',
optimizer: 'sgd',
metrics: ['MAE']
});
// Generate some random fake data for demo purposes.
const xs = tf.randomUniform([10000, 200]);
const ys = tf.randomUniform([10000, 1]);
const valXs = tf.randomUniform([1000, 200]);
const valYs = tf.randomUniform([1000, 1]);
// Start model training process.
await model.fit(xs, ys, {
epochs: 100,
validationData: [valXs, valYs],
// Add the tensorBoard callback here.
callbacks: tf.node.tensorBoard('/tmp/fit_logs_1')
});
Then you can use the following commands to point TensorBoard at the logdir:
pip install tensorboard # Unless you've already installed it.
tensorboard --logdir /tmp/fit_logs_1
- logdir (string) Directory to which the logs will be written. Optional
- args (Object) Optional configuration arguments. Optional
- updateFreq ('batch'|'epoch') The frequency at which loss and metric values are written to logs. Currently supported options are:
  - 'batch': Write logs at the end of every batch of training, in addition to the end of every epoch of training.
  - 'epoch': Write logs at the end of every epoch of training.
  Note that writing logs too often slows down the training. Default: 'epoch'.
The environment contains evaluated flags as well as the registered platform.
This is always used as a global singleton and can be retrieved with
tf.env()
.
Returns the current environment (a global singleton).
The environment object contains the evaluated feature values as well as the active platform.
Asserts that the expression is true. Otherwise throws an error with the provided message.
const x = 2;
tf.util.assert(x === 2, 'x is not 2');
- expr (boolean) The expression to assert (as a boolean).
- msg (() => string) A function that returns the message to report when throwing an error. We use a function for performance reasons.
Creates a new array with randomized indices up to a given quantity.
const randomTen = tf.util.createShuffledIndices(10);
console.log(randomTen);
- n (number)
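The behavior can be sketched in plain JavaScript: a Uint32Array holding 0 through n-1 in random order. This sketch assumes a Fisher-Yates shuffle; the actual tfjs implementation may differ.

```javascript
// Sketch of createShuffledIndices-like behavior; not the tfjs source.
function createShuffledIndices(n) {
  const indices = new Uint32Array(n);
  for (let i = 0; i < n; ++i) {
    indices[i] = i;
  }
  // Fisher-Yates shuffle over the index array.
  for (let i = n - 1; i > 0; --i) {
    const j = Math.floor(Math.random() * (i + 1));
    const tmp = indices[i];
    indices[i] = indices[j];
    indices[j] = tmp;
  }
  return indices;
}

const randomTen = createShuffledIndices(10);
console.log(randomTen); // each index 0-9 appears exactly once, in random order
```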
Decodes the provided bytes into a string using the provided encoding scheme.
- bytes (Uint8Array) The bytes to decode.
- encoding (string) The encoding scheme. Defaults to utf-8. Optional
Encodes the provided string into bytes using the provided encoding scheme.
- s (string) The string to encode.
- encoding (string) The encoding scheme. Defaults to utf-8. Optional
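For utf-8, the same round trip can be sketched with the TextEncoder/TextDecoder built into modern Node.js and browsers; this illustrates the bytes-to-string contract, not the tfjs internals.

```javascript
// Round-trip a string through UTF-8 bytes with the built-in codecs.
const bytes = new TextEncoder().encode('hello');    // Uint8Array of UTF-8 bytes
const text = new TextDecoder('utf-8').decode(bytes);
console.log(bytes.length, text); // 5 'hello'
```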
Returns a platform-specific implementation of fetch.
If fetch is defined on the global object (window, process, etc.), tf.util.fetch returns that function. If not, tf.util.fetch returns a platform-specific solution.
const resource = await tf.util.fetch('https://unpkg.com/@tensorflow/tfjs');
// handle response
- path (string)
- requestInits (RequestInit) Optional
Flattens an arbitrarily nested array.
const a = [[1, 2], [3, 4], [5, [6, [7]]]];
const flat = tf.util.flatten(a);
console.log(flat);
- arr (number|boolean|string|Promise<number>|TypedArray|RecursiveArray<number|boolean|string|Promise<number>|TypedArray>) The nested array to flatten.
- result (number|boolean|string|Promise<number>|TypedArray[]) The destination array which holds the elements. Optional
- skipTypedArray (boolean) If true, avoids flattening the typed arrays. Defaults to false. Optional
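The recursive behavior can be sketched in plain JavaScript, assuming only nested plain arrays (the real utility also accepts TypedArrays and primitives):

```javascript
// Sketch of recursive flattening for plain nested arrays; not the tfjs source.
function flatten(arr, result = []) {
  for (const item of arr) {
    if (Array.isArray(item)) {
      flatten(item, result); // recurse into nested arrays
    } else {
      result.push(item); // leaf value: append to the accumulator
    }
  }
  return result;
}

console.log(flatten([[1, 2], [3, 4], [5, [6, [7]]]])); // [1, 2, 3, 4, 5, 6, 7]
```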
Returns the current high-resolution time in milliseconds relative to an arbitrary time in the past. It works across different platforms (node.js, browsers).
console.log(tf.util.now());
Shuffles the array in-place using the Fisher-Yates algorithm.
const a = [1, 2, 3, 4, 5];
tf.util.shuffle(a);
console.log(a);
- array (any[]|Uint32Array|Int32Array|Float32Array) The array to shuffle in-place.
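The in-place behavior can be sketched with a plain Fisher-Yates loop, the algorithm the documentation names:

```javascript
// In-place Fisher-Yates shuffle, mirroring the documented behavior.
function shuffle(array) {
  for (let i = array.length - 1; i > 0; --i) {
    const j = Math.floor(Math.random() * (i + 1)); // pick from the unshuffled prefix
    const tmp = array[i];
    array[i] = array[j];
    array[j] = tmp;
  }
}

const a = [1, 2, 3, 4, 5];
shuffle(a);
console.log(a); // same five elements, random order
```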
Returns the size (number of elements) of the tensor given its shape.
const shape = [3, 4, 2];
const size = tf.util.sizeFromShape(shape);
console.log(size);
- shape (number[])
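The size is simply the product of all dimensions, so the sketch is a one-line reduce (note the identity 1, which makes an empty shape, a scalar, yield size 1):

```javascript
// Product of all dimensions; an empty shape [] yields 1 (a scalar).
function sizeFromShape(shape) {
  return shape.reduce((size, dim) => size * dim, 1);
}

console.log(sizeFromShape([3, 4, 2])); // 24
console.log(sizeFromShape([]));        // 1
```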
Decode the first frame of a BMP-encoded image to a 3D Tensor of dtype int32.
- contents (Uint8Array) The BMP-encoded image in an Uint8Array.
- channels (number) An optional int. Defaults to 0. Accepted values are 0: use the number of channels in the BMP-encoded image. 3: output an RGB image. 4: output an RGBA image. Optional
Decode the frame(s) of a GIF-encoded image to a 4D Tensor of dtype int32.
- contents (Uint8Array) The GIF-encoded image in an Uint8Array.
Given the encoded bytes of an image, it returns a 3D or 4D tensor of the decoded image. Supports BMP, GIF, JPEG and PNG formats.
- content (Uint8Array) The encoded image in an Uint8Array.
- channels (number) An optional int. Defaults to 0, use the number of channels in the image. Number of color channels for the decoded image. It is used when image is type Png, Bmp, or Jpeg. Optional
- dtype (string) The data type of the result. Only int32 is supported at this time. Optional
- expandAnimations (boolean) A boolean which controls the shape of the returned op's output. If True, the returned op will produce a 3-D tensor for PNG, JPEG, and BMP files, and a 4-D tensor for all GIFs, whether animated or not. If False, the returned op will produce a 3-D tensor for all file types and will truncate animated GIFs to the first frame. Optional
Decode a JPEG-encoded image to a 3D Tensor of dtype int32.
- contents (Uint8Array) The JPEG-encoded image in an Uint8Array.
- channels (number) An optional int. Defaults to 0. Accepted values are 0: use the number of channels in the JPEG-encoded image. 1: output a grayscale image. 3: output an RGB image. Optional
- ratio (number) An optional int. Defaults to 1. Downscaling ratio. It is used when image is type Jpeg. Optional
- fancyUpscaling (boolean) An optional bool. Defaults to True. If true use a slower but nicer upscaling of the chroma planes. It is used when image is type Jpeg. Optional
- tryRecoverTruncated (boolean) An optional bool. Defaults to False. If true try to recover an image from truncated input. It is used when image is type Jpeg. Optional
- acceptableFraction (number) An optional float. Defaults to 1. The minimum required fraction of lines before a truncated input is accepted. It is used when image is type Jpeg. Optional
- dctMethod (string) An optional string. Defaults to "". string specifying a hint about the algorithm used for decompression. Defaults to "" which maps to a system-specific default. Currently valid values are ["INTEGER_FAST", "INTEGER_ACCURATE"]. The hint may be ignored (e.g., the internal jpeg library changes to a version that does not have that specific option.) It is used when image is type Jpeg. Optional
Decode a PNG-encoded image to a 3D Tensor of dtype int32.
- contents (Uint8Array) The PNG-encoded image in an Uint8Array.
- channels (number) An optional int. Defaults to 0. Accepted values are 0: use the number of channels in the PNG-encoded image. 1: output a grayscale image. 3: output an RGB image. 4: output an RGBA image. Optional
- dtype (string) The data type of the result. Only int32 is supported at this time. Optional
Encodes an image tensor to JPEG.
- image (Tensor3D) A 3-D uint8 Tensor of shape [height, width, channels].
- format (''|'grayscale'|'rgb') An optional string from: "", "grayscale", "rgb". Defaults to "". Per pixel image format. Optional
  - '': Use a default format based on the number of channels in the image.
  - 'grayscale': Output a grayscale JPEG image. The channels dimension of image must be 1.
  - 'rgb': Output an RGB JPEG image. The channels dimension of image must be 3.
- quality (number) An optional int. Defaults to 95. Quality of the compression from 0 to 100 (higher is better and slower). Optional
- progressive (boolean) An optional bool. Defaults to False. If True, create a JPEG that loads progressively (coarse to fine). Optional
- optimizeSize (boolean) An optional bool. Defaults to False. If True, spend CPU/RAM to reduce size with no quality change. Optional
- chromaDownsampling (boolean) An optional bool. Defaults to True. See http://en.wikipedia.org/wiki/Chroma_subsampling. Optional
- densityUnit ('in'|'cm') An optional string from: "in", "cm". Defaults to "in". Unit used to specify x_density and y_density: pixels per inch ('in') or centimeter ('cm'). Optional
- xDensity (number) An optional int. Defaults to 300. Horizontal pixels per density unit. Optional
- yDensity (number) An optional int. Defaults to 300. Vertical pixels per density unit. Optional
- xmpMetadata (string) An optional string. Defaults to "". If not empty, embed this XMP metadata in the image header. Optional
Encodes an image tensor to PNG.
- image (Tensor3D) A 3-D uint8 Tensor of shape [height, width, channels].
- compression (number) An optional int. Defaults to -1. Compression level. Optional
A tf.TFSavedModel is a signature loaded from a SavedModel metagraph, and allows inference execution.
Delete the SavedModel from nodeBackend, and delete the corresponding session in the C++ backend if that session is only used by this TFSavedModel.
Execute the inference for the input tensors.
- inputs (Tensor|Tensor[]|NamedTensorMap)
- config (ModelPredictConfig) Prediction configuration for specifying the batch size. Optional
Execute the inference for the input tensors and return activation values for specified output node names without batching.
- inputs (Tensor|Tensor[]|NamedTensorMap)
- outputs (string|string[]) string|string[]. List of output node names to retrieve activation from.
Inspect the MetaGraphs of the SavedModel from the provided path. This function will return an array of MetaGraphInfo objects.
- path (string) Path to SavedModel folder.
Load a TensorFlow SavedModel from disk. The TensorFlow SavedModel format is different from the TensorFlow.js model format. A SavedModel is a directory containing serialized signatures and the states needed to run them. The directory has a saved_model.pb (or saved_model.pbtxt) file storing the actual TensorFlow program, or model, and a set of named signatures, each identifying a function. The directory also has a variables subdirectory containing a standard training checkpoint, and it may have an assets subdirectory containing files used by the TensorFlow graph, for example text files used to initialize vocabulary tables. The supported datatypes are: float32, int32, complex64, string. For more information, see this guide: https://www.tensorflow.org/guide/saved_model.
- path (string) The path to the SavedModel.
- tags (string[]) The tags of the MetaGraph to load. The available tags of a SavedModel can be retrieved through tf.node.getMetaGraphsFromSavedModel() API. Defaults to ['serve']. Optional
- signature (string) The name of the SignatureDef to load. The available SignatureDefs of a SavedModel can be retrieved through tf.node.getMetaGraphsFromSavedModel() API. Defaults to 'serving_default'. Optional