
Functions for visualizing training in TensorBoard

tf.node.summaryFileWriter (logdir, maxQueue?, flushMillis?, filenameSuffix?) function

Create a summary file writer for TensorBoard.

Example:

const tf = require('@tensorflow/tfjs-node');

const summaryWriter = tf.node.summaryFileWriter('/tmp/tfjs_tb_logdir');

for (let step = 0; step < 100; ++step) {
  summaryWriter.scalar('dummyValue', Math.sin(2 * Math.PI * step / 8), step);
}
Parameters:
  • logdir (string) Log directory in which the summary data will be written.
  • maxQueue (number) Maximum queue length (default: 10). Optional
  • flushMillis (number) Flush interval in milliseconds (default: 120e3, i.e., 120 seconds). Optional
  • filenameSuffix (string) Suffix of the protocol buffer file names to be written in the logdir (default: .v2). Optional
Returns: SummaryFileWriter
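
The optional arguments and the writer's flush() method give some control over when summary data reaches disk. A minimal sketch, assuming the returned SummaryFileWriter exposes scalar() and flush(); the directory name and tuning values are illustrative, not recommendations:

const tf = require('@tensorflow/tfjs-node');

// Queue at most 20 pending summaries and flush every 30 seconds
// (illustrative values; the defaults are 10 and 120e3).
const writer = tf.node.summaryFileWriter('/tmp/tfjs_tb_logdir', 20, 30e3, '.v2');

for (let step = 0; step < 100; ++step) {
  writer.scalar('loss', Math.exp(-step / 20), step);
}

// Force any queued summaries to be written to disk right away.
writer.flush();

Calling flush() before the process exits helps ensure the last queued values appear in TensorBoard.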
tf.node.tensorBoard (logdir?, args?) function

Callback for logging to TensorBoard during training.

Writes the loss and metric values (if any) to the specified log directory (logdir), from which they can be ingested and visualized by TensorBoard. This callback is usually passed via the callbacks option of tf.Model.fit() or tf.Model.fitDataset() during model training. The frequency at which the values are logged can be controlled with the updateFreq field of the configuration object (the 2nd argument).

Usage example:

// Construct a toy multilayer-perceptron regressor for demo purposes.
const model = tf.sequential();
model.add(
    tf.layers.dense({units: 100, activation: 'relu', inputShape: [200]}));
model.add(tf.layers.dense({units: 1}));
model.compile({
  loss: 'meanSquaredError',
  optimizer: 'sgd',
  metrics: ['MAE']
});

// Generate some random fake data for demo purposes.
const xs = tf.randomUniform([10000, 200]);
const ys = tf.randomUniform([10000, 1]);
const valXs = tf.randomUniform([1000, 200]);
const valYs = tf.randomUniform([1000, 1]);

// Start the model training process.
await model.fit(xs, ys, {
  epochs: 100,
  validationData: [valXs, valYs],
  // Add the tensorBoard callback here.
  callbacks: tf.node.tensorBoard('/tmp/fit_logs_1')
});

Then you can use the following commands to point TensorBoard at the log directory:

pip install tensorboard  # Unless you've already installed it.
tensorboard --logdir /tmp/fit_logs_1
Parameters:
  • logdir (string) Directory to which the logs will be written. Optional
  • args (Object) Optional configuration arguments. Optional
    • updateFreq ('batch'|'epoch') The frequency at which loss and metric values are written to logs.

      Currently supported options are:

      • 'batch': Write logs at the end of every batch of training, in addition to the end of every epoch of training.
      • 'epoch': Write logs at the end of every epoch of training.

      Note that writing logs too often slows down the training.

      Default: 'epoch'.

Returns: TensorBoardCallback
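
To log at batch granularity, pass a configuration object as the second argument. The sketch below reuses the model, xs, ys, valXs and valYs from the example above; the log directory and batch size are illustrative:

// Write logs after every batch in addition to every epoch.
// Finer-grained curves, at the cost of slower training.
const tbCallback = tf.node.tensorBoard('/tmp/fit_logs_2', {
  updateFreq: 'batch'
});

await model.fit(xs, ys, {
  epochs: 5,
  batchSize: 128,
  validationData: [valXs, valYs],
  callbacks: tbCallback
});

The same callback instance can also be passed to tf.Model.fitDataset() when training from a tf.data.Dataset.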