The TFJS Task API groups models into different tasks. To use a specific model, you first load it, then call the model's predict method to run inference.

Load model

To load a model, use a model loader as follows. Do not construct the model manually.

const model = await tfTask.{task_name}.{model_name}.{runtime}.load(options);

Refer to the individual models below for details about the exact model loader to use and the corresponding options.

Run inference

All loaded models have a predict method defined. Call it with model-specific input and options to get the inference result. Refer to the individual models below for details about the input and the corresponding options.

const result = await model.predict(input, options);

Clean up resources

All loaded models have a cleanUp method defined to clean up resources. The model should not be used after this call.

model.cleanUp();
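Putting the three steps together, here is a minimal end-to-end sketch using the Mobilenet TFJS image classifier documented below (the options shown are illustrative, not required):

// Load the model, run inference on an image, then clean up.
const model = await tfTask.ImageClassification.Mobilenet.TFJS.load();
const img = document.querySelector('img');
const result = await model.predict(img, {topK: 3});
console.log(result.classes);
model.cleanUp();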

Image Classification

The task of classifying images into a set of preset labels.

tfTask.ImageClassifier extends TaskModel (class)

The base class for all ImageClassification task models.

predict(img, options?) (method)

Performs classification on the given image-like input and returns the result.

Parameters:
  • img The image-like input to run inference on, e.g. an HTMLImageElement, HTMLVideoElement, HTMLCanvasElement, or ImageData.
  • options (optional) Model-specific inference options; see each model below.
Returns: Promise<ImageClassificationResult>
ImageClassificationResult:
  • classes (Class[]) All predicted classes.
Class:
  • className (string) The name of the class.
  • score (number) The score of the class.
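For example, a short sketch that prints each predicted class with its score (assuming model and img are set up as in the usage examples below):

// Log each predicted class name with its score.
const result = await model.predict(img);
for (const c of result.classes) {
  console.log(`${c.className}: ${c.score.toFixed(3)}`);
}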
cleanUp() (method)

Cleans up resources if needed.

Returns: void
tfTask.ICCustomModelTFLite extends ImageClassifierTFLite (class)

A custom TFLite image classification model loaded from a model url or an ArrayBuffer in memory.

The underlying image classifier is built on top of the TFLite Task Library. As a result, the custom model needs to meet the metadata requirements.

Usage:

// Load the model from a custom url with other options (optional).
const model = await tfTask.ImageClassification.CustomModel.TFLite.load({
  model: 'https://tfhub.dev/google/lite-model/aiy/vision/classifier/plants_V1/3',
});

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result.classes);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageClassifier for the predict and cleanUp methods.

Options for load:
  • model (string|ArrayBuffer) The model url, or the model content stored in an ArrayBuffer (a sketch of the ArrayBuffer variant follows this list).

    You can use TFLite model urls from tfhub.dev directly. For model compatibility, see comments in the corresponding model class.

  • maxResults (number) Maximum number of top scored results to return. If < 0, all results will be returned. If 0, an invalid argument error is returned.
  • scoreThreshold (number) Score threshold in [0,1), overrides the ones provided in the model metadata (if any). Results below this value are rejected.
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. numThreads should be greater than 0, or -1 to let the TFLite runtime pick the value.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.
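As referenced above, a sketch of the ArrayBuffer variant of the model option (the fetch URL is a placeholder, not a real model location):

// Fetch a custom .tflite model and load it from an ArrayBuffer.
const resp = await fetch('https://example.com/my_classifier.tflite');
const modelBuffer = await resp.arrayBuffer();
const model = await tfTask.ImageClassification.CustomModel.TFLite.load({
  model: modelBuffer,
});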

tfTask.MobilenetTFJS extends ImageClassifierTFJS (class)

Pre-trained TFJS MobileNet image classification model.

Usage:

// Load the model with options (optional).
//
// By default, it uses mobilenet V1 with webgl backend. You can change them
// in the options parameter of the `load` function (see below for docs).
const model = await tfTask.ImageClassification.Mobilenet.TFJS.load();

// Run inference on an image with options (optional).
const img = document.querySelector('img');
const result = await model.predict(img, {topK: 5});
console.log(result.classes);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageClassifier for the predict and cleanUp methods.

Options for load:
  • backend ('cpu'|'webgl'|'wasm') The backend to use to run TFJS models. Defaults to 'webgl'.
Options for predict:
  • topK (number) Number of top classes to return.
tfTask.MobilenetTFLite extends ImageClassifierTFLite (class)

Pre-trained TFLite MobileNet image classification model.

Usage:

// Load the model with options (optional).
//
// By default, it uses mobilenet V1. You can change it in the options
// parameter of the `load` function (see below for docs).
const model = await tfTask.ImageClassification.Mobilenet.TFLite.load();

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result.classes);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageClassifier for the predict and cleanUp methods.

Options for load:
  • version (1|2) The MobileNet version number. Use 1 for MobileNetV1, and 2 for MobileNetV2. Defaults to 1 (an example load call follows this list).
  • alpha (0.25|0.50|0.75|1.0) Controls the width of the network, trading accuracy for performance. A smaller alpha decreases accuracy and increases performance. Defaults to 1.0.
  • maxResults (number) Maximum number of top scored results to return. If < 0, all results will be returned. If 0, an invalid argument error is returned.
  • scoreThreshold (number) Score threshold in [0,1), overrides the ones provided in the model metadata (if any). Results below this value are rejected.
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. numThreads should be greater than 0, or -1 to let the TFLite runtime pick the value.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.
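As referenced above, an example load call combining these options (the values are illustrative):

// Load MobileNetV2 at reduced width, keeping the top 5 results.
const model = await tfTask.ImageClassification.Mobilenet.TFLite.load({
  version: 2,
  alpha: 0.75,
  maxResults: 5,
});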

Image Segmentation

The task of predicting the associated class for each pixel of an image.

tfTask.ImageSegmenter extends TaskModel (class)

The base class for all ImageSegmentation task models.

predict(img, options?) (method)

Performs segmentation on the given image-like input and returns the result.

Parameters:
  • img The image-like input to run inference on, e.g. an HTMLImageElement, HTMLVideoElement, HTMLCanvasElement, or ImageData.
  • options (optional) Model-specific inference options; see each model below.
Returns: Promise<ImageSegmentationResult>
ImageSegmentationResult:
  • legend (Legend) A mapping from class label to the Color used for that label in the segmentation map.
  • width (number) The width of the returned segmentation map.
  • height (number) The height of the returned segmentation map.
  • segmentationMap (Uint8ClampedArray) The colored segmentation map as a Uint8ClampedArray, which can be wrapped in an ImageData and drawn onto a canvas (see the sketch after this list).
Color:
  • r (number) The red color component for the label, in the [0, 255] range.
  • g (number) The green color component for the label, in the [0, 255] range.
  • b (number) The blue color component for the label, in the [0, 255] range.
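As referenced above, a sketch that renders the segmentation map onto a canvas (assuming model and img are set up as in the usage examples below):

// Wrap the raw RGBA bytes in an ImageData and paint them onto a canvas.
const {segmentationMap, width, height} = await model.predict(img);
const canvas = document.createElement('canvas');
canvas.width = width;
canvas.height = height;
const ctx = canvas.getContext('2d');
ctx.putImageData(new ImageData(segmentationMap, width, height), 0, 0);
document.body.appendChild(canvas);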
cleanUp() (method)

Cleans up resources if needed.

Returns: void

tfTask.DeeplabTFJS extends ImageSegmenterTFJS (class)

Pre-trained TFJS deeplab model.

Usage:

// Load the model with options (optional).
//
// By default, it uses base='pascal' and quantizationBytes=2 with webgl
// backend. You can change them in the options parameter of the `load`
// function (see below for docs).
const model = await tfTask.ImageSegmentation.Deeplab.TFJS.load();

// Run inference on an image with options (optional).
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageSegmenter for the predict and cleanUp methods.

Options for load:
  • backend ('cpu'|'webgl') The backend to use to run TFJS models. Defaults to 'webgl'.
  • base ('pascal'|'cityscapes'|'ade20k') The base dataset the model was trained on. Defaults to 'pascal'.
  • quantizationBytes (1|2|4) The degree to which model weights are quantized. Defaults to 2.
tfTask.DeeplabTFLite extends ImageSegmenterTFLite (class)

Pre-trained TFLite deeplab image segmentation model.

Usage:

// Load the model with options (optional).
const model = await tfTask.ImageSegmentation.Deeplab.TFLite.load();

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageSegmenter for the predict and cleanUp methods.

Options for load:
  • outputType (OutputType)
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. numThreads should be greater than 0, or -1 to let the TFLite runtime pick the value.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.

tfTask.ISCustomModelTFLite extends ImageSegmenterTFLite (class)

A custom TFLite image segmentation model loaded from a model url or an ArrayBuffer in memory.

The underlying image segmenter is built on top of the TFLite Task Library. As a result, the custom model needs to meet the metadata requirements.

Usage:

// Load the model from a custom url with other options (optional).
const model = await tfTask.ImageSegmentation.CustomModel.TFLite.load({
  model: 'https://tfhub.dev/tensorflow/lite-model/deeplabv3/1/metadata/2?lite-format=tflite',
});

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageSegmenter for the predict and cleanUp methods.

Options for load:
  • model (string|ArrayBuffer) The model url, or the model content stored in an ArrayBuffer.

    You can use TFLite model urls from tfhub.dev directly. For model compatibility, see comments in the corresponding model class.

  • outputType (OutputType)
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. numThreads should be greater than 0, or -1 to let the TFLite runtime pick the value.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.

Object Detection

The task of localizing and identifying multiple objects in a single image.

tfTask.ObjectDetector extends TaskModel (class)

The base class for all ObjectDetection task models.

predict(img, options?) (method)

Detects objects in the given image-like input and returns the result.

Parameters:
  • img The image-like input to run inference on, e.g. an HTMLImageElement, HTMLVideoElement, HTMLCanvasElement, or ImageData.
  • options (optional) Model-specific inference options; see each model below.
Returns: Promise<ObjectDetectionResult>
ObjectDetectionResult:
  • objects (DetectedObject[]) All detected objects.
DetectedObject:
  • boundingBox (BoundingBox) The bounding box of the object (see the drawing sketch after this list).
  • className (string) The name of the class.
  • score (number) The score of the class.
BoundingBox:
  • originX (number) The X coordinate of the top-left corner of the bounding box.
  • originY (number) The Y coordinate of the top-left corner of the bounding box.
  • width (number) The width of bounding box.
  • height (number) The height of bounding box.
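As referenced above, a sketch that draws each detected bounding box over the source image (assuming a canvas element already sized to match the image):

// Draw the image, then outline and label each detected object.
const {objects} = await model.predict(img);
const ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);
ctx.strokeStyle = 'red';
ctx.fillStyle = 'red';
for (const obj of objects) {
  const box = obj.boundingBox;
  ctx.strokeRect(box.originX, box.originY, box.width, box.height);
  ctx.fillText(`${obj.className} (${obj.score.toFixed(2)})`, box.originX, box.originY - 4);
}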
cleanUp() (method)

Cleans up resources if needed.

Returns: void

tfTask.CocoSsdTFJS extends ObjectDetectorTFJS (class)

Pre-trained TFJS coco-ssd model.

Usage:

// Load the model with options (optional).
//
// By default, it uses lite_mobilenet_v2 as the base model with webgl
// backend. You can change them in the `options` parameter of the `load`
// function (see below for docs).
const model = await tfTask.ObjectDetection.CocoSsd.TFJS.load();

// Run detection on an image with options (optional).
const img = document.querySelector('img');
const result = await model.predict(img, {maxNumBoxes: 5});
console.log(result.objects);

// Clean up.
model.cleanUp();

Refer to tfTask.ObjectDetector for the predict and cleanUp methods.

Options for load:
  • backend ('cpu'|'webgl'|'wasm') The backend to use to run TFJS models. Defaults to 'webgl'.
Options for predict:
  • maxNumBoxes (number) The maximum number of bounding boxes of detected objects. There can be multiple objects of the same class, but at different locations. Defaults to 20.
  • minScore (number) The minimum score of the returned bounding boxes of detected objects. Value between 0 and 1. Defaults to 0.5.
tfTask.CocoSsdTFLite extends ObjectDetectorTFLite (class)

Pre-trained TFLite coco-ssd object detection model.

Usage:

// Load the model with options (optional).
const model = await tfTask.ObjectDetection.CocoSsd.TFLite.load();

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result.objects);

// Clean up.
model.cleanUp();

Refer to tfTask.ObjectDetector for the predict and cleanUp methods.

Options for load:
  • maxResults (number) Maximum number of top scored results to return. If < 0, all results will be returned. If 0, an invalid argument error is returned.
  • scoreThreshold (number) Score threshold in [0,1), overrides the ones provided in the model metadata (if any). Results below this value are rejected.
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. numThreads should be greater than 0, or -1 to let the TFLite runtime pick the value.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.

tfTask.ODCustomModelTFLite extends ObjectDetectorTFLite (class)

A custom TFLite object detection model loaded from a model url or an ArrayBuffer in memory.

The underlying object detector is built on top of the TFLite Task Library. As a result, the custom model needs to meet the metadata requirements.

Usage:

// Load the model from a custom url with other options (optional).
const model = await tfTask.ObjectDetection.CustomModel.TFLite.load({
  model: 'https://tfhub.dev/tensorflow/lite-model/ssd_mobilenet_v1/1/metadata/2?lite-format=tflite',
});

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result.objects);

// Clean up.
model.cleanUp();

Refer to tfTask.ObjectDetector for the predict and cleanUp methods.

Options for load:
  • model (string|ArrayBuffer) The model url, or the model content stored in an ArrayBuffer.

    You can use TFLite model urls from tfhub.dev directly. For model compatibility, see comments in the corresponding model class.

  • maxResults (number) Maximum number of top scored results to return. If < 0, all results will be returned. If 0, an invalid argument error is returned.
  • scoreThreshold (number) Score threshold in [0,1), overrides the ones provided in the model metadata (if any). Results below this value are rejected.
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. numThreads should be greater than 0, or -1 to let the TFLite runtime pick the value.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.