TFJS Task API groups models into different tasks. To use a specific model, you first need to load it, then call the predict method on the model to run the inference.

Load model

To load a model, use a model loader as follows. Do not construct the model manually.

const model = await tfTask.{task_name}.{model_name}.{runtime}.load(options);

Please refer to a specific model below for details about the exact model loader to use and the corresponding options.

Run inference

All loaded models have a predict method defined. Call it with model-specific input and options to get the inference result. Please refer to a specific model below for details about the input and the corresponding options.

const result = await model.predict(input, options);

Clean up resources

All loaded models have a cleanUp method defined to clean up resources. The model should not be used after this call.

model.cleanUp();
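
Putting these three steps together, a minimal end-to-end sketch using the pre-trained TFJS MobileNet image classifier documented below:

// Load the model, classify an image on the page, then release resources.
const model = await tfTask.ImageClassification.Mobilenet.TFJS.load();

const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result.classes);

model.cleanUp();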

Image Classification

The task of classifying images into a predefined set of labels.

tfTask.ImageClassifier extends TaskModel class

The base class for all ImageClassification task models.

predict (img, options?) method

Performs classification on the given image-like input and returns the result.

Parameters:
  • img (ImageData|HTMLImageElement|HTMLCanvasElement|HTMLVideoElement) The image-like input to classify.
  • options (IO) Inference options. Different models have different inference options. See individual model for more details. Optional
Returns: Promise<ImageClassificationResult>
ImageClassificationResult:
  • classes (Class[]) All predicted classes.
Class:
  • className (string) The name of the class.
  • score (number) The score of the class.
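
For example, a small sketch of consuming this result, assuming model is any loaded image classification model and img is an image element on the page:

const result = await model.predict(img);
// Log each predicted class with its score, highest score first.
[...result.classes]
    .sort((a, b) => b.score - a.score)
    .forEach(c => console.log(`${c.className}: ${c.score.toFixed(3)}`));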
cleanUp () method

Cleans up resources if needed.

Returns: void
tfTask.ICCustomModelTFLite extends ImageClassifierTFLite class

A custom TFLite image classification model loaded from a model url or an ArrayBuffer in memory.

The underlying image classifier is built on top of the TFLite Task Library. As a result, the custom model needs to meet the metadata requirements.

Usage:

// Load the model from a custom url with other options (optional).
const model = await tfTask.ImageClassification.CustomModel.TFLite.load({
  model: 'https://tfhub.dev/google/lite-model/aiy/vision/classifier/plants_V1/3',
});

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result.classes);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageClassifier for the predict and cleanUp methods.

Options for load:
  • model (string|ArrayBuffer) The model url, or the model content stored in an ArrayBuffer.

    You can use TFLite model urls from tfhub.dev directly. For model compatibility, see comments in the corresponding model class.

  • maxResults (number) Maximum number of top scored results to return. If < 0, all results will be returned. If 0, an invalid argument error is returned.
  • scoreThreshold (number) Score threshold in [0,1), overrides the ones provided in the model metadata (if any). Results below this value are rejected.
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. The value should be greater than 0, or -1 to let the TFLite runtime choose it.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.
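
As a sketch, these options can be combined in a single load call (the option values here are only illustrative):

const model = await tfTask.ImageClassification.CustomModel.TFLite.load({
  model: 'https://tfhub.dev/google/lite-model/aiy/vision/classifier/plants_V1/3',
  maxResults: 5,        // return at most the 5 top-scored classes
  scoreThreshold: 0.3,  // reject classes scoring below 0.3
  numThreads: -1,       // let the TFLite runtime choose the thread count
});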

tfTask.MobilenetTFJS extends ImageClassifierTFJS class

Pre-trained TFJS mobilenet image classification model.

Usage:

// Load the model with options (optional).
//
// By default, it uses mobilenet V1 with webgl backend. You can change them
// in the options parameter of the `load` function (see below for docs).
const model = await tfTask.ImageClassification.Mobilenet.TFJS.load();

// Run inference on an image with options (optional).
const img = document.querySelector('img');
const result = await model.predict(img, {topK: 5});
console.log(result.classes);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageClassifier for the predict and cleanUp method.

Options for load:
  • backend ('cpu'|'webgl'|'wasm') The backend to use to run TFJS models. Defaults to 'webgl'.
Options for predict:
  • topK (number) Number of top classes to return.
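
Putting the two together, a sketch that loads the model on a non-default backend and asks for the top 3 classes (assuming img is an image element as in the usage example above):

const model = await tfTask.ImageClassification.Mobilenet.TFJS.load({
  backend: 'wasm',  // run on the WASM backend instead of the default 'webgl'
});
const result = await model.predict(img, {topK: 3});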
tfTask.MobilenetTFLite extends ImageClassifierTFLite class

Pre-trained TFLite mobilenet image classification model.

Usage:

// Load the model with options (optional).
//
// By default, it uses mobilenet V1. You can change it in the options
// parameter of the `load` function (see below for docs).
const model = await tfTask.ImageClassification.Mobilenet.TFLite.load();

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result.classes);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageClassifier for the predict and cleanUp methods.

Options for load:
  • version (1|2) The MobileNet version number. Use 1 for MobileNetV1, and 2 for MobileNetV2. Defaults to 1.
  • alpha (0.25|0.50|0.75|1.0) Controls the width of the network, trading accuracy for performance. A smaller alpha decreases accuracy and increases performance. Defaults to 1.0.
  • maxResults (number) Maximum number of top scored results to return. If < 0, all results will be returned. If 0, an invalid argument error is returned.
  • scoreThreshold (number) Score threshold in [0,1), overrides the ones provided in the model metadata (if any). Results below this value are rejected.
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. The value should be greater than 0, or -1 to let the TFLite runtime choose it.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.
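
For example, a sketch that loads a narrower MobileNetV2 variant (values are illustrative):

const model = await tfTask.ImageClassification.Mobilenet.TFLite.load({
  version: 2,   // use MobileNetV2
  alpha: 0.75,  // narrower network: faster, slightly less accurate
});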

Image Segmentation

The task of predicting the associated class for each pixel of an image.

tfTask.ImageSegmenter extends TaskModel class

The base class for all ImageSegmentation task models.

predict (img, options?) method

Performs segmentation on the given image-like input and returns the result.

Parameters:
  • img (ImageData|HTMLImageElement|HTMLCanvasElement|HTMLVideoElement) The image-like input to segment.
  • options (IO) Inference options. Different models have different inference options. See individual model for more details. Optional
Returns: Promise<ImageSegmentationResult>
ImageSegmentationResult:
  • legend (Legend) A map from label names to the colors used to render them in the segmentation map.
  • width (number) The width of the returned segmentation map.
  • height (number) The height of the returned segmentation map.
  • segmentationMap (Uint8ClampedArray) The colored segmentation map as Uint8ClampedArray which can be fed into ImageData and mapped to a canvas.
Legend: A map from label names to Colors: {[name: string]: Color}
Color:
  • r (number) The red color component for the label, in the [0, 255] range.
  • g (number) The green color component for the label, in the [0, 255] range.
  • b (number) The blue color component for the label, in the [0, 255] range.
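
Since segmentationMap holds RGBA pixel values, it can be rendered directly. A minimal sketch, assuming a <canvas> element is available on the page:

const result = await model.predict(img);

// Draw the colored segmentation map onto the canvas.
const canvas = document.querySelector('canvas');
canvas.width = result.width;
canvas.height = result.height;
const imageData =
    new ImageData(result.segmentationMap, result.width, result.height);
canvas.getContext('2d').putImageData(imageData, 0, 0);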
cleanUp () method

Cleans up resources if needed.

Returns: void

tfTask.DeeplabTFJS extends ImageSegmenterTFJS class

Pre-trained TFJS deeplab image segmentation model.

Usage:

// Load the model with options (optional).
//
// By default, it uses base='pascal' and quantizationBytes=2 with webgl
// backend. You can change them in the options parameter of the `load`
// function (see below for docs).
const model = await tfTask.ImageSegmentation.Deeplab.TFJS.load();

// Run inference on an image with options (optional).
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageSegmenter for the predict and cleanUp methods.

Options for load:
  • backend ('cpu'|'webgl') The backend to use to run TFJS models. Defaults to 'webgl'.
tfTask.DeeplabTFLite extends ImageSegmenterTFLite class

Pre-trained TFLite deeplab image segmentation model.

Usage:

// Load the model with options (optional).
const model = await tfTask.ImageSegmentation.Deeplab.TFLite.load();

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageSegmenter for the predict and cleanUp methods.

Options for load:
  • outputType (OutputType)
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. The value should be greater than 0, or -1 to let the TFLite runtime choose it.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.

tfTask.ISCustomModelTFLite extends ImageSegmenterTFLite class

A custom TFLite image segmentation model loaded from a model url or an ArrayBuffer in memory.

The underlying image segmenter is built on top of the TFLite Task Library. As a result, the custom model needs to meet the metadata requirements.

Usage:

// Load the model from a custom url with other options (optional).
const model = await tfTask.ImageSegmentation.CustomModel.TFLite.load({
  model: 'https://tfhub.dev/tensorflow/lite-model/deeplabv3/1/metadata/2?lite-format=tflite',
});

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result);

// Clean up.
model.cleanUp();

Refer to tfTask.ImageSegmenter for the predict and cleanUp methods.

Options for load:
  • model (string|ArrayBuffer) The model url, or the model content stored in an ArrayBuffer.

    You can use TFLite model urls from tfhub.dev directly. For model compatibility, see comments in the corresponding model class.

  • outputType (OutputType)
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. The value should be greater than 0, or -1 to let the TFLite runtime choose it.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.

Object Detection

The task of localizing and identifying multiple objects in a single image.

tfTask.ObjectDetector extends TaskModel class

The base class for all ObjectDetection task models.

predict (img, options?) method

Detects objects in the given image-like input and returns the result.

Parameters:
  • img (ImageData|HTMLImageElement|HTMLCanvasElement|HTMLVideoElement) The image-like input to detect objects in.
  • options (IO) Inference options. Different models have different inference options. See individual model for more details. Optional
Returns: Promise<ObjectDetectionResult>
ObjectDetectionResult:
  • objects (DetectedObject[]) All detected objects.
DetectedObject:
  • boundingBox (BoundingBox) The bounding box of the object.
  • className (string) The name of the class.
  • score (number) The score of the class.
BoundingBox:
  • originX (number) The X coordinate of the top-left corner of the bounding box.
  • originY (number) The Y coordinate of the top-left corner of the bounding box.
  • width (number) The width of bounding box.
  • height (number) The height of bounding box.
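
As a sketch, the detected objects can be drawn over the input image, assuming a <canvas> element sized to match the image:

const result = await model.predict(img);

const ctx = document.querySelector('canvas').getContext('2d');
ctx.strokeStyle = 'red';
for (const obj of result.objects) {
  const {originX, originY, width, height} = obj.boundingBox;
  ctx.strokeRect(originX, originY, width, height);
  ctx.fillText(
      `${obj.className} (${obj.score.toFixed(2)})`, originX, originY + 10);
}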
cleanUp () method

Cleans up resources if needed.

Returns: void

tfTask.CocoSsdTFJS extends ObjectDetectorTFJS class

Pre-trained TFJS coco-ssd object detection model.

Usage:

// Load the model with options (optional).
//
// By default, it uses lite_mobilenet_v2 as the base model with webgl
// backend. You can change them in the `options` parameter of the `load`
// function (see below for docs).
const model = await tfTask.ObjectDetection.CocoSsd.TFJS.load();

// Run detection on an image with options (optional).
const img = document.querySelector('img');
const result = await model.predict(img, {maxNumBoxes: 5});
console.log(result.objects);

// Clean up.
model.cleanUp();

Refer to tfTask.ObjectDetector for the predict and cleanUp methods.

Options for load:
  • backend ('cpu'|'webgl'|'wasm') The backend to use to run TFJS models. Defaults to 'webgl'.
Options for predict:
  • maxNumBoxes (number) The maximum number of bounding boxes of detected objects. There can be multiple objects of the same class, but at different locations. Defaults to 20.
  • minScore (number) The minimum score of the returned bounding boxes of detected objects. Value between 0 and 1. Defaults to 0.5.
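
For example, a sketch that keeps only a few high-confidence boxes (values are illustrative):

const result = await model.predict(img, {
  maxNumBoxes: 10,  // return at most 10 boxes
  minScore: 0.7,    // drop boxes scoring below 0.7
});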
tfTask.CocoSsdTFLite extends ObjectDetectorTFLite class

Pre-trained TFLite coco-ssd object detection model.

Usage:

// Load the model with options (optional).
const model = await tfTask.ObjectDetection.CocoSsd.TFLite.load();

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result.objects);

// Clean up.
model.cleanUp();

Refer to tfTask.ObjectDetector for the predict and cleanUp methods.

Options for load:
  • maxResults (number) Maximum number of top scored results to return. If < 0, all results will be returned. If 0, an invalid argument error is returned.
  • scoreThreshold (number) Score threshold in [0,1), overrides the ones provided in the model metadata (if any). Results below this value are rejected.
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. The value should be greater than 0, or -1 to let the TFLite runtime choose it.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.

tfTask.ODCustomModelTFLite extends ObjectDetectorTFLite class

A custom TFLite object detection model loaded from a model url or an ArrayBuffer in memory.

The underlying object detector is built on top of the TFLite Task Library. As a result, the custom model needs to meet the metadata requirements.

Usage:

// Load the model from a custom url with other options (optional).
const model = await tfTask.ObjectDetection.CustomModel.TFLite.load({
  model: 'https://tfhub.dev/tensorflow/lite-model/ssd_mobilenet_v1/1/metadata/2?lite-format=tflite',
});

// Run inference on an image.
const img = document.querySelector('img');
const result = await model.predict(img);
console.log(result.objects);

// Clean up.
model.cleanUp();

Refer to tfTask.ObjectDetector for the predict and cleanUp methods.

Options for load:
  • model (string|ArrayBuffer) The model url, or the model content stored in an ArrayBuffer.

    You can use TFLite model urls from tfhub.dev directly. For model compatibility, see comments in the corresponding model class.

  • maxResults (number) Maximum number of top scored results to return. If < 0, all results will be returned. If 0, an invalid argument error is returned.
  • scoreThreshold (number) Score threshold in [0,1), overrides the ones provided in the model metadata (if any). Results below this value are rejected.
  • numThreads (number) The number of threads to use for TFLite ops that support multi-threading when running inference on the CPU. The value should be greater than 0, or -1 to let the TFLite runtime choose it.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.

Sentiment Detection

The task of detecting sentiment in a given paragraph of text.

tfTask.SentimentDetector extends TaskModel class

The base class for all SentimentDetection task models.

predict (text, options?) method

Detects sentiment in the given text and returns the result.

Parameters:
  • text (string) The text to detect sentiment on.
  • options (IO) Inference options. Different models have different inference options. See individual model for more details. Optional
Returns: Promise<SentimentDetectionResult>
SentimentDetectionResult:
  • sentimentLabels ({[label: string]: Sentiment}) A map from sentiment labels to their detection result along with the raw probabilities ([negative probability, positive probability]).

    For example: {'insult': {result: true, probabilities: [0.3, 0.7]}, 'threat': {result: false, probabilities: [0.7, 0.3]}}

Sentiment:
  • result (boolean|null) Whether the sentiment is considered true or false. It is set to null when the result cannot be determined (e.g. below a threshold).
  • probabilities (number[]) The raw probabilities for this sentiment.
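
A sketch of consuming this result, logging only the labels whose result could be determined:

const result = await model.predict('You are stupid');
for (const [label, sentiment] of Object.entries(result.sentimentLabels)) {
  if (sentiment.result !== null) {
    console.log(`${label}: ${sentiment.result}`, sentiment.probabilities);
  }
}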
cleanUp () method

Cleans up resources if needed.

Returns: void

tfTask.MovieReviewTFLite extends SentimentDetectorTFLite class

Pre-trained TFLite movie review sentiment detection model.

It detects whether the review text is positive or negative.

Usage:

// Load the model with options (optional).
const model = await tfTask.SentimentDetection.MovieReview.TFLite.load();

// Run inference on a review text.
const result = await model.predict('This is a great movie!');
console.log(result.sentimentLabels);

// Clean up.
model.cleanUp();

The model returns the prediction results of the following sentiment labels:

  • positive
  • negative

Refer to tfTask.SentimentDetector for the predict and cleanUp methods, and more details about the result interface.

Options for load:
  • threshold (number) A prediction is considered valid only if its confidence exceeds the threshold. Defaults to 0.65.
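
For example, a sketch that only accepts high-confidence predictions (the value is illustrative):

const model = await tfTask.SentimentDetection.MovieReview.TFLite.load({
  threshold: 0.9,  // a prediction counts only if its confidence exceeds 0.9
});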

tfTask.ToxicityTFJS extends SentimentDetectorTFJS class

Pre-trained TFJS toxicity model.

It detects whether text contains toxic content such as threatening language, insults, obscenities, identity-based hate, or sexually explicit language.

Usage:

// Load the model with options (optional. See below for docs).
const model = await tfTask.SentimentDetection.Toxicity.TFJS.load();

// Run detection on text.
const result = await model.predict('You are stupid');
console.log(result.sentimentLabels);

// Clean up.
model.cleanUp();

By default, the model returns the prediction results of the following sentiment labels:

  • toxicity
  • severe_toxicity
  • identity_attack
  • insult
  • threat
  • sexual_explicit
  • obscene

Refer to tfTask.SentimentDetector for the predict and cleanUp methods, and more details about the result interface.

Options for load:
  • toxicityLabels (string[]) An array of strings indicating which types of toxicity to detect. Labels must be one of toxicity | severe_toxicity | identity_attack | insult | threat | sexual_explicit | obscene. Defaults to all labels.
  • backend ('cpu'|'webgl'|'wasm') The backend to use to run TFJS models. Defaults to 'webgl'.
  • threshold (number) A prediction is considered valid only if its confidence exceeds the threshold. Defaults to 0.65.
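
For example, a sketch that detects only a subset of labels on the CPU backend (values are illustrative):

const model = await tfTask.SentimentDetection.Toxicity.TFJS.load({
  toxicityLabels: ['insult', 'threat'],  // detect only these two labels
  backend: 'cpu',
  threshold: 0.8,
});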

NL Classification

The task of classifying text into a predefined set of labels.

tfTask.NLClassifier extends TaskModel class

The base class for all NLClassification task models.

predict (text, options?) method

Predicts classes for the given text and returns the result.

Parameters:
  • text (string) The text to predict on.
  • options (IO) Inference options. Different models have different inference options. See individual model for more details. Optional
Returns: Promise<NLClassificationResult>
NLClassificationResult:
  • classes (Class[]) All predicted classes.
Class:
  • className (string) The name of the class.
  • score (number) The score of the class.
cleanUp () method

Cleans up resources if needed.

Returns: void
tfTask.NCCustomModelTFLite extends NLClassifierTFLite class

A custom TFLite natural language classification model loaded from a model url or an ArrayBuffer in memory.

The underlying NL classifier is built on top of the NLClassifier in TFLite Task Library. As a result, the custom model needs to meet the metadata requirements.

Usage:

// Load the model from a custom url with other options (optional).
const model = await tfTask.NLClassification.CustomModel.TFLite.load({
  model: 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/text_classification/text_classification_v2.tflite',
});

// Run inference on text.
const result = await model.predict('This is a great movie!');
console.log(result.classes);

// Clean up.
model.cleanUp();

Refer to tfTask.NLClassifier for the predict and cleanUp methods.

Options for load:
  • model (string|ArrayBuffer) The model url, or the model content stored in an ArrayBuffer.

    You can use TFLite model urls from tfhub.dev directly. For model compatibility, see comments in the corresponding model class.

  • inputTensorIndex (number) Index of the input tensor.
  • outputScoreTensorIndex (number) Index of the output score tensor.
  • outputLabelTensorIndex (number) Index of the output label tensor.
  • inputTensorName (string) Name of the input tensor.
  • outputScoreTensorName (string) Name of the output score tensor.
  • outputLabelTensorName (string) Name of the output label tensor.
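
A sketch of loading from an ArrayBuffer with explicit tensor options. The tensor indices below are placeholders; the right values depend entirely on the custom model's signature:

// Fetch the model content into an ArrayBuffer (url from the example above).
const response = await fetch(
    'https://storage.googleapis.com/download.tensorflow.org/models/tflite/text_classification/text_classification_v2.tflite');
const buffer = await response.arrayBuffer();

const model = await tfTask.NLClassification.CustomModel.TFLite.load({
  model: buffer,
  inputTensorIndex: 0,        // placeholder: check your model's signature
  outputScoreTensorIndex: 0,  // placeholder: check your model's signature
});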

Question & Answer

The task of answering questions based on the content of a given passage.

tfTask.QuestionAnswerer extends TaskModel class

The base class for all Q&A task models.

predict (question, context, options?) method

Gets the answer to the given question based on the content of the given passage.

Parameters:
  • question (string) The question to find an answer for.
  • context (string) The context from which the answer is looked up.
  • options (IO) Optional
Returns: Promise<QuestionAnswerResult>
QuestionAnswerResult:
  • answers (Answer[]) All predicted answers.
Answer:
  • text (string) The text of the answer.
  • startIndex (number) The index of the starting character of the answer in the passage.
  • endIndex (number) The index of the last character of the answer text.
  • score (number) The confidence level of the answer.
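
A sketch of picking the highest-scoring answer from this result:

const result = await model.predict(question, context);
if (result.answers.length > 0) {
  const best = result.answers.reduce((a, b) => (b.score > a.score ? b : a));
  console.log(best.text, best.startIndex, best.endIndex);
}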
cleanUp () method

Cleans up resources if needed.

Returns: void

tfTask.BertQATFJS extends QuestionAnswererTFJS class

Pre-trained TFJS Bert Q&A model.

Usage:

// Load the model with options (optional).
const model = await tfTask.QuestionAndAnswer.BertQA.TFJS.load();

// Run inference with question and context.
const result = await model.predict(question, context);
console.log(result.answers);

// Clean up.
model.cleanUp();

Refer to tfTask.QuestionAnswerer for the predict and cleanUp methods.

Options for load:
  • backend ('cpu'|'webgl'|'wasm') The backend to use to run TFJS models. Defaults to 'webgl'.
tfTask.BertQATFLite extends QuestionAnswererTFLite class

Pre-trained TFLite Bert Q&A model.

Usage:

// Load the model.
const model = await tfTask.QuestionAndAnswer.BertQA.TFLite.load();

// Run inference with question and context.
const result = await model.predict(question, context);
console.log(result.answers);

// Clean up.
model.cleanUp();

Refer to tfTask.QuestionAnswerer for the predict and cleanUp methods.

tfTask.QACustomModelTFLite extends QuestionAnswererTFLite class

A custom TFLite Q&A model loaded from a model url or an ArrayBuffer in memory.

The underlying question answerer is built on top of the TFLite Task Library. As a result, the custom model needs to meet the metadata requirements.

Usage:

// Load the model from a custom url.
const model = await tfTask.QuestionAndAnswer.CustomModel.TFLite.load({
  model: 'https://tfhub.dev/tensorflow/lite-model/mobilebert/1/metadata/1?lite-format=tflite',
});

// Run inference with question and context.
const result = await model.predict(question, context);
console.log(result.answers);

// Clean up.
model.cleanUp();

Refer to tfTask.QuestionAnswerer for the predict and cleanUp methods.

Options for load:
  • model (string|ArrayBuffer) The model url, or the model content stored in an ArrayBuffer.

    You can use TFLite model urls from tfhub.dev directly. For model compatibility, see comments in the corresponding model class.