
This library is a wrapper around the TFLite interpreter, packaged as a WebAssembly binary that runs in the browser. For more details about the TFLite interpreter and what the inference process looks like, see the official documentation.

tflite.TFLiteModel extends InferenceModel class Source

A tflite.TFLiteModel is built from a TFLite model flatbuffer and runs on the TFLite interpreter. To load one, use the loadTFLiteModel function below.

Sample usage:

// Load the MobilenetV2 tflite model from tfhub.
const tfliteModel = await tflite.loadTFLiteModel(
    'https://tfhub.dev/tensorflow/lite-model/mobilenet_v2_1.0_224/1/metadata/1');

const outputTensor = tf.tidy(() => {
  // Get pixel data from an image element.
  let img = tf.browser.fromPixels(document.querySelector('img'));
  // Resize to the model's 224x224 input size and normalize to [-1, 1].
  img = tf.image.resizeBilinear(img, [224, 224]);
  img = tf.sub(tf.div(tf.expandDims(img), 127.5), 1);
  // Run the inference.
  const result = tfliteModel.predict(img);
  // De-normalize the result.
  return tf.mul(tf.add(result, 1), 127.5);
});
console.log(outputTensor);

predict (inputs, config?) method Source

Executes inference for the given input tensors.

Parameters:
  • inputs (Tensor|Tensor[]|NamedTensorMap) The input tensors. When the model has a single input, pass a single Tensor. For models with multiple inputs, pass a Tensor[] if the input order is fixed, or a NamedTensorMap otherwise (see the sketch after this list).
  • config (ModelPredictConfig) Prediction configuration for specifying the batch size. Currently this field is not used, and batch inference is not supported. Optional
Returns: Tensor|Tensor[]|NamedTensorMap
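Single-input usage is shown in the sample above. The sketch below illustrates the two multi-input formats; the model URL, input names, and input shapes are illustrative assumptions, not a real hosted model.

// A sketch of the multi-input call patterns (hypothetical model and inputs).
const multiInputModel = await tflite.loadTFLiteModel(
    'https://example.com/two_input_model.tflite');
const a = tf.zeros([1, 8]);
const b = tf.zeros([1, 4]);

// When the input order is fixed, pass the tensors as a Tensor[].
const fromArray = multiInputModel.predict([a, b]);

// Otherwise, key each tensor by its input name (NamedTensorMap).
const fromMap = multiInputModel.predict({input_a: a, input_b: b});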
tflite.loadTFLiteModel (model, options?) function Source

Loads a TFLiteModel from the given model URL or in-memory model content (a usage sketch follows the parameter list below).

Parameters:
  • model (string|ArrayBuffer) The path to the model (string), or the model content in memory (ArrayBuffer).
  • options (Object) Options related to model inference. Optional
  • numThreads (number) Number of threads to use when running inference.

    Defaults to the number of physical CPU cores, or -1 if WASM multi-threading is not supported by the user's browser.

  • enableProfiling (boolean) Whether to enable profiling.

    Defaults to false. When enabled, profiling results can be retrieved by calling TFLiteWebModelRunner.getProfilingResults or TFLiteWebModelRunner.getProfilingSummary. See their comments for more details.

  • maxProfilingBufferEntries (number) Maximum number of entries that the profiler can keep.

    Defaults to 1024.

Returns: Promise<tflite.TFLiteModel>
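The sketch below loads the MobilenetV2 model from the earlier sample with explicit options; the option values are illustrative, and the ArrayBuffer variant uses a hypothetical placeholder URL.

// Load a model from a URL with explicit inference options (all optional).
const tfliteModel = await tflite.loadTFLiteModel(
    'https://tfhub.dev/tensorflow/lite-model/mobilenet_v2_1.0_224/1/metadata/1',
    {
      numThreads: 4,                    // run inference on 4 WASM threads
      enableProfiling: true,            // collect per-op profiling data
      maxProfilingBufferEntries: 2048,  // keep up to 2048 profiler entries
    });

// Alternatively, pass the model content already in memory as an ArrayBuffer.
const resp = await fetch('https://example.com/model.tflite');  // hypothetical URL
const modelFromBuffer = await tflite.loadTFLiteModel(await resp.arrayBuffer());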
tflite.getDTypeFromTFLiteType (tfliteType) function Source

Returns the compatible tfjs DataType from the given TFLite data type.

Parameters:
  • tfliteType (TFLiteDataType) The type in TFLite.
Returns: DataType
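For example (a quick sketch; the full mapping from TFLite types to tfjs dtypes is defined by the library):

// Map a TFLite tensor type string to the corresponding tfjs DataType.
const dtype = tflite.getDTypeFromTFLiteType('float32');
console.log(dtype);  // 'float32'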