tfjs-react-native provides a TensorFlow.js platform adapter for React Native.

All symbols are named exports from the tfjs-react-native package.

fetch (path, init?, options?) function

Makes an HTTP request.

Parameters:
  • path (string) The URL path to make a request to.
  • init (RequestInit) The request init. See init here: https://developer.mozilla.org/en-US/docs/Web/API/Request/Request Optional
  • options (tf.io.RequestDetails) A RequestDetails object.

    • options.isBinary (boolean) Whether this request is for a binary file. Optional
Returns: Promise<Response>
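
For example, a binary asset could be fetched as follows. This is a minimal sketch; the URL and function name are placeholders.

  import { fetch } from '@tensorflow/tfjs-react-native';

  async function fetchBinaryExample() {
    // isBinary: true signals that the response body should be treated as
    // binary data rather than text.
    const response =
      await fetch('https://example.com/data.bin', {}, { isBinary: true });
    const buffer = await response.arrayBuffer();
    return new Uint8Array(buffer);
  }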

Model loading and saving.

asyncStorageIO (modelPath) function

Factory function for AsyncStorage IOHandler.

This IOHandler supports both save and load.

For each model's saved artifacts, three items are saved to async storage.

  • tensorflowjs_models/${modelPath}/info: Contains meta-info about the model, such as date saved, type of the topology, size in bytes, etc.
  • tensorflowjs_models/${modelPath}/model_without_weight: The topology, weights_specs and all other information about the model except for the weights.
  • tensorflowjs_models/${modelPath}/weight_data: Concatenated binary weight values, stored as a base64-encoded string.

  import * as tf from '@tensorflow/tfjs';
  import { asyncStorageIO } from '@tensorflow/tfjs-react-native';

  async function asyncStorageExample() {
    // Define a model
    const model = tf.sequential();
    model.add(tf.layers.dense({units: 5, inputShape: [1]}));
    model.add(tf.layers.dense({units: 1}));
    model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});

    // Save the model to async storage
    await model.save(asyncStorageIO('custom-model-test'));
    // Load the model from async storage
    await tf.loadLayersModel(asyncStorageIO('custom-model-test'));
  }
Parameters:
  • modelPath (string) A unique identifier for the model to be saved. Must be a non-empty string.
Returns: io.IOHandler
bundleResourceIO (modelJson, modelWeightsId) function

Factory function for BundleResource IOHandler.

This IOHandler only supports load. It is designed to support loading models that have been statically bundled (at compile time) with an app.

  import * as tf from '@tensorflow/tfjs';
  import { bundleResourceIO } from '@tensorflow/tfjs-react-native';

  const modelJson = require('../path/to/model.json');
  const modelWeights = require('../path/to/model_weights.bin');

  async function bundleResourceIOExample() {
    const model =
      await tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights));

    const res = model.predict(tf.randomNormal([1, 28, 28, 1])) as tf.Tensor;
  }
Parameters:
  • modelJson (io.ModelJSON) The JSON object for the serialized model.
  • modelWeightsId (number) An identifier for the model weights file. This is generally a resourceId or a path to the resource in the app package. This is typically obtained with a require statement.

    See facebook.github.io/react-native/docs/images#static-non-image-resources for more details on how to include static resources in your React Native app, including how to configure Metro to bundle .bin files.

Returns: io.IOHandler

Utilities for dealing with images and cameras.

decodeJpeg (contents, channels?) function

Decode a JPEG-encoded image to a 3D Tensor of dtype int32.

import { Image } from 'react-native';
import { fetch, decodeJpeg } from '@tensorflow/tfjs-react-native';

const image = require('path/to/img.jpg');
const imageAssetPath = Image.resolveAssetSource(image);
const response = await fetch(imageAssetPath.uri, {}, { isBinary: true });
const imageDataArrayBuffer = await response.arrayBuffer();
// decodeJpeg expects a Uint8Array, so wrap the ArrayBuffer.
const imageData = new Uint8Array(imageDataArrayBuffer);
const imageTensor = decodeJpeg(imageData);
Parameters:
  • contents (Uint8Array) The JPEG-encoded image in an Uint8Array.
  • channels (0|1|3) An optional int. Defaults to 3. Accepted values: 0: use the number of channels in the JPEG-encoded image; 1: output a grayscale image; 3: output an RGB image. Optional
Returns: Tensor3D
cameraWithTensors (CameraComponent) function

A higher-order component (HOC) that augments the Expo.Camera component with the ability to yield tensors representing the camera stream.

Because the camera data will be consumed in the process, the original camera component will not render any content. A view provided by this component is used to render the camera preview.

Notably, the component allows on-the-fly resizing of the camera image to smaller dimensions, which speeds up data transfer between the native and JavaScript threads immensely.

In addition to all the props taken by Expo.Camera, the returned component takes the following props:

  • cameraTextureWidth: number — the width of the camera preview texture (see example and note below)
  • cameraTextureHeight: number — the height of the camera preview texture (see example and note below)
  • resizeWidth: number — the width of the output tensor
  • resizeHeight: number — the height of the output tensor
  • resizeDepth: number — the depth (number of channels) of the output tensor. Should be 3 or 4.
  • autorender: boolean — if true, the view will be automatically updated with the contents of the camera. Set this to false if you want more direct control over when rendering happens.
  • onReady: ( images: IterableIterator<tf.Tensor3D>, updateCameraPreview: () => void, gl: ExpoWebGLRenderingContext, cameraTexture: WebGLTexture ) => void — When the component is mounted and ready this callback will be called and receive the following elements:
    • images is an iterator (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Iterators_and_Generators) that yields tensors representing the camera image on demand.
    • updateCameraPreview is a function that will update the WebGL render buffer with the contents of the camera. Not needed when autorender is true.
    • gl is the ExpoWebGL context used to do the rendering. After calling updateCameraPreview and any other operations you want to synchronize to the camera rendering, you must call gl.endFrameEXP() to display it on the screen. This is also provided in case you want to do other rendering using WebGL. Not needed when autorender is true.
    • cameraTexture is the underlying camera texture. This can be used to implement your own updateCameraPreview.
import React from 'react';
import { View, Platform } from 'react-native';
import { Camera } from 'expo-camera';
import { cameraWithTensors } from '@tensorflow/tfjs-react-native';

const TensorCamera = cameraWithTensors(Camera);

class MyComponent extends React.Component {

   handleCameraStream(images, updatePreview, gl) {
     const loop = async () => {
       const nextImageTensor = images.next().value;

       //
       // do something with tensor here
       //

       // if autorender is false you need the following two lines.
       // updatePreview();
       // gl.endFrameEXP();

       requestAnimationFrame(loop);
     }
     loop();
   }

   render() {
    // Currently expo does not support automatically determining the
    // resolution of the camera texture used. So it must be determined
    // empirically for the supported devices and preview size.

    let textureDims;
    if (Platform.OS === 'ios') {
     textureDims = {
       height: 1920,
       width: 1080,
     };
    } else {
     textureDims = {
       height: 1200,
       width: 1600,
     };
    }

    return <View>
      <TensorCamera
       // Standard Camera props
       style={styles.camera}
       type={Camera.Constants.Type.front}
       // Tensor related props
       cameraTextureHeight={textureDims.height}
       cameraTextureWidth={textureDims.width}
       resizeHeight={200}
       resizeWidth={152}
       resizeDepth={3}
       onReady={this.handleCameraStream}
       autorender={true}
      />
    </View>
   }
}
Parameters:
  • CameraComponent (React.ComponentType) an Expo Camera component constructor
Returns: typeof CameraWithTensorStream

detectGLCapabilities (gl) function

Utility function that tests the GL context for capabilities to enable optimizations.

For best performance this should be called once before using the other camera-related functions.

Parameters:
  • gl (WebGL2RenderingContext)
Returns: any
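
A minimal sketch of where such a call might go; the helper name is hypothetical, and gl is assumed to be an ExpoWebGLRenderingContext obtained elsewhere (for example from the onReady callback of a cameraWithTensors component):

  import { detectGLCapabilities } from '@tensorflow/tfjs-react-native';

  async function prepareGL(gl) {
    // Run capability detection once, before any fromTexture/toTexture/
    // renderToGLView calls that use this context.
    await detectGLCapabilities(gl);
  }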
fromTexture (gl, texture, sourceDims, targetShape, options?) function

Creates a tensor3D from a texture.

Allows for resizing the image and dropping the alpha channel from the resulting tensor.

Note that if the output depth is 3, then the output width should be a multiple of 4.

Parameters:
  • gl (WebGL2RenderingContext) the WebGL context that owns the input texture
  • texture (WebGLTexture) the texture to convert into a tensor
  • sourceDims (Object) source dimensions of the input texture
    • width (number)
    • height (number)
    • depth (number)
  • targetShape (Object) desired shape of the output tensor
    • width (number)
    • height (number)
    • depth (number)
  • options (Object) Optional
    • alignCorners (boolean)
    • interpolation ('nearest_neighbor'|'bilinear')
Returns: tf.Tensor3D
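
For instance, a camera texture obtained from the onReady callback of a cameraWithTensors component could be converted like this. A minimal sketch: the helper name and dimensions are illustrative, and the output width of 152 respects the multiple-of-4 note above.

  import { fromTexture } from '@tensorflow/tfjs-react-native';

  function textureToTensor(gl, cameraTexture) {
    return fromTexture(
        gl,
        cameraTexture,
        { width: 1600, height: 1200, depth: 4 },  // dimensions of the source texture
        { width: 152, height: 200, depth: 3 },    // desired output shape, alpha dropped
        { alignCorners: false, interpolation: 'nearest_neighbor' });
  }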
renderToGLView (gl, texture, size, flipHorizontal?) function

Render a texture to the GLView. This will use the default framebuffer and present the contents of the texture on the screen.

Parameters:
  • gl (WebGL2RenderingContext)
  • texture (WebGLTexture)
  • size (Object)
    • width (number)
    • height (number)
  • flipHorizontal (boolean) Optional
Returns: void
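
A minimal sketch, again assuming gl and cameraTexture come from the onReady callback of a cameraWithTensors component (the helper name is hypothetical):

  import { renderToGLView } from '@tensorflow/tfjs-react-native';

  function drawPreview(gl, cameraTexture) {
    renderToGLView(gl, cameraTexture, { width: 1600, height: 1200 });
    // On Expo GL contexts the frame must be presented explicitly.
    gl.endFrameEXP();
  }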
toTexture (gl, imageTensor, texture?) function

Transfers tensor data to an RGB(A) texture.

Parameters:
  • gl (WebGL2RenderingContext) the WebGL context that owns the texture.
  • imageTensor (tf.Tensor3D) the tensor to upload
  • texture (WebGLTexture) the target texture. If none is passed in, a new texture will be created. Optional
Returns: Promise<WebGLTexture>
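
A minimal sketch of uploading a tensor to a new texture; the helper name and the all-zero image are purely illustrative:

  import * as tf from '@tensorflow/tfjs';
  import { toTexture } from '@tensorflow/tfjs-react-native';

  async function toTextureExample(gl) {
    // An illustrative 200x152 RGBA image tensor (int32 values in [0, 255]).
    const imageTensor = tf.zeros([200, 152, 4], 'int32');
    // No target texture is passed, so a new one is created and returned.
    return toTexture(gl, imageTensor);
  }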