# Frame Processors Tips
## Avoiding Frame-drops
Frame Processors will be synchronously called for each frame the Camera sees and have to finish executing before the next frame arrives, otherwise the next frame(s) will be dropped. At a frame rate of 30 FPS, you have about 33ms to finish processing a frame; at 60 FPS, you only have about 16ms.
Some general tips:

- Use `runAsync(..)` if you don't need your Frame Processor to run synchronously (see the sketch below)
- Use `runAtTargetFps(..)` if you don't need your Frame Processor to run on every frame
- Use Shared Values (`useSharedValue(..)`) instead of React State (`useState(..)`) when sharing data
- Prefer native Frame Processor Plugins over pure JavaScript-based plugins
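A minimal sketch of how `runAtTargetFps(..)`, `runAsync(..)` and Shared Values fit together; the value written to the Shared Value here is a placeholder, not a real computation:

```tsx
import { useFrameProcessor, runAsync, runAtTargetFps } from 'react-native-vision-camera'
import { useSharedValue } from 'react-native-worklets-core'

function useExampleFrameProcessor() {
  // Shared Value: can be written from the worklet and read from React
  const lastValue = useSharedValue(0)

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    // This part runs synchronously for every frame - keep it fast!
    runAtTargetFps(5, () => {
      'worklet'
      // Throttled to ~5 FPS - good for infrequent work like code scanning
      lastValue.value = frame.height // placeholder computation
    })
    runAsync(frame, () => {
      'worklet'
      // Runs on a separate async context - long-running work here drops
      // frames instead of blocking the Camera pipeline
    })
  }, [lastValue])

  return frameProcessor
}
```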
## FPS Graph
Use the FPS Graph to profile your Frame Processor's performance over time:
```tsx
<Camera {...props} enableFpsGraph={true} />
```
## Fast Frame Processor Plugins
If you use native Frame Processor Plugins, make sure they are optimized for realtime Camera use-cases. Some general tips:
- Prefer plugins that use the PixelFormat `yuv` instead of `rgb`, as `yuv` is more efficient in both memory usage and processing
- Prefer plugins that can work with the native Frame types (`CMSampleBuffer` and `Image`/`HardwareBuffer`) instead of passing the byte array (`frame.toArrayBuffer()`), as the latter involves a GPU -> CPU copy
- If you need to use the byte array (`frame.toArrayBuffer()`), prefer plugins that work with `uint8` instead of `float` types, as `uint8` is much more efficient (see the sketch below)
- Prefer plugins that support GPU acceleration. For TensorFlow, this might be the CoreML or Metal GPU delegates
- For operations such as resizing, prefer GPU or CPU vector acceleration (e.g. Accelerate/vImage) instead of plain array loops
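If you do have to fall back to the byte array, here is a minimal sketch of the cheaper path (`yuv` frames, a `uint8` view, no float conversion); `ExampleCamera` is a hypothetical wrapper component:

```tsx
import { Camera, useFrameProcessor } from 'react-native-vision-camera'

function ExampleCamera(props: any) {
  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    // toArrayBuffer() copies the GPU buffer to the CPU - only use it when a
    // plugin cannot accept the native Frame directly
    const pixels = new Uint8Array(frame.toArrayBuffer()) // uint8 view, no float conversion
  }, [])

  // Request the more memory-efficient yuv PixelFormat instead of rgb
  return <Camera {...props} pixelFormat="yuv" frameProcessor={frameProcessor} />
}
```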
## ESLint react-hooks plugin
If you are using the react-hooks ESLint plugin, make sure to add `useFrameProcessor` to `additionalHooks` inside your ESLint config so dependencies are detected properly. (See "advanced configuration")
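For example, in a `.eslintrc.js` (assuming the `exhaustive-deps` rule is enabled):

```js
// .eslintrc.js (excerpt)
module.exports = {
  rules: {
    'react-hooks/exhaustive-deps': [
      'error',
      // additionalHooks takes a regex matching extra hook names to check
      { additionalHooks: '(useFrameProcessor)' },
    ],
  },
}
```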
## Technical
### Frame Processors
Frame Processors are JS functions that will be workletized using react-native-worklets-core. They are created on a parallel camera thread using a separate JavaScript Runtime ("VisionCamera JS-Runtime") and are invoked synchronously (using JSI) without ever going over the bridge. In a Frame Processor you can write normal JS code, call back to the React-JS Thread (e.g. `setState`), use Shared Values and call Frame Processor Plugins.
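A minimal sketch of calling back to the React-JS Thread, assuming react-native-worklets-core's `Worklets.createRunOnJS(..)` helper:

```tsx
import { useMemo, useState } from 'react'
import { Worklets } from 'react-native-worklets-core'
import { useFrameProcessor } from 'react-native-vision-camera'

function useFrameWidth() {
  const [width, setWidth] = useState(0)
  // Wrap the state setter so the worklet can dispatch it to the React-JS Thread
  const setWidthOnJS = useMemo(() => Worklets.createRunOnJS(setWidth), [])

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    setWidthOnJS(frame.width) // asynchronous call back to React-JS
  }, [setWidthOnJS])

  return { width, frameProcessor }
}
```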
### Frame Processor Plugins
Frame Processor Plugins are native functions (written in Objective-C, Swift, C++, Java or Kotlin) that are injected into the VisionCamera JS-Runtime. They can be synchronously called from your JS Frame Processors (using JSI) without ever going over the bridge. VisionCamera provides an easy-to-use plugin API, so you can create a Frame Processor Plugin yourself. Some examples include Barcode Scanning, Face Detection, Image Labeling, Text Recognition and more.
Learn how to create Frame Processor Plugins, or check out the example Frame Processor Plugin for iOS or Android.
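A minimal sketch of calling a plugin from JS; `detect_faces` is a hypothetical plugin name that would have to match a natively registered plugin:

```tsx
import { VisionCameraProxy, useFrameProcessor } from 'react-native-vision-camera'

// Look up the natively registered plugin once, outside the Frame Processor
const plugin = VisionCameraProxy.initFrameProcessorPlugin('detect_faces', {})

export function useFaceDetector() {
  return useFrameProcessor((frame) => {
    'worklet'
    if (plugin == null) throw new Error('Failed to load detect_faces plugin!')
    const result = plugin.call(frame) // synchronous JSI call into native code
    console.log(result)
  }, [])
}
```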
### The Frame object
The Frame Processor gets called with a `Frame` object, which is a JSI HostObject. It holds a reference to the native (C++) Frame's GPU Buffer (~10 MB in size) and exposes properties such as `width`, `height`, `bytesPerRow` and more to JavaScript so you can synchronously access them. You can access the Frame data in JavaScript using `frame.toArrayBuffer()`, which copies the GPU buffer over to the CPU.
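A minimal sketch of synchronous property access inside a component's Frame Processor (no buffer copy involved):

```tsx
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  // Properties are resolved synchronously through the JSI HostObject - no copy
  console.log(`${frame.width}x${frame.height}, ${frame.bytesPerRow} bytes per row`)
}, [])
```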
The `Frame` object can be passed around in JS, as well as returned from and passed to a native Frame Processor Plugin.
With 4k Frames, roughly 1.5 GB of Frame data flows through your Frame Processor per second.
See this tweet for more information.