ml — Machine Learning
The ml module contains functionality for loading and running TensorFlow Lite models on the
OpenMV Cam. The module exposes a single user-facing class, ml.Model, which wraps the underlying
C Model class with additional Python-side conveniences (automatic label loading and automatic
image-to-tensor conversion).
class Model – Model Container
- class ml.Model(path: str, postprocess: object = None) -> Model
  Loads a TensorFlow Lite model from `path` into memory and prepares it for inference. `path` may be a file on the filesystem or the name of a model built into the firmware image.

  `postprocess` is an optional post-processing callable invoked by `Model.predict` after inference. It receives `(model, inputs, outputs)` and may return any value (e.g. a list of bounding boxes). When provided, the post-processor receives the raw (un-dequantized) model output tensors for performance.

  On construction, the wrapper additionally attempts to load a `.txt` file with the same base name as `path`; if found, each line is loaded into `Model.labels`. Otherwise `Model.labels` is `None`.

- predict(inputs: list, *, callback: object = None) -> list
Runs inference on the model and returns the output tensors.
inputs is a list with one entry per model input tensor. Each entry may be:
  - An `ndarray` whose shape matches the corresponding entry in `Model.input_shape`. Values are quantized using the input tensor's scale and zero point (float32 inputs are passed through unchanged).
  - An `image.Image` object. The wrapper automatically wraps it in an `ml.preprocessing.Normalization` object to convert it to the expected tensor.
  - A callable. It will be invoked with `(bytearray, shape, dtype)` and is expected to fill the bytearray with the input tensor data.
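As a sketch of the callable form, a filler might look like the following. This is plain, portable Python; the filler itself and the test pattern it writes are hypothetical, but the `(bytearray, shape, dtype)` calling convention matches the description above:

```python
import struct

def checkerboard_filler(buf, shape, dtype):
    """Hypothetical input filler: writes a fixed test pattern into the
    input tensor buffer that predict() passes in."""
    # Total element count from the tensor shape, e.g. (1, 32, 32, 1) -> 1024.
    count = 1
    for dim in shape:
        count *= dim
    item = struct.calcsize(dtype)
    assert len(buf) == count * item
    for i in range(count):
        # Alternate between the two int8 extremes (dtype 'b').
        struct.pack_into(dtype, buf, i * item, 127 if i % 2 == 0 else -128)

# Simulate what predict() would do for a tiny 2x2 int8 input tensor.
shape, dtype = (1, 2, 2, 1), 'b'
buf = bytearray(4)
checkerboard_filler(buf, shape, dtype)
```

On the camera you would pass the filler directly, e.g. `model.predict([checkerboard_filler])`, instead of calling it yourself.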
  `callback` is an optional per-call post-processing callable. When supplied, it overrides the `postprocess` set on the constructor for this call only. The callback receives `(model, inputs, outputs)` and its return value is returned by `predict`.

  Returns a list of `ndarray` outputs, one per model output tensor. If no post-processor is active, the outputs are dequantized to float32; if a post-processor is active, the raw output tensors (using each tensor's native dtype) are passed to it instead.
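The quantization and dequantization behavior described above follows the standard TensorFlow Lite affine scheme, real = (q - zero_point) * scale. A plain-Python sketch of both directions (the scale and zero-point values here are made up for illustration):

```python
def dequantize(raw, scale, zero_point):
    """Map raw quantized values to reals, as predict() does for its
    outputs when no post-processor is active."""
    return [(q - zero_point) * scale for q in raw]

def quantize(reals, scale, zero_point, qmin=-128, qmax=127):
    """Inverse mapping, as applied to ndarray inputs (int8 range shown)."""
    return [min(qmax, max(qmin, round(r / scale) + zero_point)) for r in reals]

# Hypothetical int8 output tensor with scale 1/256 and zero point -128:
raw = [-128, 0, 127]
scores = dequantize(raw, 1 / 256, -128)   # reals in [0, 1)
```

This is why a post-processor sees different values than a plain `predict` call: it gets `raw`, not `scores`.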
- input_dtype: list[str]
A list of single-character strings giving the dtype of each input tensor:
  `'b'` (int8), `'B'` (uint8), `'h'` (int16), `'H'` (uint16), `'f'` (float32).
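These characters are the same format codes used by Python's `struct` and `array` modules, so the element size and representable range of each tensor can be derived directly from them. A small helper (hypothetical, not part of the `ml` API):

```python
import struct

def dtype_info(code):
    """Return (byte size, (min, max)) for an integer dtype code;
    'f' is float32, so no integer range applies."""
    size = struct.calcsize(code)
    if code == 'f':
        return size, None
    unsigned = code.isupper()
    bits = size * 8
    lo = 0 if unsigned else -(1 << (bits - 1))
    hi = (1 << bits) - 1 if unsigned else (1 << (bits - 1)) - 1
    return size, (lo, hi)
```

For example, `dtype_info('b')` reports a 1-byte element clamped to [-128, 127], which is the range quantized int8 inputs must fall in.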
- input_zero_point: list[int]
A list of ints giving the quantization zero point of each input tensor.
- output_dtype: list[str]
A list of single-character strings giving the dtype of each output tensor:
  `'b'` (int8), `'B'` (uint8), `'h'` (int16), `'H'` (uint16), `'f'` (float32).