A segmentation model class.
```python
tfm.vision.models.SegmentationModel(
    backbone: tf.keras.Model,
    decoder: tf.keras.Model,
    head: tf.keras.layers.Layer,
    mask_scoring_head: Optional[tf.keras.layers.Layer] = None,
    **kwargs
)
```
Input images are first passed through the backbone. The decoder network is then applied, and finally the segmentation head is applied to the output of the decoder network. Layers such as ASPP should be part of the decoder. Any feature fusion is done as part of the segmentation head (i.e. DeepLabV3+ feature fusion is not part of the decoder; instead it is part of the segmentation head). This way, different feature fusion techniques can be combined with different backbones and decoders.
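The composition described above can be sketched without any framework dependencies. This is only an illustration of where each responsibility lives; the `backbone`, `decoder`, and `head` callables below are stand-ins for the real `tf.keras` components, and the `"logits"` key is assumed for illustration:

```python
# Framework-free sketch of the backbone -> decoder -> head composition.
# All functions below are illustrative stand-ins, not the real implementation.

def backbone(images):
    # Extracts multi-scale features from the input images.
    return {"low_level": [x * 2 for x in images],
            "high_level": [x * 4 for x in images]}

def decoder(features):
    # Refines high-level features; this is where layers like ASPP would live.
    return [x + 1 for x in features["high_level"]]

def head(decoded, features):
    # Fuses decoder output with low-level backbone features: DeepLabV3+-style
    # feature fusion happens here, in the head, not in the decoder.
    return [d + l for d, l in zip(decoded, features["low_level"])]

def segmentation_model(images):
    features = backbone(images)
    decoded = decoder(features)
    return {"logits": head(decoded, features)}

print(segmentation_model([1.0, 2.0]))  # {'logits': [7.0, 13.0]}
```

Because fusion is confined to the head, swapping in a different backbone or decoder does not require changing the fusion logic, which is the design motivation stated above.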
| Args | |
|---|---|
| `backbone` | A backbone network. |
| `decoder` | A decoder network, e.g. FPN. |
| `head` | A segmentation head. |
| `mask_scoring_head` | An optional mask scoring head. |
| `**kwargs` | Keyword arguments to be passed. |
Returns a dictionary of items to be additionally checkpointed.
```python
call(
    inputs: tf.Tensor, training: bool = None
) -> Dict[str, tf.Tensor]
```
Calls the model on new inputs and returns the outputs as tensors. In this case `call()` just reapplies all ops in the graph to the new inputs (e.g. builds a new computational graph from the provided inputs).
| Args | |
|---|---|
| `inputs` | Input tensor, or dict/list/tuple of input tensors. |
| `training` | Boolean or boolean scalar tensor, indicating whether to run the model in training mode or inference mode. |
| `mask` | A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, see the Keras masking guide. |

| Returns | |
|---|---|
| A tensor if there is a single output, or a list of tensors if there are more than one output. |
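Since `call()` returns a dictionary of output tensors, downstream code typically looks up a key and reduces the per-pixel scores to class labels. A minimal sketch, assuming the output key is `"logits"` (the key name is an assumption, not confirmed by this page) and using plain Python lists in place of tensors:

```python
# Sketch of consuming the dictionary returned by call().
# The "logits" key and the toy 2-pixel, 2-class scores are illustrative.
outputs = {"logits": [[0.1, 0.9], [0.8, 0.2]]}  # per-pixel class scores

def argmax(scores):
    # Index of the highest-scoring class for one pixel.
    return max(range(len(scores)), key=scores.__getitem__)

mask = [argmax(pixel) for pixel in outputs["logits"]]
print(mask)  # [1, 0]
```

With real tensors the same reduction would be a `tf.argmax` over the channel axis.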