Returns true if batched input should be used.
```python
tfma.is_batched_input(
    eval_shared_model: Optional[tfma.types.EvalSharedModel] = None,
    eval_config: Optional[tfma.EvalConfig] = None,
    config_version: Optional[int] = None
) -> bool
```
We will keep supporting the legacy unbatched V1 PredictExtractor because it parses the features and labels, and is currently the only solution that allows slicing on transformed features. Eventually we should have support for transformed features via Keras preprocessing layers.
Args

| `eval_shared_model` | Shared model (single-model evaluation) or list of shared models (multi-model evaluation). Required unless the predictions are provided alongside the features (i.e. model-agnostic evaluations). |
| `eval_config` | Eval config. |
| `config_version` | Optional config version for this evaluation. This should not be explicitly set by users. It is only intended to be used in cases where the provided eval_config was generated internally, and thus not a reliable indicator of user intent. |

Returns

| A boolean indicating whether batched extractors should be used. |
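Below is a minimal usage sketch, not part of the original page: the SavedModel path and the `EvalConfig` contents are illustrative assumptions, and `tfma.default_eval_shared_model` is shown only as one common way to construct the shared model. For model-agnostic evaluations, where predictions are provided alongside the features, `eval_shared_model` may be left as `None`.

```python
# Minimal sketch, assuming tensorflow_model_analysis is installed and a
# SavedModel exists at the (hypothetical) path below.
import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key="label")],  # illustrative config
)
eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path="/path/to/saved_model",  # hypothetical path
    eval_config=eval_config,
)

# Ask TFMA whether batched extractors should be used for this setup.
if tfma.is_batched_input(
    eval_shared_model=eval_shared_model,
    eval_config=eval_config,
):
    print("Use batched extractors.")
else:
    print("Fall back to the legacy unbatched V1 PredictExtractor.")
```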