mindpose.engine

mindpose.engine.create_evaluator(annotation_file, name='topdown', metric='AP', config=None, dataset_config=None, **kwargs)[source]

Create an evaluator engine. The evaluator engine computes metric performance from the provided prediction result.

Parameters:
  • annotation_file (str) – Path of the annotation file. Only the COCO format is supported for now.

  • name (str) – Name of the evaluation method. Default: “topdown”

  • metric (Union[str, List[str]]) – Supported metrics. Default: “AP”

  • config (Optional[Dict[str, Any]]) – Evaluation config. Default: None

  • dataset_config (Optional[Dict[str, Any]]) – Dataset config, since the evaluation method sometimes relies on dataset info. Default: None

  • **kwargs (Any) – Additional arguments passed to the evaluator

Return type:

Evaluator

Returns:

Evaluator engine for evaluation
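
A minimal usage sketch (the annotation path is a placeholder, and the records are assumed to come from an inference engine run elsewhere):

    from typing import Any, Dict, List

    from mindpose.engine import create_evaluator


    def evaluate(records: List[Dict[str, Any]], annotation_file: str) -> Dict[str, Any]:
        """Score inference records against a COCO-format annotation file."""
        # Build a top-down evaluator reporting AP; config/dataset_config stay at defaults.
        evaluator = create_evaluator(annotation_file, name="topdown", metric="AP")
        return evaluator.eval(records)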

mindpose.engine.create_inferencer(net, name='topdown_heatmap', config=None, dataset_config=None, **kwargs)[source]

Create an inference engine. The inference engine performs model inference on the entire dataset using the given method.

Parameters:
  • net (EvalNet) – Network for evaluation

  • name (str) – Name of the inference method. Default: “topdown_heatmap”

  • config (Optional[Dict[str, Any]]) – Inference config. Default: None

  • dataset_config (Optional[Dict[str, Any]]) – Dataset config, since the inference method sometimes relies on dataset info. Default: None

  • **kwargs (Any) – Additional arguments passed to the inferencer

Return type:

Inferencer

Returns:

Inference engine for running inference
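
A minimal usage sketch (assuming net is an EvalNet and dataset is the evaluation dataset, both prepared elsewhere):

    from typing import Any, Dict, List

    from mindpose.engine import create_inferencer


    def run_inference(net, dataset) -> List[Dict[str, Any]]:
        """Run top-down heatmap inference over the whole dataset."""
        inferencer = create_inferencer(net, name="topdown_heatmap")
        # Each returned record carries pred / box / image_path / bbox_id.
        return inferencer.infer(dataset)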

mindpose.engine.inferencer

class mindpose.engine.inferencer.BottomUpHeatMapAEInferencer(net, config=None, progress_bar=False, decoder=None)[source]

Bases: Inferencer

Create an inference engine for the bottom-up heatmap method with associative embedding. It runs inference on the entire dataset and outputs a list of records.

Parameters:
  • net (EvalNet) – Network for evaluation

  • config (Optional[Dict[str, Any]]) – Method-specific configuration. Default: None

  • progress_bar (bool) – Display the progress bar during inferencing. Default: False

  • decoder (Optional[BottomUpHeatMapAEDecoder]) – Decoder cell. It is used for hflip TTA. Default: None

Inputs:
dataset: Dataset
Outputs:
records: List of inference records.

infer(dataset)[source]

Run inference on the dataset and return a list of records. To be compatible with the evaluator engine, each record should normally contain the following keys:

Keys:
pred: The predicted coordinates, in shape [M, 3(x_coord, y_coord, score)]
box: The corresponding bounding boxes, each record contains (center_x, center_y, scale_x, scale_y, area, bounding box score)
image_path: The path of the image
bbox_id: Bounding box ID
Parameters:

dataset (Dataset) – Dataset for inferencing

Return type:

List[Dict[str, Any]]

Returns:

List of inference results

load_inference_cfg()[source]

Load the inference config. The returned config must be a dictionary storing the engine configuration, such as whether to use TTA.

Return type:

Dict[str, Any]

Returns:

Inference configurations
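
A minimal usage sketch (net and dataset are assumed to be prepared elsewhere; passing a BottomUpHeatMapAEDecoder as decoder enables hflip TTA):

    from typing import Any, Dict, List

    from mindpose.engine.inferencer import BottomUpHeatMapAEInferencer


    def run_bottomup_inference(net, dataset, decoder=None) -> List[Dict[str, Any]]:
        """Run bottom-up associative-embedding inference over the whole dataset."""
        inferencer = BottomUpHeatMapAEInferencer(net, progress_bar=True, decoder=decoder)
        return inferencer.infer(dataset)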

class mindpose.engine.inferencer.Inferencer(net, config=None)[source]

Bases: object

Create an inference engine. It runs the inference on the entire dataset and outputs a list of records.

Parameters:
  • net (EvalNet) – Network for inference

  • config (Optional[Dict[str, Any]]) – Method-specific configuration for inference. Default: None

Inputs:
dataset: Dataset for inferencing
Outputs:
records: List of inference records

Note

This is an abstract class; child classes must implement the load_inference_cfg method.

infer(dataset)[source]

Run inference on the dataset and return a list of records. To be compatible with the evaluator engine, each record should normally contain the following keys:

Keys:
pred: The predicted coordinates, in shape [C, 3(x_coord, y_coord, score)]
box: The corresponding bounding boxes, each record contains (center_x, center_y, scale_x, scale_y, area, bounding box score)
image_path: The path of the image
bbox_id: Bounding box ID
Parameters:

dataset (Dataset) – Dataset for inferencing

Return type:

List[Dict[str, Any]]

Returns:

List of inference results

load_inference_cfg()[source]

Load the inference config. The returned config must be a dictionary storing the engine configuration, such as whether to use TTA.

Return type:

Dict[str, Any]

Returns:

Inference configurations
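
A minimal sketch of a child class (hypothetical; per the note above, only load_inference_cfg must be provided, and the config key below is a placeholder):

    from typing import Any, Dict

    from mindpose.engine.inferencer import Inferencer


    class MyInferencer(Inferencer):
        # infer() is documented on the base class; only the config hook is mandatory.
        def load_inference_cfg(self) -> Dict[str, Any]:
            # Return the engine configuration as a plain dictionary
            # (the key below is a placeholder for a real option such as TTA).
            return {"hflip_tta": False}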

class mindpose.engine.inferencer.TopDownHeatMapInferencer(net, config=None, progress_bar=False, decoder=None)[source]

Bases: Inferencer

Create an inference engine for the top-down heatmap based method. It runs inference on the entire dataset and outputs a list of records.

Parameters:
  • net (EvalNet) – Network for evaluation

  • config (Optional[Dict[str, Any]]) – Method-specific configuration. Default: None

  • progress_bar (bool) – Display the progress bar during inferencing. Default: False

  • decoder (Optional[TopDownHeatMapDecoder]) – Decoder cell. It is used for hflip TTA. Default: None

Inputs:
dataset: Dataset
Outputs:
records: List of inference records.

infer(dataset)[source]

Run inference on the dataset and return a list of records. To be compatible with the evaluator engine, each record should normally contain the following keys:

Keys:
pred: The predicted coordinates, in shape [M, 3(x_coord, y_coord, score)]
box: The corresponding bounding boxes, each record contains (center_x, center_y, scale_x, scale_y, area, bounding box score)
image_path: The path of the image
bbox_id: Bounding box ID
Parameters:

dataset (Dataset) – Dataset for inferencing

Return type:

List[Dict[str, Any]]

Returns:

List of inference results

load_inference_cfg()[source]

Load the inference config. The returned config must be a dictionary storing the engine configuration, such as whether to use TTA.

Return type:

Dict[str, Any]

Returns:

Inference configurations
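
A minimal usage sketch (net and dataset are assumed to be prepared elsewhere; the loop just shows how the documented record keys can be consumed):

    from mindpose.engine.inferencer import TopDownHeatMapInferencer


    def summarize_predictions(net, dataset) -> None:
        """Run top-down heatmap inference and print a short summary per record."""
        inferencer = TopDownHeatMapInferencer(net, progress_bar=True)
        for record in inferencer.infer(dataset):
            keypoints = record["pred"]              # [M, 3]: (x, y, score) per joint
            center_x, center_y = record["box"][:2]  # box encoded as documented above
            print(record["image_path"], record["bbox_id"], len(keypoints), center_x, center_y)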

mindpose.engine.evaluator

class mindpose.engine.evaluator.BottomUpEvaluator(annotation_file, metric='AP', num_joints=17, config=None, remove_result_file=True, result_path='./result_keypoints.json')[source]

Bases: Evaluator

Create an evaluator based on the bottom-up method. It performs model evaluation based on the inference result (a list of records) and outputs the metric result.

Parameters:
  • annotation_file (str) – Path of the annotation file. Only the COCO format is supported.

  • metric (Union[str, List[str]]) – Supported metrics. Default: “AP”

  • num_joints (int) – Number of joints. Default: 17

  • config (Optional[Dict[str, Any]]) – Method-specific configuration. Default: None

  • remove_result_file (bool) – Remove the cached result file after evaluation. Default: True

  • result_path (str) – Path of the result file. Default: “./result_keypoints.json”

Inputs:
inference_result: Inference result from inference engine
Outputs:
evaluation_result: Evaluation result based on the metric

eval(inference_result)[source]

Run the evaluation based on the inference result and output the metric result.

Parameters:

inference_result (Dict[str, Any]) – List of inference records

Return type:

Dict[str, Any]

Returns:

Metric result, such as AP.5, etc.

load_evaluation_cfg()[source]

Load the evaluation config. The returned config must be a dictionary storing the engine configuration, such as whether to use soft-NMS.

Return type:

Dict[str, Any]

Returns:

Evaluation configurations

class mindpose.engine.evaluator.Evaluator(annotation_file, metric='AP', num_joints=17, config=None)[source]

Bases: object

Create an evaluator engine. It performs model evaluation based on the inference result (a list of records) and outputs the metric result.

Parameters:
  • annotation_file (str) – Path of the annotation file. Only the COCO format is supported for now.

  • metric (Union[str, List[str]]) – Supported metrics. Default: “AP”

  • num_joints (int) – Number of joints. Default: 17

  • config (Optional[Dict[str, Any]]) – Method-specific configuration. Default: None

Inputs:

inference_result: Inference result from inference engine

Outputs:

evaluation_result: Evaluation result based on the metric

Note

This is an abstract class; child classes must implement the load_evaluation_cfg method.

eval(inference_result)[source]

Run the evaluation based on the inference result and output the metric result.

Parameters:

inference_result (Dict[str, Any]) – List of inference records

Return type:

Dict[str, Any]

Returns:

Metric result, such as AP.5, etc.

load_evaluation_cfg()[source]

Load the evaluation config. The returned config must be a dictionary storing the engine configuration, such as whether to use soft-NMS.

Return type:

Dict[str, Any]

Returns:

Evaluation configurations

property metrics: Set[str]

Returns the metrics used in evaluation.
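
A minimal sketch of a child class (hypothetical; per the note above, only load_evaluation_cfg must be provided, and the config key and annotation path are placeholders):

    from typing import Any, Dict

    from mindpose.engine.evaluator import Evaluator


    class MyEvaluator(Evaluator):
        def load_evaluation_cfg(self) -> Dict[str, Any]:
            # Return the engine configuration as a plain dictionary
            # (the key below is a placeholder for a real option such as soft-NMS).
            return {"use_soft_nms": False}


    # With a real COCO-format annotation file in place:
    # evaluator = MyEvaluator("annotations/person_keypoints_val2017.json", metric="AP")
    # print(evaluator.metrics)  # set of metrics in use, e.g. {"AP"}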

class mindpose.engine.evaluator.TopDownEvaluator(annotation_file, metric='AP', num_joints=17, config=None, remove_result_file=True, result_path='./result_keypoints.json')[source]

Bases: Evaluator

Create an evaluator based on the top-down method. It performs model evaluation based on the inference result (a list of records) and outputs the metric result.

Parameters:
  • annotation_file (str) – Path of the annotation file. Only the COCO format is supported.

  • metric (Union[str, List[str]]) – Supported metrics. Default: “AP”

  • num_joints (int) – Number of joints. Default: 17

  • config (Optional[Dict[str, Any]]) – Method-specific configuration. Default: None

  • remove_result_file (bool) – Remove the cached result file after evaluation. Default: True

  • result_path (str) – Path of the result file. Default: “./result_keypoints.json”

Inputs:
inference_result: Inference result from inference engine
Outputs:
evaluation_result: Evaluation result based on the metric

eval(inference_result)[source]

Run the evaluation based on the inference result and output the metric result.

Parameters:

inference_result (Dict[str, Any]) – List of inference records

Return type:

Dict[str, Any]

Returns:

Metric result, such as AP.5, etc.

load_evaluation_cfg()[source]

Load the evaluation config. The returned config must be a dictionary storing the engine configuration, such as whether to use soft-NMS.

Return type:

Dict[str, Any]

Returns:

Evaluation configurations
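
Putting the pieces together, a minimal end-to-end sketch (net, dataset, and the annotation path are assumed to be prepared elsewhere):

    from typing import Any, Dict

    from mindpose.engine.evaluator import TopDownEvaluator
    from mindpose.engine.inferencer import TopDownHeatMapInferencer


    def evaluate_topdown(net, dataset, annotation_file: str) -> Dict[str, Any]:
        """Run top-down inference and score the records against COCO annotations."""
        inferencer = TopDownHeatMapInferencer(net, progress_bar=True)
        records = inferencer.infer(dataset)

        evaluator = TopDownEvaluator(annotation_file, metric="AP")
        return evaluator.eval(records)  # metric result, e.g. AP / AP.5 scores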