czbenchmarks.tasks.single_cell.perturbation

Attributes

logger

Classes

PerturbationTask

Task for evaluating perturbation prediction quality.

Module Contents

czbenchmarks.tasks.single_cell.perturbation.logger
class czbenchmarks.tasks.single_cell.perturbation.PerturbationTask[source]

Bases: czbenchmarks.tasks.base.BaseTask

Task for evaluating perturbation prediction quality.

This task computes metrics assessing how well a model predicts gene expression changes in response to perturbations. It compares predicted and ground-truth perturbation effects using MSE and correlation metrics.
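The comparison of predicted versus ground-truth effects can be sketched with a minimal, self-contained helper (this is an illustrative function, not part of the czbenchmarks API; the actual task returns MetricResult objects):

```python
import numpy as np


def perturbation_metrics(predicted: np.ndarray, truth: np.ndarray) -> dict:
    """Compare predicted vs ground-truth per-gene expression changes.

    Computes the two kinds of metrics this task reports: mean squared
    error and (Pearson) correlation between the two effect vectors.
    """
    mse = float(np.mean((predicted - truth) ** 2))
    corr = float(np.corrcoef(predicted, truth)[0, 1])
    return {"mse": mse, "pearson_correlation": corr}


# Toy per-gene log-fold-change vectors (hypothetical values).
pred = np.array([0.5, 1.2, -0.3, 0.8])
true = np.array([0.6, 1.0, -0.2, 0.9])
metrics = perturbation_metrics(pred, true)
```

A low MSE and a correlation near 1.0 indicate that the model captures both the magnitude and the direction of the perturbation response.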

property display_name: str

A pretty name to use when displaying task results

property required_inputs: Set[czbenchmarks.datasets.DataType]

Required input data types.

Returns:

Set of required input DataTypes (ground truth perturbation effects)

property required_outputs: Set[czbenchmarks.datasets.DataType]

Required output data types.

Returns:

Set of required output DataTypes that models must produce for this task to run (predicted perturbation effects)

set_baseline(data: czbenchmarks.datasets.PerturbationSingleCellDataset, gene_pert: str, baseline_type: Literal['median', 'mean'] = 'median', **kwargs)[source]

Set a baseline embedding for perturbation prediction.

Creates baseline predictions using a simple statistical method (median or mean) applied to the control data, and evaluates these predictions against ground truth.

Parameters:
  • data – PerturbationSingleCellDataset containing control and perturbed data

  • gene_pert – The perturbation gene to evaluate

  • baseline_type – The statistical method to use for baseline prediction (median or mean)

  • **kwargs – Additional arguments passed to the evaluation

Returns:

List of MetricResult objects containing baseline performance metrics for the selected statistical method (median or mean)
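The baseline idea above can be sketched without the library: predict the perturbed expression profile as a per-gene statistic over the control cells. This is an illustrative stand-in, not the czbenchmarks implementation; the function name and the toy matrix are hypothetical:

```python
import numpy as np


def baseline_prediction(control: np.ndarray, baseline_type: str = "median") -> np.ndarray:
    """Predict post-perturbation expression from control cells alone.

    control: (n_cells, n_genes) matrix of control expression values.
    Returns a length-n_genes vector used as the baseline prediction.
    """
    if baseline_type == "median":
        return np.median(control, axis=0)
    if baseline_type == "mean":
        return control.mean(axis=0)
    raise ValueError(f"unknown baseline_type: {baseline_type!r}")


# Three control cells, two genes (toy data).
control = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 12.0]])
median_pred = baseline_prediction(control, "median")
mean_pred = baseline_prediction(control, "mean")
```

Such baselines are useful sanity checks: a model that cannot beat the control median or mean has not learned anything perturbation-specific.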