API Reference
The cz-benchmarks package consists of several core modules, each designed to work independently while contributing to a cohesive benchmarking workflow. Below is an overview of these modules, along with links to their detailed documentation.
Core Modules
- Datasets (czbenchmarks.datasets):
Contains classes for loading and validating datasets (e.g., SingleCellDataset), with support for AnnData and custom metadata. See the full documentation: czbenchmarks.datasets.
- Models (czbenchmarks.models):
- Implementations:
Contains concrete model inference logic, packaged as Docker containers. Each implementation extends the BaseModelImplementation base class.
- Validators:
Ensure that datasets meet the requirements of particular models. Validators extend BaseModelValidator or BaseSingleCellValidator. See the full documentation: czbenchmarks.models.
- Tasks (czbenchmarks.tasks):
Provides evaluation tasks (e.g., clustering, embedding, perturbation prediction) by extending the BaseTask class. See the full documentation: czbenchmarks.tasks.
- Metrics (czbenchmarks.metrics):
Maintains a registry of metric functions through the MetricRegistry interface and organizes metrics into categories (clustering, embedding, etc.). See the full documentation: czbenchmarks.metrics.
- Runner (czbenchmarks.runner):
Orchestrates the overall workflow: loading datasets, running model inference, executing tasks, and serializing results. See the full documentation: czbenchmarks.runner.
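To make the metrics design concrete, here is a minimal, self-contained sketch of the registry pattern that MetricRegistry follows: metric functions are registered under a name and a category, then looked up and computed by name. All class and method names below are illustrative stand-ins, not the actual czbenchmarks.metrics API; consult the linked module documentation for the real interface.

```python
# Minimal sketch of a metric registry, in the spirit of
# czbenchmarks.metrics.MetricRegistry. Names here are hypothetical.
from typing import Callable, Dict, List


class MetricRegistry:
    """Maps metric names to functions, grouped by category."""

    def __init__(self) -> None:
        self._metrics: Dict[str, Callable[..., float]] = {}
        self._categories: Dict[str, str] = {}

    def register(self, name: str, category: str, fn: Callable[..., float]) -> None:
        # Store the function and remember its category (clustering, embedding, ...).
        self._metrics[name] = fn
        self._categories[name] = category

    def compute(self, name: str, **kwargs) -> float:
        # Look up a registered metric by name and evaluate it.
        return self._metrics[name](**kwargs)

    def list_metrics(self, category: str) -> List[str]:
        # Enumerate the metrics registered under one category.
        return [n for n, c in self._categories.items() if c == category]


registry = MetricRegistry()
# A toy "agreement" metric standing in for a real clustering metric like ARI.
registry.register("ari", "clustering", lambda labels, preds: 1.0 if labels == preds else 0.0)

print(registry.list_metrics("clustering"))                    # ['ari']
print(registry.compute("ari", labels=[0, 1], preds=[0, 1]))   # 1.0
```

The value of this pattern is that tasks can request metrics by category without hard-coding which functions exist, so new metrics can be added without touching task code.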
Additional Utilities
- CLI (czbenchmarks.cli):
Command-line interface for interacting with the cz-benchmarks package. See the full documentation: czbenchmarks.cli.
- Utils (czbenchmarks.utils):
Contains utility functions and helpers used across the package. See the full documentation: czbenchmarks.utils.
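The modules above compose into the load-infer-evaluate flow that the runner orchestrates. The following standalone sketch mirrors that flow with plain-Python stand-ins; none of these classes or functions are the real czbenchmarks API (a real run would use SingleCellDataset, a Docker-packaged model implementation, and a BaseTask subclass).

```python
# Illustrative end-to-end flow mirroring what czbenchmarks.runner does:
# load a dataset, run model inference, evaluate a task, collect results.
# Every name here is a hypothetical stand-in for the real API.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Dataset:
    """Stand-in for a loaded dataset (e.g., SingleCellDataset wrapping AnnData)."""
    name: str
    X: List[List[float]]  # toy expression matrix


@dataclass
class Result:
    """Stand-in for a serialized task result."""
    task: str
    metrics: Dict[str, float] = field(default_factory=dict)


def run_inference(dataset: Dataset) -> List[List[float]]:
    # A real implementation would invoke a containerized model;
    # here we just emit a fake one-dimensional embedding per cell.
    return [[row[0]] for row in dataset.X]


def run_task(embedding: List[List[float]]) -> Result:
    # Stand-in for a BaseTask subclass's evaluation step.
    return Result(task="embedding", metrics={"n_cells": len(embedding)})


dataset = Dataset(name="example", X=[[0.1, 0.2], [0.3, 0.4]])
embedding = run_inference(dataset)
result = run_task(embedding)
print(result.metrics["n_cells"])  # 2
```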