czbenchmarks.tasks.single_cell.cross_species_label_prediction
=============================================================

.. py:module:: czbenchmarks.tasks.single_cell.cross_species_label_prediction


Attributes
----------

.. autoapisummary::

   czbenchmarks.tasks.single_cell.cross_species_label_prediction.logger


Classes
-------

.. autoapisummary::

   czbenchmarks.tasks.single_cell.cross_species_label_prediction.CrossSpeciesLabelPredictionTaskInput
   czbenchmarks.tasks.single_cell.cross_species_label_prediction.CrossSpeciesLabelPredictionOutput
   czbenchmarks.tasks.single_cell.cross_species_label_prediction.CrossSpeciesLabelPredictionTask


Module Contents
---------------

.. py:data:: logger

.. py:class:: CrossSpeciesLabelPredictionTaskInput(/, **data: Any)

   Bases: :py:obj:`czbenchmarks.tasks.task.TaskInput`

   Input model for the cross-species label prediction task.

   Create a new model by parsing and validating input data from keyword arguments.

   Raises [`ValidationError`][pydantic_core.ValidationError] if the input data
   cannot be validated to form a valid model.

   `self` is explicitly positional-only to allow `self` as a field name.

   .. py:attribute:: labels
      :type: List[czbenchmarks.types.ListLike]

   .. py:attribute:: organisms
      :type: List[czbenchmarks.datasets.types.Organism]

   .. py:attribute:: sample_ids
      :type: Optional[List[czbenchmarks.types.ListLike]]
      :value: None

   .. py:attribute:: aggregation_method
      :type: Literal['none', 'mean', 'median']
      :value: 'mean'

   .. py:attribute:: n_folds
      :type: int
      :value: 5

.. py:class:: CrossSpeciesLabelPredictionOutput(/, **data: Any)

   Bases: :py:obj:`czbenchmarks.tasks.task.TaskOutput`

   Output model for the cross-species label prediction task.

   Create a new model by parsing and validating input data from keyword arguments.

   Raises [`ValidationError`][pydantic_core.ValidationError] if the input data
   cannot be validated to form a valid model.

   `self` is explicitly positional-only to allow `self` as a field name.

   .. py:attribute:: results
      :type: List[Dict[str, Any]]

.. py:class:: CrossSpeciesLabelPredictionTask(*, random_seed: int = RANDOM_SEED)

   Bases: :py:obj:`czbenchmarks.tasks.task.Task`

   Task for cross-species label prediction evaluation.

   This task evaluates cross-species transfer by training classifiers on one
   species and testing on another. It computes accuracy, F1, precision, recall,
   and AUROC for multiple classifiers (logistic regression, KNN, random forest).
   The task can optionally aggregate cell-level embeddings to sample/donor
   level before running classification.

   :param random_seed: Random seed for reproducibility
   :type random_seed: int

   .. py:attribute:: display_name
      :value: 'cross-species label prediction'

   .. py:attribute:: requires_multiple_datasets
      :value: True

   .. py:method:: compute_baseline(**kwargs)
      :abstractmethod:

      Set a baseline for cross-species label prediction.

      This method is not implemented for cross-species prediction tasks because
      standard preprocessing workflows must be applied per species.

      :raises NotImplementedError: Always raised, as no baseline is implemented
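The ``aggregation_method`` field controls whether cell-level embeddings are pooled to sample/donor level before classification. The pooling step can be sketched in plain Python as follows; the function name ``aggregate_embeddings`` and the list-of-lists layout are illustrative assumptions, not the library's actual API:

```python
from collections import defaultdict
from statistics import mean, median
from typing import Dict, List


def aggregate_embeddings(
    embeddings: List[List[float]],
    sample_ids: List[str],
    method: str = "mean",
) -> Dict[str, List[float]]:
    """Pool per-cell embedding vectors to one vector per sample ID.

    Illustrative sketch only: czbenchmarks' internal aggregation may differ.
    """
    if method == "none":
        # No pooling: keep one entry per cell, keyed by cell index here.
        return {f"cell_{i}": vec for i, vec in enumerate(embeddings)}
    agg = mean if method == "mean" else median
    grouped: Dict[str, List[List[float]]] = defaultdict(list)
    for vec, sid in zip(embeddings, sample_ids):
        grouped[sid].append(vec)
    # Aggregate each embedding dimension independently within a sample.
    return {sid: [agg(dim) for dim in zip(*vecs)] for sid, vecs in grouped.items()}
```

With ``method="mean"``, three cells split across two donors collapse to two donor-level vectors, which is the granularity at which the classifiers are then trained and evaluated.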
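Because the task trains on one species and tests on another, it needs every ordered (train, test) pair over the supplied organisms. A minimal sketch of that pairing, assuming organisms are represented here as plain strings (helper name ``cross_species_pairs`` is hypothetical):

```python
from itertools import permutations
from typing import List, Tuple


def cross_species_pairs(organisms: List[str]) -> List[Tuple[str, str]]:
    """All ordered (train_species, test_species) pairs with distinct species.

    Illustrative sketch of the cross-species transfer setup described above;
    the library's own pairing logic is not shown in this documentation.
    """
    return list(permutations(organisms, 2))
```

For two organisms this yields both transfer directions; for *n* organisms it yields *n*(*n* − 1) ordered pairs, each of which would be evaluated with every classifier.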