This repository was archived by the owner on Jul 29, 2024. It is now read-only.

Scoring and evaluation for continuous outcome #6

@shaddyab

Description

Q1)
Given that for a continuous outcome the theoretical max (i.e., q1_) and practical max (i.e., q2_) curves are not well defined and will not be correct, only the following six metrics can be used to evaluate the model. Is this correct? (A short sketch of how I read these scores is shown after the list.)

  1. Q_cgains
  2. Q_aqini
  3. Q_qini
  4. max_cgains
  5. max_aqini
  6. max_qini
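
For reference, here is a minimal sketch of how I am reading these metrics off an evaluation object. This assumes the UpliftEval class in pylift.eval exposes the get_scores output as attributes with these names; the toy data and the exact constructor call are my own assumptions, not taken from the library docs.

    # Minimal sketch (assumption: pylift.eval.UpliftEval exposes the score
    # dictionary from get_scores as attributes with the names listed above).
    import numpy as np
    from pylift.eval import UpliftEval

    rng = np.random.default_rng(0)
    treatment = rng.integers(0, 2, size=1000)           # binary treatment flag
    outcome = rng.normal(size=1000) + 0.1 * treatment   # continuous outcome
    prediction = rng.normal(size=1000)                   # predicted uplift

    upev = UpliftEval(treatment, outcome, prediction)

    # Only the curve-free metrics, since q1_/q2_ assume a binary outcome.
    for name in ['Q_cgains', 'Q_aqini', 'Q_qini',
                 'max_cgains', 'max_aqini', 'max_qini']:
        print(name, getattr(upev, name))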

Q2)
Based on line 205,

    score_name = 'q1_'+method

and the _score function in base.py:

    def _score(self, y_true, y_pred, method, plot_type, score_name):
        """ scoring function to be passed to make_scorer.
        """
        treatment_true, outcome_true, p = self.untransform(y_true)
        scores = get_scores(treatment_true, outcome_true, y_pred, p, scoring_range=(0,self.scoring_cutoff[method]), plot_type=plot_type)
        return scores[score_name]

three of the scoring methods that can be used for grid search ('q1_qini', 'q1_cgains', 'q1_aqini') should not be used with continuous outcomes. If that is indeed the case, I would suggest fixing this via the continuous_outcome argument that is already available, so that the 'q1_' scores are replaced with the corresponding 'Q_' scores when the outcome is continuous (see the sketch below).
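
To make the suggestion concrete, here is a rough sketch of the kind of selection logic I have in mind; the helper name _select_score_name and the continuous_outcome flag are hypothetical and only illustrate the idea, not the library's actual code.

    # Hypothetical sketch of the proposed fix; names are illustrative only.
    def _select_score_name(method, continuous_outcome):
        """Pick the scoring key based on whether the outcome is continuous."""
        # The q1_ (theoretical max) curves are not well defined for continuous
        # outcomes, so fall back to the 'Q_' scores in that case.
        return ('Q_' if continuous_outcome else 'q1_') + method

    # e.g., replacing line 205 with something like:
    # score_name = _select_score_name(method, self.continuous_outcome)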
