python - scikit-learn: applying a custom error function to favor False Positives?


While the scikit-learn documentation is fantastic, I couldn't find whether there is a way to specify a custom error function to optimize in a classification problem.

Backing up a bit, I'm working on a text classification problem where false positives are better than false negatives: because the labeled text is important to the user, a false positive at worst wastes a small amount of the user's time, whereas a false negative causes potentially important information to never be seen. Therefore I'd like to scale false negative errors up (or false positive errors down, whichever) during optimization.

I understand that each algorithm optimizes a different error function, so there isn't a one-size-fits-all solution in terms of supplying a custom error function. But is there a way? For example, scaling the labels would work for any algorithm that treats the labels as real values, but it wouldn't work for an SVM, for example, because the SVM scales the labels to -1/+1 under the hood anyway.

Some estimators take a class_weight constructor argument. Assuming the classes are ["neg", "pos"], you can give the positive class an arbitrarily higher weight than the negative class, so that errors on true positives (i.e., false negatives) are penalized more heavily, e.g.:

from sklearn.svm import LinearSVC

clf = LinearSVC(class_weight={"neg": 1, "pos": 10})
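
As a minimal sketch of the effect (the toy corpus and the TfidfVectorizer pipeline here are illustrative assumptions, not part of the original answer), the up-weighted positive class pushes the decision boundary so that borderline documents tend to land on the "pos" side:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy corpus: "pos" documents are the ones the user must not miss.
texts = ["urgent: server outage in production",
         "weekly newsletter and cafeteria menu",
         "urgent: invoice overdue, action required",
         "team lunch scheduled for friday",
         "urgent: security alert on your account",
         "minutes from yesterday's standup"]
labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

# Up-weighting "pos" makes margin violations on true positives
# (false negatives) ten times as costly as those on true negatives.
model = make_pipeline(TfidfVectorizer(),
                      LinearSVC(class_weight={"neg": 1, "pos": 10}))
model.fit(texts, labels)

# A borderline document now tends to land on the "pos" side.
print(model.predict(["note about an overdue lunch invoice"]))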

Then, when you're using GridSearchCV to optimize the hyperparameters of the estimator, you should change the scorer to one that also favors false positives, such as a variant of Fᵦ with a high β:

from sklearn.metrics import fbeta_score
from sklearn.model_selection import GridSearchCV

def f3_scorer(estimator, X, y_true):
    y_pred = estimator.predict(X)
    # pos_label is needed because the classes are the strings "neg"/"pos"
    return fbeta_score(y_true, y_pred, beta=3, pos_label="pos")

gs = GridSearchCV(clf, params, scoring=f3_scorer)
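
To show how the pieces fit together, here is a hedged end-to-end sketch; the toy data, the parameter grid, and the cv setting are assumptions for illustration. It also uses make_scorer, which builds the same F3 scorer as the hand-written function above:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy data, assumed for illustration only.
texts = ["urgent: server outage", "cafeteria menu update",
         "urgent: overdue invoice", "friday team lunch",
         "urgent: security alert", "standup minutes",
         "urgent: disk almost full", "holiday schedule"]
labels = ["pos", "neg", "pos", "neg", "pos", "neg", "pos", "neg"]

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("svc", LinearSVC(class_weight={"neg": 1, "pos": 10})),
])

# Hypothetical grid: only the SVM's regularization strength is searched.
params = {"svc__C": [0.1, 1.0, 10.0]}

# pos_label is required because the class labels are strings.
f3_scorer = make_scorer(fbeta_score, beta=3, pos_label="pos")

gs = GridSearchCV(pipe, params, scoring=f3_scorer, cv=4)
gs.fit(texts, labels)
print(gs.best_params_, gs.best_score_)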
