meddecide
Diagnostic test evaluation, decision-curve analysis, and interobserver agreement
What it does
The clinical-decision arm of the ClinicoPathJamoviModule. Covers the three questions that come up whenever a new test, biomarker, or AI model is being evaluated:
- How good is the test? — sensitivity, specificity, predictive values, likelihood ratios, ROC with bootstrap CIs.
- Does using the test help the patient? — decision-curve analysis, net benefit, clinical-impact curves.
- Do observers agree? — Cohen’s kappa, weighted kappa, Fleiss’ kappa, ICC, Krippendorff’s alpha, Bland–Altman.
When to use
- Validating a new biomarker or IHC panel.
- Evaluating an AI model vs pathologists (use this — not raw accuracy — for clinical framing).
- Running any interobserver-agreement study.
Repos
- meddecide — focused module.
- Shipped inside ClinicoPathJamoviModule.
Quick start in jamovi
Diagnostic test evaluation:
- Analyses → ClinicoPath → Decision → Diagnostic test.
- Select the test variable (predicted) and reference variable (gold standard).
- Specify the “positive” level for each.
- Output includes 2×2 table, sensitivity / specificity / PPV / NPV with CIs, and ROC.
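The metrics in that output all fall out of the 2×2 table. A minimal sketch of the core formulas, using a hypothetical confusion matrix (the counts below are made up for illustration, not module output):

```python
# Core 2x2 diagnostic metrics, computed from illustrative counts.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)        # sensitivity: positives caught among diseased
    spec = tn / (tn + fp)        # specificity: negatives caught among healthy
    ppv = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)         # negative predictive value
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "LR+": lr_pos, "LR-": lr_neg}

m = diagnostic_metrics(tp=40, fp=10, fn=5, tn=45)
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```

The module additionally wraps each estimate in a confidence interval (bootstrap for ROC); the point estimates themselves are just these ratios.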
Decision curve analysis:
- Analyses → ClinicoPath → Decision → Decision curves.
- Supply one or more predicted-probability columns plus the outcome.
- Net-benefit plot appears with reference lines for treat-all / treat-none.
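Under the hood, each point on a decision curve is a net-benefit value at one threshold probability pt: NB(pt) = TP/n − (FP/n) · pt/(1 − pt). A sketch of that calculation, assuming `probs` are model-predicted probabilities and `y` the binary outcome (toy arrays, not module internals):

```python
# Net benefit of treating everyone with predicted probability >= pt.
def net_benefit(probs, y, pt):
    n = len(y)
    tp = sum(1 for p, o in zip(probs, y) if p >= pt and o == 1)
    fp = sum(1 for p, o in zip(probs, y) if p >= pt and o == 0)
    return tp / n - (fp / n) * (pt / (1 - pt))

probs = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.15, 0.05]
y     = [1,   1,   0,   1,   0,   0,   0,    0]
prevalence = sum(y) / len(y)
for pt in (0.10, 0.25, 0.50):
    model = net_benefit(probs, y, pt)
    treat_all = prevalence - (1 - prevalence) * pt / (1 - pt)
    print(f"pt={pt:.2f}  model={model:.3f}  treat-all={treat_all:.3f}  treat-none=0.000")
```

The treat-all reference line is the net benefit of treating everyone (TP/n = prevalence, FP/n = 1 − prevalence); treat-none is identically zero.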
Agreement:
- Analyses → ClinicoPath → Agreement → Kappa / ICC.
- Select rater columns; the module picks the right statistic for your data type.
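For two raters on a nominal scale, the statistic the module would pick is Cohen's kappa. A minimal sketch of that computation, assuming `r1` and `r2` are parallel lists of categorical ratings (toy data):

```python
from collections import Counter

# Cohen's kappa: observed agreement corrected for chance agreement.
def cohens_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n      # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    categories = set(r1) | set(r2)
    pe = sum(c1[c] * c2[c] for c in categories) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

r1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
r2 = ["pos", "neg", "neg", "neg", "pos", "pos"]
print(round(cohens_kappa(r1, r2), 3))
```

Ordinal scales call for weighted kappa, more than two raters for Fleiss' kappa, and continuous measurements for ICC or Bland–Altman; the module's automatic choice follows that mapping.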
Pitfalls
- Prevalence sensitivity. PPV and NPV depend on prevalence. Always report both sensitivity / specificity and PPV / NPV, and note the cohort prevalence.
- Weighted kappa weights matter. Quadratic vs linear weights give materially different values on ordinal scales — pick one, document it.
- Decision curves are not ROC curves. Compare net benefit only across the clinically plausible range of threshold probabilities, not the whole axis; read the module's interpretation text before drawing conclusions.
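The prevalence pitfall is easy to demonstrate with Bayes' rule: hold sensitivity and specificity fixed and PPV still swings widely as prevalence changes (the numbers below are illustrative, not from any study):

```python
# PPV from fixed sensitivity/specificity via Bayes' rule.
def ppv(sens, spec, prev):
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

sens, spec = 0.90, 0.90  # a "good" test by both measures
for prev in (0.01, 0.10, 0.50):
    print(f"prevalence={prev:.2f}  PPV={ppv(sens, spec, prev):.3f}")
```

At 1% prevalence the same 90%/90% test yields a PPV under 10%, which is why the pitfall above insists on reporting the cohort prevalence alongside predictive values.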