About
What it does. Given contract type, tenure, charges, and service flags, the API returns a churn probability, a binary prediction (using the F1-tuned threshold from training), and a risk band.
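A minimal sketch of how one probability could map to the three response fields. The threshold default and the band cutoffs here are illustrative assumptions, not the values the deployed model actually uses.

```python
def to_response(prob: float, threshold: float = 0.35) -> dict:
    """Map a model probability to the API's three output fields.

    `threshold` and the band boundaries (0.3 / 0.6) are placeholder
    values; the real ones come from the training pipeline.
    """
    band = "low" if prob < 0.3 else "medium" if prob < 0.6 else "high"
    return {
        "churn_probability": prob,
        "prediction": int(prob >= threshold),  # binary call via tuned threshold
        "risk_band": band,
    }
```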
How it was built. Python pipeline: feature engineering, then three candidates (Logistic Regression, Random Forest, Gradient Boosting), with the best model selected by ROC-AUC. This deployment is inference-only; the trained weights live in model/churn_model.joblib.
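The selection step can be sketched as below. This uses synthetic data and default hyperparameters; the real features, tuning, and evaluation setup live in the repo's notebooks.

```python
# Sketch of ROC-AUC model selection across the three candidate families.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the churn data (~27% positive class).
X, y = make_classification(n_samples=2000, weights=[0.73], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
scores = {
    name: roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)  # the API serves this winner's weights
```

The winner would then be persisted with `joblib.dump` to produce the `model/churn_model.joblib` file the API loads.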
- GET /meta — stack, dataset, training summary
- GET /health — API + model file check
Key findings
From EDA and modeling; full notebooks are in the repo.
- Class balance. ~27% churn; stratified split.
- Who churns. Month-to-month and shorter tenure dominate.
- Models. Three candidates compared; API serves ROC-AUC winner.
- Threshold. F1-tuned on the held-out test set rather than a fixed 0.5.
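The F1 tuning in the last bullet can be sketched with scikit-learn's precision-recall curve. Synthetic scores stand in for real model probabilities; the actual procedure is in the repo's threshold notebook.

```python
# Sketch: pick the classification threshold that maximizes F1.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
y_true = rng.random(1000) < 0.27                       # ~27% positives, as in the data
scores = np.clip(y_true * 0.4 + rng.random(1000) * 0.6, 0, 1)

prec, rec, thresholds = precision_recall_curve(y_true, scores)
f1 = 2 * prec * rec / np.clip(prec + rec, 1e-12, None)  # guard against 0/0
# precision/recall have one more entry than thresholds, so drop the last F1.
best_threshold = thresholds[f1[:-1].argmax()]           # threshold the API would apply
```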
Chart missing? From the repo root, run python scripts/generate_figures.py, then copy reports/figures/model_comparison.png to vercel_demo/public/model_comparison.png.
Notebooks on GitHub — 01 EDA, 02 features, 03 models, 04 threshold.
Limitations
- Dataset. Public teaching data — not production distribution.
- Churn definition. Snapshot label; real goals may need different cutoffs.
- Ops. No retraining, drift, or monitoring in this demo.
Full pipeline & notebooks
Training, EDA, model comparison, threshold analysis, FastAPI source.