Methodology
How LLM Adoption Rating is computed from Git activity
LLM Adoption Rating (0–100)
The score is a single headline number that summarizes whether a profile shows a sustained step change in activity patterns near the user-reported “90% LLM point” (dominant_at).
Inputs
We use the declared dominant_at plus weekly aggregates derived from Git history:
- commits and changed (insertions + deletions) per week
- Optional: per-week technology breakdown used for “breadth” (entropy). If missing, the score is still computed without the breadth term.
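The weekly aggregation step can be sketched as follows. This is a minimal illustration, assuming per-commit records of the form (date, insertions, deletions); the actual record schema used by the service is not specified here.

```python
from collections import defaultdict
from datetime import date, timedelta

def weekly_aggregates(commits):
    """Roll per-commit records up into per-week totals.

    `commits` is a list of (commit_date, insertions, deletions) tuples;
    this record shape is an assumption, not the service's actual schema.
    Weeks are anchored on Monday.
    """
    weeks = defaultdict(lambda: {"commits": 0, "changed": 0})
    for d, ins, dels in commits:
        week_start = d - timedelta(days=d.weekday())  # Monday of that week
        weeks[week_start]["commits"] += 1
        weeks[week_start]["changed"] += ins + dels
    return dict(sorted(weeks.items()))
```

Each week then carries the two core inputs (commit count and lines changed) used by the subscores below.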
Active-Week Windows (Holiday Safe)
Calendar windows break when someone takes time off. Instead, we build windows in terms of active weeks (weeks with commits above a per-profile threshold).
- K = 8 active weeks before dominant_at (“pre”)
- K = 8 active weeks on/after it (“post”)
- K2 = 8 more active weeks after that (“post2”)
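The windowing above can be sketched as a filter-then-slice: keep only active weeks, then take the last K before dominant_at and the next K and K2 on/after it. The window sizes and the zero default threshold are illustrative assumptions, not the service's exact parameters.

```python
def split_active_windows(weeks, dominant_at, threshold=0, k=8, k2=8):
    """Split a weekly commit series into pre/post/post2 windows of
    *active* weeks (commits above `threshold`), skipping idle weeks.

    `weeks`: list of (week_start, commits) sorted by week_start; the key
    can be any orderable value (a date or a simple week index).
    """
    active = [(w, c) for w, c in weeks if c > threshold]
    pre = [wc for wc in active if wc[0] < dominant_at][-k:]   # last K active weeks before
    after = [wc for wc in active if wc[0] >= dominant_at]
    return pre, after[:k], after[k:k + k2]                    # "pre", "post", "post2"
```

Because idle weeks are skipped before slicing, a vacation simply stretches the calendar span of a window without shrinking its sample size (the Confidence term below penalizes very large spans instead).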
Subscores
We combine several subscores, each in the range 0–1. Core signals use log transforms (log1p) and robust statistics (median and MAD) to resist outliers.
- Strength: effect size of the pre → post change (commits + changed, and entropy when available)
- Alignment: how close the strongest local “jump” is to dominant_at, down-weighted when the best jump is near-zero
- Persistence: how often weeks in post2 remain uplifted vs the pre baseline
- Breadth (optional): whether technology entropy increases
- Confidence: reduces scores when evidence is thin (low volume or a large calendar span to collect the active weeks)
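As a concrete example of the robust-statistics approach, the Strength subscore can be sketched as a median shift on the log scale, scaled by the MAD of the pre baseline. The 1.4826 consistency constant is the standard normal-scale factor for MAD; the cap of 3 used to squash the effect size into 0–1 is an assumption, not the service's published constant.

```python
import math
from statistics import median

def robust_effect_size(pre, post):
    """Robust pre -> post effect size for a weekly series (e.g. commits).

    Works on log1p-transformed values; the shift of the median is divided
    by the MAD of the pre baseline, then squashed into 0..1.
    """
    lp = [math.log1p(x) for x in pre]
    lq = [math.log1p(x) for x in post]
    m = median(lp)
    mad = median(abs(x - m) for x in lp) or 1e-9   # guard against zero MAD
    d = (median(lq) - m) / (1.4826 * mad)          # robust standardized shift
    return max(0.0, min(1.0, d / 3.0))             # assumed cap of 3 "robust sigmas"
```

Using medians and MAD means a single anomalous week (a huge vendored-code commit, say) barely moves the score, whereas a mean/standard-deviation version would be dominated by it.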
Final Score
We compute a weighted sum and then apply confidence:
raw = 0.45*Strength + 0.20*Alignment + 0.25*Persistence + 0.10*Breadth
rating = round(100 * Confidence * raw)
If breadth inputs are unavailable, the breadth term is dropped and the remaining weights are renormalized.
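The combination step, including the renormalization when breadth is missing, can be sketched directly from the formulas above:

```python
def adoption_rating(strength, alignment, persistence, confidence, breadth=None):
    """Weighted sum of subscores (each 0..1), scaled by confidence.

    If `breadth` is None, its weight is dropped and the remaining
    weights are renormalized to sum to 1, matching the stated rule.
    """
    weights = {"strength": 0.45, "alignment": 0.20,
               "persistence": 0.25, "breadth": 0.10}
    parts = {"strength": strength, "alignment": alignment,
             "persistence": persistence}
    if breadth is not None:
        parts["breadth"] = breadth
    total_w = sum(weights[k] for k in parts)
    raw = sum(weights[k] * parts[k] for k in parts) / total_w
    return round(100 * confidence * raw)
```

For example, subscores of 0.8 / 0.5 / 0.6 with confidence 0.9 and no breadth data renormalize over a total weight of 0.9 and yield a rating of 61.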
Accuracy Disclaimer
All metrics are observational and based on user-declared inflection points. The service makes no claims of causality or performance guarantees. Correlation does not imply causation. Many factors affect developer productivity beyond LLM usage.