Every AI-generated insight on CaliforniaCourtIntel — from ruling tendency summaries to Case Outcome Predictions — displays a confidence score expressed as a percentage. Understanding this number helps you decide how much weight to give the AI's analysis.
What the score represents. The confidence score reflects how much verified data the AI had available when generating the insight. It is not a probability that the AI is correct — it is a measure of data richness. A 90% confidence score means the AI had substantial, consistent data to work with. A 45% score means the underlying dataset is thin, too recent to establish a pattern, or inconsistent.
How it is calculated. Confidence is derived from three factors: (1) Volume — the number of rulings, observations, and data points available for this judge and motion type. (2) Recency — data older than 18 months is weighted less heavily because judicial behavior can shift over time. (3) Consistency — if the available data shows highly variable outcomes, the confidence score is reduced to reflect that unpredictability.
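The three factors above can be sketched as a simple scoring function. This is a minimal illustration only: the specific formulas, the 50-point volume cap, the half-weight for stale data, and the equal weighting of the three factors are all assumptions for the sake of example, not the platform's published formula.

```python
def confidence_score(n_points, ages_months, outcomes):
    """Illustrative confidence score from volume, recency, and consistency.

    All weights and caps below are hypothetical, chosen only to show how
    the three documented factors could combine into one percentage.
    """
    # Volume: rises with the number of data points, saturating at an
    # assumed cap of 50 rulings/observations.
    volume = min(n_points / 50.0, 1.0)

    # Recency: data older than 18 months counts at half weight
    # (the 18-month cutoff is from the documentation; the 0.5 weight
    # is an assumption).
    weights = [1.0 if age <= 18 else 0.5 for age in ages_months]
    recency = sum(weights) / len(weights) if weights else 0.0

    # Consistency: share of the majority outcome; highly variable
    # results (e.g. a 50/50 split) pull the score down.
    if outcomes:
        majority = max(outcomes.count(o) for o in set(outcomes))
        consistency = majority / len(outcomes)
    else:
        consistency = 0.0

    # Equal weighting of the three factors, expressed as a percentage.
    return round(100 * (volume + recency + consistency) / 3)
```

For example, 50 recent rulings that all went the same way would score 100 under this sketch, while 10 stale rulings split evenly between grant and deny would score 40.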
When to be cautious. Treat any insight with a confidence score below 60% as directional rather than definitive. Low scores commonly appear for: newly appointed judges with limited ruling history, judges in low-volume courthouses, motion types that rarely appear before a given judge, and courts whose public portals do not publish tentative rulings.
For low-confidence judges, supplement the AI analysis with direct attorney observations, court clerk inquiries, and your own prior experience. Use the Add Observation feature to contribute data that improves future confidence scores for the whole community.
The confidence indicator appears as a colored badge — green (80%+), yellow (60–79%), and red (below 60%) — on every AI-generated panel in a judge profile and on every Outcome Prediction result.
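The badge thresholds above map directly to a small lookup. The thresholds come from the documentation; the function name is just for illustration.

```python
def badge_color(confidence_pct):
    """Map a confidence percentage to its badge color using the
    documented cutoffs: green at 80%+, yellow at 60-79%, red below 60%."""
    if confidence_pct >= 80:
        return "green"
    if confidence_pct >= 60:
        return "yellow"
    return "red"
```

Note that 60% is the boundary between yellow and red, which matches the guidance above to treat anything below 60% as directional rather than definitive.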