Current Methodology: V3 (February 2026)

Built Different

Our erosive scoring model penalizes lies hidden among truths: no more "truthwashing," where a few false claims hide behind a 95% accuracy rate.

  • 100%: starting credibility
  • 50%: weight for unverifiable claims
  • Coverage: indicator shown when <50% of claims are verified

How It Works

Content flows through our five-stage pipeline, from raw media to credibility score.

  1. Content: video, article, or speech
  2. Extract: pull claims, filter opinions
  3. Verify: true, false, or mixed
  4. Score: erosive model
  5. Credibility: host-only, versioned (e.g., 85%)
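
Read as code, the pipeline might look like the interface below. This is a minimal sketch inferred from the stage descriptions above; every type and method name is an assumption, not our actual API.

```typescript
// Illustrative signatures for the five pipeline stages. All names here are
// assumptions sketched from the stage descriptions above, not a real API.
type Verdict = "true" | "false" | "mixed" | "unverifiable";

interface Claim {
  text: string;
  verdict?: Verdict;
}

interface CredibilityReport {
  score: number;              // e.g., 0.85
  methodologyVersion: string; // scores are versioned (e.g., "V3")
}

interface Pipeline {
  // 1. Content: ingest video, article, or speech; produce a transcript
  ingest(source: { kind: "video" | "article" | "speech"; url: string }): Promise<string>;
  // 2. Extract: pull candidate claims, filtering opinions out
  extract(transcript: string): Promise<Claim[]>;
  // 3. Verify: label each claim true, false, or mixed (or unverifiable)
  verify(claims: Claim[]): Promise<Claim[]>;
  // 4. Score: apply the erosive model (see "How Scoring Works" below)
  score(claims: Claim[]): number;
  // 5. Credibility: emit the host-only, versioned score
  report(score: number): CredibilityReport;
}
```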

Key insight: We only score claims made by the host of the content. Guest statements and clips are tracked separately so hosts aren't penalized for what others say.
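
As a sketch, host-only attribution could be as simple as partitioning claims by speaker. This assumes each extracted claim carries a speaker label; the field and function names are illustrative.

```typescript
// Sketch of host-only attribution, assuming each extracted claim carries a
// speaker label (field and function names are illustrative).
interface AttributedClaim {
  text: string;
  speaker: "host" | "guest" | "clip";
}

// Only host claims feed the credibility score; guest and clip claims are
// kept in a separate bucket so hosts aren't penalized for others' statements.
function partitionBySpeaker(claims: AttributedClaim[]) {
  return {
    scored: claims.filter((c) => c.speaker === "host"),
    trackedSeparately: claims.filter((c) => c.speaker !== "host"),
  };
}
```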

How Scoring Works

V3 includes unverifiable claims in scoring and shows coverage indicators when data is limited.

  • True claims: 100% (full credit, 1.0 each)
  • Mixed claims: 50% (half credit, 0.5 each)
  • Unverifiable: 50% (half credit, 0.5 each)
  • False claims: 0% (no credit, 0.0 each)

Unverifiable = Neutral

Claims we can't verify count as 0.5. Not penalized, but no full credit either.

📊 Coverage Indicator

When <50% of claims are verified, we show "Based on X of Y verified claims".

🎯 Honest Uncertainty

100% credibility requires actual verified true claims, not just "nothing wrong found".

Example: 1 true claim, 0 false claims, 5 unverifiable claims

  • V2 (old): 100%. With unverifiable claims excluded, 1 true / 1 verified = 100% (misleading).
  • V3 (current): 58%. (1 + 0.5 × 5) / 6 ≈ 58%, shown with a "low coverage" indicator.

V3 prevents a "100% credibility" score on content where we simply couldn't verify most claims.
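
As a worked sketch, the V3 formula and coverage check could be implemented as below. Counter names follow the formulas in the version history; the function itself is illustrative, and the production model also applies the erosive and anti-sheltering adjustments described under V2, which are not shown here.

```typescript
// Minimal sketch of the V3 formula and coverage indicator. Counter names
// follow the version-history formulas; everything else is illustrative.
interface ClaimCounts {
  trueCount: number;         // weight 1.0
  mixedCount: number;        // weight 0.5
  unverifiableCount: number; // weight 0.5 (neutral)
  falseCount: number;        // weight 0.0
}

function scoreV3(c: ClaimCounts): { score: number; lowCoverage: boolean } {
  const total = c.trueCount + c.mixedCount + c.unverifiableCount + c.falseCount;
  if (total === 0) return { score: 0, lowCoverage: true }; // "no_claims" case
  const weighted = c.trueCount + 0.5 * c.mixedCount + 0.5 * c.unverifiableCount;
  const verified = c.trueCount + c.mixedCount + c.falseCount;
  return {
    score: weighted / total,
    // "Low coverage" indicator when <50% of claims are verified:
    lowCoverage: verified / total < 0.5,
  };
}

// The example above: 1 true, 0 mixed, 5 unverifiable, 0 false
// (1 + 0.5 * 5) / 6 ≈ 0.583, flagged as low coverage (1 of 6 verified)
scoreV3({ trueCount: 1, mixedCount: 0, unverifiableCount: 5, falseCount: 0 });
```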

What We Check

We extract verifiable claims and filter out opinions, predictions, and hyperbole.

👤 Biographical

Claims about people's positions, roles, or biographical facts

Examples: "Chuck Schumer is a US Senator" • "Kristi Noem is DHS Secretary"

Verification: Wikipedia, Congressional database

📊 Statistical

Numerical claims about demographics, economics, or finance

Examples: "Unemployment is 4.3%" • "California has 39M people"

Verification: Census, BLS, FRED, FEC APIs

📅 Event

Time-bound claims about specific events or occurrences

Examples: "The shutdown lasted 20 days" • "Hurricane Ian hit Florida in 2022"

Verification: NewsAPI, GDELT archive

💬 Statement

General statements or broad factual assertions

Examples: "Climate change is caused by humans" • "Vaccines are safe"

Verification: Fact-checker aggregation, Google Grounding
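
Taken together, the four claim types route to the verification sources listed above. A sketch of that mapping, where the type names and structure are assumptions rather than our actual dispatcher:

```typescript
// Illustrative mapping from claim type to the verification sources listed
// above; the type names and structure are assumptions, not our dispatcher.
type ClaimType = "biographical" | "statistical" | "event" | "statement";

const VERIFICATION_SOURCES: Record<ClaimType, string[]> = {
  biographical: ["Wikipedia", "U.S. Congress Database"],
  statistical: ["Census", "BLS", "FRED", "FEC"],
  event: ["NewsAPI", "GDELT"],
  statement: ["Fact-checker aggregation", "Google Grounding"],
};
```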

Checkability Filtering

Not all statements are fact-checkable. We classify and filter accordingly:

  • VERIFIABLE: kept
  • OPINION: filtered
  • PREDICTION: filtered
  • HYPERBOLE: filtered
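
In code, the filtering step reduces to keeping only VERIFIABLE claims. This is a minimal sketch; classification itself happens upstream, and all names here are illustrative.

```typescript
// Sketch of checkability filtering; classification happens upstream, and
// these names are illustrative. Only VERIFIABLE claims move on.
type Checkability = "VERIFIABLE" | "OPINION" | "PREDICTION" | "HYPERBOLE";

interface CandidateClaim {
  text: string;
  checkability: Checkability;
}

function filterCheckable(candidates: CandidateClaim[]): CandidateClaim[] {
  // Filtered claims are still logged for auditability (see the
  // transparency commitments below).
  return candidates.filter((c) => c.checkability === "VERIFIABLE");
}
```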

Our Sources

We prioritize authoritative sources based on claim type and credibility tier.

Government APIs

PRIMARY
  • U.S. Census Bureau

    Population, demographics, income

  • Bureau of Labor Statistics

    Unemployment, CPI/inflation

  • Federal Reserve (FRED)

    GDP, interest rates, 800k+ series

  • Federal Election Commission

    Campaign finance, donations

Knowledge Bases

TIER 1
  • Wikipedia

    Biographical info with freshness checking

  • U.S. Congress Database

    Current positions, affiliations

  • Google Grounding

    Real-time web data with attribution

Fact-Checker Network

TIER 2
  • Snopes

    Urban legends, misinformation

  • PolitiFact

    Political claims, Truth-O-Meter

  • FactCheck.org

    Policy claims, campaign statements

  • AP / Reuters Fact Check

    Breaking news verification

News Archives

TIER 2
  • NewsAPI

    Recent news with temporal filtering

  • GDELT

    40+ year global event archive

  • Brave Search

    Fact-checker domain prioritization

Source Credibility Tiers

  • Primary Tier (1.00): official government APIs
  • Tier 1 (0.80+): HIGH factuality + CENTER bias (AP, Reuters, BBC)
  • Tier 2 (0.60+): HIGH factuality, any bias (fact-checkers, quality news)
  • Tier 3 (<0.60): MOSTLY FACTUAL or lower, known bias
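
The tiers above can be read as a weight lookup. In this sketch, the 0.5 value for Tier 3 is an assumption, since this page only specifies "<0.60" for that tier.

```typescript
// The tier weights above, sketched as a lookup table. The 0.5 value for
// Tier 3 is an assumption; the page only specifies "<0.60" for that tier.
type SourceTier = "PRIMARY" | "TIER_1" | "TIER_2" | "TIER_3";

const TIER_WEIGHT: Record<SourceTier, number> = {
  PRIMARY: 1.0, // official government APIs
  TIER_1: 0.8,  // HIGH factuality + CENTER bias (AP, Reuters, BBC)
  TIER_2: 0.6,  // HIGH factuality, any bias (fact-checkers, quality news)
  TIER_3: 0.5,  // MOSTLY FACTUAL or lower, known bias (<0.60)
};
```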

Version History

Our methodology evolves based on results. All changes are documented for transparency.

V3: February 2026 — Current (LIVE)

Unverifiable Claims + Coverage Indicators

  • Unverifiable claims now count in scoring with 0.5 weight (neutral impact)
  • Formula: (true + 0.5 × mixed + 0.5 × unverifiable) / totalScoredClaims
  • "Low coverage" indicator when <50% of claims are verified
  • Prevents misleading 100% scores from unverifiable-only content
  • Verifiability status tracked: verified, low_coverage, pending, no_claims
V2: January 2026 — February 2026 (DEPRECATED)

Checkability Filtering + Erosive Scoring

  • Claim extraction with checkability classification
  • Filters opinions, predictions, hyperbole before verification
  • Erosive model with anti-sheltering multiplier
  • 85% hard ceiling with any false claim
  • Unverifiable claims excluded from scoring (could show misleading 100%)
V1: Launch — December 2025 (DEPRECATED)

Symmetric Scoring

  • Basic claim extraction without checkability filtering
  • Formula: (trueCount + 0.5 × mixedCount) / verifiedCount
  • All claims weighted equally
  • No anti-sheltering protection
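
For contrast with scoreV3 above, the V1 formula sketched in the same style:

```typescript
// V1's deprecated symmetric formula, sketched for contrast with scoreV3.
// Dividing by verifiedCount (true + mixed + false) ignores unverifiable
// claims entirely, which is what allowed misleading 100% scores.
function scoreV1(trueCount: number, mixedCount: number, verifiedCount: number): number {
  return verifiedCount === 0 ? 0 : (trueCount + 0.5 * mixedCount) / verifiedCount;
}
```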

Our Commitment to Transparency

  • All scoring methodology changes are documented with version history
  • Content is scored using the methodology version active at verification time
  • The scoring formulas and thresholds are publicly documented on this page
  • Claims filtered during extraction are logged for auditability
  • We continuously evaluate and improve our methodology based on results

Technical Details

For those who want to verify our methodology.