About & Methodology
DeepSyte is a nonpartisan civic accountability tool. It helps you follow the bills moving through Congress that actually line up with — or conflict with — the things you care about. This page explains where the data comes from, how we analyze it, and what we deliberately don't do.
About the name
DeepSyte names a particular kind of seeing: past the rehearsed surface of politics, into what the record actually contains — the bill text, the roll-call vote, the disclosure form. There's almost always a gap between the version on the surface and the version on the record, and clarity comes from reading both.
Politics runs on that gap. A press release says one thing; a vote says another. DeepSyte is built to read both — and surface where they don't match. The tagline says it directly: Read the record, not the rhetoric.
What this is
A free website that shows you U.S. federal bills and, if you take our short values quiz, ranks them by how closely each bill's policy effects line up with the answers you gave. If you tell us where you live, we also show how your senators and House member actually voted — with a chip on each vote saying whether that vote aligns with or conflicts with your answers.
DeepSyte is not affiliated with any campaign, party, PAC, or advocacy group. We don't endorse candidates or positions, and we don't take money from anyone who does.
Data sources
- Bill metadata & text: the official Congress.gov API (bill titles, statuses, introduced dates, last actions, full text URLs).
- Summaries: when the Congressional Research Service has published a plain-English summary, we use that verbatim. When CRS hasn't yet, we generate an AI summary from the bill text and mark it clearly as AI-generated.
- Roll-call votes: the House Clerk's XML feed and the Senate XML feed. Each recorded vote — including procedural, cloture, amendment, and final-passage votes — is stored with its type preserved.
- Member directory: the Congress.gov member list, re-validated on each sync so retired or replaced members drop out of "My reps" automatically.
- Member photos: the public-domain @unitedstates headshot CDN.
Methodology: bill → effects → match score
Most bills don't announce what they do in their title. "A bill to amend section 4(b) of…" doesn't tell you whether it tightens or loosens the policy it touches. So we do a structured analysis of each substantive bill:
- Extract policy effects. A language model reads the bill summary and text and, for each of our 16 policy topics, decides whether the bill supports that topic, opposes it, or leaves it untouched. Each effect comes with a confidence score.
- Collect your stances. The values quiz asks one or two questions per topic. You answer support / oppose / unsure / skip.
- Score the overlap. For every question where the bill has an effect and you gave a support or oppose answer, we check whether the effect matches your stance. A match adds the effect's confidence to the bill's agreement total; a mismatch adds it to the disagreement total. The score is agreement minus disagreement.
- Show you the counts. On each bill card you see how many topics agree vs. conflict. Hover the match badge to see which specific topics drove the result.
Ceremonial, commemorative, and renaming bills have no policy effects and produce no match score — they show up in the feed but don't claim to match or conflict with anything. You can hide them with the "Hide procedural bills" filter.
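The scoring step above can be sketched in a few lines. This is a minimal illustration, not DeepSyte's production code: the names `PolicyEffect` and `match_score` and the placeholder topics `topic_a` / `topic_b` are ours, and real topics, weights, and data shapes may differ.

```python
from dataclasses import dataclass

@dataclass
class PolicyEffect:
    topic: str
    direction: str     # "support" or "oppose"
    confidence: float  # model confidence, 0.0 to 1.0

def match_score(effects, stances):
    """Score a bill against a user's quiz answers.

    `stances` maps topic -> "support" or "oppose"; topics the user
    skipped or answered "unsure" on are simply absent from the map.
    """
    agreement = 0.0
    disagreement = 0.0
    for effect in effects:
        stance = stances.get(effect.topic)
        if stance is None:
            continue  # no answer on this topic: contributes nothing
        if stance == effect.direction:
            agreement += effect.confidence
        else:
            disagreement += effect.confidence
    return agreement - disagreement

# A bill that strongly supports one topic and weakly opposes another:
effects = [
    PolicyEffect("topic_a", "support", 0.9),
    PolicyEffect("topic_b", "oppose", 0.6),
]
user = {"topic_a": "support", "topic_b": "support"}
print(round(match_score(effects, user), 2))  # prints 0.3
```

Note how a low-confidence effect moves the score less than a high-confidence one, and how skipped questions drop out entirely, which is why a ceremonial bill with no effects produces no score at all.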
How rep alignment is computed
When your rep casts a recorded vote on a bill we've analyzed, we translate their vote into a stance on each policy topic the bill touches:
- Yea on a bill that supports a topic → rep supports the topic
- Yea on a bill that opposes a topic → rep opposes the topic
- Nay on a bill that supports a topic → rep opposes the topic
- Nay on a bill that opposes a topic → rep supports the topic
We compare that stance to your quiz answer on the same topic, weight by confidence, and use the sign of the net total to label the vote as aligning with, conflicting with, or mixed relative to your views. Only passage votes get an alignment chip — procedural, cloture, and amendment votes can legitimately diverge from a rep's stance on the full bill, so we don't score them, but we do explain what each one meant legislatively.
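The translation table and net-sign rule above can be sketched as follows. This is an illustrative sketch under our own naming assumptions (`rep_stance`, `vote_alignment`, tuple-encoded effects), not DeepSyte's actual implementation.

```python
# Nay inverts the bill's direction on each topic; Yea adopts it.
FLIP = {"support": "oppose", "oppose": "support"}

def rep_stance(vote, effect_direction):
    """Translate a recorded vote into a stance on one topic."""
    return effect_direction if vote == "Yea" else FLIP[effect_direction]

def vote_alignment(vote, effects, user_stances):
    """effects: list of (topic, direction, confidence) tuples."""
    net = 0.0
    for topic, direction, confidence in effects:
        user = user_stances.get(topic)
        if user is None:
            continue  # user skipped this topic
        stance = rep_stance(vote, direction)
        net += confidence if stance == user else -confidence
    if net > 0:
        return "aligns"
    if net < 0:
        return "conflicts"
    return "mixed"

effects = [("topic_a", "support", 0.8), ("topic_b", "oppose", 0.5)]
user = {"topic_a": "support", "topic_b": "support"}
print(vote_alignment("Yea", effects, user))  # prints aligns
print(vote_alignment("Nay", effects, user))  # prints conflicts
```

The symmetry matters: flipping the vote flips every per-topic contribution, so a Yea that aligns with you implies the corresponding Nay would conflict, and vice versa.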
Nonpartisan stance
Every moving part in the match system is symmetric. Our topics are defined in terms of policy positions, not parties. The effect extractor doesn't know which party sponsored the bill. Your quiz and your reps' votes are run through the same scoring function in the same direction. A bill that scores high for one user can score low for another — that's the point.
We don't rate reps on a single left-right axis, and we don't publish aggregate "scores" of reps. Each alignment chip is personal to the user viewing it.
Known limits
- AI summaries can be wrong. We label them as AI-generated and link to the full bill text. Always verify against primary sources before acting on anything.
- Not every bill is analyzed yet. Analysis runs in batches; brand-new bills may show up in the feed before we've extracted effects. Those bills won't contribute to your match score until analysis completes.
- 16 topics is not everything. A bill about a topic we don't cover will show zero matches even if you care about it. We expand the topic set as the tool matures.
- Roll-call coverage. We ingest only the current Congress; historical votes from prior Congresses aren't loaded.
- Confidence, not certainty. Low-confidence effects are still counted but weighted less. A score is a starting point for your own reading, not a verdict.
Spotted something wrong? A misclassified effect, a missing vote, a summary that doesn't match the bill? Let us know — we read every piece of feedback, and corrections improve the analyzer for everyone.
Ready to try it? Take the values quiz →