Review Methodology

How we test, score, and describe the privacy and security tools we cover. Last updated: 2026-05-02.

Quick answer

We score each product on six pillars: privacy posture, security posture, transparency, performance, usability, and price fairness. Every claim we make is tied back to a primary source: vendor docs, audits, RFCs, or reproducible tests we ran ourselves.

The six pillars

1. Privacy posture

What is collected, what is logged, what jurisdiction governs that data, and how independently those claims have been verified. Independent audits carry more weight than self-attestations.

2. Security posture

Cryptography, protocols, kill-switch behavior, leak prevention (DNS, IPv6, WebRTC), update cadence, vulnerability disclosure programs, and breach history.

3. Transparency

Public ownership, transparency reports, source-available components, documented threat model, and willingness to publish negative findings.

4. Performance

Throughput across multiple regions, latency overhead, reconnect speed, and server-network breadth, all measured against a vendor-free baseline.

5. Usability

Setup, default behavior, platform coverage, multi-device support, and the language used in error states. Privacy software with confusing defaults isn't safe.

6. Price fairness

Headline price versus renewal price, refund policy, honesty of multi-year discounts, and the size of the gap between the most heavily marketed tier and the most honest one.

How we score

Each pillar gets a 0–10 score. We weight privacy and security at 25% each, transparency at 20%, and performance, usability, and price at 10% each. The weighted total becomes the published score, rounded to one decimal.
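
As a worked illustration of that arithmetic, the sketch below computes a published score in Python. The pillar keys and the example scores are placeholders of our choosing, not figures from any published review.

```python
# Minimal sketch of the weighted-score calculation described above.
# Pillar keys and example scores are illustrative only.

PILLAR_WEIGHTS = {
    "privacy": 0.25,
    "security": 0.25,
    "transparency": 0.20,
    "performance": 0.10,
    "usability": 0.10,
    "price_fairness": 0.10,
}

def published_score(pillar_scores: dict[str, float]) -> float:
    """Weighted total of the six 0-10 pillar scores, rounded to one decimal."""
    total = sum(weight * pillar_scores[pillar]
                for pillar, weight in PILLAR_WEIGHTS.items())
    return round(total, 1)

# Example: strong privacy and security, weaker usability and price fairness.
print(published_score({
    "privacy": 9.0,
    "security": 9.0,
    "transparency": 8.0,
    "performance": 7.0,
    "usability": 7.0,
    "price_fairness": 6.0,
}))  # 0.25*9 + 0.25*9 + 0.2*8 + 0.1*7 + 0.1*7 + 0.1*6 = 8.1
```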

We don't grade on a curve. A category can have several "9.0+" products, or none — the score reflects what we actually saw, not where the product fits in a ranking.

How we test

  • Hands-on installation on at least two operating systems per product.
  • Network testing with reproducible scripts: throughput measured on three continents, leak tests for DNS, IPv6, and WebRTC, and latency overhead at idle (a minimal sketch of one such check follows this list).
  • Documentation review: privacy policy, terms, audit reports, transparency reports, and threat-model write-ups.
  • Real support contact: we file a question with the vendor's support channel and time the response.
  • Account purchase: we buy our own subscriptions. We don't accept vendor-comped accounts for review purposes.
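
To make "reproducible scripts" concrete, here is a minimal, hedged sketch of one such check: timing TCP handshakes to a fixed endpoint, run once with the tunnel down and once with it up, to estimate idle latency overhead. The probe host, port, and sample count are illustrative assumptions, not the endpoints or parameters used in any particular review.

```python
# Illustrative latency-overhead probe: median TCP connect time to a fixed
# endpoint. Run it once without the tunnel and once with it connected, then
# compare the two medians. Host, port, and sample count are placeholders.

import socket
import statistics
import time

TARGET_HOST = "example.com"  # placeholder probe target
TARGET_PORT = 443
SAMPLES = 20

def connect_latency_ms(host: str, port: int) -> float:
    """Time a single TCP handshake to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    samples = [connect_latency_ms(TARGET_HOST, TARGET_PORT) for _ in range(SAMPLES)]
    print(f"median connect latency: {statistics.median(samples):.1f} ms "
          f"over {SAMPLES} samples")
```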

What we don't do

  • We don't accept payment, gifts, or free hardware in exchange for coverage.
  • We don't allow vendors to preview reviews before publication.
  • We don't grade products against each other on artificial benchmarks designed to flatter sponsors.
  • We don't publish "best of" lists where every option happens to pay us.

When something changes

Privacy tools change quickly. When a product's audit, ownership, jurisdiction, or default behavior changes materially, we re-score and update the article — not silently. The change appears at the top of the page with a date and a one-line summary of what moved.

Conflicts of interest

Some links on the site are affiliate links. They're disclosed inline. Affiliate revenue is monitored monthly to confirm it does not correlate with editorial decisions; that data is reviewed by an editor who is not paid on commercial performance.

Reproducibility

Wherever possible, we publish the test commands and configurations we used so a reader can reproduce them. Privacy claims that can't be reproduced from public artifacts are flagged as "vendor-attested" rather than "verified."

Questions or disputes

If you believe a review is inaccurate, write to us with the page URL and the specific claim. Corrections that change a score are republished with the change visible. See the Editorial Policy for the broader framework.