How We Model Brand Revenue at Pulsse — The Full Methodology
Transparency issue: we walk through exactly how we estimate annual revenue for brands without public filings — from Amazon SKU pulls to DTC traffic modeling.
Every number on Pulsse is an estimate. We say that clearly in our terms of service and we will say it clearly here: we are not auditors, and the brands we cover do not share their books with us. But our estimates are not guesses either. They are models — built from real data signals, cross-checked against each other, and calibrated against the handful of brands that have disclosed their numbers publicly.
This issue is about transparency. Here is exactly how we build the revenue picture for a brand.
The Three Revenue Streams
Every consumer brand we track has three revenue channels: DTC (their own website), Amazon, and Retail (physical stores). We model each one separately, then sum them to get total annual revenue. The channel mix varies wildly — AG1 is roughly 70% DTC, Liquid Death is roughly 40% Amazon, and Four Sigmatic is increasingly retail-led. Getting the mix right matters as much as getting the total right.
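The channel-sum structure can be sketched in a few lines. This is an illustrative toy, not Pulsse's actual code, and the dollar figures are invented:

```python
def total_annual_revenue(dtc: float, amazon: float, retail: float) -> dict:
    """Sum the three modeled channels and report the channel mix."""
    total = dtc + amazon + retail
    mix = {ch: round(v / total, 2)
           for ch, v in [("dtc", dtc), ("amazon", amazon), ("retail", retail)]}
    return {"total": total, "mix": mix}

# e.g. a hypothetical brand modeled at $14M DTC, $4M Amazon, $2M retail
estimate = total_annual_revenue(14_000_000, 4_000_000, 2_000_000)
# estimate["mix"] -> {"dtc": 0.7, "amazon": 0.2, "retail": 0.1}
```

Reporting the mix alongside the total is the point: two brands with the same total but inverted DTC/retail splits are very different businesses.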
Amazon: The Most Precise Signal
Amazon is the easiest channel to model because the data is relatively observable. We use Jungle Scout to pull estimated monthly revenue and unit sales for every SKU a brand sells on Amazon. The process looks like this:
- We identify every ASIN associated with the brand — including sub-brands, co-branded products, and product lines that don't use the brand name in the title (this step is critical and easy to get wrong)
- We deduplicate the data — parent ASINs and child variation ASINs often report the same revenue, and counting them twice is a common error
- We verify against our internal ASIN map for brands where we have done manual research
- We apply a Jungle Scout accuracy adjustment — JS data tends to run approximately 8–12% high on bestseller-ranked SKUs
The output is a monthly Amazon revenue figure with a confidence interval. For brands with 10+ verified ASINs, our Amazon estimates are typically within ±15% of actual. For brands with 1–2 SKUs and fewer than 200 reviews, the error band widens to ±30%.
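The dedup-and-adjust steps above can be sketched as follows. The record shape, the parent/child dedup rule, and the use of the 10% midpoint of the 8–12% adjustment are assumptions for illustration, not Pulsse's production pipeline:

```python
def monthly_amazon_revenue(skus: list[dict]) -> float:
    """Aggregate Jungle Scout SKU estimates into one monthly figure."""
    seen_parents: set[str] = set()
    total = 0.0
    for sku in skus:
        # Deduplicate: a child variation ASIN often reports the same
        # revenue as its parent, so count each parent family once.
        parent = sku.get("parent_asin") or sku["asin"]
        if parent in seen_parents:
            continue
        seen_parents.add(parent)
        revenue = sku["js_monthly_revenue"]
        # JS tends to run ~8-12% high on bestseller-ranked SKUs;
        # apply the 10% midpoint as a downward adjustment (assumption).
        if sku.get("bestseller_ranked"):
            revenue *= 0.90
        total += revenue
    return total

skus = [
    {"asin": "B0PARENT01", "js_monthly_revenue": 50_000, "bestseller_ranked": True},
    {"asin": "B0CHILD001", "parent_asin": "B0PARENT01", "js_monthly_revenue": 50_000},
    {"asin": "B0OTHER001", "js_monthly_revenue": 20_000},
]
# parent family counted once (adjusted to 45,000) plus 20,000 = 65,000
```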
DTC: Traffic Times Conversion Times AOV
The DTC model is more complex because it chains three uncertain variables:
DTC Revenue = Monthly Visits × Conversion Rate × Average Order Value
We source visit data from SimilarWeb, which provides monthly unique visitor estimates for any domain with sufficient traffic (roughly 5,000+ monthly visits). SimilarWeb's accuracy improves significantly above 50,000 monthly visits — below that, the error bands are wide.
Conversion rate is the hardest variable. We do not have direct access to brand dashboards. Instead, we calibrate conversion rate estimates using three inputs: the brand's price point (higher AOV tends to mean lower CVR), the brand's traffic source mix (email-driven traffic converts much higher than paid social), and any public disclosures (some founders mention CVR in interviews or podcasts). We typically use a range of 1.2%–3.8% depending on these factors, with 1.8% as our default for brands with no additional signal.
We estimate AOV from the brand's product catalog: a weighted average of its top SKUs by Amazon sales rank, adjusted for subscription discounts where the brand offers them.
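The DTC chain is a direct translation of the formula above. The 1.8% default CVR comes from the text; the visit count and AOV below are invented for illustration:

```python
def dtc_annual_revenue(monthly_visits: int, aov: float,
                       conversion_rate: float = 0.018) -> float:
    """Monthly Visits x Conversion Rate x AOV, annualized (x12)."""
    return monthly_visits * conversion_rate * aov * 12

# e.g. 120,000 monthly SimilarWeb visits at the 1.8% default and a $60 AOV
estimate = dtc_annual_revenue(120_000, 60.0)
# -> $1,555,200 modeled annual DTC revenue
```

Because the model chains three uncertain variables, it is worth running it at both ends of the 1.2%–3.8% CVR range to see how wide the resulting band really is.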
Retail: The Least Precise Channel
Retail is the hardest to model accurately. We build it from door counts and revenue-per-door estimates. The inputs:
- Door counts from brand websites, press releases, investor updates, and LinkedIn posts where founders or investors mention distribution milestones
- Revenue-per-door benchmarks by retailer — we have built these from disclosed data across ~40 brands that have shared retail revenue per door publicly or in interviews
- Price point and velocity adjustments — a $45 supplement sells differently than a $4 beverage
Retail estimates carry the widest error bars. A brand with "5,000 doors" could mean 5,000 Walmart endcaps at $200/door/month or 5,000 independent natural grocery stores at $80/door/month. Those are very different numbers. When we are uncertain, we say so.
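The door-count model is best expressed as a range rather than a point estimate. This sketch uses the $80 and $200 per-door-per-month figures from the example above; both are illustrative benchmarks, not disclosed data:

```python
def retail_annual_revenue(doors: int, rev_per_door_low: float,
                          rev_per_door_high: float) -> tuple[float, float]:
    """Annualized (low, high) range from monthly revenue-per-door bounds."""
    return (doors * rev_per_door_low * 12, doors * rev_per_door_high * 12)

# "5,000 doors" spans a wide range depending on the retailer mix:
low, high = retail_annual_revenue(5_000, 80, 200)
# -> $4.8M to $12M in modeled annual retail revenue
```

A 2.5x spread between the low and high bounds is exactly why retail carries the widest error bars of the three channels.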
Cross-Checking the Model
The final step is a sanity check. We look at the ratio of Amazon to DTC to Retail and ask whether it makes sense given what we know about the brand's go-to-market strategy. We look at revenue per employee (for brands with disclosed headcount) and compare it to category benchmarks. And we look at funding history — a brand that raised $50M at a $250M valuation but has only $8M in modeled revenue is worth flagging.
We are not always right. But we are trying to be honest about what we know, what we are estimating, and how wide the error bars are. That is the commitment we make with every brand page on Pulsse.