Know what your engineers contribute upstream. Not what a dashboard guesses.
Canopy analyzes your team's open source contributions across GitHub, Jira, and mailing lists. Every contributor ranked by complexity, stewardship, and review depth. Every claim verified against source data.
The problem
Your OSPO tracks contributions by counting PRs. The actual story is in what dashboards leave out.
PR counts reward volume. Review counts reward rubber-stamping. Neither captures the reviewer who gatekeeps 352 PRs, the engineer whose stewardship keeps the build green, or the architectural influence that lives entirely in review comments.
| Dashboard says | What actually happened |
|---|---|
| Intel Kubernetes team: 2 engineers | Those 2 engineers rank #1 and #2 out of 1,653 contributors: 194 merged PRs, 2,717 review comments, and ownership of the entire DRA subsystem. |
| Databricks Spark team: 0 merged PRs | Spark uses rebase-and-close. Those 6 engineers committed 31 patches to master and shaped 89 of 163 PRs through review. They control what enters the codebase. |
| NVIDIA: 107 PRs from dims | 102 of 107 are stewardship: dependency updates, test fixes, cleanup. The invisible infrastructure that lets other companies' feature PRs build and test correctly. |
| Union.ai: 38% of Flyte PRs | But 53% of all reviews. The team gates more code than it writes. machichima averages 3.7 review comments per PR; the project average is under 1. |
What it does
Engineering intelligence that dashboards can't produce.
Three-view contributor ranking
Every engineer profiled across complexity (who solves the hardest problems), stewardship (who maintains codebase health through tests, CI, dependency updates), and review depth (comments per review, probing ratio). Three views capture three kinds of value that a single metric collapses.
Company-specific filtering
Filter any project's analysis to show only your company's engineers. See their complexity profiles, review relationships, mentorship pairs, and stewardship breakdowns. Know exactly what your team contributes upstream, backed by verifiable evidence.
PR complexity from review signal
Classifies every review comment as probing (reviewer uncertain), directing (reviewer knows the fix), or polishing (nits). Scores PRs by what reviewers worried about, not lines changed. Each project's complexity vocabulary emerges from what its reviewers actually debate.
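As a rough illustration only (Canopy's actual model and each project's learned vocabulary are not shown here), a minimal keyword heuristic for the three buckets might look like this; every keyword list below is an assumption for the sketch:

```python
import re

# Toy heuristic, NOT Canopy's classifier: bucket a review comment by
# surface cues. Keyword lists are illustrative placeholders.
POLISHING = re.compile(r"\b(nit|typo|spacing|style|formatting)\b", re.I)
PROBING = re.compile(r"\b(why|what if|not sure|wonder|could this break)\b", re.I)

def classify_comment(text: str) -> str:
    """Return 'polishing', 'probing', or 'directing' for one comment."""
    if POLISHING.search(text):
        return "polishing"
    if PROBING.search(text):
        return "probing"
    return "directing"  # prescriptive/imperative by default

def probing_ratio(comments: list[str]) -> float:
    """Share of probing comments: a crude proxy for PR complexity."""
    if not comments:
        return 0.0
    return sum(classify_comment(c) == "probing" for c in comments) / len(comments)
```

Scoring by what reviewers debate, rather than lines changed, is what lets a 10-line concurrency fix outrank a 2,000-line generated diff.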
Cross-source triangulation
Cross-references GitHub, Jira, and mailing lists. Matches contributors across systems. Surfaces the 79% of tickets that never became code, the binding votes that shaped direction, and the invisible architects who drive decisions without merging PRs.
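A toy sketch of the triangulation join, assuming each source exports records with an email field (the field names and the email-only matching are simplifying assumptions; real matching also needs display-name and alias heuristics):

```python
from collections import defaultdict

def triangulate(*sources):
    """Merge contributor records from multiple systems (e.g. GitHub,
    Jira, a mailing-list archive) into one profile per person, keyed
    by lowercased email. Shows the shape of the join only."""
    profiles = defaultdict(list)
    for records in sources:
        for rec in records:
            profiles[rec["email"].lower()].append(rec)
    return dict(profiles)
```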
Mentorship network detection
Review concentration data identifies structured mentorship: when one person reviews another 103 times, that's deliberate development, not random assignment. Named and quantified, not inferred from org charts.
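The detection step can be sketched as counting reviewer-author pairs and flagging outliers; a minimal version under stated assumptions (the threshold of 50 is an arbitrary placeholder, not Canopy's cutoff):

```python
from collections import Counter

def mentorship_pairs(reviews, threshold=50):
    """reviews: iterable of (reviewer, author) tuples, one per review.
    Returns pairs whose count is far above what random review
    assignment would produce. 'threshold' is a placeholder value."""
    counts = Counter(reviews)
    return {pair: n for pair, n in counts.items() if n >= threshold}
```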
Every claim mechanically verified
PR numbers, ticket references, contributor stats, and quoted text are checked against source data before the report ships. No hallucinations. The verification report shows exactly what was checked.
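Conceptually, each check is a lookup of a claimed figure against the raw source data; a simplified sketch of that verification loop (the data shapes here are assumptions, illustrated with the cloud-fan figure from the sample output below):

```python
def verify_claim(claim, source_stats):
    """claim: {"contributor": ..., "metric": ..., "value": ...}
    source_stats: contributor -> metric -> value, pulled from raw data.
    Returns (ok, observed) so the report can emit [ OK ]/[FAIL] lines."""
    observed = source_stats.get(claim["contributor"], {}).get(claim["metric"])
    return observed == claim["value"], observed
```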
Real output
Company-specific reports, built from public data
Not mockups. These are findings from actual company-specific reports generated for 11 companies across Kubernetes, Kafka, Spark, Ray, Airflow, and Flyte. Every claim cites a specific PR with a clickable link.
Intel / Kubernetes
2 engineers rank #1 and #2 out of 1,653 contributors by composite score. pohly: 194 merged PRs (1st), 2,717 review comments (1st), 126 unique contributors reviewed (1st). bart0sh: 199 newcomer reviews and a mentorship intensity of 211 reviews of a single contributor.
Databricks / Apache Spark
6 engineers authored 0 traditionally merged PRs but shaped 89 of 163 total PRs through review and committed 31 patches to master. cloud-fan redirected holdenk's entire PR #46143 design during review. 84% of merge authority concentrated in two people.
Anyscale / Ray
40 engineers across Ray and KubeRay. can-anyscale executed a 55-PR telemetry migration from OpenCensus to OpenTelemetry. andrew-anyscale's portfolio is 96% stewardship. harshit-anyscale and abrarsheikh have a 136/86 reciprocal review relationship in Serve.
What a dashboard would show vs. what actually happened
| Dashboard says | What actually happened |
|---|---|
| chia7712: 0 PRs, 0 lines | 347 reviews + 36 Jira tickets + 60 mailing list messages + 5 binding votes. The project's nervous system across all three sources. |
| Google K8s team: 8.6% of code | But 28-32% of the review layer. Google shapes Kubernetes more through architectural gatekeeping than code volume. |
| Astronomer: 12 Airflow contributors | 2.4% of contributors but 31.2% of all review comments. Review dominance proves project ownership. |
| dims (NVIDIA): 107 PRs merged | 102 are stewardship: dependency updates, test fixes, cleanup. The invisible infrastructure that lets Intel's DRA feature PRs build correctly. |
| Confluent: 10 Kafka contributors | A net-reviewer team: they review more code than they write. 23 binding mailing list votes. KIP-1271 coordinated across 3 engineers. |
Every claim mechanically verified
PR numbers, contributor stats, review comment quotes, and ticket references are checked against source data before publication.
[ OK ] PR #132706: DRA API graduation to GA (pohly)
[ OK ] cloud-fan: 76 review comments across 19 PRs
[ OK ] Quote "Should we also deal with the status update error?" found in machichima's comment
[ OK ] Anyscale: 40 engineers identified across Ray + KubeRay
... verified across 11 company reports
Coverage
13 projects. 5,000+ contributors. 11 company reports.
Free project-level reports are published permanently. Company-specific reports filter the same analysis down to your team's engineers only.
“You got it right.”
Built for OSPO leads, engineering leaders, and open source contributors who want to see what the data actually shows.
Get a report.
Request a project digest, a company-specific report, or nomination evidence for specific contributors. We'll generate it from public data and send it to you.
Engineering intelligence, not dashboards.