Why Your PMI Chapter Retention Rate Is Probably Wrong (and How to Fix It)
Picture a typical PMI chapter board meeting. It is the second Tuesday of the month, everyone has a coffee, and the VP of Membership is halfway through her report. She clicks to the big number on her slide: "Our member retention rate is 92 percent." The board nods. The President makes a note to mention it in the next PMI Global community call. Someone says it is the best number they have seen in years.
Four weeks later, the same VP opens the same dashboard to prepare for the next meeting. The number now reads 78 percent. Nothing dramatic has happened. No wave of cancellations. No billing crisis. Just one month of normal chapter life. She stares at it for a minute, refreshes the page, and starts drafting an email to the platform vendor asking what broke.
Nothing broke. The number was never really 92 percent in the first place, and it is probably not really 78 percent now either. What she is looking at is a rolling-window approximation, and it is sensitive to things that have very little to do with whether your members are actually renewing. If you have ever had this exact experience on your own board, this post is for you.
Key takeaway
Most chapter retention numbers come from rolling-window math against current member state, which lumps cohorts together and hides churn. Real retention math uses point-in-time monthly snapshots and asks "of the members whose memberships expired in March 2025, how many had renewed by May 2025?" Ask your platform whether it stores monthly snapshots. If it does not, your retention rate is an approximation.
Two Ways to Calculate Retention (Only One of Them Is Right)
Retention sounds like a simple concept. Out of the members you had, how many stayed? In practice, there are two very different ways to answer that question, and most chapter platforms default to the easier one.
The Rolling Window
The rolling-window method looks at your current member list and asks, roughly: "Of the people who were members twelve months ago, how many are still members today?" It sounds reasonable. The problem is that there is usually no actual record of who was a member twelve months ago. What the platform actually has is a current list of members with expiration dates on each record. So it back-calculates. It treats everyone whose expiration date falls within the past twelve months as lapsed, everyone whose expiration date is in the future as retained, and computes a ratio from those two groups.
That ratio is cheap to compute, refreshes instantly, and looks like a retention rate. It is not a retention rate. It is a snapshot of your current membership status, rearranged to resemble one.
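To make the back-calculation concrete, here is a minimal Python sketch of one plausible version of it. The member records and field names are hypothetical; real platforms vary in the details, but the shape is the same: everything is derived from current state.

```python
from datetime import date, timedelta

# Hypothetical member records as a chapter system stores them today:
# nothing but a current expiration date per member.
members = [
    {"id": 1, "expires": date(2026, 3, 31)},  # future -> counted as retained
    {"id": 2, "expires": date(2025, 1, 15)},  # lapsed within the past year
    {"id": 3, "expires": date(2023, 6, 30)},  # lapsed long ago -> excluded
]

today = date(2025, 6, 1)
window_start = today - timedelta(days=365)

# Back-calculate "members a year ago" from current state: anyone whose
# expiration falls after the window start was presumably a member then.
in_window = [m for m in members if m["expires"] > window_start]
retained = [m for m in in_window if m["expires"] > today]

rolling_rate = len(retained) / len(in_window)
print(f"Rolling-window 'retention': {rolling_rate:.0%}")  # 50%
```

Note that no historical roster is consulted anywhere: the whole number is reconstructed from today's expiration dates, which is exactly why it drifts as those dates change.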
The Cohort Method
The cohort method asks a different question, and it asks it one month at a time. "Of the 100 members whose memberships expired in March 2025, how many had a renewal recorded by May 2025?" That is a cohort: a specific group of people whose membership decisions all came due at the same time. You measure that group, and only that group, and you do it with a short grace window to catch late renewers. Then you do it again for April, and May, and June. Each month gets its own rate. You can average them, trend them, or look at them individually, but each one answers a real question about a real set of humans.
A Worked Example
Imagine a fictional chapter called PMI Midwest with exactly 1,000 members, evenly distributed across the year. Roughly 83 memberships expire every month. Through most of the year, about 70 of those members renew within a two-month grace window, for a steady renewal rate around 84 percent. Not great, not terrible, but steady.
Now suppose that in September, something unusual happens. Maybe a local employer stopped reimbursing dues. Maybe a popular speaker series ended. Only 50 of the 83 September members renew. That is a 60 percent cohort rate for September. A painful signal. Something changed, and the board needs to know.
Here is the uncomfortable part. If PMI Midwest is using a rolling-window calculation, that September shock is almost invisible. The 33 members who did not renew in September get absorbed into a denominator of roughly 1,000. The dashboard moves from about 84 percent retention to about 82 percent. A two-point dip is easy to write off as noise. The board never gets the signal. Six months later, when the trend finally shows up in the rolling number, the damage has compounded and nobody remembers what happened in September.
The cohort method would have flagged it in October, when the September number came back at 60 percent. That is the difference. It is not about decimal-point accuracy. It is about whether your retention number can actually see what is happening to your chapter.
Why Most Chapter Dashboards Use the Rolling Window
If the cohort method is better, why does almost every chapter dashboard show you the rolling one? Two reasons, and neither of them is a conspiracy.
First, it is easy. The data you need is right there in your chapter management system. Every member record has a current status and an expiration date. Subtract twelve months, compare, divide, done. No special schema, no history table, no extra syncing. A developer can build that query in an afternoon. Building the cohort version requires you to capture and store a snapshot of the full member roster every month so that months later you can look back at who was in it. That is more engineering, more storage, and more moving parts.
Second, the rolling window usually produces a friendlier number. Because it averages across twelve months of decisions, it hides short-term shocks. Board members see a steady 80-something percent, the vendor gets to show a respectable KPI in the product, and nobody complains. It is comfortable. It is also wrong in ways that matter.
Three things in particular trip up the rolling window:
- Cohort blending. Your January renewers and your July renewers are very different populations. Some chapters run big enrollment pushes in the fall. Some have corporate sponsors whose dues cycle lines up with a fiscal year. The rolling window averages all of that into a single headline number, so you cannot tell whether your problem is seasonal, structural, or concentrated in a specific group.
- Churn waves get diluted. As the worked example above shows, a bad month of cohort churn barely moves the rolling needle until months later, at which point the signal is muddied and late.
- One-time events distort everything. The February 2026 PMI Global membership model change is a good example. When PMI Global restructures chapter membership enrollment, you get a surge (or a dip, or a re-classification) that rolls through your member count in ways that have nothing to do with how your chapter is actually performing. Any retention number that is just looking at current membership status will swing wildly and will keep swinging for a year as that event works its way through the denominator.
None of this is the dashboard vendor being sneaky. It is the mathematical consequence of using current-state data to approximate a historical question. The method cannot help it.
How Cohort Retention Actually Works
To do this properly, your platform needs two things: a record of who was a member in each historical month, and a consistent rule for what counts as a renewal.
The record part is called a point-in-time snapshot. Once a month, the system captures a full copy of the member roster as it looks on that day: who is active, what their PMI number is, when their membership expires, what their status is. That snapshot is frozen and stored. Next month, you take another one. Over time you build up a series of roster photographs. You can look at any one of them and ask, truthfully, "Who was a member of this chapter in March 2025?"
The rule part is the cohort definition. A reasonable one: for the cohort of members whose memberships expired in month X, count them as renewed if they appear as active in any snapshot from X through X + 2 months. That two-month grace window is important. Members often renew late. PMI Global processing takes a few days. Some chapters have dues reminders that go out the week after expiry. If you cut the window too tight, you will flag honest renewers as churned. Two months is a reasonable default, and some chapters go longer.
Once you have both pieces, the math is straightforward. For each cohort month, divide the number of members who renewed within the grace window by the number who were in that cohort. You get a monthly cohort rate, a clean series you can trend, and a denominator that actually corresponds to a real group of real people. You can compare March 2025 to March 2024. You can spot a specific bad month. You can tell whether a programming change helped.
You cannot do any of that with a rolling-window number, because a rolling-window number is not made of cohorts. It is made of current-state noise.
What to Ask Your Platform Vendor
You do not need to rebuild your chapter dashboard from scratch to get value out of this post. What you need is a slightly sharper conversation with whoever provides your member analytics. Here is the single question worth asking:
"Does your retention number use point-in-time historical snapshots of our member roster, or does it back-calculate from our current member list?"
If the answer is the latter, or if the answer is some version of "I would have to check," your retention number is a rolling-window approximation and you should treat it as a rough mood indicator, not a metric you report with confidence. A few follow-ups that will tell you more:
- How far back does the historical snapshot data go? One year is the bare minimum for any kind of trend. Five or more years is ideal, because it lets you compare this year's March cohort to previous years and actually see whether the shift is normal seasonality or a real move.
- What grace window does the cohort math use? Two months is reasonable. Anything shorter than one month is too aggressive and will systematically understate retention. Anything longer than three can start to blur into the next year's cohort.
- Can you see the underlying rates per cohort month, or only the blended number? A single headline rate with no monthly breakdown is almost useless for programming decisions. You want to be able to point at a specific bad month and investigate it.
- How does the number handle the February 2026 PMI Global model change, or any similar one-time events? A cohort-based system will show a blip in one specific month and then return to normal. A rolling-window system will smear the event across an entire year of reporting.
If the vendor cannot answer these questions clearly, or if the answers boil down to "we compute it from the current member list," you now know what you are looking at. It does not mean your platform is bad. It means the retention number on the dashboard is not load-bearing, and you should not make board decisions as if it were.
How ChapterPulse Handles This
ChapterPulse writes a point-in-time snapshot of your full member roster as part of the nightly sync, refreshing the current month each night and rolling forward when the month closes. The result is one durable row per (chapter, month, member), stored alongside the current-state view so the cohort math has real historical data to work from rather than an approximation. The dashboard retention tile runs the cohort calculation by default. Chapters that have enough snapshot history get the cohort number; chapters that do not yet have a year of history fall back to the legacy rolling-window calculation so there is always a number to report, with the intention that it will become more accurate over time.
For chapters onboarding onto ChapterPulse, there is also a one-time historical backfill. If your chapter has ThoughtSpot access (and every PMI chapter does), ChapterPulse can pull nine or more years of historical monthly roster data and populate your cohort history in a single operation. That means on the day you onboard, you already have years of proper cohort retention history, not a blank slate you have to wait a year to fill in.
The conversational insights chat uses this same cohort data by default. When a board member types "what was our retention rate last year compared to this year" into the chat, it runs the query against the monthly snapshots and returns a cohort answer, not a back-calculation. The raw SQL is visible alongside the result, so you can see exactly which snapshot months the answer is built from.
Get a Retention Number You Can Actually Stand Behind
The gap between a 92 percent rolling retention number and the real cohort answer is not an academic quibble. It is the difference between a board that knows what is happening to its members and a board that finds out six months too late. If you are running a chapter on volunteer time, you do not have six months to spare.
ChapterPulse was built by former PMI chapter volunteers who got tired of reporting numbers they could not defend. If you want to see what proper cohort retention looks like with your own chapter data, including the historical backfill, we would be happy to walk you through it.