Evaluate Member Programs to Determine What to Cut and What to Keep

July 11, 2016 By: Sara Wood, MBA, CAE

Associations are notorious for letting poor-performing programs live on past their prime. To keep your association focused on the most valuable member programs and services, use this sample program evaluation tool to make your case to eliminate programs that aren't making the grade.

One of the biggest challenges a membership department or association overall faces is the question of "what can we stop doing?" Programs are often added, but—whether it's because of politics, sentimentality, or "we've always done it this way"—many activities stay on an operational plan well past their expiration date.

There are many ways to go about making your case for eliminating poorly performing programs, but sometimes data-driven arguments can help reach a breakthrough. The following sample methodology was created to deliver such a data-driven argument. It aims to take an accurate look at an association's programs and the value (or lack thereof) they provide the organization.

Hopefully, this tool will arm you with the data you need to finally stop doing what doesn't matter and start focusing on what does.

Most associations are comfortable and familiar with discussions of revenue versus expenses. This tool, however, evaluates five key components to determine where a program falls on the scale: revenue, expenses, overhead, value, and relevance to core functions. Your association may use slightly different terminology and different "buckets," but this discussion serves as an example framework for getting to the bottom of what you should keep and what you should eliminate.

Methodology

The following scoring structure was used for a sample set of association programs. Please note that a desirable score is high and an undesirable score is low. For example, a score of "0" for expenses indicates that the program is very expensive. Also, these values are for illustrative purposes only. Depending on the organization's overall budget and goals, the values assigned to the scale should be changed accordingly.

The point is that you must first create an empirical way to evaluate each category, one that removes personal or external biases toward the program. Note that in this tool, bonuses are given for items directly related to the core value; in this sample, that core value is growing membership. Conversely, if a program is actively losing money, it receives a deduction.

Rating Scales

Direct revenue scoring.

  • 5) more than $400,000
  • 4) $200,000 to $400,000
  • 3) $100,000 to $200,000
  • 2) $50,000 to $100,000
  • 1) $25,000 to $50,000
  • 0) less than $25,000

Direct expenses scoring.

  • 5) less than $5,000
  • 4) $5,000 to $10,000
  • 3) $10,000 to $25,000
  • 2) $25,000 to $75,000
  • 1) $75,000 to $125,000
  • 0) more than $125,000

Value scoring. This sample association has five different strategic categories of value for the organization. Depending upon the number of categories for the organization, this scale should be adjusted.

  • 5) The program aligns with five or more value categories.
  • 4) four categories
  • 3) three categories
  • 2) two categories
  • 1) one category
  • 0) zero categories

Overhead scoring. Mid-level manager hours are multiplied by a factor of 2, and senior management hours by a factor of 3.

  • 5) less than 10 hours per month
  • 4) 10 to 20 hours
  • 3) 20 to 50 hours
  • 2) 50 to 100 hours
  • 1) 100 to 200 hours
  • 0) more than 200 hours

Core function additions. An additional 5 points are added if the item directly supports the organization's core membership goals of recruitment and retention.

Revenue loss subtraction. If a program spends more money than it earns, it receives a deduction of 2 points.

Again, this sample scoring system is only an example. Values and scale determiners can be adjusted to fit the goals and needs of the organization. Regardless of the assigned values, the next step is to apply the scale to each program, which yields an overall score.
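For readers who want to automate the arithmetic, the sample scales above can be encoded in a short script. This is a minimal sketch using the illustrative thresholds from this article; the function and parameter names are my own, and the cut-offs would be swapped for your organization's values.

```python
def revenue_score(revenue):
    """Score direct revenue on the sample 0-5 scale (higher revenue scores higher)."""
    thresholds = [25_000, 50_000, 100_000, 200_000, 400_000]
    return sum(revenue >= t for t in thresholds)

def expense_score(expenses):
    """Score direct expenses on the sample 0-5 scale (lower spending scores higher)."""
    thresholds = [5_000, 10_000, 25_000, 75_000, 125_000]
    return 5 - sum(expenses >= t for t in thresholds)

def overhead_score(staff_hours, manager_hours, senior_hours):
    """Weight mid-level manager hours by 2 and senior management hours by 3,
    then score total monthly hours on the sample 0-5 scale."""
    weighted = staff_hours + 2 * manager_hours + 3 * senior_hours
    thresholds = [10, 20, 50, 100, 200]
    return 5 - sum(weighted >= t for t in thresholds)

def program_score(revenue, expenses, value_categories,
                  staff_hours=0, manager_hours=0, senior_hours=0,
                  supports_core_membership=False):
    """Cumulative score: the four scale scores, plus a 5-point core-function
    bonus, minus 2 points if the program loses money."""
    score = (revenue_score(revenue)
             + expense_score(expenses)
             + overhead_score(staff_hours, manager_hours, senior_hours)
             + min(value_categories, 5))
    if supports_core_membership:
        score += 5
    if expenses > revenue:
        score -= 2
    return score
```

As a worked example under these assumptions, a program with $450,000 in revenue, $100,000 in expenses, five value categories, 60 mid-level manager hours per month, and direct support for recruitment and retention scores 5 + 1 + 1 + 5 + 5 = 17, the same profile as the top row of the sample data in the next section.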

Practical Application

Following are several generic examples of how the scale can practically be applied. The lesson is that, as you apply your rating criteria, clear divisions between some programs and others will begin to emerge. The sample data set below has been sorted by highest value to lowest.

| Program/Activity | Direct revenue score | Direct expense score | Overhead score | Value score | Core function bonus | Loss deduction (-2 if program lost money) | Cumulative score |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Marketing and Membership | 5 | 1 | 1 | 5 | 5 | 0 | 17 |
| Sponsorship program | 4 | 4 | 5 | 3 | 0 | 0 | 16 |
| Credentialing program 1 | 5 | 1 | 2 | 2 | 5 | 0 | 15 |
| Professional development | 4 | 4 | 3 | 4 | 0 | 0 | 15 |
| New book series | 4 | 5 | 2 | 3 | 0 | 0 | 14 |
| Credentialing program 2 | 3 | 3 | 0 | 2 | 5 | 0 | 13 |
| Trade show 1 | 5 | 1 | 1 | 3 | 0 | 0 | 10 |
| Educational program 1 | 3 | 2 | 3 | 2 | 0 | 0 | 10 |
| Educational program 2 | 3 | 3 | 1 | 2 | 0 | 0 | 9 |
| Standing committee 1 | 0 | 5 | 2 | 3 | 0 | -2 | 8 |
| Spring event | 3 | 2 | 0 | 3 | 0 | 0 | 8 |
| Standing committee 2 | 0 | 3 | 1 | 4 | 0 | -2 | 6 |
| Standing committee 3 | 0 | 3 | 2 | 1 | 0 | -2 | 4 |
| External exhibitions | 1 | 2 | 0 | 1 | 0 | 0 | 4 |

Grading breakdown.

  • A: 14-17
  • B: 11-13
  • C: 8-10
  • D: 5-7
  • F: 0-4
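The breakdown above translates directly into a small helper. This is a sketch using the sample cut-offs; the boundaries would be adjusted along with your customized scales.

```python
def letter_grade(score):
    """Map a cumulative program score to the sample grading breakdown."""
    if score >= 14:
        return "A"
    if score >= 11:
        return "B"
    if score >= 8:
        return "C"
    if score >= 5:
        return "D"
    return "F"  # also catches scores pushed below zero by the loss deduction
```

For instance, the sponsorship program's cumulative score of 16 grades out as an A, while a score of 6 grades out as a D.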

Analysis

Now that the evaluation method has been applied to each sample program, it can be separated by category. The total score for what constitutes an A or B program will likely vary from organization to organization. However, the point is that clear groups emerge between what is a valuable activity and what is not. The next part of this discussion is just one way an association can go about reviewing the traits of what constitutes each grade and creating recommended actions.

Grade A programs. Core high-value programs will likely fall into this category. By grading a program with an A, the evaluation instrument has determined that the program is profitable, relevant to the values of the organization, and an appropriate use of staff time.

Recommendation: As profit centers for the organization that are also aligned with the strategy, resources should be used to grow these activities.

Grade B programs. Upon further examination of B grade programs, these are likely activities that are valuable but could use some revision. Looking at the sample data, for any one of these programs to become an A grade program, it needs to increase in revenue or decrease in expenses. The evaluation indicates that these programs are in line with the mission of the organization, but the profit margin is not where it needs to be.

Recommendation: To get these programs into the A grade category, evaluate ways to cut expenses or increase revenue. This category is not completely off the mark; however, if neither revenue nor expenses can be addressed, the program should be evaluated for long-term sustainability by leadership. While the programs in this section are consistent with the mission, if an increase in revenue or a decrease in expenses is impossible, a deeper discussion about the program should take place.

Grade C programs. Programs in this category are likely only somewhat consistent with the core values of the organization; however, these programs have either a low profit margin or none at all.

Recommendation: This category should receive serious evaluation for either restructure or elimination. To remain with the organization long term, both the monetary side and value side of these programs likely need more than minor changes. If only one or neither of these can be addressed, the program should be considered for elimination.

Grade D and F programs. The programs receiving a D or F grade had high costs, high staff time, low direct monetary return, or low value. While it is true that many association programs that do not bring in revenue can be extremely valuable to the organization in other ways, those that have low value and no return should receive deep scrutiny. This category can also highlight that the organization's staff is putting substantial time into programs that are minimally connected to the overall value of the organization.

Recommendation: Eliminate these programs based on cost, value, and time investment.

A Tool for Action

This tool has been designed to guide association professionals through a deliberate evaluation process. The key to its success is applying the metrics that fit each individual organization. The bottom line: once you determine your customized scaling and apply the tool, don't just have meaningful conversations about the results; take action and cut the activities that don't pass muster.

Sara Wood, MBA, CAE

Sara L. Wood, MBA, CAE, is director, membership and marketing, at the National Court Reporters Association in Reston, Virginia.