Risk Management at Scale, Part 4: Building a Framework


(If you missed the first three parts of Risk Management at Scale, read those first and then come back here.)

To briefly recap: we’ve determined which of our data points are qualitative and which are quantitative, we’ve developed ways to collect all the data (through human input and telemetry from your platform), and we’ve taken a stab at analyzing the resulting information.  Now for the fun part: building a framework to consistently interpret your data, cue you in to your customers’ risk, and create playbooks for your team (and/or software) to execute.  In our final chapter, we’ll dig deep into automation, change management and scalability.

Before jumping in - we have a question for you:  during your data analysis, did you review the data associated with current customers as well as customers who have churned?  The latter analysis will uncover some of the most valuable insights you have.  Don’t overlook the nuggets of information, trends and even absence of data for customers who are no longer around.  And if you haven’t designed a Post-Mortem process for your team yet, be sure to tune in for our future blog post regarding this very valuable data collection activity.

During this chapter, we’ll provide a sample structure that we use to build a weighted risk management framework around up to four quantitative data points and up to three qualitative data points.  Remember, good, automated, consistently updated data is the only way to keep your risk management framework consistent and proactive.  If your data is hand-entered (especially quantitative data), doesn’t flow in at a consistent cadence (preferably daily) or relies too heavily on human interpretation (is entirely qualitative), your ability to manage real risk across your customer base will be too slow and too inaccurate.  Strive for the ability to “set it and forget it” with quantitative data, and aim for consistent habits (see Part 5) with qualitative data.

Let’s start with qualitative data.  This is the data collected from your team members about how they think the customer is doing based on their interaction (or lack thereof) with the customer.  What does your current qualitative health score look like?  The complexity of this score often depends on the background of the team member who implemented it.  Sometimes it’s a single data point: Customer Health.  Other times there are multiple facets: Risk, Engagement, Value.  We’ve even seen a customer who had broken customer health down into five different health indicators across three different levels of customer persona (buyer, champion, day-to-day).  Needless to say, this health score, though comprehensive, was difficult to complete and even more harrowing to maintain.

We have also come across elaborate scoring systems such as 0-100 or other numeric scales (0-6 was used in one system, outlined below).  The problem with these systems is the gap between what the number means to the person entering it (your customer team member) and what it means to the person interpreting it.

  • Is this customer an 88, or are they a 72?  
  • What does it mean when you say one customer is a 91 and another customer is a 92?  Are they the same? Is there a real reason for the single point difference?  
  • If I have entered a customer as 33 because they are at extreme risk, but my colleague also entered their customer at 33 because they are just worried about their customer and only rate extreme risk as 10 or below, how do we prioritize these customers?  

That much granularity invites confusion. Similarly, if you scale it back to 0-6, how do you define those numbers?  

  • 0 = no relationship
  • 1 = bad
  • 2 = stressed
  • 3 = fair
  • 4 = good
  • 5 = happy
  • 6 = excellent

Which leads to further questions:

  • What’s the difference between “stressed” and “bad” or “stressed” and “fair”?  
  • What qualifies a customer as “happy” versus “excellent” or “good”?  

A health score with too many gradations or too much ambiguity in its definitions means your data quality will be poor and the variation in how team members score will be high.

Generally, we recommend keeping the score simple and easy to understand, like Red / Yellow / Green.  

  • Red means stop (high risk)
  • Yellow means caution (needs improvement)
  • Green means go (great health)

If you feel that more granularity is necessary, add a letter grade: A, B, C, D, F, as used in most US grade-school scoring systems.  

  • A = Excellent (everything is perfect)
  • B = Good (room for improvement, but generally very good)
  • C = Average / Warning (should improve, customer is not achieving full potential)
  • D = Risk (unhappy, low value)
  • F = Failure / Will Churn (red alert!)

For easy visual interpretation, we recommend associating these letters with colors (A&B = green // C = yellow // D&F = red); this makes the score easier to read at a glance and simplifies the concept back into RYG.  Notice we assigned only three colors, not five.
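The letter-to-color collapse above is simple enough to express as a one-line lookup. A minimal sketch in Python (the function name is ours):

```python
def color_for(grade: str) -> str:
    """Collapse the five letter grades back into the three RYG colors."""
    return {"A": "Green", "B": "Green", "C": "Yellow", "D": "Red", "F": "Red"}[grade]

print(color_for("C"))  # → Yellow
```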

We also recommend using three points of data for qualitative health: risk, value and engagement.  This way you can allow your team members a little bit of nuance with their health score.  If a customer is getting great value from your product but they constantly cancel calls with your customer team members, they may get a “green / A” score for value and a “red / D” score for engagement.  We also recommend providing the team with a health score matrix for qualitative health (see below) to define clearly how they should use each color / letter score for each data point.

[Figure: qualitative health score matrix — Risk Management 4a.JPG]

Now let’s add in a quantitative health score.  This score can be one of the most valuable indicators for your team regarding the overall success of the customer leveraging your product.  These metrics should be automatically captured by your product and fed into a spreadsheet or other CRM tool (like Gainsight or Salesforce) on a daily or weekly basis.  Capturing data like this less frequently will not allow your team to react quickly to negative scores.  

When building your score (start with just one final score), you will want to leverage multiple data points from your customer data analysis.  For this section, you’ll need to know the following:

  • your top 3 or 4 data points that “move the needle”
  • the threshold for each metric, above or below which the customer is “go/no go”
  • which ONE data point is your primary and which ONE is your secondary metric (the remaining one or two are tertiary)

There should be one primary indicator of customer health and happiness.  Usually, this primary indicator is something BIG - tied directly into your value proposition for your product.  Usage, ROI, transactions completed, time to value, etc. are often the biggest indicators of health.  This primary metric will be the make-or-break score.  If a customer does not have this metric “in the green”, then they cannot achieve a score higher than yellow / C.  

There should also be one secondary indicator of customer health.  This secondary indicator is usually the one that moves the needle, but not quite as much as your primary.  This secondary metric must be green to allow a score of green / A.  

The one or two tertiary indicators are additional metrics that provide insight into the customer’s overall performance with the product.  Perhaps this is a metric tied to the customer’s time on site, user adoption, impressions or another lower-value metric.  These help provide additional context if a customer is doing poorly on one of the more important health indicators.

Now it’s time to build your weighted framework.  The goal will be to have one quantitative health score that indicates to your customer team whether a customer’s performance is sufficient for success.  This will be an 11-point framework.  

  • Primary metric = 6 points
  • Secondary metric = 3 points
  • Tertiary metric #1 = 1 point
  • Tertiary metric #2 = 1 point

Total = 11 points

The key to this weighted framework is the health threshold: the number or percentage above or below which your customer gets a “go/no go” on that metric.  Let’s assume your primary metric is ROI.  In order for a customer to get a “go” (i.e., all 6 points), they have to be receiving 3x ROI from the platform.  Otherwise, they get 0 points for their primary metric.

  • A = 11 or 10 points
  • B = 9 or 8 points
  • C = 7, 6, or 5 points
  • D = 4 or 3 points
  • F = 2, 1, or 0 points

Green = 8+ points || Yellow = 7, 6, or 5 points || Red = 4-0 points
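The 11-point framework above can be sketched in a few lines of Python. Note how the point weights enforce the gates automatically: failing the primary caps the total at 5 (C / yellow), and failing the secondary caps it at 8 (B). The slot names and the example go/no-go inputs are our own illustrations:

```python
# Point values for each metric slot, per the 11-point framework.
POINTS = {"primary": 6, "secondary": 3, "tertiary_1": 1, "tertiary_2": 1}

def quantitative_score(go: dict) -> tuple:
    """go maps each metric slot to True ("go") or False ("no go")."""
    total = sum(POINTS[slot] for slot, passed in go.items() if passed)
    if total >= 10:
        grade = "A"
    elif total >= 8:
        grade = "B"
    elif total >= 5:
        grade = "C"
    elif total >= 3:
        grade = "D"
    else:
        grade = "F"
    color = "Green" if total >= 8 else "Yellow" if total >= 5 else "Red"
    return total, grade, color

# Example: primary (say, 3x ROI) met, secondary missed, both tertiaries met.
print(quantitative_score(
    {"primary": True, "secondary": False, "tertiary_1": True, "tertiary_2": True}
))  # → (8, 'B', 'Green')
```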

Now you have your quantitative health score - a single score based on up to four metrics from your customer’s direct interaction with your product - and you have your qualitative health score - three scores based on your customer’s direct interaction with your team.  These should be viewed independently to ensure context is provided, but you can also create a roll-up score that gives your customer a health score based on all of these results combined.  There are three ways to accomplish this, but only two that we recommend.

You can build a simple, unweighted roll-up score, where the four scores each contribute 25% of the overall score.  This means that if your CSM has marked the customer as Red / F for risk (meaning they know the customer will not renew), but the other three scores are green, the customer will not appear to be at significant risk.

--- Example 1 (Bad) ---

  • Risk = F
  • Value = A
  • Engagement = A
  • Quantitative / Product = A → Overall = B

This can blind you and your team to what is really going on with your customers.  We do not recommend unweighted scores.
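Example 1 can be reproduced with a straight average over a GPA-style mapping (A=4 … F=0, an assumption of ours). One Red / F among three A's averages out to a B, which is exactly the blind spot described above:

```python
# GPA-style mapping of letter grades, an illustrative assumption.
GPA = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
LETTER = {v: k for k, v in GPA.items()}

def unweighted_rollup(scores: dict) -> str:
    """Each of the four scores contributes equally to the overall grade."""
    avg = sum(GPA[s] for s in scores.values()) / len(scores)
    return LETTER[round(avg)]

print(unweighted_rollup(
    {"risk": "F", "value": "A", "engagement": "A", "product": "A"}
))  # → B  (the F is washed out by the three A's)
```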

You can create an “all-or-none” score, which is a great way to see risk: no matter how many greens, yellows or reds a customer has, their overall score is equal to the lowest score on their card.  So the same example customer would have a Red / F overall score, even if 3/4 of their card is green.  

--- Example 2 ---

  • Risk = F
  • Value = A
  • Engagement = A
  • Quantitative / Product = A → Overall = F

--- Example 3 ---

  • Risk = A
  • Value = B
  • Engagement = A
  • Quantitative / Product = A → Overall = B

This method ensures you will always see risk.  This method also means that there is no gradient - it is all or none.  If your customer team has some kind of exception in their data or they are working toward a goal of improving overall value, this method can create a LOT of red scores on your scorecard.
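"All-or-none" is simply the minimum of the card, where grades are ordered worst-to-best. A minimal sketch (the letter ordering is the natural F < D < C < B < A):

```python
ORDER = "FDCBA"  # worst to best

def all_or_none_rollup(scores: dict) -> str:
    """Overall score equals the lowest score on the customer's card."""
    return min(scores.values(), key=ORDER.index)

print(all_or_none_rollup(
    {"risk": "F", "value": "A", "engagement": "A", "product": "A"}
))  # → F  (Example 2)
print(all_or_none_rollup(
    {"risk": "A", "value": "B", "engagement": "A", "product": "A"}
))  # → B  (Example 3)
```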

The final method is to weight your scores, perhaps giving more weight to the Risk and Quantitative / Product scores than to Value and Engagement.  This method can be good for ensuring that risk is surfaced, but creating an accurate weighting system is difficult and data can still fall through the cracks.  If a customer is disengaged for a long period of time, for example, your team member may overlook it if the overall score looks good.

Sample Weighting

  • Risk = 35%
  • Value = 15%
  • Engagement = 15%
  • Quantitative / Product = 35%

--- Example 4 ---

  • Risk = A
  • Value = A
  • Engagement = D
  • Quantitative / Product = A → Overall = A
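The weighted roll-up is a weighted average over the same GPA-style mapping. In this sketch the GPA values and the round-half-up rule are our assumptions, chosen so the sample weighting reproduces Example 4 (where the D in Engagement, at only 15% weight, gets hidden by an overall A):

```python
GPA = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
WEIGHTS = {"risk": 0.35, "value": 0.15, "engagement": 0.15, "product": 0.35}

def weighted_rollup(scores: dict) -> str:
    """Weighted average of the four scores, rounded to the nearest grade."""
    avg = sum(GPA[scores[k]] * w for k, w in WEIGHTS.items())
    return "FDCBA"[min(4, int(avg + 0.5))]  # round half up

print(weighted_rollup(
    {"risk": "A", "value": "A", "engagement": "D", "product": "A"}
))  # → A  (Example 4: the Engagement D is masked)
```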

The final step in building your framework is to design a few easy-to-follow playbooks for your team to run when a customer sees a low score.  Tracking your scores is only half the battle; the real value comes from having a plan for improvement and acting on it!  These playbooks can include simple tasks, such as “re-engage the sales team” if engagement drops, or more complex tasks, such as “audit customer value” if value drops.  Designing playbooks or plans for your team to execute when a score drops or stays below a certain threshold ensures that you stay focused on what matters and avoid over- or under-working a problem that has arisen.  It also empowers your team to do what you need them to do: mitigate risk.  
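Playbook triggers can be expressed as a simple mapping from metric and threshold grade to an action. The two actions below come from the text; the trigger grades (run the playbook at C or worse) are illustrative assumptions:

```python
GPA = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

# metric -> (trigger grade, playbook); trigger grades are assumptions.
TRIGGERS = {
    "engagement": ("C", "Re-engage the sales team"),
    "value": ("C", "Audit customer value"),
}

def playbooks_to_run(scores: dict) -> list:
    """Return the playbooks whose metric is at or below its trigger grade."""
    return [action for metric, (floor, action) in TRIGGERS.items()
            if metric in scores and GPA[scores[metric]] <= GPA[floor]]

print(playbooks_to_run(
    {"risk": "A", "value": "A", "engagement": "D", "product": "A"}
))  # → ['Re-engage the sales team']
```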


Now that you’ve built your framework and a few playbooks for action, we’ll learn how to automate and scale this process for your team AND steps for change management to ensure the right habits are built to maintain good quality health scores (part 5).  To get updates when we publish the additional parts of this series, be sure to follow Sandpoint Consulting on LinkedIn.

For more information about Risk Management, or to request a customized Risk Management Workshop for your team, send us a note at contact@sandpoint.io.