RFP weighted scoring makes me think of my humanities classes.
I always pitied my teachers with their stacks of papers to grade. They'd come in Monday morning and regale us with pitiful stories of spending their weekends grading midterms.
Some of my papers came back with an exact grade (95.6 pts.), while some were a bit more ambiguous (A+, smiley face, gold star, etc.). The latter, while positive, were really more of a philosophy exercise than an exact evaluation.
I didn't blame them though. Precisely scoring content (dense verbiage) isn't easy. It can be tricky for humanities instructors and RFP evaluators alike. Weighted scoring can be a great solution.
It's like a rubric for grading (or scoring) responses, turning text-based answers into a quantifiable ranking.
The goal is to support fact-based, accurate decision-making. In this post, we'll give a quick overview of how weighted scoring works and the main steps involved.
What is Weighted Scoring?
Weighted scoring is the time-honored practice of setting weights (or point values) for sections or individual questions in an RFP. You might be doing it (loosely) already. For instance, you might currently tell your bidders that 20% of your decision is based on criterion X, 35% on criterion Y, etc.
- Approach 10%
- Cost 40%
- Management and Leadership 10%
- Technical requirements 30%
- Innovation 10%
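To make the arithmetic concrete, here's a minimal sketch (in Python, with made-up vendor scores) of how section weights like those above combine into a single weighted total:

```python
# Hypothetical section weights (matching the list above) and raw
# section scores (0-100) for one vendor. The scores are invented
# purely for illustration.
weights = {
    "Approach": 0.10,
    "Cost": 0.40,
    "Management and Leadership": 0.10,
    "Technical requirements": 0.30,
    "Innovation": 0.10,
}

scores = {
    "Approach": 80,
    "Cost": 65,
    "Management and Leadership": 90,
    "Technical requirements": 85,
    "Innovation": 70,
}

# Weighted total: multiply each section score by its weight and sum.
weighted_total = sum(weights[s] * scores[s] for s in weights)
print(weighted_total)  # 75.5
```

A vendor who aces low-weight sections but scores poorly on Cost (40% here) will see that reflected immediately in the total.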
This overall section percentage score is great, but it's only the tip of the iceberg of what's possible.
The thing to remember as you begin to use weighted scoring is that the end-goal is to create a measurement system that clearly whittles down candidates according to your unique needs, making your "best-fit" more obvious. (Learn more about why weighted scoring is worth the work here.)
Basically, it's using a defined scale to “grade” RFP responses. There are three major steps to the process.
Step 1: Pick Your Process
Before you determine the actual "weight" of each question and section, first you need to choose your overall scoring approach.
Option A: The manual route: complex scoring spreadsheets
If you aren't using RFP management software (RFP365, Ariba, etc.), you are probably doing your scoring "manually," most likely meaning complex Excel spreadsheets.
Scoring manually can be perfectly adequate, depending on how many RFPs you process a year, as well as how detailed those requests are. But if your RFP volume is high, or if your team is stretched thin, going digital can be worth the investment.
Advantage: You don't have to purchase specialized software.
Disadvantage: You end up with spreadsheets full of macros and complex formulas (or errors), and you probably have to compile feedback from your team manually.
Without a built-in algorithm, it's tempting to keep scoring simple by generalizing, i.e. only rating whole sections, not individual questions, which makes overall scores less indicative. Or, if you do go ahead and score each question, it complicates the math and leaves more room for miscalculation.
Option B: The digital route: automate RFP scoring
Good RFP management software can give you several features that make scoring easier.
Centralized collaboration means fewer emails (thus fewer missed emails), real-time visibility, workflow management, reusable content, custom drop-down menus, and more, not to mention auto-scoring.
Auto-scoring allows you to set specific scoring values in your custom question drop-down menus (see example below). When a respondent selects a certain option from that menu, a default score is applied.
The name auto-scoring is actually a bit of a misnomer. It's really more like default-scoring, since (if you're using software like ours) your evaluators can override the default scores.
Auto-scoring provides a great start and can significantly speed up your evaluations, especially if you ask dozens of detailed questions.
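To illustrate the concept (this is a sketch of the general idea, not any particular platform's implementation), here's what default-scoring with an evaluator override might look like. The answer options and point values below are hypothetical:

```python
# Each drop-down answer option carries a preset ("default") score.
# Option names and point values here are invented for illustration.
DEFAULT_SCORES = {
    "Fully supported out of the box": 10,
    "Supported via configuration": 7,
    "Supported via customization": 4,
    "Not supported": 0,
}

def score_answer(selected_option, override=None):
    """Return the evaluator's override if given, else the default score."""
    if override is not None:
        return override
    return DEFAULT_SCORES[selected_option]

print(score_answer("Supported via configuration"))     # 7
print(score_answer("Supported via configuration", 5))  # 5 (evaluator override)
```

The default score gets most answers graded instantly; the override preserves the evaluator's judgment for edge cases.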
(See an example of what "going digital" and using default-scoring looks like in the RFP365 platform below).
Advantage: it can save you significant time. It also makes the actual weighting and tabulating of scores much easier and nearly instantaneous.
Disadvantage: it's an investment. Dynamic software doesn't come cheap, but it can be worth it. If you're on the fence, watch some demos and/or take a couple of platforms for a test drive.
Step 2: Ask the Right Questions
Whether you decide to go digital or stick with spreadsheets, you'll need to make the same decision: how are you going to quantify the content of your RFP responses?
To effectively grade responses, requirements need to be written as closed questions (specific rather than open-ended) so you can easily score them. Note: there are many great opportunities for open-ended questions elsewhere in an RFP, but requirements are not one of them.
This article explains how to rate requirements for importance and gives a great example of turning "content" into a concrete score (see their example table below).
*Image borrowed from Chris Doig's article
Essentially, you are:
- quantifying your priorities
- clearly defining expectations
- taking subjectivity out of the equation
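As a rough sketch of what that rating-to-score translation can look like (the importance labels, point values, and requirements below are hypothetical examples, not taken from the article referenced above):

```python
# One common way to turn requirement "importance" ratings into numeric
# weights. Labels, multipliers, and requirements are invented examples.
IMPORTANCE_WEIGHTS = {
    "Mandatory": 10,
    "Very important": 8,
    "Important": 5,
    "Nice to have": 2,
}

requirements = [
    ("Supports single sign-on", "Mandatory"),
    ("Exports reports to PDF", "Important"),
    ("Dark-mode interface", "Nice to have"),
]

# Each requirement's maximum possible contribution to the section score.
for name, importance in requirements:
    print(f"{name}: weight {IMPORTANCE_WEIGHTS[importance]}")

total_possible = sum(IMPORTANCE_WEIGHTS[imp] for _, imp in requirements)
print(total_possible)  # 17
```

Once every requirement carries a numeric weight, a "Nice to have" miss can't sink a vendor who nails the mandatory items.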
Step 3: See How Respondents Compare
One of the big selling points of weighted scoring is that it makes vendor comparison so much easier. The problem with general section rating percentages is that they're vague (and highly subjective), which can make decision-making murky.
But when you have exact section (and question) scores, you can easily compare and feel confident about your selection.
Doing weighted scoring right is worth the time on the front-end.
Good software can make selection even easier by visualizing those numbers into a side-by-side comparison matrix (see our example below).
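If you're going the DIY route instead, even a small script (or the equivalent spreadsheet formulas) can produce that side-by-side view. The vendor names, weights, and scores below are invented for illustration:

```python
# A DIY side-by-side comparison matrix, Excel-style, in plain Python.
# All names, weights, and scores are hypothetical.
weights = {"Approach": 0.10, "Cost": 0.40, "Technical": 0.30, "Innovation": 0.20}

vendors = {
    "Vendor A": {"Approach": 80, "Cost": 65, "Technical": 85, "Innovation": 70},
    "Vendor B": {"Approach": 70, "Cost": 90, "Technical": 60, "Innovation": 80},
}

# Header row, then one row per vendor with its weighted total at the end.
print(f"{'Vendor':<10}" + "".join(f"{s:>12}" for s in weights) + f"{'Total':>10}")
for name, scores in vendors.items():
    total = sum(weights[s] * scores[s] for s in weights)
    print(f"{name:<10}" + "".join(f"{scores[s]:>12}" for s in weights) + f"{total:>10.1f}")
```

Here Vendor B's strong Cost score (weighted 40%) outweighs Vendor A's technical edge, which is exactly the kind of trade-off the matrix makes visible at a glance.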
ProTip: Whether you're scoring digitally or manually, we highly recommend taking the time to score not only sections, but individual questions.
Drilling down to score each question will give you a much better, much less subjective overall score. Yes, it's more work, but just as you wouldn't marry someone you didn't know or hire an employee you didn't interview, you don't want to enter a partnership with someone without doing your homework.
Conclusion: Why Weighted Scoring
You need to have the right information to make the right decision. Namely, data that is as objective as possible. To get that data, you need to have a game plan not only for what you're asking, but for how you're going to make sense of it.
That's why you need to set guidelines before you publish your RFP. When responses come in, you'll have an agreed-upon set of priorities to grade them against.
Having that rubric makes side-by-side comparison easy, because each value is included and weighted according to priority (and helps break ties).
This initial investment means you feel confident in your final selection results, and if management, shareholders, or other stakeholders question it, you'll have a solid answer.
How to Do Weighted Scoring (Recap)
Step 1. Pick the RFP scoring process that best fits your needs (software: faster, more in-depth comparison; manual: status quo, cheaper).
Step 2. Write requirements as closed questions, so they can be objectively (and easily) scored.
Step 3. See how they stack up. Compare your choices side-by-side in a matrix (using either software or DIY-style in Excel) so that your best fit stands out.
Editor's Note: This post was originally published in April 2015 and has been updated for freshness, accuracy, and comprehensiveness.