This article is about how to conduct a software capabilities demonstration (SCD) as part of a software acquisition project. A more graphic and entertaining term for this activity is “vendor shootout”, which explains the gunslingers pictured for this article.
Another term you may run across is “Operational Capabilities Demonstration,” or “OCD”. While this term may be more precise, I’m not a fan of it because of the potential for confusion with Obsessive Compulsive Disorder.
I wouldn’t blame you for thinking that only project teams composed of OCD-suffering members could complete an SCD, given the work involved. That doesn’t mean you can’t tailor the process to fit your needs. Just keep in mind that, like most things in life, what you get out of the SCD is commensurate with the effort you put into it.
I’ve used the SCD approach described in this posting on about a half-dozen off the shelf acquisition projects, the first one being the Federal Judiciary’s Financial Accounting System for Tomorrow (FAS4T) project back in the mid-1990s. The results of the SCD fed into the vendor evaluation factors as part of a Request for Proposals (RFP).
In that acquisition, the financial system with the highest SCD score did not win because its cost was significantly higher than the competition. Another vendor had a much lower cost than the winning bid, but its SCD scores were markedly worse than the ultimate winner’s, giving it a much lower composite score and taking it out of the winner’s circle.
Instead of ending-up with the lowest cost and least capable system, the Judiciary was able to justify purchasing a more capable system at a higher cost. The evaluation team was able to justify this “best value” decision because it used the results of each vendor’s SCD as one of several evaluation factors for award.
Without the SCD results as an evaluation factor, it’s likely that the Judiciary would have bought the lowest cost offering. While saving on the up-front acquisition cost, the Judiciary probably would have erased those savings in the long run by paying for a less efficient, more labor intensive financial system.
There are several advantages to using a SCD in software acquisition:
- As the SCD uses end users as evaluators, it’s a great way for stakeholders to participate in the acquisition process and to secure their buy-in on the selected product.
- Your SCD evaluation team develops a deep understanding of the vendor products, especially the winning product. Of course, the degree of understanding depends upon how much detail is built into the SCD process.
- It’s rare to find an off the shelf offering meeting 100% of your requirements, so the SCD team gains insight on the willingness of the vendors to configure (and, perhaps modify) their offerings to better meet your requirements.
- Results of the SCD also give insight into the “gaps”, or where each vendor product fails to meet your important requirements. It can also uncover bugs in the vendor’s system (though that is not the main reason for the SCD).
- Materials developed for the SCD can be used again for later testing (e.g., test data and test scenarios). Conversely, you may already have the materials available from earlier acquisition or development efforts that you can repurpose for the SCD.
In the interest of full disclosure, there are also some disadvantages to the SCD approach:
- It can be time consuming, especially if requirements are poorly documented. What usually happens in this case is the SCD devolves into a requirements definition (or use case analysis) effort. This is indicative of a different and more serious problem: rushing into a procurement without fully understanding the requirements.
- It can also be expensive, as the SCD requires a dedicated group of evaluators selected from stakeholder subject matter experts (or “superusers”). If the evaluators are geographically dispersed, then holding face-to-face meetings will run up travel expenses. However, the internet has evolved to the point where travel can be eliminated entirely, with the team working together using remote meeting tools.
- It can scare off potential bidders. That said, this “fear” can itself indicate how confident a vendor is in their product.
SCD Part of a Larger Process
The SCD is one tool in your acquisition tool chest. In my earlier article How to Build a Business Case, I walked through the process for analyzing alternative solutions for meeting an emerging business need. The article also discusses how to analyze costs and benefits of those alternatives and ways of presenting the analysis to decision-makers in the form of a business case.
The goal of the business case is to present a recommendation to the decision authority (an executive, or “sponsor”) to satisfy the business need. As Figure 1 below shows, there are three possible outcomes to this decision:
Each outcome results in a different acquisition strategy. The three possible acquisition strategies are:
Make – None of the off the shelf alternatives can meet the business need. In this situation, the outcome is to acquire developer resources to build a custom solution. As you can’t run an SCD on a software system that doesn’t exist, you can forget about doing an SCD as described in this article. From a procurement standpoint, getting developer resources to build the system could be as easy as modifying an existing contract. If there isn’t a usable contract in place, you’ll have to start a new procurement.
Buy – If the decision is to purchase an off the shelf solution, then consider including an SCD in the acquisition effort. And, while we’re at it, I strongly suggest that you manage this acquisition effort as a project – an acquisition project – discussed later.
Status Quo – In this case, the decision is to do nothing. For example, the business case may show that business as usual is “good enough” or that minor updates to existing systems and processes could accommodate the need without new acquisition. As all the pieces are already there, you won’t have to start a new acquisition.
Acquisition as a Project
This section describes the major activities in an acquisition project for purchasing an off the shelf solution. As you can see below in Figure 2, I’ve taken the normal acquisition process used by the U.S. federal government and added:
- SCD planning as part of the procurement planning activity.
- Conducting the SCD as part of the evaluating vendor proposals activity.
- Closing out the project to collect lessons learned for improving the overall approach for the next acquisition.
A description of each activity, and their products, follows.
Procurement Planning

For most procurements, the planning activity involves assigning a contracting specialist and members to the cost and technical evaluation teams. Managing a procurement project that includes an SCD also requires:
- A project manager;
- Five to seven business subject matter experts (SMEs) to serve on the SCD evaluation team, and;
- Two business analysts (who can be in-house resources or contractors).
Like any other well-managed project, the acquisition project should have an executive assigned as the project sponsor. It is the project sponsor’s responsibility to approve the charter for the acquisition project (example available here), which is written during procurement planning.
The products of the procurement planning activity are:
- Acquisition strategy;
- Funds available to obligate onto a new contract;
- At least the procurement specialist (contracting officer) and project manager selected and assigned to the acquisition project team, and;
- Acquisition project sponsor selected and assigned.
Project Planning

The project team, led by the project manager and advised by the procurement specialist, is responsible for developing the acquisition project plan. Much of the project schedule will depend upon the acquisition strategy and the time constraints (if any) specified by your organization’s procurement requirements.
Also accomplished during project planning is an assessment of the requirements readiness for use in the procurement. I’ll cover this in more detail later in this article in my description of the SCD process.
Suffice it to say that the degree of requirements readiness has a direct influence on the acquisition project schedule. If requirements are poorly understood, poorly documented, and/or volatile, the project team will have to modify the project schedule to accommodate additional work needed to address the situation.
If the requirements are in really sad shape, then it’s time to question the business case and determine what kind of requirements went into producing it. The acquisition project manager and acquisition specialist should discuss this situation with the project sponsor who can decide if it is worthwhile proceeding with the acquisition project.
I’ve seen business cases that were very thin on requirements but approved by executives. Nothing good comes from a situation like this unless the acquisition project team is very lucky. And depending upon luck is not a risk mitigation strategy, though I’ve seen many a project team (and some Project Management Offices) do just that.
Products of the project planning activity include:
- Revisions to the project charter (approved by the project sponsor);
- Requirements readiness assessment;
- Draft project plan (including the overall schedule and the SCD plan discussed later);
- Descriptions of project team members’ roles and responsibilities and the individuals assigned.
Project Plan Review and Approval
After the acquisition project team has completed the acquisition project plan, the next step is to walk through the plan with the project sponsor and other interested executives (investment review committee members, the sponsor’s immediate supervisor, and such). The goal is to clearly describe how the acquisition project will work, the project cost and schedule, and any modifications the project charter may require to reflect the project plan.
Products of the project plan and review activity are:
- Approved project plan signed-off by the project sponsor;
- Updated project charter, and;
- Approval to start the project.
Prepare and Issue Request for Proposal (RFP)
The focus of this activity of the acquisition project is to assemble the RFP package and release it to potential vendors. Most of this work follows the normal procedures your organization uses for releasing an RFP. In addition to the normal workflow, the project team will develop these materials pertaining to the SCD:
- Copies of business/functional requirements documentation;
- Description of SCD scoring;
- SCD evaluation factor in addition to the usual evaluation factors for this type of procurement;
- Instructions for vendors participating in the SCD;
- SCD test data (supplied in electronic format), and;
- SCD script.
Requirements documentation and SCD items can be added as appendices to the RFP package.
Most RFPs that I’ve worked on involving an SCD required a legal review before the procurement specialist could release the RFP package to bidders. Follow your organization’s procedures for this.
During the response period (running from the RFP release to the proposal due dates), vendors may have questions about the acquisition. The procurement specialist provides the answers after coordinating with the acquisition project manager (as some of the questions may be technical rather than contract related).
Products of the prepare and release RFP activity include:
- RFP package, including the SCD materials noted in the bulleted list above in this section;
- Answers to clarification questions posed by bidders during the response period;
- Vendor technical and cost proposals.
Evaluate Proposals

This acquisition project activity is concerned with the cost, technical, and SCD evaluation teams reviewing the proposals submitted by the bidders. The teams base their scoring on the guidance described in the RFP (usually Section M for US federal procurements).
Vendors meeting certain requirements specified in the RFP are invited to participate in the SCD (indicated by the vendor demonstration activity to the left of evaluate proposals in Figure 2). Usually, this is not an issue as vendors won’t submit a proposal if they are not confident they can complete the SCD with reasonable effort.
Products of this acquisition project activity include:
- Bidder participation in the vendor demonstrations;
- Completed scoring for cost, technical, and software capabilities (through the SCD and vendor demonstrations), and;
- Award recommendation/justification.
Contract Award

During this activity, the acquisition project manager and procurement specialist present the award recommendation to the approving authority. The individuals who actually sign off on the contract award recommendation depend upon your organization’s process for awarding contracts. Presumably, the project sponsor (the same executive who approved the project plan and charter earlier) is this approval authority, but that may not be the case.
The products of the contract award phase are:
- Contract award;
- Award justification, and;
- Authorization to initiate a follow-on project to implement the acquired system.
Project Closeout

As soon as possible after approving the contract award, the project team should identify and capture lessons learned from the effort. Gather and preserve all project documentation as reference materials for similar future efforts and as supporting materials for possible contract award protests. Team members should not be released from the project until after completing the lessons learned document.
I won’t go into too much detail for completing the lessons learned, but I will say that most project teams aren’t very good at doing them. The most effective way to “harvest” lessons is through facilitated sessions with an experienced third-party having no stake in the project.
Products of project closeout include:
- Lesson learned documentation, and;
- Released project resources.
Don’t forget to celebrate completing the project!
Diving into the SCD
The first part of this posting focused on the concept of managing an acquisition effort as a project. I described the three possible acquisition strategies (including the strategy of, essentially, doing nothing). Lastly, I went into a bit of depth describing how to “projectize” off the shelf software acquisition. I also introduced the concept of a software capabilities demonstration, or “SCD”, as an additional evaluation factor for selecting a winning software product. Next, I’ll do a deep dive into the SCD itself.
Figure 3 below shows the activities involved in completing an SCD for off the shelf software acquisition. In the diagram, I’ve mapped the SCD activities to their corresponding “parent” acquisition project activities.
The following sections describe each of the SCD activities in detail.
Assess Requirements Readiness
While shown as the first step of the SCD process, this assessment is also, as noted earlier, a very important consideration for determining whether there should even be a procurement.
Locate, review, and evaluate the business requirements for the new system, looking for requirements that:
- Are ambiguously written and open to interpretation;
- Are untestable, and;
- Are not prioritized (for example, using the MoSCoW method with functional requirements documented as user stories).
Also verify that the requirements are complete and accurately match the scope of the system to be acquired. For example, when using the MoSCoW method, “won’t have” requirements are out of scope and should be excluded from the requirements specification because they are irrelevant to the acquisition.
Have a professional business analyst work with stakeholders to clarify ill-defined requirements that are in scope but fail the three tests listed above. This will, of course, delay releasing the RFP (and add cost). However, waiting to clarify requirements after awarding a contract is a very risky strategy and best avoided.
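To make the readiness checks concrete, here is a minimal Python sketch of triaging a requirements list. The field names (`id`, `text`, `priority`) and the dictionary-based format are illustrative assumptions, not part of any standard requirements tool:

```python
# Hypothetical sketch: triage a requirements list for SCD readiness.
# Field names ("id", "text", "priority") are assumptions for illustration.

MOSCOW = {"must have", "should have", "could have", "won't have"}

def triage(requirements):
    """Split requirements into in-scope, out-of-scope, and needs-rework buckets."""
    in_scope, out_of_scope, needs_rework = [], [], []
    for req in requirements:
        priority = req.get("priority", "").lower()
        if priority == "won't have":
            out_of_scope.append(req)      # exclude from the RFP package
        elif priority not in MOSCOW or not req.get("text", "").strip():
            needs_rework.append(req)      # send back to the business analysts
        else:
            in_scope.append(req)
    return in_scope, out_of_scope, needs_rework

reqs = [
    {"id": "R1", "text": "User can view assigned security roles.", "priority": "Must Have"},
    {"id": "R2", "text": "", "priority": "Should Have"},
    {"id": "R3", "text": "Legacy report X.", "priority": "Won't Have"},
]
in_scope, out_of_scope, rework = triage(reqs)
```

A real assessment would also flag ambiguity and untestability, which require human judgment; this sketch only automates the mechanical checks.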
Assess Existing Test Documentation
If the effort is to replace an existing obsolescent system, then review the available testing materials to determine what can be reused for the SCD. The goal is to maximize reuse and minimize having to create new materials, which is more costly and time consuming.
As the SCD uses scenarios to realistically simulate system usage, there may already be a set of test scenarios (and associated test data) that can be repurposed (sometimes with little effort) for the SCD. Existing test scenarios should be mapped back to their “source” requirements. If they aren’t, then invest some professional business analyst time to do the mapping. Again, this will add delay and cost, but in the long run, the effort will more than pay for itself.
Write Test Scenarios
If unsuccessful in finding useful test documentation and materials, then it will be necessary to develop a set of test scenarios “from scratch”. Use a team of professional business analysts working with subject matter experts to develop the scenarios. Each scenario should:
- Describe the users involved in the scenario. For users, I suggest using personas and story writing, like writing a play. Each persona becomes a character in the story with certain roles (that correspond to user roles in the business process being simulated and the security roles assigned to the user in the system).
- Describe the test data used in the scenario.
- Describe the expected end state of the data, or the results, of the scenario. This is the “expected result” of the scenario.
- Specify date/time if simulating the passage of time (refer to my article Time Travel and Software Testing for more information).
- Map each scenario to one or more requirements.
- Ensure that all “must have” requirements (or the equivalent priority in your prioritization scheme) map to at least one scenario.
Again, depending upon the state of the existing test materials, a lot of this work could already exist to the point where it is a matter of verification. Also, in the interest of saving cost and time, consider limiting the scenarios to the “must have” requirements.
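The must-have coverage check described above can be automated in a few lines. This sketch assumes a simple, illustrative mapping format where each scenario lists the requirement IDs it exercises:

```python
# Sketch: verify every "must have" requirement maps to at least one scenario.
# The data format is an assumption for illustration only.

def uncovered_must_haves(requirements, scenarios):
    """Return IDs of must-have requirements not exercised by any scenario."""
    covered = {req_id for s in scenarios for req_id in s["requirements"]}
    return [r["id"] for r in requirements
            if r["priority"].lower() == "must have" and r["id"] not in covered]

requirements = [
    {"id": "R1", "priority": "Must Have"},
    {"id": "R2", "priority": "Must Have"},
    {"id": "R3", "priority": "Should Have"},
]
scenarios = [
    {"name": "Scenario 1", "requirements": ["R1", "R3"]},
]
print(uncovered_must_haves(requirements, scenarios))  # prints ['R2']
```

Any requirement ID the check returns needs a new scenario (or a deliberate, documented decision to leave it out of the SCD).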
Assemble SCD Script
Review the test scenarios and, if necessary, arrange them into logical sequences simulating how one or more users would interact with the system. Similar to writing a script for a play or a television show, arrange the scenarios so they tell a story with a logical sequence of events. The SCD script should also have a cast of characters, such as:
- System administrator performing tasks in the system that only they can do, such as setting up user security roles and assigning roles to users.
- Process worker using the system to support their work responsibilities such as entering data, making a decision based on the data, or some other work activity.
- Approver who reviews the work of the process worker and approves or denies their recommended action (if the business process involves an approval step).
- Other workers using the system in different capacities, such as:
- Viewing data, but not having the appropriate security role to update the data.
- Updating existing data.
- Removing data.
- The system itself can be in the cast. For example, there may be scenarios involving batch processing triggered by a timer.
The requirements documentation should identify the roles and responsibilities – or access to system functions based on user responsibilities. (Look out for this when assessing requirements readiness).
One of the advantages of correctly specified user stories is they explicitly identify the user role in the story. User stories beginning with “as a system user I want to…” will have to be rewritten unless every “system user” has the same access to data and system functions (highly unlikely for more complex systems).
Keep in mind that real users often have several roles in a system. Consider creating characters having multiple roles as a way to test role based access during the SCD.
Create and Package SCD Test Data
When assessing existing test documentation and materials, also assess the availability and quality of any test data. If none are available, the SCD team will have to develop new test data for the scenarios to operate upon.
The SCD process diagram clumsily shows that creating test data is an iterative process. As the team refines the scenarios, the test script, and writes evaluation questions, they may have to make adjustments to the test data.
All SCD test data will be given to vendors responding to the RFP. Therefore, the SCD team has to “package” the data in a way vendors can access and use it in preparing for their SCD sessions. For structured data, consider using spreadsheets, JSON, or XML formatted data files. For unstructured data, use the appropriate file formats (e.g., JPEG for images).
Consider using a file sharing service to share the test data with the bidding vendors. Pay attention to the sensitivity of the test data and use the appropriate access controls if necessary. Ideally, the test data should simulate sensitive data, but not use any data that are truly sensitive.
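As an illustration of packaging structured test data, here is a sketch that writes personas and transactions to a JSON file. The file name, field names, and values are all invented for the example; use whatever structure matches your scenarios:

```python
# Sketch: package SCD test data as JSON for distribution to bidders.
# All names and values below are illustrative assumptions.
import json

test_data = {
    "personas": [
        {"username": "JSchmedlap", "roles": ["Process Worker", "Approver"]},
    ],
    "transactions": [
        {"id": "T-0001", "amount": 1250.00, "due_date": "2024-06-30"},
    ],
}

# Write the package; bidders would download this from the file sharing service.
with open("scd_test_data.json", "w") as f:
    json.dump(test_data, f, indent=2)
```

Note that the simulated data carry no real sensitive values, in keeping with the guidance above.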
Write Evaluation Questions
After arranging the test scenarios into a realistic and coherent script and identifying the “cast of characters”, the next step is to write evaluation questions to capture the evaluator’s assessment of the vendor’s performance. Use a numbering scheme to cross reference actions in the SCD script with each evaluation question.
Questions should follow the scenarios (and the script) in lock-step fashion using these question types:
- Yes/No response;
- Usability, and;
- Comment.
The following subsections describe each type of question in more detail.
Yes/No Questions

Use yes/no questions to assess whether a vendor successfully completed the task (e.g., generated the expected result) referenced in the SCD script. Some example yes/no questions (taken from actual SCD scripts) are:
- Does the system allow the user to check and view their assigned security role(s)?
- Can the user view a list of all financial transactions having a due date from the current date through the end of the current month?
As the term implies, yes/no questions are binary. The vendor earns credit only if the answer to the question is yes and none if the answer is no.
Usability Questions

Usability questions assess the ease of use of the vendor’s system. In software engineering, usability is defined as:

The degree to which a software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use. (Ergonomic Requirements for Office Work with Visual Display Terminals, ISO 9241-11, ISO, Geneva, 1998)
Each evaluator will react to the system differently. Some evaluators are more tolerant of a poorly designed interface (screen layout and workflow) than others. A large number of factors (some psychological) affect the usability of a software system and completing an in-depth usability study takes more time and effort than allowed under most procurement schedules.
Using a yellow-green-blue scoring scheme to capture evaluator impressions of system usability simplifies this task and gives meaningful results considering the tight deadline most procurements are under. Table 1 below summarizes how to use the color scheme to assess usability.
Usability questions should be used in conjunction with a yes/no question. If a vendor was unsuccessful and received a NO response, then evaluators will skip the indicated usability questions (there may be more than one per yes/no question). Here’s an example:
Login as the user JSchmedlap.
View the user roles assigned to JSchmedlap.
- Does the system allow you to view JSchmedlap's security roles (Y/N)? If the answer is NO, proceed to Scenario 2 and skip the remaining questions.
- How easy was it to view user JSchmedlap's security roles (usability question)?
- Assess the usefulness and understandability of the display of JSchmedlap's security roles.
This pattern of actions followed by evaluation questions repeats throughout the script.
Comment Questions

Use comment questions to collect the evaluator’s general impressions about some aspect of the vendor’s performance. As it is extremely difficult to score a comment, this question type should not be scored, so that it does not factor into a vendor’s SCD score. This question type really serves to “jog” the evaluator’s memory during post-SCD discussions.
Tips for Writing Evaluation Questions
The scoring concept behind an SCD is that every vendor starts their demonstration (described later) with zero points. If the vendor cannot successfully complete a scenario, the penalty is simply that no points are awarded. Carefully phrase evaluation questions so that a positive outcome results in a score.
Use yes/no questions to set up one or more usability questions (as shown in the example above). Here’s how the sequence works:
- Following the SCD script, the vendor performs an action with their system.
- The first evaluation question, a yes/no, asks if the vendor successfully completed the action and achieved the expected result.
- If the vendor did complete the action, the evaluator answers “yes” and then moves on to assess the usability aspects of the action.
- However, if the vendor failed to complete the action or generate the expected result, then the usability question(s) about the action are skipped and the vendor doesn’t score any points.
Using this sequence avoids situations where a vendor could receive a high usability score, but fail to meet most functional requirements (in other words, a “pretty system” that fundamentally doesn’t work).
Questions referencing “must have” priority requirements should carry more weight than those referencing “should have” requirements. Questions referencing “could have” priority requirements, if included at all, ought to have little weight.
Don’t get too elaborate in the weighting scheme. For example, there may be situations where several questions reference the same requirement. This can be handled by keeping only the first instance of the question and removing the remaining instances from SCD evaluator scoring. Evaluators will see the question once and not have to respond to it multiple times (which saves time and simplifies the SCD).
I’ve successfully used the weighting scheme I’m about to describe on several SCDs.
Weighting Yes/No Questions
The weights shown in Table 2 below for yes/no questions have worked well for several earlier SCDs (substitute your prioritization scheme if not using MoSCoW):

| Referenced Requirement Priority | Recommended Question Weight / Maximum Points |
| --- | --- |
| Must Have | 10 points |
| Should Have | 3 points |
| Could Have | 1 point |
Weighting Usability Questions
Usability questions have a range of possible points to score. Experience has shown that using a three-point Likert scale for the color score works best. Table 3 below shows an approach for scoring usability questions by requirement priority corresponding to the color scores listed in Table 1 earlier.

The numbers show the points to award when the evaluator selects the corresponding color for assessing usability. Note that while the table suggests scores for questions referencing “could have” priority requirements, it’s usually not worth the time to include questions for these low-priority requirements in the SCD.
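Putting the skip logic and weighting together, here is a hedged Python sketch of scoring a single script step. The yes/no weights come from Table 2; the usability point values are assumptions (with blue as the best color score), since Table 3 isn’t reproduced in this text:

```python
# Sketch of per-step scoring: yes/no weights from Table 2, plus assumed
# usability points per color score (Table 3 values are illustrative guesses).

YES_NO_POINTS = {"must have": 10, "should have": 3, "could have": 1}

USABILITY_POINTS = {  # assumed values; substitute your own Table 3
    "must have": {"blue": 5, "green": 3, "yellow": 1},
    "should have": {"blue": 3, "green": 2, "yellow": 1},
}

def score_step(priority, completed, usability_color=None):
    """Score one yes/no question and its follow-on usability question."""
    priority = priority.lower()
    if not completed:
        return 0  # failed step: usability questions are skipped, no points
    points = YES_NO_POINTS[priority]
    if usability_color:
        points += USABILITY_POINTS[priority][usability_color.lower()]
    return points

print(score_step("Must Have", True, "green"))  # 10 + 3 = 13
print(score_step("Must Have", False, "blue"))  # 0: step failed, usability skipped
```

The early return is what prevents the “pretty system that fundamentally doesn’t work” problem: a failed step earns nothing, no matter how polished the interface.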
Example SCD Scoresheet/Script
Figure 4 below shows a portion of a combination SCD script/scoresheet (you can view a sample here).
A couple of things to highlight about the scoresheet/script:
- Note the use of time shifts and directions to the vendor to simulate the passage of time.
- Step two directs the vendor to log in as a user (JSchmedlap, in this case) and complete a series of steps as the JSchmedlap persona.
- The first evaluation question asks if JSchmedlap was able to see his roles (question 3.1). If this wasn’t possible (perhaps a system design flaw), the script skips over several questions to step 5. The vendor receives no points for the skipped questions because the system doesn’t allow the user to view their user roles (which is a must-have requirement).
WARNING! The scoresheet/script can become quite lengthy for very complex systems. Focus on the “must haves” and avoid repeatedly testing the same requirements to help reduce the number of responses required of the SCD evaluators.
Strategies for Collecting Evaluator Scores
After settling on the scenarios, writing the evaluation questions, weighting the questions, and finalizing the SCD script/scoresheet, give some thought to how to collect the SCD scores from the evaluators after each bidder’s SCD session.
My recommendation is to give evaluators paper copies of the script/scoresheet and have them write their responses and comments on the sheets. Upon completing a bidder SCD session, collect the completed scoresheets from the evaluators and tally the score by hand (like a high school teacher would do for grading student tests).
I also suggest creating a spreadsheet for capturing all of the scoring data by question. This allows further analysis of the SCD using spreadsheet features. For example, using a spreadsheet helps with rapidly identifying areas of weakness with the winning system after contract award. This knowledge gives insight about where to emphasize user training and support and can be quite valuable. (This level of detail isn’t required for awarding a contract as it’s the total SCD score that counts.)
As an alternative to manual scoring, you can use web-based opinion survey tools (SurveyMonkey and the Microsoft SharePoint survey tool are examples) in lieu of paper forms. However, setting up an online survey and adding the logic to skip questions based on evaluator response can take a lot of time (not to mention testing the survey) and is probably not worth the added complexity.
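To illustrate the spreadsheet-style tally, here is a small sketch that totals per-question scores across evaluators and flags the weakest area. The evaluator names, question IDs, and point values are invented for the example:

```python
# Sketch: tally per-question scores captured as rows (one row per
# evaluator per question), as a spreadsheet would. Data are illustrative.
from collections import defaultdict

# rows: (evaluator, question_id, points_awarded)
rows = [
    ("Eval1", "3.1", 10), ("Eval1", "3.2", 3),
    ("Eval2", "3.1", 10), ("Eval2", "3.2", 0),
]

by_evaluator = defaultdict(int)
by_question = defaultdict(int)
for evaluator, question, points in rows:
    by_evaluator[evaluator] += points
    by_question[question] += points

vendor_total = sum(points for _, _, points in rows)  # the score used for award
weakest = min(by_question, key=by_question.get)      # a post-award training focus
```

The `vendor_total` is what feeds the evaluation factor score; the per-question breakdown is the extra analysis that helps target user training and support after award.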
Finalize and Integrate with RFP Package
At this point in the SCD process, the acquisition project team should have:
- Selected not more than seven stakeholders representing the end users of the system to be acquired to serve as SCD evaluators.
- Developed and reviewed a set of scenarios, complete with verified test data (including personas) and expected results, to test.
- Developed and verified the combination SCD script/scoresheet.
- Packaged the test data and selected a mechanism (such as a file sharing service) to make the data available to the bidders, and;
- Loaded the test data onto the file sharing service.
There are a few more items that need wrapping-up before adding the SCD materials to the RFP package:
- Integrating the SCD into proposal scoring, and;
- Adding a section to the RFP describing the SCD.
I’ll describe these remaining items in the next sections.
Integrating the SCD into Proposal Scoring
Government acquisition regulations require that agencies disclose the evaluation factors and criteria in the RFP. Your organization may have similar requirements (even if it isn’t a US federal agency). For US government procurements, the evaluation factors are usually split between technical and cost factors. If cost is more important than technical, then more weight is given to the cost factor, and vice versa.
There are usually several technical evaluation factors in government procurements that, when combined, yield a composite technical score. The technical evaluation factors I am most familiar with include:
- Technical approach the bidder will use to accomplish the work described in the RFP;
- Corporate experience working on other projects of a similar size and scope, and;
- Proposed staff (project manager, for example) experience working on other projects of similar size and scope.
There may be other technical evaluation factors beyond those listed here that your organization requires in its RFP package. The point here is that a technical evaluation factor for the SCD must be added to the RFP.
For example, let’s say that bidder technical capability is important, but not quite as important as cost. To show this bias toward cost, the cost evaluation factor is weighted at 60% and the technical factor at 40%. Converting this breakout into points, we set a maximum point score to 200 where a bidder can earn up to 120 points (60% of the total) for cost and 80 points (40%) for technical. Table 5 lists the cost and technical evaluation factors and the total points for each.
Note the breakdown of the 80 technical points into “sub-evaluation” factors. In the example in Table 5, technical approach carries the most weight among the technical evaluation factors, followed by the equally weighted corporate experience and SCD factors, and, finally, staff experience.
According to this breakout, a perfect SCD score earns 20 points. To convert the SCD score into the evaluation factor score, calculate the maximum possible score from the SCD script/scoresheet, multiply that number by the number of SCD evaluators, and then divide 20 by the result to get a score conversion factor.
For example, if the maximum possible SCD score is 980 points and there are five evaluators, the score conversion factor is 20/(980 X 5) = 0.00408. A vendor scoring 3,350 total points (adding the points awarded by all five SCD evaluators together) on their SCD would receive 3,350 X 0.00408 = 13.7 points for their software capabilities demonstration score.
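The conversion arithmetic above can be sketched in a few lines of code. This is an illustrative sketch using the article's example figures; the function names are mine, not part of any real procurement system.

```python
def scd_conversion_factor(max_scd_score: float, num_evaluators: int,
                          factor_points: float = 20.0) -> float:
    """Points available for the SCD evaluation factor divided by the
    maximum combined score all evaluators could award."""
    return factor_points / (max_scd_score * num_evaluators)

def scd_points(total_vendor_score: float, conversion: float) -> float:
    """Evaluation factor points a vendor earns for its demonstration."""
    return total_vendor_score * conversion

# Example figures from the article: 980-point scoresheet, five evaluators,
# vendor awarded 3,350 total points across all evaluators.
factor = scd_conversion_factor(980, 5)
points = scd_points(3350, factor)
print(round(factor, 5), round(points, 1))  # 0.00408 13.7
```

Note that the conversion factor depends only on the scoresheet and the size of the evaluation team, so it is fixed before any demonstration takes place and applies identically to every vendor.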
Adding an SCD Description to the RFP
In addition to describing the new SCD technical evaluation factor, also include a description of how the SCD will be administered. You can view an example description here. In the next section, I’ll go into more detail about actually conducting vendor demonstrations and elaborate on the example description.
Conduct Vendor Demonstrations
The SCD description discussed immediately above notes “only vendors submitting a complete proposal package will be invited to demonstrate their proposed solution”. Combined with the corporate experience evaluation factor, this should prevent bidders that do not have an operational product from participating in the SCD. Sometimes this isn’t enough.
In an earlier procurement I worked on, the RFP specified that the vendor’s offering be a native Microsoft Windows application (this was during the era of client-server computing, before multi-tier web-based applications). One vendor took the specification to mean that its system, an old-fashioned “green screen” application, met the requirement by running in a Microsoft Windows terminal emulator. Although the product wasn’t truly a native Windows application, the vendor demonstration proceeded anyway. This vendor received a low SCD score and was not selected, because it also did not score very high on the other technical and cost evaluation factors.
Here are a few points about conducting the SCD Demonstrations:
- The description says that the demonstrations will take place at company headquarters. This is meant as a placeholder. Demonstrations can be held at each vendor’s site or at some other venue (such as a hotel meeting room or other location) that serves your organization’s purpose.
- Face-to-face demonstrations are, by far, the best way to go. But if funding is an issue, consider using web meetings to remotely connect vendors and evaluators.
- The description suggests two eight-hour days for each vendor demonstration, which is about right for a complex system. For smaller, less complex systems, consider limiting the demonstration to one day.
- Vendor personnel (usually sales engineers) demonstrate their offering, following the SCD script, while the evaluators watch and keep score.
Each vendor should provide a demonstration system. In past demonstrations, I’ve seen vendors bring in laptops running their software. However, with the prevalence of internet connectivity and cloud computing today, this is no longer necessary.
Suggestions for Managing a Vendor Demonstration
Here are some suggested guidelines for managing vendor demonstrations:
- The SCD evaluation team should strive to maintain a “level playing field” by treating all vendors equally.
- Vendors have a fixed amount of time to complete the script and do not receive credit for areas missed. If a vendor completes the script before the allotted time for their demonstration, they may (at their option) revisit earlier scenarios to clarify any misunderstandings, describe functionality offered beyond the requirements, or end the demonstration session early.
- Consider starting off the demonstration with a short question-and-answer session about the mechanics of the SCD. The example description I provided includes a 30-minute session before the vendor demonstration actually begins, which I’ve found works well.
- Vendors must actually demonstrate the functionality specified in the SCD script/scoresheet. It is entirely possible that a vendor successfully demonstrates a specified feature but generates an incorrect or unexpected result because of a bug in the system. Unless the evaluation question specifically calls for an expected result, evaluators should not hold the erroneous result against the vendor, but make note of it.
- The procurement specialist and project management personnel should be present during a vendor demonstration to help guide the process. Other individuals (such as the project sponsor) may be invited to sit in during a demonstration, but should be prohibited from asking questions of the vendor or behaving in a way that may influence evaluator scores.
- Allow time between each vendor demonstration for the evaluators to meet and discuss their impressions of the vendor’s performance and adjust scores accordingly. Observers do not attend these sessions.
Evaluate Vendor Demonstrations
Shortly after the last vendor demonstration, SCD evaluators should discuss their impressions of each vendor’s performance and adjust their scoresheets accordingly. This should be a facilitated session involving the procurement specialist.
As the evaluators finalize the score for each vendor, the procurement specialist (with the assistance of the acquisition project team members) reviews the evaluator SCD script/scoresheets and calculates the total vendor SCD score. This process repeats until all vendor scores are calculated.
The procurement specialist reviews the total SCD scores with the SCD evaluators, ranking the scores from highest to lowest. Evaluators discuss the ranking and come to consensus that it represents, from their point of view, the most to least desirable system from a functional/usability perspective. Results of the SCD are finalized by the procurement specialist and provided to the technical evaluation team as input to their deliberations.
Between this article and the How to Build a Business Case article, I’ve described a process for justifying and selecting a new off the shelf business system. This process worked well for about a half-dozen procurements where I was either the technical project manager or a Project Management Office (PMO) consultant on a project team.
At one point, the process became so repeatable and predictable that I could pretty much guarantee the entire business case and off-the-shelf acquisition process would take between one year and 18 months from start to contract award.