
Introduction

When the 17 disciples of Agile published the 12 principles of the Agile Manifesto in February 2001, they didn’t include a principle for customer service. Though customer satisfaction is given the highest priority (it’s listed first), this point is couched through the lens of “early and continuous delivery of valuable software.” While that is an admirable goal, it doesn’t do the users much good if they can’t figure out how the complex system your team just delivered works and they don’t know where to go when they need help.

What is Customer Service?

By customer service, I mean the processes used to deliver training, support, and documentation to the users of the new system.

Ideally, customer service processes should be in place before the new system goes live. Otherwise, your users could end up “spinning their wheels” trying to get up to speed as new features are delivered on a periodic (perhaps weekly, for Agile) basis. It’s also likely that just as the users get used to the features delivered in the last release, your developers will unload a new feature set on the semi-suspecting users. Depending upon how forgiving your users are, you might get away with deficient training and user support for a while before exasperation sets in and your users throw their hands up.

Differences Between Build and Buy

If your organization is buying an off the shelf system rather than building a custom solution, then you may be able to save time and cost by using the software vendor’s already established customer service processes and products (provided you used this as an evaluation criterion). The more your implementation of the off the shelf solution veers away from the vendor’s core configuration, the more you will have to “tweak” the vendor’s user documentation and training to match your configuration. If you’ve followed the process I described in the Vendor “Shoot-out” article, you should have a very good idea of the extent of configuration (and possible customization) needed for making the vendor’s solution meet your acceptance requirements.

However, if your acquisition decision is to build a custom system or customize an off the shelf solution (probably the worst option), then it’s all on you. You’re responsible for developing the customer service processes and for making sure they’re up and running before the new system goes live. And, unless you’ve read my article How to Build a Business Case, you probably wouldn’t have included the cost of developing customer service processes and documentation in your cost estimates. Presumably, you would have used the pilot testing period (see Pilot Testing a New Business System) to “shake down” customer service processes (including piloting user training) along with the new system and made improvements in customer service before going live.

The approach I’ve just described for synchronizing customer service process development with system development works well when there’s an “Initial Operational Capability” (IOC) date when the new system goes live. But it also works for Agile/DevOps if we substitute IOC with the date of the sprint delivering the Minimum Viable Product (MVP). At that point, there will be enough features delivered to consider the system in production (operational). Whether you decide to first pilot test the now operational MVP or just push it into production without a pilot depends upon the level of confidence you have in your DevOps process.

Customer Service Complexity

The extent of the customer service processes needed for supporting the new system should be assessed based on the complexity of the solution you plan on delivering. This should be done very early, even before making an acquisition decision.

Perhaps the problem your system is trying to fix isn’t very complex. For example, take Gas Buddy, an application (or “app”) running on iPhones, iPads, Apple Watch, and Android devices used to compare fuel prices between service stations within a geographic area. While the “devil in the details” revolves around collecting and mapping fuel prices, the Gas Buddy app, from a user perspective, is intuitive, and it doesn’t require much training to become proficient. Nor does the Gas Buddy app need a user manual, as its user experience is lean and simple.

However, Gas Buddy has several marketing programs designed for generating user involvement (it’s the users who contribute fuel prices by locations) and other revenue generating programs. Gas Buddy wants to educate users about these programs and has developed a customer service portal where users can get more information about them. Through this portal, users can request assistance about using the app and get information about and assistance with the various programs that would be harder (and perhaps more expensive) to deliver through the app. (I’ll describe this portal a bit later.)

This last point is something you should watch out for with your system. There may be information and processes beyond the functional scope of your system that should be covered by the customer service processes. In the Gas Buddy case, this is the information about the marketing programs I mentioned earlier. A good example of how customer service processes can extend beyond the functional scope of a software system is the U.S. Environmental Protection Agency’s (USEPA) Safe Drinking Water Information System (SDWIS – you can read more about SDWIS in my article Some Inconvenient Truths about SDWIS).

Among the many things SDWIS does is determine if a public drinking water system is out of compliance with the primary national drinking water rules that apply to it. (You can read about drinking water rules on USEPA’s website here.) These drinking water rules can be very complex and difficult to interpret. Over time, USEPA developed guidance in the form of Data Entry Instructions and detailed compliance decision flowcharts for all of the federal drinking water rules to explain how the rules work. (Good luck finding copies of them on EPA’s public web site – you may have to make a Freedom of Information Act request to see them.)

It would make sense to share the decision flowcharts and the Data Entry Instructions with the SDWIS users through the customer service processes so they’re at their fingertips when needed. Fortunately, these documents don’t change that much, so many users maintain personal copies (in electronic and paper form). However, consider what happens when a new user comes on board or one of these documents is updated – how do the users access this documentation? Do they need to build up their own document libraries? How do they know they have the latest versions?

Measuring Customer Service Performance

A big difference between an app like Gas Buddy, which any member of the public can use, and your business system is that users of the latter will likely insist on some minimum level of service. With Gas Buddy, if the system is down, the user won’t be able to identify the closest service station with the lowest prices. That could cost them a couple of bucks, but in the end, it’s a minor inconvenience. However, a service disruption may not be as tolerable to the users of your system, especially if it affects someone’s livelihood or health.

Determining your system’s availability target is beyond my discussion here, but you would want to measure how quickly and accurately your customer service processes are responding to user help requests – for example, the number of help requests received and the number of resolved and unresolved calls. You may also want to collect information gathered through customer satisfaction surveys or questionnaires about the quality of the customer support service received. (Often, results of system user satisfaction surveys are included in system performance measures.)

You certainly would want to collect data about how quickly the customer service processes are delivering help to the users. For example, you could collect information on the average time it took for a user to receive a response to a request for help. In this manner, you’ll be able to tell if your customer service processes are overworked (long response times) or not. Combined with the customer service satisfaction surveys, you can begin to get a good idea of the efficiency and effectiveness of your customer service processes. You can also use this information to help with budgeting for maintaining a sustainable customer service operation.
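To make these measurements concrete, here’s a minimal sketch in Python of computing two of the metrics just described – average time to first response and resolution rate – from a couple of made-up ticket records. The field names and data are illustrative, not pulled from any particular CRMS:

```python
from datetime import datetime
from statistics import mean

# Made-up ticket records; a real CRMS would supply these via export or API.
tickets = [
    {"submitted": datetime(2021, 3, 1, 9, 0),
     "responded": datetime(2021, 3, 1, 10, 30),
     "resolved": datetime(2021, 3, 2, 16, 0)},
    {"submitted": datetime(2021, 3, 3, 14, 0),
     "responded": datetime(2021, 3, 4, 9, 15),
     "resolved": None},  # still open
]

# Average time to first response, in hours
avg_response_hours = mean(
    (t["responded"] - t["submitted"]).total_seconds() / 3600
    for t in tickets
)

# Resolution rate: the share of tickets closed out
resolution_rate = sum(1 for t in tickets if t["resolved"]) / len(tickets)

print(f"Average first response: {avg_response_hours:.1f} hours")
print(f"Resolution rate: {resolution_rate:.0%}")
```

A real operation would pull these records from the CRMS and segment them by month, request type, and analyst before drawing any budgeting conclusions.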

What you may need is a customer relationship management system, or CRMS, for managing your customer service processes. However, if your organization is already using a CRMS supporting another system or business line, you may be able to jumpstart the process by using the same CRMS tailored for your system. Like many software offerings today, access to a CRMS can be rented on a subscription basis from a service provider in addition to being installed on-premises. Need a new help desk for your new system? It’s possible to set one up in a matter of minutes. In a few hours, after making some configuration tweaks to the CRMS, you can have a customer service portal up and running and ready for use.

Customer Service Example – Gas Buddy

At one time I owned a Volkswagen Jetta diesel Sportwagen (which I sold back to VW after EPA discovered problems with the pollution controls on the engine). As many gas stations did not sell diesel fuel and diesel fuel prices varied widely between gas stations, I installed the “Gas Buddy” app on my cell phone to help me locate the closest and least expensive purveyor of diesel fuel.

Like many mobile applications, Gas Buddy has a socialization aspect where volunteers (mainly users of the app like me) report the price they paid and the location where they bought diesel fuel (or gasoline) in the app. In this manner, Gas Buddy constantly updates the latest reported prices for the various grades of gasoline and diesel fuel sold at different service stations. Using the GPS features on my cell phone, Gas Buddy can generate a map showing me the latest reported fuel prices relative to my position.

There is a lot going on under the Gas Buddy “hood,” but it is a straightforward and easy to use app, to the point where users can train themselves. Like a lot of mobile apps, there is no context sensitive online help. Instead, if you experience an issue with the Gas Buddy app, you have to connect to the Gas Buddy Help Center from the Gas Buddy Web site or from the app. The screenshot below shows the Gas Buddy Help Center as it appears on my laptop computer.

Gas Buddy Help Center Home Page

From the Home Page, you can access various articles about other services offered by Gas Buddy. For example, I clicked on the “Pay with Gas Buddy Membership” article link, and the Help Center served up the article shown below:

Viewing the Article Pay with Gas Buddy Premium Selected from the Gas Buddy Help Center Home Page

As the Help Center leverages responsive design, the look and feel is tailored to the device I’m using. For example, when I access the Gas Buddy Help Center from my cell phone, it looks like this:

Gas Buddy Help Center on a Cell Phone Web Browser

Unlike on my laptop, I have to scroll to see the “Promoted Articles” due to the limitations of cell phone screen real estate, but the search tool and automated assistant (the floating greenish circle with the question mark) are immediately available. If I can’t find what I’m looking for, I can use the automated assistant to help me or I can submit a request, which opens a help request form. Here’s the request entry form as seen from my laptop:

Request Form Available from the Gas Buddy Help Center

As you can see, the form lets me leave my email address, enter more information about my request (or issue), and even attach files (like screen shots). When I’ve completed the form, I click on the “submit” button (not shown in the screen shot above as I would have to scroll down to see it), and a new “ticket” is created, along with a notification to a customer service representative of my request. As my issue is resolved, I will receive email messages generated through the Help Center.

It would be interesting to know how many customer service reps Gas Buddy has on staff (or contracted out), but I suspect it isn’t many. The Help Center is designed to make it easy for users to find the information they need to the point where submitting a request is the exception rather than the rule. As novel requests come in, Gas Buddy can “tweak” the Help Center content to accommodate new information to share with its customers.

A Brief History of Customer Service Systems

Gas Buddy isn’t alone in how it supports its users. Many retail vendors are also leveraging web-based, self-service customer relationship management systems (CRMS) like Gas Buddy’s. If your organization isn’t following this well-established trend, then you should find out why, because it could be missing out.

CRMS have been around for quite some time and started out as “help desk” ticket tracking systems. In the late 1980s, I was working as a program manager for a contractor supporting an organization responsible for managing the finances for ship construction and conversion within the US Naval Sea Systems Command (NAVSEA). My team’s job was to provide help desk support for what were then called office automation applications (spreadsheets, word processing, etc.) to 1,500-plus users. My team also gave training classes on how to use the office automation applications and several NAVSEA proprietary applications. One of my contractual deliverables was to report the number of help desk calls received per month and the rate of call resolution.

We started by manually compiling help desk calls using a written log supported by a spreadsheet, and that worked for a while. But as our customer base ramped up to 1,500 users, this proved impractical, especially as the technology of the time didn’t allow simultaneous multi-user sharing of the spreadsheet. We could have gone out and acquired a commercial off the shelf “ticketing” system, but our budget precluded that option. Instead, my help desk team developed a simple database application that logged and tracked help requests (or tickets). This made reporting of the help desk statistics easier, but it also provided some unexpected benefits.

For example, the Help Desk team published a monthly newsletter to publicize upcoming training classes. Included in the newsletter were the top five types of help requests logged during the previous month and a description of the resolution for each request type. In this manner, the Help Desk was proactively addressing the common issues users were experiencing. Users were encouraged to check the list in the newsletter first before calling the help desk. We found that, after a few months, the number of calls for the publicized issues trended downwards. Users were learning to help themselves before calling our chronically understaffed help desk.

This was before the world wide web. We had email, but only the Help Desk crew could access the help desk system (even our email system, developed by a Navy lab in Washington state, was home grown). Aside from the newsletter, we didn’t have a “self-service” customer service process where users could access a “knowledge base” of known issues. Providing user support was labor intensive, as each encounter had to be orchestrated by one of my Help Desk Analysts.

Fast forward to 2010. The “web” is now ubiquitous and “the cloud” is beginning to come into its own. Things are starting to look a lot like the Gas Buddy example above. Most organizations still maintained their CRMS software on their own servers, but that was changing rapidly as vendors moved to cloud platforms and subscription based services. Soon enough, you could “rent” a CRMS from the vendor and not have to worry about buying servers and hiring (or contracting) operations specialists. For a relatively inexpensive monthly fee, you could access the vendor’s CRMS offering, configured to meet your organization’s specific needs, through the web browser on your device, be it computer, cell phone, or tablet.

Modern CRMS Features

Recently, many of the more elaborate CRMS have added social networking-like features to their offerings. For example, users can create profiles, participate in discussion boards, and rate information found in the CRMS. Furthermore, CRMS vendors have added features like bots, where users can chat “real time” with an agent that is not a person but the CRMS giving programmed responses to questions entered by the user. Additional features include integrated logins and user provisioning with Microsoft Office 365, Google, and iCloud, among others. There are also features for interfacing with social networking platforms like Facebook, Twitter, and such (whether it makes sense to use social networking interfaces is up to you). Some other features available in a modern CRMS include:

  • Content Management, Authoring and Publishing
  • Service Level Agreement (SLA) Tracking
  • ReST Web Services for Sharing CRMS Content

I’ll expand upon these last three next.

Content Management, Authoring, and Publishing

Earlier, I mentioned adding user manuals, job aids, and related information into the CRMS and making it available online to the users. Through a search tool (which just about any self-respecting CRMS offering has), users can query the online user documentation to find an answer to their question or the resolution to their issue without having to bug the help desk. If the online documentation doesn’t help, the user can either submit a ticket by filling it out online or call a number to talk to a help desk analyst.

Some of the higher end CRMS vendors have added content management, authoring, and publishing workflow features to their offerings for creating a user support knowledge base. Similar to the online WordPress editor I used to compose this article, documentation specialists can compose articles and submit them to a workflow for review and publishing into the knowledge base. The advanced search features available in the CRMS allow users to quickly and easily find content that could help with resolving their issue.

Service Level Agreement (SLA) Tracking

Another aspect of modern CRMS offerings is features for specifying performance metrics for user assistance service level agreements (SLAs). As the system uses timestamps to track user requests through the ticketing process, the CRMS can use this information to generate support statistics. Some help desk offerings include dashboards showing trends in how well the user support operation is meeting its SLA commitments. In addition, many of the CRMS offerings I have seen include “canned” help desk statistical reports and features for designing reports specific to your organization’s needs using no-code approaches.
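As a rough illustration of what SLA tracking does under the hood, here’s a short Python sketch that flags tickets against assumed SLA targets (eight hours to first response, three days to resolution – numbers I’ve invented for the example):

```python
from datetime import datetime, timedelta

# Assumed SLA targets; substitute whatever your agreements actually specify.
FIRST_RESPONSE_TARGET = timedelta(hours=8)
RESOLUTION_TARGET = timedelta(days=3)

def sla_breaches(submitted, responded, resolved, now):
    """Return which (assumed) SLA targets a ticket has missed.

    Note: this simple version doesn't stop the clock while a ticket
    waits on a third party, which a production CRMS would do.
    """
    breaches = []
    if (responded or now) - submitted > FIRST_RESPONSE_TARGET:
        breaches.append("first response")
    if (resolved or now) - submitted > RESOLUTION_TARGET:
        breaches.append("resolution")
    return breaches

# Example: an unresolved ticket submitted four days ago breaches both targets.
now = datetime(2021, 3, 5, 9, 0)
print(sla_breaches(datetime(2021, 3, 1, 9, 0), None, None, now))
```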

ReST Web Services for Sharing CRMS Content

One of the more intriguing features found in some CRMS offerings is a ReST web service for fetching content. This presents another way of sharing content maintained in the CRMS and displaying it in your app within context. For example, if one of the requirements of a new application you’re building is to provide context sensitive online help, what better way to do that than to leverage the ReST service? Furthermore, as the content in the CRMS is updated, it is instantly available to your system’s online help feature through the ReST web service call.
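Here’s a minimal sketch of what consuming such a service might look like from Python. The base URL, path, and JSON field names are hypothetical stand-ins, not any specific vendor’s API:

```python
import requests

BASE_URL = "https://support.example.com/api/v2"  # hypothetical CRMS endpoint

def fetch_article(article_id: int) -> str:
    """Fetch one knowledge base article's body over the ReST service."""
    resp = requests.get(f"{BASE_URL}/help_center/articles/{article_id}",
                        timeout=10)
    resp.raise_for_status()
    return resp.json()["body"]  # field name assumed for illustration

# The app can render this content in its own help window; because it is
# fetched at click time, updated CRMS content shows up immediately.
print(fetch_article(42)[:200])
```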

A Real World Example

In this example, I’ll describe how an application development effort using a modified form of SCRUM leveraged a cloud CRMS to incrementally build customer service processes, a user support knowledge base and context sensitive help content.

Project Background

This project was actually quite complex from a technical and a stakeholder management perspective. Technically, the system would implement complex business rules developed over decades. Because of the complexity of the business rules and the ambiguity in those rules, the project team first had to identify and then eliminate the ambiguity by involving the rule managers. This led to a bifurcated development process – one thread for specifying the business rules (less the ambiguities) and the second for iterative development of the application and integration of the business rules.

Stakeholder management was complicated by the way the regulatory program the new system would support was organized. Though the program was managed at the national level by a federal government agency, implementation of it was delegated to the states. The vast majority of users were state and local government employees and their contractors and not employees of the federal agency paying for system development and operations and maintenance. Maintaining effective working relationships with the state users was of paramount importance to the sponsoring agency. Of course, the state users knew this and used it to their advantage many times.

A “synergistic” result of the stakeholder management and technical complexity was that the new system would have to be incredibly flexible to meet the needs of every state that might use it (as they were not required to use it). The system would have to accommodate local variations of the complex business rules mentioned earlier as well as localized business processes. Perhaps rather than investing time, energy, and dollars in replacing the obsolete legacy systems (because several states still used one-off legacy systems), it might have made more sense to rationalize the business processes across all of the states first.

CRMS Selection and Configuration

The project team was able to get a jump on selecting a CRMS by learning from another program at the federal agency that had been using one for several years. As it turned out, the CRMS was available on the GSA schedule, so it took hardly any time for the project team to acquire several licenses. Access to the new CRMS was entirely through the user’s web browser, so project team members could begin working with the software as soon as the purchase order for the subscription went through.

Before doing any configuration work on the CRMS, it was necessary to define a “concept of operations” for customer support. After reviewing the customer support processes for several systems of similar size and scope and how those customer support functions were leveraging the CRMS, an operational concept emerged that differed remarkably from the legacy customer support process.

Similar to the ticket tracking system I described earlier for the Naval Sea Systems Command, the legacy CRMS was a ticket tracking system built on top of Microsoft Access. Only the contractor help desk analysts had access to this application (system seems like too generous a term). Every year, a copy of the database was received from the contractor and distributed to users on request. This was the only method of reviewing help desk issues and their resolutions. Clearly, the team could, leveraging the new CRMS, make several improvements to this mode of operation.

First, the team decided to create a help center web portal for the new system. Similar to the Gas Buddy help portal discussed above, this new help center would be a place where users could submit help requests (in the form of tickets), read articles describing tips and tricks for using the system and get the latest news about the system. More importantly, the help center would include a searchable knowledge base consisting of a user manual, associated documentation on the program the system supported (business rule documentation, for example), and job aids.

After reviewing the help centers for several other applications, the team discovered that some help centers were uploading portable document format (PDF) files as attachments to articles. Unfortunately, handling user documentation as attachments meant the CRMS search tool could not index them (and it didn’t matter what file format the attachments were in). This defeated the purpose of leveraging the search tool for finding information in the knowledge base. As a result, the team decided to fully leverage the “out of the box” CRMS document authoring and publishing features for the online user manual. PDF attachments would be limited to job aids and certain files in native formats (such as spreadsheets).

In a nutshell, the new help center would become the “one stop shop” for everything users needed to know about the new system and its intersection with the regulatory program it supported. If users were having problems using the system, they could go to the search screen and locate information in the online user manual. If users needed information about how one of the business rules worked, they could find it in the help center. If they couldn’t find what they were looking for in the help center, they could submit a request for help, and a help desk analyst would get the information for them or point them to where they could find it. In short, the goal was to make user support as self-service as possible by making it easy to find information in the help center without assistance.

The team also explored using the CRMS ReST web service in the new system as a way to implement context sensitive online help. When the user clicked on a help glyph, the system would use a ReST API call to populate a window displaying relevant content from the online user manual. This showed great promise as it meant the team would have one “version of truth” for context sensitive online help and for the online user manual.

The next step was to configure the CRMS. This consisted of three activities: configuring the help ticket format, the help ticket review process, and the help portal using a prepackaged theme. Earlier in this article, you saw an example of the help request form (which becomes a “ticket”) and help portal for Gas Buddy. What the screenshots didn’t show were the steps involved in processing help requests. These steps follow the status of each help request submitted through the help portal. Here are the steps in the ticket submission and review process and the corresponding help request status:

Process Step | Status
User has submitted a new request for help | New
User has been contacted, issue is in the process of being resolved | Open
Waiting for further information from outside party | Pending
Issue has been resolved | Closed
Help request has been deleted | Deleted

Example Process for Reviewing Help Requests

Just a couple of notes about the process shown in the table above. The Pending status “stops the clock” for the help desk because the issue has been escalated to another party. This “third party” may have to meet different response requirements than the help desk does for answering end user requests. Regarding the Deleted status, requests should rarely be deleted, if at all.
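Although a CRMS typically enforces this workflow through configuration rather than code, a short Python sketch makes the allowed status transitions explicit. The transition map below is my assumption for illustration, not a vendor default:

```python
# Allowed status transitions for the review process in the table above.
# The transition map itself is an assumption; tailor it to your workflow.
TRANSITIONS = {
    "New":     {"Open", "Deleted"},
    "Open":    {"Pending", "Closed"},
    "Pending": {"Open", "Closed"},  # returning to Open restarts the clock
    "Closed":  set(),               # terminal
    "Deleted": set(),               # terminal; should rarely be used
}

def advance(current: str, new: str) -> str:
    """Move a ticket to a new status, rejecting illegal jumps."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move a ticket from {current} to {new}")
    return new

status = advance("New", "Open")      # fine
status = advance(status, "Pending")  # fine; clock stops for the help desk
```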

Lastly, the team developed several screen reports for viewing information about help tickets. This didn’t involve any programming as the CRMS had a point and click interface for building views. This was actually quite a powerful feature and the team planned on creating more (or tweaking existing) views as it gained experience with the CRMS.

Iterative Approach for Developing Customer Service Processes

Earlier, I discussed how there was a two-pronged approach used for developing the new system. The first prong involved developing the rule logic after eliminating ambiguity and the second prong was iterative development, using a modified version of SCRUM, of the new system. There was also a third prong that involved building out the knowledge base in the help center and iteratively refining the customer support processes in parallel with iterative development.

Development, Test, and Production Environments

Before discussing the iterative development environment and how the team integrated help center development into that process, a discussion of the environments used for development, software testing, and production is in order. The diagram below depicts these environments in a simplified manner that is sufficient for the discussion here.

Develop, Test and Production Environments

Note that there are five environments spread across two “clouds.” The federal agency sponsoring and funding the project has separate development, test, and production environments in its cloud. There are strict procedures for moving code from development to test and from test to the production environment. For example, when a new version of the system is ready for user testing, the developers must hand the code off to the agency’s operations staff, where it is scanned for potential cybersecurity vulnerabilities. If the scan detects any vulnerabilities, the developers must remediate them and resubmit the code for another scan. This cycle continues until a “clean scan” is achieved, at which point the code is installed in the test environment where testing can begin. A similar process involving vulnerability scans is in place for moving test code into the production environment. Once the code passes the cybersecurity scans, it’s considered production ready.

The setup I’ve just described works well when software releases are spaced anywhere from six months to a year apart. However, it doesn’t work so well with the shorter release cycles used in SCRUM and other Agile development approaches because the security scans take too much time relative to the release cycle.

As I mentioned earlier, the example project used a modified SCRUM approach consisting of a series of one-month-long Sprints. At the end of each Sprint, following the model described above, the team would release code to the federal agency cloud test environment for scanning and then would remediate any remaining issues. If issues did come up, and if those issues consumed considerable time to remediate (indicative of a different set of potentially serious problems), the entire release schedule could go off-kilter.

Instead, the developers used a “contractor cloud,” as shown in the lower half of the diagram, consisting of separate development and test environments. Development is done in the contractor cloud development environment and not in the government cloud development environment. At the end of the Sprint, the developers would clone the code over to the test environment in the contractor cloud. Once in the test environment, any project stakeholder could participate in user testing of the system.

Because the contractor cloud was totally independent of the federal agency cloud, it didn’t matter if there were vulnerabilities in the code and if that cloud was compromised. Only test data was used and there was a small subset of users (mainly stakeholders assisting the developers with testing) given access to the test environment.

Every quarter, the developers would send the code for scanning and installation into the federal agency test environment. Here, the code would receive the full effect of the cybersecurity scans (and other reviews), ultimately resulting in the code being installed in the federal agency test environment. Unlike the contractor test environment, testing was open to a much larger set of stakeholders. Furthermore, the federal agency test environment allowed testing of data exchanges with other interfacing systems in a controlled environment, which was beyond the scope of the contractor test environment.

Ultimately, when the developers had completed building-out the minimum viable product (MVP), the code would be pushed out to the federal agency production environment. Here, the system would be maintained in an operational state in full production mode.

I should mention that there is also a third cloud – that belonging to the CRMS vendor. The federal agency’s control over this cloud was limited by the CRMS vendor to managing the resources specific to supporting the new system. Unlike the development-test-production environments discussed above, there was a single CRMS production environment. While it was possible to create a localized instance of the CRMS installation, it was deemed an unnecessary complication by the project team. Since the project team wasn’t developing code, but creating content and setting configurations, there was no need for the robust and thorough cybersecurity scanning used with the federal agency test and production environments.

Iterative Development of Customer Support Processes

Earlier, I discussed how the team acquired a CRMS and, through various configuration settings, created a help center web portal, which included these features:

  • Help request form;
  • Help request status tracking configuration;
  • News section for articles discussing the latest project status;
  • Empty knowledge base, which would be a repository for the articles comprising the online user manual;
  • Empty known issues and tips and tricks sections.

The goal of this project “swim lane” was to develop the online user manual and “shake down” the user problem resolution process. I’ll describe the process for each of these separately.

Developing the online user manual

The Sprint teams used on this project were configured differently from the “classic” Sprint teams found in SCRUM. Like SCRUM, the team included a SCRUM Master leading the developers and a Product Owner. In addition, the Sprint team also included one to two Business Analysts, three to four subject matter experts to help the Product Owner with development testing, and an Application Architect to ensure design continuity.

Following SCRUM guidance, the Sprint team selected a set of user stories (prioritized under the supervision of the Product Owner) for implementation during the Sprint. As the Sprint proceeded, the Application Architect would write and publish articles in the help center web portal describing how the implemented user stories functioned. Over the span of multiple Sprints, the article collection would expand. Sometimes, as is typical with Agile methods, some functionality would change and the Application Architect would revise the article to reflect the current system functionality.

Because the articles were screen (or page) centric, the intention was to use the ReST interface to the help center web portal content to display text from the articles as context sensitive online help. This approach would simplify maintenance of user documentation, as the online user manual and the context sensitive help in the application would use the same content.
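Here’s a sketch of how a help glyph might resolve to an article, reusing the hypothetical ReST endpoint from the earlier example. The screen keys and article IDs are invented for illustration:

```python
import requests

BASE_URL = "https://support.example.com/api/v2"  # same hypothetical endpoint

# Invented mapping from application screens to knowledge base article IDs.
SCREEN_TO_ARTICLE = {
    "sample-entry": 1101,
    "compliance-review": 1102,
}

def help_for_screen(screen_key: str) -> str:
    """Fetch the user manual article behind a screen's help glyph."""
    article_id = SCREEN_TO_ARTICLE[screen_key]
    resp = requests.get(f"{BASE_URL}/help_center/articles/{article_id}",
                        timeout=10)
    resp.raise_for_status()
    return resp.json()["body"]  # field name assumed

# Clicking the help glyph on the sample entry screen would display:
content = help_for_screen("sample-entry")
```

Because both the online user manual and this lookup draw from the same articles, revising an article in the help center updates the in-app help at the same time.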

Developing the User Problem Resolution Process

As I alluded to above, at the end of a Sprint, the team would move the code over to the contractor test environment. Here, a larger set of stakeholders would run the new release through its paces. Release notes accessible from the help center described the new features added and provided links to the pertinent online user manual content. Users were encouraged to leverage the help portal to research issues found through their testing. If they were unable to find solutions, they were encouraged to submit a ticket and, in this manner, test the problem reporting and resolution process.

Acting as test “help desk analysts,” the Application Architect, Business Analysts, and Product Owner would review tickets, escalate tickets (as necessary) to developers, and work with end users to resolve their issues. Shortly after configuring the help center web portal, the project team discovered there was a CRMS plug-in for interfacing tickets with the issue tracking system used internally by the Sprint team. This plug-in would be used to create tickets in the Sprint team’s tracking system by pulling data from the tickets created in the help center.
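In spirit, the plug-in did something like the following sketch: pull newly created tickets from the help center and mirror them into the team’s tracker. Both endpoints and all field names here are hypothetical, not the actual plug-in’s API:

```python
import requests

CRMS_URL = "https://support.example.com/api/v2"   # hypothetical help center API
TRACKER_URL = "https://tracker.example.com/api"   # hypothetical issue tracker API

def sync_new_tickets():
    """Mirror new help center tickets into the Sprint team's issue tracker."""
    resp = requests.get(f"{CRMS_URL}/tickets", params={"status": "New"},
                        timeout=10)
    resp.raise_for_status()
    for ticket in resp.json()["tickets"]:  # field names assumed
        requests.post(
            f"{TRACKER_URL}/issues",
            json={
                "title": ticket["subject"],
                "description": ticket["description"],
                "external_id": ticket["id"],  # link back to the CRMS ticket
            },
            timeout=10,
        ).raise_for_status()
```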

Having the help center in a separate cloud helped immensely in post-Sprint testing and in the quarterly testing. Nothing had to be changed; users accessed the same online documentation and ticket processing system used during both types of testing. If anything, the quarterly testing involved four or five times as many testers as the Sprint testing, which gave the project team more experience with using the CRMS to manage tickets and provide self-help user support.

At the end of each testing period (post-Sprint or quarterly), the project team would use the report views created earlier to review the help requests (tickets) submitted by the users during testing. The purpose of this review was to identify potential errors in the new system, review and evaluate enhancements suggested by the testers, and prioritize issues for upcoming Sprints.

Outcome

I wish that I could report that the process the project team followed as described in this article successfully resulted in fully functional customer service processes and an effective self-help portal, but that wasn’t the case with this project. In fact, the project experienced several issues unrelated to the customer service processes that ultimately led to the federal agency cancelling the project.

If the project had successfully concluded with an operational system, it’s doubtful the customer service processes and tools would have been rolled out as intended. Due to the state-federal government dynamics on the project discussed earlier, the Product Owner was able to exert a great deal of influence over the customer service development swim lane of the project. This individual never bought into the CRMS or the notion of an online user manual, and they used their influence to dismantle the help portal. Eventually, the knowledge base was replaced with a document based user manual.

Furthermore, the Product Owner had difficulty using the CRMS to review and analyze help ticket submissions and insisted that everything be exported into MS Excel spreadsheets, which took significant effort on the part of the project team. In effect, the Product Owner was taking the customer service processes back to the state they existed in with the legacy system.

Perhaps what’s more interesting about the customer service situation on the project is that the federal agency allowed someone who did not work for the agency to make decisions that would affect its cost of providing customer service for the new system in the years to come. Had the project been successful, it would have delivered a 21st century system with customer service from the 1980s.

Wrap-up

In this article, I proposed a process for developing, in an Agile manner, customer service processes and customer documentation for a complex system. I believe that, as originally intended, the processes described in this article would have resulted in effective and affordable customer service for the new system. Unfortunately, the project team was unable to follow through because the project was cancelled.

As with many late, over budget, and cancelled software development projects, there are a thousand things that can go wrong or right that can break or make the project. Using Agile development might help eliminate many of these issues, but if that’s all you’re concerned about, then you could be missing a bigger part of the picture – how are you going to support your users once the Agile team completes development? Which begs another question for you to ponder – is development ever really complete?
