Customer satisfaction and experience survey
Objectives of a customer satisfaction and experience survey
An evaluation of the customer experience may be performed with strategic or managerial objectives in mind. Strategic objectives often require an in-depth evaluation (detailed satisfaction, reputation, image, loyalty) and are focused on providing input to strategic work. Managerial objectives, on the other hand, are aimed at assessing the quality of service delivered by a specific team and are often based on a specific, precise action (e.g. a request for information, a control operation, dealing with a claim, etc.).
Setting up a customer satisfaction and experience survey
1/ Preparation. Our extensive experience in setting up or reengineering customer relationship evaluation programs leads us to place great importance on the preparatory phases that lead to the choice of KPIs: to provide useful feedback, KPIs must capture what is essential from the customers' point of view. To be effective, these indicators must constitute genuine levers for the public in question. To be exploitable, they must be understood by those who will be required to implement corrective actions. Our approach therefore aims to secure these strategic choices at the earliest possible opportunity, ensuring that team input and available data are put to good use in recommending new KPIs, and that clear information and instructions reach the teams involved before the evaluation process is too far down the line.
2/ Implementation. This will involve several choices that depend to a large extent on the nature of the objectives (i.e. strategic or managerial) but also on the corporate context. Our work will therefore focus on the method of collection from customers (i.e. immediate vs. delayed feedback), the scope of the questionnaire (i.e. overall review of the relationship vs. specific review of certain acts), the KPIs used (satisfaction, NPS, CES), taking into account the benefits and limitations of each one, and last but not least, the means of delivering the results (frequency, target audience, content, etc.).
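To make the KPI choice concrete, the standard Net Promoter Score computation can be sketched as follows (this is the widely used definition on a 0–10 "likelihood to recommend" scale: promoters score 9–10, detractors 0–6; the function name is illustrative):

```python
def net_promoter_score(scores):
    """Compute NPS from a list of 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count only
    in the denominator. The result ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("at least one response is required")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # → 30.0
```

Note how passives lower the score without counting against it directly, one of the limitations to keep in mind: two teams with the same NPS can have very different detractor rates.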
3/ Follow-up. Monitoring and following up on actions is a key phase: without proper follow-up it is difficult to ensure an effective call to action, and without action of this kind, customer satisfaction and experience are unlikely to improve. In a nutshell, good follow-up helps to keep teams on track and to foster a continuous improvement mindset.
Learning from others' past mistakes...
A certain number of errors should be avoided at all costs:
Error #1: developing the evaluation program without sufficiently detailed prior analysis of customers' needs (which ensures that what we evaluate is relevant to customers), and without consulting the operational audiences whose performance is being measured. This mistake is a recurrent cause of failure in measurement programs of this kind.
Error #2: setting quantified targets for the KPIs, for example deciding that the NPS must reach a given value "X" before even evaluating the feasibility of that target. This can result in two situations, each equally demotivating: a target that is attained as soon as the first measurement is taken, eliminating any incentive to improve, or, on the other hand, a target that is unrealistic, too difficult to attain and totally discouraging.
Error #3: falling into the trap of a dogmatic score fixation on KPIs such as NPS or CES and looking at customer relations only from a numerical point of view. This is usually another effective way of ensuring failure…
Error #4: soliciting input from the same customers too often via pushes (SMS, e-mail, etc.). This generally results in a fall in satisfaction levels and a related fall in survey participation. Experience has shown that customers who are solicited too often tend to detach themselves from the process, and only those who are really dissatisfied take the opportunity to express their discontent, thus introducing a downward bias in the KPIs.
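A common safeguard against this over-solicitation is a minimum delay between survey invitations per customer. A minimal sketch, assuming a contact history is available (the 90-day cooldown and the function name are illustrative, not a prescription):

```python
from datetime import datetime, timedelta

# Illustrative minimum gap between two invitations to the same customer
COOLDOWN = timedelta(days=90)

def eligible_for_survey(last_surveyed, now=None):
    """Return True if the customer may be invited again.

    last_surveyed: datetime of the last invitation, or None if never surveyed.
    """
    now = now or datetime.now()
    return last_surveyed is None or now - last_surveyed >= COOLDOWN

print(eligible_for_survey(None))                                        # → True
print(eligible_for_survey(datetime(2024, 1, 1), datetime(2024, 2, 1)))  # → False
print(eligible_for_survey(datetime(2024, 1, 1), datetime(2024, 6, 1)))  # → True
```

The right cooldown depends on the contact channel and the population surveyed; the point is simply that the rule is enforced systematically rather than left to each campaign.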
These are just some examples of mistakes; many other pitfalls need to be avoided, such as the way in which questions are worded, the structure of the customer sample from one evaluation wave to the next, the interpretation of certain results, etc. Having a real specialist at your side can genuinely help to avoid such negative fallout.