Conducting a Pilot Test

After testing your flows in the simulator, you’re ready for the second step in our recommended testing protocol: the pilot test. A pilot test, or "pilot," is your first trial run: a small-scale version of your larger project, and arguably the most important step in testing your SMS program. Your SMS program is an automated system composed of multiple components (contacts, phones, carriers, channels, and flows), and it represents your project, so it's best to test each component thoroughly.

A pilot allows you to:

Make sure messages are being delivered to each major carrier in your country.

Get a good sense of how long it will take your channel or carrier to deliver and receive messages.

Give your team practice facilitating tests.

Evaluate the clarity of your questions and flow logic from your test contacts’ perspective. Do they make sense to your test contacts?

Make last-minute adjustments (to your carrier, connection method, flow structure or content, etc.).

Determine whether you’re ready to increase scale.

Pilot requirements

A group of 5-10 independent test contacts who represent your target population.

The current version(s) of your flow(s).

One or more pilot facilitators. A facilitator conducts the pre- and post-pilot evaluations as well as the test itself.

Observers. These people watch the test contacts’ overall behavior and follow their responses through the dashboard. See below.

During a pilot, it’s best to be present with your test contacts. If your flows are scheduled using a campaign, or you want to allow your contacts to respond asynchronously (throughout the day or week at their leisure), you can compensate by checking in with them at the start or end of each day. Run the pilot 3-5 days before your usability test so that you have time to resolve any technical issues and make changes to your scenario or materials.

Things to look for:

Do the test contacts understand the objective of the flow/campaign?

Do the test contacts feel comfortable responding to your questions and/or performing your tasks?

Is the wording of your flow(s) clear?

Are your contacts being sorted into the right groups? Could you do a better job of sorting them?

Are you properly categorizing responses?

Are the answer choices compatible with the test contacts’ experiences?

Do any of the items require them to think too long or hard before responding? If so, which ones?

Do any steps produce irritation or confusion?

Which steps are receiving the most “other” responses?

Do the answers collected satisfy your objectives?

Are your flows too long?

According to your test contacts, has anything been overlooked?
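Several of the checks above can be partially automated from your flow results. As a minimal sketch (assuming you’ve exported results as a list of records; the `step` and `category` field names here are hypothetical, not a specific platform’s export format), you can tally which steps collect the most “other” responses:

```python
from collections import Counter

# Hypothetical export of pilot flow results: one record per response,
# with the step name and how the response was categorized.
responses = [
    {"step": "age", "category": "Has Number"},
    {"step": "age", "category": "Other"},
    {"step": "consent", "category": "Yes"},
    {"step": "consent", "category": "Other"},
    {"step": "age", "category": "Other"},
]

# Count "Other" responses per step; the steps at the top of this
# list are the ones most likely confusing your test contacts.
other_counts = Counter(
    r["step"] for r in responses if r["category"] == "Other"
)
for step, count in other_counts.most_common():
    print(f"{step}: {count} 'Other' response(s)")
```

Steps with a high “other” count are good candidates for rewording or for broader answer categories.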

Best practices

Remain neutral. If a participant asks a question, reply, “What’s your best guess?” Don’t lead your test contacts. If a test contact gives up, you’ll need to decide whether to provide a hint or end the test.

Focus your observations. Give productive and unproductive paths equal attention. Observers should record what the test contacts do in as much detail as possible, as well as what they say (in their own words). The more you understand about your contacts’ SMS behavior, the more effective your SMS program will be.

Measure both performance and preference. People’s performance and preferences don’t always match. This is especially true with regard to their mobile phones. Often, contacts will perform poorly even though their subjective ratings are high. Conversely, they may perform well but give your program a poor subjective rating.

Performance metrics include: completion rate, time to completion, errors (“other” responses), opt-outs, etc.

Preference metrics include: test contacts’ self-reported satisfaction and comfort ratings.
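Computing both kinds of metrics side by side makes mismatches between performance and preference easy to spot. A small sketch, assuming hypothetical per-contact pilot records (the field names are illustrative, not a platform export format):

```python
# Hypothetical per-contact pilot records: completion status, minutes
# to completion (None if not completed), count of "Other" responses,
# opt-out flag, and a 1-5 self-reported satisfaction rating.
runs = [
    {"completed": True,  "minutes": 12,   "others": 0, "opted_out": False, "satisfaction": 5},
    {"completed": True,  "minutes": 30,   "others": 3, "opted_out": False, "satisfaction": 4},
    {"completed": False, "minutes": None, "others": 1, "opted_out": True,  "satisfaction": 2},
]

n = len(runs)

# Performance metrics
completion_rate = sum(r["completed"] for r in runs) / n
optout_rate = sum(r["opted_out"] for r in runs) / n
error_rate = sum(r["others"] for r in runs) / n  # "Other" responses per contact
times = [r["minutes"] for r in runs if r["completed"]]
avg_minutes = sum(times) / len(times)

# Preference metric
avg_satisfaction = sum(r["satisfaction"] for r in runs) / n

print(f"Completion rate: {completion_rate:.0%}")
print(f"Opt-out rate: {optout_rate:.0%}")
print(f"'Other' responses per contact: {error_rate:.1f}")
print(f"Average minutes to complete: {avg_minutes:.0f}")
print(f"Average satisfaction (1-5): {avg_satisfaction:.1f}")
```

If satisfaction is high but completion is low (or vice versa), trust the performance numbers and use the preference ratings to decide where to probe in follow-up questions.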

Evaluating a pilot

At the end of a pilot, you should be able to answer the following questions:

Was the test group’s overall reaction positive or negative? The test group’s feedback can help confirm whether your program is a good fit for your population and whether minor changes to the program are appropriate or necessary.

Are you allocating your time and resources properly? The pilot will help you determine whether you need to spend more time or resources on particular aspects of your program. For example, you might learn that changes to your method of engagement, flow length, or flow timing are necessary.

Does your evaluation strategy need improvement? Look at this as an opportunity to test your evaluation method as well. Are there metrics you’d like to have that you aren’t collecting? The pilot gives your evaluation and implementation teams a chance to work together before increasing scale, and to troubleshoot any logistical issues that might arise with the distribution and collection of evaluation data.

Are you ready to increase scale? A pilot can shed light on unforeseen challenges that might arise during a larger-scale implementation, and ensure your team is prepared to handle issues that might accompany an increase in scale. This question is largely dependent on the answers to the others.

Updated on: 30/11/2021
