A lot has been written about good versus bad sales calls: what works, how to structure them, and so on. Ideally, we would like to A/B test our calls, just as we do with our emails, ads and other marketing material. Unfortunately, we can’t, so we go with our gut feeling, or whatever comes naturally, when it comes to a sales call.
The problem with gut feeling, though, is that we can’t run experiments on it. And sales is all about running experiments to get better.
Here is how to run your own sales experiment, along with the outcomes of an experiment that one of Strings.ai’s customers ran.
Step 1: What is your hypothesis?
You might have wondered what would happen if you said ABC instead of XYZ. For example: should I do a needs analysis or introduce my product on the first call? Is it important to talk about my company’s credentials, or should I stick to product features? Irrespective of the amount of advice you read, every sale is different, so the only way to get an answer for your specific sale is to run an experiment.
Step 2: Define your test in more detail
Be clear and honest about what counts, and what doesn’t count, as your test case. This ensures two things:
- Your sales reps know the expectations clearly, and the analysis can be transparent and inclusive
- Adoption of H1 (the test hypothesis) among sales reps is better and more standardized
Step 3: Use both your hypotheses
It is important to have sales calls under both H1 and H0 (the null hypothesis, i.e. business as usual). If your source of leads, segment, etc. stays the same, you can use your historic calls as the H0 sample. Otherwise, it is best to consciously divide your calls into two groups and try H1 with one group and H0 with the other.
In the ‘trust’ experiment described below, there was a stable source of leads, so we didn’t expect any material difference in leads before and during the experiment. Hence, we used old sales data as the sample for H0.
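If you do need to split live leads into two groups rather than rely on historic calls, a deterministic split keeps the assignment honest across reps and reruns. A minimal sketch in Python; the hashing scheme and lead IDs are illustrative assumptions, not something from the original experiment:

```python
import hashlib

def assign_group(lead_id: str) -> str:
    """Deterministically assign a lead to H0 (control) or H1 (test).

    Hashing the lead ID (instead of using random.choice) means the same
    lead always lands in the same group, no matter who runs the script.
    """
    digest = hashlib.sha256(lead_id.encode("utf-8")).hexdigest()
    return "H1" if int(digest, 16) % 2 == 0 else "H0"

# Hypothetical lead IDs from your CRM export
leads = ["lead-001", "lead-002", "lead-003", "lead-004"]
groups = {lead: assign_group(lead) for lead in leads}
```

Because assignment depends only on the lead ID, reps can look up a lead’s group at call time without any shared state.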
Step 4: Tada…. And the winner is….
Before declaring a winner, decide on the metric that most closely ties to your change. For example, if you are testing something in your introductory call with the customer, measure the number of customers moving to step 2 of your process; if you are making an overarching change to the sales process, look at conversion rates.
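Once you have a metric, deciding the winner is a comparison of two proportions. One standard way to check whether the difference is real rather than noise is a two-proportion z-test; here is a small stdlib-only sketch (the counts below are made-up illustrations, not the customer’s actual data):

```python
from math import erf, sqrt

def two_proportion_z(conv_h0, n_h0, conv_h1, n_h1):
    """Two-proportion z-test: did the H1 conversion rate really move?

    conv_* are counts of converted leads; n_* are group sizes.
    Returns the z statistic and a two-sided p-value.
    """
    p0, p1 = conv_h0 / n_h0, conv_h1 / n_h1
    pooled = (conv_h0 + conv_h1) / (n_h0 + n_h1)
    se = sqrt(pooled * (1 - pooled) * (1 / n_h0 + 1 / n_h1))
    z = (p1 - p0) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: 12/100 control conversions vs 24/100 with H1
z, p = two_proportion_z(conv_h0=12, n_h0=100, conv_h1=24, n_h1=100)
```

A small p-value (conventionally below 0.05) suggests the lift under H1 is unlikely to be chance; with small samples, run the test longer before calling it.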
Our little experiment…
This experiment was run by a small IT services company in India. Outsourced IT services is a very competitive space in which it is hard to create sustainable differentiation. After systematically listening back to a few calls and analyzing the prospects’ questions, it seemed that the underlying question most often left unanswered by sales reps (and frequently not asked directly) was “Why should I trust you?”
Step 1: The Hypothesis
So the company tried this hypothesis:
Hypothesis (H1): Talk about the company’s credentials on the sales call aka build trust
Null Hypothesis (H0): Do the usual i.e. don’t talk about it
Step 2: The test
In our experiment we defined three phrases that the sales reps would use to build trust in the company:
- We have X customers in Y countries
- Some of our top customers are A, B and C
- We are a  people team
A clear definition helped set expectations with sales reps on what exactly would be monitored.
Step 3: What’s the ‘control’?
The IT services company that ran the experiment had a stable source of leads, so, as described in Step 3, its historic sales data served as the sample for H0.
In our experiment, we realized that trust was a checkbox that needed to be ticked at some point during the sales process, and the sooner the better. Call analytics was used to track and coach all sales reps to talk about the company’s credibility and build trust with the prospect. The results were mind-boggling: the number of deals won in this period doubled, as more prospects crossed the finish line to become paying customers.
So what are you waiting for? Think of your first experiment and get rolling.
Making it work for you…
Unlike A/B testing in marketing, you can’t achieve 100% coverage in your sales experiments, i.e. you won’t be able to apply H1 on every call. In our experiment, the sales team got to ~40%. The question is: how do you track which calls actually tried the new approach (H1)? How do you leverage technology to run these tests? Here are a couple of options:
- You could add a field to your CRM to record whether the sales rep brought up the new discussion point or script element. Your reps would need to update this field manually after each call. You could then correlate this field with your pre-defined metric to see whether the change moves the needle.
- You could use an integrated solution like CallHippo+Strings.ai, which lets you quickly run A/B tests and build new hypotheses from historical trends in your calls.
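The CRM-field option above boils down to a simple grouped comparison. A minimal sketch, assuming your CRM export gives one row per call with a boolean for the new talking point and a boolean for your success metric (the field names and sample rows are hypothetical):

```python
# Each row: (used_h1_script, moved_to_step_2) — hypothetical booleans;
# map them to whatever custom fields your CRM actually stores.
calls = [
    (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False),
]

def conversion_rate(rows, used_h1):
    """Share of calls in one group (H1 or H0) that hit the success metric."""
    group = [moved for used, moved in rows if used == used_h1]
    return sum(group) / len(group) if group else 0.0

h1_rate = conversion_rate(calls, used_h1=True)   # test group
h0_rate = conversion_rate(calls, used_h1=False)  # control group
```

With rates for both groups in hand, you can feed the underlying counts into a significance test before declaring a winner; a handful of calls per group, as in this toy data, is far too few to conclude anything.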