Joey Bridges
Group Media Director
Every industry faces its own challenges when it comes to marketing, and the financial sector is no exception. Banks compete aggressively for every new customer, and switching to a new financial institution can feel like a hurdle for prospects. Marketing in this space can seem overwhelming. Fortunately, at Mindgruve, we enjoy a good challenge.
We also believe in continuous learning. To remain experts in paid search, we need to adapt our processes and live in a world of test-and-iterate. In this blog, we’ll outline how we tested a new strategy and show a 90-day snapshot of its performance. We’ve changed the names of certain conversion events to keep our client data anonymous.
A different approach to Microsoft Ads
Our reps at Microsoft often recommend combining branded and generic (non-branded) campaigns. In the past, we’ve tended to keep those campaigns separate, mostly because we were concerned that the bid strategy would skew toward branded keywords at the expense of prospecting for new, high-value generic traffic. But for the campaign we cover below, we decided to test combining them into a single bid strategy. A few factors swayed our decision:
1. Statistically significant data.
By this point, we had acquired enough data from our ongoing Microsoft campaigns to determine that this new strategy amounted to an untapped opportunity for us.
2. Low risk.
Google was going to deliver the majority of the volume that we needed for the campaign, so the risk was minimal and the upside was high.
3. Niche product.
This was the clincher: Since this was a specialized product, the likelihood that brand-specific traffic would overwhelm the campaign was also minimal.
Testing a combined bid strategy in Microsoft Ads
One of our financial clients has a specialized product that we were advertising to a niche audience, so we carved out a small portion of the budget for Microsoft Ads. (We usually keep 5–10% of our spend in Microsoft Ads, based on how Microsoft typically performs relative to Google.) We also set up the funnel so we could focus on two user decisions:
Step 1
The user had to submit a form with information our client needed. We considered Step 1 to be the first major milestone in the campaign.
Step 2
The final step of the sales process — information that the user had to input and that our client would use to complete the transaction. Step 2 was the conversion.
Then we analyzed the results of user behavior according to two separate methods:
- Performance by keyword type (brand vs. generic)
- Performance by match type (phrase vs. exact)
Let’s go over each one.
Performance by keyword type (brand vs. generic)
Our first concern — overindexing toward branded keywords — was put to rest when we looked at how the engine allocated spend. As you can see from the table below, spend was not weighted toward the brand. When we looked at impression share lost to budget, we found instances where the branded keywords lost out. In other words, the combined strategy balanced spend across both keyword types in order to maximize the return.
We anticipated that the cost per lead would be higher for generic terms. What we didn’t anticipate was that generic keywords would outperform branded terms at Step 1 in the funnel.
| Keyword Type | Cost | Impressions | Clicks | CTR | Leads | CPL | Step 1 | Cost Per Step 1 | Step 2 | Cost Per Step 2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Brand | $2,348 | 15,645 | 1,612 | 10.30% | 6 | $391 | 12 | $196 | 5 | $470 |
| Generic | $23,151 | 112,769 | 6,527 | 5.79% | 18 | $1,286 | 147 | $157 | 14 | $1,654 |
| Total | $25,499 | 128,414 | 8,139 | 6.34% | 24 | $1,062 | 159 | $160 | 19 | $1,342 |
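If you want to sanity-check the derived columns, the math is simple: each per-action cost is just spend divided by the number of actions. Here’s a quick Python sketch (not part of our original workflow, just an illustration) using the Brand and Generic figures from the table above:

```python
# Per-action cost = spend / number of actions.
# The inputs below are the Brand and Generic rows from the table above.
rows = {
    "Brand":   {"cost": 2348,  "leads": 6,  "step1": 12,  "step2": 5},
    "Generic": {"cost": 23151, "leads": 18, "step1": 147, "step2": 14},
}

for name, r in rows.items():
    cpl = r["cost"] / r["leads"]    # cost per lead
    cps1 = r["cost"] / r["step1"]   # cost per Step 1
    cps2 = r["cost"] / r["step2"]   # cost per Step 2
    print(f"{name}: CPL ${cpl:,.0f}, Step 1 ${cps1:,.0f}, Step 2 ${cps2:,.0f}")

# Brand: CPL $391, Step 1 $196, Step 2 $470
# Generic: CPL $1,286, Step 1 $157, Step 2 $1,654
```

Notice how the picture flips depending on which action you divide by: generic looks expensive per lead but cheaper than brand per Step 1.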
Performance by match type (phrase vs. exact)
In the table below, you can see that we chose a mix of brand and non-brand keywords, and we separated keywords according to match type. We assumed that exact match performance would be driven mostly by brand terms. But we discovered that this was not the case. Instead, exact match performance depended heavily on the generic terms. In fact, when we looked at conversion performance in Step 2, all 11 exact match conversions came from generic keywords.
Match Type Evaluation

| Match Type | Cost | Impressions | Clicks | CTR | Leads | CPL | Step 1 | Cost Per Step 1 | Step 2 | Cost Per Step 2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Phrase match | $13,779 | 70,047 | 3,794 | 5.42% | 18 | $765 | 76 | $181 | 8 | $1,722 |
| Exact match | $11,721 | 58,367 | 4,345 | 7.44% | 6 | $1,953 | 83 | $141 | 11 | $1,066 |
| Total | $25,499 | 128,414 | 8,139 | 6.34% | 24 | $1,062 | 159 | $160 | 19 | $1,342 |
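Both tables are really two cuts of the same keyword-level report, grouped once by keyword type and once by match type. For readers who like to see the mechanics, here’s a rough pandas sketch of that slicing. The column names and placeholder rows are purely illustrative (this is not our actual report schema or campaign data):

```python
import pandas as pd

# Placeholder rows purely so the sketch runs; these are NOT our campaign
# figures, and the column names are illustrative, not a real export schema.
report = pd.DataFrame({
    "keyword_type": ["Brand", "Brand", "Generic", "Generic"],
    "match_type":   ["Phrase", "Exact", "Phrase", "Exact"],
    "cost":         [100.0, 100.0, 100.0, 100.0],
    "impressions":  [1000, 1000, 1000, 1000],
    "clicks":       [50, 50, 50, 50],
    "leads":        [1, 1, 1, 1],
    "step1":        [5, 5, 5, 5],
    "step2":        [1, 1, 1, 1],
})

def summarize(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Roll keyword-level rows up to one row per group, then derive the
    per-action cost columns shown in the tables above."""
    out = df.groupby(group_col)[
        ["cost", "impressions", "clicks", "leads", "step1", "step2"]
    ].sum()
    out["ctr"] = out["clicks"] / out["impressions"]
    out["cpl"] = out["cost"] / out["leads"]
    out["cost_per_step1"] = out["cost"] / out["step1"]
    out["cost_per_step2"] = out["cost"] / out["step2"]
    return out

print(summarize(report, "keyword_type"))  # the brand vs. generic view
print(summarize(report, "match_type"))    # the phrase vs. exact view
```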
Unexpected learning: 3 letters can make the biggest difference
Many companies analyze which keywords turn into leads, and then stop measuring. They often can’t pinpoint which leads turn into sales, and, as a result, believe that keywords that generate leads are “good” while keywords that don’t generate leads are “bad.” But that view is limiting, because full-funnel measurement is key to determining which keywords are the most profitable. After all, we want to deliver conversions, not only leads. We don’t want users to just enter the funnel — we want them to make it all the way through.
When we dug into the keyword performance of this campaign, we discovered that only three letters could make a massive impact in leading users all the way through the funnel. For example, take two keywords that appear nearly identical at a glance. Keyword 1 is “Baking a cake.” Keyword 2 is “Bake a cake.” Both are exact match keywords. The only difference is “Baking” vs. “Bake.” Yet the keyword with the longer character count outperformed the shorter one.
Keyword 1, with no leads, still delivered robust results, converting from Step 1 to Step 2 a total of 5 out of 18 times. Meanwhile, Keyword 2 went 0 for 8. In other words, we found the keyword that actually carried users all the way to a sale. Thank you, Keyword 1, for optimizing to the end result rather than stopping at leads. You crushed it. All because of three little letters.
3 Letter Difference

| Keyword | Cost | Impressions | Clicks | CTR | Leads | Cost Per Lead | Step 1 | Cost Per Step 1 | Step 2 | Cost Per Step 2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Keyword 1 | $413 | 2,540 | 164 | 6.46% | 0 | No Leads | 18 | $23 | 5 | $83 |
| Keyword 2 | $1,128 | 6,023 | 388 | 6.44% | 1 | $1,128 | 8 | $141 | 0 | No Conversions |
| Total | $1,541 | 8,563 | 552 | 6.45% | 1 | $1,541 | 26 | $59 | 5 | $308 |
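To see why ranking keywords on cost per lead alone would have buried Keyword 1, compare the two on Step 1 to Step 2 conversion rate and cost per Step 2 instead. Here is a minimal sketch using the figures from the table above (again, just an illustration of the arithmetic, not our actual reporting setup):

```python
# Figures from the table above: spend, leads, Step 1s, and Step 2s per keyword.
keywords = {
    "Keyword 1 ('Baking a cake')": {"cost": 413,  "leads": 0, "step1": 18, "step2": 5},
    "Keyword 2 ('Bake a cake')":   {"cost": 1128, "leads": 1, "step1": 8,  "step2": 0},
}

for name, k in keywords.items():
    step1_to_step2 = k["step2"] / k["step1"]  # how often Step 1 became Step 2
    cost_per_step2 = f"${k['cost'] / k['step2']:,.0f}" if k["step2"] else "no conversions"
    print(f"{name}: {k['leads']} lead(s), "
          f"{step1_to_step2:.0%} Step 1 -> Step 2, "
          f"cost per Step 2: {cost_per_step2}")

# Keyword 1 ('Baking a cake'): 0 lead(s), 28% Step 1 -> Step 2, cost per Step 2: $83
# Keyword 2 ('Bake a cake'): 1 lead(s), 0% Step 1 -> Step 2, cost per Step 2: no conversions
```

Ranked by cost per lead, Keyword 2 looks like the safer bet and Keyword 1 looks like a dud. Ranked by cost per Step 2, Keyword 1 is the clear winner.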
Key takeaways
- Combined strategy. Paid search teams typically keep branded and generic campaigns separate. But this campaign combined them successfully, balancing spend to maximize returns.
- Spend allocation. Contrary to our initial concerns, the Microsoft engine did not heavily weight spend toward branded keywords. Instead, it distributed spend to maximize overall performance.
- Generic keyword performance. While cost per lead (CPL) was higher for generic terms, they outperformed branded terms at Step 1 of the funnel and drove the majority of Step 2 conversions.
- Match type performance. Exact match keywords (which are primarily generic terms) drove a significant portion of Step 2 conversions.
- Keyword differences. Minor changes in keyword phrasing (“Bake a cake” vs. “Baking a cake”) can impact performance far more than we thought. To our surprise, longer phrases sometimes outperformed shorter ones.
Implementing a combined bid approach
This test demonstrates that combining keyword types and match types within a single bid strategy in Microsoft Ads can be effective, despite our initial reservations. It also highlights the importance of monitoring performance closely and adjusting based on the data.
Want to ask a question or learn more? Get in touch with us today.