My roles: designer, researcher, optimizer
I was responsible for planning and executing qualitative and quantitative research activities, then delivering high-impact, actionable recommendations to product teams. I also worked with 1. product owners to prioritize jobs, 2. developers to confirm the feasibility of recommendations and build AB tests, 3. designers to help problem-solve issues and produce some spec'd visual designs, 4. information analysts to query the data warehouse and review my data analyses, and 5. another user researcher, with whom I partnered to plan and execute research.
Usability tests and analytics helped me uncover salient user pain points in the online insurance quote and in the pay-per-click insurance flows.
One of the first things I did was conduct qualitative usability and 5-second tests of the online quote and application, and collect call center data about customers calling in with quote questions. I moderated 4 sessions of users attempting to complete tasks in the online quote. The other researcher and I surfaced several dozen usability pain points while reviewing notes and recordings. I also dug through analytics, partnering with an information analyst to triangulate the usability insights with quantitative data about users' form inputs and where they most often exited the quote. Reviewing referral traffic and keyword data pointed to problems with the paid search landing page for insurance. Further usability tests, 5-second tests, desirability tests, and stakeholder interviews revealed credibility issues and a wonky paid-search-traffic strategy that siphoned current clients from account management to marketing brochureware.
"It looks like a page that's trying way too hard to sell me something."
- user
Analytics showed roughly 50% of users did not proceed past quote results into an application. Usability tests indicated the quote results page felt amateurish and salesy to users, and was hard to skim and scan.
Analytics showed 35% of users bailed as soon as they started a quote, and 30% of users who made it to the demographics page bailed when asked to provide personal and contact information. 5-second tests revealed users didn't realize the quote start page was the beginning of a quote form. Usability tests indicated users were tired of filling out forms by the time they reached the demographics page.
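To show the kind of funnel math behind these drop-off numbers, here's a minimal sketch. The step names and visitor counts are hypothetical placeholders, not Hagerty's actual analytics data:

```python
# Step-by-step funnel drop-off. Counts are made up for illustration.
funnel = [
    ("quote start", 10_000),
    ("demographics", 6_500),
    ("quote results", 4_550),
    ("application", 2_275),
]

# Compare each step to the next to get the exit rate at that step.
for (step, entered), (_, advanced) in zip(funnel, funnel[1:]):
    drop = 1 - advanced / entered
    print(f"{step}: {drop:.0%} of users exited before the next step")
```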
Analytics showed organic traffic converted 186.86% better than paid traffic in the first half of 2020. The e-commerce team was paying for a lot of irrelevant traffic to the quote, and the ads directed account-management-bound users to the quote instead of where they needed to go.
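For clarity on how a relative figure like "186.86% better" is computed: it's the organic conversion rate's lift over the paid rate. The session and conversion counts below are made up for illustration:

```python
# Relative conversion lift of organic over paid traffic.
# All counts here are hypothetical, not the actual 2020 data.
organic_conversions, organic_sessions = 430, 5_000
paid_conversions, paid_sessions = 150, 5_000

organic_rate = organic_conversions / organic_sessions  # 8.6%
paid_rate = paid_conversions / paid_sessions           # 3.0%

lift = (organic_rate - paid_rate) / paid_rate
print(f"Organic converts {lift:.2%} better than paid")  # ~186.67% better
```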
A screenshot of the old paid search landing page. People who searched 'Hagerty login' on Google were led here by our paid search ads. But there's nowhere obvious for current customers to log in!
I reviewed my findings with product owners, developers, designers, the marketing team, and other stakeholders
After identifying high-impact issues, I helped set priorities by reviewing findings with several teams and calculating a conservative potential revenue impact for several usability issues. I found 4 quote- and paid-search-related opportunities that could predictably gain Hagerty $1.2 million in additional revenue over 1 year and $15.5 million over 5 years (accounting for our average customer retention, seasonality, and average insurance premium rates), all other things being equal.
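As a rough illustration of that projection style, here's a back-of-envelope sketch. Every input below is a hypothetical placeholder, and the cohort-decay model is a simplified stand-in for the actual model, which used Hagerty's real premium, retention, and seasonality figures:

```python
# Back-of-envelope revenue projection for one usability fix.
# All numbers are placeholders, not Hagerty's actual figures.
annual_quote_starts = 100_000
baseline_conversion = 0.05    # share of quotes that become policies today
expected_lift = 0.10          # conservative 10% relative improvement
avg_annual_premium = 500      # dollars per policy per year
retention = 0.85              # customers kept year over year

extra_policies = annual_quote_starts * baseline_conversion * expected_lift
year_1 = extra_policies * avg_annual_premium

# Each year's cohort of extra policies decays by retention; sum the
# revenue from 5 annual cohorts over a 5-year window.
five_year = sum(
    extra_policies * avg_annual_premium * retention**age
    for cohort in range(5)        # one new cohort per year
    for age in range(5 - cohort)  # revenue-earning years for that cohort
)
print(f"Year 1: ${year_1:,.0f}; 5-year total: ${five_year:,.0f}")
```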
Iterating and testing design changes
Once priorities were set, I helped designers, developers, and product managers understand the usability problems most critical to fix. The designers and I crafted solutions, which I reviewed and tested with target users via usability tests, 5-second tests, desirability tests, and AB tests, depending on the research questions. We followed a rough process of design > test > iterate > test > development and QA > release > optimize for the 4 main opportunity areas: starting the quote, entering personal information, reviewing quote results, and the pay-per-click marketing page. The AB tests (for 2 of the solutions) took 2 months each, due to relatively low visitor traffic and the need for adequately powered sample sizes.
The AB tests had to be run 1 at a time so that 1. they didn't interfere with each other, 2. confounding variables were minimized, and 3. we could gather an adequate sample size for each without delaying product changes (a rough sample-size sketch follows below). If we had AB tested all 4 solutions at once, nothing would have been permanently released until Apr. 2021. Running them sequentially gave us a steady cadence of releases to the public. (While the AB tests ran, we all worked on other projects, too.)
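To illustrate why low traffic stretched each test to roughly 2 months, here's a minimal sample-size sketch using the standard two-proportion formula. The conversion rates, traffic split, and helper function are assumptions for illustration, not our actual numbers:

```python
# Required sample size per variant for a two-proportion AB test,
# using the standard normal-approximation formula. Inputs are hypothetical.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Approximate visitors per arm to detect a shift from p1 to p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

n = n_per_variant(0.05, 0.06)    # detect a 5% -> 6% conversion lift
daily_visitors_per_arm = 150     # hypothetical traffic per variant
days = ceil(n / daily_visitors_per_arm)
print(f"{n} visitors per variant, about {days} days at this traffic")
```

At these assumed numbers the test needs roughly 8,000 visitors per arm, which works out to around 2 months at low traffic, and splitting that traffic across 4 simultaneous tests would have stretched every test proportionally longer.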