Tight on engineering resources, our team found an opportunity to test an improved mobile experience for our customers. We ended up with a lean, testable prototype, but we soon learned that our test design would prevent us from getting measurable results.
In the winter of 2016-2017, most of our engineering team was involved in a months-long initiative to resolve our product's technical & infrastructure problems. Given that limitation, we explored other opportunities to provide value to our customers. A combination of circumstances led us to test an improved mobile experience for TORCHx:
We already had a hunch the current TORCHx mobile experience was problematic. We didn't offer a mobile app, so all our mobile touchpoints were through our responsive website, which suffered from a cluttered interface with poor usability.
TORCHx had been acquired by Web.com the previous year, and this interface was a symptom of its pre-acquisition mobile strategy: replicate the entire functionality of the desktop experience in the responsive mobile site. This resulted in many usability issues, which we hypothesized were a cause of the site's low usage; only 10% of traffic came from mobile devices, despite real estate agents' need for on-the-go solutions. Armed with this knowledge, we wanted to validate some of our assumptions by interviewing customers. We spoke with real estate agents who had recently used the mobile site, as well as those who hadn't used it in months, and learned:
They weren’t tracking conversations that happened on the go because manual entry was inconvenient on our mobile site
This research helped frame some issues with the mobile site, but we knew that to truly improve the value of the mobile experience we had to release something and measure its impact. Now the question was: what do we release?
Our goal was to put something in the hands of our customers and learn from it, whether it proved or disproved our hypothesis. The next step was to generate ideas, align on what we wanted to test, and come up with a more focused hypothesis. I facilitated a design studio with participants from Product Management, Design, Product Marketing and Engineering to ensure everyone was bought in, engaged, and involved in the product development process.
We generated ideas that fell into different buckets:
We voted and aligned on testing a notification-driven experience that would alert agents of new leads and incoming messages. Our gut reaction was to design & build a lean (read: maybe only 1-2 features) mobile app, since that was what our customers were asking for, and what we were losing sales over. But since our goal was to test our hypotheses and learn about our customers, we wanted to build something faster and less expensive to maintain and update. Mobile apps also pose an entirely different problem, adoption, which wasn't part of our hypothesis.
The team went through a few story-mapping sessions to figure out what the "thinnest" slice could be for our first test. That way we could build a testable prototype more quickly, and really focus the test on a risky hypothesis.
This helped us define our risky hypothesis:
Clients using the new mobile experience respond to more texts in less time, and spend more time in the CRM than those without the new experience.
With this in mind, I could move forward with design iterations. We decided to first build out a new version of the mobile website instead of a mobile app for a few reasons:
After a few usability tests and design reviews with the product team, we had run through several design iterations of the optimized mobile web experience:
This process helped us further home in on exactly which features we needed to build into the new experience so we could properly test our hypothesis. The final process flow for users would be as follows:
Additionally, users would have access to a full list of conversations, and a details page with information about the person they're messaging. Thus, the testable experience really boils down to three screens, plus the initial SMS notification that triggers it:
At this point I was working closely with our lead engineer to implement these designs, make small tweaks to the experience, and resolve gaps we hadn't fully accounted for (e.g. what happens if a message fails to send?). I also created a research & test plan with input from Product, Product Marketing and Engineering teammates so we could track progress during the test, keep tabs on the clients we talked to, and update metrics against our benchmark numbers. We decided to start with an alpha test rolled out to 5 clients so we could iron out any final technical or UI bugs before rolling out the experience to a larger population for a beta test.
For the alpha test, we recruited 5 clients who were open to testing the updated experience. We "turned on" the new experience for these 5 clients, and waited.
We monitored usage data for the new mobile-optimized pages, and kept tabs on text messages sent to these 5 clients from potential homebuyers. After a week, we hadn't seen any usage or incoming text messages. I touched base with these clients to see if they had been receiving messages at all, and to understand whether anything was preventing them from using the new experience.
As it turns out, in that week they hadn't received any text messages from potential homebuyers. They had been texting leads they'd already been nurturing for some time, but those conversations were taking place in their native SMS applications, not through our platform. The way we had set up our test meant we relied on our clients getting new leads, and we couldn't guarantee that would happen within a reasonable timeframe for such a small population of users (5 clients). We waited another week, and still nothing happened.
At this point we had to decide whether to keep waiting and move forward with the test, expand it to more clients who were receiving text messages from potential homebuyers more regularly, or hit the pause button. We ultimately paused the test because our lead engineer, who was the only technical resource on this project, had to dedicate 100% of his time to technical & infrastructure problems.
There were a couple of key takeaways at the conclusion of this project:
I was a little disheartened, but this pushed us to keep looking for ways to deliver value to our customers through experiences that didn't rely on the limited engineering resources we had at hand. This was the silver lining, since it led us to redesign our onboarding process which ultimately had an incredibly positive impact on our customers and the business. Check out that case study here!