Helping Optilyz customers be confident that the marketing post they pay
for can be delivered to an existing address.
My role
I was responsible for the end-to-end design and worked from concept to
launch in close collaboration with tech and product teams.
Problem
One of the challenges that digital marketing professionals face when
switching to addressed mail is having to deal with the complexity of
physical addresses. The raw data extracted from databases doesn't
always have the structure and format necessary to ensure that the post
can be delivered.
On a previous project we built a basic address validation feature that
flagged incomplete records such as addresses without a postal code,
name, or house number; but through customer feedback we learned that
this was not enough. Customers needed detailed information about the
issues with their data and the confidence that their mailing lists
were suitable for delivery.
Project goals
We wanted to build confidence and trust through a system that deals
transparently with the address data that customers provide,
showing issues in detail and helping them as much as possible to clean
their data and make the final mailing lists deliverable.
The team started working on a technical solution that:
checks the provided addresses against a database
tries to replace misspellings with the correct address
cleans and standardizes the formatting across the addresses
flags missing information that might affect deliverability
This would result in cost savings from the removal of undeliverable
records and in increased customer satisfaction.
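As a rough illustration of these steps, the sketch below shows how a single
record could pass through them. This is only an assumption made for this case
study, not the actual Optilyz script; the record shape, the referenceDb lookup
and the closestMatch helper are hypothetical names.

// Hypothetical sketch of the validation steps described above.
interface AddressRecord {
  name?: string;
  street?: string;
  houseNumber?: string;
  postalCode?: string;
  city?: string;
}

type ValidationStatus = "valid" | "corrected" | "problematic" | "undeliverable";

interface ValidationResult {
  record: AddressRecord;
  status: ValidationStatus;
  issues: string[];
}

function validateAddress(
  record: AddressRecord,
  referenceDb: {
    has: (a: AddressRecord) => boolean;
    closestMatch: (a: AddressRecord) => AddressRecord | null;
  }
): ValidationResult {
  const issues: string[] = [];

  // 1. Flag missing information that might affect deliverability.
  for (const field of ["name", "street", "houseNumber", "postalCode"] as const) {
    if (!record[field]) issues.push(`missing ${field}`);
  }

  // 2. Clean and standardize formatting (trim whitespace, normalize spacing).
  const cleaned: AddressRecord = {
    ...record,
    street: record.street?.trim(),
    city: record.city?.trim(),
    postalCode: record.postalCode?.replace(/\s+/g, ""),
  };

  // 3. Check the address against the reference database.
  if (referenceDb.has(cleaned)) {
    return { record: cleaned, status: issues.length ? "problematic" : "valid", issues };
  }

  // 4. Try to replace misspellings with the closest known address.
  const match = referenceDb.closestMatch(cleaned);
  if (match) {
    issues.push("corrected against reference database");
    return { record: match, status: "corrected", issues };
  }

  issues.push("no matching address found");
  return { record: cleaned, status: "undeliverable", issues };
}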
Process
The development of the validation script and the design work ran
almost in parallel. Apart from understanding user and business needs,
this project required frequent collaboration with the tech team to
make sure that the technical implementation and the design work were
in sync.
Customer insights
I conducted customer interviews and tested the usability of the
previous basic validation feature. I wanted to understand the
situations that our customers had to navigate when working with
address data.
Additionally, the team analysed address files provided by customers to
identify the most common sources of errors.
Summary of initial findings
Understanding the implementation
I needed to pay close attention to the technical solution and
regularly collaborated with the tech team to answer questions such as:
How do we translate the script's tasks and results into user-friendly
language?
How are the addresses processed, and how long does it take?
Should we show the results as they are processed or wait until the
process is completed?
Can we let users continue configuring the campaign while the
validation is in progress?
From a business perspective, it was important to give customers a
detailed overview of what we were doing to clean up their addresses,
and to make sure the end results left no room for doubt.
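To reason about the last two questions, I sketched with the team how partial
results could be surfaced while the validation runs in the background, so the
interface can either stream progress or simply wait for the final summary. The
snippet below is a hypothetical illustration reusing the AddressRecord and
ValidationResult types from the earlier sketch; it is not the actual Optilyz
implementation.

interface ProgressUpdate {
  processed: number;
  total: number;
  deliverable: number;
  problematic: number;
}

async function* validateMailingList(
  records: AddressRecord[],
  validate: (r: AddressRecord) => Promise<ValidationResult>
): AsyncGenerator<ProgressUpdate> {
  let deliverable = 0;
  let problematic = 0;

  for (let i = 0; i < records.length; i++) {
    const result = await validate(records[i]);
    if (result.status === "valid" || result.status === "corrected") deliverable++;
    else problematic++;

    // Yield after each record so the UI can show live progress, or the
    // caller can simply await the last update for a final summary.
    yield { processed: i + 1, total: records.length, deliverable, problematic };
  }
}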
Communicating the validation process to the user
Describing the end result
Exploring ideas
Sharing work in progress and low-fidelity wireframes enabled fast
communication with the team.
Asking users to decide on warnings:
Focused version on an intermediate step (left)
Issues and actions are presented together with the overview of results
(right)
We went through several review sessions until we had a stable enough
version to test.
Progressing through design versions
Final version used for usability testing
People reacted positively to the new feature, but there were several
aspects to improve. One of them was the donut chart area.
The purpose of the chart was to give a quick, at-a-glance
representation of the proportion of valid vs. invalid and problematic
addresses, helping users understand the results before they even
looked at the numbers.
However, it misled people into thinking that the validation was
still in progress rather than fully completed.
We continued iterating on the design, adapting it to user needs as
well as technical requirements and edge-case scenarios.
Next steps
At the end of the process I still had open questions that could only
be answered with post-launch metrics and live iterations. What is the
most common behavior during the validation process? Do users continue
configuring their campaign as we expected, do they close the browser
and come back later, or do they try to wait? Is this feature helping
them understand and structure their data better? Is it generating the
outcome that we expected?
Takeaways
We can validate some assumptions with user research but not all the
questions can be answered before going live. Designing for the
unknowns is as important as applying what is known.
It can be tempting to add features or non-essential details just
because it is possible and doesn't seem to take much effort. Having
a design strategy that emphasizes learning goals can help keep
design decisions focused.