Testing a product's brand attributes through user surveys

Defining the brand direction of a global communication company with clarity and consensus.


Last year, a global telecommunications company approached Wizeline to work on a multiplatform project: they asked us to redesign four of their products to improve the user experience and provide a consistent look & feel to be used across more than 15 countries.

Six months in, the team had done an incredible job. They had run generative research across several countries, defined personas and the main pain points to address, created the interaction design for every user flow, and delivered a few proposals on the look and feel. The team was facing a challenge in the last step, though: despite multiple meetings presenting high-fidelity mockups for approval, the discussions hadn't been successful.

The main problem was the approval path the design needed to go through. The company our team was working with was huge, and different departments and leaders were involved in this decision (product teams, marketing, and regional managers). Whenever designers presented a proposal, the main stakeholder would take it to other people to review internally and come back with changes.

The second problem was different but linked to the first. Every discussion about the look and feel was subjective and primarily based on personal preferences, making it difficult for the team to find direction. It started to feel like designing by committee.

My role on the project was neither hands-on design nor client-facing. I played the Sponsor role, a role we created at Wizeline to support designers backstage. We "clear the way" for designers to do their work, and every manager and I play this role across the different projects in the company. These are some of our activities:

  • Have recurring syncs with the design team to provide coaching and tools for their work
  • Work with Delivery Directors on planning potential project extensions or adding new members to the team
  • Work with Project Management on any adjustments to the timeline, activities, and deliverables
  • Sync monthly with our client stakeholders to get feedback on the designers' performance and proactively address any issues on either side

During one of our sponsorship meetings, the team brought this topic to me. A few weeks before, they had run a Brand Distillation workshop to define the attributes they wanted the look & feel to communicate. Everything felt like it was flowing, but then they got stuck once they started presenting the high-fidelity mockups.

Tension Matrix with brand attributes defined during the Brand Distillation workshop

This situation is common when discussing visual design, where feedback is mainly based on gut feeling, but it is not foreign to product design reviews either. The difference with product design is that we know we will validate it during usability studies, which prevents teams from getting too fixated on personal opinions or small details. In the end, the users' feedback dictates the direction of the product.

Following that line of thought, I suggested validating their visual proposals as well. The goal would be to bring data to the stakeholders for the next meeting so they would feel confident the design was going in the right direction and unblock this step of the process. For that meeting to be successful, it would be essential to speak the same language as our stakeholders. Instead of presenting qualitative feedback, they would present quantitative data from a good number of users.

I suggested running a concept testing study:

  1. Put together a Google Form
  2. Add the visual proposals the team created so far
  3. Ask people to select the attributes the mockups reminded them of: friendly, old-fashioned, modern, efficient, cluttered, etc. As options, the team should include the attributes the client wanted to communicate plus other unrelated attributes to reduce bias.

Considering we didn't need any input on usability or on whether people would find the product helpful, I also advised the team not to be too strict about the type of audience.

That was the direction I provided, but the team shaped the study a lot better:

  • They started the survey with an age question and added two more to gauge participants' familiarity with these types of platforms and their frequency of use.
  • After showing the visual proposal and asking people to select attributes, they added another question to ask why they chose that option, which helped them get some qualitative data.
  • They reached out to people within the company to answer the survey, making the recruitment part a lot easier.
Structure of the survey sent to users
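Once responses come in, a survey like this boils down to tallying how often each attribute was selected and separating the intended attributes from the distractors. As a minimal sketch of that analysis (the attribute names and responses below are illustrative, not from the actual study):

```python
# Hypothetical sketch: tallying attribute selections for one mockup.
# Each response is the list of attributes a participant associated with it.
from collections import Counter

responses = [
    ["modern", "friendly", "efficient"],
    ["modern", "cluttered"],
    ["friendly", "modern"],
    ["efficient", "old-fashioned"],
]

# Attributes the brand wants the design to communicate (illustrative).
intended = {"modern", "friendly", "efficient"}

counts = Counter(attr for response in responses for attr in response)
total = len(responses)

# Report how often each attribute was perceived, flagging distractors.
for attr, n in counts.most_common():
    label = "intended" if attr in intended else "distractor"
    print(f"{attr}: {n}/{total} respondents ({label})")
```

A simple tally like this is usually enough to show stakeholders which intended attributes the audience actually perceived and whether any distractor attribute (e.g. "cluttered") was selected often enough to warrant design changes.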

Outcomes

The team brought the outcomes of the study to the client. The results were primarily favorable: most of the attributes they intended to communicate were the ones the audience perceived and selected. Only one attribute wasn't perceived, so the team suggested some minor changes to the user interface to reflect it.

Example of the results presented to the client

This study unlocked the approval. The client was happy to see that people had the correct perception of the brand. Bringing the users' voices to the table helped to change the conversation. We learned that visual design (and not only product design) could be tested to remove subjectivity and move things forward.