My approach:
- Heuristic analysis to find UX issues that hinder users from accomplishing their goals within the app
- Descriptions and light wireframes to fix some of the low-hanging fruit from the heuristic analysis
- Interviews with users to get clearer on their goals, needs, and frustrations (both with the app and with the overall task)
- Deeper redesign of the most heavily used flows, based on interview feedback
- Iteration on the most heavily used flows, based on user feedback and analysis of usage metrics
- Redesign of less-used flows, then iteration on them
My key design principles:
- Show data graphically rather than in tables. Humans interpret visual data (especially when it leverages gestalt principles) far faster than they read text, and tables are the slowest text of all. It's not always possible, but when it is, the improvements can be dramatic.
- Research to learn what users' needs and frustrations are. As designers, engineers, etc., we often think we're the user, but we usually aren't: unless you're working at GitHub, your users are by definition not working on a product to do the tasks you're interested in. So talk to users! Get clear on what their needs (not wants) are, what their frustrations are, and how they talk about them. Then synthesize that into personas, so the whole company can keep moving toward the same goals.
- Focus on data over opinions. Select a metric and a goal value before designing (and definitely before releasing) anything. That way it's easy to see whether a design is actually "better." A/B testing makes this easier still, and I enjoy running those tests.
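Picking the metric and goal up front also tells you how long a test needs to run. As a minimal sketch (the baseline rate and minimum detectable effect below are hypothetical placeholders, and the formula is the standard normal-approximation power calculation for two proportions, not anything specific to this process):

```python
# Rough per-variant sample size for a two-proportion A/B test,
# at alpha = 0.05 (two-sided) and 80% power, via the normal approximation.
from math import ceil

def sample_size_per_variant(baseline, mde, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variant to detect a lift of `mde` over `baseline`."""
    p1, p2 = baseline, baseline + mde
    # Variance terms for each group's conversion rate
    variance_sum = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance_sum / mde ** 2)

# Hypothetical goal: detect a 2-point lift over a 10% baseline conversion rate.
n = sample_size_per_variant(baseline=0.10, mde=0.02)
print(n)  # roughly 3,800 visitors per variant
```

Dividing that number by weekly traffic gives a realistic test duration before any design work starts.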
- In the absence of data, listen to subject-matter experts. If there’s a salesperson on the team, trust them when they talk about the right way to do sales. If there’s an engineer, trust them when they talk about the right way to implement something.
- Release the smallest possible changes at a time. Small changes are not only easier to implement, they're also easier to roll back if they aren't improving metrics.
- Measure how well a new design works, preferably via A/B testing. If A/B tests aren't possible, compare metrics from the weeks before a design rolls out against metrics from the weeks after.
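Once the test has run, the comparison itself is a standard two-proportion z-test. A minimal stdlib-only sketch (the conversion counts are hypothetical, and the function name is mine, not from any particular library):

```python
# Two-sided two-proportion z-test for an A/B test, using only the stdlib.
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical result: 200/4000 conversions in control, 260/4000 in the variant.
z, p = two_proportion_z_test(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice a stats library (e.g. statsmodels) does this with one call; the point is just that "better" becomes a number you can check against the goal you set before designing.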