I ask a lot of questions
As a designer, I never know the answers to all of a client's questions--or those of my team, for that matter. Sure, projects can be delivered based on assumptions, but at a certain point those assumptions can threaten the effectiveness of a given solution. Collecting data is the best way to move forward with reduced risk--more confident that problems have been addressed and challenges are being met.
I have personally moderated over 100 one-on-one interviews, and I can't recall how many projects I have run as a member of a mixed research and design team. Fidelity of assets has varied from functional devices to interactive PowerPoint files and paper prototypes. The methods I have employed have ranged from Open and Closed Card Sorts and Likert Scales to performance data collection across a range of criteria and metrics.
I have conducted testing for applications, websites, services, and physical products in sessions ranging from a half hour to an hour and a half in duration. I have written recruitment screeners and testing protocols, and built functional prototypes for both mobile and desktop. While I'm more than comfortable as a practitioner, lately I have been doing more training and mentoring of junior staff than anything else--which I enjoy.
Recent usability evaluations have involved remote testing, using a browser-based application in conjunction with a web camera. While this approach had its challenges, the recorded footage was of editable quality and the data were reliable.
Having conducted ethnographic studies across the country for various clients, I can say that few exercises yield better results. That said, these efforts are costly, and the data collected require substantial signal-to-noise analysis, often demanding a team.
I have substantial experience with the analysis and translation of research findings into next steps. I often employ a combination of Post-it notes, paper, and whiteboards to approach the data from all angles and tell the resulting story.
Human Factors and Ergonomics
During a portion of my tenure at Motorola, I was responsible for the fit, comfort, and usability of the Bluetooth Accessory portfolio. While working with Herman Miller, I studied how different body types moved throughout the workday while seated in prototypes.
I have worked on the launch of each of the products above--from wearable consumer electronics and mobile devices to award-winning seating and workspace environments. My involvement has ranged from brainstorming to late-stage prototype evaluation and interface design.
Focus Groups
Full disclosure: this is my least favorite method, because participant interactions affect results. Sure, there is a time and a place for group testing, but I do not think it is in product design. Marketing, Consumer Research, and Market Validation studies are all examples of when this can be a useful tool for leveraging customer feedback. It's important to consider that input relative to the development lifecycle of the product or service in question; gathered too late, it can surface issues that cannot be addressed before launch. That said, I can run groups--I just prefer not to.
Triads and Small Groups
Small groups of 2-3 participants can be ideal, depending on the purpose of the study--similar to Focus Groups, but more effective. When concept testing in an aggressive go-to-market environment, this is an acceptable option. Having conducted studies of this nature, I can say they serve a specific need, as long as the recruitment is targeted.