Comprehensive documentation of the heuristic evaluation and usability testing conducted on our midterm prototype, presented through Nielsen's 10 Usability Heuristics framework.
Meet the researchers and evaluators behind this usability study.
All instruments, consent forms, and respondent profiles used in our evaluation process.
All participants provided informed consent prior to testing. The consent form outlined the purpose of the study, confidentiality terms, and voluntary participation rights.
View Consent Form →
A total of 22 respondents participated in usability testing, selected based on familiarity with the system's target domain.
Each group member independently evaluated the prototype against Jakob Nielsen's 10 Usability Heuristics.
📄 View Workbooks (PDF)
The primary survey instrument used for this study was the standardized System Usability Scale (SUS), supplemented by a demographic profile and qualitative open-ended questions. The instrument was administered digitally using Google Forms to ensure efficient data collection and confidentiality.
📄 View Survey Instrument
Presentation/Testing of Prototype via Google Meet
Pie Chart of Users' Voluntary Participation Agreement
Suggestions From Users
Participants' SUS Results
Users' UX & UI Experience
Bar Chart of Users' Ages
Key usability findings, analysis of issues, and actionable design recommendations.
H3: User Control and Freedom
While users liked the ease of booking, there were no specific mentions of "undo" or "cancel" functions. In a high-stakes healthcare environment, ensuring users can easily back out of or change a mistaken appointment is critical.
H4: Consistency & Standards
The survey item "too much inconsistency" scored 2.73/5 (54.55%). While not a failure, this suggests that certain elements (buttons, colors, or navigation patterns) may differ between the patient and doctor portals, causing slight confusion.
H5: Error Prevention
A significant improvement suggestion was the inclusion of a "chatbox" to ask questions before the appointment. This indicates a heuristic gap: without a preliminary communication channel, users feel they might make an "error" by booking the wrong type of service or doctor.
H7: Flexibility and Efficiency of Use
Users explicitly suggested adding online payments and mobile accessibility. The lack of a mobile-optimized view is a major efficiency barrier for "expert" or frequent users who wish to book appointments on the go.
H8: Aesthetic & Minimalist Design
Qualitative remarks mentioned a "beautiful user interface design." However, the moderate score for complexity (2.5/5 for being unnecessarily complex) suggests that some pages might contain redundant information that could be simplified.
H9: Help Users Recognize, Diagnose, and Recover from Errors
Given the complexity score, if a user encounters an error (e.g., a booked-out slot), the system must provide clear, non-technical instructions on how to proceed.
H10: Help & Documentation
First-time users had difficulty navigating without guidance; no tooltips or onboarding were present. Participants gave their highest "Negative" scores to "Need for prior learning" (60%) and "Support of a technical person" (55.45%). This identifies a critical need for a "Help" section, FAQs, or a simple "How-to" guide to assist users who are not tech-savvy.
The usability evaluation of the HealthConnect prototype yielded an overall System Usability Scale (SUS) score of 69, which marginally surpasses the industry-standard average of 68.
The quantitative data reveals a high degree of user confidence (86.36%) and a strong consensus that the system is easy to learn (87.27%), with an average "Ease of Use" rating of 4.0/5. However, the results also highlight certain friction points, specifically regarding perceived complexity (50%) and the requirement for prior learning (60%), suggesting that while the system is functional, it is not yet fully intuitive for all user segments.
The findings indicate that HealthConnect successfully meets the primary goal of providing a convenient and efficient booking process, as evidenced by the high percentage average for frequent use (83.64%). The discrepancy between the high "Ease of Use" score and the moderate "Complexity" score suggests that while the core workflow (booking an appointment) is straightforward, the secondary features or the density of the user interface may feel overwhelming to first-time users. Qualitative feedback reinforces this, as users praised the system’s "hassle-free" nature but simultaneously requested more advanced features like integrated payments and real-time chat.
The SUS score of 69 classifies the system as "Good," but implies it is currently in a transitional phase between a functional prototype and a highly optimized, professional-grade application.
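For context on how a score like 69 is derived, the sketch below shows the standard SUS scoring procedure (odd-numbered items contribute score - 1, even-numbered items contribute 5 - score, and the sum is multiplied by 2.5). This is a minimal illustration in TypeScript; the sample responses are hypothetical, not our actual survey data.

```typescript
// Minimal sketch of standard SUS scoring (Brooke, 1996).
// Each respondent answers 10 items on a 1-5 Likert scale.
// Odd items (1, 3, 5, 7, 9) are positively worded: contribution = score - 1.
// Even items (2, 4, 6, 8, 10) are negatively worded: contribution = 5 - score.
// The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
function susScore(responses: number[]): number {
  if (responses.length !== 10) {
    throw new Error("SUS requires exactly 10 item responses");
  }
  const sum = responses.reduce((acc, score, i) => {
    // Index 0 corresponds to item 1 (odd), index 1 to item 2 (even), etc.
    const contribution = i % 2 === 0 ? score - 1 : 5 - score;
    return acc + contribution;
  }, 0);
  return sum * 2.5;
}

// Hypothetical respondents (illustrative only, not our survey data):
const respondents: number[][] = [
  [4, 2, 4, 2, 4, 3, 5, 2, 4, 3],
  [5, 1, 4, 2, 3, 2, 4, 3, 4, 2],
];

// The study-level score is the mean of the individual scores.
const mean =
  respondents.map(susScore).reduce((a, b) => a + b, 0) / respondents.length;
console.log(`Mean SUS score: ${mean.toFixed(1)}`);
```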
Add toast notifications, loading spinners, and success/error messages to every form interaction and system action.
Conduct a content audit and unify all navigation terminology using a shared design system and component library.
Implement real-time validation on all form fields with clear inline error messages before data is submitted (see the sketch after this list).
Develop a contextual help system with tooltips, an FAQ section, and an onboarding tutorial for new users.
Restructure using progressive disclosure — show summary metrics first, with details accessible on demand.
Conduct a follow-up usability test after implementing changes to measure improvement in success and satisfaction.
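To make the real-time validation recommendation concrete, here is a minimal TypeScript sketch; the field names, messages, and data-error-for attribute are hypothetical and not part of the current prototype.

```typescript
// Minimal sketch of real-time inline validation.
// Field names, messages, and the data-error-for attribute are
// hypothetical; the actual HealthConnect forms may differ.

type Validator = (value: string) => string | null; // null means valid

const validators: Record<string, Validator> = {
  email: (v) =>
    /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(v)
      ? null
      : "Please enter a valid email address.",
  appointmentDate: (v) =>
    new Date(v).getTime() > Date.now()
      ? null
      : "Please choose a future date.",
};

// Validate on every keystroke and surface the message inline,
// so mistakes are caught before the form is ever submitted.
function attachInlineValidation(form: HTMLFormElement): void {
  for (const [name, validate] of Object.entries(validators)) {
    const input = form.querySelector<HTMLInputElement>(`[name="${name}"]`);
    const errorEl = form.querySelector<HTMLElement>(
      `[data-error-for="${name}"]`
    );
    if (!input || !errorEl) continue;

    input.addEventListener("input", () => {
      const message = validate(input.value);
      errorEl.textContent = message ?? "";
      input.setAttribute("aria-invalid", message ? "true" : "false");
    });
  }
}
```

Validating on every input event keeps errors visible before submission, which directly addresses the H5 (Error Prevention) gap identified in the findings above.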
Full individual evaluations from all five members.
A visual tour of the prototype's key screens and interaction flows evaluated during testing.
The entry point of the prototype. Users authenticate via a login form.
The main hub displaying key information.
The search step where users find the specific doctor they need to book an appointment with.
The primary task flow users were asked to complete.
Personal insights from each team member on the evaluation process and learnings.
Conducting the heuristic evaluation opened my eyes to how small design decisions create large usability barriers. I learned to evaluate systems not just as a developer, but as a user.
Analyzing data and finding patterns across multiple respondents was challenging but rewarding. The numbers told a clear story about where our design fell short.
Facilitating usability tests taught me the importance of neutral observation. Watching real users interact with our prototype was both humbling and incredibly informative.
Nielsen's heuristics gave us a shared vocabulary to discuss design problems. I'll carry this framework into every future project as a first-pass evaluation tool.
Documenting every step made me realize how important traceability is in UX research. Good documentation transforms findings into actionable improvements.