D2DCure
Fall 2024 Client Project
Introduction
Our client
The Design-to-Data (D2D) program, led by Dr. Justin B. Siegel at the Siegel Lab, seeks to find the connection between protein structure and function. D2DCure is the program's main database submission and management tool, which the Siegel Lab and over 40 other institutions use to submit, curate, and characterize their enzymes.
Our task
The website was built over 5 years ago with outdated technologies and features, so our main goal was to overhaul its UI and rebuild both the frontend and backend of D2DCure. We accomplished this by streamlining user flows, implementing new features, creating a new design system, and improving overall website performance. Special thanks to Ashley Vater for being our amazing client and primary liaison for this project.
The Team
Timeframe
October — December 2024 (continuing work from January — June 2024)
Tools
Design — Figma
Development — Next.js, Prisma, TypeScript, MySQL, Firebase, Python, AWS cloud storage, Vercel, NextUI
Workflow — Jira, Notion, Slack, GitHub
The Product
Check out the live website now! — d2dcure
User Research
Our main users for this project were students (submitting data), faculty (curating data), and the public (viewing characterized data). We first conducted a competitive analysis of data visualization interfaces from various companies to identify the most effective practices and adapt them to our designs to meet our specific needs.
We also conducted a total of 9 user interviews with students and lab faculty. We asked questions to uncover their daily use cases on the website and understand their workflows. To stay within scope and record our interview feedback, we created a product requirements document and categorized features and suggestions by priority. Here are some key quotes we gathered from our interviews:
Previous Website
Define
To synthesize the insights from our user interviews, we established three main pain points as the focus of our redesign: Organization, Communication, and Error Handling.
To pinpoint where the previous website could be improved, we created a comprehensive information architecture of the existing site to thoroughly understand its content structure. This process allowed us to evaluate the overall layout, identify strengths, and highlight areas for improvement. The D2DCure website presented a significant challenge due to its complex information architecture, featuring multiple user flows, numerous pages, and scattered information. This was one of the most demanding aspects of the project, but streamlining this architecture was crucial to ensuring a seamless development process.
Initial Wireframes
Our objective was to develop low-fidelity prototypes to bring our design ideas to life visually. A significant challenge we encountered was finding the optimal layout that not only met our users’ needs effectively but also ensured consistency and coherence across all pages. It was essential for us to maintain a uniform design to facilitate seamless navigation and enhance the overall user experience throughout the website.
Mid-Fidelity Wireframes
For the mid-fidelity phase, we focused on integrating the site’s functionalities with an optimized layout and establishing how data information would be visually stored. This involved our cross-functional collaboration with the developers to determine the most logical and efficient solutions for both our users and the backend of the website.
Some major design decisions and hierarchies we established were a sidebar layout, data tables for input and viewing, breadcrumbs, status tags, and search and filter bars.
User-Testing
To determine if research students could navigate and comprehend each element of the new website, we tested our designs with 6 researchers through both individual and group user testing sessions.
Key insights and changes:
- High success rate for navigation & task completion
- Users enjoyed new features and found interactions intuitive
- Users suggested resizing data tables, adding more resources, and reducing the notification system to just emails
- Some features users suggested were recorded in our notes but deemed out of scope for this timeline
Design System
Our main goal was to enhance the UI with a more modern aesthetic that improves user experience while preserving the core functionalities of the application. We utilized components from the NextUI library and customized them to fit our unique functionality needs.
The subtle drop shadows and rounded elements establish a contemporary visual identity, and the brightened primary blue creates a more vibrant interface that emphasizes key interactions.
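As an illustration of how such a customization might be wired up, here is a minimal sketch of overriding NextUI's primary color through its Tailwind plugin; the file layout, content globs, and hex value are assumptions, not our exact configuration:

```ts
// tailwind.config.ts — sketch of theming NextUI's primary color.
import { nextui } from "@nextui-org/react";

export default {
  content: [
    "./src/**/*.{js,ts,jsx,tsx}",
    // NextUI's component styles must be included in the content globs.
    "./node_modules/@nextui-org/theme/dist/**/*.{js,ts,jsx,tsx}",
  ],
  plugins: [
    nextui({
      themes: {
        light: {
          colors: { primary: "#06b7db" }, // hypothetical brightened primary blue
        },
      },
    }),
  ],
};
```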
Final Designs
During the high-fidelity phase, designs were refined with user suggestions in mind, ensuring consistency across the design system. The filter system was improved to better meet user needs and touchpoint sizing was corrected to meet accessibility standards. Additionally, we made annotations on our design and components to facilitate a smooth handoff and implementation of the final designs.
Homepage
Data Submission
The sidebar layout keeps key enzyme info on one side, with the main data input table on the right. Status tags indicate which fields have been completed, and selecting the pencil icon opens a dedicated input/edit window for each field.
Data Curation
Faculty can approve, reject, or request revisions on one or multiple variant profiles. Status tags update automatically based on faculty actions, enhancing communication and clarity across approval stages. Users can also search and filter to find specific datasets more easily.
Data Characterization
The characterization page features the same sidebar layout, including a WCAG-compliant color key, additional enzyme details, and a main data table on the right. Enhancements include a color key toggle to reduce visual noise and a filter panel for easy dataset search.
Login and Registration
One of our key tasks for the project was overhauling the login and registration system to make it secure, seamless, and user-friendly. To manage authentication, we used Firebase, which made implementing features like account creation, password management, and role-based access super efficient. Users can now easily create an account, but to preserve the previous approval process, we added admin approval for new registrations. This ensures only the right people — students, faculty, and admins — gain access to the platform.
We also introduced a “Sign in with Google” option to make the login process faster and hassle-free, especially for students. For those who forget their password (because, let’s face it, it happens!), we set up an email-based reset process so they can get back into their accounts quickly. Since the platform has different types of users with specific needs, we incorporated multi-level user rendering. This ensures that everyone, whether students submitting data, faculty curating it, or public users viewing it, sees only what’s relevant to them. It was a fun challenge combining usability and security, but the end result worked out great!
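For a sense of what this looks like in code, here is a minimal sketch of the Google sign-in and password-reset flows using the Firebase Web SDK's modular API; the config object is a placeholder, and the post-login approval check is only hinted at:

```ts
// auth.ts — sketch of Google sign-in and email-based password reset with Firebase.
import { initializeApp } from "firebase/app";
import {
  getAuth,
  GoogleAuthProvider,
  signInWithPopup,
  sendPasswordResetEmail,
} from "firebase/auth";

const app = initializeApp({
  /* your Firebase project config */
});
const auth = getAuth(app);

// "Sign in with Google" via a popup window.
export async function signInWithGoogle() {
  const credential = await signInWithPopup(auth, new GoogleAuthProvider());
  // In our flow, a new account would still wait on admin approval and a
  // role lookup before seeing any data (not shown here).
  return credential.user;
}

// Email-based password reset for users who forget their password.
export async function resetPassword(email: string) {
  await sendPasswordResetEmail(auth, email);
}
```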
Dashboard
The dashboard for our platform was developed to put all the important information and tools right at the user’s fingertips. It provides quick access to previously submitted variants and gel images, so users don’t need to dig through pages to find their data. This streamlines their workflow and helps them focus on what matters most.
We also added action buttons to make common tasks like submitting new data or uploading gel images intuitive and fast. Insights are displayed upfront, giving users a snapshot of their activity and submissions. For any questions, an FAQs section is easily accessible directly from the dashboard too.
Data Submission Pages
Another pivotal task was to revamp and optimize the data submission process for both single variant and wild type submissions. This required translating PHP functionalities, such as kinetic assay data, melting points, and protein expression checklists, into TypeScript. Our aim was to uphold the original application’s functionality and enhance its performance.
We developed an 11-step workflow specifically for single variant submissions, enabling selective database queries to optimize each upload step. Metadata extraction and integration were also implemented to provide users with key insights, along with features like adding teammates and instructor comments. To top it off, we added built-in error detection and pre-designed templates to help users identify and address issues early in the process.
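As an example of what we mean by selective queries, here is a hypothetical Prisma call in which a single step fetches only the columns it renders rather than the entire variant record; the model and field names are illustrative, not our actual schema:

```ts
// sketch: a step-scoped fetch that selects only the fields one step needs.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// e.g. the kinetic assay step needs only these columns, not the whole record.
export async function getKineticAssayStep(variantId: number) {
  return prisma.singleVariant.findUnique({
    where: { id: variantId },
    select: {
      id: true,
      kineticAssayData: true, // illustrative field names
      status: true,
    },
  });
}
```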
One notable challenge we encountered was managing file storage, particularly for CSV files containing data and PNG images of graphs. In the previous setup, files were locally stored within the codebase and managed via GoDaddy for uploading, storing, and retrieval. However, transitioning to Vercel for hosting necessitated a different approach to file storage.
After thorough evaluation, we opted to leverage AWS S3 buckets. This decision was based on several factors including ease of setup, flexible pricing models, and our existing utilization of AWS for hosting our MySQL database. By integrating AWS S3, we not only resolved our immediate file storage requirements but also ensured a scalable solution aligned with future development needs.
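A sketch of how such an upload might look with the AWS SDK v3; the bucket name, region, and key scheme are assumptions for illustration:

```ts
// s3.ts — sketch of uploading a submission file (CSV or PNG) to an S3 bucket.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-west-1" }); // region is illustrative

export async function uploadFile(
  key: string,
  body: Buffer,
  contentType: string // e.g. "text/csv" or "image/png"
) {
  await s3.send(
    new PutObjectCommand({
      Bucket: "d2dcure-uploads", // hypothetical bucket name
      Key: key,
      Body: body,
      ContentType: contentType,
    })
  );
  return key; // the stored key can be saved alongside the database record
}
```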
Data Curation
In a parallel effort, we recreated the curation pages by migrating functionality from PHP to our React application. This included reconstructing the multi-stage approval process for database entries, ensuring actions could pass through specific states such as In Progress, Pending Approval, Needs Revision, Approved, Awaiting Replication, and PI Approved. We implemented distinct role-based actions for both admin and professor users, seamlessly integrating this logic with our new Firebase authentication system to provide tailored views and actions based on user roles.
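One way to picture the approval flow is as a typed state machine. The sketch below uses the states named above, but the transition rules and role split are simplified illustrations rather than the exact production logic:

```ts
// sketch: approval statuses as a union type plus a role-aware transition check.
type ApprovalStatus =
  | "In Progress"
  | "Pending Approval"
  | "Needs Revision"
  | "Approved"
  | "Awaiting Replication"
  | "PI Approved";

type Role = "admin" | "professor";

// Which target statuses each role may move an entry into (illustrative rules).
const allowedTransitions: Record<Role, Partial<Record<ApprovalStatus, ApprovalStatus[]>>> = {
  professor: {
    "Pending Approval": ["Approved", "Needs Revision"],
    "Awaiting Replication": ["PI Approved"],
  },
  admin: {
    "In Progress": ["Pending Approval"],
    "Pending Approval": ["Approved", "Needs Revision"],
  },
};

export function canTransition(role: Role, from: ApprovalStatus, to: ApprovalStatus): boolean {
  return allowedTransitions[role][from]?.includes(to) ?? false;
}
```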
Similar to our approach with the BglB Characterization Page, we optimized performance by fetching all data at once, eliminating excessive database calls and page reloads. Previously, switching between admin and professor views would trigger a page reload and data refetch. With our implementation, this transition became much smoother and instantaneous.
Our primary challenge centered around ensuring the accuracy of the two-step approval flow, ensuring consistency between the page interface and the underlying database operations. Through meticulous testing and refinement, we successfully replicated this functionality.
Data Characterization
One of our primary tasks was to redesign and update the main BglB Characterization Data page. This involved migrating existing PHP code and functionalities to React/TypeScript. Key features included fetching data for all variants and presenting them in a table format, with capabilities for filtering by Rosetta numbering, curated data, and institution. Users were also able to navigate directly to specific rows, and we implemented color coding for table cells.
Our approach aimed at enhancing performance was notable. Initially, applying a filter in the old system triggered a reload of the entire page with the filter as a query parameter. In contrast, our revamped approach involved fetching all data during the initial load and applying filters client-side. Although this increased the initial load time slightly, it significantly improved user experience by enabling instant application of subsequent filters without needing to reload data or the page.
One significant challenge we encountered was implementing column sorting functionality. Originally, users could sort each column in ascending or descending order by triggering data refetching with sorting criteria as a query parameter. Replicating this server-side posed difficulties, leading us to opt for a client-side approach for sorting as well. By adding a few lines of code to manage sorting criteria during table rendering, we successfully integrated this feature into the application.
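Those few lines looked roughly like the following sketch: the full dataset stays in state, and a copy is sorted at render time based on the selected column and direction (the row shape and field handling here are generic and illustrative):

```ts
// sketch: client-side column sorting over rows already fetched into state.
type SortDirection = "asc" | "desc";
type Row = Record<string, string | number>;

function compareValues(a: string | number, b: string | number): number {
  if (typeof a === "number" && typeof b === "number") return a - b;
  return String(a).localeCompare(String(b)); // fall back to string comparison
}

export function sortRows(rows: Row[], key: string, direction: SortDirection): Row[] {
  const sorted = [...rows].sort((a, b) => compareValues(a[key], b[key]));
  return direction === "asc" ? sorted : sorted.reverse();
}

// usage: render sortRows(variants, "yield", "desc") without refetching anything
```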
Rendering Graphs & Tables
We also had to migrate the graphing functionality from the old PHP-based site, which relied on Python scripts for generating graphs. A significant challenge we faced was the integration of these Python scripts into our new React frontend environment.
To address this challenge, we hosted the Python scripts on a Flask server, which acted as a backend service handling requests from our React frontend. This approach allowed us to maintain the existing graphing logic while seamlessly integrating it into our new technology stack. We also used AWS S3 cloud storage for the rendered graphs because of its scalability and reliability compared to local storage solutions.
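From the frontend's perspective, requesting a graph is just an HTTP call to the Flask service; the endpoint URL and payload shape below are assumptions for illustration:

```ts
// sketch: the React frontend asks the Flask service to render a graph.
export async function fetchGraphUrl(variantId: string): Promise<string> {
  const res = await fetch("https://graphs.example.com/plot", {
    // hypothetical Flask endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ variantId }),
  });
  if (!res.ok) throw new Error(`Graph service error: ${res.status}`);
  const { url } = await res.json(); // assumed response: S3 URL of the rendered PNG
  return url;
}
```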
In addition to the graphing functionality, we needed to fetch the data associated with a single variant and display it alongside the graphs. Since most of the other APIs we built fetched the entire dataset, we created a new API that fetches only a single variant's data to support this functionality. The retrieved data is then parsed using string parsing techniques and stored for later processing.
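A minimal sketch of such an endpoint as a Next.js API route backed by Prisma; the route path, model, and field names are illustrative:

```ts
// pages/api/variant.ts — sketch of a single-variant endpoint.
import type { NextApiRequest, NextApiResponse } from "next";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const id = Number(req.query.id);
  if (Number.isNaN(id)) return res.status(400).json({ error: "Invalid id" });

  // Fetch one record instead of the whole dataset.
  const variant = await prisma.singleVariant.findUnique({ where: { id } });
  if (!variant) return res.status(404).json({ error: "Variant not found" });

  res.status(200).json(variant);
}
```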
Gel Image Uploads
We later had to migrate gel image uploads from the old site, which presented challenges related to the storage and organization of these files. To address this, we opted to store the images on AWS and encode metadata directly in the file names (e.g., 2024-04-11_yourUsername_GELID.png). This approach proved effective and efficient.
A significant takeaway from this experience was the realization that sometimes the simplest method, such as embedding metadata directly into file names, can be the most efficient solution. By structuring the file names with relevant metadata, we streamlined the organization and categorization process without the need for complex additional systems or databases.
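The naming scheme is simple enough to build and parse with a couple of helpers; this sketch assumes usernames never contain underscores:

```ts
// sketch: encode and decode gel-image metadata in the file name itself.
export function buildGelImageKey(username: string, gelId: string, date = new Date()): string {
  const stamp = date.toISOString().slice(0, 10); // e.g. "2024-04-11"
  return `${stamp}_${username}_${gelId}.png`;
}

export function parseGelImageKey(key: string) {
  // assumes the username itself contains no underscores
  const [date, username, gelId] = key.replace(/\.png$/, "").split("_");
  return { date, username, gelId };
}
```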
Error Consistency
We prioritized ensuring consistent error handling across the platform to address potential issues such as database connection failures, API errors, or access restrictions. We implemented a system that included developing modular error components for every type of issue, ensuring uniformity across modals and error pages. These components allowed students to immediately understand the problem, view actionable steps, and access retry mechanisms when appropriate.
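A modular error component along those lines might look like this sketch; the props and copy are illustrative:

```tsx
// ErrorCard.tsx — sketch of a reusable error component with actionable steps.
interface ErrorCardProps {
  title: string; // e.g. "Database connection failed"
  steps: string[]; // actionable steps shown to the user
  onRetry?: () => void; // retry mechanism, rendered only when appropriate
}

export function ErrorCard({ title, steps, onRetry }: ErrorCardProps) {
  return (
    <div role="alert">
      <h2>{title}</h2>
      <ul>
        {steps.map((step) => (
          <li key={step}>{step}</li>
        ))}
      </ul>
      {onRetry && <button onClick={onRetry}>Try again</button>}
    </div>
  );
}
```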
Additionally, we integrated a bug-reporting system, which we discuss in more detail below. It enabled users to submit screenshots of errors directly through the platform. These implementations were designed to streamline debugging processes, reduce resolution times for the upkeep team, and ensure consistent handling of the vast range of issues users might encounter.
Bug Reports
As part of our project, we developed a bug reporting system that simplifies how bugs are submitted and managed. The system allows users to fill out a form with all the details about the issue, including their contact information, a description, and screenshots. Once the form is submitted, the system automatically generates a clean, structured email with all the details and a unique reference ID, so the client has everything they need at a glance.
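Here is a sketch of how the reference ID and structured email body might be assembled; the ID format and field layout are assumptions, not the exact production output:

```ts
// sketch: turn a submitted bug report into a structured email with a reference ID.
import { randomUUID } from "crypto";

interface BugReport {
  reporterEmail: string;
  description: string;
  screenshotUrls: string[];
}

export function formatBugReportEmail(report: BugReport) {
  const referenceId = `BUG-${randomUUID().slice(0, 8).toUpperCase()}`; // hypothetical format
  const body = [
    `Reference ID: ${referenceId}`,
    `Reporter: ${report.reporterEmail}`,
    `Description: ${report.description}`,
    `Screenshots: ${report.screenshotUrls.join(", ")}`,
  ].join("\n");
  return { referenceId, subject: `[D2DCure] Bug report ${referenceId}`, body };
}
```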
We also added features to make the process more efficient for clients. A quick-reply option lets them respond easily with a pre-written template, and a ticketing system keeps track of all the submitted reports in one place. This setup helps ensure that every bug is addressed quickly and nothing gets lost in the process. It’s a simple, automated solution that works well for managing issues.
Responsiveness
To top it all off, we ensured that every page, interaction, and modal in the system is fully responsive. Whether it’s the bug report form, the email view, or the ticketing system, everything adapts seamlessly to any screen size. From desktops to tablets to smartphones, the system is designed to be accessible and functional across all devices, ensuring users and clients can interact with it effortlessly, no matter where they are or what they’re using.
Next steps
We will continue beta testing our application with students and faculty at the Siegel Lab to finalize its functionalities and fix any bugs. Additionally, we will wrap up curating the documentation content that will be available for access. Once ready, we will migrate the old application to our new one and prepare to launch, hopefully by the end of the year!
Challenges
Timeline
For such a large-scale project, one of our main struggles was planning out our timeline. We tackled this by approaching the project in three stages: first, gaining a thorough understanding of the project brief and the lab; next, rebuilding the website in the new codebase; and lastly, implementing additional features based on our user research. While we've made substantial progress, we are taking extra time, as mentioned, to further flesh out new features and the overall frontend design.
Understanding Complex Concepts
A large hurdle of this project was the specialized and complex nature of our client’s work. Before we could dive into the development or design, we first had to understand the background of their enzyme research and how they use the website to submit their data. Once we accomplished this, it made the existing code easier to understand and rebuild, and it clarified the areas for improvement for design and product features.
Prioritizing Features
While our user interviews were extremely valuable for understanding our users, we had to distinguish which suggestions were absolutely necessary versus nice-to-haves. Creating a product requirements document allowed us to categorize features from lowest to highest priority and ensured we had a defined end goal that was realistic for our short timeline.
Closing Remarks ✨
We would like to give a special thank you to Ashley Vater from the Siegel Lab for being an amazing client and working with us this year. We look forward to contributing to an application that will be used by lab faculty and students across 40+ institutions.