Volare
Introduction
Preparing for a job interview can be daunting. Imagine spending hours researching potential questions, practicing answers, and wondering if you’re focusing on the right skills — only to walk into the room (or log into a virtual platform) and face questions you never expected. The anxiety, the guesswork, and the lack of tailored feedback leave many candidates feeling stuck and underprepared.
With the rise of online interviews on platforms like HireVue, where algorithms analyze everything from your word choice to your facial expressions, the stakes are higher than ever. Yet, traditional resources like generic guides or mock interviews fall short because they don’t adapt to your unique needs or the specifics of the job you’re pursuing.
That’s where Volare comes in. Designed to empower job seekers, Volare is an AI-powered interview preparation platform that combines the power of advanced language models with interactive, personalized coaching. Whether you’re preparing for a technical role, a behavioral interview, or industry-specific challenges, Volare tailors its feedback to your goals — helping you navigate the toughest questions with confidence and precision.
In this article, we’ll take you behind the scenes of how we built Volare from the ground up — showcasing the product’s evolution through low-, mid-, and high-fidelity designs, technical architecture, and thoughtful marketing strategies. You’ll get a closer look at the platform’s core features, including AI-powered mock interviews, real-time feedback, resume parsing, and adaptive analytics. We’ll also highlight how we integrated tools like Groq, Hume AI, and ElevenLabs to make the experience feel human, immersive, and job-specific. From sleek visual mockups and onboarding flows to branding direction and Instagram-inspired marketing ideas, this is a deep dive into how Volare was designed to empower job seekers — and how we plan to launch it into the real world.
The Team
Timeframe
October 2024 — June 2025 | 22 weeks
Tools
Design — Figma
Development — Next.js, ElevenLabs, Hume AI, Groq, Tailwind CSS, ShadCN, Supabase, Docker, TypeScript
Maintenance — Jira, Notion, Slack, GitHub
User Research
Identifying User Pain Points
Our initial vision for Volare was to build an AI-driven platform that supported interview preparation for users at all career stages, from entry-level to senior professionals. However, the key challenge emerged early: how to replicate the nuances of a real interview while addressing the diverse needs and comfort levels of users. To tackle this, we conducted extensive research, including literature reviews, competitive analysis, and user interviews and surveys, which highlighted the need to balance advanced AI features with authenticity and accessibility.
Our findings revealed that regular AI users appreciated personalized feedback and scalable tools but still sought human-like interaction, while AI skeptics doubted the accuracy of feedback for soft skills and the realism of simulations. Additionally, users across the spectrum noted that existing platforms failed to provide meaningful feedback on non-verbal cues and lacked natural flow. Recognizing the most urgent needs among entry-level users — college students and recent graduates navigating their first job searches — we narrowed our focus to this demographic. This shift allowed us to develop a more targeted and impactful solution.
Design
Design Project Summary
Volare’s design team plays a critical role in crafting an intuitive, accessible user experience. From understanding our users and their needs (anticipated and unanticipated) to translating research into compelling interfaces, the design team is responsible for making sure Volare is not only a functional service, but one that ultimately supports our founding purpose of empowering users to succeed in interviews. By iterating through mid-fidelity and high-fidelity designs, we are able to hone Volare’s stand-out elements, such as the real-time analytics and AI feedback features.
Early Design and Lo-Fi Wireframes
In the early stages of Volare’s development, we faced the core challenge of designing a platform that felt both human-like and effective for preparing users across technical and behavioral interviews. Initially envisioned as a passive chatbot offering career advice and resume tips, Volare quickly evolved based on user feedback. Users wanted a more immersive experience — something that could simulate real interviews and offer actionable, real-time feedback. This led us to shift our focus toward building an interactive AI-powered mock interview tool, complete with behavioral and technical analytics tailored to college students and recent graduates.
Our design process began with low-fidelity wireframes that helped map out basic flows, but they revealed early challenges. We struggled to balance structure with realism — how guided should interviews be? How much should users customize before starting? And how do we visually communicate what matters most on each screen? These early design hurdles, while difficult, were essential in shaping the user-centric, scalable platform that Volare is becoming.
Mid-Fidelity Wireframes
The Volare mid-fidelity wireframes showcase a comprehensive and user-focused design that seamlessly guides users through the AI-powered interview preparation process. The wireframes outline key features and user flows, starting with a clean landing page that introduces Volare and prompts users to sign up or log in. The onboarding process follows, where users provide their career field, job position, and interview challenges to tailor their experience. The dashboard, a central hub, features intuitive navigation for accessing session history, analytics, AI tips, and settings.
Users can initiate a new interview session by answering prompts about their career field and goals, allowing Volare to generate relevant questions. During the interview, real-time AI-powered feedback — such as response quality, potential language errors, and communication scores — appears on-screen, enhancing the user’s understanding of their performance. The session history page enables users to review past interviews, access AI-generated feedback summaries, and pinpoint areas for improvement. The analytics page features interactive graphs that visually track progress over time, empowering users to measure performance metrics like response quality, general attitude, and time management.
High-Fidelity Wireframes
Following extensive user feedback and internal design reviews, we transitioned from our mid-fidelity wireframes to a refined high-fidelity design system that emphasizes clarity, accessibility, and brand identity.
One of the most noticeable changes was the shift in color scheme — moving from a standard neutral palette to a bold, modern combination of deep purple and sleek black, enhancing both visual appeal and focus.
We redesigned the mock interview interface to create a more immersive experience: the AI interviewer now occupies the central screen, while the user’s live camera feed is positioned neatly in the top-right corner, mimicking the layout of real virtual interviews. This change helps users better visualize the dynamics of an actual remote interview setting. We also added a minimalist Volare logo throughout the platform to reinforce brand consistency and give the application a polished, professional look.
We also expanded the platform’s functionality with new pages designed to provide additional support and customization. A new Resources page offers users curated articles, interview tips, and resume guides — making Volare not just a practice tool, but a comprehensive interview prep companion.
We introduced a dedicated Account Settings page where users can manage personal information, privacy preferences, and notification settings. Together, these high-fidelity enhancements ensure that users feel supported, seen, and prepared throughout their journey.
Overall Development Process
User Interface
To promote the best possible user interface, we aimed to make the front end as accessible as possible. With Next.js and Tailwind handling primary frontend development, users can access the product with ease. Gradient buttons that prompt the user where to click and visit make the flow of the program clear, and sign-in/sign-up capabilities let users save progress to their profile, which facilitates a better experience. Throughout the onboarding and interview creation process, users are clearly shown which prompts to answer and where, streamlining the input and output aspects.
Reusable product components keep buttons, logos, fields, and more consistent throughout the application, helping users build familiarity with how to navigate to their desired section. A persistent sidebar with direct links to the Dashboard, Session History, Resources, Settings, and more ensures users can reach every page at all times. Dynamic, user-responsive components keep the user interface fully functional and heighten the end-user experience.
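As an illustration, here is a minimal sketch of that shared sidebar pattern. The link labels match the app, but the file name, routes, and styling are assumptions rather than our exact implementation:

// Sidebar.tsx: illustrative sketch of a shared navigation component.
// Link labels match the app; routes and styling are assumptions.
import Link from "next/link";

const links = [
  { href: "/dashboard", label: "Dashboard" },
  { href: "/sessions", label: "Session History" },
  { href: "/resources", label: "Resources" },
  { href: "/settings", label: "Settings" },
];

export default function Sidebar() {
  return (
    <nav className="flex h-screen w-56 flex-col gap-2 bg-black p-4">
      {links.map(({ href, label }) => (
        <Link
          key={href}
          href={href}
          className="rounded-lg px-3 py-2 text-white hover:bg-purple-800"
        >
          {label}
        </Link>
      ))}
    </nav>
  );
}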
Onboarding Section
To deliver a more personalized user experience, our platform prompts users upon their first login to share their personal interests, past experiences, and upload their resume. This information is processed by a large language model (LLM), which then generates relevant questions and suggestions to better align with each user’s background and goals.
For resume handling, we use the PDF-Parser library to extract text content from uploaded PDF files. The parsed text is stored in a dedicated database column for structured querying. Additionally, a copy of the original resume is securely stored in a Supabase Bucket (an S3-like storage solution) for future reference. To protect user privacy, we enforce strict access policies that allow only the authenticated owner to access their file.
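A rough sketch of this flow, assuming a private resumes bucket and a resume_text column on the user’s profile row (the names are illustrative, not our exact schema):

// Illustrative sketch: parse an uploaded resume, store the text for querying,
// and keep the original PDF in a private bucket. Names are assumptions.
import pdf from "pdf-parse";
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

export async function handleResumeUpload(userId: string, file: Buffer) {
  // Extract raw text from the PDF for structured querying
  const { text } = await pdf(file);

  // Store the parsed text in a dedicated column on the user's row
  await supabase.from("profiles").update({ resume_text: text }).eq("id", userId);

  // Keep the original file in a private bucket; bucket policies restrict
  // access to the authenticated owner
  await supabase.storage
    .from("resumes")
    .upload(`${userId}/resume.pdf`, file, { contentType: "application/pdf", upsert: true });
}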
This feature distinguishes our platform from other applications by delivering highly relevant, personalized questions that vary with each user’s resume. It mirrors the natural flow of real-world interviews, where recruiters review a candidate’s resume and ask context-specific questions, contributing to a more authentic and effective interview preparation experience.
AI Interview
The user’s process begins with job details entered through a form: job title, field, experience level, job description text, desired interview type (behavioral or technical), and any specific challenges. The system also uses the user’s resume text, stored in Supabase during onboarding.
From this input, the AI (Mistral via Groq) generates 5–7 interview questions based on the job details and the candidate’s resume. Behavioral interviews focus on resume experiences; technical interviews focus on the technologies and skills in the job description.
Once the questions are ready, the user enters a call interface, where an AI interviewer (powered by ElevenLabs) guides them through the questions. During the call, the user’s audio and video are recorded, capturing both responses and expressions.
After the interview, the recording uploads to the backend for processing. Hume AI measures expression, analyzing prosody and facial expressions from the video; this analysis segments the recording into utterances and assesses their emotional content.
Finally, a feedback report is generated by sending the analyzed interview data (spoken responses and emotion scores) to another AI model (Gemini 2.0 Flash), which acts as an interview evaluator. It provides improvement suggestions for content and tone, advice on displaying emotion, and numerical scores for overall performance, clarity, content, STAR method implementation (where applicable), and relevance. The feedback and scores are stored and surfaced on the session history page, where users can review their performance and the AI’s analysis.
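Stepping back to the question-generation step, here is a minimal sketch using Groq’s OpenAI-compatible chat API. The model name and prompts are placeholders, not our production values:

// Illustrative sketch of question generation via Groq.
// Model name and prompts are placeholders for our production setup.
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

export async function generateQuestions(
  jobDetails: string,
  resumeText: string,
  type: "Behavioral" | "Technical"
): Promise<string> {
  // Behavioral questions lean on the resume; technical ones on the job description
  const focus =
    type === "Behavioral"
      ? `the candidate's experiences on this resume:\n${resumeText}`
      : `the technologies and skills in this job description:\n${jobDetails}`;

  const completion = await groq.chat.completions.create({
    model: "mixtral-8x7b-32768", // placeholder model id
    messages: [
      { role: "system", content: "You are an interviewer. Return 5-7 numbered questions and nothing else." },
      { role: "user", content: `Write interview questions focused on ${focus}` },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}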
Dashboard Population & Graphs
Volare’s dashboard is a React-based interface that tracks interview performance metrics and provides analytics visualization. The implementation focuses on handling complex session data, calculating performance trends, and rendering interactive graphs through a tabbed interface.
The core challenge was managing multiple data streams simultaneously — user authentication, session history, performance metrics, and UI state. We structured this using separate state hooks for each data type, with a main fetchData function orchestrating all the asynchronous operations. The state management includes arrays for session data, objects for score calculations, and simple strings for tab navigation.
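In outline, the orchestration looks something like this (a simplified skeleton using the SessionData shape introduced below; supabase is an assumed shared client, and toSessionData is the mapping shown later in this section):

// Simplified skeleton of the dashboard's state and data orchestration.
// Names are representative; `supabase` is an assumed shared client.
import { useEffect, useState } from "react";

function Dashboard() {
  const [sessions, setSessions] = useState<SessionData[]>([]);
  const [activeTab, setActiveTab] = useState<"overview" | "analytics">("overview");
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    const fetchData = async () => {
      try {
        // Resolve the authenticated user, then load their session history
        const { data: { user } } = await supabase.auth.getUser();
        if (!user) return;
        const { data } = await supabase
          .from("sessions")
          .select("*")
          .eq("user_id", user.id)
          .order("created_at", { ascending: true });
        setSessions((data ?? []).map(toSessionData)); // mapping shown below
      } catch {
        setError("Could not load your sessions. Please try again.");
      }
    };
    fetchData();
  }, []);

  // ...tab rendering, metric cards, and graphs follow
}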
interface RawSession {
  id: string;
  user_id: string;
  created_at: string;
  company_name?: string;
  analytics?: {
    clarity_score?: number;
    content_score?: number;
    star_method_score?: number;
    response_relevance_score?: number;
  };
}
This interface handles the raw data coming from Supabase, where the analytics column stores JSONB data with optional scoring metrics. The optional properties are crucial since not every session has complete analytics data.
The data transformation pipeline converts these raw records into display-ready formats. Date processing extracts readable timestamps from ISO strings, while the analytics processing maps JSONB columns into structured metric objects:
interface SessionData {
  clarity_score: number;
  content_score: number;
  star_method_score: number;
  response_relevance_score: number;
  date: string;
}
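That transformation amounts to a small mapping function with zero-fallbacks for incomplete analytics, roughly:

// Maps a raw Supabase record to the display-ready shape.
// Falls back to 0 when a session's JSONB analytics are missing a metric.
const toSessionData = (raw: RawSession): SessionData => ({
  clarity_score: raw.analytics?.clarity_score ?? 0,
  content_score: raw.analytics?.content_score ?? 0,
  star_method_score: raw.analytics?.star_method_score ?? 0,
  response_relevance_score: raw.analytics?.response_relevance_score ?? 0,
  // ISO timestamp -> YYYY-MM-DD for grouping and axis labels
  date: raw.created_at.slice(0, 10),
});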
Performance change calculations compare the latest session against the previous one to show improvement trends. The getChange function handles cases where users might have incomplete data or only one session, defaulting to zero when calculations aren't possible.
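A minimal version of that guard:

// Change between the two most recent sessions for one metric;
// defaults to 0 when a value is missing or only one session exists.
const getChange = (latest?: number, previous?: number): number =>
  latest == null || previous == null ? 0 : latest - previous;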
The dashboard layout uses CSS Grid for responsive metric cards that adapt from single-column mobile to four-column desktop layouts. The main content area splits into a 9–7 column ratio, with session history taking the larger portion and AI tips in the sidebar. This grid system maintains consistent spacing while handling different screen sizes.
Tab navigation switches between overview and analytics views using conditional rendering. The active tab state controls which components mount, with the overview showing metric cards and session lists, while analytics displays the performance graph. Each tab maintains its own styling state with dynamic border colors.
The tips system generates personalized recommendations by extracting improvement suggestions from the latest session data. When no session data exists, it falls back to generic interview advice. This ensures new users still receive value while experienced users get targeted feedback.
Error handling wraps the entire data fetching process, displaying user-friendly messages when API calls fail or data processing encounters issues. The loading states prevent the interface from breaking while asynchronous operations complete.
Database integration with Supabase handles the JSONB analytics column extraction, safely accessing nested properties with fallbacks for missing data. The session history extraction processes timestamps, company names, and analytics scores into the required interface formats.
The performance graph component receives the processed session metrics array and renders trend lines for each scoring category. This visualization helps users understand their improvement patterns across multiple interview sessions using ReactECharts with custom configuration.
The component leverages ReactECharts to create an interactive performance visualization, receiving sessions: SessionData[] containing clarity, content, STAR method, and response relevance scores. It uses useState for chart type selection (day, week, or month views), loading states, and ECharts options. A chartKey state increments on chart type changes to force complete component re-renders and prevent visual artifacts.
The data processing pipeline groups sessions by date and calculates averages when multiple sessions exist on the same day. Time-based filtering works differently for each view: day view shows today’s sessions or displays individual sessions if multiple exist, week view filters the last 7 days using date range calculations, and month view shows current month’s data using proper month boundaries. All filtered data gets chronologically sorted for proper timeline visualization.
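The grouping step, sketched against the SessionData shape above:

// Group sessions by date and average each metric when a day has several sessions
const metrics = [
  "clarity_score",
  "content_score",
  "star_method_score",
  "response_relevance_score",
] as const;

function averageByDate(sessions: SessionData[]): SessionData[] {
  const byDate = new Map<string, SessionData[]>();
  for (const s of sessions) {
    byDate.set(s.date, [...(byDate.get(s.date) ?? []), s]);
  }
  return [...byDate.entries()]
    .map(([date, group]) => {
      const averaged = { date } as SessionData;
      for (const m of metrics) {
        averaged[m] = group.reduce((sum, s) => sum + s[m], 0) / group.length;
      }
      return averaged;
    })
    .sort((a, b) => a.date.localeCompare(b.date)); // chronological order for the timeline
}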
// Switch between day/week/month views; bump chartKey to force a re-mount
const handleChartTypeChange = (type: 'day' | 'week' | 'month') => {
  setChartKey(prevKey => prevKey + 1); // new key -> React re-creates the chart
  // Clear any existing ECharts instance so stale series don't linger
  if (chartInstance.current && chartInstance.current.getEchartsInstance) {
    const echartsInstance = chartInstance.current.getEchartsInstance();
    echartsInstance.clear();
  }
  setChartType(type);
};
The ECharts configuration uses a multi-line chart with smooth curves and circular markers. Custom colors are applied to each metric: #4486FF, #2683F6, #69BCFF, and #6344FF. Animation includes staggered line drawing with 1000ms duration and cubic-out easing. The Y-axis maintains a fixed 0–100 scale with 20-point intervals for consistent visual reference, while the tooltip uses a custom formatter showing metric values with color-coded indicators.
// One line series per metric, colored and animated consistently
series: metrics.map((metric, index) => ({
  name: metricNames[index],
  type: 'line',
  data: filteredData.map(item => [
    // day view plots individual sessions; other views plot by date
    chartType === 'day' && 'session' in item ? item.session : item.date,
    item[metric]
  ]),
  symbolSize: 10,
  symbol: 'circle',
  smooth: true,
  lineStyle: {
    width: 3,
    color: colors[index]
  },
  animationDuration: 1000,
  animationEasing: 'cubicOut',
  animationDelay: (idx: number) => idx * 100 // staggered line drawing
}))
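Around that series array sit the remaining option and renderer settings. Abbreviated here as a sketch: series and chartKey come from the snippets above, and the custom tooltip formatter is omitted:

import ReactECharts from "echarts-for-react";

const option = {
  grid: { containLabel: true }, // keep axis labels inside the canvas
  xAxis: { type: "category", axisLabel: { interval: "auto" } }, // avoid label overlap
  yAxis: { type: "value", min: 0, max: 100, interval: 20 }, // fixed 0-100 scale
  tooltip: { trigger: "axis" }, // custom color-coded formatter omitted here
  series, // the metric lines mapped above
};

export const PerformanceChart = () => (
  <ReactECharts
    key={chartKey} // bumped on view changes to force a re-mount
    option={option}
    opts={{ renderer: "canvas" }} // canvas rendering for larger datasets
    lazyUpdate={false} // apply option changes immediately
    style={{ height: "55vh", width: "100%" }}
  />
);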
Performance optimizations include canvas rendering for better performance with large datasets and lazy update disabled to ensure immediate visual updates. The component only renders the chart when data exists and shows loading or empty states otherwise.
Responsive design features include grid configuration with responsive margins and containLabel for proper text display, auto-interval for x-axis labels with overlap prevention, date formatting that converts YYYY-MM-DD to localized short format, and fixed 55vh height with full width scaling.
Component modularity keeps the dashboard maintainable, with separate components for metric cards, session history, AI tips, and performance graphs. Each component receives typed props and handles its own display logic, making the codebase easier to debug and extend.
The styling approach combines Tailwind utility classes with custom CSS for the full-screen dashboard experience. Global styles handle scroll behavior and overflow management to create an app-like interface that fills the viewport without browser scrollbars.
Backend
Data and Information Flow
One of the most critical aspects of Volare was ensuring that information passed through our architecture seamlessly and instantaneously. This information loop began on the interview setup page (a separate initial onboarding loop had its own flow for authentication and database information). When the user entered information about the company and role they were preparing for, it was sent to our Supabase database and to our question creation module on the backend. The question creation module uses a Llama model with inference from Groq to come up with a preliminary question to get the interview going.
Originally we went down the route of statically creating some questions from the resume and job details, but we quickly realized that a dynamic, conversational experience would enhance the quality of service our users would feel. So we integrated an AI agent from ElevenLabs into our backend. The agent let us simulate how a real interviewer would sound, allowed the interviewee to interrupt a question, and even gave us an easy way to transcribe the entire conversation.
Apart from this unique, conversational experience, one of our flagship features was a detailed analysis tool that quantified each practice interview’s performance. Our formula for calculating this performance took into account not only the content of each response, but also whether it followed the STAR formula, the length of the response, and (our secret ingredient) the facial expressions of the interviewee themselves. This was done by integrating a Hume FACS 2.0 model that took frames from the camera to identify 48 different user emotions. We narrowed this list of 48 emotions down to the top 3, sampling at a rate of 1 FPS.
All of this resulted in a sizeable amount of data, and there was no formulaic way to give personalized feedback and scores at scale. To overcome this, we parsed all of the data (emotions, questions, and answers) into a formatted JSON file and passed the entire file to a Gemini model through the Google API. We had predefined what an example structured output should look like, so our LLM knew how to evaluate the various pieces of information, which scores correspond to which level of performance, and what sort of tips to give. Finally, this structured output was parsed and written to our database so it could be displayed on the frontend in a way that was aesthetic and actionable for the user.
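A sketch of that grading call, shown in TypeScript for consistency with this article’s other snippets (our actual backend performs this step in Python). The schema fields mirror the scores described above, and the prompt is illustrative:

// Illustrative sketch of the Gemini grading step (our backend does this in Python).
// Field names mirror the scores we store; the prompt is representative only.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = genAI.getGenerativeModel({
  model: "gemini-2.0-flash",
  generationConfig: { responseMimeType: "application/json" }, // ask for structured output
});

export async function gradeInterview(interviewJson: string) {
  const prompt = `You are an interview evaluator. Given the questions, answers, and
emotion data below, return JSON with clarity_score, content_score, star_method_score,
and response_relevance_score (0-100 each), plus an improvement_suggestions array.

${interviewJson}`;

  const result = await model.generateContent(prompt);
  return JSON.parse(result.response.text()); // parsed, then written to Supabase
}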
Implementation Tips
The FastAPI backend’s main endpoint /recordings/{session_id} handles file uploads and triggers a complex pipeline that includes: processing audio recordings through Hume AI's emotion detection API to analyze prosody and facial expressions, retrieving conversation transcripts via ElevenLabs' conversational AI service, and generating detailed feedback reports using Google's Gemini AI model. A few implementation details are worth highlighting:
- When waiting for Hume AI to complete inference on a video or audio sample, you cannot fetch the predictions until the job is done. The technique we used is to poll the job details at a fixed interval until the job reports COMPLETED. For example, a simple polling implementation:
import asyncio
import requests

# Poll until the inference job finishes
while True:
    job_details = await hume_client.expression_measurement.batch.get_job_details(id=job_id)
    if job_details.state.status == "COMPLETED":
        break
    if job_details.state.status == "FAILED":
        raise RuntimeError(f"Hume inference job {job_id} failed")
    await asyncio.sleep(5)  # Wait 5 seconds before polling again

# Now that the inference job is complete, get the predictions
predictions = requests.get(
    f"https://api.hume.ai/v0/batch/jobs/{job_id}/predictions",
    headers={"X-Hume-Api-Key": HUME_API_KEY},
)
This returns a large number of emotional predictions from the tone and facial data, so to reduce context size and ensure that only the most relevant information reaches the Gemini grading LLM, we keep only the top 3 or so predictions per frame:
"facial_data": [
{
"time": 0.0,
"top_emotions": [
{
"name": "Calmness",
"score": 0.6167358160018921
},
{
"name": "Concentration",
"score": 0.6100975275039673
},
{
"name": "Boredom",
"score": 0.5183489322662354
}
]
},
{
"time": 1.0,
"top_emotions": [
{
"name": "Amusement",
"score": 0.6814385652542114
},
{
"name": "Joy",
"score": 0.6076269149780273
},
{
"name": "Interest",
"score": 0.5855327248573303
}
]
},
- If you ever see a table in Supabase not updating after you call update or upsert, Row-Level Security may be blocking the write: either add a policy that permits the operation or, for development, disable RLS by clicking Edit Table > Disable RLS.
- Gemini’s large context window (up to 2 million tokens on some models) proved very useful for processing these large dictionary files, where other LLMs failed due to smaller context windows.
Challenges
Year-Long Timeline
One of the biggest challenges we faced with Volare was managing the scale and duration of the project. With a development timeline spanning nearly a year, it was difficult to maintain consistent momentum across all phases — from ideation and user research to design, development, and refinement. The long timeline also meant balancing shifting schedules, academic commitments, and varying levels of availability within the team. While progress started off slow in the early quarters, we made significant strides toward the end by establishing clear milestones, increasing communication frequency, and focusing our efforts on completing a strong, functional prototype. Ultimately, our ability to ramp up in the final months allowed us to deliver on our vision.
Maintaining Team Consistency
With a project of this scope, another major hurdle was maintaining a consistent and aligned team throughout the year. As students, we had to work around class schedules, exams, and internship cycles, which made it challenging to stay in sync. Despite these fluctuations, we kept the team grounded through weekly standups, shared documentation in Notion, and clearly defined responsibilities. Having a core group of committed contributors helped carry momentum and ensure that critical work didn’t stall — even when individual availability shifted.
Balancing Vision with Feasibility
As we received feedback and imagined new possibilities for Volare, we had to consistently evaluate what features were essential versus nice-to-have. Our original vision included a wide range of features, from full voice analytics to industry-specific interview flows, but we learned to prioritize by impact and technical feasibility. This meant refining our scope mid-development and focusing on features like role-based question generation, feedback dashboards, and resume parsing — core elements that aligned directly with user needs. These constraints ultimately helped us build a more focused, polished product.
Marketing Strategy
To market Volare effectively, we’re centering our strategy around authenticity, community engagement, and visual storytelling, with a proud emphasis on our roots at UC Davis. As a product built by UC Davis students, Volare will initially launch through partnerships with campus organizations, career services, and student clubs, offering exclusive early access to Aggies preparing for internships and full-time roles. We plan to roll out Instagram carousel posts showcasing interview tips, growth analytics, and mock interview sneak peeks, alongside short-form video content on TikTok and YouTube Shorts that highlights real users and the AI experience in action. By collaborating with peer mentors, TA networks, and UC Davis alumni on LinkedIn, we aim to organically spread the word across both student and professional circles. Our voice is modern, encouraging, and grounded in the college experience, making Volare feel like a natural extension of the UC Davis journey from classroom to career.
Impact
In today’s job market, where competition is fierce, AI tools evaluate candidates, and even entry-level roles demand experience, students are facing more pressure than ever. With thousands of qualified applicants applying for the same positions, simply having a good resume isn’t enough. What often separates successful candidates is their ability to communicate confidently and adapt to unpredictable interview formats, especially in virtual settings. That’s where Volare makes a difference. Built by students who understand these exact challenges, Volare provides a level of preparation that goes beyond surface-level tips, offering tailored, immersive, and data-driven practice to help students stand out. In a market this tough, having a tool like Volare isn’t just helpful — it’s essential.
Closing Remarks
As the year-long journey of building Volare comes to a close, we reflect on just how far this product has come — from a simple concept born out of shared frustration with interview prep, to a fully realized platform powered by AI and shaped by real student experiences. What began with lo-fi sketches and abstract ideas has grown into a sleek, functional tool that delivers personalized, role-specific mock interviews, actionable feedback, and meaningful performance insights.
We’re incredibly proud of what we’ve built — a platform that not only prepares users for interviews, but empowers them to enter them with confidence. Every feature, from our AI-generated questions to the analytics dashboard, was designed with the intention of supporting real people navigating a challenging job market. As students building for students, we know what’s at stake. And now, with Volare complete, we’re excited to see it begin its next chapter — out in the world, helping others land the roles they’ve worked so hard for.