Render
Introduction
For CodeLab’s 2024 Fall cohort, our team had the incredible opportunity to work with Render, delivering two products by the end.
Render is a cloud platform designed to simplify the deployment and management of web applications. It provides a fully managed environment that takes care of infrastructure and scaling, allowing developers to focus on building their projects without dealing with operational complexities.
The platform supports a variety of use cases, including hosting static websites, running background APIs, managing databases, and scheduling recurring jobs. A standout feature is its seamless integration with GitHub repositories, enabling applications to be deployed directly from the codebase with minimal configuration.
By automating much of the deployment process, Render streamlines the workflow for creating and maintaining scalable applications.
The Team
Timeframe
October — December 2024 | 6 weeks
Tools
Design — Figma
Development — Render, PostgreSQL + vector extension, FastAPI, React, Next.js, TypeScript, VSCode extension API
Maintenance — Jira, Notion, GitHub, Slack
The Project
We were lucky enough to be commissioned to create two projects for Render: a VS Code extension for developers and a chat guide to introduce new users to the platform.
Project 1: VS Code Extension
The first part of our project focused on creating a Visual Studio Code (VS Code) extension tailored to improve the developer experience for those using Render. VS Code is a popular code editor known for its support of multiple programming languages, built-in debugging tools, and extensive library of extensions that allow developers to customize their environment to suit their workflow.
One of the challenges we aimed to address was the fragmented workflow that comes from Render’s current setup. While Render provides an excellent platform for deploying applications, it does not include an in-website code editor. This can make it harder to seamlessly integrate deployed websites with the underlying code.
To tackle this, we developed a VS Code extension designed to bridge that gap and streamline the workflow for Render users. The extension includes several key features:
- Live Notifications: Developers can stay updated with deployment statuses and other events in real time, without leaving their code editor.
- Autocomplete: Built-in suggestions and shortcuts for render.yaml files, reducing friction during development.
- QuickStart Side Panel: A dedicated side panel to guide users through Render’s QuickStarts page, making it easier to get started and manage projects directly from VS Code.
This extension aimed to make working with Render more efficient and cohesive by bringing key features directly into the developer’s primary workspace.
About QuickStarts
A major component of our project was enhancing the Render QuickStarts page, a central resource that introduces users to the services Render offers and helps them get started with the platform. The QuickStarts page features a variety of pre-configured web services and tools that users can deploy on Render with minimal setup. These include popular frameworks and tools like Flask, Ruby on Rails, WordPress, Ackee, ClickHouse, Celery, PostgreSQL, Redis, MySQL, and FastAPI, among others.
Each QuickStart provides sample code that new users can deploy to explore how Render works, making it easier to understand the platform’s capabilities. This is especially valuable for developers unfamiliar with Render, as it gives them a practical, hands-on introduction to deploying applications.
A key focus of our project was improving the usability of the QuickStarts page. This involved making it more accessible, ensuring that users could quickly find the services they needed, and providing a smoother experience when navigating the available options. By enhancing this page, we aimed to lower the learning curve for new users while streamlining workflows for experienced developers.
Project 2: ChatGuide
To improve the accessibility and usability of the Render QuickStarts page, we developed an AI Chat Guide. This tool was designed to provide a more interactive and personalized experience for users exploring Render’s services.
The chat guide simplifies the process of finding the right QuickStart by asking three key questions:
- The purpose of the user’s application.
- The programming languages being used.
- Any additional tools or services the app requires (e.g., databases, background workers).
Powered by a combination of a Large Language Model (LLM) and a Retrieval-Augmented Generation (RAG) model, the AI Chat Guide analyzes Render’s available services and suggests the most relevant QuickStarts based on the user’s inputs.
This approach not only makes the QuickStarts page easier to navigate but also adds a personalized touch for first-time users, helping them quickly identify and deploy the tools they need without getting overwhelmed. By streamlining the onboarding process, the AI Chat Guide ensures a smoother experience for developers at all levels.
Live Notifications
A key feature of our VS Code extension for Render is live notifications, which enable real-time monitoring of deployments directly within the code editor. This feature was developed to provide developers with an integrated workflow by combining WebSocket streaming, Render’s API, and configurable user settings.
The development process began with thorough research into Render’s API documentation to understand the endpoints and WebSocket capabilities needed for real-time log streaming. By leveraging WebSockets, we enabled a continuous connection between the Render API and the VS Code extension, allowing seamless real-time streaming of service logs without requiring manual updates or polling.
Each log message retrieved via the WebSocket connection appears as a notification in the bottom-right corner of the VS Code interface. Notifications include:
- The log message itself for immediate context.
- A direct link to the corresponding deployment on the Render website, enabling quick access for further investigation or verification.
This feature keeps developers informed about deployment statuses, errors, or progress without needing to switch between tools, improving overall productivity.
To make the live notification feature adaptable to various user needs, we introduced configurable settings within the extension. These settings allow developers to securely input:
- Their Render API key for authentication.
- Their GitHub username and the deployment resource ID to tie notifications to specific projects.
This customization ensures that developers can tailor the notifications to monitor only the deployments they care about, creating a focused and personalized experience.
To further enhance usability, we implemented streamlined filtering through a dynamic search term. Developers can input specific terms to proactively monitor logs for keywords or patterns, making it easier to detect and resolve issues as they occur. For instance, filtering logs for errors, warnings, or specific deployment messages allows developers to respond quickly without sifting through unnecessary information.
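The filtering described above amounts to a case-insensitive substring match over incoming log messages. A minimal sketch, independent of the VS Code API and with illustrative names (this is not the extension’s actual code):

```python
# Hypothetical sketch of the live-notification log filter: keep only the
# messages that contain the user's search term, ignoring case.

def filter_logs(logs, search_term):
    """Return log messages containing search_term (case-insensitive)."""
    term = search_term.lower()
    return [msg for msg in logs if term in msg.lower()]

logs = [
    "Deploy started for srv-abc123",
    "ERROR: build failed: missing dependency",
    "Deploy live for srv-abc123",
]
print(filter_logs(logs, "error"))  # ['ERROR: build failed: missing dependency']
```

In the extension itself, each message that survives the filter would be surfaced as a notification rather than printed.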
Side Panel
The side panel feature in our VS Code extension was designed to provide developers with an easy-to-navigate interface for exploring all available QuickStart combinations on Render. By integrating the QuickStarts directly into the VS Code environment, the side panel eliminates the need to leave the editor, streamlining the workflow for developers.
The side panel can be accessed by clicking the Render logo located in the VS Code activity bar. Once opened, the panel displays all possible QuickStart combinations organized in a structured folder view. For example:
- Clicking on a Node folder expands it to reveal related frameworks or tools like Express.
- Opening a secondary folder, such as Express, provides additional details, including:
  - A direct link to the relevant QuickStart page.
  - A folder containing associated web services.
  - A folder with key terms related to that combination.
This hierarchical structure ensures that developers can quickly drill down to find the specific resources they need without unnecessary friction.
To make navigation even more efficient, two buttons are provided at the top of the side panel:
- Filter QuickStarts: Clicking this button allows developers to input a search term. The panel dynamically filters the folder structure, showing only the paths that contain the specified keyword. For instance, searching for “MongoDB” will display only the QuickStart folders and combinations that include MongoDB.
- All QuickStarts: This button resets the filter, restoring the full list of QuickStart combinations.
The filtering system makes it easy to narrow down the available options, ensuring that developers can quickly locate the tools and frameworks they are looking for.
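The pruning behavior behind “Filter QuickStarts” can be sketched as a recursive walk over a nested folder tree: a branch is kept if its name matches the term, or if any descendant matches. The tree shape and names below are hypothetical, not the extension’s real data:

```python
# Illustrative sketch of the side panel's keyword filter: keep only the
# branches of a nested folder tree whose path contains the search term.

def filter_tree(tree, term):
    """Prune a dict-of-dicts tree, keeping branches that match the term."""
    term = term.lower()
    result = {}
    for name, children in tree.items():
        if term in name.lower():
            result[name] = children          # keep the whole matching branch
        elif isinstance(children, dict):
            kept = filter_tree(children, term)
            if kept:                         # keep ancestors of any match
                result[name] = kept
    return result

quickstarts = {
    "Node": {"Express": {"QuickStart link": {}, "Web services": {}}},
    "Python": {"Flask": {}, "FastAPI": {}},
}
print(filter_tree(quickstarts, "express"))
# {'Node': {'Express': {'QuickStart link': {}, 'Web services': {}}}}
```

The “All QuickStarts” reset simply re-renders the unfiltered tree.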
Autocomplete
The autocomplete feature in our VS Code extension was designed to simplify the process of creating and editing render.yaml files, a critical component for launching applications on Render. Every Render Blueprint requires a render.yaml file, which defines the configuration for services, databases, and environment groups within a project. While Render provides detailed documentation and examples on building these files, referencing this material during development can disrupt the workflow and create friction.
To address this, we implemented an autocomplete system that enables developers to efficiently build render.yaml files directly within VS Code. The feature provides:
- Key values and suggestions: It offers predefined suggestions for the required fields (e.g., services, databases, environment groups) based on Render’s specifications.
- Context-aware completion: As users begin typing, the autocomplete system dynamically suggests appropriate values, helping them fill out the YAML file faster and with fewer errors.
This eliminates the need to constantly refer back to external documentation, keeping the workflow uninterrupted and integrated within the editor.
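At its simplest, the suggestion step reduces to matching the typed prefix against the set of valid fields for the current position. The sketch below uses a handful of field names that do appear in Render’s Blueprint spec (services, databases, envVarGroups, and common service fields), but the matching logic is illustrative, not the extension’s actual LSP implementation:

```python
# Minimal prefix-based completion over render.yaml field names.
# The key list is a small, non-exhaustive sample of Blueprint fields.

RENDER_YAML_KEYS = [
    "services", "databases", "envVarGroups",
    "name", "type", "runtime", "plan", "region",
]

def complete(prefix):
    """Return candidate keys that start with the typed prefix."""
    prefix = prefix.lower()
    return [k for k in RENDER_YAML_KEYS if k.lower().startswith(prefix)]

print(complete("en"))   # ['envVarGroups']
print(complete("r"))    # ['runtime', 'region']
```

A real language server would additionally track the YAML nesting level so that, for example, service-level fields are only offered inside a `services:` entry.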
To build the autocomplete functionality, we utilized the Language Server Protocol (LSP), a standard introduced by Microsoft for communication between code editors and language tooling. The LSP allows for:
- Language Server Implementation: The autocomplete logic was implemented as a Language Server that runs as an independent process, ensuring minimal performance overhead for VS Code.
- Cross-Platform Compatibility: Since LSP is editor-agnostic, the same Language Server implementation can integrate seamlessly with other LSP-compliant editors in the future, broadening its potential impact.
The LSP architecture makes the feature robust, extensible, and efficient. By decoupling the language tooling from the editor itself, we ensured the autocomplete system could scale and evolve without compromising performance.
The autocomplete feature significantly improves the experience of working with render.yaml files. Developers no longer need to context-switch between their editor and documentation, allowing them to focus on building and deploying their applications more efficiently.
User Research
Competitive Analysis
To start our research, we conducted a competitive analysis on existing chatbots to get a better understanding of who the target audience might be and to build a more thorough background on chatbots.
Surveys
We created a survey to gather both quantitative and qualitative data from potential users.
The questions included inquiries about the user’s background with development, their familiarity with Render and its services, their experience with searching and deploying quick starts, and their history with chatbots.
Many respondents noted that they were proficient in code development and deployment. Regarding Render, respondents reported difficulty searching and navigating Render’s QuickStarts page. As for chatbots, respondents found them useful but often lacked direction on what prompts to use when asking for help.
Ideation
Insights
To the average developer, Render may be an unfamiliar platform, as it caters to companies and businesses intending to deploy large-scale websites and services. With this in mind, we narrowed the scope of our project to specifically tackle the navigation of Render’s QuickStarts page. We found that the QuickStarts page was challenging for new users to navigate, and visitors were often overwhelmed by the sheer amount of information.
Lofis / Midfis
Sketches
Our initial sketches included a full page for Render’s chatbot that takes users through a chat interaction to find a recommended QuickStart. Inspired by existing services such as ChatGPT, we initially designed with a generic chatbot in mind.
User Testing
We conducted a user-testing interview on our mid-fidelity prototypes and gathered key insights on how to improve our product.
Clarifying the User Flow
As we watched our user go through the wireframes, we realized they had difficulty finding where the interactions started and ended. To avoid further confusion, we went back to the drawing board and redid the user flow to ensure users clearly understood the steps they needed to take to reach the solution.
Creating a Landing Page
Part of the confusion also stemmed from our initial wireframes not starting from a landing page. We changed the flow so that users land directly on Render’s QuickStarts page, providing context from the start.
Pivoting to a Chatbot User Interface
While users liked being guided by questions, the format of the guide was too short and was not the most efficient way to interact with users. To fix this, our high-fidelity prototypes adopted the interface of a traditional AI chatbot.
Style Guide
Render’s design system and branding are thoroughly established through their existing site. We decided to make our product consistent with the simple, sharp designs that complement Render’s identity.
HiFis
After the different iterations of the user flows and testing of the mid-fidelity versions, we felt confident enough to move into high fidelity.
Here are the different flows we made and their respective goals:
- New UI Homepage: encouraged new users to use the AI Chatbot feature we created that would give recommendations to the appropriate Quickstarts
- Pre-set prompts: By surfacing the most commonly asked questions and prompts up front, users can learn more quickly which QuickStart is best for them.
- AI Chatbot: This feature is utilized if the user has more specific needs and niche features within their code, and does not know where to start. The chatbot asks them integral questions that can give more accurate recommendations as it receives more input.
RAG Model
At the core of the AI Chat Guide is a Retrieval-Augmented Generation (RAG) model designed to intelligently recommend the most suitable QuickStart based on user inputs. To achieve this, we first needed to create a structured and searchable dataset from Render’s available tools and services.
We started by web scraping detailed information from the official GitHub pages of the tools and services listed in Render’s QuickStarts. Additionally, we extracted data from the official markup of the QuickStarts page itself. The collected data was organized into JSON files with the following key headers:
- Name: The name of the tool or service (e.g., Flask, PostgreSQL, FastAPI).
- Description: A brief description of the tool or service.
- Key Languages: The programming languages the tool supports.
- Link to QuickStart: A direct link to the relevant QuickStart resource.
- About: Information taken from the QuickStart markup file.
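A record following the headers above might look like this (field values and the link are illustrative, not actual scraped data):

```json
{
  "name": "Flask",
  "description": "A lightweight WSGI web framework for Python.",
  "key_languages": ["Python"],
  "link_to_quickstart": "https://render.com/docs/...",
  "about": "Flask is a micro framework designed for quick, minimal-setup web apps..."
}
```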
To make this data efficiently searchable, we hosted a PostgreSQL database on Render and created a table to store the structured information. We then leveraged the pgvector extension for PostgreSQL, which allows vector-based similarity searches, ideal for comparing embeddings and retrieving relevant results.
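The storage layer can be sketched as follows. The `vector` type and the `<=>` cosine-distance operator come from the pgvector extension; the table and column names here are assumptions, not our actual schema:

```sql
-- Illustrative pgvector schema and similarity query.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE quickstarts (
    id        bigserial PRIMARY KEY,
    name      text NOT NULL,
    link      text,
    about     text,
    embedding vector(1536)  -- text-embedding-ada-002 outputs 1536 dimensions
);

-- Nearest neighbours to a query embedding by cosine distance.
SELECT name, link
FROM quickstarts
ORDER BY embedding <=> '[0.01, -0.02, ...]'::vector
LIMIT 3;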
We transformed the “About” descriptions into vector embeddings to enable semantic search within the RAG model. For this, we used OpenAI’s text-embedding-ada-002 model, a high-performance embedding model specifically designed to convert text into dense vector representations.
The text-embedding-ada-002 model generates embeddings that capture the semantic meaning of text, allowing similar pieces of information to have embeddings that are close in vector space. Each "About" description was passed through the model to generate its corresponding vector, which was then stored in the PostgreSQL database using the pgvector extension.
By combining these embeddings with the RAG framework, the AI Chat Guide is able to:
- Accept user queries about the purpose, languages, and add-ons for their app.
- Search for semantically relevant tools or services by comparing the query against the embeddings.
- Retrieve and recommend the most relevant QuickStarts based on the query.
This architecture ensures the recommendations are not only fast but also contextually accurate, providing users with a tailored onboarding experience.
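The retrieval step reduces to ranking stored embeddings by their similarity to the query embedding. A toy end-to-end illustration using fake 3-dimensional vectors (real embeddings come from the embedding model and live in Postgres/pgvector, not a Python dict):

```python
# Toy illustration of embedding-based retrieval: rank stored vectors by
# cosine similarity to a query vector and return the top match(es).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Fake embedding store; names are real QuickStart tools, vectors are made up.
store = {
    "Flask":      [0.9, 0.1, 0.0],
    "PostgreSQL": [0.1, 0.9, 0.1],
    "Celery":     [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    ranked = sorted(
        store,
        key=lambda name: cosine_similarity(query_embedding, store[name]),
        reverse=True,
    )
    return ranked[:k]

print(retrieve([0.8, 0.2, 0.1]))  # ['Flask']
```

pgvector performs this same ranking inside the database, so the application only sends one query vector and receives the nearest rows back.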
Backend
The backend of the AI Chat Guide was built using FastAPI, a modern, high-performance Python web framework designed for building APIs quickly and efficiently. FastAPI is particularly well-suited for this project due to its speed, built-in support for asynchronous programming, and automatic generation of API documentation using OpenAPI standards. These features allowed us to create a scalable and maintainable backend while ensuring rapid response times for user queries.
To manage the flow of the chat, we implemented a state manager that handled user input across multiple stages of the conversation. Each of the three questions — the purpose of the app, the programming languages being used, and any required add-ons or services — was treated as a separate state. The state manager was responsible for:
- Progressing through the different states as users provided responses.
- Compounding the input, meaning it collected and organized user responses into a single query that could later be processed for retrieval.
This stateful approach ensured the guide could adapt to user input incrementally, providing a smooth and logical conversational experience.
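The three-stage flow can be sketched as a small state machine that advances one question at a time and then compounds the answers into a single query string. The state names and compounding format here are assumptions, not the actual backend code:

```python
# Minimal sketch of the chat guide's three-question state manager.

QUESTIONS = [
    ("purpose", "What is the purpose of your application?"),
    ("languages", "What programming languages are you using?"),
    ("addons", "What additional tools or services do you need?"),
]

class ChatState:
    def __init__(self):
        self.step = 0
        self.answers = {}

    def next_question(self):
        """The question to ask next, or None once all are answered."""
        return QUESTIONS[self.step][1] if self.step < len(QUESTIONS) else None

    def submit(self, answer):
        """Record the answer for the current state and advance."""
        key = QUESTIONS[self.step][0]
        self.answers[key] = answer
        self.step += 1

    @property
    def done(self):
        return self.step >= len(QUESTIONS)

    def compound_query(self):
        # Join all answers into one string to embed and retrieve against.
        return " ".join(self.answers.values())

state = ChatState()
for answer in ["a blog", "Python", "a Postgres database"]:
    state.submit(answer)
print(state.compound_query())  # a blog Python a Postgres database
```

Once `done` is true, the compounded query is embedded and handed to the retrieval step.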
The core retrieval logic was built to handle the compounded user input. Once all responses were collected, the retrieval process involved the following steps:
- Creating an embedding: The compounded user input was passed through OpenAI’s text-embedding-ada-002 model to generate a semantic vector representation. This embedding captured the meaning of the user’s query.
- Searching the database: The resulting vector was compared against the stored embeddings in the PostgreSQL database using the pgvector extension. The search identified tools and services with embeddings closest to the input query, effectively retrieving the most relevant QuickStarts.
- Returning results: The retrieved QuickStarts, including their names, descriptions, and links, were formatted and sent back to the user as recommendations.
By combining FastAPI’s speed and flexibility with the powerful retrieval logic, the backend was able to deliver accurate and context-aware suggestions in real time.
Frontend
The frontend for the AI Chat Guide was built using Next.js and Tailwind CSS to ensure responsiveness and usability across different devices. The focus was on creating a clean, intuitive interface that made interacting with the chatbot straightforward.
Key Development Details
- API Integration: Custom APIs were developed to connect the frontend to the backend, enabling the chatbot to process user inputs and return relevant QuickStart suggestions.
- Reusable React Components: The UI was structured using reusable React components, which made the codebase cleaner and easier to maintain.
- Responsive Design: Tailwind CSS was used to build a responsive layout, ensuring the interface adjusted well across varying screen sizes.
The frontend was kept simple and functional, prioritizing clear communication with the backend while providing a smooth user experience.
Takeaways & Challenges
Challenge 1: Understanding the Project
One of the key challenges we encountered during this project was gaining a thorough understanding of the requirements and scope before diving into implementation. This was especially true for the AI Chat Guide, where the project’s goals evolved significantly over time.
Initially, the Chat Guide was envisioned as a general-purpose chatbot capable of generating code snippets for users. However, the requirements shifted multiple times — first toward a broader help chatbot that could guide users through the QuickStarts page and related resources, and eventually into a more focused, structured tool specifically designed to navigate the QuickStarts offerings. Adjusting to these evolving expectations required flexibility and close communication with the client to align on a clear, final direction.
Another layer of complexity came from understanding Render’s product and ecosystem itself. Improving the QuickStarts page and building features around it required a deep understanding of how Render works from both a technical and a user perspective. This learning curve applied to both developers and designers on our team. For developers, this lack of familiarity initially made it difficult to implement features that used the Render API efficiently. For designers, the knowledge gap made it harder to conceptualize user workflows and conduct meaningful user interviews. Without a strong grasp of how Render’s deployment process functioned, aligning the team’s design and development efforts required additional time and research.
This challenge reinforced the importance of:
- Thorough research and documentation: Spending more time upfront exploring APIs, documentation, and user workflows to avoid misalignment later.
- Iterative feedback: Adapting quickly as requirements evolved and ensuring the client and the team remained on the same page.
Challenge 2: User Research and Testing
User research and testing were challenging due to time constraints and scheduling issues. Finding participants was difficult, and the one person we secured had to cancel and reschedule twice, delaying the process.
Additionally, the changing designs created multiple iterations to test, making it harder to gather consistent feedback. Each version required validation, but earlier insights often became outdated as the features evolved.
This highlighted the importance of flexible testing processes and streamlined planning to adapt quickly to evolving designs and tight schedules. Despite the challenges, the testing we conducted provided valuable input for refining the final features.
Challenge 3: Timeline
Balancing the development of two major projects — the VS Code extension and the AI Chat Guide — within a tight timeline presented significant challenges. The transition from one project to the other was particularly rough. Work on the Chat Guide began later than planned, as the team’s focus was fully absorbed by the VS Code extension. In hindsight, starting initial groundwork for the Chat Guide in parallel with the extension would have eased the transition and reduced the pressure on the second half of the timeline.
Another contributing factor was the series of small but critical roadblocks encountered during the VS Code extension’s development. While the issues, such as debugging WebSocket integration and fine-tuning the autocomplete feature, seemed minor at first, they required careful attention to resolve. Each delay, no matter how small, accumulated and ultimately pushed back the start of the Chat Guide. This compressed its development window, leaving less time for testing and refinement.
The experience underscored the importance of:
- Parallel planning and task distribution: Starting foundational work on multiple projects early to reduce bottlenecks later.
- Buffer time for unexpected blockers: Accounting for small errors or challenges that inevitably arise during implementation.
- Clearer project transitions: Ensuring resources and timelines are adjusted to allow smooth handoffs between projects.
By recognizing these challenges, it became clear that better task overlap and more flexible scheduling could have improved project pacing and reduced the impact of delays.
Closing Remarks
We want to express our deepest gratitude to Jess Lin, our client contact at Render. Jess was incredibly friendly, approachable, and supportive throughout the project. Her active involvement and genuine care for our growth made a huge difference, and this project would not have been nearly as successful or rewarding without her guidance.
Through this experience, we learned so much — about cutting-edge tools like FastAPI, LLMs, and Next.js, about effective collaboration across a team, and about the importance of time management when balancing multiple projects.
Finally, we are truly grateful to CodeLab for giving us this opportunity. The challenges we faced and the skills we developed along the way have been invaluable, and we’re excited to carry these lessons forward into future projects.