CS 499 Capstone ePortfolio

Naim Lindsay

Computer Science, Southern New Hampshire University. Backend and full-stack software engineering, with a focus on scalable API design, data structures, and secure systems.

Node.js · MongoDB · Express · Angular · JWT · RBAC · REST APIs
3 Enhancements
1 Artifact
5 Course Outcomes
Started 2023

My CS Journey

I started the Computer Science program at SNHU in 2023 with a general interest in technology but not a lot of real experience building software. Over the course of the program, that changed. Through courses covering data structures, algorithms, full-stack development, secure coding, and software testing, I built the skills to actually approach engineering problems at a professional level. This capstone brings that growth together through enhancements to one real-world application, covering software engineering, algorithms, and databases.

Collaborating in a Team Environment

CS 250 (Software Development Lifecycle) was where I first learned how software is actually built professionally. Working through Agile, writing user stories, and doing sprint planning taught me that writing code is only part of the job. Good code has to be readable and structured so that other developers can work with it. That idea shaped how I approached this capstone. The service layer I built separates business logic from the HTTP layer so that any developer can change one part without needing to understand the whole system. That kind of structure is not just personal preference; it is a form of collaboration built into the architecture.

Communicating with Stakeholders

CS 250 also reinforced that engineers have to explain their decisions to people who are not engineers. Learning to write user stories and present trade-offs in plain language helped me understand that technical ability without communication is not enough. In the code review I produced for this capstone, I explained my planned enhancements in terms of what they would improve, not just the technical details. Being able to frame technical decisions in terms of outcomes is something I want to keep building on in my career.

Data Structures and Algorithms

CS 260 gave me the foundation I actually used in this capstone. Understanding how hash maps and linked lists work at the implementation level made it possible to build the custom LRU cache from scratch rather than installing a library. I used a doubly linked list for O(1) move-to-front and a hash map for O(1) lookups, the same structure used in real caching systems. The MongoDB aggregation pipeline I built for price-range filtering required similar thinking: structuring a sequence of operations to compute, filter, sort, and paginate while minimizing unnecessary database work.

Software Engineering and Databases

CS 465 introduced the MEAN stack and gave me the foundation for this capstone artifact. The enhancements I made through CS 499 cover what a single course could not fully address: production-quality backend architecture. Adding a layered structure with routes, controllers, and services, centralized error handling, input validation middleware, and role-based access control represents the difference between a course project and something built to professional standards. On the database side, adding strict schema validation, unique constraints, and enum enforcement on user roles showed me that data integrity has to be designed in from the start.

Security

CS 405 (Secure Coding) gave me a way to think about security that I have not stopped using. The most important idea was defense in depth: no single layer should be the only thing protecting the system. In this capstone I applied that directly. Input is validated at the middleware layer before the controller. Constraints are enforced again at the Mongoose schema layer before anything is written to the database. Write routes require both JWT authentication and role-based authorization. Each layer is designed assuming the previous one might be bypassed.
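
As a sketch of how those layers stack at the routing level (module paths and middleware names here are hypothetical, not the exact ones in the repository):

```javascript
const express = require('express');
const router = express.Router();

// Hypothetical module paths and names, for illustration only.
const { authenticateJwt } = require('../middleware/auth');
const { requireAdmin } = require('../middleware/rbac');
const { validateTrip } = require('../middleware/validation');
const tripsController = require('../controllers/trips');

// A write request must clear every layer in order: a valid JWT,
// an admin role inside that JWT, and a well-formed body. The
// Mongoose schema then validates again before anything is written.
router.post('/trips', authenticateJwt, requireAdmin, validateTrip, tripsController.addTrip);
router.put('/trips/:tripCode', authenticateJwt, requireAdmin, validateTrip, tripsController.updateTrip);

module.exports = router;
```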

How the Artifacts Fit Together

All three enhancements build on the same artifact and they are connected. The software engineering enhancement introduced the architecture that makes the code maintainable. The algorithms enhancement added caching and query capabilities that make the API fast and flexible. The database enhancement tightened the schema and added access control that makes the system secure. These are not separate demonstrations. A clean architecture is easier to secure. Secure access control depends on a well-designed schema. Efficient queries depend on a service layer that can be reasoned about independently. Together they show I can design and build software at a professional level.

Code Review

A walkthrough of the existing Travlr Getaways codebase before any enhancements. Covers existing functionality, areas for improvement in structure, logic, efficiency, and security, and outlines the planned enhancements across all three categories.

Watch Code Review Video

Travlr Getaways

A full-stack travel booking application built with the MEAN stack. Originally developed in CS 465 and enhanced throughout CS 499 to demonstrate growth in software engineering, algorithms, and databases.

Original Artifact

CS 465 final submission before CS 499 enhancements.

Download ZIP

Enhanced Artifact

Final version with all three category improvements applied.

Download ZIP

Software Design and Engineering

Backend Architecture Refactor

What It Is

The artifact is the Travlr Getaways full-stack web application, originally built in CS 465. It uses the MEAN stack and includes a customer-facing web interface and an admin panel, both backed by a Node/Express API connected to MongoDB.

What Was Improved

The original implementation had routes and controllers separated but mixed business logic, database calls, and error handling inside the same functions. I refactored the backend into three clear layers. A new tripService.js module owns all database interaction. A dedicated errorHandler.js middleware handles JWT errors, Mongoose validation errors, and general failures in one place. An input validation middleware checks all required fields and types before a request reaches the controller, so bad data never gets to the database.
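
A minimal sketch of the pattern, condensed into one listing (the required-field list and the service function name are illustrative):

```javascript
// middleware/validation.js -- reject malformed input before the
// controller ever runs. The field list here is an illustrative subset.
function validateTrip(req, res, next) {
  const required = ['code', 'name', 'length', 'perPerson'];
  const missing = required.filter((field) => !req.body[field]);
  if (missing.length > 0) {
    return res.status(400).json({ message: `Missing required fields: ${missing.join(', ')}` });
  }
  next();
}

// controllers/trips.js -- the controller only translates HTTP into
// service calls; tripService.js owns every database interaction.
const tripService = require('../services/tripService');

async function addTrip(req, res, next) {
  try {
    const trip = await tripService.createTrip(req.body);
    res.status(201).json(trip);
  } catch (err) {
    next(err); // defer to the centralized error handler
  }
}

module.exports = { validateTrip, addTrip };
```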

Course Outcomes

This enhancement meets Outcome 4 (using well-founded techniques to accomplish industry goals) and Outcome 5 (developing a security mindset). The validation middleware prevents malformed data from reaching the database. The centralized error handler ensures internal details are never exposed to the client in production.

What I Learned

The biggest thing I learned was how much cleaner a controller becomes when it does not own the database logic. Before the refactor, each function was mixing HTTP handling, queries, and error formatting together. After separating it out, each layer has a single clear responsibility. The main challenge was rethinking the error flow. Express's four-parameter error middleware requires every async handler to call next(err) rather than catching and responding inline. Getting that consistent across every controller was a straightforward but important change.
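
A minimal sketch of that centralized handler, assuming express-jwt-style UnauthorizedError and Mongoose ValidationError as the two named cases (status codes and response shapes are illustrative):

```javascript
// middleware/errorHandler.js -- Express recognizes error middleware by
// its four-parameter signature. Every async controller funnels here
// via next(err) instead of formatting its own error responses.
function errorHandler(err, req, res, next) {
  if (err.name === 'UnauthorizedError') {
    // Thrown by JWT verification middleware on a bad or missing token.
    return res.status(401).json({ message: 'Invalid or missing token' });
  }
  if (err.name === 'ValidationError') {
    // Thrown by Mongoose when a document violates its schema.
    return res.status(400).json({ message: err.message });
  }
  // Log the real error server-side; never leak internals to the client.
  console.error(err);
  return res.status(500).json({ message: 'Internal server error' });
}

module.exports = errorHandler;
```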

Algorithms and Data Structures

LRU Cache, Pagination, and Filtering

What It Is

The same Travlr Getaways application, focusing specifically on the backend trip listing endpoint. The original version retrieved every trip from the database in a single unfiltered query on every request, with no caching.

What Was Improved

I built a custom LRU cache from scratch using a doubly linked list and a hash map. Most recently accessed entries are at the head, least recently used at the tail, and the tail is evicted when capacity is reached. Both get and put run in O(1) time. I also added skip-offset pagination, field-based sorting, case-insensitive resort filtering, and numeric price-range filtering. The price filter was the most interesting part: because perPerson is stored as a formatted string, I used a MongoDB aggregation pipeline to strip formatting characters and cast the value to a number for comparison, without changing the schema.
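
A condensed sketch of that structure (the real module carries more bookkeeping, but the shape is the same):

```javascript
// A doubly linked list tracks recency (head = most recent); a Map
// gives O(1) key lookup into the list nodes.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
    // Sentinel head/tail nodes keep insert/remove free of null checks.
    this.head = {};
    this.tail = {};
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  _remove(node) {
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }

  _insertAtHead(node) {
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next.prev = node;
    this.head.next = node;
  }

  get(key) {
    const node = this.map.get(key);
    if (!node) return undefined;
    this._remove(node);        // move-to-front on every hit
    this._insertAtHead(node);
    return node.value;
  }

  put(key, value) {
    const existing = this.map.get(key);
    if (existing) this._remove(existing);
    const node = { key, value };
    this._insertAtHead(node);
    this.map.set(key, node);
    if (this.map.size > this.capacity) {
      const lru = this.tail.prev; // least recently used entry
      this._remove(lru);
      this.map.delete(lru.key);
    }
  }
}

module.exports = LRUCache;
```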

Course Outcomes

This meets Outcome 3 (designing solutions using algorithmic principles and managing trade-offs) and Outcome 4. Skip-offset pagination keeps the implementation simple at the cost of performance at very large scale. Cursor-based pagination would perform better on massive datasets but is significantly more complex. Documenting that trade-off was part of the work.
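
To make the trade-off concrete, a hypothetical comparison of the two approaches against the same Mongoose model (the model path and variable names are assumed):

```javascript
const Trip = require('../models/trip'); // hypothetical model path

async function skipOffsetPage(pageNum, pageSize) {
  // Simple, but MongoDB still walks past every skipped document,
  // so the cost of this query grows with page depth.
  return Trip.find().sort({ code: 1 }).skip(pageNum * pageSize).limit(pageSize);
}

async function cursorPage(lastSeenCode, pageSize) {
  // Each page is an indexed seek past the last key the client saw,
  // so deep pages cost the same as page one, but the client must
  // round-trip lastSeenCode between requests.
  return Trip.find({ code: { $gt: lastSeenCode } }).sort({ code: 1 }).limit(pageSize);
}
```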

What I Learned

The main challenge was the price filtering. Since the field is a string, I could not use a simple numeric filter. My first thought was to add a separate numeric field to the schema, but that would create a migration and a maintenance burden. Using the aggregation pipeline to compute the value at query time was cleaner. Running the count pipeline and data pipeline in parallel with Promise.all was also something I had not done before, and it made a real difference.
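
A sketch of that query, assuming MongoDB 4.4+ for $replaceAll and a stored value like "$1,299.00" (the model path and stage ordering are illustrative):

```javascript
const Trip = require('../models/trip'); // hypothetical model path

async function findTripsByPrice(minPrice, maxPrice, page, pageSize) {
  // Compute a numeric price at query time by stripping the currency
  // formatting from the stored string. No schema migration needed.
  const priceStage = {
    $addFields: {
      priceNumeric: {
        $toDouble: {
          $replaceAll: {
            input: {
              $replaceAll: {
                input: '$perPerson',
                find: { $literal: '$' }, // $literal so "$" is not read as a field path
                replacement: '',
              },
            },
            find: ',',
            replacement: '',
          },
        },
      },
    },
  };
  const matchStage = { $match: { priceNumeric: { $gte: minPrice, $lte: maxPrice } } };

  // Run the count and the page of data in parallel rather than serially.
  const [countResult, trips] = await Promise.all([
    Trip.aggregate([priceStage, matchStage, { $count: 'total' }]),
    Trip.aggregate([
      priceStage,
      matchStage,
      { $sort: { priceNumeric: 1 } },
      { $skip: page * pageSize },
      { $limit: pageSize },
    ]),
  ]);

  return { total: countResult[0]?.total ?? 0, trips };
}
```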

Databases

Schema Validation and Role-Based Access Control

What It Is

The database architecture and backend API for Travlr Getaways, built with MongoDB and Mongoose for data modeling, storage, and retrieval.

What Was Improved

The original schemas were basic and lacked strict validation, which left the application open to bad data and unauthorized modifications. I enforced strict schema-level constraints in Mongoose: unique identifiers on trip codes, maxlength on text fields, and enum validators. I also implemented a database-backed role-based access control system: I added a role field restricted to user or admin on the User schema and embedded that role into authentication JWTs, so only authorized administrators can execute POST, PUT, and DELETE operations on the trips collection.
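
A condensed sketch of those constraints (an illustrative subset of the real schemas' fields):

```javascript
const mongoose = require('mongoose');

// Trip schema: unique code, bounded text fields.
const tripSchema = new mongoose.Schema({
  code: { type: String, required: true, unique: true },
  name: { type: String, required: true, maxlength: 100 },
  resort: { type: String, required: true, maxlength: 100 },
  description: { type: String, maxlength: 2000 },
  perPerson: { type: String, required: true },
});

// User schema: the role enum is the database-level guarantee that no
// account can hold anything other than 'user' or 'admin'.
const userSchema = new mongoose.Schema({
  email: { type: String, required: true, unique: true },
  name: { type: String, required: true, maxlength: 50 },
  role: { type: String, enum: ['user', 'admin'], default: 'user' },
});

module.exports = {
  Trip: mongoose.model('Trip', tripSchema),
  User: mongoose.model('User', userSchema),
};
```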

Course Outcomes

This meets Outcome 4 (well-founded database implementation techniques) through the schema design and Outcome 5 (security mindset) by protecting write operations from unauthorized users at both the application and database layers.

What I Learned

The most important thing I learned was to treat the database schema as the final line of defense for data integrity. API-level validation is not enough on its own. The main challenge was checking a user's role efficiently without hitting the database on every request. I solved this by embedding the role directly into the JWT at login, so the RBAC middleware can read it instantly from the token without any extra database call. That decision improved my understanding of how database design and application performance intersect.
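
A minimal sketch of that decision, assuming the jsonwebtoken package and an authentication middleware that verifies the token and attaches its payload as req.auth (as express-jwt does):

```javascript
const jwt = require('jsonwebtoken');

// At login: embed the role in the token payload, so authorization
// later costs a token read instead of a database query.
function issueToken(user) {
  return jwt.sign(
    { sub: user._id, email: user.email, role: user.role },
    process.env.JWT_SECRET,
    { expiresIn: '1h' }
  );
}

// RBAC middleware: trusts the role claim because the token signature
// was already verified by the authentication middleware before this.
function requireAdmin(req, res, next) {
  if (req.auth && req.auth.role === 'admin') {
    return next();
  }
  return res.status(403).json({ message: 'Admin role required' });
}

module.exports = { issueToken, requireAdmin };
```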

A Note on How I Work

There was a time when compilers did not exist. Programmers wrote machine code by hand, directly in binary or assembly, with full manual control over every instruction the hardware executed. When Grace Hopper introduced the first compiler in 1952, a significant part of the industry pushed back. The argument was that no machine could produce code as efficient or as reliable as an experienced programmer writing it themselves. That argument lost. Compilers did not make programming easier by removing the need to understand computation. They made programmers faster by abstracting the repetitive parts, so engineers could focus on solving harder problems at a higher level. The fundamentals did not become less important. They became the baseline that separated engineers who could use the new tools well from the ones who could not.

I think AI is the same kind of shift. I use it as a development tool, the same way engineers before me used compilers, IDEs, Stack Overflow, and documentation to accelerate their work. The difference is that I am deliberate about it. I do not blindly accept output. I read it, question it, and own every line that goes into my projects. The work in this portfolio is mine, and I can explain every decision in it.

That said, I will be honest: there are still moments, especially when I need to move fast and get something production-ready quickly, where things move faster than my full understanding. I do not always have time to internalize every layer before it ships. That is the real tension of developing in this era. The goal is to close that gap continuously, to build the fundamentals alongside the speed, so that over time the two stop feeling like opposites.

I think the industry is still figuring out how to think about this shift. But I believe the engineers who will thrive are the ones who learn to work strategically with these tools while building strong enough fundamentals to know when the output is wrong, when there is a better approach, and when to push back. I am still learning, still developing, and I think that is exactly where I should be.