Memozze - digital memorials and life celebrations
Overview
Memozze started from a very human problem: after someone passes away, memories are scattered across WhatsApp groups, photo albums, old stories, and individual accounts that slowly fade. There was no simple, permanent and collaborative place where family and friends could build a living memorial with stories, photos, events, and tributes around a single person.
I turned that into a production-ready SaaS. Memozze lets users create digital memorial pages with biography, timelines, photo galleries, slideshows and comments, plus dedicated events (life celebrations, birthdays, special dates) attached to each memorial. The goal is to make it extremely easy for non-technical users to contribute and keep someone’s memory alive together.
This is a solo project end to end: product ideation, UX, architecture, full‑stack implementation, infrastructure, billing with Stripe and CI/CD with GitHub Actions. I took the project from a local monorepo to a stable production environment running on a VPS with monitoring, structured logging and background processing for heavy tasks.
In production since 2025, Memozze has been growing organically with around 150 registered users, 20+ memorials created and 50+ active events used by real families to organize memories and celebrations.
Key Differentiator
Memozze is not “just another CRUD app”. It lives in a sensitive domain where people use it during emotionally intense moments, which means it must be stable, fast and simple for non-technical users. Technically, that led to three key design choices.
First, a performance‑oriented and resilient backend: Fastify 5 with TypeScript and schema validation, Drizzle ORM for a predictable data access layer, Redis for caching and BullMQ for background jobs. This ensures the system can handle traffic spikes — for example when a memorial is shared on social networks right after a loss — without degrading the reading experience.
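The read path described above can be sketched as a small cache-aside helper. This is an illustrative sketch, not Memozze's actual code: the `CacheLike` interface and the in-memory `MemoryCache` stand in for the real Redis client so the example stays self-contained.

```typescript
// Cache-aside read path: serve hot memorial pages from cache,
// touch the database only on a miss.
interface CacheLike {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

// In-memory stand-in for Redis, only to keep the sketch runnable.
class MemoryCache implements CacheLike {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key) ?? null; }
  async set(key: string, value: string, _ttlSeconds: number) {
    this.store.set(key, value);
  }
}

async function cached<T>(
  cache: CacheLike,
  key: string,
  ttlSeconds: number,
  load: () => Promise<T>,
): Promise<T> {
  const hit = await cache.get(key);
  if (hit !== null) return JSON.parse(hit) as T; // cache hit: no DB round trip
  const fresh = await load();                    // cache miss: load from the DB
  await cache.set(key, JSON.stringify(fresh), ttlSeconds);
  return fresh;
}
```

A route handler would then wrap its query, e.g. `cached(cache, "memorial:" + id, 60, () => loadMemorial(id))`, so a viral share mostly hits the cache instead of PostgreSQL.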
Second, a deliberately pragmatic infrastructure approach. I initially deployed to a Kubernetes (K3s) cluster and later migrated to a Docker Compose + VPS setup once it became clear that operational complexity and cost did not pay off at this stage. The project shows, in practice, how to consciously step back from “enterprise infra” to something lean and maintainable, while keeping deployment automation, observability and proper isolation.
Third, a lazy‑initialization pattern using JavaScript’s Proxy for providers that depend on external services (mail, storage, queues). This resolved an entire class of Fastify bootstrap‑order bugs without bringing in a heavy dependency injection framework. The pattern is reusable and demonstrates engineering maturity focused on DX and reliability.
Architecture
- Web App (Next.js 15): React/Next.js frontend with TypeScript, Tailwind CSS, Radix UI and TanStack Query; renders memorial and event pages, handles navigation, async states and real‑time interaction with the API.
- Memorial Editor (Tiptap + Embla): Rich text editor built with Tiptap for biographies and stories, plus carousels using Embla for photo slideshows and featured sections.
- HTTP API (Fastify 5): Node.js/TypeScript backend exposing REST endpoints for users, plans, memorials, events, comments, file uploads and billing; uses schema‑based validation for requests and responses.
- Persistence Layer (PostgreSQL + Drizzle ORM): Relational database for core entities (users, subscriptions, memorials, events, invites), accessed through Drizzle for strong type‑safety and readable, idiomatic SQL.
- Cache & Queues (Redis + BullMQ): Redis used both for caching critical queries and as a backend for BullMQ job queues (email sending, event notifications, maintenance tasks).
- File Storage Service: Stores memorial photos and assets, with upload and access managed through the API and proper ACL handling for public/private content.
- Email Service: Handles transactional emails (account confirmation, invitations to contribute to memorials, event reminders), triggered from queue workers.
- Secrets Management: Production secrets (Stripe keys, database credentials, AWS access keys) stored and rotated in Vault, accessed programmatically instead of static environment variables.
- Billing & Subscriptions (Stripe): Stripe integration for paid plans, billing cycles, and subscription status, tied to feature flags and limits in the application.
- Logging & Monitoring: Structured logging with Pino across backend services; Sentry for error tracking, performance monitoring and cross‑stack tracing between web and API.
- Monorepo & Build System (Turborepo + pnpm): Monorepo using pnpm workspaces for `web`, `api` and shared packages (types, utils, configs), orchestrated by Turborepo for efficient builds and CI pipelines.
- Infrastructure & Deploy (Docker Compose + VPS + GitHub Actions): All services containerized; orchestrated with Docker Compose on a dedicated VPS; GitHub Actions handle automated builds, migrations and deploys via SSH.
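The billing bullet above mentions plan limits and feature flags tied to subscription status; that link can be sketched as below. The plan names and numeric limits are illustrative, not Memozze's actual pricing.

```typescript
// Plan-based limits gated by Stripe subscription status.
// Plan names and limits here are made up for illustration.
type Plan = "free" | "premium";
type SubscriptionStatus = "active" | "past_due" | "canceled";

const PLAN_LIMITS: Record<Plan, { maxMemorials: number; maxPhotos: number }> = {
  free: { maxMemorials: 1, maxPhotos: 20 },
  premium: { maxMemorials: 10, maxPhotos: 500 },
};

function effectivePlan(plan: Plan, status: SubscriptionStatus): Plan {
  // Anything but an active subscription falls back to the free tier.
  return status === "active" ? plan : "free";
}

function canCreateMemorial(
  plan: Plan,
  status: SubscriptionStatus,
  currentCount: number,
): boolean {
  return currentCount < PLAN_LIMITS[effectivePlan(plan, status)].maxMemorials;
}
```

Keeping the limits in one table like this makes the Stripe webhook handler simple: it only has to update the stored status, and every feature check picks up the change.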
Technical Highlights
- Designed and implemented a Turborepo monorepo with pnpm workspaces, sharing types, validation and utilities between frontend and backend for true end‑to‑end type‑safety.
- Migrated from a production K3s Kubernetes cluster to a lean Docker Compose + VPS setup, significantly reducing operational overhead and cost while keeping automated deploys and observability.
- Implemented a lazy‑initialization provider pattern using JavaScript's `Proxy` for MailProvider, FileProvider and the BullMQ queue emitter, eliminating bootstrap‑order issues in Fastify without adding a DI framework.
- Chose Drizzle ORM over Prisma to keep runtime overhead low and database access transparent, with strongly typed queries and readable SQL for easier long‑term maintenance and tuning.
- Modeled async workflows with BullMQ + Redis for email and notification pipelines, keeping HTTP request latency low even under spikes in memorial creation and sharing.
- Configured HashiCorp Vault as the central source of truth for production secrets, integrating it into the deployment pipeline and avoiding leaked credentials via environment variables or logs.
- Set up a CI/CD pipeline on GitHub Actions that builds, tests, creates container images and deploys via SSH to the VPS, with simple rollback via Compose configuration.
- Delivered a live SaaS product with ~150 registered users, 20+ memorials and 50+ active events, validating both the technical architecture and the product assumptions in a real‑world, sensitive domain.
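The enqueue-then-process flow from the async-workflow highlight can be sketched as follows. An in-memory array stands in for BullMQ and Redis so the example is self-contained; the real system would call `queue.add(...)` from the API and consume jobs in a separate `Worker` process.

```typescript
// Sketch of the email/notification pipeline: the HTTP handler only
// enqueues, so request latency stays flat; a worker drains jobs later.
type EmailJob = { to: string; template: string };

class TinyQueue {
  private jobs: EmailJob[] = [];
  private handler: ((job: EmailJob) => Promise<void>) | null = null;

  // Called from HTTP handlers: O(1), no external I/O on the request path.
  async add(job: EmailJob): Promise<void> {
    this.jobs.push(job);
  }

  // A worker process registers a handler for incoming jobs.
  process(handler: (job: EmailJob) => Promise<void>): void {
    this.handler = handler;
  }

  // Drain everything currently queued; returns the number processed.
  async drain(): Promise<number> {
    let done = 0;
    while (this.jobs.length > 0 && this.handler !== null) {
      await this.handler(this.jobs.shift()!);
      done++;
    }
    return done;
  }
}
```

Swapping this for BullMQ changes the transport (Redis, retries, concurrency) but not the shape: producers `add`, workers `process`.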