AlgoZen
The Future Of Competitive Programming

Frontend: Next.js | Backend: Django, Kubernetes, Redis, Celery | Scalable to millions of users
Timeline: April - June 2025
Tech stack:
- Frontend: Next.js
- Backend: Django, Redis, Celery, Kubernetes
🧩 The Problem
When people practice coding online (think LeetCode or HackerRank), they need two things:
- A huge library of problems.
- A safe, reliable system to run their code and show results instantly.
The challenge? Running code safely is hard. If someone’s program goes into an infinite loop or tries to hack the system, the platform must contain it — while still feeling fast and smooth.
🚀 The Solution: AlgoZen
I designed AlgoZen, a next-gen coding platform where developers can solve company-specific problems (Google, Amazon, Microsoft, etc.) and test their code in real time.
The system is powered by a hybrid execution engine:
- For quick demos or small deployments, code runs locally in a safe subprocess.
- For large-scale production, each user’s code runs inside an isolated Kubernetes pod (a tiny container sandbox).
This gave AlgoZen speed during development and security + scalability in production.
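Conceptually, the mode switch boils down to a few lines. The sketch below is illustrative only: the environment variable, function names, defaults, and Python-only focus are my simplifications for this write-up, not the exact production code.

```python
# Illustrative sketch of the hybrid switch; names, defaults, and the
# Python-only focus are simplifications, not the exact production code.
import os
import subprocess
import sys

EXECUTION_MODE = os.environ.get("EXECUTION_MODE", "subprocess")  # or "kubernetes"

def run_in_subprocess(code: str, stdin: str = "", timeout: int = 5) -> dict:
    """Development mode: run the code in a child process with a hard wall-clock timeout."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            input=stdin, capture_output=True, text=True, timeout=timeout,
        )
        return {"stdout": proc.stdout, "stderr": proc.stderr, "exit_code": proc.returncode}
    except subprocess.TimeoutExpired:
        return {"stdout": "", "stderr": "Time Limit Exceeded", "exit_code": -1}

def run_in_k8s_job(code: str, stdin: str = "") -> dict:
    """Production mode: launch an isolated Kubernetes Job (see the security sketch below)."""
    raise NotImplementedError("only wired up in cluster deployments")

def execute_submission(code: str, stdin: str = "") -> dict:
    """Route a submission to whichever sandbox this deployment uses."""
    if EXECUTION_MODE == "kubernetes":
        return run_in_k8s_job(code, stdin)
    return run_in_subprocess(code, stdin)
```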
🏗️ How It Works (Explained Simply)
Imagine you’re in class, and you hand your code to a teacher. The teacher passes it to a grader, the grader runs it safely in a separate room, writes the result on a slip, and the slip comes back to you. That’s how AlgoZen works at scale:
1. The user writes code on the frontend (built with Next.js + Tailwind + CodeMirror).
2. The backend (Django) receives the submission.
3. The backend creates a task and puts it into a queue (Redis + Celery).
4. A worker picks up the task and runs the code either:
   - in a subprocess (local mode), or
   - in a Kubernetes job (production mode).
5. Results (output, errors, performance stats) are written back to Redis.
6. The frontend polls for results and shows them to the user.
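In code, the middle of that pipeline (steps 2-5) is surprisingly small. Here's a minimal sketch, reusing the `execute_submission` helper from the earlier sketch; the task name, view name, and Redis URLs are illustrative assumptions:

```python
# Minimal sketch of the enqueue/execute loop; reuses execute_submission
# from the earlier sketch. Task and view names are illustrative.
from celery import Celery
from django.http import JsonResponse

# Redis plays two roles here: task broker (the queue) and result backend (storage).
app = Celery(
    "algozen",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@app.task(name="judge.run_submission")
def run_submission(code: str, stdin: str = "") -> dict:
    # Runs on a Celery worker; the returned dict lands in Redis automatically.
    return execute_submission(code, stdin)

def submit(request):
    """Django view: enqueue the submission and return a task id right away."""
    task = run_submission.delay(request.POST["code"], request.POST.get("stdin", ""))
    return JsonResponse({"task_id": task.id})
```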
🔑 Key Features I Built
Hybrid Execution Engine (see the dispatcher sketch above)
- Subprocess mode for development
- Kubernetes pod execution for production (with resource limits and security sandboxing)
Secure Code Execution (sketch below)
- Non-root containers
- Read-only filesystems
- CPU/memory limits
- Timeout protection
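In production, every one of those guardrails lives in the Job spec itself. The sketch below uses the official `kubernetes` Python client; the image name, namespace, and limit values are placeholder assumptions, not the actual cluster configuration:

```python
# Sketch of a locked-down Kubernetes Job for one submission.
# Image, namespace, and limit values are placeholder assumptions.
from kubernetes import client, config

def launch_judge_job(job_name: str, code: str) -> None:
    config.load_incluster_config()  # assumes we run inside the cluster

    container = client.V1Container(
        name="runner",
        image="algozen/python-runner:latest",      # placeholder image
        command=["python", "-c", code],
        resources=client.V1ResourceRequirements(    # hard CPU/memory ceilings
            limits={"cpu": "500m", "memory": "256Mi"},
        ),
        security_context=client.V1SecurityContext(
            run_as_non_root=True,                   # non-root container
            run_as_user=1000,
            read_only_root_filesystem=True,         # read-only filesystem
            allow_privilege_escalation=False,
        ),
    )
    spec = client.V1JobSpec(
        active_deadline_seconds=10,                 # timeout protection for runaway code
        backoff_limit=0,                            # never retry untrusted code
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
        ),
    )
    job = client.V1Job(
        api_version="batch/v1", kind="Job",
        metadata=client.V1ObjectMeta(name=job_name), spec=spec,
    )
    client.BatchV1Api().create_namespaced_job(namespace="judge", body=job)
```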
Scalable Architecture (autoscaling sketch below)
- Redis as the backbone for queuing and result storage
- Celery for distributed async processing
- Kubernetes for horizontal scaling
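Horizontal scaling means the worker fleet grows on demand instead of being sized for peak load. One way to express that, sketched with the same `kubernetes` client; the deployment name and thresholds are assumptions:

```python
# Sketch: autoscale the Celery worker deployment on CPU pressure.
# Deployment name, namespace, and thresholds are assumptions.
from kubernetes import client, config

def autoscale_workers() -> None:
    config.load_incluster_config()
    hpa = client.V1HorizontalPodAutoscaler(
        api_version="autoscaling/v1", kind="HorizontalPodAutoscaler",
        metadata=client.V1ObjectMeta(name="celery-workers"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="celery-worker",
            ),
            min_replicas=2,
            max_replicas=20,                        # new pods spin up under load
            target_cpu_utilization_percentage=70,
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="judge", body=hpa,
    )
```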
User Experience Enhancements
- Real-time feedback loop (the frontend polls for results; see the sketch below)
- Support for Python, C++, and Java
- Detailed error handling (compile errors, runtime errors, TLE, etc.)
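The feedback loop is just the frontend repeatedly asking "done yet?" against a status endpoint. A minimal sketch of that endpoint, assuming the Celery `app` from the earlier sketch; the URL shape and field names are assumptions:

```python
# Sketch of the result-polling endpoint; assumes the Celery `app`
# defined in the earlier sketch. Field names are assumptions.
from celery.result import AsyncResult
from django.http import JsonResponse

def poll_result(request, task_id: str):
    """Return the task state until the worker finishes, then the result stored in Redis."""
    res = AsyncResult(task_id, app=app)
    if not res.ready():
        return JsonResponse({"status": res.state})      # e.g. PENDING / STARTED
    payload = res.get(timeout=1)                        # dict the worker returned
    return JsonResponse({"status": "DONE", **payload})  # stdout, stderr, exit_code
```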
🌟 Impact
- 1,400+ problems across 20+ companies integrated.
- Fast, safe code execution with isolation at scale.
- AI assistant (Lila the Compile Cat 🐱) integrated using Groq’s LLM to guide learners without spoiling solutions.
🧠 What I Learned
- How to design a distributed system that balances speed, safety, and scale.
- Deep hands-on experience with Django, Celery, Redis, and Kubernetes.
- Writing production-ready code where security is as important as functionality.
⚡ Project Features
Parallel Processing
- Multiple Celery workers process submissions at the same time.
- Ensures quick turnaround even under heavy load.
Fast Queuing System
- Redis used as a high-performance task broker.
- Handles thousands of tasks with low latency.
Scalable Architecture
- Kubernetes orchestration allows horizontal scaling of workers.
- New pods spin up automatically to handle spikes in traffic.
Hybrid Code Execution
- Subprocess mode for development and lightweight deployments.
- Kubernetes mode for production with maximum isolation and reliability.
Secure Execution Environment
- Code runs in isolated sandboxes (non-root users, read-only filesystem).
- Strict CPU, memory, and timeout limits prevent abuse (see the local-limits sketch below).
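In subprocess mode, similar caps can be enforced locally with POSIX rlimits rather than container settings. A sketch; the specific limit values here are assumptions, not the exact limits AlgoZen enforces:

```python
# Sketch of CPU/memory caps for local subprocess mode via POSIX rlimits.
# The specific limit values are assumptions.
import resource
import subprocess
import sys

def _apply_limits() -> None:
    # Runs in the child process just before exec: cap CPU time and address space.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                     # 2 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MiB

def run_limited(code: str) -> subprocess.CompletedProcess:
    return subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True,
        preexec_fn=_apply_limits,   # POSIX-only; not available on Windows
        timeout=5,                  # wall-clock backstop on top of the CPU limit
    )
```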
Multi-Language Support
- Python, C++, and Java execution with language-specific Docker images (illustrative mapping below).
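Per-language execution boils down to a mapping from language to runner image and compile/run commands. An illustrative version; the image tags and commands are assumptions, not the actual configuration:

```python
# Illustrative language registry; image tags and commands are assumptions.
LANGUAGES = {
    "python": {
        "image": "algozen/python-runner:3.12",
        "compile": None,                                   # interpreted, nothing to build
        "run": ["python", "main.py"],
    },
    "cpp": {
        "image": "algozen/cpp-runner:gcc13",
        "compile": ["g++", "-O2", "-o", "main", "main.cpp"],
        "run": ["./main"],
    },
    "java": {
        "image": "algozen/java-runner:21",
        "compile": ["javac", "Main.java"],
        "run": ["java", "Main"],
    },
}
```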
Real-Time Feedback
- Results are stored in Redis and continuously polled by the frontend.
- Users see output, errors, and performance metrics almost instantly.
Company-Specific Problem Collections
- 1,400+ curated problems from 20+ companies (Google, Amazon, Microsoft, etc.).
- Helps learners prepare with targeted interview practice.
AI-Powered Assistant
- Groq’s LLM integration for hints, explanations, and learning support (sketch below).
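A hint call to Lila could look like the sketch below, using the official `groq` Python SDK; the model id and system prompt are assumptions about the wiring, not the production setup:

```python
# Sketch of a spoiler-free hint call with the official `groq` SDK.
# Model id and system prompt are assumptions about the wiring.
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

def get_hint(problem_statement: str, user_code: str) -> str:
    completion = client.chat.completions.create(
        model="llama-3.3-70b-versatile",  # placeholder model id
        messages=[
            {"role": "system",
             "content": "You are Lila the Compile Cat. Nudge the learner toward "
                        "the right approach, but never reveal the full solution."},
            {"role": "user",
             "content": f"Problem:\n{problem_statement}\n\nMy code so far:\n{user_code}"},
        ],
    )
    return completion.choices[0].message.content
```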