AI Makers Lab

Build a Reddit Monitoring Tool with Claude Code

Marketing · Advanced · 25-30 minutes · 6 milestones

What you'll build

A Reddit monitoring tool that scans subreddits for posts you should respond to, drafts responses in your voice, tracks emerging trends, and emails you a daily digest.


The Problem

You know there are people on Reddit right now asking for help with the exact thing your product does. But finding those threads means scrolling three subreddits every morning, evaluating dozens of posts, writing responses that don't sound like ads — and doing it consistently. You skip a day. Then a week. Someone else answers those threads. The leads were there. You just didn't have a system.

What You're Building

A full-stack monitoring tool that watches your chosen subreddits via RSS (no Reddit API key needed), scores every post for relevance to your business, drafts responses that match your writing style, and sends you a morning email with the best opportunities. You'll build the backend, the AI scoring, the response drafter, a web dashboard to manage it all, and a daily email digest.


Milestone 1: Project Brief + Folder Structure

Complex builds go off the rails when Claude loses context halfway through. The fix: planning files that act as a persistent source of truth. Claude reads them at every step, so even when conversation history compacts, the plan stays clear.

Prompt:
Create a new project called reddit-monitor. I want to build a Reddit monitoring tool with these features: 1) Monitor specific subreddits via RSS feeds for relevant posts. 2) A knowledge base where I store URLs and keywords that define what's relevant to my business. 3) AI scoring that classifies each post as an opportunity (relevant to respond to) or not. 4) AI-drafted responses matching my tone profile. 5) A trend tracker that spots rising terms across subreddits. 6) A daily email digest with top opportunities. Set up the project with a PRD file describing the full app, a plans/ folder with separate markdown files for each major feature (backend, knowledge base, scoring, drafting, frontend dashboard, email digest), and a CLAUDE.md with project rules. Use Python/FastAPI for the backend with SQLite, and Next.js for the frontend.

What Claude Code does: Claude creates the entire project skeleton — PRD, planning files for each feature, and a CLAUDE.md. Planning files are the key pattern here. Every time Claude starts a new task, it reads these files to stay aligned. This is what separates a clean multi-file build from one that drifts into chaos. The video that inspired this tutorial used 14 planning files to keep three agents on the same page.

Try it: Open the plans/ folder. You should see six markdown files — one per feature. Read the PRD. Does it describe the app you want to build? Edit anything that's off. These files drive the entire build.
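If the scaffold matches the prompt, the layout will look roughly like this (exact file names will vary with Claude's choices — treat this as a sketch, not a spec):

```text
reddit-monitor/
├── PRD.md
├── CLAUDE.md
└── plans/
    ├── backend.md
    ├── knowledge-base.md
    ├── scoring.md
    ├── drafting.md
    ├── dashboard.md
    └── email-digest.md
```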


Milestone 2: Reddit RSS Fetcher

Reddit has public RSS feeds for every subreddit. No API key, no OAuth, no rate limit headaches. Just append .rss to any subreddit URL and you get the latest posts.
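Under the hood, those .rss feeds are Atom XML, so the fetcher Claude writes can be pure standard library. A minimal sketch of the fetch-and-parse step (function names are illustrative, not what Claude will necessarily generate; note that Reddit tends to reject the default urllib User-Agent, so a descriptive one is set):

```python
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by Reddit feeds

def fetch_feed(subreddit):
    """Download the raw Atom XML for a subreddit's newest posts."""
    req = urllib.request.Request(
        f"https://www.reddit.com/r/{subreddit}/new.rss",
        headers={"User-Agent": "reddit-monitor/0.1"},  # default UA gets blocked
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

def parse_feed(xml_text):
    """Extract title, URL, author, body, and timestamp from each feed entry."""
    root = ET.fromstring(xml_text)
    posts = []
    for entry in root.iter(ATOM + "entry"):
        link = entry.find(ATOM + "link")
        posts.append({
            "title": entry.findtext(ATOM + "title", ""),
            "url": link.get("href", "") if link is not None else "",
            "author": entry.findtext(f"{ATOM}author/{ATOM}name", ""),
            # Reddit puts the post body here as escaped HTML
            "body": entry.findtext(ATOM + "content", ""),
            "published": entry.findtext(ATOM + "published", ""),
        })
    return posts
```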

Prompt:

Read the plans/ files. Build the Reddit RSS fetcher. It should: 1) Take a list of subreddit names from a SQLite database. 2) Fetch the RSS feed for each one (https://www.reddit.com/r/{subreddit}/new.rss). 3) Parse each post — title, body text, author, URL, timestamp, subreddit. 4) Store everything in SQLite, deduplicating by post URL. 5) Add a FastAPI endpoint POST /api/scout that triggers a fetch, and GET /api/posts that returns stored posts. 6) Limit to the 25 most recent posts per subreddit to keep API costs down. Add a script to seed 2-3 default subreddits and a test script that runs a scout and prints the results.

What Claude Code does: Claude builds the RSS parser, database models, and API endpoints. RSS feeds as a data source means zero setup — no API keys, no approval process, no rate limits to manage. The 25-post limit is intentional: when AI scores every post later, fewer posts means lower API costs. The test script gives you something to verify immediately without needing a REST client.
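The deduplication the prompt asks for is typically one line of SQL: a UNIQUE constraint on the post URL plus INSERT OR IGNORE. A minimal sketch of the storage layer (schema and function names are assumptions, not necessarily what Claude generates):

```python
import sqlite3

def init_db(conn):
    """Create the posts table; the UNIQUE constraint on url drives dedup."""
    conn.execute("""CREATE TABLE IF NOT EXISTS posts (
        id INTEGER PRIMARY KEY,
        url TEXT UNIQUE,
        title TEXT, body TEXT, author TEXT,
        subreddit TEXT, published TEXT)""")

def store_posts(conn, posts):
    """Insert posts, silently skipping already-seen URLs. Returns new-row count."""
    before = conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
    conn.executemany(
        "INSERT OR IGNORE INTO posts (url, title, body, author, subreddit, published) "
        "VALUES (:url, :title, :body, :author, :subreddit, :published)",
        posts,
    )
    conn.commit()
    after = conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
    return after - before
```

Because each scout re-fetches the same feed, most posts on a second run are duplicates — INSERT OR IGNORE keeps the table clean without any Python-side bookkeeping.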

Try it: Start the backend (python -m uvicorn main:app --reload or whatever Claude set up). Run the test script Claude created — you should see real Reddit posts printed with titles, bodies, and URLs. Alternatively, use curl: curl -X POST http://localhost:8000/api/scout then curl http://localhost:8000/api/posts. Click a URL — it should link to an actual Reddit thread.


Milestone 3: Knowledge Base + Opportunity Scoring

Raw posts aren't useful. You need to know which ones are worth responding to. The knowledge base defines what's relevant — your URLs, your keywords — and the AI scorer classifies every post against it.

Prompt:

Read the plans/ files. Build the knowledge base and opportunity scoring system. 1) Knowledge base: CRUD endpoints for "assets" — each asset has a URL, a description, and a list of keywords. Store in SQLite. 2) Scoring: After fetching posts, run each post through OpenAI (gpt-4o-mini to keep costs low). Send the post title + body + list of all keywords from the knowledge base. Ask the model to classify as "opportunity" or "skip", assign a confidence score 0-100, and flag whether the response should be promotional (includes a link to one of our assets) or non-promotional (pure value). 3) Store the score and classification with each post. 4) Add a promotional ratio setting (default 20%) — this percentage of opportunities should be promotional. Use my OPENAI_API_KEY from the environment.

What Claude Code does: Claude builds the knowledge base CRUD and the AI scoring pipeline. The promotional ratio is the critical detail. Reddit hates spam. Set it too high and you get downvoted into oblivion. The 20% default means 4 out of 5 responses are pure value — you're helping, not selling. The fifth one naturally includes a link. This is how experienced Reddit marketers actually operate.
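One plausible way to enforce that ratio deterministically — an assumption about policy, since the prompt leaves the selection rule to Claude — is to flag the top slice of opportunities by confidence as promotional and leave the rest pure value:

```python
def assign_promotional_flags(opportunities, ratio=0.2):
    """Mark the top `ratio` share of opportunities (by confidence) as promotional.

    With the default 20%, 2 of every 10 opportunities carry a link;
    the other 8 stay non-promotional.
    """
    n_promo = int(len(opportunities) * ratio)
    ranked = sorted(opportunities, key=lambda p: p["confidence"], reverse=True)
    for i, post in enumerate(ranked):
        post["promotional"] = i < n_promo
    return ranked
```

Flagging the highest-confidence posts promotional is a deliberate choice: those are the threads where a link is most likely to be genuinely on-topic rather than spammy.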

Try it: Add a knowledge base asset through the API (or use the seed script). Run a scout. Then check GET /api/posts — each post should now have a classification (opportunity/skip), a confidence score, and a promotional flag. Sort by confidence. The top results should genuinely be posts worth responding to.
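One detail worth checking in Claude's scoring code: models occasionally return malformed or out-of-range JSON, so the pipeline should validate every verdict defensively. A minimal sketch (field names mirror the prompt above; the fall-back-to-skip policy is an assumption):

```python
import json

def parse_score(raw):
    """Validate the model's JSON verdict; on any bad output, default to skip."""
    try:
        data = json.loads(raw)
        classification = data["classification"]
        if classification not in ("opportunity", "skip"):
            raise ValueError(classification)
        confidence = max(0, min(100, int(data["confidence"])))  # clamp to 0-100
    except (ValueError, KeyError, TypeError):
        # Unparseable or unexpected output: safer to skip than to surface junk
        return {"classification": "skip", "confidence": 0}
    return {"classification": classification, "confidence": confidence}
```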


Milestone 4: Response Drafter with Tone Profile

The scorer found the opportunities. Now let's draft responses that sound like you, not like ChatGPT.

Prompt:

Read the plans/ files. Build the response drafter. 1) Add a "tone profile" to the database — a text field where I describe my writing style plus 3-5 examples of Reddit comments I've actually written. Seed it with a default Reddit-friendly tone (casual, helpful, no corporate speak). 2) When a post is classified as an opportunity, generate one draft response using OpenAI. Include the tone profile in the system prompt. If the post is flagged as promotional, naturally weave in a link to the most relevant knowledge base asset. If non-promotional, just provide genuine value. 3) Add a POST /api/posts/{id}/regenerate endpoint that generates a new draft for a specific post. 4) Store drafts in the database linked to the post.

What Claude Code does: Claude builds the drafter with a tone profile system. This is the difference between AI responses that get upvoted and ones that get called out as bot spam. The tone profile doesn't just say "be casual" — it includes your actual writing samples so the model mimics your specific voice. The regenerate endpoint means you can keep hitting it until the draft feels right, without re-running the entire scout.
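The prompt-assembly step is where the tone profile and the promotional flag come together. A minimal sketch of how the chat messages might be built (function and field names are illustrative assumptions):

```python
def build_draft_messages(post, tone_profile, asset=None):
    """Assemble chat messages for the drafter.

    tone_profile: {"description": str, "examples": [str, ...]}
    asset: knowledge base entry, passed only when the post is promotional.
    """
    system = (
        "You draft Reddit comments in the user's voice.\n"
        f"Style: {tone_profile['description']}\n"
        "Examples of comments the user actually wrote:\n"
        + "\n---\n".join(tone_profile["examples"])
    )
    task = f"Draft a reply to this post:\n\n{post['title']}\n\n{post['body']}"
    if asset:
        # Promotional: weave the link in only where it genuinely helps
        task += f"\n\nWhere it genuinely helps, mention {asset['url']} ({asset['description']})."
    else:
        task += "\n\nDo not include any links — pure value only."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]
```

The returned list is what gets sent to the chat completion call; keeping it as a pure function makes the regenerate endpoint trivial, since a new draft is just another call with the same messages.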

Try it: Run a scout. Check an opportunity — it should have a draft response. Read it. Does it sound like something you'd post on Reddit? Update the tone profile with 3 real Reddit comments you've written, regenerate, and compare. The difference should be noticeable.


Milestone 5: Dashboard UI

The backend works. Now let's make it usable without curl commands.

Prompt:

Read the plans/ files. Build a Next.js dashboard for the Reddit monitoring tool. Pages needed: 1) Dashboard home — shows latest scout results: number of opportunities found, list of opportunity posts with confidence scores, click to expand and see/edit the draft response, and a "Run Scout" button. 2) Knowledge base page — add/edit/delete assets (URL, description, keywords). 3) Settings — manage monitored subreddits, promotional ratio slider, tone profile editor with a textarea for style description and writing examples. 4) Trends page — sorted list of terms with rising mentions across monitored subreddits, showing mention count and percentage increase. Keep the design clean and minimal — dark mode, no unnecessary decoration. Connect everything to the FastAPI backend.

What Claude Code does: Claude builds the entire frontend — four pages, API integration, the works. This is where Claude Code shines on full-stack builds. One prompt produces a working dashboard because it reads the backend code it already built and generates matching API calls. The trends page aggregates post data to surface terms gaining traction — it gets more useful as you accumulate data over multiple scouts.
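The trend aggregation behind that page can be simple: count how many posts mention each term in the current scout window versus the previous one. A minimal sketch, assuming posts are dicts with title and body (real stop-word filtering is left out for brevity):

```python
import re
from collections import Counter

def rising_terms(old_posts, new_posts, min_mentions=3):
    """Return terms mentioned in more posts this window than last window."""
    def counts(posts):
        c = Counter()
        for p in posts:
            text = (p["title"] + " " + p.get("body", "")).lower()
            words = re.findall(r"[a-z]{4,}", text)  # crude tokenizer, 4+ letters
            c.update(set(words))  # count each term once per post
        return c

    old, new = counts(old_posts), counts(new_posts)
    rising = []
    for term, n in new.items():
        base = old.get(term, 0)
        if n >= min_mentions and n > base:
            pct = (n - base) / base * 100 if base else None  # None = brand new term
            rising.append({"term": term, "mentions": n, "increase_pct": pct})
    return sorted(rising, key=lambda t: t["mentions"], reverse=True)
```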

Try it: Start the frontend (npm run dev). Open http://localhost:3000. You should see the dashboard with your scout results. Click an opportunity to see the draft. Go to Settings and add a subreddit. Go to Knowledge Base and add an asset. Hit "Run Scout" and watch new results appear.


Milestone 6: Daily Email Digest + Deployment

The last mile: make it run without you.

Prompt:

Read the plans/ files. Add two final features: 1) Daily email digest — add a scheduled task that runs a scout every morning and sends an email summary to a configured address. Include: number of new opportunities, top 5 by confidence score (with draft responses), and any trending terms. Use Python's smtplib with Gmail SMTP (or any SMTP provider — store credentials in environment variables). Add the recipient email and schedule to the settings page. 2) Create a simple deploy script — a docker-compose.yml that runs both the FastAPI backend and Next.js frontend, plus a cron job for the daily scout. Also create a local start script (start.sh) that launches both servers for local development.

What Claude Code does: Claude adds the daily automation loop — the whole point of building this. Every morning the scout runs, scores posts, drafts responses, and emails you the results. By the time you open your inbox, the work is done. The docker-compose setup means you can run it anywhere — your machine, a VPS, or a service like Railway or Modal. The local start script keeps development simple.
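The digest itself is standard-library territory: compose a message with email.message.EmailMessage, send it with smtplib. A minimal sketch, assuming opportunity dicts carry title, url, confidence, and draft fields (function names and the exact layout are illustrative):

```python
import smtplib
from email.message import EmailMessage

def build_digest(opportunities, trends, recipient):
    """Compose the morning digest: top 5 opportunities plus trending terms."""
    top = sorted(opportunities, key=lambda p: p["confidence"], reverse=True)[:5]
    lines = [f"{len(opportunities)} new opportunities today\n"]
    for p in top:
        lines.append(f"[{p['confidence']}] {p['title']}\n{p['url']}\nDraft: {p['draft']}\n")
    if trends:
        lines.append("Trending: " + ", ".join(trends))
    msg = EmailMessage()
    msg["Subject"] = f"Reddit digest: {len(opportunities)} opportunities"
    msg["To"] = recipient
    msg.set_content("\n".join(lines))
    return msg

def send_digest(msg, host, port, user, password):
    """Send via SMTP with STARTTLS; credentials come from environment variables."""
    msg["From"] = user
    with smtplib.SMTP(host, port) as server:
        server.starttls()
        server.login(user, password)
        server.send_message(msg)
```

Splitting build from send makes the test-email endpoint in the Try it step easy: the endpoint can call build_digest with sample data and hand the result to send_digest.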

Try it: Run ./start.sh (or whatever Claude created) to launch both servers. Test the email by asking Claude to create a manual trigger: "Add a POST /api/test-email endpoint that sends a sample digest to my configured email." Check your inbox. If deploying: run docker-compose up and verify the app is accessible. The daily cron should handle the rest.


What You Built

Remember scrolling Reddit every morning, hoping to find threads worth responding to? You just automated the entire workflow. Your tool monitors subreddits, finds the opportunities, drafts responses in your voice, and emails you the results before you wake up.

  • RSS fetcher — pulls posts from any subreddit, no API key needed
  • Knowledge base — defines what's relevant to your business
  • AI scoring — classifies every post and assigns confidence scores
  • Tone-matched drafter — writes responses that sound like you
  • Dashboard — manage everything from a clean web UI
  • Daily digest — runs on autopilot, results in your inbox at 7 AM

The promotional ratio keeps you in Reddit's good graces. The trend tracker shows you what's rising before everyone else notices.


Take It Further

  • Add Claude Code agent teams — enable experimental_agent_teams: 1 in .claude/settings.json and rebuild with a backend dev, frontend dev, and QA agent working in parallel. Compare the speed and quality to the single-agent build you just did.
  • Multi-profile support — add separate profiles for different products or topics, each with its own subreddits, keywords, and tone
  • Auto-post mode — for responses above a 90% confidence threshold, auto-post directly to Reddit via PRAW (Reddit's Python API) with a manual review queue for the rest

Ready to build your first AI agent?

Live Zoom workshop + 1 month WhatsApp follow-up with Yuval Keshtcher (Hebrew)

Learn about the Workshop