This post lays out a day-by-day plan to ship a working web app in seven days, using AI as the primary builder. The stack: Next.js, Supabase or Turso, Vercel, Claude Code or Cursor, and Resend for email. The plan assumes you can read code at a basic level but cannot write a full app from scratch. Updated April 2026.
Why is seven days the right frame for a first AI-built app?
Seven days is long enough to ship something real and short enough to keep scope honest. A weekend is not enough for a working data model plus auth plus deployment. A month is so long that scope creep eats the project before you ship. Seven days forces a one-page spec, four to five features maximum, and a real user on day seven. That is the discipline. We have run this same seven-day loop with operators inside AI Masterminds who had never opened a code editor in their lives. The ones who shipped by day seven all did the same thing: they wrote the one-page spec on day one, refused to add features after day three, and used Claude Code or Cursor as the main builder rather than copying snippets out of ChatGPT into a separate editor. The pace is the point.
What is the day 1 plan (one-page spec and project scaffolding)?
Day one is the spec, not the code. Open a blank document and write down four things: what the app does in one sentence, the three to five core features, the data model in one paragraph, and the user flow from sign-up to first value. That is your spec. Do not start coding until the spec fits on one page. Once the spec is clean, scaffold the project. Use npx create-next-app@latest for the Next.js base, push it to a fresh GitHub repo, connect the repo to Vercel for auto-deploy on every push, and verify the default page is live on a Vercel URL. Getting the deployment loop alive on day one is the single most important thing you do all week. Every following day will close with a deploy that proves the app still works. Without that loop, the seven days collapse by day three.
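The day-one scaffold is a handful of terminal commands. A sketch, assuming the standard create-next-app and Vercel CLI flows (my-app and YOUR_USER are placeholders; swap in your own names):

```shell
# Scaffold the Next.js base; accept the defaults or pick TypeScript + App Router
npx create-next-app@latest my-app
cd my-app

# Fresh GitHub repo and first push (create-next-app usually runs git init for you)
git remote add origin git@github.com:YOUR_USER/my-app.git
git add -A && git commit -m "scaffold"
git push -u origin main

# Link the project to Vercel and trigger the first deploy;
# after this, every push to main redeploys automatically
npx vercel link
npx vercel --prod
```

You can also skip the CLI and connect the GitHub repo from the Vercel dashboard; either way, the test is the same: the default page serves from a Vercel URL before you stop for the day.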
What is the day 2 plan (schema and auth scaffolding)?
Day two builds the database and the auth layer. Open a Supabase project, write the table schema in the dashboard or via SQL, and enable email auth. The schema should map directly to the data model paragraph from your day one spec. Resist any urge to add tables that are not on the spec. Then open Claude Code in your project directory and ask it to scaffold the Supabase client, the auth pages (sign up, log in, log out), and a protected route that confirms a user is logged in. Prompt pattern: paste the schema, paste the auth requirement, and ask for the minimum viable implementation with no styling. The reason styling waits is simple. Styling without a working backend is decoration. Get the data flow alive first, dress it up later. Push to GitHub, watch Vercel redeploy, and confirm you can sign up, log in, and see the protected page. That is day two done.
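To make the schema step concrete, here is a hypothetical table for the habit-tracker example used later in the week, with row-level security so each user only sees their own rows. Table and column names are illustrative; yours should come straight from the data model paragraph in your spec:

```sql
-- One table per noun in the spec. Keep it minimal on day two.
create table habits (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null references auth.users (id),
  name text not null,
  created_at timestamptz not null default now()
);

-- Supabase-style row-level security: users can only touch their own rows.
alter table habits enable row level security;

create policy "own rows only" on habits
  for all using (auth.uid() = user_id);
```

Enabling row-level security on day two, while the schema is one table, is far cheaper than retrofitting it on day five.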
What is the day 3 plan (core feature implementation)?
Day three is the first real feature, the one your spec said the app does. If your app is a habit tracker, day three is creating and listing habits. If it is a CRM, day three is creating and listing contacts. If it is a writing tool, day three is creating and listing documents. The feature should be the simplest possible end-to-end version: create the record, list the records, view one record. No editing, no deleting, no advanced filters. Use Claude Code or Cursor to scaffold the create form, the list page, and the detail page. Prompt pattern: paste the schema for the relevant table, describe the user flow in one paragraph, and ask for the three pages plus the API routes that connect them. Test it manually, push to GitHub, and confirm the deploy works. By the end of day three, a real user could sign up and create their first record. That is the magic moment, even if the UI is plain.
What is the day 4 plan (second feature plus UI polish)?
Day four adds the second feature on top of feature one and does the first pass of styling. The second feature is whatever turns the app from "can store data" to "produces value". For a habit tracker it is daily check-ins. For a CRM it is logging an interaction. For a writing tool it is exporting or sharing a draft. Build the second feature first, then spend the second half of the day on UI polish using a component library like shadcn/ui (which Claude Code can install and apply across the existing pages in one or two prompts). Polish does not mean "make it beautiful". It means "make it not embarrassing". The bar is that a friendly user lands on the app, understands what it does in five seconds, and can complete the core flow without asking you a question. That is the day four bar. Higher polish waits for after launch.
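The shadcn/ui pass, if you run it yourself rather than through Claude Code, is two commands. A sketch assuming the current shadcn CLI (the component list is just an example; add only what the core flow uses):

```shell
# Initialize shadcn/ui in the existing Next.js project
npx shadcn@latest init

# Pull in the handful of components the core flow actually needs
npx shadcn@latest add button card input form
```
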
What is the day 5 plan (deployment, DNS, and analytics)?
Day five locks the deployment. Buy your domain (Cloudflare, Namecheap, or Vercel Domains all work), point it at the Vercel project, and confirm the custom domain serves the app with a valid SSL certificate. Wire up basic analytics: Vercel Analytics for traffic, Plausible or PostHog for behavior. Set up Resend for transactional email so password resets, sign-up confirmations, and any future product email can ship from your domain. Test the full flow once: sign up with a fresh email, confirm the email arrives, log in, complete the core feature, and check that analytics records the visit. The Vercel platform updates (2025) make this whole loop a single-afternoon job. Day five is the day where the app stops feeling like a prototype and starts feeling like software. The custom domain matters more than people think for that psychological switch. See Cursor vs Claude Code vs Continue for our take on the builder layer choices that drive day three to day five.
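Transactional email through Resend is one HTTPS call. A TypeScript sketch of the request construction, following Resend's public HTTP API (the sender address is a placeholder, and the actual fetch is left commented out so the sketch stays side-effect free):

```typescript
// Build the request for Resend's send-email endpoint (POST /emails).
// The from address must be on a domain you have verified in Resend.
type EmailRequest = {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
};

function buildResendRequest(
  apiKey: string,
  to: string,
  subject: string,
  html: string
): EmailRequest {
  return {
    url: "https://api.resend.com/emails",
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      from: "App <noreply@yourdomain.com>", // placeholder sender
      to,
      subject,
      html,
    }),
  };
}

// To actually send:
// const req = buildResendRequest(process.env.RESEND_API_KEY!, user.email, "Welcome", "<p>Hi</p>");
// await fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
```

In practice you would use Resend's official SDK instead of raw fetch; the point of the sketch is that the whole email layer is one authenticated POST, not infrastructure.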
What is the day 6 plan (user testing and bug bash)?
Day six is the brutal day. You stop building and start watching real users. Send the link to five people you trust to be honest. Three should be in your target audience, two should be smart strangers. Watch them use the app on a screen share if you can, or read their messages closely if you cannot. Make a list of every confusion point, broken flow, and visual problem they hit. Sort the list into three buckets: critical (blocks the core flow), high (annoying enough to lose a user), and cosmetic (would be nice). Spend the rest of the day fixing critical and high. Ignore cosmetic. Push the fixes to GitHub, watch Vercel redeploy, and run the five-user test again with two new users. Day six is where most first-time builders try to add a new feature instead of fixing what is broken. Resist that. The list of broken things from real users is always the highest-value backlog you have. See the AI How-To pillar for the broader pattern of testing-driven shipping that we run across every build week.
What is the day 7 plan (launch and first marketing post)?
Day seven is launch. Pick the smallest possible launch surface that still creates pressure: a personal LinkedIn post, a tweet thread, a Product Hunt submission, or a launch email to your existing list. Do not try to launch on five surfaces at once. One channel is enough to get the first ten real users. Write the launch post in the morning, ship it before lunch, and spend the afternoon answering every comment, message, and bug report that comes in. Most first launches get five to fifty users in the first twenty-four hours. That is enough signal to know if the app has any life in it. If users come back on day two, you have something. If they do not, you learned what does not land for the cost of one week. Either outcome is a win compared to building in the dark for six months. Anthropic's Claude Code launch (February 2025) and the Cursor releases tracked in its 2025 changelog made this seven-day pace realistic for non-developers for the first time. Use that.
What are the most common ways the seven days break?
Three patterns break almost every first attempt. First, the spec is too broad on day one. The fix is to cut feature five before you start, not after. Second, the deployment loop is not alive by end of day one. The fix is to redo day one before touching day two, even if it costs you a day on the calendar. Third, the bug bash on day six gets skipped because the build feels almost done. The fix is to enforce the bash by setting up the five-user test before you start building, so the calendar invite already exists. Beyond those three patterns, the most common drift is feature creep on day four and day five, where the model happily builds a fourth and fifth feature when you ask. Hold the line. The seven-day plan is a discipline, not a tooling problem. The tools are good enough. The constraint is you. If you want this guided in person rather than alone in a chat window, the Vibe Coding for CEOs workshop runs the same seven-day loop with live coaching across the week. Otherwise, ship the spec, open Claude Code, and start day one.
The community of operators running this same seven-day loop on their own products lives inside AI Masterminds. Build first, share the build, and the feedback loop tightens fast.
FAQ
Do you actually need to know how to code to do this in 2026?
Not in the traditional sense. You need to read code well enough to spot when the model is hallucinating an API and you need to understand the shape of a web app (frontend, backend, database, auth, deployment). Most non-developers we coach pick up the reading skill in the first two days of building. Writing code from scratch is no longer the gating skill. The new gating skill is being able to describe what you want clearly, spot when the model is wrong, and stay narrow on scope. Those three skills together replace about eighty percent of what a junior developer used to do.
Why Next.js and Supabase as the default stack?
Two reasons. First, both have deep documentation and a huge corpus of open source code, which means models like Claude Sonnet 4.5 and GPT-5 have seen them at scale and produce reliable code. Second, the deployment loop is short: Vercel deploys Next.js in under sixty seconds, Supabase gives you a working Postgres plus auth in five minutes. The stack removes the most painful infrastructure friction. Other stacks work, but they cost you time on day one. The seven-day plan only holds if the infrastructure is invisible. Next.js plus Supabase makes it invisible by default.
What if the AI gets stuck on a bug for hours?
The pattern that breaks every long bug loop is the same: stop, write a clean reproduction case in a fresh chat, and feed only the reproduction plus the relevant file to the model. Long context bug threads accumulate noise, and the model starts patching symptoms instead of causes. Resetting the conversation costs fifteen minutes and almost always solves the bug in the next two prompts. If a bug survives three reset attempts, the issue is usually architectural, not local. That tells you to step back, draw the data flow on paper, and rewrite the offending function from a clean spec rather than patching it further.
How much should you expect to spend on tools across the seven days?
Roughly the cost of two dinners out, in the low tens of dollars. Claude Code and Cursor both have paid plans in that range. Vercel is free for hobby use, Supabase is free up to a real production tier, Resend gives you free email up to a few thousand sends per month. The expensive part is the initial domain registration and the time you spend, not the tooling. Most non-developers overestimate the tooling cost by a factor of ten because they assume building software still requires expensive infrastructure. In 2026 it does not. The cost has shifted entirely to time and judgment.
What if you do not finish in seven days?
Most first attempts do not finish in seven days. They finish in nine or ten, and the lesson is almost always the same: the spec on day one was too broad. The fix is to ruthlessly cut feature five and ship feature one to four with real users, then iterate. Shipping at day ten with four working features is a much better outcome than not shipping at day fifteen with seven half-finished features. The seven-day frame is a forcing function, not a deadline. The point is the discipline of daily ship pressure, which is what produces real software, not the calendar itself.
Sources
- Claude Code: agentic coding in the terminal · Anthropic · February 24, 2025
- Vercel platform updates (2025) · Vercel · December 1, 2025
- Cursor product changelog · Cursor · November 20, 2025
- Supabase platform overview · Supabase · September 15, 2025

