2) Let's Get #Technical

Piecing together something that just might resemble a modern tech stack.

Behind-the-scenes building Vambrace AI, a company on a mission to forge stronger relationships with users. (typos are to make sure you’re paying attention)

Introductory Remarks

Dear Vambracers —

In last week’s post, Introducing: Vambrace AI, I announced the launch of my user insights initiative and discussed the origins of this endeavor. I also tried to describe the idea space that I’m interested in and some of the perceived problems that I aim to address. It marked the first public step of many to come in pursuit of forging stronger relationships between companies and their users.

A Modern Tech Stack

Today, I wanted to discuss the technological exploration I’ve embarked upon over the past 2ish months, and go behind-the-scenes on my current tech stack. Apologies in advance for improper terminology or imperfect understandings of some concepts—I’m learning as I go. But, I think, the magic of the journey has been how intuitive and personalized the “learning as I go” approach can be in an AI-first world.

Core Toolkit

The following services are the picks and shovels for probably 95% of the work I’ve done thus far, and I’ll try to guess-timate the rough distribution of labor across them. This entire journey has been largely technology-driven, and over time I’ve developed some comfort with the unique skillsets and aptitudes of each tool, along with my own personal workflows that leverage multiple models to arrive at a targeted outcome. Okay, let’s get into it!

ChatGPT: Mentor, Co-Architect, and Confidante

  • Distribution of Labor: ~15%

  • Core Use(s): software development mentorship (I made a “Coding Ninja” GPT), architecture design, future feature feasibility checks (cursory), light business research (e.g., competitive arena, pricing, unit economic modeling)

  • Subscription: $20/month Plus tier

ChatGPT popularized AI in Fall 2022 and has (I think) held a brand (if not technical) lead since then. It’s become especially clear in the past few months that a lot of value within the AI value chain will accrue to the application layer, and OpenAI has been taking concrete steps in that direction (s/o Jony Ive). To that end, I do think the user experience is overall pretty delightful, and I feel like I can mostly trust the veracity, quality, and intent of the output. It also doesn’t feel quite as “techy” and/or nerdy as a Claude, and it feels moderately heftier than a Perplexity.

I’ve found myself really leveraging ChatGPT’s voice chat feature on car rides or walks to talk through (and often dispel) some of the more esoteric or longer-term fears I want to address or features I might want to build. It helps me quickly discern whether something is mostly technically feasible and how that implementation, at a high level, would work.

Some downsides: there was one specific instance where I was implementing a vector database (s/o Pinecone) and trying to figure out how to get an embedding model up and running. ChatGPT recommended that I use DeepSeek’s embedding model, since I’m using DeepSeek’s chat model for the rest of the platform, and it took me about two hours to realize that DeepSeek doesn’t actually have an embedding model. That was my scariest moment with ChatGPT, and a good reminder not to always take its responses at face value.

I also think that the output is only as good as the context. It became burdensome to keep ChatGPT up to date with all the changes happening in my actual platform, so if I came to it for help building a pitch deck outline or creating my landing page, there was extra learning that had to happen before it could accurately represent the current version of the platform. So that was also a learning.

But in general I’ve come to enjoy ChatGPT’s intuitive UX/UI and think that the GPT, Project, and Chat features have been really fundamental to my broader business development and coding conceptualization workflows. It’s also so nice to have something to which you can literally say: “explain it to me in excruciating detail like I’m 5” without judgement.

Cursor: My First Code-Guide

  • Distribution of Labor: ~40%

  • Core Use(s): coding, feature implementation plans

  • Subscription: $20/month + (I’m sure) many one-time credit payments to come

Cursor was my introductory product into the world of AI-assisted coding (or #vibecoding), and I think the brand is generally well-regarded. I know GitHub Copilot was maybe one of the first AI-assisted coding tools to pop up, and then possibly also Replit (?), but Cursor feels like it really caught the wave and has experienced rapid growth. I think they were most recently valued in the ~$9B range, after rejecting an acquisition offer from OpenAI (unlike Windsurf). Overall, I’ve really loved the experience. I built v1 of my platform’s core functionality in Cursor and will always have a soft spot for it because of that.

I will say here that I did leave Cursor pretty seriously for about a month and did all of my coding in Windsurf, because I thought the models were a bit better and it was easier (and, I think, more cost-effective) to do a lot of Claude 3.7 Sonnet work there. That model understands codebase context really well and can propose more effective solutions. I needed to be a bit more handholdy with Cursor’s AI agent and auto-model approach (I just didn’t realize at the time that there were alternative approaches).

But then within the past three days I’ve actually come back to Cursor with a vengeance, because they exclusively offer (as of this writing) Claude 4.0 Sonnet, which is SCARY good, and so I’ve been doing everything in Cursor because of that. I also discovered the preferences panel, which allowed me to go from dark mode to dark blue mode, and that has had an inordinately positive impact on my experience. Superficial? Possibly. A real consideration? You bet.

I think it’s also fascinating to watch the broader foundation model companies engage in more intense competition and take that competition to different venues within the broader AI world. For example, my understanding is that Anthropic isn’t allowing Claude 4.0 Sonnet on Windsurf, probably because Windsurf was acquired by OpenAI, and that’s actually a big deal imo, because Claude has carved out a reputation as the best model for coding. So I’m fascinated to see how these and other battles continue to play out, and I’m dead set on slurping up as much consumer surplus as possible while Masa subsidizes our use.

The last thing I’ll mention here, before I get to Windsurf, is that I have negative loyalty when it comes to a critical function like coding. The leverage associated with better models and more intuitive user experiences is massive, so I’ll go to whoever has the best model and isn’t going to charge me hundreds of dollars for it. I do think brand matters, but for coding I’m really just looking for results and ease of use, and I’m not going to be too dogmatic or principled about it. No holy wars here; just code.

Windsurf: My Second Code-Guide

  • Distribution of Labor: ~35%

  • Core Use(s): coding, feature implementation plans

  • Subscription: $15/month + ~$40 in one-time credit payments (so far)

This will be a bit shorter because I talked about Windsurf a lot in the previous section, but I really do love Windsurf. At first I thought it was Cursor’s ugly step-cousin or something, and for whatever reason I didn’t give it much thought. Then a mentor indicated that Windsurf was actually better than Cursor, so I gave it a try and loved it. I thought the user experience was a bit more intuitive, and 3.7 Sonnet was really powerful when I first transitioned over. I also probably like the branding and the vibe a bit more; it feels like its arms are a bit wider and there’s less judgement for the vibe-coders who set up camp here. Like, we’re all just trying to ride some waves.

With that said, I do think it’s an absolutely devastating development if Anthropic prevents Windsurf from accessing its latest models. I’m probably more loyal to the underlying model at this point when it comes to coding, and so I’ve built a real dependence on Claude for actually doing the coding—and I honestly don’t think I could trust any other foundation models at this point for the most important implementations and system “rearchitects.”

So I think my current position on Windsurf is: “I love you, but I think it’s best if we continue to see other people, and then we can see where this goes.” Like I really honestly am partial to Windsurf, but there’s this crazy little thing called Claude—and she really is the object of my coding-affection (please don’t take offense at my gendering of the model).

Before I move on, I’ll say more broadly on AI-assisted coding tools: I’ve started to develop more intuition and feel for what the tools can effectively accomplish and what requires more oversight on my end. For example, sometimes they go down package rabbit holes that ultimately aren’t productive and can ruin the entire codebase, because the codebase relies on a certain set of packages that the model tries to change to accomplish some narrow task. I’ve also gained more confidence in stepping in, stopping the model, asking why it’s doing something, and saying: “no, we’re not going to worry about that now.” So it’s been an interesting area of growth for me.

Okay, last main thing here: I’ve also started using the coding tools for more and more non-coding business deliverables. For instance, for the landing page I created (more on that later), I basically asked ChatGPT to find the best landing pages on the web, gave it some that I really liked, and had it distill those landing pages into sections, descriptions, etc., with instructions. From there, I gave that to Cursor / Windsurf and said: “Based on what you know about this platform, please populate the following list for a landing page.” That created some pretty powerful outputs that were amazingly high-fidelity in terms of communicating what the software actually does.

Okay, and then the actually-last main thing here: my main area of remaining anxiety is taking code from Windsurf and Cursor and effectively executing database changes in Supabase. I’ve gotten pretty comfortable with Windsurf → SQL Code → Supabase SQL Editor → Confirmation of Output, and that’s been good enough for now. But I’m never as confident in that work as in what happens within the actual codebase, because Cursor and Windsurf are kind of flying blind there.
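One small habit that has lowered my anxiety with that workflow: wrapping whatever SQL the coding tools hand me in a single transaction before pasting it into the SQL Editor, so a half-applied change can’t strand the database mid-migration. A minimal sketch (the table and column names here are made up for illustration, not from my actual schema):

```python
def build_migration(statements):
    """Wrap a list of SQL statements in one transaction, so the script
    either applies fully or rolls back with nothing half-done."""
    lines = ["BEGIN;"]
    for stmt in statements:
        # Normalize trailing semicolons so every statement ends cleanly.
        lines.append(stmt.strip().rstrip(";") + ";")
    lines.append("COMMIT;")
    return "\n".join(lines)

# Hypothetical statements a coding tool might hand back:
migration = build_migration([
    "ALTER TABLE interviews ADD COLUMN sentiment_score numeric",
    "CREATE INDEX idx_interviews_user ON interviews (user_id)",
])
print(migration)
```

If any statement fails, the whole block rolls back, which matches how I want Confirmation of Output to work: all-or-nothing.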

Claude Sonnet 3.7 / 4: My Code-Guides’ Paintbrush

  • Distribution of Coding Labor: ~95%

  • Core Use(s): coding

  • Subscription: $0 (I just use it through Windsurf and Cursor)

Not going to spend much time here, but I think Anthropic has done a great job of owning the coding space—and it’s been fairly obvious to me throughout my journey that Claude’s Sonnet models have consistently delivered the best results. Of all loyalties within my broader AI stack, I would say that I’m most loyal to Claude. I just feel like we’ve developed an understanding of each other (or, really, I’ve developed an understanding of it) and so I feel comfortable that we can tackle pretty much any task together. I don’t know why I feel so strongly about Claude as compared to ChatGPT or other models—but I just really trust it. ChatGPT almost tries to be too cute or something—and Claude just knows how to get sh*t done.

Gemini: Occasional “Celeb-shot” Analyst

  • Distribution of Labor: 5%

  • Core Use(s): deep research

  • Subscription: free (I think, with my Google business account?)

Gemini has, I think, regained some popularity of late as Google has tried to reestablish itself within the broader AI narrative. Google was of course super early in the space with DeepMind, and so it had an intense, research-y first-mover advantage here. Then I think they got a bit complacent, failed to commercialize effectively, and therefore also failed to grab humanity’s attention and be positively part of the zeitgeist (because they most certainly are negatively part of the zeitgeist, #innovatorsdilemma). But Sergey Brin has been more public lately, and there’s no question that Google has some of the best tech, which is like its whole thing, so I’ve been experimenting more with it.

Thus far, I’ve found great use in its Deep Research product, which I’ve used several times for product roadmap research, competitive research, and stuff like that. It feels a little more business-like and heftier than ChatGPT’s Deep Research, and much heftier than Perplexity’s, so that’s been a fun development. I’m sure I’ll continue exploring Gemini moving forward, especially for more frontier AI, including video generation (Veo looks pretty insane) and screen-sharing context (AI Editor, I think it’s called) and stuff like that, because I still trust Google to be way out at the bleeding edge of the technology here.

GitHub: My Fearless Code Accountant

  • Distribution of Labor: NA

  • Use Case(s): code management (obviously), peace of mind

  • Subscription: free (I think)

I have an interesting relationship with GitHub because, before I started on this journey, GitHub always felt kind of unapproachable and confusing and scary to me, so I expressly stuck to coding or technical stuff that didn’t seem to require GitHub (which I now realize is silly). But now I’ve built a strong relationship with the platform, and almost feel like I should be thanking it for the privilege of its use.

Like, whenever I type those words: “git add .” / “git commit -m ‘…’” / “git push origin [branch]”, I feel like I’m doing covert codebreaking operations or something. It feels as if I’ve arrived at the peak of some mountain of technical competence, paying fealty to the great and powerful GitHub god. I’m not sure I’ll ever feel truly secure in my relationship with GitHub, and I think that’s ultimately something I’d work on with an actually-technical co-founder.

Notion: Information Hub

  • Distribution of Labor: 5%

  • Use Case(s): general information management

  • Subscription: free (I hope)

Not much to say here that hasn’t already been said. Notion is the “lego” of the business software world, and I enjoy making my little creations. I’ve found a lot of value in documenting longer-form implementation plans, competitive research, and AI architecture explanations in Notion. I’m not positive what the short-term value will be, but I’m hopeful that eventually there will be some real benefit. Really, Notion is great at making me feel like I’m being very #operationallyexcellent and that all cylinders are firing, even if the jury is out on longer-term real impact. To feel is to be; and to be is to is.

Software Stack

The following services and platforms are more software-related, and I don’t have quite as strong relationships with or opinions on them. In general, with the exception of DeepSeek, I arrived at these platforms on the recommendation of ChatGPT or from listening to an episode of the Acquired podcast, so I don’t have strong belief systems that inform their use. With that said, let’s explore.

DeepSeek: My (Cheap) Interview Analyst

  • Use Case(s): underlying AI for my platform

  • Subscription: $/token—I’ve spent ~$0.60 in the past month and a half
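For context on how the token math shakes out, here’s the kind of back-of-envelope calculation I run. The per-token prices and usage numbers below are placeholder assumptions for illustration, not DeepSeek’s actual current rates; always check the live pricing page:

```python
# Back-of-envelope token-cost math. Prices are assumed, NOT real rates.
PRICE_PER_M_INPUT = 0.27   # assumed $ per 1M input tokens
PRICE_PER_M_OUTPUT = 1.10  # assumed $ per 1M output tokens

def monthly_cost(input_tokens, output_tokens):
    """Estimate API spend from total token counts."""
    return (input_tokens / 1e6) * PRICE_PER_M_INPUT \
         + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT

# e.g. ~100 interview analyses at ~15k input / ~2k output tokens each
cost = monthly_cost(100 * 15_000, 100 * 2_000)
print(f"${cost:.2f}")
```

Even with generous usage assumptions, the total lands well under a dollar a month at these kinds of rates, which matches my experience so far.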

DeepSeek of course shook the American stock market earlier this year with the introduction of its model and claims around its low-cost development and the technological innovations that underpinned said cheapness. I’m not here to debate the merits of those cost claims, but I will say that I did some research, and it is meaningfully less expensive than if I had used OpenAI for my backend AI work, and for that I’m grateful.

Obviously with any discussion of a foreign model there’s some concern around privacy and security—and those are definitely legitimate. But cost is king right now for where I am in my development, and I honestly don’t think I’m giving DeepSeek any information that the Chinese government would be even remotely interested in—so, the cost tradeoff is just worth it right now. Longer-term, I think I’ll fork the model or try to build out our own secure model or something like that. But for where I am right now it’s been amazing.

Vercel: My Portal to the World

  • Use Case(s): web hosting

  • Subscription: ~$14/month (I think)

Again, no real strong opinions here, other than that I love the Acquired podcast, and they had Vercel’s Founder & CEO on, so that was enough for me to take the leap. So far it’s been a pretty seamless experience, and knowing I wanted to use Vercel helped me decide on other pieces of my tech stack in the earlier stages of the process.

The only negative experience I had here: in my general rush to get something up w/r/t the landing page, I set up my account with the wrong GitHub account (I know, I can’t believe it either). That turned into a whole ordeal, because when I committed changes to GitHub they weren’t successfully being deployed by Vercel. So I deleted my Vercel account and my GitHub account, then didn’t realize I had to wait a week to set up the Vercel account again, and when I submitted a help request to the Vercel team they didn’t respond. I was just offended that they weren’t as sensitive to my need-for-speed as I would have liked. But that’s life as a #vibecoder, I guess.

Supabase: Unstructured Excel

  • Use Case(s): database management

  • Subscription: free (for now)

I had no idea what Supabase was, or really what databases actually were, before this entire endeavor, and I’ve grown to love what databases have to offer. I didn’t know anything about Supabase in particular, but I talked it through with ChatGPT and ultimately decided to go forward with it. It’s open-source and has a pretty solid UX, so it’s been pleasant thus far; no complaints. It was between Supabase and Firebase, and, without really knowing enough to have a strong opinion, I just went with the non-Google option. I think Supabase is also Australian, which doesn’t not matter and isn’t not fun.

But working with Supabase has also been probably the most difficult piece of the broader technical puzzle as it relates to the entire workflow of getting a real piece of software up and running. I’ve of course heard of databases before this and I even know a good bit of SQL from my college days, but I never had an actual grasp of what role databases play in technology. So that’s been a wonderful and challenging experience.

It’s also interesting because I now have a much better appreciation for backend and frontend stuff, and for making them cohesive and sing a coordinated song together, and I see how the backend could grow ridiculously complex before you know it. So that’s been a realization for me: I have to actually spend a lot of my intellectual energy on backend architecture and management. I can’t just rely on Cursor and Windsurf here, because they don’t really know everything that’s going on, and yet it’s probably the most critical system for me to design.

And the last thing I’ll mention here is that I’m absolutely terrified of the actual launch (within a week!), because that’s when I have to really figure out the production database vs. development database split (I know it relates to environment variables, I think), and I still don’t really know how I’m going to navigate that. That’s a perfect example of something I’ll talk through with ChatGPT and go very, very slowly on when it’s time to actually get going.
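For what it’s worth, the pattern I keep hearing about is dead simple: the code never hardcodes which database it talks to; an environment variable decides. A hedged sketch of what I think this looks like (the variable names are hypothetical, not from my codebase):

```python
import os

def get_database_url():
    """Pick the database connection string from the environment.
    Locally this would point at a dev project; in the hosting
    provider's production settings it would point at the real one."""
    env = os.environ.get("APP_ENV", "development")
    url = os.environ.get("DATABASE_URL")
    if url is None:
        raise RuntimeError(f"DATABASE_URL is not set (APP_ENV={env})")
    if env == "production" and "localhost" in url:
        # Guardrail: refuse an obviously wrong pairing.
        raise RuntimeError("Production env pointed at a local database")
    return url

# Simulate a local dev shell:
os.environ["APP_ENV"] = "development"
os.environ["DATABASE_URL"] = "postgresql://localhost:5432/dev_db"
print(get_database_url())
```

The same code then runs unchanged in production; only the environment it lands in differs.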

Pinecone: My Vector Tree (I guess somehow vectors are kinda like pinecones??)

  • Use Case(s): vector database management

  • Subscription: free (for now)

There is a custom chat component to my platform (s/o Pearl), and I honestly had no idea how to implement it. After talking with some friends in the space and conversing with ChatGPT, I learned about the concept of a vector database, and chunking, and embedding, and all that good stuff. It’s one of those things that seems kind of complex but feels super obvious and intuitive once you dive into it. I did some research, and ChatGPT recommended Pinecone. So far it’s been a pretty painless experience, but again, this is where a lot of my time goes: just making sure that everything on the backend happens when we expect it to.

For my platform, the core user-function is interview upload + delete; that’s really pretty much it. When an interview is uploaded, that kicks off an entire backend process: sending the transcript to AI for processing, chunking and embedding the transcript and storing it in Pinecone, plus a lot of other ancillary work around insights stored in other structures within Supabase. It’s been the most work to make sure all of that happens effectively when interviews are uploaded and when they’re deleted. The thought of random scraps of deleted interview transcript lingering, or any permissioning issues around RLS (row-level security) and making sure interviews are restricted to each individual user (obvious, but something you have to think about), terrifies me. That is my biggest fear: that we launch and there’s some snafu around backend stuff, and a ton of cleanup and time gets spent there.
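To make the upload flow concrete, here’s a rough sketch of the chunking step, plus the metadata idea that makes deletes and per-user restrictions tractable. The chunk sizes and field names are illustrative assumptions, not my production values:

```python
def chunk_transcript(text, chunk_size=500, overlap=100):
    """Split a transcript into overlapping word windows; the overlap
    keeps context that straddles a chunk boundary retrievable."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

# Each chunk would then be embedded and stored in the vector index with
# metadata like {"interview_id": ..., "user_id": ...} (hypothetical
# field names), so a delete or a per-user query can target exactly the
# right records instead of leaving orphaned scraps behind.
chunks = chunk_transcript("word " * 1200)
print(len(chunks))  # → 3
```

The delete path is then the mirror image: remove every vector whose metadata carries the deleted interview’s id, so nothing survives the interview it came from.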

So, as is probably obvious, this is definitely an area where I anticipate a lot of future work, but also where I think I’ll get the most organizational ROI in terms of really knowing how this stuff works. A recurring theme is that none of this feels fundamentally above me intellectually; it’s just a matter of not getting scared or discouraged and chipping away until understanding slowly reveals itself. I also think my academic background positions me well not to be overwhelmed by all of this. I generally understand how data works and relates to each other and all that fun stuff. I just need to really commit here and jump fearlessly into the messiness.

HuggingFace: My Translator

  • Use Case(s): embedding model

  • Subscription: free (or maybe $ / token?)

I had heard of HuggingFace before starting this journey, but I didn’t really know what it did, and it also felt a little quirky or whimsical or something; like, what really is a hugging face? But this is where I turned when ChatGPT led me astray on a DeepSeek embedding model, and it’s been a pretty seamless experience thus far plugging into a HuggingFace open-source embedding model.
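In case the embedding idea sounds mystical: the model just turns text into a list of numbers, and retrieval ranks stored chunks by how closely their vectors point in the same direction as the query’s. Here’s a toy pure-Python version of that ranking step, with hand-made 3-d vectors standing in for real model output:

```python
import math

def cosine_similarity(a, b):
    """1.0 means same direction; near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy vectors standing in for embedding-model output; the labels are
# made-up chunk names for illustration.
query = [1.0, 0.0, 1.0]
stored = {
    "pricing complaint": [0.9, 0.1, 0.8],
    "feature request": [0.0, 1.0, 0.1],
}
best = max(stored, key=lambda k: cosine_similarity(query, stored[k]))
print(best)  # → pricing complaint
```

A real embedding model does the same thing in hundreds of dimensions, which is what lets the chat component pull back the transcript chunks most relevant to a question.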

I honestly haven’t explored the platform much beyond that, and it’s not open for me on a regular basis, so I don’t even know how much I’m being charged or whether I even gave a credit card. But this is definitely a platform I intend to explore in the future, especially as we start thinking about more specialized and/or custom models, like emotional embeddings across interview corpuses. That’s just a random thought that’s been percolating for quite some time.

But I think my experience with HuggingFace reinforces a more general sense of comfort and safety throughout this entire process. It’s maybe super obvious, but nothing I’m doing is technically novel, really; I’m just trying to innovate around application and use case, and I think I can compete pound-for-pound with anyone in the world on work ethic, evangelization, and fervor. HuggingFace was a good example of a feeling that’s colored the edges of this entire experience, which is: pretty much anything I can think of is possible, has been done before, and I can figure out how to do it right now. There are all these tools and platforms and services literally built to make exactly what I want to do as seamless and low-lift as possible. So if I’m willing to stomach some short-term discomfort and pain around not getting it perfectly right on the first try, that’s okay. Because this is possible; and I will get it done. I just might feel like a fish out of water for a few hours before I get there.

React: The Muscles (maybe??)

  • Use Case(s): the language that animates the platform

  • Subscription: free (I think just a package?)

So this isn’t really software, necessarily; or maybe it is and I’m just mischaracterizing things, but I know the right word for React is “framework,” and I know it’s what I’m using for my platform. I’m pretty sure I heard somewhere that React worked well with Vercel, and maybe that Vercel’s founder invented React or maybe Node.js or something before starting Vercel, so I just accepted that at face value and made a hardcoded decision early on to use React. I also think I had heard that coding tools were pretty good at React, because there was a lot of information about React on the web that the models trained on. So far, I’d say it’s been great. I just don’t really have any frame(work) of reference here, so I don’t know what I don’t know. But I’ve been pleased with the look and feel of the platform I’ve built.

SendGrid

  • Use Case(s): email automation

  • Subscription: free (I think? Honestly totally forget)

The last thing I’ll mention here is SendGrid. I went through a brief phase of Jeff Lawson fandom maybe 3 or 4 years ago, so I was familiar with Twilio, and of course SendGrid was acquired by Twilio; that’s how I had some familiarity with it. I’ve used SendGrid for email automation when folks sign up for the waitlist online, and I’m working on building some type of 2-month drip campaign for after people join the waitlist. Honestly, I failed to successfully implement that, unfortunately, but that felt more on me than on SendGrid, and then it just wasn’t a priority. I definitely plan to lean in more here in the future. It’s also fun because you can create email templates using HTML, I think, so I can use Cursor, Windsurf, and ChatGPT to make super on-brand emails that look and feel really compelling, which has been wonderful.
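Since SendGrid will happily take a raw HTML body, the template side can be plain string substitution before anything touches their API. A tiny sketch with a made-up welcome email (the actual send would still go through SendGrid’s client with an API key):

```python
from string import Template

# A made-up welcome email; in practice the HTML would be drafted and
# polished with the AI tools, then stored as a template file.
WELCOME_HTML = Template("""\
<html>
  <body style="font-family: sans-serif; color: #1a1a2e;">
    <h1>Welcome to the waitlist, $first_name!</h1>
    <p>You're number $position in line. More soon.</p>
  </body>
</html>""")

def render_welcome(first_name, position):
    """Fill the template's placeholders for one recipient."""
    return WELCOME_HTML.substitute(first_name=first_name, position=position)

html = render_welcome("Ada", 42)
print(html)
```

The rendered string is what would get handed to the email client as the HTML content of the message.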

Some Specific Workflows I Enjoyed

Now I’m going to share, at a high level, some specific workflows that gave me a sense of mastery and turned something that might have taken tens of hours in the past into a sub-hour experience.

Landing Page Creation

I launched my public landing page last week, which was very exciting, and I thought it would be a difficult process, but it honestly took me about 30 minutes. I mentioned this earlier, but I basically asked ChatGPT to find some of the best B2B SaaS landing pages, identify their structure, and create an outline of what a great, leading-edge landing page would look like. It deconstructed that into four sections and provided descriptions, instructions, etc. Then I fed that into Windsurf and prompted it to populate the outline with information from my platform, based on its understanding of what my platform does (as told through code).

I then took that, created a separate Project in my code workspace, and asked it to create a cutting-edge, modern landing page in the style of other B2B SaaS companies, uploading the material it had created. I gave it general color and brand guidelines and told it to act like a top-level designer at a top tech company, and, voila! I was really pleased with the final result, and it took about 30 minutes from first request to final output.

Insights Revamp

Separately, a big area of focus lately has been figuring out how to make my platform and the insights I’m surfacing more relevant, insightful, accurate, etc. At first I was kind of just spitballing general stuff around sentiment scores, pain points, feature requests, and so on. But I realized somewhat recently that I needed more real business heft behind the recommendations and analysis, and, separately, that I needed to root them in some contextual narrative. So I’ve been spending a lot of time there, and that will be my last “lift” before launch in about a week (fingers crossed).

But I honestly wasn’t sure what to do here and was feeling overwhelmed. So I told Windsurf to create a comprehensive overview of my platform as it currently existed, and saved that as a PDF. I then took that context, gave it to Gemini Deep Research, and said: “This is my platform, this is what it does, and this [the insights] is pretty much the most important component of what I’m building. Please research all you can around best practices for deriving actionable insights with clear business outcomes, and then turn that into a comprehensive set of recommendations for an insights structure for my platform.” It went off for about 20 minutes and came back with a super comprehensive overview of my challenges and a recommended course of action.

I then fed those recommendations into ChatGPT to shorten them (since I can’t put a 20-page research report into Windsurf), took that abridged copy, put it into Windsurf without manipulating any code, and asked for a comprehensive feasibility analysis and implementation plan for revamped insight capabilities that adhered to Gemini’s research. Windsurf helped me design a whole three-pillar analysis system and then went forward and implemented it. The entire process took maybe 3-4 hours from initial Deep Research to actual implementation, and a lot of the implementation difficulties were actually around backend manipulation. But I got it up and running relatively quickly.

From there, that unlocked me to think more deeply about what I wanted the platform to be and the type of insights I wanted to provide, so I was able to build off that revamp and add some other narrative insights layers, which I think will be interesting (and hopefully solve a REAL business pain point). This entire process was amazing and showed how it’s possible to stitch all the models together to create and implement a comprehensive, platform-defining restructure, going from idea to code in a matter of days.

Now, some quick-hitting learnings that have surfaced throughout the process thus far.

#1) I still mostly need to understand what’s happening

The general perception of vibecoding is of a hands-off non-technocrat who just presses accept constantly and sees where things go. I definitely started out a bit more in that camp. But, even from Day 1, I wanted to leverage the AI tools to help me develop some intuition for how it all works. And I’ve realized over time that: (1) there’s really nothing happening that I can’t understand; there’s no real secret here, it just takes time and trial-and-error to wade through the detritus, but it’s super possible; and (2) I still need to, should want to, and lowkey have to understand generally what’s going on, particularly around core architecture, data flows, and the like.

I must be principled and opinionated about how I think things should work, and then it's up to me to engage with my AI stack to make sure I dig to the bottom of issues and offer thoughts on how we can sequence or modify things to arrive at the desired outcome. Feeling like I have a worthwhile opinion here has been a big area of growth over the past month or so, and being able to guide some fundamental architectural decisions has been fun.

#2) There’s no shame in taking it super slow

With that said, I've definitely spent a lot of time asking super embarrassing questions or going really, really slow with the AI. I ask it to explain and re-explain things all the time, and I've grown comfortable being pretty shameless about starting from zero. That's a benefit of the personalized nature of AI, though: who really cares if I ask it to explain something to me like I'm five? There's no shame in learning and going at my own pace. And without any fear of social stigma, I have literally nothing to lose and everything to gain by being embarrassingly thorough in my requests.

#3) There’s virtually nothing I can’t do

There's been an overwhelming sense of agency, of anything-is-possible, throughout this entire process. I started getting at it earlier: very little that I'd want to do right now hasn't already been implemented in some way, shape, or form, and so very little feels impossible for me to do. And that matches a belief I've seen in the tech community and share: the types of problems we work on will fundamentally shift with AI, because AI makes the previously hard problems easy, so we will start to tackle harder ones. That's mostly been my sense as well.

I'm no expert in anything here, but with enough resolve and a dash of semi-insanity, I'm pretty confident I can figure just about anything out. If you follow this progression to its natural conclusion, it starts to free us up to ask: what haven't we been able to face yet, and what does a truly next-level experience look like for whatever idea space I'm targeting? I truly believe the solutions of tomorrow won't look anything like the solutions of today. So I'm building a solution today to prepare me to drive the direction of the solutions of tomorrow. I hope that makes sense.

#4) Notwithstanding the optimism and agency of the foregoing, I pretty much just press “Accept” a lot—like a lot a lot

With that said, I do spend a lot of time just reading through what Claude says about my codebase and then pressing Accept All. That's probably at least 75% of my actual coding time: agreeing with, and trying to understand, the information. I haven't written a single line of code or even changed a single word within my entire codebase. I've just talked to Claude and watched it do its work. I've guided it and provided thoughts and opinions, but the actual code-writing has been 100% hands-off, and that's both wonderful and kind of shocking.

It does make me think a lot about how much human labor would have been needed to create the code that I've built, and I can't imagine it. I have no idea if it would be 500 labor-hours or 50, but I know it's probably much more than the time I've spent. I also know it's something I would have been willing to spend tens of thousands of dollars on, and I've built it myself for hundreds.

#5) But, it feels like I really am coding

Now, with all that said, it still really does feel like I'm doing something technical, like I'm really coding. I know I haven't written even one word of actual code, but I've built and designed a codebase, complete with backend infrastructure and AI-enabled integrations, that is powered by code. And that really has been me; I really have created it; I started with zero lines of code and literally just an idea in my mind. In a little under two months I've gone from idea to MVP, and that's not nothing; it's been one of the most fulfilling and growth-intensive experiences of my life.

#6) Code is no-code

The last thing I've been reflecting on is the idea that no- and low-code platforms are at existential risk from these new AI-assisted coding platforms. For example, I spent about 30 minutes trying to integrate Typeform into my landing page to handle waitlist collection. Eventually, frustrated that I couldn't get it to look or feel exactly right, I asked if there were alternatives, and Windsurf suggested I could use Supabase with a form and SendGrid to create a waitlist and send a thank-you email. I got that integrated and operational within about 30 minutes, and it is exactly how I want it to look, feel, and operate.
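For the curious, the shape of that waitlist flow can be sketched out. This is a hypothetical illustration, not my actual code (which Windsurf wrote): the table name, field names, and sender address are assumptions, and a real deployment would send these payloads to Supabase's REST API and SendGrid's v3 Mail Send endpoint with authenticated HTTP calls.

```python
# Hypothetical sketch of a Supabase + SendGrid waitlist signup.
# Table name "waitlist", field names, and the sender address are
# assumptions; auth headers and the actual HTTP calls are omitted.
import re
from datetime import datetime, timezone

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def build_waitlist_requests(email: str) -> dict:
    """Validate the email and return the two request payloads the
    signup needs: a row insert for Supabase and a SendGrid mail send."""
    if not EMAIL_RE.match(email):
        raise ValueError(f"invalid email: {email!r}")

    # Supabase exposes each table at /rest/v1/<table>; inserting a row
    # is a POST with a JSON body.
    supabase_insert = {
        "path": "/rest/v1/waitlist",
        "body": {
            "email": email,
            "signed_up_at": datetime.now(timezone.utc).isoformat(),
        },
    }

    # SendGrid's v3 Mail Send endpoint takes a personalizations array
    # plus from/subject/content fields.
    sendgrid_send = {
        "path": "/v3/mail/send",
        "body": {
            "personalizations": [{"to": [{"email": email}]}],
            "from": {"email": "hello@example.com"},  # placeholder sender
            "subject": "You're on the waitlist!",
            "content": [
                {"type": "text/plain", "value": "Thanks for signing up."}
            ],
        },
    }
    return {"supabase": supabase_insert, "sendgrid": sendgrid_send}
```

The point is less the specific services than the pattern: a form handler validates input, writes one row, and fires one transactional email, which is exactly the kind of glue code the chat interface can produce on demand.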

And so I had this eureka-adjacent moment: "oh, wow, this is the code that the no-code tools were built to abstract away." But if I can get to the code solution with natural language through a chat interface, why would I ever opt for a no- or low-code tool when I can just build the code that more effectively accomplishes what I want, in a super-custom manner?

Looking Forward

That's all from me this week! I really can't overstate how incredible the learning journey of building the platform has been thus far, starting from complete scratch and getting to something that really works. It's not perfect, and of course there are still a million questions around business use, and I have to turn it from a tool into a company, but that's the fun part for me, and it's what I've spent my entire professional career learning about.

For me, the barrier to entry for entrepreneurship has always been technical competence, not business wisdom. Now that I have tools that can guide me through any technical process and then implement it on my behalf, I feel more empowered than ever to put myself out there and try to build something that people want. Because I now have the power to do so. And that's been a magical experience.

Have a wonderful week! Try to build something! The tools are there for you to use—you just have to have the courage to try!

Sincerely,

Luke