
Building in the Age of AI — Product Thinking, Vibe Coding, and Actually Shipping Stuff

Start with real needs. Focus on the product and the user story. As for code and UI? Let AI take care of it.

Aug 2, 2025

15 min read

Before we dive in…

I’m a product designer with almost zero coding background. Most of what I know about front-end and back-end comes from working on projects or chatting with engineer friends. So yeah — this post is really just me sharing my own hands-on experience, not some “AI is here to replace everyone” hot take (let’s get that out of the way first 😂).

Back when I first entered the industry, I was at a tiny company where I taught myself just enough HTML and CSS to hack around with WordPress templates. Beyond that, I pretty much stayed far, far away from coding. I even thought about making my own Figma plugin at one point — but since I didn’t fully understand the tech side, I just… didn’t.

My portfolio site? Built with Framer, the no-code website builder. Never once did I imagine I’d be opening up an IDE and actually writing something myself.

And then this year happened. The rise of LLM-powered coding tools felt like discovering a whole new continent. I started small — asking AI to write a script that could sync my iCloud file changes and automatically back them up to an external drive. But recently, a pain point in our product development process at work pushed me to take on something bigger: building a Figma plugin from scratch.

Let’s just say — working with AI is a love-hate relationship. It’ll help you a ton… and sometimes, it’ll trip you up just as much 😂.

This isn’t a tutorial. It’s a behind-the-scenes look at how one designer (me) decided to stop waiting for someone else to build the thing — and tried to do it myself with AI’s help. From setting up GitHub to debugging through ChatGPT chats, this is how it actually felt to build and ship a Figma plugin from scratch.

Before the Code, Set the Stage

Dev environments matter — even for a tiny Figma plugin.

Why did I choose a Figma plugin as my first real “shipped” product?
Honestly — because it’s relatively simple 😌 (and I mean that in the best way).

All you really need to do is right-click in Figma, choose “New plugin…” from the Development menu, follow a few prompts, and boom — Figma auto-generates a starter folder with the basics all wired up. I didn’t have to make any tech-stack decisions. It just… worked. By default, it uses TypeScript for the logic, and if your plugin has a UI, it’s just standard HTML/CSS. Clean and minimal.

Here’s the only real “manual” prep I did before starting:

  1. Installed VS Code, and opened up the plugin folder in it

  2. Installed the GitHub Copilot extension for VS Code (aka, my AI pair programmer)

  3. Installed Node.js + npm so I could run build scripts

  4. Made sure TypeScript compilation was turned on — so my .ts changes would auto-convert to .js, which the browser actually runs
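For reference, the terminal side of that prep boils down to a couple of commands. This is an illustrative sketch (the folder name is made up); Figma’s starter already includes the `package.json` and `tsconfig.json` that make these work:

```shell
# One-time prep after Figma generates the starter folder (folder name is mine).
cd my-figma-plugin
npm install        # installs typescript + @figma/plugin-typings from the starter
npx tsc --watch    # recompiles code.ts -> code.js on every save
```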

A little bit of setup, a sprinkle of VS Code plugins, a pinch of terminal incantations… and voilà — development can begin!

…Or can it? 😅

Version Control Matters Too

Never used GitHub? No worries — you’re not late to the party.

If you’ve never touched Git or GitHub before, that’s totally fine. You’re not behind. And honestly, with ChatGPT around, learning it now is easier than ever. You can literally ask it to explain version control using analogies from whatever field you know best. That’s exactly how I picked it up.

Okay fine, my commit messages are a mess…but having version control still gives me so much peace of mind.

Here’s the thing — software development isn’t like design work in Figma.

In Figma, we have pages on pages to stash flows, alternate layouts, weird explorations, and half-dead ideas we might resurrect later. I can always dig into some mysterious corner only I know about and drag something back.

But with code? It’s different.

You’re usually building directly on the latest version. And when you’re working with AI (especially in the beginning), things can go off the rails real quick. A version control system lets you roll back to a clean slate when things get messy — which, trust me, they will.

And no, you don’t need to live in the terminal to use Git. Tools like GitHub Desktop or SourceTree from Atlassian give you a clean, visual interface. You don’t have to memorize commands — you can still manage your commits, branches, and merges like a pro.

Here’s how I’ve been using it:

  • The live, public-facing version of the plugin — the one running in Figma — is always the main branch.

  • Any time I’m working on a new feature or fixing a bug, I branch off from main so I don’t mess up production. I name branches however I want, like proj-export-setting for changes related to export configs.

  • After a chunk of changes (e.g. UI tweaks, adjusting CSV output format, or polishing UX copy), I commit those edits.

  • Then I push the commits to the remote repo (a.k.a. GitHub in the cloud).

  • Once everything looks good and works well, I open a pull request (PR) to merge the branch back into main. GitHub also lets me create a release tag to keep track of version history.
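For anyone curious what that loop looks like in raw Git commands, here’s a sketch on a throwaway repo (branch name and messages are just the examples above; GUI tools like GitHub Desktop do the same thing under the hood, and the PR itself happens on GitHub):

```shell
# The branch -> commit -> merge -> tag loop from the list above, in a temp repo.
rm -rf /tmp/plugin-git-demo
mkdir -p /tmp/plugin-git-demo && cd /tmp/plugin-git-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "starter" > code.ts
git add . && git commit -qm "chore: initial plugin starter"
git checkout -qb proj-export-setting   # branch off for the export-config work
echo "// CSV output tweak" >> code.ts
git add . && git commit -qm "feat: adjust CSV output format"
git checkout -q -                      # back to the default branch
git merge -q proj-export-setting       # what a merged PR does to main
git tag v0.1.0                         # release tag for version history
git log --oneline
```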

So yeah — once you get your dev environment set up and version control running smoothly, it finally leads to the fun part:

💥 Time to. Just. Talk. To. The. AI. (Let’s gooo!) 🎤✨

Now Comes the Fun Part: Just Talk to the AI

Keep it modular. Nail the spec. Tell a good user story.

Here’s the short version:

Take everything in your head — every detail of your spec — and explain it to the AI as clearly as you can. The more specific you are, the better it gets.

Think of it like working with a new engineer on your team.

Sometimes when a spec is missing something, they’ll ask follow-up questions to clarify. But sometimes… they’ll just wing it. And you won’t realize something went sideways until QA flags it later. Sound familiar? 😅

Talking to AI is exactly like that.

Except the AI is the kind of engineer who’s way too good at filling in the blanks on their own. Give it vague instructions, and you might get back something wildly off-track.

But this time, you’re the QA.

You’re the one who has to catch the gaps, misreads, and weird guesses. So the more precise you are upfront, the less debugging you’ll need later.

And since most products are really just a bunch of small, reusable modules working together, I’ve found it best to talk to the AI one task at a time — bite-sized, focused, and scoped tight. That way, it’s easier to build, debug, and evolve later.

What I thought in my lil brain 🧠

Let’s take this image (or my use case PoC) as an example:

In my head, I imagined all annotation labels appearing on the side of the selected text node that’s closest to the outermost frame. Why? Because I didn’t want annotations crossing over the frame and covering the actual UI. Simple, clean, non-intrusive.

🪄 Here’s what my prompt looked like:

I want to trigger the annotation function with the plugin command “annotate”. Core feature: When a text node is selected, the plugin should grab its layerName and use that to create an annotation. If no text node is selected, show a toast: “Please select a text block to annotate.” The annotation component should include: A text label, a 2px line, and a 4x4 circular vector. The label text should use the selected text node’s layerName, and be wrapped in an auto layout set to hug content, with padding: 2px 4px. The label text color should be #FFF, and the background, line, and vector should all be #333. Placement logic (for UI layout): If the annotation is placed on the left, layout should be horizontal center-aligned, with no gap: Label → Line → Dot. If it’s on the right, reverse the order. For top, stack vertically: Label → Line → Dot. For bottom, reverse the order again. The circular vector should align to the center of the selected text node’s side. The label should always maintain a 60px gap from the outermost parent layer. Placement logic (for side decision): Calculate the gap between the text node and its parent frame. Choose the side with the smallest gap: left, right, top, or bottom. If all gaps are equal, default to right. Split the code into two independent utilities: One for creating & arranging the annotation component, and one for calculating positioning.
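To make the “side decision” part of that spec concrete, here’s roughly the kind of pure utility it describes. This is an illustrative sketch, not the plugin’s actual code: the `Rect` shape mimics Figma’s x/y/width/height node properties, and `pickSide` is a name I made up:

```typescript
// Illustrative sketch of the side-decision rule from the prompt above.
// Coordinates follow Figma's convention: x/y is the node's top-left corner.
type Rect = { x: number; y: number; width: number; height: number };
type Side = "left" | "right" | "top" | "bottom";

function pickSide(node: Rect, frame: Rect): Side {
  // Gap between each edge of the text node and the matching edge of its frame.
  const gaps: Record<Side, number> = {
    left: node.x - frame.x,
    right: frame.x + frame.width - (node.x + node.width),
    top: node.y - frame.y,
    bottom: frame.y + frame.height - (node.y + node.height),
  };
  const sides: Side[] = ["left", "right", "top", "bottom"];
  const min = Math.min(...sides.map((s) => gaps[s]));
  // Per the spec: the smallest gap wins; if every gap ties, default to "right".
  const winners = sides.filter((s) => gaps[s] === min);
  return winners.length === sides.length ? "right" : winners[0];
}
```

Keeping this separate from the code that builds the label/line/dot component is exactly the “two independent utilities” split the prompt asks for.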

That prompt included the key essentials:

  1. What you want to build, how it works, and what the acceptance criteria are

  2. How the user interacts with it, and what feedback they get

  3. Edge cases (like what happens if nothing’s selected)

  4. And a rough idea of how the UI should look

The more of this you include, the more context the AI has to actually build what’s in your head. And the fewer back-and-forths you’ll need before it works the way you imagined.

You Built One Module… Now What?

Time to ask AI to run a feasibility workshop.

In my head, this plugin was always meant to support four core features.

Three of them would involve UI and user interaction, while one could be triggered purely through a command.

But the question is: how do you go from a single working PoC (proof of concept) to a plugin with real scope? How do you scale without breaking what you’ve already built?

That’s when I started asking AI to help me evaluate feasibility.

🪄 Here’s what my prompt looked like:

I’ve got a few bigger features I want to add next: 1. Users should be able to manually input the annotation’s keyName, instead of using the text node’s layer name. 2. After multiple annotations are created (possibly dozens), I’ll need a UI to view and manage all annotations, showing fields like key name, content, and tags — and allow editing of key name + tag. 3. The concept of “tags” will be user-defined, and needs to be set before any annotations are created. Based on the current PoC (with auto-alignment and labeling), please evaluate feasibility and propose a plan to gradually expand this into a v1 MVP — without rewriting or breaking the core logic. For each new feature, suggest how it could be cleanly added to the current codebase. If there are parts that should be broken into smaller modules for better maintainability, or areas that might need refactoring, please call that out.

This kind of prompt helps AI shift from “doer” to “planner.”

It starts thinking in terms of structure, extensibility, and system design — not just implementation. You’re inviting it into the early-stage product thinking process.

And every time you want to build out a new function, this kind of structured conversation lets AI analyze what you’ve already built and suggest the safest, cleanest path forward. Sometimes it even recommends breaking things down further or identifying early signs of spaghetti code before you go too deep.

When Scope Grows, Testing Gets Harder

Even a “simple” plugin can surprise you during bug fixing.

It’s just a Figma plugin, right?
How buggy could it be?

Yeah… I was very wrong 😅

As soon as the feature set started to grow — and data started flowing between TypeScript and HTML — bugs began popping up everywhere. I’d click something and… nothing. Or worse, the wrong thing. And half the time I had no idea why it broke or where to look.

That’s when I realized how crucial it is to let AI help with debugging:

Reading error logs, helping structure test cases, and making sense of the invisible wiring behind the scenes.

So I asked AI to go through the codebase and add console.log() statements — at every data flow and user interaction point.
But not just random logs — logs with enough context that the AI could later help me reason through them.
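The “logs with context” idea is simple but worth showing. Something like this hypothetical helper (the name and format are mine, not the plugin’s real code) tags every log with a module and event, so a pasted console trail is actually legible to the AI:

```typescript
// Hypothetical contextual logger: every entry says WHERE it came from and
// WHAT happened, plus a data snapshot, so a pasted trail is self-explaining.
function logEvent(module: string, event: string, data: unknown): string {
  const line = `[${module}] ${event} :: ${JSON.stringify(data)}`;
  console.log(line);
  return line;
}

// e.g. at the point where the UI sends the form back to the plugin code:
logEvent("ui", "confirm-click", { keyName: "title", tag: "client" });
```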

And yet… sometimes, even that wasn’t enough.
There were moments where AI would go:

“Hmm, seems fine to me. I can’t reproduce the bug. Code looks clean.”

Which means: you’re the QA now.
You need to know what every feature is supposed to do.
You need to write your own test cases.
You need to test the dev build like a real user.
And most importantly, you need to document the exact steps to reproduce a bug — so AI can help you actually fix it.

Here’s an example of how I’d phrase that:

🪄 Here’s what my prompt looked like:

I got the following error in the console: OOOOXXXXX. Here’s what I did: I opened XXXX, entered ZZZZ in the input field, selected C, and clicked Confirm. Expected behavior: a new annotation component should appear on the Figma canvas. Actual result: nothing was created. I tried it multiple times and consistently reproduced the issue. Please help debug this.

The more context you give — what happened, where it happened, what you expected, what actually happened — the faster AI can pinpoint where things went wrong.

And for UI bugs? Well… designers are built with eagle eyes 🦅👁️
Just screenshot the issue and tell AI what looks off. Something like:

“This button should be vertically centered, but it’s misaligned by 4px.”

It’s weirdly satisfying to become the QA for your own AI-built product. Debugging becomes less about “what went wrong” and more about figuring out how to help your new robot teammate… help you better.

Do I Need to Design the UI in Figma?

Spoiler: I didn’t. At least…this time.

Figma’s Dev Mode can now hook into MCP, which honestly sounds amazing (even though I haven’t tried it yet 🙈). I’d love to hear how other folks are using it.

But to be real: for this entire plugin project, I didn’t design any UI screens in Figma. Not one.

What I gave the AI was basically a bunch of wireframes only I could understand 😂

What really helped, though, was that I’d already defined a handful of component tokens from the start:

  • Color tokens

  • Typography settings

  • Button styles and padding

  • Input field states and behaviors

  • etc.
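Since the plugin UI is plain HTML/CSS, tokens like these can live as CSS custom properties that every module reuses. The names and values below are invented for illustration, not the plugin’s real palette:

```css
/* Illustrative token sheet -- every value here is a made-up example. */
:root {
  --color-bg: #ffffff;
  --color-ink: #333333;
  --space-xs: 4px;
  --space-sm: 8px;
  --radius: 6px;
  --font-body: 12px/1.4 "Inter", sans-serif;
}

.button {
  padding: var(--space-xs) var(--space-sm);
  border-radius: var(--radius);
  background: var(--color-ink);
  color: var(--color-bg);
  font: var(--font-body);
}
```

Define them once, and every new component the AI generates can be told to use the variables instead of hard-coded values.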

So when I asked AI to build individual modules, it already had enough to get pretty close to what I had in mind. The focus was always on consistency and modular thinking, not pixel-perfect mocks.

That said… because I didn’t really polish the UI during the MVP phase, some screens ended up looking, uh… pretty rough.

The current production version of the plugin. It works, but it’s not exactly easy on the eyes

In the example above, I used a table layout to show annotation data. Technically, it worked. But after using it a few times, I realized:

  • It made parsing info harder, not easier

  • Tables take up a lot of space

  • When the plugin UI opened, it blocked so much of the Figma canvas that I couldn’t easily cross-reference elements

So earlier this week, I had a random idea: Why not just revamp the UI?

I ran a quick feasibility check with AI, and it turned out the required changes were mostly in the UI and data-rendering layer. The core logic could stay the same. So I jumped into “just talk it out” mode again — this time with a super sketchy wireframe as a visual reference.

🪄 Here’s what my prompt looked like:

Based on the feasibility review, I’d like to update the UI layout: • Replace the table view with cards • Keep the same logic for edit / locate / delete actions • Change filters from a dropdown to chip-style toggles • Move export & search into a floating action button at the bottom (reference: image below)

Super impressionist wireframe lmfao 😂

Let’s start with first component: Card. Here’s how I described the new card layout. 1. Top row: label + 3 action buttons (edit / locate / delete) 2. Middle: Key Name 3. Bottom: Content When user click on “edit”, Card will turn into Edit Mode. 1. Top row: dropdown selector for labels + save button 2. Middle: Key Name Input Field 3. Bottom: Content (still only in preview) 4. All other cards should be disabled during edit mode. … … and so on.

It only took 4–5 prompt rounds before AI had fully rebuilt the UI the way I imagined it.

latest version now!

So no — you don’t always need to draw the UI.
If you can clearly describe every part of your UI spec — styles, layout, behaviors, interactions — then AI can turn that into code surprisingly fast.

(Of course, if I were building a web app instead of a plugin, I’d probably still design things in Figma… otherwise I’d end up with 20 slightly different paddings across the app 😂)

So… What Exactly Is This Plugin?

And why did I build the whole thing in the first place?

The entire plugin was built through that same loop:

Talk to AI → build a module → test it → fix it → repeat.

After dozens of these tiny iterations, I finally ended up with something that’s (almost!) ready to ship. But wait — what even is this plugin? 😄

A Lokalisation Key Annotator for Figma!

Well… it’s an unofficial Lokalise Key Annotation plugin for Figma. I named it “Localization Key Annotation”. And honestly, the reason I built it is pretty simple: The official Lokalise plugin is just… really hard to use. It didn’t fit our actual workflow or needs. So I made my own.

Here’s what it does (so far):

STEP 1 Save your internal Lokalise projects

Start by entering your team’s dev-related projects from Lokalise — client-side, backend, or any others — and save them locally.
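To give a rough idea of what “save them locally” involves, here’s a hypothetical TypeScript sketch of validating a project entry before it gets persisted. The field names are my assumptions; in a real Figma plugin the resulting list would be stored with `figma.clientStorage.setAsync`:

```typescript
interface LokaliseProject {
  name: string;      // display name, e.g. "Client App"
  projectId: string; // the Lokalise project identifier
}

// Trim the input, reject blanks and duplicates, and return the new list.
function addProject(
  projects: LokaliseProject[],
  entry: LokaliseProject
): LokaliseProject[] {
  const name = entry.name.trim();
  const projectId = entry.projectId.trim();
  if (!name || !projectId) {
    throw new Error("Both a name and a project ID are required.");
  }
  if (projects.some((p) => p.projectId === projectId)) {
    throw new Error(`Project "${projectId}" is already saved.`);
  }
  return [...projects, { name, projectId }];
}
```

Nothing fancy, but catching bad input here is what keeps every later step (annotation, export) from silently pointing at the wrong project.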

project setting UI

STEP 2 Annotate all your Lokalise Keys directly on Figma

No more guessing or manually cross-referencing keys. Just mark them up, right in your design file — great for dev handoff, even better for key mapping.

project setting UI

STEP 3 Use the “Manage” command to view & edit keys

This command opens a full list of all your annotations. From here, you can:

  • Edit the key name

  • Switch its assigned project

  • Download CSVs by project (single or multiple at once)
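For a sense of what the CSV download does under the hood, here’s a hypothetical sketch of a per-project export. The column names and shape are my own assumptions, not the plugin’s actual format:

```typescript
interface KeyAnnotation {
  project: string;
  keyName: string;
  content: string;
}

// Build a CSV for one project, quoting values that contain
// commas, quotes, or newlines (standard CSV escaping).
function toCsv(annotations: KeyAnnotation[], project: string): string {
  const escape = (value: string) =>
    /[",\n]/.test(value) ? `"${value.replace(/"/g, '""')}"` : value;

  const rows = annotations
    .filter((a) => a.project === project)
    .map((a) => `${escape(a.keyName)},${escape(a.content)}`);

  return ["key,content", ...rows].join("\n");
}
```

Downloading “multiple at once” is then just running this per selected project and zipping or concatenating the results.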

Manage Page UI

OTHER Re-Align annotations anytime

Since annotation layers are usually locked to prevent accidental edits, they can sometimes get misaligned during design changes. The Re-Align command fixes this — bringing everything neatly back into place with one click.
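The re-align logic itself is mostly simple geometry. Here’s a hypothetical sketch, assuming annotations snap to a fixed 8px gap above their target layer (the gap and the alignment rule are my assumptions):

```typescript
interface Rect {
  x: number;
  y: number;
  width: number;
  height: number;
}

const GAP = 8; // assumed spacing between annotation and target

// Snap an annotation back to its spot just above the target layer.
function realignAnnotation(target: Rect, annotation: Rect): Rect {
  return {
    ...annotation,
    x: target.x, // left-align with the target
    y: target.y - annotation.height - GAP, // sit just above it
  };
}
```

In the real plugin the target’s position would come from the Figma node itself (e.g. its `absoluteBoundingBox`), but the snapping math is the same idea: recompute each annotation’s position from its target instead of trusting wherever it drifted to.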

Manage Page UI

That’s the current feature set. If this sounds like something your team could use — feel free to give it a spin!

And even if you don’t use Lokalise, this plugin might still help.
If you’re managing your copy keys manually in JSON (or any other format), this could save you a ton of time by keeping annotations synced and tidy in your design files.

Go try it if you're interested!

Final Thoughts

Vibe coding is honestly… kind of addictive.

It frees up so much time I’d usually spend clicking through repetitive UI stuff, and lets me focus more on the fun part — thinking about what a product feels like, what it’s really for, and how it could actually help people.

Sure, this was “just” a Figma plugin.

But it was also the first time I built something real, end-to-end, with AI as a teammate.

And I gotta say — it felt pretty great 😌

I’m hoping to kick off some new projects soon. They might not be big. They might not be fancy. But I want them to be meaningful.

Thanks for making it all the way here —

I’m Should, and I’ll see you in the next one 👋

💡 What about you? Have you ever built something with AI—big or small—that surprised you? Or is there a project you’ve been wanting to try, but haven’t started yet? I’d love to hear your experiences (or your wildest ideas) in the comments!

Leave your experience on Medium below!

Articles, plugins, and design chaos — all powered by caffeine. If you enjoyed this, help me reboot the system with your support.

404: caffeine not found.

Fix it? 

→→→
