Figma to React with AI: Complete Guide

Convert Figma designs to production React code using AI. Token-aware approach that preserves design variables instead of hardcoding pixel values.

The Figma-to-React pipeline has been broken for years. Export plugins produce div soup with hardcoded values. Manual translation is accurate but slow. AI tools get the structure right but miss design tokens entirely.

The missing piece is not smarter AI. It is giving AI access to what designers actually defined: variable bindings, not rendered pixels.

Why existing approaches fail

Figma export plugins generate markup from the visual tree. The output is nested divs with inline styles or utility classes. No component abstraction, no token references, no semantic meaning. You get background: #181818 instead of var(--color-background-primary-solid). Every exported file needs a full rewrite.

Screenshot-to-code tools match pixels well. They reverse-engineer layout from rasterized images. But they have zero knowledge of your design system. Colors are hex-picked from screenshots. Spacing is approximated. There is no connection to your token architecture, and dark mode does not exist in the output.

Manual translation works. A skilled engineer reads the Figma spec, maps values to tokens, and writes correct components. This is accurate and slow. A single complex component takes hours. Multiply that across a design system and you have weeks of mechanical work.

Standard Figma MCPs — including the official read-only one — return raw computed values from nodes. The AI sees borderRadius: 6 and writes border-radius: 6px. It never learns that 6 means radius/sm, which maps to var(--radius-sm) in your CSS. The semantic layer is invisible.
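Concretely, for the same corner of the same node, the two reads might look like this (the response shapes and the exact token names are illustrative, not Bridge's wire format):

```jsonc
// computed values: what a standard read returns
{ "cornerRadius": 6, "fills": [{ "hex": "#181818" }] }

// variable bindings: what the designer actually defined
{ "cornerRadius": "radius/sm", "fills": "background/primary-solid" }
```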

The token-aware approach

The core idea: read design intent, not rendered pixels.

Every well-built Figma file stores two layers of information. The visual layer is what you see — colors, sizes, spacing. The semantic layer is what designers defined — variable bindings that connect each property to a named token.

Standard tools read the first layer. AI Bridge reads both.

When AI reads a component through Bridge, it gets the full binding map:

{
  "fills": "surface/secondary",
  "paddingTop": "spacing/lg",
  "paddingBottom": "spacing/lg",
  "paddingLeft": "spacing/xl",
  "paddingRight": "spacing/xl",
  "itemSpacing": "spacing/md",
  "topLeftRadius": "radius/md",
  "topRightRadius": "radius/md",
  "bottomLeftRadius": "radius/md",
  "bottomRightRadius": "radius/md"
}

Now the AI writes var(--color-surface-secondary) instead of #F5F5F5. Your design tokens handle light mode, dark mode, density, and theme changes automatically.
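That theming comes for free at the CSS layer, because the token resolves per theme. A minimal sketch, assuming a `data-theme` attribute and illustrative hex values:

```css
:root {
  --color-surface-secondary: #f5f5f5; /* light */
}

[data-theme="dark"] {
  --color-surface-secondary: #2a2a2a; /* dark */
}
```

No component CSS changes between themes; only the token definitions do.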

Step-by-step workflow

1. Set up Bridge

Install the Figma plugin and start the bridge server. Setup takes under two minutes — covered in detail in Connect Claude Code to Figma and Connect Cursor to Figma.

Once running, verify the connection:

curl http://localhost:8867/status
# → {"connected":true}

2. AI reads component structure

The AI sends HTTP commands to read the Figma node tree:

# Get the component's properties
POST /command {"command":"get-node-props","params":{"nodeId":"1234:5678"}}

# Find all children with their types
POST /command {"command":"find-children","params":{"nodeId":"1234:5678"}}

This returns the full node hierarchy — frame nesting, auto-layout direction, text content, instance references. The AI now understands the component architecture, not just a flat pixel grid.
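Any HTTP client can issue these reads. A minimal sketch (the helper names are ours; the `/command` endpoint, port, and `{command, params}` payload shape are the ones shown above):

```javascript
// Build the JSON body for a Bridge read command.
function buildPayload(command, nodeId) {
  return JSON.stringify({ command, params: { nodeId } });
}

// POST the command to the local bridge server and return the parsed reply.
async function bridgeCommand(command, nodeId) {
  const res = await fetch("http://localhost:8867/command", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildPayload(command, nodeId),
  });
  return res.json();
}

// Usage: const props = await bridgeCommand("get-node-props", "1234:5678");
```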

3. AI reads token bindings

This is where Bridge diverges from every other Figma integration:

POST /command {"command":"get-bound-variables","params":{"nodeId":"1234:5678"}}

The response maps every bound property to its design token name. Padding, spacing, colors, corner radius, font size, line height, letter spacing — all resolved to semantic tokens.
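One plausible way to turn that binding map into CSS declarations, as a sketch. The property table and the `color-` prefix rule are our assumptions about a naming convention, not part of Bridge's API:

```javascript
// Figma property -> CSS property (partial, illustrative).
const CSS_PROP = {
  fills: "background",               // simplification: solid fills only
  paddingTop: "padding-top",
  paddingBottom: "padding-bottom",
  paddingLeft: "padding-left",
  paddingRight: "padding-right",
  itemSpacing: "gap",                // auto-layout item spacing -> flex gap
  topLeftRadius: "border-top-left-radius",
  topRightRadius: "border-top-right-radius",
  bottomLeftRadius: "border-bottom-left-radius",
  bottomRightRadius: "border-bottom-right-radius",
};

// "surface/secondary" -> "var(--color-surface-secondary)"
// "spacing/lg"        -> "var(--spacing-lg)"
function tokenToVar(token) {
  const slug = token.replace(/\//g, "-");
  // Assumed convention: paint tokens get a "color-" prefix, scale tokens do not.
  const isPaint = !/^(spacing|radius|font|line|letter)/.test(slug);
  return `var(--${isPaint ? "color-" : ""}${slug})`;
}

function bindingsToCss(bindings) {
  const css = {};
  for (const [prop, token] of Object.entries(bindings)) {
    if (CSS_PROP[prop]) css[CSS_PROP[prop]] = tokenToVar(token);
  }
  return css;
}
```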

4. AI generates React + CSS module

With structure and tokens in hand, the AI writes production code. No intermediate format, no config file, no export step. The AI maps Figma's auto-layout to flexbox, instances to component imports, and every bound variable to a CSS custom property reference.

Real example: converting a card component

A notification card in Figma: surface background, medium radius, icon on the left, title and description stacked vertically.

Bridge returns the structure and bindings. The AI generates:

import styles from "./NotificationCard.module.css";

function NotificationCard({ icon, title, description }) {
  return (
    <div className={styles.card}>
      <div className={styles.icon}>{icon}</div>
      <div className={styles.content}>
        <span className={styles.title}>{title}</span>
        <span className={styles.description}>{description}</span>
      </div>
    </div>
  );
}

/* NotificationCard.module.css */
.card {
  display: flex;
  align-items: flex-start;
  gap: var(--spacing-md);
  padding: var(--spacing-lg) var(--spacing-xl);
  background: var(--color-surface-secondary);
  border-radius: var(--radius-md);
}

.icon {
  width: var(--control-icon-size-md);
  height: var(--control-icon-size-md);
  color: var(--color-text-secondary);
}

.content {
  display: flex;
  flex-direction: column;
  gap: var(--spacing-xs);
}

.title {
  font-size: var(--font-size-sm);
  font-weight: var(--font-weight-medium);
  line-height: var(--line-height-sm);
  color: var(--color-text-primary);
}

.description {
  font-size: var(--font-size-sm);
  font-weight: var(--font-weight-normal);
  line-height: var(--line-height-sm);
  color: var(--color-text-secondary);
}

No hex colors. No pixel values. Light and dark mode work automatically because var(--color-surface-secondary) resolves differently per theme. Change your spacing scale and every component updates.

What good output looks like

Check AI-generated code against this list:

  • No hardcoded colors. Every color references var(--color-*). If you see # or rgb( in component CSS, the AI missed a token.
  • No pixel spacing. Padding, gap, and margin use var(--spacing-*). Raw px values mean lost semantic intent.
  • No magic radius. Corner radius maps to var(--radius-*). Hardcoded 6px breaks when you update your radius scale.
  • No raw font sizes. Typography uses var(--font-size-*), var(--font-weight-*), var(--line-height-*).
  • Correct layout model. Figma auto-layout maps to flexbox with the right direction, alignment, and sizing mode.
  • Component composition. Figma instances become React component imports, not inlined markup.

If the output passes all six checks, it is production-ready. If it fails any, the AI did not have access to the token layer — which is exactly what Bridge provides.
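The color and pixel checks are mechanical, so they can be automated. A sketch of a quick lint pass (the patterns are illustrative, not exhaustive; `hsl()` and named colors, for instance, are omitted):

```javascript
// Scan generated component CSS for hardcoded values that should be tokens.
function lintCss(css) {
  const issues = [];
  if (/#[0-9a-fA-F]{3,8}\b/.test(css)) issues.push("hardcoded hex color");
  if (/\brgba?\(/.test(css)) issues.push("hardcoded rgb color");
  if (/:\s*[^;]*\b\d+px\b/.test(css)) issues.push("raw px value");
  return issues;
}
```

Run it over each generated CSS module; an empty list means the color and pixel checks pass, while layout and composition still need a human look.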

Which AI models work

All of them. Bridge exposes a local HTTP API at localhost:8867. Any model that can send HTTP requests can use it:

  • Claude Code — native tool use, reads Bridge commands directly
  • Cursor — via MCP configuration or terminal commands
  • Codex — HTTP calls from code generation context
  • GPT / ChatGPT — through function calling or plugin interface
  • Gemini — via tool definitions

The model does not matter. The protocol is HTTP. You send JSON, you get JSON. No SDK, no vendor lock-in, no model-specific adapter. Switch models freely — the Bridge interface stays the same.

Further reading

Read more about the token-aware approach in Design Tokens, Not Just Pixels. Get the Bridge at plexui.com/bridge.