Figma-Led Design

In a Figma-led workflow, the tool itself is intentionally neutral. Figma does not enforce rules, surface structure, or resolve ambiguity. It provides an open canvas and a set of visual tools, leaving it to the designer to apply judgment, maintain consistency, and anticipate how decisions will hold together beyond the frame.

What a designer can do in Figma:

  • Represent ideas visually so they can be reviewed and discussed
  • Manually establish hierarchy through layout, spacing, and typography
  • Define flows, states, and variants based on anticipated behavior
  • Maintain consistency by tracking rules and decisions across screens
  • Prepare artifacts that others must interpret and translate into reality

Figma reflects decisions made by the designer; it does not validate or enforce them.

AI-Led Design Tools

In an AI-led workflow, the tool is no longer a passive canvas. AI tools actively interpret instructions, apply rules, and simulate outcomes. Rather than waiting for inconsistencies to surface downstream, the system continuously evaluates structure, relationships, and constraints as the design takes shape. This changes the designer's role from manually enforcing rules to clearly articulating them—and monitoring whether the system's outputs still align with the intended vision.

What a designer can do with AI-led tools:

  • Encode structural rules and constraints directly into the design process
  • Surface edge cases, conflicts, and inconsistencies early through targeted prompts
  • Iterate on logic and behavior before committing to visual refinement
  • Isolate functionality into discrete components or files to manage complexity
  • Maintain data consistency through shared utilities rather than manual syncing

AI tools execute against the rules they are given; they do not inherently understand intent unless it is made explicit.
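The "shared utilities" idea above can be made concrete with a small sketch. This is a hypothetical TypeScript fragment, not any real tool's API: the `tokens`, `spacingFor`, and `isOnScale` names are illustrative assumptions showing how a single source of truth replaces manual syncing, and how a rule becomes something a tool can check rather than something a designer must remember.

```typescript
// A minimal sketch of a shared design-token utility. All names here
// (tokens, spacingFor, isOnScale) are illustrative assumptions, not a
// real tool's API.

// Single source of truth: components read from this object instead of
// hard-coding values, so consistency does not depend on manual syncing.
const tokens = {
  spacing: { sm: 8, md: 16, lg: 24 },              // px values
  color: { primary: "#1A73E8", surface: "#FFFFFF" },
};

type SpacingKey = keyof typeof tokens.spacing;

// A utility every component calls instead of hard-coding a number.
function spacingFor(key: SpacingKey): number {
  return tokens.spacing[key];
}

// A rule check an AI-led tool could run automatically: flag any raw
// value that is not derived from the shared scale.
function isOnScale(value: number): boolean {
  return Object.values(tokens.spacing).includes(value);
}
```

Once the rule lives in a function like `isOnScale`, enforcement is automatic; the designer's job shifts to deciding what the scale should be.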

Where Figma-Led Design Falls Short

Because Figma is an open canvas, it does not actively test or enforce the structural assumptions embedded in a design. This makes it excellent for exploration and communication, but fragile when complexity increases or when designs move closer to real system behavior. Many issues remain invisible until development begins or usage scales.

Common gaps in a Figma-led workflow:

  • Edge cases must be discovered manually, often late in the process
  • Structural conflicts only surface once designs are implemented in code
  • Consistency depends on the designer's memory, discipline, and documentation
  • Rules exist implicitly across screens rather than being enforced explicitly
  • Validation is representational, not behavioral—things look right but may not work right

Figma captures decisions, but it does not stress-test them. That burden stays with the designer and downstream teams.

Where AI-Led Tools Fall Short

AI-led tools excel at enforcing structure and surfacing inconsistencies, but they do not inherently understand intent, taste, or product vision. Without clear guidance, they can optimize locally while drifting globally—producing systems that are coherent but misaligned with the desired experience.

Common gaps in an AI-led workflow:

  • Vision must be actively maintained by the designer, or outputs can drift from intent
  • AI systems optimize for rules and consistency, not experiential quality or nuance
  • Poorly framed prompts can lock in incorrect assumptions early
  • High-level product judgment (what should exist) is not inferred reliably
  • Iteration can converge too quickly, reducing exploratory breadth

AI tools validate structure aggressively, but they do not replace authorship. Direction, framing, and value judgments remain human responsibilities.

An AI-Directed Design Approach

Rather than using AI tools to generate finished screens, I treated them as system-level collaborators. The goal was to make assumptions, rules, and dependencies explicit early—while changes were still inexpensive and reversible. This shifted design effort away from depicting outcomes and toward constructing a system that could be tested as it evolved.

Design judgment remained central throughout the process; the tools enforced structure, but direction and intent stayed human-led.

Audit → Plan Inquiry → Execution → Validation

The workflow followed a repeatable prompt lifecycle designed to reduce drift and surface issues early. I began by asking the system to examine existing structure, assumptions, and risks. This included identifying potential edge cases, conflicts, and areas where ambiguity could compound over time.

Before making changes, prompts focused on clarifying what needed to be true for the system to function correctly. This phase emphasized constraints, relationships, and invariants rather than solutions. Execution was deliberately scoped, addressing one concern at a time and avoiding broad, multi-purpose instructions that could introduce unintended side effects.

After each step, the system was asked to validate its own output—checking for internal consistency, unintended consequences, and alignment with previously established rules. This lifecycle helped ensure that decisions were reasoned through before being implemented.
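The lifecycle described above can be sketched as a simple loop. This is an illustrative TypeScript sketch under stated assumptions: the phase names mirror the section's workflow, while `runConcern` and the `runPhase` callback are hypothetical stand-ins for sending scoped prompts to an AI tool and collecting the issues it reports.

```typescript
// Illustrative sketch of the audit → plan → execute → validate loop.
// runConcern and runPhase are hypothetical names, not a real API; in
// practice each phase would be a scoped prompt to an AI tool.

type Phase = "audit" | "plan" | "execute" | "validate";

interface StepResult {
  phase: Phase;
  issues: string[]; // conflicts or inconsistencies surfaced in this phase
}

const lifecycle: Phase[] = ["audit", "plan", "execute", "validate"];

// One narrowly scoped concern per pass, mirroring the single-purpose
// prompts described above.
function runConcern(
  concern: string,
  runPhase: (phase: Phase, concern: string) => string[],
): StepResult[] {
  const results: StepResult[] = [];
  for (const phase of lifecycle) {
    const issues = runPhase(phase, concern);
    results.push({ phase, issues });
    // Stop at the first phase that surfaces issues, so problems are
    // resolved before the next concern enters the loop.
    if (issues.length > 0) break;
  }
  return results;
}
```

Stopping at the first phase that reports issues reflects the deliberate scoping described above: each concern is reasoned through and validated before work moves on, rather than letting ambiguity compound.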

Design Judgment in an AI-Directed Workflow

AI tools enforce rules and surface inconsistencies, but they do not understand intent. Throughout the process, maintaining alignment with the intended vision remained a human responsibility. The system could push back on structure, but deciding what should exist—and why—required continuous oversight.

Used this way, AI tools complemented design judgment rather than replacing it, making system behavior visible while keeping authorship firmly with the designer.