yancymin
Product designer ∙ Design engineer Head of Design at https://t.co/OTPljuChyR Maker of @wegic_ai, https://t.co/C9wfQHd11M and https://t.co/sjDfGvpXzb

May 28, 2024, 9 tweets

The future is happening.

Automation powered by GPT-4o generates Figma designs from a PRD.

@figma

I will provide more details about the process later.

I've been using GPT-3.5 since May 2023 to realize this vision.

The motivation was that the AI2UI products currently on the market are all template-based: they scale poorly across different product requirements and can't understand requirements at the level of detail a human designer can. So I began a three-month exploration with the goal of making AI-generated design drafts genuinely practical (generating interfaces with the user's own design system), while hoping to uncover a revolutionary approach to the whole interface-construction process.

The initial test results were very poor, but they gave me a sense of GPT's potential. (The earliest version focused purely on layout, using Ant Design components, with no content data filled in.)
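The thread doesn't share the actual data format, but to make the idea concrete, a layout-only intermediate representation in this spirit might look roughly like the sketch below. Every type, field, and component name here is hypothetical; only the notion of "Ant Design components plus layout, no content yet" comes from the tweet.

```typescript
// Hypothetical sketch of a layout-only intermediate representation.
// None of these type or field names come from the thread; they only
// illustrate "components + layout, with no content filled in yet".
interface LayoutNode {
  component: string;            // an Ant Design component name, e.g. "Card", "List", "Button"
  direction?: "row" | "column"; // how children are stacked
  children?: LayoutNode[];      // nested page structure
}

// A model could be asked to emit something like this for a simple settings page:
const settingsPage: LayoutNode = {
  component: "Page",
  direction: "column",
  children: [
    { component: "NavBar" },
    {
      component: "List",
      direction: "column",
      children: [
        { component: "ListItem" },
        { component: "ListItem" },
        { component: "ListItem" },
      ],
    },
    { component: "Button" },
  ],
};
```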

Second Phase:
Building on the first phase, this phase aimed to make component selection more sensible and to better organize page layouts.

Third Phase:
After more than a month of intensive testing, I incorporated local styles, text, images, and icon content. This made the interfaces more practical and detailed and better suited to real-world use.
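The thread doesn't explain how local styles get wired in, but in a Figma plugin the file's own styles are available programmatically, so one plausible (hedged) approach is to look them up by name and apply them to generated nodes. The style names below are assumptions about how a user's design system might be organized; the Plugin API calls (getLocalTextStyles, getLocalPaintStyles, textStyleId, fillStyleId) are real.

```typescript
// Hedged sketch: look up the file's local styles by name and apply them to
// generated nodes. The style names "Text/Body" and "Background/Card" are
// assumptions; the API calls themselves are part of the Figma Plugin API.
function applyLocalStyles(card: FrameNode, label: TextNode): void {
  const textStyle = figma
    .getLocalTextStyles()
    .find((s) => s.name === "Text/Body");
  if (textStyle) {
    label.textStyleId = textStyle.id;
  }

  const fillStyle = figma
    .getLocalPaintStyles()
    .find((s) => s.name === "Background/Card");
  if (fillStyle) {
    card.fillStyleId = fillStyle.id;
  }
}
```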

Fourth Phase:
After three months of testing and adjusting the implementation plan, I got the result below in the early hours of the morning. I still remember the mix of joy, excitement, and a little frustration. I hadn't expected AI to exceed my expectations in a field I know so well.

(The following are the results from testing with multiple design systems)

Finally, I want to say:

The three-month journey was incredibly challenging, because in the early stages GPT's output was uncontrollable (and, of course, learning prompt engineering was essential). I spent countless days and nights battling with it. Along the way I swapped out many data-exchange schemes and wired together the various stages of the engineering pipeline.

Three months ago, the premise was the same: the AI2UI products on the market are all template-based, scale poorly across different product requirements, and can't understand requirements at the level of detail a human designer can. Generating UI design drafts with the design system the user chooses is the right path.

I believe this result has achieved 70% of that goal, with the following capabilities:

- Support for using mid-to-high quality design systems, such as Ant Design Mobile and Arco Mobile.
- Understanding and parsing PRDs into a specific data format.
- Sensible content filling using local styles, custom icon libraries, and text.
- Desktop web has been tested, though the results are roughly 30% less refined than mobile apps (the focus was mobile at the time, and I don't expect desktop generation to be a problem).
- Interactive links between multiple pages are feasible, and an implementation path already exists.
- All generated design drafts use auto layout (supporting adaptive stretching) and have semantically named layers (see the sketch after this list).
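
The thread doesn't show any of the underlying code, but the last two points map fairly directly onto the Figma Plugin API. Here is a minimal sketch, assuming a hypothetical SectionSpec shape that a PRD parser might emit; the Plugin API calls (createFrame, layoutMode, loadFontAsync, and so on) are real, while everything else is illustrative.

```typescript
// Minimal sketch: materialize one parsed PRD section as an auto-layout frame
// with semantically named layers. The SectionSpec shape and the layer-naming
// convention are hypothetical; the Figma Plugin API calls are real.
interface SectionSpec {
  name: string;   // semantic layer name, e.g. "Settings / Notifications"
  title: string;
  items: string[];
}

async function buildSection(spec: SectionSpec): Promise<FrameNode> {
  // Fonts must be loaded before text characters can be set.
  await figma.loadFontAsync({ family: "Inter", style: "Regular" });

  // Vertical auto layout: children stack, the frame hugs its content height.
  const frame = figma.createFrame();
  frame.name = spec.name;
  frame.layoutMode = "VERTICAL";
  frame.primaryAxisSizingMode = "AUTO";  // hug content vertically
  frame.counterAxisSizingMode = "FIXED"; // fixed width that children can stretch to
  frame.resize(375, frame.height);       // mobile viewport width
  frame.itemSpacing = 12;
  frame.paddingTop = frame.paddingBottom = 16;
  frame.paddingLeft = frame.paddingRight = 16;

  const title = figma.createText();
  title.name = `${spec.name} / Title`;
  title.characters = spec.title;
  frame.appendChild(title);

  for (const item of spec.items) {
    const row = figma.createText();
    row.name = `${spec.name} / Item / ${item}`;
    row.characters = item;
    row.textAutoResize = "HEIGHT";  // let the layout control the width
    frame.appendChild(row);
    row.layoutAlign = "STRETCH";    // adaptive stretching across the frame width
  }

  figma.currentPage.appendChild(frame);
  return frame;
}

// Usage: one section that a (hypothetical) PRD parser produced.
buildSection({
  name: "Settings / Notifications",
  title: "Notifications",
  items: ["Push notifications", "Email digest", "Do not disturb"],
}).then(() => figma.closePlugin());
```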

✨ Going forward, I plan to build a Figma plugin or product around this. If you're interested and have a need for it, please fill out the form. Perhaps one night you'll receive my sincere invitation to try it.
docs.google.com/forms/d/e/1FAI…
