Would you use a design tool's exported code in a project?
Ok, some context. I've spent the week researching these kinds of features, maybe it'll help to tweet through it. 🧵
These days, we expect that a modern design tool will include some kind of "inspection" mode where a developer can view dimensions, export values for colors, typography, etc. Pretty basic but a dev can use this information in any kind of project with zero commitment.
On the other end of the commitment spectrum, you've got tools that export an entire project: tools like @webflow or @draftbit or even @wix or @squarespace. These export a more or less "sealed" version of an app—you likely won't be using their code alongside your own code.
Then there's the uncanny middle ground, where design tools like @modulz or @plasmic will export assets and components that are meant to be used alongside human-authored code. These might come via a build dependency or through an API.
On one hand, this kind of export is really exciting. A designer's changes would get to the app more directly; more ownership. You'd see more small improvements that wouldn't have been worth resourcing dev time for—because you wouldn't need a developer to make those changes.
On the other hand... I have a hard time thinking of any project that I've worked on where a direct design production pipeline would have been realistic / desirable beyond icons or other atomic assets.
For one, there's too much to break. Design releases would need to be versioned, published, and approved. (Not sure how that would work with an API). I'd worry that the designs would become fragile once they became the source of truth for production.
For two, every project is built differently. With so many unknowns, the tool's component export would likely need to bring its own solutions for styling, animation, state management, etc. And that would be the case for every target platform: iOS, web, Android, etc.
And then there's the trust issue. Let's say my production app is consuming styles or even streaming components from a design tool's API at runtime. A team would need to be very confident in that API's stability, right? (Maybe I'm over-estimating that, I dunno).
And maybe the upsides would be worth it. A new project could get a big head start through a relationship to a design tool. Processes could catch up with the change, teams could adjust to more design ownership.
Is it a niche use-case though? Is something like Figma-to-React just not practical for most teams, or are the tools just not caught up to where they need to be?
• • •
Turning brush strokes into polygons is fun, but what does a regular dab-style brush engine look like in the browser? brush-engine.vercel.app
This is a *much* easier problem than creating pressure-sensitive stroke polygons. All we do is repeat a texture (in this case, a circle) at regular distances along a line, with the pressure (real or simulated) determining the size of the texture.
This is how apps like Photoshop or Procreate work. There's some interpolation between points in order to fill in the blank spaces—and you can use a lot more data to determine the texture's size, skew, opacity, etc—but that's the general idea.
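For a rough idea of the stamping loop, here's a minimal sketch assuming a 2D canvas context. The names and the spacing/radius parameters are illustrative, not the demo's actual code:

```ts
// Stamp circles ("dabs") at regular distances along the segment between
// two input points, interpolating position and pressure between them.
// Illustrative sketch only; names and parameters are made up.
type InputPoint = { x: number; y: number; pressure: number }; // pressure in [0, 1]

function stampSegment(
  ctx: CanvasRenderingContext2D,
  a: InputPoint,
  b: InputPoint,
  spacing = 2,   // distance between dabs, in px
  maxRadius = 16 // dab radius at full pressure
) {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const steps = Math.max(1, Math.floor(Math.hypot(dx, dy) / spacing));

  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    const pressure = a.pressure + (b.pressure - a.pressure) * t;
    ctx.beginPath();
    ctx.arc(a.x + dx * t, a.y + dy * t, maxRadius * pressure, 0, Math.PI * 2);
    ctx.fill();
  }
}
```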
Pressure-sensitive lines (drawing with an iPad pencil)
In order to make this work, we start by capturing points from a mark.
We also capture a delta (the change between this point and the previous point), from which we can get a direction for each point. Here I'm drawing lines at each point with that point's direction rotated 90deg.
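Something like this sketch, assuming pointer events (names are mine, not the demo's):

```ts
// Record each point of the mark along with its delta from the previous
// point; the delta gives a direction, and rotating that direction 90deg
// gives the perpendicular used to draw the little lines at each point.
type MarkPoint = { x: number; y: number; pressure: number; dx: number; dy: number };

const points: MarkPoint[] = [];

function addPoint(e: PointerEvent) {
  const prev = points[points.length - 1];
  const dx = prev ? e.clientX - prev.x : 0;
  const dy = prev ? e.clientY - prev.y : 0;
  points.push({ x: e.clientX, y: e.clientY, pressure: e.pressure, dx, dy });
}

// Unit direction of a point's delta, rotated 90 degrees: (x, y) -> (-y, x).
function perpendicular(p: MarkPoint): [number, number] {
  const len = Math.hypot(p.dx, p.dy) || 1;
  return [-p.dy / len, p.dx / len];
}
```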
Aspect ratio resizing was breaking my head. I just couldn't figure out when to respect the user's height (and scale the width) vs. when to respect the user's width (and scale the height).
Long story short, remember the selection's initial aspect ratio (width / height, shown in green) and compare it against the current aspect ratio (pink). If the current ratio is greater than the initial ratio, the box is proportionally wide, so respect the width and scale the height; otherwise respect the height and scale the width.
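As a sketch of that rule (hypothetical names, not the actual implementation):

```ts
// Constrain a resize to the selection's initial aspect ratio. If the
// dragged box is proportionally wider than the original, keep the user's
// width and derive the height; otherwise keep the height and derive the width.
function constrainToRatio(
  initialRatio: number, // width / height at the start of the resize
  width: number,        // width of the dragged box
  height: number        // height of the dragged box
): { width: number; height: number } {
  const currentRatio = width / height;
  if (currentRatio > initialRatio) {
    // Proportionally wide: respect the width, scale the height.
    return { width, height: width / initialRatio };
  }
  // Proportionally tall: respect the height, scale the width.
  return { width: height * initialRatio, height };
}
```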
Flips also took a minute to work through. Depending on which dimension is being scaled, we need to offset the opposite point by the scaled width or height. Really visual problem; my comments have drawings in them.
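A minimal sketch of the flip handling, under the assumption that a crossed-over drag shows up as a negative width or height (illustrative only):

```ts
// After a flip, the scaled width or height comes out negative. Offset the
// opposite point by that scaled amount and flip the sign, so the rest of
// the code always sees a positive-size box.
type Box = { x: number; y: number; width: number; height: number };

function normalizeBox(box: Box): Box {
  let { x, y, width, height } = box;
  if (width < 0) {
    x += width; // offset the opposite point by the scaled width
    width = -width;
  }
  if (height < 0) {
    y += height; // offset the opposite point by the scaled height
    height = -height;
  }
  return { x, y, width, height };
}
```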
Small fix: zooming in on arrows in @excalidraw no longer leads to clipped arrows.
(This is how it used to look.)
Since I have the GIFs, here's a quick explanation. When it comes time to draw an element, it is first drawn onto an off-screen canvas. This canvas is then composited onto the main canvas.
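A minimal sketch of that pattern, assuming a 2D canvas. The padding parameter is my guess at one way to avoid the clipping, not necessarily the actual fix:

```ts
// Draw an element to an off-screen canvas, then composite that canvas
// onto the main one. Padding the off-screen canvas gives strokes near
// the element's bounds room so they don't get cut off.
function drawElement(
  main: CanvasRenderingContext2D,
  element: { x: number; y: number; width: number; height: number },
  render: (ctx: CanvasRenderingContext2D) => void,
  padding = 20
) {
  const offscreen = document.createElement("canvas");
  offscreen.width = element.width + padding * 2;
  offscreen.height = element.height + padding * 2;

  const ctx = offscreen.getContext("2d")!;
  ctx.translate(padding, padding);
  render(ctx); // draw the element into the off-screen canvas

  // Composite the off-screen canvas onto the main canvas.
  main.drawImage(offscreen, element.x - padding, element.y - padding);
}
```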
Hit testing against our axis-aligned rectangles is easy. Is the point left of the left edge? Right of the right edge, above the top, or below the bottom? If you answered yes to any of these questions then it's a miss; otherwise, it's a hit.
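In code, the four questions look like this (a standard AABB test; names are mine):

```ts
// The four rejection questions from above; any "yes" is a miss.
type AABB = { minX: number; minY: number; maxX: number; maxY: number };

function hitTestAABB(px: number, py: number, box: AABB): boolean {
  if (px < box.minX) return false; // left of the left edge
  if (px > box.maxX) return false; // right of the right edge
  if (py < box.minY) return false; // above the top edge
  if (py > box.maxY) return false; // below the bottom edge
  return true;
}
```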
Our lines are more difficult. We want to score hits within a certain distance of the line so we need to turn the line into a box. If the line is straight then we test against a polygon (a potentially rotated rectangle). Trickier but not bad. Curved lines? Those are hard.
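For the straight-line case, scoring hits within a distance of the segment is a standard point-to-segment check, sketched here (not the app's actual code):

```ts
// Score a hit when the point is within `threshold` px of the segment,
// via a clamped projection onto the segment.
function hitTestSegment(
  px: number, py: number,
  x1: number, y1: number,
  x2: number, y2: number,
  threshold = 8
): boolean {
  const dx = x2 - x1;
  const dy = y2 - y1;
  const lengthSq = dx * dx + dy * dy;

  // Parameter of the nearest point on the segment, clamped to [0, 1].
  const t = lengthSq === 0
    ? 0
    : Math.max(0, Math.min(1, ((px - x1) * dx + (py - y1) * dy) / lengthSq));

  return Math.hypot(px - (x1 + t * dx), py - (y1 + t * dy)) <= threshold;
}
```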
Experiment with adjustable arrows. The whole point of my arrows library was to avoid this kind of thing—and let the algorithm "guess" the best arc for an arrow, based on its distance—but this is useful too. 🏹🧵
For comparison, here are the same arrangements (almost) with the guessing algorithm in action.
The library has always supported two options: "bow" defines the minimum arc and "stretch" adds more arc to short arrows. The first GIF is essentially a UI to set the bow of arrows with no stretch. The second GIF is a set of arrows that all have the same non-zero bow and stretch.
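Assuming the library here is perfect-arrows, a minimal sketch of those two options (getArrow also returns angles for positioning an arrowhead, omitted here):

```ts
// Get an arrow with a given bow and stretch, and render it as an SVG
// quadratic curve.
import { getArrow } from "perfect-arrows";

const [sx, sy, cx, cy, ex, ey] = getArrow(64, 64, 320, 180, {
  bow: 0.2,     // minimum arc for every arrow
  stretch: 0.5, // extra arc added to short arrows; 0 disables it
});

const path = `M ${sx} ${sy} Q ${cx} ${cy} ${ex} ${ey}`;
// e.g. <path d={path} fill="none" stroke="black" />
```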