This is building on some of the patterns that I used in globs.design: a normal design canvas space and a code editor that can be used to create things on the canvas.
I've added the ability to create controls like these. A control's value can be used in the code; when you change that value, the code runs again with the new value.
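To make the idea concrete, here's a minimal sketch of how a control like that could be wired up. All the names here (`createControl`, `runCanvasCode`) are hypothetical, not the tool's real API:

```ts
// Hypothetical names throughout; this isn't the tool's actual API.

type Control<T> = {
  label: string
  value: T
  set: (next: T) => void
}

// Re-running the user's script is stubbed out here.
function runCanvasCode() {
  /* re-execute the code editor's script against the canvas */
}

function createControl<T>(label: string, initial: T): Control<T> {
  const control: Control<T> = {
    label,
    value: initial,
    set(next) {
      control.value = next
      runCanvasCode() // changing a control's value re-runs the code
    },
  }
  return control
}

// The code can read `radius.value`; dragging the slider would call `radius.set(75)`.
const radius = createControl("Radius", 50)
```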
Why make it? I've been building little 2D sandboxes like this for almost a year now, each in support of a different project (arrows, freehand, globs). I wanted a tool to help me do that work faster, and that could be easily extended to support new primitives.
It's great to be able to move between manipulating shapes myself (like I would in a regular design tool) and then punching shapes out with code, and then back. Perfect for finding patterns.
Anyway, there's still a bunch of work to do (especially in the code API) but the foundation is strong. I think I can use this for all sorts of stuff. Keep an eye out for a live demo soon.
Let's look at rotation in a canvas UI. Rotation is hard. 💀
When we have a single shape selected, we can start dragging on a handle here and then rotate the shape. The shape rotates around its center point.
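In code, the single-shape case is roughly this (a sketch with simplified types, not the app's actual implementation):

```ts
type Point = { x: number; y: number }

// Angle (in radians) from a center point to the pointer.
function angleTo(center: Point, point: Point): number {
  return Math.atan2(point.y - center.y, point.x - center.x)
}

// While dragging: the shape's new rotation is its rotation at the start of the
// drag plus how far the pointer has swept around the shape's center.
function rotateShape(
  initialRotation: number,
  center: Point,
  dragStart: Point,
  dragCurrent: Point
): number {
  const delta = angleTo(center, dragCurrent) - angleTo(center, dragStart)
  return initialRotation + delta
}
```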
When more than one item is selected, we do something a little different. We're actually making two changes to these shapes as we rotate the group. I'll show each change separately.
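My best guess at what those two changes look like together, sketched with simplified types: each shape's center orbits the selection's center, and each shape's own rotation picks up the same delta.

```ts
type Point = { x: number; y: number }
type Shape = { center: Point; rotation: number }

function rotatePointAround(point: Point, center: Point, angle: number): Point {
  const sin = Math.sin(angle)
  const cos = Math.cos(angle)
  const dx = point.x - center.x
  const dy = point.y - center.y
  return {
    x: center.x + dx * cos - dy * sin,
    y: center.y + dx * sin + dy * cos,
  }
}

function rotateSelection(shapes: Shape[], selectionCenter: Point, delta: number): Shape[] {
  return shapes.map((shape) => ({
    // Change 1: move the shape's center around the selection's center.
    center: rotatePointAround(shape.center, selectionCenter, delta),
    // Change 2: rotate the shape itself by the same amount.
    rotation: shape.rotation + delta,
  }))
}
```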
Turning brush strokes into polygons is fun, but what does a regular dab-style brush engine look like in the browser? brush-engine.vercel.app
This is a *much* easier problem than creating pressure-sensitive stroke polygons. All we do is repeat a texture (in this case, a circle) at regular distances along a line, with the pressure (real or simulated) determining the size of the texture.
This is how apps like Photoshop or Procreate work. There's some interpolation between points in order to fill in the blank spaces—and you can use a lot more data to determine the texture's size, skew, opacity, etc—but that's the general idea.
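Here's a rough sketch of that loop (not the actual code behind brush-engine.vercel.app): walk each input segment at a fixed spacing, interpolate the pressure along the way, and stamp a circle sized by it.

```ts
type InputPoint = { x: number; y: number; pressure: number }

function stampSegment(
  ctx: CanvasRenderingContext2D,
  from: InputPoint,
  to: InputPoint,
  spacing = 2, // distance between stamps, in px
  baseRadius = 8
) {
  const dx = to.x - from.x
  const dy = to.y - from.y
  const dist = Math.hypot(dx, dy)
  const steps = Math.max(1, Math.floor(dist / spacing))

  for (let i = 0; i <= steps; i++) {
    const t = i / steps
    // Interpolate pressure between the two input points to fill the blank space.
    const pressure = from.pressure + (to.pressure - from.pressure) * t
    ctx.beginPath()
    ctx.arc(from.x + dx * t, from.y + dy * t, baseRadius * pressure, 0, Math.PI * 2)
    ctx.fill()
  }
}
```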
Would you use a design tool's exported code in a project?
Ok, some context. I've spent the week researching these kinds of features, so maybe it'll help to tweet through it. 🧵
These days, we expect that a modern design tool will include some kind of "inspection" mode where a developer can view dimensions, export values for colors, typography, etc. Pretty basic, but a dev can use this information in any kind of project with zero commitment.
Pressure-sensitive lines (drawing with the iPad pencil)
In order to make this work, we start by capturing points from a mark.
We also capture a delta (the change between this point and the previous point), from which we can get a direction for each point. Here I'm drawing lines at each point with that point's direction rotated 90deg.
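Sketched out (simplified types, not the project's real code), that looks something like this:

```ts
type Mark = { x: number; y: number; pressure: number }

function drawPerpendiculars(ctx: CanvasRenderingContext2D, points: Mark[], length = 10) {
  for (let i = 1; i < points.length; i++) {
    const prev = points[i - 1]
    const curr = points[i]

    // Delta between this point and the previous one.
    const dx = curr.x - prev.x
    const dy = curr.y - prev.y
    const len = Math.hypot(dx, dy) || 1

    // Unit direction, rotated 90 degrees: (dx, dy) -> (-dy, dx).
    const nx = -dy / len
    const ny = dx / len

    // Draw a short line through the point along that perpendicular.
    // (Pressure could scale `length` here to make the line width pressure-sensitive.)
    const half = length / 2
    ctx.beginPath()
    ctx.moveTo(curr.x - nx * half, curr.y - ny * half)
    ctx.lineTo(curr.x + nx * half, curr.y + ny * half)
    ctx.stroke()
  }
}
```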
Aspect ratio resizing was breaking my head. I just couldn't figure out when to respect the user's height (and scale the width) vs. when to respect the user's width (and scale the height).
Long story short: remember the selection's initial aspect ratio (width / height, shown in green) and compare it against the current aspect ratio (pink). If the current ratio is greater than the initial ratio, the drag is proportionally wider than the selection started out, so respect the width and scale the height; otherwise, respect the height and scale the width.
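As a sketch of that rule (ratios are width / height, as above):

```ts
type Size = { width: number; height: number }

function resizeWithLockedAspect(initial: Size, dragged: Size): Size {
  const initialRatio = initial.width / initial.height
  const currentRatio = dragged.width / dragged.height

  if (currentRatio > initialRatio) {
    // Respect the width, scale the height.
    return { width: dragged.width, height: dragged.width / initialRatio }
  }

  // Respect the height, scale the width.
  return { width: dragged.height * initialRatio, height: dragged.height }
}
```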
Flips also took a minute to work through. Depending on which dimension is being scaled, we need to offset the opposite point by the scaled width or height. Really visual problem; my comments have drawings in them.
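A sketch under my own assumptions of how that offset could work (not the file's actual code): when the drag crosses the fixed corner, the scale goes negative, and the bounds extend in the other direction from that corner, so we offset it by the scaled dimension.

```ts
type Bounds = { x: number; y: number; width: number; height: number }

function boundsFromAnchor(
  anchor: { x: number; y: number }, // the corner opposite the handle being dragged
  initialWidth: number,
  initialHeight: number,
  scaleX: number, // negative when flipped horizontally
  scaleY: number // negative when flipped vertically
): Bounds {
  const width = initialWidth * Math.abs(scaleX)
  const height = initialHeight * Math.abs(scaleY)
  return {
    // When flipped, the box now extends the other way from the anchor,
    // so offset the anchor point by the scaled width or height.
    x: scaleX < 0 ? anchor.x - width : anchor.x,
    y: scaleY < 0 ? anchor.y - height : anchor.y,
    width,
    height,
  }
}
```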
Small fix: zooming in on arrows in @excalidraw no longer leads to clipped arrows.
(This is how it used to look.)
Since I have the GIFs, here's a quick explanation. When it comes time to draw an element, it is first drawn onto an off-screen canvas. That canvas is then composited onto the main canvas.
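A minimal sketch of that technique (not Excalidraw's actual code):

```ts
function drawElement(
  mainCtx: CanvasRenderingContext2D,
  render: (ctx: CanvasRenderingContext2D) => void,
  width: number,
  height: number,
  x: number,
  y: number
) {
  // Draw onto an off-screen canvas sized to the element (plus whatever padding
  // is needed so strokes aren't clipped at the edges).
  const offscreen = document.createElement("canvas")
  offscreen.width = width
  offscreen.height = height
  const offCtx = offscreen.getContext("2d")!
  render(offCtx)

  // Composite the off-screen canvas onto the main canvas.
  mainCtx.drawImage(offscreen, x, y)
}
```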