Nick Lockwood
iOS. 3D graphics. Retro games. He/Him. https://t.co/L9WoWqyRx8 https://t.co/ZhvOPhQsmv
Nov 7, 2022 7 tweets 2 min read
I see that Jack has had the audacity to make a reappearance, so while it's been fun hating on Elon Musk these last few weeks, we should spare a thought for the real villain of this piece: possibly the most worthless CEO who ever lived.

Is there a bigger cuck in the world than Jack Dorsey? A man who literally invited Elon Musk to come in and fuck his company while he watched and offered tips.
Sep 30, 2022 8 tweets 2 min read
Reminder that AI researchers have been saying this for longer than this man has been alive, and we are still not any closer to achieving AGI than we were in the 1950s when Alan Turing first proposed it.

To be clear, I don't believe AGI is impossible or unachievable (humans have it, and we aren't powered by magic, so it's clearly possible).

I'm also not suggesting we haven't achieved anything in non-general AI - stuff like Stable Diffusion is mindblowing (but unrelated to AGI).
Jul 13, 2022 13 tweets 3 min read
We joke a lot about AbstractHammerFactories and over-engineering, but I've realized (based on code reviews) that a lot of iOS devs don't really seem to understand the purpose behind mocking and factories and when it is and isn't appropriate to use them.

So here's an explainer 🧵

Let's start with mocking.

A mock object is a fake implementation of an object used to provide known behavior for testing purposes.

Suppose you have an object A, and you want to test it. Many devs start by creating an AProtocol so they can mock A, but this isn't necessarily needed.
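
For context, this is roughly what the protocol-plus-mock pattern looks like (a sketch with hypothetical names, not a recommendation):

import Foundation

// A dependency worth mocking: it talks to the network, so its real
// behavior is slow and non-deterministic in tests.
protocol NetworkServiceProtocol {
    func fetchData(from url: URL, completion: @escaping (Data?) -> Void)
}

// The real implementation used in production.
final class NetworkService: NetworkServiceProtocol {
    func fetchData(from url: URL, completion: @escaping (Data?) -> Void) {
        URLSession.shared.dataTask(with: url) { data, _, _ in
            completion(data)
        }.resume()
    }
}

// The mock: a fake implementation with known, controllable behavior,
// so tests can exercise the object under test without touching the network.
final class MockNetworkService: NetworkServiceProtocol {
    var stubbedData: Data?
    func fetchData(from url: URL, completion: @escaping (Data?) -> Void) {
        completion(stubbedData)
    }
}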
Oct 24, 2021 9 tweets 2 min read
Thinking more about the latest Google revelations, I think in some sense this is largely a consequence of the modern trend towards data-driven decision making. (🧵)

You may recall a post from a decade ago by a design lead who left Google after Marissa Mayer insisted on running an experiment to choose the shade of blue for a button.

That's a red flag for anyone who cares about artistry, but it actually has much more sinister implications…
Oct 17, 2021 6 tweets 2 min read
The scariest thing about machine learning is that we increasingly see it being applied to problems that *it can't possibly solve*, and yet everybody involved seems cheerfully confident that with just a bit more data it will be perfected.

Answering questions is not merely a matter of data, it's a matter of *comprehension*. It requires the responder to *understand* the question, and then either know the answer or be able to work it out from their existing knowledge. That is absolutely not what is happening here.
Sep 10, 2021 6 tweets 2 min read
DRY (Don't Repeat Yourself) is one of the most important principles for good programming, but it's also one of the most misunderstood.

It's not about avoiding repetition of *code* per se, it's about avoiding repetition of *maintenance*.

The problem with repetition is that it's easy to apply a fix in one place and then forget to do it in another.

But it's equally painful when you need to make a change in a particular code path, and find that the logic is shared with another path that isn't supposed to change.
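
A tiny hypothetical sketch of that second failure mode (names invented for illustration): a helper shared by two screens purely because the code happened to look the same.

import Foundation

// Shared by the checkout screen and the order-history screen, because
// both needed the same string at the time the code was written.
func formattedPrice(_ amount: Decimal, currencySymbol: String = "$") -> String {
    "\(currencySymbol)\(amount)"
}

// If checkout later needs to show tax-inclusive prices while order history
// must keep showing the original amount, this "shared" logic has to be
// unpicked again - the duplication that matters is duplication of
// maintenance, not duplication of characters.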
Sep 8, 2021 6 tweets 1 min read
In ShapeScript it seems like I'm going to eventually have to add support for accessing arrays/lists/tuples/whatever by index.

I've held off on this until now because of the zero vs one index problem. I want ShapeScript to be accessible to non-programmers and I can't shake the feeling that arrays starting at zero is one of those big gotchas for people, and yet I also can't bring myself to make them start at one.
Sep 6, 2021 6 tweets 2 min read
How capitalism ruins everything example 41735:

You know those annoying nag screens and popups on app launch? The ones that everybody hates?

They work. You make more money if you have them than if you don't. It's a simple statistical fact. For every user who deletes your app in disgust because you asked them to subscribe before they even tried it, there are 1.x users who subscribe who otherwise wouldn't have.

This is borne out by analytics (yes, that other thing we all hate but which apps have because it works).
Apr 13, 2021 9 tweets 3 min read
I needed some code to create Data from a hex string in Swift, and the first reasonable-looking solution I found on Google was this… code.i-harness.com/en/q/194609c

…except on closer inspection it's actually O(n^2), which is not great at all.

But the reason it's bad is interesting…

In Swift, unlike most other C-like languages, you can't access the characters in a string directly by integer index. The reason for that is that Swift strings use a variable-width UTF-8 encoding, so the 5th character is not necessarily located at the 5th byte in the string.
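
For reference, one possible linear-time approach (a hedged sketch; the Data(hexString:) initializer is just an invented name): walk the UTF-8 view once instead of repeatedly indexing the string by integer offset.

import Foundation

extension Data {
    init?(hexString: String) {
        // Each output byte needs exactly two hex digits.
        guard hexString.count % 2 == 0 else { return nil }
        self.init(capacity: hexString.count / 2)
        var high: UInt8?
        for char in hexString.utf8 {
            guard let nibble = Self.nibble(for: char) else { return nil }
            if let h = high {
                append(h << 4 | nibble)
                high = nil
            } else {
                high = nibble
            }
        }
    }

    // Maps an ASCII hex digit to its 4-bit value, or nil for anything else.
    private static func nibble(for char: UInt8) -> UInt8? {
        switch char {
        case UInt8(ascii: "0")...UInt8(ascii: "9"): return char - UInt8(ascii: "0")
        case UInt8(ascii: "a")...UInt8(ascii: "f"): return char - UInt8(ascii: "a") + 10
        case UInt8(ascii: "A")...UInt8(ascii: "F"): return char - UInt8(ascii: "A") + 10
        default: return nil
        }
    }
}

// Usage: Data(hexString: "0001fF") == Data([0x00, 0x01, 0xFF])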
Sep 7, 2020 10 tweets 2 min read
Nobody:

Me: here's how I use git (thread): the commits in my projects are almost never a reflection of actual history. Instead, before pushing to upstream (and sometimes after!) I collapse all my wips, bug fixes, etc. so my commits are mostly stable, self-contained features.

I also prefer to rebase all changes rather than merge, so that the history is linear and contains no merge commits.

Why do this? Well I don't see git history as a record of "how did we get here" so much as "when was this introduced"?
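
A rough sketch of what that workflow can look like in practice (branch names and messages are placeholders, and there are plenty of equivalent command sequences):

# commit work-in-progress freely on a feature branch
git commit -m "wip: half-working parser"

# before pushing, squash the wips into clean, self-contained commits
git rebase -i main

# keep history linear by rebasing on upstream changes instead of merging
git pull --rebase origin main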
Aug 6, 2020 4 tweets 1 min read
Don't write code that is easy to extend, write code that is easy to delete.

Easy to delete generally means:

* Concise (as few methods, classes, files as possible)
* Self-contained (as few public entry points as possible)
* Well-tested (with good enough tests, someone can recreate your code from scratch with the same behavior, even if they never saw it)
Apr 11, 2020 11 tweets 2 min read
I used to believe this seemingly obvious truism, but it turns out that it's not at all easy to rewrite your entire architecture when it's slow by design, especially if the design flaws are baked into the public API.

Part of the problem is that the very paradigms we use are mostly slow by design. OOP, RAII, FRP, ARC, GC… All of these require masses of runtime heap allocations and deallocations, and actively work against the CPU cache.
Jan 17, 2020 12 tweets 3 min read
Interesting post from Brent Simmons here about use of let vs var by default.

I generally disagree, and I think most structs should have var properties by default, but there is some nuance to the decision about which to use: (thread)

The choice depends on whether the struct is just a bundle of properties, or if it enforces a contract. For example, for a struct like this

struct Vector {
var x, y, z: Float
}

These should absolutely be vars, because let properties would force all mutations to go through the constructor.
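
And for contrast, a purely illustrative sketch of the other case: a struct that enforces a contract, where lets (and a validating initializer) are exactly what you want.

// A struct that enforces an invariant: the address must contain "@".
struct EmailAddress {
    let rawValue: String

    init?(_ string: String) {
        guard string.contains("@") else { return nil }
        rawValue = string
    }
}

// Because rawValue is a let, nothing can mutate it into an invalid state
// after initialization - every EmailAddress that exists passed the check.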
Jan 15, 2020 34 tweets 7 min read
I got a 2X speed boost in Euclid by replacing the guts of my Polygon struct with a class instead of having the properties inline.

github.com/nicklockwood/E…

Never let anyone tell you Swift perf is intuitive (I ❤️ that Swift semantics mean this wasn't an API-breaking change though)

(Some explanations/clarifications)

Perf-wise, this is the same as just using a class for Polygon instead of a struct. I did it this way to avoid an API change, and to retain value semantics for the couple of mutable properties (for the immutable properties it doesn't matter).
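
Roughly speaking, the change looks like this (a simplified sketch with made-up property types, not the actual Euclid source): the public type stays a struct, but the stored properties move into a private reference type, so copying the struct copies a single reference instead of all the fields inline.

struct Vertex { var x, y, z: Double }            // placeholder type for illustration
struct Plane { var normal: Vertex; var w: Double } // placeholder type for illustration

struct Polygon {
    // All the formerly-inline properties now live in a single heap object.
    private final class Storage {
        let vertices: [Vertex]
        let plane: Plane
        init(vertices: [Vertex], plane: Plane) {
            self.vertices = vertices
            self.plane = plane
        }
    }

    private let storage: Storage

    init(vertices: [Vertex], plane: Plane) {
        storage = Storage(vertices: vertices, plane: plane)
    }

    // The public API is unchanged, so callers can't tell the difference.
    var vertices: [Vertex] { storage.vertices }
    var plane: Plane { storage.plane }
}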
Nov 7, 2019 8 tweets 2 min read
I once complained how much boilerplate it took to wrap a String in Swift vs just using a typealias, and how it leads to worse design decisions.

It somehow escaped my notice how simple it is now:

struct FoobarID: RawRepresentable, Hashable, Codable {
let rawValue: String
}

There's really no excuse now for ever exposing raw String identifiers, constants or UUIDs in a Swift API.

The aforementioned code lets you easily create a unique type for every identifier, providing complete type safety without any inconvenience at the point of use.
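
For illustration (hypothetical types), here's how that plays out at the point of use: each identifier gets its own type, so mixing them up becomes a compile-time error rather than a runtime bug.

struct UserID: RawRepresentable, Hashable, Codable {
    let rawValue: String
}

struct OrderID: RawRepresentable, Hashable, Codable {
    let rawValue: String
}

func cancelOrder(_ id: OrderID) { /* ... */ }

let userID = UserID(rawValue: "u-123")
let orderID = OrderID(rawValue: "o-456")
cancelOrder(orderID)    // fine
// cancelOrder(userID)  // error: cannot convert 'UserID' to 'OrderID'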
Sep 17, 2019 5 tweets 1 min read
Something I've spent time on recently is the problem of serializing heterogeneous arrays (arrays containing multiple types) in Swift using Codable.

Here's a pattern I've found that works pretty well, using a protocol and a type-erased wrapper: gist.github.com/nicklockwood/8…

Normally in Swift you do polymorphism by *either* using a protocol *or* an enum (for open or closed sets, respectively). This approach requires you to use *both*, which is slightly odd, and it inherently only supports closed sets, but it's relatively little code to add new cases.
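
As a rough sketch of that shape (invented types, not the actual gist): a protocol for the elements, a private enum acting as the closed set of type tags, and a type-erased wrapper that does the encoding and decoding.

import Foundation

protocol Shape: Codable {
    var area: Double { get }
}

struct Circle: Shape {
    var radius: Double
    var area: Double { Double.pi * radius * radius }
}

struct Square: Shape {
    var side: Double
    var area: Double { side * side }
}

// Type-erased wrapper: stores a type tag alongside the payload so it
// knows which concrete type to decode.
struct AnyShape: Codable {
    var shape: Shape

    private enum Kind: String, Codable {
        case circle, square
    }

    private enum CodingKeys: String, CodingKey {
        case kind, shape
    }

    init(_ shape: Shape) {
        self.shape = shape
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        switch try container.decode(Kind.self, forKey: .kind) {
        case .circle: shape = try container.decode(Circle.self, forKey: .shape)
        case .square: shape = try container.decode(Square.self, forKey: .shape)
        }
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        switch shape {
        case let circle as Circle:
            try container.encode(Kind.circle, forKey: .kind)
            try container.encode(circle, forKey: .shape)
        case let square as Square:
            try container.encode(Kind.square, forKey: .kind)
            try container.encode(square, forKey: .shape)
        default:
            throw EncodingError.invalidValue(shape, .init(
                codingPath: encoder.codingPath,
                debugDescription: "Unknown Shape type"))
        }
    }
}

// A heterogeneous array now round-trips through JSON:
// let data = try JSONEncoder().encode([AnyShape(Circle(radius: 1)), AnyShape(Square(side: 2))])
// let shapes = try JSONDecoder().decode([AnyShape].self, from: data)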
Jun 27, 2019 5 tweets 1 min read
I get a real cognitive dissonance when I see threads of Obj-C fans complaining about how complex and confusing Swift is. I used and loved Objective-C for most of a decade, but I just can't reconcile the rose-tinted views of it with my own experience.

Swift is certainly not perfect, but it has freed me from a bunch of pain I suffered when trying to write esoteric things like graphics, games or parsers in Obj-C (mostly around trying to find the balance between fast-but-primitive C and slow-but-elegant Obj-C APIs)
Nov 8, 2018 4 tweets 1 min read
God, the -1 comments on the Swift Evolution Result proposal are so depressing.

“I’ve never used this, I prefer Promises” - ok, well I don’t see your Promises proposal.

“Let’s not be hasty and add something half-baked” - hasty?! This is the 3rd or 4th iteration of this proposal!

“We won’t need Result once we have async/await” - great, then we can deprecate it when we get those. That’s how evolution works.

“What’s the point if it doesn’t have language sugar” - sugar can easily be added after the ABI lockdown. New types can’t.
Jun 26, 2018 10 tweets 2 min read
On the face of it, yearly subscriptions for software aren't fundamentally different from yearly paid updates. In theory the customer pays the same and gets the same. The difference with subs is that the burden is on the customer to cancel rather than on the company to retain them.

(I should add that there *is* a major difference between these models if the software actually stops working when the subscription ends, but assuming that the customer simply stops getting updates and the old version continues to work until it succumbs to bitrot, the above holds)
May 9, 2018 7 tweets 2 min read
The problem with the Turing test is that it hinges on the ability to fool a human, but humans have an innate desire to be fooled.

That’s why true scientific experiments rely on “double blind” conditions, so the experimenter’s own credulity is eliminated as a factor in the result.

If we look at that Google AI demo, there were several remarkable technical advances on display, but what was the thing that impressed the audience most?

It was the fact that the AI said “um” between sentences. A cheap trick that required no advances in AI whatsoever.