The biggest problem with async/await is the “colored functions” problem, brilliantly explained in this article: journal.stuffwithstuff.com/2015/02/01/wha…. It’s a never-ending problem because not everything can be async, and it’s viral. It’s not a new problem though; it’s always been this way.
JavaScript has an easier time because blocking always meant you’d lock up the browser’s UI thread. That model naturally made it nicely non-blocking on the server side.
Then golang chose a different direction and did goroutines. Not conceptually different, but the big thing it solves is the “virality” problem. Java’s Loom is also headed this direction. It’s easy to say that .NET should follow, but it’s never easy…
One of the fundamental tradeoffs is the performance of interop. .NET is one of the platforms with excellent support for interop with the underlying OS (p/invokes), aka FFI (foreign function interfaces). It has one of the best FFI systems on the market.
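To give a sense of what that FFI looks like, here's a minimal P/Invoke sketch (the libc/getpid target is my illustrative choice, not from the original thread, and is Linux/glibc specific):

using System;
using System.Runtime.InteropServices;

class Native
{
    // Declare a native export; the runtime marshals the call straight into libc.
    // On Windows you'd target a different library/export instead.
    [DllImport("libc.so.6", EntryPoint = "getpid")]
    public static extern int GetPid();

    static void Main() => Console.WriteLine($"pid via FFI: {GetPid()}");
}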
The moment you need to call into the underlying platform, you need to context switch from your current “green” thread to one compatible with what the underlying platform supports. This is one of the big costs, and it’s why golang had to rewrite things in Go and goasm.
The other difficulty .NET has is that it allows pinning memory. Maybe you pinned some object to get its address or pass it to another function. This is problematic when you want to grow the stack dynamically in your user-mode thread implementation.
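A sketch of what pinning looks like (illustrative, not from the thread): once the runtime hands out a raw address, it can no longer move that memory.

using System;
using System.Runtime.InteropServices;

byte[] buffer = new byte[256];

// Pin the array so the GC can't move it, then grab its raw address.
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
try
{
    IntPtr address = handle.AddrOfPinnedObject(); // this address can be handed to native code
    Console.WriteLine($"Pinned at {address}");
}
finally
{
    handle.Free(); // unpin so the GC is free to move the array again
}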
The inability to copy the stack means you need a linked list of stack segments instead. This is a complex and inefficient implementation. Java and Go can both copy the stack because there’s no way to get the underlying address of anything (without really unsafe code).
Interestingly, async state machines in .NET form a linked list. If you squint, the state for a single async frame lives as fields on the async state machine, and continuations point to the “return address”.
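A conceptual sketch (my annotation, not from the thread) of how an ordinary async method maps onto that view:

using System;
using System.IO;
using System.Threading.Tasks;

class Example
{
    // The compiler rewrites this into a state machine: locals become fields
    // (the per-frame state), and the awaiter's continuation points back at the
    // state machine's MoveNext (the "return address").
    static async Task<int> ReadLengthAsync(Stream stream)
    {
        byte[] buffer = new byte[4];             // becomes a field on the state machine
        await stream.ReadAsync(buffer, 0, 4);    // suspension point: continuation registered here
        return BitConverter.ToInt32(buffer, 0);
    }
}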
Execution-wise, most of these systems work in a similar way: there’s a thread pool with a queue of work and work stealing. Java’s Loom uses one of Java’s thread pool implementations, and golang has a scheduler that does similar things.
The biggest difference is in the ergonomics of using it and the “virality”. Sure, the devil is in the details, but don’t let anyone tell you that green threads fundamentally perform better than the alternative; they don’t.
Or maybe if we wait long enough, operating system threads will become super cheap and we can remove these language-runtime-specific thread implementations 🙃
I missed the other big problem with user-mode threads! They reset the tooling ecosystem: all of the tools that can look at OS threads don’t work with your threads.
Watch @pressron's talk on this in the context of Java's Loom. It's good.
High-level ICs (individual contributors) should have a support group. Managing the transition from being “just another engineer” to being a “force multiplier who works through others” is tough. Talking to others who have managed that transition is calming.
Your role suddenly goes from cranking out lots of code to mentoring and growing others, and shaping the team culture. Oftentimes companies train managers but don’t formally prepare ICs for those roles. Learn on the job, become a great people person!
One of the hardest things is measuring your impact. You don’t have anyone reporting to you, and you are no longer being judged solely on your technical abilities. What did you do at the end of the year? It can feel very abstract at times.
As usual, there are a boatload of new APIs coming in .NET 6. Most of these are driven by customer requests. Let’s talk about some of them. #dotnet #aspnetcore
In .NET 6, there's a new low-level API that enables reading and writing files without using a FileStream. It also supports scatter/gather IO (multiple buffers) and overlapping reads and writes at a given file offset.
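A sketch of what that handle-based API looks like, using the .NET 6 File.OpenHandle/RandomAccess names (the file name here is illustrative):

using System;
using System.IO;
using Microsoft.Win32.SafeHandles;

// Open a handle directly; no FileStream involved.
using SafeFileHandle handle = File.OpenHandle("data.bin", FileMode.Open, FileAccess.Read);

byte[] buffer = new byte[4096];

// Read at an explicit offset; there's no shared stream position to coordinate,
// so reads and writes can target different offsets concurrently.
int bytesRead = RandomAccess.Read(handle, buffer, fileOffset: 0);
Console.WriteLine($"Read {bytesRead} bytes");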
There are a couple of new ways to access the process path and process id without allocating a new Process object:
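The code image from the original tweet isn't included here; these are the APIs being described:

using System;

// No Process.GetCurrentProcess() allocation needed.
Console.WriteLine(Environment.ProcessId);    // the current process id
Console.WriteLine(Environment.ProcessPath);  // full path of the running executable (may be null in some hosts)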
Here's an interesting .NET-ism. Async methods capture the execution context on entry and restore it on exit. What does the following print? #dotnet #csharp
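The original tweet's code image isn't included; here's a sketch of the kind of snippet being described (an AsyncLocal set inside a task-returning method that is not marked async):

using System;
using System.Threading;
using System.Threading.Tasks;

var local = new AsyncLocal<int>();

Console.WriteLine($"Before: {local.Value}");
await SetValueAsync();
Console.WriteLine($"After: {local.Value}");

// Not marked async: there's no execution context capture/restore,
// so the AsyncLocal change is visible to the caller.
Task SetValueAsync()
{
    local.Value = 10;
    return Task.CompletedTask;
}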
It prints “Before: 0, After: 10”. The async local value bled out of the method call because it was a synchronous method that directly returned a task.
This, on the other hand, will not let the async local value bleed out of the method.
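Again the original image isn't included, but the async variant would look roughly like this (swapping SetValueAsync in the previous sketch):

// Marked async: the execution context is captured on entry and restored on exit,
// so the AsyncLocal change stays inside the method. Now it prints Before: 0, After: 0.
async Task SetValueAsync()
{
    local.Value = 10;
    await Task.Yield();
}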
I've been playing with .NET's native AOT (ahead-of-time compilation) support (github.com/dotnet/runtime…) to get a better understanding of the implications for .NET libraries and applications that want to take advantage of it. #dotnet
The promise of AOT is that you trade away some compile-time speed and runtime dynamism for a system that can optimize for reduced output size, startup speed, and improved steady-state throughput.
.NET has had lots of different flavors of AOT over the years (NGen, crossgen, ReadyToRun). Those versions of AOT always run with a JIT fallback, so binaries carry both the native compiled code *and* the IL as a fallback that the JIT can use to further optimize.
After spending the last 5 years deep in networking code, I can say one of the most fundamental missing pieces is the ability to know why a connection closed (root-causing the problem).
I wish all the protocols from here on out would also have a "reason for close" field for additional debugging information. The cumulative time that has been lost trying to debug what part of the stack caused the connection to drop (OS, proxy, libraries) probably adds up to years.
This gets even more complicated with the "invisible layers" introduced by virtualization. Cloud networking comes to mind... and don't forget the layers built on top of that in orchestrators like Kubernetes.
How is it different from SignalR, you ask? Internally it's built on the same underlying tech, but the big difference is that there's no client or protocol requirement, BYOWL (bring your own WebSocket library).
The mainline scenarios are also focused on serverless, so we can handle your long-running WebSocket connections and trigger HTTP calls to any backend. It can be Azure Functions or any addressable HTTP endpoint!
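To illustrate the "bring your own WebSocket library" point, any plain client works; a sketch using .NET's built-in ClientWebSocket (the endpoint URL is purely illustrative):

using System;
using System.Net.WebSockets;
using System.Threading;

// Any off-the-shelf WebSocket client can connect; no special SDK or protocol required.
using var ws = new ClientWebSocket();
await ws.ConnectAsync(new Uri("wss://example.service.com/client/hubs/chat"), CancellationToken.None);

var buffer = new byte[1024];
WebSocketReceiveResult result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
Console.WriteLine($"Received {result.Count} bytes, close status: {result.CloseStatus}");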