Here's some code that is on the hot path of your application and you want to optimize it. This is what a typical C# developer would write (actually Copilot wrote this). It's pretty clear, but suboptimal. How would you go about improving it? #dotnet #csharp
There are lots of allocations here:
1. The string[] from splitting the query string into parts by &
2. The string[] from splitting each part into a key/value pair by =
3. The List<string> of new results
4. The final string
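For reference, here's a sketch of the kind of allocation-heavy code being described. The original was posted as an image, so the method name and exact shape are my reconstruction:

```csharp
// Hypothetical reconstruction of the naive approach: every step allocates.
static string RemoveInstanceId(string queryString)
{
    var parts = queryString.Split('&');       // allocation 1: string[] of parts
    var results = new List<string>();         // allocation 3: List<string>
    foreach (var part in parts)
    {
        var pair = part.Split('=');           // allocation 2: string[] per pair
        if (pair[0] != "instanceId")
        {
            results.Add(part);
        }
    }
    return string.Join("&", results);         // allocation 4: the final string
}
```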
One more assumption you can make: the instanceId key will occur at most once in the input query string.
I think most people would start with this code as a single LINQ statement 😊.
There are some trends in the replies. The most efficient solutions really require re-thinking the problem and its constraints. This is why performance work is difficult and not a one-size-fits-all exercise. Do you optimize the current code, or do you change the approach?
Here's the allocation profile for 100,000 calls to this method:
I think most of the answers landed on 2 approaches:
1. Optimize the current code using Spans, stackalloc and various other techniques to reduce the overhead and intermediate allocations.
2. Find the "instanceId=" inside of the query string and remove it.
I ended up with number 2 as well, but I think it's worth discussing number 1 further as an exploration of new APIs, options and tradeoffs.
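A minimal sketch of approach 2 under the stated assumption (the key appears at most once). The exact handling of '&' boundaries is my own, and a real version would also need to guard against matching a longer key that merely ends in "instanceId":

```csharp
static string RemoveInstanceId(string queryString)
{
    // Find the start of the "instanceId=" pair (assumes at most one occurrence).
    int index = queryString.IndexOf("instanceId=", StringComparison.Ordinal);
    if (index < 0) return queryString;

    // Find the end of this key/value pair: the next '&', or end of string.
    int end = queryString.IndexOf('&', index);
    if (end < 0)
    {
        // Last pair: also trim the preceding '&' if there is one.
        return index == 0 ? string.Empty : queryString[..(index - 1)];
    }

    // Remove the pair plus its trailing '&' in a single final allocation.
    return string.Concat(queryString.AsSpan(0, index), queryString.AsSpan(end + 1));
}
```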
So, if we assume the existing pattern:
1. Split the query string into parts.
2. Find the key/value pair with the "instanceId" key.
3. Build a new string without that pair.
Let's look at each problem separately.
Splitting the query string into parts: any good 20+-year-old framework has multiple ways of doing this. 1. 2. https://t.co/0BDOvCcmkI
There's also a new Split method for Spans that works well if you know the maximum number of segments after splitting: https://t.co/RUiGKOlFyJ learn.microsoft.com/en-us/dotnet/a…
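A sketch of that Span-based Split (available in .NET 8+). It writes a Range per segment into a caller-supplied buffer, so nothing is allocated for the segments themselves:

```csharp
ReadOnlySpan<char> query = "a=1&instanceId=5&b=2";

// Destination holds the Range of each segment; no substrings are allocated.
// If there are more segments than ranges, the last Range holds the remainder.
Span<Range> ranges = stackalloc Range[8];
int count = query.Split(ranges, '&');

for (int i = 0; i < count; i++)
{
    // Each pair is a view over the original string, e.g. "instanceId=5".
    ReadOnlySpan<char> pair = query[ranges[i]];
}
```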
When you don't know the number of segments in advance, you can do a pass to figure that out with MemoryExtensions.Count.
It's extremely optimized (vectorization, etc.). https://t.co/rT1FGFBQme learn.microsoft.com/en-us/dotnet/a…
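Combining the two: one Count pass (also .NET 8+) sizes the Range buffer exactly, with a stackalloc fast path for typical inputs. The size threshold here is my own choice:

```csharp
ReadOnlySpan<char> query = "a=1&instanceId=5&b=2";

// N separators means N + 1 segments; Count is vectorized internally.
int segments = query.Count('&') + 1;

// Stack-allocate the Range buffer for small inputs, heap-allocate otherwise.
Span<Range> ranges = segments <= 32 ? stackalloc Range[32] : new Range[segments];
int count = query.Split(ranges, '&');
```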
These are pretty good solutions for allocation-free enumeration. What about storing these objects and then producing a final string? What solutions exist for that?
You could use the trusty StringBuilder!
There's an internal linked list of char[] buffers inside of the StringBuilder. Passing a capacity pre-sizes the initial buffer. Use the new high-performance string interpolation handler to build up the new string. https://t.co/c28hLp47TJ learn.microsoft.com/en-us/dotnet/a…
Notice we don't have any intermediate strings here; the only allocations are:
1. The StringBuilder itself
2. The internal char[] that the StringBuilder uses
3. The final string
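A sketch of that StringBuilder shape. With C# 10+, an interpolated argument to Append binds to StringBuilder.AppendInterpolatedStringHandler and formats directly into the builder's buffer, so no intermediate string is created:

```csharp
using System.Text;

// Pre-sizes the first internal char[] buffer.
var builder = new StringBuilder(capacity: 64);

ReadOnlySpan<char> key = "a";
ReadOnlySpan<char> value = "1";

// The interpolation handler writes straight into the builder's buffer;
// the spans are appended without ever materializing intermediate strings.
builder.Append($"{key}={value}");

string result = builder.ToString(); // the only string allocation
```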
This is pretty good. Can we do better?
Let's use the same technique but manage the internal char[] ourselves:
1. Allocate the Span<char> on the stack if it's small enough.
2. Use MemoryExtensions.TryWrite to write the data to the char buffer using interpolation.
3. Allocate the final string!
https://t.co/32b26R1VGf learn.microsoft.com/en-us/dotnet/a…
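The three steps above can be sketched like this (buffer size is an assumption; a real version would fall back to a heap or pooled buffer for larger inputs):

```csharp
ReadOnlySpan<char> key = "a";
ReadOnlySpan<char> value = "1";

// 1. Stack-allocate the buffer when the input is small enough.
Span<char> buffer = stackalloc char[64];

// 2. TryWrite formats via the interpolation handler straight into the buffer.
if (buffer.TryWrite($"{key}={value}", out int written))
{
    // 3. The final string is the only heap allocation.
    string result = new string(buffer[..written]);
}
```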
@_neonsunset I'm pretty confident we'll get there capability-wise, it'll just take a couple of releases. Then we need to get the world to catch up.
Here's what it looks like when you put it all together:
1. Allocation-free enumeration of the key/value pairs of a query string (with decoding support!)
2. Minimal-allocation string building using stack allocation and the new string interpolation handlers to build up a new string!
Ah there's a bug! I need to slice the final string (maybe I shouldn't put code in images...)
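Since the original was an image, here's a hedged reconstruction of the combined approach with the slicing fix applied (the method name, buffer sizes, and exact shape are my own; it also skips the decoding support mentioned above):

```csharp
static string RemoveInstanceId(string queryString)
{
    ReadOnlySpan<char> query = queryString;

    // One Count pass sizes the Range buffer; Split fills it with segment views.
    int segments = query.Count('&') + 1;
    Span<Range> ranges = segments <= 32 ? stackalloc Range[32] : new Range[segments];
    int count = query.Split(ranges, '&');

    // Stack-allocate the output buffer for typical query-string sizes.
    Span<char> buffer = queryString.Length <= 256
        ? stackalloc char[256]
        : new char[queryString.Length];

    int written = 0;
    for (int i = 0; i < count; i++)
    {
        ReadOnlySpan<char> pair = query[ranges[i]];
        if (pair.StartsWith("instanceId=")) continue; // drop the matching pair

        if (written > 0) buffer[written++] = '&';
        pair.CopyTo(buffer[written..]);
        written += pair.Length;
    }

    // The bug from the image: slice to what was written, not the whole buffer.
    return new string(buffer[..written]);
}
```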
@BrunoLM7 It's why you have to prompt tools like GPT: there are decisions that need to be made by developers.
Discrete events masquerading as a workflow should be expressed as such. Consider the following event-based model: #dotnet
The game has 3 events:
- GameStarted
- GameEnded
- OnQuestion
The order of execution should be obvious from the naming...
The application doesn't control the event loop; the event loop will trigger the events at the appropriate time. Storing state across events means understanding the order in which they fire, the thread safety of those events, and more (do they fire concurrently? can you block?).
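One way to read "express it as a workflow": replace the three callbacks with a single sequential async method, so the ordering and the state live in one place. This is a sketch; the IGame API and its members are my assumption:

```csharp
// Instead of wiring GameStarted / OnQuestion / GameEnded handlers and
// smuggling state between them in fields, the order and the state are
// explicit in a single method.
static async Task RunGameAsync(IGame game, CancellationToken cancellationToken)
{
    await game.StartAsync(cancellationToken);        // was: GameStarted
    var score = 0;                                   // local state, no fields needed

    await foreach (var question in game.GetQuestionsAsync(cancellationToken))
    {
        var answer = await question.ReadAnswerAsync(); // was: OnQuestion
        if (question.IsCorrect(answer)) score++;
    }

    await game.EndAsync(score, cancellationToken);   // was: GameEnded
}
```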
Currently designing how this trivia game will work on multiple servers. I have 3 architectures in mind (Twitter can help me pick one, but I have a preferred one). Both clients are part of the same game. Games are ephemeral and last a maximum of 2 minutes.
Architecture 1 - Using Redis as the game state storage and SignalR backplane.
Architecture 2 - Use Orleans grains as the SignalR backplane and state storage for a game.
It's Friday so time for spicy opinions. Every single sample ASP.NET Core project I see uses CQRS and the mediator pattern or CLEAN architecture. I'm over the over engineering 🙃. #dotnet
I lean very heavily towards YAGNI and readable code with minimal abstractions until needed. I appreciate that every situation is different, but like anything it's hard to appreciate the benefits when the "scale" problem isn't evident.
Blindly applying patterns bugs me A LOT. I don't subscribe to the dogma, and I don't write code that way...
Another cool Tye feature is networking support between containers and processes on the host. This nginx configuration refers to the process running on the machine by the host name "sample", but that application isn't running in a container.
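The configuration in question was posted as an image; a sketch of the shape being described (the upstream port and location are my assumptions):

```nginx
# Inside the nginx container, the host name "sample" resolves to the
# process running on the host machine; Tye wires up that name resolution.
location / {
    proxy_pass http://sample:5000;   # hypothetical port for the host process
}
```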
It manages all of the tricky ways to communicate between containers and the local machine (host.docker.internal, etc.). This lets you run and debug your processes normally and run what needs to run as containers in your inner loop.
It creates an environment that looks like this behind the scenes.
Structured logs via stdout are *NOT* a good idea. Standard out is a single stream, and you should be using it for human-readable logs. Push structured logs via another output stream in your application.
Modern logging providers can push logs out to various sinks in the "right" target format. Loggers are just serializers at the end of the day. I can produce a log with the text "Hello World" that gets packed and shipped differently depending on the consumer.
JSON on the console seems convenient and "standard", but it's:
- Inefficient (there are more efficient ways to encode structure)
- Hard to read when tailing logs with any log-tailing tool/command