⏱️ Just ten more days until the release of @java 17, the next version with long-term support! To shorten the waiting time a bit, I'll do one tweet per day on a cool feature added since 11 (the previous LTS), introducing just some of the changes that make the upgrade worthwhile. Let's go 🚀!
🔟 Ambiguous null pointer exceptions were a true annoyance in the past. Not a problem any longer since Java 14: Helpful NPEs (JEP 358, openjdk.java.net/jeps/358) now show exactly which variable is null. A very nice improvement to #OpenJDK, previously available only in SAP's JVM.
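A quick sketch of what this looks like in practice (class and field names are made up; the message is roughly what current JDKs print):

```java
public class HelpfulNpeDemo {
    record Address(String city) {}
    record Customer(Address address) {}

    public static void main(String[] args) {
        var customer = new Customer(null);
        // Throws roughly: java.lang.NullPointerException: Cannot invoke
        // "...Address.city()" because the return value of "...Customer.address()" is null
        System.out.println(customer.address().city());
    }
}
```

In Java 14 the detailed message sits behind -XX:+ShowCodeDetailsInExceptionMessages; it's on by default since 15.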
9⃣ Varying load, with new app instances that must start up quickly? Check out class-data sharing (CDS), whose developer experience improved a lot with JEP 350 (Dynamic CDS Archives, Java 13); also, way more classes are archivable since Java 15. More details here: morling.dev/blog/smaller-f…
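A hedged sketch of the dynamic CDS workflow from JEP 350 (jar and archive names are placeholders):

```sh
# 1. Trial run: record the loaded classes into an archive when the app exits
java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar

# 2. Subsequent runs: map the archive for faster startup
java -XX:SharedArchiveFile=app.jsa -jar app.jar
```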
8⃣ Adding JSON snippets to your Java code, e.g. for tests? Or multi-line SQL queries? Much easier now thanks to text blocks, without any escaping or concatenation. After two preview cycles, text blocks were added as a stable language feature in Java 15 (openjdk.java.net/jeps/378).
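For illustration, a small sketch of both use cases mentioned above:

```java
// a JSON snippet, e.g. as expected payload in a test
String json = """
        {
            "name": "Java",
            "version": 17
        }
        """;

// a multi-line SQL query, readable as-is
String query = """
        SELECT id, name
        FROM customer
        WHERE name LIKE 'A%'
        """;
```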
7⃣ Flight Recorder has changed the game for JVM performance analysis. New since Java 14: JFR event streaming. Either in-process (JEP 349), or out-of-process since Java 16. "health-report", a nice demo of the latter, introduced in this post by @ErikGahlin: egahlin.github.io/2021/05/17/rem…
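A minimal in-process streaming sketch (JEP 349), assuming you just want to print the periodic jdk.CPULoad event:

```java
import java.time.Duration;
import jdk.jfr.consumer.RecordingStream;

public class JfrStreamingDemo {
    public static void main(String[] args) {
        try (var rs = new RecordingStream()) {
            rs.enable("jdk.CPULoad").withPeriod(Duration.ofSeconds(1));
            rs.onEvent("jdk.CPULoad",
                    event -> System.out.println(event.getFloat("machineTotal")));
            rs.start(); // blocks; use startAsync() for background streaming
        }
    }
}
```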
6⃣ Occasionally, you need to take specific actions depending on the type of a given object -- just one use case for pattern matching. Added in Java 16 via JEP 394, with more kinds of patterns to be supported in the future. Details in this post by @nipafx: nipafx.dev/java-pattern-m….
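A tiny example of the type pattern added by JEP 394 -- no explicit cast, and the binding variable is only in scope where the test succeeded:

```java
Object obj = "hello";
if (obj instanceof String s && s.length() > 3) {
    System.out.println(s.toUpperCase()); // no (String) cast needed
}
```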
5⃣ Running application and database on the same host? Looking for efficient IPC between the processes of a compartmentalized desktop app? Then check out Unix-Domain Socket Channels (JEP 380), added in Java 16. Several use cases are discussed in this post: morling.dev/blog/talking-t…
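A hedged sketch of client and server talking via a socket file (the path is an arbitrary example):

```java
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class UdsDemo {
    public static void main(String[] args) throws Exception {
        var path = Path.of("/tmp/app.sock");
        Files.deleteIfExists(path); // bind() fails if the socket file already exists
        var address = UnixDomainSocketAddress.of(path);

        try (var server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(address);
            try (var client = SocketChannel.open(address);
                 var peer = server.accept()) {
                client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
                var buffer = ByteBuffer.allocate(16);
                peer.read(buffer);
                System.out.println(new String(buffer.array(), 0, buffer.position(), StandardCharsets.UTF_8));
            }
        }
    }
}
```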
4⃣ Excited about pattern matching (6⃣)? Then you'll love switch expressions (JEP 361, added in @java 14), and pattern matching for them (brand-new as a preview in 17). Super-useful together with sealed classes (finalized in 17). Note in the sketch below how a non-exhaustive switch fails compilation.
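A sketch of the three features playing together (compile with --enable-preview on 17; type names are made up):

```java
sealed interface Shape permits Circle, Rectangle {}
record Circle(double radius) implements Shape {}
record Rectangle(double width, double height) implements Shape {}

class Areas {
    static double area(Shape shape) {
        return switch (shape) { // exhaustive: no default branch required
            case Circle c    -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.width() * r.height();
            // removing one of the cases above makes the switch
            // non-exhaustive and the compilation fails
        };
    }
}
```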
3⃣ Vectorization via #SIMD (single instruction, multiple data) can significantly speed up certain computations. Now supported in @java (JEP 414, incubating), fully transparent and portable across x64 and AArch64. Even FizzBuzz runs faster than ever 😜!
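A minimal sketch, assuming an element-wise multiplication of two float arrays (run with --add-modules jdk.incubator.vector):

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorDemo {
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static void multiply(float[] a, float[] b, float[] c) {
        int i = 0;
        for (; i < SPECIES.loopBound(a.length); i += SPECIES.length()) {
            var va = FloatVector.fromArray(SPECIES, a, i);
            var vb = FloatVector.fromArray(SPECIES, b, i);
            va.mul(vb).intoArray(c, i); // one SIMD multiply per lane group
        }
        for (; i < a.length; i++) {
            c[i] = a[i] * b[i]; // scalar tail for the leftover elements
        }
    }
}
```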
2⃣ Elastic Metaspace (JEP 387), the ZGC and Shenandoah collectors ready for production (JEPs 377/379), G1 NUMA support (345), G1 quickly uncommitting unused memory (346) -- tons of improvements related to GC and memory management since @java 11!
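For reference, enabling the two newly production-ready collectors is a single flag each (app name is a placeholder):

```sh
java -XX:+UseZGC -jar app.jar          # ZGC, JEP 377
java -XX:+UseShenandoahGC -jar app.jar # Shenandoah, JEP 379
```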
1⃣ Records, oh records! Long-awaited and having gone through two previews, @java language support for nominal tuples was finalized in version 16 (JEP 395). Great for immutable data carriers like DTOs. A nice discussion of record semantics by @nipafx: nipafx.dev/java-record-se….
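A small sketch of a record as an immutable data carrier -- constructor, accessors, equals(), hashCode(), and toString() all come for free:

```java
public class RecordDemo {
    record CustomerDto(long id, String name) {}

    public static void main(String[] args) {
        var dto = new CustomerDto(42, "Bob");
        System.out.println(dto.name()); // Bob
        System.out.println(dto);        // CustomerDto[id=42, name=Bob]
    }
}
```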
#Postgres as an event store -- Thanks a lot for all the super-insightful answers 🙏! It looks like using a jsonb[] for modeling an event stream isn't ideal performance-wise, but several great pointers to using #Postgres for event sourcing here. Mentioned solutions include... 1/4
A short 🧵 on @apachekafka topic creation (triggered by @niko_nava, thanks!): who should create Kafka topics, how to make sure they have the right settings, how to avoid dependencies between producer and consumer(s)? Here's my take:
2⃣ Don't use broker-side topic auto-creation! You'll lack fine-grained control over different settings for different topics; merely polling, or requesting metadata, will trigger creation based on global settings. Plus, some cloud services don't expose auto-creation to begin with.
3⃣ Instead, the producer side should be in charge of creating topics: that's where the knowledge about the required settings (replication factor, no. of partitions, retention policy, etc.) for each topic resides. Depending on your requirements, different approaches...
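One such approach, as a hedged sketch using Kafka's Admin API (broker address, topic name, and settings are illustrative only):

```java
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class TopicCreator {
    public static void main(String[] args) throws Exception {
        try (Admin admin = Admin.create(Map.<String, Object>of(
                AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"))) {
            // name, partition count, replication factor, plus per-topic settings
            var topic = new NewTopic("orders", 6, (short) 3)
                    .configs(Map.of(TopicConfig.RETENTION_MS_CONFIG, "604800000")); // 7 days
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```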
Agreed, the term is sub-par. But hear me out, the architecture is not. Let's talk about a few common misconceptions about Serverless!
1⃣ "Serverless means no servers"
There *are* servers involved, but it's not on you to run and operate them. Instead, the serverless provider manages the platform, scaling things up (and down) as needed. Fewer things to take care of, billed per use.
Myth: BUSTED!
2⃣ "Serverless is cheaper"
Pay-per-use makes low/medium-volume workloads really cheap. But pricing is complex: no. of requests, assigned RAM/CPU, API gateways, traffic, etc. Depending on your workload (e.g. high, sustained load), other options like VMs can be the better deal.
Thanks for all the votes and insightful answers to the poll on usage of @java's var! Not unexpectedly, replies range from "using var all the time" to "don't see the point of it". Yet one third never using var at all was a surprise to me. Some recurring themes from the replies in this 🧵.
1⃣ Readability vs. writability: some argued var optimizes for writing code (fewer characters to type) at the cost of reading code (less explicit type info). I don't think that's the intention behind var. In fact, more (redundant, repetitive) code may read worse.
2⃣ var only or primarily used in test code: described by several folks as a good starting point for getting their feet wet with local variable type inference, before using it more widely.
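For illustration, the trade-off in two lines (type names are arbitrary):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class VarDemo {
    void example() {
        // explicit type: information repeated on both sides...
        Map<String, List<String>> ordersByCustomer = new HashMap<>();
        // ...vs. inferred: still statically typed, just less to read and type
        var sameThing = new HashMap<String, List<String>>();
    }
}
```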
Single Message Transforms (SMTs) are an invaluable feature of @ApacheKafka Connect, enabling tons of use cases with a small bit of coding, or even just configuration of existing SMTs ready to use. Here are some applications in the context of change data capture: (1/7)
* Converting data types and formats: date/time formats are the most common example here, e.g. converting millisecond timestamps into strings adhering to a specific date format (2/7)
* Creating an "anti-corruption layer", shielding consumers from legacy schemas or ensuring compatibility after schema changes; e.g. an SMT could choose more meaningful field names, or re-add a field under its old name after a column rename, easing consumer migration (3/7)
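As an illustration of the configuration-only case, a hedged connector config using the built-in ReplaceField SMT to rename a legacy column for consumers (field names made up):

```properties
transforms=rename
transforms.rename.type=org.apache.kafka.connect.transforms.ReplaceField$Value
transforms.rename.renames=cust_nm:customerName
```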
Some folks wonder whether @ApacheKafka is "worth it at their scale". But focusing solely on message count and throughput means missing out on many other interesting characteristics of Kafka. Here are just three which make it useful for all kinds of deployments (1/5):
* Fault tolerance and high availability: topics can be replicated, consumers can fail over -- machines will fail, programs will crash, and being able to mitigate this is always of value, no matter the scale of an application (2/5)
* Messages can be retained for a potentially indefinite time, and consumers are in full control of where they read a topic from -- comes in handy to re-process some messages or entire topics, e.g. after failures, or for bringing in new consumers of existing messages (3/5)
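A hedged sketch of such a re-processing scenario: a consumer rewinding to the beginning of a partition instead of resuming from committed offsets (broker address and topic name are illustrative):

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReplayDemo {
    public static void main(String[] args) {
        Map<String, Object> config = Map.of(
                "bootstrap.servers", "localhost:9092",
                "group.id", "replay-demo",
                "key.deserializer", StringDeserializer.class.getName(),
                "value.deserializer", StringDeserializer.class.getName());

        try (var consumer = new KafkaConsumer<String, String>(config)) {
            var partition = new TopicPartition("orders", 0);
            consumer.assign(List.of(partition));
            consumer.seekToBeginning(List.of(partition)); // rewind to offset 0
            consumer.poll(Duration.ofSeconds(5))
                    .forEach(r -> System.out.println(r.offset() + ": " + r.value()));
        }
    }
}
```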