bia
Apr 21, 2023 · 31 tweets
Just arrived at #PyCon and I will be live tweeting day 1 of talks here - also posting some stories on PythOnRio's Instagram: instagram.com/pythonrio (in Portuguese over there)!

Follow the 🧵 if you're interested!
I missed the opening keynote because I was just arriving, but I am now waiting for the first talk that caught my attention. Super interested in OpenTelemetry but I never actually got my hands dirty and did something with it. #PyConUS2023
This is the talk: us.pycon.org/2023/schedule/…, called How to Monitor and Troubleshoot Applications in Production using OpenTelemetry.
Ron started with the three pillars of observability: metrics, logs, and traces. Metrics help us answer the "what?" questions; logs help us answer the "why?" questions; traces help us answer the "where?" questions.
He smartly explained that a trace is a family, where a span is a person in the family - the easiest way I've ever seen this explained.
You could technically instrument your code with <5 lines of code, but lately there are lots of no-code options: command line, Kubernetes, serverless functions, etc. (see opentelemetry.io/docs/instrumen…).
I'm particularly interested in the Kubernetes solution, which auto-instruments .NET, Java, NodeJS and Python: github.com/open-telemetry….
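To make the family analogy concrete, here's a toy sketch I put together - this is NOT the real OpenTelemetry API (which you'd get from the `opentelemetry-sdk` package), just a minimal stdlib illustration of the idea that a trace is the "family" (a shared trace ID) and each span is a "person" in it, with parent/child relationships and timing:

```python
import time
import uuid
from contextlib import contextmanager

# Toy illustration of the trace/span idea -- NOT the real OpenTelemetry API.
# A trace is the "family" (shared trace_id); each span is a "person" in it.

class ToySpan:
    def __init__(self, name, trace_id, parent=None):
        self.name = name
        self.trace_id = trace_id   # shared by every span in the same trace
        self.parent = parent       # the "family" relationship between spans
        self.start = self.end = None

@contextmanager
def span(name, trace_id=None, parent=None):
    # Starting a span with no trace_id starts a new trace (a new "family")
    s = ToySpan(name, trace_id or uuid.uuid4().hex, parent)
    s.start = time.monotonic()
    try:
        yield s
    finally:
        s.end = time.monotonic()
        print(f"span={s.name} trace={s.trace_id[:8]} "
              f"parent={s.parent.name if s.parent else None} "
              f"took={s.end - s.start:.4f}s")

# One trace ("family") containing a parent span and a child span:
with span("handle_request") as parent:
    with span("query_db", trace_id=parent.trace_id, parent=parent) as child:
        time.sleep(0.01)  # stand-in for real work
```

A real instrumentation would hand these spans to an exporter instead of printing them, but the shape of the data - one trace ID, many related spans - is the same.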
Alrighty, off to stretch my legs and then check out the talk: Inside CPython 3.11's new specializing, adaptive interpreter (us.pycon.org/2023/schedule/…).
Okay, this is a pretty heavily technical talk! Brandt basically walked us through the bytecode improvements and adaptive instructions in Python 3.11, explaining how 3.11 is faster than 3.10 and how they intend to implement further improvements in 3.12.
3.11 is basically much faster than Python 3.10 due to a process called quickening, and Python 3.12 implements an even faster quickening process, thanks to the usage of adaptive instructions.
On 3.11, with adaptive instructions, the Python bytecode basically adapts according to the calls in your code: if nothing has changed in your code, classes, etc., it can save some time by retrieving cached data instead of doing slower lookups in hash tables.
This basically requires your function code to warm up and be called a couple of times before you get the benefit of an adaptive instruction.
But on Python 3.12, every instruction is adaptive by default and calls specialized operations whenever possible, so we don't need to wait for a particular function to warm up - the bytecode instructions themselves are the ones that need to warm up.
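You can actually peek at this yourself with the stdlib `dis` module - this wasn't shown in the talk, it's just a small experiment sketch of mine. On Python 3.11+, `dis.dis(..., adaptive=True)` shows the quickened (specialized) instructions a function ends up with after it warms up:

```python
import dis
import io
import sys
from contextlib import redirect_stdout

def add(x, y):
    return x + y

# Warm the function up so CPython 3.11+'s specializing interpreter
# gets a chance to replace generic instructions with specialized ones.
for _ in range(1000):
    add(1, 2)

# The adaptive=True parameter only exists from Python 3.11 onwards,
# so guard it to keep the snippet runnable on older versions too.
kwargs = {"adaptive": True} if sys.version_info >= (3, 11) else {}
buf = io.StringIO()
with redirect_stdout(buf):
    dis.dis(add, **kwargs)
print(buf.getvalue())
```

On 3.11+, after the warm-up loop you should see specialized variants in the disassembly instead of the generic bytecode a cold function would show.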
Lil break before heading to instrumentation nightmares - us.pycon.org/2023/schedule/…!
Alrighty, I'm hungry and can't transcribe anything until after lunch, but this is a pretty cool talk given by folks who work on New Relic's instrumentation team, so if you like these things def go watch the recording!
Landed at the charlas (Spanish track of talks), where there's a talk going on about Hatch and Python packaging - if anyone is looking for a tool to package their Python projects, looks lit.
I’ve heard great recommendations about the talk @julianaklulo gave about creating interactive games using MicroPython, so make sure to watch that recording later if you missed it!
And I am now @ the Grand Ballroom waiting for the lightning talks!
First LT is about saving lives with Python and how AWS Lambdas are a good low/no-cost solution for folks developing software for pet rescue orgs.
Check out Dallas Pets Alive btw, awesome work!
Then a talk that brings up some major updates to tox, which consisted of a full rewrite of the tool - latest stable version is 4.4.4 apparently :) Been a while since I last used tox and tbh this makes me wanna try it again.
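For anyone who, like me, hasn't touched tox in a while: the config format survived the rewrite, and a minimal `tox.ini` still looks roughly like this (env names and pytest are my assumptions, not from the talk):

```ini
[tox]
envlist = py310, py311

[testenv]
; each env gets its own virtualenv with these deps installed
deps = pytest
commands = pytest
```

Running `tox` then builds an isolated environment per Python version in `envlist` and runs the test command in each.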
And then a demo of @SourceryAI and damn, the tool they are building to help us read code is really impressive!
Demo of conda-store, which is a tool to create data science envs for collab: conda.store/en/latest 👏👏
Then an interesting lightning talk about using GPT to generate a data dictionary, and then the important warning not to push private company data into an LLM 🤣
The next LT is from @psobot, who asks us not to reach for multiprocessing before trying other things, explains (so quickly!) how multiprocessing works and the cost of sharing data from one process to another, and shows threads as a solution to try before using multiprocessing!
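A quick sketch of that advice (mine, not from the LT): for I/O-bound work, threads share memory directly - there's no pickling or inter-process copying cost - and the GIL is released while waiting, so a thread pool is often enough:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# I/O-bound task: threads work well here because the GIL is released
# while sleeping/waiting, and results don't need to be pickled and
# shipped between processes as they would with multiprocessing.
def fetch(i):
    time.sleep(0.05)   # stand-in for a network call
    return i * 2

start = time.monotonic()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, range(10)))
elapsed = time.monotonic() - start

print(results)
print(f"{elapsed:.2f}s")  # the ten 0.05s waits overlap across threads
</antml>```

For CPU-bound work threads won't help (the GIL serializes them), which is the case where multiprocessing actually earns its overhead.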
Next LT: Physic Fighters, which is a game that was developed using Python!
Ugh, Psychic actually, damn autocorrect!
Erin is talking about how to (not be) an OSS jerk and damn, the sarcasm is strong in this one 🤣🤣🤣🤣
@erinmikail you’re the bomb :)
My energy to live tweet the LTs is gone, there are too many of them!

See y’all tomorrow hopefully in a new thread (this is actually really fun and helps me take lil notes from the things I see) 👋👋👋
