Just arrived at #PyCon and I will be live tweeting day 1 of talks here - also posting some stories on PythonRio's Instagram: instagram.com/pythonrio (in Portuguese over there)!
follow the 🧵 if you're interested!
I missed the opening keynote because I was still arriving, but I am now waiting for the first talk that caught my attention. Super interested in OpenTelemetry but I've never actually gotten my hands dirty with it. #PyConUS2023
This is the talk: us.pycon.org/2023/schedule/…, called How to Monitor and Troubleshoot Applications in Production using OpenTelemetry.
Ron started with the three pillars of observability: metrics, logs and traces; metrics help us answer the "what?" questions, logs help us answer the "why?" questions, and traces help us answer the "where?" questions.
He smartly explained that a trace is a family, where a span is a person in the family - easiest way to explain this I've ever seen.
You could technically instrument your code with <5 lines of code, but lately there are lots of no-code options: command line, Kubernetes, serverless functions, etc. (see opentelemetry.io/docs/instrumen…).
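If you're curious what those few lines look like, here's a minimal sketch of my own (not from the talk), assuming the opentelemetry-sdk package is installed and using a console exporter just to see the output - the nested spans also illustrate the family/person analogy:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer provider that just prints finished spans to the console
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("pycon.demo")  # hypothetical tracer name

# The outer span is the "family" (trace); the inner span is one "person" in it
with tracer.start_as_current_span("handle-request"):
    with tracer.start_as_current_span("query-database"):
        print("doing some work inside nested spans")
```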
I'm particularly interested in the Kubernetes solution, which auto-instruments .NET, Java, Node.js and Python: github.com/open-telemetry….
Alrighty, off to stretch my legs and then check out the talk: Inside CPython 3.11's new specializing, adaptive interpreter (us.pycon.org/2023/schedule/…).
Okay, this is a pretty heavily technical talk! Brandt basically walked us through the bytecode improvements and adaptive instructions in Python 3.11, explaining why 3.11 is faster than 3.10 and how they intend to implement further improvements in 3.12.
3.11 is basically much faster than Python 3.10 due to a process called quickening, and Python 3.12 implements an even faster quickening process thanks to adaptive instructions.
On 3.11, with adaptive instructions, the Python bytecode adapts according to the calls in your code, and if nothing has changed in your code, classes, etc., it can save some time by retrieving cached data instead of doing slower lookups in hash tables and so on.
This basically requires your function to warm up and be called a couple of times before you get the benefit of an adaptive instruction.
But on Python 3.12, every instruction is adaptive by default and calls specialized operations whenever possible, so we don't need to wait for a particular function to warm up - the bytecode instructions themselves are the ones that warm up.
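If you want to watch the specialization happen yourself, here's a quick sketch of my own (assuming CPython 3.11+): warm a function up, then disassemble it with adaptive=True - the exact specialized names may differ between versions:

```python
import dis

def add(a, b):
    return a + b

# Call it enough times for the adaptive interpreter to specialize ("warm up")
for _ in range(1000):
    add(1, 2)

# adaptive=True (new in 3.11) shows the specialized form of each instruction,
# e.g. BINARY_OP may now show up as BINARY_OP_ADD_INT
dis.dis(add, adaptive=True)
```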
Alrighty, I'm hungry and can't transcribe anything until after lunch, but this is a pretty cool talk given by folks who work on New Relic's instrumentation team, so if you like these things def go watch the recording!
Landed at the charlas (the Spanish track of talks), where there's a talk going on about Hatch and Python packaging - if anyone is looking for a tool to package their Python projects, it looks lit.
I’ve heard great recommendations about the talk @julianaklulo gave about creating interactive games using MicroPython, so make sure to watch that recording later if you missed it!
And I am now @ the Grand Ballroom waiting for the lightning talks!
First LT is about saving lives with Python and how AWS Lambdas are a good low/no-cost solution for folks developing software for pet rescue orgs.
Check out Dallas Pets Alive btw, awesome work!
Then a talk that brings up some major updates to tox, which consisted of a full rewrite of the tool - the last stable version is 4.4.4 apparently :) Been a while since I last used tox and tbh this makes me wanna try it again.
And then a demo of @SourceryAI and damn, the tool they are building to help us read code is really impressive!
Demo of conda-store, which is a tool to create data science envs for collaboration: conda.store/en/latest 👏👏
Then an interesting lightning talk about using GPT to generate a data dictionary, and then the important warning not to push private company data into an LLM 🤣
The next LT is from @psobot and asks us not to reach for multiprocessing before trying other things: it explains (so quickly!) how multiprocessing works and the cost of sharing data from one process to another, and shows threads as a solution to try before using multiprocessing!
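To make that concrete, here's a small sketch of my own (the URL list and fetch() helper are hypothetical stand-ins for any I/O-bound work) showing the thread-pool route to try first:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URLS = ["https://example.com"] * 10  # stand-in for any I/O-bound workload

def fetch(url):
    # I/O-bound: the GIL is released while waiting on the network,
    # so threads can overlap this work without spawning extra processes
    with urlopen(url) as resp:
        return len(resp.read())

with ThreadPoolExecutor(max_workers=10) as pool:
    sizes = list(pool.map(fetch, URLS))

print(sizes)
# A ProcessPoolExecutor would also work, but every argument and result would
# have to be pickled and copied between processes - the cost the talk warned
# about - which mostly pays off only for CPU-bound work.
```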
Next LT: Physic Fighters, which is a game that was developed using Python!
Ugh psychic actually, damn auto correction!
Erin is talking about how to (not be) an OSS jerk and damn, the sarcasm is strong on this one 🤣🤣🤣🤣
Volunteering at the PyLadies booth at #PyCon2023 this morning from 8-10am, come chat, get stickers, learn more about PyLadies and/or buy a shirt to support the amazing work we do at @pyladies! 🥰🥰 #PyConUS2023
Alrighty, took a walk around the sponsor booths and now off to see @pydebb at the Charlas track!
Debora is talking about ways to contribute to the Python community, for example attending sprints, being a moderator or contributor in Discord, Telegram and Slack channels, helping translate resources, etc.