The first thing you'll always do when starting a new #bpftrace script is select a probe. In this thread we will discuss how to select the probes that will give you the information you desire. 1/
2/ This is arguably the most difficult step. First, we need to know the landmarks. Use the very basic command "bpftrace -l" to list all the probes. On my system, that produces 40032 lines
3/ Each line is a colon-separated tuple (some with 2 elements, some with 3) that can be thought of as a pathname to an event that fires to inform you something of interest has occurred
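
A couple of representative entries (both ship with a stock Linux kernel; shown here just to illustrate the 2- vs 3-element shapes):

kprobe:tcp_connect
tracepoint:syscalls:sys_enter_write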
4/ The landmarks to find here are the top-level probe elements. Let's count them to see how bpftrace is organized:

$ sudo bpftrace -l | sed -e 's/:.*//' | sort | uniq -c
10 hardware
38527 kprobe
11 software
1484 tracepoint
5/ The two big landmarks here are kprobes and tracepoints. Nearly every event you will be interested in will fall into one of these two categories.
6/ Finding the landmarks is important because, unlike DTrace, where you can blindly hook onto, say, all the probes, you can't do that with bpftrace. You're limited to 512 probes you can attach to in one go
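
To see why the limit bites: a bare kprobe glob matches every one of those 38527 kprobes, so it has to be narrowed before attaching

$ sudo bpftrace -l 'kprobe:*' | wc -l
38527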
7/ Next, before we do an exploratory trace, we need a command that does what we want to observe. For our immediate needs (writing network observation tools), let's go with:

curl -sLo- google.com

Makes a TCP/80 request for the index page and dumps it to stdout
8/ Now we need to come up with a probe glob that selects a wide swath of what we are interested in but not more than 512 probes. I came up with:

$ sudo bpftrace -l 'kprobe:*tcp*' | wc -l
381
9/ Now that we have our program to run and a set of probes we are interested in, we can use the following syntax to do an exploratory trace:

bpftrace -c "command to run" -e "bpftrace code"

bpftrace exits when your command completes
10/ Next, it is important to know two bpftrace code landmarks. Just like awk, it supports BEGIN {} and END {}

I find it helps to throw a silly printf() into each of these to help separate the output

Here's the whole recipe with output
pastebin.com/WJLccCU5
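
Roughly, the recipe boils down to something like this (the printf strings here are just markers, use whatever you like; the paste has the exact version and its output):

sudo bpftrace -c 'curl -sLo- google.com' -e '
BEGIN { printf("=== start ===\n"); }
kprobe:*tcp* { printf("%s[%d]: %s\n", comm, pid, probe); }
END { printf("=== end ===\n"); }
'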
From this output we can select the few probes that we are interested in. The ones that stick out like a sore thumb are listed below; after the list I'll sketch a follow-up trace that attaches to just those:

18 curl[4405]: kprobe:tcp_connect
39 …tcp_sendmsg
68 …tcp_recvmsg
110 …tcp_sendmsg
141 …tcp_recvmsg
150 …tcp_recvmsg
162 …tcp_recvmsg
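
Armed with those, a follow-up trace that attaches only to the interesting probes might look something like this (just a sketch; count() per probe is one simple payload, and the map prints automatically on exit):

sudo bpftrace -c 'curl -sLo- google.com' -e '
kprobe:tcp_connect,
kprobe:tcp_sendmsg,
kprobe:tcp_recvmsg
{ @[probe] = count(); }
'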

11/ End thread.
