Thanks to @david_picard, I can now formulate why I am in favor of social media.
It is an instrument where many things depend on you and on how well you have done your job.
Writing a good paper is another such thing, but there you also depend on the (random) choice of reviewers.
1/
arXiv is better than a conference in the sense that everyone can see and judge. But it is also quite random: your target audience may not check the feed today. However, it is googlable, so it is not a complete failure.
2/
Now, if we add social media, we can almost make sure (in the long term at least) that the target audience will see the paper.
Maybe they will not like it, but they will see it.
3/3
I also admit that this thread comes from a somewhat privileged position: 1) I assume that you are not gravely dependent on paper acceptance, but rather want to deliver the paper's content. 2) It also implies that one has the resources, mainly time, to build up a social media account.
The situation is as follows, in a thread: 1) A relatively small group gets the idea of AE + ViT = SiT, validates it on CIFAR and STL, and puts it on arXiv in April. They also submit it to PAMI.
1/
3) Masked AE + ViT from Meta AI comes out in November, gets nice ImageNet results, and gets all the attention. It does not cite SiT.
So arXiv does not protect SiT from being scooped.
Oh, and PAMI rejects SiT.
2/
However, arXiv + social media allow us to at least have this conversation and maybe fix the credit assignment.
Without arXiv as proof of work and social media to point out the situation, it would be a complete scooping.
3/
Let me introduce pixelstitch: a simple correspondence annotator based on @matplotlib + Jupyter notebook.
Just provide a list of image_fnames and
import CorrespondenceAnnotator.
You can add and erase points, zoom, move, and visualize the epipolar geometry induced by the correspondences. 1/
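A minimal notebook sketch of that workflow; the import path, constructor arguments, and filenames below are my assumptions, so check the documentation for the exact API:

# Hypothetical usage sketch -- the real import path and arguments may differ.
from pixelstitch.core import CorrespondenceAnnotator

# List of image pairs to annotate (placeholder filenames).
pairs = [
    ('scene1_left.jpg', 'scene1_right.jpg'),
    ('scene2_left.jpg', 'scene2_right.jpg'),
]

# Create the annotator and start clicking correspondences in the notebook.
CA = CorrespondenceAnnotator(pairs, 'annotations_dir')
CA.start()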
You can install it by
pip install pixelstitch
and the documentation is here: ducha-aiki.github.io/pixelstitch/
Ofc, powered by nbdev.
All suggestions are welcome.
2/3
Why did I do this? Well, the built-in labelling tools in SfM apps like RealityCapture are nice, but one cannot quickly go through unrelated image pairs.
New artisan datasets for WxBS from me are coming soon,
specifically WxBS-relabeled and EVD 2.0.
3/3
For example, you can detect ORB features in OpenCV and convert them to kornia LAFs with
lafs = laf_from_opencv_ORB_kpts(kps)
Or you can match descriptors on the GPU with kornia.feature.match_snn() and transfer the matches back to OpenCV with cv2_matches_from_kornia. 2/3
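Here is a compact sketch of that round trip (OpenCV -> kornia -> OpenCV); the import location of the two helpers is my assumption, and the filenames are placeholders:

import cv2
import torch
import kornia.feature as KF
# Assumed import location for the helpers named above.
from kornia_moons.feature import laf_from_opencv_ORB_kpts, cv2_matches_from_kornia

img1 = cv2.imread('img_a.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('img_b.jpg', cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and descriptors in OpenCV.
orb = cv2.ORB_create(1000)
kps1, descs1 = orb.detectAndCompute(img1, None)
kps2, descs2 = orb.detectAndCompute(img2, None)

# Convert OpenCV keypoints to kornia local affine frames (LAFs).
lafs1 = laf_from_opencv_ORB_kpts(kps1)
lafs2 = laf_from_opencv_ORB_kpts(kps2)

# Match descriptors with kornia's second-nearest-neighbor ratio test,
# on GPU if one is available.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
d1 = torch.from_numpy(descs1).float().to(device)
d2 = torch.from_numpy(descs2).float().to(device)
dists, idxs = KF.match_snn(d1, d2, 0.9)

# Convert the matches back to OpenCV DMatch objects, e.g. for cv2.drawMatches.
cv2_matches = cv2_matches_from_kornia(dists, idxs)

Note that match_snn uses L2 distance, so casting the binary ORB descriptors to float is only a rough stand-in for proper Hamming matching.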
The repo is not @kornia_foss "official" in the sense that we don't promise to maintain everything in a timely manner. However, it should work most of the time :) 3/3