Avatars are central to the success of the #metaverse and #metacommerce. We need different #avatars for different purposes: accurate #3D digital doubles for shopping, realistic looking for #telepresence, stylized for fun, all with faces & hands. @meshcapade makes this easy. (1/8)
For online shopping, clothing try-on, and fitness, an avatar should be realistic – your digital twin. You need a true digital double to see how clothing will look in motion. But creating avatars that are accurate enough for shopping is hard. (2/8)
Since it’s hard to 3D scan everyone, digital doubles must be created from a few images or a video. Existing methods require users to wear tight clothes and have cumbersome capture protocols. @Meshcapade uses a single image of a person in any pose, making creation easy. (3/8)
Avatars need to move, talk, and show emotion. We're very sensitive to motions and emotions that are not lifelike – the uncanny valley. A personalized avatar should move like its ‘owner’. One approach is to extract realistic animations from 3D motion capture data (#mocap). (4/8)
Mocap requires expensive equipment in a lab setting. To democratize avatar animation, @Meshcapade uses computer vision to accurately track a user’s #3D body and face in video, capturing all the emotional nuance. This is then applied to any #avatar. (5/8)
What if you want your avatar to be a fantasy character? You still want your motions and expressions. This requires an underlying avatar representation that can support #retargeting between accurate digital doubles and cartoon characters. (6/8)
The #SMPL body model does exactly this. SMPL represents the details of real people and their motions yet can be used to animate cartoon characters. SMPL is designed to be portable and works with any game engine or graphics software. (7/8)
This lets you have a whole collection of #avatars – from realistic to fantasy – that you can take anywhere and that always move like you do. This is all supported by @Meshcapade’s Avatar-as-a-Service platform. (8/8) Videos/images: @meshcapade
More info: meshcapade.com/home
Young scientists regularly ask me for career advice. Academia or industry? Big company or startup? US or Europe? Good scientists in AI disciplines are fortunate to have many choices. But choosing can be stressful. I always give the same advice. 1/10
There is no globally-optimal life. There is no sequence of choices in life that will produce the "perfect life" or "perfect career". This is hard to accept but, once you accept it, it's very freeing. 2/10
So my advice is to choose the option that is the most fun. Why fun? Shouldn't you maximize future reward? Maximize future options? Maximize impact? Maximize income? 3/10
Build what you need and use what you build. This is a core philosophy of my research. It shifts the focus away from publishing “papers” to what really matters — impact. This thread unpacks why I think this is a successful approach to science. 1/10 Or see: perceiving-systems.blog/en/post/build-…
At the start of any research project, I ask my students “Who’s your customer?” By customer, I don’t mean “paying customer”. I mean “who needs what you’re proposing?” Who will use it? Who cares? If you can’t answer this, then the work is likely to be irrelevant. 2/10
A good answer to “who’s your customer?” can be “me”. If you need it, then you need it. And if you need it, there are probably other people out there in the world like you who will need it too. Corollary: if you aren’t going to use it, why do you think others will? 3/10
These are exciting times. There's a sense that AI will change everything, including how science is done. Implicit in this excitement is the hope that everything will change for the better. Let’s look at that. First, we need to define “better.”
Here, it’s the idea that science serves people by producing new artifacts (drugs, technologies, etc.) that improve our lives. Behind this definition is a utilitarian view of science that is not quite correct and doesn't apply to all disciplines, but that's a complex story for later.
Instead, I’ll focus on whether the AI utopia in science is likely. The argument goes like this: science is the domain of a few self-selected wizards who keep the rest of the population out through arcane jargon and ancient rituals.
.@ylecun writes “science must solely evaluate *impact*” and “evaluating work done by humans has ABSOLUTELY NOTHING TO DO with scientific publication.” Original emphasis retained. Let’s unpack the notion of impact and the evaluation of humans in science.
Consider the claim that all that matters in science is impact. This sounds sensible but requires the ability to *measure* impact. In ML today, the turnaround time between innovation and application is short, the “customer base” is huge, and impact may seem easy to evaluate.
But groundbreaking innovation often languishes in the shadows for years or decades before having measurable impact; e.g. research on neural nets before they proved their merit. During this pre-impact stage, the machinery of science is designed to make bets, albeit imperfectly.
In the LLM-science discussion, I see a common misconception that science is a thing you do and that writing about it is separate and can be automated. I’ve written over 300 scientific papers and can assure you that science writing can’t be separated from science doing. Why? 1/18
Anyone who has taught knows the following is true. You think you understand something until you go to teach it. Explaining something to others reveals gaps in your understanding that you didn’t know you had. Well, writing a scientific paper is a form of teaching. 2/18
Your paper is teaching your reader about your hypothesis, problem, method, the prior work in the field, your results, and what it all means for future work. When you write up your work and find it challenging, this is typically because you don’t yet fully understand it. 3/18
I repeat: Easily produced science text that's wrong does not advance science, improve science productivity, or make science more accessible. I like research on LLMs but the blind belief in their goodness does a disservice to them and science. Here is an example from #ChatGPT 1/5
SMPL is actually short for Skinned Multi-Person Linear model. #SMPL is a popular 3D model of the body that's based on linear blend skinning with pose-corrective blend shapes. It's learned from 3D scans of people, making it accurate and compatible with rendering engines. 2/5
Despite what #ChatGPT thinks, it wasn't developed at Berkeley or the MPI for Informatics. It was developed in the @PerceivingSys department of the @MPI_IS (the Max Planck Institute for Intelligent Systems). Run it again and you'll get different answers every time. 3/5
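To make the SMPL description above concrete, here is a minimal sketch of linear blend skinning with shape and pose-corrective blend shapes, the core formulation the thread refers to. This is a simplified illustration, not the actual SMPL implementation: all array names and shapes are assumptions for the sketch, and real SMPL computes joint locations from the shaped mesh and uses learned blend-shape bases.

```python
import numpy as np

def lbs(template_verts, shape_dirs, pose_dirs, betas, pose_feat,
        joint_transforms, skin_weights):
    """Minimal linear-blend-skinning sketch in the spirit of SMPL.

    template_verts:   (V, 3) mean-shape mesh vertices
    shape_dirs:       (V, 3, B) shape blend-shape directions
    pose_dirs:        (V, 3, P) pose-corrective blend-shape directions
    betas:            (B,) identity shape coefficients
    pose_feat:        (P,) pose feature (e.g. rotation matrices minus identity)
    joint_transforms: (J, 4, 4) rigid world transform per joint
    skin_weights:     (V, J) per-vertex skinning weights (rows sum to 1)
    """
    # 1. Add identity-dependent shape offsets to the template mesh.
    v = template_verts + shape_dirs @ betas
    # 2. Add pose-corrective offsets that fix classic LBS artifacts
    #    (e.g. "candy-wrapper" collapse at twisting joints).
    v = v + pose_dirs @ pose_feat
    # 3. Blend the per-joint transforms with the skinning weights,
    #    then apply the blended transform to each (homogeneous) vertex.
    T = np.einsum('vj,jab->vab', skin_weights, joint_transforms)  # (V, 4, 4)
    v_h = np.concatenate([v, np.ones((v.shape[0], 1))], axis=1)
    posed = np.einsum('vab,vb->va', T, v_h)[:, :3]
    return posed
```

With identity joint transforms and zero coefficients, the mesh is returned unchanged; animation and retargeting amount to driving `joint_transforms` (and `pose_feat`) from mocap or video while keeping `betas` fixed to the avatar's identity.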