They first introduce an optimization scheme that utilizes a CLIP-based loss to modify an input latent vector in response to a user-provided text prompt.
Next, they describe a latent mapper that infers a text-guided latent manipulation step for a given input image, allowing faster and more stable text-based manipulation.
Finally, they present a method for mapping a text prompt to input-agnostic directions in StyleGAN's style space, enabling interactive text-driven image manipulation.
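The first scheme above is, at its core, gradient descent on a latent vector under a CLIP loss plus a regularizer that keeps the edit close to the starting point. A minimal sketch of that loop, using a simple quadratic stand-in for the CLIP similarity term (the real method would score the generated image against the text prompt with a CLIP model; `clip_loss` and its gradient here are hypothetical stand-ins):

```python
import numpy as np

# Stand-in for the CLIP term D_CLIP(G(w), t): a quadratic pulling the
# latent toward a "text target" direction. In the real method this is the
# dissimilarity between CLIP embeddings of the generated image and the
# text prompt, backpropagated through the generator.
def clip_loss(w, target):
    return np.sum((w - target) ** 2)

def clip_loss_grad(w, target):
    return 2.0 * (w - target)

def optimize_latent(w_init, target, l2_weight=0.1, lr=0.05, steps=200):
    """Gradient descent on L(w) = D_CLIP + l2_weight * ||w - w_init||^2:
    move toward the text target while staying close to the input latent."""
    w = w_init.copy()
    for _ in range(steps):
        grad = clip_loss_grad(w, target) + 2.0 * l2_weight * (w - w_init)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w0 = rng.normal(size=8)       # input latent vector
t = rng.normal(size=8)        # direction implied by the text prompt
w_edited = optimize_latent(w0, t)
# w_edited moves toward the target while the L2 term anchors it near w0.
```

The `l2_weight` regularizer is what makes the edit an *edit* rather than a full re-synthesis: without it the optimizer would drift arbitrarily far from the input image.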
Learn more about spaCy v3.0 and its new features: transformer-based pipelines and the new training config and workflow system that help you take projects from prototype to production.
STEP BY STEP
01:54 – State-of-the-art transformer-based pipelines
05:03 – Declarative configuration system
11:06 – Workflows for end-to-end projects
17:03 – Trainable and rule-based components
21:43 – Custom models in any framework
26:20 – Features and summary
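The declarative configuration system covered at 05:03 replaces ad-hoc training scripts with a single `config.cfg` file. An illustrative fragment of what such a config looks like (section names follow spaCy v3's documented format; a complete, valid config is best generated with `python -m spacy init config config.cfg`):

```ini
# Fragment of a spaCy v3 training config (illustrative, not complete)
[nlp]
lang = "en"
pipeline = ["transformer","ner"]

[training]
max_epochs = 0
max_steps = 20000

[training.optimizer]
@optimizers = "Adam.v1"
```

Every setting, down to the optimizer, is declared in one place, which is what makes training runs reproducible and easy to version-control.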
The past decade has witnessed a groundbreaking rise of machine learning for human language analysis, with current methods capable of automatically and accurately recovering various aspects of syntax and semantics, including sentence structure and grounded word meaning, from large data collections.
Recent research showed the promise of such tools for analyzing acoustic communication in nonhuman species.
They posit that machine learning will be the cornerstone of future collection, processing, and analysis of multimodal streams of data in animal communication studies, including bioacoustic, behavioral, biological, and environmental data.
nbdev is a library that lets you develop a Python library in Jupyter Notebooks, putting all your code, tests, and documentation in one place. That is: you now have a true literate programming environment, as envisioned by Donald Knuth back in 1983!
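In practice, "code, tests and documentation in one place" looks like a notebook cell tagged for export, with the docstring becoming the docs and neighboring cells acting as tests. A minimal sketch (nbdev2 uses the `#| export` directive; earlier versions used `#export`):

```python
# A notebook cell marked for export becomes part of the library's source.
#| export
def say_hello(to):
    "Say hello to somebody. The docstring doubles as the documentation."
    return f"Hello {to}!"

# Cells that are NOT exported stay in the notebook as inline tests
# and worked examples for the docs:
assert say_hello("Jeremy") == "Hello Jeremy!"
```

Running nbdev's export step turns the tagged cells into ordinary `.py` modules, while the untagged cells are executed as the test suite and rendered into the documentation site.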
Does BERT Pretrained on Clinical Notes Reveal Sensitive Data?
Large Transformers pretrained over clinical notes from Electronic Health Records (EHR) have afforded substantial gains in performance on predictive clinical tasks.
The cost of training such models, and the necessity of data access to do so, coupled with their utility, motivates parameter sharing, i.e., the release of pretrained models such as ClinicalBERT.
While most efforts have used deidentified EHR, many researchers have access to large sets of sensitive, non-deidentified EHR with which they might train a BERT model (or similar).
Would it be safe to release the weights of such a model if they did?
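One way to probe that question, in the spirit of the fill-in-the-blank leakage probes used in this line of work (this is a generic sketch, not the paper's actual method): ask the model to score candidate patient names in a clinical template and check whether the true name ranks suspiciously high. The scorer below is a toy stand-in for querying a real masked language model; `rank_of_true_name` and `toy_score` are hypothetical names.

```python
# Fill-in-the-blank leakage probe (sketch): rank candidate names by model
# score for a clinical template; a true name that consistently ranks
# first across many patients would be weak evidence of memorization.

def rank_of_true_name(score_name, template, candidates, true_name):
    """Return the 1-based rank of the true name among the candidates,
    under a higher-is-better scoring function."""
    ranked = sorted(candidates, key=lambda n: score_name(template, n),
                    reverse=True)
    return ranked.index(true_name) + 1

# Toy stand-in for a masked-LM scorer (e.g. the probability the model
# assigns to each name filling the [NAME] slot). This one pretends the
# model memorized "Alice".
def toy_score(template, name):
    return 1.0 if name == "Alice" else 0.0

candidates = ["Alice", "Bob", "Carol"]
rank = rank_of_true_name(
    toy_score, "[NAME] was admitted with sepsis", candidates, "Alice")
```

A single rank tells you little; the interesting signal is the distribution of ranks over many held-out patients compared against a chance baseline.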