NVIDIA DLI Workshop: Building Transformer-Based NLP Applications

Discover how deep learning works through hands-on exercises in advanced natural language processing tasks and in production deployment.

By BME TMIT

Date and time

Mon, 13 Dec 2021 00:00 - 08:00 PST

Location

Online

About this event

 *** REGISTRATION IS POSSIBLE WITH A UNIVERSITY OR RESEARCH LAB EMAIL ADDRESS ONLY. REGISTRATIONS WILL BE VERIFIED IN THE LOBBY OF THE ONLINE EVENT. ***

This is an advanced-level workshop. You need a basic understanding of the Python programming language and of how deep learning works.

Deep learning models have gained widespread popularity for natural language processing (NLP) because of their ability to accurately generalize over a range of contexts and languages. Transformer-based models, such as Bidirectional Encoder Representations from Transformers (BERT), have revolutionized NLP by offering accuracy comparable to human baselines on benchmarks like SQuAD for question answering, entity recognition, intent recognition, sentiment analysis, and more.

The NVIDIA Deep Learning Institute (DLI) is offering instructor-led, hands-on training on how to use Transformer-based natural language processing models for text classification tasks, such as categorizing documents.

In the course, you’ll also learn how to use Transformer-based models for named-entity recognition (NER) tasks and how to analyze various model features, constraints, and characteristics. The training will help developers determine which model is best suited for a particular use case based on metrics, domain specificity, and available resources.

By participating in this workshop, you’ll be able to:

  • Understand how word embeddings have rapidly evolved in NLP tasks, from Word2Vec and recurrent neural network (RNN)-based embeddings to Transformer-based contextualized embeddings
  • See how Transformer architecture features, especially self-attention, are used to create language models without RNNs
  • Use self-supervision to improve the Transformer architecture in BERT, Megatron, and other variants for superior NLP results
  • Leverage pre-trained, modern NLP models to solve multiple tasks such as text classification, NER, and question answering (see the sketch after this list)
  • Manage inference challenges and deploy refined models for live applications
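
To make these outcomes concrete, here is a minimal sketch of running pre-trained Transformer models on the three tasks named above. It uses the Hugging Face transformers pipeline API purely for illustration; the workshop's own tooling (for example, NVIDIA NeMo) may differ, and the models loaded here are the library's defaults, not course material.

    # Minimal sketch: pre-trained Transformer pipelines (illustrative only;
    # not the workshop's official tooling). Requires the transformers
    # library and a backend such as PyTorch.
    from transformers import pipeline

    # Text classification: sentiment analysis with the library's default model
    classifier = pipeline("sentiment-analysis")
    print(classifier("Transformer models have revolutionized NLP."))

    # Named-entity recognition, grouping word pieces into whole entities
    ner = pipeline("ner", aggregation_strategy="simple")
    print(ner("The NVIDIA Deep Learning Institute is based in Santa Clara."))

    # Extractive question answering over a short context passage
    qa = pipeline("question-answering")
    print(qa(question="What does DLI offer?",
             context="The NVIDIA Deep Learning Institute (DLI) offers "
                     "instructor-led, hands-on training."))

Each pipeline downloads a pre-trained checkpoint on first use, which is the same "leverage pre-trained models" workflow the workshop teaches at greater depth.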
