Deep learning models have gained widespread popularity for natural language processing (NLP) because of their ability to accurately generalize over a range of contexts and languages.
Transformer-based models, such as Bidirectional Encoder Representations from Transformers (BERT), have revolutionized NLP by offering accuracy comparable to human baselines on benchmarks for question answering (e.g., SQuAD), entity recognition, intent recognition, sentiment analysis, and more.
The NVIDIA Deep Learning Institute (DLI) is offering instructor-led, hands-on training on how to use Transformer-based natural language processing models for text classification tasks, such as categorizing documents.
In the course, you’ll also learn how to use Transformer-based models for named-entity recognition (NER) tasks and how to analyze various model features, constraints, and characteristics. The training will help developers determine which model is best suited for a particular use case based on metrics, domain specificity, and available resources.
By participating in this workshop, you’ll be able to:
- Understand how word embeddings have rapidly evolved in NLP tasks, from Word2Vec and recurrent neural network (RNN)-based embeddings to Transformer-based contextualized embeddings
- See how Transformer architecture features, especially self-attention, are used to create language models without RNNs
- Use self-supervision to improve the Transformer architecture in BERT, Megatron, and other variants for superior NLP results
- Leverage pre-trained, modern NLP models to solve multiple tasks such as text classification, NER, and question answering
- Manage inference challenges and deploy refined models for live applications
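To give a flavor of the self-attention mechanism covered in the workshop, here is a minimal NumPy sketch of scaled dot-product self-attention. This is an illustrative simplification, not the course material or a production implementation: the projection matrices are random placeholders, and real Transformer layers add multiple heads, masking, and learned parameters.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (seq_len, d_model)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = k.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                       # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax: each row sums to 1
    return weights @ v, weights                           # context vectors, attention map

# Toy example with random data and random (untrained) projections
rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.standard_normal((seq_len, d_model))
w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out, weights = self_attention(x, w_q, w_k, w_v)
```

Each output row is a weighted mixture of all token value vectors, which is how self-attention lets every position attend to every other position without recurrence.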
Learn more about this DLI workshop. See the NVIDIA DLI Remote Instructor-Led training experience in this short video. Visit the NVIDIA DLI homepage for a full list of online and instructor-led training.