Advanced Natural Language Processing with TensorFlow 2
In the world of deep learning, specific architectures have been developed to handle specific modalities. Convolutional Neural Networks (CNNs) have been incredibly effective at processing images and are the standard architecture for computer vision tasks. However, research is moving toward multi-modal networks, which can take multiple types of input, such as sound, images, and text, and perform cognition the way humans do. After reviewing multi-modal networks, we dived into vision-and-language tasks as a specific focus. There are a number of problems in this area, including image captioning, visual question answering, visual commonsense reasoning (VCR), and text-to-image generation, among others.
Building on what we learned in previous chapters about seq2seq architectures, custom TensorFlow layers and models, custom learning schedules, and custom training loops, we implemented a Transformer model from scratch. Transformers are the state of the art at the time of writing. We took a quick look at the...
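As a reminder of what one of those custom components looks like, here is a minimal sketch of a custom learning-rate schedule of the kind used when training a Transformer from scratch. It follows the warmup-then-decay formula from the original Transformer paper; the class name TransformerSchedule and the hyperparameter values are illustrative assumptions, not code taken from the chapter.

import tensorflow as tf


class TransformerSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Learning rate ramps up linearly for warmup_steps, then decays ~ 1/sqrt(step)."""

    def __init__(self, d_model, warmup_steps=4000):
        super().__init__()
        self.d_model = tf.cast(d_model, tf.float32)
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        decay = tf.math.rsqrt(step)                       # decay phase: 1/sqrt(step)
        warmup = step * (self.warmup_steps ** -1.5)       # warmup phase: linear ramp
        return tf.math.rsqrt(self.d_model) * tf.minimum(decay, warmup)


# Usage: plug the schedule into an optimizer driving a custom training loop.
learning_rate = TransformerSchedule(d_model=512)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, epsilon=1e-9)

Subclassing tf.keras.optimizers.schedules.LearningRateSchedule lets the schedule be passed directly to any Keras optimizer, whether training happens through model.fit or an explicit GradientTape loop.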