
Matplotlib 3.0 Cookbook

By : Srinivasa Rao Poladi, Nikhil Borkar

Overview of this book

Matplotlib provides a large library of customizable plots, along with a comprehensive set of backends. Matplotlib 3.0 Cookbook is your hands-on guide to exploring the world of Matplotlib, and covers the most effective plotting packages for Python 3.7. With the help of this cookbook, you'll be able to tackle any problem you might come across while designing attractive, insightful data visualizations. Through over 150 recipes, you'll learn how to develop highly detailed plots for business intelligence, data science, and engineering disciplines. Once you've familiarized yourself with the fundamentals, you'll move on to developing professional dashboards with a wide variety of graphs and sophisticated grid layouts in 2D and 3D. You'll annotate plots and add rich text to them, enabling the creation of a business storyline. In addition, you'll learn how to save figures and animations in various formats for downstream deployment, and how to extend Matplotlib with internal and third-party toolkits such as axisartist, axes_grid, Cartopy, and Seaborn. By the end of this book, you'll be able to create high-quality customized plots and deploy them on the web and in supported GUI applications such as Tkinter, Qt 5, and wxPython by working through real-world use cases and examples.

Word embeddings in two dimensions

For natural language processing (NLP) applications, words need to be represented in numerical format, as machines can only process numeric data. Representing words as arrays of numbers is known as word embedding: each of these arrays (word representations) is a point in an n-dimensional space, where "n" is the number of dimensions/features (the length of the array) representing each word.
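As a minimal sketch of this idea, the toy embeddings below (hand-picked 4-dimensional vectors, not the output of a trained model) show how each word becomes a point in an n-dimensional space, and how geometric closeness, here measured by cosine similarity, can reflect semantic relatedness:

```python
import numpy as np

# A toy 4-dimensional embedding space (real models use 50-300 dimensions).
# These words and vectors are illustrative, not from a trained model.
embeddings = {
    "king":  np.array([0.8, 0.65, 0.1, 0.2]),
    "queen": np.array([0.8, 0.70, 0.9, 0.2]),
    "apple": np.array([0.1, 0.05, 0.4, 0.9]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Related words sit closer together in the embedding space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

Here "king" and "queen" score a noticeably higher similarity than "king" and "apple", which is the behavior a trained embedding model is expected to exhibit.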

Depending on the type of NLP application, a machine learning (ML) algorithm is trained to learn the word representations. These representations typically range from 50 to 300 dimensions, which is impossible to visualize or comprehend directly. Using dimensionality reduction techniques such as PCA or t-SNE, this high-dimensional space is reduced to a two- or three-dimensional space so that it can be plotted on a graph to visualize...
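The reduction-and-plot step can be sketched as follows. This example uses random 50-dimensional vectors as stand-ins for trained embeddings and implements PCA directly via NumPy's SVD (rather than a library such as scikit-learn) to project them into two dimensions for a Matplotlib scatter plot:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, suitable for saving figures
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
words = ["king", "queen", "man", "woman", "apple", "orange"]

# Stand-in 50-dimensional embeddings (random here for illustration; in
# practice these would come from a trained model such as word2vec or GloVe).
X = rng.normal(size=(len(words), 50))

# PCA via SVD: center the data, then project onto the top-2 right
# singular vectors to get a 2-D view of the 50-D space.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
X_2d = X_centered @ Vt[:2].T  # shape: (6, 2)

# Scatter the projected points and label each one with its word.
fig, ax = plt.subplots()
ax.scatter(X_2d[:, 0], X_2d[:, 1])
for word, (x, y) in zip(words, X_2d):
    ax.annotate(word, (x, y))
fig.savefig("embeddings_2d.png")
```

With real trained embeddings, semantically related words (such as "king" and "queen") would cluster together in the resulting plot; with the random stand-ins above, only the mechanics of the projection are demonstrated.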