Raspberry Pi 3 Cookbook for Python Programmers - Third Edition

By: Steven Lawrence Fernandes, Tim Cox

Building a bag-of-words model


When working with text documents that contain a large number of words, we need to convert them into numerical representations so that they are suitable for machine learning algorithms, which require numerical data in order to analyze the text and extract meaningful information. The bag-of-words technique helps us achieve this: it builds a vocabulary from all the words that appear in the documents and then models each document as a histogram of its word counts.
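
Before following the recipe, a quick illustration may help. The minimal sketch below (an addition for clarity, not part of the recipe, using hypothetical example documents) shows the same idea with scikit-learn's CountVectorizer, assuming scikit-learn is available; the recipe itself builds its model with NLTK instead.

from sklearn.feature_extraction.text import CountVectorizer

# Two tiny example documents (hypothetical data, for illustration only)
docs = [
    "the quick brown fox jumps over the lazy dog",
    "the dog barks at the fox",
]

vectorizer = CountVectorizer()           # learns the vocabulary from the documents
counts = vectorizer.fit_transform(docs)  # one word-count histogram (row) per document

print(vectorizer.vocabulary_)            # mapping of each word to its column index
print(counts.toarray())                  # the bag-of-words histograms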

How to do it...

  1. Create a new Python file and import the following packages (a sketch of the splitter helper imported from chunking appears after the steps):
import numpy as np 
from nltk.corpus import brown 
from chunking import splitter 
  2. Define the main function and read the input data from the Brown corpus:
if __name__ == '__main__': 
    content = ' '.join(brown.words()[:10000]) 
  3. Split the text content into chunks:
    num_of_words = 2000   # number of words in each chunk 
    num_chunks = [] 
    count = 0 
    texts_chunk...
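
The recipe text breaks off here. For context, the splitter function imported from the chunking module could look roughly like the sketch below; this is an assumption about its behavior (splitting the text into chunks of a given number of words), not the book's verbatim code.

# Hypothetical sketch of splitter from chunking.py (an assumption, not the
# book's exact code): split the input text into chunks of num_words words each.
def splitter(content, num_words):
    words = content.split(' ')
    chunks = []
    current_words = []
    for word in words:
        current_words.append(word)
        if len(current_words) == num_words:
            chunks.append(' '.join(current_words))
            current_words = []
    # keep any leftover words as a final, shorter chunk
    if current_words:
        chunks.append(' '.join(current_words))
    return chunks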