Learning Salesforce Einstein
Overview of this book

Dreamforce '16 brought forth the latest addition to the Salesforce platform: an AI tool named Einstein. Einstein promises to give users of all Salesforce applications a powerful platform for gaining deep insights into the data they work with. This book introduces you to Einstein and helps you integrate it into your business applications built on the Salesforce platform. We start with an introduction to AI, then look at how AI can make your CRM and apps smarter. Next, we discuss the out-of-the-box components Salesforce has added to the Sales, Service, Marketing, and Community Clouds to provide Artificial Intelligence capabilities. Further on, we show you how to use Heroku, PredictionIO, and the Force.com platform, along with Einstein, to build smarter apps. The core chapters focus on developer content and introduce PredictionIO and the Salesforce Einstein Predictive Vision Services, along with the Analytics Cloud, the Einstein Data Discovery product, and core IoT concepts. Throughout the book, we also focus on how Einstein can be integrated into CRM and the various clouds. By the end of the book, you will be able to embrace and leverage the power of Einstein, incorporating its functions into your applications. Salesforce developers will be introduced to the world of AI, while data scientists will gain insights into Salesforce's various cloud offerings and how Einstein's capabilities can enhance their applications.

Artificial Intelligence key terms

Artificial Intelligence refers to computerized systems designed to mimic how humans think, learn, process, and perceive information. In simple terms, it is about first understanding and then recreating the human mind.

There are some common terms that we need to understand before we proceed further.

Machine Learning

As per Wikipedia:

"Machine learning provides computers with the ability to learn without being explicitly programmed"

Machine learning in general comprises three major steps:

  1. We collect a large number of examples that specify the correct output for a given input.
  2. Based on this input dataset, we apply an algorithm to form a model, that is, a mathematical function that can predict the outcome.
  3. We pass new inputs to the mathematical function obtained in step 2 to obtain predictions. Consider the following diagram:
The high-level steps of any machine learning system
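
To make these three steps concrete, here is a minimal sketch in Python using scikit-learn (an assumption on our part; any machine learning library would do). The deal-size dataset is hypothetical:

```python
from sklearn.linear_model import LinearRegression

# Step 1: collect examples that specify the correct output for a given input.
# Hypothetical data: deal size (in $1,000s) and the days it took to close.
X = [[10], [25], [40], [60], [80]]   # inputs
y = [5, 9, 14, 22, 30]               # correct outputs

# Step 2: apply an algorithm to the dataset to form a model
# (a mathematical function that can predict the outcome).
model = LinearRegression().fit(X, y)

# Step 3: pass a new input to the learned function to obtain a prediction.
print(model.predict([[50]]))  # predicted days to close a $50k deal
```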

In this chapter, we will cover a simple experiment using Google's Prediction API with Salesforce data. In later chapters, we will introduce you to PredictionIO, part of the Einstein offerings from Salesforce: an open source Machine Learning server that allows developers and data scientists to capture data via its Event Server, build predictive models with algorithms, and then deploy them as a web service.

Neural networks

A neural network is a set of algorithms designed to recognize patterns. Neural networks are loosely based on how the human brain works.

They consist of a set of nodes (similar to human brain neurons) arranged in multiple layers, with weighted interconnections between them. Each neuron combines a set of input values to produce an output value, which in turn is passed on to other neurons downstream. Artificial neural networks are used in Deep Learning.
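
As a minimal illustration, the following Python sketch implements a single artificial neuron: it combines weighted inputs with a bias and applies a sigmoid activation. The input values and weights are hypothetical:

```python
import math

def neuron(inputs, weights, bias):
    # Combine the input values as a weighted sum, plus a bias term.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # A sigmoid activation squashes the result into the range (0, 1);
    # this output value would be passed on to neurons downstream.
    return 1 / (1 + math.exp(-total))

print(neuron([0.5, 0.8], [0.4, -0.6], 0.1))
```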

Deep Learning

In Deep Learning, the neural network has multiple layers. At the top layer, the network trains on a specific set of features and then sends that information to the next layer. That layer combines this information with other features and passes the result on to the layer below, and so on.
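
Reusing the hypothetical neuron() function from the previous sketch, a layered forward pass might look like this; each layer's outputs become the next layer's inputs:

```python
def layer(inputs, weight_rows, biases):
    # One output per neuron in the layer.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

x = [0.2, 0.9]                                        # raw input features
h1 = layer(x, [[0.3, 0.7], [-0.5, 0.4]], [0.0, 0.1])  # first layer
h2 = layer(h1, [[0.6, -0.2]], [0.05])                 # second layer, and so on
print(h2)
```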

Deep Learning has grown in popularity because it has proven to outperform other machine learning methodologies. With the advancement of distributed computing resources and the influx of image, text, and voice data that businesses generate, Deep Learning can deliver insights that weren't previously possible.

Consider the following diagram:

Deep learning diagram. (Source and credit - http://www.nanalyze.com/2016/11/artificial-intelligence-definition/)

To borrow an example from the U.S. government's report on AI: in an image recognition application, a first layer of units might combine the raw pixels of the image to recognize simple patterns in the image; a second layer might combine the results of the first layer to recognize patterns of patterns; a third layer might combine the results of the second layer, and so on. We train neural networks by feeding them large amounts of data to learn from.

Salesforce Einstein offers Predictive Vision Services (currently in Pilot) for training and solving image recognition use cases. We will discuss in detail how to use these services to bring the power of image recognition to CRM apps.

Natural language processing

Natural language processing (NLP) is the ability of computers to understand human language and speech. Good examples of this are Google Translate and Google Voice Search. Modern NLP systems use machine learning to detect patterns.
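
As a minimal sketch of how machine learning detects patterns in language, the following Python example trains a tiny text classifier with scikit-learn (an assumption on our part; the sentences and labels are hypothetical):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: short texts with sentiment labels.
texts = ["great product, very happy", "terrible support, very unhappy",
         "happy with the service", "unhappy customer, bad experience"]
labels = ["positive", "negative", "positive", "negative"]

# Learn word patterns (bag of words) and a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["very happy with the product"]))  # -> ['positive']
```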

Cognitive computing

Cognitive computing involves self-learning systems that use data mining (big data), pattern recognition (machine learning), and natural language processing to mimic the way the human brain works. The difference between Artificial Intelligence and cognitive computing boils down to this: the former tells the user what course of action to take based on its analysis, while the latter provides information to help the user decide for themselves. The goal of cognitive computing is to solve IT problems automatically, without human intervention.

Pattern recognition

Humans have found patterns everywhere, from astronomy to biology and physics. A pattern is a set of objects, concepts, or phenomena whose elements resemble one another in certain respects.

Statistical and structural patterns form the basis of machine learning.

Data mining

Data mining is the process of finding patterns or correlations among dozens of fields in a relational database.

Data mining consists of the following five major elements:

  • Extracting, transforming, and loading (ETL) data from the data warehouse
  • Storing and managing the data in a multidimensional database system
  • Providing data access to business analysts and IT professionals
  • Using application software to analyze the data
  • Using charts and dashboards to present the data
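
As a minimal sketch of the idea, the following Python example uses pandas (an assumption on our part) to surface correlations among a few fields; the table and column names are hypothetical:

```python
import pandas as pd

# Imagine these rows were extracted (ETL) from a data warehouse.
df = pd.DataFrame({
    "deal_size":     [10, 25, 40, 60, 80],
    "discount":      [ 2,  5,  8, 12, 15],
    "days_to_close": [ 5,  9, 14, 22, 30],
})

# A correlation matrix is one simple way to find relationships among
# fields; an analyst would then chart or dashboard the results.
print(df.corr())
```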

GPUs

Graphics processing units (GPUs) help computers work much faster than they would with a central processing unit (CPU) alone. Some companies have built their own versions of GPUs. Google, for example, has a chip it calls the Tensor Processing Unit (TPU), which supports the software engine (TensorFlow) that drives its Deep Learning services.