Mastering Application Development with Force.com

By: Kevin J. Poorman

Overview of this book

Force.com is an extremely powerful, scalable, and secure cloud platform, delivering a complete technology stack, ranging from databases and security to workflow and the user interface. With salesforce.com's Force.com cloud platform, you can build any business application and run it in the cloud. The book will help you enhance your skillset and develop complex applications using Force.com. It gets you started with a quick refresher of Force.com's development tools and methodologies, and moves on to an in-depth discussion of triggers, bulkification, DML order of operations, and trigger frameworks. Next, you will learn to use the batchable and schedulable interfaces to process massive amounts of information asynchronously. You will also be introduced to Salesforce Lightning and cover components—including backend (Apex) controllers, frontend (JavaScript) controllers, events, and attributes—in detail. Moving on, the book will focus on testing various Apex components: what to test, when to write the tests, and—most importantly—how to test. Next, you will develop a change set and use it to migrate your code from one org to another, and learn what other tools are available for deploying metadata. You will also use command-line tools to authenticate to and access the Force.com REST sObject API and the Bulk sObject API; additionally, you will write a custom REST endpoint, and learn how to structure a project so that multiple developers can work independently of each other without causing metadata conflicts. Finally, you will take an in-depth look at the overarching best practices for the architecture (structure) and engineering (code) of applications on the Force.com platform.

In the beginning, we physically moved tapes around


When computers were the size of small houses and required their own nuclear power station to run, data was moved between systems on magnetic tapes. Because computing time was a precious resource, operations were often done in bulk. Over time, a common pattern emerged: extracting data from the database in bulk, transforming that data in bulk, and finally, loading the transformed data back into the database in bulk. This process is also known as the Extract, Transform, and Load (ETL) process. The idea was to pull records from a data store, run some kind of calculation or transformation on them, and then load that data back into a data store. This worked not only intra-system, loading data from an internal data source, but also inter-system, as data extracted from one system could be loaded into a second system from the magnetic tapes that were physically transported to it. In a sense, ETL was the first API. APIs are built from a combination...
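
On the Force.com platform, this same extract, transform, load pattern is what batch Apex (covered later in the book) is built around. The following is a minimal, hypothetical sketch of the pattern as an Apex batch class; the object and field names (Invoice__c, Amount__c, Tax__c) and the flat 7% tax rate are invented purely for illustration.

// A sketch of the ETL pattern expressed as batch Apex.
// Invoice__c, Amount__c, and Tax__c are hypothetical names used only
// to illustrate extract / transform / load in bulk.
public class InvoiceTaxBatch implements Database.Batchable<sObject> {

    // Extract: define the records to process; the platform feeds them
    // to execute() in chunks.
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, Amount__c FROM Invoice__c WHERE Tax__c = null'
        );
    }

    // Transform: calculate a value for each record in the current chunk.
    public void execute(Database.BatchableContext bc, List<Invoice__c> scope) {
        for (Invoice__c inv : scope) {
            inv.Tax__c = inv.Amount__c * 0.07;
        }
        // Load: write the transformed records back in a single bulk DML call.
        update scope;
    }

    public void finish(Database.BatchableContext bc) {
        // Post-processing, such as notifications or chaining another job,
        // would go here.
    }
}

Such a job would be started with Database.executeBatch(new InvoiceTaxBatch(), 200); where the second argument controls how many records are extracted and processed per chunk.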