Xamarin.Forms Projects - Second Edition

By: Daniel Hindrikes, Johan Karlsson

Overview of this book

Xamarin.Forms is a lightweight cross-platform development toolkit for building apps with a rich user interface. Improved and updated to cover the latest features of Xamarin.Forms, this second edition covers CollectionView and Shell, along with interesting concepts such as augmented reality (AR) and machine learning. Starting with an introduction to Xamarin and how it works, this book shares tips for choosing the type of development environment you should strive for when planning cross-platform mobile apps. You'll build your first Xamarin.Forms app and learn how to use Shell to implement the app architecture. The book gradually increases the level of complexity of the projects, guiding you through creating apps ranging from a location tracker and weather map to an AR game and face recognition. As you advance, the book will take you through modern mobile development technologies such as SQLite, .NET Core, Mono, ARKit, and ARCore. You'll be able to customize your apps for both Android and iOS platforms to achieve native-like performance and speed. The book is filled with engaging examples, so you can grasp essential concepts by writing code instead of reading through endless theory. By the end of this book, you'll be ready to develop your own native apps with Xamarin.Forms and its associated technologies, such as .NET Core, Visual Studio 2019, and C#.
Table of Contents (13 chapters)

Essential theory

This section will describe how AR works. The implementation differs slightly between platforms. Google's implementation is called ARCore, and Apple's implementation is called ARKit.

AR is all about superimposing computer graphics on top of a camera feed. This sounds like a simple thing to do, except that you have to track the camera position with great accuracy. Both Google and Apple have written some great application programming interfaces (APIs) to do this magic for you, with the help of motion sensors in your phone and data from the camera. The computer graphics that we add on top of the camera feed are synced to be in the same coordinate space as the surrounding real-life objects, making them appear as if they are part of the image you see on your phone.
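To make this concrete, here is a minimal sketch (not the book's project code) of that idea using the Xamarin.iOS bindings for ARKit and SceneKit. The `ARSCNView` renders the camera feed, world tracking keeps the camera pose up to date, and a node we add to the scene is expressed in the same world coordinate space, so it appears anchored in the real environment. This only runs on a physical iOS device with ARKit support:

```csharp
using ARKit;
using SceneKit;
using UIKit;

public class ArViewController : UIViewController
{
    ARSCNView arView;

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        // The ARSCNView draws the camera feed and renders the
        // SceneKit scene on top of it.
        arView = new ARSCNView { Frame = View.Bounds };
        View.AddSubview(arView);

        // A 10 cm cube placed half a meter in front of the initial
        // camera position, in world coordinates.
        var cube = SCNNode.FromGeometry(SCNBox.Create(0.1f, 0.1f, 0.1f, 0f));
        cube.Position = new SCNVector3(0, 0, -0.5f);
        arView.Scene.RootNode.AddChildNode(cube);
    }

    public override void ViewDidAppear(bool animated)
    {
        base.ViewDidAppear(animated);

        // World tracking fuses motion-sensor data with camera frames,
        // so the cube stays fixed in place as the device moves around it.
        var config = new ARWorldTrackingConfiguration
        {
            PlaneDetection = ARPlaneDetection.Horizontal
        };
        arView.Session.Run(config, ARSessionRunOptions.ResetTracking);
    }
}
```

On Android, ARCore plays the same role: it tracks the camera pose and exposes anchors in a shared world coordinate space, though the API surface differs from ARKit's.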