Flutter Cookbook

By: Simone Alessandria, Brian Kayfitz
Overview of this book

“Anyone interested in developing Flutter applications for Android or iOS should have a copy of this book on their desk.” – Amazon 5* Review

Lauded as the ‘Flutter bible’ for new and experienced mobile app developers, this recipe-based guide will teach you the best practices for robust app development, as well as how to solve cross-platform development issues. From setting up and customizing your development environment to error handling and debugging, the Flutter Cookbook covers the how-tos as well as the principles behind them.

As you progress, the recipes in this book will get you up to speed with the main tasks involved in app development, such as user interface and user experience (UI/UX) design, API design, and creating animations. Later chapters will focus on routing, retrieving data from web services, and persisting data locally. A dedicated section also covers Firebase and its machine learning capabilities. The last chapter is specifically designed to help you create apps for the web and desktop (Windows, Mac, and Linux).

Throughout the book, you’ll also find recipes that cover the most important features needed to build a cross-platform application, along with insights into running a single codebase on different platforms. By the end of this Flutter book, you’ll be writing and delivering fully functional apps with confidence.
Table of Contents (17 chapters)

How it works...

When using ML Kit, getting results usually involves the following steps:

  1. You get an image.  
  2. You send it to the API to get some information about the image.  
  3. The ML Kit API returns data to the app, which can then use it as necessary. 

The first step is getting an instance of the Firebase ML Vision API. In this recipe, you retrieved it with the following instruction:

final FirebaseVision vision = FirebaseVision.instance; 

The next step is creating a FirebaseVisionImage, which is the image object used by the API detectors. In this recipe, you created it with the following instruction:

final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(image); 
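As a side note, the firebase_ml_vision plugin can build a FirebaseVisionImage from sources other than a dart:io File. A minimal sketch (the image variable is assumed to be a File obtained earlier in the recipe):

```dart
import 'dart:io';
import 'package:firebase_ml_vision/firebase_ml_vision.dart';

FirebaseVisionImage buildVisionImage(File image) {
  // From a File, as used in this recipe:
  final FirebaseVisionImage fromFile = FirebaseVisionImage.fromFile(image);

  // Alternatively, from a path string, handy when you only
  // have the file location (for example, from an image picker):
  final FirebaseVisionImage fromPath =
      FirebaseVisionImage.fromFilePath(image.path);

  return fromFile;
}
```

Both constructors produce an equivalent FirebaseVisionImage, so you can pick whichever matches the data you already have.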

Once both the FirebaseVision instance and the FirebaseVisionImage are available, you call a detector; in this case, you used a TextRecognizer detector, created with the following instruction:

final TextRecognizer recognizer = vision.textRecognizer();
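Putting the three steps together, here is a minimal sketch of the whole text-recognition flow. It assumes the firebase_ml_vision plugin is installed and that the image File was picked elsewhere in the recipe; note that the detector should be closed once you are done with it to release native resources:

```dart
import 'dart:io';
import 'package:firebase_ml_vision/firebase_ml_vision.dart';

Future<String> recognizeText(File image) async {
  // Steps 1 and 2: wrap the image and obtain the detector.
  final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(image);
  final TextRecognizer recognizer = FirebaseVision.instance.textRecognizer();

  // Step 3: send the image to ML Kit; the returned VisionText
  // contains the recognized text, organized in blocks and lines.
  final VisionText visionText = await recognizer.processImage(visionImage);

  // Release the native resources held by the detector.
  recognizer.close();

  return visionText.text;
}
```

The app can then use the returned string as necessary, for example by showing it in a Text widget or iterating over visionText.blocks for finer-grained results.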