Comet for Data Science

By: Angelica Lo Duca

Overview of this book

This book provides concepts and practical use cases that you can use to quickly build, monitor, and optimize data science projects. Using Comet, you will learn how to manage almost every step of the data science process, from data collection through to creating, deploying, and monitoring a machine learning model. The book starts by explaining the features of Comet, along with exploratory data analysis and model evaluation in Comet. You’ll see how Comet gives you the freedom to choose from a selection of programming languages, depending on which is best suited to your needs. Next, you will focus on workspaces, projects, experiments, and models. You will also learn how to build a narrative from your data, using the features provided by Comet. Later, you will review the basic concepts behind DevOps and how to extend the GitLab DevOps platform with Comet, further enhancing your ability to deploy your data science projects. Finally, you will cover various use cases of Comet in machine learning, NLP, deep learning, and time series analysis, gaining hands-on experience with some of the most interesting and valuable data science techniques available. By the end of this book, you will be able to confidently build data science pipelines according to bespoke specifications and manage them through Comet.
Table of Contents (16 chapters)

Section 1 – Getting Started with Comet
Section 2 – A Deep Dive into Comet
Section 3 – Examples and Use Cases

Introducing model evaluation

Model evaluation is the process of assessing the performance of one or more data science models to decide which one best solves a given task. Model evaluation is an iterative task: we run it over and over until we reach a satisfactory model.
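As a minimal sketch of this loop (assuming scikit-learn and its built-in wine dataset; the candidate models and scoring setup are illustrative choices, not the book's), you can evaluate several candidate models and keep the best-scoring one:

from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# A labeled dataset and two candidate models (illustrative choices).
X, y = load_wine(return_X_y=True)
candidates = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "decision_tree": DecisionTreeClassifier(random_state=42),
}

# The iterative loop: evaluate each candidate and keep the best one.
best_name, best_score = None, -1.0
for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
    if score > best_score:
        best_name, best_score = name, score

print(f"Best model: {best_name} ({best_score:.3f})")

In practice, you would repeat this loop with new candidates, features, or hyperparameters until the score is satisfactory, which is what makes model evaluation iterative.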

Model evaluation depends on the task we want to solve. In general, there are two types of tasks:

  • Supervised learning – You train a model on some labeled data, test the model on other labeled data, and then try to predict the target value for unseen and unlabeled data. In this case, model evaluation is simple because, during the testing phase, you can compare the output produced by the model with the labeled testing data (see the sketches after this list).
  • Unsupervised learning – You do not have any labeled data, but you try to predict the output on the basis of some criteria, such as data similarity. In this case, model evaluation is quite complicated because you do not have any testing...
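To make the supervised case concrete, here is a minimal sketch (again assuming scikit-learn, this time with its toy Iris dataset; both are illustrative choices): the model is trained on labeled data, and its predictions on the held-out test set are compared with the known labels:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small labeled dataset (illustrative choice).
X, y = load_iris(return_X_y=True)

# Split the labeled data into a training set and a testing set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Train a model on the labeled training data.
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate: compare the model's output with the labeled testing data.
y_pred = model.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, y_pred):.3f}")

In the unsupervised case, with no labels to compare against, evaluation has to rely on internal criteria such as data similarity. A sketch under the same assumptions, scoring a hypothetical k-means clustering with the silhouette coefficient:

from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Treat the data as unlabeled: the labels are deliberately ignored.
X, _ = load_iris(return_X_y=True)

# Cluster the data and score the result with an internal criterion
# based on data similarity (no ground-truth labels are needed).
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
print(f"Silhouette score: {silhouette_score(X, labels):.3f}")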