SQL Server Query Tuning and Optimization

By: Benjamin Nevarez

Overview of this book

SQL Server is a relational database management system developed by Microsoft. As a database server, it is a software product with the primary function of storing and retrieving data as requested by other software applications. This book starts by describing the inner workings of the query optimizer, and will enable you to use this knowledge to write better queries and provide the query engine with all the information it needs to produce efficient execution plans. As you progress, you’ll get practical query optimization tips for troubleshooting underperforming queries. The book will also guide you through intelligent query processing and what is new in SQL Server 2022. Query performance topics such as the Query Store, In-Memory OLTP and columnstore indexes are covered as well. By the end of this book, you’ll be able to get the best possible performance for your queries and applications.

Cardinality estimation feedback

As mentioned in Chapter 6, Understanding Statistics, the cardinality estimator estimates the number of rows to be processed by each operator in a query execution plan. Similar to memory grant feedback, and also based on the Query Store, cardinality estimation feedback is another intelligent query processing feature introduced with SQL Server 2022 that can learn and adjust based on the history of previous query executions. As we also saw in that chapter, the cardinality estimator relies on different model assumptions to perform its estimations, and, starting with SQL Server 2014, SQL Server offers two different cardinality estimators to choose from.
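Because the feature depends on the Query Store and on database compatibility level 160, the following is a minimal sketch of how cardinality estimation feedback could be enabled and how any persisted feedback could be inspected. The database name is hypothetical, and the N'CE Feedback' filter value assumes the description reported by the sys.query_store_plan_feedback catalog view; adjust both to your environment.

-- A minimal sketch, assuming a database named AdventureWorks2019; the feature
-- requires the Query Store and database compatibility level 160 (SQL Server 2022)
ALTER DATABASE AdventureWorks2019 SET QUERY_STORE = ON;
ALTER DATABASE AdventureWorks2019 SET COMPATIBILITY_LEVEL = 160;

USE AdventureWorks2019;
GO

-- Cardinality estimation feedback is on by default at compatibility level 160,
-- but it can be controlled explicitly with a database scoped configuration option
ALTER DATABASE SCOPED CONFIGURATION SET CE_FEEDBACK = ON;
GO

-- After a query has executed repeatedly and feedback has been persisted, it can
-- be inspected through the Query Store catalog views (the 'CE Feedback' filter
-- value is an assumption; check feature_desc in your environment)
SELECT qt.query_sql_text,
       pf.feature_desc,
       pf.feedback_data,
       pf.state_desc
FROM sys.query_store_plan_feedback AS pf
JOIN sys.query_store_plan AS p ON p.plan_id = pf.plan_id
JOIN sys.query_store_query AS q ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
WHERE pf.feature_desc = N'CE Feedback';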

The cardinality estimation feedback feature works by analyzing repeating queries. If an existing model assumption appears incorrect or produces a suboptimal query plan, the cardinality estimator will identify and use a model assumption that better fits a given...