
Dude, Where’s My Forced Plan?! – Part 2

In my previous post about Query Store, you saw that the persistence of forced plans does not imply that they are used. This post brings another example where forced plans remain in the system, with no reported failures, yet they are not applied to the desired query.

For this demo, we need a database that is part of an availability group. Within the database we need only one sample table, one user-defined table type, and one stored procedure:

DROP TABLE IF EXISTS dbo.T;
CREATE TABLE dbo.T(
id INT IDENTITY(1,1) NOT NULL,
c1 INT NOT NULL,
c2 INT NULL,
CONSTRAINT PK_T PRIMARY KEY CLUSTERED (id)
);
GO
INSERT INTO dbo.T(c1)
SELECT 1 + ABS(CHECKSUM(NEWID())) % 10000
FROM dbo.GetNums(100000); -- helper function that returns a sequence of 100,000 numbers
GO


CREATE TYPE dbo.Tvp AS TABLE(
Id INT NOT NULL PRIMARY KEY CLUSTERED
)
GO
CREATE OR ALTER PROCEDURE dbo.GetList
(@tvp AS dbo.Tvp READONLY)
AS
SELECT t.*
FROM dbo.T t
INNER JOIN @tvp tvp ON t.id=tvp.Id;
GO

Let’s ensure that Query Store is empty and turned on:

ALTER DATABASE A SET QUERY_STORE CLEAR;
GO
ALTER DATABASE A SET QUERY_STORE = ON (QUERY_CAPTURE_MODE = AUTO);
GO

Now, we can execute our sample query multiple times:

DECLARE @t AS dbo.Tvp;
INSERT @t SELECT TOP (10) id FROM dbo.T ORDER BY 1 DESC;
EXEC dbo.GetList @t;
GO 3

The execution is fast, with a Nested Loops join based execution plan. Let’s suppose we want to be sure that this plan will be used for all further calls, and we want to achieve this by forcing the plan. This is not always a good idea, but for this demo any forced plan will do. When we execute the query again, we should see something like this in Query Store:

[Screenshot: Query Store report showing that the forced plan is used]
The plan forcing is respected, all fine.
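For reference, the plan can also be forced without the UI, via the sp_query_store_force_plan system procedure (the query_id value 35 matches my system, and the plan_id below is only an illustration; both will differ on yours):

```sql
-- Force the Nested Loops plan for our query; take both ids from
-- sys.query_store_query and sys.query_store_plan on your system
EXEC sp_query_store_force_plan @query_id = 35, @plan_id = 43;

-- The forcing can later be removed with:
-- EXEC sp_query_store_unforce_plan @query_id = 35, @plan_id = 43;
```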

One of the main advantages of Query Store is that its data belongs to the database, so it should survive server crashes and failovers. After a failover, we should see a similar screen when we run the query on the server that has taken over the primary role. After the failover, I executed the same query again (against the new primary):

DECLARE @t AS dbo.Tvp;
INSERT @t SELECT TOP (10) id FROM dbo.T ORDER BY 1 DESC;
EXEC dbo.GetList @t;
GO 3

When I looked at the Query Store report, I saw no new executions for this query:

[Screenshot: Query Store report showing no new executions for the query]

I then checked the sys.query_store_query catalog view and figured out why these executions were not captured:

SELECT *
FROM sys.query_store_query
WHERE query_hash = (SELECT query_hash FROM sys.query_store_query WHERE query_id = 35);



[Screenshot: sys.query_store_query output showing two rows with the same query_hash]

Query Store has instantiated a new query_id (57) for my query, and the new executions are associated with it:

[Screenshot: Query Store report showing the executions under the new query_id 57]

Let’s look closer at the batch_sql_handle attribute. In the following figure, you can see the only difference:

[Screenshot: comparison of the two batch_sql_handle values, highlighting the single differing byte]

The selected numbers are hexadecimal values of 20 and 16, respectively. Where does this number come from? It is the database_id attribute. And yes, in this example, my databases have different database_ids:
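You can verify this yourself by running a simple check on both replicas (the database is named A, as in this demo):

```sql
-- Run on both the old and the new primary; in this demo the
-- returned database_id values differ between the two replicas
SELECT name, database_id
FROM sys.databases
WHERE name = N'A';
```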

[Screenshot: sys.databases output showing different database_id values on the two replicas]

We have again a situation where the query_id has changed for a query that we did not touch at all, this time because of different database_id values on the primary and secondary replicas.

Conclusion

When you force a plan for a query that uses table variables or table-valued parameters, you have to check frequently whether it is still in use: the query_id of such queries is not stable and can change even if you did not change the code.
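A simple way to keep an eye on this is to list all forced plans together with the last execution time of the owning query; a forced plan whose query shows no recent executions may have been orphaned by a query_id change. A sketch:

```sql
-- All forced plans in the current database, with the last time
-- the query they belong to was actually executed
SELECT q.query_id,
       p.plan_id,
       p.last_force_failure_reason_desc,
       q.last_execution_time
FROM sys.query_store_plan AS p
INNER JOIN sys.query_store_query AS q
    ON p.query_id = q.query_id
WHERE p.is_forced_plan = 1;
```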

Thanks for reading.

Dude, Where’s My Forced Plan?! – Part 1

Query Store is a great performance troubleshooting tool. I like it; in the past three years, it has saved me a lot of time and money. It allows you to quickly identify execution plan regressions. By forcing an old plan, you can almost instantly implement a workaround for the issue and buy the time necessary for the final fix. Forcing a plan acts (or should act) as a temporary solution for plan regression issues: temporary, but stable and steady until you finally fix the underlying issue. However, forcing plans does not always work. You can find interesting articles about plan forcing in the excellent series of Query Store articles written by Erin Stellato (blog | twitter). In this article, I won’t write about forced plan failures, where a plan fails because of a missing index or object, but will describe a situation where everything seems to be fine – no failure at all – but it does not work.

To demonstrate the issue, we need to create a simple user-defined table type and use it in a stored procedure in the WideWorldImporters sample database.

CREATE TYPE dbo.IntList AS TABLE(
fValue INT NOT NULL PRIMARY KEY CLUSTERED
)
GO
CREATE OR ALTER PROCEDURE dbo.GetOrderList
(@tvp AS dbo.IntList READONLY)
AS
SELECT o.*
FROM Sales.Orders o
INNER JOIN @tvp t on o.OrderID=t.fValue;
GO

It’s very simple: we’ll invoke the stored procedure to get order details for a given list of order ids. Let’s ensure that Query Store is empty and turned on:

ALTER DATABASE WideWorldImporters SET QUERY_STORE CLEAR;
ALTER DATABASE WideWorldImporters SET QUERY_STORE = OFF;
GO
ALTER DATABASE WideWorldImporters
SET QUERY_STORE = ON (QUERY_CAPTURE_MODE = AUTO);
GO

Now, run the following code, which calls the stored procedure with 10 order ids and then, after clearing the cache, calls it again with 2000 order ids:

DECLARE @t AS dbo.IntList;
INSERT @t SELECT TOP (10) OrderId FROM Sales.Orders ORDER BY 1 DESC;
EXEC dbo.GetOrderList @t;
GO 3
DBCC FREEPROCCACHE;
GO
DECLARE @t AS dbo.IntList;
INSERT @t SELECT TOP (2000) OrderId FROM Sales.Orders ORDER BY 1 DESC;
EXEC dbo.GetOrderList @t;

We expect two plans here: a Nested Loops join based one for the first query, and a Merge or Hash join plan for the second, less selective query. Let’s look at Query Store to see how this stored procedure executes. Since Query Store works with queries rather than with procedures, we need to identify the query_id of our SELECT statement:

SELECT qs.query_id, q.query_sql_text, qs.query_hash, qs.batch_sql_handle
FROM sys.query_store_query AS qs
INNER JOIN sys.query_store_query_text AS q ON qs.query_text_id = q.query_text_id
WHERE object_id = OBJECT_ID('dbo.GetOrderList')
ORDER BY qs.last_execution_time DESC

I got the value 1 for the query_id (in your case it could be different, depending on your workload). Here is what we can see in the Tracked query report:

[Screenshot: Tracked query report, Nested Loops join plan for 10 order ids]


[Screenshot: Tracked query report, Merge Join plan for 2000 order ids]

As we expected, we see two different plans: Nested Loop Join for 10 Ids and Merge Join for 2000 order ids.

Let’s assume now that we want the Nested Loops join plan for our query, and that we have no time or no access to the code to achieve this other than by forcing the plan. The assumption is not realistic for this simple query, but it is reasonable for complex queries with complex plans on a production system. Let’s force the lower plan and run the following code:

DECLARE @t AS dbo.IntList;
INSERT @t SELECT TOP (10) OrderId FROM Sales.Orders ORDER BY 1 DESC;
EXEC dbo.GetOrderList @t;
GO
DECLARE @t AS dbo.IntList;
INSERT @t SELECT TOP (2000) OrderId FROM Sales.Orders ORDER BY 1 DESC;
EXEC dbo.GetOrderList @t;
GO

What we expect to see now is the same execution plan for both executions. Let’s check in the Query Store report:

[Screenshot: Query Store report showing the same forced plan used for both calls]

We see that SQL Server reused the same plan for both calls, exactly the one that we forced. All fine.
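For completeness, instead of the Force Plan button in the report, the same forcing can be done in Transact-SQL; query_id 1 is from my system, and the plan_id below is only an illustration, so look it up in sys.query_store_plan on yours:

```sql
-- Find the plans recorded for our query
SELECT plan_id, query_id, is_forced_plan
FROM sys.query_store_plan
WHERE query_id = 1;

-- Force the Nested Loops plan (plan_id value is illustrative)
EXEC sp_query_store_force_plan @query_id = 1, @plan_id = 2;
```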

What would happen if we cleared the cache and called the stored procedure with the parameters in the opposite order, i.e.:

DBCC FREEPROCCACHE;
GO
DECLARE @t AS dbo.IntList;
INSERT @t SELECT TOP (2000) OrderId FROM Sales.Orders ORDER BY 1 DESC;
EXEC dbo.GetOrderList @t;
GO
DECLARE @t AS dbo.IntList;
INSERT @t SELECT TOP (10) OrderId FROM Sales.Orders ORDER BY 1 DESC;
EXEC dbo.GetOrderList @t;
GO

Let’s observe the Query Store report:

[Screenshot: Query Store report showing a third, newly created Nested Loops plan]

What we see now is a third execution plan. The plan that we forced is not reused this time; SQL Server has created a new one. But this plan is also a Nested Loops join plan; it reflects the shape defined by the forced plan. When you look at its XML representation, you will find the UsePlan attribute with the value True. Since there was no plan in the cache, a new one was created, with the same shape but different estimations. Although it is not exactly the same plan, it is still the plan we want to see: a Nested Loops join based one. Still all fine; plan forcing works.

Let’s imagine now that we roll out a new version of the application. There were no changes in our procedure, but somebody included it in the installation script, so the following statement has been executed:

CREATE OR ALTER PROCEDURE dbo.GetOrderList
(@tvp AS dbo.IntList READONLY)
AS
SELECT o.*
FROM Sales.Orders o
INNER JOIN @tvp t on o.OrderID=t.fValue;
GO

From a business logic point of view, there are no changes; altering a procedure with the same text should only mean that a new plan will be generated for it. And this should not be a problem, because by forcing the plan we ensured that the “good plan” will be used. Let’s confirm this by executing the following code:

DBCC FREEPROCCACHE;
GO
DECLARE @t AS dbo.IntList;
INSERT @t SELECT TOP (2000) OrderId FROM Sales.Orders ORDER BY 1 DESC;
EXEC dbo.GetOrderList @t;
GO 10
DECLARE @t AS dbo.IntList;
INSERT @t SELECT TOP (10) OrderId FROM Sales.Orders ORDER BY 1 DESC;
EXEC dbo.GetOrderList @t;
GO 10

Let’s have a look at the Query Store:

[Screenshot: Query Store report showing no new executions for query_id 1]

When you observe this screenshot, you might think: hey dude, it seems you used the old screenshot, it does not show the new executions! No, I did not; I refreshed the report, and the report is correct. But where are the recent 20 executions? Well, let’s execute the statement looking for the query_id again:

[Screenshot: query result showing that our SELECT statement now has a new query_id]

Ooops… Our SELECT query got a new query_id! And since a forced plan is associated not with a particular query text, but with a given query_id, it is clear why it does not affect the execution. The plan forcing is correctly configured and there is no forced plan failure; it is simply waiting for an execution of the query with query_id 1. If that happened, it would work correctly, but since a new query_id is now associated with our query, the forced plan has nothing to do with it. We can confirm this by observing the Query Store report:

[Screenshot: Query Store report showing all 20 executions using the Merge Join plan]

You can see that all 20 executions are done with the Merge Join based execution plan, although we have forced the plan and our query was not changed at all!

Forced plans remain effective only as long as the query_id does not change. And the query_id depends on the following five parameters:

  • query_hash – hash value of the query text
  • object_id – id of the object (e.g. stored procedure) in which the query is executed (0 for standalone queries)
  • context_settings_id – id of the context settings (ANSI settings etc.)
  • query_parameterization_type – indicates how the query is parameterized
  • batch_sql_handle – id of the statement batch the query is part of; populated only if the query references temporary tables or table variables
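You can inspect all five of these attributes directly in the sys.query_store_query catalog view; here is a sketch for our procedure:

```sql
-- The five attributes that determine the query_id,
-- for all statements captured inside dbo.GetOrderList
SELECT q.query_id,
       q.query_hash,
       q.object_id,
       q.context_settings_id,
       q.query_parameterization_type,
       q.batch_sql_handle
FROM sys.query_store_query AS q
INNER JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
WHERE q.object_id = OBJECT_ID('dbo.GetOrderList');
```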

Since we call the stored procedure using a table-valued parameter, the batch_sql_handle attribute changed after we issued the ALTER PROCEDURE statement. Here is the output of a query that returns both Query Store entries for the same query text:

[Screenshot: two sys.query_store_query rows for the same query text with different batch_sql_handle values]

You can see that the values in these two rows are slightly different, and this small difference undid all our plan-forcing efforts and revived our issue.

Conclusion

Be very careful with forcing plans for queries referencing table variables and table-valued parameters. The query_id attribute for such queries is very unstable, and a forced plan, even as a workaround, could have a very short lifetime. You should use a different approach to fix such issues.
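If you can eventually change the code, one alternative (shown here only as a sketch, not a general recommendation) is to pin the join strategy with a query hint inside the procedure itself; the hint survives ALTER PROCEDURE and does not depend on the query_id:

```sql
CREATE OR ALTER PROCEDURE dbo.GetOrderList
(@tvp AS dbo.IntList READONLY)
AS
SELECT o.*
FROM Sales.Orders o
INNER JOIN @tvp t ON o.OrderID = t.fValue
OPTION (LOOP JOIN); -- pins a Nested Loops join regardless of query_id changes
GO
```

Note that a join hint is a blunter instrument than plan forcing, since it applies in all circumstances, so evaluate it carefully before using it in production.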

Thanks for reading.


SQL Server 2019 CTP3.0 Query Store Changes

SQL Server 2019 CTP 3.0 brings small changes to Query Store. Two parameters have new default values compared to the previous SQL Server version:

Attribute              SQL Server 2017   SQL Server 2019
MAX_STORAGE_SIZE_MB    100               1000
QUERY_CAPTURE_MODE     ALL               AUTO

In addition to the changed default value, the minimum number of query executions in the AUTO query_capture_mode required for storing a query, its plan(s) and runtime statistics has been increased in SQL Server 2019 from 3 to 30. That means Query Store does not store anything for the first 29 executions of a query. It reserves query_ids, but it starts storing the execution plan and runtime stats only from the 30th execution within a single day. When you run the following code, you will find nothing in Query Store:

IF DB_ID('Z') IS NULL
CREATE DATABASE Z;
GO
USE Z;
GO
ALTER DATABASE Z SET QUERY_STORE CLEAR;
GO
ALTER DATABASE Z SET QUERY_STORE = ON;
GO
CREATE TABLE dbo.T(id INT);
GO
SELECT COUNT(*) FROM T;
GO 29
SELECT SUM(id) FROM T;
GO 29

With the next execution of a query, its details, plan and runtime stats will be stored in Query Store:

SELECT SUM(id) FROM T;

[Screenshot: Query Store showing the query with count_executions = 1]

As you can see, the count_executions attribute shows 1; all previous executions are ignored. Only the query_id was reserved, nothing else was stored; therefore, our query has query_id 2 because it was the second statement in the sample code sequence.

There is another enhancement of the query_capture_mode attribute. In SQL Server 2017 it can have the following values: ALL, AUTO and NONE. SQL Server 2019 brings a new one – CUSTOM. SQL Server Management Studio 18 (SSMS) does not support it, but you can configure this feature by using Transact-SQL. The CUSTOM mode is configured through QUERY_CAPTURE_POLICY options. The following options define the minimum conditions for a query to be captured by Query Store:

  • EXECUTION_COUNT – minimum number of executions for a query to be captured. As we saw earlier, the default is 30 executions per day.
  • TOTAL_COMPILE_CPU_TIME_MS – minimum total query compilation CPU time per day for a query to be persisted in Query Store. The default is 1000 milliseconds.
  • TOTAL_EXECUTION_CPU_TIME_MS – minimum total CPU time spent on query execution per day for a query to be persisted in Query Store. The default is 100 milliseconds.

A query will be captured if at least one of these three conditions is fulfilled, i.e. they are evaluated as EXECUTION_COUNT OR TOTAL_COMPILE_CPU_TIME_MS OR TOTAL_EXECUTION_CPU_TIME_MS.

The fourth option is STALE_CAPTURE_POLICY_THRESHOLD, which defines the evaluation interval used to determine whether a query should be captured. The default is 1 day, and it can be set from 1 hour to 7 days.

Let’s try to reconfigure Query Store to capture queries after 20 executions:

USE Z;
GO
ALTER DATABASE Z SET QUERY_STORE CLEAR;
GO
ALTER DATABASE Z SET QUERY_STORE = ON
(
QUERY_CAPTURE_POLICY =
(
EXECUTION_COUNT = 20,
TOTAL_COMPILE_CPU_TIME_MS = 1000,
TOTAL_EXECUTION_CPU_TIME_MS = 100,
STALE_CAPTURE_POLICY_THRESHOLD = 1 day
)
);
GO
SELECT COUNT(*) FROM T;
GO 29
SELECT SUM(id) FROM T;
GO 29

EDIT: When you execute this code, you see no errors, but Query Store remains empty, which suggests that the CUSTOM mode does not work in SQL Server 2019 CTP 3.0 even with Transact-SQL. However, the QUERY_CAPTURE_POLICY settings take effect only if you also set QUERY_CAPTURE_MODE to CUSTOM:

ALTER DATABASE Z SET QUERY_STORE = ON
(
QUERY_CAPTURE_MODE = CUSTOM,
QUERY_CAPTURE_POLICY =
(
EXECUTION_COUNT = 20,
TOTAL_COMPILE_CPU_TIME_MS = 1000,
TOTAL_EXECUTION_CPU_TIME_MS = 100,
STALE_CAPTURE_POLICY_THRESHOLD = 1 day
)
);
GO

So, my initial claim was wrong; the CUSTOM mode for the QUERY_CAPTURE_MODE attribute works in SQL Server 2019 CTP 3.0, you just need to use the proper syntax.

Thanks for reading.

SQL Server 2019 Query Store Defaults

When you look at articles, posts and documents about new features and enhancements in SQL Server 2019 CTP2, you will find nothing about Query Store. However, there are some graphical enhancements in SQL Server Management Studio 18.0, and the default configuration of Query Store attributes has changed too.

First, SSMS 18.0. From this version, you can see another Query Store report – Query Wait Statistics. When you click on it, you see the aggregated waits per category in a given time interval (the default is the last hour).

[Screenshot: Query Wait Statistics report showing aggregated waits per category]

By clicking on the wait category bar, a new screen shows up with ids of queries that contribute to this wait category.

[Screenshot: detail view listing the query ids contributing to the selected wait category]

You can also use a tabular view for the initial report (aggregated time per wait category):

[Screenshot: tabular view of aggregated wait-category times]

The second change is related to the default configuration of Query Store. When you enable Query Store for a database without specifying attributes, by using the Properties tab or the command:

ALTER DATABASE WideWorldImporters SET QUERY_STORE = ON;

SQL Server adds default values for the Query Store parameters. In SQL Server 2019, Microsoft decided to change the default values for some of them. Here is a table with the attributes and their former and current default values:

Attribute                SQL Server 2017   SQL Server 2019
max_storage_size_mb      100               500
interval_length_minutes  60                15
query_capture_mode       ALL               AUTO
flush_interval_seconds   900               3,000
max_plans_per_query      200               1,000
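You can check which values are actually in effect for a given database by querying the sys.database_query_store_options catalog view:

```sql
-- Current Query Store configuration of the current database
SELECT actual_state_desc,
       max_storage_size_mb,
       interval_length_minutes,
       query_capture_mode_desc,
       flush_interval_seconds,
       max_plans_per_query
FROM sys.database_query_store_options;
```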


Thanks for reading.

Workshop: Performance and Query Tuning with SQL Server 2017 and 2019

As part of the SQL Saturday Linz conference, I will deliver a performance tuning workshop on 17th January at the Tabakfabrik in Linz. The full workshop name is Performance and Query Tuning with SQL Server 2017 and 2019. As the name suggests, the workshop covers performance tuning techniques specific to the two most recent SQL Server versions. It is aimed at application and database developers who already work or plan to work with the latest versions of SQL Server – 2017 and 2019. The first part of the workshop focuses on execution plans, statistics and the query optimizer. It covers common mistakes made by developers that lead to poorly performing queries and brings tips on how to avoid them.

The second part of the workshop includes a detailed look at Query Store, a powerful troubleshooting tool introduced in SQL Server 2016 and improved in later versions. You will see how you can use it not only to troubleshoot, but also to fix regressed queries, and its power will be demonstrated in the context of upgrading to a new SQL Server version or changing the compatibility level. At the end, you will learn about the database engine improvements in SQL Server 2019 CTP2 under the feature with the most promising name – Intelligent Query Processing.

The workshop will cover the following topics:

  • How SQL Server uses statistics and what the common issues with them are
  • Quick overview of index usage; how to create a useful index
  • Typical mistakes made by developers that lead to poorly performing queries
  • Troubleshooting parameter sniffing issues in different SQL Server versions
  • Cardinality estimation issues and tuning queries with large tables
  • Tips for writing well-performing queries
  • Query Store as a game changer in SQL Server performance troubleshooting
  • Fixing execution plans for regressed queries by using Query Store
  • Improvements in SQL Server 2019 Intelligent Query Processing

Level: 300 – Advanced

Here is the link for registration: https://bit.ly/2rkj4wX.

Hope to see you in Linz!

Rgds,
M.