Highly concurrent systems that feed off normalized data stores typically require a middle layer of logic to serve front-end needs. More often than not, this logic lives in the same stored procedures that the web layer calls. While that may be the right place for simple logic, for complex calculations and joins it is simply not efficient. The answer in these cases is to create a meta layer of data that pre-joins tables and rolls up the necessary aggregations. To paint a better picture: in an ideal database, each procedure that feeds the front end would contain a simple SELECT statement against a single table. We know that in real life this is not always possible, but we should think in those terms with every web proc we write. The reason is simple: complex logic is both IO and CPU intensive. We have no control over web traffic, but we do have control over the logic we use to serve it. It is often better to run background jobs that perform the complicated logic on behalf of the web procs and dump the results into static tables. This methodology essentially creates a denormalized meta layer of data on top of the normalized data. The argument against it is that the data will not be truly real-time. However, you need to ask yourself what's more important: "real-time" data that is 5-10 times slower, or preaggregated data that is potentially seconds […]
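As a rough illustration of the idea, here is a minimal sketch of such a background rollup job. The table and column names (dbo.Orders, dbo.OrderTotals_Static) are hypothetical, not from the article:

```sql
-- Hypothetical background job: pre-aggregate order totals into a static
-- reporting table so the web proc only needs a single-table SELECT.
TRUNCATE TABLE dbo.OrderTotals_Static;

INSERT INTO dbo.OrderTotals_Static (CustomerID, OrderCount, TotalAmount, LoadedAt)
SELECT  o.CustomerID,
        COUNT(*),
        SUM(o.Amount),
        GETDATE()
FROM    dbo.Orders o
GROUP BY o.CustomerID;

-- The web proc then reduces to a trivial, low-IO read:
-- SELECT OrderCount, TotalAmount
-- FROM dbo.OrderTotals_Static
-- WHERE CustomerID = @CustomerID;
```

Scheduled every few minutes via a SQL Agent job, this trades a small data latency for a web-facing query that touches one narrow table.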
Though SQL may not be the most elegant language for string handling, it does perform most common functions, and in a set-based manner. The SQL SUBSTRING function has essentially the same syntax found in other languages. In this example we will take the common scenario of extracting a string from between two fixed characters. We'll start with a domain name that contains a subdomain, and extract only the top-level domain from the results. Keep in mind that, for the purposes of this example, all the URLs listed in the table need to have a subdomain.

Setup the Data

Let's begin by creating a new table named URLs and inserting some data.

[cc lang="sql"]
IF EXISTS (SELECT * FROM dbo.sysobjects
           WHERE id = OBJECT_ID(N'dbo.URLs')
           AND OBJECTPROPERTY(id, N'IsUserTable') = 1)
    DROP TABLE dbo.URLs

CREATE TABLE dbo.URLs(
    url nvarchar(1024) NULL
)

INSERT dbo.URLs (url) VALUES (N'www.litwareinc.com/')
GO
INSERT dbo.URLs (url) VALUES (N'www.treyresearch.net/')
GO
INSERT dbo.URLs (url) VALUES (N'www.adatum.com/')
GO
INSERT dbo.URLs (url) VALUES (N'www.northwindtraders.com/')
GO
INSERT dbo.URLs (url) VALUES (N'www.wideworldimporters.com/')
GO
INSERT dbo.URLs (url) VALUES (N'www.proseware.com/')
[/cc]

Now that we have the data set up, let's start with the extraction. For the SUBSTRING function, the first parameter is the string we want to parse, the next parameter is the starting position of the extraction, and the last parameter is the number of characters to extract.

[cc lang="sql"]
DECLARE @first_char nvarchar(10)
DECLARE @second_char nvarchar(10)
SET @first_char = '.';
SET @second_char = '/';

SELECT SUBSTRING ( -- column url -- […]
[/cc]
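The excerpt cuts off before the SELECT is finished. One plausible way such an extraction could be completed (this is a sketch, not necessarily the article's exact query) is to pair SUBSTRING with CHARINDEX to locate the two fixed characters, assuming every url contains both a `.` and a `/`:

```sql
DECLARE @first_char nvarchar(10) = '.';
DECLARE @second_char nvarchar(10) = '/';

-- Extract everything between the first '.' and the '/',
-- e.g. 'www.litwareinc.com/' yields 'litwareinc.com'.
SELECT url,
       SUBSTRING(
           url,
           CHARINDEX(@first_char, url) + 1,
           CHARINDEX(@second_char, url) - CHARINDEX(@first_char, url) - 1
       ) AS top_level_domain
FROM dbo.URLs;
```

CHARINDEX returns the 1-based position of the search string, so the start position is one past the first `.` and the length is the distance between the two delimiters minus one.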
Sometimes slow queries can be rectified by changing the query around a bit. One example of this is when multiple values are compared within a WHERE clause using an OR or IN. Often, OR causes a scan against an index or table, which may not be the preferable execution plan in terms of IO consumption or overall query speed. Many variables come into play when the query optimizer creates an execution plan. These include a multitude of hardware specs, instance settings, database settings, statistics (table, index, auto-generated), and also the way the query is written. The one we are changing here is the way the query is written. Surprising as it may seem, even though two different queries can return exactly the same results, the paths they take can be entirely different based purely on the format of the query.

UNION vs OR

In most of my experience with SQL Server, OR is generally less efficient than a UNION. What tends to happen with an OR is that it causes a scan more often. In some cases a scan may actually be the better path to take, and I'll leave that for a separate article, but overall I've found that touching more records is the biggest cause of slowness. So let's start our comparison. Here is our OR statement:

[cc lang="sql"]
SELECT SalesOrderID, *
FROM sales.SalesOrderDetail
WHERE ProductID = 750
   OR ProductID = 953
[/cc]

From this execution […]
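The excerpt is truncated before the rewrite is shown, but a plausible UNION equivalent of the OR query above would look like this (a sketch, assuming the AdventureWorks Sales.SalesOrderDetail table):

```sql
-- Each branch filters on a single ProductID, which can give the optimizer
-- two index seeks instead of one scan (actual plans depend on statistics).
SELECT SalesOrderID, *
FROM sales.SalesOrderDetail
WHERE ProductID = 750

UNION

SELECT SalesOrderID, *
FROM sales.SalesOrderDetail
WHERE ProductID = 953
```

Because the two branches filter on different ProductID values, they can never overlap, so UNION ALL would return the same rows while skipping the duplicate-elimination step that UNION adds.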
When you have multiple rows for, let's say, a ProductID, there will probably be times when you need to select the top n rows for each of those ProductIDs based on some sort criteria. This is a relatively common TSQL need, and there are a couple of different methods to do it. First, let's create some dummy data.

[cc lang="sql"]
CREATE TABLE [#productzabub](
    [SalesOrderID] [int] NOT NULL,
    [ProductID] [int] NOT NULL
)

INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71782, 707)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71783, 707)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71784, 707)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71782, 708)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71783, 708)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71784, 708)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71782, 711)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71783, 711)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71784, 711)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71782, 712)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71783, 712)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71784, 712)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71782, 714)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71783, 714)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71784, 714)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71782, 715)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71783, 715)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71784, 715)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71783, 716)
INSERT [#productzabub] ([SalesOrderID], [ProductID]) VALUES (71784, 716)
[/cc]

Now that we have our data created, let's select the top 2 rows for each ProductID based upon the earliest SalesOrderID. This SalesOrderID could actually represent any sequential column in the table, be it a date or a primary […]
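One common way to do this (a sketch of the general technique, not necessarily the article's own query) is ROW_NUMBER() partitioned by ProductID:

```sql
-- Rank rows within each ProductID by SalesOrderID ascending,
-- then keep the two earliest orders per product.
SELECT SalesOrderID, ProductID
FROM (
    SELECT SalesOrderID,
           ProductID,
           ROW_NUMBER() OVER (PARTITION BY ProductID
                              ORDER BY SalesOrderID ASC) AS rn
    FROM #productzabub
) AS ranked
WHERE rn <= 2;
```

Changing the ORDER BY inside the OVER clause (to a date column, for example) changes which n rows survive without touching the rest of the query.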
Every once in a while I'll find a field someone created where the datatype choice was not the best. This comes up relatively often with varchar vs char. All of the confusion comes from not understanding the differences, causes, and effects of each. With that, I'll outline the only times I will use one over the other. Let's take a look at the behavior of char vs varchar:

[cc lang="sql"]
CREATE TABLE #meme
(
    first_name char(50),
    last_name varchar(50)
)

INSERT INTO #meme (first_name, last_name)
SELECT 'john', 'smith'

SELECT * FROM #meme

SELECT DATALENGTH(first_name), DATALENGTH(last_name)
FROM #meme
[/cc]

You can see from the value returned by DATALENGTH that first_name is 50 bytes long despite holding only 4 characters; the remaining positions are padded with trailing spaces. Storage-wise, varchar is out of the gate up to 2 bytes larger than a char, because it needs to store the length of the value. (In SQL Server that length overhead is 2 bytes regardless of the declared size; the 1-byte-under-256 rule, and the resulting habit of declaring columns as varchar(255), comes from other databases such as MySQL.) Pain-in-the-butt wise, char is far more of a pain to deal with unless all the data in the char column takes up exactly the amount of space defined for the column. I typically only use char if all the values are guaranteed to be exactly the same size. Even then, I'm hesitant, partly because disk space is not as much of an issue as it used to be, and mostly because if you DO have variable-length values in a char column, then the […]
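To make the padding pain concrete, here is a small follow-on sketch using the #meme table from the block above; it shows why char values usually need an RTRIM before being used in string operations:

```sql
-- The char column carries its trailing spaces into any concatenation,
-- so the untrimmed version puts 46 blanks between the two names.
SELECT first_name + ' ' + last_name        AS padded,
       RTRIM(first_name) + ' ' + last_name AS trimmed
FROM #meme;
```

Equality comparisons ignore trailing spaces in SQL Server, but LIKE patterns, concatenation, and application code generally do not, which is where the cleanup burden shows up.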