Comparing Query Performance

If you write T-SQL, then comparing the performance of two queries is probably something you do on a daily basis. The difficult part of comparing queries is getting an accurate baseline. Why is this, you ask? Because SQL Server has behind-the-scenes functionality that optimizes queries for repeated execution, allowing a query to consume minimal resources the second time it is run. This is why you will often notice that the second run can be considerably faster than the first. There are a few features that make this happen.

Plan Caching

Immediately prior to the execution of a query, SQL Server needs to determine the best route to take for the query. This includes the indexes to use, the order in which to reference the tables, the types of join algorithms to perform, and many other factors. Finding this route is a rather expensive process, so instead of performing it every time a query is run, SQL Server caches the plan in memory so it can be reused. Naturally, the second time the query is run it will be quicker, because the execution plan does not have to be determined again. There are two ways to clear a plan cache: you can clear all plans on the instance by running DBCC FREEPROCCACHE with no parameters, or you can pass in the plan handle of the single plan to clear. Running the following command will clear the plan cache for the entire instance:

[cc lang="sql"]
-- Clear the instance
DBCC FREEPROCCACHE
[/cc]

This will […] Continue reading ...
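As a sketch of the second, more surgical option, you can look up a plan handle in the DMVs and pass it to DBCC FREEPROCCACHE so only that one plan is evicted (the LIKE filter and the hex value below are placeholders, not real values -- substitute the query text and handle from your own instance):

[cc lang="sql"]
-- Find the plan handle for a specific cached query
-- (the LIKE filter is an illustrative placeholder)
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%MyQueryText%';

-- Clear only that plan by passing the handle returned above
-- (the hex value here is a placeholder, not a real handle)
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);
[/cc]

This way the rest of the instance's cached plans are left alone, which matters when baselining on a shared server.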

SQL Server 2012 – Features Update

After attending a meeting with a Microsoft engineer, I have some more good news on Denali. Unfortunately the meeting ran short and we didn't get through all the content, so I could not elaborate on some of the points below. I still listed them all.

Performance Data Warehouse

This has existed since SQL Server 2008, however it now comes with a wizard and is much easier to set up. Basically, you install a new SQL Server and run through a wizard (using a domain account) to implement the Performance Data Warehouse, simply by selecting the remote boxes you want to monitor. The ETL is done for you automatically and collects data at a 15-minute interval. Very cool.

Codename "Juneau"

New SQL Server developer tools integrated with Visual Studio. At the time of this writing, it's on version CTP3. This tool looks excellent. It promises to make database development much easier and wrapped in source control. Its easy integration with Azure seems like it will make Azure more attractive. You can view videos here: Download here.

FileTable

Store files (PowerPoint, Word) and link them into SQL Server, with transactional support for accessing the files. You can drag and drop files into a directory, and then dynamically access them via SQL Server. When files are added, searches can be run against the documents using Full-Text Search.

Semantic Search within Full-Text

An optional parameter that captures the top 100 statistically relevant words from a document. It also captures the top 10 similar documents based on the […]
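To make the FileTable point concrete, here is a minimal sketch of creating one (the table and directory names are my own illustrative assumptions; it also assumes FILESTREAM is already enabled on the instance and the database has a FILESTREAM filegroup and directory configured):

[cc lang="sql"]
-- A FileTable has a fixed, system-defined schema, so no column list is given
CREATE TABLE dbo.DocumentStore AS FILETABLE
WITH
(
    FILETABLE_DIRECTORY = 'DocumentStore',
    FILETABLE_COLLATE_FILENAME = database_default
);

-- Files dropped into the Windows share surface as rows
SELECT name, file_type, cached_file_size
FROM dbo.DocumentStore;
[/cc]

Once created, the directory is exposed as a Windows share, and documents copied there become queryable rows with no application code.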

Dedicated SQL Server Hosting

Want dedicated SQL Server hosting? The following guide may help in evaluating SQL Server hosts.

SQL Server Azure – SQL Server Azure is Microsoft's SQL Server cloud computing solution that allows you to remotely administer SQL Server via your own SQL Server Management Studio. While it is difficult to figure out the pricing model, it is obviously competitive, as these are the times to gather as many clients as possible. This network also makes use of a CDN (Content Delivery Network), and you can use SQL Server Management Studio to administer your site.

iWeb – This is the host I use. Yes, I use them for WordPress hosting. Why do I host a SQL Server site on WordPress? Basically because I believe in using the right tool for the job. What I have to say about this host is that, according to Netcraft, they are one of the most reliable hosts there is. According to Netcraft, iNetU is also one of the most dependable SQL Server hosts there is. The following link is from May 2010, however they have been reliable ever since then.

Rackspace – Rackspace is an undeniable source of SQL Server hosting. While their prices are expensive, their support is extensive and they have superior customer service. Click here to check out their SQL Server hosting. Continue reading ...

Audit Tables Using XML Schema

Auditing tables can be a complicated procedure if you require a new table for every table being audited. Fortunately, the XML datatype can help automate this procedure by auditing multiple tables into a single table. The following trigger template can be used to audit any table into one history table. First, let's create the table needed to hold the audited values:

[cc lang="sql"]
CREATE TABLE [Log].[AuditTableHistory]
(
    [HistoryID] [int] IDENTITY(1,1) NOT NULL,
    [TableSchema] [varchar](10) NULL,
    [TableName] [varchar](100) NULL,
    [AuditValue] [xml] NULL,
    [DateCreated] [datetime] NULL,
    CONSTRAINT [PK_AUDITTABLEHISTORY] PRIMARY KEY CLUSTERED
    (
        [HistoryID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY]
GO

-- Now let's create the trigger
CREATE TRIGGER [Audit].[Trg_Schema_Table] ON [Schema].[Table]
AFTER UPDATE, DELETE
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    DECLARE @AuditValue xml;

    SET @AuditValue =
    (
        SELECT TOP 1 *
        FROM DELETED
        FOR XML AUTO
    );

    INSERT INTO Log.AuditTableHistory
    (
        AuditValue,
        TableSchema,
        TableName,
        DateCreated
    )
    SELECT
        AuditValue  = @AuditValue,
        TableSchema = OBJECT_SCHEMA_NAME(parent_obj),
        TableName   = OBJECT_NAME(parent_obj),
        DateCreated = GETDATE()
    FROM sysobjects so
    WHERE so.id = @@PROCID
END
GO
[/cc]

We can apply this trigger to any table we want just by changing the "ON" statement; it will work for any table. The only difficulty comes in when we change the schema of a particular table. When this happens, we need to know the date/time the table schema changed so […]
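As a sketch of reading the audited values back out, the xml methods can shred a column from the stored document. Note that FOR XML AUTO names the root element after its source table, which inside the trigger is DELETED; the 'CustomerName' attribute below is a hypothetical audited column, not one from the original table:

[cc lang="sql"]
-- Pull recent audit rows for one table and extract a single old value
SELECT  h.HistoryID,
        h.DateCreated,
        h.AuditValue.value('(/DELETED/@CustomerName)[1]', 'varchar(100)') AS OldCustomerName
FROM    Log.AuditTableHistory h
WHERE   h.TableSchema = 'Schema'
  AND   h.TableName   = 'Table'
ORDER BY h.DateCreated DESC;
[/cc]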

When to use the XML Datatype

When I first heard of the XML datatype, I had mixed feelings. The reason is that, from a traditional approach, all data structures can be represented using the classical entity-relationship model. For me, a problem arises when trying to represent "documents". When I say documents, I don't just mean things like Word docs. What I mean is data structures that mean nothing when the data is broken up into segments, and are only relevant when represented as a whole document. An example of this is a contract. It is possible to take the disparate pieces of a contract and break them into separate rows and columns according to the contract terms. However, when you break a contract up into individual pieces, each piece means nothing on its own. Its context is only meaningful when the pieces of the contract are together. This is an example of when to use the XML datatype. I recently wrote a rules engine where I created all the rules using XML. When I showed some people, their immediate reaction was, "Why can't we break these individual parts of the rule into a table?" My answer was, "Because separated, these individual rule items mean nothing; there is no reason to separate them. They only mean something when they are put together, because that defines the rule." In other words, the entire rule is atomic; there is no reason whatsoever to break it apart. When it is separate it has no meaning. […] Continue reading ...
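To make the rules-engine idea concrete, here is a minimal sketch of storing a rule whole as an XML document (the table name and the rule's element and attribute names are my own illustrative assumptions, not the actual engine described above):

[cc lang="sql"]
-- Each rule is stored as one atomic document, never decomposed
CREATE TABLE dbo.BusinessRule
(
    RuleID  int IDENTITY(1,1) PRIMARY KEY,
    RuleDef xml NOT NULL
);

INSERT INTO dbo.BusinessRule (RuleDef)
VALUES ('<rule name="MinOrderAmount">
           <condition field="OrderTotal" operator="gte" value="10.00" />
           <action type="approve" />
         </rule>');

-- The document can still be searched without breaking it apart
SELECT RuleID
FROM dbo.BusinessRule
WHERE RuleDef.exist('/rule[@name="MinOrderAmount"]') = 1;
[/cc]

The exist() method lets you filter on pieces of the document while the rule itself stays intact as a single value.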
