Schemas were introduced in SQL Server 2005 as a replacement for object owners. A schema is a way to group objects into separate categories, which simplifies permissions and helps with categorization and organization. To create a schema, simply do the following:

[cc lang="sql"]
CREATE SCHEMA app AUTHORIZATION dbo
[/cc]

Here, app is the name of the schema and dbo is its owner. Users, groups, or roles can be specified as owners. I personally like to use schemas to grant permissions to a group or user automatically. Grant the permissions a user needs at the schema level, and any object you create under that schema carries those permissions for that user. This is better than explicitly granting permissions on every object you create.
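To make the pattern concrete, here is a minimal sketch; the role name app_readers and the Orders table are made up for illustration:

[cc lang="sql"]
-- hypothetical role used to hold the schema-level permissions
CREATE ROLE app_readers;

-- grant read/execute on the schema to the role
GRANT SELECT, EXECUTE ON SCHEMA::app TO app_readers;

-- any object created in the app schema afterwards is automatically
-- covered by the grant above, with no per-object GRANT needed
CREATE TABLE app.Orders
(
    OrderID INT PRIMARY KEY,
    OrderDate DATETIME
);
[/cc]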
There is definitely a lack of awareness in the SQL world regarding user-defined functions and the potential performance hit they can cause when used within your queries. Don't get me wrong, I would love nothing more than to be able to centralize my commonly used code into functions for reuse. In a lot of cases this is possible, however there are specific cases where it can cause a huge performance impact.

The Problem

The one thing we need to be aware of with SQL is that its efficiency lies in the fact that it deals with data in SETS. Its power does not come from performing row-by-row operations; rather, it wants to retrieve chunks of data and manipulate them as recordsets. Keeping this in mind, you can look out for scenarios where certain operations force more of a row-by-row operation and therefore impact performance. The most common no-no is the use of scalar functions within a set-based operation. It seems (but I can't prove) that SQL 2008 has actually made some great strides in dealing with these situations, however there will always be a negative impact. First, let's look at a common scenario.

The Test

First, let's deploy this scalar user-defined function, which calculates the end of month for a given date:

[cc lang="sql"]
CREATE FUNCTION [dbo].[ufn_GetLastDayOfMonth]
(
    @pInputDate DATETIME
)
RETURNS DATETIME
BEGIN
    DECLARE @vOutputDate DATETIME
    SET @vOutputDate = CAST(YEAR(@pInputDate) AS VARCHAR(4)) + '/'
        + CAST(MONTH(@pInputDate) […]
[/cc]
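As a rough sketch of what to look for (table and column names are hypothetical), compare calling the scalar UDF once per row against an inline, set-based expression that produces the same end-of-month value:

[cc lang="sql"]
-- scalar UDF: the function is invoked for every row returned
SELECT OrderID,
       dbo.ufn_GetLastDayOfMonth(OrderDate) AS EndOfMonth
FROM dbo.Orders;

-- inline, set-based equivalent: no per-row function call
SELECT OrderID,
       DATEADD(DAY, -1,
           DATEADD(MONTH, DATEDIFF(MONTH, 0, OrderDate) + 1, 0)) AS EndOfMonth
FROM dbo.Orders;
[/cc]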
The SQL Server Agent by default only keeps 1000 rows of history across all jobs, with a further limit of 100 rows per job. These rows include not only a row for the overall job execution, but a row for each step in the job as well. The limit can fill up quickly, leaving you in the dark when you need to troubleshoot what happened in a job. To change these values, you need sysadmin access to the SQL Server Agent. In Object Explorer, right-click the SQL Server Agent node and choose Properties. A pop-up window appears displaying the SQL Server Agent properties. Select the History page and the history settings are displayed: Here you have the option to not limit the agent job history, by unchecking the corresponding box. You can also choose the maximum number of rows to store for all jobs, and an option to limit each job. The last option is to remove history older than a specified time period. This is recommended if you do not limit the history log size, since otherwise no records will ever be pruned.
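If you prefer to inspect or prune the history yourself rather than rely on the agent's settings, a minimal sketch against msdb looks like this (the job name is hypothetical):

[cc lang="sql"]
-- most recent history rows, including the per-step rows
SELECT TOP (100)
       j.name, h.step_id, h.step_name, h.run_status,
       h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
ORDER BY h.instance_id DESC;

-- manually remove history older than 30 days for a single job
DECLARE @cutoff DATETIME;
SET @cutoff = DATEADD(DAY, -30, GETDATE());

EXEC msdb.dbo.sp_purge_jobhistory
    @job_name = N'Nightly ETL Load',
    @oldest_date = @cutoff;
[/cc]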
This command shows the last statement executed for a particular SPID. It is mostly used for troubleshooting purposes, to determine the exact command a particular SPID is running. You must be a member of the sysadmin fixed server role, or have VIEW SERVER STATE permission (on SQL 2005+). As for a real-world use, I will mostly use this command after using sp_who2 to find out which SPID is taking a lot of resources. To execute it, simply replace the SPID 55 below with the one you want to spy on.

[cc lang="sql"]
DBCC INPUTBUFFER(55)
[/cc]

The results you are looking for are displayed in the third column of the output (EventInfo). This shows the last statement that has been, or is currently being, executed by the SPID. Another alternative to DBCC INPUTBUFFER is the dm_exec_requests DMV. This query will show the currently executing statement for a SPID:

[cc lang="sql"]
SELECT SQLStatement =
    SUBSTRING
    (
        qt.text,
        (er.statement_start_offset / 2) + 1,
        (CASE
            WHEN er.statement_end_offset = -1
                THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
            ELSE er.statement_end_offset
         END - er.statement_start_offset) / 2 + 1
    )
FROM sys.dm_exec_requests er
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt
WHERE er.session_id = 54
[/cc]

This DMV only works on SQL 2005+ and also requires the VIEW SERVER STATE permission as a minimum. The one advantage DBCC INPUTBUFFER has over this is that the DMV only shows the currently executing request, not the previously executed one, so you need to be a little quicker to catch it. The above DMV […]
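If you miss the window while the statement is running, one option is sys.dm_exec_connections, which exposes most_recent_sql_handle for the last statement executed on a connection. A minimal sketch, assuming the same SPID 55 and VIEW SERVER STATE permission:

[cc lang="sql"]
-- last statement run on the connection, even after it has finished (SQL 2005+)
SELECT c.session_id,
       qt.text AS LastStatement
FROM sys.dm_exec_connections AS c
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS qt
WHERE c.session_id = 55;
[/cc]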
This is typically a hot topic and I'm going to try to tackle it with my rudimentary math. Ultimately, given enough time and a high enough frequency of generation, there is no such thing as a completely unique random number. There will always be some chance that a random number is regenerated, even though the chances go way down with a larger number of bytes.

Using Numbers

Let's first look at probably the most random way to generate a random number: a function introduced in SQL 2008 called CRYPT_GEN_RANDOM(). This function takes a byte length as a parameter and returns a random hex-based value of that many bytes. We can easily convert this varbinary data to an integer-based value. The function is based upon a low-level Windows API and is cryptographically secure.

[cc lang="sql"]
SELECT CAST(CRYPT_GEN_RANDOM(8) AS bigint)
[/cc]

The method above returns an 8-byte random number and casts it as a bigint, which gives roughly 1.8 x 10^19 possible values. By the birthday problem, the odds of having generated at least one duplicate reach about 50% after roughly five billion values. To put that in terms of time, if you generated one random number every second, you would have even odds of a duplicate somewhere within about 160 years. There is no guarantee it won't happen much sooner or much later, but the chances drop the farther you move from that point. Even though this may sound like a long time, the randomness that a bigint provides may not be enough for your […]
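As a small illustration of working with the function's output (the range and byte lengths below are arbitrary choices, not recommendations):

[cc lang="sql"]
-- random bigint reduced to the range 1 to 1,000,000
-- (the modulo introduces a slight bias; fine for non-cryptographic use)
SELECT ABS(CAST(CRYPT_GEN_RANDOM(8) AS bigint) % 1000000) + 1;

-- 16 random bytes presented as a uniqueidentifier-style value
SELECT CAST(CRYPT_GEN_RANDOM(16) AS uniqueidentifier);
[/cc]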