How to Shrink Log File

Sometimes after one or more large transactions, the t-log (transaction log) will become full. In these cases you may receive an error message indicating that the transaction log is full. To alleviate this issue, you need to find the names of the transaction log files on your system and then shrink them. To find the names, execute the following statement:

[cc lang="sql"]
SELECT name
FROM sys.database_files
WHERE type = 1;
[/cc]

Once you've found the names of your transaction logs, use them in the DBCC SHRINKFILE command to make them smaller:

[cc lang="sql"]
DBCC SHRINKFILE('AdventureWorksLT2008_Log');
[/cc]

If your databases are in full recovery mode, then this method will shrink the transaction logs while retaining full recovery mode. A riskier, yet higher-yielding, way to free disk space is to use:

[cc lang="sql"]
-- database name below
BACKUP LOG AdventureWorksLT2008 -- database name
WITH TRUNCATE_ONLY

-- t-log name below
DBCC SHRINKFILE
(
    'AdventureWorksLT2008_Log' -- transaction log name
    ,1
)
[/cc]

While this command will shrink the transaction log to a greater degree, it drops the transactions recorded since the last t-log backup, essentially treating the database as if it had been in simple recovery mode since that backup. Depending upon the backup schedule you have, this may or may not be risky. If you back up the transaction logs every hour, then the maximum possible data loss (should a complete loss occur) would be one hour's worth of data. While the chances of this are very slim, it should be […]
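Before and after shrinking, it can help to see how much log space is actually in use. A minimal sketch (the database and log file names below are the article's examples; the 1 MB target size is an assumption):

[cc lang="sql"]
-- Report log file size and percent-used for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- Shrink the log for one database, targeting a size of 1 MB
USE AdventureWorksLT2008;
DBCC SHRINKFILE('AdventureWorksLT2008_Log', 1);

-- Re-run the report to confirm the shrink took effect
DBCC SQLPERF(LOGSPACE);
[/cc]

Note that the shrink can only reclaim space up to the last active portion of the log, so the file may not shrink all the way to the target on a busy database.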

Simulate ROW_NUMBER in SQL 2000

While the ROW_NUMBER feature in SQL Server 2005+ has proven to be very powerful, there are still ways to implement that same functionality in SQL Server 2000. Let's first look at the SQL 2005+ implementation of ROW_NUMBER, then compare it to the SQL 2000 version:

[cc lang="sql"]
-- SQL 2005+
SELECT RowNumber = ROW_NUMBER() OVER (ORDER BY c.LastName ASC)
      ,c.LastName
      ,c.FirstName
FROM SalesLT.Customer c
[/cc]

And the SQL 2000 version:

[cc lang="sql"]
SELECT RowNumber = IDENTITY(INT,1,1)
      ,c.LastName
      ,c.FirstName
INTO #Customer_RowID
FROM SalesLT.Customer c
ORDER BY c.LastName ASC
[/cc]

As with everything, however, there are pros and cons. Using the SQL 2000 version, the total time taken to run is 113ms, while the ROW_NUMBER function returns in 134ms (a 21ms difference). That's a point in favor of the SQL 2000 version. The problem is that, in this case, only 440 rows were returned. Increase the result set beyond 1000 rows and the advantage begins to fade. The biggest drawback of the SQL 2000 version is that if you are inserting the results using a SELECT..INTO model, you cannot also select the identity column from the table you are selecting from. Like this (notice the *):

[cc lang="sql"]
SELECT RowNumber = IDENTITY(INT,1,1)
      ,c.LastName
      ,c.FirstName
      ,*
INTO #Customer_RowID
FROM SalesLT.Customer c
ORDER BY c.LastName ASC
[/cc]

Otherwise, you receive the following error message:

Cannot add identity column, using the SELECT INTO statement, to table '#Customer_RowID', which already has column 'CustomerID' that inherits the identity property. So essentially, with SQL 2000, there is no way to throw […]
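One common workaround for that error (a sketch, not from the original article) is to CAST the source table's identity column. The cast strips the identity property from the selected column, which lets SELECT..INTO add its own IDENTITY column alongside it:

[cc lang="sql"]
SELECT RowNumber  = IDENTITY(INT,1,1)
      -- CAST removes the inherited identity property from CustomerID
      ,CustomerID = CAST(c.CustomerID AS INT)
      ,c.LastName
      ,c.FirstName
INTO #Customer_RowID
FROM SalesLT.Customer c
ORDER BY c.LastName ASC
[/cc]

Any remaining columns would still need to be listed explicitly rather than via *, since the * expansion would pull in the original identity column again.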

SQL Server Max SmallInt Value

The maximum value for a smallint in SQL Server is 32767 (range: -32768 through 32767), and the byte size is 2 bytes.

Other integer ranges:

BigInt: -9223372036854775808 through 9223372036854775807 (8 bytes)
Int: -2147483648 through 2147483647 (4 bytes)
TinyInt: 0 through 255 (1 byte)

SQL Server Max TinyInt Value

The maximum value for a tinyint in SQL Server is 255 (range: 0 through 255), and the byte size is 1 byte.

Other integer ranges:

BigInt: -9223372036854775808 through 9223372036854775807 (8 bytes)
Int: -2147483648 through 2147483647 (4 bytes)
SmallInt: -32768 through 32767 (2 bytes)
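As a quick illustration (a sketch you can run in any SQL Server session), assigning a value outside these ranges raises an arithmetic overflow error:

[cc lang="sql"]
DECLARE @small smallint, @tiny tinyint;

SET @small = 32767;   -- ok: upper bound of smallint
SET @tiny  = 255;     -- ok: upper bound of tinyint

SET @tiny  = 256;     -- fails: Msg 220, arithmetic overflow error
[/cc]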

Gracefully Handling Deadlocks

In some situations, deadlocks may need to be dealt with not by changing the source of the deadlock, but by handling the deadlock gracefully. An example of this may be an external subscription that runs on a schedule deadlocking with another process. If the subscription deadlocks, then it would be OK to just kill it and not allow it to interfere with the source process. However, if it raises an error, then the subscription pull will fail. First, to make sure one of the deadlocking procedures becomes the victim over the other, a deadlock priority can be set to tell SQL Server which process it should kill first. If the priority is set to "high", then the procedure will not be the victim of the deadlock (that is, if the other process does not also have a high priority). Now, to handle the deadlock gracefully, wrap the work in a TRY / CATCH. If the process is the victim of the deadlock, it can simply be rolled back without the error bubbling up to the end user. To give an example of this, let's simulate a deadlock by creating two tables and writing to them from separate query windows. First we'll set up the tables:

[cc lang="sql"]
--------- setup ----------
CREATE TABLE ##t1 ( i int )
CREATE TABLE ##t2 ( i int )

INSERT INTO ##t1 SELECT 4
INSERT INTO ##t2 SELECT 5
--------- /setup ----------
[/cc]

Now copy the two code snippets into separate windows and see the results. […]
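Putting those pieces together, a minimal sketch of the victim-side handler might look like this (the table names come from the setup above; the delay, the update statements, and the check for error 1205, SQL Server's deadlock-victim error, are assumptions for illustration):

[cc lang="sql"]
-- Run this in the window that may be chosen as the deadlock victim;
-- in the other window, use SET DEADLOCK_PRIORITY HIGH instead.
SET DEADLOCK_PRIORITY LOW;

BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE ##t1 SET i = i + 1;
    WAITFOR DELAY '00:00:05';  -- give the other window time to take its lock
    UPDATE ##t2 SET i = i + 1;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;

    -- 1205 = chosen as deadlock victim; swallow it so nothing bubbles up.
    -- Any other error is re-raised (THROW requires SQL Server 2012+;
    -- use RAISERROR on older versions).
    IF ERROR_NUMBER() <> 1205
        THROW;
END CATCH
[/cc]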
