SQL Server 2008 Minimally Logged Inserts

SQL Server 2008 introduces minimally logged inserts into tables that already contain data and have a clustered index. The initial inserts may still be fully logged if the data pages they are filling already contain data; however, any new data pages added to the table will be minimally logged if all the requirements below are met:

- Trace flag 610 must be on
- The database recovery model must be bulk-logged or simple
- The inserted data must be ordered by the clustered index

To turn on the trace flag for your current session:

```sql
DBCC TRACEON (610);

INSERT INTO dbo.MyTable
SELECT * FROM dbo.SourceTable  -- placeholder; the source table name was lost in the original
ORDER BY 1;                    -- order by the clustered index key

DBCC TRACEOFF (610);
```

This change differs dramatically from the previous requirements for minimal logging: previously the target table could have no clustered index, and a table lock had to be acquired on it. For more information, visit: Minimal Logging Changes – MSDN Blog
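One way to check whether a load was actually minimally logged is to compare the transaction-log record count before and after the insert. A rough sketch using the undocumented sys.fn_dblog function (all table names are placeholders, and this assumes a test database in SIMPLE recovery):

```sql
-- Sketch only: sys.fn_dblog is undocumented and should be used on test systems.
DBCC TRACEON (610);

CHECKPOINT;  -- in SIMPLE recovery, lets the log truncate so counts are comparable

SELECT COUNT(*) AS LogRecordsBefore FROM sys.fn_dblog(NULL, NULL);

INSERT INTO dbo.MyTable
SELECT * FROM dbo.StagingTable  -- placeholder source table
ORDER BY 1;                     -- ordered by the clustered index key

SELECT COUNT(*) AS LogRecordsAfter FROM sys.fn_dblog(NULL, NULL);
-- A minimally logged load adds far fewer log records than rows inserted;
-- a fully logged load adds at least one record per row.

DBCC TRACEOFF (610);
```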

Quick Table Transfers (Imports) using SSIS, Bulk Insert or BCP

Ever wonder why data transfer is sometimes lightning fast, while other times you’re watching sp_who2 wondering when it’s going to finish? You’re likely seeing the difference between minimal logging and full logging. Even under the simple recovery model, row inserts can be written to both the transaction log and the data pages. The easiest way to take advantage of minimal logging is to set the database recovery model to simple, drop all indexes on the target table, then use SSIS, DTS, or BULK INSERT to transfer the data in.

The speed of inserting data in SQL Server depends heavily on how many writes occur to the transaction log. These writes occur in two different modes, minimal logging and full logging. Minimal logging writes directly to the data page and records only a pointer to that page in the transaction log, while full logging writes the content of every row to the transaction log prior to inserting it into the data page. Needless to say, in order to take advantage of quick inserts, you will want to employ minimal logging. There are, however, a few prerequisites:

- The recovery model of the target database must be either simple or bulk-logged
- If the target table contains a clustered index, it cannot contain data
- A table lock must be acquired on the target table
- The table cannot be part of a replication scheme
- If the table contains a nonclustered index, the index itself will be […]
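A minimal sketch of a bulk load that satisfies the prerequisites above; the database name, table name, file path, and delimiters are all placeholders:

```sql
-- Assumes dbo.TargetTable is a heap (or an empty clustered-index table),
-- is not replicated, and MyDatabase is a name you substitute for your own.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

BULK INSERT dbo.TargetTable
FROM 'C:\Data\export.csv'      -- hypothetical source file
WITH
(
    TABLOCK,                   -- table lock: required for minimal logging
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n'
);
```

The TABLOCK hint is the piece most often forgotten; without it the load falls back to full logging even when every other prerequisite is met.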

Custom Pagination with Dynamic ORDER BY

SQL Server Denali has a new feature allowing pagination using the ORDER BY clause. A common requirement for the front end is to paginate records prior to sending them to the web server. More frequently now, we are seeing denormalized data sets stored in the web server’s or a middle tier’s caching mechanism. Those solutions, however, are more difficult to maintain, persist, and synchronize. Enter the old-fashioned database paging solution.

This paging solution initially grabs a subset of a table and counts the records. It then stores ordered results based on the sort-column parameter passed into the common table expression. Additional parameters are the number of rows the caller wants on each page and the page number the caller is currently retrieving.

```sql
CREATE PROCEDURE dbo.GetEmployees
(
    @SortColumn VARCHAR(50) = NULL,
    @iRows      INT = 10,
    @iPageNum   INT = 1
)
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @RecordCount INT
    DECLARE @iNbrPages INT
    SET @RecordCount = 0
    SET @iNbrPages = 0

    SELECT
        emp.EmployeeID,
        emp.FirstName,
        emp.LastName,
        emp.DateHired
    INTO #Employees
    FROM HR.Employees emp
    WHERE emp.IsTerminated = 1

    SELECT
        @iNbrPages   = CEILING(COUNT(1) / (@iRows * 1.0)),
        @RecordCount = COUNT(1)
    FROM #Employees

    BEGIN
        ;WITH PagingCTE
        (
            Row_ID,
            EmployeeID,
            FirstName,
            LastName,
            DateHired
        )
        AS
        (
            […]
```

Continue reading ...
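The Denali pagination feature mentioned above is the OFFSET … FETCH extension to ORDER BY. A minimal sketch against the same HR.Employees table, using parameter names that mirror the procedure’s (the sort columns are illustrative):

```sql
-- OFFSET/FETCH pagination, available from SQL Server Denali (2012) onward.
DECLARE @iRows    INT = 10;  -- rows per page
DECLARE @iPageNum INT = 3;   -- page being retrieved

SELECT
    emp.EmployeeID,
    emp.FirstName,
    emp.LastName,
    emp.DateHired
FROM HR.Employees emp
ORDER BY emp.LastName, emp.EmployeeID
OFFSET (@iPageNum - 1) * @iRows ROWS
FETCH NEXT @iRows ROWS ONLY;
```

This replaces the ROW_NUMBER-over-CTE pattern with a single clause, though a separate COUNT query is still needed if the caller wants the total page count.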

View Active Connections

With SQL Server 2005+, it is very easy to view the specifics of connection information, which is very useful when troubleshooting slowdowns. Luckily, there are a few dynamic management views that provide insight into connection and session information. The following query groups the connections according to the program that is connected to SQL Server. This information can be spoofed, however, using the connection string. When running this query, you will find how important it is to add the application name to the connection string. The query also shows the number of connections opened by each application.

```sql
-- By Application
SELECT
     CPU            = SUM(cpu_time)
    ,WaitTime       = SUM(total_scheduled_time)
    ,ElapsedTime    = SUM(total_elapsed_time)
    ,Reads          = SUM(num_reads)
    ,Writes         = SUM(num_writes)
    ,Connections    = COUNT(1)
    ,Program        = program_name
FROM sys.dm_exec_connections con
LEFT JOIN sys.dm_exec_sessions ses
    ON ses.session_id = con.session_id
GROUP BY program_name
ORDER BY CPU DESC
```

This next query groups the same information by user:

```sql
-- Group By User
SELECT
     CPU            = SUM(cpu_time)
    ,WaitTime       = SUM(total_scheduled_time)
    ,ElapsedTime    = SUM(total_elapsed_time)
    ,Reads          = SUM(num_reads)
    ,Writes         = SUM(num_writes)
    ,Connections    = COUNT(1)
    ,[login]        = original_login_name
FROM sys.dm_exec_connections con
LEFT […]
```
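Populating program_name so that the first query groups usefully is done on the client side, via the Application Name keyword of the connection string. A hedged example (server, database, and application names are placeholders):

```
Server=MyServer;Database=MyDb;Integrated Security=SSPI;Application Name=InventoryService;
```

Without this keyword, most client libraries report a generic name such as ".Net SqlClient Data Provider", which makes every application in the result set look identical.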

Fix – Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

“Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.” This error message is due to the server setting remote query timeout. The default is 600 seconds, or 10 minutes. To raise it to 1800 seconds:

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'remote query timeout', 1800
RECONFIGURE

-- Verify the new value
EXEC sp_configure
```

After making this change, make sure to close the window and create a new connection in order to inherit the new query timeout setting.
