Export Table to CSV File – step by step

A very common requirement with SQL Server is to export a table to a CSV file. It’s actually an easy task when using SQL Server Management Studio (SSMS). Behind the scenes, the export uses the SSIS Import/Export Wizard, which also gives you the option to save the package that performs the export, though saving the package is often unnecessary. The only permissions you need are read permissions on the table. Here are the steps broken out in detail. Right-click on the database that contains the table and choose Tasks -> Export Data… Click Next past the intro screen, and on the “Choose a Data Source” screen make sure the database that contains your table is selected in the drop-down, then select Next. On the “Choose a Destination” window, select “Flat File Destination” in the top drop-down labeled “Destination”. On the same “Choose a Destination” window, click the “Browse” button; an Open file dialog appears, prompting you for a location for your file. Find the location, type in the file name, and then, to the right of the file name text box, change the drop-down from “Text Files (*.txt)” to “CSV Files (*.csv)”, as shown below. While still on the same “Choose a Destination” window, choose whether you want the column names included in the CSV by checking the “Column names in the first data row” check box, then select Next. On the “Specify Table Copy or Query” window […]
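If you’d rather script the export than click through the wizard, bcp can produce the same file. Here’s a minimal sketch run through xp_cmdshell; the server, database, table, and output path are placeholders, and xp_cmdshell must be enabled on your instance (or just run the bcp command from a command prompt):

[cc lang="sql"]
-- hypothetical names; swap in your own server, table, and path
-- -c = character mode, -t, = comma field terminator, -T = trusted connection
EXEC master..xp_cmdshell
    'bcp "SELECT * FROM MyDatabase.dbo.MyTable" queryout "C:\exports\MyTable.csv" -c -t, -S MYSERVER -T';
[/cc]

Note that bcp does not emit a header row, so if you need column names in the first line, the wizard’s “Column names in the first data row” option is the easier route.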

Continue reading ...

Search Procedure Text

This has to be one of the most common repetitive tasks in development: you’re updating a certain column in a specific table and want to find all the procedures affected by the change. For that you need an object search, and there are a few different methods for doing it. The ANSI-standard method, the INFORMATION_SCHEMA.ROUTINES system view, has not in the past contained the full stored procedure text; its ROUTINE_DEFINITION column only holds the first 4000 characters. A simple snippet against sys.sql_modules avoids that limit:

[cc lang="sql"]
SELECT sm.*, so.*
FROM sys.sql_modules sm WITH (NOLOCK)
INNER JOIN sys.objects so WITH (NOLOCK)
    ON sm.[object_id] = so.[object_id]
WHERE sm.definition LIKE '%table_name%'
[/cc]

However, if you’re going to be using this search a lot (and I can’t imagine you’re not), why not encapsulate it in a stored procedure so you can include multiple terms, exclude terms, and sort? You will have to place this procedure in each database you want to search, however.

Stored Procedure: sp_search

[cc lang="sql"]
USE master
GO

CREATE PROCEDURE sp_search
    @include varchar(max) = NULL,
    @sort varchar(5) = 1, -- 1 = 'name_asc', 2 = 'name_desc', 3 = 'date_desc', 4 = 'date_asc'
    @exclude varchar(max) = NULL
AS
-- EXEC sp_search 'UPDATE,products,status', 3
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    DECLARE @Delimiter varchar(max) = ','

    IF OBJECT_ID(N'tempdb..#split') IS NOT NULL
        DROP TABLE #split

    ;WITH split(stpos, endpos) AS
    (
        SELECT 0 AS stpos, CHARINDEX(@Delimiter, @include) AS endpos
        UNION ALL
        SELECT CAST(endpos + 1 AS int), CHARINDEX(@Delimiter, @include, endpos + 1)
        FROM split
        WHERE endpos […]
[/cc]
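If you just need a one-off search without installing anything, OBJECT_DEFINITION works too. A quick sketch, equivalent to the sys.sql_modules query above:

[cc lang="sql"]
-- returns every module whose full definition mentions the search term
SELECT o.name, o.type_desc, o.modify_date
FROM sys.objects o
WHERE o.type IN ('P', 'FN', 'TF', 'IF', 'V', 'TR')
    AND OBJECT_DEFINITION(o.[object_id]) LIKE '%table_name%'
ORDER BY o.name;
[/cc]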

Continue reading ...

Creating log tables to track running processes

In any SQL Server system, you will have jobs that run on a schedule or at specified intervals. In these cases, it’s always nice to keep track of certain aspects over time, so you can compare runs when things go wrong, or see how performance has changed. In my experience these log tables are indispensable when it comes to troubleshooting, and for running delta jobs. Here we’ll show a small example of the log tables you can create to facilitate this. First let’s look at some DDL for two different tables:

[cc lang="sql"]
CREATE TABLE [dbo].[ProcessLogMaster](
    [process_log_master_id] [int] IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_process_log_master PRIMARY KEY CLUSTERED,
    [process_master_name] [varchar](100) NOT NULL,
    [datetime_start] [datetime] NULL DEFAULT (getdate()),
    [datetime_end] [datetime] NULL,
    [elapsed_ms] [int] NULL,
    [rows_updated] [int] NULL,
    [rows_inserted] [int] NULL,
    [rows_deleted] [int] NULL,
    [complete] [tinyint] NULL DEFAULT ((0)),
    [success] [tinyint] NULL,
    [error_description] [varchar](max) NULL
)

CREATE TABLE [dbo].[ProcessLogDetail](
    [process_log_detail_id] [int] IDENTITY(1,1) NOT NULL
        CONSTRAINT [PK_process_log_detail] PRIMARY KEY,
    [process_log_master_id] [int] NOT NULL,
    [process_detail_name] [varchar](100) NOT NULL,
    [datetime_start] [datetime] NULL,
    [datetime_end] [datetime] NULL,
    [elapsed_ms] [int] NULL,
    [rows_updated] [int] NULL,
    [rows_inserted] [int] NULL,
    [rows_deleted] [int] NULL,
    [complete] [tinyint] NULL,
    [success] [tinyint] NULL,
    [error_description] [varchar](max) NULL
)
[/cc]

What we have here are two different tables that can be used to describe job steps. The ProcessLogMaster table is used to specify the master, or top level, of the job. If the job has multiple steps that report back to the master, their entries go into the ProcessLogDetail table. We could then sum up the […]
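To make the flow concrete, here’s a minimal sketch of how a job might write to these tables; the job name is made up, but the columns are from the DDL above:

[cc lang="sql"]
DECLARE @master_id int;

-- open the master row when the job starts (datetime_start defaults to getdate())
INSERT INTO dbo.ProcessLogMaster (process_master_name)
VALUES ('nightly_delta_load');

SET @master_id = SCOPE_IDENTITY();

-- ... each job step runs here, logging its own row to ProcessLogDetail ...

-- close the master row when the job finishes
UPDATE dbo.ProcessLogMaster
SET datetime_end = GETDATE(),
    elapsed_ms   = DATEDIFF(ms, datetime_start, GETDATE()),
    complete     = 1,
    success      = 1
WHERE process_log_master_id = @master_id;
[/cc]

The datetime_start of the last successful run is also exactly what a delta job needs as its low-water mark.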

Continue reading ...

How to cache stored procedure results using a hash key

There are a lot of different design patterns that lend themselves to creating the shortest path to the data. One of the most efficient is caching stored procedure result sets. In order to do this, we need to read the incoming parameters and create a cache key. This cache key is then stored along with the stored procedure’s result set as a unique identifier representing that combination of the stored procedure’s parameters. The caveat with this method is that the business requirement needs to allow stale data. There are also times when you should not build the cache key from certain passed-in parameters; examples include datetime values or keys that are effectively unique (like a customer key). If the hash created from the parameters is unique, that cached dataset will never be reused, so with this in mind you first have to determine whether the procedure is cacheable at all. Another concern to keep in mind is the duration of time you can serve stale data. Maybe 30 seconds, 1 minute, or 1 hour? Any interval can be accommodated by clearing the cache tables on the desired schedule.

Design

Let’s look at the basic workflow for how this procedure will work. First of all, we will need to hash all the parameters that are coming into the procedure (unless they are unique, in which case we may not be able to cache, or we can possibly […]
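As a rough sketch of the hashing step, something like the following works; the cache table and its columns are illustrative placeholders, not the article’s actual schema:

[cc lang="sql"]
-- build a cache key from the incoming parameters
DECLARE @category varchar(50) = 'electronics',
        @max_rows int = 25;

DECLARE @cache_key binary(16) =
    HASHBYTES('MD5', @category + '|' + CAST(@max_rows AS varchar(12)));

IF EXISTS (SELECT 1 FROM dbo.ProcCache WHERE cache_key = @cache_key)
BEGIN
    -- cache hit: serve the stored result set
    SELECT result_data
    FROM dbo.ProcCache
    WHERE cache_key = @cache_key;
END
ELSE
BEGIN
    -- cache miss: run the real query here, then INSERT the
    -- results into dbo.ProcCache keyed by @cache_key
    PRINT 'cache miss';
END
[/cc]

The same parameter values always hash to the same key, so any call with that combination is served from the cache table until the interval job clears it.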

Continue reading ...

Overcoming Issues with TempDB Contention

One of the possible bottlenecks for a busy SQL Server is contention in TempDB. The symptoms of this bottleneck are mostly excessive PAGELATCH_EX waits blocking one another (you can use sp_who3 to see whether those wait types pop up during your busy times). The cause is either explicit or implicit use of TempDB: explicit being the creation of table variables and temp tables, implicit being worktables generated by complex execution plans. Luckily there are a number of steps you can take to overcome these issues.

1) Enable trace flag 1118

There is some argument as to whether this trace flag actually makes a difference. First of all, Microsoft CSS engineers state it’s already the default behavior in 2008. What the flag does is allocate full uniform extents for tempdb tables upon creation, rather than single pages from mixed extents (if I understand it correctly). It did seem to help our database during one point of contention, and I’m not willing to test it again by turning it off. So with that said, “it can’t hurt to turn it on” is at least the consensus of what I’ve read.

2) Put TempDB on a PCIe SSD

The PCIe cards are getting cheaper every month. Just putting this in reduced our write latency from 2.3 ms to 0.3 ms. What that translates into is that it’s quicker to get “in and out” of tempdb, allowing less contention and higher throughput. Also, while I’m too lazy to prove it, […]
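If you want to confirm the contention before changing anything, here’s a sketch that looks for PAGELATCH waits on tempdb pages and shows the trace flag being enabled; it assumes VIEW SERVER STATE permission (sp_who3, referenced above, is a community script rather than a built-in):

[cc lang="sql"]
-- sessions currently waiting on tempdb page latches
-- (resource_description is dbid:fileid:pageid; database_id 2 = tempdb)
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
    AND resource_description LIKE '2:%';

-- enable trace flag 1118 globally; also add -T1118 as a startup
-- parameter so it survives a restart
DBCC TRACEON (1118, -1);
[/cc]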

Continue reading ...
