According to the title of the article, you’re probably here for one reason: to find the maximum value for a smalldatetime. Here it is:

2079-06-06 23:59:00

And the minimum is:

1900-01-01 00:00:00

And here’s the proof:

[cc lang=”sql”]
DECLARE @smalldatetime SMALLDATETIME
SET @smalldatetime = GETDATE()

WHILE 1 = 1
BEGIN
    BEGIN TRY
        SET @smalldatetime = DATEADD(dd, 1, @smalldatetime)
    END TRY
    BEGIN CATCH
        WHILE 1 = 1
        BEGIN
            BEGIN TRY
                SET @smalldatetime = DATEADD(mi, 1, @smalldatetime)
            END TRY
            BEGIN CATCH
                SELECT @smalldatetime
                BREAK
            END CATCH
        END
        BREAK;
    END CATCH
END
[/cc]

The precision for smalldatetime is one minute, meaning you can only specify minute intervals.

If you stumbled on this article and are wondering why someone might want the max smalldatetime, here’s a possible use. Say you have an effective date range for a record. By this, I mean that the record is active from x date to y date. If the current day falls outside of that range, then the record is no longer active. So if you do not want to define an end date for the record (meaning it is always active), the best way to do this is to insert the maximum smalldatetime in order to signify “always effective”.

2079 not long enough of a time for you? Hmm.. either you are a purist, somehow think you will be around that long, or are VERY devoted to your company. In any case, use the max datetime value instead.
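To illustrate the “always effective” pattern described above, here is a minimal sketch. The table and column names are hypothetical, not from the original article:

[cc lang=”sql”]
-- Hypothetical lookup table with an effective date range
CREATE TABLE dbo.PriceList
(
    PriceListID   INT IDENTITY(1,1) PRIMARY KEY,
    EffectiveFrom SMALLDATETIME NOT NULL,
    -- Max smalldatetime as the default signifies "always effective"
    EffectiveTo   SMALLDATETIME NOT NULL DEFAULT '2079-06-06 23:59:00'
)

-- Only rows whose range covers the current date are active
SELECT *
FROM dbo.PriceList
WHERE GETDATE() BETWEEN EffectiveFrom AND EffectiveTo
[/cc]

Using the max value rather than NULL for the end date keeps the range query a simple BETWEEN, with no special-case handling for open-ended rows.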
If you are like me, then you will find yourself using the COUNT function many times throughout the day. I’ll go into my many uses of the COUNT function later, but for now let’s look at the syntax. There are basically two ways to use COUNT: one is to perform a count of ALL records, and the other is to perform a count of distinct values.

[cc lang=”sql”]
SELECT COUNT(DISTINCT ProductID)
FROM Production.TransactionHistory

SELECT COUNT(*)
FROM Production.TransactionHistory
[/cc]

Finding the cardinality of a specific field within a table is likely my most common use. What I mean by that is determining how unique the column is when compared to all the rows in the table. That’s what the queries above do. Let’s look at the results. From this we see that the ProductID field is duplicated quite a bit in the TransactionHistory table.
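The two counts above can also be combined into a single query that expresses cardinality as a ratio: a value near 1 means the column is nearly unique, while a value near 0 means it is heavily duplicated. This combined form is my own sketch, not from the original article:

[cc lang=”sql”]
-- Ratio of distinct ProductID values to total rows
-- (CAST to FLOAT avoids integer division returning 0)
SELECT CAST(COUNT(DISTINCT ProductID) AS FLOAT) / COUNT(*) AS Cardinality
FROM Production.TransactionHistory
[/cc]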
This is a handy method that can be used to detect whether a change has happened to any set of data. This can include an entire table or even just two rows.

[cc lang=”sql”]
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM Sales.SalesOrderDetail
[/cc]

From our execution, we see that a number is returned. The number itself doesn’t have a particular meaning; it is simply a hash. A hash is a generated number or sequence that represents the data you are hashing. The idea is that if you change the underlying data, then the hash will change. In this case, we are using the hash to say, “This number represents all the data contained within this table.” If the data changes, then the hash will change also. So let’s change a minute piece of data and see what happens. We’ll update the UnitPrice within one row by a penny.

[cc lang=”sql”]
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM Sales.SalesOrderDetail

UPDATE Sales.SalesOrderDetail
SET UnitPrice = UnitPrice + .01
WHERE SalesOrderID = 43659

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM Sales.SalesOrderDetail
[/cc]

Here’s what we see: So what use is this for? Well, there are plenty of uses. The most common use comes when trying to persist data. What this means is that, say, we have lookup tables that we want to cache either on the web tier, or even on the client’s machines. What we can do is send the lookup tables along with a hash key that represents the state of the data. Then, we can have the […]
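The caching scenario above can be sketched as a simple staleness check: store the hash when the lookup data is cached, then compare it against the current hash to decide whether a refresh is needed. The variable names and the stored hash value here are hypothetical:

[cc lang=”sql”]
-- Hash recorded when the lookup data was last cached (hypothetical value)
DECLARE @cachedHash INT
SET @cachedHash = 1128926115

-- Recompute the hash over the current table contents
DECLARE @currentHash INT
SELECT @currentHash = CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM Sales.SalesOrderDetail

IF @currentHash <> @cachedHash
    PRINT 'Data has changed -- refresh the cache'
ELSE
    PRINT 'Cache is still current'
[/cc]

One caveat worth noting: BINARY_CHECKSUM can produce the same value for different inputs, so a matching hash strongly suggests, but does not absolutely guarantee, that the data is unchanged.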
ALTER INDEX is a powerful new command that replaces both DBCC INDEXDEFRAG and DBCC DBREINDEX, and at the same time enhances them both. The following example encapsulates the most common settings used when rebuilding an index. This example is the same as using DBCC DBREINDEX on an entire table. The settings below are simply detailed so you can uncomment or set them yourself. They are all set to the default:

[cc lang=”sql”]
ALTER INDEX ALL                      -- All indexes on table
ON Sales.SalesOrderDetail
REBUILD                              -- REBUILD or REORGANIZE
WITH
(
    --FILLFACTOR = 80,               -- Leaf page fill %
    PAD_INDEX = OFF,                 -- Pads intermediate pages
    SORT_IN_TEMPDB = OFF,
    STATISTICS_NORECOMPUTE = OFF,    -- Auto update stats on
    ONLINE = OFF,                    -- ON for concurrency
    ALLOW_ROW_LOCKS = ON,            -- Allow row locking
    ALLOW_PAGE_LOCKS = ON,           -- Allow page locking
    MAXDOP = 0,
    DATA_COMPRESSION = NONE          -- NONE | ROW | PAGE
)
[/cc]

Let’s look at what all this means.

[Indexes] — ALL | (specify an index name) — ALL applies the settings to all the indexes on the table. Alternatively, specify an index name to isolate one index.

[Alter Type] — REBUILD | REORGANIZE — REBUILD completely rebuilds the indexes, while REORGANIZE simply defragments them. This is akin to DBCC DBREINDEX vs DBCC INDEXDEFRAG. Rebuild is more thorough.

FILLFACTOR — 0 – 100 — Specifies how full to fill the pages at the leaf level, leaving the remainder as free space. A higher number helps with quicker seeks, but you will need […]
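For lighter-weight maintenance, the REORGANIZE option from the table above can be used on its own. As a sketch:

[cc lang=”sql”]
-- Defragment all indexes on the table in place.
-- REORGANIZE is always an online operation and can be
-- interrupted without losing the work already done.
ALTER INDEX ALL
ON Sales.SalesOrderDetail
REORGANIZE
[/cc]

A common rule of thumb is to REORGANIZE lightly fragmented indexes and reserve REBUILD for heavy fragmentation, since a rebuild is more thorough but also more expensive.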
As easy as this syntax is, I had to keep looking it up for about a year. I suppose that’s why you are here. Well, here it is:

[cc lang=”sql”]
ALTER SCHEMA newschema TRANSFER oldschema.Table
[/cc]

This takes the table defined under “oldschema” and transfers it to “newschema”.
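Here is a worked sketch of the command above; the schema and table names are hypothetical, not from the original article:

[cc lang=”sql”]
-- Create the destination schema first if it doesn't exist yet
-- (CREATE SCHEMA must be the only statement in its batch)
CREATE SCHEMA Archive
GO

-- Move the hypothetical dbo.OrderHistory table into the Archive schema;
-- afterward it is referenced as Archive.OrderHistory
ALTER SCHEMA Archive TRANSFER dbo.OrderHistory
[/cc]

Note that TRANSFER moves the object without renaming it, so any code still referencing the old schema-qualified name will break and needs to be updated.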