SQL Server’s Batch Requests/sec counter represents the number of SQL batches being executed per second. In my opinion this is the one metric that gives the best baseline on how your server is performing. It cannot be looked at by itself, however. You need to correlate it with other metrics (especially CPU usage) to get an overall understanding of how your server is performing. The overall goal of any SQL Server is to achieve the most batch requests/sec while using the least amount of resources.
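If you'd rather pull this number from T-SQL than from perfmon, you can read it out of `sys.dm_os_performance_counters`. One caveat: in that DMV, Batch Requests/sec is stored as a cumulative (bulk) counter, so a single read only gives you a running total since the instance started; to get a per-second rate you sample it twice and divide the difference by the interval. A minimal sketch:

```sql
-- Sample the cumulative counter twice and compute the per-second rate.
DECLARE @v1 BIGINT, @v2 BIGINT;

SELECT @v1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec'
  AND object_name LIKE '%SQL Statistics%';

WAITFOR DELAY '00:00:10';   -- 10-second sampling window

SELECT @v2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec'
  AND object_name LIKE '%SQL Statistics%';

SELECT (@v2 - @v1) / 10.0 AS [Batch Requests/sec];
```

Perfmon does this same delta calculation for you under the covers, which is why its graph already shows a rate.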
On some busy machines I’ve worked with, this counter averaged around 25,000 batch requests/sec during peak time. Peak throughput is heavily dependent on the architectural design of the system, including procedure, table, and index design. One notable example was a system I worked with whose procedures were written using loops (ugh). The highest average batch requests/sec it could ever achieve was 200. I was very surprised to see this initially, but digging deeper into the code I became less and less shocked. Ultimately I found one piece of code that throttled the throughput of the entire system: a scalar UDF defined as a computed column in the busiest table on the system (don’t get me started). After rewriting that one process, the system hit batch request peaks of over 3,500! Reaching that number was a big achievement. It meant the overall throughput of the system had increased more than tenfold and was no longer bottlenecked in one spot.
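For readers who haven't hit this anti-pattern, here is a hypothetical illustration of what that computed column looked like (the table and function names are made up for the example, not from the actual system). A scalar UDF runs row by row rather than as part of the set-based plan, and on versions before scalar UDF inlining it also prevents parallel plans on queries that touch the column, so one column like this can cap the throughput of an entire busy table:

```sql
-- Anti-pattern sketch: scalar UDF backing a computed column.
-- Every query that reads dbo.Orders.Total invokes this function per row.
CREATE FUNCTION dbo.fn_OrderTotal (@OrderId INT)
RETURNS MONEY
AS
BEGIN
    RETURN (SELECT SUM(Quantity * Price)
            FROM dbo.OrderLines
            WHERE OrderId = @OrderId);
END;
GO

ALTER TABLE dbo.Orders
    ADD Total AS dbo.fn_OrderTotal(OrderId);  -- per-row UDF call on the busiest table
```

The fix in a case like this is usually to replace the UDF with a set-based expression, a persisted column maintained by the application or a trigger, or an indexed view.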
So pay attention to this counter, and remember the goal is to achieve the greatest number of batch requests while keeping resource usage (CPU, disk, memory) low.
To add this counter, open Performance Monitor (perfmon) and add it from SQLServer:SQL Statistics: Batch Requests/sec.