May 30, 2008

Optimizing SQL Server Performance

Optimizing SQL Server CPU Performance
A number of factors can affect CPU utilization on a database server: compilation and recompilation of SQL statements, missing indexes, multithreaded operations, disk bottlenecks, memory bottlenecks, routine maintenance, and extract, transform, and load (ETL) activity, among others. CPU utilization is not a bad thing in itself—performing work is what the CPU is there for. The key to healthy CPU utilization is making sure that the CPU is spending its time processing what you want it to process and not wasting cycles on poorly optimized code or sluggish hardware.

There are two paths to identifying CPU performance problems: first, reviewing the system's hardware performance, and second, examining SQL Server-specific performance issues.
For CPU performance problems, use PerfMon and SQL Profiler.
Look for performance counters such as:
i) % Processor Time: should stay below 90% on a sustained basis.
ii) Processor Queue Length: shows how many threads are waiting to perform work on the CPU. Typically you shouldn't see anything higher than five times the number of physical processors on a dedicated SQL Server.
iii) Context Switches/sec: should be less than 5,000 times the number of processors in the server. When CPU utilization and context switches both regularly exceed their thresholds, this indicates a CPU bottleneck; if it is a regular occurrence, start planning for the purchase of more or faster CPUs if your system is outdated.
If you are seeing Context Switches/sec higher than 5,000 per physical processor, you should strongly consider turning off hyper-threading on your system and retesting performance.
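
You can also gauge CPU pressure from inside SQL Server itself. As a rough rule of thumb (the 25% threshold is a guideline, not a hard limit), the share of signal waits in sys.dm_os_wait_stats shows how much of the total wait time runnable threads spent waiting for a CPU:

    -- Signal waits accumulate while threads are runnable but waiting for a CPU.
    -- A sustained percentage above roughly 25% suggests CPU pressure.
    SELECT CAST(SUM(signal_wait_time_ms) * 100.0
                / NULLIF(SUM(wait_time_ms), 0)
                AS DECIMAL(5, 2)) AS pct_signal_waits
    FROM sys.dm_os_wait_stats;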

Some considerations for T-SQL CPU optimization are:
Query plan reuse (see the parameterization sketch after this list)
Reducing compiles and recompiles
Sort operations
Improper joins
Missing indexes
Table/index scans
Function usage in SELECT and WHERE clauses (see the sargability sketch after this list)
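
Two of these points are easy to illustrate. First, wrapping an indexed column in a function makes a predicate non-sargable, forcing a scan that burns CPU on every row. A minimal sketch, assuming a hypothetical dbo.Orders table with an index on OrderDate:

    -- Non-sargable: the function on the column blocks an index seek.
    SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2008;

    -- Sargable rewrite: a range predicate on the bare column lets the
    -- optimizer seek on the OrderDate index instead.
    SELECT OrderID FROM dbo.Orders
    WHERE OrderDate >= '20080101' AND OrderDate < '20090101';

Second, plan reuse and compile reduction largely come down to parameterization: submitting the same statement through sp_executesql with parameters lets SQL Server compile one plan and reuse it, instead of compiling a fresh ad hoc plan for every literal value:

    -- One cached plan serves every CustomerID value passed in.
    EXEC sp_executesql
        N'SELECT OrderID FROM dbo.Orders WHERE CustomerID = @cust',
        N'@cust INT',
        @cust = 42;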

Optimizing SQL Server Memory Performance

The Memory\Pages/sec counter indicates the number of paging operations to disk during the measuring interval, and it is the primary counter to watch for signs that the server may not have enough RAM. A good idea here is to configure a PerfMon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system.

Another key counter to watch here is Memory\Available Bytes. If it stays above 10% of the physical RAM in the machine, you probably have more than enough RAM and don't need to worry.
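
From inside SQL Server, the buffer pool's Page Life Expectancy counter gives a complementary view of memory pressure. A minimal check against the DMVs (the traditional rule of thumb treats sustained values under about 300 seconds as a warning sign, though healthy values depend on the workload):

    -- How long, in seconds, a data page is expected to stay in the buffer pool.
    SELECT cntr_value AS page_life_expectancy_sec
    FROM sys.dm_os_performance_counters
    WHERE [object_name] LIKE '%Buffer Manager%'
      AND counter_name LIKE 'Page life expectancy%';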

Optimizing SQL Server I/O Bottlenecks

If Average Disk Sec/Read is between 200 ms and 300 ms, there is an I/O bottleneck. One possible solution is defragmentation.
Another solution is to increase I/O bandwidth, such as by adding more disks.
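
You can also measure read and write latency per database file from inside SQL Server. A sketch using the I/O DMVs (the figures are cumulative averages since the instance last started, so interpret them accordingly):

    -- Average I/O latency per database file since instance startup.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads,  0) AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id
    ORDER BY avg_read_ms DESC;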
Storage area networks, or SANs, are widely used to store data for Microsoft SQL Server installations with large databases. SANs are one of the most economical ways to deal with very large data sets. They're designed to scale better in this regard than disk arrays installed directly on the host.
Data files should be placed on one or more RAID 5 or RAID 6 arrays (based on system needs). In certain systems, you should place the data files on RAID 1 or RAID 10 arrays, but those systems are in the minority. Place the transaction log files on one or more RAID 1 or RAID 10 arrays (again, based on size and performance needs). The tempdb database should be placed on its own RAID 1 or RAID 10 array.

Optimizing tempdb Performance
The size and physical placement of the tempdb database can affect the performance of a system. For example, if the size that is defined for tempdb is too small, part of the system-processing load may be taken up with autogrowing tempdb to the size required to support the workload every time you restart the instance of SQL Server. You can avoid this overhead by increasing the sizes of the tempdb data and log files.
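
For example, a minimal pre-sizing sketch (tempdev and templog are the default logical file names; the sizes and growth increments here are placeholders to be replaced with values measured from your own workload):

    -- Pre-size tempdb so it doesn't have to autogrow after each restart.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 1024MB, FILEGROWTH = 256MB);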

Perfmon counters to watch:
SQLServer:Transactions\Free Space in tempdb (KB)
SQLServer:Transactions\Version Store Size (KB)
