I checked CPU, hard disk queue length, read/write activity, and memory usage; all look normal.
But one particular database is very slow (all the others are fine): it takes 10 seconds to insert one record into it. Each record is less than 300 bytes, all varchar columns, and the core table currently holds 1 million records.
I used DBCC SHOWCONTIG to check the slow database and found this:
Extent Scan Fragmentation -- 99.47%
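For reference, this is roughly the command I ran (CoreTable stands in for the real table name):

    DBCC SHOWCONTIG ('CoreTable') WITH ALL_INDEXES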
will a "DBCC indexdefrag" help in this case?
Thanks for any help!

Possible causes for poor performance:
* Statistics out of date. Use the auto update statistics option, or run UPDATE STATISTICS manually (see the statistics sketch after this list).
* Slow inserts can be caused by indexes that need rebuilding with, say, a 90% fill factor so pages have room for new rows. Don't use a clustered index on data that is always changing.
* Have you got too many indexes? Each one must be maintained on every insert (listing and rebuilding them is sketched below).
* Try data striping using filegroups. E.g. if you have two heavily used tables in a database, performance would be better if they were on separate disks / RAID arrays (see the filegroup sketch below).
* Is your transaction log being truncated? (This normally happens when the transaction log is backed up.) Check with DBCC SQLPERF(LOGSPACE), sketched below.
* Is your transaction log file expanding every time you add more data?
* Is your database file expanding every time you add more data? Pre-sizing both files avoids an autogrow pause on every load (sketched below).
* Have you tried DBCC CHECKDB? (Sketched below.)
* Use NT Performance Monitor to watch the Disk, CPU and Memory activity counters.
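A minimal sketch of the statistics refresh, assuming the core table is called CoreTable (substitute the real name):

    -- Rebuild distribution statistics for one table, scanning every row
    UPDATE STATISTICS CoreTable WITH FULLSCAN

    -- Or refresh statistics for all tables in the current database
    EXEC sp_updatestats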
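To count the indexes on the table and rebuild them with free space on each page, something like this (the table name and the 90 fill factor are assumptions):

    -- List every index defined on the table
    EXEC sp_helpindex 'CoreTable'

    -- Rebuild all indexes on the table, leaving 10% free space per page
    DBCC DBREINDEX ('CoreTable', '', 90)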
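A sketch of the filegroup striping; the database name, file path and filegroup name are all placeholders:

    -- Create a filegroup backed by a file on a separate disk / RAID array
    ALTER DATABASE MyDatabase ADD FILEGROUP FG2

    ALTER DATABASE MyDatabase
    ADD FILE (NAME = MyDatabase_FG2,
              FILENAME = 'E:\Data\MyDatabase_FG2.ndf',
              SIZE = 500MB)
    TO FILEGROUP FG2

    -- A heavily used table can then be placed on that filegroup
    CREATE TABLE HeavyTable (id int NOT NULL, payload varchar(300)) ON FG2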
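To check log usage, truncate the log, and pre-size the files so they stop growing on every load (database name, logical file names, paths and sizes are placeholders):

    -- Show the percentage of log space in use for every database
    DBCC SQLPERF(LOGSPACE)

    -- Backing up the log truncates its inactive portion
    BACKUP LOG MyDatabase TO DISK = 'E:\Backup\MyDatabase_log.bak'

    -- Grow the data and log files once, up front, instead of during inserts
    ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_Data, SIZE = 2000MB)
    ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_Log, SIZE = 500MB)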
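And the integrity check (database name is a placeholder):

    -- Verify allocation and structural integrity of the whole database
    DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS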