database intensive -- lots of fairly frequent queries, inserts, updates
-- the gamut. The application does not make use of performance hogs
like cursors, but I know there are lots of ways the application could
be made more efficient database-wise. The server code is running VB6
of all things, using COM+ database interfaces. There are some
clustered and non-clustered indexes defined, but I'm pretty sure
there's room for improvement there as well.
Some tables have grown into the millions of records in recent months,
and performance of the application slowed to a crawl. Optimizing the
database helped a little, but not much. We know that several million
records in a table is a lot, but one would think that SQL Server should
be able to still handle that pretty well. We do have plans to archive
a lot of old data, but in the meantime, we were hurting for a quick
fix.
So we threw hardware at the problem, and transferred the database to a
new, more powerful server. The performance improvement was dramatic.
Transactions were many times faster than before. Without
implementing any of the other more difficult performance improvements
we have planned, we suddenly became minor heroes. :-)
Well, the honeymoon seems to be somewhat over. While performance is
still much better than when the database resided on our old server,
performance appears to have degraded rather significantly again.
Performance is also not significantly better with fewer users on our
system. What the heck?
Yes, the database continues to grow unchecked as we haven't quite got
an archive utility in place yet, but the growth is relatively gradual,
so you wouldn't think that would be the issue. The database is
optimized on a weekly basis, and our web and database servers are both
rebooted monthly. Our database administrators don't seem to have
answers, so I appeal to the experts reading this forum to maybe offer
some clues.
Prior to posting I did do a fair amount of research to see what people
have suggested in similar situations, and ran this by our database
admin. Here's what I can tell you from this research:
- Statistics are updated weekly along with whatever else the database
optimization does
- We do not use the "autoshrink" option for automatically shrinking log
files
- Regarding preallocating space and setting growth factors for log and
data files to minimize time spent allocating disk space, our admin
says, "We do allow database files to grow unchecked, but we do monitor
growth and manually expand as needed. Autogrow is typically set in
50MB increments or less as the amount of time it takes to expand this
amount is negligible."
- Transaction logging is turned on, and data and log devices are on
separate physical disks
- The database server is monitored to ensure no process is hogging all
of the CPU, I/O or memory.

> We know that several million records in a table is a lot
Not really. It all depends on your hardware and how your application
has been written. I've used tables that are in the tens of millions of
records (even close to 100 million) without encountering performance
problems.
Have you run Profiler to see where the slowness is originating? A good
idea would be to run Profiler, logging to a table, for a day. Use that
data to find the longest running SQL statements as well as the most
commonly used SQL statements. Concentrate your efforts on improving
those queries that appear in both categories - i.e. SQL statements that
take a long time to run and which are frequently used. If you check out
Microsoft's website I'm sure that you can find a whitepaper on using
Profiler to optimize a database application.
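For example, once the trace has been logged to a table, a query along
these lines will surface the statements that are both slow and frequent
(the table name dbo.TraceLog is only a placeholder for wherever you
logged the trace):

-- Duration is in milliseconds; EventClass 10 = RPC:Completed,
-- EventClass 12 = SQL:BatchCompleted.
SELECT TOP 20
    CAST(TextData AS NVARCHAR(4000)) AS QueryText,
    COUNT(*)                         AS Executions,
    AVG(Duration)                    AS AvgDurationMs,
    SUM(Duration)                    AS TotalDurationMs
FROM dbo.TraceLog
WHERE EventClass IN (10, 12)
GROUP BY CAST(TextData AS NVARCHAR(4000))
ORDER BY SUM(Duration) DESC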
Another thing to look at is how the front end application is accessing
the server. Is it making a lot of unnecessary round trips to the SQL
Server? Network latency can also play into this a lot. If your frontend
application is scrolling through records in your table one at a time,
retrieving each one individually, it doesn't matter how fast your
database is, the application will be slow.
This is one problem with throwing hardware at a database issue. The
problem that is causing the slowness will almost always come up again
until you fix the part of the database/application that is causing the
issue in the first place.
Hopefully these ideas will yield some improvements for you.
Good luck,
-Tom.

(teedilo@hotmail.com) writes:
> Some tables have grown into the millions of records in recent months,
> and performance of the application slowed to a crawl. Optimizing the
> database helped a little, but not much. We know that several million
> records in a table is a lot, but one would think that SQL Server should
> be able to still handle that pretty well.
Well, let me put it this way: a table with a couple of million rows is
enough to make SQL Server slower than a snail if you have poor indexing
or malformed queries. But as Thomas said: you can have over 100 million
rows and excellent performance. It all depends on the matching between
queries and indexes.
> So we threw hardware at the problem, and transferred the database to a
> new, more powerful server. The performance improvement was dramatic.
> Transactions were many many times faster than before.
Since SQL Server uses a cost-based optimizer, the same query with the
same indexes can give different query plans with only a small difference
in statistics. If there was such a drastic change, it seems that the
tables were turned in favour of better plans.
> Well, the honeymoon seems to be somewhat over. While performance is
> still much better than when the database resided on our old server,
> performance appears to have degraded rather significantly again.
> Performance is also not significantly better with fewer users on our
> system. What the heck?
And you ask us who don't even see the system!
As Thomas said, use the Profiler to track down slow-running queries,
and address them one by one; either by rewriting or modifying indexes.
Or take a shortcut: save a day's workload, and feed it to the Index
Tuning Wizard.
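If the workload was saved to a trace file rather than straight to a
table, it can be pulled into a table first (the file path below is only
an example):

-- fn_trace_gettable reads a Profiler trace file as a rowset
SELECT *
INTO   dbo.Workload
FROM   ::fn_trace_gettable('C:\traces\workload.trc', default)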
> - Statistics are updated weekly along with whatever else the database
> optimization does
Nevertheless, it can be a good idea to run DBCC SHOWCONTIG on some
big tables, to check for fragmentation.
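Something like this reports the fragmentation figures for every index on
a table (the table name is just a placeholder):

-- Look at Logical Scan Fragmentation (closer to 0% is better) and
-- Scan Density (closer to 100% is better) in the output.
DBCC SHOWCONTIG ('Orders') WITH TABLERESULTS, ALL_INDEXES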
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Thanks for your replies, Tom and Erland.
Yes, I know that several million records in a table is not necessarily
a lot, especially if the application, database, indexes, etc. are all
well designed. I'll concede that that's probably not the case here!
And yes, I'm sure that network latency contributes to our problems, as
there are a fair amount of round trips going on that probably wouldn't
be necessary if the application were better designed. The consistency
of the bad performance seems to suggest that the degradation in
performance has more to do with the database, however.
I like the Profiler suggestion, which we've been talking about doing
anyway. It's too bad that we didn't do that right after moving to this
new server so we could maybe pinpoint what has changed to make it run
slower.
I also like the suggestion of running DBCC SHOWCONTIG on the bigger
tables to see if there's significant fragmentation even though we've
been optimizing every week.
I also appreciate Erland's comment that it's difficult for someone on
the outside to know what's really going on. I just figured that there
*might* be fairly common/general things to look for/do in the situation
where 1) performance was awful on an old server, 2) performance was
vastly improved when the database was moved to a new server, and 3)
performance quickly degraded again on the new server.
Yes, I know that throwing hardware at the problem is not a silver
bullet. Our old server needed replacing anyway, so we figured it was
worth doing, at least as a stopgap measure to improve performance,
while we were still working on the more difficult fixes. And while I
am not surprised that some slowness would arise again as Tom suggests,
I am rather surprised that the degradation was as significant as it was
in a relatively few short months since the new server was put into
place. In fact, my theory is that reformatting the new server and
starting completely from scratch would suddenly give us improved
performance again -- just a hunch.
One thing that confuses me a little in relation to Erland's comments
about SQL Server's cost-based optimizer. I've read a little about
this, how SQL Server continually tries to find better execution plans,
etc. But wouldn't that lead to *improved* performance instead of worse
performance? Or is it possible that SQL Server will occasionally make
the "wrong choice" in terms of finding execution plans that offer
better performance?
Thanks much again for your suggestions. They're probably about as good
as I can expect given the lack of specific information that I can offer
on this problem.

> One thing that confuses me a little in relation to Erland's comments
> about SQL Server's cost-based optimizer. I've read a little about
> this, how SQL Server continually tries to find better execution plans,
> etc. But wouldn't that lead to *improved* performance instead of worse
> performance?
Indexes are the key to an efficient execution plan. Statistics on
non-indexed columns can help too (but to a lesser extent) and SQL Server can
generate column statistics automatically. However, indexes are entirely up
to you because you need to balance the cost of maintaining indexes with data
retrieval performance benefits. This is where Profiler and execution plan
analysis can help you out.
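For example, you can inspect the plan for a suspect statement from Query
Analyzer without actually running it (the query below is just a stand-in
for one of your own):

SET SHOWPLAN_TEXT ON
GO
-- A Table Scan or Clustered Index Scan on a large table in the plan
-- output usually means a supporting index is missing.
SELECT OrderId, OrderDate
FROM   dbo.Orders
WHERE  CustomerId = 42
GO
SET SHOWPLAN_TEXT OFF
GO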
When you start with a well-designed database and application, the SQL Server
cost-based optimizer usually does a pretty good job of adapting to changes
in data volume and cardinality. When you don't have appropriate indexes,
the optimizer can't do much to help you out and performance degrades with
data volume. This appears to be the case in your environment based on your
description of the problem.
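As a sketch of the kind of fix that usually falls out of this analysis:
if the trace shows a frequent query filtering on a column that has no
index, adding one is often all it takes (the names here are made up):

-- Support frequent lookups of orders by customer
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)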
> Or is it possible that SQL Server will occasionally make
> the "wrong choice" in terms of finding execution plans that offer
> better performance?
Yes, it is possible that the optimizer makes the wrong choice. When you find
an execution plan that isn't using indexes effectively, first ensure stats
are up-to-date and expressions are sargable. You can specify hints to
override the cost-based optimizer's choices, but this should be done only
as a last resort.
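To illustrate sargability (table and column names are made up), the two
filters below ask for the same rows, but only the second lets the
optimizer seek on an index over OrderDate:

-- Non-sargable: the function wrapped around the column forces a scan
SELECT COUNT(*) FROM dbo.Orders
WHERE  YEAR(OrderDate) = 2004

-- Sargable rewrite of the same filter: an index on OrderDate can be used
SELECT COUNT(*) FROM dbo.Orders
WHERE  OrderDate >= '20040101' AND OrderDate < '20050101'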
--
Hope this helps.
Dan Guzman
SQL Server MVP

(teedilo@hotmail.com) writes:
> I also appreciate Erland's comment that it's difficult for someone on
> the outside to know what's really going on. I just figured that there
> *might* be fairly common/general things to look for/do in the situation
> where 1) performance was awful on an old server, 2) performance was
> vastly improved when the database was moved to a new server, and 3)
> performance quickly degraded again on the new server.
So how did you migrate the database? If you now say "Copy Database
Wizard" or any other method that includes a rebuild of the database,
I'm tempted to place my bets on fragmentation as the main issue. To
wit, if you move the database that way, you get quite a well-
defragmented database at the other end.
If you migrated the database by means of backup/restore or detach/attach,
you also copied the fragmentation. In this case... I could go into
speculation, but it would not really address the actual problem.
> One thing that confuses me a little in relation to Erland's comments
> about SQL Server's cost-based optimizer. I've read a little about
> this, how SQL Server continually tries to find better execution plans,
> etc. But wouldn't that lead to *improved* performance instead of worse
> performance? Or is it possible that SQL Server will occasionally make
> the "wrong choice" in terms of finding execution plans that offer
> better performance?
To say that SQL Server constantly tries to find better query plans is
maybe not really correct. (That's more applicable to the SQL Server
dev team.) But SQL Server does occasionally review query plans, because
statistics have changed - or because the query plan has fallen out of
cache.
And plans are built on estimates from statistics, and they are indeed
a gamble. Sometimes the optimizer makes very bad picks for one reason
or another. This also applies to different SQL Server versions. MS
makes tweaks to the optimizer which in many cases are beneficial,
but at some sites can backfire.
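When a particular plan looks like it has gone bad, a quick sanity check
is to refresh the statistics by hand and throw out the cached plans that
touch the table (the table name is only an example):

UPDATE STATISTICS dbo.Orders WITH FULLSCAN
EXEC sp_recompile 'dbo.Orders'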
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Hi
Glancing through the posts it is not clear what maintenance you are doing
other than the weekly statistics update, so I will add my two pence
worth!!!
Your DBCC SHOWCONTIG will probably indicate fragmentation if you don't have
maintenance in place that calls DBCC INDEXDEFRAG and/or DBCC DBREINDEX. You
may also want to look at running the Index Tuning Wizard on your Profiler
traces to see if it suggests alternate fill factors. You may want to try
alternate fill factors on a different (controlled) system and replay the
traces against it to see what fragmentation and speed you can achieve.
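For reference, the two commands look like this (object names are
placeholders); DBREINDEX rebuilds the indexes completely and can apply a
new fill factor, while INDEXDEFRAG defragments in place with less
blocking:

-- Rebuild all indexes on the table with a fill factor of 90
DBCC DBREINDEX ('Orders', '', 90)

-- Defragment a single index; 0 means the current database
DBCC INDEXDEFRAG (0, 'Orders', 'IX_Orders_OrderDate')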
John