Sunday, March 25, 2012

Database size....

Hi,
I have a master database that I am using to import data into while we
migrate and upgrade our customers. Once complete, I detach and rename the
database for my client, attach it to their server, etc. However, when I move
on to import the next customer's set of data, I delete all records from the
tables and begin the re-import process again, one customer after the other.
This approach is working fine; however, we are finding that the database is
roughly 500 MB when it should be tiny. Is there a step or process I am
missing in the "shrink" procedures? Should I shrink while the database is
empty? Will that solve the size issue?
A half-gigabyte database that contains only a small number of records is a
waste of disk space.
Thanks,
Chris

When you are about to start the next process, why not drop the database and
re-create it?
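One way to do that, as a sketch only, is to keep a small template backup that
already contains your required default data and restore it each time; the
database name, backup file, and paths below are placeholders:

USE master;
GO

-- Drop the previous staging copy if it is still attached.
IF DB_ID(N'CustomerStaging') IS NOT NULL
BEGIN
    ALTER DATABASE CustomerStaging SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE CustomerStaging;
END
GO

-- Restore a small, pre-seeded template as a fresh staging database so the
-- files start out at their original small size.
RESTORE DATABASE CustomerStaging
FROM DISK = N'D:\Backups\CustomerTemplate.bak'
WITH MOVE N'CustomerTemplate'     TO N'D:\Data\CustomerStaging.mdf',
     MOVE N'CustomerTemplate_log' TO N'D:\Logs\CustomerStaging_log.ldf',
     REPLACE;
GO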
On 2/28/05 6:21 PM, in article uXH1EweHFHA.896@.TK2MSFTNGP10.phx.gbl, "Chris
Marsh" <cmarsh@.synergy-intl.com> wrote:
> [original question quoted above]

Shrink each file individually, not SHRINKDATABASE. In all likelihood, you
are using temporary tables to stage and then dropping them when you are done
with the import.
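As a rough sketch, it would look something like this; the database and
logical file names are placeholders, so run sp_helpfile in the database
first to get the real ones:

USE CustomerStaging;   -- placeholder name for the staging database
GO
EXEC sp_helpfile;      -- note the logical names of the data and log files
GO
DBCC SHRINKFILE (N'CustomerStaging_Data', 100);  -- data file, target ~100 MB
DBCC SHRINKFILE (N'CustomerStaging_Log', 10);    -- log file, target ~10 MB
GO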
Sincerely,
Anthony Thomas
"Chris Marsh" <cmarsh@.synergy-intl.com> wrote in message
news:uXH1EweHFHA.896@.TK2MSFTNGP10.phx.gbl...
[original question quoted above]

Have you confirmed if it's the data file or the log file that accounts
for the majority of the DB size? You can find out with a simple
sp_helpdb '<MyDB>'. Perhaps for this process (I'm guessing it's the
transaction log) you should set the recovery model to SIMPLE so that the
log doesn't grow large while preparing your customers' data.
Just out of curiosity, is there a reason you're throwing this data into
the master db? That's almost certainly a bad thing to do.
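For example (the database name here is just a placeholder):

EXEC sp_helpdb 'CustomerStaging';  -- compare the data file and log file sizes

-- Truncate the log on checkpoint so it doesn't keep growing during imports.
ALTER DATABASE CustomerStaging SET RECOVERY SIMPLE;
GO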
--
mike hodgson | database administrator | mallesons stephen jaques
T +61 (2) 9296 3668 | F +61 (2) 9296 3885 | M +61 (408) 675 907
E mailto:mike.hodgson@.mallesons.nospam.com | W http://www.mallesons.com
Chris Marsh wrote:
> [original question quoted above]

Mike,
Thank you for the reply. I worded my question poorly by stating that I was
using the "master database". I should have been clearer and said our own
master database; I obviously need to come up with a new term for our
default install database. What I did double-check is that I am using the
"simple" recovery model, and with almost no records in the database it
appears I can get it down to 87 MB. I will assume that some of the required
default data we include is what's taking up the space. That's a lot better
than when I first sent my email.
I came from the VFP (FoxPro) data world, and the SQL Server database is
easily 4-5 times larger than VFP's.
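For anyone curious, sp_spaceused makes it easy to confirm where that
remaining space is going; the table name below is just an example:

EXEC sp_spaceused;                     -- run in the database: overall reserved/unused space
EXEC sp_spaceused 'RequiredDefaults';  -- per-table usage for an example table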
Thanks again.
Chris
"Mike Hodgson" <mike.hodgson@.mallesons.nospam.com> wrote in message =news:%23i%23KDlgHFHA.3196@.TK2MSFTNGP15.phx.gbl...
[Mike's reply quoted above]

Aaron,
Very interesting thought! Are you suggesting doing that with a script? If
so, how do I deal with some of the default data that we do not import but
that is required to make the application function? In fact, if I may, how do
you suggest one deals with general updates to an application? We are
updating stored procs, tables, views, etc. Currently we have been doing this
by hand, but there must be a better method. Any suggestions?
Thanks,
Chris
"Aaron [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
news:BE492D4F.1826%ten.xoc@.dnartreb.noraa...
> [Aaron's reply and the original question quoted above]

Anthony,
Thank you for the reply. That also might be a good option to try.
Chris
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:uY3NORgHFHA.2620@.tk2msftngp13.phx.gbl...
> [Anthony's reply and the original question quoted above]
