



How do I reduce transaction log backup size after a full backup?

















I have three maintenance plans set up to run on a SQL Server 2005 instance:

  • Weekly database optimisations followed by a full backup

  • Daily differential backup

  • Hourly transaction log backups

The hourly log backups are usually between a few hundred KB and 10 MB depending on the level of activity, the daily differentials usually grow to around 250 MB by the end of the week, and the weekly full backup is about 3.5 GB.

The problem I have is that the optimisations before the full backup seem to cause the next transaction log backup to grow to over twice the size of the full backup (8 GB in this case) before returning to normal.

Other than BACKUP LOG <DatabaseName> WITH TRUNCATE_ONLY, is there any way to reduce the size of that log backup, or to prevent the optimisations from being recorded in the transaction log at all? Surely they will be accounted for in the full backup they precede.




































































sql-server sql-server-2005














edited May 27 '09 at 13:04







Dave

















asked May 27 '09 at 12:23









Dave








  • 1





    +1 Upvoted this because of the exchange of ideas produced by this question.

    – MarlonRibunal
    May 27 '09 at 18:24
























7 Answers
























33














Some interesting suggestions here, which all seem to show a misunderstanding about how log backups work. A log backup contains ALL transaction log generated since the previous log backup, regardless of what full or differential backups are taken in the interim. Stopping log backups or moving to daily full backups will have no effect on the log backup sizes. The only thing that truncates the transaction log is a log backup, once the log backup chain has started.



The only exception to this rule is if the log backup chain has been broken (e.g. by going to the SIMPLE recovery model, reverting from a database snapshot, truncating the log using BACKUP LOG WITH NO_LOG/TRUNCATE_ONLY), in which case the first log backup will contain all the transaction log since the last full backup - which restarts the log backup chain; or if the log backup chain hasn't been started - when you switch into FULL for the first time, you operate in a kind of pseudo-SIMPLE recovery model until the first full backup is taken.



To answer your original question: without going into the SIMPLE recovery model, you're going to have to suck up backing up all the transaction log. Depending on the actions you're taking, you could take more frequent log backups to reduce their size, or do more targeted database maintenance.



If you can post some info about the maintenance ops you're doing, I can help you optimize them. Are you, by any chance, doing index rebuilds followed by a shrink database to reclaim the space used by the index rebuilds?



If you have no other activity in the database while the maintenance is occurring, you could do the following:




  • make sure user activity is stopped

  • take a final log backup (this allows you to recover right up to the point of maintenance starting)

  • switch to the SIMPLE recovery model

  • perform maintenance - the log will truncate on each checkpoint

  • switch to the FULL recovery model and take a full backup

  • continue as normal
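That sequence might look like this in T-SQL. This is only a sketch: the database name and backup paths are placeholders, and the steps assume no concurrent user activity, as described above.

```sql
-- Final log backup before maintenance (preserves point-in-time recovery up to here)
BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_PreMaint.trn';

-- Switch to SIMPLE so the log truncates on each checkpoint during maintenance
ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE;

-- ... run the index maintenance / optimisations here ...

-- Switch back to FULL, then take a full backup to restart the log backup chain
ALTER DATABASE [MyDatabase] SET RECOVERY FULL;
BACKUP DATABASE [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_PostMaint.bak';

-- Subsequent log backups will be normal-sized again
```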




Hope this helps - looking forward to more info.



Thanks



[Edit: after all the discussion about whether a full backup can alter the size of a subsequent log backup (it can't) I put together a comprehensive blog post with background material and a script that proves it. Check it out at https://www.sqlskills.com/blogs/paul/misconceptions-around-the-log-and-log-backups-how-to-convince-yourself/]



























  • 4





    Paul - totally disagree. Just don't do log backups during the index maintenance. The log will grow, and the next full backup will be larger, but you won't have the performance hit of t-log backups occurring at the same time as your index maintenance jobs. Can you see the merit of that? Surely you would agree that simultaneous t-log backups and index maintenance would cause a performance hit.

    – Brent Ozar
    May 27 '09 at 16:49






  • 5





    No - I would still disagree. I'd rather have more frequent log backups so they are smaller, rather than one monster one after all the maintenance is done. Having disproportionately sized log backups can lead to problems copying them across the network (e.g. for offsite backups or log shipping). If there's no user activity and no other need for the log backups, then maybe, but if the system crashes and you have to do a tail-of-the-log backup, that's going to take a lot of time that's part of your downtime. I should do a blog post about this.

    – Paul Randal
    May 27 '09 at 17:23











  • And even that doesn't help the OP's original question of how to reduce the size of the log backup following the index maintenance - in fact it will potentially make it bigger, depending on what operations are being done.

    – Paul Randal
    May 27 '09 at 17:24



















5














You could shrink them, but they will just grow again, eventually causing disk fragmentation. Index rebuilds and defrags generate very large transaction logs. If you don't need point-in-time recoverability, you could change to Simple recovery mode and do away with transaction log backups entirely.



I'm guessing you're using a maintenance plan for the optimizations; you could change it to use a script that defragments indexes only when a certain fragmentation level is reached, and you would not likely suffer any performance hit. This would generate much smaller logs.
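As a sketch of what such a script could check (the index name and the thresholds in the comments are illustrative rules of thumb, not from this answer):

```sql
-- List indexes with their fragmentation so a script can decide what to touch
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND i.name IS NOT NULL;

-- Common rule of thumb: REORGANIZE between roughly 10% and 30%, REBUILD above 30%
-- ALTER INDEX [IX_SomeIndex] ON [dbo].[SomeTable] REORGANIZE;
-- ALTER INDEX [IX_SomeIndex] ON [dbo].[SomeTable] REBUILD;
```

A REORGANIZE is always fully logged but touches only fragmented pages, so skipping barely-fragmented indexes directly reduces the log generated.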



Incidentally, I would skip the daily differentials in favor of daily full backups.






























  • I suppose I could just do a straight TRUNCATE LOG on the end of the full backup, but that doesn't exactly seem like the best method, I was hoping for some alternatives... What would be the benefits of doing daily full backups rather than diffs? That just seems to use more space for relatively little benefit. I also can't switch to simple recovery as I need the level of granularity the hourly log backups give. Finally, I'm unsure how moving the optimisations to a script would help, surely I'd still have the same problem just less frequently?

    – Dave
    May 27 '09 at 12:46











  • I downvoted this because of the suggestion to skip diffs and go to daily fulls. Why? Fulls a 3.5GB whereas diffs are only 250MB. The backup strategy looks absolutely fine to me. Removing diffs means many GBs more storage for only a tiny, tiny speedup in restore time.

    – Paul Randal
    May 27 '09 at 16:20






  • 2





Everyone's situation is different, and there's nothing wrong with diffs, but unless space is at a premium, if you need to recover quickly it's better to have one step than two.

    – SqlACID
    May 27 '09 at 19:20






  • 1





    @Dave See Jeremy's response, create a stored procedure to defrag specific files, break it up into smaller chunks.

    – SqlACID
    May 27 '09 at 23:02



















3














Your final question was: "Other than BACKUP LOG WITH TRUNCATE_ONLY, is there any way to reduce the size of that log backup, or prevent the optimisations from being recorded in the transaction log at all, as surely they will be accounted for in the full backup they precede?"



No, but here's a workaround. If you know that the only activity in that database at that time will be the index maintenance jobs, then you can stop transaction log backups before the index maintenance starts. For example, on some of my servers the Saturday-night job schedules look like this:




  • 9:30 PM - transaction log backup runs.

  • 9:45 PM - transaction log backup runs for the last time. The schedule stops at 9:59.

  • 10:00 PM - index maintenance job starts and has built-in stops to finish before 11:30.

  • 11:30 PM - full backup job starts and finishes in under 30 minutes.

  • 12:00 AM - transaction log backups start again every 15 minutes.


That means I don't have point-in-time recoverability between 9:45 and 11:30pm, but the payoff is faster performance.
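If the log backups run as a SQL Agent job, the stop/start around the maintenance window can be scripted rather than done with a gap in the schedule; a sketch, where the job name is a placeholder for your own log backup job:

```sql
-- Disable the transaction log backup job before index maintenance starts
EXEC msdb.dbo.sp_update_job @job_name = N'Hourly TLog Backup', @enabled = 0;

-- ... index maintenance job and the full backup run here ...

-- Re-enable the log backup job once the full backup has completed
EXEC msdb.dbo.sp_update_job @job_name = N'Hourly TLog Backup', @enabled = 1;
```

Note (as the comments below discuss) that unless the recovery model is also switched, the first log backup after re-enabling will still contain all the log generated during the maintenance window.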






























  • And you must switch to SIMPLE just before 10PM, right? Otherwise the 12AM log backup will contain all the log generated between 10PM and 12AM.

    – Paul Randal
    May 27 '09 at 16:32











  • Oops forgot to mention I downvoted this too because you didn't mention being in SIMPLE. Staying in BULK_LOGGED even will not change the size of the next log backup as it will pick up all data extents changed by minimally-logged operations. Wow - downvoted every answer to this so far.

    – Paul Randal
    May 27 '09 at 16:34











  • NO, definitely not switch to simple. He asked about the size of his transaction log backups, not the size of his full backups or his transaction log file.

    – Brent Ozar
    May 27 '09 at 16:46











  • So how does what you do reduce the size of transaction log backups? They will contain everything since the previous log backup, unless you're breaking the log backup chain and then restarting it with the full backup.

    – Paul Randal
    May 27 '09 at 16:50











  • Unless your index maintenance job doesn't do anything...

    – Paul Randal
    May 27 '09 at 16:50



















3














Easy answer: change your weekly optimization job to run in a more balanced manner on a nightly basis, i.e. re-index tables a-e on Sunday night, f-l on Monday night, and so on. Find a good balance and your log will be roughly 1/6th of the size on average. Of course, this works best if you aren't using the built-in SSIS index maintenance job.



The downside to this (and it's significant, depending on the load your db experiences) is that it wreaks havoc with the optimizer and the re-use of query plans.



But if all you care about is the size of your t-log backups on a weekly basis, split the work up from day to day or hour to hour and run the t-log backups in between.
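One way to drive such a split is to pick each night's tables by name range. A sketch that only prints the commands rather than running them, with a placeholder range:

```sql
-- Build ALTER INDEX ... REBUILD statements for tables whose names start with a-e;
-- run a different range each night to spread the log load across the week
SELECT 'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' REBUILD;'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name LIKE '[a-e]%'
ORDER BY t.name;
```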





































    2














    You might also look into a third party tool (Litespeed from Quest, SQL Backup from Red Gate, Hyperbac) to reduce the sizes of the backups and logs. They can pay for themselves quickly in tape savings.





































      2














      It can probably be assumed that your "optimizations" include index rebuilds. Only performing these tasks on a weekly basis may be acceptable on a database that does not encounter a great deal of updates and inserts, however if your data is highly fluid you may want to do a couple of things:




      1. Rebuild or reorganize your indexes nightly if your schedule permits and if the impact is acceptable. When performing these nightly index maintenance tasks target only those indexes that are fragmented beyond say 30% for rebuilds and in the range of 15-30% for reorgs.


      2. These tasks are logged transactions, so if you're concerned about log growth I would advocate what Paul recommended: take a final transaction log backup prior to index maintenance, switch to Simple recovery, perform the maintenance, then switch back to Full recovery and take a full database backup. That should do the trick.



      I take a zen-like approach to my log files: they are the size they want to be. So long as they've not endured aberrant growth due to poor backup practices in comparison to database activity, that is the mantra I live by.



      As for scripts that perform the discretionary index maintenance, look online: there are a ton out there. Andrew Kelly published a decent one in SQL Magazine about a year ago, SQLServerPedia has some scripts from Michelle Ufford, and the latest issue of SQL Magazine (July 2009, I believe) has a full article on the topic as well. The point is to find one that works well for you and make it your own with minimal customizations.





































        2














        Can you specially backup your transaction log at various points during your database optimization? The total size of the t-logs would be the same, but each one would be smaller, possibly helping you in some way.



        Can you do more targeted database optimization so fewer transactions are created (someone mentioned this, but I'm not sure the implications were spelled out), such as tolerating a certain amount of fragmentation or wasted space for a while? If 40% of your tables are only 5% fragmented, not touching them could save quite a bit of activity.































          Your Answer








          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "2"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: true,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fserverfault.com%2fquestions%2f12793%2fhow-do-i-reduce-transaction-log-backup-size-after-a-full-backup%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown

























          7 Answers
          7






          active

          oldest

          votes








          7 Answers
          7






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          33














          Some interesting suggestions here, which all seem to show misunderstanding about how log backups work. A log backup contains ALL transaction log generated since the previous log backup, regardless of what full or differential backups are taken in the interim. Stopping log backups or moving to daily full backups will have no effect on the log backup sizes. The only thing that affects the transaction log is a log backup, once the log backup chain has started.



          The only exception to this rule is if the log backup chain has been broken (e.g. by going to the SIMPLE recovery model, reverting from a database snapshot, truncating the log using BACKUP LOG WITH NO_LOG/TRUNCATE_ONLY), in which case the first log backup will contain all the transaction log since the last full backup - which restarts the log backup chain; or if the log backup chain hasn't been started - when you switch into FULL for the first time, you operate in a kind of pseudo-SIMPLE recovery model until the first full backup is taken.



          To answer your original question, without going into the SIMPLE recovery model, you're going to have to suck up backing up all the transaction log. Depending on the actions you're taking, you could take more frequent log backups to reduce their size, or do more targeted database.



          If you can post some info about the maintenance ops you're doing, I can help you optimize them. Are you, by any chance, doing index rebuilds followed by a shrink database to reclaim the space used by the index rebuilds?



          If you have no other activity in the database while the maintenance is occuring, you could do the following:




          • make sure user activity is stopped

          • take a final log backup (this allows you to recover right up to the point of maintenance starting)


            • switch to the SIMPLE recovery model

            • perform maintenance - the log will truncate on each checkpoint

            • switch to the FULL recovery model and take a full backup

            • continue as normal




          Hope this helps - looking forward to more info.



          Thanks



          [Edit: after all the discussion about whether a full backup can alter the size of a subsequent log backup (it can't) I put together a comprehensive blog post with background material and a script that proves it. Check it out at https://www.sqlskills.com/blogs/paul/misconceptions-around-the-log-and-log-backups-how-to-convince-yourself/]






          share|improve this answer





















          • 4





            Paul - totally disagree. Just don't do log backups during the index maintenance. The log will grow, and the next full backup will be larger, but you won't have the performance hit of t-log backups occurring at the same time as your index maintenance jobs. Can you see the merit of that? Surely you would agree that simultaneous t-log backups and index maintenance would cause a performance hit.

            – Brent Ozar
            May 27 '09 at 16:49






          • 5





            No - I would still disagree. I'd rather have more frequent log backups so they are smaller, rather than one monster one after all the maintenance is done. Having disproportionately sized log backups can lead to problems copying them across the network (e.g. for offsite backups or log shipping). If there's no user activity and no other need for the log backups, then maybe, but if the system crashes and you have to do a tail-of-the-log backup, that's going to take a lot of time that's part of your downtime. I should do a blog post about this.

            – Paul Randal
            May 27 '09 at 17:23











          • And even that doesn't help the OP's original question of how to reduce the size of the log backup following the index maintenance - in fact it will potentially make it bigger, depending on what operations are being done.

            – Paul Randal
            May 27 '09 at 17:24
















          33














          Some interesting suggestions here, which all seem to show misunderstanding about how log backups work. A log backup contains ALL transaction log generated since the previous log backup, regardless of what full or differential backups are taken in the interim. Stopping log backups or moving to daily full backups will have no effect on the log backup sizes. The only thing that affects the transaction log is a log backup, once the log backup chain has started.



          The only exception to this rule is if the log backup chain has been broken (e.g. by going to the SIMPLE recovery model, reverting from a database snapshot, truncating the log using BACKUP LOG WITH NO_LOG/TRUNCATE_ONLY), in which case the first log backup will contain all the transaction log since the last full backup - which restarts the log backup chain; or if the log backup chain hasn't been started - when you switch into FULL for the first time, you operate in a kind of pseudo-SIMPLE recovery model until the first full backup is taken.



          To answer your original question, without going into the SIMPLE recovery model, you're going to have to suck up backing up all the transaction log. Depending on the actions you're taking, you could take more frequent log backups to reduce their size, or do more targeted database.



          If you can post some info about the maintenance ops you're doing, I can help you optimize them. Are you, by any chance, doing index rebuilds followed by a shrink database to reclaim the space used by the index rebuilds?



          If you have no other activity in the database while the maintenance is occuring, you could do the following:




          • make sure user activity is stopped

          • take a final log backup (this allows you to recover right up to the point of maintenance starting)


            • switch to the SIMPLE recovery model

            • perform maintenance - the log will truncate on each checkpoint

            • switch to the FULL recovery model and take a full backup

            • continue as normal




          Hope this helps - looking forward to more info.



          Thanks



          [Edit: after all the discussion about whether a full backup can alter the size of a subsequent log backup (it can't) I put together a comprehensive blog post with background material and a script that proves it. Check it out at https://www.sqlskills.com/blogs/paul/misconceptions-around-the-log-and-log-backups-how-to-convince-yourself/]






          share|improve this answer





















          • 4





            Paul - totally disagree. Just don't do log backups during the index maintenance. The log will grow, and the next full backup will be larger, but you won't have the performance hit of t-log backups occurring at the same time as your index maintenance jobs. Can you see the merit of that? Surely you would agree that simultaneous t-log backups and index maintenance would cause a performance hit.

            – Brent Ozar
            May 27 '09 at 16:49






          • 5





            No - I would still disagree. I'd rather have more frequent log backups so they are smaller, rather than one monster one after all the maintenance is done. Having disproportionately sized log backups can lead to problems copying them across the network (e.g. for offsite backups or log shipping). If there's no user activity and no other need for the log backups, then maybe, but if the system crashes and you have to do a tail-of-the-log backup, that's going to take a lot of time that's part of your downtime. I should do a blog post about this.

            – Paul Randal
            May 27 '09 at 17:23











          • And even that doesn't help the OP's original question of how to reduce the size of the log backup following the index maintenance - in fact it will potentially make it bigger, depending on what operations are being done.

            – Paul Randal
            May 27 '09 at 17:24














          33












          33








          33







          Some interesting suggestions here, which all seem to show misunderstanding about how log backups work. A log backup contains ALL transaction log generated since the previous log backup, regardless of what full or differential backups are taken in the interim. Stopping log backups or moving to daily full backups will have no effect on the log backup sizes. The only thing that affects the transaction log is a log backup, once the log backup chain has started.



          The only exception to this rule is if the log backup chain has been broken (e.g. by going to the SIMPLE recovery model, reverting from a database snapshot, truncating the log using BACKUP LOG WITH NO_LOG/TRUNCATE_ONLY), in which case the first log backup will contain all the transaction log since the last full backup - which restarts the log backup chain; or if the log backup chain hasn't been started - when you switch into FULL for the first time, you operate in a kind of pseudo-SIMPLE recovery model until the first full backup is taken.



          To answer your original question, without going into the SIMPLE recovery model, you're going to have to suck up backing up all the transaction log. Depending on the actions you're taking, you could take more frequent log backups to reduce their size, or do more targeted database.



          If you can post some info about the maintenance ops you're doing, I can help you optimize them. Are you, by any chance, doing index rebuilds followed by a shrink database to reclaim the space used by the index rebuilds?



          If you have no other activity in the database while the maintenance is occuring, you could do the following:




          • make sure user activity is stopped

          • take a final log backup (this allows you to recover right up to the point of maintenance starting)


            • switch to the SIMPLE recovery model

            • perform maintenance - the log will truncate on each checkpoint

            • switch to the FULL recovery model and take a full backup

            • continue as normal




          Hope this helps - looking forward to more info.



          Thanks



          [Edit: after all the discussion about whether a full backup can alter the size of a subsequent log backup (it can't) I put together a comprehensive blog post with background material and a script that proves it. Check it out at https://www.sqlskills.com/blogs/paul/misconceptions-around-the-log-and-log-backups-how-to-convince-yourself/]






          share|improve this answer















          Some interesting suggestions here, which all seem to show misunderstanding about how log backups work. A log backup contains ALL transaction log generated since the previous log backup, regardless of what full or differential backups are taken in the interim. Stopping log backups or moving to daily full backups will have no effect on the log backup sizes. The only thing that affects the transaction log is a log backup, once the log backup chain has started.



There are two exceptions to this rule. First, if the log backup chain has been broken (e.g. by switching to the SIMPLE recovery model, reverting from a database snapshot, or truncating the log using BACKUP LOG WITH NO_LOG/TRUNCATE_ONLY), the first log backup afterwards will contain all the transaction log since the last full backup, which restarts the log backup chain. Second, if the log backup chain hasn't been started yet: when you switch into FULL for the first time, you operate in a kind of pseudo-SIMPLE recovery model until the first full backup is taken.



To answer your original question: without going into the SIMPLE recovery model, you're going to have to accept backing up all the transaction log. Depending on the actions you're taking, you could take more frequent log backups to reduce their individual size, or do more targeted database maintenance.



          If you can post some info about the maintenance ops you're doing, I can help you optimize them. Are you, by any chance, doing index rebuilds followed by a shrink database to reclaim the space used by the index rebuilds?



If you have no other activity in the database while the maintenance is occurring, you could do the following:




• make sure user activity is stopped

• take a final log backup (this allows you to recover right up to the point the maintenance starts)

• switch to the SIMPLE recovery model

• perform maintenance (the log will truncate on each checkpoint)

• switch to the FULL recovery model and take a full backup

• continue as normal
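If it helps, the sequence above might look something like this in T-SQL (the database name and backup paths are placeholders; adjust them for your environment):

```sql
-- Sketch of the maintenance window described above.
-- 1. With user activity stopped, take a final log backup
BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_pre_maint.trn';

-- 2. Switch to SIMPLE so the log truncates on each checkpoint
ALTER DATABASE YourDB SET RECOVERY SIMPLE;

-- 3. Perform the index maintenance here

-- 4. Switch back to FULL and take a full backup,
--    which restarts the log backup chain
ALTER DATABASE YourDB SET RECOVERY FULL;
BACKUP DATABASE YourDB TO DISK = N'X:\Backups\YourDB_post_maint.bak';
```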




          Hope this helps - looking forward to more info.



          Thanks



          [Edit: after all the discussion about whether a full backup can alter the size of a subsequent log backup (it can't) I put together a comprehensive blog post with background material and a script that proves it. Check it out at https://www.sqlskills.com/blogs/paul/misconceptions-around-the-log-and-log-backups-how-to-convince-yourself/]







edited Jan 15 at 14:09 by kasperd










answered May 27 '09 at 16:29 by Paul Randal








• Paul - totally disagree. Just don't do log backups during the index maintenance. The log will grow, and the next full backup will be larger, but you won't have the performance hit of t-log backups occurring at the same time as your index maintenance jobs. Can you see the merit of that? Surely you would agree that simultaneous t-log backups and index maintenance would cause a performance hit.
  – Brent Ozar
  May 27 '09 at 16:49

• No - I would still disagree. I'd rather have more frequent log backups so they are smaller, rather than one monster one after all the maintenance is done. Having disproportionately sized log backups can lead to problems copying them across the network (e.g. for offsite backups or log shipping). If there's no user activity and no other need for the log backups, then maybe, but if the system crashes and you have to do a tail-of-the-log backup, that's going to take a lot of time that's part of your downtime. I should do a blog post about this.
  – Paul Randal
  May 27 '09 at 17:23

• And even that doesn't help the OP's original question of how to reduce the size of the log backup following the index maintenance - in fact it will potentially make it bigger, depending on what operations are being done.
  – Paul Randal
  May 27 '09 at 17:24




You could shrink them, but they will just grow again, eventually causing disk fragmentation. Index rebuilds and defrags generate very large transaction logs. If you don't need point-in-time recoverability, you could change to the SIMPLE recovery model and do away with transaction log backups entirely.



I'm guessing you're using a maintenance plan for the optimizations; you could change it to use a script that defragments indexes only when a certain fragmentation level is reached, and you would not likely suffer any performance hit. This would generate much smaller logs.
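As a rough sketch of such a script (the 10% and 30% fragmentation cutoffs are illustrative only; tune them for your workload):

```sql
-- Build ALTER INDEX statements only for indexes above a fragmentation
-- threshold, reorganizing moderately fragmented ones and rebuilding
-- heavily fragmented ones.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
             + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
             + CASE WHEN ps.avg_fragmentation_in_percent > 30
                    THEN N' REBUILD;' ELSE N' REORGANIZE;' END
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o ON o.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE ps.avg_fragmentation_in_percent > 10  -- skip lightly fragmented indexes
  AND ps.index_id > 0;                      -- skip heaps

EXEC sys.sp_executesql @sql;
```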



          I would skip daily differentials in favor of daily full backups BTW.






• I suppose I could just do a straight TRUNCATE LOG on the end of the full backup, but that doesn't exactly seem like the best method, I was hoping for some alternatives... What would be the benefits of doing daily full backups rather than diffs? That just seems to use more space for relatively little benefit. I also can't switch to simple recovery as I need the level of granularity the hourly log backups give. Finally, I'm unsure how moving the optimisations to a script would help, surely I'd still have the same problem just less frequently?
  – Dave
  May 27 '09 at 12:46

• I downvoted this because of the suggestion to skip diffs and go to daily fulls. Why? Fulls are 3.5GB whereas diffs are only 250MB. The backup strategy looks absolutely fine to me. Removing diffs means many GBs more storage for only a tiny, tiny speedup in restore time.
  – Paul Randal
  May 27 '09 at 16:20

• Everyone's situation is different; there's nothing wrong with diffs, but unless space is at a premium, if you need to recover quickly it's better to have one step than two.
  – SqlACID
  May 27 '09 at 19:20

• @Dave See Jeremy's response; create a stored procedure to defrag specific files, break it up into smaller chunks.
  – SqlACID
  May 27 '09 at 23:02
















answered May 27 '09 at 12:34 by SqlACID













          Your final question was: "Other than BACKUP LOG WITH TRUNCATE_ONLY, is there any way to reduce the size of that log backup, or prevent the optimisations from being recorded in the transaction log at all, as surely they will be accounted for in the full backup they precede?"



No, but here's a workaround. If you know that the only activities in that database at that time will be the index maintenance jobs, then you can stop transaction log backups before the index maintenance starts. For example, on some of my servers, the Saturday night job schedules look like this:




          • 9:30 PM - transaction log backup runs.

          • 9:45 PM - transaction log backup runs for the last time. The schedule stops at 9:59.

          • 10:00 PM - index maintenance job starts and has built-in stops to finish before 11:30.

          • 11:30 PM - full backup job starts and finishes in under 30 minutes.

          • 12:00 AM - transaction log backups start again every 15 minutes.


That means I don't have point-in-time recoverability between 9:45 PM and 11:30 PM, but the payoff is faster performance.
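One way to implement the pause in log backups, assuming they run from a SQL Server Agent job (the job name below is hypothetical), is to disable the job before the maintenance window and re-enable it afterwards:

```sql
-- Disable the log-backup Agent job at 9:59 PM
EXEC msdb.dbo.sp_update_job @job_name = N'LogBackup - YourDB', @enabled = 0;

-- ... index maintenance and the full backup run here ...

-- Re-enable it at midnight so log backups resume on schedule
EXEC msdb.dbo.sp_update_job @job_name = N'LogBackup - YourDB', @enabled = 1;
```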






          • And you must switch to SIMPLE just before 10PM, right? Otherwise the 12AM log backup will contain all the log generated between 10PM and 12AM.

            – Paul Randal
            May 27 '09 at 16:32











          • Oops forgot to mention I downvoted this too because you didn't mention being in SIMPLE. Staying in BULK_LOGGED even will not change the size of the next log backup as it will pick up all data extents changed by minimally-logged operations. Wow - downvoted every answer to this so far.

            – Paul Randal
            May 27 '09 at 16:34











          • NO, definitely not switch to simple. He asked about the size of his transaction log backups, not the size of his full backups or his transaction log file.

            – Brent Ozar
            May 27 '09 at 16:46











          • So how does what you do reduce the size of transaction log backups? They will contain everything since the previous log backup, unless you're breaking the log backup chain and then restarting it with the full backup.

            – Paul Randal
            May 27 '09 at 16:50











          • Unless your index maintenance job doesn't do anything...

            – Paul Randal
            May 27 '09 at 16:50
















answered May 27 '09 at 14:23 by Brent Ozar













          • And you must switch to SIMPLE just before 10PM, right? Otherwise the 12AM log backup will contain all the log generated between 10PM and 12AM.

            – Paul Randal
            May 27 '09 at 16:32











          • Oops forgot to mention I downvoted this too because you didn't mention being in SIMPLE. Staying in BULK_LOGGED even will not change the size of the next log backup as it will pick up all data extents changed by minimally-logged operations. Wow - downvoted every answer to this so far.

            – Paul Randal
            May 27 '09 at 16:34











          • NO, definitely not switch to simple. He asked about the size of his transaction log backups, not the size of his full backups or his transaction log file.

            – Brent Ozar
            May 27 '09 at 16:46











          • So how does what you do reduce the size of transaction log backups? They will contain everything since the previous log backup, unless you're breaking the log backup chain and then restarting it with the full backup.

            – Paul Randal
            May 27 '09 at 16:50











          • Unless your index maintenance job doesn't do anything...

            – Paul Randal
            May 27 '09 at 16:50



















          • And you must switch to SIMPLE just before 10PM, right? Otherwise the 12AM log backup will contain all the log generated between 10PM and 12AM.

            – Paul Randal
            May 27 '09 at 16:32











          • Oops forgot to mention I downvoted this too because you didn't mention being in SIMPLE. Staying in BULK_LOGGED even will not change the size of the next log backup as it will pick up all data extents changed by minimally-logged operations. Wow - downvoted every answer to this so far.

            – Paul Randal
            May 27 '09 at 16:34











          • NO, definitely not switch to simple. He asked about the size of his transaction log backups, not the size of his full backups or his transaction log file.

            – Brent Ozar
            May 27 '09 at 16:46











          • So how does what you do reduce the size of transaction log backups? They will contain everything since the previous log backup, unless you're breaking the log backup chain and then restarting it with the full backup.

            – Paul Randal
            May 27 '09 at 16:50











          • Unless your index maintenance job doesn't do anything...

            – Paul Randal
            May 27 '09 at 16:50

















          And you must switch to SIMPLE just before 10PM, right? Otherwise the 12AM log backup will contain all the log generated between 10PM and 12AM.

          – Paul Randal
          May 27 '09 at 16:32





          And you must switch to SIMPLE just before 10PM, right? Otherwise the 12AM log backup will contain all the log generated between 10PM and 12AM.

          – Paul Randal
          May 27 '09 at 16:32













          Oops forgot to mention I downvoted this too because you didn't mention being in SIMPLE. Staying in BULK_LOGGED even will not change the size of the next log backup as it will pick up all data extents changed by minimally-logged operations. Wow - downvoted every answer to this so far.

          – Paul Randal
          May 27 '09 at 16:34





          Oops forgot to mention I downvoted this too because you didn't mention being in SIMPLE. Staying in BULK_LOGGED even will not change the size of the next log backup as it will pick up all data extents changed by minimally-logged operations. Wow - downvoted every answer to this so far.

          – Paul Randal
          May 27 '09 at 16:34













          NO, definitely not switch to simple. He asked about the size of his transaction log backups, not the size of his full backups or his transaction log file.

          – Brent Ozar
          May 27 '09 at 16:46





          NO, definitely not switch to simple. He asked about the size of his transaction log backups, not the size of his full backups or his transaction log file.

          – Brent Ozar
          May 27 '09 at 16:46













          So how does what you do reduce the size of transaction log backups? They will contain everything since the previous log backup, unless you're breaking the log backup chain and then restarting it with the full backup.

          – Paul Randal
          May 27 '09 at 16:50





          Unless your index maintenance job doesn't do anything...

          – Paul Randal
          May 27 '09 at 16:50





          3














          Easy answer: change your weekly optimization job to run in a more balanced manner on a nightly basis, i.e. re-index tables a-e on Sunday night, f-l on Monday night, etc. Find a good balance and your log will be roughly 1/6th of the size on average. Of course, this works best if you aren't using the built-in SSIS index maintenance job.

          The downside to this (and it's significant, depending on the load your db experiences) is that it wreaks havoc with the optimizer and the re-use of query plans.

          But if all you care about is the size of your t-log on a weekly basis, split the work up from day to day or hour to hour and run the t-log backups in between.
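          A minimal T-SQL sketch of that schedule (the database name, table names, and backup paths are all hypothetical, and this is not the answerer's actual script): each nightly job rebuilds one alphabetical slice of tables, and a log backup is taken right after each slice so no single log backup has to absorb the whole week's maintenance.

```sql
-- Sunday night: tables a-e (names and paths are illustrative only)
ALTER INDEX ALL ON dbo.Accounts  REBUILD;
ALTER INDEX ALL ON dbo.Customers REBUILD;
BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log_sun.trn';

-- Monday night: tables f-l
ALTER INDEX ALL ON dbo.Invoices REBUILD;
ALTER INDEX ALL ON dbo.Ledger   REBUILD;
BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log_mon.trn';

-- ...and so on through the week for the remaining tables.
```

          The regular scheduled log backups continue unchanged; the point is only that each one now covers roughly a sixth of the weekly maintenance activity instead of all of it.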






              answered May 27 '09 at 18:46







              Jeremy Lowell






























                  2














                  You might also look into a third party tool (Litespeed from Quest, SQL Backup from Red Gate, Hyperbac) to reduce the sizes of the backups and logs. They can pay for themselves quickly in tape savings.






                      answered May 27 '09 at 18:05









                      Steve Jones
























                          2














                          It can probably be assumed that your "optimizations" include index rebuilds. Performing these tasks only weekly may be acceptable on a database that does not see a great deal of updates and inserts; however, if your data is highly fluid you may want to do a couple of things:

                          1. Rebuild or reorganize your indexes nightly if your schedule permits and the impact is acceptable. When performing these nightly index maintenance tasks, target only those indexes that are fragmented beyond, say, 30% for rebuilds and in the 15-30% range for reorgs.

                          2. These tasks are logged transactions, so if you're concerned about log growth I would advocate what Paul recommended: a final transaction log backup prior to index maintenance, switch to Simple recovery, run the maintenance process, then switch back to Full recovery followed by a full data backup. That should do the trick.

                          I take a zen-like approach to my log files: they are the size they want to be. So long as they haven't endured aberrant growth due to poor backup practices relative to database activity, that is the mantra I live by.

                          As for scripts that perform this discretionary index maintenance, look online: there are a ton out there. Andrew Kelly published a decent one in SQL Magazine about a year ago, SQLServerPedia has some scripts from Michelle Ufford, and the latest issue of SQL Magazine (July 2009, I believe) has a full article on the topic as well. The point is to find one that works well for you and make it your own with minimal customizations.
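                          Point 1 can be sketched with the sys.dm_db_index_physical_stats DMV. The thresholds (rebuild above 30%, reorganize at 15-30%) are the ones suggested in the answer; the database is assumed to be the current one, and the 100-page floor is a common rule of thumb added here, not part of the answer:

```sql
-- List indexes worth maintaining tonight, with a suggested action.
-- LIMITED mode scans only the upper b-tree levels, keeping the check cheap.
SELECT
    s.name AS schema_name,
    o.name AS table_name,
    i.name AS index_name,
    ps.avg_fragmentation_in_percent,
    CASE WHEN ps.avg_fragmentation_in_percent > 30
         THEN 'ALTER INDEX ... REBUILD'
         ELSE 'ALTER INDEX ... REORGANIZE'
    END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes  AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects  AS o ON o.object_id = ps.object_id
JOIN sys.schemas  AS s ON s.schema_id = o.schema_id
WHERE ps.avg_fragmentation_in_percent >= 15  -- below this, leave the index alone
  AND ps.index_id > 0                        -- skip heaps
  AND ps.page_count > 100;                   -- tiny indexes aren't worth touching
```

                          The ready-made maintenance scripts mentioned above wrap exactly this kind of query in a cursor that executes the generated ALTER INDEX statements.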






                              answered Jun 3 '09 at 20:10









                              Tim Ford
























                                  2














                                  Can you take extra transaction log backups at various points during your database optimization? The total size of the t-logs would be the same, but each one would be smaller, which may help in some way.

                                  Can you do more targeted database optimization so fewer transactions are created (someone mentioned this, but I'm not sure the implications were spelled out)? For example, tolerate a certain amount of fragmentation or wasted space for a while: if 40% of your tables are only 5% fragmented, not touching them could save quite a bit of activity.
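                                  The first suggestion might look like this (database, table, and path names are hypothetical): log backups are interleaved between the heavy maintenance steps, so the total logged volume is unchanged but no single .trn file ends up huge.

```sql
BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log_pre.trn';    -- flush the log before starting

ALTER INDEX ALL ON dbo.BigTable1 REBUILD;
BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log_step1.trn';  -- captures just that rebuild

ALTER INDEX ALL ON dbo.BigTable2 REORGANIZE;
BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log_step2.trn';  -- captures just the reorg
```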






                                      answered May 27 '09 at 18:37









                                      ErikE






























