
Sunday, March 25, 2012

Creating a T-SQL Stored Procedure to Truncate and Shrink ALL Log Files

Hello -
I'm hoping some T-SQL expert could help me figure out the code
necessary to implement a stored procedure that will truncate and
shrink the log files when run.
Let me give you a little background first. We are a small company
with an adequate, but not overwhelming, amount of disk space. But,
since we run SQL Server 2000, we do like the recovery model set to
Full. Each week, on back-to-back evenings (and on separate tapes), we
do a full backup, with incremental backups throughout the week. After
the 1st full backup, I would like to run a procedure that would look
at each database and then truncate & shrink the log file. Then for
the 2nd full backup, I'd have much smaller logs, obviously with the
ability to recover back to my original(s) via my 1st backup.
The code that works on individual databases, which I now run manually
(substituting the database name each time), is:
USE master
GO
-- MyDatabase / MyDatabase_log stand in for the real database and logical log file names
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY
GO
USE MyDatabase
GO
DBCC SHRINKFILE (MyDatabase_log)
I would like to have this placed in a stored procedure that would go
through each user database automatically. Unfortunately, I am not at
all familiar with the nuances of automatically changing databases nor
with working with stored procedures.
If anyone would/could be so kind as to provide me with the script that
would allow this to work as desired, I would be very very
appreciative. Thanks in advance for any and all help!
Rich

Scull,
I am a little confused as to what you're actually doing. You say you do a
FULL backup and then incrementals, but do not specify log backups at all, other
than the truncate-only. If you don't actually do log backups, why do you
want to be in FULL recovery mode? What good does shrinking the logs do
anyway? When you back up a log (or a database), the size of the backup file
is proportional to the amount of data in the file, not the size of the file
itself. So shrinking a log file will not give smaller backups. Shrinking
in and of itself is a dangerous act anyway, for the reasons you mention. If
you have limited space and you shrink the file, which will certainly grow
back again, what will happen when you need more space and don't have it? It
is better to allocate the space you need (now and in the future) up front and
never shrink the files. That way you ensure you always have the room
when you need it. If I misinterpreted what you're doing, please let me know so
we can work thru this.
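For instance, you can size the log up front once, instead of letting it
autogrow and shrinking it back (MyDB and MyDB_log are placeholder names;
pick a size based on what the log actually grows to):

ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 2000MB)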
Andrew J. Kelly
SQL Server MVP

Scull,
Perhaps you want to use WITH INIT (after the full database backup) to
overwrite the log backup file.
Also, look at INFORMATION_SCHEMA.SCHEMATA in BOL.
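For instance (the database name, file name, and path are placeholders):

-- INIT overwrites any existing backup sets in the file instead of appending
BACKUP LOG MyDB TO DISK = 'E:\Backup\MyDB_log.bak' WITH INIT

-- on SQL Server 2000 this returns one row per database
SELECT CATALOG_NAME FROM INFORMATION_SCHEMA.SCHEMATA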

First off, I want to thank you for your input - it's greatly appreciated!
I'm sorry I wasn't clear in terms of how we perform our backups. We use
a separate backup application for backing up all the SQL Server
databases and their respective transaction logs. We do FULL backups on
Friday and Saturday evening, and incremental backups throughout the
week.
My space issues arose when I converted all of my 7.0 databases to 2000
about 4 months ago. Rather quickly after converting the DBs, I noticed
we were running out of disk space. In reviewing one of the converted
databases that was affecting my disk space, I noticed an MDF file with
a size of about 1 GB and a transaction log with a size of 17 GB.
Obviously, I was like "Yikes."
Anyway, at this point in time I would rather keep the recovery model
set to Full so that we could perform point-in-time recoveries. I figure
the best way, given my disk space concerns, for me to ensure possible
point-in-time recovery is to take advantage of the fact that 2 full
backups are done each week. If after the 1st full backup, I can
truncate and shrink the transaction log before the 2nd full backup - I
will have a choice of restore points in the event something should go
awry.
Now while I know this is not necessarily the optimal solution, I do not
know of a better one at this point. I am obviously wide open to
suggestions/recommendations.
If no strategic process changes can be realized, I think I need to find
the answer to my T-SQL question concerning establishing a stored
procedure that I can schedule after the Friday backup but before the
Saturday backup - that will truncate and shrink the transaction log(s)
for each of the user databases automatically.
I will also check out the WITH INIT, etc. in the BOL.
Thanks in advance for any more feedback.
Rich

Rich,
Your solution is to issue regular log backups and forget about truncating
and shrinking the logs. If you want point-in-time recovery you have to be
in FULL mode. If you're in FULL mode the log file will keep growing and
growing, since it needs a log backup to properly back up the committed trans
and truncate them. If you don't issue log backups, what will happen if your
disk array dies and you must restore from scratch? You will only have up to
your last full and incremental backup. You have lost your ability to
restore up to the minute or to a point in time. If you issue regular log
backups, your tran log itself will stay at the size that it needs over a
point in time and should never be shrunk once it reaches that quiescent
state. Of course, your log backup files will now contain the data that used
to be in your log, so you need a place to store those, and hopefully it's
not on the same array as the db. That should solve your disk space issue and
alleviate the need to schedule a log truncation and shrinking. Just schedule
regular log backups. The interval is up to you. Every 15 minutes or 1/2 hour
should keep your log file from growing too much and keep you relatively safe
as well. If you're concerned about room to place your backup files you might
consider using a product called SQL LiteSpeed. It will reduce the time for
a backup and give you much smaller files to boot.
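The log backup itself is a one-liner you can put in a scheduled job (the
database name and path are placeholders):

-- run from a SQL Server Agent job every 15-30 minutes
BACKUP LOG MyDB TO DISK = 'F:\LogBackups\MyDB_log.bak' WITH NOINIT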
Andrew J. Kelly
SQL Server MVP

Rich,
I suggest you set up a SQL Server 2000 Agent job executing the following
T-SQL command:
DBCC SHRINKFILE(2, TRUNCATEONLY)
on the DB(s) in question. Set it to run off hours; the "2" should be for
the log (LDF) file, and a "1" should be for the MDF file.
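If you would rather check the file ids than assume them, you can look them
up per database (MyDB is a placeholder):

USE MyDB
SELECT fileid, name, filename FROM sysfiles -- the log normally has fileid = 2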
I'm still waiting myself to see a script that will parse thru the DBs
on a server and build the DBCC above for every DB.
good luck
Steven

You could do it in VB6 or .NET using:
Public oSQLServer As New SQLDMO.SQLServer
Dim oDatabase As SQLDMO.Database

oSQLServer.LoginSecure = True ' or pass a login and password to Connect
oSQLServer.Connect "MyServer"
For Each oDatabase In oSQLServer.Databases
    If oDatabase.Status <> SQLDMODBStat_Inaccessible Then
        Form1.List_Databases.AddItem oDatabase.Name
    End If
Next oDatabase
Then you can build up a string, using the FileSystemObject to write your
script for you.
Have a look at the SQLDMO object, it rocks!
I've even got some code to look round your network and come back with a
list of SQL Servers...
Tim Heap
Software & Database Manager
POSTAR Ltd
www.postar.co.uk
tim@postar.co.uk

Thanks for your reply. I have still been working on a solution and have
not had the opportunity to write back. I work for a relatively small
company where I wear a number of 'hats', of which being the DBA is just
one of them.
Anyway, I set up transaction log backups hourly throughout the day (I
don't have that many transactions - but do have a lot of databases) and
that is working fine. In tracking the transaction log size, I was able
to surmise that it grew immensely on late Saturday evenings, which is
the time I kick off my database maintenance plan, which includes
rebuilding indexes and optimizing the databases.
For example, one particular database whose MDF is 540 MB, and whose
transaction log held at 52 MB throughout the week, had its transaction log
grow to 468 MB after running the maintenance plan.
I am thinking that I should then 1) back up the transaction log with
truncate only, then 2) run the SHRINKFILE command to reduce the
transaction log to, say, 50 MB, and 3) then back up the entire database.
If that works, I would need to incorporate that into all of my db plans.
Any thoughts/concerns? Thanks again for everyone's assistance!!!
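Spelled out for one database, that sequence would look roughly like this
(the names, path, and the 50 MB target are placeholders):

BACKUP LOG MyDB WITH TRUNCATE_ONLY
GO
USE MyDB
DBCC SHRINKFILE (MyDB_log, 50) -- target size in MB
GO
BACKUP DATABASE MyDB TO DISK = 'E:\Backup\MyDB.bak' WITH INIT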

Try the following to truncate the transaction logs:
-- drop procedure xxx_SQL_Logspace_truncate_all
-- CREATE procedure xxx_SQL_Logspace_truncate_all as
USE master
GO
DECLARE @logname sysname
DECLARE @query varchar(500)
DECLARE LogList_cursor CURSOR FOR
SELECT name FROM master.dbo.sysdatabases
ORDER BY name
OPEN LogList_cursor
FETCH NEXT FROM LogList_cursor INTO @logname
WHILE @@FETCH_STATUS = 0
BEGIN
    -- add DBs you want to skip to the following list
    IF RTRIM(@logname) IN ('master', 'model', 'msdb', 'tempdb', 'pubs', 'Northwind')
        PRINT @logname + ' skipped'
    ELSE
    BEGIN
        -- shrink file id 2 (the log) in the target database's own context;
        -- DBCC SHRINKFILE needs to run inside the database being shrunk
        SET @query = 'USE [' + RTRIM(@logname) + '] DBCC SHRINKFILE(2, TRUNCATEONLY)'
        EXEC (@query)
    END
    FETCH NEXT FROM LogList_cursor INTO @logname
END
CLOSE LogList_cursor
DEALLOCATE LogList_cursor
-- the following will list all DBs' log space stats
DBCC SQLPERF(LOGSPACE)
GO
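If you would rather avoid the cursor, the undocumented sp_MSforeachdb runs
the same loop; the skip list here is only an example:

EXEC sp_MSforeachdb 'IF ''?'' NOT IN (''master'',''model'',''msdb'',''tempdb'')
BEGIN
    PRINT ''Shrinking log of ?''
    EXEC (''USE [?] DBCC SHRINKFILE(2, TRUNCATEONLY)'')
END'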

Wednesday, March 21, 2012

Creating a SQL Log file in Event Viewer

I believe I have seen this done, or at least read about it. In any event, how
can a person set up the Windows Event Viewer log to monitor and log specific
events in a specific SQL database?
TIA

Bryan,
SQL Server can be configured to write error messages to the event log. See
sp_addmessage and RAISERROR (WITH LOG) in the SQL Server Books Online. You
can also use xp_logevent to write job statuses to the log as well.
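A minimal sketch (the message number 60001 and the message text are
arbitrary):

-- define a user message (ids 50001 and up) that is always written to the event log
EXEC sp_addmessage @msgnum = 60001, @severity = 10,
    @msgtext = N'MyDB: %s', @with_log = 'true'

-- WITH LOG sends the message to the Windows Application event log
RAISERROR (60001, 10, 1, 'nightly load completed') WITH LOG

-- or log an ad hoc message without defining one first
EXEC master.dbo.xp_logevent 60001, 'MyDB: nightly load completed', 'informational'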
HTH
Jerry

Monday, March 19, 2012

Creating a queryable failover

Hi,
We have a failover database that is currently maintained using log shipping;
this means that the failover is unusable.
What we would like to do is use a tool such as replication to instead
ensure that the failover database is kept in sync, and we can also use
the failover database to run our reporting jobs.
We investigated SQL Server 2005 and found that with two-way replication
this would work wonderfully well, except for the fact that we are unable
to replicate new tables without creating a new snapshot.
As our databases are about 150 GB in size, creating and restoring a
snapshot takes about 2 days for SQL Server, meaning that every time we
add a new table the failover will be unavailable for 2 days, which is
unacceptable for both a failover and reporting.
Is there any other way we can do this? Happy to provide more details
if needed.
- Barry
barry at 5thfinger dot com
How about database mirroring and creating a snapshot of the mirrored database?
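Something along these lines on the mirror server (the names and the .ss path
are placeholders; NAME must match the mirrored database's logical data file
name):

-- reporting queries then run against Sales_Snap while Sales keeps mirroring
CREATE DATABASE Sales_Snap
ON (NAME = Sales_Data, FILENAME = 'E:\Snapshots\Sales_Snap.ss')
AS SNAPSHOT OF Sales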
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/

Barry - a simple way of achieving this setup is to use transactional
replication with queued updating subscribers. This can take care of identity
range issues and provide you with a means of getting new articles easily
incorporated into the setup.
As Tibor mentions, mirroring and snapshots are also now to be considered as
an alternative in this space, assuming your reporting requirements don't
include the use of FTI. This has the (huge) added advantage of automatic
failover if you have a witness and high safety mode enabled.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .

You might want to add the new articles to a different publication. This way
you would not have to create the entire snapshot. What form of replication
are you using? Merge? Bi-directional transactional replication and
peer-to-peer do not require the entire snapshot to be regenerated.
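As a rough sketch, adding a single article to an existing publication looks
like this (all names are placeholders; with the publication's @immediate_sync
set to 'false', the next Snapshot Agent run only generates a snapshot for the
new article):

EXEC sp_addarticle
    @publication = N'MyPublication',
    @article = N'NewTable',
    @source_owner = N'dbo',
    @source_object = N'NewTable'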
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com

Thursday, March 8, 2012

Creating a log file

I have a system which has several different databases involved, and I have created an update procedure within it all. It all works fine, but as a reference point I want to create a log file each time the procedure is run.

This file will have useful info such as the number of transactions added, the number of product rows and customer rows, any unmatched entries, and any exception records. What I don't know is how do I create such a file?

I was thinking maybe I could save the info into a table, and then somehow transfer this into a txt file? The txt file will be replaced each time the stored procedure is run... Does anyone know any links or ideas that might help me? I have never done anything like this before, but I have seen it done....

Thanks in advance!

Found a perfect article for you:
http://www.codeproject.com/useritems/Stored_Procedure_Log_File.asp

:beer: :beer: :beer:

It might be my paranoia, but I'm not all that happy with the proposed solution. xp_cmdshell is, after all, among the most severe security risks in SQL Server. Would it not be a better choice to have a log TABLE instead? You could always script a command-line tool to transfer the data to a text file on a regular basis.

Thanks for the replies!!!

roac, do you have an example site that does something like what you are saying?

I really can't seem to find anything useful on the net?!

What he meant to say is you have a log table, something like:

CREATE TABLE table_log (
    logid int IDENTITY(1,1) PRIMARY KEY,
    description varchar(1000) -- description will contain all relevant data you require
)

add a few more relevant columns.....

And each time, then, you keep on updating this log table... so instead of having a log file you have a log table, and you then need to query this table to get the description as to how many rows were inserted/updated.
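So the procedure just inserts one row per run, and you read the log back later; a sketch with made-up counts:

INSERT INTO table_log (description)
VALUES ('update run: 42 transactions added, 3 unmatched entries, 0 exceptions')

SELECT logid, description FROM table_log ORDER BY logid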

And his suggestion is the best way out.... However, if you want a log file then you will have to use xp_cmdshell :D

"You could always script a command line tool to transfer the data to a text file on a regular basis."

How do I do this bit??

You might look at BCP, which can extract data from a table to a file. It's pretty well documented in Books Online, I recall. You can use Task Scheduler in Windows or a SQL Server Agent job to schedule it.
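A command line along these lines would dump the table to a text file (the server name, table, and path are placeholders; -c writes plain character data and -T uses a trusted connection):

bcp "MyDB.dbo.table_log" out "C:\logs\update_log.txt" -c -T -S MYSERVER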

Another alternative would be to use the SQL Server Agent to run the job, and include in the stored procedure/code it runs such things as PRINT 'whatever data I want to log', and then have the SQL Server Agent write the job output to a log.

You can use the "print" statement within the stored procedures OR the job step command window, and it will be captured in the output log.

That way you can put the log anywhere on the network you want it to go, name it anything you want to name it, and not have to code anything special to create the log.

This is easily done by entering the code in the job step "command" box, similar to the following, which is a cut and paste from one of our maintenance jobs.
DELETE
FROM EventLog
WHERE DATEDIFF(day, [Date], getdate()) > 90

PRINT CONVERT(varchar(10), @@ROWCOUNT) + ' old rows were deleted.'

DECLARE @Count int
SELECT @Count = (SELECT COUNT(*) FROM EventLog)
PRINT CONVERT(varchar(10), @Count) + ' rows remain in the table.'
PRINT ' ++ JOB COMPLETE ++'
In the "advanced" tab of the step there is a place where you can enter the path/filename of the "output" file, which then becomes the log you so desperately desire.

This is the output, cut and pasted from the generated log file:

Job 'DailyProd- Weekly EventLog Cleanup' : Step 1, 'Delete From EventLog Where Older Than 90 Days' : Began Executing 2005-06-09 15:45:06

2244 old rows were deleted. [SQLSTATE 01000]
334 rows remain in the table. [SQLSTATE 01000]
++ JOB COMPLETE ++ [SQLSTATE 01000]

And you can set up the job to overwrite the log file each time it runs, or append to the log file each time it runs.

Thanks SO much!!!

I've just tested it and it worked fine...
(I used the PRINT command that TallCowboy0614 spoke about.)

2 questions:
1) Is there any way of inserting blank lines so that it is easier to read?
2) How do I get it to stop adding '[SQLSTATE 01000]' after each line?

Thanks again

I think adding a PRINT ' ' (to print a blank line) will allow you that capability.

I don't think there is any way to avoid the [SQLSTATE 01000] at the end of each line, though - using this method, at least. If there is, I haven't been bothered by it enough to search for and/or implement it.

'I think adding a PRINT ' ' (to print a blank line) will allow you that capability.'

I thought that's what it would be, but it didn't work...
I also tried PRINT CHAR(13) + CHAR(10), but it doesn't work either...

:confused:

Try a couple of them in sequence. I have also noticed that sometimes stuff like that (formatting I try to do) doesn't seem to work as expected. I do get mixed results with both methods you describe, though. Again, it hasn't been enough of an issue in my applications to put much thought and time into fixing it.
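For reference, these are the variants being discussed; how they render can
vary with how the Agent captures the output:

PRINT 'first section'
PRINT '' -- usually shows up as an empty line in the job output
PRINT CHAR(13) + CHAR(10) -- an explicit CR/LF; may render as two blank lines
PRINT 'next section'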