Tuesday, February 14, 2012

cross database insert

I have an application inserting, deleting and changing records at a high
volume. Normally I get about 3 million of my transactions a second. For
each transaction I add a log entry to a table. My problem is that I wanted
to create a second database to store the log entries. Once I changed the
log insert to use the second database, it cut my transaction count down to
about 400K an hour. So is doing an insert to a second database always going
to be slow? Is there anything that can be done to improve performance? The
insert is done in a stored procedure in the first database.
Bob|||I can imagine two reasons why the cross database work makes it slower:
You do it all in one transaction, and going across databases makes the transaction implemented internally using a 2-phase commit protocol (more expensive than if inside one database).
You are pushing the log writes so that you end up waiting for physical writes, and if you have the transaction log files for the two databases on the same physical disk(s), you end up waiting for head-movement.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
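Tibor's first point can be sketched roughly as follows (all table and database names here are made up for illustration, not taken from the thread):

```sql
-- Hypothetical names: MainDb.dbo.WorkQueue and LogDb.dbo.EventLog.
USE MainDb;
GO
BEGIN TRANSACTION;
    INSERT INTO dbo.WorkQueue (Payload) VALUES ('item 1');
    -- Crossing into a second database inside the same transaction
    -- means the commit must harden both databases' transaction
    -- logs, coordinated internally with a two-phase commit.
    INSERT INTO LogDb.dbo.EventLog (EventTime, Message)
    VALUES (GETDATE(), 'inserted item 1');
COMMIT TRANSACTION;
```

The single-database version of the same work pays for one log flush per commit; the two-database version pays for the coordination plus a flush in each database.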
"Bob" <msgdev@.hotmail.com> wrote in message news:%23wv9my$VGHA.5668@.TK2MSFTNGP15.phx.gbl...
>I have an application inserting, deleting and changing records at a high volume. Normally I get
>about 3 million of my transactions a second. For each transaction I add a log entry to a table. My
>problem is that I wanted to create a second database to store the log entries. Once I changed the
>log insert to use the second database, it cut my transaction count down to about 400K an hour. So
>is doing an insert to a second database always going to be slow? Is there anything that can be
>done to improve performance? The insert is done in a stored procedure in the first database.
>
> Bob
>|||The only reason I am doing this is because of the size limitation of
SQL Express. I need more space for my logs. Does anyone have a suggestion on
another workaround besides upgrading to the full version of SQL Server?
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:OPUNT5$VGHA.5076@.TK2MSFTNGP14.phx.gbl...
>I can imagine two reasons why the cross database work makes it slower:
> You do it all in one transaction, and going across databases makes the
> transaction implemented internally using a 2-phase commit protocol (more
> expensive than if inside one database).
> You are pushing the log writes so that you end up with waiting for
> physical writes, and if you have the transaction log files for the two
> databases on the same physical disk(s), you end up waiting for
> head-movement.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "Bob" <msgdev@.hotmail.com> wrote in message
> news:%23wv9my$VGHA.5668@.TK2MSFTNGP15.phx.gbl...
>|||The cross database insert is always going to do a distributed transaction
and distributed commits get serialized so it's going to be slow. The people
around here that do this have several empty databases created and when they
get above a certain size, they change the database name for their inserts.
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
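One way to sketch the rotation Roger describes without rewriting the stored procedures is to insert through a synonym and repoint it when the current log database nears the size cap. Synonyms require SQL Server 2005 or later, and every name below is illustrative, not from the thread:

```sql
-- Procedures always insert into dbo.CurrentLog, never into a
-- database name directly.
CREATE SYNONYM dbo.CurrentLog FOR LogDb1.dbo.EventLog;
GO
-- Later, when LogDb1 approaches the Express size limit, swap the
-- synonym to point at the next pre-created empty database:
DROP SYNONYM dbo.CurrentLog;
GO
CREATE SYNONYM dbo.CurrentLog FOR LogDb2.dbo.EventLog;
GO
```

Note that inserts through the synonym are still cross-database inserts, so this addresses the size limit, not the distributed-commit cost.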
"Bob" <msgdev@.hotmail.com> wrote in message
news:eB2PzCAWGHA.5036@.TK2MSFTNGP15.phx.gbl...
> The only reason I am doing this is because of the size limitation of
> SQLExpress. I need more space for my logs. Anyone have any suggestion on
> another work around beside upgrading to the full version of SQL server?
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote
> in message news:OPUNT5$VGHA.5076@.TK2MSFTNGP14.phx.gbl...
That's going to be hard in my situation. I have several processes adding
records to a table. A second process does a select on that table, then
passes each record to a worker thread. The thread processes the data and
then deletes the original record from the table. After each event the
status is logged to another table. This logging is what is causing my
problem. It would be hard to switch tables since I could have about 100k
items queued to be processed at any time.
"Roger Wolter[MSFT]" <rwolter@.online.microsoft.com> wrote in message
news:%23bq9IHAWGHA.2360@.TK2MSFTNGP09.phx.gbl...
> The cross database insert is always going to do a distributed transaction
> and distributed commits get serialized so it's going to be slow. The
> people around here that do this have several empty databases created and
> when they get above a certain size, they change the database name for
> their inserts.
> --
> This posting is provided "AS IS" with no warranties, and confers no
> rights.
> Use of included script samples are subject to the terms specified at
> http://www.microsoft.com/info/cpyright.htm
> "Bob" <msgdev@.hotmail.com> wrote in message
> news:eB2PzCAWGHA.5036@.TK2MSFTNGP15.phx.gbl...
>|||How critical is losing a log record? You could try logging in a different
transaction, or even logging to a file, if you can afford to lose a log record
when the system loses power.
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
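Roger's separate-transaction idea might look something like this (again with illustrative names): commit the main work first, then write the log row as its own autocommit statement, accepting that a crash between the two steps can lose that one log record.

```sql
DECLARE @ItemId int;
SET @ItemId = 1;

-- Main work commits on its own, entirely inside the first database...
BEGIN TRANSACTION;
    DELETE FROM dbo.WorkQueue WHERE ItemId = @ItemId;
COMMIT TRANSACTION;

-- ...and the log write is a separate autocommit insert, so the
-- cross-database call no longer promotes the main transaction to a
-- two-phase commit. A failure here loses only this one log row.
INSERT INTO LogDb.dbo.EventLog (EventTime, Message)
VALUES (GETDATE(), 'processed item');
```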
"Bob" <msgdev@.hotmail.com> wrote in message
news:O$dwuaAWGHA.4360@.TK2MSFTNGP14.phx.gbl...
> That's going to be hard in my situation. I have several processes adding
> records to a table. A second process does a select on that table, then
> passes each record to a worker thread. The thread processes the data and
> then deletes the original record from the table. After each event the
> status is logged to another table. This logging is what is causing my
> problem. It would be hard to switch tables since I could have about 100k
> items queued to be processed at any time.
>
> "Roger Wolter[MSFT]" <rwolter@.online.microsoft.com> wrote in message
> news:%23bq9IHAWGHA.2360@.TK2MSFTNGP09.phx.gbl...
>
