Showing posts with label performance. Show all posts

Friday, March 30, 2012

Increased performance using 2 servers and a SAN

Hi
I need to increase the performance of our SQL environment and I don't really want
to put one BIG server in.
What would be the best way of load balancing 1 SQL database over 2 servers?
My first thought was 2 servers clustered with a SAN, using SQL replication
between two different instances. Is this possible, or is there a better way?
Thanks
Andy
SQL 2000 cannot be load balanced for a single database. A database can
only be run from one server at a time. Sorry.
Cheers,
Rod
MVP - Windows Server - Clustering
http://www.nw-america.com - Clustering
http://www.msmvps.com/clustering - Blog
"Andrew Gilbert" <AndrewGilbert@.discussions.microsoft.com> wrote in message
news:7CCD89F6-9AAA-4F49-A77B-095B809384AC@.microsoft.com...
> Hi
> I need to increase performance of our SQL enviroment and i dont really
> want
> to put one BIG server in.
> What would be the best way of load balancing 1 SQL database over 2
> servers?
> My first thought was 2 servers clustered with a SAN and use SQL
> replication
> between two differant Instances, is this possible or is there a better
> way?
> Thanks
> Andy
|||Hi
Using replication is possible, but your application has to understand that
records can be inserted in different databases, so if any identity columns
exist, expect issues.
SQL Server cannot be directly load balanced. Don't forget, a SAN may not
make IO quicker; it might actually be a lot slower (compared to local
storage).
What is your current hardware setup, DB sizes, transactions per minute and
what performance are you expecting?
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
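For what it's worth, a common workaround for the identity-column clash mentioned above is to stagger the identity seed and increment on each server so the two instances can never generate the same value. A minimal sketch (the table and seed values are hypothetical):

-- Server A: generates odd key values (seed 1, increment 2)
CREATE TABLE dbo.Orders (
    OrderID int IDENTITY(1, 2) NOT NULL PRIMARY KEY,
    OrderDate datetime NOT NULL
)
-- Server B would create the same table with IDENTITY(2, 2),
-- generating even key values, so the two ranges never collide.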
"Andrew Gilbert" <AndrewGilbert@.discussions.microsoft.com> wrote in message
news:7CCD89F6-9AAA-4F49-A77B-095B809384AC@.microsoft.com...
> Hi
> I need to increase performance of our SQL enviroment and i dont really
want
> to put one BIG server in.
> What would be the best way of load balancing 1 SQL database over 2
servers?
> My first thought was 2 servers clustered with a SAN and use SQL
replication
> between two differant Instances, is this possible or is there a better
way?
> Thanks
> Andy
|||Hi
We currently have an HP DL380 with 2x Xeon 2.8GHz processors and 2GB RAM.
We have 2 DBs and all logs on one set of RAID 5 10k disks, and the main DB on
its own mirrored set of 10k disks.
We are experiencing high disk queue lengths on the first set of disks, so we
were looking to use an HP MSA500 SAN and split all DBs and logs onto separate
mirrored disks.
The main DB is 500MB and the other DB, which is used for reference info, is
1.3GB.
We have about 180 users on the system and we seem to be getting slow
query times at peak load, i.e. 400-1000ms query times.
If you have any suggestions then please let me know. Would an MSA500 be a bad
idea?
Andy
"Mike Epprecht (SQL MVP)" wrote:

> Hi
> Using replication is possible, but your application has to understand that
> records can be inserted in different databases, so if any identity columns
> exist, expect issues.
> SQL Server can not be directly load balanced. Don't forget, a SAN may not
> make IO quicker, it might actually be a lot slower (compared to local
> storage).
> What is your current hardware setup, DB sizes, transactions per minute and
> what performance are you expecting?
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> IM: mike@.epprecht.net
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
> "Andrew Gilbert" <AndrewGilbert@.discussions.microsoft.com> wrote in message
> news:7CCD89F6-9AAA-4F49-A77B-095B809384AC@.microsoft.com...
> want
> servers?
> replication
> way?
>
>
|||While I am unfamiliar with the specific hardware you are considering, the
general idea of separate physical devices for Data and Log is a fundamental
key for increasing performance. Avoid RAID-5 even in a SAN for transaction
log devices. Larger write cache in a SAN device will lead to increased SQL
performance. Check and see if the high queue lengths are for read or for
write operations. If they are for read requests, increasing the RAM (and
moving to Enterprise SQL) may result in more performance improvements than
you would get through disk subsystem changes. Of course, the ideas are not
mutually exclusive.
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
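A sketch of the data/log separation Geoff describes, at the database-file level (drive letters, paths, and sizes are hypothetical); the point is simply that the data file and the transaction log sit on different physical devices:

CREATE DATABASE MainDB
ON PRIMARY (
    NAME = MainDB_data,
    FILENAME = 'D:\SQLData\MainDB_data.mdf',  -- data on one mirrored set
    SIZE = 500MB
)
LOG ON (
    NAME = MainDB_log,
    FILENAME = 'E:\SQLLogs\MainDB_log.ldf',   -- log on a separate mirrored set
    SIZE = 100MB
)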
"Andrew Gilbert" <AndrewGilbert@.discussions.microsoft.com> wrote in message
news:7B0503A3-DB49-4E1B-8D12-E2542575690B@.microsoft.com...
> Hi
> We currently have a HP DL380 with 2x Zeaon 2.8Ghz processors and 2GB RAM
> we have 2 DB and all Logs on one set of RAID 5 10k DIsks and the Main DB
on
> its own Mirrored set of 10K Disks.
> We are experiancing High Disk Queue lengths on the first set of Disk, so
we
> was looking to use a HP MSA500 SAN and split all DB and Logs onto seperate
> Mirrored Disks?
> The Main DB is 500mb and the other DB which is used for referance info is
> 1.3GB
> We have about 180 users using the system and we seem to be getting slow
> query times in peak load, i.e 400 - 1000ms query times.
> If you have any suggestion then please let me know, would a MSA500 be a
bad[vbcol=seagreen]
> idea?
> Andy
> "Mike Epprecht (SQL MVP)" wrote:
that[vbcol=seagreen]
columns[vbcol=seagreen]
not[vbcol=seagreen]
and[vbcol=seagreen]
message[vbcol=seagreen]
sql

Increase Performance with two connections

HI all,
I've two databases, DB1 and DB2, and an ASP.NET application using the data.
In DB2 I use a view which does some joins against DB1, like:
select * from MyTable left outer join DB1.dbo.users on ...
This works, but I have a time problem: the view is very slow.
The reason seems to be the connection to DB1.
In SQL Management Studio I can reproduce the situation: when I'm in DB2 and
try to "USE DB1" it takes something like >=2 seconds on my local machine.
After running "USE DB1" the view works great - very fast (SQL Server
seemed to cache the connection).
But the connection caching works only for some minutes - when "falling back"
the view is slow again.
So, I'm looking for a solution for my problem (I can't change the
database concept; some data from DB1 is also needed in DB2). Maybe open the
connection to DB1 for ... some hours? Or does somebody have better ideas?
Thanks,
Tony
|||I assume you have two databases in the same SQL Server instance (which is what it sounds like from
your description). Check if the database DB1 has the database property autoclose turned on. If so,
turn it off.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
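A quick sketch of checking and changing the setting, using the database name from this thread:

-- 1 = autoclose is on, 0 = off
SELECT DATABASEPROPERTYEX('DB1', 'IsAutoClose')
-- Turn it off
ALTER DATABASE DB1 SET AUTO_CLOSE OFF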
"Toni Pohl" <atwork43@.hotmail.com> wrote in message
news:7C81F6B8-1F6E-4DC0-85F0-004E17590108@.microsoft.com...
> HI all,
> I´ve two databases: DB1 and DB2 and an ASP.NET application using the data.
> In DB2 I use a view which uses some joins against DB1 like:
> select * from MyTable left outer join DB1.dbo.users on ...
> This works, but I get a time problem: the view is very slow.
> The reason seems to be the connection to DB1.
> In SQL Management Studio I can reproduce the situation: when I´m in DB2 and try to "USE DB1" it
> takes something like >=2 seconds on my local machine.
> After making the "USE DB1" the view works great - very fast (SQL Server seemed to cache the
> connection).
> But the connection caching works only for some minutes - when "falling back" the view is slow
> again.
> So, I´m looking for a solution for my problem (I can´t change the database-concept, some data from
> DB1 is needed also in DB2). Maybe open the connection DB1 for ... some hours? Or somebody has
> better ideas?
> Thanks,
> Tony
>|||HI Tibor,
thanks for your reply.
Yes, Autoclose=ON is standard.
Sounds like a workaround (I'll try tomorrow and post the result; I assume
it will work).
Does Autoclose=OFF have any other effects...?
The SQL Server 2005 Help says:
This feature will be removed in a future version of Microsoft SQL Server.
Avoid using this feature in new development work, and plan to modify
applications that currently use this feature.
The AutoClose property exposes server behavior for databases not accessed by
a user.
Thanks, Tony
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> schrieb
im Newsbeitrag news:eGO8g%23O6HHA.3740@.TK2MSFTNGP02.phx.gbl...
>I assume you have two databases in the same SQL Server instance (which is
>what it sounds like from your description). Check if the database DB1 has
>the database property autoclose turned on. If so, turn it off.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
> "Toni Pohl" <atwork43@.hotmail.com> wrote in message
> news:7C81F6B8-1F6E-4DC0-85F0-004E17590108@.microsoft.com...
>> HI all,
>> I´ve two databases: DB1 and DB2 and an ASP.NET application using the
>> data.
>> In DB2 I use a view which uses some joins against DB1 like:
>> select * from MyTable left outer join DB1.dbo.users on ...
>> This works, but I get a time problem: the view is very slow.
>> The reason seems to be the connection to DB1.
>> In SQL Management Studio I can reproduce the situation: when I´m in DB2
>> and try to "USE DB1" it takes something like >=2 seconds on my local
>> machine.
>> After making the "USE DB1" the view works great - very fast (SQL Server
>> seemed to cache the connection).
>> But the connection caching works only for some minutes - when "falling
>> back" the view is slow again.
>> So, I´m looking for a solution for my problem (I can´t change the
>> database-concept, some data from DB1 is needed also in DB2). Maybe open
>> the connection DB1 for ... some hours? Or somebody has better ideas?
>> Thanks,
>> Tony
>|||> Yes, Autoclose=ON is standard.
Standard? In your shop or in SQL Server? I believe that some edition of SQL Server has this turned
on by default, which was a bad (IMO) choice. Anyhow, turn this off and see if it fixes your problem.
> Does Autoclose=OFF have any other effects...?
I hope you don't have other programs that try to do anything with the database files while no user
is connected? If you do, they will be affected.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
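A quick sketch of listing the setting for every database on the instance:

SELECT name, DATABASEPROPERTYEX(name, 'IsAutoClose') AS IsAutoClose
FROM master.dbo.sysdatabases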
"Toni Pohl" <atwork43@.hotmail.com> wrote in message
news:E59C38A0-EECA-4B32-92B3-9B12FFFE6FFD@.microsoft.com...
> HI Tibor,
> thanks for your reply.
> Yes, Autoclose=ON is standard.
> Sounds like a workaround (I´ll try tomorrow and post the result, I´ll assume it will work).
> Does Autoclose=OFF have any other effects...?
> Help of SQL2005 says:
> This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature
> in new development work, and plan to modify applications that currently use this feature.
> The AutoClose property exposes server behavior for databases not accessed by a user.
> Thanks, Tony
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> schrieb im Newsbeitrag
> news:eGO8g%23O6HHA.3740@.TK2MSFTNGP02.phx.gbl...
>>I assume you have two databases in the same SQL Server instance (which is what it sounds like from
>>your description). Check if the database DB1 has the database property autoclose turned on. If so,
>>turn it off.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://sqlblog.com/blogs/tibor_karaszi
>>
>> "Toni Pohl" <atwork43@.hotmail.com> wrote in message
>> news:7C81F6B8-1F6E-4DC0-85F0-004E17590108@.microsoft.com...
>> HI all,
>> I´ve two databases: DB1 and DB2 and an ASP.NET application using the data.
>> In DB2 I use a view which uses some joins against DB1 like:
>> select * from MyTable left outer join DB1.dbo.users on ...
>> This works, but I get a time problem: the view is very slow.
>> The reason seems to be the connection to DB1.
>> In SQL Management Studio I can reproduce the situation: when I´m in DB2 and try to "USE DB1" it
>> takes something like >=2 seconds on my local machine.
>> After making the "USE DB1" the view works great - very fast (SQL Server seemed to cache the
>> connection).
>> But the connection caching works only for some minutes - when "falling back" the view is slow
>> again.
>> So, I´m looking for a solution for my problem (I can´t change the database-concept, some data
>> from DB1 is needed also in DB2). Maybe open the connection DB1 for ... some hours? Or somebody
>> has better ideas?
>> Thanks,
>> Tony
>>
>|||HI Tibor,
yes, with Autoclose=OFF the app works perfectly!
But I don't really understand this part:
> I hope you don't have other programs that try to do anything with the
> database files while no user is connected? If you do, they will be
> affected.
Well, in my case there are only _some_ ASP.NET web apps (.NET 1.1 and 2.0)
which all use database DB1 - and some other databases' stored procs/views.
Will they be affected?
Or is it OK to simply use Autoclose=OFF (and I don't have to care about any
side effects)?
Thanks again,
Tony
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> schrieb
im Newsbeitrag news:%23XzAX3T6HHA.5096@.TK2MSFTNGP04.phx.gbl...
>> Yes, Autoclose=ON is standard.
> Standard? In your shop or in SQL Server? I believe that some edition of
> SQL Server has this turned on by default, which was a bad (IMO) choice.
> Anyhow, turn this off and see if it fixes your problem.
>> Does Autoclose=OFF have any other effects...?
> I hope you don't have other programs that tries to do anything with the
> database files while no user is connected? If you do, then they will be
> affected.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
> "Toni Pohl" <atwork43@.hotmail.com> wrote in message
> news:E59C38A0-EECA-4B32-92B3-9B12FFFE6FFD@.microsoft.com...
>> HI Tibor,
>> thanks for your reply.
>> Yes, Autoclose=ON is standard.
>> Sounds like a workaround (I´ll try tomorrow and post the result, I´ll
>> assume it will work).
>> Does Autoclose=OFF have any other effects...?
>> Help of SQL2005 says:
>> This feature will be removed in a future version of Microsoft SQL Server.
>> Avoid using this feature in new development work, and plan to modify
>> applications that currently use this feature.
>> The AutoClose property exposes server behavior for databases not accessed
>> by a user.
>> Thanks, Tony
>>
>> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com>
>> schrieb im Newsbeitrag news:eGO8g%23O6HHA.3740@.TK2MSFTNGP02.phx.gbl...
>>I assume you have two databases in the same SQL Server instance (which is
>>what it sounds like from your description). Check if the database DB1 has
>>the database property autoclose turned on. If so, turn it off.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://sqlblog.com/blogs/tibor_karaszi
>>
>> "Toni Pohl" <atwork43@.hotmail.com> wrote in message
>> news:7C81F6B8-1F6E-4DC0-85F0-004E17590108@.microsoft.com...
>> HI all,
>> I´ve two databases: DB1 and DB2 and an ASP.NET application using the
>> data.
>> In DB2 I use a view which uses some joins against DB1 like:
>> select * from MyTable left outer join DB1.dbo.users on ...
>> This works, but I get a time problem: the view is very slow.
>> The reason seems to be the connection to DB1.
>> In SQL Management Studio I can reproduce the situation: when I´m in DB2
>> and try to "USE DB1" it takes something like >=2 seconds on my local
>> machine.
>> After making the "USE DB1" the view works great - very fast (SQL Server
>> seemed to cache the connection).
>> But the connection caching works only for some minutes - when "falling
>> back" the view is slow again.
>> So, I´m looking for a solution for my problem (I can´t change the
>> database-concept, some data from DB1 is needed also in DB2). Maybe open
>> the connection DB1 for ... some hours? Or somebody has better ideas?
>> Thanks,
>> Tony
>>
>|||The application uses the database *through* SQL Server. I.e., they don't access the database files
directly.
I recommend that you have autoclose set to off.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Toni Pohl" <atwork43@.hotmail.com> wrote in message
news:44904194-2251-46E3-995E-9616DD6FD563@.microsoft.com...
> HI Tibor,
> yes, with Autoclose=OFF the App works perfect!
> But I only don´t really understand
>> I hope you don't have other programs that tries to do anything with the database files while no
>> user is connected? If you do, then they will be affected.
> Well, in my case there are only _some_ asp.net webapps (.net 1.1 and 2.0) which all use database
> DB1 - and some other database-stored procs/views. Will they be affected?
> Or is it ok to simply use Autoclose=OFF (and I don´t have to care about any side-effects)?
> Thanks again,
> Tony
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> schrieb im Newsbeitrag
> news:%23XzAX3T6HHA.5096@.TK2MSFTNGP04.phx.gbl...
>> Yes, Autoclose=ON is standard.
>> Standard? In your shop or in SQL Server? I believe that some edition of SQL Server has this
>> turned on by default, which was a bad (IMO) choice. Anyhow, turn this off and see if it fixes
>> your problem.
>> Does Autoclose=OFF have any other effects...?
>> I hope you don't have other programs that tries to do anything with the database files while no
>> user is connected? If you do, then they will be affected.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://sqlblog.com/blogs/tibor_karaszi
>>
>> "Toni Pohl" <atwork43@.hotmail.com> wrote in message
>> news:E59C38A0-EECA-4B32-92B3-9B12FFFE6FFD@.microsoft.com...
>> HI Tibor,
>> thanks for your reply.
>> Yes, Autoclose=ON is standard.
>> Sounds like a workaround (I´ll try tomorrow and post the result, I´ll assume it will work).
>> Does Autoclose=OFF have any other effects...?
>> Help of SQL2005 says:
>> This feature will be removed in a future version of Microsoft SQL Server. Avoid using this
>> feature in new development work, and plan to modify applications that currently use this
>> feature.
>> The AutoClose property exposes server behavior for databases not accessed by a user.
>> Thanks, Tony
>>
>> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> schrieb im Newsbeitrag
>> news:eGO8g%23O6HHA.3740@.TK2MSFTNGP02.phx.gbl...
>>I assume you have two databases in the same SQL Server instance (which is what it sounds like
>>from your description). Check if the database DB1 has the database property autoclose turned on.
>>If so, turn it off.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://sqlblog.com/blogs/tibor_karaszi
>>
>> "Toni Pohl" <atwork43@.hotmail.com> wrote in message
>> news:7C81F6B8-1F6E-4DC0-85F0-004E17590108@.microsoft.com...
>> HI all,
>> I´ve two databases: DB1 and DB2 and an ASP.NET application using the data.
>> In DB2 I use a view which uses some joins against DB1 like:
>> select * from MyTable left outer join DB1.dbo.users on ...
>> This works, but I get a time problem: the view is very slow.
>> The reason seems to be the connection to DB1.
>> In SQL Management Studio I can reproduce the situation: when I´m in DB2 and try to "USE DB1"
>> it takes something like >=2 seconds on my local machine.
>> After making the "USE DB1" the view works great - very fast (SQL Server seemed to cache the
>> connection).
>> But the connection caching works only for some minutes - when "falling back" the view is slow
>> again.
>> So, I´m looking for a solution for my problem (I can´t change the database-concept, some data
>> from DB1 is needed also in DB2). Maybe open the connection DB1 for ... some hours? Or somebody
>> has better ideas?
>> Thanks,
>> Tony
>>
>>
>|||Hi Tibor,
yes, I also did a little research on AutoClose - it should always be off on
servers and production environments!
Thanks again for your hint! This was my solution!
Tony
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> schrieb
im Newsbeitrag news:ePINj6U6HHA.5164@.TK2MSFTNGP05.phx.gbl...
> The application uses the database *through* SQL Server. I.e., they don't
> access the database files directly.
>
> I recommend that you have autoclose set to off.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
> "Toni Pohl" <atwork43@.hotmail.com> wrote in message
> news:44904194-2251-46E3-995E-9616DD6FD563@.microsoft.com...
>> HI Tibor,
>> yes, with Autoclose=OFF the App works perfect!
>> But I only don´t really understand
>> I hope you don't have other programs that tries to do anything with the
>> database files while no user is connected? If you do, then they will be
>> affected.
>> Well, in my case there are only _some_ asp.net webapps (.net 1.1 and 2.0)
>> which all use database DB1 - and some other database-stored procs/views.
>> Will they be affected?
>> Or is it ok to simply use Autoclose=OFF (and I don´t have to care about
>> any side-effects)?
>> Thanks again,
>> Tony
>>
>> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com>
>> schrieb im Newsbeitrag news:%23XzAX3T6HHA.5096@.TK2MSFTNGP04.phx.gbl...
>> Yes, Autoclose=ON is standard.
>> Standard? In your shop or in SQL Server? I believe that some edition of
>> SQL Server has this turned on by default, which was a bad (IMO) choice.
>> Anyhow, turn this off and see if it fixes your problem.
>> Does Autoclose=OFF have any other effects...?
>> I hope you don't have other programs that tries to do anything with the
>> database files while no user is connected? If you do, then they will be
>> affected.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://sqlblog.com/blogs/tibor_karaszi
>>
>> "Toni Pohl" <atwork43@.hotmail.com> wrote in message
>> news:E59C38A0-EECA-4B32-92B3-9B12FFFE6FFD@.microsoft.com...
>> HI Tibor,
>> thanks for your reply.
>> Yes, Autoclose=ON is standard.
>> Sounds like a workaround (I´ll try tomorrow and post the result, I´ll
>> assume it will work).
>> Does Autoclose=OFF have any other effects...?
>> Help of SQL2005 says:
>> This feature will be removed in a future version of Microsoft SQL
>> Server. Avoid using this feature in new development work, and plan to
>> modify applications that currently use this feature.
>> The AutoClose property exposes server behavior for databases not
>> accessed by a user.
>> Thanks, Tony
>>
>> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com>
>> schrieb im Newsbeitrag news:eGO8g%23O6HHA.3740@.TK2MSFTNGP02.phx.gbl...
>>I assume you have two databases in the same SQL Server instance (which
>>is what it sounds like from your description). Check if the database
>>DB1 has the database property autoclose turned on. If so, turn it off.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://sqlblog.com/blogs/tibor_karaszi
>>
>> "Toni Pohl" <atwork43@.hotmail.com> wrote in message
>> news:7C81F6B8-1F6E-4DC0-85F0-004E17590108@.microsoft.com...
>> HI all,
>> I´ve two databases: DB1 and DB2 and an ASP.NET application using the
>> data.
>> In DB2 I use a view which uses some joins against DB1 like:
>> select * from MyTable left outer join DB1.dbo.users on ...
>> This works, but I get a time problem: the view is very slow.
>> The reason seems to be the connection to DB1.
>> In SQL Management Studio I can reproduce the situation: when I´m in
>> DB2 and try to "USE DB1" it takes something like >=2 seconds on my
>> local machine.
>> After making the "USE DB1" the view works great - very fast (SQL
>> Server seemed to cache the connection).
>> But the connection caching works only for some minutes - when
>> "falling back" the view is slow again.
>> So, I´m looking for a solution for my problem (I can´t change the
>> database-concept, some data from DB1 is needed also in DB2). Maybe
>> open the connection DB1 for ... some hours? Or somebody has better
>> ideas?
>> Thanks,
>> Tony
>>
>>
>

Increase performance of SQL Server by using RAM

Hi,
I have MSSQL-Server 2000 installed on a PIII 1.2 GHz server running with 256 MB of RAM.
The server is starting to run slower and slower at the peak times.
Whenever I check Task Manager's Performance tab, the processor is always busy with a red indicator, while the RAM is calm, running under 50%.
Is there any configuration in MSSQL Server I can change so the RAM takes some of the load off the processor?

I will really appreciate your help.
Thanks
|||SQL Server does all its work in RAM, so if you add more RAM, performance should (technically) increase. But first check how much RAM is available to SQL Server, and monitor the hard disk activity.
|||Could you please help me: how do I check the amount of RAM reserved for MS SQL Server?

Thanks
|||Start SQL Enterprise Manager:
1. Connect to the appropriate SQL Server.
2. Right-click the name of the server in the console tree and select Properties.
3. Choose the Memory tab.

The memory must be configured as dynamic, with a minimum of 0 and a maximum of the total amount of RAM available.

Did you determine that the process sqlservr.exe is consuming the bulk of the RAM and of the CPU time? If not, then another process is responsible for the slow response time!
|||If you have sufficient memory allocated to SQL Server (Microsoft recommends not restricting SQL Server to a particular amount of memory; let it acquire memory on its own, hence it is preferred to dedicate all available memory to SQL Server), and this is set properly, go to Performance Monitor and, under the System object, watch '% Processor Time', which should be well below 80%. At the same time, also watch 'Processor Queue Length', which should be at or below 2.

If the above conditions are not satisfied, you have a processor bottleneck and must upgrade your processor.
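For reference, a sketch of inspecting and setting the same memory configuration from T-SQL; the values shown are the dynamic defaults (minimum 0, maximum effectively unlimited), not a recommendation for this particular box:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Dynamic memory: let SQL Server grow and shrink its usage on its own
EXEC sp_configure 'min server memory (MB)', 0
EXEC sp_configure 'max server memory (MB)', 2147483647
RECONFIGURE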

Monday, March 12, 2012

Inconsistent SP performance on different Servers

One of my developers recently installed a backup of the production database onto his test site. His test server has the same configuration as the production server.

One of the Stored Procedures that is called takes 1:45 to run on his machine, but only 2 seconds on the production server. This same SP takes only 2 seconds on my development database.

The SP is called iteratively, up to 10 times, to run against 10 separate fields. The value of a parameter called @CriteriaClassID determines which portion of the SP runs.

The significant difference in processing time is in itself baffling (since the servers have the same specs/configuration, as far as I can tell, and the data is identical, since he has a backup of the most recent production data).

But more baffling: if, in his data, I switch the values from field 1 to field 2, and vice versa, his results take 2 seconds (switching the values from field 1 to field 2 switches the value in @CriteriaClassID, which is passed through to this SP).

It's exactly the same SP; the only difference is that field 1 is processed first, field 2 second, field 3 third etc. On the production site and my development site, it doesn't make a difference in the order they are processed. On his machine it does.

Any ideas? I thought perhaps his indexes were corrupted in the rebuild, but we ran a SQL Server maintenance schedule to clean them up, and no improvement.

This is the SP, if it is of any help:

CREATE procedure [dbo].[st_pull_model_data] @ModelID as integer, @CriteriaID as integer
as

declare @ClientID as integer, @CriteriaClassId as char(1)

/* Procedure to pull data from org_model_data and postalcode_model_data for modeling and media analysis */
/* Need to have table #temp_data created outside of SP with fields org_id and zip_code */
/* This procedure is used by SP st_model_data */

If @CriteriaID is not null
begin

    set @CriteriaClassId = (Select model_criteria_type from model_criteria where model_criteria_id = @CriteriaID)
    if @CriteriaClassId = 'G' -- changes client_id from specific to general, if General is required.
    begin
        set @ClientID = 0
    end
    else
    begin
        set @ClientID = (Select client_id from model where model_id = @ModelID)
    end

    If @CriteriaClassId in ('G','P')
    Begin
        update #temp_data
        set data1 = postal_criteria_value
        from #temp_data t
        left outer join
            (select postalcode, postal_criteria_value
             from postalcode_model_data pmd
             join model_org_trade_area mota on mota.zip_code = pmd.postalcode
             join model_org mo on mo.model_org_id = mota.model_org_id
             where model_criteria_id = @CriteriaID
             and client_id = @ClientID
             and mo.model_id = @ModelID) as PMD
        on PMD.postalcode = t.zip_code
    end
    else
    Begin
        update #temp_data
        set data1 = org_criteria_value
        from #temp_data t
        left outer join
            (select distinct postalcode, org_criteria_value, omd.org_id
             from org_model_data omd
             join org o on o.org_id = omd.org_id
             join model_org_trade_area mota on mota.zip_code = omd.postalcode
             join model_org mo on mo.model_org_id = mota.model_org_id and mo.org_id = o.org_id
             where model_criteria_id = @CriteriaID and o.client_id = @ClientID and mo.model_id = @ModelID) as OMD
        on OMD.postalcode = t.zip_code and omd.org_id = t.org_id
    end
end
|||Just a thought on something to try...

Declare two local variables that are similar to the two passed-in parameters, copy the parameters to these local variables at the top of your sproc, and then use the local variables only within the code.

If this has the effect of solving your issue, you have a "parameter sniffing" issue.
|||One other thought: is his tempdb on the same physical disk as the database data files? It won't explain the whole performance problem, but it will explain a part.
|||Chopin,

Thanks a ton. The inclusion of the two local variables (copying over the SP variables) solved the problem for performance.

Another lesson learned.
|||In that case, does that mean backup/restore does not restore the query plans for the stored procs in the database? Any idea?
|||No ... a backup does not save the procedure cache.
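A minimal sketch of the local-variable workaround described above, applied to the opening of the procedure from this thread (only the top changes; the rest of the body would then reference the local copies):

CREATE procedure [dbo].[st_pull_model_data] @ModelID as integer, @CriteriaID as integer
as
declare @LocalModelID integer, @LocalCriteriaID integer
-- Copy the parameters so the optimizer compiles against unknown values
-- instead of sniffing the first caller's values
set @LocalModelID = @ModelID
set @LocalCriteriaID = @CriteriaID
-- ... the body then uses @LocalModelID and @LocalCriteriaID only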

Inconsistent Reads and performance problems

I have a fairly complex query which takes about 3-4 seconds. Since it
uses quite a few tables and views which I won't be able to post, I will
just post my findings.
This SPROC takes a long time to complete, often prompting me to kill the
process. During such a poor showing, the profiler records an unusually high
number of reads. The problem goes away after DBCC DBREINDEX. After
examining, I found this SPROC uses multiple tables/views which have DBCC
SHOWCONTIG output as follows:
IndexName Level Pages Rows Extents ExtentSwitches AverageFreeBytes AveragePageDensity ScanDensity BestCount ActualCount LogicalFragmentation ExtentFragmentation MinimumRecordSize MaximumRecordSize AverageRecordSize ForwardedRecords
PK_RegHousehold 0 133 6840 17 16 2405.344971 70.28237152 100 17 17 0 0 100 132 108.651 0
PK_RegPeople 0 481 19648 61 60 2351.11792 70.95233917 100 61 61 0 0 127 177 138.639 0
IX_RegPeopleToAddress 0 190 18364 24 23 2393.199951 70.43241882 100 24 24 0 0 57 65 57.003 0
PK_RegPeopleToPhone 0 484 46855 61 60 2384.335938 70.54193115 100 61 61 0 0 57 57 57 0
IX_RegPeopleToStudentFamily 0 850 49155 107 106 2408.256104 70.24640656 100 107 107 0 1.869158864 96 247 96.353 0
PK_RegPhones 0 249 25016 32 31 2409.035889 70.23677063 100 32 32 0 6.25 51 72 54.605 0
Should I try DROPPING and RECREATING all indexes on these tables with the
default fill factor?
As you'll notice, the number of rows is not that significant.
The original fill factor on all tables is 70%. Any recommendations for
optimization would be appreciated.
Thanks
MasterofNone
|||Anybody?
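A sketch of the rebuild being asked about (table names inferred from the index names above; a fill factor of 0 keeps the value the index was created with):

-- Rebuild every index on one table
DBCC DBREINDEX ('RegPeople', '', 0)
-- Or rebuild a single index with an explicit fill factor
DBCC DBREINDEX ('RegPeople', 'PK_RegPeople', 90)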
|||Forget the index stats for now. Start by analyzing the execution plan
for the query, determine where the bottleneck is (table or index
scans?), and focus on eliminating that.
Tracy McKibben
MCDBA
http://www.realsqlguy.com
|||On 30.08.2006 15:06, Tracy McKibben wrote:
> Forget the index stats for now. Start by analyzing the execution plan
> for the query, determine where the bottleneck is (table or index
> scans?), and focus on eliminating that.
I second that. You might see an improvement just after DBCC DBREINDEX
simply because those index pages are now in memory. But this won't help
you in the real application situation.
Kind regards
robert
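A sketch of gathering the evidence Tracy suggests, in Query Analyzer (the procedure name is hypothetical); STATISTICS PROFILE returns the plan actually used for each statement:

SET STATISTICS IO ON
SET STATISTICS TIME ON
SET STATISTICS PROFILE ON
EXEC dbo.my_slow_proc   -- hypothetical: run the problem SPROC here
SET STATISTICS PROFILE OFF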


Inconsistent Query Performance

Hello!
We are having a problem with one of the queries that an application runs.
This query uses some temp tables (subbed as table variables with no
performance change) and heavily uses some user-defined functions. When run
on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
2000 Developer Edition), it returns within a second. When run against an
enterprise-class server with 8 GB of RAM and at least four processors, maybe
eight (can't tell if it's hyperthreaded or not), it takes 30 seconds. There
is nothing else going on with the large server. This will be the only
activity on the system. I disabled AWE and turned off the /PAE switch with no
change. The timings are repeatable, and the only improvement on the
enterprise-class server is shortly after a reboot; even then the results
take 2-4 seconds vs. 1 second on the workstation. This performance quickly
deteriorates on the server. I have tried this with the database both on SAN
and on local disk on the server, and there is no measurable difference.
Any ideas on what I should check next?
Thanks
|||Steve,

> Any ideas on what I should check next?
Execution Plans
AMB
"Steve H" wrote:

> Hello!
> We are having a problem with one of the queries that an application runs.
> This query uses some temp tables (subbed as table variables with no
> performance change) and heavily uses some user defined functions. When ru
n
> on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
> 2000 Developer Editions), it returns within a second. When run against a
> enterprise class server with 8 GB of RAM and at least four processors, may
be
> eight (can't tell if its hyperthreaded or not) it takes 30 seconds. Ther
e
> is nothing else going on with the large server. This will be the only
> activity on the system.. I disabled awe and turned off /PAE switch with n
o
> change. The timings are repeatable and the only improvement on the
> enterprise class server is shortly after a reboot and even then the result
s
> take 2-4 seconds vs, 1 second on the workstation. This performance quickl
y
> deteriorates on the server. I have tried this with the database both on S
AN
> or on local disk on the server and there is no measurable difference.
> Any ideas on what I should check next?
> Thanks|||First step is to check the execution plans. Same?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Steve H" <SteveH@.discussions.microsoft.com> wrote in message
news:2E38BDA4-E051-484B-BFF0-0BAEB1A9729E@.microsoft.com...
> Hello!
> We are having a problem with one of the queries that an application runs.
> This query uses some temp tables (subbed as table variables with no
> performance change) and heavily uses some user defined functions. When ru
n
> on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
> 2000 Developer Editions), it returns within a second. When run against a
> enterprise class server with 8 GB of RAM and at least four processors, may
be
> eight (can't tell if its hyperthreaded or not) it takes 30 seconds. Ther
e
> is nothing else going on with the large server. This will be the only
> activity on the system.. I disabled awe and turned off /PAE switch with n
o
> change. The timings are repeatable and the only improvement on the
> enterprise class server is shortly after a reboot and even then the result
s
> take 2-4 seconds vs, 1 second on the workstation. This performance quickl
y
> deteriorates on the server. I have tried this with the database both on S
AN
> or on local disk on the server and there is no measurable difference.
> Any ideas on what I should check next?
> Thanks|||Yes, execution plans are very similar. Not exactly identical - but all
components are very similar in costs.
"Tibor Karaszi" wrote:

> First step is to check the execution plans. Same?
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
> news:2E38BDA4-E051-484B-BFF0-0BAEB1A9729E@.microsoft.com...
>
>|||Similar <> same :-)
What does STATISTICS I/O say? I usually complement execution plans with statistics I/O. Also check
for parallelism operators in the plan on the large machine. Sometimes parallelism is done with an
inefficient result, and it can be turned off using the MAXDOP hint (set to 1).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
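For illustration, a minimal sketch of the hint (table and column names hypothetical); the OPTION clause goes at the end of the statement:

SELECT col1, col2
FROM dbo.BigTable
WHERE col1 = 42
OPTION (MAXDOP 1)   -- suppress parallelism for this query only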
"Steve H" <SteveH@.discussions.microsoft.com> wrote in message
news:348166BF-09E2-46F7-A42D-C16D337A8C4F@.microsoft.com...[vbcol=seagreen]
> Yes, execution plans are very similar. Not exactly identical - but all
> components are very similar in costs.
> "Tibor Karaszi" wrote:
>|||Stats IO showed very small differences in number of logical reads. Physical
reads were 0 all around - well cached, I guess. I had tried to set the server
to have SQL Server use only 1 processor, which should have the same effect as
the MAXDOP setting = 1 (there is some looping and several queries involved
in this proc). Funny thing is that this behavior is repeatable on other
large servers, and the quick performance occurs on other workstations.
"Tibor Karaszi" wrote:

> Similar <> same :-)
> What does STATISTICS I/O say. I usually complement execution plans with st
atistics I/O. Also check
> for parallelism operators in the plan for the large machine. Sometimes par
allelism is done with a
> inefficient result, and can be turned off using the MAXDOP hint (set to 1)
.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
> news:348166BF-09E2-46F7-A42D-C16D337A8C4F@.microsoft.com...
>
>|||Hmm, I'm out of ideas, I'm afraid. In general, if the execution plan is the
same (or cost is very close), you tend to get the same performance (or higher on the high-end
machine). You could try posting to the group -relationalserver.performance and see if you get some
more attention there. Or perhaps it's even time to open a ticket with MS?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Steve H" <SteveH@.discussions.microsoft.com> wrote in message
news:A24BB7B3-D5EC-40E9-8739-3DB0EB4E9A2C@.microsoft.com...[vbcol=seagreen]
> Stats IO showed very small differences in number of logical reads. Physic
al
> reads were 0 all around - well cached I guess. I had tried to set the ser
ver
> to have SQL Server use only 1 processor which should have the same affect
as
> the MAXDOP setting = 1 (there is some looping and several queries involve
d
> in this proc). Funny thing is that this behavior is repeatable on other
> large servers and the quick performance occurs on other workstations.
> "Tibor Karaszi" wrote:
>|||Thank you for your help.
"Tibor Karaszi" wrote:

> Hmm, I'm out of ideas, I'm afraid. In general, if the execution plan is th
e same (or cost is very
> close), you tend to get the same performance (or a higher on the high-end
machine). You could try
> posting to the group -relationalserver.performance and see if you get some
more attention there. Or
> perhaps even time to open a ticket with MS?
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
> news:A24BB7B3-D5EC-40E9-8739-3DB0EB4E9A2C@.microsoft.com...
>
>|||On Wed, 17 May 2006 13:01:02 -0700, Steve H
<SteveH@.discussions.microsoft.com> wrote:
>We are having a problem with one of the queries that an application runs.
>This query uses some temp tables (subbed as table variables with no
>performance change) and heavily uses some user defined functions. When run
>on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
>2000 Developer Editions), it returns within a second. When run against a
>enterprise class server with 8 GB of RAM and at least four processors, mayb
e
>eight (can't tell if its hyperthreaded or not) it takes 30 seconds. There
>is nothing else going on with the large server. This will be the only
>activity on the system.. I disabled awe and turned off /PAE switch with no
>change. The timings are repeatable and the only improvement on the
>enterprise class server is shortly after a reboot and even then the results
>take 2-4 seconds vs, 1 second on the workstation. This performance quickly
>deteriorates on the server. I have tried this with the database both on SA
N
>or on local disk on the server and there is no measurable difference.
>Any ideas on what I should check next?
Dunno, but I think I've got it too, or a close relative.
SP uses some permanent temp tables - tables that I truncate at start
of routine, then fill up with about a million records, perhaps
confusing the optimizer somewhat. Maybe I need to trigger more
recompiles or something? Also uses a simple UDF in the (newest
version) select clause only.
On same machine:
SP runs in ten minutes when it feels like it.
SP runs in twenty minutes other times.
SP runs in about 90 minutes other times.
Data is exactly the same.
Of course, production is seeing the longer times.
Not completely controlled for contending loads, but such as there is
does not explain the 20 to 90 jump in any case. It seems to go CPU
bound for extended periods. Stats IO doesn't show anything outrageous
except for the elapsed time - the time spent in the UDF seems shielded
from stats time, btw, thanks MSFT.
Unfortunately we're running this on SQL2K sp2, kind of retro, and
don't ask about the hardware!
But it's very frustrating to have this kind of poor performance and
variability for no apparent reason.
Josh

Inconsistent Query Performance

Hello!
We are having a problem with one of the queries that an application runs.
This query uses some temp tables (subbed as table variables with no
performance change) and heavily uses some user defined functions. When run
on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
2000 Developer Editions), it returns within a second. When run against a
enterprise class server with 8 GB of RAM and at least four processors, maybe
eight (can't tell if its hyperthreaded or not) it takes 30 seconds. There
is nothing else going on with the large server. This will be the only
activity on the system.. I disabled awe and turned off /PAE switch with no
change. The timings are repeatable and the only improvement on the
enterprise class server is shortly after a reboot and even then the results
take 2-4 seconds vs, 1 second on the workstation. This performance quickly
deteriorates on the server. I have tried this with the database both on SAN
or on local disk on the server and there is no measurable difference.
Any ideas on what I should check next?
ThanksSteve,
> Any ideas on what I should check next?
Execution Plans
AMB
"Steve H" wrote:
> Hello!
> We are having a problem with one of the queries that an application runs.
> This query uses some temp tables (subbed as table variables with no
> performance change) and heavily uses some user defined functions. When run
> on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
> 2000 Developer Editions), it returns within a second. When run against a
> enterprise class server with 8 GB of RAM and at least four processors, maybe
> eight (can't tell if its hyperthreaded or not) it takes 30 seconds. There
> is nothing else going on with the large server. This will be the only
> activity on the system.. I disabled awe and turned off /PAE switch with no
> change. The timings are repeatable and the only improvement on the
> enterprise class server is shortly after a reboot and even then the results
> take 2-4 seconds vs, 1 second on the workstation. This performance quickly
> deteriorates on the server. I have tried this with the database both on SAN
> or on local disk on the server and there is no measurable difference.
> Any ideas on what I should check next?
> Thanks|||First step is to check the execution plans. Same?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Steve H" <SteveH@.discussions.microsoft.com> wrote in message
news:2E38BDA4-E051-484B-BFF0-0BAEB1A9729E@.microsoft.com...
> Hello!
> We are having a problem with one of the queries that an application runs.
> This query uses some temp tables (subbed as table variables with no
> performance change) and heavily uses some user defined functions. When run
> on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
> 2000 Developer Editions), it returns within a second. When run against a
> enterprise class server with 8 GB of RAM and at least four processors, maybe
> eight (can't tell if its hyperthreaded or not) it takes 30 seconds. There
> is nothing else going on with the large server. This will be the only
> activity on the system.. I disabled awe and turned off /PAE switch with no
> change. The timings are repeatable and the only improvement on the
> enterprise class server is shortly after a reboot and even then the results
> take 2-4 seconds vs, 1 second on the workstation. This performance quickly
> deteriorates on the server. I have tried this with the database both on SAN
> or on local disk on the server and there is no measurable difference.
> Any ideas on what I should check next?
> Thanks|||Yes, execution plans are very similar. Not exactly identical - but all
components are very similar in costs.
"Tibor Karaszi" wrote:
> First step is to check the execution plans. Same?
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
> news:2E38BDA4-E051-484B-BFF0-0BAEB1A9729E@.microsoft.com...
> > Hello!
> > We are having a problem with one of the queries that an application runs.
> > This query uses some temp tables (subbed as table variables with no
> > performance change) and heavily uses some user defined functions. When run
> > on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
> > 2000 Developer Editions), it returns within a second. When run against a
> > enterprise class server with 8 GB of RAM and at least four processors, maybe
> > eight (can't tell if its hyperthreaded or not) it takes 30 seconds. There
> > is nothing else going on with the large server. This will be the only
> > activity on the system.. I disabled awe and turned off /PAE switch with no
> > change. The timings are repeatable and the only improvement on the
> > enterprise class server is shortly after a reboot and even then the results
> > take 2-4 seconds vs, 1 second on the workstation. This performance quickly
> > deteriorates on the server. I have tried this with the database both on SAN
> > or on local disk on the server and there is no measurable difference.
> >
> > Any ideas on what I should check next?
> >
> > Thanks
>
>|||Similar <> same :-)
What does STATISTICS IO say? I usually complement execution plans with STATISTICS IO. Also check
for parallelism operators in the plan on the large machine. Sometimes parallelism produces an
inefficient result, and it can be turned off using the MAXDOP hint (set to 1).
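A minimal sketch of both checks (the table and query here are placeholders, not from your procedure):

SET STATISTICS IO ON;

-- Placeholder query: compare the logical and physical read counts this
-- reports on the workstation vs. the big server.
SELECT COUNT(*)
FROM dbo.SomeTable
OPTION (MAXDOP 1);  -- suppress parallelism for this statement only

SET STATISTICS IO OFF;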
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Steve H" <SteveH@.discussions.microsoft.com> wrote in message
news:348166BF-09E2-46F7-A42D-C16D337A8C4F@.microsoft.com...
> Yes, execution plans are very similar. Not exactly identical - but all
> components are very similar in costs.
> "Tibor Karaszi" wrote:
>> First step is to check the execution plans. Same?
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://www.solidqualitylearning.com/
>>
>> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
>> news:2E38BDA4-E051-484B-BFF0-0BAEB1A9729E@.microsoft.com...
>> > Hello!
>> > We are having a problem with one of the queries that an application runs.
>> > This query uses some temp tables (subbed as table variables with no
>> > performance change) and heavily uses some user defined functions. When run
>> > on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
>> > 2000 Developer Editions), it returns within a second. When run against a
>> > enterprise class server with 8 GB of RAM and at least four processors, maybe
>> > eight (can't tell if its hyperthreaded or not) it takes 30 seconds. There
>> > is nothing else going on with the large server. This will be the only
>> > activity on the system.. I disabled awe and turned off /PAE switch with no
>> > change. The timings are repeatable and the only improvement on the
>> > enterprise class server is shortly after a reboot and even then the results
>> > take 2-4 seconds vs, 1 second on the workstation. This performance quickly
>> > deteriorates on the server. I have tried this with the database both on SAN
>> > or on local disk on the server and there is no measurable difference.
>> >
>> > Any ideas on what I should check next?
>> >
>> > Thanks
>>|||Stats IO showed very small differences in number of logical reads. Physical
reads were 0 all around - well cached, I guess. I had tried setting the server
to have SQL Server use only one processor, which should have the same effect as
the MAXDOP hint set to 1 (there is some looping and several queries involved
in this proc). The funny thing is that this behavior is repeatable on other
large servers, and the quick performance occurs on other workstations.
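For reference, a server-wide way to cap parallelism (a sketch; this option affects every query on the instance, unlike the per-query MAXDOP hint):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 1 = use a single CPU for parallel-eligible queries; 0 = use all CPUs
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;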
"Tibor Karaszi" wrote:
> Similar <> same :-)
> What does STATISTICS I/O say. I usually complement execution plans with statistics I/O. Also check
> for parallelism operators in the plan for the large machine. Sometimes parallelism is done with a
> inefficient result, and can be turned off using the MAXDOP hint (set to 1).
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
> news:348166BF-09E2-46F7-A42D-C16D337A8C4F@.microsoft.com...
> > Yes, execution plans are very similar. Not exactly identical - but all
> > components are very similar in costs.
> >
> > "Tibor Karaszi" wrote:
> >
> >> First step is to check the execution plans. Same?
> >>
> >> --
> >> Tibor Karaszi, SQL Server MVP
> >> http://www.karaszi.com/sqlserver/default.asp
> >> http://www.solidqualitylearning.com/
> >>
> >>
> >> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
> >> news:2E38BDA4-E051-484B-BFF0-0BAEB1A9729E@.microsoft.com...
> >> > Hello!
> >> > We are having a problem with one of the queries that an application runs.
> >> > This query uses some temp tables (subbed as table variables with no
> >> > performance change) and heavily uses some user defined functions. When run
> >> > on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
> >> > 2000 Developer Editions), it returns within a second. When run against a
> >> > enterprise class server with 8 GB of RAM and at least four processors, maybe
> >> > eight (can't tell if its hyperthreaded or not) it takes 30 seconds. There
> >> > is nothing else going on with the large server. This will be the only
> >> > activity on the system.. I disabled awe and turned off /PAE switch with no
> >> > change. The timings are repeatable and the only improvement on the
> >> > enterprise class server is shortly after a reboot and even then the results
> >> > take 2-4 seconds vs, 1 second on the workstation. This performance quickly
> >> > deteriorates on the server. I have tried this with the database both on SAN
> >> > or on local disk on the server and there is no measurable difference.
> >> >
> >> > Any ideas on what I should check next?
> >> >
> >> > Thanks
> >>
> >>
> >>
>
>|||Hmm, I'm out of ideas, I'm afraid. In general, if the execution plan is the same (or cost is very
close), you tend to get the same performance (or higher on the high-end machine). You could try
posting to the group -relationalserver.performance and see if you get more attention there. Or
perhaps it is even time to open a ticket with MS?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Steve H" <SteveH@.discussions.microsoft.com> wrote in message
news:A24BB7B3-D5EC-40E9-8739-3DB0EB4E9A2C@.microsoft.com...
> Stats IO showed very small differences in number of logical reads. Physical
> reads were 0 all around - well cached I guess. I had tried to set the server
> to have SQL Server use only 1 processor which should have the same affect as
> the MAXDOP setting = 1 (there is some looping and several queries involved
> in this proc). Funny thing is that this behavior is repeatable on other
> large servers and the quick performance occurs on other workstations.
> "Tibor Karaszi" wrote:
>> Similar <> same :-)
>> What does STATISTICS I/O say. I usually complement execution plans with statistics I/O. Also
>> check
>> for parallelism operators in the plan for the large machine. Sometimes parallelism is done with a
>> inefficient result, and can be turned off using the MAXDOP hint (set to 1).
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://www.solidqualitylearning.com/
>>
>> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
>> news:348166BF-09E2-46F7-A42D-C16D337A8C4F@.microsoft.com...
>> > Yes, execution plans are very similar. Not exactly identical - but all
>> > components are very similar in costs.
>> >
>> > "Tibor Karaszi" wrote:
>> >
>> >> First step is to check the execution plans. Same?
>> >>
>> >> --
>> >> Tibor Karaszi, SQL Server MVP
>> >> http://www.karaszi.com/sqlserver/default.asp
>> >> http://www.solidqualitylearning.com/
>> >>
>> >>
>> >> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
>> >> news:2E38BDA4-E051-484B-BFF0-0BAEB1A9729E@.microsoft.com...
>> >> > Hello!
>> >> > We are having a problem with one of the queries that an application runs.
>> >> > This query uses some temp tables (subbed as table variables with no
>> >> > performance change) and heavily uses some user defined functions. When run
>> >> > on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
>> >> > 2000 Developer Editions), it returns within a second. When run against a
>> >> > enterprise class server with 8 GB of RAM and at least four processors, maybe
>> >> > eight (can't tell if its hyperthreaded or not) it takes 30 seconds. There
>> >> > is nothing else going on with the large server. This will be the only
>> >> > activity on the system.. I disabled awe and turned off /PAE switch with no
>> >> > change. The timings are repeatable and the only improvement on the
>> >> > enterprise class server is shortly after a reboot and even then the results
>> >> > take 2-4 seconds vs, 1 second on the workstation. This performance quickly
>> >> > deteriorates on the server. I have tried this with the database both on SAN
>> >> > or on local disk on the server and there is no measurable difference.
>> >> >
>> >> > Any ideas on what I should check next?
>> >> >
>> >> > Thanks
>> >>
>> >>
>> >>
>>|||Thank you for your help.
"Tibor Karaszi" wrote:
> Hmm, I'm out of ideas, I'm afraid. In general, if the execution plan is the same (or cost is very
> close), you tend to get the same performance (or a higher on the high-end machine). You could try
> posting to the group -relationalserver.performance and see if you get some more attention there. Or
> perhaps even time to open a ticket with MS?
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
> news:A24BB7B3-D5EC-40E9-8739-3DB0EB4E9A2C@.microsoft.com...
> > Stats IO showed very small differences in number of logical reads. Physical
> > reads were 0 all around - well cached I guess. I had tried to set the server
> > to have SQL Server use only 1 processor which should have the same affect as
> > the MAXDOP setting = 1 (there is some looping and several queries involved
> > in this proc). Funny thing is that this behavior is repeatable on other
> > large servers and the quick performance occurs on other workstations.
> >
> > "Tibor Karaszi" wrote:
> >
> >> Similar <> same :-)
> >> What does STATISTICS I/O say. I usually complement execution plans with statistics I/O. Also
> >> check
> >> for parallelism operators in the plan for the large machine. Sometimes parallelism is done with a
> >> inefficient result, and can be turned off using the MAXDOP hint (set to 1).
> >>
> >> --
> >> Tibor Karaszi, SQL Server MVP
> >> http://www.karaszi.com/sqlserver/default.asp
> >> http://www.solidqualitylearning.com/
> >>
> >>
> >> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
> >> news:348166BF-09E2-46F7-A42D-C16D337A8C4F@.microsoft.com...
> >> > Yes, execution plans are very similar. Not exactly identical - but all
> >> > components are very similar in costs.
> >> >
> >> > "Tibor Karaszi" wrote:
> >> >
> >> >> First step is to check the execution plans. Same?
> >> >>
> >> >> --
> >> >> Tibor Karaszi, SQL Server MVP
> >> >> http://www.karaszi.com/sqlserver/default.asp
> >> >> http://www.solidqualitylearning.com/
> >> >>
> >> >>
> >> >> "Steve H" <SteveH@.discussions.microsoft.com> wrote in message
> >> >> news:2E38BDA4-E051-484B-BFF0-0BAEB1A9729E@.microsoft.com...
> >> >> > Hello!
> >> >> > We are having a problem with one of the queries that an application runs.
> >> >> > This query uses some temp tables (subbed as table variables with no
> >> >> > performance change) and heavily uses some user defined functions. When run
> >> >> > on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
> >> >> > 2000 Developer Editions), it returns within a second. When run against a
> >> >> > enterprise class server with 8 GB of RAM and at least four processors, maybe
> >> >> > eight (can't tell if its hyperthreaded or not) it takes 30 seconds. There
> >> >> > is nothing else going on with the large server. This will be the only
> >> >> > activity on the system.. I disabled awe and turned off /PAE switch with no
> >> >> > change. The timings are repeatable and the only improvement on the
> >> >> > enterprise class server is shortly after a reboot and even then the results
> >> >> > take 2-4 seconds vs, 1 second on the workstation. This performance quickly
> >> >> > deteriorates on the server. I have tried this with the database both on SAN
> >> >> > or on local disk on the server and there is no measurable difference.
> >> >> >
> >> >> > Any ideas on what I should check next?
> >> >> >
> >> >> > Thanks
> >> >>
> >> >>
> >> >>
> >>
> >>
> >>
>
>|||On Wed, 17 May 2006 13:01:02 -0700, Steve H
<SteveH@.discussions.microsoft.com> wrote:
>We are having a problem with one of the queries that an application runs.
>This query uses some temp tables (subbed as table variables with no
>performance change) and heavily uses some user defined functions. When run
>on a personal workstation (SQL Server 2000 Personal or perhaps SQL Server
>2000 Developer Editions), it returns within a second. When run against a
>enterprise class server with 8 GB of RAM and at least four processors, maybe
>eight (can't tell if its hyperthreaded or not) it takes 30 seconds. There
>is nothing else going on with the large server. This will be the only
>activity on the system.. I disabled awe and turned off /PAE switch with no
>change. The timings are repeatable and the only improvement on the
>enterprise class server is shortly after a reboot and even then the results
>take 2-4 seconds vs, 1 second on the workstation. This performance quickly
>deteriorates on the server. I have tried this with the database both on SAN
>or on local disk on the server and there is no measurable difference.
>Any ideas on what I should check next?
Dunno, but I think I've got it too, or a close relative.
The SP uses some permanent "temp" tables - tables that I truncate at the start
of the routine, then fill up with about a million records, perhaps
confusing the optimizer somewhat. Maybe I need to trigger more
recompiles or something? It also uses a simple UDF, in the (newest
version) SELECT clause only.
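One thing I may try after each load (a sketch; dbo.WorkTable stands in for my real staging table):

-- Refresh statistics on the freshly loaded table, then mark every
-- procedure that references it for recompile on its next execution.
UPDATE STATISTICS dbo.WorkTable WITH FULLSCAN;
EXEC sp_recompile 'dbo.WorkTable';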
On the same machine:
SP runs in ten minutes when it feels like it.
SP runs in twenty minutes other times.
SP runs in about 90 minutes other times.
Data is exactly the same.
Of course, production is seeing the longer times.
Not completely controlled for contending loads, but such load as there is
does not explain the 20-to-90-minute jump in any case. It seems to go
CPU-bound for extended periods. Stats IO doesn't show anything outrageous
except for the elapsed time - the time spent in the UDF seems shielded
from stats time, btw, thanks MSFT.
--
Unfortunately we're running this on SQL2K sp2, kind of retro, and
don't ask about the hardware!
But it's very frustrating to have this kind of poor performance and
variability for no apparent reason.
Josh

Inconsistent Performance of Insert

I am experiencing inconsistent performance when running
insert statements.
Our application automatically generates and sequentially
executes SQL statements that build a results table. After
loading data and running this process, all of the statements
(about 125 in total) execute in about 10 minutes. If it is
necessary to rerun the statements, I truncate the results
table and rerun the job. In some cases this second job runs
in 10 minutes; in other cases it runs for 6 hours. In
particular, one statement takes an inordinate amount of time.
I've used Profiler (although I'm not too experienced with
it) and can see that when statements are running normally,
the log file updates in the 1000+ per second range, but with
the problematic statement, updates are in the 10 per second
range. These updates seem to appear after the commit
statement is executed.
In various combinations I've tried restarting SQL Server,
updating statistics, defragging the hard drive, and using
the query optimizer, all with no luck.
Any ideas?
Here's one of the statements that seems to get hung up:
Table AV has 4 columns including a Record ID (RID), a
record name (FID) and a record value (Value). This table
will ultimately have approximately 4 million rows at the
completion of the job.
Table A contains a Record ID (RID) and a Process ID
(PID). This table has approx. 100,000 records.
INSERT INTO AV (RID, FID, Value)
SELECT A.RID, 325,
       ISNULL(ZZTEMPA.Value, 0) + ISNULL(ZZTEMPB.Value, 0) +
       ISNULL(ZZTEMPC.Value, 0) + ISNULL(ZZTEMPD.Value, 0) +
       ISNULL(ZZTEMPE.Value, 0)
FROM A
LEFT JOIN AV AS ZZTEMPA ON A.RID = ZZTEMPA.RID AND ZZTEMPA.FID = 496
LEFT JOIN AV AS ZZTEMPB ON A.RID = ZZTEMPB.RID AND ZZTEMPB.FID = 497
LEFT JOIN AV AS ZZTEMPC ON A.RID = ZZTEMPC.RID AND ZZTEMPC.FID = 499
LEFT JOIN AV AS ZZTEMPD ON A.RID = ZZTEMPD.RID AND ZZTEMPD.FID = 502
LEFT JOIN AV AS ZZTEMPE ON A.RID = ZZTEMPE.RID AND ZZTEMPE.FID = 504
WHERE (A.PID IN (275, 276, 277, 278, 279, 280, 281))
I recognize that the same table AV is being joined
multiple times, but it's my understanding that this should
not be a problem.
Thanks for any guidance on this.|||Not much to go on but have you looked to see if you are being blocked when
this is happening? How about disk and cpu queues?
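For example, while the slow statement is running, something like this (a sketch against the SQL Server 2000 system table) will show whether anything is blocked:

SELECT spid, blocked, waittype, lastwaittype, cpu, physical_io
FROM master.dbo.sysprocesses
WHERE blocked <> 0;  -- rows here mean some spid is waiting on another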
--
Andrew J. Kelly SQL MVP
"Chuck Hardy" <chardy@.coreprofit.com> wrote in message
news:16c501c4a62e$f11bbd90$a601280a@.phx.gbl...
> I am experiencing inconsistent performance when running
> insert statements.
> Our application automatically generates and sequentially
> executes SQL statements that build a results table. After
> loading data and running this process, all statements
> (about 125 in total) will execute in about 10 minutes. If
> it is necessary to rerun the statements I truncate the
> results table and rerun the job. In some cases this
> second job runs in 10 minutes, and in other cases it runs
> 6 hours. In particular, one statement takes an inordinate
> amount of time.
> I've used PROFILER (although I'm not too experienced in
> this) and can see that when statements are running
> normally the log file updates in the 1000+ per second
> range, but with the problematic statement updates are in
> the 10 per second range. These updates seem to appear
> after the commit statement is executed.
> In various combinations I've tried restarting SQLServer,
> updating statistics, defragging the hard drive, using
> query optimizer, all with no luck.
> Any ideas?
> Here's one of the statements that seems to get hung up:
> Table AV has 4 columns including a Record ID (RID), a
> record name (FID) and a record value (Value). This table
> will ultimately have approximately 4 million rows at the
> completion of the job.
> Table A contains a Record ID (RID) and a Process ID
> (PID). This table has approx. 100,000 records.
> INSERT INTO AV(RID, FID, Value) SELECT A.RID, 325, ISNULL
> (ZZTEMPA.Value, 0)+ISNULL(ZZTEMPB.Value, 0)+ISNULL
> (ZZTEMPC.Value, 0)+ISNULL(ZZTEMPD.Value, 0)+ISNULL
> (ZZTEMPE.Value, 0) FROM A LEFT JOIN AV AS ZZTEMPA ON A.RID
> = ZZTEMPA.RID AND ZZTEMPA.FID = 496 LEFT JOIN AV AS
> ZZTEMPB ON A.RID = ZZTEMPB.RID AND ZZTEMPB.FID = 497 LEFT
> JOIN AV AS ZZTEMPC ON A.RID = ZZTEMPC.RID AND ZZTEMPC.FID
> = 499 LEFT JOIN AV AS ZZTEMPD ON A.RID = ZZTEMPD.RID AND
> ZZTEMPD.FID = 502 LEFT JOIN AV AS ZZTEMPE ON A.RID = ZZTEMPE.RID AND ZZTEMPE.FID = 504 WHERE (A.PID IN (275,
> 276, 277, 278, 279, 280, 281))
> I recognize that the same table AV is being joined
> multiple times, but it's my understanding that this should
> not be a problem.
> Thanks for any guidance on this.
>

|||Thanks for replying Andrew.
I did forget to mention: when this occurs CPU usage is 100% with minor
disk access every few seconds. There is never any blocking. These jobs
are being run on a dedicated server with a single user.
The odd thing is that the same data and system conditions produce wildly
different results. In between runs with the same data I truncate the
results table. Is there a possibility that the truncate doesn't manage
indexes the same as a delete would?
Chuck
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!
|||Not quite sure why it sometimes takes much longer, but it may have something
to do with the query plans. If you are starting with a small or empty
table, the query plan will probably be a scan. As you continue to insert
rows, the scans will get longer and longer. Do you have proper indexes on
the table to ensure an efficient plan? If so, then you may want to try
using index hints to force seeks rather than scans where necessary.
Normally I don't advocate using hints, but since this is a controlled process
it should not be a problem if you know they are the correct thing to do.
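For example (a sketch only; the table and index names are illustrative, so substitute your own):

-- Force a specific nonclustered index rather than letting the
-- optimizer choose a scan.
SELECT av.AccountID, av.FormulaValue
FROM dbo.AccountValues AS av WITH (INDEX (AccountValues3))
WHERE av.IFormulaID = 496;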
Andrew J. Kelly SQL MVP
"Chuck Hardy" <chardy@.coreprofit.com> wrote in message
news:ueOQ7otpEHA.3244@.tk2msftngp13.phx.gbl...
> Thanks for replying Andrew.
> I did forget to mention: when this occurs CPU usage is 100% with minor
> disk access every few seconds. There is never any blocking. These jobs
> are being run on a dedicated server with a single user.
> The odd thing is that the same data and system conditions produce wildly
> different results. In between runs with the same data I truncate the
> results table. Is there a possibility that the truncate doesn't manage
> indexes the same as a delete would?
> Chuck
>
>
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!
|||I've used the query optimizer to create indexes as required.
You're right, though: the results table starts empty and fills up by
about 50-100K records with each of the 125+ statements that get executed.
Unfortunately, I don't have a lot of control over the way the queries
are built, as a 3rd-party tool does that for us. I can, however, control
the execution of batches to some degree. Would it make sense to run 1
or 2 smaller batches before the batch that causes the problem?
Also, is it possible that the number of inserts is causing index
fragmentation? (I haven't run DBCC SHOWCONTIG on the results table.)
Would index fragmentation cause a 100% CPU spin? BTW, I don't see an
appreciable change in memory usage or paging while this condition exists.
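Presumably something like this would answer the fragmentation question (a sketch, using my results table):

-- Report fragmentation for the table and all of its indexes
-- (SQL Server 2000 syntax).
DBCC SHOWCONTIG ('AccountValues') WITH ALL_INDEXES;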
Andrew, thanks for all of your help.
Chuck
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!
|||What does the actual DDL for the insert table look like, especially the
Clustered index?
Andrew J. Kelly SQL MVP
"Chuck Hardy" <chardy@.coreprofit.com> wrote in message
news:%23teT4expEHA.3712@.TK2MSFTNGP15.phx.gbl...
> I've used query optimizer to create indexes as required.
> You're right though, the results table starts empty and fills up at
> about 50-100K records per each of the 125+ statements that get executed.
> Unfortunately, I don't have a lot of control over the way the queries
> are built as a 3rd party tool to us does that. I can, however, control
> the execution of batches to some degree. Would it make sense to run 1
> or 2 smaller batches before the batch that causes the problem?
> Also, is it possible that the number of inserts is causing index
> fragmentation? (I haven't run DBCC SHOWCONTIG on the results table.)
> Would index frag cause 100% CPU spin? BTW, I don't see an appreciable
> change in memory usage or paging while this condition exists.
> Andrew, thanks for all of your help.
> Chuck
>
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!
|||Hi Andrew,
I hope this is what you're looking for. I'd welcome your guidance on the
fill factors, along with any other insights. Because each of the 125
SQL statements inserts rows, and I assume (but am not sure) that not
all inserts are being placed at the end of the table, perhaps the fill
factors should be decreased?
/****** Object: Table [dbo].[AccountValues] Script Date: 10/1/2004
9:56:09 AM ******/
CREATE TABLE [dbo].[AccountValues] (
[AccountID] [int] NOT NULL ,
[IFormulaID] [int] NOT NULL ,
[FormulaValue] [decimal](30, 10) NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [AccountValues1] ON
[dbo].[AccountValues]([AccountID]) WITH FILLFACTOR = 80 ON [PRIMARY]
GO
CREATE INDEX [AccountValues3] ON [dbo].[AccountValues]([IFormulaID],
[AccountID]) WITH FILLFACTOR = 80 ON [PRIMARY]
GO
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!
|||Hi Andrew,
Two more things:
- The problem I'm experiencing is described almost exactly in MS
KB835864, except that I'm running a single CPU.
- I have noticed that the performance counter Available MBytes starts
high after a system reboot, but as I execute procedures or ad-hoc
queries it slowly decreases to 5-10 MB. Is there a way to make SQL
Server free up memory without stopping and restarting the server?
Thanks,
Chuck
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!
|||Sounds like you don't have a whole lot of memory to begin with. How much do
you have, and are there other processes running on the server besides SQL
Server? You can use the max server memory setting to ensure the OS and
other apps always have some memory available.
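For example (a sketch; the 1024 MB cap is illustrative, not a recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap SQL Server's memory so the OS and other apps keep some headroom
EXEC sp_configure 'max server memory (MB)', 1024;
RECONFIGURE;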
Andrew J. Kelly SQL MVP
"Chuck Hardy" <chardy@.coreprofit.com> wrote in message
news:e0FxYv%23pEHA.3396@.tk2msftngp13.phx.gbl...
> Hi Andrew,
> Two more things:
> The problem I'm experienceing is described almost exactly in MS
> KB835864, except that I'm running a single CPU.
> I have noticed that the performance statistic Available Mbytes starts
> high after a system reboot, but as I execute procedures or ad-hoc
> queries this slowly decreases to 5-10Mb. Is there a way to make SQL
> Server free up memory without stopping and restarting the server?
> Thanks,
> Chuck
>
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!
|||Chuck,
Not sure if I understand exactly what you are trying to do with that query.
It seems as if you can get the desired result like this (assuming at most
one AV row per RID/FID pair):
SELECT A.RID, 325, SUM(ISNULL(AV.Value, 0)) AS Value
FROM A
LEFT JOIN AV ON AV.RID = A.RID
            AND AV.FID IN (496, 497, 499, 502, 504)
WHERE A.PID IN (275, 276, 277, 278, 279, 280, 281)
GROUP BY A.RID
In any case, you need to determine what query plan this statement is using
during the slow periods. You can run a trace that shows recompiles and
query plans to see if they change during the operation. You can also try
specifying hints to force the proper plans.
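For example, between a fast run and a slow run you could compare what is cached (a sketch against the SQL Server 2000 syscacheobjects table; the LIKE filter is illustrative):

SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE sql LIKE '%INSERT INTO AV%';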
Andrew J. Kelly SQL MVP
"Chuck Hardy" <chardy@.coreprofit.com> wrote in message
news:eSP09s9pEHA.2696@.TK2MSFTNGP15.phx.gbl...
> Hi Andrew,
> I hope this is what you're looking for. I'd ask your guidance on the
> fill factors, along with any other insights. Because each of the 125
> SQL statements are inserting, (and I assume but am not sure) that not
> all inserts are being placed at the end of the table perhaps the fill
> factors should be decreased?
> Thanks,
> Chuck
> /****** Object: Table [dbo].[AccountValues] Script Date: 10/1/2004
> 9:56:09 AM ******/
> CREATE TABLE [dbo].[AccountValues] (
> [AccountID] [int] NOT NULL ,
> [IFormulaID] [int] NOT NULL ,
> [FormulaValue] [decimal](30, 10) NULL
> ) ON [PRIMARY]
> GO
> CREATE CLUSTERED INDEX [AccountValues1] ON
> [dbo].[AccountValues]([AccountID]) WITH FILLFACTOR = 80 ON [PRIMARY]
> GO
> CREATE INDEX [AccountValues3] ON [dbo].[AccountValues]([IFormulaID],
> [AccountID]) WITH FILLFACTOR = 80 ON [PRIMARY]
> GO
>
>
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!