
Friday, March 30, 2012

Increase in Database size after restoration

We took a production database backup of size 1.8 GB.
We restored the database on a standalone system and the size of the database
was the same.
We executed some scripts through which some records were inserted.
After the inserts the database size drastically increased to 5.8 GB.
The 5.8 GB size was due to the increase in reserved space and unused space
of the tables into which the records were inserted.
Then we exported the database through DTS, imported the data, and shrank
the database, which reduced its size to 2 GB.
I wanted to know the reason for the increase in database size when we
executed the insert scripts, how to avoid it, or any approach by which we
can find the solution.
Read this article:
http://www.sql-server-performance.com/database_settings.asp
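To see where the space went, compare reserved vs. used space per table and then reclaim the excess. A rough sketch (replace the database and table names with your own):

exec sp_spaceused                         -- database totals: size, unallocated, unused
exec sp_spaceused 'dbo.YourTable', @updateusage = 'true'   -- per-table reserved/unused space
dbcc shrinkdatabase ('YourDatabase', 10)  -- reclaim free space, leaving 10 percent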
"CUBak_DBSize" <CUBak_DBSize@.discussions.microsoft.com> wrote in message
news:3D114A81-F276-4FB9-A878-6C6712058129@.microsoft.com...
> We took a production database backup of size 1.8 GB.
> We restored the database on a standalone system and the size of the
> database was the same.
> We executed some scripts through which some records were inserted.
> After the inserts the database size drastically increased to 5.8 GB.
> The 5.8 GB size was due to the increase in reserved space and unused
> space of the tables into which the records were inserted.
> Then we exported the database through DTS, imported the data, and shrank
> the database, which reduced its size to 2 GB.
> I wanted to know the reason for the increase in database size when we
> executed the insert scripts, how to avoid it, or any approach by which
> we can find the solution.
>

Monday, March 12, 2012

Inconsistent Subscription Success

Hi,
I've got a production SQL Reporting Services installation and have
created some file share subscriptions for one of the reports.
Sometimes the subscriptions work and sometimes they don't. My customer
has now had enough and wants to have them working all of the time
(unsurprisingly!).
I have spent the whole day testing various things to try to get some
consistency and haven't been able to prove anything. If, for example,
I create 5 new subscriptions (either to run all at the same time or to
run one minute after each other), any number of the subscriptions will
run properly (ie. and will create the file on the file share) -
sometimes none will run, sometimes a few will run, sometimes all five
will run. I have just created eight new subscriptions in an absolutely
identical manner and only one of the eight ran properly (and it wasn't
the first or last one).
I cannot find any error messages anywhere in the system, ie. in event
viewer, in the SQL RS logs, in the SQL logs, etc., and as nothing is
changing on the system - ie. local user rights, NTFS permissions, IIS
permissions, SQL permissions, etc. - I can't work out why the
subscriptions work sometimes and not others.
I've looked through a lot of the Google postings and can see that other
people have similar things.
If anyone can offer any suggestions so that I can get SQL RS to work
properly, please let me know - I'll really appreciate it.
Cheers,
Rich
(MCSE MCSD MCDBA)

Hello Richard,
Can you see all the corresponding jobs created for your subscriptions? When
a subscription does not work, can you see that the job in SQL Server Agent is
running or was triggered as expected?
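To check, something along these lines against msdb should show whether the jobs actually fired (an untested sketch):

select j.name, h.run_date, h.run_time, h.run_status
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobhistory h on h.job_id = j.job_id
order by h.run_date, h.run_time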
Ricardo.
"richard.warner@.zurich.com" wrote:
> Hi,
> I've got a production SQL Reporting Services installation and have
> created some file share subscriptions for one of the reports.
> Sometimes the subscriptions work and sometimes they don't. My customer
> has now had enough and wants to have them working all of the time
> (unsurprisingly!).
> I have spent the whole day testing various things to try to get some
> consistency and haven't been able to prove anything. If, for example,
> I create 5 new subscriptions (either to run all at the same time or to
> run one minute after each other), any number of the subscriptions will
> run properly (ie. and will create the file on the file share) -
> sometimes none will run, sometimes a few will run, sometimes all five
> will run. I have just created eight new subscriptions in an absolutely
> identical manner and only one of the eight ran properly (and it wasn't
> the first or last one).
> I cannot find any error messages anywhere in the system, ie. in event
> viewer, in the SQL RS logs, in the SQL logs, etc., and as nothing is
> changing on the system - ie. local user rights, NTFS permissions, IIS
> permissions, SQL permissions, etc. - I can't work out why the
> subscriptions work sometimes and not others.
> I've looked through a lot of the Google postings and can see that other
> people have similar things.
> If anyone can offer any suggestions so that I can get SQL RS to work
> properly, please let me know - I'll really appreciate it.
> Cheers,
>
> Rich
> (MCSE MCSD MCDBA)
>

Hi, Ricardo.
Thanks for your reply.
Yes - the jobs all show in the SQL Server Agent jobs view in Enterprise
Manager. They show as Succeeded (<date> <time>). As SQL Server
considers them to have succeeded, there is no entry in the event log.
During some of the testing, I modified one of the jobs (Notification
tab) so that it wrote to the Windows application event log "whenever
the job completes", and all that did was write an entry to the event
log to say the job had completed successfully!
When I look in the Execution Log table of the SQL RS database, I can
see that not all of the subscriptions show there.
When I look in the Subscriptions table of the SQL RS database, I can
see all of the subscriptions, but the ones that didn't work properly
still show a LastRun time of <NULL>.
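For reference, this is roughly the query I'm running against the RS catalog (table and column names from memory, so treat it as a sketch):

use ReportServer
select SubscriptionID, LastStatus, LastRunTime
from dbo.Subscriptions
-- the subscriptions that never fired keep LastRunTime = NULL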
Cheers,
Rich

Hello Richard,
It seems that there is a mismatch between the jobs in SQL and the
subscriptions. Have you tried recreating the jobs? Stop the ReportServer
service, then delete all SQL Agent jobs related to reporting services (all
jobs that have category "Report Server"), and then start the ReportServer
service. It will recreate the necessary SQL Agent jobs.
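Something like this should list the jobs in question before you delete them (a sketch):

select j.name
from msdb.dbo.sysjobs j
join msdb.dbo.syscategories c on c.category_id = j.category_id
where c.name = 'Report Server'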
Ricardo.
"richard.warner@.zurich.com" wrote:
> Hi, Ricardo.
> Thanks for your reply.
> Yes - the jobs all show in the SQL Server Agent jobs view in Enterprise
> Manager. They show as Succeeded (<date> <time>). As SQL server
> considers them to have succeeded, there is no entry in the event log.
> During some of the testing, I modified one of the jobs (Notification
> tab) so that it wrote to the Windows application event log "whenever
> the job completes", and all that did was write an entry to the event
> log to say the job had completed successfully!
> When I look in the Execution Log table of the SQL RS database, I can
> see that not all of the subscriptions show there.
> When I look in the Subscriptions table of the SQL RS database, I can
> see all of the subscriptions, but the ones that didn't work properly
> still show a LastRun time of <NULL>.
> Cheers,
>
> Rich
>

As you suggested, we stopped the ReportServer service, deleted the
Report Server SQL Agent jobs, then restarted the ReportServer service,
and the Report Server jobs were recreated in the SQL Agent. However,
it hasn't helped resolve the problem.
I have just created five new run-once subscriptions - each configured
to run one minute after the last. The first three were successful, the
fourth wasn't, and the fifth was. Looking in the SQL Agent, all of
them show as Succeeded (<date> <time>). Again - all five subscriptions
were created in an absolutely identical manner.
Have you got any other suggestions?
Cheers,
Rich

Hello Richard,
What is the status of the subscription in the Subscriptions page? Does it
show that all subscription run and all of them were successful?
In the logs, can you see the calls for all five subscriptions?
Ricardo.
"richard.warner@.zurich.com" wrote:
> As you suggested, we stopped the ReportServer service, deleted the
> Report Server SQL Agent jobs, then restarted the ReportServer service,
> and the Report Server jobs were recreated in the SQL Agent. However,
> it hasn't helped resolve the problem.
> I have just created five new run-once subscriptions - each configured
> to run one minute after the last. The first three were successful, the
> fourth wasn't, and the fifth was. Looking in the SQL Agent, all of
> them show as Succeeded (<date> <time>). Again - all five subscriptions
> were created in an absolutely identical manner.
> Have you got any other suggestions?
> Cheers,
>
> Rich
>

Hi, Ricardo.
Thanks for your quick reply again.
Sorry - I was mistaken in my last mail - three of the subscriptions ran
successfully and two of them didn't (not four and one, as I'd said).
In the Subscriptions page, the three successful subscriptions show as
"File xx.xx was written to xx", and the two unsuccessful subscriptions
show as "New subscription".
Which log are you referring to? If you're referring to the
ReportServer_<date>_<time>.log file in the SQL RS LogFiles directory,
then yes - I can see a line for the creation of each of the five
subscriptions. The line is as follows:
"aspnet_wp!subscription!bf4!<date>-<time>:: Subscription Created for
report /<folder>/<report> at <date>T<time> by <me>"
This line occurs five times and corresponds exactly with the times that
I created the subscriptions.
Cheers,
Rich

Hello Richard,
If the subscription stays at "New subscription" then it means it hasn't run,
or (I think) something happened and it is retrying. Has the process
deadlocked? Are you trying to write the same filename all the time? Is it
possible that there was a sharing violation? In the logs, after the
subscription was queued the first time, can you see whether RS is queuing the
two "missing" subscriptions again? Has the status changed in the
Subscriptions page?
Ricardo.
"richard.warner@.zurich.com" wrote:
> Hi, Ricardo.
> Thanks for your quick reply again.
> Sorry - I was mistaken in my last mail - three of the subscriptions ran
> successfully and two of them didn't (not four and one, as I'd said).
> In the Subscriptions page, the three successful subscriptions show as
> "File xx.xx was written to xx", and the two unsuccessful subscriptions
> show as "New subscription".
> Which log are you referring to? If you're referring to the
> ReportServer_<date>_<time>.log file in the SQL RS LogFiles directory,
> then yes - I can see a line for the creation of each of the five
> subscriptions. The line is as follows:
> "aspnet_wp!subscription!bf4!<date>-<time>:: Subscription Created for
> report /<folder>/<report> at <date>T<time> by <me>"
> This line occurs five times and corresponds exactly with the times that
> I created the subscriptions.
> Cheers,
>
> Rich
>

Hi, Ricardo.
Are you referring to the aspnet_wp.exe process? If so, there are no
events in the event log showing that the process has deadlocked and
been restarted.
Each subscription is created to write to a different file name.
I don't think it's possible that there was a sharing violation. In
each case, the file doesn't exist before the subscription runs and
nothing tries to open the file subsequently.
In the log file, RS isn't queuing the two missing subscriptions again.
The status hasn't changed in the Subscriptions page.
In the ReportServerService_<date>_<time>.log file, I can see the
successful subscriptions being run. These ran today at 13:23, 13:25
and 13:27. The missing ones were scheduled to run at 13:24 and 13:26.
Here is a typical section of the log from the successful subscriptions:
ReportingServicesService!dbpolling!a20!02/11/2005-13:27:04::
EventPolling processing 1 more items. 1 Total items in internal queue.
ReportingServicesService!dbpolling!d64!11/02/2005-13:27:04::
EventPolling processing item ce6bf843-1a37-4c0a-aa76-218284fafc1a
ReportingServicesService!library!d64!11/02/2005-13:27:04:: Schedule
69782008-e708-4927-9eeb-f1640c7ec996 executed at 11/02/2005 13:27:04.
ReportingServicesService!schedule!d64!11/02/2005-13:27:04:: Creating
Time based subscription notification for subscription:
63b17fae-0d44-4d2e-8a75-c0ac68ebd080
ReportingServicesService!library!d64!11/02/2005-13:27:04:: Schedule
69782008-e708-4927-9eeb-f1640c7ec996 execution completed at 11/02/2005
13:27:04.
ReportingServicesService!dbpolling!d64!11/02/2005-13:27:04::
EventPolling finished processing item
ce6bf843-1a37-4c0a-aa76-218284fafc1a
ReportingServicesService!dbpolling!a20!02/11/2005-13:27:04::
NotificationPolling processing 1 more items. 1 Total items in internal
queue.
ReportingServicesService!dbpolling!d64!11/02/2005-13:27:04::
NotificationPolling processing item
ceeebad5-0578-4b59-af37-5fd8305ddbac
ReportingServicesService!library!d64!11/02/2005-13:27:04:: i INFO: Call
to RenderFirst( '/<folder>/<report>' ) !-- this line modified by me in
Google post to hide folder/report name
ReportingServicesService!library!d64!11/02/2005-13:27:06:: i INFO:
Initializing EnableExecutionLogging to 'True' as specified in Server
system properties.
ReportingServicesService!notification!d64!11/02/2005-13:27:06::
Notification ceeebad5-0578-4b59-af37-5fd8305ddbac completed. Success:
True, Status: File E5.pdf was written to \\<server>\<share>,
DeliveryExtension: Report Server FileShare, Report: ARMReport1, Attempt
0 !-- this line modified by me again
ReportingServicesService!dbpolling!d64!11/02/2005-13:27:06::
NotificationPolling finished processing item
ceeebad5-0578-4b59-af37-5fd8305ddbac
I have created 12 more subscriptions running at various intervals (1
minute, 2 minutes, 3 minutes, 5 minutes and 10 minutes) - mixing these
intervals up to see if it's anything to do with how far apart the jobs
are running. I know this is grabbing at straws, but I'm happy to try
anything! :-> I've just realised that the results from these
subscriptions will be in soon, so I'll hold on before posting this and
will include the results ...
New results from 12 subscriptions:
1 - (Time = 15:40) - Didn't run
2 - (Time = 15:41) - Didn't run
3 - (Time = 15:43) - Ran
4 - (Time = 15:45) - Ran
5 - (Time = 15:48) - Didn't run
6 - (Time = 15:50) - Didn't run
7 - (Time = 15:52) - Didn't run
8 - (Time = 15:53) - Ran
9 - (Time = 15:55) - Ran
10 - (Time = 15:58) - Didn't run
11 - (Time = 16:03) - Didn't run
12 - (Time = 16:13) - Didn't run
That seems fairly inconclusive to me!
Any other thoughts?
Cheers,
Rich

Hello Richard,
I am talking about the process in the database. Are you running the reports
from a stored procedure, or a simple query? If you run the stored procedures
or queries in Query Analyzer at those intervals, does it always run and
return data? Do they deadlock?
Ricardo.
"richard.warner@.zurich.com" wrote:
> Hi, Ricardo.
> Are you referring to the aspnet_wp.exe process? If so, there are no
> events in the event log showing that the process has deadlocked and
> been restarted.
> Each subscription is created to write to a different file name.
> I don't think it's possible that there was a sharing violation. In
> each case, the file doesn't exist before the subscription runs and
> nothing tries to open the file subsequently.
> In the log file, RS isn't queuing the two missing subscriptions again.
> The status hasn't changed in the Subscriptions page.
> In the ReportServerService_<date>_<time>.log file, I can see the
> successful subscriptions being run. These ran today at 13:23, 13:25
> and 13:27. The missing ones were scheduled to run at 13:24 and 13:26.
> Here is a typical section of the log from the successful subscriptions:
> ReportingServicesService!dbpolling!a20!02/11/2005-13:27:04::
> EventPolling processing 1 more items. 1 Total items in internal queue.
> ReportingServicesService!dbpolling!d64!11/02/2005-13:27:04::
> EventPolling processing item ce6bf843-1a37-4c0a-aa76-218284fafc1a
> ReportingServicesService!library!d64!11/02/2005-13:27:04:: Schedule
> 69782008-e708-4927-9eeb-f1640c7ec996 executed at 11/02/2005 13:27:04.
> ReportingServicesService!schedule!d64!11/02/2005-13:27:04:: Creating
> Time based subscription notification for subscription:
> 63b17fae-0d44-4d2e-8a75-c0ac68ebd080
> ReportingServicesService!library!d64!11/02/2005-13:27:04:: Schedule
> 69782008-e708-4927-9eeb-f1640c7ec996 execution completed at 11/02/2005
> 13:27:04.
> ReportingServicesService!dbpolling!d64!11/02/2005-13:27:04::
> EventPolling finished processing item
> ce6bf843-1a37-4c0a-aa76-218284fafc1a
> ReportingServicesService!dbpolling!a20!02/11/2005-13:27:04::
> NotificationPolling processing 1 more items. 1 Total items in internal
> queue.
> ReportingServicesService!dbpolling!d64!11/02/2005-13:27:04::
> NotificationPolling processing item
> ceeebad5-0578-4b59-af37-5fd8305ddbac
> ReportingServicesService!library!d64!11/02/2005-13:27:04:: i INFO: Call
> to RenderFirst( '/<folder>/<report>' ) !-- this line modified by me in
> Google post to hide folder/report name
> ReportingServicesService!library!d64!11/02/2005-13:27:06:: i INFO:
> Initializing EnableExecutionLogging to 'True' as specified in Server
> system properties.
> ReportingServicesService!notification!d64!11/02/2005-13:27:06::
> Notification ceeebad5-0578-4b59-af37-5fd8305ddbac completed. Success:
> True, Status: File E5.pdf was written to \\<server>\<share>,
> DeliveryExtension: Report Server FileShare, Report: ARMReport1, Attempt
> 0 !-- this line modified by me again
> ReportingServicesService!dbpolling!d64!11/02/2005-13:27:06::
> NotificationPolling finished processing item
> ceeebad5-0578-4b59-af37-5fd8305ddbac
> I have created 12 more subscriptions running at various intervals (1
> minute, 2 minutes, 3 minutes, 5 minutes and 10 minutes) - mixing these
> intervals up to see if it's anything to do with how far apart the jobs
> are running. I know this is grabbing at straws, but I'm happy to try
> anything! :-> I've just realised that the results from these
> subscriptions will be in soon, so I'll hold on before posting this and
> will include the results ...
> New results from 12 subscriptions:
> 1 - (Time = 15:40) - Didn't run
> 2 - (Time = 15:41) - Didn't run
> 3 - (Time = 15:43) - Ran
> 4 - (Time = 15:45) - Ran
> 5 - (Time = 15:48) - Didn't run
> 6 - (Time = 15:50) - Didn't run
> 7 - (Time = 15:52) - Didn't run
> 8 - (Time = 15:53) - Ran
> 9 - (Time = 15:55) - Ran
> 10 - (Time = 15:58) - Didn't run
> 11 - (Time = 16:03) - Didn't run
> 12 - (Time = 16:13) - Didn't run
> That seems fairly inconclusive to me!
> Any other thoughts?
> Cheers,
>
> Rich
>

Hi, Ricardo.
Sorry for the delay in my reply. I've made slight progress in that I
know that one of the two SQL RS IIS servers is causing a problem (I
probably should have said before that we have two SQL RS servers
talking to one SQL RS database). By stopping the ReportServer service
on the second SQL RS server, all of the subscriptions now work (even
though the subscriptions are only configured on one of the servers, and
are only saving files on the same server, etc - so as far as I was
concerned, the second server wasn't involved). I'll look into this
more tomorrow, but for now my solution is just to keep that service
stopped on the second server and lose the resilience that the second
SQL RS server was providing.
Thanks again for your help.
Rich

Inconsistent SP performance on different Servers

One of my developers recently installed a backup of the production database onto his test site. His test server has the same configuration as the production server.

One of the Stored Procedures that is called takes 1:45 to run on his machine, but only 2 seconds on the production server. This same SP takes only 2 seconds on my development database.

The SP is called iteratively, up to 10 times, to run against 10 separate fields. The value of a parameter called @CriteriaClassID determines which portion of the SP runs.

The significant difference in processing time is in itself baffling (the servers have the same specs and configuration, as far as I can tell, and the data is identical, since he has a backup of the most recent production data).

But more baffling: if, in his data, I switch the values from field 1 to field 2, and vice versa, his results take 2 seconds (switching the values in field 1 to field 2 switches the value in @CriteriaClassID which is passed through to this SP).

It's exactly the same SP; the only difference is that field 1 is processed first, field 2 second, field 3 third etc. On the production site and my development site, it doesn't make a difference in the order they are processed. On his machine it does.

Any ideas? I thought perhaps his indexes were corrupted in the rebuild, but we ran a SQL Server maintenance schedule to clean them up, and saw no improvement.

This is the SP, if it is of any help:

CREATE procedure [dbo].[st_pull_model_data] @ModelID as integer, @CriteriaID as integer
as

declare @ClientID as integer, @CriteriaClassId as char(1)

/* Procedure to pull data from org_model_data and postalcode_model_data for modeling and media analysis */
/* Needs table #temp_data, created outside of this SP, with fields org_id and zip_code */
/* This procedure is used by SP st_model_data */

if @CriteriaID is not null
begin

    set @CriteriaClassId = (select model_criteria_type from model_criteria
                            where model_criteria_id = @CriteriaID)

    if @CriteriaClassId = 'G' -- changes client_id from specific to general, if General is required
    begin
        set @ClientID = 0
    end
    else
    begin
        set @ClientID = (select client_id from model where model_id = @ModelID)
    end

    if @CriteriaClassId in ('G','P')
    begin
        update #temp_data
        set data1 = postal_criteria_value
        from #temp_data t
        left outer join
            (select postalcode, postal_criteria_value
             from postalcode_model_data pmd
             join model_org_trade_area mota on mota.zip_code = pmd.postalcode
             join model_org mo on mo.model_org_id = mota.model_org_id
             where model_criteria_id = @CriteriaID
               and client_id = @ClientID
               and mo.model_id = @ModelID) as PMD
        on PMD.postalcode = t.zip_code
    end
    else
    begin
        update #temp_data
        set data1 = org_criteria_value
        from #temp_data t
        left outer join
            (select distinct postalcode, org_criteria_value, omd.org_id
             from org_model_data omd
             join org o on o.org_id = omd.org_id
             join model_org_trade_area mota on mota.zip_code = omd.postalcode
             join model_org mo on mo.model_org_id = mota.model_org_id and mo.org_id = o.org_id
             where model_criteria_id = @CriteriaID
               and o.client_id = @ClientID
               and mo.model_id = @ModelID) as OMD
        on OMD.postalcode = t.zip_code and omd.org_id = t.org_id
    end
end

Just a thought on something to try...

Declare two local variables that are similar to the two passed-in parameters, copy the parameters to these local variables at the top of your sproc, and then use the local variables only within the code.
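In code, the workaround looks roughly like this (an untested sketch against the proc above; only the top changes):

create procedure [dbo].[st_pull_model_data] @ModelID as integer, @CriteriaID as integer
as
declare @LocalModelID integer, @LocalCriteriaID integer
set @LocalModelID = @ModelID        -- the optimizer can no longer sniff these copies
set @LocalCriteriaID = @CriteriaID
-- ...then reference only @LocalModelID / @LocalCriteriaID in the body...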

If this has the effect of solving your issue, you have a "parameter sniffing" issue.

One other thought, is his tempdb on the same physical disk as the database data files? It won't explain the whole performance problem, but it will explain a part.

Chopin,

Thanks a ton. The inclusion of the two local variables (copying over the SP variables) solved the problem for performance.

Another lesson learned.

In that case, does that mean a backup/restore does not restore the query plans for the stored procs in the database? Any idea?

No... a backup does not save the procedure cache.

Friday, March 9, 2012

Inconsistent Linked Server Execution Plans

I am witnessing something odd with regard to execution plans and a
linked server.
I have a restore of a production database in two different
environments, one DEV the other QA. This database resides on the linked
server in our setup. The other database -- the local database -- has
been installed and populated by a script in both DEV and QA. The
version and edition of SQL Server are identical in both environments.
In the DEV environment, executing a query which uses the linked server
(in a four part name) results in a reasonable execution plan with
filtering occurring on the remote database before the rows are returned.
However, in the QA environment the same query results in an execution
plan that has _all_ of the rows from the remote tables returned before
filtering. The tables in the database on the linked server are somewhat
large (more than 1M rows). The execution times differ by a vast amount.
I'm really at a loss. I know the query optimizer is a fickle mistress
but is this evidence of something predictable and knowable? What are
the types of things that I might take a look at in order to discover
why the same query in two very similar environments generates such
wildly different execution plans? And, more than that why on earth
would the query optimizer not just as a matter of course _always_
filter the rows remotely before returning them? Is there any way to
force the query optimizer to filter the rows remotely before returning
them? I've looked at the openquery function which does this but is not
practical in this circumstance given the query itself. Rearchitecture
is an option but I'm dying for some kind of explanation for this
behavior. My brain is broke.

Hi
I am not familiar with your problem in particular, but you could try
updating indexes and statistics on the linked server to see if anything
changes. To avoid any possible filtering problem you could execute a remote
stored procedure or possibly use OPENQUERY.
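For example, something along these lines would force the filtering to happen remotely (a sketch; the linked server and table names are placeholders):

select *
from openquery(MyLinkedServer,
    'select postalcode, criteria_value from dbo.RemoteTable where criteria_value > 100')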
John
"person" wrote:

> I am witnessing something odd with regard to execution plans and a
> linked server.
> I have a restore of a production database in two different
> environments, one DEV the other QA. This database resides on the linked
> server in our setup. The other database -- the local database -- has
> been installed and populated by a script in both DEV and QA. The
> version and edition of SQL Server are identical in both environments.
> In the DEV environment, executing a query which uses the linked server
> (in a four part name) results in a reasonable execution plan with
> filtering occurring on the remote database before the rows are returned.
> However, in the QA environment the same query results in an execution
> plan that has _all_ of the rows from the remote tables returned before
> filtering. The tables in the database on the linked server are somewhat
> large > 1M. The execution times differ by a vast amount.
> I'm really at a loss. I know the query optimizer is a fickle mistress
> but is this evidence of something predictable and knowable? What are
> the types of things that I might take a look at in order to discover
> why the same query in two very similar environments generates such
> wildly different execution plans? And, more than that why on earth
> would the query optimizer not just as a matter of course _always_
> filter the rows remotely before returning them? Is there any way to
> force the query optimizer to filter the rows remotely before returning
> them? I've looked at the openquery function which does this but is not
> practical in this circumstance given the query itself. Rearchitecture
> is an option but I'm dying for some kind of explanation for this
> behavior. My brain is broke.
>

I updated the statistics on all the databases involved to no avail. I'm
unable to use the OPENQUERY function since I have to join tables across
the databases.
Which execution plan seems to be the anomaly? Would it be the execution
plan where the data is filtered on the linked server prior to being
returned or the execution plan that returns all the rows before
filtering?

I think I may have found some promising information:
http://www.sql-server-performance.com/linked_server.asp
/quote
When running distributed queries on a linked server, if the linked
server has the same character set and sort order (collation) as the
local SQL Server, then you can reduce overhead and boost performance if
you set the SP_SERVEROPTION "collation compatible" option to true. What
this setting does is tell SQL Server to assume that all columns and
character sets on the remote server are compatible with the local
server. This same option can also be turned on for a linked server
using Enterprise Manager
If this option is not selected, then the distributed query being
executed on the remote server must return the entire table to the local
server in order for the WHERE clause to be applied. As you can imagine,
this could potentially return a lot of unnecessary data over the
network, slowing it down.
If the option is selected, (which is always recommended if the
collations are the same on both servers), then the WHERE clause is
applied on the remote server. This, of course, means that much less
data is transmitted over the network, often greatly speeding up the
distributed query. [7.0, 2000] Updated 2-16-2004
/endquote

Hi
Also check out the other information at the bottom of the article.
Some examples of locally performed operations include:
Data conversion operations
Queries that use the bit, timestamp, or uniqueidentifier data types
Queries that use the TOP clause
INSERTS, UPDATES, or DELETES
John
"person" wrote:

> I think I may have found some promising information:
> http://www.sql-server-performance.com/linked_server.asp
> /quote
> When running distributed queries on a linked server, if the linked
> server has the same character set and sort order (collation) as the
> local SQL Server, then you can reduce overhead and boost performance if
> you set the SP_SERVEROPTION "collation compatible" option to true. What
> this setting does is tell SQL Server to assume that all columns and
> character sets on the remote server are compatible with the local
> server. This same option can also be turned on for a linked server
> using Enterprise Manager
> If this option is not selected, then the distributed query being
> executed on the remote server must return the entire table to the local
> server in order for the WHERE clause to be applied. As you can imagine,
> this could potentially return a lot of unnecessary data over the
> network, slowing it down.
> If the option is selected, (which is always recommended if the
> collations are the same on both servers), then the WHERE clause is
> applied on the remote server. This, of course, means that much less
> data is transmitted over the network, often greatly speeding up the
> distributed query. [7.0, 2000] Updated 2-16-2004
> /endquote
>

The collation compatibility was not the magic bullet I was hoping it
would be. The two databases use the same collation and enabling the
collation compatibility option on the linked server had no effect.
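(For the record, I set it with something like this, per the article, with 'MyLinkedServer' standing in for our linked server's name:)

exec sp_serveroption 'MyLinkedServer', 'collation compatible', 'true'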
However, digging in further it now appears that there is a correlation
between W2003 and the poor linked server performance.
In our initial setup in the dev environment the local server was a W2K
box while the remote server was a W2003 box. In the QA environment both
servers are W2003. I installed the local db on W2003 box in the Dev
environment. To my surprise, the problem is now reproducible. I can
reproduce the problem in the dev environment only when the local db is
on a W2003 box but not when it is on a W2K box.
So there is some correlation between W2003 and poor linked server
performance.

Hi
A pure guess would be to check that DTC is working correctly and possibly
re-install it: http://support.microsoft.com/default.aspx?scid=kb;en-us;Q306843
John
"person" wrote:

> The collation compatibility was not the magic bullet I was hoping it
> would be. The two database use the same collation and enabling the
> collation compatibility option on the linked server had no effect.
> However, digging in further it now appears that there is a correlation
> between W2003 and the poor linked server performance.
> In our initial setup in the dev environment the local server was a W2K
> box while the remote server was a W2003 box. In the QA environment both
> servers are W2003. I installed the local db on W2003 box in the Dev
> environment. To my surprise, the problem is now reproducible. I can
> reproduce the problem in the dev environment only when the local db is
> on a W2003 box but not when it is on a W2K box.
> So there is some correlation between W2003 and poor linked server
> performance.
>

I'm not sure that that article applies to my situation. We're not using
distributed transactions.
Does this seem like a situation where a support ticket to Microsoft
might be advisable? I can consistently demonstrate that the query
optimizer devises radically different execution plans for the same
stored procedure depending on which operating system the local db is on
(all other factors being equal -- or apparently so). I don't know if
anybody from Microsoft looks at this forum...but would starting a new
thread with a more specific subject line get this problem any better
exposure?

Hi
It is not something I have come across. I could not find anything at
http://lab.msdn.microsoft.com/productfeedback so raising it with PSS may be
an idea.
John
"person" wrote:

> I'm not sure that that article applies to my situation. We're not using
> distributed transactions.
> Does this seem like a situation where a support ticket to Microsoft
> might be advisable? I can consistently demonstrate that the query
> optimizer devises radically different execution plans for the same
> stored procedure depending on which operating system the local db is on
> (all other factors being equal -- or apparently so). I don't know if
> anybody from Microsoft looks at this forum...but would starting a new
> thread with a more specific subject line get this problem any better
> exposure?
>

Did you come up with an explanation for the differences in the way the
two OSs handled the queries?
I'm curious, because I migrated from a perfectly-good W2K installation
to a W2003 machine, and my OLE DB Sybase linked server threw ALL KINDS
of errors... things that look to me like collation differences.
Rick

Rick,
No real explanation yet as to why the OSs handle the queries
differently.
We put in a support ticket with Microsoft. Their tech said to enable
the dynamic parameters option for the SQL Server provider. So in other
words, drop the linked server and recreate it with the dynamic
parameters option selected. I tried that and it had no effect. The
Microsoft tech then suggested we reboot the machine. I did and it
appears to have had a good effect. I am still in the process of testing
and I will post an update if anything interesting ocurrs. Here is the
knowledge base article the tech provided as documentation for his
suggestion:
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q320208.
The article doesn't mention the reboot step. But it is a Microsoft
product and well...
Interestingly enough, the SQL Server provider option for the dynamic
parameters is _not_ selected on the w2k machine. The execution plan on
the w2003 machine is still not as efficient as on the w2k machine but
I'm closer than where I was last month.