Friday, February 24, 2012

include the report date

Best case, I include the report date in the report header. Worst case, in
the subject line of the subscription email. I'm using SQL/RS v2000, VS 2003,
anybody know how I can do this? nothing crazy, just like this:
ReportName xx/xx/xxxx
-- Lynn|||I tried to add a text box into the header, filling it with this expression:
select LEFT(Dateadd(mm,-0,getdate()),11)
it fails, indicating:
'The value expression for the textbox 'textbox7' contains an error:
[BC30201] Expression expected.'
please advise
-- Lynn
"Lynn" wrote:
> Best case, I include the report date in the report header. Worst case, in
> the subject line of the subscription email. I'm using SQL/RS v2000, VS 2003,
> anybody know how I can do this? nothing crazy, just like this:
> ReportName xx/xx/xxxx
> -- Lynn|||Please disregard, I got it. I included a text box with this as the
expression, everything is fine now, thanks anyway:
=Format(Today(), "dd MMM yy")
-- Lynn
"Lynn" wrote:
> I tried to add a text box into the header, just filling it w/this expression:
> select LEFT(Dateadd(mm,-0,getdate()),11)
> it fails indicating
> 'the value expression for the textbox 'textbox7' contains an error:
> [BC30201] Expression expected.'
> please advise
> -- Lynn
>
> "Lynn" wrote:
> > Best case, I include the report date in the report header. Worst case, in
> > the subject line of the subscription email. I'm using SQL/RS v2000, VS 2003,
> > anybody know how I can do this? nothing crazy, just like this:
> >
> > ReportName xx/xx/xxxx
> >
> > -- Lynn|||Just an FYI, you can use expression builder, look at the global variables.
One of those has what you want.
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Lynn" <Lynn@.discussions.microsoft.com> wrote in message
news:DA80718F-E475-4891-93A5-EAEB1CB3D2D7@.microsoft.com...
> Best case, I include the report date in the report header. Worst case, in
> the subject line of the subscription email. I'm using SQL/RS v2000, VS
> 2003,
> anybody know how I can do this? nothing crazy, just like this:
> ReportName xx/xx/xxxx
> -- Lynn|||Will do, Bruce, thank you. I will check it out.
-- Lynn
Oh, oh, just a slightly related question -- I've created my browser role and
I'm trying to send out this: http://webserver/reports
so that certain people can have direct access to a number of my reports.
Right now there's a 'Users Folders' folder in there along with my reports. I'm
unable to hide that. Do you know how to get rid of it?
"Bruce L-C [MVP]" wrote:
> Just an FYI, you can use expression builder, look at the global variables.
> One of those has what you want.
>
> --
> Bruce Loehle-Conger
> MVP SQL Server Reporting Services
> "Lynn" <Lynn@.discussions.microsoft.com> wrote in message
> news:DA80718F-E475-4891-93A5-EAEB1CB3D2D7@.microsoft.com...
> > Best case, I include the report date in the report header. Worst case, in
> > the subject line of the subscription email. I'm using SQL/RS v2000, VS
> > 2003,
> > anybody know how I can do this? nothing crazy, just like this:
> >
> > ReportName xx/xx/xxxx
> >
> > -- Lynn
>
>|||Yes, you can remove the browser role from that particular folder. Normally
the folder inherits from the parent, so all folders initially have the same
roles assigned to them. Click on the folder, then Properties, Security, and remove
the browser role. Or, if it is just that you don't want them distracted by
it, you could hide it in list view (Properties, General).
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Lynn" <Lynn@.discussions.microsoft.com> wrote in message
news:CDBC9A73-7FD8-4271-9BCE-73D03A0BD3D1@.microsoft.com...
> Will do, Bruce, thank you. I will check it out.
> -- Lynn
> oh, oh, just a slightly related question -- i've created my browser role
> and
> i'm trying to send out this : http://webserver/reports
> such that certain people can have direct access to a number of my reports
> right now there's a 'Users Folders' in there along with my reports. I'm
> unable to hide that. Do you know how to get rid of it?
> "Bruce L-C [MVP]" wrote:
>> Just an FYI, you can use expression builder, look at the global
>> variables.
>> One of those has what you want.
>>
>> --
>> Bruce Loehle-Conger
>> MVP SQL Server Reporting Services
>> "Lynn" <Lynn@.discussions.microsoft.com> wrote in message
>> news:DA80718F-E475-4891-93A5-EAEB1CB3D2D7@.microsoft.com...
>> > Best case, I include the report date in the report header. Worst case,
>> > in
>> > the subject line of the subscription email. I'm using SQL/RS v2000, VS
>> > 2003,
>> > anybody know how I can do this? nothing crazy, just like this:
>> >
>> > ReportName xx/xx/xxxx
>> >
>> > -- Lynn
>>
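For completeness, Bruce's pointer to the global variables refers to the SSRS built-in Globals collection; a header text box expression along these lines should work (a sketch, not tested against the poster's report; Globals!ExecutionTime is the time the report ran):

```vb
' Report name followed by the run date, e.g. "ReportName 24/02/2012"
=Globals!ReportName & " " & Format(Globals!ExecutionTime, "dd/MM/yyyy")
```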

Include sales information about sales manager

I need to extract sales information about every employee in a parent-child dimension, regardless of hierarchy.

The problem is that when an employee is a manager, his sales amount includes aggregated information (that is, his own sales plus the sales of the employees reporting to him).
According to BOL, his personal sales amount is stored in DataMember (system-generated member).

Using MDX, how can I extract the "own" sales amount of every employee, whether he is a manager or not?

After a long fight with MDX, I am able to filter out non-leaf items, but I am not able to extract the DataMember part of non-leaf members.

Any clue is welcome.

To illustrate my problem, below are the numbers shown by "Adventure Works" and the numbers I need to extract.

Browsing Adventure Works cube

Filter: Date.[Calendar Year] = {CY 2004}

Detail Fields: [Measures].[Reseller Sales Amount]

Row Fields: [Employee].[Employees]

Amy E. Alberts (under Ken J. Sanchez and Brian S. Welcker) shows a total sales amount of $4,110,734.65, distributed as follows:

Jae B. Pak $1,808,043.26

Rachel B. Valdez $829,512.64

Ranjit R. Varkey Chudukatil $1,374,855.78

Total (incl. Amy E. Alberts): $4,110,734.65

Amy E. Alberts $98,322.97 (this is not shown in the browser)

I need to extract:

Jae B. Pak $1,808,043.26

Rachel B. Valdez $829,512.64

Ranjit R. Varkey Chudukatil $1,374,855.78

Amy E. Alberts $98,322.97

Thanks.

Ren


Hi

Just a guess, not tested, but how about subtracting the descendant employees' sales, if any descendants exist?

Regards

Chris

|||

I looked at many solutions but I am not able to find the most efficient one.

It seems to be an obvious problem, yet the solution is not easy to find.
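For what it's worth, one common pattern (a sketch against the Adventure Works names used above, untested here) is to define a calculated measure from the current member's DataMember, which carries the member's own, non-aggregated value:

```mdx
WITH MEMBER [Measures].[Own Reseller Sales] AS
  -- DataMember is the system-generated member holding the employee's own value
  ( [Employee].[Employees].CurrentMember.DataMember,
    [Measures].[Reseller Sales Amount] )
SELECT
  { [Measures].[Own Reseller Sales] } ON COLUMNS,
  NON EMPTY [Employee].[Employees].MEMBERS ON ROWS
FROM [Adventure Works]
WHERE ( [Date].[Calendar Year].&[2004] )
```

This should return each employee's own amount whether or not he is a manager, e.g. Amy E. Alberts's $98,322.97 rather than her $4,110,734.65 rollup.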

Include report link in subscription does not render the correct URL

Hi,

I am sure there is a thread about that one but I cannot find it.

I have a server which does not include the correct link in the subscriptions.

I create a subscription and ask it to include a link to the report, but I get the link with the machine name instead of the DNS name.

i.e. I get

The report is accessible at the following address:

http://atlanta/reportserver/...

instead of http://reporting.onsemi.com/reports/....

I tried to edit rswebapplication.config like this:

<ReportServerUrl>reporting.onsemi.com</ReportServerUrl>

and rsreportserver.config like this:

<DefaultHostName>onsemi.com</DefaultHostName>

but still cannot get the correct URL in the subscription.

Any idea?

Thanks,

Philippe

Find rsreportserver.config.

Look for UrlRoot and update that in the config.

Restart Reporting Services.
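To spell out the suggestion above: UrlRoot lives in rsreportserver.config, and subscription links are built from it. Pointing it at the DNS name should fix the links (the exact URL below is only a guess based on the poster's domain):

```xml
<UrlRoot>http://reporting.onsemi.com/reportserver</UrlRoot>
```

Then restart the Reporting Services service so the new value is picked up.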



INCLUDE option in CREATE INDEX

What is the difference in the following CREATE INDEX statements when using
the INCLUDE option?
CREATE INDEX index1 ON table1(col3) INCLUDE(col1, col2)
CREATE INDEX index1 ON table1(col3, col1, col2)
I have read BOL on the INCLUDE option and it says that it "Specifies the
nonkey columns to be added to the leaf level of the nonclustered index."
Isn't that also being done when creating a composite index without the
INCLUDE option?
--
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200606/1|||Consider this:
drop table a
go
create table a(i int, j int)
create unique index a1 on a(i) include(j)
insert into a values(1,2)
go
insert into a values(1,3)
(1 row(s) affected)
Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object 'dbo.a' with unique index
'a1'.
The index guarantees the uniqueness of i but also stores j at its leaf
level.
Why would I need that? For index covering: one and the same index can
guarantee the uniqueness of i and cover a query such as
select i, j from a where i between 1 and 10.|||Hi cbrichards
1) Included columns are in the leaf level ONLY. They do not appear in the
higher levels, as the key columns of an index do.
2) The included columns play no part in the ordering of the leaf-level
rows.
You may have a composite key of lastname, firstname and an included
column of city.
The leaf row for Smith, Jane in Orlando may come before or after the
leaf row for Smith, Jane in Seattle. There is no predicting it or
controlling it.
3) With included columns, you can exceed the key size limit of 900 bytes
total and 16 columns.
This is the biggest benefit. You can get a lot more covering index
situations if you can get around the 900 byte limit.
--
HTH
Kalen Delaney, SQL Server MVP
"cbrichards via SQLMonster.com" <u3288@.uwe> wrote in message
news:622b4dffd8727@.uwe...
> What is the difference in the following CREATE INDEX statements when using
> the INCLUDE option?
> CREATE INDEX index1 ON table1(col3) INCLUDE(col1, col2)
> CREATE INDEX index1 ON table1(col3, col1, col2)
> I have read BOL on the INCLUDE option and it says that it "Specifies the
> nonkey columns to be added to the leaf level of the nonclustered index."
> Isn't that also being done when creating a composite index without the
> INCLUDE option?
> --
> Message posted via SQLMonster.com
> http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200606/1|||In addition to the other replies: because the included columns are not
part of the index branch pages, each branch page can carry more index
keys. This results in a smaller index and potentially a lower index
depth. The lower index depth saves logical reads for every index seek.
Also, it should be a little less expensive to process changes in an
included column compared to indexed columns, because only the leaf
page(s) have to be updated.
So if you need the column but will never be filtering on its value,
it is a smart idea to include it instead of indexing it.
HTH,
Gert-Jan
"cbrichards via SQLMonster.com" wrote:
> What is the difference in the following CREATE INDEX statements when using
> the INCLUDE option?
> CREATE INDEX index1 ON table1(col3) INCLUDE(col1, col2)
> CREATE INDEX index1 ON table1(col3, col1, col2)
> I have read BOL on the INCLUDE option and it says that it "Specifies the
> nonkey columns to be added to the leaf level of the nonclustered index."
> Isn't that also being done when creating a composite index without the
> INCLUDE option?
> --
> Message posted via SQLMonster.com
> http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200606/1|||Gert-Jan Strik wrote:
> So if you need the column, but will never be filtering its value, then
> it is smart idea to include it instead of indexing it.
More to the point: even if you do filter on it, if the first column(s)
are selective enough, including instead of indexing might be more
efficient.
Also consider a unique index on employee_id including email_address.
The query
select employee_id, email_address from ... where email_address like
'%stone'
will use the index all right. Makes sense?|||Correct. The point of the included columns is that no bookmark lookup
in the base table is required to access the column's value.
Side note: often, situations such as employee_id for an email_address
are already covered, because employee_id is often the clustered index
key (the default index type for the primary key), and thus automatically
part of any nonclustered index on email_address.
Gert-Jan
Alexander Kuznetsov wrote:
> Gert-Jan Strik wrote:
> >
> > So if you need the column, but will never be filtering its value, then
> > it is smart idea to include it instead of indexing it.
> More to the point: even if you filter on it, but the first column(s) is
> selective enough, including instead of indexing might be more
> efficient.
> Also consider a unique index on employee_id including email_address.
> The query
> select employee_id, email_address from ... where email_address like
> '%stone'
> will use the index all right. Makes sense?|||I put that in a blog with better explanations:
http://sql-server-tips.blogspot.com/2006/06/yet-another-index-covering-tip.html

Include ID field in GROUP BY statement

I've got a query where I need to return a max value based on a select, but one of the fields I need to return in the results is the record's primary key ID. This messes up the MAX bit and means that all results are returned, not just the max one.

The query I'm using is very long, so I've simplified what I mean with the example below. Say I have a table 'Fruits':

ID FruitName Cost
1 Apple 0.45
2 Apple 0.63
3 Apple 0.52
4 Pear 0.89
5 Pear 0.83

And run the query:

select max(Cost),FruitName From Fruits
group by FruitName

It'll correctly return:

FruitName Cost
Apple 0.63
Pear 0.89

Now I need the ID also returned by my query, so I go:

select max(Cost),FruitName,ID From Fruits
group by FruitName,ID

This doesn't return the above results with the ID appended to them; it instead returns:

ID FruitName Cost
1 Apple 0.45
2 Apple 0.63
3 Apple 0.52
4 Pear 0.89
5 Pear 0.83

As the ID is always distinct, it messes up the grouping. How in this instance would I return the correct result of:

ID FruitName Cost
2 Apple 0.63
4 Pear 0.89

Thanks.

I don't think you can do that with the ID field because there are multiple IDs for each item while using the group by. Can you use a HAVING clause? i.e.

select Cost,FruitName From Fruits
group by FruitName
having max(Cost)

This is a shot in the dark, as I did not actually try your data. Hope it works though :)

|||

Thanks, but I don't think that would work, as the 'HAVING' as I see it is just used to filter the results of the query. You'd use it if, for example, you only wanted to show max fruit costing more than £0.10:

select Cost,FruitName From Fruit
group by FruitName,Cost
having max(Cost) > 0.1

That's how I see it (I could be wrong). But it still doesn't handle the problem of the ID.

|||

Hi,

This is a roundabout way, but it works:

Select FruitName, cost,
       (Select ID from Fruits f Where f.Cost = frt.cost) as ID
from
(
    select FruitName, max(cost) as cost
    From Fruits
    group by FruitName
) frt|||

Hi TheGrox,

The below query may help you.

select id, fruitname, cost
from (select row_number() over (partition by fruitname order by cost desc) as rownum,
             id, fruitname, cost
      from Fruits) F
where rownum = 1

The main trick here is using the ROW_NUMBER() OVER (PARTITION BY...) clause

This is very like getting the top n records from a table for each category. You can follow the article at http://www.kodyaz.com/articles/top-n-random-rows-foreach-category-or-group.aspx for more details and a sample of ROW_NUMBER() OVER (PARTITION BY...)

Eralper

http://www.kodyaz.com

|||

If you are using SQL 2000 you can try this:

Select t1.FruitName, t2.cost, Id
from Fruits t1
join (select FruitName, max(cost) as cost
      From Fruits
      group by FruitName) t2
  on t1.FruitName = t2.fruitname
 and t1.Cost = t2.cost

|||

Here is another one:

select t1.*
from fruits as t1
where t1.ID = (select top 1 t2.ID
               from fruits as t2
               where t2.FruitName = t1.FruitName
               order by t2.cost DESC)
|||

Limno

I like your query better than mine. I am joining on name and cost; if there are multiple records with the same name and cost, the join could return multiple records for the same "Fruit", which might be incorrect.

|||I learned that solution from Umachandar Jayachandran. He is super.

Include File Concept

Hi All
I wanted to check if there is any feature like the INCLUDE file concept in ASP.
I have a set of variables that I need to declare in each of my stored procedures by default. I'd like to maintain this list in a single place so I don't have to worry about changing it for every SP in the database.
Does SQL Server provide any such feature?
If not, is this planned in YUKON?|||You could try creating a template for use in Query Analyzer. You can modify
one of the existing procedures and save it under another name. See BOL: "Using
Templates in Query Analyzer" for more information.
"Prasanna" <pprabhu@.pbs.solutionsiq.com> wrote in message
news:FD16045E-A2F1-4534-88A2-65DD471DA1EE@.microsoft.com...
> Hi All
> I wanted to check if there is any feature like the INCLUDE file concept in
ASP.
> I have set of variables that I need to declare in each of the Stored
Procedures by default, so that I can maintain this list in single place,
dont have to worry about changing the list for every SP in the database.
> Does SQL Server provide any such feature.
> If not, is this planned in YUKON?|||Hello,
Templates are boilerplate files containing SQL scripts that help you create
objects in the database. They are, however, unable to help maintain the set
of variables in a single place.
Currently, SQL Server does not provide the feature you require. As I
understand it, SQL Server Yukon will provide .NET programming features, but
because Yukon is not a released version of SQL Server, we are unable to
guarantee it will have a feature like the INCLUDE file concept in ASP.
Thanks for understanding.
For additional information regarding .NET programming Features, please
refer to the following article:
SQL Server Yukon: .NET Programming Features
http://server1.msn.co.in/sp03/teched/pop5.html
This document contains references to a third party World Wide Web site.
Microsoft is providing this information as a convenience to you. Microsoft
does not control these sites and has not tested any software or information
found on these sites; therefore, Microsoft cannot make any representations
regarding the quality, safety, or suitability of any software or
information found there. There are inherent dangers in the use of any
software found on the Internet, and Microsoft cautions you to make sure
that you completely understand the risk before retrieving any software from
the Internet.
Thanks for using MSDN newsgroup
Regards,
Michael Shao
Microsoft Online Partner Support
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
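As a workaround in the meantime, a common approach (a sketch; the table, setting, and procedure names below are made up for illustration) is to keep the shared values in a settings table and read them at the top of each procedure, so the list is maintained in one place:

```sql
-- One row per shared setting, maintained in a single place.
CREATE TABLE dbo.AppSettings (
    SettingName  varchar(50)  NOT NULL PRIMARY KEY,
    SettingValue varchar(100) NOT NULL
)
GO
CREATE PROCEDURE dbo.SomeProc
AS
BEGIN
    -- Read the shared value instead of redeclaring it in every SP.
    DECLARE @MaxRows int
    SELECT @MaxRows = CAST(SettingValue AS int)
    FROM dbo.AppSettings
    WHERE SettingName = 'MaxRows'

    -- ... procedure body uses @MaxRows ...
END
```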

include declared referential integrity

When I specify 'include declared referential integrity' for an article
definition, does that mean that the primary key and foreign key definitions
on that article are transferred to the subscriber?
If the table I'm replicating is a foreign key table and I specify that option
but I do not have the primary key table on the publisher, will the initial
distribution of the snapshot fail while trying to create this table because it
cannot reference a primary key table that does not exist?
Also, when I add multiple primary key and foreign key tables to a publication
and specify the 'include declared referential integrity' option, is
replication smart enough to create these tables on the subscriber in the
right order?
Right now I already have multiple publications (one each for tables, views, and
indexed views) because replication is not smart enough to know what to create first.
Using trans replication
Thanks
Hassan,
choosing this option will create the PK-FK relationships only when the
related articles are in the same publication and actually have the PK and
the FK explicitly declared on the publisher. The order of creation on the
subscriber takes this into account, although in the case of merge
replication in particular, we tend to use the Not For Replication attribute
anyway.
Rgds,
Paul Ibison SQL Server MVP, www.replicationanswers.com/default.asp
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)

Include data in e-mail

I have about 150 people I would like to e-mail automatically. Each
person would get a unique form letter that includes the username and
password I have stored in a SQL table. Is this possible? Help
appreciated. Thanks.
Frank

*** Sent via Developersdex http://www.developersdex.com ***
Don't just participate in USENET...get rewarded for it!|||Yes. Check out xp_sendmail
in the SQL documentation.

"Frank Py" <fpy@.proactnet.com> wrote in message
news:3fdf82b9$0$196$75868355@.news.frii.net...
> I have about 150 people I would like to send e-mail automatically. Each
> person would get a unique form letter that includes username and
> password I have stored in a SQL table. Is this possible? Help
> appreciated. Thanks.
> Frank
>
> *** Sent via Developersdex http://www.developersdex.com ***
> Don't just participate in USENET...get rewarded for it!
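To sketch the xp_sendmail suggestion (untested; the table and column names are made up, and SQL Mail must already be configured on the server):

```sql
-- Loop over the users table and send each person their own form letter.
DECLARE @recipient varchar(255), @username varchar(50), @password varchar(50), @msg varchar(2000)

DECLARE user_cursor CURSOR FOR
    SELECT Email, Username, Password FROM dbo.Users

OPEN user_cursor
FETCH NEXT FROM user_cursor INTO @recipient, @username, @password
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @msg = 'Dear user,' + CHAR(13)
             + 'Your username is ' + @username
             + ' and your password is ' + @password + '.'
    EXEC master.dbo.xp_sendmail
         @recipients = @recipient,
         @subject    = 'Your account details',
         @message    = @msg
    FETCH NEXT FROM user_cursor INTO @recipient, @username, @password
END
CLOSE user_cursor
DEALLOCATE user_cursor
```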

Include Comments in SQL Script

Hi,

When I generate a SQL script for my database, I wish to include the comments added against each column in the script, but there is no option available where I can specify to include "COMMENTS".

Please suggest.

Regards,

Sudhir Chawla

You can add custom comments by using this:

-- two dashes

The SQL Server script generator has no comments option;

you have to comment the scripts yourself.

|||

Or you can wrap your scripts in block comments

/*
If you have multiple
lines to comment
then you can use block comments
*/

|||

Actually,

When you create a table and add columns, there is a "DESCRIPTION" field where we can specify comments for that column. So how do I include these column-level comments in my script? I can add my own comments using -- or /* adjkasjd */, but I wish to include the column-level comments.

Thanks for your advice.

Regards,

Sudhir

|||

I believe the DESCRIPTION field is actually an extended property. If you are using the Script Wizard (right-click on the database, select Tasks > Generate Scripts) to create your scripts, you can check the "Script Extended Properties" checkbox on the Script Options page to get the description scripted.

If you are using the Script context menu in Object Explorer to create your script, you will have to wait until SP2 to get this option, which will be exposed in a new scripting options dialog (e.g. Tools > Options > Scripting in the main menu bar).

Hope this helps,
Steve
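For reference, the DESCRIPTION field is stored as the MS_Description extended property, which can also be set or scripted by hand (a sketch; the schema, table, and column names are illustrative):

```sql
-- Attach a description to a column as an extended property.
EXEC sp_addextendedproperty
     @name       = N'MS_Description',
     @value      = N'Customer surname',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'Customers',
     @level2type = N'COLUMN', @level2name = N'LastName'
```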

Include column names in resultset - is it possible?

This is probably not a practical thing to do but I'd like to know if it
is possible
to include the column names as part of the resultset in my stored
procedure? The column
names could be the first row or the last row of the resultset. The
number of columns
returned is unknown until runtime.
Thanks!
-Sydney|||At no point down the line is there a client? Then who needs this data,
anyway...? :)
ML
http://milambda.blogspot.com/|||> The number of columns returned is unknown until runtime.
Why? How can that be? Please, explain.
If the structure of a result set depends on one or more conditions, couldn't
you somehow figure out those conditions before executing this elusive query?
If you could, you'd know the structure of the result set in advance.
One other (more logical) method would be to use a standard result set to
cover all options, and then if needed transform the result set after it's
returned from the procedure (or query) to more appropriately fit each
individual situation.
Sounds like sci-fi to me. But I'm willing to help.
ML
http://milambda.blogspot.com/|||ML, thanks for you reply.
See this link for why I don't know the # of columns in advance.
ML wrote:
> Why? How can that be? Please, explain.
> If the structure of a result set depends on one or more conditions couldn't
> you somehow figure out those conditions before executing this elusive query?
> If you could, you'd know the structure of the result set in advance.
> One other (more logical) method would be to use a standard result set to
> cover all options, and then if needed transform the result set after it's
> returned from the procedure (or query) to more appropriately fit each
> individual situation.
> Sounds like sci-fi to me. But I'm willing to help.
>
> ML
> --
> http://milambda.blogspot.com/|||Some people call this problem sci-fi, others call it a 'crosstab'.
And some people here think words are not important:)
You want to generate the columns dynamically. Now that
people know what you want I'm sure you will get a lot
of wonderful solutions. If you don't want to mess around with
them you can check out the RAC utility, which will eliminate the
need to write any code on your part. It will generate your crosstab
magically. You will learn very little but you will keep your job:)
www.rac4sql.net|||Oh, it's a cross-tab query! Well, bring the normalized data to the client and
pivot it there. Or is there no client?
ML
http://milambda.blogspot.com/|||There is no client.Gotcha :)
www.rac4sql.net
"ML" <ML@.discussions.microsoft.com> wrote in message
news:AFFA52FB-C614-471D-91F7-E6D512AD5BC9@.microsoft.com...
> Oh, it's a cross-tab query! Well, bring the normalized data to the client
> and
> pivot it there. Or is there no client?
>
> ML
> --
> http://milambda.blogspot.com/
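For the record, the dynamic crosstab everyone is dancing around can be sketched with dynamic SQL: build the column list from the data at runtime, then execute it. This is one common hand-rolled pattern (the Sales(Product, Quarter, Amount) table is hypothetical), not the RAC utility's internals:

```sql
-- Build a column list from the data, then execute a pivot-style query.
DECLARE @cols nvarchar(4000), @sql nvarchar(4000);

SELECT @cols = ISNULL(@cols + ', ', '')
    + 'SUM(CASE WHEN Quarter = ''' + Quarter + ''' THEN Amount ELSE 0 END) AS ['
    + Quarter + ']'
FROM (SELECT DISTINCT Quarter FROM Sales) q;

SET @sql = 'SELECT Product, ' + @cols + ' FROM Sales GROUP BY Product';
EXEC sp_executesql @sql;  -- columns are unknown until runtime, as in the thread
```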

include column from table in SELECT in an INSERT OUTPUT clause

Hey All –

I am having problems with the OUTPUT clause in an INSERT.

I have something like this:

INSERT INTO dbo.Person
    ( PersonID
    ,Name
    )
OUTPUT p.AuditVersionGUID
    ,inserted.PersonID
    ,inserted.Name
INTO dbo.PersonAudit
    ( AuditVersionGUID
    ,PersonID
    ,Name
    )
SELECT p.PersonID, p.Name
FROM #Person p

AuditVersionGUID is not, and shouldn't be, defined on the Person table, but I do have it defined in my #Person table.

I get this error:

Msg 4104, Level 16, State 1, Procedure spExtractPerson, Line 275

The multi-part identifier "P.AuditVersionGUID" could not be bound.

I can accomplish this when I am using the OUTPUT clauses in DELETE and UPDATE statements but not the INSERT.

Is this possible or am I relegated to using a memory table and appending the GUID in a separate statement after the OUTPUT?

Thank you,

Cameron

Yeah the GUID has already been determined since a few other sprocs need to refer to the same one.

Thank you for confirming what I had thought. It seems kind of strange that it only doesn't work on INSERT.

Cameron

|||

It's not allowed. Only { DELETED | INSERTED | from_table_name } . { * | column_name } is allowed in the OUTPUT clause, and from_table_name is a table included in the FROM clause of a DELETE or UPDATE statement that is used to specify the rows to update or delete.

But the following equivalent statement should work for you.

-- swap PersonAudit and Person
INSERT INTO dbo.PersonAudit
    ( AuditVersionGUID
    ,PersonID
    ,Name
    )
OUTPUT inserted.PersonID
    ,inserted.Name
INTO dbo.Person
    ( PersonID
    ,Name
    )
SELECT p.AuditVersionGUID, p.PersonID, p.Name
FROM #Person p
go

|||Ah, that would work, but Person is in a FK relationship, and the OUTPUT ... INTO target table is not allowed to be on either side of a foreign key relationship.|||

In that case, then, you could always load a variable with the GUID and return it in the OUTPUT clause:

OUTPUT @MyGUID,
       inserted.[PersonID],
       inserted.[name]

Obviously I don't understand your complete scenario so this may or may not be of any help.

Chris

|||It is doing a bulk insert, so I can't use a variable: each AuditVersionGUID is linked to a specific PersonID, which is why I was trying to get it from the tables in the SELECT.|||

In the OUTPUT clause you cannot reference tables other than INSERTED or DELETED when performing an INSERT as you can when performing a DELETE or an UPDATE. Look up the 'OUTPUT Clause' topic in BOL for more info.

Incidentally, in your example code has the value of the GUID already been determined by the time that you use the OUTPUT clause?

If not then you could simply use the following to generate a GUID:

OUTPUT NEWID(),
       inserted.[PersonID],
       inserted.[name]

Chris
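A sketch of the "memory table" fallback Cameron mentioned, for when neither swapping the tables nor a single variable works. It assumes #Person carries one AuditVersionGUID per PersonID; the column names come from the thread, the rest is illustrative:

```sql
-- 1. Insert and capture the generated rows into a table variable.
DECLARE @out TABLE (PersonID int, Name varchar(100));

INSERT INTO dbo.Person (PersonID, Name)
OUTPUT inserted.PersonID, inserted.Name INTO @out
SELECT p.PersonID, p.Name
FROM #Person p;

-- 2. Join back to #Person to append the per-row GUID in a second statement.
INSERT INTO dbo.PersonAudit (AuditVersionGUID, PersonID, Name)
SELECT p.AuditVersionGUID, o.PersonID, o.Name
FROM @out o
JOIN #Person p ON p.PersonID = o.PersonID;
```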

include .mdf, .ndf, and .ldf files in exclusion list

Hi,
In the past I have always included .mdf, .ndf, and .ldf files in the
exclusion list for
both Antivirus and backup agents.
This message is primarily in regards to an open file backup agent.
I now work at a place that has an open file backup agent.
Prior to my arriving, the previous DBA successfully tested a backup to tape
and restore using this agent.
The DR plan was then written to make use of this paradigm.
Fast forward 6 months:
New product in production, many open file errors on .mdf, .ndf, and .ldf
files causing entire backup streams to be delayed and eventually failing.
Immediately resolved when the file extensions are added to the exclusion list.
Does my old school thought process (exclude the files) lack a basis in the
new landscape involving SQL 2K5 and new backup agents?
Can anyone point to an MS document that recommends this as best practise?
I have seen Microsoft KB article 309422 regarding antivirus file exclusions.
Any input appreciated.
Hi Joe
I would exclude the files from any filter agent, as scanning them can lead to
performance problems; this holds regardless of SQL Server
version. You will need to look at the SLA and find out what level of recovery
is required and how that could be met.
John
"Joe" wrote:

> Hi,
> In the past I have always included .mdf, .ndf, and .ldf files in the
> exclusion list for
> both Antivirus and backup agents.
> This message is primarily in regards to an open file backup agent.
> I now work at a place that has an open file back agent.
> Prior to my arriving the previous dba sucessfully tested a backup to tape
> and restore using this agent.
> Then DR plan was written to make use of this paradigm.
> Fast forward 6 months
> New product in production, many open file errors on .mdf, .ndf, and .ldf
> files causing entrire backup streams to be delayed and eventually failing.
> Immediately resolved when file extensions are added to exlusion list.
> Does my old school thought process (exclude the files) lack a basis in the
> new landscape involving SQL 2K5 and new backup agents?
> Can anyone point to a MS document that recommends this as best practise?
> I have seen Microsoft KB article 309422 regarding anit virus files.
> Any input apprciated.


Include "/" in select statement

I wanted to create something like this:
Select (FirstItem + "/" + SecondItem) AS Item

but I get an error. Is there anything wrong with this code?

P.S. I'm using mssql 2000

Hi,

If the FirstItem and SecondItem fields are varchar themselves, just replace the double quotes ("/") with single quotes ('/'),

e.g. Select (FirstItem + '/' + SecondItem) AS Item

|||

Adding to joteke's solution, if the columns are not of the same datatype, you would use CONVERT.

SELECT CONVERT(varchar(50), FirstItem) + '/' + CONVERT(varchar(50), SecondItem) AS Item

Invalid Cursor State

Hi pals,
I am asking you for help - I am using SQL server 2000 sp3, and have this
problem:
I create a new db, let's say 'test_db', then I create 'Table1' with:
ID - Uniqueidentifier, PK
data - char (10), allow nulls
OK, I close it, the table is saved - this works, but the problem comes here:
I open the table via design table, and I want to turn off 'allow nulls',
so I uncheck it, close it, and I get this error:
'Table1' table
- Unable to create index 'PK_Table1'.
ODBC error: [Microsoft][ODBC SQL Server Driver]Invalid cursor state
And these errors I always get when I want to change something in the table.
Hm, not quite true - I can add new columns, but afterwards I can't change them;
for example I can't change char(10) to char(20) and so on.
This is strange - does anyone have an idea what it could be and how to avoid
it?
Thanks a lot
Brano
Some info on this here:
http://www.aspfaq.com/2515
I recommend you make your database changes through code in Query
Analyzer rather than using Enterprise Manager. Regardless of the problem
you are having, T-SQL is more powerful and gives you more control.
--
David Portas
SQL Server MVP
--|||> http://www.aspfaq.com/2515
thanks a lot, the patch http://support.microsoft.com/default.aspx/kb/838166
solved the problem.

Inadvertently deleted detail row...

All,
I am a newbie, and in working on an existing report I inadvertently
deleted the only detail row. I was able to get the report working, but
I had to use a group header as a detail row (fortunately I had a unique
value column), because I could not see how to add a detail row. MS is
very clear - whatever kind of row you are on when you choose to Insert
a Row Above or Below is the kind of row that will be created. Since I
did not have a detail row, I couldn't add one.
So, can you add a detail row when there isn't one? If so, how?
Any ideas, references, resources, war stories or good clean jokes on
the subject would be greatly appreciated.
Thanks,
Kathryn
Click on the table so that the handles appear. Right-click the handle at the
end of the row on the table header row and then choose table details. Then
the row will reappear.
wrote:
> All,
> I am a newbie and in working on an existing report, I inadvertantly
> deleted the only detail row. I was able to get the report working, but
> I had to use a group header as a detail row (fortunately I had a unique
> value column), because I could not see how to add a detail row. MS is
> very clear - whatever kind of row you are on when you choose to Insert
> a Row Above or Below is the kind of row that will be created. Since I
> did not have a detail row, I couldn't add one.
> So, can you add a detail row when there isn't one? If so, how?
> Any ideas, references, resources, war stories or good clean jokes on
> the subject would be greatly appreciated.
> Thanks,
> Kathryn
>|||Kim,
Thanks so much! It's simple once you know it, but I couldn't find it
ANYWHERE online or in BOL.
Kathryn

Inadequate Results When Using Group By On Subqueries With Newid()

/*
RUN THIS QUERY AT LEAST 10 TIMES - YOU WILL BE IN FOR A SURPRISE
DON'T BE AFRAID - 1 SEC FOR EACH RUN
*/
/*
This query should generate 1 million randomly mixed ones and zeroes,
and group their counts.
*/
select X,"Count"=count(*) from
(
select X=case when checksum(newid())>0 then 0 else 1 end
from ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX1
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX2
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX3
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX4
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX5
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX6
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX7
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX8
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX9
) XXX group by X

/*
RESULTS:

Correct - the counts can vary:

 Count  | X
--------|---
 524288 | 1
 524288 | 0

But you can also get this result:

 Count  | X
--------|---
 786432 | 1   <-- 1 !!!
 262144 | 1   <-- 1 !!!

!!!!!!!!!!! GROUP 1 IS BIFURCATED !!!!!!!!!!!!!!!

-- And this query seems to be correct (not so fast, about 2 min to get output)
declare @t table (X int)
insert @t(X)
select X=case when checksum(newid())>0 then 0 else 1 end
from ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX1
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX2
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX3
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX4
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX5
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX6
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX7
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX8
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX9
select X,"Count"=count(*) from @t
group by X

/*
This query is similar, but its principle is different: if an error occurs during the query,
the @t table variable would disappear while the #table would not. However, the SQL Server 2000 implementation prefers physical #tables ;], which is why this query takes about 20 sec. The result is also correct.
*/
create table #t (X int)
insert #t(X)
select X=case when checksum(newid())>0 then 0 else 1 end
from ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX1
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX2
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX3
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX4
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX5
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX6
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX7
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX8
cross join ( select 1 as X union all select 1 union all select 1 union all select 1 ) XXX9
select X,"Count"=count(*) from #t
group by X
drop table #t

/*
The syntax "CHECKSUM(NEWID())" gives a fast pseudo-random int for each usage (unlike RAND(), which is evaluated once per query and seed).
The problem is not bound to CHECKSUM(), but to the "NEWID()" AND "GROUP BY" combination.
Does MS SQL Server 2000's built-in "select-driven" ordering routine mean GROUP BY relies on ordering ...?
*/

/*
Tested on Microsoft SQL Server 2000 SP2 - many different hardware
*/Check out the following article:

article (http://support.microsoft.com/default.aspx?scid=KB;en-us;293219&)|||Originally posted by rnealejr
Check out the following article:

article (http://support.microsoft.com/default.aspx?scid=KB;en-us;293219&)

This patch is unofficial and not available to public. I would cripple my SQL server grudgingly. My query is also not corellated as this Microsoft article says. Did you try my query on server with this patch applied with correct results?|||It is available to the general public - you just have to contact them. I agree though that the remarks by ms in the article are not reassuring but that is your decision. I have had to apply these type of intermediate fixes before and have had no problems (from my experiences with these fixes, you can back them out if necessary - but you can confirm that with ms).

To apply any release to your production system from ms without testing it first would be asking for disaster. If you wanted to pursue this, I am sure that you have sql server running on your desktop that has the same issue - just apply the fix to that and see.

Inactive Transaction/Log will not truncate after replication disabled

Hi, all
We have a clustered SQL Server 2000 (SP3a) in Window 2003 server
environment. I set up a transactional replication from 'DB1' at node A (
serves as 'publisher' ) to 'DB2' (as 'distributor' and 'subscriber') at node
B. Database option for DB1 in Node A is 'full recovery' model with
scheduled transaction backup run.
I set up push replication at 'DB1' with published article 'T', and before
letting the snapshot agent start the initialization, I decided to
disable/remove the whole replication setup via Enterprise Manager. Enterprise
Manager did remove everything, from what I can see. But from then on, I
noticed that the transaction log on 'DB1' keeps growing even after log
file backups; the inactive transactions are queued for future snapshot usage,
I guess. Following some articles I found, I either ran DTS to transfer some
data over to another database, or re-set up the replication and ran 'EXEC
sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0,
@reset = 1'. Both did truncate the log after the action.
DB1/Node1. It is kind of a vicious cycle in the set up now. If I removed the
replication/publisher setup, I can not run that 'sp_repldone' to clear the
log. But once I put back the replication/publisher for that cleanup action
and remove the replication after that, it seems to put a mark in the
database and the transaction log starts to queue and grow in a
no-replication setup. For comparison, I also did some test in a stand-alone,
non-clustered SQL server box and with publisher/distributor/subscriber all
in the same box but with two SQL Server instances to mimic the operation,
the transaction log seems to work fine and not to keep growing after
replication setup was removed.
Does anyone have the same experience or any insight to this issue ? please
help.
Thanks in advance for any of your input.
Eugene
I am a little confused by this statement "I decided to disable/remove all
replications setup via Enterprise Manager" Do you mean you disabled
replication or your dropped all subscriptions? If you dropped anonymous pull
subscriptions, it is possible that the log reader will keep reading the tlog
on DB1, and writing to the distribution database on DB2.
It seems however, from what you describe that you have disabled replication
on DB1, and still your tlog continues to grow. If this is the case, can you
run dbcc opentran in the database you are publishing? Also run sp_repltrans;
sp_repltrans should return nothing.
I would try to disable the publisher (db1) from being able to use the
distribution database on db2, and then enable db1 as a distributor, create a
local publication and a local subscription, and then disable publishing.
This should clear the condition.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on SQL FTS/Indexing Services
http://www.indexserverfaq.com
"Eugene Hwang" <hwang_eugene@.hotmail.com> wrote in message
news:uxkXd9KCFHA.2288@.TK2MSFTNGP14.phx.gbl...
> Hi, all
> We have a clustered SQL Server 2000 (SP3a) in Window 2003 server
> environment. I set up a transactional replication from 'DB1' at node A (
> serves as 'publisher' ) to 'DB2' (as 'distributor' and 'subscriber') at
node
> B. Database option for DB1 in Node A is 'full recovery' model with
> scheduled transaction backup run.
> I set up the push replication at 'DB1' with published article 'T' and not
> yet to let the snapshot agent start to do the reinitiating, I decided to
> disable/remove all replications setup via Enterprise Manager. The
Enterprise
> Manager did remove everything from what I can see. But from that on, I
> noticed that the Transaction Log in the 'DB1' keep growing even after log
> file backup, the inactive transactions are queued for future snapshot
usage,
> I guess. I followed some articles found to run DTS to transfer some data
> over to other database or reset up the replication and run the 'EXEC
> sp_repldone @.xactid = NULL, @.xact_segno = NULL, @.numtrans = 0, @.time =
0,
> @.reset = 1'. They all did truncate the log after the action.
> However, problem is that I do not want the replication any more on the
> DB1/Node1. It is kind of a vicious cycle in the set up now. If I removed
the
> replication/publisher setup, I can not run that 'sp_repldone' to clear the
> log. But once I put back the replication/publisher for that cleanup action
> and remove the replication after that, it seems to put a mark in the
> database and the transaction log starts to queue and grow in a
> no-replication setup. For comparison, I also did some test in a
stand-alone,
> non-clustered SQL server box and with publisher/distributor/subscriber all
> in the same box but with two SQL Server instances to mimic the operation,
> the transaction log seems to work fine and not to keep growing after
> replication setup was removed.
> Does anyone have the same experience or any insight to this issue ?
please
> help.
> Thanks in advance for any of your input.
> Eugene
>
>
>
|||Hilary,
I did what you said to re-establish the replication setup again but have
everything ( publisher/distributor/subscriber) LOCALLY in one node instead
of across two nodes that I did before and then disable publishing. It works
perfectly to clean up all marks, the inactive transaction log is cleared up
after the backup. Everything is working now.
Thanks a lot for your prompt suggestion, and thanks again for your attention
to almost every post in the group.
Eugene
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:e2PBQROCFHA.3840@.tk2msftngp13.phx.gbl...
> I am a little confused by this statement "I decided to disable/remove all
> replications setup via Enterprise Manager" Do you mean you disabled
> replication or your dropped all subscriptions? If you dropped anonymous
pull
> subscriptions, it is possible that the log reader will keep reading the
tlog
> on DB1, and writing to the distribution database on DB2.
> It seems however, from what you describe that you have disabled
replication
> on DB1, and still your tlog continues to grow. If this is the case, can
you
> run dbcc opentran in the database you are publishing? Also run
sp_repltran.
> sp_repltran should return nothing.
> I would try to disable the publisher (db1) from being able to use the
> distribution database on db2, and then enable db1 as a distributor, create
a[vbcol=seagreen]
> local publication and a local subscription, and then disable publishing.
> This should clear the condition.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on SQL FTS/Indexing Services
> http://www.indexserverfaq.com
> "Eugene Hwang" <hwang_eugene@.hotmail.com> wrote in message
> news:uxkXd9KCFHA.2288@.TK2MSFTNGP14.phx.gbl...
>
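The commands mentioned in this thread can be collected into one sketch, to be run in the published database on the publisher. These are the documented replication procedures from the posts above; run sp_repldone with these arguments only when you intend to discard all pending replicated transactions:

```sql
-- Check whether replication (or an open transaction) is pinning the log
DBCC OPENTRAN;          -- run in the published database
EXEC sp_repltrans;      -- should return nothing once replication is cleared

-- Mark all pending replicated transactions as distributed so the log can truncate
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
                 @numtrans = 0, @time = 0, @reset = 1;
```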

Inactive record

Hi, just wondering if there is a way to have one record in a table be read
only, where an application cannot remove it, and have all other records act
normally? I am currently handling this in .NET code but would prefer to
have it built into the table.
thanks.
--
Paul G
Software engineer.
No there is now row level security based on SQL Server 2000.
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
--
"Paul" <Paul@.discussions.microsoft.com> schrieb im Newsbeitrag
news:BD1A992D-4723-48EA-82C1-7CF94A102258@.microsoft.com...
> Hi just wondering if there is a way to have one record in a table be read
> only, were an application can not remove it and have all other records act
> normally? I am currently handling this in .net code but would prefer to
> have it built into the table.
> thanks.
> --
> Paul G
> Software engineer.|||Sorry, I mean: No, there is no row-level security in SQL Server 2000.
"Jens Smeyer" <Jens@.Remove_this_For_Contacting.sqlserver2005.de> schrieb
im Newsbeitrag news:eNTxiwzSFHA.612@.TK2MSFTNGP12.phx.gbl...
> No there is now row level security based on SQL Server 2000.
> HTH, Jens Suessmeyer.
>
> --
> http://www.sqlserver2005.de
> --
> "Paul" <Paul@.discussions.microsoft.com> schrieb im Newsbeitrag
> news:BD1A992D-4723-48EA-82C1-7CF94A102258@.microsoft.com...
>|||ok thanks for the information.
"Jens Sü?meyer" wrote:

> No there is now row level security based on SQL Server 2000.
> HTH, Jens Suessmeyer.
>
> --
> http://www.sqlserver2005.de
> --
> "Paul" <Paul@.discussions.microsoft.com> schrieb im Newsbeitrag
> news:BD1A992D-4723-48EA-82C1-7CF94A102258@.microsoft.com...
>
>|||On Wed, 27 Apr 2005 07:54:06 -0700, Paul wrote:

>Hi just wondering if there is a way to have one record in a table be read
>only, were an application can not remove it and have all other records act
>normally? I am currently handling this in .net code but would prefer to
>have it built into the table.
>thanks.
Hi Paul,
You can use a trigger:
CREATE TRIGGER DontTouchThisRow
ON MyTable AFTER UPDATE, DELETE
AS
IF EXISTS (SELECT *
FROM deleted
WHERE KeyCol = 1) -- Column to be protected
BEGIN
RAISERROR ('Row 1 may not be changed or removed', 16, 1)
ROLLBACK TRANSACTION
END
go
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
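A quick sketch of exercising Hugo's trigger (the table definition and key values are illustrative; the trigger protects the row whose KeyCol is 1):

```sql
-- Assume: CREATE TABLE MyTable (KeyCol int PRIMARY KEY, Data char(10) NULL)
-- with the DontTouchThisRow trigger from above installed on it.

DELETE MyTable WHERE KeyCol = 2;   -- allowed: other rows act normally
DELETE MyTable WHERE KeyCol = 1;   -- blocked: RAISERROR fires and the
                                   -- transaction is rolled back
UPDATE MyTable SET Data = 'x' WHERE KeyCol = 1;  -- also blocked
```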

Inactive Publisher???

I set up a Tranactional replication configuration and created a publication
in SQL2005. When I set up the subscriber the wizard stated all succeeded.
However, when I went to the Replication Monitor there was a error for the
publisher. I went to the history for the agent and it stated the following:
2007-02-09 17:56:51.44 Connecting to Distributor 'U1ST081SORTS2'
2007-02-09 17:56:51.57 The replication agent had encountered an exception.
2007-02-09 17:56:51.57 Source: Replication
2007-02-09 17:56:51.57 Exception Type:
Microsoft.SqlServer.Replication.ReplicationAgentException
2007-02-09 17:56:51.57 Exception Message: The snapshot could not be
generated because the publisher is inactive.
2007-02-09 17:56:51.57 Message Code: 54057
Now, this is not only new to me but also to BOL because I couldn't find
anything about inactive "publishers", only subscribers. Also the Message
code didn't provide any answers. I've found places to change subscriber
inactivity timeouts but nothing for publishers.
Any ideas? I'm used to setting up replication in SQL2K and haven't had this
many problems before. Any significant changes in how to set up replication
with 2005?
Roger.
How did you create your publication?
By chance, did you do it like this?
sp_addpublication 'mypublication'
You need to specify the @status = 'active' parameter.
Do an sp_helppublication and note the status field: is it 0 or 1? 1 means
active.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Roger Denison" <RogerDenison@.discussions.microsoft.com> wrote in message
news:0EF738C9-9EE5-43C9-B19C-117280FAA1AB@.microsoft.com...
>I set up a Tranactional replication configuration and created a publication
> in SQL2005. When I set up the subscriber the wizard stated all succeeded.
> However, when I went to the Replication Monitor there was a error for the
> publisher. I went to the history for the agent and it stated the
> following:
> 2007-02-09 17:56:51.44 Connecting to Distributor 'U1ST081SORTS2'
> 2007-02-09 17:56:51.57 The replication agent had encountered an exception.
> 2007-02-09 17:56:51.57 Source: Replication
> 2007-02-09 17:56:51.57 Exception Type:
> Microsoft.SqlServer.Replication.ReplicationAgentEx ception
> 2007-02-09 17:56:51.57 Exception Message: The snapshot could not be
> generated because the publisher is inactive.
> 2007-02-09 17:56:51.57 Message Code: 54057
> Now, this is not only new to me but also to BOL because I couldn't find
> anything about inactive "publishers", only subscribers. Also the Message
> code didn't provide any answers. I've found places to change subscriber
> inactivity timeouts but nothing for publishers.
> Any ideas? I'm used to setting up replication in SQL2K and haven't had
> this
> many problems before. Any significant changes in how to set up
> replication
> with 2005?
> --
> Roger.
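Hilary's check can be put into concrete form like this (a sketch; 'mypublication' is a placeholder name, and @status can only be supplied when the publication is created):

```sql
-- Run in the published database: the status column shows 0 = inactive, 1 = active
EXEC sp_helppublication @publication = N'mypublication';

-- If it was created inactive, drop and recreate it specifying the status, e.g.:
-- EXEC sp_addpublication @publication = N'mypublication', @status = N'active';
```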
|||Hi Roger,
You can set the publisher to active by running sp_changedistpublisher
@publisher = <your publisher name>, @property = 'active', @value = 'true' at
the distributor, although this happens often enough
(http://groups.google.com/group/microsoft.public.sqlserver.replication/browse_frm/thread/cb0ab8eb51efbdc0/20d2368928450759)
that I am a bit worried about what kind of subtle problems we missed during
replication setup through SSMS. Are you having a SQL2000 publisher by any
chance?
-Raymond
"Roger Denison" <RogerDenison@.discussions.microsoft.com> wrote in message
news:0EF738C9-9EE5-43C9-B19C-117280FAA1AB@.microsoft.com...
>I set up a Tranactional replication configuration and created a publication
> in SQL2005. When I set up the subscriber the wizard stated all succeeded.
> However, when I went to the Replication Monitor there was a error for the
> publisher. I went to the history for the agent and it stated the
> following:
> 2007-02-09 17:56:51.44 Connecting to Distributor 'U1ST081SORTS2'
> 2007-02-09 17:56:51.57 The replication agent had encountered an exception.
> 2007-02-09 17:56:51.57 Source: Replication
> 2007-02-09 17:56:51.57 Exception Type:
> Microsoft.SqlServer.Replication.ReplicationAgentException
> 2007-02-09 17:56:51.57 Exception Message: The snapshot could not be
> generated because the publisher is inactive.
> 2007-02-09 17:56:51.57 Message Code: 54057
> Now, this is not only new to me but also to BOL because I couldn't find
> anything about inactive "publishers", only subscribers. Also the Message
> code didn't provide any answers. I've found places to change subscriber
> inactivity timeouts but nothing for publishers.
> Any ideas? I'm used to setting up replication in SQL2K and haven't had
> this
> many problems before. Any significant changes in how to set up
> replication
> with 2005?
> --
> Roger.
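Raymond's fix above, written out as a runnable sketch (run at the distributor; 'MyPublisher' is a placeholder for the publisher's server name):

```sql
-- Run this at the distributor, not the publisher.
-- 'MyPublisher' is a placeholder; use the publisher name as the
-- distributor knows it.
EXEC sp_changedistpublisher
    @publisher = 'MyPublisher',
    @property  = 'active',
    @value     = 'true';
```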
|||Raymond/Hillary,
I used the wizard that came with SQL2005 to set up the publisher and
subscription. I used the sp that Hillary gave me determine the status.
Use Distribution
Execute sp_helppublication
Go
But all I saw was a successful completion. No results, unlike 2000, which
provided results. I'm still getting used to how 2005 works.
Likewise, I ran sp_changedistpublisher as Raymond specified but it couldn't
find my publisher 'Autosort081 XRep'. It said to make sure it was registered
in sysservers in the distributor. I can't find that table (or anything
resembling) and even if I did I can't open it in Dist or Master. Just
another thing I don't like about 2005.
I do have publishers in 2000 but they only publish to other 2000 servers,
not 2005.
Roger.
"Raymond Mak [MSFT]" wrote:

> Hi Roger,
> You can set the publisher to active by running sp_changedistpublisher
> @publisher = <your publisher name>, @property = 'active', @value = 'true' at
> the distributor, although this happens often enough
> (http://groups.google.com/group/microsoft.public.sqlserver.replication/browse_frm/thread/cb0ab8eb51efbdc0/20d2368928450759?lnk=gst&rnum=41#20d2368928450759)
> that I am a bit worried about what kind of subtle problems we missed during
> replication setup through SSMS. Are you having a SQL2000 publisher by any
> chance?
> -Raymond
> "Roger Denison" <RogerDenison@.discussions.microsoft.com> wrote in message
> news:0EF738C9-9EE5-43C9-B19C-117280FAA1AB@.microsoft.com...
>
>
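One thing worth noting here: sp_helppublication is documented to run at the Publisher, in the publication database, which may be why running it in the distribution database returned nothing. A sketch (the database name is a placeholder):

```sql
-- Run at the publisher, in the published database (placeholder name).
USE MyPublishedDb;
EXEC sp_helppublication;  -- with no arguments, lists every publication here
```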
|||Roger, looks like your replication setup may be in a pretty bad state
(specifically the distributor doesn't seem to know anything about your
publisher). You may need to tear down all replication meta-data at your
publisher (likely by using @ignore_distributor = 1 in various replication sp
calls). But before doing that, can you tell me how you upgraded the
distributor in the first place? You may also want to check the sys.servers
table in the master database and the MSdistpublishers table in msdb on your
distributor and see if there are any signs that the distributor actually
knows about your publisher at this point.
-Raymond
"Roger Denison" <RogerDenison@.discussions.microsoft.com> wrote in message
news:FF68B0C8-7B2B-4E70-BAD0-197FC3CDC91C@.microsoft.com...
> Raymond/Hillary,
> I used the wizard that came with SQL2005 to set up the publisher and
> subscription. I used the sp that Hillary gave me determine the status.
> Use Distribution
> Execute sp_helppublication
> Go
> But all I saw was a successful completion. No results. Unlike 2000,
> which
> provided results, I'm still getting used to how 2005 works.
> Likewise, I ran sp_changedistpublisher as Raymond specified but it
> couldn't
> find my publisher 'Autosort081 XRep'. It said to make sure it was
> registered
> in sysservers in the distributor. I can't find that table (or anything
> resembling) and even if I did I can't open it in Dist or Master. Just
> another thing I don't like about 2005.
>
> I do have publishers in 2000 but they only publish to other 2000 servers,
> not 2005.
> --
> Roger.
>
> "Raymond Mak [MSFT]" wrote:
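Raymond's two metadata checks, sketched as queries (using the publisher name from this thread; adjust as needed):

```sql
-- At the distributor: is the publisher registered as a server at all?
SELECT name, is_linked
FROM master.sys.servers
WHERE name = 'Autosort081 XRep';

-- And does the distributor's replication metadata know about it?
SELECT name, active
FROM msdb.dbo.MSdistpublishers
WHERE name = 'Autosort081 XRep';
```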
|||Thanks for your input, Raymond. You think things are messed up, I have two
publications that I can't delete because they "don't exist". And I can't
drop a couple of tables because they are used for replication by the two
publishers that don't exist. But that's another issue. This has been a lot
of fun trying to work in 2005.
For starters, I haven't upgraded anything from 2000 to 2005. With my latest
project, my IT guy purchased 2005 instead of 2000. I was able to import data
from one of my other servers so I wasn't starting from scratch. But there
was no replication associated with the tables at that point.
I then imported data from the publisher to the subscriber so that the schema
would already be present on the subscriber.
So I set up my subscriber as the distributor using the wizard. I then
created the publication on the publisher server. As you probably guessed
there are 2 servers in this scenario. From the publication, I launch the New
Subscriber... wizard to create the subscription. I request the publication
to start the snapshot and start the log reader (this is a trans rep). The
wizard says that everything completed successfully but RepMon shows errors.
Now, you ask me to look in some tables in Master or Distribution but I
cannot look at the data. Even though I am logged on as SA I cannot open any
of those ("open table" is grayed out). How do I go about getting access to
the system tables?
Roger.
"Raymond Mak [MSFT]" wrote:

> Roger, looks like your replication setup may be in a pretty bad state
> (specifically the distributor doesn't seem to know anything about your
> publisher). You may need to tear down all replication meta-data at your
> publisher (likely by using @ignore_distributor = 1 in various replication sp
> calls). But before doing that, can you tell me how you upgraded the
> distributor in the first place? You may also want to check the sys.servers
> table in the master database and the MSdistpublishers table in msdb on your
> distributor and see if there are any signs that the distributor actually
> knows about your publisher at this point.
> -Raymond
> "Roger Denison" <RogerDenison@.discussions.microsoft.com> wrote in message
> news:FF68B0C8-7B2B-4E70-BAD0-197FC3CDC91C@.microsoft.com...
>
>

Inactive Link and Calendar Controls

After installing SQL Server 2005 Reporting Services, report links to
sub-reports are not working and are producing an error. Also, when selecting a
day from the calendar (parameterized), an error occurs as well.
Line: 129
Char: 5
Error: 'event' is null or not an object
Code: 0
URL:
http://reportserv/reports/pages/report.aspx?itempath=%2fDesigners_Council%2fReports%2fForm_Letter
Have you looked at the XML code behind the report?
Had a similar issue when migrating reports across. New reports were
fine. Found editing the XML was quick and fixed the issue.
Tom Bizannes
Microsoft Certified Professional
http://www.smartbiz.com.au
Sydney, Australia
Terry wrote:
> After installing SQL Server 2005 Reporting Services, report links to
> sub-reports are not working and are producing an error. Also, when selecting a
> day from the calendar (parameterized), an error occurs as well.
> Line: 129
> Char: 5
> Error: 'event' is null or not an object
> Code: 0
> URL:
> http://reportserv/reports/pages/report.aspx?itempath=%2fDesigners_Council%2fReports%2fForm_Letter|||Thank you for your suggestion.
However, the following solution resolved the linking challenges.
SOLUTION:
Turned off ScriptScan on Anti-Virus software
Moved the following files between ASPNET_CLIENT folders based on ASP.NET
version being used
File -> WebUIValidation.js (cannot have the file available in both
ASPNET_CLIENT folders; it causes conflicts)
"SmartbizAustralia" wrote:
> Have you looked at the XML code behind the report?
> Had a similar issue when migrating reports across. New reports were
> fine. Found editing the xml was quick and fixed the issue.
> Tom Bizannes
> Microsoft Certified Professional
> http://www.smartbiz.com.au
> Sydney, Australia
> Terry wrote:
> > After installing SQL Server 2005 Reporting Services, report links to
> > sub-reports are not working and are producing an error. Also, when selecting a
> > day from the calendar (parameterized), an error occurs as well.
> >
> > Line: 129
> > Char: 5
> > Error: 'event' is null or not an object
> > Code: 0
> > URL:
> > http://reportserv/reports/pages/report.aspx?itempath=%2fDesigners_Council%2fReports%2fForm_Letter
>

Inaccurate HTML Rendering

(RS 2000)
I have 6 rows of text boxes in my body header running the width of the
report. All look fine in the report designer and when rendered to Adobe.
When I output to HTML, there are an extra 2 blank rows between the 5th and 6th
text boxes. All the text box properties are exactly the same; I have tried
deleting/recreating, shifting around; nothing seems to help.
Thanks for any suggestions
For anyone else that has the same problem I ran into where the HTML output
looks different than the designer or other output: I found if I slightly
reduced the vertical height of all the text boxes it fixed the problem.
"Mike Harbinger" <MikeH@.Cybervillage.net> wrote in message
news:%23zD8P27TGHA.5908@.TK2MSFTNGP14.phx.gbl...
> (RS 2000)
> I have 6 rows of text boxes in my body header running the width of the
> report. All look fine in the report designer and when rendered to Adobe.
> When I output to HTML, there are an extra 2 blank rows between the 5 and 6
> TB. All the TB properties are exactly the same; I have tried
> deleting/recreating, shifting around, nothing seems to help.
> Thanks for any suggestions
>

In(@variable) clause and TABLE datatype variable

Hi this question has been asked several times and some solution has been
provided already. But the one I am facing is with a twist. I need to use the
IN() clause with a variable as its parameter. The variable is a list of comma
separated character values all enclosed in pairs of single quotes. I could
have solved this problem by enclosing the final query in a single quote and
running Exec command on it (with the Variable list outside the quotes) but I
also need to use a Table data type variable which raises error when EXEC
command is run.
Following is an example that may explain it well.
I have oversimplified this example, and it does things that we would not do
in a normal situation.
use pubs;
-- declare and set Table variable
declare @TableVariable TABLE ( col char(4) );
INSERT @TableVariable
Select pub_id FROM publishers
;
--declare and set CSV single quoted characters list
declare @ListVariable varchar(100);
set @ListVariable = ' ''0736'', ''0877'' '; --these pub_ids are in the publishers table, promise
--the query where the Table variable is used as well as the IN() clause is used
Select TableVariable.col
From @TableVariable as TableVariable
Where TableVariable.col IN (@ListVariable)
-- returns 0 rows
--if we use Exec by replacing the last code section above with the following
declare @command varchar(2000)
set @command='
Select TableVariable.col
From @TableVariable as TableVariable
Where TableVariable.col IN ('+@ListVariable+')
'
exec (@command)
--Then we get the error message:
-- Must declare the variable '@TableVariable'.
http://www.sommarskog.se/arrays-in-sql.html
David Portas
SQL Server MVP
--
"Aamir Ghanchi" <AamirGhanchi@.discussions.microsoft.com> wrote in message
news:D82370C5-6E5E-4178-98B8-0017FE88E809@.microsoft.com...
> Hi this question has been asked several times and some solution has been
> provided already. But the one I am facing is with a twist. I need to use
> the
> IN() clause with a variable as its parameter. The variable is a list of
> comma
> separated character values all enclosed in pairs of single quotes. I could
> have solved this problem by enclosing the final query in a single quote
> and
> running Exec command on it (with the Variable list outside the quotes) but
> I
> also need to use a Table data type variable which raises error when EXEC
> command is run.
> Following is the example that may explain well.
> I have oversimplified this example and it does things that we would not do
> in normal situation
> use pubs;
> -- declare and set Table variable
> declare @TableVariable TABLE ( col char(4) );
> INSERT @TableVariable
> Select pub_id FROM publishers
> ;
> --declare and set CSV single quoted characters list
> declare @ListVariable varchar(100);
> set @ListVariable = ' ''0736'', ''0877'' '; --these pub_ids are in the publishers table, promise
> --the query where the Table variable is used as well as the IN() clause is used
> Select TableVariable.col
> From @TableVariable as TableVariable
> Where TableVariable.col IN (@ListVariable)
> -- returns 0 rows
> --if we use Exec by replacing the last code section above with the following
> declare @command varchar(2000)
> set @command='
> Select TableVariable.col
> From @TableVariable as TableVariable
> Where TableVariable.col IN ('+@ListVariable+')
> '
> exec (@command)
> --Then we get the error message:
> -- Must declare the variable '@TableVariable'.
>
>

IN(@variable) clause and Table Data Type variable

Hi this question has been asked several times and some solution has been
provided already. But the one I am facing is with a twist. I need to use the
IN() clause with a variable as its parameter. The variable is a list of comma
separated character values all enclosed in pairs of single quotes. I could
have solved this problem by enclosing the final query in a single quote and
running Exec command on it (with the Variable list outside the quotes) but I
also need to use a Table data type variable which raises error when EXEC
command is run.
Following is an example that may explain it well.
I have oversimplified this example, and it does things that we would not do
in a normal situation.
use pubs;
-- declare and set Table variable
declare @TableVariable TABLE ( col char(4) );
INSERT @TableVariable
Select pub_id FROM publishers
;
--declare and set CSV single quoted characters list
declare @ListVariable varchar(100);
set @ListVariable = ' ''0736'', ''0877'' '; --these pub_ids are in the publishers table, promise
--the query where the Table variable is used as well as the IN() clause is used
Select TableVariable.col
From @TableVariable as TableVariable
Where TableVariable.col IN (@ListVariable)
-- returns 0 rows
--if we use Exec by replacing the last code section above with the following
declare @command varchar(2000)
set @command='
Select TableVariable.col
From @TableVariable as TableVariable
Where TableVariable.col IN ('+@ListVariable+')
'
exec (@command)
--Then we get the error message:
-- Must declare the variable '@TableVariable'.
I'd suggest using something like this: in (select id from tablename)
In your dynamic SQL example, don't use the variable.
Set @command = ' Select TableVariable.col
From @TableVariable as TableVariable
Where TableVariable.col IN ('
loop on list
select @command = @command + each number
End
select @command = @command + ')'
exec (@command)|||Use a temp table instead.
David Gugick
Imceda Software
www.imceda.com|||http://www.sommarskog.se/arrays-in-sql.html
David Portas
SQL Server MVP
--
"Aamir Ghanchi" <AamirGhanchi@.discussions.microsoft.com> wrote in message
news:EAE233F0-A1C4-4CEF-9F72-8EB4EB4884E2@.microsoft.com...
> Hi this question has been asked several times and some solution has been
> provided already. But the one I am facing is with a twist. I need to use
> the
> IN() clause with a variable as its parameter. The variable is a list of
> comma
> separated character values all enclosed in pairs of single quotes. I could
> have solved this problem by enclosing the final query in a single quote
> and
> running Exec command on it (with the Variable list outside the quotes) but
> I
> also need to use a Table data type variable which raises error when EXEC
> command is run.
> Following is the example that may explain well.
> I have oversimplified this example and it does things that we would not do
> in normal situation
> use pubs;
> -- declare and set Table variable
> declare @TableVariable TABLE ( col char(4) );
> INSERT @TableVariable
> Select pub_id FROM publishers
> ;
> --declare and set CSV single quoted characters list
> declare @ListVariable varchar(100);
> set @ListVariable = ' ''0736'', ''0877'' '; --these pub_ids are in the publishers table, promise
> --the query where the Table variable is used as well as the IN() clause is used
> Select TableVariable.col
> From @TableVariable as TableVariable
> Where TableVariable.col IN (@ListVariable)
> -- returns 0 rows
> --if we use Exec by replacing the last code section above with the following
> declare @command varchar(2000)
> set @command='
> Select TableVariable.col
> From @TableVariable as TableVariable
> Where TableVariable.col IN ('+@ListVariable+')
> '
> exec (@command)
> --Then we get the error message:
> -- Must declare the variable '@TableVariable'.
>
>|||On Mon, 7 Feb 2005 14:31:03 -0800, Aamir Ghanchi wrote:
(snip)
I have already answered this question in .server, even before you posted
it (twice!) to this group. Please don't multi-post.
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||Good thought, and this is what I did initially. But my IN() clause is
enclosed within the aggregate function SUM, and a subquery is not allowed in it.
Thanks though.
"Paul Moore" wrote:

> I'd suggest using something like this: in (select id from tablename)
>
> in your dynamic SQL example don't use the variable.
> Set @command = ' Select TableVariable.col
> Where TableVariable.col IN ('
> loop on list
> select @command = @command + each number
> End
> select @command = @command + ')'
> exec (@command)
Slow though, but I guess that's the only viable option I am left with.
Thank you.
"David Gugick" wrote:

> Use a temp table instead.
> --
> David Gugick
> Imceda Software
> www.imceda.com
I understand. I just started using the MSDN website for posting messages since
the new Google Groups interface eats up all the indentation on code snippets.
Bad thing: I can't crosspost through the MSDN site :(
thanks.
"Hugo Kornelis" wrote:

> On Mon, 7 Feb 2005 14:31:03 -0800, Aamir Ghanchi wrote:
> (snip)
> I have already answered this question in .server, even before you posted
> it (twice!) to this group. Please don't multi-post.
> Best, Hugo
> --
> (Remove _NO_ and _SPAM_ to get my e-mail address)
>|||>> but my IN() clause is enclosed within the aggregate function SUM
and subquery is not allowed in it. <<
What did you think that the arithmetic sum of a logical expression
would be anyway? Think about it for two seconds.
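A sketch of the temp-table route David Gugick suggests above: unlike a table variable, a local temp table created in the outer batch is visible inside EXEC(), so the dynamic SQL can see it (table and variable names here are illustrative):

```sql
use pubs;

-- Same data as the original example, but in a temp table.
create table #PubIds ( col char(4) );
insert #PubIds select pub_id from publishers;

declare @ListVariable varchar(100);
set @ListVariable = ' ''0736'', ''0877'' ';

declare @command varchar(2000);
set @command = '
select p.col
from #PubIds as p
where p.col in (' + @ListVariable + ')';

exec (@command);  -- the temp table is still in scope here, so this runs

drop table #PubIds;
```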

IN(@ListVariable) and TABLE Data Type

Hi this question has been asked several times and some solution has been
provided already. But the one I am facing is with a twist. I need to use the
IN() clause with a variable as its parameter. The variable is a list of comma
separated character values all enclosed in pairs of single quotes. I could
have solved this problem by enclosing the final query in a single quote and
running Exec command on it (with the Variable list outside the quotes) but I
also need to use a Table data type variable which raises error when EXEC
command is run.
Following is an example that may explain it well.
I have oversimplified this example, and it does things that we would not do
in a normal situation.
use pubs;
-- declare and set Table variable
declare @TableVariable TABLE ( col char(4) );
INSERT @TableVariable
Select pub_id FROM publishers
;
--declare and set CSV single quoted characters list
declare @ListVariable varchar(100);
set @ListVariable = ' ''0736'', ''0877'' '; --these pub_ids are in the publishers table, promise
--the query where the Table variable is used as well as the IN() clause is used
Select TableVariable.col
From @TableVariable as TableVariable
Where TableVariable.col IN (@ListVariable)
-- returns 0 rows
--if we use Exec by replacing the last code section above with the following
declare @command varchar(2000)
set @command='
Select TableVariable.col
From @TableVariable as TableVariable
Where TableVariable.col IN ('+@ListVariable+')
'
exec (@command)
--Then we get the error message:
-- Must declare the variable '@TableVariable'.
On Mon, 7 Feb 2005 13:27:06 -0800, "aamirghanchi@.yahoo.com"
<aamirghanchi@.yahoo.com@.discussions.microsoft.com> wrote:

> I need to use the
>IN() clause with a variable as its parameter. The variable is a list of comma
>separated character values all enclosed in pairs of single quotes. I could
>have solved this problem by enclosing the final query in a single quote and
>running Exec command on it (with the Variable list outside the quotes) but I
>also need to use a Table data type variable which raises error when EXEC
>command is run.
Hi aamirghanchi,
http://www.sommarskog.se/arrays-in-sql.html
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)
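The list-to-table idea behind the linked article, sketched for this example: split the CSV into rows once, then use IN against that result (or a join), so no dynamic SQL is needed. This sketch assumes a plain unquoted list; the quoting in the original variable would have to be stripped first:

```sql
use pubs;

declare @TableVariable TABLE ( col char(4) );
insert @TableVariable select pub_id from publishers;

-- Plain CSV, no embedded quotes (an assumption for this sketch).
declare @ListVariable varchar(100);
set @ListVariable = '0736,0877';

declare @Items table ( col char(4) );
declare @pos int;

-- Peel items off the front of the list, one comma at a time.
while len(@ListVariable) > 0
begin
    set @pos = charindex(',', @ListVariable);
    if @pos = 0
    begin
        insert @Items values (@ListVariable);
        set @ListVariable = '';
    end
    else
    begin
        insert @Items values (left(@ListVariable, @pos - 1));
        set @ListVariable = substring(@ListVariable, @pos + 1, len(@ListVariable));
    end
end

select TableVariable.col
from @TableVariable as TableVariable
where TableVariable.col in (select col from @Items);  -- the two matching rows
```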
