Showing posts with label column. Show all posts

Friday, March 30, 2012

increase size of varchar column.. table being replicated..

I have a table that is being replicated. I would like to
increase the size of a varchar column in that table. Is there a way to do
this without dropping the subscription?
Thanks,
niv
It can be done indirectly but it's not nice! You could add a new column with
the new datatype (sp_repladdcolumn), do an update on the table to populate
the column, then drop the column (sp_repldropcolumn). Do this again to
create the column having the same original name.
Alternatively, as you say, you can drop the publication then recreate from
scratch.
We're hoping that such things will be simpler in SQL Server 2005.
Regards,
Paul Ibison
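A sketch of that workaround, assuming a hypothetical published table dbo.Orders with a varchar(20) column Notes that should become varchar(100) (all names here are illustrative, not from the thread):

```sql
-- 1. Add a new column with the wider datatype to the published table
EXEC sp_repladdcolumn @source_object = 'Orders',
     @column = 'Notes_new', @typetext = 'varchar(100) NULL';

-- 2. Copy the data across
UPDATE dbo.Orders SET Notes_new = Notes;

-- 3. Drop the original column from the publication
EXEC sp_repldropcolumn @source_object = 'Orders', @column = 'Notes';

-- Repeat steps 1-3 to recreate a column with the original name:
EXEC sp_repladdcolumn @source_object = 'Orders',
     @column = 'Notes', @typetext = 'varchar(100) NULL';
UPDATE dbo.Orders SET Notes = Notes_new;
EXEC sp_repldropcolumn @source_object = 'Orders', @column = 'Notes_new';
```

Each step propagates to subscribers as a schema change, so the data is copied twice; on large tables that can be slow.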

Wednesday, March 28, 2012

Increase columns width of merge replicated table

We are using merge replication and it is working fine. We would like to change a numeric column's precision from (7,2) to (10,5). How can I do it?
Any suggestion would be highly appreciated. Thanks.
I have only been able to accomplish this by copying out the data and the rowguid for each record, dropping the old column with sp_repldropcolumn, adding it back with sp_repladdcolumn, and copying the data back in. (I might not have the procedure names exactly right.)

I remember reading about a built-in sproc that you could execute to run certain commands on a database or table that was in replication, but I can't find its name, and I don't know its limitations.

Increase column length in all tables with column

Hi,
Is there a way to increase the length of the same-named varchar column in
all tables at once in SQL 2000?
Thanks.
Alan
Alan Z. Scharf wrote:
> Hi,
> Is there a way to increase the length of the same-named varchar column in
> all tables at once in SQL 2000?
If you have used a SQL domain and a CASE tool like Power Designer, it
will be child's play!
Otherwise you will need to write a long script with many steps,
like this one:
CREATE TABLE T_IMPORT
(IMP_ID INTEGER,
IMP_NOM VARCHAR(16),
IMP_DATE CHAR(6))
INSERT INTO T_IMPORT VALUES (254, 'Dupont', '251159')
INSERT INTO T_IMPORT VALUES (321, 'Durand', '130278')
INSERT INTO T_IMPORT VALUES (187, 'Dubois', '110401')
-- changing IMP_DATE CHAR(6) to CHAR(8)
BEGIN TRAN
ALTER TABLE T_IMPORT
ADD TMP_IMP_DATE CHAR(6)
UPDATE T_IMPORT
SET TMP_IMP_DATE = IMP_DATE
ALTER TABLE T_IMPORT
DROP COLUMN IMP_DATE
ALTER TABLE T_IMPORT
ADD IMP_DATE CHAR(8)
UPDATE T_IMPORT
SET IMP_DATE = TMP_IMP_DATE
ALTER TABLE T_IMPORT
DROP COLUMN TMP_IMP_DATE
COMMIT TRAN
Note that this works only if there are no constraints on the column.
A +

> Thanks.
> Alan
>
Frédéric BROUARD, SQL Server MVP, expert in databases and the SQL language
The site about the SQL language and RDBMSs: http://sqlpro.developpez.com
Audit, consulting, expertise, training, modeling, tuning, optimization
********************* http://www.datasapiens.com ***********************
|||Thanks for your reply.
I guess it may be more trouble than it's worth.
Alan
"SQLpro [MVP]" <brouardf@.club-internet.fr> wrote in message
news:ePiKc2pTGHA.5332@.tk2msftngp13.phx.gbl...[vbcol=seagreen]
> Alan Z. Scharf a crit :
in
> If you have used a SQL DOMAIN and a "use case" like Power Designer it
> will be a children job !
> Otherwise you will need to write a long script with many steps...
> like this one :
> CREATE TABLE T_IMPORT
> (IMP_ID INTEGER,
> IMP_NOM VARCHAR(16),
> IMP_DATE CHAR(6))
> INSERT INTO T_IMPORT VALUES (254, 'Dupont', '251159')
> INSERT INTO T_IMPORT VALUES (321, 'Durand', '130278')
> INSERT INTO T_IMPORT VALUES (187, 'Dubois', '110401')
> -- changing IMP_DATE CHAR(6) to CHAR(8)
> BEGIN TRAN
> ALTER TABLE T_IMPORT
> ADD TMP_IMP_DATE CHAR(6)
> UPDATE T_IMPORT
> SET TMP_IMP_DATE = IMP_DATE
> ALTER TABLE
> DROP IMP_DATE
> ALTER TABLE
> ADD IMP_DATE CHAR(8)
> UPDATE T_IMPORT
> SET IMP_DATE = TMP_IMP_DATE
> ALTER TABLE
> DROP TMP_IMP_DATE
> COMMIT TRAN
> and also if no constraints is on the column.
> A +
>
>
> --
> Frdric BROUARD, MVP SQL Server, expert bases de donnes et langage SQL
> Le site sur le langage SQL et les SGBDR : http://sqlpro.developpez.com
> Audit, conseil, expertise, formation, modlisation, tuning, optimisation
> ********************* http://www.datasapiens.com ***********************
|||Look up the table "syscolumns" in Books Online.
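Building on the syscolumns hint, here is a sketch that generates (but does not execute) the ALTER statements for every user table containing a given column; the column name 'Notes' and target type varchar(200) are placeholders, not from the thread:

```sql
-- Generate one ALTER TABLE statement per table that has the column.
-- Review the output, then run the generated statements by hand.
-- Caveat: fails at run time for any column covered by a constraint or index.
SELECT 'ALTER TABLE [' + o.name + '] ALTER COLUMN [' + c.name + '] varchar(200)'
FROM sysobjects o
JOIN syscolumns c ON c.id = o.id
WHERE o.xtype = 'U'          -- user tables only
  AND c.name = 'Notes'       -- the shared column name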


Friday, March 23, 2012

Incorrect syntax near .

I try to use a stored procedure to do a search on the Email column. But I always get the error "Incorrect syntax near '.'" when I input a keyword like "ash11.tw". If I put in a keyword without a dot, it works fine. It seems like the dot causes the problem, but I don't know how to fix it. Can someone help me out? Thanks.

My where expression is following.

@spWhereSearch = 'where cont.Email like "%' + @Email + '%"'

Try this

@spWhereSearch = Where cont.Email Like "'% + @Email + %'"|||

Probably try this

@spWhereSearch = 'where cont.Email like %' + @Email + '%'

|||

Change the double quotes " in your where search to two single quotes '' and that should fix your issue.

|||

Thanks for answering.

I have tried all three of the above but they still don't work.

The error is still the same.

Does anyone have other suggestions?

|||If you show us the code then it would be more helpful. I don't see any reason why this would fail:

@spWhereSearch = Where cont.Email Like '% + @Email + %'|||That line needs to look *exactly* like this (which was Motley's suggestion and is the resolution to your problem):

@spWhereSearch = 'where cont.Email like ''%' + @Email + '%'''

|||

Thanks for replying. it works!!!

tmorton and Motley's suggestion is correct.

Thanks again.
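The root problem is building the WHERE clause by string concatenation. Where the design permits it, a parameterized query sidesteps the quote-escaping (and SQL injection) issue entirely; a sketch, assuming a table cont with an Email column as in the thread:

```sql
DECLARE @Email nvarchar(100)
SET @Email = N'ash11.tw'    -- dots need no special handling here

-- The pattern is passed as a parameter, so no quoting gymnastics are needed
EXEC sp_executesql
     N'SELECT * FROM cont WHERE Email LIKE ''%'' + @pattern + ''%''',
     N'@pattern nvarchar(100)',
     @pattern = @Email
```

This also means a search term containing a single quote cannot break out of the statement.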

INCORRECT SYNTAX NEAR "STRING" FOR ALTER SQL

HELP

I am trying to create a new column for every file in a folder

but I keep getting a SqlException - "incorrect syntax near 'whatever the value of the file name is'"

It works if I just type in the value directly.

My code looks like this:

fsofolder = CreateObject("Scripting.FileSystemObject")
folder = fsofolder.GetFolder("the path to the Files\")
files = folder.Files
For Each objfile In files
sname = objfile.Name

cmd3.CommandText = "ALTER TABLE NEW ADD " & "' " & sname & " ' " & " nvarchar(MAX)"

DatabaseConnection.Open()

Try

cmd3.Connection = DatabaseConnection
cmd3.ExecuteNonQuery()
Catch ex As SqlException
MsgBox(ex.Message)
End Try

DatabaseConnection.Close()

The syntax should be: ALTER TABLE tablename ADD columnname datatype

There is no place for apostrophe delimiters around the column name in that syntax.

|||

Thanks

I figured out what the problem was

cmd3.CommandText = "ALTER TABLE tablename ADD [" & sname & "] nvarchar(MAX)"

It was not accepting eg Q45654656.txt as a column name

but accepting [Q45654656]

|||

database objects can't have a '.' in their names

|||

It did actually

I mistyped it in the last post

the difference was the [] that enclosed the string

it accepted

sname = [textfile.txt]

but not

sname = textfile.txt

as the column name
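Rather than concatenating the brackets by hand, T-SQL's QUOTENAME() function does the delimiting (and escapes any ] embedded in the name); a sketch, where the table name dbo.NewTable and the file name are hypothetical:

```sql
DECLARE @sname sysname, @sql nvarchar(500)
SET @sname = N'textfile.txt'

-- QUOTENAME wraps the identifier in [] and doubles any embedded ]
SET @sql = N'ALTER TABLE dbo.NewTable ADD ' + QUOTENAME(@sname) + N' nvarchar(MAX)'
EXEC (@sql)
-- executes: ALTER TABLE dbo.NewTable ADD [textfile.txt] nvarchar(MAX)
```

The same call works from client code: build the statement with QUOTENAME on the server side, or apply the bracketing in the VB string as the thread does.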


|||

I have another question however,

is it possible to have, in one string, a SQL command to insert into a database table only if the column is empty or NULL?

maybe something like

cmd.CommandText = "INSERT INTO table_name (column1, column2,...) VALUES (value1, value2,...) WHERE Columnvalue is NULL"

I appreciate the help

|||


fredi:

I have another question however,

is it possible to have, in one string, a SQL command to insert into a database table only if the column is empty or NULL?

maybe something like

cmd.CommandText = "INSERT INTO table_name (column1, column2,...) VALUES (value1, value2,...) WHERE Columnvalue is NULL"

I appreciate the help

Well, why don't you type in that sql statement for yourself and tell us? :)

However, wanting to do what you asked does not make sense to me. I could understand if you said you wanted to update a column only if it was null, because presumably you don't want to lose the old value. By definition, if you want to insert a record, the record shouldn't already exist, so how could a non-existent record have a value in any column?

FYI, it is possible to say (instead of the VALUES (value1, etc.)), SELECT value1, value2, etc.
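For the "insert only if not already there" requirement, the usual pattern combines that INSERT ... SELECT form with a NOT EXISTS guard; a sketch using the thread's placeholder names:

```sql
-- Inserts the row only when no matching row already exists;
-- otherwise the SELECT returns nothing and nothing is inserted.
INSERT INTO table_name (column1, column2)
SELECT 'value1', 'value2'
WHERE NOT EXISTS
      (SELECT 1 FROM table_name WHERE column1 = 'value1')
```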

|||

well here is what i am trying to do and able to do so far

-look into a folder

-create a database table in sql server using the create sql command

-alter the table and create a column named for each file name in the folder

-read each of the text file data into each column

however if i run the code again it adds the textfile data into the same columns again

I just need a statement to say: if the column already has data, then don't do all the above steps.

I hope this explains my situation

These work:


cmd2.CommandText = "CREATE TABLE " & DatabaseTableName & " (" & ISTCOLUMN & " nvarchar(MAX))"
cmd3.CommandText = "ALTER TABLE " & DatabaseTableName & " ADD " & sname1 & " nvarchar(MAX)"

cmd4.CommandText = "INSERT INTO " & DatabaseTableName & " (" & sname1 & ") " & "VALUES ('" & filefields(i) & "')"

How will I check if sname column is Null and only insert the values of filefields into it?

thanks

|||

Am I correct in saying the following?

If the column exists in the table, then you must have populated it with a value?

Because if that is true, then all you have to do is query INFORMATION_SCHEMA.COLUMNS and find out if the column exists.

If that is not true, then you can query the table to see if the column exists.

If it does, query the table to see if it has a row at all, and if so, a value in the column you are interested in.

If yes, do nothing.

If no, update the record.
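The column-existence check in the first step can be written against INFORMATION_SCHEMA.COLUMNS; a sketch, where the table name MyImportTable is a stand-in for whatever table the loop creates:

```sql
-- Add the column only if it is not already in the table
IF NOT EXISTS (SELECT 1
               FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME  = 'MyImportTable'
                 AND COLUMN_NAME = 'Q45654656')
BEGIN
    -- column is absent: safe to add it and load the file's data
    ALTER TABLE MyImportTable ADD [Q45654656] nvarchar(MAX)
END
```

From VB, the SELECT can be run with ExecuteScalar and the ALTER/INSERT skipped when it returns a row.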

Now, I have to tell you that what you are doing almost certainly violates relational data modeling.

I would be EXTREMELY SUSPICIOUS of a database design that required me to add a column to a table for every file in a directory.

The odds of this being a good database design are very, very low. Lower than the chance of my being hit by lightning this year.

Standard relational theory would tell us to create a ROW, not a COLUMN, for every file in the directory.

I am not telling you that your database design is wrong. I am telling you that it is very likely wrong, and that you should re-think your approach to be very, very sure the approach you are taking is the right one.

How many files might there be in the directory? Did you know there are limits as to how many columns can be defined for a table? Will you have more than that limit? Did you know that there are limits as to the number of bytes that can be returned for a row in a query? How many filename columns with their values will it take to go over that limit?

See <http://technet.microsoft.com/en-us/library/ms143432.aspx> for details on sql server limits.

Please reconsider your design or - to educate us all - explain why the situation you are in requires such an unusual design.


|||

Thanks David,

If the column exists in the table, then you must have populated it with a value?

is not true. I first create an empty table with at least one column then I add more columns as they show up (i.e as the text files get created). That might not be as important now as the structure of the database itself.

To say that I am fairly new to Database design would be an understatement. Thanks for enlightening me. I am still in an early stage of the design phase and you just showed me how flawed the database would be if I end up going over limits. I would reconsider my approach.

|||

Glad to have helped! I've got 25 years of computing mistakes behind me, so it's easier for me to recognize them. Some of them are old friends. :)

So, to wrap up this thread, the correct answer is "Don't do it."

Wednesday, March 21, 2012

incorrect number of rows

I have a table 'detail_curr' with a column called
'tams_id'.
1. If the column is indexed (non-cluster,non-unique),
select count(*) from detail_curr
where tams_id is null;
(result): 4003464
2. If the column is NOT indexed,
(result): 3902727
What's wrong? Any help is appreciated.|||Check out:
http://support.microsoft.com/default.aspx?scid=kb;en-us;814509
--
Hope this helps.
Dan Guzman
SQL Server MVP
--
SQL FAQ links (courtesy Neil Pike):
http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=800
http://www.sqlserverfaq.com
http://www.mssqlserver.com/faq
--
"yren" <yren@.cc3.com> wrote in message
news:065801c35c8b$8d2f1b30$a501280a@.phx.gbl...
> I have a table 'detail_curr' with a column called
> 'tams_id'.
> 1. If the column is indexed (non-cluster,non-unique),
> select count(*) from detail_curr
> where tams_id is null;
> (result): 4003464
> 2. If the column is NOT indexed,
> (result): 3902727
> What's wrong? Any help is appreciated.|||Thanks. That's very helpful.
yren
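Mismatched counts between an indexed and unindexed access path usually point at a damaged or stale index rather than a query problem; DBCC is the standard way to confirm and repair, sketched here for the thread's table:

```sql
-- Verify the integrity of the table and all of its indexes
DBCC CHECKTABLE ('detail_curr')

-- If only a nonclustered index is damaged, rebuilding the
-- table's indexes is often enough (SQL 2000 syntax):
DBCC DBREINDEX ('detail_curr')
```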

Monday, March 19, 2012

Incorrect date in column for 25,000 rows

I have a table with a column in it called Date, which is of the type DateTime, and for the last two years I have been adding data which I found out was incorrect.

My dates are all a day in the future, so I need to reduce each date by one day.

I can easily use a select script to reveal the 25,000 rows which are all incorrect dates. But I can't figure out how to update each and every row to subtract one day from each date.

So where I have:

26/01/2005

I would like to have:

25/01/2005

and of course for every record. Obviously way too many to do manually :-(

Can anyone show me a script that will get what I'm after.

Tia

Tailwag

Finding out that 25,000 of your rows have invalid data is not nice. But it is easy to solve; just run this query:


UPDATE [MyTable] SET [MyDate] = DATEADD(year, -1, [MyDate])


Don't forget to replace MyTable with your table name and MyDate with your datetime field name.
I tested it before posting this solution, because I don't want to be responsible for losing 25,000 rows on a Friday.

|||

Also, replace year with day!

So the update query must be:

UPDATE [MyTable] SET [MyDate] = DATEADD(day, -1, [MyDate])

- Jeroen Boiten
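Before running a mass UPDATE like that, it is worth previewing the change and wrapping it in a transaction so it can be rolled back; a sketch using the thread's placeholder names:

```sql
-- Preview: old vs. new value side by side, no data changed yet
SELECT [MyDate] AS old_date, DATEADD(day, -1, [MyDate]) AS new_date
FROM [MyTable]

BEGIN TRAN
UPDATE [MyTable] SET [MyDate] = DATEADD(day, -1, [MyDate])
-- Check @@ROWCOUNT (should be about 25,000 here), then:
COMMIT TRAN   -- or ROLLBACK TRAN if the count looks wrong
```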

|||Indeed! Thanks Jeroen, I overlooked that one. I thought only the years were incorrect.|||

Thank you PJ, and also Jeroen. I ran the query and voilà, it worked flawlessly. You guys are now extremely high on my best-friends-of-all-time list.

Tia.

Tailwag

P.S. The B_ _ _ _ Friday curse has finally been thwarted!!!

Incorrect data in sysdatabases compatability level column

Hello,
I've noticed that if you set model and tempdb to different compatibility
levels and restart the SQL instance, the value in sysdatabases (cmptlevel)
does not get changed correctly.
Here's how to reproduce:
1. set model compatibility to one level, say 9.0
2. set tempdb compatibility to a different level, say 8.0
3. verify in SSMS db -> properties -> options,
or select * from master..sysdatabases
4. run an old-style join query from tempdb to show it is actually in 8.0
compatibility:
-> select * from sysobjects o, syscolumns c where o.id *= c.id
if it runs, the compatibility level is 8.0
5. now restart your db instance
6. repeat steps 3 and 4; you should see that sysdatabases thinks tempdb is in
8.0 compatibility, but you get an error message when running the query, which
means the database is in 9.0
Can anyone explain? or is it possible this is a bug?
thanks|||Please ignore this one; I resubmitted this same ticket to the SQL Server
category of threads.
"paul" wrote:

> Hello,
> I've noticed that if you have model and tempdb to different compatability
> levels and restart the sql instance, the value in sysdatabases (cmptlevel)
> does not get changed correctly.
> Here's how to reproduce:
> 1. set model compatability to one level, say 9.0
> 2. set tempdb compatability to diff. level, say 8.0
> 3. verify in SSMS db -> properties -> options
> or select * from master..sysdatabases
> 4. run an old join query from tempdb to show it is actually in 8.0
> compatability
> -> select * from sysobjects o, syscolumn c where o.id *= c.id
> if it runs, the compatability is set to 8.0
> 5. now restart your db instance
> 6. repeat step 3 and 4, you should see that sysdatabases thinks tempdb is
in
> 8.0 compatability, but you get an error message when running the query whi
ch
> means the database is in 9.0
> Can anyone explain? or is it possible this is a bug?
> thanks
>
>
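For reference, on SQL 2000/2005 the compatibility level can be read and set with sp_dbcmptlevel, and cross-checked against the catalog, as sketched below:

```sql
-- Report the current compatibility level for tempdb
EXEC sp_dbcmptlevel 'tempdb'

-- Set it explicitly (80 = SQL Server 2000, 90 = SQL Server 2005)
EXEC sp_dbcmptlevel 'tempdb', 90

-- Cross-check what the catalog claims
SELECT name, cmptlevel FROM master..sysdatabases WHERE name = 'tempdb'
```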

|||Sorry, I forgot to tell you to run the test query from tempdb to
either produce an error message or not. The connection's current database
determines the compatibility level.
"paul" wrote:
[quoted text snipped]
|||For what it is worth, I would not mess with the compatibility level of
either tempdb or model. I know the idea is that you can set model to
match what you want from a new database, but as tempdb is effectively
recreated from model each time SQL Server starts, it sounds like asking
for trouble.
Roy Harvey
Beacon Falls, CT
On Mon, 16 Jul 2007 11:14:08 -0700, paul
<paul@.discussions.microsoft.com> wrote:
[quoted text snipped]

Incorrect data in sysdatabases compatability level column

Hello,
I've noticed that if you have model and tempdb to different compatability
levels and restart the sql instance, the value in sysdatabases (cmptlevel)
does not get changed correctly.
Here's how to reproduce:
1. set model compatability to one level, say 9.0
2. set tempdb compatability to diff. level, say 8.0
3. verify in SSMS db -> properties -> options
or select * from master..sysdatabases
4. run an old join query from tempdb to show it is actually in 8.0
compatability
-> select * from sysobjects o, syscolumn c where o.id *= c.id
if it runs, the compatability is set to 8.0
5. now restart your db instance
6. repeat step 3 and 4, you should see that sysdatabases thinks tempdb is in
8.0 compatability, but you get an error message when running the query which
means the database is in 9.0
Can anyone explain? or is it possible this is a bug?
thanks
Please ignore this one, I resubmitted this same ticket the to SQL Server
category of threads.
"paul" wrote:

> Hello,
> I've noticed that if you have model and tempdb to different compatability
> levels and restart the sql instance, the value in sysdatabases (cmptlevel)
> does not get changed correctly.
> Here's how to reproduce:
> 1. set model compatability to one level, say 9.0
> 2. set tempdb compatability to diff. level, say 8.0
> 3. verify in SSMS db -> properties -> options
> or select * from master..sysdatabases
> 4. run an old join query from tempdb to show it is actually in 8.0
> compatability
> -> select * from sysobjects o, syscolumn c where o.id *= c.id
> if it runs, the compatability is set to 8.0
> 5. now restart your db instance
> 6. repeat step 3 and 4, you should see that sysdatabases thinks tempdb is in
> 8.0 compatability, but you get an error message when running the query which
> means the database is in 9.0
> Can anyone explain? or is it possible this is a bug?
> thanks
>
>

Incorrect data in sysdatabases compatability level column

Hello,
I've noticed that if you have model and tempdb to different compatability
levels and restart the sql instance, the value in sysdatabases (cmptlevel)
does not get changed correctly.
Here's how to reproduce:
1. set model compatability to one level, say 9.0
2. set tempdb compatability to diff. level, say 8.0
3. verify in SSMS db -> properties -> options
or select * from master..sysdatabases
4. run an old join query from tempdb to show it is actually in 8.0
compatability
-> select * from sysobjects o, syscolumn c where o.id *= c.id
if it runs, the compatability is set to 8.0
5. now restart your db instance
6. repeat step 3 and 4, you should see that sysdatabases thinks tempdb is in
8.0 compatability, but you get an error message when running the query which
means the database is in 9.0
Can anyone explain? or is it possible this is a bug?
thanksSorry, I forgot to tell you to run the test query below from the tempdb to
either produce an error message or not. The connection's current db
determines compatability level.
"paul" wrote:
> Hello,
>
> I've noticed that if you have model and tempdb to different compatability
> levels and restart the sql instance, the value in sysdatabases (cmptlevel)
> does not get changed correctly.
> Here's how to reproduce:
> 1. set model compatability to one level, say 9.0
> 2. set tempdb compatability to diff. level, say 8.0
> 3. verify in SSMS db -> properties -> options
> or select * from master..sysdatabases
> 4. run an old join query from tempdb to show it is actually in 8.0
> compatability
> -> select * from sysobjects o, syscolumn c where o.id *= c.id
> if it runs, the compatability is set to 8.0
> 5. now restart your db instance
> 6. repeat step 3 and 4, you should see that sysdatabases thinks tempdb is in
> 8.0 compatability, but you get an error message when running the query which
> means the database is in 9.0
> Can anyone explain? or is it possible this is a bug?
> thanks
>
>|||For what it is worth, I would not mess with the compatibilty level of
either tempdb or model. I know the idea is that you can set model to
match what you want from a new database, but as tempdb is effectively
recreated from model each time SQL Server starts it sounds like asking
for trouble.
Roy Harvey
Beacon Falls, CT
On Mon, 16 Jul 2007 11:14:08 -0700, paul
<paul@.discussions.microsoft.com> wrote:
>Hello,
>
>I've noticed that if you have model and tempdb to different compatability
>levels and restart the sql instance, the value in sysdatabases (cmptlevel)
>does not get changed correctly.
>Here's how to reproduce:
>1. set model compatability to one level, say 9.0
>2. set tempdb compatability to diff. level, say 8.0
>3. verify in SSMS db -> properties -> options
>or select * from master..sysdatabases
>4. run an old join query from tempdb to show it is actually in 8.0
>compatability
> -> select * from sysobjects o, syscolumn c where o.id *= c.id
> if it runs, the compatability is set to 8.0
>5. now restart your db instance
>6. repeat step 3 and 4, you should see that sysdatabases thinks tempdb is in
>8.0 compatability, but you get an error message when running the query which
>means the database is in 9.0
>Can anyone explain? or is it possible this is a bug?
>thanks
>

Incorrect data in sysdatabases compatability level column

Hello,
I've noticed that if you have model and tempdb to different compatability
levels and restart the sql instance, the value in sysdatabases (cmptlevel)
does not get changed correctly.
Here's how to reproduce:
1. set model compatability to one level, say 9.0
2. set tempdb compatability to diff. level, say 8.0
3. verify in SSMS db -> properties -> options
or select * from master..sysdatabases
4. run an old join query from tempdb to show it is actually in 8.0
compatability
-> select * from sysobjects o, syscolumn c where o.id *= c.id
if it runs, the compatability is set to 8.0
5. now restart your db instance
6. repeat step 3 and 4, you should see that sysdatabases thinks tempdb is in
8.0 compatability, but you get an error message when running the query which
means the database is in 9.0
Can anyone explain? or is it possible this is a bug?
thanksSorry, I forgot to tell you to run the test query below from the tempdb to
either produce an error message or not. The connection's current db
determines compatability level.
"paul" wrote:

> Hello,
>
> I've noticed that if you set model and tempdb to different compatibility
> levels and restart the SQL instance, the value in sysdatabases (cmptlevel)
> does not get changed correctly.
> Here's how to reproduce:
> 1. set model compatibility to one level, say 9.0
> 2. set tempdb compatibility to a different level, say 8.0
> 3. verify in SSMS db -> properties -> options
> or select * from master..sysdatabases
> 4. run an old-style join query from tempdb to show it is actually in 8.0
> compatibility
> -> select * from sysobjects o, syscolumns c where o.id *= c.id
> if it runs, the compatibility level is 8.0
> 5. now restart your db instance
> 6. repeat steps 3 and 4; you should see that sysdatabases thinks tempdb is in
> 8.0 compatibility, but you get an error message when running the query, which
> means the database is in 9.0
> Can anyone explain? Or is it possible this is a bug?
> thanks
>
>|||For what it is worth, I would not mess with the compatibility level of
either tempdb or model. I know the idea is that you can set model to
match what you want from a new database, but as tempdb is effectively
recreated from model each time SQL Server starts, it sounds like asking
for trouble.
Roy Harvey
Beacon Falls, CT
On Mon, 16 Jul 2007 11:14:08 -0700, paul
<paul@.discussions.microsoft.com> wrote:

>Hello,
>
>I've noticed that if you set model and tempdb to different compatibility
>levels and restart the SQL instance, the value in sysdatabases (cmptlevel)
>does not get changed correctly.
>Here's how to reproduce:
>1. set model compatibility to one level, say 9.0
>2. set tempdb compatibility to a different level, say 8.0
>3. verify in SSMS db -> properties -> options
>or select * from master..sysdatabases
>4. run an old-style join query from tempdb to show it is actually in 8.0
>compatibility
> -> select * from sysobjects o, syscolumns c where o.id *= c.id
> if it runs, the compatibility level is 8.0
>5. now restart your db instance
>6. repeat steps 3 and 4; you should see that sysdatabases thinks tempdb is in
>8.0 compatibility, but you get an error message when running the query, which
>means the database is in 9.0
>Can anyone explain? or is it possible this is a bug?
>thanks
>
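For anyone checking this on their own instance, here is a quick sketch (the 90/80 numeric levels correspond to the 9.0/8.0 labels used in the thread):

```sql
-- Check the recorded compatibility level for every database.
-- master..sysdatabases (cmptlevel) is the column the repro above reads;
-- on SQL Server 2005 the supported catalog view is sys.databases.
SELECT name, cmptlevel
FROM master..sysdatabases;

-- sp_dbcmptlevel is the documented way to change the level on 2000/2005.
EXEC sp_dbcmptlevel 'tempdb', 80;

-- The legacy *= outer-join syntax only parses at level 80 or below, so
-- this doubles as a probe of the *effective* level of the current database:
USE tempdb;
SELECT o.name, c.name
FROM sysobjects o, syscolumns c
WHERE o.id *= c.id;
```

As Roy notes, tempdb is rebuilt from model at every restart, so any level set on tempdb directly does not survive a restart; the gap between the stored cmptlevel value and the effective level is exactly what the repro exposes.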

Incorrect column order in syscolumns / Information_schema.columns

When dropping a column, syscolumns keeps the old ordinal
for the following columns. Is this correct / expected
behavior?
This also affects Information_schema.Columns. See example
below.
Is there any SUPPORTED way to refresh or update this info?
Regards
Hans Schyldt
Drop Table HS_Test
go
CREATE TABLE [dbo].[HS_Test] (
[C1] [int] IDENTITY (1, 1) NOT NULL ,
[C2] [int] NOT NULL ,
[C3] [varchar] (10) NOT NULL ,
[C4] [varchar] (10) NOT NULL) ON [PRIMARY]
go
select
T1.Table_Name, T1.Column_Name, T1.Ordinal_Position
from Information_Schema.Columns as T1
where T1.Table_Name = 'HS_Test'
order by T1.Ordinal_Position
go
alter table HS_Test
Drop Column C2
go
select
T1.Table_Name, T1.Column_Name, T1.Ordinal_Position
from Information_Schema.Columns as T1
where T1.Table_Name = 'HS_Test'
order by T1.Ordinal_Position
go
Drop Table HS_Test
go|||This is correct behavior, but can be annoying.
A lot of people will argue that order and ordinal numbers do not matter, but
they matter to me.
What I did to correct this was make a change with Enterprise Manager and
then reverse it, using the kind of change that makes Enterprise Manager
rebuild the whole table. In this way my ordinals were again as they were
in other databases (so I could compare the databases again). This worked
fast enough that I didn't look for 'simpler' commands which only change
metadata.
ben brugman
"Hans Schyldt" <hans.reply_to_forum.schyldt@.intentia.se> wrote in message
news:0e4701c367e7$70a16ec0$a101280a@.phx.gbl...
> When dropping a column, syscolumns keeps the old ordinal
> for the following columns. Is this correct / expected
> behavior?
> This also affects Information_schema.Columns. See example
> below.
> Is there any SUPPORTED way to refresh or update this info?
> Regards
> Hans Schyldt
>
> Drop Table HS_Test
> go
> CREATE TABLE [dbo].[HS_Test] (
> [C1] [int] IDENTITY (1, 1) NOT NULL ,
> [C2] [int] NOT NULL ,
> [C3] [varchar] (10) NOT NULL ,
> [C4] [varchar] (10) NOT NULL) ON [PRIMARY]
> go
> select
> T1.Table_Name, T1.Column_Name, T1.Ordinal_Position
> from Information_Schema.Columns as T1
> where T1.Table_Name = 'HS_Test'
> order by T1.Ordinal_Position
> go
> alter table HS_Test
> Drop Column C2
> go
> select
> T1.Table_Name, T1.Column_Name, T1.Ordinal_Position
> from Information_Schema.Columns as T1
> where T1.Table_Name = 'HS_Test'
> order by T1.Ordinal_Position
> go
> Drop Table HS_Test
> go
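Ben's Enterprise Manager trick boils down to a table rebuild. A hand-rolled sketch of the same idea against the example table above (run it in a transaction; in a real migration you would also re-create indexes, constraints, and permissions, which SELECT INTO does not carry over):

```sql
BEGIN TRAN;

-- Rebuild the table so the surviving columns get fresh, gap-free
-- ordinals. SELECT INTO preserves the IDENTITY property on C1,
-- but not constraints or indexes.
SELECT C1, C3, C4
INTO HS_Test_rebuilt
FROM HS_Test;

DROP TABLE HS_Test;

EXEC sp_rename 'HS_Test_rebuilt', 'HS_Test';

COMMIT TRAN;
```

After the rebuild, Information_Schema.Columns reports Ordinal_Position 1, 2, 3 with no gap where C2 used to be.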

Monday, March 12, 2012

Inconsistent UDF column order

I have a UDF that, when run by two different users, gives two different
orders of data columns. One follows the syntax of the function and the other
is mis-ordered but returns this way consistently. Has anyone else
encountered this?|||Can you post the ddl?
AMB
"ZachB" wrote:

> I have a UDF that, when run by two different users, gives two different
> orders of data columns. One follows the syntax of the function and the other
> is mis-ordered but returns this way consistently. Has anyone else
> encountered this?|||Not sure what you mean by posting the ddl, but here's the syntax of the UDF:
CREATE FUNCTION dbo.MISMOqryMINRegistrationCldDateNE(@.Enter_Begin_Date
datetime,
@.Enter_End_Date datetime)
RETURNS TABLE
AS
RETURN ( SELECT TOP 100 PERCENT dbo.dbo_Tracking_File_Ext.f755#MERS,
dbo.MERS.MERS_MINNumber AS [MISMO MERS], dbo.tblLoanDetails.FirstSecond,
dbo.dbo_Tracking_File_Ext.f422#ClosingDate AS [MV Note
Date], dbo.tblLoanInfo.ClsdTDDate AS NoteDate,
dbo.LOAN_DETAILS.ClosingDate AS [MISMO ClosingDate],
dbo.tblLoanDetails.[Loan Amount],
dbo.LOAN_DETAILS.DisbursementDate AS [Funding Date],
dbo.dbo_Tracking_File_Ext.F251#CompanyName1,
dbo.dbo_Tracking_File_Ext.f252#CompanyName2,
dbo.GENERIC_ENTITY_LenderName._UnparsedName AS [MISMO CompanyName1],
' ' AS [MISMO CompanyName2],
dbo.qryBorrJoin.firstname, dbo.qryBorrJoin.B1MI, dbo.qryBorrJoin.Name AS
B1LastName, dbo.qryBorrJoin.BSSN,
dbo.qryBorrJoin.CoFirstName, dbo.qryBorrJoin.B2MI,
dbo.qryBorrJoin.coLastName, dbo.qryBorrJoin.CBSSN, dbo.tblLoanInfo.[Security
Address Street],
dbo.tblLoanInfo.City, dbo.tblLoanInfo.State,
dbo.tblLoanInfo.ZIP, dbo.dbo_Tracking_File.f555#property_county,
dbo.PROPERTY._County AS [MISMO property_county],
dbo.dbo_Tracking_File_Ext.f519#TrusteeName,
dbo.GENERIC_ENTITY_Trustee._UnparsedName AS [MISMO
TrusteeName], dbo.tblStateLookup.StateTrustVMort,
dbo.tblClosedLoan.LoanNumber,
dbo.[tblPurpose Lookup].PurpComerica,
dbo.tblClosedLoan.CommitID, dbo.tblClosedLoan.SandDYN,
dbo.tblClosedLoan.PSStatus
FROM dbo.tblStateLookup RIGHT OUTER JOIN
dbo.dbo_Tracking_File RIGHT OUTER JOIN
dbo.qryBorrJoin INNER JOIN
dbo.tblLoanInfo INNER JOIN
dbo.tblLoanDetails INNER JOIN
dbo.tblClosedLoan ON dbo.tblLoanDetails.LoanDetailID =
dbo.tblClosedLoan.NCLoanNumber ON
dbo.tblLoanInfo.[Acct Number] =
dbo.tblLoanDetails.[Loan Number] ON
dbo.qryBorrJoin.[Loan Number] = dbo.tblLoanInfo.[Acct
Number] LEFT OUTER JOIN
dbo.[tblPurpose Lookup] ON dbo.tblLoanInfo.Purpose =
dbo.[tblPurpose Lookup].[Purpose Lookup] LEFT OUTER JOIN
dbo.dbo_Tracking_File_Ext ON
dbo.tblClosedLoan.LoanNumber = dbo.dbo_Tracking_File_Ext.Loan_ID ON
dbo.dbo_Tracking_File.Loan_ID =
dbo.tblClosedLoan.LoanNumber ON dbo.tblStateLookup.StateID =
dbo.tblLoanInfo.State LEFT OUTER JOIN
dbo._CLOSING_DOCUMENTS LEFT OUTER JOIN
dbo.GENERIC_ENTITY_LenderName ON
dbo._CLOSING_DOCUMENTS.CLDC_ID = dbo.GENERIC_ENTITY_LenderName.CLDC_ID LEFT
OUTER JOIN
dbo.GENERIC_ENTITY_Trustee RIGHT OUTER JOIN
dbo.RECORDABLE_DOCUMENT ON
dbo.GENERIC_ENTITY_Trustee.RCDO_ID = dbo.RECORDABLE_DOCUMENT.RCDO_ID ON
dbo._CLOSING_DOCUMENTS.CLDC_ID =
dbo.RECORDABLE_DOCUMENT.CLDC_ID RIGHT OUTER JOIN
dbo.MORTGAGE_TERMS_MaxAppl LEFT OUTER JOIN
dbo.LOAN_APPLICATION ON
dbo.MORTGAGE_TERMS_MaxAppl.APPL_ID = dbo.LOAN_APPLICATION.APPL_ID LEFT OUTER
JOIN
dbo.PROPERTY ON dbo.MORTGAGE_TERMS_MaxAppl.APPL_ID =
dbo.PROPERTY.APPL_ID ON
dbo._CLOSING_DOCUMENTS.LOAN_ID =
dbo.LOAN_APPLICATION.LOAN_ID LEFT OUTER JOIN
dbo.LOAN_DETAILS ON dbo._CLOSING_DOCUMENTS.CLDC_ID =
dbo.LOAN_DETAILS.CLDC_ID LEFT OUTER JOIN
dbo.MERS ON dbo.MORTGAGE_TERMS_MaxAppl.APPL_ID =
dbo.MERS.APPL_ID ON
dbo.tblClosedLoan.LoanNumber =
dbo.MORTGAGE_TERMS_MaxAppl.LenderLoanIdentifier
WHERE (dbo.tblLoanInfo.ClsdTDDate BETWEEN @.Enter_Begin_Date AND
@.Enter_End_Date) AND (dbo.tblClosedLoan.SandDYN = 0 OR
dbo.tblClosedLoan.SandDYN IS NULL) AND
(dbo.tblClosedLoan.PSStatus <> N'rescinded')
ORDER BY dbo.qryBorrJoin.Name, dbo.qryBorrJoin.firstname )|||ZachB,
When you say "gives two different orders of data columns", do you mean a
different sort of the result or that the column list is different? How are
you querying this table function?
The "order by" clause used inside the function does not guarantee any order
of the result when you use:
declare @.sd datetime
declare @.ed datetime
set @.sd = '20050101'
set @.ed = '20050321'
select col1, col2, ..., coln
from dbo.MISMOqryMINRegistrationCldDateNE(@.sd, @.ed) as t
you have to use an "order by" clause again if you want the order of the rows
to be consistent.
select col1, col2, ..., coln
from dbo.MISMOqryMINRegistrationCldDateNE(@.sd, @.ed) as t
order by col1, ...
AMB
"ZachB" wrote:

> Not sure what you mean by posting the ddl, but here's the syntax of the UDF:
> CREATE FUNCTION dbo.MISMOqryMINRegistrationCldDateNE(@.Enter_Begin_Date
> datetime,
> @.Enter_End_Date datetime)
> RETURNS TABLE
> AS
> RETURN ( SELECT TOP 100 PERCENT dbo.dbo_Tracking_File_Ext.f755#MERS,
> dbo.MERS.MERS_MINNumber AS [MISMO MERS], dbo.tblLoanDetails.FirstSecond,
> dbo.dbo_Tracking_File_Ext.f422#ClosingDate AS [MV Note
> Date], dbo.tblLoanInfo.ClsdTDDate AS NoteDate,
> dbo.LOAN_DETAILS.ClosingDate AS [MISMO ClosingDate],
> dbo.tblLoanDetails.[Loan Amount],
> dbo.LOAN_DETAILS.DisbursementDate AS [Funding Date],
> dbo.dbo_Tracking_File_Ext.F251#CompanyName1,
> dbo.dbo_Tracking_File_Ext.f252#CompanyName2,
> dbo.GENERIC_ENTITY_LenderName._UnparsedName AS [MISMO CompanyName1],
> ' ' AS [MISMO CompanyName2],
> dbo.qryBorrJoin.firstname, dbo.qryBorrJoin.B1MI, dbo.qryBorrJoin.Name AS
> B1LastName, dbo.qryBorrJoin.BSSN,
> dbo.qryBorrJoin.CoFirstName, dbo.qryBorrJoin.B2MI,
> dbo.qryBorrJoin.coLastName, dbo.qryBorrJoin.CBSSN, dbo.tblLoanInfo.[Security
> Address Street],
> dbo.tblLoanInfo.City, dbo.tblLoanInfo.State,
> dbo.tblLoanInfo.ZIP, dbo.dbo_Tracking_File.f555#property_county,
> dbo.PROPERTY._County AS [MISMO property_county],
> dbo.dbo_Tracking_File_Ext.f519#TrusteeName,
> dbo.GENERIC_ENTITY_Trustee._UnparsedName AS [MISMO
> TrusteeName], dbo.tblStateLookup.StateTrustVMort,
> dbo.tblClosedLoan.LoanNumber,
> dbo.[tblPurpose Lookup].PurpComerica,
> dbo.tblClosedLoan.CommitID, dbo.tblClosedLoan.SandDYN,
> dbo.tblClosedLoan.PSStatus
> FROM dbo.tblStateLookup RIGHT OUTER JOIN
> dbo.dbo_Tracking_File RIGHT OUTER JOIN
> dbo.qryBorrJoin INNER JOIN
> dbo.tblLoanInfo INNER JOIN
> dbo.tblLoanDetails INNER JOIN
> dbo.tblClosedLoan ON dbo.tblLoanDetails.LoanDetailID =
> dbo.tblClosedLoan.NCLoanNumber ON
> dbo.tblLoanInfo.[Acct Number] =
> dbo.tblLoanDetails.[Loan Number] ON
> dbo.qryBorrJoin.[Loan Number] = dbo.tblLoanInfo.[Acct
> Number] LEFT OUTER JOIN
> dbo.[tblPurpose Lookup] ON dbo.tblLoanInfo.Purpose =
> dbo.[tblPurpose Lookup].[Purpose Lookup] LEFT OUTER JOIN
> dbo.dbo_Tracking_File_Ext ON
> dbo.tblClosedLoan.LoanNumber = dbo.dbo_Tracking_File_Ext.Loan_ID ON
> dbo.dbo_Tracking_File.Loan_ID =
> dbo.tblClosedLoan.LoanNumber ON dbo.tblStateLookup.StateID =
> dbo.tblLoanInfo.State LEFT OUTER JOIN
> dbo._CLOSING_DOCUMENTS LEFT OUTER JOIN
> dbo.GENERIC_ENTITY_LenderName ON
> dbo._CLOSING_DOCUMENTS.CLDC_ID = dbo.GENERIC_ENTITY_LenderName.CLDC_ID LEFT
> OUTER JOIN
> dbo.GENERIC_ENTITY_Trustee RIGHT OUTER JOIN
> dbo.RECORDABLE_DOCUMENT ON
> dbo.GENERIC_ENTITY_Trustee.RCDO_ID = dbo.RECORDABLE_DOCUMENT.RCDO_ID ON
> dbo._CLOSING_DOCUMENTS.CLDC_ID =
> dbo.RECORDABLE_DOCUMENT.CLDC_ID RIGHT OUTER JOIN
> dbo.MORTGAGE_TERMS_MaxAppl LEFT OUTER JOIN
> dbo.LOAN_APPLICATION ON
> dbo.MORTGAGE_TERMS_MaxAppl.APPL_ID = dbo.LOAN_APPLICATION.APPL_ID LEFT OUTER
> JOIN
> dbo.PROPERTY ON dbo.MORTGAGE_TERMS_MaxAppl.APPL_ID =
> dbo.PROPERTY.APPL_ID ON
> dbo._CLOSING_DOCUMENTS.LOAN_ID =
> dbo.LOAN_APPLICATION.LOAN_ID LEFT OUTER JOIN
> dbo.LOAN_DETAILS ON dbo._CLOSING_DOCUMENTS.CLDC_ID =
> dbo.LOAN_DETAILS.CLDC_ID LEFT OUTER JOIN
> dbo.MERS ON dbo.MORTGAGE_TERMS_MaxAppl.APPL_ID =
> dbo.MERS.APPL_ID ON
> dbo.tblClosedLoan.LoanNumber =
> dbo.MORTGAGE_TERMS_MaxAppl.LenderLoanIdentifier
> WHERE (dbo.tblLoanInfo.ClsdTDDate BETWEEN @.Enter_Begin_Date AND
> @.Enter_End_Date) AND (dbo.tblClosedLoan.SandDYN = 0 OR
> dbo.tblClosedLoan.SandDYN IS NULL) AND
> (dbo.tblClosedLoan.PSStatus <> N'rescinded')
> ORDER BY dbo.qryBorrJoin.Name, dbo.qryBorrJoin.firstname )|||The UDF is being called from an MS Access .adp project. The "order" of the
data columns is different, meaning in one case it shows Column A, Column B,
Column C, Column D, but in the other it shows Column B, Column C, Column A,
Column D (even if the syntax says SELECT Column A, Column B, Column C, Column
D).
row 1 test1 test2 test3 test4
vs.
row 1 test2 test3 test1 test4
"Alejandro Mesa" wrote:
> ZachB,
> When you say "gives two different orders of data columns", do you mean a
> different sort of the result or that the column list is different? How are
> you querying this table function?
> The "order by" clause used inside the function does not guarantee any order
> of the result when you use:
> declare @.sd datetime
> declare @.ed datetime
> set @.sd = '20050101'
> set @.ed = '20050321'
> select col1, col2, ..., coln
> from dbo.MISMOqryMINRegistrationCldDateNE(@.sd, @.ed) as t
> you have to use an "order by" clause again if you want the order of the rows
> to be consistent.
> select col1, col2, ..., coln
> from dbo.MISMOqryMINRegistrationCldDateNE(@.sd, @.ed) as t
> order by col1, ...
>
> AMB
> "ZachB" wrote:
>|||ZachB,
Can you trace the statements sent to sql server by the project?
AMB
"ZachB" wrote:
> The UDF is being called from an MS Access .adp project. The "order" of the
> data columns is different, meaning in one case it shows Column A, Column B,
> Column C, Column D, but in the other it shows Column B, Column C, Column A,
> Column D (even if the syntax says SELECT Column A, Column B, Column C, Column
> D).
> row 1 test1 test2 test3 test4
> vs.
> row 1 test2 test3 test1 test4
> "Alejandro Mesa" wrote:
>|||Not that I'm aware of. I know you can check the properties of a particular
spid under Current Activity and see what syntax is or has just been run. But
I would assume that since the two users are hitting the same .adp they
shouldn't be passing different statements.
The MS Access .adp menu option states:
Open table 'dbo.MISMOqryMINRegistrationCldDateNE'
and this is just one example. This mis-ordering happens consistently across
several UDFs that I'm working with.
"Alejandro Mesa" wrote:

> ZachB,
> Can you trace the statements sent to sql server by the project?
>
> AMB|||Use Profiler to trace activities in the server. You can read about it in BOL.
AMB
"ZachB" wrote:
> Not that I'm aware of. I know you can check the properties of a particular
> spid under Current Activity and see what syntax is or has just been run. But
> I would assume that since the two users are hitting the same .adp they
> shouldn't be passing different statements.
> The MS Access .adp menu option states:
> Open table 'dbo.MISMOqryMINRegistrationCldDateNE'
> and this is just one example. This mis-ordering happens consistently across
> several UDFs that I'm working with.
> "Alejandro Mesa" wrote:
>|||Anyone else? While I'm learning to trace, has anyone ever had a UDF return
data in different COLUMN order for different users? Let me know. Thanks in
advance.
"ZachB" wrote:

> I have a UDF that, when run by two different users, gives two different
> orders of data columns. One follows the syntax of the function and the other
> is mis-ordered but returns this way consistently. Has anyone else
> encountered this?|||"ZachB" <ZachB@.discussions.microsoft.com> wrote in message
news:5966BAE6-5CC4-4055-A8D9-454B2E5A2E9C@.microsoft.com...
> Anyone else? While I'm learning to trace, has anyone ever had a UDF return
> data in different COLUMN order for different users? Let me know. Thanks in
> advance.
>
Wild Guesses:
1. The clients are executing different code. One client has been updated
and the other hasn't.
2. The clients are attached to different databases, one on test and one on
production.
3. It isn't really happening - The client preferences are different so that
on one client things appear differently but aren't actually different.
(e.g. hidden display controls, etc)
4. You are using Select * and there is a weird caching thing going on.
Good Luck.
Jim
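Jim's fourth guess is the easiest one to rule out from the calling side. Instead of letting Access open the function's result directly, select from it with an explicit column list and an outer ORDER BY (a sketch; the column names are taken from the UDF posted above):

```sql
-- An explicit select list pins the column order regardless of any
-- client-side caching; the outer ORDER BY pins the row order, which
-- the ORDER BY inside the function body does not guarantee.
SELECT t.LoanNumber, t.NoteDate, t.[Loan Amount], t.[Funding Date]
FROM dbo.MISMOqryMINRegistrationCldDateNE('20050101', '20050321') AS t
ORDER BY t.LoanNumber;
```

If the two users still see different column orders with a query like this, the difference is being introduced on the server side rather than by the client.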

Inconsistent sort order using ORDER BY clause

I am getting the resultset sorted differently if I use a column number in the ORDER BY clause instead of a column name.

Product: Microsoft SQL Server Express Edition
Version: 9.00.1399.06
Server Collation: SQL_Latin1_General_CP1_CI_AS

for example,

create table test_sort
( description varchar(75) );

insert into test_sort values('Non-A');
insert into test_sort values('Non-O');
insert into test_sort values('Noni');
insert into test_sort values('Nons');

then execute the following selects:
select
*
from
test_sort
order by
cast( 1 as nvarchar(75));

select
*
from
test_sort
order by
cast( description as nvarchar(75));

Resultset1
-
Non-A
Non-O
Noni
Nons

Resultset2
-
Non-A
Noni
Non-O
Nons

Any ideas?|||As far as I can figure your query out, I am just wondering why this works for you, as the 1 will be cast to a constant string, which should not be allowed in the ORDER BY clause. Are you sure this works for you?

Jens K. Suessmeyer.

http://www.sqlserver2005.de|||First, you are running the release version of 2005. You should install at least SP1.

Second, if you install SP1, you would see an error:

Msg 408, Level 16, State 1, Line 9
A constant expression was encountered in the ORDER BY list, position 1.

Because, by using the cast, you are sorting by the NUMBER 1 (a constant), not by column 1. So basically you have no sort.
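To make the distinction concrete, against the test_sort table from the original post:

```sql
-- Sorts by the column value; under SQL_Latin1_General_CP1_CI_AS word
-- sort the hyphen is largely ignored, so 'Noni' lands between 'Non-A'
-- and 'Non-O' (the second result set above).
SELECT description FROM test_sort
ORDER BY CAST(description AS nvarchar(75));

-- A bare ordinal sorts by the first column in the select list:
SELECT description FROM test_sort
ORDER BY 1;

-- CAST(1 AS nvarchar(75)) is a constant expression, not an ordinal.
-- From SP1 on it raises error 408; on the RTM build it simply imposed
-- no sort, which is why the first result set came back in insert order.
```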

Inconsistent result set.

Hi All
What does it mean if i have a result set that comes as < long text>? I have
a table with a column of data type txt and size 16. When i try to get data
from that colomun, either i get the data or i get the result set as <long
text>. What is causing this inconsistency in my result set? Thanx in advance
.Enterprise Manager will not display large pieces of text stored in a text or
ntext column.
If you need to see more of the text values use Query Analyzer. But mind the
fact, that by default QA only displays up to 256 characters, so you'd need t
o
change that to suit your needs. Go to Tools | Options | Results and set
"Maximum characters per column" to a value of your choice (between 30 and
8192).
Or, better yet, manipulate long text in an appropriate client application.
Above all, get out of Enterprise Manager. ;)
ML|||Thanx for the reply, but then why is it that at times I get the right result set?
From what you are saying, I should be getting '<long text>' every time I run my query.
Also, I have been doing this in QA. Thanx once again.
"ML" wrote:

> Enterprise Manager will not display large pieces of text stored in a text or
> ntext column.
> If you need to see more of the text values use Query Analyzer. But mind the
> fact that by default QA only displays up to 256 characters, so you'd need to
> change that to suit your needs. Go to Tools | Options | Results and set
> "Maximum characters per column" to a value of your choice (between 30 and
> 8192).
> Or, better yet, manipulate long text in an appropriate client application.
> Above all, get out of Enterprise Manager. ;)
>
> ML|||I honestly do not know the exact limit used to display either actual
text/ntext or a generic <long text> message in Enterprise Manager, since I
really don't use it often. :)
But I have noticed that the limit may not be a fixed value. I think it
depends on the size of the active result set, but I'm not sure. Maybe someone
else knows. Anyway, since large text (and images, for that matter) only means
anything when displayed and manipulated in an appropriate client application,
the rationale behind the "EM fun" is irrelevant. At least IMHO.
ML
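If you just need to eyeball a text column from a query window, here is a sketch that sidesteps the display limit entirely (MyTable and MyTextCol are placeholder names, not from the thread):

```sql
-- DATALENGTH reports the full stored size in bytes, regardless of how
-- much of the value the tool chooses to display. SUBSTRING works on
-- text columns and lets you page through the value in chunks of up
-- to 8000 characters.
SELECT DATALENGTH(MyTextCol) AS stored_bytes,
       SUBSTRING(MyTextCol, 1, 8000) AS first_chunk
FROM MyTable;
```

Comparing stored_bytes against what the tool shows makes it obvious when the display, rather than the data, is being truncated.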

Inconsistent negative numerical values in SQL2K database table

I have a case where the sum of a column containing positive, zero and
negative values is correct in Query Analyzer but the values in the column do
not contain a '-' (minus sign). The rows sort as expected but they do not
have a minus sign. I can update a positive row value by multiplying it by -1
and that value shows with a minus sign. The newly updated row sorts with the
other negative values but it is the only row with a minus sign.
This "condition" came to light when a cube built on the fact table (where
this is occurring) was not summing correctly. The cube sees all values as
positive as if an absolute function was being used. When the one positive
row was updated to a negative value as described above, the cube recognized
the negative value and the measure changed accordingly. What is it in the
SQL Server table or in the cube interpretation of the table values that is
causing this? Is there a workaround?
Another twist. The table, pmix_fact1, with which we see this behavior was
copied from another SQL Server 2000 server. The cubes built on that table
sum correctly. I suspect the process of copying the table may be introducing
this behavior. Has anyone experienced anything like this?
I have seen this once before when using an ODBC driver to copy table rows
from an Oracle 7.3.4 database to SQL Server 2000 where the values were all
positive after the copy. I changed to the latest Oracle driver (9.0.1) that
would still work with Oracle 7.3.4 and this resolved that problem. I have
not seen a case where a SQL Server to SQL Server copy would confuse numeric
values.
Sample data from SQL Server 2000 SP3 Query Analyzer
Extended_amount (sorted descending) No minus sign at the bottom.
8.99
7.99
4.08
3.19
2.99
2.98
2.59
2.29
2.19
2.19
1.99
1.79
1.59
1.49
.69
.00
.00
.00
.00
.00
.00
.00
.00
.00
.00
.00
.00
2.29
2.98
2.98
3.19
4.08
4.08
--
27.43
The cube reports: 66.63
Update the 8.99 to 8.99 * -1 and the Query Analyzer results are:
Extended_amount
7.99
4.08
3.19
2.99
2.98
2.59
2.29
2.19
2.19
1.99
1.79
1.59
1.49
.69
.00
.00
.00
.00
.00
.00
.00
.00
.00
.00
.00
.00
2.29
2.98
2.98
3.19
4.08
4.08
-8.99
--
9.45
The cube reports: 48.65
Suggestions are appreciated.
Earl Newcomer|||Earl,
There are probably some values for which the sign byte of the
decimal value is something other than 01 or 00. See the explanation
here:
http://groups.google.co.uk/groups?q=218AE3EB-60D7-4E5E-A2F1
You should be able to find the values with the funky sign and fix them
as shown in the above thread like this:
update T set
decVal = 1*decVal
where [if a large table, identify just the ones that have bad signs if
possible]
or something similar if the values should be negative. A handful of
data providers seem to cause this problem, but I don't recall seeing it
between SQL Servers before. How did you move the data from
one to the other?
Steve Kass
Drew University
Earl Newcomer wrote:
>I have a case where the sum of a column containing positive, zero and
>negative values is correct in Query Analyzer but the values in the column do
>not contain a '-' (minus sign). The rows sort as expected but they do not
>have a minus sign. I can update a positive row value by multiplying it by -1
>and that value shows with a minus sign. The newly updated row sorts with the
>other negative values but it is the only row with a minus sign.
>This "condition" came to light when a cube built on the fact table (where
>this is occurring) was not summing correctly. The cube sees all values as
>positive as if an absolute function was being used. When the one positive
>row was updated to a negative value as described above, the cube recognized
>the negative value and the measure changed accordingly. What is it in the
>SQL Server table or in the cube interpretation of the table values that is
>causing this? Is there a workaround?
>Another twist. The table, pmix_fact1, with which we see this behavior was
>copied from another SQL Server 2000 server. The cubes built on that table
>sum correctly. I suspect the process of copying the table may be introducing
>this behavior. Has anyone experienced anything like this?
>I have seen this once before when using an ODBC driver to copy table rows
>from an Oracle 7.3.4 database to SQL Server 2000 where the values were all
>positive after the copy. I changed to the latest Oracle driver (9.0.1) that
>would still work with Oracle 7.3.4 and this resolved that problem. I have
>not seen a case where a SQL Server to SQL Server copy would confuse numeric
>values.
>Sample data from SQL Server 2000 SP3 Query Analyzer
>Extended_amount (sorted descending) No minus sign at the bottom.
>8.99
>7.99
>4.08
>3.19
>2.99
>2.98
>2.59
>2.29
>2.19
>2.19
>1.99
>1.79
>1.59
>1.49
>.69
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>2.29
>2.98
>2.98
>3.19
>4.08
>4.08
>--
>27.43
>The cube reports: 66.63
>Update the 8.99 to 8.99 * -1 and the Query Analyzer results are:
>Extended_amount
>7.99
>4.08
>3.19
>2.99
>2.98
>2.59
>2.29
>2.19
>2.19
>1.99
>1.79
>1.59
>1.49
>.69
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>.00
>2.29
>2.98
>2.98
>3.19
>4.08
>4.08
>-8.99
>--
>9.45
>The cube reports: 48.65
>Suggestions are appreciated.
>Earl Newcomer
>
>
>|||Steve,
Well done. That was exactly the issue. Thank you very much. I tested the
fix of 1*value and it works as you mentioned. How does that fix the data, do
you know?
To answer your question about copying between SQL Server instances I don't
think this condition is caused by copying between SQL Servers. I do know
that once a column has an incorrect sign bit in a SQL Server table that it
will copy between SQL Server tables carrying with it the incorrect sign bit.
In our case, I suspect the developers have used different ODBC drivers over
time and some values were copied correctly while other values suffer from the
sign bit problem. Since the cube is partitioned some of the partitions are
built with correctly stored data and AS reports those values correctly while
other partitions are built with incorrectly stored data and AS reports those
incorrectly.
We will add another DTS task to check the sign bit from now on.
Thanks again,
Earl Newcomer
"Steve Kass" wrote:
> Earl,
> There are probably some values for which the sign byte of the
> decimal value is something other than 01 or 00. See the explanation
> here:
> http://groups.google.co.uk/groups?q=218AE3EB-60D7-4E5E-A2F1
> You should be able to find the values with the funky sign and fix them
> as shown in the above thread like this:
> update T set
> decVal = 1*decVal
> where [if a large table, identify just the ones that have bad signs if
> possible]
> or something similar if the values should be negative. A handful of
> data providers seem to cause this problem, but I don't recall seeing it
> between SQL Servers before. How did you move the data from
> one to the other?
> Steve Kass
> Drew University
>
> Earl Newcomer wrote:
> >I have a case where the sum of a column containing positive, zero and
> >negative values is correct in Query Analyzer but the values in the column do
> >not contain a '-' (minus sign). The rows sort as expected but they do not
> >have a minus sign. I can update a positive row value by multiplying it by -1
> >and that value shows with a minus sign. The newly updated row sorts with the
> >other negative values but it is the only row with a minus sign.
> >
> >This "condition" came to light when a cube built on the fact table (where
> >this is occurring) was not summing correctly. The cube sees all values as
> >positive as if an absolute function was being used. When the one positive
> >row was updated to a negative value as described above, the cube recognized
> >the negative value and the measure changed accordingly. What is it in the
> >SQL Server table or in the cube interpretation of the table values that is
> >causing this? Is there a workaround?
> >
> >Another twist. The table, pmix_fact1, with which we see this behavior was
> >copied from another SQL Server 2000 server. The cubes built on that table
> >sum correctly. I suspect the process of copying the table may be introducing
> >this behavior. Has anyone experienced anything like this?
> >
> >I have seen this once before when using an ODBC driver to copy table rows
> >from an Oracle 7.3.4 database to SQL Server 2000 where the values were all
> >positive after the copy. I changed to the latest Oracle driver (9.0.1) that
> >would still work with Oracle 7.3.4 and this resolved that problem. I have
> >not seen a case where a SQL Server to SQL Server copy would confuse numeric
> >values.
> >
> >Sample data from SQL Server 2000 SP3 Query Analyzer
> >Extended_amount (sorted descending) No minus sign at the bottom.
> >8.99
> >7.99
> >4.08
> >3.19
> >2.99
> >2.98
> >2.59
> >2.29
> >2.19
> >2.19
> >1.99
> >1.79
> >1.59
> >1.49
> >.69
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >2.29
> >2.98
> >2.98
> >3.19
> >4.08
> >4.08
> >--
> >27.43
> >
> >The cube reports: 66.63
> >
> >Update the 8.99 to 8.99 * -1 and the Query Analyzer results are:
> >Extended_amount
> >7.99
> >4.08
> >3.19
> >2.99
> >2.98
> >2.59
> >2.29
> >2.19
> >2.19
> >1.99
> >1.79
> >1.59
> >1.49
> >.69
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >.00
> >2.29
> >2.98
> >2.98
> >3.19
> >4.08
> >4.08
> >-8.99
> >--
> >9.45
> >
> >The cube reports: 48.65
> >
> >Suggestions are appreciated.
> >
> >Earl Newcomer
> >
> >
> >
> >
> >
>|||Earl Newcomer wrote:
>Steve,
>Well done. That was exactly the issue. Thank you very much. I tested the
>fix of 1*value and it works as you mentioned. How does that fix the data, do
>you know?
>
As far as I can tell, so long as SQL Server has to do an arithmetic
problem, it will create a new and proper decimal value and overwrite the
existing one. Fortunately, it appears that the decimal arithmetic
algorithms were not highly tuned to do clever things with the existing
sign byte/bit. Only very simple updates, like SET D = -D might operate
directly on the existing sign information.
SK
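Pulling the thread's advice together: the symptom is a value that compares and sorts as negative but renders without a minus sign, so the rendering itself can be used to flag the suspect rows. A sketch against the fact table named above (verify on a copy first; the CHARINDEX test is an assumption based on the display behavior described in this thread, not a documented check):

```sql
-- Flag rows whose value behaves as negative in comparisons but
-- converts to a string with no minus sign (the corrupt-sign symptom).
SELECT Extended_amount
FROM pmix_fact1
WHERE Extended_amount < 0
  AND CHARINDEX('-', CONVERT(varchar(30), Extended_amount)) = 0;

-- Steve's fix: any arithmetic forces SQL Server to rewrite the value
-- with a proper sign byte.
UPDATE pmix_fact1
SET Extended_amount = 1 * Extended_amount
WHERE Extended_amount < 0
  AND CHARINDEX('-', CONVERT(varchar(30), Extended_amount)) = 0;
```

As a DTS post-load check, a count of rows matching the first query should stay at zero once the offending driver is out of the pipeline.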
>To answer your question about copying between SQL Server instances I don't
>think this condition is caused by copying between SQL Servers. I do know
>that once a column has an incorrect sign bit in a SQL Server table that it
>will copy between SQL Server tables carrying with it the incorrect sign bit.
>In our case, I suspect the developers have used different ODBC drivers over
>time and some values were copied correctly while other values suffer from the
>sign bit problem. Since the cube is partitioned some of the partitions are
>built with correctly stored data and AS reports those values correctly while
>other partitions are built with incorrectly stored data and AS reports those
>incorrectly.
>We will add another DTS task to check the sign bit from now on.
>Thanks again,
>Earl Newcomer
>"Steve Kass" wrote:
>
>>Earl,
>> There are probably some values for which the sign byte of the
>>decimal value is something other than 01 or 00. See the explanation
>>here:
>>http://groups.google.co.uk/groups?q=218AE3EB-60D7-4E5E-A2F1
>>You should be able to find the values with the funky sign and fix them
>>as shown in the above thread like this:
>>update T set
>> decVal = 1*decVal
>>where [if a large table, identify just the ones that have bad signs if
>>possible]
>>or something similar if the values should be negative. A handful of
>>data providers seem to cause this problem, but I don't recall seeing it
>>between SQL Servers before. How did you move the data from
>>one to the other?
>>Steve Kass
>>Drew University
>>
>>Earl Newcomer wrote:
>>
>>I have a case where the sum of a column containing positive, zero and
>>negative values is correct in Query Analyzer but the values in the column do
>>not contain a '-' (minus sign). The rows sort as expected but they do not
>>have a minus sign. I can update a positive row value by multiplying it by -1
>>and that value shows with a minus sign. The newly updated row sorts with the
>>other negative values but it is the only row with a minus sign.
>>This "condition" came to light when a cube built on the fact table (where
>>this is occurring) was not summing correctly. The cube sees all values as
>>positive as if an absolute function was being used. When the one positive
>>row was updated to a negative value as described above, the cube recognized
>>the negative value and the measure changed accordingly. What is it in the
>>SQL Server table or in the cube interpretation of the table values that is
>>causing this? Is there a workaround?
>>Another twist. The table, pmix_fact1, with which we see this behavior was
>>copied from another SQL Server 2000 server. The cubes built on that table
>>sum correctly. I suspect the process of copying the table may be introducing
>>this behavior. Has anyone experienced anything like this?
>>I have seen this once before when using an ODBC driver to copy table rows
>>from an Oracle 7.3.4 database to SQL Server 2000 where the values were all
>>positive after the copy. I changed to the latest Oracle driver (9.0.1) that
>>would still work with Oracle 7.3.4 and this resolved that problem. I have
>>not seen a case where a SQL Server to SQL Server copy would confuse numeric
>>values.
>>Sample data from SQL Server 2000 SP3 Query Analyzer
>>Extended_amount (sorted descending) No minus sign at the bottom.
>>8.99
>>7.99
>>4.08
>>3.19
>>2.99
>>2.98
>>2.59
>>2.29
>>2.19
>>2.19
>>1.99
>>1.79
>>1.59
>>1.49
>>.69
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>2.29
>>2.98
>>2.98
>>3.19
>>4.08
>>4.08
>>--
>>27.43
>>The cube reports: 66.63
>>Update the 8.99 to 8.99 * -1 and the Query Analyzer results are:
>>Extended_amount
>>7.99
>>4.08
>>3.19
>>2.99
>>2.98
>>2.59
>>2.29
>>2.19
>>2.19
>>1.99
>>1.79
>>1.59
>>1.49
>>.69
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>.00
>>2.29
>>2.98
>>2.98
>>3.19
>>4.08
>>4.08
>>-8.99
>>--
>>9.45
>>The cube reports: 48.65
>>Suggestions are appreciated.
>>Earl Newcomer

Wednesday, March 7, 2012

Inclusive WHERE expression

I have a column YearGiven (smallint, null). I need to be able to use a WHERE clause that will include all years. Something like this:

SELECT * FROM Documents WHERE YearGiven = *

I realize that if I eliminate the WHERE clause entirely this will produce the desired result. However, in my VB app the Select statement must account for the possibility that the YearGivenTextBox used in the Full Text Search is empty. This is not a problem if I can assign a value to the empty textbox variable that will include all years.

If you use a @YearGivenTextBox parameter, you could use the following code:

Code Snippet

SELECT * FROM Documents WHERE YearGiven = @YearGivenTextBox OR @YearGivenTextBox IS NULL

--Also you could try

SELECT * FROM Documents WHERE YearGiven = @YearGivenTextBox OR @YearGivenTextBox = ''

If @YearGivenTextBox is not empty, the first part of the condition filters the rows; otherwise the second part is true for every row and SQL Server returns them all.

|||

This would work:

SELECT * FROM Documents WHERE YearGiven IN (SELECT YearGiven FROM Documents)

You could also have a look at dynamic SQL to build your SQL string on-the-fly at run time and pass through the parameters you need; here's a fantastic article on how to do that:

http://www.sommarskog.se/dynamic_sql.html

Ray

|||

Don't use logical expressions against the variables in the WHERE clause; it forces the engine to use an Index Scan even when an Index Seek would be possible. It is always better to check with an IF statement.

Don't try to reduce the number of lines; always give preference to performance.

Code Snippet

if @yeargiventextbox is null or @yeargiventextbox = '' -- isnull(@yeargiventextbox, '') = ''
    select * from documents
else
    select * from documents where yeargiven = @yeargiventextbox

|||

Thanks for suggestions. You have all given me some things to think about (especially Ray's link to the article on dynamic SQL). I am reconsidering my original strategy for how to work with the WHERE clause.

Manivannan, how would I implement the IF-ELSE code snippet you provided in my Visual Basic app? Would it be embedded in the main SELECT statement, or set apart by using WHERE 1 = 1 in the SELECT statement?

|||

Following up on the link provided above by Ray, I have been researching the concept of Dynamic SQL:

Dynamic Search Conditions in T-SQL
An SQL text by Erland Sommarskog, SQL Server MVP.

http://www.sommarskog.se/dyn-search.html


Although it is a little complex for me with my current understanding of T-SQL, it does seem to address some of the problems I am facing and may provide a good learning experience (if nothing else.) He is filtering with 12 parameters. In my application I will have 9 (6 comboboxes and 3 textboxes) plus my basic WHERE CONTAINS full text search. So this may be comparable.

QUESTION 1: Based on the title of this thread, my initial question is: What does the expression WHERE 1=1 mean? Sommarskog is using his version of the Northwind DB and there is no column 1 that I can see. Is this a T-SQL convention for appending parameters to the main query? Or what?

QUESTION 2: Does this seem like a good approach for me based on the info I have provided above?

Thanks again for your patience and expertise in explaining these difficult concepts (for me) and providing workable options.

Here is Sommarskog's code example for the Dynamic Search stored procedure cited above:


Code Snippet

CREATE PROCEDURE search_orders_1
    @orderid int = NULL,
    @fromdate datetime = NULL,
    @todate datetime = NULL,
    @minprice money = NULL,
    @maxprice money = NULL,
    @custid nchar(5) = NULL,
    @custname nvarchar(40) = NULL,
    @city nvarchar(15) = NULL,
    @region nvarchar(15) = NULL,
    @country nvarchar(15) = NULL,
    @prodid int = NULL,
    @prodname nvarchar(40) = NULL,
    @debug bit = 0 AS

DECLARE @sql nvarchar(4000),
        @paramlist nvarchar(4000)

SELECT @sql =
    'SELECT o.OrderID, o.OrderDate, od.UnitPrice, od.Quantity,
            c.CustomerID, c.CompanyName, c.Address, c.City,
            c.Region, c.PostalCode, c.Country, c.Phone,
            p.ProductID, p.ProductName, p.UnitsInStock,
            p.UnitsOnOrder
     FROM dbo.Orders o
     JOIN dbo.[Order Details] od ON o.OrderID = od.OrderID
     JOIN dbo.Customers c ON o.CustomerID = c.CustomerID
     JOIN dbo.Products p ON p.ProductID = od.ProductID
     WHERE 1 = 1'

IF @orderid IS NOT NULL
    SELECT @sql = @sql + ' AND o.OrderID = @xorderid' +
                         ' AND od.OrderID = @xorderid'

IF @fromdate IS NOT NULL
    SELECT @sql = @sql + ' AND o.OrderDate >= @xfromdate'

IF @todate IS NOT NULL
    SELECT @sql = @sql + ' AND o.OrderDate <= @xtodate'

IF @minprice IS NOT NULL
    SELECT @sql = @sql + ' AND od.UnitPrice >= @xminprice'

IF @maxprice IS NOT NULL
    SELECT @sql = @sql + ' AND od.UnitPrice <= @xmaxprice'

IF @custid IS NOT NULL
    SELECT @sql = @sql + ' AND o.CustomerID = @xcustid' +
                         ' AND c.CustomerID = @xcustid'

IF @custname IS NOT NULL
    SELECT @sql = @sql + ' AND c.CompanyName LIKE @xcustname + ''%'''

IF @city IS NOT NULL
    SELECT @sql = @sql + ' AND c.City = @xcity'

IF @region IS NOT NULL
    SELECT @sql = @sql + ' AND c.Region = @xregion'

IF @country IS NOT NULL
    SELECT @sql = @sql + ' AND c.Country = @xcountry'

IF @prodid IS NOT NULL
    SELECT @sql = @sql + ' AND od.ProductID = @xprodid' +
                         ' AND p.ProductID = @xprodid'

IF @prodname IS NOT NULL
    SELECT @sql = @sql + ' AND p.ProductName LIKE @xprodname + ''%'''

SELECT @sql = @sql + ' ORDER BY o.OrderID'

IF @debug = 1
    PRINT @sql

SELECT @paramlist = '@xorderid int,
                     @xfromdate datetime,
                     @xtodate datetime,
                     @xminprice money,
                     @xmaxprice money,
                     @xcustid nchar(5),
                     @xcustname nvarchar(40),
                     @xcity nvarchar(15),
                     @xregion nvarchar(15),
                     @xcountry nvarchar(15),
                     @xprodid int,
                     @xprodname nvarchar(40)'

EXEC sp_executesql @sql, @paramlist,
     @orderid, @fromdate, @todate, @minprice,
     @maxprice, @custid, @custname, @city, @region,
     @country, @prodid, @prodname

|||

OK, Question # 1: What does WHERE 1 = 1 mean?

Here is what I found:

"If you're building a WHERE clause on the fly, and you don't know if there are any more expressions in the WHERE clause, then starting with 1=1 insures that you'll create a valid WHERE clause and the SELECT won't blow up. I don't recommend it but it works and it's quick."


"It's a standard way to have a "where" clause that it's always true."

"It allows the developers to not worry ... Normally used in dynamically generated SQL."

Question # 2: Is Dynamic SQL approach best for me?

Still don't know yet. I am trying a more standard sproc approach and have gotten some of it to work. I can query in Visual Basic using a sproc with 6 parameters. I don't know how to include my Full Text Search parameter in the sproc, and I don't know how to use IF-ELSE when a parameter value is NULL (the TextBox is empty or ComboBox is unselected). Here is my sproc and the Visual Basic code that works:

Code Snippet

CREATE PROC usp_Advanced_Search
    @doctype nvarchar(10) = NULL,
    @year varchar(6) = NULL,
    @sex varchar(6) = NULL,
    @category nvarchar(10) = NULL,
    @agenum smallint = NULL,
    @agecat nvarchar(10) = NULL AS

SELECT FullDocuments.FullDocNo, FullDocuments.DocType, Details.Year
FROM FullDocuments
INNER JOIN Details ON FullDocuments.FullDocNo = Details.FullDocNo
WHERE DocType = @doctype AND Year = @year AND sex = @sex
  AND category = @category AND agenum = @agenum AND agecat = @agecat

Code Snippet

Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles SearchButton.Click
    Dim conn As New SqlConnection("Data Source=OFFICE\FULLTEXTSEARCH;Initial Catalog=ECR;Integrated Security=True")
    Dim Command As New SqlCommand("usp_Advanced_Search", conn)
    Command.CommandType = CommandType.StoredProcedure

    Dim SelectedDocType = DocTypeComboBox.Text.ToString
    Dim SelectedYear = YearTextBox.Text.ToString

    Command.Parameters.Add("@DocType", SqlDbType.VarChar, 10)
    Command.Parameters("@DocType").Value = DocTypeComboBox.Text.ToString
    Command.Parameters.Add("@Year", SqlDbType.VarChar, 6)
    Command.Parameters("@Year").Value = YearTextBox.Text.ToString
    Command.Parameters.Add("@Category", SqlDbType.VarChar, 12)
    Command.Parameters("@Category").Value = CategoryComboBox.Text.ToString
    Command.Parameters.Add("@Sex", SqlDbType.VarChar, 6)
    Command.Parameters("@Sex").Value = SexComboBox.Text.ToString
    Command.Parameters.Add("@AgeNum", SqlDbType.SmallInt)
    Command.Parameters("@AgeNum").Value = AgeNumTextBox.Text.ToString
    Command.Parameters.Add("@AgeCat", SqlDbType.VarChar, 8)
    Command.Parameters("@AgeCat").Value = AgeCatComboBox.Text.ToString

    Dim adapter As SqlDataAdapter = New SqlDataAdapter()
    adapter.SelectCommand = Command
    Dim ds As New DataSet()
    conn.Open()
    adapter.Fill(ds)
    conn.Close()
    DataGridView1.DataSource = ds.Tables(0)
End Sub

I have tried to include a FullTextSearch Parameter in my sproc like this:

Code Snippet

CREATE PROC usp_Advanced_Search3
    @doctype nvarchar(10) = NULL,
    @year varchar(6) = NULL,
    @sex varchar(6) = NULL,
    @category nvarchar(10) = NULL,
    @agenum smallint = NULL,
    @agecat nvarchar(10) = NULL AS

DECLARE @SearchTerm NVARCHAR(100)
SET @SearchTerm = 'SearchTextBox.Text'

SELECT FullDocuments.FullDocNo, FullDocuments.DocType, Details.Year
FROM FullDocuments
INNER JOIN Details ON FullDocuments.FullDocNo = Details.FullDocNo
WHERE DocType = @doctype AND Year = @year AND sex = @sex
  AND category = @category AND agenum = @agenum AND agecat = @agecat AND CONTAINS(SectionText, 'SearchTerm')

I then added this code to the VB app:

Command.Parameters.Add("@SearchTerm", SqlDbType.NVarChar, 100)

Command.Parameters("@AgeCat").Value = SearchTextBox.Text.ToString

The VB solution builds successfully but when I try the Full Text Search I get this error:

Procedure or function usp_Advanced_Search has too many arguments specified.

Any specific suggestions or code would be greatly appreciated.
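
For what it is worth, the "too many arguments" error means the procedure being called does not declare @SearchTerm in its parameter list: DECLARE inside the body creates a local variable, not an argument a caller can pass. A hedged, untested sketch of the change is below (note also that CONTAINS should reference the variable rather than the literal string 'SearchTerm', and that the VB code above assigns the search text to @AgeCat rather than @SearchTerm):

Code Snippet

CREATE PROC usp_Advanced_Search3
    @doctype nvarchar(10) = NULL,
    @year varchar(6) = NULL,
    @sex varchar(6) = NULL,
    @category nvarchar(10) = NULL,
    @agenum smallint = NULL,
    @agecat nvarchar(10) = NULL,
    @SearchTerm nvarchar(100) = NULL AS  -- declared in the header, not with DECLARE

SELECT FullDocuments.FullDocNo, FullDocuments.DocType, Details.Year
FROM FullDocuments
INNER JOIN Details ON FullDocuments.FullDocNo = Details.FullDocNo
WHERE DocType = @doctype AND Year = @year AND sex = @sex
  AND category = @category AND agenum = @agenum AND agecat = @agecat
  AND CONTAINS(SectionText, @SearchTerm)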