Joe Celko

VALUES() and Long Parameter Lists

22 July 2010

To make progress as a relational database programmer, you have to think in terms of sets rather than lists, arrays or sequential data. Until that point, you'll feel the need to pass lists, arrays and the like to functions and procedures. Joe suggests some unusual ways around the difficulty and broods on the power of the VALUES constructor.

The original idea of SQL and RDBMS was that all you need to model data is the table. This is a set-oriented approach to data which falls naturally into declarative programming. Sets are defined by intension or by extension. An intensional definition of a set gives you a rule (characteristic function) with which to test membership; such definitions are handy for sets of large or unknown size and membership. For example, the set of even numbers is the set of integers that leave a remainder of zero when divided by two.

A set defined by extension is a finite collection of values. For example, Donald Duck's nephews are {Huey, Louie, Dewey}; well, actually, they are Huebert, Deuteronomy and Louis, but you have to be a real comic-book geek to know that.

What you do not have in RDBMS are arrays, linked lists, sequential files and other data structures that have to be navigated. This drives procedural programmers nuts. About once a month, one of them will post a request to a newsgroup asking how to pass a CSV (comma-separated values) list in a string as a parameter to a stored procedure. Every time, links to the same several tricks are posted over and over, with the definitive articles being at Erland Sommarskog's website.

The approaches to this problem fall into three classes: (1) iterative algorithms, (2) auxiliary tables, and (3) non-SQL tools.

The iterative approaches explain themselves. Just imagine that you are back in your freshman C or Assembly language programming class and are working at the lowest level you can. Read the CSV from left to right. When you get to a comma, convert the substring to the left of the comma into the target data type. Drop the comma; repeat the WHILE loop until you have consumed the string.
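
A minimal sketch of that loop in T-SQL follows; the procedure name and the assumption that the list holds integers are mine, not from any standard library:

-- Sketch of the iterative approach: walk the string, splitting at each comma.
CREATE PROCEDURE ParseListLoop (@inputstring VARCHAR(1000))
AS
BEGIN
DECLARE @Params TABLE
 (parameter_position INTEGER IDENTITY(1,1),
  param INTEGER);
DECLARE @comma_pos INTEGER;

SET @inputstring = @inputstring + ','; -- sentinel comma on the end
SET @comma_pos = CHARINDEX(',', @inputstring);
WHILE @comma_pos > 0
BEGIN
 INSERT INTO @Params (param)
 VALUES (CAST(LEFT(@inputstring, @comma_pos - 1) AS INTEGER));
 SET @inputstring = SUBSTRING(@inputstring, @comma_pos + 1, LEN(@inputstring));
 SET @comma_pos = CHARINDEX(',', @inputstring);
END;
SELECT parameter_position, param FROM @Params;
END;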

A more recent variation is to use a Recursive CTE, but it is essentially the same thing.
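
Here is a minimal sketch of that variant; again, the procedure name is illustrative and the list is assumed to hold integers:

-- Sketch of the recursive CTE version: each level of the recursion
-- peels one value off the front of the string.
CREATE PROCEDURE ParseListRecursive (@inputstring VARCHAR(1000))
AS
WITH Splitter (parameter_position, param, rest)
AS
(SELECT 1,
        CAST(LEFT(@inputstring, CHARINDEX(',', @inputstring + ',') - 1)
             AS VARCHAR(1000)),
        CAST(SUBSTRING(@inputstring, CHARINDEX(',', @inputstring + ',') + 1,
                       LEN(@inputstring)) AS VARCHAR(1000))
 UNION ALL
 SELECT parameter_position + 1,
        CAST(LEFT(rest, CHARINDEX(',', rest + ',') - 1) AS VARCHAR(1000)),
        CAST(SUBSTRING(rest, CHARINDEX(',', rest + ',') + 1, LEN(rest))
             AS VARCHAR(1000))
   FROM Splitter
  WHERE rest > '')
SELECT parameter_position, CAST(param AS INTEGER) AS param
  FROM Splitter; -- lists of more than 100 items need OPTION (MAXRECURSION 0)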

Another classic approach is to use a Series table (a table of the integers from 1 to n). First put a comma on both ends of the input string. All of the substrings we want to cast are then bracketed by commas, so we can extract them as a set. Here is one version of such a procedure.

 

CREATE PROCEDURE ParseList (@inputstring VARCHAR(1000))
AS
SELECT S1.i AS parameter_position,
       CAST (SUBSTRING (@inputstring,
                        S1.i,
                        MIN(S2.i) - S1.i - 1) AS INTEGER)
       AS param
  FROM Series AS S1, Series AS S2
 WHERE S1.i < S2.i
   AND S1.i <= LEN(@inputstring) + 2
   AND S2.i <= LEN(@inputstring) + 2
   AND SUBSTRING(',' + @inputstring + ',', S1.i, 1) = ','
   AND SUBSTRING(',' + @inputstring + ',', S2.i, 1) = ','
 GROUP BY S1.i;

Or maybe the CTE variant:

 

CREATE PROCEDURE ParseList (@inputstring VARCHAR(1000))
AS
;WITH CommaPositions
AS
(SELECT S.i AS i
   FROM Series AS S
  WHERE S.i <= LEN(@inputstring) + 2
    AND SUBSTRING(',' + @inputstring + ',', S.i, 1) = ',')

SELECT CP1.i AS parameter_position,
       CAST(SUBSTRING(@inputstring,
                      CP1.i,
                      MIN(CP2.i) - CP1.i - 1) AS INTEGER)
       AS param
  FROM CommaPositions AS CP1, CommaPositions AS CP2
 WHERE CP1.i < CP2.i
 GROUP BY CP1.i;

To be honest, you would probably want to trim blanks and perhaps run other tests on the string, such as checking that LOWER(@inputstring) = UPPER(@inputstring) to catch alphabetic characters, and so forth.
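
For instance, a couple of those defensive checks might look like this sketch (what counts as a valid token is, of course, your decision):

-- Sketch: trim blanks and reject alphabetic characters before the CAST().
DECLARE @token VARCHAR(10);
SET @token = ' 42 ';
SET @token = LTRIM(RTRIM(@token)); -- trim leading and trailing blanks
IF @token > '' AND LOWER(@token) = UPPER(@token) -- no letters present
   SELECT CAST(@token AS INTEGER) AS param
ELSE
   RAISERROR ('Bad token in list', 16, 1);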

The integer substrings are located between the (i)-th and (i+1)-th commas. In effect, the Series table replaces the loop counter. The Series table has to have enough numbers to cover the entire string but, unless you really like typing in long parameter lists, this should not be a problem. The two predicates S1.i <= LEN(@inputstring) + 2 and S2.i <= LEN(@inputstring) + 2 in the first version are there to avoid a Cartesian product across the whole Series table.

Finally, the external methods involve a CLR function or XML processing. The most fundamental complaint against them concerns maintaining the code. Your CLR routine will be written in one of the dozens of CLR languages, and you are assuming that the next guy will also be fluent in it. The same complaint about extra maintenance work can be made against XML. The SQL optimizer is never going to do anything with this external code, and the overhead of passing data via various APIs will never go away.

The simplest answer is to use a long parameter list to construct lists and derived tables inside the procedure body. SQL Server can handle up to 2,100 parameters, which should be more than enough for practical purposes. SQL Server is actually a wimp in this regard; DB2 can pass 32K parameters, and Oracle can have 64K parameters.

These database engines use those extreme limits for certain system functions that the users will never see. Nobody expects a human being to type in thousands of lines of CSV text with any of these techniques. In practice, I have seen lists of 64 (chessboard), 81 (Sudoku solver) and 361 (Go board) simple parameters. Most of the time, 100 parameters is a very safe upper limit. If I need to pass more than that, I want to look at an ETL tool or something else.

The main advantages of the long parameter lists are:

  1. The code is highly portable to any Standard SQL. One-product bigots need to get over the idea that they will never see more than one SQL engine in their careers.
  2. The code is pure native SQL and not an external procedural language. The people who maintain it can be SQL programmers. There will be no need to send your C# programmer to F# classes next year to keep up.
  3. The optimizer will treat them like any other parameter. They can be sniffed. You can use a RECOMPILE option and get the best performance possible each time the procedure runs.
  4. The compiler will treat them like any other parameter. You get the expected error messages and conversions. Frankly, I have never seen anyone who used one of the other techniques write RAISERROR() calls to return the same error messages as the compiler does.
  5. It is easy to generate the code with a text editor. Think about how hard it is to write in a CLR language you don't know, or to write a loop in T-SQL. I just cut & paste a skeleton with 250 parameters and then trim it down to what I need. I am going to show a list of five parameters in my skeleton code to save space.

The simplest example of the long parameter list technique is just to use the parameters directly: pass what you need and let the rest of the list default to NULLs.

 

CREATE TABLE Zoo
(sku CHAR(11) NOT NULL PRIMARY KEY,
 animal_name VARCHAR(25) NOT NULL);

INSERT INTO Zoo (sku, animal_name)
VALUES ('39634-62349', 'Horse'),
       ('74088-65564', 'Cow'),
       ('16379-19713', 'Pig'),
       ('39153-69459', 'Yak'),
       ('17986-24537', 'Aardvark'),
       ('14595-35050', 'Moose'),
       ('40469-27478', 'Dog'),
       ('44526-67331', 'Cat'),
       ('93365-54526', 'Tiger'),
       ('22356-93208', 'Elephant');

 

CREATE PROCEDURE Animal_Picker
(@p1 CHAR(11) = NULL,
 @p2 CHAR(11) = NULL,
 @p3 CHAR(11) = NULL,
 @p4 CHAR(11) = NULL,
 @p5 CHAR(11) = NULL)
AS
SELECT sku, animal_name
  FROM Zoo
 WHERE sku IN (@p1, @p2, @p3, @p4, @p5);

 

EXEC Animal_Picker '39153-69459', '17986-24537', '99999-99999' ;

If you had a long list, there would be a lot of NULLs in the IN() predicate list. It would be a good idea to clean them out, but you cannot do that with a simple IN() predicate list. We need to get the data into a column in a derived table first. This is another simple text-editing problem.


CREATE PROCEDURE Animal_Picker
(@p1 CHAR(11) = NULL,
 @p2 CHAR(11) = NULL,
 @p3 CHAR(11) = NULL,
 @p4 CHAR(11) = NULL,
 @p5 CHAR(11) = NULL)
AS
SELECT sku, animal_name
  FROM Zoo
 WHERE sku
       IN (SELECT in_sku
             FROM (VALUES (@p1), (@p2), (@p3), (@p4), (@p5))
                  AS X(in_sku)
            WHERE in_sku IS NOT NULL);

 

EXEC Animal_Picker '39153-69459', '17986-24537', '99999-99999';

The use of the VALUES() constructor is new to SQL Server programmers, but other products have had it for a while now. Here is a simple way to keep a list of constants in a way that prevents them from being accidentally updated:

CREATE VIEW ConstantsList (constant_name, constant_value)
AS
SELECT X.constant_name, X.constant_value
  FROM (VALUES ('pi', 3.14159), ('e', 2.71828), ('phi', 1.61803), ..)
       AS X (constant_name, constant_value);

This is not quite complete. If you go to the VALUES list, you can explicitly CAST() the first row in the constructor and the rest of the table will follow it:

(VALUES (CAST ('pi' AS VARCHAR(5)), CAST (3.14159 AS DECIMAL(8,5))),
        ('e', 2.71828),
        ('phi', 1.61803),
        ..)

We could use a view for a similar list of constants, the NATO (international radiotelephony) spelling alphabet:

CREATE VIEW SpellingAlphabet (telephony, character)
AS
SELECT X.telephony, X.character
  FROM (VALUES ('Alpha', 'a'), ('Bravo', 'b'), ('Charlie', 'c'),
               ('Delta', 'd'), ('Echo', 'e'), ('Foxtrot', 'f'), ('Golf', 'g'),
               ('Hotel', 'h'), ('India', 'i'), ('Juliet', 'j'), ('Kilo', 'k'),
               ('Lima', 'l'), ('Mike', 'm'), ('November', 'n'), ('Oscar', 'o'),
               ('Papa', 'p'), ('Quebec', 'q'), ('Romeo', 'r'), ('Sierra', 's'),
               ('Tango', 't'), ('Uniform', 'u'), ('Victor', 'v'), ('Whiskey', 'w'),
               ('X-Ray', 'x'), ('Yankee', 'y'), ('Zulu', 'z'))
       AS X (telephony, character);

 

 

SELECT telephony
  FROM Series
       INNER JOIN SpellingAlphabet
       ON character = SUBSTRING('Simple-Talk', i, 1)
 WHERE i <= LEN('Simple-Talk')
 ORDER BY i;

 

telephony
---------
Sierra
India
Mike
Papa
Lima
Echo
Tango
Alpha
Lima
Kilo

 

VALUES is a row constructor, so it can do more than just parse a CSV string into scalars. Let's say I want to sell off the animals in my zoo. I can put the order header data in the front of the parameter list, then use the rest of the list for order details.

A skeleton for this insertion will use the price list I set, and join it to the animals' SKU codes on the order form I construct from the parameter list.

CREATE PROCEDURE OrderAndDetails (...)
AS
BEGIN
INSERT INTO Orders (..)
VALUES (@order_nbr, ..);

INSERT INTO OrderDetails
SELECT @order_nbr, OrderForm.sku, OrderForm.qty, P.list_price
  FROM (VALUES (@in_sku_1, @in_qty_1),
               (@in_sku_2, @in_qty_2),
               -- ..
               (@in_sku_n, @in_qty_n)) AS OrderForm (sku, qty),
       PriceList AS P
 WHERE OrderForm.sku = P.sku;
END;

Do you see the idea? An entire program is being built from the long parameter list in a single declarative statement that uses no special features.

Think about what you can do with CASE expressions and computations inside the VALUES() constructor. Most procedural control of flow can be avoided. Likewise, a MERGE statement is quite handy; you can put a table constructor in the USING clause. What we are doing is writing SQL like a functional programming language.
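
Here is a sketch of that MERGE pattern, upserting rows built from the parameter list; the @new_sku_n and @new_name_n parameters are illustrative, not from the earlier examples:

-- Sketch: upsert from a long parameter list via VALUES() in the USING clause.
-- (In practice, filter out all-NULL rows first, as shown earlier.)
MERGE INTO Zoo AS Z
USING (VALUES (@new_sku_1, @new_name_1),
              (@new_sku_2, @new_name_2),
              (@new_sku_3, @new_name_3)) AS NewStock (sku, animal_name)
   ON Z.sku = NewStock.sku
 WHEN MATCHED
 THEN UPDATE SET animal_name = NewStock.animal_name
 WHEN NOT MATCHED
 THEN INSERT (sku, animal_name)
      VALUES (NewStock.sku, NewStock.animal_name);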

But isn't the VALUES() constructor just like a table variable? Table variables are like tables, but with some restrictions. You cannot use a table variable as an input or an output parameter; it is scoped just like any local variable you create with a DECLARE statement, and it no longer exists after the procedure exits. That much is like a parameter. But a table variable has to be declared and allocated in storage, then used with INSERT, UPDATE and DELETE statements to get data in and out of it. There are also other restrictions on indexes and ALTER statements, but ignore those for now.
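
For contrast, here is a minimal sketch of the ceremony a table variable needs before it holds anything (SQL Server 2008 table-valued parameters relax the input restriction, but they must be built on a predefined table type and declared READONLY):

-- Sketch: a table variable must be declared, then loaded with DML;
-- it cannot be passed in or out, and it vanishes when its scope ends.
DECLARE @sku_list TABLE (sku CHAR(11) NOT NULL PRIMARY KEY);

INSERT INTO @sku_list (sku)
VALUES ('39153-69459'), ('17986-24537');

SELECT Z.sku, Z.animal_name
  FROM Zoo AS Z
 WHERE Z.sku IN (SELECT sku FROM @sku_list);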

But the long parameter list is made up of plain old parameters, which means they can be used for outputs as well as inputs. The whole thing is also contained in one statement for the optimizer to work with.
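
A quick sketch of an output slot in such a list (the procedure name is illustrative):

-- Sketch: plain parameters can report back; @match_cnt carries a result out.
CREATE PROCEDURE Animal_Counter
(@p1 CHAR(11) = NULL,
 @p2 CHAR(11) = NULL,
 @p3 CHAR(11) = NULL,
 @match_cnt INTEGER = NULL OUTPUT)
AS
SELECT @match_cnt = COUNT(*)
  FROM Zoo
 WHERE sku IN (@p1, @p2, @p3);
GO

DECLARE @cnt INTEGER;
EXEC Animal_Counter '39153-69459', '17986-24537', NULL, @cnt OUTPUT;
SELECT @cnt AS matches_found; -- returns 2 with the sample Zoo data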

Do I think these techniques will always be the best solution? Of course not. There is no such thing in SQL. But it is worth looking at when you get an appropriate problem.

Joe Celko

Author profile:

Joe Celko is one of the most widely read of all writers about SQL, and was the winner of the DBMS Magazine Reader's Choice Award four consecutive years. He is an independent consultant living in Austin, TX. He has taught SQL in the US, UK, the Nordic countries, South America and Africa.
He served 10 years on ANSI/ISO SQL Standards Committee and contributed to the SQL-89 and SQL-92 Standards.
He has written over 800 columns in the computer trade and academic press, mostly dealing with data and databases. He is the author of eight books on SQL for Morgan-Kaufmann, including the best selling SQL FOR SMARTIES.
Joe is a well-known figure on newsgroups and forums, and he is famous for his dry wit. He is also interested in science fiction.




Subject: Have you really researched this, Joe?
Posted by: Max Lewis (not signed in)
Posted on: Monday, July 26, 2010 at 12:11 AM
Message: Using many parameters (> 20) instead of a CSV will be a maintenance and development nightmare, and even technically the plan sizes will be bigger than with other methods. There are libraries already written to do this for us - there is no need to write anything, just use the library!

In terms of passing a table valued parameter, there is no need to use INSERT to populate it at all - we just pass in a DataTable from the application (see http://msdn.microsoft.com/en-us/library/bb675163.aspx).

This is very poor Joe, it's well known regardless of product that this is a bad approach. What seems clever in theory doesn't always work in practice!

Erland's articles are the definitive ones - he's bothered to do the practical research in comparing the methods. Could you possibly post your analysis as to why you conclude this is a better method than using CSV or TVP?

Subject: An XML solution doesn't require CLR or user defined functions.
Posted by: Eric Russell (view profile)
Posted on: Monday, July 26, 2010 at 8:32 AM
Message: An XML solution for passing a multi-valued parameter doesn't necessarily require CLR or user-defined functions. Starting with SQL Server 2005, there are system-supplied functions for this, and only one SELECT statement is required to parse this parameter into a table result.
Other RDBMS platforms like Oracle or DB2 have very similar XML datatypes and functions, so it requires only minor retrofitting when porting. Actually, no T-SQL stored procedure is portable to Oracle PL/SQL or another RDBMS without some modification, except perhaps Sybase.

create procedure customer_list ( @customers xml )
as
select customers.id.value('.','int') as customer_id
from @customers.nodes('id') as customers(id);
go

exec customer_list
@customers = '<id>120</id><id>98</id><id>157</id>';
go

customer_id
-----------
120
98
157

Subject: Missed the point?
Posted by: Stan Kehoe (not signed in)
Posted on: Monday, July 26, 2010 at 8:39 AM
Message: Joe's point was to show a concept that is valid across multiple RDBMS vendors, not to provide the most SQL-08 optimized solution.

I frequently need to provide user-selected lists with thousands of keys to a query (say, a cost center hierarchy, or maybe a list of cusips), so I just stuff the list into a key table along with some kind of job key, then pass the job key on to a stored proc to do simple joins then purge the job from the key table. (Since I'm also a single-vendor guy, I usually use the .NET bulk loader to insert the key list :P )

Subject: Concurrency problems
Posted by: Tony Rogerson (not signed in)
Posted on: Monday, July 26, 2010 at 10:00 AM
Message: Hi Stan,

If you are using a single table to support multiple concurrent users inserting into it, and you are also then deleting the rows, you can come a severe cropper on that one, with blocking and deadlocking coming out of your ears!

I think in this posting Joe is just pushing something he's pushed for a long time on the newsgroups, something he thinks is "clever", "different" and, he believes, "elegant"; yet time and time again SQL MVPs and other SQL experts have pointed out the limitations of the multiple-parameter method in terms of maintainability and cost to development, when there are other standard-compliant solutions far more elegant and maintainable.

Tony.

Subject: This would not scale for us
Posted by: Chris Leonard (not signed in)
Posted on: Monday, July 26, 2010 at 12:07 PM
Message: At Go Daddy, we sometimes have to process lists of up to 1 million internet domains at a time. This is just part of our regular processing, not including special one-off processes that may involve many more domains than this. Eventually we chunk the work into batches, but trust me, this first step is sometimes needed to guarantee a consistent mapping between what a customer asked for ("update the nameservers for all of my domains") and what we will do ("update the nameservers for domains 1, 2, 3, etc."). No technique using discrete scalar values will accomplish this. CSV parameters, TVPs, and XML all handle the task in easy-to-maintain ways that don't require any non-SQL knowledge and that scale very well. And many companies have something in their data domains that they have millions of that they may need to process as a set.

It's ironic that while encouraging people to think in sets, Joe is summarily dismissing readily available set-oriented solutions in favor of what is essentially a denormalized list of scalar variables.

-Chris

Subject: Response to Stan
Posted by: Chris Leonard (not signed in)
Posted on: Monday, July 26, 2010 at 12:10 PM
Message: Stan, even without assuming SQL 2008, but just assuming you are using a database platform upon which you can compile modular procs and functions, then passing in long CSV sets is still quite portable, very efficient, and far more maintainable.

Cheers,
Chris

Subject: set based languages should support sets as parameters
Posted by: AlexK (view profile)
Posted on: Monday, July 26, 2010 at 12:28 PM
Message: For the sake of consistency, set based languages should support sets as parameters.

This is not the case for 2005. So, where performance matters, I transmit long lists of numbers, up to 100K, as an image. It works lightning fast, and Erland Sommarskog was kind enough to include an example in his article:

http://www.sommarskog.se/arrays-in-sql-2005.html

in the subsection titled "Passing Numbers as Binary".

Subject: lighten up people
Posted by: Terry Grignon (not signed in)
Posted on: Monday, July 26, 2010 at 12:44 PM
Message: I like reading Joe's articles because they make me think about different ways of using SQL. Sometimes a little inefficiency is worth it if it provides me with new tools/ideas.

Subject: Beware the Webbie Coder
Posted by: BuggyFunBunny (view profile)
Posted on: Monday, July 26, 2010 at 1:05 PM
Message: --Nobody expects a human being to type in thousands of lines of CSV text with any of these techniques. In practice, I have seen lists of 64 (chessboard), 81 (Sudoku solver) and 361 (Go board) simple parameters. Most of the time, 100 parameters is a very safe upper limit. If I need to pass more than that, I want to look at an ETL tool or something else.

Unfortunately, these limits aren't as practical as they seem at first blush. In the Webbie world (Java or VB or C# or PHP or ...) it is not only common, but considered Best Practice, to pass the entirety of an input page to the database, said page containing multiple pick lists. The backend is then expected to re-write all of the rows (on one or more tables) referenced, regardless of whether or not the row data has changed; the Webbie coders are far too lazy to track changes in all of the data they throw on the screen. This Best Practice makes me angrier than any other Webbie paradigm. It is just so bloody stupid. But it happens all the time.

Subject: To Chris on the set / denormalised
Posted by: Tony Rogerson (not signed in)
Posted on: Tuesday, July 27, 2010 at 12:27 AM
Message: I thought that too. In the application we have the set as a DataTable, for instance; Joe wants us to break that up into a repeating group, entirely breaking 1NF.

Relations are not specific to the database engine; relations can be used in the application, so long as the definition of the structure follows Codd's rules on its being a relation.

There are many other problems with this that are covered in various places including my own blog that debunk this "Parameter repeating group" method.

Tony.

Subject: Quick replies to feedback
Posted by: Celko (view profile)
Posted on: Tuesday, July 27, 2010 at 12:10 PM
Message: >> BuggyFunBunny: In the Webbie world (java or VB or C# or PHP or ...) it is not only common, but considered Best Practice, to pass the entirety of an input page to the database; said page containing multiple pick lists. The backend is then expected to re-write all of the rows (on one or more tables) <<

LOL! You are probably too young to remember the IBM 3270 family of terminals. A video form using a horrible fixed font popped up on the screen, you filled it out like a paper form and hit the XMIT key. OS/360 grabbed each of these fixed-size screen buffers, sent them to a batch system, and returned the errors the backend found. Same processing model with ugly graphics and no WIMP tools!

>> Chris Leonard: At Go Daddy, we sometimes have to process lists of up to 1 million internet domains at a time .. Eventually we chunk the work into batches, but trust me, this first step is sometimes needed to guarantee a consistent mapping between what a customer asked for ("update the nameservers for all of my domains") and what we will do ("update the nameservers for domains 1, 2, 3, etc.").<<

I hope that you are using an ETL tool for large amounts of data. These tricks are for procedures and not bulk data loads and scrubbing. What I did not show was using the VALUES() table constructor in the USING clause of a MERGE statement. This makes "upserting" a one-statement procedure and guarantees transaction levels.

Wouldn't your skeleton problem be done with a query that finds the table with the customer's domains to pass to the UpdateNameserver() procedure? The bitch would be a customer who wants something that has to be done one account at a time ("Update the name servers for my domains that are the names of flowers!" or worse -- and I am sure you have one like that).

>> Chris Leonard: Stan, even without assuming SQL 2008, but just assuming you are using a database platform upon which you can compile modular procs and functions, then passing in long CSV sets is still quite portable, very efficient, and far more maintainable <<

Not as portable as you would think! While casting strings to other data types is defined in ANSI, you will find that the same data in a parameter and a string don't always get handled the same way. I will post a comparison list in a day or so.

Also, why is typing the same CSV list into a string rather than a parameter list "far more maintainable"? We write code for the end user (aka "dumber than dirt, out to destroy our beautiful code!") who never sees the insides of a procedure. The repeated list of parameters is easy to maintain if you have a text editor with regular expressions and/or macros.

>> AlexK: For the sake of consistency, set based languages should support sets as parameters. This is not the case for 2005. <<

Agreed. While it is a pain to have to declare table types outside the procedure body, at least you can do it in 2008. This is the Oracle model in PL/SQL.

>> Tony Rogerson: Joe wants us to break that up into a repeating group entirely breaking 1NF <<

No, I want to take a non-scalar CSV and turn it into a table of scalars with the VALUES() constructor. Hopefully, the original data was at least 1NF, but we cannot control that. I can pull out dups, NULLs and do other edits easily once I get it inside if it needs scrubbing.

Subject: Cannot post my table, but here are notes:
Posted by: Celko (view profile)
Posted on: Tuesday, July 27, 2010 at 3:16 PM
Message: I did a series of simple tests on a minimal version of a CSV parser statement and a long parameter list procedure call. But I cannot get my table to print in this comment box. Here are the notes:

Notes:
1) The CSV parser gives more serious errors, but the difference between level 15 and 16 is not much, unless you have code looking for it.

2) CSV fails to do some valid conversions.

3) Date testing has its own problems in CSV. ISDATE() and CAST() do not work the same way, so you need extra code in the CSV parser that does not depend on that function. Microsoft expects an ISO-8601 date in the string for the CAST(), but not for their proprietary ISDATE().

SELECT ISDATE ('2010-01-01'); -- TRUE
SELECT CAST ('2010-01-01' AS DATE); -- no problem
SELECT ISDATE ('010-01-01'); -- TRUE
SELECT CAST ('010-01-01' AS DATE); -- Msg 241, Level 16, State 1, Line 1
Conversion failed when converting date and/or time from character string.

4) OUTPUT parameters are impossible with the CSV parser:
CREATE PROCEDURE ShowList
(@p1 INTEGER = NULL,
@p2 INTEGER = NULL,
@p3 INTEGER = NULL,
@p4 INTEGER = NULL,
@p5 INTEGER = NULL OUTPUT)
AS ..;

5) The NULL should produce an empty table, as it does. But the empty string should not become a zero. This is a string conversion error as far as I can tell; at least it is consistent, though.
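
For instance, the conversion in question:

SELECT CAST('' AS INTEGER); -- returns 0, not an error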

6) You can pass local variables with the long parameter list, but not with CSV:

DECLARE @local_int_1 INTEGER;
DECLARE @local_int_2 INTEGER;

SET @local_int_1 = 42;
SET @local_int_2 = POWER (@local_int_1 ,2);

EXEC ShowList @local_int_1, @local_int_2;


Subject: Say it Ain't so Joe
Posted by: BuggyFunBunny (view profile)
Posted on: Tuesday, July 27, 2010 at 5:08 PM
Message: Joe:
LOL! You are probably too young to remember the IBM 3270 family of terminals. A video form using a horrible fixed font popped up on the screen, you filled it out like a paper form and hit the XMIT key. OS/360 grabbed each of these fixed-size screen buffers, sent them to a batch system, and returned the errors the backend found. Same processing model with ugly graphics and no WIMP tools!

I guess you don't read my stuff all that closely (most under a Real Name elsewhere), but my main complaint with Webbie Coders is their ignorance of the plain fact of Web Coding: it is semantically exactly a 3270, just with pixels. The 3270 is a block mode, local edit, disconnected terminal. So is a browser (modulo AJAX, so far). I spent more time, lots more, working VT-220/RS-232(14,400)/*nix databases, which is a character mode, server edited, connected terminal. The AJAX folks are trying to re-create that environment, but don't even know it. Yes, I've lived the 3270 world, both of them, then and now.

Subject: Different than Union Select?
Posted by: Anonymous (not signed in)
Posted on: Monday, August 02, 2010 at 8:31 AM
Message: Is there a difference in using the VALUES construct versus using a UNION SELECT?

-- example --

SELECT in_sku
FROM (VALUES (@p1), (@p2), (@p3), (@p4), (@p5)) AS X(in_sku)
WHERE in_sku IS NOT NULL;

-- versus --

SELECT in_sku
FROM (SELECT @p1 UNION SELECT @p2 UNION SELECT @p3 UNION SELECT @p4 UNION SELECT @p5) AS X(in_sku)
WHERE in_sku IS NOT NULL;

Subject: UNION SELECT vs VALUES
Posted by: Celko (view profile)
Posted on: Monday, August 02, 2010 at 9:44 AM
Message: >> Is there a difference in using the VALUES construct versus using a UNION SELECT? <<

The VALUES row and table constructor is ANSI/ISO Standard; the UNION SELECT is dialect from the old Sybase "code museum". In Standard SQL, the SELECT.. must always have a FROM clause.

The SELECT.. UNION drops duplicate rows and the SELECT.. UNION ALL keeps duplicates.

Likewise, using the lone SELECT for multiple assignments is dialect,

SELECT @a=1, @b=2, ..;

while ANSI Standard uses SET for assignment:

SET (a, b, c, ..) = (1, 2, 3,..);

 
