Using Stored Procedures to Provide an Application’s Business-Logic Layer

When faced with a complex business application that had to be delivered with minimum staffing, on-time and within budget, Dwain's team chose to encapsulate and implement most of an application's business logic in SQL Server, using an interface made up of stored procedures. Without this approach, the team was convinced that it would not have been possible to deliver that level of business logic complexity within the timeframe. Dwain explains how it was done, and provides a template.


It is not a trivial matter to build the sort of multi-tier, web application that every corporation seems to want. It is too easy to end up with a morass of extremely complicated code that is difficult to get working correctly even with extensive testing, and it can furthermore be quite challenging to maintain it in production.

T-SQL Stored Procedures (SPs), along with some views and functions, are a useful way to encapsulate and implement most of an application’s business logic, especially that which retrieves the underlying data from the tables (master or transaction), and/or updates it.  This is useful because:

  • It allows the database developer to work fairly independently of the application developers.
  • It means that the application developers can concentrate their efforts on getting the best possible user interface without needing to understand the relational design of the base tables within the database.
  • It provides a logical abstraction of the database that can be ‘mocked up’ to assist with testing.
  • It allows a much more robust security regime for the database, reducing the likelihood that SQL Injection can occur.

I will describe two basic templates that can be used for many commonly-seen forms and web pages, which can then be elaborated as required. 

Occasionally in this article I will refer to an application that my company recently and successfully developed using this approach, along with the reasons why it was successful.  Without going too far into the details, I can say that in my 7+ years of experience with this company, this application ranks within the top three or four in complexity among all of those I’ve been intimately involved with.  It was the culmination of refining the approach I will describe, and it was successful because it was delivered with minimum staffing, on-time and within budget.  While it has not been in production long enough yet to fully assess customer satisfaction levels, initial reception has been promising and that is no mean feat considering it was designed to replace a legacy system that had been in use for over twenty years.

Now that I’ve established those rather lofty expectations, let’s start by explaining the basic idea within a common business requirement.

The Header/Details Form Pattern

I’ll start by describing, as an example, a relatively simple but common business document, an invoice, after which I’ll flesh out the T-SQL that would implement its functionality.

There are two parts to an invoice: header and details.

In the invoice’s header, you’ll normally find at least the following information:

  •  An invoice number, which is often the primary key in the underlying table and is usually system-generated.
  •  The invoice date.
  •  The customer to which the invoice will be sent.
  • That customer’s billing details.

There may also be other sundry information that you may think of as being part of an invoice’s footer, because it appears at the end of the physical document once printed.  In reality, it is stored with the header.  This does not include the invoice total, which is derived by summing the details.

In the invoice’s details, there will be:

  • A line number, or some other way of sequencing the detail lines of the invoice, which uniquely identifies each detail line.
  •  An item code (the product description would normally be retrieved by joining the item code back to the item table).
  •  Quantity of the item.
  •  Price of the item (unit price).
  •  Extended price, or the product of quantity and price which, being derived from them, need not be actually included in the table.

Here is an example of a typical User-interface form that could be designed to create/maintain an invoice.

[Figure: a typical user-interface form for creating/maintaining an invoice]

There are several buttons on this form, each of which represents an action that the user can take.  This form would initially be in “query mode,” meaning that the user enters the filtering criteria required to retrieve the unique invoice record and then presses ‘Execute Query’.  The ‘Query‘ button returns the form to ‘query mode’.  Alternatively, the ‘Insert‘ button may be used to create a new invoice, with the invoice number being auto-generated upon clicking ‘Save‘.  We’ve also offered our user the means to ‘Delete‘ an invoice.

In the ‘Invoice Details’ section, users have the options to update or delete an existing row (U and D buttons), or insert a new row, all through the action buttons provided.  Most modern web applications will replace the buttons with cute icons, with context-sensitive help text that appears as you hover the mouse over them, but the concept remains the same.

The shaded patterns on the form represent text-panels where error messages may be displayed, so you see one panel below the action buttons, for errors that occur when processing the header, and also one above the grid for errors on actions on the details.

Let’s also mention the basic business rules for using the form.

  • In order to query (retrieve) an invoice, that invoice’s number must be supplied.
  • To create a new invoice, the customer ID must be entered before pressing the ‘Insert’ button, so that the existing customer details can be used to populate the relevant default values on the form.
  • The invoice number for a newly-created invoice is generated when it is first saved.  The format is “IN” plus the year of the invoice, plus a running number that must be unique.
  • The user may change the billing information (e.g., the customer name, address, etc.) with those changes to be applied to the single invoice only.
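As a sketch of the numbering rule only (“IN” + year + running number, e.g. IN20141000002), assuming a Counters table that holds the last-used running number per year; the table and column names here are illustrative, not the article’s actual schema:

```sql
-- Illustrative sketch of the invoice-number rule.  The Counters table
-- and its columns are assumptions.
DECLARE @Year   CHAR(4) = CAST(YEAR(GETDATE()) AS CHAR(4)),
        @NextNo INT;

-- Atomically take the next running number for the year
UPDATE dbo.Counters WITH (UPDLOCK)
SET    @NextNo = LastInvoiceNo = LastInvoiceNo + 1
WHERE  CounterYear = @Year;

DECLARE @InvoiceNo VARCHAR(20) =
        'IN' + @Year + CAST(@NextNo AS VARCHAR(10));
```

The single UPDATE both increments and captures the counter, so no second session can read the same value in between.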

Implementing Form Actions through SPs

We can develop two Stored Procedures (SPs) that will together support the functionality of this form.  The first is implemented to perform all of the allowable form actions (buttons) for the header record, including a cascade delete of the details/header if that is required.  The second, somewhat simpler, SP is developed to support maintenance of the details.  Even though the ‘details’ SP is simpler, it will follow the same template or pattern as for the header SP.  We will discuss the differences later.

The design of the SPs is intended to insulate those who are developing the forms from needing to know about the details of the physical implementation of the data in the relational tables.

This will allow DBAs to set access rights to allow only execute privileges on SPs, giving access to the data indirectly via ownership chaining, without having to assign rights to insert, update, delete, etc. directly against the tables (or views).  This effectively prevents almost all possibility of SQL Injection attacks.
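A minimal sketch of such a rights assignment (the role name and the use of the dbo schema are illustrative):

```sql
-- Application role gets EXECUTE on the stored procedures only; no direct
-- table rights are granted.  Ownership chaining lets the procedures reach
-- the tables on the caller's behalf.  Names are illustrative.
CREATE ROLE InvoiceAppUser;
GRANT EXECUTE ON SCHEMA::dbo TO InvoiceAppUser;
```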

So far, other than my rather vague statement that the SPs “implement each of the form’s actions,” I haven’t mentioned precisely what the SPs will actually do.  This should become clear as we review the template in the next section, but one thing that I’ll mention right now is that the SPs must return something.

For our invoices form, the header SP should return three tables:

  • What I will call the “ReturnErrors” table, which delivers results from audits performed in the SP.
  • A table consisting of zero or one row with columns of data that represent the header information, and any additional derived data based on that.
  •  A third table that consists of the invoice details, with columns that are consistent with the data appearing in the details grid.

If you are using .Net as your development environment, you can make that front-end code retrieve data from all three of the tables mentioned above (or more if needed).  I will not go into the details on how this is done, because I’m not a .Net developer. Suffice it to say that a dataset or OdbcDataReader will store all tables returned by a stored procedure in one action.

An Invoice Header SP Template

Below we have a template that can be used generically for any “header” type set of actions such as those defined for our form, created specifically to maintain our invoice header.  There are a few comments interspersed with the code to help you follow along, and a more detailed description at the end.
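Since the full listing is not reproduced here, the following condensed skeleton suggests the shape such a template takes.  All object, column and parameter names are illustrative assumptions, not the original code:

```sql
-- Condensed, illustrative skeleton of a header-maintenance SP.
CREATE PROCEDURE dbo.Invoice_Header
    @action      TINYINT,              -- 0=query, 1=insert, 2=save, 3=delete
    @InvoiceNo   VARCHAR(20) = NULL,   -- "optional" parameters, one per
    @InvoiceDate DATE        = NULL,   -- header column; which are really
    @CustID      VARCHAR(10) = NULL,   -- required depends on @action
    @CustName    VARCHAR(50) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Error_Code INT = 0, @Error_Message VARCHAR(500) = NULL;

    -- Table variable mirroring the header table; every column allows NULLs
    DECLARE @Header TABLE (
        InvoiceNo   VARCHAR(20) NULL,
        InvoiceDate DATE        NULL,
        CustID      VARCHAR(10) NULL,
        CustName    VARCHAR(50) NULL);

    -- Cleanse input parameters: treat blanks as NULLs
    SELECT @InvoiceNo = NULLIF(@InvoiceNo, ''), @CustID = NULLIF(@CustID, '');

    -- Audits: calling-convention checks, then form-level checks, then
    -- checks against other tables; stop at the first failure.
    IF @action NOT IN (0, 1, 2, 3)
        SELECT @Error_Code = 1, @Error_Message = 'Invalid @action';
    IF @Error_Code = 0 AND @action = 0 AND @InvoiceNo IS NULL
        SELECT @Error_Code = 5, @Error_Message = 'Invoice no. is required';
    IF @Error_Code = 0 AND @action = 1 AND
       NOT EXISTS (SELECT 1 FROM dbo.Customers WHERE CustID = @CustID)
        SELECT @Error_Code = 10, @Error_Message = 'Unknown customer';

    -- Actions
    IF @Error_Code = 0 AND @action = 0          -- query
        INSERT @Header
        SELECT InvoiceNo, InvoiceDate, CustID, CustName
        FROM   dbo.InvoiceHeader WHERE InvoiceNo = @InvoiceNo;
    ELSE IF @Error_Code = 0 AND @action = 1     -- insert: build a stub header
        INSERT @Header
        SELECT NULL, CAST(GETDATE() AS DATE), CustID, CustName
        FROM   dbo.Customers WHERE CustID = @CustID;
    ELSE IF @Error_Code = 0 AND @action IN (2, 3)
    BEGIN                                       -- save/delete
        BEGIN TRANSACTION;
        BEGIN TRY
            -- Allocate the invoice number, delete details on delete, then
            -- a MERGE that handles save/delete and OUTPUTs into @Header
            -- (steps described in the walk-through that follows)
            PRINT 'body omitted in this sketch';
        END TRY
        BEGIN CATCH
            SELECT @Error_Code = 99, @Error_Message = ERROR_MESSAGE();
        END CATCH;
        IF @Error_Code = 0 COMMIT; ELSE IF @@TRANCOUNT > 0 ROLLBACK;
    END;

    -- Return the three tables
    SELECT @Error_Code AS ErrorCode, @Error_Message AS ErrorMessage;  -- ReturnErrors
    SELECT * FROM @Header WHERE InvoiceNo IS NOT NULL;                -- InvoiceHeader
    SELECT InvoiceNo, LineItem, ItemCode, Quantity, Price             -- InvoiceDetails
    FROM   dbo.InvoiceDetails
    WHERE  InvoiceNo = @InvoiceNo AND @Error_Code = 0 AND @action <> 3;
END;
```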

Let’s now describe the relevant components of our SP template:

1. The parameters passed to the SP consist of an @action variable (required) and “optional” parameters for each column in the header table.  Some are not really optional, but required according to the action being performed (validated in the SP).

2. There’s a section to define some local variables and also a table variable that has the same columns as our header table, but no primary key.  Every column, including the primary key column, must allow NULL values.

3. There’s a section to cleanse the input parameters.  Mainly this consists of setting blank values to NULL, in case the forms developers weren’t particularly thorough in passing NULLs when it makes sense to do so.

4. The auditing section, which consists of three parts:

a. A section that makes sure the SP was called correctly.  These errors should never appear when testing the form (where @Error_Code IN (1, 2, 3, 4)).

b. A section that performs any audits on the data passed in, which should also be performed in the form.  These are audits that do not require a check against a database table: for example, making sure that required data is supplied, data range checks (such as what might be done by a CHECK constraint on a table column), etc.  These are where @Error_Code IN (5, 6, 7, 8, 9).

c. A final section that performs any audits against other tables in the database.  This normally consists of audits to enforce FOREIGN KEY constraints, though more complex business rules may also be enforced here.  For example, suppose you had different types of invoices for different customer types: an audit here might first determine the customer’s type and then check that the invoice type is valid for that customer.  These are where @Error_Code IN (10, 11).

5. Then there is the “actions” section, also broken into three parts:

a. For @action=0 (query), we simply retrieve the row from the underlying table into our table variable.

b. For @action=1 (insert), we define all the defaults we’ll return to the form for display (we can call this a stub header).

c. For @action IN (2, 3) (save and delete) we have the most complex part of the processing:

i. Begin a transaction.  The next steps may, depending on the action, modify multiple tables, so you want all of them to complete successfully or, if one of the steps fails, to ROLLBACK so that none of them take effect.

ii. Start a TRY/CATCH block to deal with any errors during the actual data processing (e.g., a deadlock).

iii. Allocate the next invoice number from the Counters table.  In SQL 2012 or later, you may prefer to use a SEQUENCE for this.  We use the UPDLOCK hint on the table to make sure no one else can allocate a sequence number until our transaction completes.

iv. Delete the details when the action is ‘delete‘.

v. Do a MERGE that separately handles the update/delete actions (based on @action) and inserts the newly created invoice number on insert.

vi. Note how the MERGE dumps the results using an OUTPUT clause into our table variable, including a row of NULL values in the event a delete occurred (we’ll remove that later).

vii. In the CATCH block, we have some generic error handling that is useful when we run the SP in SQL Server Management Studio and need to know more about any errors it generates (displayed in the Messages pane).

viii. Then we COMMIT or ROLLBACK the transaction, depending on whether we had an error or not.

6. Finally, we return the three tables that we discussed above.  Naming each table something relevant is one way of retrieving the tables using .Net.

a. ReturnErrors, containing an error code and error message in one row, the latter of which can be displayed directly on our form.

b. InvoiceHeader, containing one row used to refresh the form’s fields for the header.  Note that if the ErrorCode column of the ReturnErrors table is not zero (or if this was a delete), no row is returned.

c. InvoiceDetails, containing all the details for this invoice.  Again, no rows are returned when the ErrorCode is non-zero or for a delete action.  Note that two line numbers are returned: LineItem (the table-specific line item for an invoice detail record) and InvoiceLineNo (a sequential row number used for display purposes only in the form’s grid).

Notice also how we perform the audits only until one of them fails (IF @Error_Code = 0 AND …) and do not perform the actions unless all audits pass.  We have omitted the invoice total, which would be simple enough to add as a calculated column to the header table as the sum of the details.
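The MERGE described in steps iv–vi might be sketched as follows (a fragment from inside the SP; @NewInvoiceNo is the freshly allocated number, and all names are the illustrative ones used earlier, not the original code):

```sql
-- Illustrative MERGE handling save (@action = 2) and delete (@action = 3)
-- in one statement; the OUTPUT clause captures the result into @Header.
MERGE dbo.InvoiceHeader AS tgt
USING (SELECT @InvoiceNo AS InvoiceNo) AS src
    ON tgt.InvoiceNo = src.InvoiceNo
WHEN MATCHED AND @action = 3 THEN
    DELETE
WHEN MATCHED AND @action = 2 THEN
    UPDATE SET InvoiceDate = @InvoiceDate,
               CustID      = @CustID,
               CustName    = @CustName
WHEN NOT MATCHED AND @action = 2 THEN
    INSERT (InvoiceNo, InvoiceDate, CustID, CustName)
    VALUES (@NewInvoiceNo, @InvoiceDate, @CustID, @CustName)
OUTPUT inserted.InvoiceNo, inserted.InvoiceDate,
       inserted.CustID, inserted.CustName
INTO @Header (InvoiceNo, InvoiceDate, CustID, CustName);

-- A delete produces a row of all-NULL inserted.* values in @Header,
-- which is removed afterwards:
DELETE @Header WHERE InvoiceNo IS NULL;
```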

We’ll show some results of testing this SP in the next section.

Some Test Data and Initial Testing Scripts for the Header SP

Thanks to the wonders of deferred name resolution, we could have already created the above SP without the underlying tables being present.  You can run the script below to create the tables which we will use to demonstrate the actions of the header SP.

While you may take exception to some of my chosen data types, and to the simplifications I made regarding prices appearing on price lists and selling units of measure, please indulge me: we’re not here today to demonstrate how best to structure such data.

The DROPs are provided as ‘teardown’ code to help you clean up your sandbox once all is said and done.
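As the original scripts are not reproduced here, a minimal set of tables sufficient to exercise the header SP might look like this (data types and names are illustrative, matching the sketches above):

```sql
-- Minimal illustrative schema; not the author's exact script.
CREATE TABLE dbo.Customers (
    CustID   VARCHAR(10)  NOT NULL PRIMARY KEY,
    CustName VARCHAR(50)  NOT NULL,
    Address  VARCHAR(100) NULL);

CREATE TABLE dbo.InvoiceHeader (
    InvoiceNo   VARCHAR(20) NOT NULL PRIMARY KEY,
    InvoiceDate DATE        NOT NULL,
    CustID      VARCHAR(10) NOT NULL REFERENCES dbo.Customers (CustID),
    CustName    VARCHAR(50) NOT NULL);

CREATE TABLE dbo.InvoiceDetails (
    InvoiceNo VARCHAR(20)  NOT NULL REFERENCES dbo.InvoiceHeader (InvoiceNo),
    LineItem  INT          NOT NULL,
    ItemCode  VARCHAR(15)  NOT NULL,
    Quantity  DECIMAL(9,2) NOT NULL,
    Price     DECIMAL(9,2) NOT NULL,
    PRIMARY KEY (InvoiceNo, LineItem));

CREATE TABLE dbo.Counters (
    CounterYear   CHAR(4) NOT NULL PRIMARY KEY,
    LastInvoiceNo INT     NOT NULL);

-- Teardown, once all is said and done:
-- DROP TABLE dbo.InvoiceDetails, dbo.InvoiceHeader,
--            dbo.Counters, dbo.Customers;
```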

Let’s now look at some test scripts and their results.

[Figures: test scripts for the header SP and their result sets]

We’ll leave it to our interested readers to verify that all audit rules trigger the correct error messages when an audit fails.

An Invoice Details SP Template and some Test Scripts

As noted earlier, the invoice details SP is much simpler than the header’s.  However, it is still based on a trimmed-down version of the header SP’s template.

You may notice that, even though there is only one MERGE statement within the TRY/CATCH block (i.e., no other MERGE, INSERT, UPDATE or DELETE statements), we’ve still included the TRANSACTION in our template because there could very well be more statements if required by the business logic.

The main difference between the details and header templates is that the details template returns only the one table (ReturnErrors) for use by the front-end application.  Also, it is not necessary to use a table variable to store the details.  After the details SP is executed, the form simply re-executes the header SP in query mode to recapture any changes to the details (and possibly the header record itself).  The forms developer must then refresh this information into the fields of the form.
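As a sketch only, reusing the illustrative names from earlier (not the author’s exact code), the trimmed-down details template might look like:

```sql
-- Illustrative details-maintenance SP: audits, one MERGE in a transaction,
-- and a single ReturnErrors result set.
CREATE PROCEDURE dbo.Invoice_Details
    @action    TINYINT,                 -- 1=insert, 2=update, 3=delete
    @InvoiceNo VARCHAR(20),
    @LineItem  INT          = NULL,
    @ItemCode  VARCHAR(15)  = NULL,
    @Quantity  DECIMAL(9,2) = NULL,
    @Price     DECIMAL(9,2) = NULL
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Error_Code INT = 0, @Error_Message VARCHAR(500) = NULL;

    -- Audits (stop at the first failure)
    IF NOT EXISTS (SELECT 1 FROM dbo.InvoiceHeader
                   WHERE InvoiceNo = @InvoiceNo)
        SELECT @Error_Code = 10, @Error_Message = 'Invoice does not exist';

    IF @Error_Code = 0
    BEGIN
        BEGIN TRANSACTION;   -- retained even for a single MERGE
        BEGIN TRY
            MERGE dbo.InvoiceDetails AS tgt
            USING (SELECT @InvoiceNo AS InvoiceNo,
                          @LineItem  AS LineItem) AS src
                ON  tgt.InvoiceNo = src.InvoiceNo
                AND tgt.LineItem  = src.LineItem
            WHEN MATCHED AND @action = 3 THEN DELETE
            WHEN MATCHED AND @action = 2 THEN
                UPDATE SET ItemCode = @ItemCode,
                           Quantity = @Quantity, Price = @Price
            WHEN NOT MATCHED AND @action = 1 THEN
                INSERT (InvoiceNo, LineItem, ItemCode, Quantity, Price)
                VALUES (@InvoiceNo, @LineItem, @ItemCode, @Quantity, @Price);
        END TRY
        BEGIN CATCH
            SELECT @Error_Code = 99, @Error_Message = ERROR_MESSAGE();
        END CATCH;
        IF @Error_Code = 0 COMMIT; ELSE IF @@TRANCOUNT > 0 ROLLBACK;
    END;

    SELECT @Error_Code AS ErrorCode, @Error_Message AS ErrorMessage;
END;
```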

Now let’s test the available actions for the details (from running the test scripts above we now only have invoice number: IN20141000002 available).

[Figures: details SP test scripts and their results]

Notice how in each case we’re selecting all of the rows in our invoice details table to see the effect.  The forms developer would simply rerun the header SP in query mode for the invoice, then pick up the results returned for the details table and refresh that into the grid.

[Figure: results of the details SP call, followed by the header re-query]

The first results pane shows the results from calling the details SP.  The remaining three represent the results from rerunning the query against the header record.

[Figure: further details SP test results]

Business Logic Left to the Presentation Layer

The forms developers will still need to implement some specific business logic in the presentation layer where certain actions should be enabled or disabled. For example:

  •  Enabling/disabling buttons at various points in the process wherever it makes sense to do so: For example, the Save button should probably only be enabled once an invoice has been retrieved (or the Insert stub is displayed).
  •  Any role-enabled functionality probably requires some work by the forms developer to operate the form correctly.  For example, suppose there are two types of users that can access the form: normal and super-users.  Perhaps only super-users are allowed to delete an invoice.  The SP should perform this audit, but a more user-friendly forms behavior is to simply disable the Delete button if the user is not a super user.
  •  Normally, once customer ID is selected and the Insert button pressed, you don’t want the user changing the customer ID, so making that field display-only probably makes sense.
  •  After the user performs the Execute Query action for an invoice and an invoice is returned successfully, you’ll want to make the invoice number field display-only, because it makes no sense to change the invoice number since they are auto-generated by the header SP.

It is also possible to use the SP itself to flag when some or all of these specific conditions apply.  This can be done by adding additional columns to the ReturnErrors table’s results and then using these as control flags in the front-end code.

If you follow the strict advice of your DBAs and limit developers’ direct access to tables and views, you’ll need to take one more step.  For cases where forms populate a lot of dropdown lists, you can write cover FUNCTIONs that return the needed results.  We tend to use schema-bound, in-line Table Valued Functions for best performance.
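A minimal sketch of such a cover function, assuming the illustrative Customers table used earlier:

```sql
-- Illustrative schema-bound, in-line table valued function used to feed
-- a customer dropdown list.  Names are assumptions.
CREATE FUNCTION dbo.CustomerList ()
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
    SELECT CustID, CustName
    FROM   dbo.Customers;
```

Developers can then be granted SELECT on the function alone, keeping the underlying table off-limits.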

Some of the Benefits of the Steps Described Above

The table below summarizes, by development phase, some of the additional benefits of using this approach.

Analysis and Design

1. A single event-driven model, the same for every form, can be developed based upon the actions to be delivered by the SPs and the user actions (buttons) expected on each form.

2. A skilled SQL Developer should probably be able to translate a requirements specification directly into the audits required for a particular form, without the need for an intermediate program specification.

Coding and Unit Testing

3. Forms developers are freed from having to understand the complex business logic underlying the application and can focus on forms behavior and usability improvements.

4. Only essential logic needs to be coded into the SPs on the first pass.  Incremental development of the SPs can add the rest.

Miscellaneous Testing

5. As the SQL Developer writes test scripts, they can be saved as part of the SP documentation, thus allowing any testing team capable of white-box testing to generate many variants on the calling parameters to the SP to ensure defect-free behavior of the SPs.

6. Once forms development has progressed to the point that the forms are calling the SPs correctly, using SQL Profiler to trap the SQL generated by the form allows the creation of multi-step scripts so that complex transactions can be executed repeatedly.  We’ve used this successfully to generate hundreds of test transactions within a SQL WHILE loop, by simply varying some of the input parameters or assigning random values to them.

Documentation

7. Documentation of the SPs need only be minimal:
  • The call signature (where parameter data types represent the underlying data types of the database tables).
  • The returned errors, clearly denoting which audits should never fail when the SP is called correctly, which should be done in the form, and which should be done only by the SP.
  • The columns returned by each of the tables (and of course the table names) returned by the SP.

Performance

8. Because SPs are compiled and executed as a block of SQL code, they tend to perform a little better than making multiple database connections and executing individual queries.

9. Round trips from the application to the database are eliminated.  It is especially crucial to minimize trips from the client, to the application tier and then to the database, because of the latency inherent in such a trip.  It is also beneficial to reduce the round trips from the application server to the database server, as each connection made and then closed adds latency.

Maintainability

10. Deployment effort was significantly reduced in cases where a fix involved only a change to one of the SPs.  That can be done as an incremental deployment with no need to ask all users to log out while the web application itself is deployed.

Security

11. Through proper controls within the SP, you can avoid issues with SQL Injection.

Some additional information on point #6.  There are automated testing tools that allow complex transactions to be scripted through an application’s forms.  However, these tend to be expensive, and do not eliminate the requirement for some scripting.  In our case, scripting in SQL is probably a much more widely known skill than writing scripts for an automated testing tool.
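As a sketch of the technique in point #6, a captured SP call can be replayed in a loop with randomized inputs.  The procedure and parameter names below reuse the illustrative ones from earlier, not a real captured trace:

```sql
-- Illustrative load-test harness: replay a captured SP call many times,
-- varying the inputs randomly.  Names are assumptions.
DECLARE @i INT = 1;
WHILE @i <= 100
BEGIN
    EXEC dbo.Invoice_Details
        @action    = 1,
        @InvoiceNo = 'IN20141000002',
        @LineItem  = @i,
        @ItemCode  = 'ITEM001',
        -- random quantity between 1 and 10
        @Quantity  = CAST(1 + ABS(CHECKSUM(NEWID())) % 10 AS DECIMAL(9,2)),
        @Price     = 25.00;
    SET @i += 1;
END;
```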

Managing the Development Process

One of the primary roles of a project manager is to select the right team to execute the project.  This is most certainly true when selecting the person for the role that we’re calling the SQL Developer.  That is the person who will construct the SPs that enable this method to work.

Some very specific skills are required.  This person must have:

  •  A deep technical expertise in the dialect of SQL being used, specifically a wide breadth of knowledge of SQL and high-performance code patterns.
  • The knowledge to translate a logical data model into the physical implementation, and the skills to create all of the necessary SQL objects (tables, views, etc.).
  • The ability to develop and test quickly, and deliver reasonably defect-free SPs to support at least three or four developers who will be working on the forms.
  • The ability to make logical decisions on what to include, or what can reasonably be excluded, in the initial SP implementations, because it may be necessary to develop the SPs incrementally so as to stay ahead of the development team.
  •  The ability to translate (possibly) an incomplete or unclear requirements specification into firm audit rules to be implemented in the SP.
  • The thoroughness to document the SPs so that the developers can use them during forms-development. 

I have found that this documentation need only be sufficient to serve its purpose: the SP’s call signature, the errors returned and the columns returned in each of the SP’s tables, along with some test scripts that can be run to verify that the SP is actually working.  This thoroughness must extend to the data types for the parameters being passed to the SP, because these should be identical to the column data types in the underlying tables.  Providing information to the developers on which of the audits are “simple data audits,” and thus should be performed within the form’s logic, is also useful.

In reality of course, the code should also be well-written and commented, to improve troubleshooting, maintenance and the incremental development effort that will go into making the SPs fully functional.  The idea to “Make it Work, Make it Fast and then Make it Pretty” comes into play here.  However, lest Donald Knuth’s statement that “premature optimization is the root of all evil” be misapplied, my recommendation is to leverage best practices and high-performance code patterns to make it “fast enough” at the outset.  For more information on what Knuth’s quote does not mean, read this article by SQL MVP Gail Shaw: Premature Optimization.

It is probably not sufficient to “temporarily divert” a .Net developer to be the SQL Developer, because unless they are very senior they’ll probably fall short in one or more of the areas mentioned above.

The project manager must plan to give the SQL Developer a head start on the development team.  Probably at least a week should suffice.  After that head start, developers can be scheduled to begin working on visual forms design tasks.  By the time that is complete, there should be sufficiently functional SPs available to begin making the forms work.

For estimation purposes, you can expect that initial development of an SP like the one above for headers should be ready in about 0.5 – 1.0 man-day including documentation.  Details SPs should be ready in just a few hours.  Note that this is the basic effort to get them ready for the developer to start.  The overall effort is probably double that to make them fully functional and defect-free, and more complex forms and business requirements may take some additional time.

As the SQL Developer completes his work, it is a good idea to turn over his test scripts to a testing team for some “white box” testing across different scenarios.

The SQL Developer should also maintain a “to-do” list of outstanding, uncompleted items, which can be reviewed periodically and prioritized for implementation.  The project manager should help to prioritize the list, based on progress by the forms-development team.  First priority for the SQL Developer, of course, will always be to fix any issues identified by the forms-development or testing teams, so that his effort is not blocking progress of the forms-development. 

While not specifically discussed in this article, SPs (or views) can also be part of the job of the SQL Developer for reports delivery.  Once again, because reporting requirements may involve complex SQL (and require high-performance), the SQL Developer may be better suited to the task than reports developers.

You may be thinking that it is impossible for one SQL Developer to stay sufficiently ahead of a three-person forms-development team, but I can assure you that it is not impossible because it was done in our reference project with time to spare.

Reflections and Final Commentary

There were a few variants that came up in our development project, which were relatively easy to handle:

  •  Some forms had multiple query criteria, each of which would return a unique row of the underlying table; usually the header table.  This was handled by using a dynamic search stored procedure as recommended by SQL MVP Gail Shaw in her SQL-in-the-Wild blog: Catch all queries.
  • Some of the forms actually had multiple secondary (details) tables, where one grid may be visible at a time and the user was allowed to switch between the grids.  That’s another case of some special logic built into the form, but, for that case, the header SP returned more than three tables.
  • Using SQL Profiler and white-box testing, it was possible to ensure that every audit that should be done in the form was indeed performed there, without invoking a call to the SP, even if the audit was done redundantly in the SP.  The overhead of redundancy in the SP tends to be pretty minimal from a performance perspective, so long as it is limited to audits that don’t check against the database.
  •  When bugs did show themselves in the SPs, because they were well-written, organized and always followed the same template, the bugs were usually identified and fixed within about fifteen minutes.
  • As we were testing, none of the forms transactions seemed to respond in an inordinate amount of time (to me that means more than three seconds).  So the performance of the resulting forms was at least acceptable, even when the update actions might hit five or more different tables, and we confirmed this with the users during User Acceptance Testing.
  • In our application, we were careful to include inserted by/date/time and updated by/date/time on every single record in every header and detail table.  All this was handled within the MERGE statements, and the only requirement imposed on the forms developers was to pass into the SP the login name of the user of the form.
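The dynamic search mentioned in the first bullet above is typically built with sp_executesql, so that only the criteria actually supplied appear in the WHERE clause while values stay parameterized.  A minimal, illustrative sketch (normally this would be the body of a search SP; the variables stand in for its parameters):

```sql
-- Illustrative catch-all search fragment.  Only the criteria actually
-- supplied are appended; values are passed as parameters, not concatenated.
DECLARE @InvoiceNo VARCHAR(20) = NULL,      -- NULL = criterion omitted
        @CustID    VARCHAR(10) = 'CUST01';

DECLARE @sql NVARCHAR(MAX) = N'
    SELECT InvoiceNo, InvoiceDate, CustID
    FROM   dbo.InvoiceHeader
    WHERE  1 = 1';

IF @InvoiceNo IS NOT NULL SET @sql += N' AND InvoiceNo = @InvoiceNo';
IF @CustID    IS NOT NULL SET @sql += N' AND CustID    = @CustID';

EXEC sys.sp_executesql @sql,
     N'@InvoiceNo VARCHAR(20), @CustID VARCHAR(10)',
     @InvoiceNo, @CustID;
```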

I seriously recommend you try this approach because of the benefits we found to the development of our reference application.  It would not have been possible to deliver that level of business logic complexity within the timeframe that we did without using this approach.

Naturally, during the initial week after Go Live, some issues were identified.  We were able to rapidly deploy modifications to the business logic by deploying a single SP.  Regardless of whether this was to correct a bug or to improve how the application delivered results to the users, the users’ satisfaction with how quickly those changes could be implemented was frequently communicated to us.  In this case, the users’ business was very time-critical, so deployments of the entire web application, which required locking the users out for a period of time, needed to be avoided as much as possible.  The first full application deployment after Go Live was not required until after the first week of use.

During that week of Go Live, we also were intensely focused on performance.  The users we were observing were accessing the application over their Intranet, and most form actions were responding instantaneously, with just a few of the more complex search operations taking one to two seconds.  While you would not expect remote users of the application to get quite those same performance results, most of the business is conducted at the Go Live site, so our expectations are that performance of the application will meet their needs.

  • avonwyss

    Another fan of SPs – and some hints
    It’s nice to know that others also see SPs as still having an edge over modern ORM use. For my part I can only confirm that using SPs can really help with code quality and productivity.

    The real benefit of SPs to me is the ability to do proper separation of concerns; also, one has exact control over the SQL code which I find is a must.

    The SPs (with their arguments and what they return) form a contract, similar to what you have in OOP when you define a method on an interface. In fact, I do just that; I have written a library which takes a .NET interface decorated with some attributes and creates a proxy which does the SP call heavy lifting for me. The benefit is twofold: For one the calls to SPs get as simple as a call to a normal object, which reduces clutter a lot. The other is that it enables decoupling of the complete DB layer for testing purposes; you can simply mock the interface to simulate the expected database SP call behavior and therefore eliminate the need for a DB in unit tests (remember, unit tests should have no external dependencies if possible). And the DB developer doesn’t need much of a headstart in this case.

    My library actually goes as far as to also do simple ORM functionality (mapping data from resultsets into objects). The proxy itself is quite powerful; it can not only handle simple data types but also OUTPUT arguments (out and ref on the interface parameters), multiple resultsets, native XML support, and even pass enumerables into the SP (which are then translated into TVPs). So there is no real limitation in the features available with this approach. I feel I get the best of both worlds: proper SQL coding but full interoperability and testability with the OOP languages. While your mileage may vary depending on the programming language and frameworks used, I think the ideas remain the same.

    Regarding your templates, there are some small things I think you should review:
    – Start your SPs with SET XACT_ABORT ON (read why in Guzman’s article "Use Caution with Explicit Transactions in Stored Procedures")
    – Whenever you use MERGE on a shared table (i.e. not a temp table or table variable), you should probably use the HOLDLOCK query hint (again explained by Guzman, this time in "UPSERT Race Condition With MERGE")
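    For readers who haven't seen those articles, a minimal sketch of how both suggestions might look in one of the templates (the procedure, table and parameter names here are invented for illustration, not taken from the article):

    ```sql
    CREATE PROCEDURE dbo.Customer_Upsert
        @CustomerID   INT,
        @CustomerName VARCHAR(100)
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Guzman's first point: any run-time error dooms the
        -- transaction and rolls it back, instead of leaving an
        -- open transaction on the pooled connection.
        SET XACT_ABORT ON;

        BEGIN TRANSACTION;

        -- Guzman's second point: HOLDLOCK (serializable range
        -- locking) closes the race window between MERGE's read
        -- and write phases, so two concurrent calls cannot both
        -- take the INSERT branch for the same key.
        MERGE dbo.Customers WITH (HOLDLOCK) AS tgt
        USING (SELECT @CustomerID, @CustomerName)
              AS src (CustomerID, CustomerName)
           ON tgt.CustomerID = src.CustomerID
        WHEN MATCHED THEN
            UPDATE SET CustomerName = src.CustomerName
        WHEN NOT MATCHED THEN
            INSERT (CustomerID, CustomerName)
            VALUES (src.CustomerID, src.CustomerName);

        COMMIT TRANSACTION;
    END;
    ```

    Both hints cost essentially nothing to include and only matter under concurrency, which is exactly when they are hardest to retrofit.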

  • Anonymous

    And you wonder why NOSQL has taken off…
    That this kind of thinking persists is just plain lamentable. The last place on earth I want business logic is in SPs. This is invariably the result of either "I only know how to use this hammer" thinking or a holdover from the early-1990s belief that DBAs are the only true and faithful keepers of the Holy Grail. But then this is largely a DB website – c’est la vie…

  • Robert young

    NoSql has crashed
    The takeoff was long ago. The reason for the crash: keeping data management in the client, a la COBOL before CICS, is just a mess. Writing it in C# or Java or PHP doesn’t change the semantics. Read up on Allen Holub’s "Bank of Allen" piece and the problem with ATM ROMs. Or Sarah Mei on the perils of MongoDB. Hilarious. Before HTTP there was the synergy of: *nix (or, in a pinch, windows/netware/etc.)/database/RS-232/VT-100. It was client/server, but both in one box/address space. With HTTP, we got a disconnected client, and the notion that all data management, therefore, had to be done at the client. Rubbish.

  • Anonymous

    10 years as a dba/team lead – sorry I like the middle tier
    I had customers that were not married to one particular DB provider, so the move from MS to Oracle was a real possibility. CRUD SPs were generated by a template, and the business logic remained the same as business objects just needed to be mapped to the "new" database.

  • Robert young

    Malcolm’s in the Middle
    There are a number of SP syntax translators available. SP portability is no longer the barrier it used to be.

  • Andrew Clarke

    Re: And you wonder why NOSQL has taken off…
    @Anonymous
    As Editor of Simple-Talk, I’d love to be able to publish a well-reasoned argument, with examples, for keeping business logic out of the data layer. It has been a long and fruitless search. Why not submit us an article?
    Ed.

  • Jeff Moden

    And you wonder why NOSQL has taken off…
    @Anonymous,

    With absolutely no irony intended or implied…

    You’re right. This is a mostly DB website. I second Andrew’s great suggestion that you write an article on the reasons and benefits of doing all of this in another environment such as NoSQL. As he said, such articles are difficult to find, possibly because folks like us might not even know what to search for.

    If you’re wary of wearing a pork chop around your neck into the proverbial lion’s den, then don’t write the article here. Write it on a website that has learned how to do things without the "hammer" of SQL and then post the link here.

    An alternative would also be great. If you know of an existing article that explains how to avoid the "hammer", please do post the link.

    Shifting gears just a tiny bit, I’m all for narrowing the gap between those who write managed code and those that write SQL. We’re all on the same team and the smaller that gap, the better the team we’ll make. With that thought in mind, I extend the invitation to anyone and everyone to write an article on the subject or post a link or two to good, thoughtful articles on the same subject. It would be incredible if someone with deep knowledge and experience in writing for all 3 layers of an application would do something like this without bias but rather with examples and "here’s what’s going to happen if you do this" in any or all of the 3 environments.

    Seriously… I’m not trying to bait deer here. A really good article or two on this subject written by folks with no axe to grind would be a wonderful and educational read.

    Like Red-Green says, "We’re all in this together and I’m pullin’ for ya".

  • Jeff Moden

    Nice article…
    Nice article, Dwain. What I really like the most about it is the inference about how the team worked together to get all this done.

  • Phil Factor

    Re: And you wonder why NOSQL has taken off…
    The best-articulated reasons for keeping the bulk of business processes in the database are to be found in the S.O.L.I.D. Object-oriented Design Principles. It helps code reuse, maintenance, and testability. It ensures the transactionality of all business processes.

    Not all the S.O.L.I.D. principles are relevant, of course, but it is hard to fault the logic of separation of concerns, the Dependency Inversion principle, the Interface Segregation principle, and the Single Responsibility principle.

    I always find it odd to find OO developers who follow the Gang of Four’s recommendations faithfully, yet who insist on accessing the ‘private’ data of a database directly rather than using an abstraction layer or interface to access the database service. So often, what happens is that applications end up attempting to execute business logic processes on the data by using procedural code, without the safety-net of the ACID properties of an RDBMS. Believe me, everything works better and faster if we all adopt the same rules.

  • Anonymous

    Thank You, I have to admit I have been doing this for years.
    And for years I have had developers tell me the importance of keeping business logic in a middle layer. Mostly the reasoning is database independence, which I understand, but it is largely not applicable to the applications my team builds. We build for MS SQL.

    For the last 10 years, almost everything we have built has used T-SQL for the bulk of the business logic, and web services (C# or VB.Net) to make the procs available to the presentation layer, generally PHP or ASP.Net.

    Things change. When we first started building apps this way, the team was just the two of us and there was limited ability in C# or VB, so T-SQL took on a lot of work, perhaps sometimes doing stuff it was not designed for (hell, I’ve had it write a 500-page directory in RTF straight from SQL; mind you, it was fast, 10 seconds to file output). CLRs now handle lots of the text stuff, like validating email addresses, things SQL is not fast at, so wrongly or rightly CLRs have allowed us to keep doing more in T-SQL and keep performance reasonable.

    Over the years our ability in VB and C# has become greater and we’ve found areas where it simply makes more sense to do the processing in VB rather than T-SQL. Sure, sometimes we’ve been driven to it because we have to interface with vendor-specific applications and are forbidden from writing to the underlying database other than through the provided business-logic layer. To be honest, the driver behind where we place the code has always been where it makes sense from a security and performance standpoint. Sure, ease of coding, reuse and the ability to debug are important too. We always ask these questions when we go to write an app, and we review the answers.

    I can see how moving a bit more code into the VB layer may help performance in some applications, but I can’t imagine us ever developing code that does not talk to the data via a proc, purely from the standpoint of security and debuggability (see avonwyss’s and Phil Factor’s comments).

    So show me the article on why I should change, in fact show me heaps of them and hopefully one of them will have just the right wording to get through to me. Bring it on, please.

  • Frigate

    Stored procedures
    I tip my hat to this article – stored procedures are not the coolest and sexiest technology, but they will let you get the job done in reasonable time without performing an ORM voodoo ceremony. There are multiple ways to skin the same cat, but in the end it’s plain old data processing, and IMHO it should be done on the data tier. Regarding the NoSQL hype – some people have discovered through painful experience that most NoSQL databases do not support ACID transaction processing.

  • Bitsqueezer

    Thanks for the article
    Hi,

    this is a really good SP template for handling record operations, and I would really love to be able to handle everything in my database projects this way. Unfortunately, the approach shown only works in frontends like .NET which are directly prepared to use, for example, multiple SELECTs and offline datasets.

    Not all database developers work with .NET or a web frontend; there are a lot (like me) who develop frontends using Access (in my case with ADPs). The problem is that Access has its own ways of handling data in bound forms, so in most cases you can be happy if it is possible to use a stored procedure at least in a continuous form to display data.
    When you begin changing data, Access uses its own internal stored procedures, and it also expects to have at least View Definition permissions on the base tables, which can lead to security problems.
    Of course you can use SPs exactly as presented here if you use completely unbound forms and do all the work with your own code, but that means you lose all the advantages of rapid development which Access offers with its automatisms. I would really love to see a solution for how to do everything with SPs. The only way I can see is to implement a middle tier, but it would cost a lot of development to make that Access-compatible.

    So in the end the solution presented here is of course great, but it also needs a specific kind of frontend/backend development and is, as always, no general solution for all of us database developers.

    Cheers,

    Christian

  • Anonymous

    scalability
    When you bury business logic in a database and the database becomes the bottleneck, how do you scale this? It is much easier to scale up/scale out application servers and web servers than database servers.

  • Anonymous

    Business Logic in Stored Procedures
    This is pretty much in alignment with our practice of moving business-logic SQL (historically embedded in VB 6) to the database side.
    One advantage was that it made it easier to roll out updates on the database side than to have users reinstall the exe application, which is somewhat painful in a corporate environment where MS Office rollouts were subjected to the same process as our "simplistic" specialised application.
    Another was that performance was 3-4 times faster, as there was limited data retrieval back to the client side when lots of intermediate calculations and data rollups to other tables were in play and other users wanted to see the immediate effects of detail changes.
    We went another step, though, and defined the menus/forms/buttons in SQL tables, with the result that the client code in C# was really just an engine that retrieved either form-definition data or user data results.
    Otherwise, a really useful article on a technique for separation of concerns, as discussed by avonwyss.

  • FatalExceptionError

    It Depends
    From my experience, database developers tend to have a better understanding of business processes than your typical e-commerce developer. This is probably why you will see the DB guys try to maintain control. This is a mistake in most scenarios, if you ask me.

    If this application is never going to change in scope and access, then it is probably perfectly fine to put the business logic in the DB. If the company ever grows, though, it will become a hot mess. Maintenance is a nightmare and you are better off scrapping the original code.

    The middle tier housing the business logic is for abstraction of information presentation and information storage.
    - You currently use a Windows form on clients. What happens when the need for a web form arises? The behavior is probably going to be different.
    - What happens when you need a non-human-controlled data interface (i.e. EDI)?
    - What do you do when the need for a mobile platform arises?

    If you don’t centralize the business logic, code will be duplicated and you must remember to update it in all the places. That leaves plenty of room for bugs and exploitation.

  • Dunc

    Not very DRY
    What about the maintainability of such an approach? I see, for example, CustomerZipcode appearing 17 times in the first procedure. Wherever I see duplication like this, I fear bugs being introduced. I feel it can also lead to the eventual sclerosis of a system, where nobody wants to do something simple like add a field or change anything, just in case they miss one of the (at least) 17 locations.

  • Anonymous

    Every layer has its merits
    First of all, thank you for taking the time to write this really good article. I enjoyed it thoroughly. I would like to know what your approach would be for the following requirement. For simplicity’s sake, let’s say each row contains a country code for which we need to get the daily USD currency exchange rate by calling a web service that returns an XML response. While this can certainly be done in an SP, I don’t think it is the right place. All I am trying to say is that someone (architect/lead/experienced technical person) needs to make the decision(s) to satisfy the requirement by implementing the right things in the right place.

  • Tony

    Put the right thing in the right place
    I think that people need to realise that not all business rules should be handled in the same location. The term "business logic" makes people think all rules should be treated the same. There is plenty of "business logic" that would be better described as "data-integrity logic" – i.e. business-specific rules that go beyond declarative database constraints. This can, and should, in my opinion, be implemented in the database via stored procedures, as the purpose of the database is to maintain the integrity of the data.
    Business rules that are used by an application but are not necessarily directly tied to data integrity have no place in the database.
    I’m not saying implement rules in multiple places and cause a maintenance nightmare; I’m saying that different rules should be implemented in the most appropriate tier, and often the most appropriate place is the database. Other times, it definitely isn’t.

  • James Hutchison

    Highly Sceptical / Poor Design
    For small, purpose-built applications this may be acceptable, but for anything resembling commercial software, this is poor design.

    Why?

    a. The application will still contain "business logic"… now you have two completely distinct layers to maintain and test.

    b. Error handling – you now have a bunch of magic numbers (error codes) strewn across your application. Maintenance becomes difficult.

    c. A lot of the "business logic" is actually data validation. Issues such as an empty Item Code should have been caught and handled in the business-logic layer and reported on the form. The argument that this reduces database traffic becomes false.

    d. DBAs will say otherwise, but T-SQL is not well suited to large blocks of code and hence not maintainable. The example provided is trivial; this wouldn’t be so if there were, say, 100 fields to populate.

    e. What happens if a field is added? Not only do you have to update the SP, but also each client, thus negating the maintainability aspect.

    Thanks

  • Anonymous

    Horses for courses.
    While on one of my first computer courses, a lecturer said that when asked whether a computer can do such-and-such a thing, the most straightforward reply is Yes, No, Maybe. It is probably the same for this solution. The fact that a software project came in under budget has to be applauded. If the system being replaced is fairly static in scope, i.e. one invoice is pretty much the same as another, then the solution may work well for many years and be amenable to modest change here and there.

    I don’t think that the n-tier solution need be any slower to develop. Through the use of a carefully (properly) designed database, it is possible to generate the CRUD stored procedures, the data layer and the business-layer classes by using T4 text templates in C#. After that, the only real coding left is the business-logic layer and the user interface – which is exactly what is being advocated in this article. If any data table changes are needed, it is just a matter of running T4 again to rebuild everything. Throw in a Windows Service as a WCF container and now we have a scalable solution capable of running in a load-balanced environment.

    Everyday I have the misfortune to be working in an environment where multi-thousand line T-SQL stored procedures exist, and it is awful. T-SQL is awful. It is awful to read, awful to write and twice as awful to debug. Keep SQL Server stored procedures for CRUD operations until it has a PL/SQL equivalent 🙂

    Well done on a successful project.

  • Peter W

    Next invoice number generation
    Keep in mind that this statement essentially single-threads your invoice insert process:

    UPDATE dbo.Counters WITH (UPDLOCK)
    SET @NextInvoiceNo = CurrentValue = CurrentValue + 1
    ,@InvoicePrefix = IDPrefix
    WHERE CounterID = 'Invoices';

    As it is currently coded, only one invoice can be added at a time, and the transaction must either commit or roll back before another invoice can be added.

    Not an issue in a low volume environment, but a big problem for a high volume environment.
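    If strictly gapless numbering is not a requirement, one way to relieve that bottleneck (a sketch of an alternative, not something from the article; the sequence name is invented, and this needs SQL Server 2012 or later) is to hand out numbers from a SEQUENCE object, which increments outside the caller's transaction:

    ```sql
    -- Hypothetical replacement for the Counters-table approach.
    -- A sequence increments outside the transaction, so concurrent
    -- inserts do not queue behind a locked counter row.
    CREATE SEQUENCE dbo.InvoiceNumbers AS INT
        START WITH 1
        INCREMENT BY 1;
    GO

    -- Inside the insert procedure:
    DECLARE @NextInvoiceNo INT = NEXT VALUE FOR dbo.InvoiceNumbers;
    ```

    The trade-off is that a rolled-back insert leaves a gap in the numbering, which may be unacceptable for invoices in some jurisdictions – exactly the case where the serialized counter update earns its keep.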

  • Dwain.C

    Hello All
    Thanks to everybody for the great responses to this article, whether pro or con. One truism that I have found in my years in software development is that there is "no right or wrong way." It is one of the great things about software development, infinite flexibility to do things the way you want. It is also one of the things that has caused software engineering to ultimately remain totally immature. When you build a bridge, you follow tried and true principles of mechanical engineering. When you build software, there are no tried and true guiding technical principles, and so there are a lot of software projects that fail (you can see one of my prior articles on how you might avoid some of that).

    I’ve been busy so I hope you’ll excuse me for not responding sooner. Our customer was so delighted with the aforementioned system they’re already asking for some extensions and those are underway and going well. Not surprisingly (at least to me), the forms modifications are taking the longest time (once again the SQL Developer is well ahead of the game).

    To avonwyss – I looked at both of your referenced articles by Guzman. His idea on using HOLDLOCK may be valid if you do have high concurrency. I think his idea on using XACT_ABORT is not applicable here. Just my opinions though, you could certainly use both but my editor would probably have gone for the throat had I extended the article to explain them.

    I do want to leave you all with a thought. How many of you write transactions with the intent to ROLLBACK? How many of your transactions actually do get rolled back? I would hope that the number is nearly nil. If it is not, the likely cause is simply poor programming. I expect that not a single transaction processed by the system that just went live (10 Nov) has been rolled back. The only thing that I would expect to cause a ROLLBACK would be a deadlock (or an extremely poor piece of code).

    But that is also my opinion.

    I hope to address others who have taken the time to comment in the near future. Stay tuned.

  • avonwyss

    @Dwain.C: UPDATE-HOLDLOCK, XACT_ABORT and ROLLBACK
    Dwain, while I understand that explaining my proposed additions might be out of scope for the article, I still believe that they should be included in template SPs, since these are pretty fundamental IMHO.

    Regarding the HOLDLOCK, unless you use other means of locking, you always have to assume concurrency – experience shows that it "hits" much more often than anticipated. In general, the way your SP template is built, wrapping the whole SP call in a SERIALIZABLE transaction may be required in order to guarantee correct evaluation of your "complex data audits" in concurrent situations.

    Regarding XACT_ABORT, the last thing I’d want is to have a half-baked state – and that’s what I might get without XACT_ABORT ON and a lingering open transaction in my connection pool. Typically, if something does fail (such as a missing foreign key or other invalid data being inserted/updated) I want the whole SP to stop and roll back. Graceful error handling and recovery is very difficult to get right and in most cases it’s much simpler to abort, roll back, fix the problem and retry. So yes, I write code with the intent to ROLLBACK in the case of an error – and I try to get my errors directly from the DML statements (e.g. by constraint violations) and not from separate checks in advance (possibly affected by concurrency). With SPs that have only few data modifying statements, the error code is usually telling me all I need to know to treat the error in a specific and meaningful manner on the application level. If necessary, I can also check @@ROWCOUNT after a DML statement to check for the expected number of changes and perform error generation afterwards, e.g. your type 1, 2 and 3 audits.

    BTW, note that none of your type 2 audits will ever fire – you’re cleansing the NULL values away earlier so that none of the conditions can be true.

    My proxy library actually checks for transactions on the connection and always creates an explicit transaction for calling the SP when none is active – and due to this I usually don’t wrap the code in the SP into a separate transaction.

  • Anonymous

    Re: Hello
    Hi Dwain,

    In response to your statement:

    "Not surprisingly (at least to me), the forms modifications are taking the longest time (once again the SQL Developer is well ahead of the game)."

    >> Why don’t you build a forms component in SQL so it is quicker, and then all it takes is a SQL Developer – no need for a functional programming language, which seems to be the bottleneck here!?

    Sorry for the sarcasm… and don’t take this the wrong way. Like I said in an earlier post, every layer has its merits and it is the team’s responsibility to put the right things in the right place based on the constraints (budget, deadlines, etc.).

  • Dwain.C

    To avonwyss
    You said:

    "BTW, note that none of your type 2 audits will ever fire – you’re cleansing the NULL values away earlier so that none of the conditions can be true."

    Look again at the cleansing. It converts blanks passed in into NULLs, so the audits checking for required arguments not supplied will work.

    Look also at the little IF statement right after the CATCH blocks. I believe that with that in there, you won’t be seeing any transactions left in a lingering state – especially if you write high-performing SPs that complete the transaction in less than a second (which all of mine do).

    Note that I’m not saying it’s a bad idea to use either HOLDLOCK or XACT_ABORT. What I am saying is that my preference is not to do so until it’s proven that they’re needed.
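    For anyone following this exchange without the article open, the cleansing idiom being discussed is roughly the following (the parameter name and error message are invented for illustration):

    ```sql
    -- Convert empty or all-blank input to NULL, so one
    -- "required argument" audit catches both missing (NULL)
    -- and blank ('') values with a single IS NULL check.
    DECLARE @ItemCode VARCHAR(20) = '   ';

    SET @ItemCode = NULLIF(LTRIM(RTRIM(@ItemCode)), '');

    IF @ItemCode IS NULL
        RAISERROR('Item Code is required.', 16, 1);
    ```

    Because the blanks are collapsed to NULL before the audits run, a "type 2" required-argument audit does fire for blank input, which is the point Dwain is making above.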

  • Dwain.C

    Database Portability
    A couple of posters have noted that database portability is a reason you may not want to use this approach, but as Robert Young pointed out there are SP converters out there to help with that.

    So let me pose a question. Of all of the software development projects you have worked on throughout your career, what fraction of them involved porting an application from one database to another?

    I would venture a guess that in the entire population of software development projects throughout the world and throughout all of recorded time, the fraction that included this is probably less than one hundredth of one percent. That is not, of course, saying it doesn’t happen. It is saying that I think it is pretty rare.

    When corporations entertain software projects, the number one consideration is costs versus benefits. They develop software systems to increase revenues, reduce costs or improve customer service (mostly). How exactly does changing the database contribute to any of those things? Perhaps a reduction in the annual licensing fee? That sounds like a pretty trivial amount compared to the cost of the port. More likely it would be done to overcome some technical impediment, which needs to be pretty darned important to justify the cost.

    Please refute me if you don’t believe that to be the practical reality.

    In case you haven’t figured it out, I’m a great believer that database portability is a myth. Develop to a platform so you can derive all of the benefits that particular platform provides.

    So I don’t have a particular problem if this particular approach isn’t database portable. I acknowledge that others might, and that I probably won’t be able to change that opinion regardless of what arguments I put forth.