Making Data Analytics Simpler: SQL Server and R

R and SQL Server are a match made in heaven. You don't need anything special to get started beyond the basic instructions. Once you have jumped the hurdle of reliably and quickly transferring data between R and SQL Server, you are ready to discover the power of a relational database combined with statistical computing and graphics.

In this article I will describe a way to couple SQL Server together with R, and show how we can get a good set of data mining possibilities out of this fusion. First I will introduce R as a statistical/analytical language, then I will show how to get data from and into SQL Server, and lastly I will give a simple example of data analysis with R.

What is R, and what notable features does it have?

R is an open source software environment which is used for statistical data analysis. All operations are performed in memory, which means that it is very fast and flexible as long as there is enough memory available.

R does not have a storage engine of its own other than the file system; instead, it uses libraries of drivers to get data from, and send data to, different databases.

It is very modular, in that there are many libraries which can be downloaded and used for different purposes. There is also a rapidly growing community of developers and data scientists who contribute to library development and to methods for exploring data and getting value from it.

Another great feature is its built-in graphical capabilities. With R it takes a couple of lines of code to import data from a data source and only one line of code to display a plot of the data distribution. An example of this graphical representation will be given shortly. Of course, aside from the built-in graphics, there are libraries which are more advanced in data presentation (ggplot2, for example), and there are even libraries which enable interactive data exploration.

For more details on R features and on how to install it, refer to the R Basics article, which was recently published on Simple-talk.

Connecting to SQL Server from R

This part assumes that the reader has already gained some familiarity with the R environment and has R and RStudio installed.

As mentioned, R does not have its own storage engine, but it relies on other systems to store the analyzed data. In this section we will go through some simple examples on how to couple R with SQL Server’s storage engine and thereby read data from, and write data to, SQL Server.

There are several options for connecting to SQL Server from R and several libraries we can use: RODBC, RJDBC and rsqlserver, for example. For the purpose of this article, however, we will just use the RODBC package.

Let's get busy and set up our R environment.

In order to get the connectivity to SQL Server working, first we need to install the packages for the connection method and then we need to load the libraries.

To install and load the RODBC package, do the following:

  • Open the RStudio console (make sure the R version is at least 3.1.3: if it isn't, use the updateR() function)
  • Run the following command: install.packages("RODBC")
  • Run the following command: library(RODBC)

Note: R packages are usually available from the CRAN site but, depending on the server setup, they may not be directly accessible from the R environment; in that case they need to be downloaded and installed manually. Here is the link to the package page: RODBC: http://cran.r-project.org/web/packages/RODBC/index.html

Exploring the functions in a package

R provides useful ways of exploring the functions of its packages. If, for example, we wanted to list all the functions in a specific package, we could write a helper function similar to this:
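A minimal sketch (the name listFunctions is illustrative; it simply wraps base R's ls() over an attached package):

listFunctions <- function(packageName) {
  # list everything exported by an attached package
  ls(paste0("package:", packageName))
}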

And then we would call it like this:
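listFunctions("RODBC")   # assumes library(RODBC) has already been run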

Typing ??RODBC at the command prompt will bring up some help topics about the RODBC package.

Further, typing ? before the name of a function will bring up the help page for that function. For example:
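?odbcConnect   # opens the help page for RODBC's odbcConnect function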

Getting connected

For the purpose of this exercise, we will be using the AdventureWorksDW database (it can be downloaded from here).

Let's say we are interested in calculating the correlation coefficient between the annual sales and the actual reseller sales for each reseller. First we will create a view along the following lines.
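A sketch of such a view, built on the standard AdventureWorksDW reseller tables (the view name ResellerSalesEUR, the joins and the EUR filter are assumptions, carried through the R examples below):

CREATE VIEW dbo.ResellerSalesEUR
AS
SELECT r.ResellerName,
       r.AnnualSales / 1000 AS AnnualSales,       -- annual sales in thousands
       SUM(f.SalesAmount) / 1000 AS SalesAmount   -- actual reseller sales in thousands
FROM dbo.FactResellerSales f
JOIN dbo.DimReseller r ON r.ResellerKey = f.ResellerKey
JOIN dbo.DimCurrency c ON c.CurrencyKey = f.CurrencyKey
WHERE c.CurrencyAlternateKey = 'EUR'
GROUP BY r.ResellerName, r.AnnualSales;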

I have divided the Sales and the Annual Sales amounts by 1,000, so it is easier to work with the numbers later on. We will go into the details of the statistical analysis later on; let's start by getting connected.

First we need to create a variable holding our connection, assuming we have already loaded the library by running library(RODBC).
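A minimal sketch, assuming a trusted connection to a local SQL Server instance (adjust the driver and server names to your environment):

cn <- odbcDriverConnect("driver={SQL Server};server=localhost;database=AdventureWorksDW2012;trusted_connection=true")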


And now we can use this variable as a parameter in the different calls to our database. The RODBC package gives us two different ways to read data from SQL Server: the sqlFetch and sqlQuery functions.

Here is how we use sqlFetch:
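dataFetchEUR <- sqlFetch(cn, "ResellerSalesEUR")   # fetches the whole view (sketched above) into a dataframe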

and then we check the contents of the frame with the following command:
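head(dataFetchEUR)   # one way to inspect the first few rows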

The data looks like this:

[Screenshot: the first rows of the dataFetchEUR dataframe]

Another way to get the data is to use sqlQuery like this:
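dataFetchEUR <- sqlQuery(cn, "SELECT * FROM ResellerSalesEUR")   # equivalent to the sqlFetch call above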

As you may have guessed, this is quite flexible if we want to get a subset of the data by adding a WHERE clause, for example (bigResellers is just an illustrative name):
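bigResellers <- sqlQuery(cn, "SELECT * FROM ResellerSalesEUR WHERE AnnualSales > 1000")   # the view stores thousands, so this returns resellers above 1M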

Benchmarking of RODBC

To benchmark the performance of the RODBC library, I have written a script which will read data from SQL Server to R.

The script essentially times a sqlFetch against each table; a sketch of it follows (the table names are hypothetical):
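library(RODBC)
cn <- odbcDriverConnect("driver={SQL Server};server=localhost;database=TestData;trusted_connection=true")

# one narrow and one wide table per row count; the names are hypothetical
tables <- c("Narrow_10k", "Narrow_100k", "Narrow_1M", "Narrow_10M", "Narrow_30M",
            "Wide_10k", "Wide_100k", "Wide_1M", "Wide_10M", "Wide_30M")

for (tableName in tables) {
  elapsed <- system.time(sqlFetch(cn, tableName))["elapsed"]
  cat(tableName, ":", elapsed, "seconds\n")
}

odbcClose(cn)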

The script above creates a connection to a database called TestData. The database has two types of tables – a narrow table with 6 columns and a wide table with 31 columns.

The X in the table name represents the number of rows in each table; there is a narrow and a wide version at each of the row counts tested below.

The tables are populated with Redgate's SQL Data Generator, and then the R script above is used to get all the data from the respective table and measure the time it takes in seconds.

Here is how long it took (in seconds) to read the 10k, 100k, 1M, 10M and 30M rows:

Exploring the data

Now that we have loaded the data into memory, it is time to explore it. First, let's see the density and the data distribution. R has great facilities for visualizing data almost effortlessly; as we will see shortly, we can plot a graph with only a couple of lines of code.

First, let’s set the properties of our environment to display two plot graphs in one row. In R this is easily done by using the par() function like this:
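par(mfrow = c(1, 2))   # a plot matrix with one row and two columns, filled by rows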

This will create a plot matrix with two columns and one row, which in this case will be filled in by rows. Now, let’s run the two commands which will fill in the graphs in the plot matrix:
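# presumably a histogram per measure, using the columns loaded earlier
hist(dataFetchEUR$AnnualSales)
hist(dataFetchEUR$SalesAmount)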

This will give us the following graph (found in the Plots section in the RStudio environment):

[Plot: side-by-side histograms of AnnualSales and SalesAmount for the EUR resellers]

From here we can already extract some valuable knowledge – and we did this with just a few lines of code! We can see that:

  • there are generally two segments of resellers by AnnualSales – ones that peak at around 100K, and the other ones that are above 2.5M
  • there are three different segments of resellers by SalesAmount – under 50k, around 150k and around 300k

Remember that this was the data we loaded for the sales in Euros. Let’s load the data from USD sales, and compare the histograms.

We will be using the following view in our database:
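A matching sketch, with the currency filter switched to USD (again, the view name is an assumption):

CREATE VIEW dbo.ResellerSalesUSD
AS
SELECT r.ResellerName,
       r.AnnualSales / 1000 AS AnnualSales,
       SUM(f.SalesAmount) / 1000 AS SalesAmount
FROM dbo.FactResellerSales f
JOIN dbo.DimReseller r ON r.ResellerKey = f.ResellerKey
JOIN dbo.DimCurrency c ON c.CurrencyKey = f.CurrencyKey
WHERE c.CurrencyAlternateKey = 'USD'
GROUP BY r.ResellerName, r.AnnualSales;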

And then we will use similar commands to load the data in R:
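dataFetchUSD <- sqlFetch(cn, "ResellerSalesUSD")   # parallel to the EUR load above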

And then we get the histograms like this:
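hist(dataFetchUSD$AnnualSales)
hist(dataFetchUSD$SalesAmount)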

The histograms look like this:

[Plot: side-by-side histograms of AnnualSales and SalesAmount for the USD resellers]

This gives us some more insight into how the resellers perform in the USD market.

Let's go a bit further and get a summary of our datasets. Very conveniently, there is a built-in function in R which does exactly that: it summarizes our dataset (also called a dataframe in R). If we simply type:
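summary(dataFetchEUR)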

We will get the summary info for each column: the minimum, first quartile, median, mean, third quartile and maximum.

And respectively, for the USD resellers, we will type:
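summary(dataFetchUSD)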

And we will get the corresponding summary for the USD dataset.

Now let’s do some clustering to dig a bit deeper in our data.

Cluster Analysis

There are countless ways to do Cluster Analysis, and R provides many libraries which do exactly that. There is no single best solution; it all depends on the purpose of the clustering.

Let's suppose that we want to help our resellers do better in the Euro region, and we have decided to provide them with different marketing tools to do so, based on their Annual Sales amount. For this example, the marketing department needs the resellers grouped into three groups by their AnnualSales, with the SalesAmount displayed for each reseller.

In the example below, we will take the dataFetchEUR dataframe and will divide the resellers in three groups, based on their AnnualSales. Then we will write the data back into our SQL Server Data Warehouse, from where the marketing team will get a report.

By looking at the data summary for the AnnualSales column, we can decide to slice the data at the boundaries of 1,000K and 1,600K. These are imaginary boundaries, which in reality would be set by the data scientist after discussions with the team who will be using the data analysis.

Just to verify, we can count the resellers grouped by their AnnualSales, for instance with an aggregate query over the same view (the column aliases are illustrative, chosen to match the output below):
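resellerCounts <- sqlQuery(cn, "SELECT COUNT(*) AS CountResellers, AnnualSales AS AnnualSalesK
                                FROM ResellerSalesEUR GROUP BY AnnualSales ORDER BY AnnualSales DESC")
resellerCounts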

We get the following result:

CountResellers   AnnualSalesK
13               3000.00
10               1500.00
4                1000.00
8                800.00
3                300.00

So, this seems right. Our three segments will be:

Segment 1 will be < 1,000,000
Segment 2 will be >= 1,000,000 and < 1,600,000
Segment 3 will be >= 1,600,000

For this we can use a simple ifelse function to define the clusters:

# the view stores AnnualSales in thousands, so 1600 stands for 1,600,000
AnnualSalesHigh.cluster <- ifelse(dataFetchEUR$AnnualSales >= 1600, 3, 0)
AnnualSalesMedium.cluster <- ifelse(dataFetchEUR$AnnualSales >= 1000 & dataFetchEUR$AnnualSales < 1600, 2, 0)
AnnualSalesLow.cluster <- ifelse(dataFetchEUR$AnnualSales < 1000, 1, 0)

Now we have three different vectors, and we need to combine them into one dataset (dataframe). We can use the cbind function like this:
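clusters <- cbind(AnnualSalesHigh.cluster, AnnualSalesMedium.cluster, AnnualSalesLow.cluster)   # clusters is an illustrative name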

Now we can look at the segments with the View function:
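View(clusters)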

The data looks like this:

[Screenshot: the combined cluster columns, one per segment, with zeroes elsewhere]

Now we need to sum up the values in each row and output another dataframe, which has only one column. We can do this with the following function:
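AnnualSales.cluster <- data.frame(Cluster = rowSums(clusters))   # rowSums adds the three columns; the names are illustrative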

And finally, we need to bind together the original dataframe and add the categorization per reseller:
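resellerClusters <- cbind(dataFetchEUR, AnnualSales.cluster)   # one row per reseller, with its segment number; the name is illustrative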

The data looks like this:

[Screenshot: each reseller with its AnnualSales, SalesAmount and assigned cluster]

And finally, we will save this dataset into our data warehouse by using the sqlSave function in the RODBC package:
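sqlSave(cn, resellerClusters, tablename = "Marketing_EURAnnualSalesCluster", rownames = FALSE)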

And now we have a table called [dbo].[Marketing_EURAnnualSalesCluster] in our [AdventureWorksDW2012] database, which is ready for use by the marketing department.

This process can be automated: the R scripts – covering the entire flow from getting the data to writing it back to the data warehouse – can be collected into a batch file and scheduled to run regularly.

Conclusion

In this article we have seen how easy it is to connect to SQL Server from R and carry out an exploratory data analysis. This brings great value to a business, especially given the time that data modelling and visualization can consume with other technologies.

We have explored a way to connect to SQL Server (by using the RODBC library) and we have created a simple Cluster Analysis and segmentation, which provides immediate value to the end-users of the data.
