Products and Services

Carolina for Hadoop

Carolina for Hadoop executes any SAS program in parallel on the Hadoop framework. It converts SAS DATA steps into Hadoop MapReduce jobs and, for PROCs, generates a combination of MapReduce Java code and HiveQL queries. This allows programs to exploit Hadoop's parallel execution capabilities and to access data in the Hadoop Distributed File System (HDFS). The result is typically an order-of-magnitude improvement in run times for large scoring programs.
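To make the conversion concrete, here is a minimal sketch, in Java, of the kind of per-record logic a converted DATA step might run inside a Hadoop Mapper. The class name, field names, and scoring formula are illustrative assumptions, not Carolina's actual generated code.

```java
// Illustrative only: the record-level logic of a hypothetical SAS DATA step
// (e.g. "score = 0.25*income + 0.75*balance;") as it might appear in
// generated Java. In the real product this body would sit inside a Hadoop
// Mapper and be applied to every HDFS input record in parallel.
public class ScoringLogic {

    // The body of the hypothetical DATA step, applied to one record.
    public static double score(double income, double balance) {
        return 0.25 * income + 0.75 * balance;
    }

    public static void main(String[] args) {
        // Hadoop would call this once per input record across the cluster;
        // here we demonstrate it on a single record.
        System.out.println(score(1000.0, 200.0)); // prints 400.0
    }
}
```

Because the same per-record function runs on every node, the speed-up scales with the size of the Hadoop cluster.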

Carolina In-Database

Carolina In-Database converts SAS DATA steps into Java table user-defined functions (UDFs). It automatically generates the JAR files that implement the UDFs, the SQL scripts that create the UDFs in a database, and, for testing purposes, scoring SQL scripts that invoke the UDFs on a test input. Once a UDF is created in the database, it can be invoked repeatedly against different input data sets using an appropriate SQL statement (CREATE TABLE AS … SELECT …) in the scoring SQL script. Carolina In-Database is currently validated for Teradata and Oracle databases.
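As a sketch of that pattern, the following Java snippet builds a scoring statement of the CREATE TABLE AS … SELECT … shape and runs it over a live JDBC connection. The table and UDF names are hypothetical, and the exact table-function invocation syntax is database-specific; the real statements come from the generated scoring scripts.

```java
import java.sql.Connection;
import java.sql.Statement;

// Illustrative only: how a generated scoring SQL script might invoke a
// Carolina-generated table UDF. Names and syntax are assumptions; the
// product generates the real, database-specific scripts.
public class ScoringSql {

    // Builds a hypothetical Teradata-style statement that materializes the
    // UDF's output as a new table.
    public static String buildScoringSql(String outTable, String udf, String inTable) {
        return "CREATE TABLE " + outTable + " AS ("
             + "SELECT * FROM TABLE (" + udf + "(" + inTable + ".*)) AS scored"
             + ") WITH DATA";
    }

    // Executes the scoring statement against the database.
    public static void run(Connection con, String sql) throws Exception {
        try (Statement st = con.createStatement()) {
            st.executeUpdate(sql);
        }
    }

    public static void main(String[] args) {
        System.out.println(buildScoringSql("scored_accounts", "score_model", "accounts"));
    }
}
```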

Carolina for Integration

Carolina for Integration converts a SAS DATA step, often a predictive model, into a JAR that is accessible in real time or in batch from an operational system such as a Web server, CRM system, or business rules engine. In this integration mode, the operational system is responsible for fetching the input data and calling the generated Java code one record at a time. Optionally, Carolina can validate the converted program if benchmark input and output datasets are provided. Carolina is currently validated to work with many COTS operational systems; we also provide an API to integrate Carolina with your proprietary Java software.
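The record-at-a-time calling pattern can be sketched as follows. The RecordScorer interface and its method name are assumptions standing in for the generated model's actual API, which is documented with the product.

```java
import java.util.List;
import java.util.Map;

// Illustrative only: the record-at-a-time pattern an operational system
// would use with a Carolina-generated JAR. The interface below is an
// assumed shape, not Carolina's actual API.
public class IntegrationSketch {

    // Assumed shape of the generated model class: one input record in,
    // one output record out.
    interface RecordScorer {
        Map<String, Object> score(Map<String, Object> record);
    }

    // The operational system fetches input records and calls the model
    // once per record, as described above.
    public static int scoreAll(RecordScorer model, List<Map<String, Object>> records) {
        int scored = 0;
        for (Map<String, Object> r : records) {
            model.score(r);
            scored++;
        }
        return scored;
    }

    public static void main(String[] args) {
        RecordScorer identity = r -> r; // stand-in for the generated model
        System.out.println(scoreAll(identity, List.of(Map.of("x", 1), Map.of("x", 2))));
    }
}
```

In a real deployment the loop would live inside the Web server, CRM system, or rules engine, and each call would score a live transaction.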

Carolina Stand-Alone

Carolina can also work as a drop-in replacement for Base SAS. Internally, it converts a SAS program to Java, compiles it, and executes it, with all of this processing performed automatically and transparently to the user. The generated Java classes are compiled in memory and immediately executed. Carolina's output is identical to SAS output: it creates output datasets, log and result files, database tables, and the like. The only difference is that Carolina can optionally emit the generated Java source files for verification purposes.
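The compile-then-execute cycle can be sketched with the standard javax.tools compiler API. This simplified sketch writes the generated source to a temporary directory rather than compiling purely in memory, and the class and method names are illustrative assumptions.

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: compile generated Java source and execute it in the
// same process, as stand-alone mode does. A simplified sketch using a
// temp directory, not Carolina's actual in-memory pipeline.
public class CompileAndRun {

    public static Object compileAndInvoke(String className, String source) throws Exception {
        Path dir = Files.createTempDirectory("carolina-sketch");
        Path file = dir.resolve(className + ".java");
        Files.writeString(file, source);

        // Requires a JDK (returns null on a JRE-only runtime).
        JavaCompiler jc = ToolProvider.getSystemJavaCompiler();
        if (jc.run(null, null, null, file.toString()) != 0) {
            throw new IllegalStateException("compilation failed");
        }
        // Load the freshly compiled class and run its entry point.
        try (URLClassLoader cl = new URLClassLoader(new URL[]{dir.toUri().toURL()})) {
            return cl.loadClass(className).getMethod("run").invoke(null);
        }
    }

    public static void main(String[] args) throws Exception {
        String src = "public class Step { public static String run() { return \"done\"; } }";
        System.out.println(compileAndInvoke("Step", src)); // prints done
    }
}
```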


Carolina S-JDBC

Carolina S-JDBC is an enterprise software utility for directly reading and writing proprietary-format SAS datasets. S-JDBC is a JDBC driver and as such can be plugged into any third-party, Java-based application (an operational system such as a rules engine, CRM, or other decisioning system) that supports JDBC. S-JDBC lets users import any data that resides in a legacy SAS warehouse or file set. After the application completes its job, S-JDBC allows the output data to be loaded back into the SAS data facility. Unlike the JDBC driver provided by SAS itself, S-JDBC requires no SAS server, licenses, or other SAS technology.
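Because S-JDBC is a standard JDBC driver, applications use it through the ordinary java.sql API. In the sketch below the connection URL scheme and dataset name are assumptions for illustration; consult the S-JDBC documentation for the real URL format.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustrative only: the standard JDBC usage pattern with S-JDBC.
// The URL scheme below is a hypothetical placeholder, not the driver's
// documented format.
public class SJdbcSketch {

    // Hypothetical URL pointing at a directory of SAS datasets.
    public static String connectionUrl(String sasDataDir) {
        return "jdbc:sjdbc:" + sasDataDir; // assumption
    }

    // Reads every row of a SAS dataset exposed as a table by the driver.
    public static int countRows(Connection con, String dataset) throws Exception {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM " + dataset)) {
            int rows = 0;
            while (rs.next()) rows++;
            return rows;
        }
    }

    public static void main(String[] args) {
        // No SAS data is available here, so we only show the URL that would
        // be passed to DriverManager.getConnection(...).
        System.out.println(connectionUrl("/data/sas/warehouse"));
    }
}
```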

Professional Services

Engagement-based professional services include large-scale service projects, custom programming, and SAS-to-Java conversion (including porting of mainframe legacy SAS programs to Linux). Hadoop implementation for SAS programs on a services basis may be included with a Carolina for Hadoop product implementation.