
Learn What's New in SAP HANA Dynamic Tiering


Read about New/Changed Features:

 

Do you want to know what was introduced/changed in the most recent support package for SAP HANA dynamic tiering? Are you looking for information on a feature introduced in an earlier support package? The document What's New in SAP HANA Dynamic Tiering (Release Notes) helps you in either case.

 

What's New in SAP HANA Dynamic Tiering (Release Notes) is organized chronologically by support package. Information on the most recent support package is at the beginning of the document, followed by information on the previous support package, and so on.


Find the Central SAP Note for any Support Package:

 

The excellent blog post Finding Dynamic Tiering SAP Notes gives you many tips and tricks for locating dynamic tiering SAP Notes. The blog post also explains that each SAP HANA dynamic tiering support package has its own central SAP Note – a master note with links to all relevant SAP Notes.


Did you know you can find all central SAP Notes -- for all dynamic tiering support packages -- in the What’s New document?

 

  1. Navigate to the first page of the What’s New document.
  2. Click the Important SAP Notes link for the support package you’re interested in.



Event-driven, non-blocking, asynchronous I/O with SAP HANA using Vert.x


In this blog post I would like to demonstrate how you can implement a non-blocking web service, running on the JVM, on top of SAP HANA. The commands and the setup assume that the backend runs on a Linux/Mac machine (bare metal or IaaS); the commands might vary slightly on Windows machines, but the experience should be similar.

 

What is Vert.x?


"Vert.x is a tool-kit for building reactive applications on the JVM". This is what the Vert.x web site tells you.


Basically, Vert.x is an open-source set of Java libraries, managed by the Eclipse Foundation, that allows you to build event-driven and non-blocking applications. If you are already familiar with Node.js, Vert.x lets you build services in a style you might already know from Node.js. Also, Vert.x is language-agnostic, so you can implement your backend in your favorite JVM-based language, such as, but not limited to, Java, JavaScript, Groovy, Ruby, or Ceylon.

 

In case you want to know more about Vert.x, please refer to the official Vert.x web site or the official eclipse/vert.x repository on GitHub.

 

Speaking in code, with Vert.x you can write a simple HTTP server and a web socket server like this (using Java 8):

 

vertx
  .createHttpServer()
  .requestHandler(req -> {
    req.response().headers().set("Content-Type", "text/plain");
    req.response().end("Hello World");
  })
  .websocketHandler(ws -> {
    ws.writeFinalTextFrame("Hello World");
  })
  .listen(8080);

In case you want to know more about what makes a reactive application reactive, you can take a look at The Reactive Manifesto

 

"Building a Java web service on top of HANA? That requires running Tomcat."

Is that you? Think again! Developing JVM-based backend services using Tomcat or a Java EE container such as JBoss might be the solution of choice for certain use cases, especially when it comes to transaction processing. But for real-time applications where you really don't care about transaction handling in the backend, an application server might be overkill and much more than you actually need.

 

 

What about Node.js?

 

Node.js is a great event-driven, non-blocking framework as well, and the most popular among reactive backend frameworks and toolkits. I personally like Node.js a lot, simply because JavaScript itself is very flexible and the npm registry has a really large ecosystem of Node.js packages. Also, there is a great open-source HANA driver (SAP/node-hdb) for Node.js, so Node.js is still a good choice for real-time applications.

 

However, Node.js has some pitfalls, especially when it comes to leveraging multiple CPU cores. There are solutions in Node.js to address this problem; this blog post from Juanaid Anwar explains it really well: Taking Advantage of Multi-Processor Environments in Node.js

 

GitHub repository

 

You can find the complete, ready-to-run source code of the example on GitHub:

GitHub - MitchK/hana_vertx_example: An example web service to demonstrate how to use Vert.x with SAP HANA

 

 

Example Preparation

 

 

First, you need to create a Maven project. You can also use any other dependency manager or build tool (like Gradle), but this tutorial will use Maven.

 

For this example we will be using the following Vert.x libraries and the HANA JDBC driver:

 

 

<dependencies>
  <!-- Vert.x core -->
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-core</artifactId>
    <version>3.2.1</version>
  </dependency>
  <!-- Vert.x web for RESTful web services -->
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-web</artifactId>
    <version>3.2.1</version>
  </dependency>
  <!-- Vert.x async JDBC client -->
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-jdbc-client</artifactId>
    <version>3.2.1</version>
  </dependency>
  <!-- HANA JDBC driver -->
  <dependency>
    <groupId>com.sap.db</groupId>
    <artifactId>com.sap.db.ngdbc</artifactId>
    <version>1.00.38</version>
  </dependency>
</dependencies>


  • vertx-core: Provides the basic Vert.x toolkit functionality.
  • vertx-web: Provides routing capabilities to build RESTful web services.
  • vertx-jdbc-client: Provides an asynchronous JDBC client and a lot of convenient APIs on top of JDBC.
  • com.sap.db.ngdbc: The official SAP HANA JDBC driver. This driver is not open source and thus not available on Maven Central. You either have to use your company's internal Nexus server or reference the .jar file on the file system from your pom.xml using <systemPath>${project.basedir}/src/main/resources/yourJar.jar</systemPath>.

 

Using Java 8


You really don't want to code in Vert.x below Java 8. You really don't. Since Vert.x relies heavily on callbacks, writing Vert.x code without lambda expressions would be a pain.


<build>
  <plugins>
    ...
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.5.1</version>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>
    ...
  </plugins>
</build>

 

Creating a fat .jar

 

In this example, we will build a single .jar file that bootstraps our Vert.x code and contains all Java dependencies. There are many ways of deploying Verticles; this is just one example.

 

Here, we reference com.github.mitchk.hana_vertx.example1.web.HANAVerticle as our main Verticle class.

 

<build>
  <plugins>
    ...
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.3</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <transformers>
              <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                <manifestEntries>
                  <Main-Class>io.vertx.core.Starter</Main-Class>
                  <Main-Verticle>com.github.mitchk.hana_vertx.example1.web.HANAVerticle</Main-Verticle>
                </manifestEntries>
              </transformer>
            </transformers>
            <artifactSet />
            <outputFile>${project.build.directory}/${project.artifactId}-${project.version}-fat.jar</outputFile>
          </configuration>
        </execution>
      </executions>
    </plugin>
    ...
  </plugins>
</build>


Creating a Verticle


According to the official documentation, a "Verticle" is a Vert.x term that describes an independently deployable piece of code. Outside of the Vert.x universe, you might call it a "microservice". The use of Verticles is entirely optional, but I will show how to implement an example HANA Verticle.


Create a new class in a package of your choice. Make sure that the package name and class name match the main Verticle class you put into the pom.xml.


public class HANAVerticle extends AbstractVerticle {

  @Override
  public void start(Future<Void> fut) {
    vertx
      .createHttpServer()
      .requestHandler(req -> {
        req.response().headers().set("Content-Type", "text/plain");
        req.response().end("Hello World");
      })
      .websocketHandler(ws -> {
        ws.writeFinalTextFrame("Hello World");
      })
      .listen(8080);
  }
}



Now run

 

$ mvn clean install package
$ java -jar target/example1-0.0.1-SNAPSHOT-fat.jar

 

on your command line (or set up your IDE accordingly) in order to install your Maven dependencies and create a fat .jar file.

 

Finally, open http://localhost:8080/ in your web browser.



You can also check whether your web socket endpoint is listening on ws://localhost:8080, using a web socket client.



Building a RESTful web service with Vert.x


Let's now build a simple RESTful web service where we actually want to use routing and also JSON output. Replace the content of your start() method with:


Router router = Router.router(vertx);
router
  .get("/api/helloWorld").handler(this::helloWorldHandler);

vertx.createHttpServer()
  .requestHandler(router::accept)
  .listen(
    // Retrieve the port from the configuration,
    // default to 8080.
    config().getInteger("http.port", 8080), result -> {
      if (result.succeeded()) {
        fut.complete();
      } else {
        fut.fail(result.cause());
      }
    });


You also need to create the handler method for the /api/helloWorld endpoint:


public void helloWorldHandler(RoutingContext routingContext) {
  JsonObject obj = new JsonObject();
  obj.put("message", "Hello World");
  routingContext
    .response().setStatusCode(200)
    .putHeader("content-type", "application/json; charset=utf-8")
    .end(Json.encodePrettily(obj));
}


Build your code again, start the .jar file and see the result in the browser:




Connecting Vert.x with HANA


Now things become interesting. Put the following code snippet at the beginning of the start() method:


JsonObject config = new JsonObject();
// Example connection string "jdbc:sap://hostname:30015/?autocommit=false"
config.put("url", System.getenv("HANA_URL"));
config.put("driver_class", "com.sap.db.jdbc.Driver");
config.put("user", System.getenv("HANA_USER"));
config.put("password", System.getenv("HANA_PASSWORD"));
client = JDBCClient.createShared(vertx, config); // , "java:comp/env/jdbc/DefaultDB");

 

For simplicity, we will connect to HANA using environment variables for the configuration. You can also use a JNDI name instead.

 

In the helloWorldHandler, replace the method content with this code:


client.getConnection(res -> {
  if (!res.succeeded()) {
    System.err.println(res.cause());
    JsonObject obj = new JsonObject();
    obj.put("error", res.cause());
    routingContext.response().setStatusCode(500)
      .putHeader("content-type", "application/json; charset=utf-8")
      .end(Json.encodePrettily(obj));
    return;
  }
  SQLConnection connection = res.result();
  connection.query("SELECT 'Hello World' AS GREETING FROM DUMMY", res2 -> {
    if (!res2.succeeded()) {
      System.err.println(res2.cause());
      JsonObject obj = new JsonObject();
      obj.put("error", res2.cause());
      routingContext.response().setStatusCode(500)
        .putHeader("content-type", "application/json; charset=utf-8")
        .end(Json.encodePrettily(obj));
      return;
    }
    ResultSet rs = res2.result();
    routingContext
      .response()
      .putHeader("content-type", "application/json; charset=utf-8")
      .end(Json.encodePrettily(rs));
  });
});



Now, build the code again. Before you execute the .jar file, make sure you set your environment variables accordingly in the shell.


$ export HANA_URL=jdbc:sap://<your host>:3<your instance id>15/?autocommit=false
$ export HANA_USER=<user>
$ export HANA_PASSWORD=<your password>

 

After you execute the .jar file again, you can see the result in the browser.


You have now developed your first Vert.x backend on top of SAP HANA!

 

Conclusion

 

Vert.x and SAP HANA work very well together, especially for real-time applications. If you want to develop your web services on top of the JVM and want to avoid dealing with a servlet container or even a whole application server, Vert.x might be a great choice for you.

 

If you find any mistakes or have any feedback, please feel free to leave me a comment.

 

Work in progress:

 

  • Running Vert.x on SAP HANA Cloud Platform, using Neo SDK. Stay tuned!


How to Access ECC system from WEB IDE


SAP Web IDE is a browser-based development tool. It accelerates building modern applications for desktop and mobile devices with the latest UI technologies, and simplifies the end-to-end application life cycle and customer extensions for SAPUI5 and Fiori applications.

 

With powerful tools such as drag-and-drop components, templates and wizards, business analysts and designers can build their own Fiori and similar applications without writing code. The tool is also flexible for developers who believe that tools should never get in the way of the source code and who want to dive right into the code editor with SAPUI5-specific code completion.

 

Follow the steps below to create an account and access the ECC system.

 

How To Create Account in SAP Web IDE

 

Open the URL below and click on Register.

 

https://account.hanatrial.ondemand.com/

 

 


 

Fill in the details below.

 


 

After registering you will get a confirmation link sent to your registered email ID.

 


 

 


 

 

Now you will get a welcome screen like the one below.

 


 

How to Create a Destination:

 

Select Destinations to add our ECC system details, as shown in the screenshot below.

 


 

 

And then click on New Destination.

 


 

 

Enter the details as shown below and click on Save.

 



How to Access Our Destination (ECC) System

 

Now open the link below to create a sample application using our ECC service.

 

https://account.hanatrial.ondemand.com/cockpit#/acc/p1941799264trial/accountdashboard

 


 

Log in with your user details.

 


 

Now click on Subscriptions and, under that, click on Web IDE.

 


 

Now we will get a URL. Click on the Application URL to open the Web IDE editor.

 


 


Home Screen.


Choose "New Project from Template"


Select “SAP Fiori Master Detail Application” tile and click on “Next”.


Enter the Project Name and click on “Next”


Now it will ask for ECC system login details.


Enter the username and password and click on Login.


Now we can access all the OData services present in our ECC system. Select one OData service and click on "Next".


In the Template Customization screen we customize our screen by entering basic information, as shown below. It is based on our OData service.


And finally click on Finish.


The structure of our application is shown below in the Layout editor.


 

Now we can directly run our application to check the Master Detail Application output.

 

To Run Application.


Right click on Index.html-->Run-->Run as-->Web Application.



 

Output:

 

Enter the username and password.


 


 

 

Referred from: openSAP.

 

 

Thanks and Regards,

Ravi Varma I

How SAP Could Enable You and Me as Software Vendors with a "Backed by SAP" Program


At yesterday’s SAP Community Meet-up in Walldorf, there was a great discussion triggered by Dr. Christian Baader of SAP, who works with partners and tries to create conditions under which the partner ecosystem can thrive. The discussion revolved around the question: “What would it take for more individual developers to create real apps on HANA Cloud Platform, and market them through channels such as SAP’s app store?” I threw some ideas into the discussion, and today I decided to write them down and put them out for discussion with the wider community. I’m interested in your ideas, preferably if you come up with suggestions on how to close feasibility gaps.

 


Motivation

SAP wants to enable a thriving ecosystem around its technology platforms, specifically HANA Cloud Platform and HANA.

 


Situation

There is a large number of individual developers and small firms (1-4 employees/co-owners) for whom it is difficult to go to market with HCP-based apps for various reasons. With the right enablement, this could evolve into a thriving ecosystem that would increase the attractiveness of HCP, similar to the way competing platforms gain value when third-party apps are abundant.

 

This potential is currently hindered by the following factors:

  • There is a high cost involved for individual developers to acquire a developer license, join the partner program, and get access to "proper" development systems without the major limitations built into the HCP trial landscape.
  • The lack of a larger support organization creates a dependency on one individual, which is not acceptable for enterprise customers.

 


Challenge

In the worst-case scenario, a micro-ISV is no longer able to support their solution (the company shuts down, the developer dies, etc.), and a backup support organization needs to be able to take over and fix the issue.

What if SAP could provide this support organization and thus enable a large number of developers to go to market with solutions that are otherwise perfectly viable?

 


Solution

Set up a process and documentation standards that enable offshore support teams to quickly support a solution.

The support organization needs what they always need (same as with SAP's support infrastructure):

  • App must be coded according to guidelines.
  • Comprehensive documentation is required.
  • Use of SAP HCP infrastructure is required to ensure that support personnel has full access to source code repository as well as the ability to make changes and produce a new build anytime.
  • Fulfillment of these requirements needs to be ensured with reviews and final supportability approval.

(Please note that supportability approval is not the same as quality approval. SAP does not assert that the app is good and works properly; it merely asserts that SAP is able to provide support personnel in case a small partner ceases operations temporarily or permanently.)

This would enable support personnel to offer AGS-type support for apps they haven't built themselves and have no prior familiarity with, similar to the way support and maintenance teams today take over internal applications that were built elsewhere.

 


Cost aspect

The review process in particular is going to be a cost driver, so it's clear that the "Backed by SAP" program comes with a price tag. This means that it will not automatically be economically viable for very low-volume or low-price apps, but will focus on apps for which sufficient revenue through volume or price can be expected.

This is perfectly reasonable, because the program would add great value to micro-ISVs, customers, and SAP.

 


Other contributing factors

  • Demo Cloud as part of the development platform: A major contributing factor to the success of SAP Fiori has been the easily available “SAP Fiori, Demo Cloud Edition” at http://www.sapfioritrial.com/. No enterprise buyer buys an app from an SAP app store they haven’t experienced and tried out. For small vendors to be successful, an infrastructure similar to the SAP Fiori, Demo Cloud Edition is required that gives potential buyers easy access to demo and exploration versions of apps in the app store.

This should be integrated with the HCP development environment, so that developers can easily deploy a demo version of their app into a demo cloud; this demo version would then be easy to launch from the app store.

  • Demo cloud enables lead generation: Vendors are interested in sales leads. They want to know who has looked at their app and to start a dialog with potential customers. This should be integrated into the app store; for example, vendors could make a setting so that the demo version of the app is available to anyone who has shared their contact information and agreed that the vendor may reach out to them.
  • Beyond hello world: As mentioned earlier, the current HCP trial is somewhat limited in that it doesn't allow developers to explore the entire lifecycle of a cloud-based app; you can explore the coding aspect but not the interplay between, say, a development version and a currently productive version, going through the patch process, or preventing data loss when productive table structures are changed in the development version, and so on. It would be helpful if there were a trial landscape that is basically identical to the productive landscape, except for support and legal aspects.

 

Do you think such measures could enable individual developers and small companies to release and market real HCP-based apps? Is this something you have been waiting for? Personally, I think it would help a lot of people become product companies who are today limited to working as consultants. Let me know!

New Interactive Images in the Dynamic Tiering Installation and Update Guide


As part of the documentation updates for revision 112 of dynamic tiering, we've revised the images in the installation and update guide to make it easier to find what you're looking for.

 

The Installing SAP HANA Dynamic Tiering topic has a new image showing the four different installation scenarios covered in the documentation. Decide whether you'd prefer to use the GUI or console interface for installation, then click one of the green icons to go straight to the instructions you need either to install a new system with both SAP HANA and dynamic tiering, or to add dynamic tiering to your existing SAP HANA system.

 


If you're updating from an earlier version of dynamic tiering, the Updating SAP HANA Dynamic Tiering topic covers the differences between the regular and optimized update methods. Once you've decided which method to use, click on it to go to the right section, where you'll find more images to guide you through the documentation.

 


Like installation, regular updates can be done with the GUI or the console, and you can update both SAP HANA and dynamic tiering at the same time, or just dynamic tiering if you've already updated SAP HANA.

 

The optimized update method updates the entire SAP HANA system at once, so all you need to decide with this method is whether to use the GUI or console to get started.

 

What do you think? We're always looking for feedback, so if you have any questions or suggestions about these new images or any part of the dynamic tiering documentation, please add them to the comments section below.

Federating with SAP HANA


Introduction

Every five to six years, there comes a technology wave, and if you are able to catch it, it will take you a long way. Throughout my career, I’ve ridden several of these waves. MPP data warehouses brought us incredible speed for analytics and a few headaches for data integration. We’re seeing in-memory analytics reducing disk latency. Hadoop based technologies are opening up new solutions every day for storage and compute workloads while our source systems are still generating varying degrees of velocity, volume, and variety.

As a traditional ETL developer, I would usually try to figure out the best solution to acquire, cleanse, and store this data in an optimal format for analytics…usually a data warehouse. Depending on the business need, number of sources, and complexity, this approach is a long one and quite labor intensive. Source systems create new data faster than we can consume them in traditional models. Hence, we see many organizations adopting a Data Lake approach. Here, we are simply concerned with optimizing the acquisition and storage of any data source. We worry about consumption later.

While data federation has been around for years, traditional technologies typically dealt with federating a relational source with a tabular, single-file extract. Today, we're asking federation to handle relational stores, APIs, HDFS, JSON, AVRO, logs, and unstructured text. It's a tough task, but I was pretty impressed with SAP HANA's approach and implementation of data federation.

This post is not about SAP HANA, but rather focuses on its data federation capabilities. I will try to explain basics, best practices, and few tips and tricks I came across during my experience working with data federation in HANA.

Smart Data Access

SAP's data federation capability is built into the HANA database and is known as Smart Data Access (SDA). SDA eliminates the need to replicate data into SAP HANA; instead, it lets you query remote sources from HANA. SAP calls this ability to weave in a network of data sources the in-memory data fabric. SDA allows you to create virtual tables that point to tables on remote sources, and to write SQL queries in HANA that operate on these virtual tables. The HANA query processor optimizes these queries, executes only the relevant part of the query in the target database, returns the results to HANA, and completes the operation.
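To give a flavor of how this looks in SQL, here is a minimal sketch of creating and querying a virtual table; the remote source, schema and table names are illustrative, and the remote source is assumed to exist already:

-- Expose a table from an already-defined remote source as a virtual table in HANA
CREATE VIRTUAL TABLE "MYSCHEMA"."V_ORDERS"
  AT "MY_REMOTE_SRC"."<NULL>"."REMOTE_SCHEMA"."ORDERS";

-- Query it like any local table; HANA pushes the relevant parts down to the remote database
SELECT COUNT(*) FROM "MYSCHEMA"."V_ORDERS" WHERE "ORDER_DATE" >= '2016-01-01';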

Supported Connectivity

SDA was first introduced in SPS06. Its features have matured over several releases, and it supports connectivity to ASE, Teradata, IQ, HANA, Oracle, SQL Server, Netezza, DB2, MaxDB and Hadoop. There are just a few one-time setup steps involved when setting up remote sources for the first time.


All relational databases can be set up using ODBC drivers and the respective RDBMS drivers on the UNIX server where HANA is installed. Once the drivers are installed, create the remote sources using HANA studio. Refer to the SAP administration guide for version details.

There are a few different ways to set up a Hadoop remote source. The most common way is to use ODBC drivers and Hive/Spark drivers on the UNIX server where HANA is installed. Once the drivers are installed, create the remote source using HANA studio. Other options include connecting via an archive file or virtual UDFs from HANA studio, and via the Spark controller on Hadoop.
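For reference, remote source creation can also be scripted instead of done in HANA studio. A hedged sketch follows; the adapter name, DSN and credentials are placeholders, so check the administration guide for the exact adapter and configuration for your source and driver version:

-- Sketch: register a Hive remote source over an ODBC DSN configured on the HANA host
CREATE REMOTE SOURCE "HIVE_SRC" ADAPTER "hiveodbc"
  CONFIGURATION 'DSN=HIVE_DSN'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hive;password=*****';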

SDA Best Practices

Sometimes it is difficult to determine the optimal way to federate data, especially when dealing with Hadoop sources. We recommend a divide-and-conquer approach: let your remote sources process data and query them from HANA as needed. For example, you would push high-volume data processing down to your Hadoop system, taking advantage of commodity hardware and its cheaper processing power. You can leverage cheaper storage options and keep data in those databases, while only bringing the data that serves your analytical needs into HANA via SDA.

SDA submits the query to the remote server, so performance depends on how powerful the remote source is. That may or may not be adequate for your use case, and you might choose to copy data into HANA instead.

Leveraging Statistics – HANA has the ability to calculate statistics on remote data sources. These statistics help the query optimizer decide how to join tables, including remote tables, and in which order to join them. There are two types of statistics you can enable: the histogram type only saves counts, while the simple type saves information such as counts, distinct counts, minimum and maximum values. Depending on your needs, you can enable either type to improve performance.
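A hedged sketch of enabling these statistics on a virtual table; object and column names are illustrative, and the exact options available depend on your HANA revision:

-- Simple statistics (count, distinct count, min, max) on a join column of a virtual table
CREATE STATISTICS ON "MYSCHEMA"."V_ORDERS" ("CUSTOMER_ID") TYPE SIMPLE;

-- Histogram statistics on a filter column of the same virtual table
CREATE STATISTICS ON "MYSCHEMA"."V_ORDERS" ("ORDER_DATE") TYPE HISTOGRAM;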

Querying Hadoop – When federating data from Hadoop, there are a few tips and tricks we can use for better performance (a sketch of the corresponding Hive settings follows this list):

  • Remote caching capabilities – Frequently accessed queries on the Hadoop system should be cached. HANA provides remote caching capabilities for Hadoop systems, which saves the results of frequently accessed queries into a separate table for faster execution and avoids running a MapReduce job on the Hadoop system every time the same query is executed via HANA.
  • Using ORC files – Use the ORC file format for every Hive table. Hive supports ORC, a table storage format that optimizes speed through techniques like predicate push-down and compression. If you run into issues when querying tables with a billion-plus records via SDA, this approach resolves them.
  • Use of vectorization – Vectorized query execution improves the performance of operations like scans, aggregations, filters and joins by performing them in batches of 1024 rows at a time instead of a single row at a time.
  • Cost-based query optimization – Cost-based optimization performs further optimizations based on query cost, resulting in potentially different decisions: how to order joins, which type of join to perform, the degree of parallelism, and others.
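On the Hive side, these tips translate roughly into DDL and settings like the following sketch; the table definition is illustrative, while the property names follow standard Hive conventions and should be adapted to your cluster:

-- Store the Hive table in ORC format to benefit from predicate push-down and compression
CREATE TABLE sales_orc (customer_id STRING, product_id STRING, sales DOUBLE)
STORED AS ORC;

-- Enable vectorized query execution (processes batches of 1024 rows)
SET hive.vectorized.execution.enabled = true;

-- Enable the cost-based optimizer
SET hive.cbo.enable = true;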

 

 

Smart Data Access: Capabilities and Limitations

  • Capabilities
    • You can create attribute views, analytic views and calculation views, and leverage HANA's tools and capabilities on remote tables just as if they were in the HANA database. This extends the ability to execute results using HANA's calculation engine, which can perform better than normal SQL execution in other databases.
    • In the latest version of HANA, SDA allows users to insert/update/delete data on remote sources, and SDA now works with certain data types such as BLOB/CLOB, which wasn't possible in the initial version.
  • Limitations
    • HANA is limited to the capabilities of Hive when querying Hadoop.

 

Note: Please check out the iOLAP blog for other BI and technology-related posts.

Restricting _SYS_BIC views using stored-procedures


Introduction

 

The activated models in HANA result in column views under the _SYS_BIC schema. End users working with front-end tools like SAP BusinessObjects (IDT), SAP Lumira and SAP Predictive Analytics need SELECT rights on the views in _SYS_BIC to consume the activated HANA models. The current design in HANA is such that an end user has access either to all of the activated views under _SYS_BIC or to none. This raises the challenge of how to grant access only to a specific view or set of views to a user/role while restricting that user/role from accessing other views that are irrelevant to them. In this article I use a stored-procedure approach to allow and restrict access to activated views in _SYS_BIC.

 

Create A Role

Using Studio: Log in to the HANA system using SAP HANA Studio. Navigate to Security > Roles. Right-click on Roles and select New Role. Enter the role name, say SYS_BIC_PACKAGE_A_READ, and activate it.

 

Using scripts:

role demoroles.roles::SYS_BIC_PACKAGE_A_READ{

 

}

NOTE: There is no body inside the script.

Refer to A step-by-step guide to create design-time (script based) Roles in SAP HANA for creating script based roles.

 

Stored Procedure to  assign _SYS_BIC views to a Role

 

The source code for the procedure SPROC_GRANT_SELECT_ON_SYS_BIC_VIEWS, which assigns _SYS_BIC views to a role, is given below. This procedure takes two parameters: viewName, the name of the view to be assigned to the role, and roleName. The viewName parameter accepts a wildcard (explained in the Usage scenarios section below).

 

The system view VIEWS contains a list of all the views in HANA, including the developed and activated SAP HANA model views – attribute, analytic and calculation views. The GRANTED_PRIVILEGES view contains all the privileges assigned to a role/user. The procedure uses these two views and defines a cursor (lines 7-8) to retrieve only the activated views (parameter 1) that are not yet assigned to the role/user (parameter 2). If the passed view (parameter 1) is already assigned to the passed role (parameter 2), the procedure simply exits without any action. The cursor selects only views of type JOIN, CALC or OLAP. The cursor declared in lines 7-8 is redefined in lines 11-12 if there is no wildcard in the viewName parameter.

 

The code at line 14,

    dynSQL := 'delete from YOURSCHEMA.DUMPTABLE';

is optional. I used DUMPTABLE, a table with a single character column of size 5000, to capture the generated GRANT statements and to verify which views the procedure selects – a kind of debugging aid. Line 15 empties the table and line 20 inserts the generated GRANT statement.
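For completeness, here is a minimal sketch of that helper table; the column name is my own choice, since the original only states that it is a single character column of size 5000:

-- Helper table used only to capture the generated GRANT/REVOKE statements for debugging
CREATE COLUMN TABLE "YOURSCHEMA"."DUMPTABLE" (
  "STMT" VARCHAR(5000)
);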

 

Lines 16 to the end – the FOR loop processes the cursor result set, generating the GRANT statement and executing it for each record in the cursor result set.

 

  1. CREATE PROCEDURE "YOURSCHEMA"."SPROC_GRANT_SELECT_ON_SYS_BIC_VIEWS" (in viewName varchar(100), in roleName varchar(100) )
  2. LANGUAGE SQLSCRIPT
  3. SQL SECURITY INVOKER
  4. AS
  5. BEGIN
  6. DECLARE dynSQL VARCHAR(500) :='';
  7. DECLARE CURSOR SYS_BIC_VIEWS for SELECT view_name FROM VIEWS WHERE SCHEMA_NAME = '_SYS_BIC' AND VIEW_NAME LIKE :viewName and view_type in ('CALC','JOIN','OLAP') and is_valid='TRUE'
  8. and view_name not in (select object_name from public.granted_privileges where object_name like :viewName and grantee=:roleName and is_valid='TRUE');
  9. if LOCATE(trim(viewName),'%',1,1)=0  --if there is no wildcard in the view name, remove "like" clause from cursor definition.
  10. then
  11. DECLARE CURSOR SYS_BIC_VIEWS for SELECT view_name FROM VIEWS WHERE SCHEMA_NAME = '_SYS_BIC' AND VIEW_NAME=:viewName and view_type in ('CALC','JOIN','OLAP') and is_valid='TRUE'
  12. and view_name not in (select object_name from public.granted_privileges where object_name = :viewName and grantee=:roleName and is_valid='TRUE');
  13. end if;
  14. dynSQL := 'delete from YOURSCHEMA.DUMPTABLE';
  15. exec dynSQL;
  16. FOR rs_SYS_BIC_VIEWS as SYS_BIC_VIEWS DO
  17. if LOCATE(trim(rs_SYS_BIC_VIEWS.view_name),'/olap',1,1)=0 --skip /olap view components if they are present in the result set.
  18. then
  19. dynSQL:='GRANT SELECT ON "_SYS_BIC"."'||rs_SYS_BIC_VIEWS.view_name||'" to "'||roleName||'"';
  20. insert into "YOURSCHEMA"."DUMPTABLE" values(dynSQL);
  21. exec dynSQL;
  22. end if;
  23. END FOR;
  24. END;

 

Stored Procedure to revoke _SYS_BIC views from a Role

 

This procedure is used to revoke the assigned views from a role/user. The functional structure of this procedure is the same as that of the previous procedure.

 

  1. CREATE PROCEDURE "YOURSCHEMA"."SPROC_REVOKE_SELECT_FROM_SYS_BIC_VIEWS" (in viewName varchar(100), in roleName varchar(100) )
  2. LANGUAGE SQLSCRIPT
  3. SQL SECURITY INVOKER
  4. AS
  5. BEGIN
  6. DECLARE vname VARCHAR(500) := :viewName;
  7. DECLARE CURSOR grantedPrivileges for select OBJECT_NAME from PUBLIC.GRANTED_PRIVILEGES where object_name like :viewName and GRANTEE=:roleName and schema_name='_SYS_BIC';
  8. DECLARE dynSQL VARCHAR(500) :='';
  9. DECLARE grantString VARCHAR(50) :='GRANT SELECT ON';
  10. if LOCATE(trim(viewName),'%',1,1)=0  --if there is no wildcard in the view name, remove "like" clause from cursor definition.
  11. then
  12. DECLARE CURSOR grantedPrivileges for select OBJECT_NAME from PUBLIC.GRANTED_PRIVILEGES where object_name = :viewName and GRANTEE=:roleName and schema_name='_SYS_BIC';
  13. end if;
  14. dynSQL := 'delete from YOURSCHEMA.DUMPTABLE';
  15. exec dynSQL;
  16. FOR grantedPrivilegeRS as grantedPrivileges DO
  17. dynSQL:='REVOKE SELECT ON "_SYS_BIC"."'||grantedPrivilegeRS.object_name||'" from "'||roleName||'"';
  18. insert into "YOURSCHEMA"."DUMPTABLE" values(dynSQL);
  19. exec dynSQL;
  20. END FOR;
  21. END;

Assign/Revoke privilege/rights to the user/role

 

In SAP HANA, open the role SYS_BIC_PACKAGE_A_READ created above and make sure there are no entries under Granted Roles, Part of Roles, System Privileges, Object Privileges, Analytic Privileges, Package Privileges, Application Privileges and Privileges on Users.

Let us assume that your HANA views have the following structure:

Root package
  a
    b
      c
        Attribute Views
          AT_MY_ATVIEW_DATE_DIM
          AT_MY_ATVIEW_REGION_DIM
        Analytic Views
          AV_MY_AVVIEW_SALES
          AV_MY_AVVIEW_APPLICATIONS
        Calculation Views
          CL_MY_CLVIEW_ONE
          CL_MY_CLVIEW_TWO

 

Usage scenarios

  1. To assign all views under the package a.b.c to the SYS_BIC_PACKAGE_A_READ role, use a wildcard in parameter 1 as given below.

     CALL "YOURSCHEMA"."SPROC_GRANT_SELECT_ON_SYS_BIC_VIEWS"('a.b.c%','SYS_BIC_PACKAGE_A_READ');

Now verify that the role has the views added to it under Object Privileges tab.

 

     2. To assign a single view, say AV_MY_AVVIEW_APPLICATIONS, to a role, say roleB, give the absolute path of the view for parameter 1 as given below.

     CALL "YOURSCHEMA"."SPROC_GRANT_SELECT_ON_SYS_BIC_VIEWS"('a.b.c.AV_MY_AVVIEW_APPLICATIONS','roleB');

 

     3. To revoke all the assigned views under the package a.b.c from the SYS_BIC_PACKAGE_A_READ role, use a wildcard in parameter 1 as given below.

     CALL "YOURSCHEMA"."SPROC_REVOKE_SELECT_FROM_SYS_BIC_VIEWS"('a.b.c%','SYS_BIC_PACKAGE_A_READ');

Now verify that the role has no views listed  under Object Privileges tab.

 

     4. To revoke a single view from a role, pass the absolute path of the view in parameter 1 as given below.

     CALL "YOURSCHEMA"."SPROC_REVOKE_SELECT_FROM_SYS_BIC_VIEWS"('a.b.c.AV_MY_AVVIEW_APPLICATIONS','SYS_BIC_PACKAGE_A_READ');

 

     5. If new models are developed and activated during the development process and all the new _SYS_BIC column views need to be assigned to the role, call the revoke procedure to revoke all the granted views and then call the assign procedure to grant all the views, including those of the new models.

 

Revoke all views from the role:

CALL "YOURSCHEMA"."SPROC_REVOKE_SELECT_FROM_SYS_BIC_VIEWS"('a.b.c%','SYS_BIC_PACKAGE_A_READ');

Assign all views to the role (including new views):

CALL "YOURSCHEMA"."SPROC_GRANT_SELECT_ON_SYS_BIC_VIEWS"('a.b.c%','SYS_BIC_PACKAGE_A_READ');


References

  1. SAP HANA Administration Guide
  2. SAP HANA Developer Guide
  3. SAP HANA System Views Reference

 

Author

Pals Nagaraj, PMP, CMC is a technology/management consultant with extensive experience in providing business analytics solutions using SAP BI, SAP Data Services, SAP HANA and analytics platforms to federal, state and commercial clients. He is certified in SAP BI and SAP HANA. He can be reached at pals@strategicitech.com.

 

 



Join Between the Tables - Points To Note


I have come across some join scenarios where I learned a few best practices, which I would like to share.

 

1. Rank Node (Ex: Get Distinct Records By Latest Date) before Joins:


There are two tables: CUSTOMER and EMAIL_SUBSCRIPTION.

The CUSTOMER table contains the columns Customer_ID and Email_Address.

The EMAIL_SUBSCRIPTION table contains the columns Email_Address, Subscription_Status and Modified_On.


 

Tables: CUSTOMER and EMAIL_SUBSCRIPTION (sample data shown as screenshots in the original post)

A Subscription_Status of 1 means "Yes" and 0 means "No".
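Since the original sample data is shown only as screenshots, here is a hedged sketch of the two tables; the data types are assumptions:

CREATE COLUMN TABLE "CUSTOMER" (
  "CUSTOMER_ID"   VARCHAR(10),
  "EMAIL_ADDRESS" VARCHAR(100)
);

CREATE COLUMN TABLE "EMAIL_SUBSCRIPTION" (
  "EMAIL_ADDRESS"       VARCHAR(100),
  "SUBSCRIPTION_STATUS" INTEGER,      -- 1 = Yes, 0 = No
  "MODIFIED_ON"         TIMESTAMP
);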

 



Requirement:

 

To get the CURRENT_SUBSCRIPTION_STATUS for each CUSTOMER.


 

Solution:



Wrong Approach


In the Dimension Calculation view, the two tables are directly included in the Join node.




Select the required fields and activate the calculation view. The data preview of the calculation view is:




This gives wrong results. Customers C1 and C2 both show subscription statuses Yes and No, so we cannot determine the current subscription status of each customer.


So before joining these two tables, the current subscription status of each email has to be determined using the latest Modified_On date, and that output has to be connected to the CUSTOMER table in the Join node.

Correct Approach

In the Dimension Calculation view, include a Rank node in the calculation view.

Add the EMAIL_SUBSCRIPTION table into it.

 

In the Rank node, enter the following properties: Sort Direction Descending (Top N) on the MODIFIED_ON column, and Threshold 1.

 

It will then filter the records based on the latest Modified_On value and return distinct records with the latest timestamp.

 


 

Next, in the Join node, add the CUSTOMER table and the RANK_1 output.

 


 

Select the required fields and activate the calculation view. Let's view the final output of the calculation view.


This gives the correct results. The email subscription status of customer C1 is NO, and the email subscription status of customer C2 is YES. Ranking the data before the join produces distinct records by latest timestamp, which is why this approach gives correct results.
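For readers who prefer SQL to the graphical modeler, the same rank-then-join pattern can be sketched like this, using the column names from the tables above:

SELECT
  c."CUSTOMER_ID",
  s."SUBSCRIPTION_STATUS" AS "CURRENT_SUBSCRIPTION_STATUS"
FROM "CUSTOMER" c
LEFT JOIN (
  SELECT
    "EMAIL_ADDRESS",
    "SUBSCRIPTION_STATUS",
    ROW_NUMBER() OVER (PARTITION BY "EMAIL_ADDRESS"
                       ORDER BY "MODIFIED_ON" DESC) AS "RN"   -- rank: latest record first
  FROM "EMAIL_SUBSCRIPTION"
) s
  ON  c."EMAIL_ADDRESS" = s."EMAIL_ADDRESS"
  AND s."RN" = 1;                                             -- keep only the latest status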

 

 

 

2. Aggregate Table before Joins:


There are two fact tables: PLANNING_SALES and ACTUAL_SALES.

Both tables contain three columns: Customer_Id, Product_Id and Sales.
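As above, a hedged sketch of the two fact tables; the data types are assumptions:

CREATE COLUMN TABLE "PLANNING_SALES" (
  "CUSTOMER_ID" VARCHAR(10),
  "PRODUCT_ID"  VARCHAR(10),
  "SALES"       DECIMAL(15,2)
);

CREATE COLUMN TABLE "ACTUAL_SALES" (
  "CUSTOMER_ID" VARCHAR(10),
  "PRODUCT_ID"  VARCHAR(10),
  "SALES"       DECIMAL(15,2)
);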

 

Tables: PLANNING_SALES and ACTUAL_SALES (sample data shown as screenshots in the original post)

 

Requirement:

 

PLANNING and ACTUAL sales by product.

 


 

Solution:


 


Wrong Approach


In the Calculation view, the two tables are directly included in the Join node.



Select the required fields and activate the calculation view. The data preview of the calculation view is:




This gives wrong results. Let's check the output at the Join node level.




The cause of the problem is that both products P1 and P2 have two entries each in the PLANNING_SALES and ACTUAL_SALES tables. During the join between these two tables, each record in the PLANNING table matches each record in the ACTUAL table, so the row count is doubled and the subsequent aggregation gives wrong results.


To avoid this problem, Sales first has to be aggregated by product for both the PLANNING table and the ACTUAL table, and only then joined.

Correct Approach

In the Calculation view:

 

The PLANNING_SALES table is included in Aggregation node 1, and Sales is aggregated by product. The ACTUAL_SALES table is included in Aggregation node 2, and Sales is aggregated by product.

Then, in the Join node, the two aggregated outputs, Aggregation_1 and Aggregation_2, are joined.




In the Join node, the join is defined between the two Aggregation nodes.


Select the required fields and activate the calculation view. Let's view the final output of the calculation view.


This gives the correct results. Aggregating before the join produces one record per product with aggregated values, which is why this approach gives correct results.
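In plain SQL, the aggregate-then-join pattern looks roughly like the sketch below; a FULL OUTER JOIN is used here so that products present in only one table are not lost, and you can adapt the join type to your requirement:

SELECT
  COALESCE(p."PRODUCT_ID", a."PRODUCT_ID") AS "PRODUCT_ID",
  p."PLANNING_SALES",
  a."ACTUAL_SALES"
FROM (
  SELECT "PRODUCT_ID", SUM("SALES") AS "PLANNING_SALES"
  FROM "PLANNING_SALES"
  GROUP BY "PRODUCT_ID"                      -- aggregate before joining
) p
FULL OUTER JOIN (
  SELECT "PRODUCT_ID", SUM("SALES") AS "ACTUAL_SALES"
  FROM "ACTUAL_SALES"
  GROUP BY "PRODUCT_ID"
) a
  ON p."PRODUCT_ID" = a."PRODUCT_ID";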


 

Regards,

Muthuram

Significance of SAP HANA in the major Business Sectors


SAP HANA, or SAP High-Performance Analytic Appliance, is fast data-processing software that combines huge amounts of valuable data and delivers pertinent results for the business at high speed. It is an in-memory data platform that can be deployed as an on-premise appliance or in the cloud. SAP HANA includes various components such as:

 

  • Replication server
  • SAP HANA Database
  • Sybase replication technology
  • SAP HANA Direct Extractor connection
  • SAP SLT or System Landscape Transformation

 

Furthermore, SAP HANA editions can be categorized as SAP HANA Platform Edition, Enterprise Edition and Extended Edition.

 


 

Benefits of using SAP HANA:

 

  • It enables in-memory computing and processes real-time data.
  • It provides quick processing of huge data volumes and lets the user determine and investigate all types of analytical and transactional data.
  • It requires less testing, hardware and maintenance, thus reducing the total cost of ownership (TCO).
  • It reduces the difficulty of data management and data manipulation.

 

Thus, SAP HANA can help in increasing the revenue of an organization.

 

Career in SAP HANA:

 

SAP HANA has been creating lots of opportunities for professionals, and there are institutions that offer certification in SAP HANA. Moreover, the pay scale for HANA professionals is quite lucrative: on average they can earn 8 lacs per annum, whereas a fresher can expect a package of 3.81 lacs per annum.

 

Courses that are included in SAP HANA:

 

Listed below are the topics typically included in an SAP HANA course:


  • SAP HANA in-memory strategy
  • An introduction to SAP HANA
  • Attribute view and Analytics view
  • User authorization and management
  • Replication server and Replication Process
  • SAP HANA Studio—administration/ navigation view
  • Architecture overview, HANA reporting, data modeling, SAP HANA DB

 

Course fees for SAP HANA Certification:

 

  • The course fee for “An introduction to SAP HANA” is approximately INR 40,000 to 50,000 and the duration is for 2 days.
  • The course fee for “SAP HANA - Implementation and Modeling” is approximately INR 60,000 to 70,000 and the duration is for 3 days.
  • The HANA certification is approximately INR 35,000 to 40,000.

 

In order to pursue a career in SAP HANA, the candidate must have:

 

  • Elementary knowledge of database domains and information technology
  • Understanding of BI reporting tools and Business Warehouse (BW)
  • Basic knowledge of business processes and applications
  • A degree from an affiliated university

 

It's time you get certified as an SAP HANA consultant and enjoy a bright career path ahead!

Introduction to Kerberos Constrained Delegation – SAP HANA Smart Data Access HANA to HANA Scenarios


Overview

 

Kerberos is one of the single sign-on (SSO) mechanisms supported by HANA. A user connecting to SAP HANA via Kerberos must have an SAP HANA database user that is mapped to the external identity in a key distribution center (KDC) such as Microsoft Active Directory. HANA supports two different types of Kerberos authentication: direct authentication and indirect authentication via constrained delegation. Kerberos in the SDA context is the latter scenario, indirect authentication from other SAP HANA databases via constrained delegation.

 

Kerberos constrained delegation for Smart Data Access HANA to HANA scenarios is new in SPS12. As of May 2016, this feature is only officially supported for connections between two HANA SPS12 systems. The advantage of this feature is that it allows you to log on to several SAP HANA systems while only explicitly authenticating once. This means one less password to remember when accessing data from remote HANA systems, enhanced security, and a smoother workflow. Previously, only authentication via user name/password was available.

 

There are four main steps that take place when using Kerberos SSO in a HANA to HANA SDA scenario. Image 1 below shows an overview of these four steps.


 

Image 1: Overview of Kerberos constrained delegation in the SDA HANA to HANA scenario

 

 

In step one, the HANA user logs into the source HANA system using any authentication method (please note, it is not necessary to log in to the source HANA via Kerberos). In step two, the source HANA requests a delegation ticket from the external Key Distribution Center (KDC) on behalf of the user. In step three, the KDC issues the Kerberos constrained delegation ticket for the user. Finally, in step four, the target HANA uses the constrained delegation ticket to authenticate the user.

 

Configuration

 

 

This section is intended as a general overview of the steps that need to be taken to enable Kerberos SSO for SDA in HANA to HANA scenarios.

 

Configuring the KDC

 

The first step you need to take is to configure the KDC for the source and the target HANA systems.

 

For the source system, you need to create a Unix computer account in the KDC for the SAP HANA 1 source system and mark it as trusted for delegation to the SAP HANA 2 target system's hdb service. You also need to add a host keytab on the HDB server to automatically authenticate the SAP HANA 1 source system with the Unix computer account.

 

For the target system SAP HANA 2, configure Kerberos authentication as normal (as in previous versions) by adding the hdb service to the KDC; no additional Kerberos configuration steps are necessary.

 

Detailed instructions on how to execute these configuration changes can be found in SAP Note 2303807.

 

Configuring HANA

 

On the source HANA server, create a new HANA user with a Kerberos external identity in HANA Studio, and grant this user the "Create Remote Source" privilege (a scripted alternative is sketched after Image 2 below).


Image 2: Creating a new user in the source system HANA 1
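If you prefer to script this step instead of using the Studio dialog, a minimal SQL sketch could look like the following; the user name and Kerberos principal are illustrative:

-- Create a database user mapped to an external Kerberos identity
-- (the principal is an example; use the UPN from your KDC/Active Directory)
CREATE USER SDA_KRB_USER WITH IDENTITY 'sda.user@EXAMPLE.COM' FOR KERBEROS;

-- Allow the user to create remote sources on the source system
GRANT CREATE REMOTE SOURCE TO SDA_KRB_USER;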

 

 

Next, connect to the source HANA system HANA 1 with the new HANA user. Open the Provisioning folder and right-click on the Remote Sources folder to create a new remote source.


Image 3: Selecting a new remote source to add

 

 

Create a new remote source to the target HANA server HANA 2 and select SSO (Kerberos) as the credential mode.


 

Image 4: Adding a new SSO (Kerberos) enabled remote source

 

 

Ensure that on the target HANA server HANA 2 there is a HANA user with the same Kerberos external identity as the Kerberos user created on the source system HANA 1.

 

If you have correctly configured Kerberos SSO, now when you browse the remote HANA system, the session will open automatically with this new HANA user.

 

Additional information

 

For more information on Kerberos and HANA please refer to:

 

HANA Security Guide

SAP Note 1837331 - How-To: HANA DB SSO Kerberos/ Active Directory

SAP Note 2303807 - SAP HANA Smart Data Access: SSO with Kerberos and Microsoft Windows Active Directory

What’s new in SAP HANA SPS12 – Smart Data Access


Overview

 

The SPS12 release is upon us, and with it come a few great new features for SDA: Kerberos constrained delegation, UPSERT support, ROW_NUMBER support, performance optimizations and more.

 

What is SAP HANA Smart Data Access?

 

For those of you not so familiar with SDA, it has been available as part of SAP HANA since SPS6 and gives you the power to expose data from remote sources as virtual tables in SAP HANA. In many situations this is a huge advantage compared to traditional ETL processes, as it gives you a cost-efficient, easy-to-deploy option to get real-time data visibility across fragmented data sources. Not to mention the advantage it brings in terms of location-agnostic development when building applications on HANA that need to leverage data from multiple sources.

 

Picture5.png

Image 1: SAP HANA SDA overview
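As a quick illustration of the idea (the remote source, schema and table names below are purely hypothetical), a remote table is exposed as a virtual table and can then be queried like any local table:

-- Expose a table from the remote source "MY_REMOTE" as a virtual table
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_ORDERS"
    AT "MY_REMOTE"."<NULL>"."REMOTE_SCHEMA"."ORDERS";

-- From here on it behaves like a local table in queries and views
SELECT COUNT(*) FROM "MYSCHEMA"."VT_ORDERS";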

 

 

New SPS12 Features

 

Kerberos constrained delegation is one of the single sign-on (SSO) mechanisms that is supported by HANA. New in SPS12 is the ability to leverage Kerberos in HANA to HANA SDA scenarios. This means one less password to remember when accessing data from remote HANA systems, enhanced security, and a smoother workflow. There are a few changes that need to be made to configure Kerberos for SDA use cases; they include Kerberos/Key Distribution Center (KDC) configuration, Smart Data Access configuration, and changes to the SAP HANA users themselves. You can find an overview for how to configure Kerberos in this SCN Blog and detailed instructions for how to configure it in SAP Note 2303807.

 

Also new in SPS12 is support for the use of two SQL functions with virtual tables: UPSERT and ROW_NUMBER. UPSERT adds to the already supported write functions: INSERT/UPDATE/DELETE and is currently supported for HANA to HANA, SAP IQ, ASE, Teradata, and Microsoft SQL Server. You may find it comes in particularly handy in IoT use cases where you have a lot of events and tables that need to be updated. ROW_NUMBER is useful for high volume processing in BW, and allows for parallel fetching of data. It is currently supported for HANA to HANA, SAP IQ, Teradata, Oracle, and Microsoft SQL Server.
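For example, assuming a hypothetical virtual table VT_EVENTS with primary key EVENT_ID, both can be used directly against the virtual table:

-- Insert-or-update against the remote table through its virtual table
UPSERT "MYSCHEMA"."VT_EVENTS" ("EVENT_ID", "STATUS")
    VALUES (1001, 'PROCESSED') WITH PRIMARY KEY;

-- Window numbering, e.g. as a basis for parallel, chunked fetching
SELECT "EVENT_ID", "STATUS",
       ROW_NUMBER() OVER (ORDER BY "EVENT_ID") AS "RN"
FROM "MYSCHEMA"."VT_EVENTS";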

 

As of SPS11 SAP HANA SDA can leverage both native SDA adapters as well as SAP HANA Smart Data Integration (SDI) adapters to access even more 3rd party remote sources and their versions than ever before. Since SDA is a HANA Core feature, SDI adapters have been made available for SDA use cases at no additional cost. Please do keep in mind that unless SAP HANA SDI is included in your license agreement this does not entitle you to use SDI adapters for any other purpose than SDA.

 

Let’s say you have a running project where you have an SDA remote source defined, with a lot of virtual tables and HANA views on top of it. It would be pretty painful to drop this remote source, since all of the dependent virtual tables and views would be dropped as well. So what you can do, and what has been supported since SPS11, is convert this SDA remote source into an SDI remote source. New with the SPS12 release is a supported conversion for the Teradata adapter. Apart from the clear advantage of accessing more remote source versions, another advantage of converting the adapter is that unreliable ODBC drivers for the remote sources no longer affect the core database. Before converting SDA adapters to SDI adapters, you need to ensure that the Data Provisioning (DP) server has been started and that the DP Server Adapters (SDI adapters) have been configured. This blog post may be useful as a starting point for how to configure SDI adapters.

 

Picture6.png

 

Image 2: Supported conversions of SDA adapters to Data Provisioning server (SDI) adapters.

 

Finally, with the SPS12 release we also have some enhancements in HANA to HANA performance and functionality. We have added support for alphanumeric data types, which improves query execution and optimization. Enterprise Semantic Search (ESS) is also supported for doing searches over data in virtual tables. This is part of the broader ESS project, with the goal of being able to access and search data stored in remote systems. Last but certainly not least, we have also made significant “under the hood” enhancements in HANA to HANA performance. We are no longer converting from HANA structures to ODBC structures and back to HANA structures. The data is now transferred in a compressed format using LZW lossless compression. The improvement is especially notable for queries containing a large number of tuples.

 

Thanks for reading the SDA SPS12 what’s new blog. Comments and questions are welcome.

SAP HANA SPS 12: New Developer Features


Overview

SAP HANA SPS 12 was released last week and of course we have improvements and additions targeted at developers in this new release. With SPS 11 we saw major architectural changes and additions with the introduction of XS advanced and HANA Deployment Infrastructure (HDI). Naturally, the first release after such a major change primarily involves left-over minor features, infrastructure improvements and other incremental changes. That is certainly the case with SPS 12 when it comes to developer features; although, as SPS 11 was so major and is still quite new to many people, we will review some of those major changes here as well.

 

XS Advanced

One of the biggest changes to the SAP HANA architecture was the introduction of XS advanced in SPS 11. SAP HANA extended application services in SPS 11 represents an evolution of the application server architecture building upon the previous strengths while expanding the technical scope. While I don't want to repeat all the architectural features which came with XS advanced in SPS 11, you can review them in this blog: SAP HANA SPS 11: New Developer Features; XS Advanced


Most of the additions to XS advanced architecture in SPS 12 are infrastructure related and designed to increase scale and performance options. Most notable among these features is the new concept of partial runtime download. Previously, a buildpack's only option to create a droplet for an application was downloading the appropriate runtime (for example a Node.js runtime for node apps or jvm/tomcat for Java apps), materializing all files into the file system and sending the whole droplet back to the XSA Controller. As runtimes might be fairly large in size and consist of thousands of files, this requires a lot of file system and network operations and slows down the staging process significantly.


As an optimization of the classic runtime download, the XS Controller in SPS 12 provides an enhanced runtime download interface for buildpacks: It enables buildpacks to download only those files to be analyzed or modified by the buildpack and omit the rest of the files. The omitted files will be added by the XS controller in a low cost operation (no file system and network access) after the buildpack has finished and merged with the files added / modified by the buildpack. This speeds up the staging process significantly.


Node.js

Node.js support for the JavaScript runtime in XSA was a major new feature in SPS 11 as well.  For the basics of what was introduced and supported as of SPS 11, please see this blog: SAP HANA SPS 11: New Developer Features; Node.js


Already in SPS 11 the Node.js runtime of XSA contained an implementation of XSOdata to allow for easy migration of existing services from XS Classic as well as reuse of developers' existing knowledge of how to create OData services. In SPS 11 we supported these major features of OData in the XSOdata runtime of XSA:


Service Document = scheme serviceRoot

EntitySet = scheme serviceRoot "/" entitySet

Entity = scheme serviceRoot "/" entitySet "(" keyPredicate ")"

Navigation = scheme serviceRoot "/" entitySet "(" keyPredicate ")/" entityNavProperty

Metadata = scheme serviceRoot "/" $metadata

Batch handling = scheme serviceRoot "/" $batch

System Query Options: $top, $skip, $select, $expand, $filter,$orderby,$inlinecount,$format

CUD Entity

Definition of OData schema namespace

Exposure of HANA tables and views as EntitySet

Create/update/delete restrictions on EntitySet level

Property Projection: Expose a subset of the table columns as properties of an OData EntityType

Automatic OData key generation, e.g. required for aggregated views

Simple and complex associations

Data aggregation

Calculation views

Parameter EntitySets for calculation views

Nullable properties

Cache Control via cache header of $metadata requests

Custom exits (JavaScript and SQL Script) for modification and validation requests
(Only Entity requests)

Custom exits in batch requests (Only entity request)


This was a considerable list of features but did not yet match 100% of the features of XSOdata in XS Classic. With SPS 12 we continue to close the feature gaps by introducing these additional OData features into the Node.js based version of XSOdata:


$links requests, e.g. EntitySet/$links/NavigationProperty

$links requests inside batch requests

$links modification requests with custom exits (also as part of batch requests)

ETAG handling for conditional requests to support caching and optimistic concurrency control


Beyond SPS 12, we have just a few features still to cover. We have the following planned for the future:


Finalize feature parity to XS Classic, e.g. SAP annotations in metadata

Additional authorization checks via scopes on Service level and EntitySet level

We will soon switch XSOData V2.0 into maintenance mode as we will have a strong focus on OData V4 development (OASIS OData standard)

No new major features are planned for V2.0; only very limited, important further features, based on customer requirements, will be implemented.


In the Node.js modules delivered by SAP we also see several new features.  For example, the XSJS compatibility module now supports $.utils.createUuid(), $.utils.stringify(arrayBuffer) and all the functions under $.util.codec.*.  We also see that the sap-hdb-connect module has been deprecated in favor of the largely similar Node.js module sap-hdbext.

 

Development Tools

The final piece of the new developer experience in SAP HANA SPS 11 shipped in late March 2016. The SAP Web IDE for SAP HANA was made available for download from the Service Marketplace and installation onto HANA SPS 11 systems. SAP Web IDE for SAP HANA provides a comprehensive web-based end-to-end development experience for creating SAP HANA native applications:

  • Development of SAP HANA content and models
  • UI development with SAPUI5
  • Node.js or XSJS business code
  • Git integration

 

However as the SAP Web IDE for SAP HANA shipped later than the initial release of SPS 11, for many people SPS 12 will represent their first exposure to the new developer tooling.  Therefore you might want to first review the launch materials which were created for the SAP Web IDE for SAP HANA here:

SAP HANA SPS 11: New Developer Features; SAP Web IDE for SAP HANA

 

Although the delivery of the first version of the SAP Web IDE for SAP HANA was just two months ago, we still find some nice usability improvements in the SPS 12 version.  First, the HANA Runtime Tools (HRTT) visual design has been adjusted to adopt the Web IDE design. This is the first step toward the planned deeper integration between HRTT and SAP Web IDE for SAP HANA.

SPS12_1.png

 

The HRTT has also been enhanced to be both multi-space and multi-org aware. This allows you to connect to any container in a HANA system.

SPS12_2.png

 

One of the major feedback items we've had from early adopters of the XSA-based development is that working with the log files can be overwhelming. This is a point we will continue to work on in the future, but for SPS 12 we've made a first important improvement.  The logs for a running service now stream as a live view in the SAP Web IDE for SAP HANA. No longer do you have to open a separate browser window to view the logs and then continually refresh that window. Updates to the log are pushed into the run console as you test your application.

SPS12_3.png

 

Another pain point we tried to address in SPS 12 was around the editing of the mta.yaml file.  YAML has a rather strict syntax when it comes to indentation and also doesn't allow the usage of tabs. In SPS 11, any technical problems with the mta.yaml file wouldn't produce an error until build/run time and often resulted in a crashed service.  With SPS 12 we introduce a client side validator to the mta.yaml editor, so that most technical errors are displayed immediately. This way you can avoid costly failed builds/runs and more easily find and correct such errors.

SPS12_4.png

 

Application Lifecycle

The Application Lifecycle story for XSA-based development in SPS 11 was largely a manual process using command line tools to deploy MTAR archives.  At most we delivered basic Git integration with the SAP Web IDE for SAP HANA. For SPS 12 we expand this area while still sticking close to standard Git. First, we deliver an installation of Git for those customers that wish to receive a complete installation from SAP.  In addition we also deliver two additional pieces - a Gerrit server and a Git Service Broker.

 

The Git server installation we deliver is based upon JGit. Alongside this Git server we also deliver Gerrit - an open source project led by Google which provides a code review workflow. To tie these open source pieces together with XSA development overall, we also deliver a Git service broker. This service broker integrates Git/Gerrit with XSA. It allows for access control and OAuth authentication for Git/Gerrit using the XSA UAA service. It also enables XSA services and applications to bind to the Git service, allowing dynamic creation/deletion of repositories and the ability to bind to a repository.

 

Beyond SPS 12 we plan to continue to build out this Git/Gerrit integration further. We want to provide a REST API for runtime authoring of development objects.  We also plan to introduce a special Developer OAuth Scope to further control developer level access.  We also plan to support OAuth authentication of the Git service broker against multiple HANA systems.  We want to introduce versioning support for large binary files via the Large File Extension of Git (git-lfs).  Finally we plan to integrate the Gerrit code review workflow into the SAP Web IDE for SAP HANA as well as extend the Git Service Broker OAuth Single Sign On into the SAP Web IDE for SAP HANA.

 

Database Development

For all database level development topics, including SQLScript, HANA Deployment Infrastructure (HDI), and Core Data Services (CDS) for SAP HANA, my colleague Rich Heilman has written a separate blog. You can read about that content here:

SAP HANA SPS 12: New Developer Features; Database Development

 

Closing

In this section we would like to summarize the availability of many of these features and some things to consider when first starting development with these new capabilities. Some of these recommendations have changed from what we originally suggested with SPS 11.

 

When?

First, when will everyone receive the functionality described in this blog series?  The new XS Advanced runtimes - Java and Node.js based - and infrastructure were all delivered generally available in SAP HANA SPS 11. The HANA Deployment Infrastructure (HDI) and the new database development artifacts delivered with it are also generally available. As you see in this blog, we continue to enhance these new features in SPS 12 and beyond.

 

SAP HANA SPS 11 has been delivered for on-premise systems already (as of the end of November 2015).  Similar capabilities are planned to come as a part of SAP HANA Cloud Platform at a later date.

 

The original XS runtime (now named XS Classic) and HANA Repository remain a part of SAP HANA SPS 11 and beyond to provide 100% backwards compatibility. This continues to be true in SPS 12. Therefore customers can upgrade with confidence to SPS 11 or SPS 12 without fear that the new innovations will somehow disrupt their existing applications. Customers can decide how and when they want to begin to move applications to the new capabilities, and only do so once they are comfortable with everything involved.  In the meantime, everything they have continues to run exactly as it does today.

 

The new runtimes and HDI will NOT be feature compatible with the old XS and repository runtime at the first release of SPS 11 or even SPS 12.  There are missing features, particularly in the area of Calculation Views, as well as a few lesser-used aspects of XSODATA.  SAP fully intends to fill these gaps in future Revisions and/or Support Package Stacks. Already with SPS 12 we have closed a great many of these gaps, but we still haven't yet achieved 100% feature compatibility with XS Classic or the old Repository based development objects.

 

The development tools for the new runtimes and infrastructure (SAP Web IDE for SAP HANA and HANA Runtime Tools) were shipped for SPS 11 in March 2016 via the Service Marketplace and could be added to an existing SPS 11 system.  With SPS 12 these new tools ship as standard and can be installed into the system via the hdblcm tool during system installation or upgrade.

 

Migration tools are planned to be delivered after the initial shipment of SPS 12 to help move your applications from XS Classic and the HANA Repository to XS Advanced and HDI.

 

Recommended Usage

SAP recommends that customers and partners begin to evaluate the new capabilities delivered with SPS 11.

 

SAP recommends that customers and partners who want to develop new applications use SAP HANA XS advanced model as of SPS 12.

 

The planned scope of available technologies for development with XS Advanced as of the initial delivery of SPS 11 is as follows:

- Core Data Services (the new HDBCDS artifact)

- SQLScript procedures and UDFs

- DDL for the development of database artifacts using text-based editors.  See help for full list of supported database artifacts.

- XSJS via the compatibility module of Node.js

- XSODATA via the compatibility module of Node.js or the new implementation in Java

- Node.js based development

- SAPUI5 for application development (using 3rd party text based editors)

 

If you want to migrate existing XS classic applications to run in the new XS advanced run-time environment, SAP recommends that you first check the features available with the installed version of XS advanced; if the XS advanced features match the requirements of the XS classic application you want to migrate, then you can start the migration process.

SAP HANA SPS 12: New Developer Features; Database Development


In this blog, I would like to introduce you to the new features for the database developer in SAP HANA 1.0 SPS12.   We will focus on the database development topics, including the HANA Deployment Infrastructure, Core Data Services, as well as SQLScript.  If you are interested in the new features for XS Advanced (XSA), or the SAP Web IDE for SAP HANA, please check out this blog by my colleague Thomas Jung.


HANA Deployment Infrastructure(HDI)


The HANA Deployment Infrastructure, HDI for short, was first introduced in HANA 1.0 SPS11 as part of the rollout of XS Advanced (XSA).  While XSA was officially released, at the time there was very little tooling for XSA, and the developer experience was not quite complete.  Having shipped the SAP Web IDE for SAP HANA back in mid-March, we've improved the experience quite a bit and filled several important gaps.  So with SPS12, we want to re-introduce the concept of the HANA Deployment Infrastructure.

 

The vision of the HANA Deployment Infrastructure is to simplify the deployment of database objects into the HANA database.   We wanted to describe the HANA persistency model using file-based design time artifacts, for example .hdbcds and .hdbprocedure files, and so on.  We wanted an all-or-nothing approach, so if there are many dependent artifacts within your container, and any one of them fails to get created in the DB, then the entire build will fail and nothing gets created.  We wanted a dependency-based incremental deployment, which means we don't want to drop everything and recreate it every time we build; we only want the changed objects to be adjusted.  We wanted complete isolation of the application's database objects.  This is achieved by the concept of containers, where each container corresponds to an underlying schema.  This adds additional security since each underlying schema, containing its deployed objects, is owned by a specific schema technical user.

 

Defined, the HANA Deployment Infrastructure is a service layer of the HANA database that simplifies the deployment of HANA database artifacts by providing a declarative approach for defining database objects and ensuring a consistent deployment into the database, based on a transactional all-or-nothing deployment model and implicit dependency management.  HDI is based on the concept of containers, which allows for multiple deployments of the same application. This means you could have two versions of the same application running on the same HANA instance at the same time. Additionally, the focus of HDI is deployment only, so there is no versioning or life cycle management built into it.  You would use Git for repository and version management, and Gerrit for code review and approvals workflow.  Lastly, HDI supports the creation of database objects only, so it does not support JavaScript, OData services, or any other application-layer artifacts.

 

Again, we use a container concept in HDI where each container is a database schema. Actually it is a set of schemas: one main schema where the runtime objects reside, and several other supporting schemas used by the infrastructure itself.  All of the artifact definitions, such as a CDS file, need to be defined in a schema-free way, whereas in the past you would put a schema annotation in your CDS file. This is no longer supported when using XSA/HDI.   The database objects within the container are owned by a container-specific technical user and only this user has access to the objects within that container. You can reference other artifacts outside of your container via database synonyms.

 

HDI lives on top of the database conceptually.  The HDI build interface is implemented as a node.js module.  This node.js module simply calls the HDI APIs within HANA. This set of HDI APIs, which are implemented as stored procedures, are actually copied into each container schema in HANA, so each container gets its own copy of the API upon creation.  Within these APIs, of course we are simply using SQL to create the runtime objects themselves.  Finally, HDI runs inside its own process as part of the overall HANA core, one process per logical database.

 

1.png

 

The following is a list of artifacts which are supported by HDI.  New artifacts added as of SPS12 largely deal with Text Analysis and Text Mining.

 

Tables
Virtual Tables & Configurations
Indexes
Fulltext Indexes
Constraints
Triggers
Views
Projection Views & Configurations
Scalar/Table Functions
Virtual Functions & Configurations
Table Types
Procedures
Procedure Libraries
Sequences
Graph Workspaces
Synonyms & Configurations
Roles
BI Views/Calculation Views
Core Data Services
Data Control Language (DCL)
Analytical Privileges
AFFLANG Procedures
Virtual Function Packages
Table Data w/ CSV Files
Table Data w/ Properties Files
Search Rule Sets
Flowgraph
Replication Task
Structured Privileges
Public Synonyms
Text Analysis Configuration
Text Analysis Dictionaries
Text Analysis Extraction Rules
Text Analysis Extraction Rules Includes
Text Analysis Extraction Rules Lexicons
Text Mining Configurations

 

 

Core Data Services(CDS)


There have been several new features added in SPS12 for Core Data Services, or CDS. CDS was introduced in HANA 1.0 SPS06 and continues to be enriched with each SPS.

 

Graphical Editor


We introduced a new graphical editor for CDS artifacts in the SAP WebIDE for SAP HANA during the SPS11 delivery of the tool. This new graphical editor displays types as well as entities and views. It also shows the associations between entities as well as external dependencies. Of course you still have the ability to open the CDS artifact via the text editor as well.

 

2.png

 

Auto Merge


AUTO MERGE has been supported for quite some time with HANA core SQL, but is only now supported in the context of CDS with SPS12.  AUTO MERGE is used to enable the automatic delta merge.  You can simply include AUTO MERGE or NO AUTO MERGE within the technical configuration section of an entity definition.

 

entity MyEntity {

  <element_list>

} technical configuration {

  [no] auto merge;

};


Index Sort Order


As of SPS12, within the definition of an index, you can now define the sort order per column. You can use ASC for ascending order, and DESC for descending order.  You have the option to sort by a column grouping or by an individual column.  Ascending is the default order when the order specification is omitted.

 

entity MyEntity {

  key id : Integer;

  a : Integer;

  b : Integer;

  c : Integer;

  s {

    m : Integer;

    n : Integer;

  };

} technical configuration {

  index MyIndex1 on (a, b) asc;

  unique index MyIndex2 on (c asc, s desc);

  index MyIndex3 on (c desc, s.n);

};

 

Explicit Specification of CDS Types for View Elements


In a CDS view definition, it is now possible in SPS12 to specify the type of a select item that is based on an expression.  In the following example, if you define a column as a + b as s1, the information about the user-defined type is lost.  You can now explicitly specify the type.

 

type MyInteger : Integer;

entity E {

  a : MyInteger;

  b : MyInteger;

};

 

view V as select from E {

// has type MyInteger

a,   

                

// has type Integer, information about

// user defined type is lost

a+b as s1,   

// has type MyInteger, explicitly specified

a+b as s2 : MyInteger

 

};

 

Defining Associations in Views


Another new feature in SPS12 is associations in view definitions. In order to define an association as a view element, you need to define an ad-hoc association in the MIXIN section and then put this association into the select list.  In the ON-condition of such an association you need to use the pseudo-identifier $PROJECTION to signal that the following element name is to be resolved in the select list of the view rather than in the entity in the FROM clause.

 

entity E {

  a : Integer;

  b : Integer;

};

 

entity F {

  x : Integer;

  y : Integer;

};

 

view VE as select from E mixin {

  f : Association[1] to VF on f.vy = $projection.vb;

} into {

  a as va,

  b as vb,

  f as vf

};

 

view VF as select from F {

  x as vx,

  y as vy

};

 

CDS Extensibility


The CDS extension mechanism, delivered with SPS12, allows adding properties to existing artifact definitions without modifying the original source files. The benefit is that a complete artifact definition can be split across several files with different lifecycles and code owners.   The EXTEND statement changes the existing runtime object; it does not define any additional runtime object in the database. The extensibility feature uses the concept of extension packages.  An extension package, or simply package, is a set of extend statements, normal artifact definitions (e.g. types which are used in an extend declaration), and extension relationships or dependencies.   Each CDS source file belongs to exactly one package, i.e. all the definitions in this file contribute to that package. On the other hand, a package usually contains the contributions from several CDS source files. A package is defined by a special CDS source file named .package.hdbcds. The name of the package defined in the file must be identical to the namespace that is applicable for the file (as given by the relevant .hdinamespace file).

With this new feature, we are able to extend several different aspects of a CDS file, including: adding new elements to a structure type or entity, adding new select items to a view, adding new artifacts to an existing context, assigning further annotations to an artifact or element, and extending the technical configuration section of an existing entity.

 

Let's have a look at an excessively simplified CRM scenario below. The base application has a CDS file called "Address.hdbcds" which contains a Type called "Address", it also has another CDS file called "CRM.hdbcds" which uses the "Address.hdbcds" CDS file.  Within the "CRM.hdbcds" file, we then have a context called "CRM" which contains an entity called “Customer” which has a column called “name” of type string and “address” of type "Address".

 

3.png

 

In this first extension package, called "banking", we extend the "CRM" context and add a new type called "BankingAccount".  We then extend the "Customer" entity and add a new element called "account" which uses the type "BankingAccount", so we have new columns called "account.BIC" and "account.IBAN" added to the "Customer" table.

 

In the second extension, we further extend the "Customer" entity by extending the types which it uses.  First we will extend the "Address" type from the original "Address.hdbcds" file, and then extend the "BankingAccount" type which was defined by the previous extension.  So in this case the "onlineBanking" extension depends on the "banking" extension, hence the reason why we have the DEPENDS clause in the package definition.

 

4.png

 

The final result is that we have new columns in the "Customer" entity: "account.BIC" and "account.IBAN", which were created by the "banking" extension, and "address.email" and "account.PIN", which were created by the "onlineBanking" extension.

 

5.png

 

SQLScript


SQLScript continues to be the stored procedure language used to take full advantage of the core capabilities of HANA such as massive parallel processing.  Several new language features have been added in HANA 1.0 SPS12.

 

Global Session Variables


As of SPS12, we now have the concept of global session variables in SQLScript.  Global session variables can be used to share scalar values between procedures and functions that are running in the same session. These are not visible from any other running session.  We can use the SET statement to set a key/value pair in one procedure, and use the built in function called SESSION_CONTEXT to retrieve that value in a nested procedure or function call.

 

-- Set Session Variable Value

PROCEDURE CHANGE_SESSION_VAR (

           IN NEW_VALUE NVARCHAR(50))

AS

BEGIN

  SET 'MY_VAR' = :new_value;

  CALL GET_VALUE( );

END

-- Retrieve Session Variable Value

PROCEDURE GET_VALUE ( )

AS

BEGIN

   DECLARE VAR NVARCHAR(5000);

   var = SESSION_CONTEXT('MY_VAR');

END;

 

Default empty for Table User Defined Functions


In SPS10, we introduced the ability to use the DEFAULT EMPTY extension when defining IN and OUT parameters of a procedure.  This is useful for initializing a table parameter before its use within the procedure. As of SPS12, we now bring this same functionality to Table User Defined Functions as well.

 

FUNCTION myfunc (IN intab TABLE(a INT) DEFAULT EMPTY)

RETURNS TABLE(a INT)

  AS

BEGIN

RETURN SELECT * FROM :intab;

END;

 

SELECT * FROM myfunc();

 

 

Signatures for Anonymous Blocks


Anonymous Blocks were released in SPS10, and allowed us to write SQLScript code in the SQL console without having to create a container, for example a procedure or function. This was a nice feature for creating quick and dirty test coding.  The only problem was that it did not support input and output parameters.  As of SPS12, we have added this feature.  You can now define these parameters in the same way you would when defining parameters for a procedure.  Both simple types and table types are supported, as well as types defined via Core Data Services.

 

DO ( IN im_var INT => 5,

    OUT ex_var INT => ?,

    IN im_tab "dev602.data::MD.Employees" =>

"dev602.data::MD.Employees",

    OUT ex_tab "dev602.data::MD.Employees" => ?)

BEGIN

ex_var := im_var;

ex_tab = select * from :im_tab;

END

 

Enhancements for Implicit SELECT in Nested Calls


With this new feature in SPS12, implicit results from nested procedure calls are carried to the outermost procedure’s result.  You must first set a parameter value in the indexserver.ini configuration file.  This changes the default behavior system wide.

 

alter system alter configuration ('indexserver.ini', 'system') set ('sqlscript', 'carry_nested_implicit_result') = 'true' with reconfigure;


Until SPS11, a nested implicit result was not carried to the caller; its lifecycle was tied to the nested call's lifecycle, so it was closed when the nested call statement (for example "call proc2" in the sketch below) was closed.  From SPS12, you can carry the nested implicit result with this configuration change. When the configuration is on, the callee's implicit result sets are carried to the caller.
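A minimal sketch of the behavior, using hypothetical procedure names proc1 and proc2:

-- proc2 produces an implicit result set (a SELECT that is not assigned to a variable)
CREATE PROCEDURE proc2 AS
BEGIN
  SELECT * FROM DUMMY;
END;

-- proc1 only calls proc2; with carry_nested_implicit_result set to 'true',
-- the implicit result of proc2 is also returned by "CALL proc1()"
CREATE PROCEDURE proc1 AS
BEGIN
  CALL proc2();
END;

CALL proc1();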

 

Enhancements for Header-Only Procedure and Functions


Header-only procedures and functions were first introduced in SPS10 and allowed developers to create procedures and functions with minimum metadata first using the HEADER ONLY extension. The body of the procedure and function could then be injected into the container later using the ALTER PROCEDURE or ALTER FUNCTION statement.  This allowed procedures and functions to be created without having to worry about the interdependencies between the procedures and functions.  As of SPS12, we now have the ability to call a HEADER ONLY procedure from within a trigger, as well as the ability to create a view on a HEADER ONLY Table User Defined Function.
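As a reminder of how the header-only approach works, here is a rough sketch with a hypothetical procedure name and signature:

-- Create only the interface first; no body yet
CREATE PROCEDURE get_answer (OUT result INT) AS HEADER ONLY;

-- Inject the body later, once all dependent objects exist
ALTER PROCEDURE get_answer (OUT result INT)
AS
BEGIN
  result := 42;
END;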

How to do a delete with a map operation node in SDI


Hi everybody,

 

The last couple of days I was playing around with flowgraphs to get to know SDI and wanted to perform a delete using a map operation node in the data provisioning palette. Since I had some trouble creating a successful flow, I thought I'd write a little post; it might save some people the trouble I had to go through.

 

Note: for those that know Data Services, it is in fact almost the exact same flow.

 

 

1. Create Schema and table to test with

 

2. Add some data
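For steps 1 and 2, something like the following works; the schema and table names are just examples, and note that both the source and the target table need a primary key (see step 4 below):

-- Hypothetical source and target objects for the flowgraph test
CREATE SCHEMA "SDI_DEMO";

CREATE COLUMN TABLE "SDI_DEMO"."SOURCE_TAB" (
    "ID"          INTEGER PRIMARY KEY,
    "DESCRIPTION" NVARCHAR(50)
);

CREATE COLUMN TABLE "SDI_DEMO"."TARGET_TAB" (
    "ID"          INTEGER PRIMARY KEY,
    "DESCRIPTION" NVARCHAR(50)
);

-- Some dummy data; the target starts out with the same rows as the source
INSERT INTO "SDI_DEMO"."SOURCE_TAB" VALUES (1, 'one');
INSERT INTO "SDI_DEMO"."SOURCE_TAB" VALUES (2, 'two');
INSERT INTO "SDI_DEMO"."SOURCE_TAB" VALUES (5, 'five');
INSERT INTO "SDI_DEMO"."TARGET_TAB" SELECT * FROM "SDI_DEMO"."SOURCE_TAB";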

 

3. Create a flowgraph

 

Right click on the package > choose new > search for flowgraph.

2016-05-13_10-04-19.png

 

It is important to create it as a Task plan. If you do not, the data provisioning palette will not show up.


2016-05-23_16-48-14.png

 

4. Create following flow

 

2016-05-23_11-53-22.png

 

The first node is a simple data source. I used a table with some dummy data with an ID and a DESCRIPTION (ID: 1, DESC: "one" and so on). The last node is a data sink. It is important to use a table with the same columns as the data source, else it will not work. It is also important that the source and sink have a primary key or HANA will throw an error.

 

The second node is a filter node. In this node you filter the data source for the rows you want to delete in the data sink. Go to the general tab in the properties and fill in the filter expression as you see fit. In this example I wanted to delete the row with an ID of 5, so I put a filter on my data source in such a way that the filter returns exactly that line to push it to the data sink.

 

2016-05-23_16-53-38.png

 

Now, between the filter and the data sink, you have to place a map operation node. This node allows you to map certain database operations to another kind of database operation. The way a filter pushes the output to the data sink is through an opcode of "normal". So in the map operation node's properties, you have to map the "normal" operations to "delete" operations (Discard just ignores that specific operation, so we need delete here), like so:

 

2016-05-23_16-58-03.png

 

There is no need for a SQL expression in the Mapping tab.

 

Before you save, be sure to check you filled in the Target Schema of the surrounding container (in my case the "DELETE_MAP_OPERATION").

 

2016-05-23_17-02-55.png

 

5. Save, activate and run.

 

If you check the target table, the corresponding rows you filtered on will have been deleted.


Configuring Dynamic Tiering on Multitenant Databases


As of SP 10, dynamic tiering supports running in an SAP HANA multitenant database container installation. You can have some or all of the tenant databases running dynamic tiering. Make sure each tenant using dynamic tiering has its own dedicated worker host.  Unlike a single container system, you don’t need a dedicated standby host for each dynamic tiering host. You add a pool of dedicated standby hosts to the HANA system, and in the event of a worker host failure, HANA chooses one of the available standby hosts and automatically makes it the new worker host. You only need one license for all tenant databases running dynamic tiering.

 

dt_mt.png

 

Each tenant requires a dedicated dynamic tiering host. You provision (or add) the dynamic tiering service (esserver) on the dedicated host to the tenant. You can’t provision the same service to multiple tenants.  Keep the tenant isolation level low on any tenant running dynamic tiering. Provisioning fails if the isolation level is high.  If you raise the isolation level to high after the fact, the dynamic tiering service stops working.

 

You can convert an existing single container SAP HANA system running dynamic tiering to a multitenant database system. After conversion, the original database becomes the first tenant, with the esserver service from the original database automatically provisioned to the tenant.

 

Whether adding dynamic tiering to a newly created multitenant database system or creating a new tenant on an existing system to run dynamic tiering, the steps you take are the same:

 

  1. Make sure that the SAP HANA system is running without error, and that SAP HANA Cockpit can manage each tenant database. If the HANA system isn’t running correctly, provisioning dynamic tiering might fail.
  2. If you haven’t already, install dynamic tiering on the SAP HANA system.
  3. Add the dedicated dynamic tiering host to the system database, not to the tenant.
  4. Provision the dynamic tiering service (esserver) to a tenant database.
  5. Import the SAP HANA dynamic tiering delivery units to the tenant. Without these delivery units, you will be unable to manage dynamic tiering on the tenant using SAP HANA Cockpit.
  6. Create extended storage on the tenant database.

 

Repeat steps 3 through 6 on each tenant to run dynamic tiering.
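Steps 4 and 6 can also be performed from the SQL console. A minimal sketch, assuming a tenant database named MYTENANT and a dedicated dynamic tiering host dthost (the names and port are placeholders):

-- Step 4 (run against the system database): provision the esserver service to the tenant
ALTER DATABASE MYTENANT ADD 'esserver' AT LOCATION 'dthost:30040';

-- Step 6 (run against the tenant database): create extended storage on the dedicated host
CREATE EXTENDED STORAGE AT 'dthost' SIZE 1000 MB;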

 

See the SAP HANA Administration Guide for details on creating a multitenant system. See the SAP HANA Dynamic Tiering Administration Guide for details on configuring dynamic tiering to run on a tenant database.

HANA Tips & Tricks: issue #1 - Hacking information views


About this post

 

At Just-BI we just launched a knowledge sharing initiative where our consultants and developers discuss any issues and share tips and tricks concerning SAP HANA development. While our monthly meetings are company internal, we decided to share any items that might be interesting to other SAP HANA professionals publicly. Since SCN is already the go-to hub for all things HANA, we felt that SCN is an appropriate place to do so.

 

So, here it is - our first post! We plan to have one meeting every month and publish any takeaways immediately after that in an SCN blog post using the tag hanatipsandtricks. We hope that our tips and tricks and discussions are useful to you. Feel free to chime in, or to share your tips and tricks. We welcome your interest and participation!

 

Editing XML source of Information Views

glenn-cheung.jpg


Glenn Cheung kicked off the meeting with a very useful and powerful tip: editing the XML source code of SAP HANA information views.

 

Information views (Analytical-, Attribute- and Calculation Views) are typically created and edited using the SAP HANA View Editor (also known as the Modeler). This is essentially a query builder that allows you to use drag and drop to graphically build a query out of nodes representing things like database schema objects (tables or views), other information views, and query operators (such as join, union, aggregation, and so on). The models you build this way are stored as XML files in the repository. Activation of these models generates runtime objects, which are basically stored procedures that implement the query according to the model.

 

While the SAP HANA View Editor is the tool of choice when developing new information views, it can get in the way when performing certain tasks. For example, sometimes it may be convenient to build an information view against a personal database schema where you keep only a few objects just for development purposes. Once you're happy with how your information view works, you'll want it to work against the objects from the actual application database schema. (There are many similar scenarios like this, such as updating the package name if you're referencing CDS objects).

 

While the view editor does offer a "Replace With Datasource" option (available in the right-click menu on the item), this quickly becomes a rather tedious and time-consuming task, especially if your model contains many nodes, or if you have many information views that you want to point to the other schema. You can save yourself quite a bit of time by opening the view in a text editor and using search/replace to change the schema name. You can even do this without leaving SAP HANA Studio: simply right-click the information view in the project explorer, and choose "Open With" > "Text Editor". For real bulk operations, you need not even open the file in an editor; you can use a command-line tool like sed to perform a regular-expression based text substitution.

 

openwith.png

Of course, you should always be very cautious when editing the XML sources directly. Unlike the SAP HANA View Editor, your text editor or command line tools do not validate the changes you make to the model. Always make a backup of your source files or make sure you have some other way of restoring them should your raw edits render the models invalid.

 

Cross Join in Information Views

 

Another tip from Glenn is how to create Cross Joins in information views. A Cross Join is a type of join operation that returns the cartesian product of the joined tables (that is, the combination of all rows). While there is rarely need for a true Cartesian product in analytical queries, a use case sometimes does pop up when developing custom database applications.

While the SQL standard has a separate keyword for it (like it has keywords for INNER, LEFT OUTER, RIGHT OUTER etc), SAP HANA Studio does not offer a special Join type for it. (Note that in SAP HANA Studio you can set the join type in the properties page that becomes active when you select the edge that connects the joined column). The solution is however very straightforward - when you add your data sources to your join node in the View Editor, simply don't connect the columns and SAP HANA will generate a Cartesian product as result.
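For reference, the generated behavior corresponds to a plain SQL cross join; a small sketch with hypothetical tables T1 and T2:

-- Cartesian product: every row of T1 combined with every row of T2
SELECT T1."A", T2."B"
FROM "MYSCHEMA"."T1" AS T1
CROSS JOIN "MYSCHEMA"."T2" AS T2;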

 

Note however that this behavior can bite you as well. I recently encountered a situation where I needed to clean up a calculation view. As part of the clean-up, I was removing columns "downstream" of a join node. While SAP HANA studio will warn that the column is used by any upstream nodes, it is very easy to miss the fact that you might be removing a column which is used to define a join. If that is the case, then it's very easy to end up with an unintentional cross join.

 

Adding Nodes mid-stream

scott-wery.png

 

Scott Wery provided a very useful tip on adding nodes to existing Calculation views. Let's consider an example: It's not uncommon to work on a calculation view that contains a number of joins. In many cases, the number of joins grows organically as the development process progresses and user requirements evolve - the requirement to "look up" a few extra columns is a very common one.

 

Once you've opened your existing view in the SAP HANA View Editor and identified between which two nodes you want the new join node, you might proceed by deleting the edge that connects those two existing nodes, adding the new join node, and then re-creating the edges between the nodes. This would be fine except for the fact that when you break the edge between two nodes, any columns upstream of the broken edge that originate downstream of the broken edge are simply removed. You would have to recreate all those columns after re-establishing the edges from and to the new join node.

 

While that is of course possible, there is a much better way: if you first click the edge that connects the two nodes where you want the new join to appear in between, it will be selected. If you then drag the new join node onto the selected edge, a message box pops up, asking you if you want to insert the new node in between the existing nodes. If you confirm, the new join node will automatically be inserted there, splitting up the existing edge and connecting the existing nodes with the new join node, without removing any columns. This avoids a lot of tedious and error-prone work!

 

insertjoin.png


Generating Scripted Calculation Views

 

The following tip is by yours truly. This past week, my co-worker Ivo Moor was creating a few Scripted Calculation Views. (A Scripted Calculation View is a Calculation View that is defined by user-entered SQL script.) One rather tedious aspect of creating scripted calculation views is that you have to manually define the output columns of the view, and enter the names of the output columns as well as specify their data types. Again, this is totally doable, but it is not a lot of fun. Apart from the fact that it can be time-consuming, it can be error-prone too - if you accidentally enter a data type or data type parameters (like length, precision, or scale) that do not correspond to the runtime type of the column, then you might encounter run-time errors when executing the view.

 

I decided to spend a little time to see if I could make this easier. What would be ideal is if SAP HANA Studio offered some kind of wizard or integrated generator that you could invoke from the SQL editor, and which would open the SAP HANA View Editor with a newly generated Scripted Calculation View, based on the code that was inside the SQL editor, and having all its output columns generated based on the runtime types of the query. While I appreciate that such a generated view might still require editing, it would give a considerable head start. I looked into it a bit and quickly realized that actually modifying SAP HANA Studio to add such a feature would cost me considerably more time than I am currently willing to spend.

 

So, as a really quick and, admittedly, dirty alternative, I came up with an xsjs web application that can at least generate the calculation view code, and offer the user a download link, which can be used to download the view file and save it in an existing SAP HANA project. Here's a screenshot of the application frontend:

scvg.png

The way it works is, you enter your SQL query (or at least, the query that will produce the output for your scripted calculation view) in the SQL textarea. You can enter the name for your view in the Object Name field, and enter a version number as well. If the SQL code contains parameter or variable references, the tool will generate inputs for those so that you can enter values. Finally, you can also choose the database schema against which any database object identifiers are resolved.


After entering or changing data in the form, the application will send the query to an xsjs service, which will take the query, append a LIMIT 0 clause to it (so as to prevent doing any actual work as much as possible) and then execute it in order to obtain result set metadata. This result set metadata is then used to fill in a calculation view template with both column definitions as well as variable definitions. The result of the filled-in template is then exposed via a download link at the bottom of the page. Clicking the link will prompt the user to download a .calculationview file which you should be able to save to your HANA project and then activate.
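To illustrate the LIMIT 0 trick with a hypothetical query, the service essentially executes something like:

-- Returns zero rows, but the result set metadata (column names and types)
-- is still available and is used to generate the output column definitions
SELECT "ID", "DESCRIPTION" FROM "MYSCHEMA"."ORDERS" LIMIT 0;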

 

If you want to try this yourself, feel free to download or fork the code from the just-bi/scvg repository on github. It's free and open and I hope it will be useful to you. If you're interested in these kinds of productivity tools, then stay tuned - The just bi development team is currently looking into possibilities to create tools like these and integrate them into HANA Studio. I can't really say when we'll have time to make this happen since this is not really core business but I can promise that once we have some of these tools we will publish them and contribute them back to the SAP HANA Developer community just like we are doing now.

 

Finally


I hope you enjoyed our tips and tricks! We'll be back a month from now - just track the hanatipsandtricks tag to stay tuned :)

How to process and use multi-value input parameter in a scripted view in HANA


This write-up explains how to use a multi-value input parameter directly in a scripted view in HANA.

 

This is a common requirement in many business development cases based on HANA modeling.

 

So rather than having a graphical projection on top of the scripted view where we deal with the filter based on the multi-value input parameter, which is a work-around, here we are going to see how we can directly process and use the multi-value input parameter for data restrictions on select queries in scripted views.

This design is going to reduce the run-times .

 

The one shown below is a multi-value input parameter.

image1.png

 

 

Challenge:

In scripted calculation views, we cannot directly use this kind of input parameter to restrict the data returned by select queries, as follows:

 

SELECT * FROM <TABLE> WHERE MATNR IN :P_MATNR .

 

Reason :

The multiple values of the input parameter are passed as one horizontal list of values; each is enclosed in single quotes and separated from the others by a comma symbol.

 

This can be seen by writing a select query for the values of the multi-value input parameter in a test scripted view, as follows.

 

 

 

In this test view, we have a column MATNR of varchar[100]; P_MATNR is the multi-value input parameter shown above.

P_MATNR has to be assigned to a variable to see this data, as we are doing in the code below.

image2.png

 

We are going to execute this view for output with below values :

image3.png

Output is like below:

image4.png


Observations from the output above :


[1] Data of different input values is contained in a horizontal line.

[2] Each value is enclosed in single quotes

[3] A comma symbol is present between any two values.

Comma is not seen after the last value

[4] Every data value adds 3 extra characters [one comma symbol and two single quote symbols], except the last data value.

The last data value doesn’t have a comma symbol after it .

[5] If a comma symbol is appended at the end of the last data value, then all of the different data values follow a consistent pattern, like the one below

image5.png

This is done by the concat() function in sql as below

image6.png

Now all data values including the last one have:

 

-> Every data value has 3 additional characters [one comma symbol and two single quote symbols]

-> Total length of this output string = ( n + 3) * m

Where m = number of input values passed to the multi value input parameter;

             n = length of the input parameter value


Like for example here , if we pass three input values to the parameter P_MATNR , m = 3

and if the length of the MATNR column in system is 8 , then n = 8

So , overall string length = 33 as shown in below screen shot.

   image7.png       

 

This data processing is going to help us when we further process this data into a readable format for the select queries .

 

In the logic below , we are going to transpose the horizontally available data into a columnar format where each data value of input parameter is shown in each row.

 

This will help further in utilizing the input parameter’s values for restricting data out of select query in the logic.

 

image8.png

The code at line 13 above determines the number of loop runs to be made next.

This is equal to the number of input parameters passed.

We took 11 here because input parameter is of length 8 and every input data value is made to have additional 3 characters as explained above .

So this calculation at line 13 will assign the number of input values passed to the variable J.

 

The code in lines 15 to 29 in the above screenshot will do [a] and then [b], as described below ->

 

[a] derive the exact value of each input data value and assign it to the table variable

[b] then process the string of input parameter values to remove the value that was already assigned to the table variable in [a].

 

These sequential steps [a] and [b] continue as a loop till all of the input parameter values are processed and assigned as individual values, devoid of single quotes and comma symbols, one per row in the final output column.

 

So , final output is going to be like this when we pass below input:

 

Input :

image9.png

Output :


image10.png

 

Please find the attached text file for the code explained above.
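The attached code itself is not reproduced here, but as a rough, general-purpose sketch of the same idea (splitting the quoted, comma-separated parameter string into rows of a table variable), an anonymous block could look like the following. The names, lengths and sample values are illustrative, and unlike the original logic this variant does not rely on fixed-length values:

DO ( IN p_matnr NVARCHAR(5000) => '''MAT1'',''MAT2'',''MAT3''',
     OUT ex_tab TABLE (MATNR NVARCHAR(100)) => ? )
BEGIN
    DECLARE v_rest NVARCHAR(5000);
    DECLARE v_item NVARCHAR(100);
    DECLARE v_matnr TABLE (MATNR NVARCHAR(100));

    -- append a trailing comma so that every value is terminated the same way
    SELECT CONCAT(:p_matnr, ',') INTO v_rest FROM DUMMY;

    WHILE LENGTH(:v_rest) > 0 DO
        -- take everything up to the next comma and strip the single quotes
        SELECT REPLACE(SUBSTR_BEFORE(:v_rest, ','), '''', '') INTO v_item FROM DUMMY;
        v_matnr = SELECT MATNR FROM :v_matnr
                  UNION ALL
                  SELECT :v_item AS MATNR FROM DUMMY;
        -- drop the processed value (including its comma) and continue
        SELECT SUBSTR_AFTER(:v_rest, ',') INTO v_rest FROM DUMMY;
    END WHILE;

    ex_tab = SELECT MATNR FROM :v_matnr;
END;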

 

Since the data of the multi-value input parameter is processed into the columnar format of a table variable, it can be used further to restrict data in select queries in the remaining business logic in the code, like:

 

var_x = SELECT * FROM <TABLEX> WHERE MATNR IN ( SELECT * FROM :v_matnr) ;

 

Here, v_matnr is the table variable which has the data of the multiple input values processed into separate rows, as explained above.

 

If a graphical view is being used, we can directly do this with the IN() operator as follows:

image11.png


Event-driven, non-blocking, asynchronous I/O with SAP HANA using Vert.x


In this blog post I would like to demonstrate how you can implement a non-blocking web service, running on the JVM, on top of SAP HANA. This blog post, including the commands and the setup, assumes running the backend on a Linux/Mac machine (bare metal or IaaS). The commands might vary slightly on Windows machines, but the experience should be similar.

 

What is Vert.x?


"Vert.x is a tool-kit for building reactive applications on the JVM". This is what the Vert.x web site tells you.


Basically, Vert.x is an open source set of Java libraries, managed by the Eclipse foundation, that allows you to build event-driven and non-blocking applications. In case you are already familiar with Node.js, Vert.x allows you to build services the way you might already know from Node.js. Also, Vert.x is language-agnostic so you can implement your backend in your favorite JVM-based language, such as, but not limited to, Java, JavaScript, Groovy, Ruby, or Ceylon.

 

In case you want to know more about Vert.x, please refer to the official Vert.x web site or the official eclipse/vert.x repository on GitHub

 

Speaking in code, with Vert.x you can write a simple HTTP server and a web socket server like this (using Java 8):

 

vertx
  .createHttpServer()
  .requestHandler(req -> {
    req.response().headers().set("Content-Type", "text/plain");
    req.response().end("Hello World");
  })
  .websocketHandler(ws -> {
    ws.writeFinalTextFrame("Hello World");
  })
  .listen(8080);

In case you want to know more about what makes a reactive application reactive, you can take a look at The Reactive Manifesto

 

"Building a Java web service on top of HANA? That requires running Tomcat."

Is that you? Think again! Depending on the use case, developing JVM-based backend services using Tomcat or a Java EE container such as JBoss might be the solution of choice for certain use cases, especially when it comes to transaction processing. For building real-time applications where you really don't care about transaction handling in the backend, using an application server might be an overkill for your project and much more than you actually needed.

 

 

What about Node.js?

 

Node.js is a great event-driven, non-blocking framework as well, and the most popular among reactive backend frameworks and toolkits. I personally like Node.js a lot, simply because JavaScript itself is very flexible and npm offers a really large ecosystem of Node.js packages. Also, there is a great open-source HANA driver for Node.js (SAP/node-hdb), so Node.js is still a good choice for real-time applications.

 

However, Node.js has some pitfalls, especially when it comes to leveraging multiple CPU cores, since a single Node.js process runs your JavaScript on a single thread. There are solutions in Node.js to address this as well; this blog post from Juanaid Anwar explains them really well: Taking Advantage of Multi-Processor Environments in Node.js.
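For comparison, Vert.x spreads work over cores by letting you deploy several instances of the same verticle, each running on its own event loop. Here is a minimal sketch, assuming a verticle class com.example.MyVerticle (a placeholder, not part of this example project):

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class ScaleOut {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    // One verticle instance per core; Vert.x assigns each instance its own event loop.
    DeploymentOptions options = new DeploymentOptions()
        .setInstances(Runtime.getRuntime().availableProcessors());
    vertx.deployVerticle("com.example.MyVerticle", options);
  }
}

Each instance handles requests independently, so the available cores are used without any extra clustering code.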

 

GitHub repository

 

You can find the complete, ready-to-run source code of the example on GitHub:

GitHub - MitchK/hana_vertx_example: An example web service to demonstrate how to use Vert.x with SAP HANA

 

 

Example Preparation

 

 

First, you need to create a Maven project. You can also use any other dependency manager or build tool (like Gradle), but this tutorial will use Maven.

 

For this example we will be using the following Vert.x libraries and the HANA JDBC driver:

 

 

<dependencies>
  <!-- Vertx core -->
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-core</artifactId>
    <version>3.2.1</version>
  </dependency>
  <!-- Vertx web for RESTful web services -->
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-web</artifactId>
    <version>3.2.1</version>
  </dependency>
  <!-- Vertx async JDBC client -->
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-jdbc-client</artifactId>
    <version>3.2.1</version>
  </dependency>
  <!-- HANA Driver -->
  <dependency>
    <groupId>com.sap.db</groupId>
    <artifactId>com.sap.db.ngdbc</artifactId>
    <version>1.00.38</version>
  </dependency>
</dependencies>


  • vertx-core: Provides the basic Vert.x toolkit functionality
  • vertx-web: Provides you with routing capabilities to build RESTful web services.
  • vertx-jdbc-client: Provides you with an asynchronous JDBC Client and with a lot of convenient APIs on top of JDBC.
  • com.sap.db.ngdbc: The official SAP HANA JDBC driver. This driver is not open source and thus not available on Maven Central. You either have to use your company's internal Nexus server or reference the .jar file on the file system from your pom.xml via <systemPath>${project.basedir}/src/main/resources/yourJar.jar</systemPath> (see the sketch right after this list).
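If you go the <systemPath> route, the dependency entry could look roughly like this (the path and the .jar name are placeholders for wherever you put the driver):

<dependency>
  <groupId>com.sap.db</groupId>
  <artifactId>com.sap.db.ngdbc</artifactId>
  <version>1.00.38</version>
  <scope>system</scope>
  <systemPath>${project.basedir}/src/main/resources/yourJar.jar</systemPath>
</dependency>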

 

Using Java 8


You really don't want to code in Vert.x below Java 8. You really don't. Since Vert.x heavily relies on callbacks, writing Vert.x without lambda expressions will be a pain.


<build>
  <plugins>
    ...
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.5.1</version>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>
    ...
  </plugins>
</build>
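To see why: without lambda expressions, every callback turns into an anonymous inner class. A rough sketch of the same "Hello World" server written that way (the class name NoLambdaVerticle is just for illustration):

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Handler;
import io.vertx.core.http.HttpServerRequest;

public class NoLambdaVerticle extends AbstractVerticle {

  @Override
  public void start() {
    // Pre-Java-8 style: one anonymous Handler class per callback.
    vertx.createHttpServer()
      .requestHandler(new Handler<HttpServerRequest>() {
        @Override
        public void handle(HttpServerRequest req) {
          req.response().headers().set("Content-Type", "text/plain");
          req.response().end("Hello World");
        }
      })
      .listen(8080);
  }
}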

 

Creating a fat .jar

 

In this example, we will build a single fat .jar file that bootstraps our Vert.x code and contains all Java dependencies. There are many ways of deploying Verticles; this is just one of them.

 

Here, we reference com.github.mitchk.hana_vertx.example1.web.HANAVerticle as our main Verticle class.

 

<build>
  <plugins>
    ...
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.3</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <transformers>
              <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                <manifestEntries>
                  <Main-Class>io.vertx.core.Starter</Main-Class>
                  <Main-Verticle>com.github.mitchk.hana_vertx.example1.web.HANAVerticle</Main-Verticle>
                </manifestEntries>
              </transformer>
            </transformers>
            <artifactSet />
            <outputFile>${project.build.directory}/${project.artifactId}-${project.version}-fat.jar</outputFile>
          </configuration>
        </execution>
      </executions>
    </plugin>
    ...
  </plugins>
</build>


Creating a Verticle


According to the official documentation, a "Verticle" is a Vert.x term that describes an independently deployable piece of code. Outside of the Vert.x universe, you may call it "micro service". The use of Verticles is entirely optional, but I will show how to implement an example HANA Verticle.


Create a new class in a package of your choice. Make sure that the package and class name match the main Verticle class you put into the pom.xml.


package com.github.mitchk.hana_vertx.example1.web;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;

public class HANAVerticle extends AbstractVerticle {

  @Override
  public void start(Future<Void> fut) {
    vertx
      .createHttpServer()
      .requestHandler(req -> {
        req.response().headers().set("Content-Type", "text/plain");
        req.response().end("Hello World");
      })
      .websocketHandler(ws -> {
        ws.writeFinalTextFrame("Hello World");
      })
      // Complete (or fail) the start future so Vert.x knows the deployment has finished.
      .listen(8080, result -> {
        if (result.succeeded()) {
          fut.complete();
        } else {
          fut.fail(result.cause());
        }
      });
  }
}



Now run

 

$ mvn clean install package
$ java -jar target/example1-0.0.1-SNAPSHOT-fat.jar

 

on your command line (or set up your IDE accordingly) in order to install your Maven dependencies and create a fat .jar file.

 

Finally, open http://localhost:8080/ in your web browser; you should see the plain-text "Hello World" response.



You can also check whether your web socket endpoint is listening on ws://localhost:8080, using any web socket client.


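If you prefer a code-based check, a throwaway Vert.x client can do the same thing. A minimal sketch (WebSocketCheck is just an illustrative name, not part of the example project):

import io.vertx.core.Vertx;

public class WebSocketCheck {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    // Connect to the verticle's web socket endpoint and print the first frame we receive.
    vertx.createHttpClient().websocket(8080, "localhost", "/", ws -> {
      ws.handler(buffer -> {
        System.out.println("Received: " + buffer.toString());
        vertx.close();
      });
    });
  }
}

Running it against the verticle above should print "Received: Hello World".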

Building a RESTful web service with Vert.x


Let's now build a simple RESTful web service where we actually want to use routing and also JSON output. Replace the content of your start() method with:


Router router = Router.router(vertx);

router
  .get("/api/helloWorld").handler(this::helloWorldHandler);

vertx.createHttpServer()
  .requestHandler(router::accept)
  .listen(
    // Retrieve the port from the configuration, default to 8080.
    config().getInteger("http.port", 8080), result -> {
      if (result.succeeded()) {
        fut.complete();
      } else {
        fut.fail(result.cause());
      }
    });


You also need to create the handler method for the /api/helloWorld end point:


public void helloWorldHandler(RoutingContext routingContext) {
  JsonObject obj = new JsonObject();
  obj.put("message", "Hello World");
  routingContext
    .response().setStatusCode(200)
    .putHeader("content-type", "application/json; charset=utf-8")
    .end(Json.encodePrettily(obj));
}


Build your code again, start the .jar file, and check the result in the browser.


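Since the handler encodes a JsonObject with a single "message" field via Json.encodePrettily, the response at /api/helloWorld should look like this:

{
  "message" : "Hello World"
}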


Connecting Vert.x with HANA


Now things become interesting. Put the following code snippet at the beginning of the start() method:


JsonObject config = new JsonObject();
// Example connection string "jdbc:sap://hostname:30015/?autocommit=false"
config.put("url", System.getenv("HANA_URL"));
config.put("driver_class", "com.sap.db.jdbc.Driver");
config.put("user", System.getenv("HANA_USER"));
config.put("password", System.getenv("HANA_PASSWORD"));
// 'client' is a JDBCClient field on the verticle, shared across requests
client = JDBCClient.createShared(vertx, config); // optionally pass a data source name as a third argument, e.g. "java:comp/env/jdbc/DefaultDB"

 

We will actually connect to HANA using environment variables for the configuration, for simplicity. You can also use a JNDI name instead.
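Optionally, the same config object can carry connection-pool settings. The keys below are what the default C3P0-based pool of vertx-jdbc-client is documented to accept; treat them as an assumption to verify against the docs of your client version:

config.put("max_pool_size", 30);     // assumed key: upper bound of pooled connections
config.put("initial_pool_size", 5);  // assumed key: connections opened up front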

 

In the helloWorldHandler, replace the method content with this code:


client.getConnection(res -> {
  if (!res.succeeded()) {
    System.err.println(res.cause());
    JsonObject obj = new JsonObject();
    obj.put("error", res.cause());
    routingContext.response().setStatusCode(500)
      .putHeader("content-type", "application/json; charset=utf-8")
      .end(Json.encodePrettily(obj));
    return;
  }
  SQLConnection connection = res.result();
  connection.query("SELECT 'Hello World' AS GREETING FROM DUMMY", res2 -> {
    if (!res2.succeeded()) {
      System.err.println(res2.cause());
      JsonObject obj = new JsonObject();
      obj.put("error", res2.cause());
      // Return the connection to the pool even when the query fails.
      connection.close();
      routingContext.response().setStatusCode(500)
        .putHeader("content-type", "application/json; charset=utf-8")
        .end(Json.encodePrettily(obj));
      return;
    }
    ResultSet rs = res2.result();
    // Return the connection to the pool before answering the request.
    connection.close();
    routingContext
      .response()
      .putHeader("content-type", "application/json; charset=utf-8")
      .end(Json.encodePrettily(rs));
  });
});



Now, build the code again. Before you execute the .jar file, make sure you set your environment variables accordingly in the shell.


$ export HANA_URL=jdbc:sap://<your host>:3<your instance id>15/?autocommit=false
$ export HANA_USER=<user>
$ export HANA_PASSWORD=<your password>

 

After you execute the .jar file again, you can see the result (the JSON-encoded result set of the query) in the browser.


You have just developed your first Vert.x backend on top of SAP HANA!

 

Conclusion

 

Vert.x and SAP HANA work very well together, especially for real-time applications. If you want to develop your web services on top of the JVM and avoid dealing with a servlet container or a full application server, Vert.x might be a great choice for you.

 

If you find any mistakes or have any feedback, please feel free to leave me a comment.



