
IoT with SAP HANA DEV


These past few months have been heavily focused on SAP HANA and the Internet of Things (IoT) for me and the SAP Developer Relations team. The result from Las Vegas, Berlin and, just last week, the ThingMonk 2014 event in London is a nice little collection of code making up a new example application that will be part of the next version of the SAP HANA Developer Edition (SPS9) in the coming weeks. Ideally this project will be embedded in the next HANA Dev Edition, but it will also be available on GitHub.

 

The goal of this example is to show how easily one can read data from an external hardware device with a sensor, such as a temperature sensor, and then store, visualize and access the data via an SAP HANA XS application. It covers:

  • Hardware such as Raspberry Pi, Beagle Bone, and Tessel
  • Circuit
  • Some code for hardware devices using Bonescript, Python and JavaScript

 

This example application also allows you to display the values submitted by those devices.

 

No example is complete without showing the full "round trip", so this example also provides the ability to retrieve values from SAP HANA and use them on the hardware side.

 

 

Hardware

Below you will find example code for posting to this application from different hardware devices. The three that we have focused on are:

  • Raspberry Pi Model B

    rpi.jpg

  • Beagle Bone Black

    bbb.jpg

  • Tessel

    Tessel.png

 

Circuit

led_temp.png

Creating the circuit on the breadboard is extremely simple.

  • Connect a black jumper wire to the GND
  • Connect a red jumper wire to the 3.3 V
  • Place an LED in the breadboard
  • Place a 220 Ohm resistor from GND to the short leg of the LED
  • Place a jumper wire from the long leg of the LED to GPIO pin 18
  • Repeat the process for the second LED and connect it to GPIO pin 22
  • Place the digital temperature sensor on the breadboard; we are using a DS18B20
  • From the "left" leg of the sensor we run a black jumper wire to GND
  • From the "right" leg of the sensor we run a red jumper wire to 3.3 V
  • From the "center" leg of the sensor we run a jumper wire to GPIO pin 16
  • Between the "right" and "center" legs we place a 4.7K Ohm resistor

 

Coding


Basically, any type of hardware device that can do a JSON-based HTTP POST can submit data to the server using Content-Type: application/json.

var jsonData = {
    "ID": 1,
    "SNAME": "Tessel",
    "STYPE": "Temp",
    "SVALUE": "" + temp + "",
    "TDATE": "/Date(" + timeStamp + ")/"
};

 

 

Python for the Raspberry Pi

        

import os
import glob
import json
import urllib2
import time
import RPi.GPIO as GPIO

os.system('modprobe w1-gpio')
os.system('modprobe w1-therm')

base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28*')[0]
device_file = device_folder + '/w1_slave'

hanaposts = 0
hanaposts2 = 0

# to use Raspberry Pi board pin numbers
GPIO.setmode(GPIO.BOARD)
GPIO.setwarnings(False)
# set up GPIO output channel
GPIO.setup(18, GPIO.OUT)
GPIO.setup(22, GPIO.OUT)

def blink(pin):
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(1)
    GPIO.output(pin, GPIO.LOW)
    time.sleep(1)
    return

def basic_authorization(user, password):
    s = user + ":" + password
    return "Basic " + s.encode("base64").rstrip()

def read_temp_raw():
    f = open(device_file, 'r')
    lines = f.readlines()
    f.close()
    return lines

def read_temp():
    lines = read_temp_raw()
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        lines = read_temp_raw()
    equals_pos = lines[1].find('t=')
    if equals_pos != -1:
        temp_string = lines[1][equals_pos+2:]
        temp_c = float(temp_string) / 1000.0
        temp_f = temp_c * 9.0 / 5.0 + 32.0
        return temp_c

while True:
    hanaposts += 1
    txtTemp = read_temp()
    txtDate = '/Date(' + str(int(time.time())) + ')/'
    url = 'http://[SERVER IP]:[SERVER PORT]/sap/devs/iot/services/iot_input.xsodata/sensor'
    params = {"ID": "1", "TDATE": txtDate, "SVALUE": str(txtTemp), "SNAME": "Craig", "STYPE": "Temp"}
    req = urllib2.Request(url,
                          headers={
                              "Authorization": basic_authorization('[USER]', '[PASSWORD]'),
                              "Content-Type": "application/json",
                              "Accept": "*/*",
                          }, data=json.dumps(params))
    f = urllib2.urlopen(req)
    # LED
    if hanaposts == 25:
        hanaposts = 0
        hanaposts2 += 1
        blink(22)
    if hanaposts2 == 50:
        hanaposts2 = 0
        blink(18)
    time.sleep(1)

Bonescript for the Beagle Bone

var b = require('bonescript');
var temperatureSensor = "P9_40";

setInterval(readTemperature, 1000);

function readTemperature() {
    b.analogRead(temperatureSensor, writeTemperature);
}

// The MCP9700A provides 500mV at 0C and 10mV/C change.
function writeTemperature(x) {
    //console.log("temp value: " + x.value);
    var millivolts = x.value * 3300; // Only outputs 1.8v, so must adjust accordingly
    var temp_c = (millivolts - 500) / 100;
    var temp_f = (temp_c * 9 / 5) + 32;
    //console.log("Millivolts: " + millivolts + ", temp_c: " + temp_c + ", temp_f: " + temp_f);
    var timeStamp = new Date().getTime();
    //console.log(new Date());
    writeToScreen(temp_c);
    writeToHana(temp_c, timeStamp);
}

function writeToHana(temp, timeStamp) {
    var http = require('http');
    var options = {
        host: '[SERVER IP]',
        port: [SERVER PORT],
        path: '/sap/devs/iot/services/iot_input.xsodata/sensor',
        method: 'POST',
        headers: {
            'Authorization': '[AUTH STRING]',
            'Content-Type': 'application/json'
        }
    };

    var req = http.request(options, function(res) {
        res.setEncoding('utf-8');
        var responseString = '';
        res.on('data', function(data) {
            responseString += data;
            // Do something when a value is there
            //console.log("Response: " + responseString);
        });
        res.on('end', function() {
            //var resultObject = JSON.parse(responseString);
        });
    });

    req.on('error', function(e) {
        console.log("error found");
        console.error(e);
    });

    var jsonData = {
        "ID": 1,
        "SNAME": "Craig",
        "STYPE": "Temp",
        "SVALUE": "" + temp + "",
        "TDATE": "/Date(" + timeStamp + ")/"
    };
    var strData = JSON.stringify(jsonData);
    //console.log(strData);
    req.write(strData);
    req.end();
}

JavaScript for the Tessel

// Any copyright is dedicated to the Public Domain.
// http://creativecommons.org/publicdomain/zero/1.0/

/*********************************************
This basic climate example logs a stream
of temperature and humidity to the console.
*********************************************/

var tessel = require('tessel');
// if you're using a si7020 replace this lib with climate-si7020
var climatelib = require('climate-si7020');

var climate = climatelib.use(tessel.port['A']);

climate.on('ready', function () {
    console.log('Connected to si7020');

    // Loop forever
    setImmediate(function loop () {
        climate.readTemperature('c', function (err, temp) {
            climate.readHumidity(function (err, humid) {
                console.log('Degrees:', temp.toFixed(4) + 'C', 'Humidity:', humid.toFixed(4) + '%RH');
                var timeStamp = new Date().getTime();
                writeToHana(temp.toFixed(4), timeStamp);
                setTimeout(loop, 1000);
            });
        });
    });
});

climate.on('error', function(err) {
    console.log('error connecting module', err);
});

function writeToHana(temp, timeStamp) {
    var http = require('http');
    var options = {
        host: '[SERVER IP]',
        port: [SERVER PORT],
        path: '/sap/devs/iot/services/iot_input.xsodata/sensor',
        method: 'POST',
        headers: {
            'Authorization': '[AUTH STRING]',
            'Content-Type': 'application/json'
        }
    };

    var req = http.request(options, function(res) {
        res.setEncoding('utf-8');
        var responseString = '';
        res.on('data', function(data) {
            responseString += data;
            // Do something when a value is there
            //console.log("Response: " + responseString);
        });
        res.on('end', function() {
            //var resultObject = JSON.parse(responseString);
        });
    });

    req.on('error', function(e) {
        console.log("error found");
        console.error(e);
    });

    var jsonData = {
        "ID": 1,
        "SNAME": "Tessel",
        "STYPE": "Temp",
        "SVALUE": "" + temp + "",
        "TDATE": "/Date(" + timeStamp + ")/"
    };
    var strData = JSON.stringify(jsonData);
    //console.log(strData);
    req.write(strData);
    req.end();
}

 

Control Values

 

Using an HTTP GET with Content-Type: application/json, we can retrieve values from our "control" table.

Python for the Raspberry Pi

import RPi.GPIO as GPIO
import time
import urllib2
import json

# convert string to number
def num(s):
    try:
        return int(s)
    except ValueError:
        return float(s)

# blinking function
def blink(pin):
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(1)
    GPIO.output(pin, GPIO.LOW)
    time.sleep(1)
    return

# Get values from server
def getServerResponse(url):
    req = urllib2.Request(url)
    opener = urllib2.build_opener()
    f = opener.open(req)
    return json.loads(f.read())


###### Initialize Program ######

# suppress warnings
GPIO.setwarnings(False)
# to use Raspberry Pi board pin numbers
GPIO.setmode(GPIO.BOARD)
# set up GPIO output channel
GPIO.setup(18, GPIO.OUT)
GPIO.setup(16, GPIO.OUT)
GPIO.setup(22, GPIO.OUT)

# fetch control variables
jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot_control.xsodata/control?$format=json&$filter=SCONTROL%20eq%20%27HOT%27")
hotTemp = jsonStr['d']['results'][0]['SVALUE']

jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot_control.xsodata/control?$format=json&$filter=SCONTROL%20eq%20%27COLD%27")
coldTemp = jsonStr['d']['results'][0]['SVALUE']

# Now loop and check for action
while True:
    jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot.xsodata/IOT?$orderby=ID%20desc&$top=1&$select=SVALUE&$filter=SNAME%20eq%20%27Ian%27&$format=json")
    currentTemp = jsonStr['d']['results'][0]['SVALUE']
    if num(currentTemp) < num(coldTemp):
        print "Under, " + currentTemp + " is less than " + coldTemp
        for i in range(0, 50):
            blink(18)
    else:
        if num(currentTemp) > num(hotTemp):
            print "Over, " + currentTemp + " is more than " + hotTemp
            for i in range(0, 50):
                blink(22)
        else:
            print "Within range"

    time.sleep(5)

GPIO.cleanup()
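The same control values can be fetched from the JavaScript-based boards (BeagleBone or Tessel) with an HTTP GET as well. The following is a minimal sketch, not part of the original example, using Node's http module with the same OData URL and the same placeholder host, port and Authorization header as in the POST examples above.

// Hypothetical sketch: fetch the HOT control value from the iot_control service
var http = require('http');

var options = {
    host: '[SERVER IP]',
    port: [SERVER PORT],
    path: '/sap/devs/iot/services/iot_control.xsodata/control?$format=json&$filter=SCONTROL%20eq%20%27HOT%27',
    method: 'GET',
    headers: {
        'Authorization': '[AUTH STRING]',
        'Accept': 'application/json'
    }
};

var req = http.request(options, function(res) {
    res.setEncoding('utf-8');
    var responseString = '';
    res.on('data', function(data) {
        responseString += data;
    });
    res.on('end', function() {
        // Same JSON structure the Python code navigates: d.results[0].SVALUE
        var hotTemp = JSON.parse(responseString).d.results[0].SVALUE;
        console.log('HOT threshold from HANA: ' + hotTemp);
    });
});

req.on('error', function(e) {
    console.error(e);
});

req.end();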

 

Round Trip

 

Combining the HTTP POST and GET techniques above, we can both send readings to SAP HANA and retrieve values from our "control" table.

 

This round trip uses a fan and an LED acting as a "heater". The fan is a small 4V fan driven from the 5V supply via a transistor on the board. The idea is that when the temperature reaches the "HOT" value from the control table, the program starts the fan to cool things back down, and when it reaches the "COLD" value the "heater" starts to warm the environment back up.

 

round_trip.png

 

Python for the Raspberry Pi

import os
import glob
import json
import urllib2
import time
import RPi.GPIO as GPIO

os.system('modprobe w1-gpio')
os.system('modprobe w1-therm')

base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28*')[0]
device_file = device_folder + '/w1_slave'
maxtemp = 0
mintemp = 0

# to use Raspberry Pi board pin numbers
GPIO.setmode(GPIO.BOARD)
# set up GPIO output channel
GPIO.setup(12, GPIO.OUT)
GPIO.setup(13, GPIO.OUT)
GPIO.setup(15, GPIO.OUT)
GPIO.setup(16, GPIO.OUT)
GPIO.setup(18, GPIO.OUT)

def blink(pin):
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(0.5)
    GPIO.output(pin, GPIO.LOW)
    time.sleep(0.5)
    return

def temp_low():
    GPIO.output(12, GPIO.HIGH)
    GPIO.output(13, GPIO.LOW)
    GPIO.output(15, GPIO.HIGH)
    GPIO.output(16, GPIO.LOW)
    return

def temp_high():
    GPIO.output(12, GPIO.LOW)
    GPIO.output(13, GPIO.HIGH)
    GPIO.output(15, GPIO.HIGH)
    GPIO.output(16, GPIO.LOW)
    return

def temp_ok():
    GPIO.output(12, GPIO.LOW)
    GPIO.output(13, GPIO.LOW)
    GPIO.output(15, GPIO.LOW)
    GPIO.output(16, GPIO.HIGH)
    return

# Get values from server
def getServerResponse(url):
    req = urllib2.Request(url)
    opener = urllib2.build_opener()
    f = opener.open(req)
    return json.loads(f.read())

def basic_authorization(user, password):
    s = user + ":" + password
    return "Basic " + s.encode("base64").rstrip()

def read_temp_raw():
    f = open(device_file, 'r')
    lines = f.readlines()
    f.close()
    return lines

def read_temp():
    lines = read_temp_raw()
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        lines = read_temp_raw()
    equals_pos = lines[1].find('t=')
    if equals_pos != -1:
        temp_string = lines[1][equals_pos+2:]
        temp_c = float(temp_string) / 1000.0
        temp_f = temp_c * 9.0 / 5.0 + 32.0
        return temp_c

jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot_control.xsodata/control?$format=json&$filter=SCONTROL%20eq%20%27COLD%27")
mintemp = jsonStr['d']['results'][0]['SVALUE']
print ('MIN Temp is set at ' + mintemp + 'c')

jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot_control.xsodata/control?$format=json&$filter=SCONTROL%20eq%20%27HOT%27")
maxtemp = jsonStr['d']['results'][0]['SVALUE']
print ('MAX Temp is set at ' + maxtemp + 'c')

while True:
    # Value of Sensor
    txtTemp = read_temp()
    # Timestamp
    txtDate = '/Date(' + str(int(time.time())) + ')/'
    # HTTP Post to HANA
    url = 'http://[SERVER IP]/sap/devs/iot/services/iot_input.xsodata/sensor'
    params = {"ID": "1", "TDATE": txtDate, "SVALUE": str(txtTemp), "SNAME": "Ian", "STYPE": "Temp"}
    # print(params)
    req = urllib2.Request(url,
                          headers={
                              "Authorization": basic_authorization('[USER]', '[PASSWORD]'),
                              "Content-Type": "application/json",
                              "Accept": "*/*",
                          }, data=json.dumps(params))
    f = urllib2.urlopen(req)
    blink(18)

    # fetch the url
    # url2 = "http://[SERVER IP]/sap/devs/iot/services/iot.xsodata/IOT?$orderby=ID%20desc&$top=1&$select=SVALUE&$filter=SNAME%20eq%20%27Ian%27&$format=json"
    # req2 = urllib2.Request(url2)
    # opener2 = urllib2.build_opener()
    # f2 = opener2.open(req2)
    # json2 = json.loads(f2.read())
    # currtemp = json2['d']['results'][0]['SVALUE']

    jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot.xsodata/IOT?$orderby=ID%20desc&$top=1&$select=SVALUE&$filter=SNAME%20eq%20%27Ian%27&$format=json")
    currtemp = jsonStr['d']['results'][0]['SVALUE']
    # print currtemp

    if (float(currtemp) <= float(maxtemp)):
        if (float(currtemp) < float(mintemp)):
            print ('>>> HEATER ON ' + currtemp + "c lower than MIN temp of " + str(mintemp) + "c")
            temp_low()
        else:
            print ('HEATER/FAN OFF ' + currtemp + "c within bounds")
            temp_ok()
    else:
        print ('>>> FAN ON ' + currtemp + "c exceeds MAX temp of " + str(maxtemp) + "c")
        temp_high()

    # Loop to next reading
    time.sleep(3)

 

** Be sure to check the spacing in your code; for some reason the formatting here is horrid!!


Hana SPS09 Smart Data Integration - Adapters


This post is part of an entire series

Hana SPS09 Smart Data Integration - Overview

 

The foundation of Data Integration is being able to connect to various sources. Looking at the SPS08 Smart Data Integration option and its connectivity, you can see the usual suspects: Oracle, SQL Server etc.
With SPS09 and its Adapter SDA extension not much has changed, except for one thing: there is now an Adapter SDK, and you can write your own Adapters in Java!

 

 

The question of connectivity is an obvious one. Without the ability to connect to a source system directly, a workaround has to be used, e.g. writing files, copying them and then loading them into Hana. That files are cumbersome to handle is obvious as well. Or do you support Twitter? A workaround might be to use its underlying RESTful protocol. Common to all these workarounds is that they put the entire development burden on the user: the user has to write a program to create the files, the user has to create and parse the RESTful messages.

For the common sources that is no problem; all tools support the relational databases of the various vendors. But even there you might find features unique to one database that may or may not be supported.

 

While that is the case for Smart Data Integration Adapters as well, thanks to the Adapter SDK every Java developer can write adapters for Hana without compromising Hana's stability.

 

Architecture

 

adaptersdkarchitecture.png

The most important change, from an architectural point of view, was to move as much code as possible out of Hana's IndexServer into separate processes.

All that remains in the IndexServer is the optimizer code, translating the user-entered SQL into an execution plan containing remote execution and local execution in - hopefully - the most efficient manner. The part of the SQL that should be sent to the source is handed over to the Data Provisioning Server process, which is another Hana process. This contains all the logic common to all adapters. Most importantly, it contains the communication protocol needed to talk to an agent process, the host of all adapters.

This architecture has multiple positive side effects:

  1. If anything happens to the remote source, the Hana Index Server is not impacted. Since the Index Server is the core process of Hana, a core dump in any line of adapter code running inside it could have brought down the entire Hana instance.
  2. Because the agent is an installation of its own, you can install the agent anywhere. One option is to place it on the Hana server itself, but that might not be preferred, because then the entire middleware of all sources has to be installed there and the network has to allow passage of those middleware protocols. More likely the agent will be installed on the specific source and talk to Hana via the network; no problem, as one Hana instance can have as many agents as required. Or a server of its own is used - possible as well.
  3. Because the agent can be installed anywhere, it can even be installed on premise and connected to a Hana cloud instance. Not even a VPN tunnel has to be used, as the supported protocols include https as well. In this case the agent establishes an https connection to the cloud Hana instance just as any other web browser would.
  4. Developing an Adapter is much easier: Hana Studio has a plugin so that it can act as an agent, and the developer can easily watch the internals.

 

 

Deploying existing Adapters

 

All the SAP provided Adapters are part of the Hana Agent installation, which is a separate download in SMP.

see Installations and Upgrades -> Index H -> SAP HANA SDI

in A- Z Index | SAP Support Portal


Once the agent is installed, deploying an adapter is as easy as copying a jar file.

 

see SAP HANA Enterprise Information Management Administration Guide - SAP Library for details

 

Writing your own Adapter

 

The most important question is how difficult it is to write your own adapter. The goal of development was, of course, to make it as simple as possible, but most importantly you do not need to be an expert in Java, Hana and the source system. Any Java developer should be able to write new adapters easily, just by implementing or extending some Adapter base classes.

Frankly, it is quite simple: you start a new project in Hana Studio (a Java OSGi/Equinox plugin project), a wizard builds the base class for you, and your task is to add the code to open a connection to the source, list the source tables and their structure, and the like. The SAP help portal has an excellent manual describing it all step by step.

Create a Custom Adapter Using a New Plug-in Project - SAP HANA Data Provisioning Adapter SDK - SAP Library

Building an Adapter by Extending the BaseAdapterClass - SAP HANA Data Provisioning Adapter SDK - SAP Library

 

Using the Adapters

fileadapter-createsource.png

All Adapters follow the Hana Smart Data Access paradigm.

In Hana Studio you create a new remote source and browse the remote tables. For selected tables you then create virtual tables, so that these look and feel like any other Hana table and can be queried. All the complexity underneath is hidden.

Just imagine the power of this! You deploy, for example, the File Adapter on your file server.

fileadapter-browse.png

Obviously the Adapter needs to follow certain security rules. In the case of the FileAdapter, for example, the developer decided that you cannot query the entire server but only files and directories within a specified root directory. In addition, the adapter requires the consumer to authenticate as a valid user.


The allowed Hana instance, whether local, remote or in the cloud, can then see files as regular tables with a user-defined table layout. Each of these tables can be instantiated as a virtual table, and when selecting from a virtual table, all files - or the files specified in the where clause of the query - are parsed using the defined file format and their data is shown.

 

fileadapter-query.png

SAP HANA SPS 09: New Developer Features; XS Admin Tools


This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

There are several major enhancements to the XS and developer-related admin tools in SAP HANA SPS 09. In this blog we will explore the visual redesign of the XS Admin tools as well as the structural changes to the SAP Web Dispatcher.

 

SAP Web Dispatcher

 

The SAP Web Dispatcher has long (since SPS 03) been the embedded web server for SAP HANA. However its actual administration and integration into the HANA tools has been pretty lacking. Before SPS 09, if you wanted to configure or monitor the SAP Web Dispatcher you had to resort to direct OS access of the HANA system. This all changes in SPS 09, as the local SAP Web Dispatcher is now fully integrated into the SAP HANA Process Management framework. This means that monitoring, configuration, and administration can all be done via the HANA Studio or web tooling which runs from an XS endpoint.

 

From a monitoring standpoint we can view the Web Dispatcher from the HANA Landscape view of the HANA Studio.

HANABlog40.png

However this also means that configuration parameter changes can now be made from the HANA Studio tools as well:

HANABlog41.png

 

The SAP Web Dispatcher has long had its own web based administration and monitoring tool.  Now this tool is directly accessible via an XS endpoint.  The URL path for this admin tool is /sap/hana/xs/wdisp/admin/public/

HANABlog42.png

 

XS Admin Tool

When originally introduced several SPSs ago, the XS Admin tool was primarily intended to allow for the advanced configuration of XSACCESS points. Since then it has grown in scope to include job scheduling, SMTP and SAML configuration, and in SPS 09 it also receives a major visual design overhaul.

 

SAP has decided to adopt the Fiori visual design for all the web-based administration tools in HANA, and the XS Admin tool is no different.

HANABlog43.png

 

Another area of major redesign in the XS Admin tool is job scheduling. In SPS 07 we introduced XSJobs and the ability to administer or monitor a single job from the XS Admin tool. New in SPS 09, by frequent request, is a single Job Dashboard that allows you to monitor the status of all jobs in a HANA system. It can be accessed directly from the URL /sap/hana/xs/admin/jobs/ or runs as an integrated part of the XS Admin tool.
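For reference, an XSJob itself is defined in the repository as an .xsjob file. The snippet below is only a hypothetical sketch: the package, file and function names are invented, and the xscron expression is intended to run the job daily at 01:00.

{
    "description": "Read temperature sensors",
    "action": "playground.sp9.jobs:readSensors.xsjs::doReadSensors",
    "schedules": [
        {
            "description": "Run once per day at 01:00",
            "xscron": "* * * * 1 0 0"
        }
    ]
}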

HANABlog44.png

 

From the Job Dashboard we can drill into the details of a single job definition.

HANABlog45.png

 

Logon and Self Service

 

The Form Logon Screen receives several new features in SPS 09 as well. This includes new self service tools for resetting passwords and requesting new accounts.

 

For example you can now set a custom background image for the Form Logon Screen via the xsengine.ini -> httpserver parameter.

HANABlog46.png

There is a new self service application to reset passwords.
/sap/hana/xs/selfService/user/resetPassword.html

HANABlog47.png

 

And a new self service to request a new user account.

/sap/hana/xs/selfService/user/requestAccount.html

HANABlog48.png

 

Also there is an Admin tool where you can monitor and approve or reject these user account requests.

/sap/hana/xs/selfService/admin/

HANABlog49.png

 

Finally, there is a standalone self-service application that allows users to set their user profile preferences such as date or time format.

/sap/hana/xs/formLogin/profile/

HANABlog50.png

Hide and seek in XSEngine - expose HTTP access to multitenant database containers in SAP HANA SPS09


Intro

Hi everyone, I recently installed SAP HANA SPS09 and wanted to try something new. Since multitenant database containers (MDC) are a major feature in SPS09, I installed SAP HANA in single container mode and in multiple containers mode as well. As you know, I'm a fan of SAP HANA, especially XS, so after adding both SAP HANA systems to SAP HANA Studio, I just visited the XSEngine welcome pages directly. Just like after installing Apache Tomcat, you first visit the welcome page to check if the web server is running correctly.

 

For the single container mode, everything looked fine. But for the multiple containers mode, I failed to reach the welcome page... It felt like a game of hide and seek with the XSEngine. So where is the XSEngine hidden, and how can I visit the welcome page? I did some research and want to share my experience exposing HTTP access to multitenant database containers in SAP HANA SPS09.

 

Single container mode (instance 00)

Everything looked fine just like before.

 

1_.png

 

Since the instance number is 00, I visited port number 8000. XSEngine is up and running.

 

2_.png

 

Multiple containers mode (instance 01)

I logged on to the system database container, that is SYSTEMDB, as you can see in the following screenshot. First of all, you cannot find the 'xsengine' service which you can find in single container mode.

 

3_.png

 

Since the instance number is 01, I visited port number 8001. The welcome page did not appear. Why???

 

4_.png

 

Configure HTTP access to multiple containers

The first idea that came to my mind was searching for the solution in the SAP HANA Administration Guide. Bingo! You can find the solution in Configure HTTP Access to Multitenant Database Containers and follow the procedure. I won't explain the procedure again here, but I'll give some screenshots and highlight some key points for you.

 

Before jumping into the configuration part, I highly recommend you first have a look at Multitenant Database Containers - SAP HANA Administration Guide - SAP Library in order to get an overview of MDC. From the architecture diagram below, you can see several points:

 

1. The internal web dispatcher, which is introduced as a service inside SAP HANA SPS09, accepts all HTTP requests and dispatches them to different databases. That's why, as you'll see in the following part, most configuration takes place in webdispatcher.ini.

 

2. By default, a tenant database has only an index server.

3. For the system database, the XS server is embedded in the name server. That's why you cannot find the 'xsengine' service in the landscape.

4. For a tenant database, the XS server is embedded in the index server.

 

6.PNG

 

Configure HTTP access to SYSTEMDB

1. Log on to the system database container -> Administration -> Configuration -> webdispatcher.ini -> profile -> change wdisp/system_0 on the system layer

  • Change the port number from 8 to 14; 3<instance>14 is the internal port number reserved for the embedded XS server of the system database.
  • Replace the default SRCURL=/ parameter with the SRCVHOST parameter, specifying the fully qualified domain name (FQDN) of the physical host as the value.

 

5_.png

 

2. Visit the welcome page. XSEngine is up and running.

 

7_.png

 

Configure HTTP access to DB1

What about creating a tenant database and visiting its XSEngine? Let's give it a shot.

 

1. Log on to the system database container -> SQL console -> run the following SQL

CREATE DATABASE DB1 SYSTEM USER PASSWORD Manager1;

 

For a better understanding, I logged on to DB1; you can see that only the index server belongs to DB1, as we discussed in the architecture diagram. Since the XS server is embedded in the index server, you cannot find it either.

 

9_.png

 

2. Log on to the system database container -> Administration -> Configuration -> webdispatcher.ini -> profile -> add wdisp/system_1 on the system layer

By default, new tenant databases are assigned 3 port numbers, one for each of the following:

  • Internal communication (3<instance>40-3<instance>97)
  • SQL (3<instance>41-3<instance>98)
  • HTTP (3<instance>42-3<instance>99)

 

We did not assign port numbers when we created the tenant database DB1, so the three port numbers 30140 (internal communication), 30141 (SQL) and 30142 (HTTP) are assigned to DB1. That's why we configure the port number as 3$(SAPSYSTEM)42 in the following screenshot.

 

8_.png

 

3. Add the tenant database alias name to DNS. As I'm not able to do that, I just added an entry to the local "hosts" file on Windows. You can find the path in the following screenshot.

 

10_.png

 

4. Visit the welcome page. XSEngine is up and running!

 

11.PNG

 

Add a dedicated 'xsengine' service and configure HTTP access

Don't like the embedded XS server? What about adding a dedicated 'xsengine' service? Now let's create another tenant database, DB2, and add a dedicated XS server. You can find the details in Add a Service to a Tenant Database - SAP HANA Administration Guide - SAP Library.

 

1. Log on to the system database container -> SQL console -> run the following SQL

CREATE DATABASE DB2 SYSTEM USER PASSWORD Manager2;

 

Just as with DB1, only the index server belongs to DB2 and there is still no 'xsengine' service. As you would expect, the three port numbers 30143 (internal communication), 30144 (SQL) and 30145 (HTTP) are assigned to DB2.

 

12_.png

 

2. Run the following SQL to add the dedicated 'xsengine' service. Replace <hostname> with yours.

ALTER DATABASE DB2 ADD 'xsengine' AT LOCATION '<hostname>:30146';

 

If you use an occupied port number, the following error will appear.

 

13.PNG

 

3. Now you can find the dedicated 'xsengine' service. Keep in mind that 30146 (internal communication), 30147 (SQL) and 30148 (HTTP) are assigned to it.

 

14_.png

 

4. Configure HTTP access: add wdisp/system_2 on the system layer and use port number 3$(SAPSYSTEM)48.

 

15_.png

 

5. Add the tenant database alias name to DNS.

 

16_.png

 

6. Visit the welcome page. XSEngine is up and running!

 

17.PNG

 

Want more?

So far we've configured HTTP access to SYSTEMDB, DB1 and DB2 and visited the welcome pages successfully.

 

SYSTEMDB

7_.png

 

DB1

11.PNG

 

DB2

17.PNG

 

From the above three URLs, you can see different hostnames but the same port number. That's how the internal web dispatcher in SAP HANA works: it maps the various hostnames to the various databases.

 

But how can we make sure there are three XS servers, not just one XS server? Let's use Web IDE to confirm this. In order to use Web IDE, first we need to grant a role to the SYSTEM user in SYSTEMDB, DB1 and DB2 respectively.

 

18_.png

 

We assume the following credentials.

 

<database>: <username>/<password>

SYSTEMDB: SYSTEM/Manager0

DB1: SYSTEM/Manager1

DB2: SYSTEM/Manager2

 

SYSTEMDB

19_.png

 

I failed to log in with the passwords Manager1 and Manager2, which means this XS server belongs only to SYSTEMDB.

 

22.PNG

 

DB1

I failed to log in with the passwords Manager0 and Manager2, which means this XS server belongs only to DB1.

 

20_.png

 

DB2

I failed to log in with the passwords Manager0 and Manager1, which means this XS server belongs only to DB2.

 

21_.png

 

Wrap-up

In this blog post, we first learned some basics about multitenant database containers (MDC), including the architecture at a high level. In SAP HANA SPS09, HTTP access to MDC is not configured/exposed by default; we need to configure it manually. So besides SYSTEMDB we created two tenant databases, DB1 and DB2, and successfully exposed HTTP access to these containers. In addition, we also learned how to add a dedicated 'xsengine' service.

 

Hope you enjoyed reading my blog and exposing HTTP access to MDC successfully.

SAP HANA SPS 09: New Developer Features; HANA Test Tools


This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

Unit testing and test-driven development have become increasingly popular in the industry in general and specifically within the SAP customer base. To meet this demand, SAP introduces both a unit test framework and a table/view mock framework in SPS 09. It's important to note, however, that this test framework is not automatically installed with SPS 09. The delivery unit for this framework is delivered as non-automatic content. This means it is sitting on the file system of your HANA system and must be imported by an administrator. This is done because we assume customers will want these tools in their development systems, but probably not in their production systems. Therefore we allow you, the customer, to decide if you want the tools installed.

 

Contents of the HANA_TEST_TOOLS Delivery Unit

 

  1. XSUnit, a unit test framework based on Jasmine, which is an open source JavaScript test framework. With XSUnit you can test both JavaScript code and database content. Various extensions have been made to the standard Jasmine library to make it easier to test database content. (unit.jasminexs)
  2. XSUnit Test Coverage for XS JavaScript (unit.codecoverage)
  3. Test Isolation Framework, namely Mockstar, which allows you to test a view or a stored procedure in isolation. (mockstar)
  4. The results of the XSUnit can be persisted in the Test Result Logger
  5. Test Helper (unit.util)
  6. Developer Documentation (doc)
  7. Demos which are a good starting point for how to test various HANA content. (demo)

HANABlog51.png

 

How to create an XSUnit Test

 

First of all, there is one common tool for testing XS JavaScript and database content such as SQLScript and views. It is all based upon the open source libraries detailed in the above section, several of which have been extended with custom assertions and test utilities specific to the HANA environment.

 

Test code and test data are developed alongside your productive code in the same HANA instance, as just another XSJSLIB development object. Tests themselves are implemented in XSJS syntax in SAP HANA Studio or the Web-based Development Workbench.
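As a rough illustration (not taken from the delivery unit's demo content), a test XSJSLIB might look like the following Jasmine-style sketch. The package playground.sp9 and the pricing library with its calculateDiscount function are invented names used only for this example.

// Hypothetical test library, e.g. playground.sp9.tests:discountTest.xsjslib
// Import the (assumed) library under test following the standard XSJS pattern.
$.import("playground.sp9.lib", "pricing");
var pricing = $.playground.sp9.lib.pricing;

describe("pricing.calculateDiscount", function() {

    it("applies a 10% discount for orders over 100", function() {
        // calculateDiscount is an assumed example function
        expect(pricing.calculateDiscount(200)).toEqual(180);
    });

    it("leaves small orders unchanged", function() {
        expect(pricing.calculateDiscount(50)).toEqual(50);
    });
});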

 

HANABlog52.png

 

The unit tests can then be run standalone in the web browser via the URL:
/sap/hana/testtools/unit/jasminxs/TestRunner.xsjs?package=<package>

where package is the repository package your tests are located within.

HANABlog53.png

 

These unit tests can also be run directly from the SAP Web-based Development Workbench.

HANABlog54.png

 

This has the advantage of displaying the code coverage directly within the editor of the SAP Web-based Development Workbench.

HANABlog55.png

 

Code coverage can be displayed in the standalone web tool as well:

HANABlog56.png

By integrating the JavaScript line code coverage library BlanketJS into the XSUnit test framework, it's now possible to see to what extent the JavaScript code is covered (percentage, line-based) by the executed test run.

 

Note that all .xsjslib files which match the pattern parameter (default "*Test") are excluded by default.

 

Example 1: The coverage is measured for each imported .xsjslib file
http://<host>:<port>/sap/hana/testtools/unit/jasminexs/TestRunner.xsjs?package=sap.hana.testtools.mockstar&coverage=true

 

Example 2: The coverage is measured for each .xsjslib file located in the mockstar package
http://<host>:<port>/sap/hana/testtools/unit/jasminexs/TestRunner.xsjs?package=sap.hana.testtools&coverage=sap.hana.testtools.mockstar

 

Example 3: If you would like to exclude test helpers located in the tests package you can specify them via the “exclude” parameter
http://<host>:<port>/sap/hana/testtools/unit/jasminexs/TestRunner.xsjs?package=sap.hana.testtools&coverage=true&exclude=sap.hana.testtools.mockstar.tests

 

However, the test tool can also be run as a service call. This way, if you would like to execute the tests on a regular basis or trigger them by code changes, you can use Jenkins as a continuous integration server. The unit tests are called remotely from Jenkins and the results are stored and displayed within Jenkins. This is a process that we use internally at SAP.

HANABlog57.png

 

Mocking

HANABlog58.png

 

To write self-contained unit tests that are executable in any system, you have to test the HANA model in an isolated way.

A HANA view, for example, typically has dependencies on other HANA views or on database tables. These dependencies pass data to the "view under test" and cannot be controlled or overwritten by the test. This means you need the ability to mock the dependencies of the "view under test".

 

The mocking framework, namely Mockstar, helps you to test your models with specific test data that resides in dedicated test tables.

To do so, it creates a copy of your original view/procedure in which the dependent views/tables are replaced by test tables.

These (temporary) tables can be prepared, controlled and filled by the test.

 

The advantages of mocking are:

 

  • Write self-contained unit tests that are executable in any system
  • Tests can be executed by several users in parallel

SAP HANA SPS 09: New Developer Features; Core Data Services


This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

Core data services (CDS) is an infrastructure for defining and consuming semantically rich data models in SAP HANA. Using a data definition language (DDL), a query language (QL), and an expression language (EL), CDS is envisioned to encompass write operations, transaction semantics, constraints, and more.

 

A first step toward this ultimate vision for CDS was the introduction of the hdbdd development object in SPS 06. This new development object utilized the Data Definition Language of CDS to define tables and structures. It can therefore be considered an alternative to hdbtable and hdbstructure.

 

In SPS 09 we continue to build on the foundation of CDS and the HDBDD development artifact.  We introduce important new syntax as well as significant enhancements to the lifecycle management of these development objects.

 

Lifecycle Management

 

One of the biggest requests for CDS/HDBDD was to improve the lifecycle management capabilities. Before SPS 09, if you made most kinds of changes to an entity definition for a table which had data in it, you received an activation error. The developer was responsible for migrating the data out of the existing table, dropping it, recreating it with the new structure and then migrating the data back. This process was not only cumbersome and error prone but also made transporting such changes automatically to downstream systems nearly impossible.

 

In SPS 09 we add an automatic migration process to activation of the HDBDD artifact. This supports a variety of scenarios.

 

  • Conversion from the same named hdbtable artifact
  • Change of key / data type
  • Removal of columns

 

In these situations, data is moved transparently to a shadow table. The source table is then dropped/adjusted and the data is migrated back. This allows for structural changes that in the past would have required significant developer work to preserve the data. Even better, this process happens automatically upon activation; therefore transporting content via Delivery Units will automatically trigger the migration.

 

Multi-File Support

Another very common request was for multi-file support in HDBDD. Before SPS 09 you were often forced into very large, monolithic HDBDD artifacts if you wanted to share association or type definitions between entities or views. It also meant there was no way to define global types that could be reused across the system.

 

SPS 09 solves all of these problems by adding the ability to import from one or more existing HDBDD files.  All the entity definitions, types, etc then become visible to the destination HDBDD artifact.

 

It is possible to refer to an artifact that is defined in another HDBDD file ("external" artifact). Each external artifact must explicitly be made accessible by a using declaration. The using declaration introduces a local name as an alias for the external artifact, which is identified by a fully qualified name. The external artifact can be either a single object (type, entity, view) or a context. The using declarations are located in the header of the file between the namespace declaration and the beginning of the top level artifact.

 

For example you might create some reusable types in a central HDBDD:

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context ContextA {
  type T1 : Integer;
  context ContextAI {
    type T2 : String(20);
    type T3 {
      a : Integer;
      b : String(88);
    };
  };
};

 

You can then import and reference these types in a separate HDBDD artifact.

namespace playground.sp9.cds;
using playground.sp9.cds::ContextA.T1;
using playground.sp9.cds::ContextA.ContextAI as ic;
using playground.sp9.cds::ContextA.ContextAI.T3 as ict3;
@Schema: 'SP9DEMO'
context ContextB {
  type T10 {
    a : T1;               // Integer
    b : ic.T2;            // String(20)
    c : ic.T3;            // structured
    d : type of ic.T3.b;  // String(88)
    e : ict3;             // structured
  };
  context ContextBI {
    type T1 : String(7);  // hides the T1 coming from the first using declaration
    type T2 : T1;         // String(7)
  };
};

 

Enumerations

Another new syntax feature in SPS 09 is the introduction of enumerations: a type which declares several possible values or domains. However, enumerations can only be used within annotations. The definition of annotations is very similar to type definitions. Annotation definitions can either be located inside a context, or an annotation definition can be the single top-level artifact in a CDS file. However, annotations do NOT generate catalog artifacts.

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context enumerations {
  type Color : String(10) enum { red = 'FF0000'; g = '00FF00'; b = '0000FF'; };
  annotation MyAnnotation {
    a : Integer;
    b : String(20);
    c : Color;
    d : Boolean;
  };
};

 

New Types

One of the other common requests was for the CDS syntax to support all the HANA data types. With SPS 09 we extend the supported types, including the GeoSpatial types. Please note that support for the GeoSpatial types ST_POINT and ST_GEOMETRY is still limited. These types can only be used for the definition of elements in types and entities. It is not possible to define a CDS view that selects such an element from a CDS entity.

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context NewTypes {
  @nokey
  entity SomeTypes {
    a : hana.ALPHANUM(10);
    b : hana.SMALLINT;
    c : hana.TINYINT;
    d : hana.SMALLDECIMAL;
    e : hana.REAL;
    h : hana.VARCHAR(10);
    i : hana.CLOB;
    j : hana.BINARY(10);
    k : hana.ST_POINT;
    l : hana.ST_GEOMETRY;
  };
};

 

View Syntax

The View Syntax receives several new features as well.  For example we can now specify the OrderBy clause in the view definition:

//Order By
view EmployeesView as select from Employee {
  orgUnit,
  salary
} order by salary desc;

 

We can also use the CASE syntax within the view definition:

//Case
entity MyEntity {
  key id : Integer;
  a : Integer;
  color : String(1);
};
view MyView as select from MyEntity {
  id,
  case color
    when 'R' then 'red'
    when 'G' then 'green'
    when 'B' then 'blue'
    else 'black'
  end as color,
  case
    when a < 10 then 'small'
    when 10 <= a and a < 100 then 'medium'
    else 'large'
  end as size
};

 

Unmanaged Associations

Associations in general are one of the most powerful features of the CDS syntax. They move the relationship definition out of the view/join and into the source entity definition for better re-usability/maintenance and simpler overall syntax.

 

An unmanaged association is based on existing elements of the source and target entity. No fields are generated. In the ON condition, only elements of the source or the target entity can be used; it is not possible to use other associations. The ON condition may contain any kind of expression - all expressions supported in views should also work in the ON condition of an unmanaged association. The names in the ON condition are resolved in the scope of the source entity. Elements of the target entity are accessed via the association itself.

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context unmanagedAssociations {
  entity Employee {
    key id   : String(256);
    officeId : Integer;
  };
  entity Room {
    key id : Integer;
    inhabitants : Association[*] to Employee on inhabitants.officeId = id;
  };
  entity Thing {
    key id   : Integer;
    parentId : Integer;
    parent   : Association[1] to Thing on parent.id = parentId;
    children : Association[*] to Thing on children.parentId = id;
  };
};

 

Backlink

Unmanaged associations also make several previously unsupported features possible, for example the backlink. The CDS specification defines backlink associations; this kind of association is not yet supported. But you can (with a little extra effort) achieve the same effect by using unmanaged associations.

According to the CDS specification, it shall be possible to define a header/item relationship like this:

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context backlink {
//Backlink syntax not yet supported
//entity Header {
//  key id : Integer;
//  items : Association[*] to Item via backlink header;
//  description : String(120);
//};
//
//entity Item {
//  key header : Association[1] to Header;
//  key id : Integer;
//  description : String(120);
//};
//
//Workaround using unmanaged associations
entity Header {
  key id : Integer;
  items : Association[*] to Item on items.headerId = id;
  description : String(120);
};
entity Item {
  key headerId : Integer;
  key id : Integer;
  header : Association[1] to Header on header.id = headerId;
  description : String(120);
};
};

 

Many to Many

Similarly Many to Many can also be simplified using unmanaged associations. CDS defines mediated m-to-n associations, which can be defined using the "via entity" keyword. These kinds of associations are not yet supported directly in HDBDD. But using unmanaged associations, there is a workaround.

Example: a model of employees and projects, where an employee can be assigned to any number of projects. The link table must be modelled explicitly.

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context manyToMany {
  entity Employee {
    key id : Integer;
    name : String(80);
    projectLinks : Association[*] to E2P on projectLinks.e_id = id;
  };
  entity Project {
    key id : Integer;
    name   : String(80);
    employeeLinks : Association[*] to E2P on employeeLinks.p_id = id;
  };
  entity E2P {
    key e_id : Integer;
    key p_id : Integer;
    projects  : Association[*] to Project  on projects.id  = p_id;
    employees : Association[*] to Employee on employees.id = e_id;
  };
  view EmployeesWithProjects as select from Employee {
    name                       as EmployeeName,
    projectLinks.projects.id   as projectId,
    projectLinks.projects.name as projectName
  };
  view ProjectsWithEmployees as select from Project {
    name                         as projectName,
    employeeLinks.employees.id   as EmployeeId,
    employeeLinks.employees.name as EmployeeName
  };
};

 

Full Text Index

Last but not least, we now have syntax to define a full text index within the CDS/HDBDD artifact.

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context fullText {
  entity Header {
    key id : Integer;
    @SearchIndex.text: { enabled: true }
    @SearchIndex.fuzzy: { enabled: true }
    description : String(120);
  };
};

SAP HANA SPS 09: New Developer Features; XSDS (XS Data Services)


This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

In SPS 09, SAP introduces a major extension to the programming model in the form of a reusable XSJS Library called XSDS or XS Data Services. There are several goals for this new library and the impact that it has on the overall programming model.

 

General HANA artifact consumption

  • Native JavaScript embedding of HANA database artifacts
  • Structured API calls instead of low-level SQL Interface
  • Less boilerplate coding (for example result set conversions)

 

Native CDS Consumption

  • Import CDS entities as native JavaScript objects
  • Understand CDS metadata for working with JavaScript objects

 

The origin of this extension to the programming model is that there were originally three separate projects/libraries built within SAP.  They focused on different aspects of data consumption within XSJS. They were an ORM (Object Relationship Mapper) library, a Query Builder, and a Relationship library.

HANABlog59.png

 

While these three individual libraries all had value on their own, ultimately SAP decided to integrate them into one single library and also build it on the CDS/HDBDD foundation for even better reuse of information that already exists within the database layer.

 

CDS (Core Data Services)

Core Data Services are a cross-platform set of concepts and tools to define semantically rich data models.  Within SAP HANA we already have the development artifact HDBDD for using the CDS syntax to define tables (entities), their relationships and views that utilize those relationships. A central part of the CDS specification is the object-relationship mapping that links semantically meaningful entities to their technical representation as records in the HANA database.

 

The short description of XSDS, then, is that it is the native CDS embedding for JavaScript in the XS application services layer of the SAP HANA programming model.

HANABlog60.png

 

At its core, XSDS allows you to import and extend CDS entity definitions. You simply supply the package path and file name of the HDBDD and the entity name during the import. The XSDS library reads the available metadata on types, keys and associations. It also supports extensions, projections, and renaming of entity definitions. It also supports CDS concepts (like backlink and via entity associations) which aren't yet supported by HDBDD definitions.

 

In this simple example we import the Employees entity from the EPM hdbdd artifact and the MasterData Context.

 

$.import("sap.hana.xs.libs.dbutils", "xsds");
var XSDS = $.sap.hana.xs.libs.dbutils.xsds;
var oEmployee = XSDS.$importEntity("sap.hana.democontent.epmNext.data", "EPM.MasterData.Employees");

oEmployee is now a JavaScript object which contains all the metadata that was already defined in the underlying CDS model. It also supports full CRUD transactionality via this JavaScript object.

 

Import Pre-Generation

However, this import of the metadata from the underlying model can be a relatively costly operation. Because the XSJS programming model is stateless this often creates the situation where you must re-import the CDS metadata for every single request.

 

For this reason the XSDS library also supports pregeneration of imports in order to improve performance. The library serializes the result of the import into a single reusable XSJSLIB file. Therefore, instead of importing the entity from the HDBDD artifact, you can use the standard $.import feature of XSJS to utilize this prebuilt library definition.
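As a rough sketch of the consumption side (the package and library names below are invented, and the exact shape of the generated library depends on the XSDS tooling that produced it):

// Assume the pregenerated entity definitions were serialized into an XSJSLIB
// called epmEntities in the package playground.sp9.generated.
$.import("playground.sp9.generated", "epmEntities");
var gen = $.playground.sp9.generated.epmEntities;

// The pregenerated definition can then be used per request without
// re-reading the CDS metadata from the HDBDD artifact each time.
var oEmployee = gen.Employees;  // hypothetical exposed entity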

 

Managed/Unmanaged

However XSDS isn't limited to only using CDS Entities. The goal for XSDS was to provide a single solution for all XSJS applications.  This means being able to use the same access library even if your base tables are directly created in the catalog and not via CDS. This way you can decide between Managed or Unmanaged modes in the library.

 

Managed mode allows you to import entity definitions from catalog objects and from CDS definitions (as described above). It is best used when you need consistency in access. It functions as a lightweight ORM building JavaScript objects from catalog or CDS definitions. It supports navigation via associations, data consistency and a limited query ability.

 

Unmanaged mode, on the other hand, functions as a Query Builder to provide you with maximum flexibility and access to nearly the full HANA SQL Syntax but via a simplified query interface. Data consistency between entities, however, remains the responsibility of the application developer.

 

Now for more details, lets look at some syntax examples of both Managed and Unmanaged Mode.

 

Managed Mode

Earlier we saw how to import a CDS entity definition.  You can also import catalog object definitions as well:

 

var Demo= XSDS.$defineEntity("Demo", '"DEMOSP9"."DEMO_TABLE"');

In this case we supply the Schema and Table name from the catalog instead of the path to the HDBDD artifact.

 

Regardless of the source of the entity, we perform the same core operations on the entity after it is imported into a JavaScript object.

 

We can create, retrieve, update, and delete instances of the entity.

var post = Post.$get({ pid: 101 });

The result is a plain JavaScript object that can be programmed against like any other JavaScript object.

if (post.Author.Name === 'Alice') post.Rating++;

Optionally, the objects support lazy retrieval of associated instances to avoid the "n+1 problem". The library also optimizes the join strategy to minimize database queries.

 

You can write changes into the database explicitly.

post.$save();

The transaction manager tracks local changes for minimal updates to the database.

 

As far as instance manipulation goes we can retrieve a single instance by key:

var post = Post.$get({ pid: 101 });

You can then update an instance like this:

post.Author.Name = "Alice";
post.$save();

Or you can create a new instance using this syntax:

var user = new User({ Name: "Alice", Created: new Date() });
user.$save();

Deletion works similarly:

post.$discard();

Instance Management

As the above examples show the entity instances are managed. This means that instances are singleton objects with "personality". Associated instances are stored by reference.

 

The benefits of this approach are that the runtime guarantees consistency.  You have a single consistent view on every instance enforced by the $save operation. You also have automatic association management across all relationship cardinalities (1:1, 1:n, and m:n).

 

The downside, on the other hand, is that we need additional overhead for instance management.  We are also limited to a subset of database functionality for instance retrieval.

 

HANABlog61.png

Transaction Handling

The managed mode also has an impact on transaction handling. We receive the benefits of transparent database and transaction handling. With a minimal XSDS runtime, the library is still able to provide metadata processing, an entity cache, and type conversion during the transaction. It still exposes and reuses as much underlying database functionality as possible (for example, constraint checks).

 

XSDS uses a single database connection and a single transaction. This makes it well suited to the single-threaded, per-request execution model of the XS processing layer. It supports auto commit, explicit commits, and rollbacks. It also supports connection configuration files (XSSQLCC).

 

 

Unmanaged Mode

We also have unmanaged mode for when you want a more convenient database interface but don't want to give up the full database syntax.  For example here is the typical database join statement in XSJS code.

var rs = conn.prepareStatement('
  SELECT a."Name", c."Text.Text"
    FROM "bboard.post" p
    JOIN "bboard.user" a ON p."Author.uid" = a."uid"
    JOIN "bboard.comment" c ON p."pid" = c."Post.pid"
    JOIN "bboard.user" ca ON c."Author.uid" = ca."uid"
   WHERE a."uid" = ca."uid"').execute();
while (rs.next()) {
  rs.getInt(1);
  rs.getString(2);
}

 

There are several problems here which XSDS unmanaged mode addresses.

Leaking Abstraction:

  • Column names can only be derived from the underlying table definition

Convenience:

  • String based query interface
  • Manual result set processing

Redundant Code:

  • Joins could be derived from underlying metadata instead of hand written

 

For these reasons the XSDS library also includes a query library for ad-hoc queries.  It allows you to construct general queries in a very JavaScript-like way:

var query = Post.$query().$where(Post.Author.Name.$eq("Alice"));
var results = query.$execute();

The result is still a plain JavaScript object containing unmanaged values, not a managed instance.

 

The Query Language used here has several features:

  • Incremental query building for better readability/maintainability
  • Operators like $project, $where, and $matching
  • Full support for CDS path navigation
  • Expression language based on CDS Query Language using fluent API
  • Alternative, more concise JSON-like selection notation for restricted subsets

 

Here are some additional code examples to illustrate some of these benefits.

 

$project

Projections with CDS navigation expressions:

var res = Post.$query().$project({
    Author: { Name: true },
    Comments: { Author: { Name: true } },
    Title: "TitleOfPost"
}).$execute();

The results of this query would look something like this:

[{
    Author: {
        Name: "Alice"
    },
    Comments: {
        Author: {
            Name: "Bob"
        }
    },
    TitleOfPost: "First Post!"
}, ...]

$where

The library supports complex where conditions like the following:

var selfCommentingPosts = Post.$query()
    .$where(Post.Comments.Author.pid.$eq(Post.Author.pid))
    .$execute();

There are predefined operators of the expression language:

  • $eq
  • $ne
  • $gt
  • $lt
  • $ge
  • $le
  • $like
  • $unlike

But you can still escape to any SQL operator:

var somePost = Post.$query()
    .$where(Post.Rating.$prefixOp('SQRT').$gt(2))
    .$execute();

$matching

$matching allows selection using a template for the result.

var interestingPosts = Post.$query().$matching({
    Rating: {
        $gt: 2
    },
    Tags: {
        Name: "+1"
    }
}).$execute();

**Note: the unmanaged versions of $find and $findAll use the same template language.

 

Incremental Query Construction

Incremental construction with immutable query objects:

// build query without DB call
var qBadPosts = Post.$query()
    .$where(Post.Author.pid.$eq(Post.Comments.Author.pid));
// refine query
var qStarBadPosts = qBadPosts.$where(Post.Rating.$gt(4));

Explicit trigger:

// trigger actual DB calls
var badTitles = qBadPosts.$project({ Title: true, Author: { Name: true } }).$execute();
var recentBadTitles = qStarBadPosts.$project({ Title: true }).$execute();

JSON Expressions

Simple expressions:

Post.$find({ mandt: "001", pid: 101 });

Complex expressions:

Post.$find({ Title: { $like: "First%" },
             Rating: { $gt: 1, $lt: 3 },
             Created: { $lt: Date.now() - 3600 } });

JSON Expression Language Operators:

  • $eq
  • $ne
  • $lt
  • $le
  • $gt
  • $ge
  • $like
  • $unlike
  • $null

 

Optimized Query Construction

The query construction allows for incremental query construction without database roundtrips.  Immutable query objects support sharing of subqueries. This results in just-in-time translation into plain SQL using needed joins.

SAP HANA SPS 09: New Developer Features; SAP River


This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

SAP River

The benefits of the SAP River language (RDL) application development environment were validated by partners and early adopters. However, based on feedback we received, and consistent with our strategic direction of building open and standard environments, SAP has decided to abandon a proprietary language approach, and to reapply and integrate the SAP River assets and principles within a cloud based development environment as part of the HANA Cloud Platform.

 

The SAP River language will therefore no longer be available as a stand-alone development environment in SAP HANA.

 

SAP River Assets Reused

The SAP River Application Explorer has been rebuilt as a general SAP OData Explorer in SPS 09. It allows for the general testing and data generation of XSODATA based services. You can view all records in a service or create/delete/edit individual records. It also supports mass generation of multiple records at once with random value generation. It can be launched from the Web-based Development Workbench (via the context menu option on an XSODATA service) or directly via the URL:

/sap/hana/ide/editor/plugin/testtools/odataexplorer/index.html?appName=<xsodata service path>

HANABlog6.png

 

SAP River Concepts Realized in Alternative Ways

 

One-Click Debugging

We now introduce One-Click Debugging in SPS 09. No longer do you have to choose the XS Session ID. Instead, the target of the debug session will launch in an external web browser or run in-place within the HANA Studio when you choose debugging. This also means that we needed to provide tools within the debugger for stubbing in HTTP headers, body, etc. In the past developers often used 3rd-party tools like Postman to simulate service calls. Now you can do all of this from within the HANA Studio as you start the debugging.

HANABlog32.png

 

But the debugging improvements don't end there.  We now also support XSJS/SQLScript integrated end-to-end debugging.  From an XSJS that calls into a SQLScript stored procedure we can now step from the XSJS debugger seamlessly into the SQLScript debugger.

HANABlog33.png

 

 

XSDS (XS Data Services)

In SPS 09, SAP introduces a major extension to the programming model in the form of a reusable XSJS Library called XSDS or XS Data Services. There are several goals for this new library and the impact that it has on the overall programming model.

 

General HANA artifact consumption

  • Native JavaScript embedding of HANA database artifacts
  • Structured API calls instead of low-level SQL Interface
  • Less boilerplate coding (for example result set conversions)

 

Native CDS Consumption

  • Import CDS entities as native JavaScript objects
  • Understand CDS metadata for working with JavaScript objects

 

For more details on XSDS please refer to this blog:

SAP HANA SPS 09: New Developer Features; XSDS (XS Data Services)


Hana SPS09 Smart Data Integration - Realtime Table Replication


This post is part of an entire series

Hana SPS09 Smart Data Integration - Overview

 

One of the more basic use cases is to copy data from a source database into Hana. Either just once to try out Hana, or doing an initial load and setting up realtime replication for all changes in order to have Hana side by side with the source system, e.g. for reporting.

Using the Smart Data Integration technology that can be done: you create virtual tables, write the SQL or dataflow for the initial load - one per source table - and create the realtime subscriptions. Or, in short, you use the Hana WebIDE to create an hdbreptask object that does all of that for you; a rough sketch of the manual steps is shown below.
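Purely for orientation, this is roughly what those manual steps look like in SQL. All object names are placeholders, and the exact remote subscription syntax should be verified against the Smart Data Integration documentation for your revision - this is a sketch, not a recipe:

-- 1. Expose the remote table as a virtual table (all names are placeholders)
CREATE VIRTUAL TABLE "TARGET_SCHEMA"."VT_CUSTOMER"
   AT "MY_REMOTE_SOURCE"."<NULL>"."<NULL>"."CUSTOMER";

-- 2. Initial load into a real column table
CREATE COLUMN TABLE "TARGET_SCHEMA"."CUSTOMER" LIKE "TARGET_SCHEMA"."VT_CUSTOMER";
INSERT INTO "TARGET_SCHEMA"."CUSTOMER"
   SELECT * FROM "TARGET_SCHEMA"."VT_CUSTOMER";

-- 3. Realtime subscription: QUEUE changes, run the initial load, then DISTRIBUTE
CREATE REMOTE SUBSCRIPTION "SUB_CUSTOMER"
   ON "TARGET_SCHEMA"."VT_CUSTOMER"
   TARGET TABLE "TARGET_SCHEMA"."CUSTOMER";
ALTER REMOTE SUBSCRIPTION "SUB_CUSTOMER" QUEUE;
-- (in a real setup the initial load runs between QUEUE and DISTRIBUTE)
ALTER REMOTE SUBSCRIPTION "SUB_CUSTOMER" DISTRIBUTE;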

 

But let's do something different this time instead of replicating a source database into Hana. Just for fun, we want to replicate Twitter - a database would work just as well, of course!

For that we need an Adapter to connect to Twitter - no problem, shipped together with the Agent - and we create a remote source with the Twitter login credentials. Everything else is straightforward:

 

Open the hdbreptask editor in the WebIDE by creating a new file with that extension

reptask0.png

 

Select the remote source, the target schema and all remote tables - well, in Twitter there are just two, but in reality you can add hundreds of tables at once

 

reptask1.png

Optionally specify filters

reptask2.png

Execute the task

reptask3.png

 

Summary

Actually, there is a nice video of all of this

SAP HANA SPS 09: New Developer Features; REST API


This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

The SAP HANA Repository is a key feature of the HANA native development environment.  It is the design-time archive for all development objects, from database tables all the way up through user interface artifacts. However, up until now we've not had a supported API which would allow application developers or 3rd-party tool developers to access the Repository.  With the advent of the Web-based development tooling, SAP needed to create a REST-based service interface for these development tools.  These two requirements come together in SPS 09 in the form of the newly released SAP HANA REST API.

 

Eclipse Orion

For the Web-based development tools, SAP decided to use the Eclipse Orion API. The REST API is an API layer completely supporting the Orion protocol version 1.0 which enables Development Tools to access HANA platform components such as the repository via REST based calls in an easy and flexible manner. As an example the API provides means for the creation and manipulation of workspaces, projects and files but also covers HANA-specifics such as activation and catalog access. Any development tool using the REST API can be used as a client. This opens up HANA as a development environment not just for SAP development tools such as Web IDE or the Web-based Development Workbench but also for tools coming from Partners and Customers.

 

As a consequence, existing tools implemented on top of the Orion Server API are able to work with HANA. This enables SAP-external communities who would like to use their own tools in a "bring-your-own-tool" (BYOT) manner and allows them to integrate HANA nicely into their established development and management workflows and tools infrastructure.

 

API Parts

The REST API has several parts which we will look at briefly in this blog. This is just intended to be an introduction to the basic capabilities.  For more details, please refer to the online help: http://help.sap.com/hana/SAP_HANA_XS_REST_API_Reference_en/index.html

 

  • File API
  • Workspace API
  • Transfer API
  • Metadata API
  • Change Tracking API
  • OData Service
  • Info API

 

File API

The SAP HANA REST Application Programming Interface (REST API) includes a file API that enables you to browse and manipulate files and directories via HTTP. The File API included in the SAP HANA REST API uses the basic HTTP methods GET, PUT, and POST to send requests, and JSON is used as the default representation format. For more information about the original Orion File API, on which the File API is based, see http://wiki.eclipse.org/Orion/Server_API/File_API.

 

HANABlog62.png

 

Retrieve the contents of a file (http://wiki.eclipse.org/Orion/Server_API/File_API#Getting_file_contents).

You can also retrieve the contents of a particular version of the file (URL parameter version) or check the ETag of the current version of the file (using the "If-Match" header).


The File API supports a conditional "If-None-Match" header, too. Note that providing an "If-None-Match" header results in the return code 304 (Not Modified) if the file was already requested in the current session and the file was not changed since the last request.

 

If you specify a value for the parameter Workspace in SapBackPack, the file API returns the most recent inactive version of the file in the given workspace, if it exists. If no inactive version of the requested file exists in the specified workspace, then an active version of the file is retrieved. The default workspace is "" (empty string).

 

Here is a small example taken from an SAPUI5 application where we use this API via jQuery.AJAX to load the content of a file from the Repository:

var ShortUrl = '/sap/hana/xs/dt/base/file/';
var package = oExerciseJSON.exercises[i].steps[isub].package;
var lPath = package.replace(/[.]/g, '/');
var aUrl = ShortUrl + lPath;
aUrl = aUrl + '/'
    + oExerciseJSON.exercises[i].steps[isub].object
    + '.'
    + oExerciseJSON.exercises[i].steps[isub].extension;
var oSapBackPack = new Object();

oSapBackPack.Workspace='SHINE_DATA';
var sapBackPack = JSON.stringify(oSapBackPack);
                

jQuery.ajax({
    url : aUrl,
    method : 'GET',
    dataType : 'text',
    headers : { "SapBackPack" : sapBackPack },
    success : onInsertContent,
    error : onErrorCall,
    async : true
});

Workspace API

The SAP HANA REST Application Programming Interface (REST API) includes a Workspace API that enables you to create and manipulate workspaces and projects via HTTP. For more information about the original Orion API, on which the Workspace API is based, see http://wiki.eclipse.org/Orion/Server_API/Workspace_API.

 

The information in this section describes how to use the Workspace API in the context of SAP HANA. To better understand how it can be used, it is essential to understand the meaning of the terms workspace and project in the context of the Eclipse ORION concept.

 

ORION Workspace

A concept taken over from the Eclipse architecture. A workspace is an area where the design artifacts used in coding or modeling work are located. In this sense, the workspace is user specific; it is the place where a user collects all the smaller entities required for development activities, for example, projects. This Eclipse-based workspace concept is different from the meaning of the workspace currently used in the context of the SAP HANA XS Repository.

 

ORION Project

A collection of design time and modeling entities that not only have a common storage location, but to which you can also apply activities that typically occur during the development lifecycle, for example: compile, build, distribution, deployment, sharing objects with other developers.

Both ORION concepts are technically mapped to the SAP HANA XS package concept, which means that both are implemented as SAP HANA XS packages with specific content and in a specific location.

 

So to summarize: when you create a project, it is an SAP HANA XS sub-package in the specified workspace package.

 

Transfer API

The Orion Transfer API is used to import and export packages and files. For more information about the Orion Transfer API, see http://wiki.eclipse.org/Orion/Server_API/Transfer_API.

  • This API is used to import and export packages and files
  • It supports resumable, chunked uploads of file content
  • Export not yet implemented.  Use the File API for export instead.

 

Currently, there are some restrictions. For example, although the Orion Transfer API supports the upload of single (binary) files also in the form of multiple chunks, it is not possible to use the Orion API to import (or export) a package. Neither is it possible to import files from (or export files to) an external server.

 

Metadata API

The REST API's Metadata API provides services to support search and auto-completion scenarios, for example to retrieve metadata from runtime, design-time and other metadata locations. The typical location of runtime metadata is the SAP HANA database catalog. Currently it is possible to retrieve metadata for tables, views, procedures, functions, sequences, and schemas. The design-time location for metadata is the SAP HANA Repository. Also accessible is the metadata location used by Core Data Services (CDS). All provided services are available at the following (single) access point:

/sap/hana/xs/dt/base/metadata

 

The specific services are called by setting the HTTP parameter Service-Name to the appropriate value. This part of the interface only supports HTTP GET requests.

 

Supported Services:

checkMetadataExistence

checkMetadataExistence URI

getMetadataSuggestion

 

Change Tracking API

The SAP HANA REST Application Programming Interface (REST API) includes a Change Tracking API that enables you to make use of specific lifecycle-management features included with the SAP HANA Repository via HTTP.

 

Change Tracking is integrated with the SAP HANA XS Repository transport tool set. For example, with change tracking enabled, you can ensure that an export operation includes only the latest approved versions of repository objects.

 

Note that the API can only be used if change tracking is enabled in the SAP HANA system via the HANA Application Lifecycle Management tool.

 

OData Services

The SAP HANA REST Application Programming Interface (REST API) includes a set of OData services that enable access to configuration data stored in the SAP HANA environment. The OData services can be accessed with any OData consumer implementation that supports OData version 1.0.

 

The SAP HANA REST API includes the following OData services.

  • Delivery units
  • Contents of .ini files
  • M_FEATURES

 

To use the OData services included with the SAP HANA REST API, the user needs to have access rights to the underlying SAP HANA runtime views; the example queries below illustrate how to check this.

Delivery Units

Provides access to "_SYS_REPO"."DELIVERY_UNITS"

 

INI File Contents

Provides access to "SYS"."M_INIFILE_CONTENTS"

 

M_FEATURES

Provides access to "SYS"."M_FEATURES"
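Since these OData services simply expose the views listed above, a quick way to verify that your user has the required access rights is to query the underlying views directly; the TOP 10 limit is just to keep the output small:

-- If any of these fail with an authorization error, the corresponding OData service
-- will not work for this user either.
SELECT TOP 10 * FROM "_SYS_REPO"."DELIVERY_UNITS";
SELECT TOP 10 * FROM "SYS"."M_INIFILE_CONTENTS";
SELECT TOP 10 * FROM "SYS"."M_FEATURES";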

 

Info API

The SAP HANA REST Application Programming Interface (REST API) includes an Info API that can be used to display information about the current version of the REST API. The information displayed by the Info API includes a description of the current version of the delivery unit and the number of commands (API entry points) that are currently supported by the REST API.

Making better informed purchases.


I've recently posted an idea about making better informed purchases in SAP Idea Incubator.

 

The basic concept is to help purchasers make better purchases with the help of analytical data processed through HANA, on the go via a Fiori app.

Further details regarding the idea can be found via the following link,

 

Making better informed purchases. : View Idea

 

I urge you to help me improve this idea with your valuable feedback and suggestions.

Thanks.

New Series Analytic SQL Functions with SAP HANA SPS09


A series is defined as a sequence of data points made over a specific interval, most often a time interval. Examples of series are average levels of CO2 in the atmosphere, daily closing values of the stock market and daily precipitation levels.

 

Time series analysis is the practice of analyzing time series data in order to extract meaningful information. Just about every organization can benefit from harvesting this information in order to better understand reality: from financial applications striving to better support decision making, to heavy industry companies looking to better predict service needs, to retail organizations looking to improve by describing typical seasonality behavior.

Among the many new features with SPS09, SAP HANA introduced a number of powerful series analytic SQL functions. This article will introduce and explain these functions, as well as give general recommendations on their applicability.

 

The SERIES clause

As of SAP HANA SPS09, the CREATE TABLE statement has been enhanced to support a new SERIES clause. This clause indicates that the table will contain series data. A series table is like any other table but has additional properties. These allow for more efficient storage and take particular advantage of the columnar nature of SAP HANA. A sketch of such a definition is shown below.
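This is a minimal sketch, assuming a sensor-reading table with one reading per minute; the table and column names are invented, and the exact SERIES clause options and their ordering (series key, period column, equidistant interval and so on) should be taken from the SPS 09 SQL reference:

-- Hypothetical series table for per-minute temperature readings (names invented)
CREATE COLUMN TABLE "SENSOR_READINGS" (
   "SENSOR_ID"   INTEGER,
   "READING_TS"  TIMESTAMP,
   "TEMPERATURE" DOUBLE
) SERIES (
   SERIES KEY ("SENSOR_ID")                     -- identifies the individual series
   PERIOD FOR SERIES ("READING_TS")             -- the column carrying the time dimension
   EQUIDISTANT INCREMENT BY INTERVAL 60 SECOND  -- readings expected once per minute
);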

 

Series analytics functions

Getting the man in the middle - MEDIAN

The median value is very well known and does not need much further explanation. In SAP HANA, null values are eliminated when calculating the median, and the middle value (or the average of the two middle values if the number of elements is even) is returned.
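Used on the hypothetical sensor table sketched above, MEDIAN behaves like an ordinary aggregate function:

-- Median temperature per sensor; NULL readings are ignored by MEDIAN
SELECT "SENSOR_ID",
       MEDIAN("TEMPERATURE") AS "MEDIAN_TEMPERATURE"
  FROM "SENSOR_READINGS"
 GROUP BY "SENSOR_ID";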

 

Linear dependence between two variables - CORR

When using the CORR function, SAP HANA will calculate the Pearson product-moment correlation coefficient - often simply called the correlation coefficient, or r - between two variables. The output is a value between -1 and 1.

 

A negative correlation indicates that as one of the variables increases, the other one decreases. An example of this would be the correlation between vaccinations and illness: as more people are vaccinated for a given illness, the less this illness will occur.

 

Similarly, a positive correlation means that an increase in one variable will also increase the other variable. As an example, consider the relationship between smoking and lung disease.

 

No correlation means that an increase of one variable will not reliably affect the other variable, and that the elements are really just randomly distributed.

 

Rank correlation between two variables - CORR_SPEARMAN

Sometimes, two variables are obviously correlated - just not as a line in a graph.

 

RankCorr.PNG

 

For these situations, we are interested in measuring correlation based on how a ranking of a data point in one series is related to the ranking of a data point in another series.

This is measured by the Spearman's Rho correlation, which is supported in SAP HANA by the CORR_SPEARMAN function. Considering the graph above, the CORR value is 0.88, indicating a strong correlation, but not a perfect one (a line would not be a perfect fit). The CORR_SPEARMAN value, however, is a perfect 1, since each consecutive value ranks the same on both the x and the y axis.
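Both functions are used like ordinary aggregates; the table and column names below are placeholders chosen only for illustration:

-- Pearson and Spearman correlation between two measures of the same observations
SELECT CORR("HOURS_EXERCISED", "RESTING_HEART_RATE")          AS "PEARSON_R",
       CORR_SPEARMAN("HOURS_EXERCISED", "RESTING_HEART_RATE") AS "SPEARMAN_RHO"
  FROM "HEALTH_SURVEY";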

 

A common issue in statistical analysis is the existence of outliers. An outlier is a data point which does not fit the general trend of the data. There are multiple reasons for the occurrence of outliers, and they can have a significant impact on the calculated correlation. The Spearman correlation will, by its nature, be much less affected by the existence of outliers.

Recommendations

When analyzing correlations between variables, it is important to note that the coefficient value alone does not tell you how much the correlation can be trusted. For that, you also need to take into account the sample size (the larger the sample size, the more we can trust calculated correlations). This is known as its significance. Another way of expressing this is that the stronger the significance, the less likely the correlation is to happen by chance.

This is a measure based on the sample size and must be calculated in order to draw a meaningful conclusion.

If it is calculated that there is less than a one in twenty chance (5%) that a calculated correlation can happen by chance, the findings are said to be significant. If there is less than a one in one hundred chance (1%), the findings are designated as highly significant. Currently, the significance is not automatically calculated by the SAP HANA analytical series SQL functions.
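Until such support arrives, one common way to judge the significance of a Pearson correlation is the t-statistic t = r * sqrt((n - 2) / (1 - r*r)) with n - 2 degrees of freedom, which you can compute yourself on top of CORR. The table below is the same placeholder as in the earlier sketch, and the resulting t-value still has to be compared against a t-distribution table:

-- r, the sample size and the corresponding t-statistic in one query
SELECT CORR("HOURS_EXERCISED", "RESTING_HEART_RATE") AS "R",
       COUNT(*)                                      AS "N",
       CORR("HOURS_EXERCISED", "RESTING_HEART_RATE")
         * SQRT( (COUNT(*) - 2)
               / (1 - CORR("HOURS_EXERCISED", "RESTING_HEART_RATE")
                    * CORR("HOURS_EXERCISED", "RESTING_HEART_RATE")) ) AS "T_VALUE"
  FROM "HEALTH_SURVEY";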

Future Directions

SPS09 is the first release of SAP HANA to support SQL-level analytical series functions. Over time, we can expect more support for additional functionality, such as direct support for significance calculations and other measures such as Kendall’s Tau correlation - a different correlation measure which is more sensitive to many, but smaller, rank differences. Kendall’s Tau is a better measure when we are looking to measure overall conformance but are not too concerned about one or two major discrepancies. By contrast, Spearman’s Rho is very sensitive to singular, large variations.

 

Full support for series analytics can be found in the R language, which is also supported by SAP HANA. To take advantage of the full, blazing speed of in-memory analytics, however, we need to use the native SQL functions described in this article.

 

As the SAP HANA platform continues to evolve, I will update this blog with information about new capabilities for lightning fast series analytics!

Hana SPS09 Smart Data Integration - Batch Dataflows


This post is part of an entire series

Hana SPS09 Smart Data Integration - Overview

 

What is the best-performing table layout for querying data in Hana? Certainly not the OLTP data model, where many, many joins are required to get all the data. The OLTP data model is best suited for entering and changing data, as it impacts the least amount of columns/rows. But in that case, is it really such a wise idea to copy the data 1:1 from the source OLTP system into Hana?

Sure, you get better performance with Hana than with your existing database, but even Hana is slowed down by every join. Not much, but 20 times a little is still a lot. Hence it might be a good idea to read the source data, transform it and load it into a Hana-optimized data model instead of strictly keeping it as is.

 

Actually, analyzing the cases when the OLTP model is good enough and when transformation makes sense was an entire exercise I went through in another blog post: Comparing the Data Warehouse approach with CalcViews - Overview and Comparing the Data Warehouse approach with CalcViews - Examples.

The summary of my findings: Hana is simply amazing in how much it can cope with within sub-seconds. But the creativity and the needs of the business to query data are often even more amazing. Hence my conclusion is that operational reports can be done back where they belong - in the operational system, if it runs on Hana. But for true Business Intelligence - finding new business insights in the data - I would still suggest building star schema data models.

 

How do you transform the data during the load

In the past the answer was quite simple: you use an ETL tool like Data Services. And that answer is and will remain valid. But for just getting data into Hana, such an external ETL tool might have too large a footprint. These tools have their own look and feel, their own repository, their own administration and management; even simple things like data types are different. They are meant to read any source and put the data into any target.

Therefore we utilized existing Hana technology and combined it to build an ETL feature natively into Hana.

It comprises the following components:

  • CalcEngine as ETL engine
  • Hana Smart Data Access to connect to a remote source
  • Hana security and the entire Hana management actually
  • Hana monitoring
  • Hana repository for designtime storage and propagation to production
  • A new Hana editor, three in fact as of today: AFM Editor, WebIDE based Replication Editor, Smart Data Preparation - another WebIDE based tool aimed towards Business Users

Since this blog is located under Hana Development, let's focus on the tool targeting us, the IT professionals.

 

The AFM Editor

The AFM editor is a Hana Studio Modeling editor like any other, so I open that from a Hana repository tree.

afm1.pngafm2.png

With this object created, you can start drawing dataflows. Pull in source objects, target objects and transforms. I am using the term objects here for a reason, because it can actually be many things: Hana tables, virtual tables, views, calcviews, table types, temporary tables. For the ETL use case it is mostly (virtual) tables.

 

As usual in this blog series I want to do something special; in this case my sources are the 0MATERIAL_ATTR and 0MATERIAL_TEXT extractors of the SAP ERP system. So I drag and drop these virtual tables into the editor, use them as sources together with some transforms, and add a target table.

Note: This adapter is one I wrote, it is not a shipped one.

afm3.png

 

The interesting part is the types of transforms available in the calcengine now and hence in the editor.

On the right-hand side of the editor is a palette and, depending on the features installed in Hana, it shows different categories. If the Smart Data Integration and Smart Data Quality features are installed, there are two categories.

 

Palette "General"

This contains the basic transforms.

afm4.png

 

  • Data Source: Reader
  • Data Sink: Loader
  • Data Sink (Template Table): A target table that is created matching its input columns
  • Aggregation: Allows grouping by columns and specifying aggregation functions for the others
  • Filter: Allows specifying filters and complex mappings like substring(col1,4,8)
  • Join: All kinds of joins (Note: this is not the existing calcengine join. With this join node you can join n tables at once, specify different join types for each, specify complex join conditions.)
  • Sort: To order data
  • Union: A Union-all operation
  • Procedure: In case you want to invoke a procedure with all transformed data being passed into it as table type parameter

 

 

 

 

 

 

Palette "Data Provisioning"

This contains all Data Integration specific transforms.afm5.png

 

  • Date Generation: A source that generates a range of date values, one per row
  • Row Generation: A source that generates a range of integer values, one per row
  • Case Node: To route the data through different paths controlled by conditions, e.g. All region='US' data should go into FilterUS, all region='EMEA' data into the transform FilterEMEA and all other rows into FilterOther.
  • Pivot: To take n input rows and merge them into n columns. All other columns stay as is.
  • UnPivot: The reverse operation, to take n columns, e.g. Revenue_January, Revenue_February,... and create n rows, e.g. the 12 values are returned in the column Revenue but using 12 rows.
  • Lookup: To lookup in another table for the lookup value and take one. A join might find multiples and hence the output would be all combinations - often not desired, hence the lookup.
  • Cleanse: To cleanse all data based on reference information, most often used together with the postal address directory of every single country of the world.
  • Geocode: To translate addresses into long/lat information and vice versa. Or Point Of Interest relationships and the such.
  • Table Comparison: To identify for all rows the difference of the incoming data compared to the target table and what type of difference there is.
  • Map Operation: To influence the way read data is handled, e.g. all data read from table1 should be flagged as delete and hence will be deleted in the target table. Also to specify different mappings for differently flagged rows, e.g. all insert rows should have the value of now() in the INSERT_DATE column but not be changed by updates. And update rows should write the value of now() into the LAST_UPDATE column.
  • History Preserving: Deals with all the complexity when loading a target table that should retain the history. So instead of updating e.g. the customer record, a new customer record version should be added, the valid-from and valid-to columns of the old and new version should be updated,... things like that. Makes creating a Slowly Changing Dimension Type 2 a piece of cake. Note: Requires a Table Comparison transform somewhere upstream, else it does not know what changed and what the current values are.

 

Summary

As you will hopefully concur, there is little reason to use an external ETL tool for loading data into Hana, even if transformations are required. Although I have talked about loading the data into Hana just once, this solution allows creating delta logic as well. More importantly, it can deal with realtime data too!

Using SQLSCRIPT ARRAY as "Internal Table" to deal with complex logic


SQL Script uses a declarative style to simplify our data operations. Although HANA SQL is a powerful language, in some cases, especially when complex calculations are needed, we need to cache a table in memory and operate on it, just like an internal table does in ABAP.

A table variable in SQL Script is a symbol that describes some SQL statement or other logic. It is not a "real" variable at runtime. SQL Script execution is not line by line; our script is analyzed by HANA and treated like one or several combined SQL statements. The consequence is that we can do nothing with the intermediate data declared in SQL Script as a table variable.

Creating local temporary tables may solve the problem: we could create some temporary tables and use update or insert statements to manipulate the data in them. But we would need to create procedures with write privileges, which could then not be invoked by read-only procedures and calculation views.

Fortunately, SQL Script has an array concept; it can help us store table data temporarily in memory and manipulate it.

In this article, we use the SAP standard table SCARR as an example to show how to use ARRAY to handle data in memory.

1.Declare Array for storing data

SQL Script does not have a "Structure" or "Internal Table" concept, so if we want to save data in a table-like form, we should use several arrays, each representing a field.

DECLARE SCARR_MANDT NVARCHAR(3) ARRAY; --Declare an array of nvarchar(3)
DECLARE SCARR_CARRID NVARCHAR(2) ARRAY;
DECLARE SCARR_CARRNAME NVARCHAR(10) ARRAY;
DECLARE SCARR_INDEX INTEGER; --Declare an integer to record the current index into the arrays
  • You can use any primitive data type of SQL Script.
  • Note that, for some reason (possibly a bug), using the DECIMAL type in an array and trying to read its value leads to a compile error. So if you want to deal with decimals, use DOUBLE instead and convert back to DECIMAL after the calculation.
  • Follow a [TABLENAME]_[FIELDNAME] naming rule to ensure the code is easy to understand.

2.Fill arrays with values from a table variable

After binding a select statement to a table variable, use the ARRAY_AGG command to fill the column data into arrays:

LT_SCARR = SELECT MANDT, CARRID, CARRNAME FROM SCARR;
SCARR_MANDT = ARRAY_AGG(:LT_SCARR.MANDT ORDER BY MANDT, CARRID);
SCARR_CARRID = ARRAY_AGG(:LT_SCARR.CARRID ORDER BY MANDT, CARRID);
SCARR_CARRNAME = ARRAY_AGG(:LT_SCARR.CARRNAME ORDER BY MANDT, CARRID);
  • Sort the data by primary key to ensure each array has the same order; then we can use one index to access the fields of the same record
  • Note the use of ":"
  • The data types of the array element and the table column must be the same

3.Loop over,get data from or set value to the array

FOR SCARR_INDEX IN 1 .. CARDINALITY(:SCARR_MANDT) DO --Using cardinality statement to get number of elements of an array. Array index starts from 1.
IF :SCARR_CARRID[:SCARR_INDEX] = 'AA' THEN --Get value using :ARRNAME[:INDEX]
SCARR_CARRNAME[:SCARR_INDEX] := 'America Airline'; --Set value using ARRNAME[:INDEX] := VALUE
END IF;
END FOR;
SCARR_INDEX := CARDINALITY(:SCARR_MANDT) + 1; --Add 1 to the index to add new rows
SCARR_MANDT[:SCARR_INDEX] := '200'; --Set value with new index directly, be careful not to override existing values
SCARR_CARRID[:SCARR_INDEX] := 'CA';
SCARR_CARRNAME[:SCARR_INDEX] := 'China Airline';
  • As of now, SQL Script does not support directly passing an array value to a procedure or function; use a temporary scalar variable to hold the data.
  • Note the use of ":", ":=" and "="

4.Combine arrays to a table variable

Use the UNNEST command to combine the arrays into a table variable:

var_out = UNNEST(:SCARR_MANDT,:SCARR_CARRID,:SCARR_CARRNAME) AS ("MANDT","CARRID","CARRNAME");
--Arrays are transferred to columns of a table

5.Summary

Using arrays, we can deal with complex logic that could not be handled with SQL statements without creating temporary tables; this feature gives SQL Script the ability to handle nearly all data logic. Beyond this, we can force HANA to execute some logic at a certain time and in a certain sequence to improve performance in some cases. Although doing this is not as easy as in ABAP, we have a way to deal with the most complex logic and to combine that logic with easy-to-understand, declarative SQL Script.

Convert to a multi tenant hana database


In this blog I explain how to convert an existing Hana database into a multi tenant database. But before we start, a short introduction to multi tenant databases.

 

In the past you had the following choices when installing an SAP Hana Database

 

  • One SAP HANA DBMS, one database, one application and one schema.

 

  • One SAP HANA DBMS, one database, several applications, several schemas (MCOD)

 

  • More than one SAP HANA DBMS (one DB in each), 1-n applications, 1-n schemas (MCOS)

  • SAP Hana Virtualised

 

Multitenancy refers to a principle in software architecture where a single instance of the software runs on a server, serving multiple tenants. A tenant is a group of users sharing the same view on a software they use. With a multitenant architecture, a software application is designed to provide every tenant a dedicated share of the instance including its data, configuration, user management, tenant individual functionality and non-functional properties. Multitenancy contrasts with multi-instance architectures where separate software instances operate on behalf of different tenants. (http://en.wikipedia.org/wiki/Multitenancy)

http://content.screencast.com/users/nicolinden/folders/Snagit/media/992953ff-deb1-40f5-9013-044ef3afad06/2014-12-17_09-25-35.png

A single database container is called a tenant database; you can run multiple tenant databases on one SAP HANA system while still having only one software version for the SAP HANA system. Some advantages are:

 

  • strong separation of data and users
  • backup and restore available by tenant database
  • resource management by tenant (cpu, memory)

 

When installing a multi tenant database you have two options:

 

  • Start from scratch with a new SPS09 installation, during the installation you get the option to install a single container or a multi container database node


  • Convert an existing single-container server to a multi tenant database; please note that the change is permanent and cannot be reversed.

 

Since the installation from scratch is basically not much different from the past, with the exception of the screenshot shown above, I will focus on converting an existing database in this blog. I will add future blogs describing details regarding configuration and maintenance of a multi tenant database, which will be the same for a new installation and a converted system.

 

 

Prerequisites

 

Before you can actually convert a database you have to comply with the following prerequisites:

 

  • The Statistics server has been migrated or removed
  • Your Hana version is on SPS09 or newer

 

 

Migrate Statistics Server

 

As of SPS07 you can migrate the statistics server from a separate process to be part of the nameserver process. If you did not do this before, you have to look at OSS notes 1917938 and 1925684 before executing the steps below. Since only a fresh SPS09 installation with the "multiple_container" option enabled installs the new statistics server by default, you will be running the 'old' statistics server unless you already migrated it manually.

 

First you need to check whether you are still using the old statistics server; the easiest way to do so is from the operating system with the sapcontrol command to check the processes (sapcontrol -nr <instance number> -function GetProcessList). If you see a separate process for the statistics server (as shown below), then you are still working with the old statistics server and need to migrate it first.

 

 

Migrating the statistics server is a simple process, just open the hana studio and go to the configuration. From there adjust "nameserver.ini -> statisticsserver -> activate=true".

 

 

After some minutes, you will see that the statistics server is shutting down and eventually it will be removed from the list.

 

 

 

Tip: when you add "watch -n 1" in front of the sapcontrol command you don't have to repeat the command each time manually to refresh.

 

Finally you can run the SQL statement "SELECT * FROM _SYS_STATISTICS.STATISTICS_PROPERTIES WHERE key = 'internal.installation.state'" to check if the migration went fine.


 

 

Check for the correct hana version

 

A second prerequisite you have to check is the correct Hana version; you most probably already know which version you run, but you can also check it with the steps below (as provided by the admin guide):

 

First you need to stop the database with the command "HDB stop" and then execute the command "hdbnsutil -exportTopology <file_name>". If you run the command with the database still running, you will get an error message:

 

checking for inactive nameserver ...

nameserver phyhandb4-prd:30001 is still active. cannot open its persistence failed

 

Now open the exported file in the VI editor and look for the line "topologyVersion" and check that the value is 15 or higher.



Convert Database


Now that the prerequisites are met, we can convert the database to a multi tenant database. In our test we used an empty database; to show you that the existing content is indeed still available after you convert the database, we created an empty schema called "CONTENT-MIGRATION" through the SQL command: CREATE SCHEMA "CONTENT-MIGRATION" OWNED BY SYSTEM;


 

Before you can convert the database you have to stop the database with the command ”HDB stop”, then run the command “hdbnsutil -convertToMultiDB” to do the actual conversion.



This only takes a few seconds (for an empty database) after which the database is converted. This action executes the following steps:


  • Set “multidb mode” flag in the configuration
  • Create the system and tenant database
  • Updates the secure store in the file system (SSFS).


The newly created database maintains the original data and has the original port configuration; the initial "HDB start" command only starts the system database though. Below I show how you can start the tenant database with an SQL statement; after you do this, it will start automatically when using the "HDB start" command in the future.


However, when you try to connect with hdbsql to this new SystemDB database it will fail with the error:


* 10: invalid username or password SQLSTATE: 28000


This is because you need to reset the password of the SYSTEM user for the SystemDB database. So make sure the database is still stopped (otherwise stop it with "HDB stop") and then reset the password of the user "SYSTEM" with the command "hdbnameserver -resetUserSystem". When asked, type a new password and press enter.


 

When the SYSTEM password has been reset, you have to start the database with "HDB start", which will only start the system database. You can see this with the sapcontrol command; it does not show an additional index server for the tenant database.



Now connect to the database with the hdbsql command “hdbsql -i <instance number> -n <hostname>:3xx13 -d SystemDB -u SYSTEM” after which the system will ask you for a password. You can see that you are connected when the prompt shows “hdbsql SystemDB=>”.



Now you have to start the tenant database with the SQL command “ALTER SYSTEM START DATABASE <SID>”. But when you do this, the system will complain that the user is forced to change its password:


* 414: user is forced to change password: alter password required for user SYSTEM SQLSTATE: HY000


To solve this, run the SQL command "ALTER USER SYSTEM PASSWORD <new_password>". Now run the SQL command "ALTER SYSTEM START DATABASE <SID>" again and it should work (it takes a few seconds before you get your prompt back).



After doing this you can exit the hdbsql command and check if the tenant database is running with the "sapcontrol -nr <instance> -function GetProcessList" command; it should now show an additional XS engine and index server instance (e.g. indexserver-TST and xsengine-TST, where our SID is TST).


 

 

Add a Multi Tenant Database to the SAP Hana Studio

 

In order to add a multi tenant database to the Hana Studio, you first have to install Hana Studio "Version: 2.0.7". After that, add a system as usual (right-click and choose the option "Add System").


 

 

When adding a “normal” system with the new studio you choose the option “Single Container” but when adding a multi tenant database you choose the option “Multiple Containers”. Each Multi Tenant database has one System Database and one or more Tenant Databases. First we will add the System Database to the studio.


The system user password of the SystemDB database is the one that you have reset in the previous steps.


 

As you can see, the SystemDB database looks like a normal database, but it is only used to maintain the tenant databases and not to hold corporate data. As shown above, the SystemDB does not contain the schema called "CONTENT-MIGRATION" which we created earlier.

 

When you choose to install a new SPS09 Database from scratch with the option “Multiple_Containers” then you will have only a SystemDB after the installation and you have to add the tenant databases manually after the installation.

 

 

After the SystemDB is added to the studio you can also add the tenant database. The process is the same, but now you choose the option “Tenant database container” and you provide the tenant database name. The System user password of this database is the same as the original database before converting it to a multi tenant database.

 

As you can see, this tenant database does contain the schema called "CONTENT-MIGRATION" which we created before we converted to a multi tenant database.


 

Now you have converted a single database to a multi tenant database. In future blogs I will write more details about adding and dropping a tenant database and also provide more information regarding the configuration of a multi tenant database (e.g. XS engine access).


Major Configuration Issues, when we use HANA as secondary DB


Note: These are issues faced during an ECC-SLT-HANA scenario. In my scenario, each system is a standalone system. We have connectivity between the three systems.

I have discussed only the issues we encounter on the ECC system side which are highly important.



1. The first and primary note is the DB user name; it should not be the standard user "SYSTEM". SAP strongly recommends not using "SYSTEM" as a user for non-administrative tasks, for instance to replicate data to a specific schema and read it from ECC by means of HANA as a secondary database.

    

For instance, when creating connectivity between ECC and HANA, with HANA as a secondary database, we should not use SYSTEM as the username, as shown below.

 

This screenshot has been captured from the "DBCO" transaction code in the ECC system.

 

1.jpg

2. The user name and schema name should be the same when we are using HANA as a secondary database.

 

For instance, if the HANA DB schema name is "ECC_HANA_800", then the DB user name on the ECC side should also be "ECC_HANA_800". When we execute HANA transactions on the ECC system side, they search for tables, views and corresponding HANA objects in the schema whose name matches our DB user name on the ECC side, i.e. ECC_HANA_800.

 

Please create the technical user separately in the ECC system, with the same name as the schema, as shown below.

 

This screenshot has been captured from the "DBCOCKPIT" transaction code in the ECC system.

2.jpg

Please find the screenshot below for reference on the HANA DB. The schema name is "ECC_HANA_800", which is the same as the connection user name in the ECC system.

 

3.jpg

3. We have to configure the connection information as per the instructions given below.

4.jpg

For instance please find configuration details below.

HANA Server URL: 172.25.36.125

Port Number: 30015

Connection information would be 172.25.36.125:30015

 

This screenshot has been captured from the "DBCOCKPIT" transaction code in the ECC system.

5.jpg

4. We need authorization to read the tables, views and respective objects from the HANA schema when we execute HANA-related transactions in the ECC system via secondary database access to the HANA DB.

 

For instance, I am executing the "FBL3H" transaction code from ECC, which runs on the secondary database on HANA. During the execution of the FBL3H transaction code, it reads the data from the HANA secondary database.

 

Since the user has no authorization, the transaction ends in a dump in the ECC system. A grant along the lines of the sketch below fixes this.
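A minimal sketch of the kind of grant that resolves this, assuming the replicated schema "ECC_HANA_800" from the example above is owned by the SLT replication user and the ECC connection user is also called ECC_HANA_800 (run it as the schema owner or an administrator):

-- Allow the ECC connection user to read every table and view in the replicated schema
GRANT SELECT ON SCHEMA "ECC_HANA_800" TO ECC_HANA_800;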

 

6.jpg

Thank you very much for referring to my blog. I have discussed only ECC system issues.

 

I will publish my next blog on SLT and HANA system issues for these kinds of scenarios.

 

Regards,

Ravi K Chandragiri

Arduino based wearable from Demo Jam Quantum Fit


At TechEd Las Vegas in 2013 I had the pleasure of teaming up with Greg Myers from EV Technologies. Quantum Fit was our solution to the explosion of fitness devices without a unified platform. To date, fragmentation of the data is still an issue. This blog will cover the devices that were strapped to me, posting data to and receiving notifications from HANA.

 

The application collected biometric, distance and environmental data. This was all posted to a HANA database via XSJS. The HANA application would react to changes in the data to trigger notifications to the athlete. The data was also consumed via an athlete's dashboard showing real-time data.

 

The demo shows our working solution; after watching the 6 minutes of geeky entertainment, I'll go over how we built the device side of the solution.

 

 

 

Smart phone

The center of our Quantified Self was the smartphone. In this instance we used the iPhone 5S. The overriding purpose was to read the data from the connected devices and the device itself, push alerts to the Pebble and post device readings to our HANA system running on AWS.

Location and motion readings were read from the device.

 

The secondary purpose of the iPhone was to display the current heart rate.

 

IMG_4202.PNG

 

iOS has the CoreMotion framework that provides access to the step count. We enabled distance determination for outdoors with CoreLocation, but indoors on stage CoreLocation would not work. In hindsight, the step count did not work so well on stage either, as there was no guaranteed refresh period from the M7 chip and it slipped outside the time window for the demo.

 

Accessing the CoreMotion data is relatively straightforward as shown in the code example below.

 

#include <CoreMotion/CMStepCounter.h>

...

#pragma mark - Core Motion

 

-(void) startStepCount {

if ([CMStepCounter isStepCountingAvailable] == YES) {

        CMStepCounter *stepCounter = [[CMStepCounter alloc] init];

        [stepCounter startStepCountingUpdatesToQueue:[NSOperationQueue currentQueue] updateOn:20 withHandler:^(NSInteger numberOfSteps, NSDate *timestamp, NSError *error) {

steps = numberOfSteps;  // Double check on this step count value

self.stepCount.text = [NSString stringWithFormat:@"%d steps", steps];

           

[self updatePace];

        } ];

    }

}

 

Pebble Watch

The Pebble Watch at the time was not being used (nor enabled) as a step counter. We looked at creating our own Pebble application, but it made no sense as the Pebble comes with a default sports application that the mobile application can use. The following code is from the original iPhone application. Since this was written, Pebble has made a significant SDK update, so this may no longer work with the latest SDK.

 

/*

*  PBPebbleCentral delegate methods

*/

- (void)pebbleCentral:(PBPebbleCentral*)central watchDidConnect:(PBWatch*)watch isNew:(BOOL)isNew {

    [self setTargetWatch:watch];

}

 

- (void)pebbleCentral:(PBPebbleCentral*)central watchDidDisconnect:(PBWatch*)watch {

[[[UIAlertView alloc] initWithTitle:@"Disconnected!" message:[watch name] delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil] show];

    if (_targetWatch == watch || [watch isEqual:_targetWatch]) {

        [self setTargetWatch:nil];

    }

}

 

- (void)setTargetWatch:(PBWatch*)watch {

    _targetWatch = watch;

 

[watch  sportsGetIsSupported:^(PBWatch *watch, BOOL isAppMessagesSupported) {

   

[watch appMessagesSetUUID:PBSportsUUID];

        [watch sportsAppLaunch:^(PBWatch *watch, NSError *error) {

NSString *message = @"";

message = error ? [error localizedDescription] : @"Sports App Launch Request";

        }];

    }];

   

}

 

-(void) sendDataToWatch {

    NSTimeInterval minsPerMile = cMetersInMile / (paceMetersPerSecond * 60);

    NSString *minsPerMileString = [NSString stringWithFormat:@"%0.2f", minsPerMile];

   

NSDictionary *update = [[NSDictionary alloc] initWithObjectsAndKeys:[NSString stringWithFormat:@"%d",self.heartRate],PBSportsTimeKey,

  [NSString stringWithFormat:@"%0.1f", distanceInMeters/cMetersInMile], PBSportsDistanceKey, minsPerMileString, PBSportsDataKey, nil];

   

PBSportsUpdate *upd = [[PBSportsUpdate alloc] init];

    upd.time = 3.95f;

    [_targetWatch sportsAppUpdate:update onSent:^(PBWatch *watch, NSError *error) {

        NSString *message = @"";

        if (error) {

message = [error localizedDescription];

        }

    }];

}

 

pebblescreen.png

 

The other feature we used on the Pebble was notifications. This is default behaviour for the watch and required no extra development on our part. The notifications received by the phone are mirrored on the watch.

 

Smart Water Bottle

The water bottle is where I had most of the fun. We used an Arduino UNO, a RedBear Bluetooth BLEMini module and a DS18B20 temperature sensor. We had other peripherals such as humidity sensors connected, but chose not to use them in the demo as we already had a lot to show in 6 minutes.

 

FullSizeRender.jpg

 

The code for the Arduino is below and still worked when tested yesterday. The temperature sensor is a 1-Wire serial device which needs the OneWire library for reading data. There are a lot of examples of how to use it; it is one of the many getting-started devices.

 

The RedBear BLEMini was used for the Bluetooth support. Using the serial port, the temperature readings were written out through the device.

 

#include <Arduino.h>

#include <SoftwareSerial.h>

#include <OneWire.h>

 

SoftwareSerial BLEMini(0, 1);

 

byte blink;

 

//Temperature chip i/o

/* OneWire */

int DS18S20_Pin = 2; //DS18S20 Signal pin on digital 2

OneWire ds(DS18S20_Pin); // on digital pin 2

 

void setup()

{

  BLEMini.begin(57600);

  pinMode(12, OUTPUT); 

  Serial.begin(57600);

}

 

unsigned char buf[16] = {0};

unsigned char len = 0;

 

void loop()

{

  float temp = getTemp();

  char tempData[10];

  sprintf(tempData,"%2.2f",temp);

 

  printDouble(temp,100);

  blink ^= 0x01;

 

  digitalWrite(12, blink);

 

  delay(1000);

}

 

void printDouble( double val, unsigned int precision){

// prints val with number of decimal places determined by precision

// NOTE: precision is 1 followed by the number of zeros for the desired number of decimal places

// example: printDouble( 3.1415, 100); // prints 3.14 (two decimal places)

 

  Serial.print (int(val));  //prints the int part

  Serial.print("."); // print the decimal point

  unsigned int frac;

  if(val >= 0)

      frac = (val - int(val)) * precision;

  else

      frac = (int(val)- val ) * precision;

  Serial.println(frac,DEC) ;

}

 

float getTemp(){

  //returns the temperature from one DS18S20 in DEG Celsius

 

  byte data[12];

  byte addr[8];

 

  if ( !ds.search(addr)) {

   //no more sensors on chain, reset search

   ds.reset_search();

   return -1000;

  }

 

  if ( OneWire::crc8( addr, 7) != addr[7]) {

   Serial.println("CRC is not valid!");

   return -1000;

  }

 

  if ( addr[0] != 0x10 && addr[0] != 0x28) {

   Serial.print("Device is not recognized");

   return -1000;

  }

 

  ds.reset();

  ds.select(addr);

  ds.write(0x44,1); // start conversion, with parasite power on at the end

 

  byte present = ds.reset();

  ds.select(addr); 

  ds.write(0xBE); // Read Scratchpad

 

  for (int i = 0; i < 9; i++) { // we need 9 bytes

    data[i] = ds.read();

  }

 

  ds.reset_search();

 

  byte MSB = data[1];

  byte LSB = data[0];

 

  float tempRead = ((MSB << 8) | LSB); //using two's complement

  float TemperatureSum = tempRead / 16;

 

  // Convert to f

  TemperatureSum = (TemperatureSum * 1.8 ) +32;

 

  return TemperatureSum;

}

 

Notifications

For the demo we chose to use SMS. The quickest way to implement this was to use Twilio; the following code is about as simple as it gets for sending an SMS message. The PHP script was hosted on an external server and called via events from HANA when specific thresholds were crossed in the biometric data.

 

<?php

// Include the PHP Twilio library. You need to download the library from

// twilio.com/docs/libraries, and move it into the folder containing this

// file.

require "Services/Twilio.php";

 

// Set our AccountSid and AuthToken from twilio.com/user/account

$AccountSid = "abcdefghijklmnopqrstuvw";

$AuthToken = " abcdefghijklmnopqrstuvw ";

 

// Instantiate a new Twilio Rest Client

    $client=new Services_Twilio($AccountSid, $AuthToken);

 

/* Your Twilio Number or Outgoing Caller ID */

$from = '+1 xxx-xxx-xxxx';

 

// make an associative array of server admins. Feel free to change/add your

// own phone number and name here.

    $people=array(

"xxxxxxxxxx" => "Johnny",

    );

 

    foreach ($people as $to => $name) {

// Send a new outgoing SMS

$body = "Missing data $name";

        if(isset($_GET["text"])){

          $body=$_GET["text"];

        }

        $client->account->sms_messages->create($from, $to, $body);

echo "Sent message to $name: $body";

    }

?>

 

Adidas HRM

The heart rate monitor was simply one that supported Bluetooth notifications, allowing us to get regular readings. The iPhone app subscribed to receive notifications using the CoreBluetooth framework. The following code is an extract from the application. Subscribing and listening to Bluetooth devices requires that the devices are found and connected to, their characteristics discovered, and the data read.

 

#define cAdidasHRMUUID @"0BC904FB-D617-A18E-A6B7-9385378F0A2E"

...

/*

Request CBCentralManager to scan for heart rate peripherals using service UUID 0x180D

*/

- (void) startCBScan {

  

// https://developer.bluetooth.org/gatt/services/Pages/ServicesHome.aspx

[cm scanForPeripheralsWithServices:[NSArray arrayWithObjects:[CBUUID UUIDWithString:cBiscuitServiceUUID],nil] options:nil];

}

 

/**

* Invoked when the central discovers a peripheral while scanning.

*/

- (void) centralManager:(CBCentralManager *)central didDiscoverPeripheral:(CBPeripheral *)aPeripheral advertisementData:(NSDictionary *)advertisementData RSSI:(NSNumber *)RSSI

{

/* Make sure we don't pick up a rogue device */

    NSString *cfuuidString = (NSString*)CFBridgingRelease(CFUUIDCreateString(kCFAllocatorDefault, [aPeripheral UUID]));

    if (![knownDevices containsObject:cfuuidString]) {

        return;

    }

  

    NSMutableArray *peripherals = [self mutableArrayValueForKey:@"heartRateMonitors"];

    if( ![self.heartRateMonitors containsObject:aPeripheral] ) {

        [peripherals addObject:aPeripheral];

    }

  

// Wait until we have all expected peripherals

    if ([peripherals count] == [knownDevices count]) {

NSMutableArray *uuids = [[NSMutableArray alloc] initWithCapacity:5];

        for (CBPeripheral *per in peripherals)

        {

[uuids addObject:per.UUID];

        }

[cm retrievePeripherals:uuids];

    }

}

 

/*

Invoked when the central manager retrieves the list of known peripherals.

Automatically connect to first known peripheral

*/

- (void)centralManager:(CBCentralManager *)central didRetrievePeripherals:(NSArray *)peripherals

{

    [self stopScan];

  

/* If there are any known devices, automatically connect to it.*/

    for (CBPeripheral *per in peripherals)

    {

[cm connectPeripheral:per options:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:CBConnectPeripheralOptionNotifyOnDisconnectionKey]];

    }

}

 

/*

Invoked whenever a connection is succesfully created with the peripheral.

Discover available services on the peripheral

*/

- (void) centralManager:(CBCentralManager *)central didConnectPeripheral:(CBPeripheral *)aPeripheral

{

    [aPeripheral setDelegate:self];

    [aPeripheral discoverServices:nil];

}

 

/*

Invoked whenever an existing connection with the peripheral is torn down.

Reset local variables

*/

- (void)centralManager:(CBCentralManager *)central didDisconnectPeripheral:(CBPeripheral *)aPeripheral error:(NSError *)error

{

    if( aPeripheral )

    {

        [aPeripheral setDelegate:nil];

        aPeripheral = nil;

    }

}

 

/*

Invoked whenever the central manager fails to create a connection with the peripheral.

*/

- (void)centralManager:(CBCentralManager *)central didFailToConnectPeripheral:(CBPeripheral *)aPeripheral error:(NSError *)error

{

[self addLogEntry:[NSString stringWithFormat:@"Fail to connect to peripheral: %@ with error = %@", aPeripheral, [error localizedDescription]]];

    if( aPeripheral )

    {

        [aPeripheral setDelegate:nil];

        aPeripheral = nil;

    }

}

 

#pragma mark - CBPeripheral delegate methods

/*

Invoked upon completion of a -[discoverServices:] request.

Discover available characteristics on interested services

*/

- (void) peripheral:(CBPeripheral *)aPeripheral didDiscoverServices:(NSError *)error

{

    for (CBService *aService in aPeripheral.services)

    {

/* Heart Rate Service */

        if ([aService.UUIDisEqual:[CBUUIDUUIDWithString:@"180D"]] || [aService.UUIDisEqual:[CBUUIDUUIDWithString:@"180d"]])

        {

[aPeripheral discoverCharacteristics:nil forService:aService];

        }

      

/* Device Information Service */

        if ([aService.UUID isEqual:[CBUUID UUIDWithString:cBiscuitServiceUUID]])

        {

[aPeripheral discoverCharacteristics:nil forService:aService];

        }

    }

}

 

/*

Invoked upon completion of a -[discoverCharacteristics:forService:] request.

Perform appropriate operations on interested characteristics

*/

- (void) peripheral:(CBPeripheral *)aPeripheral didDiscoverCharacteristicsForService:(CBService *)service error:(NSError *)error

{

 

// Temperature measurement

    if ([service.UUID isEqual:[CBUUID UUIDWithString:cBiscuitServiceUUID]])

    {

for (CBCharacteristic *aChar in service.characteristics)

        {

// https://developer.bluetooth.org/gatt/characteristics/Pages/CharacteristicsHome.aspx

/* Set notification on temperature measurement */

CBUUID *uuid = [CBUUID UUIDWithString:cBiscuitCharacteristic];

if ([aChar.UUID isEqual:uuid])

{

[aPeripheral setNotifyValue:YES forCharacteristic:aChar];

}

        }

    }

  

// Heart Rate Service

    if ([service.UUIDisEqual:[CBUUIDUUIDWithString:@"180D"]])

    {

for (CBCharacteristic *aChar in service.characteristics)

        {

// https://developer.bluetooth.org/gatt/characteristics/Pages/CharacteristicsHome.aspx

/* Set notification on heart rate measurement */

if ([aChar.UUIDisEqual:[CBUUIDUUIDWithString:@"2A37"]])

{

[aPeripheral setNotifyValue:YES forCharacteristic:aChar];

}


/* Read body sensor location */

if ([aChar.UUIDisEqual:[CBUUIDUUIDWithString:@"2A38"]])

{

[aPeripheral readValueForCharacteristic:aChar];

}

          

/* Write heart rate control point */

if ([aChar.UUIDisEqual:[CBUUIDUUIDWithString:@"2A39"]])

{

uint8_t val = 1;

NSData* valData = [NSDatadataWithBytesvoid*)&val length:sizeof(val)];

[aPeripheral writeValue:valData forCharacteristic:aChar type:CBCharacteristicWriteWithResponse];

}

        }

    }

  

if ( [service.UUID isEqual:[CBUUID UUIDWithString:CBUUIDGenericAccessProfileString]] )

    {

for (CBCharacteristic *aChar in service.characteristics)

        {

/* Read device name */

if ([aChar.UUID isEqual:[CBUUID UUIDWithString:CBUUIDDeviceNameString]])

{

[aPeripheral readValueForCharacteristic:aChar];

}

        }

    }

  

    if ([service.UUIDisEqual:[CBUUIDUUIDWithString:@"180A"]])

    {

for (CBCharacteristic *aChar in service.characteristics)

        {

/* Read manufacturer name */

if ([aChar.UUIDisEqual:[CBUUIDUUIDWithString:@"2A29"]])

{

[aPeripheral readValueForCharacteristic:aChar];

}

        }

    }

}

 

/*

* Invoked upon completion of a -[readValueForCharacteristic:] request or on the reception of a notification/indication.

*/

- (void) peripheral:(CBPeripheral *)aPeripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error

{

/* Updated value for heart rate measurement received */

    if ([characteristic.UUIDisEqual:[CBUUIDUUIDWithString:@"2A37"]])

    {

        if( (characteristic.value) || !error )

        {

/* Update UI with heart rate data */

[self updateWithHRMData:characteristic.value];

        }

    }

  

/* Updated value for heart rate measurement received */

    if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:cBiscuitCharacteristic]])

    {

        if( (characteristic.value) || !error )

        {

/* Update UI with temperature data */

NSString *temp = [[NSString alloc] initWithData:characteristic.value encoding:NSUTF8StringEncoding];

temperature = [temp doubleValue];

self.tempLabel.text = temp;

        }

    }


/* Value for device Name received */

else if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:CBUUIDDeviceNameString]])

    {

        NSString *deviceName = [[NSString alloc] initWithData:characteristic.value encoding:NSUTF8StringEncoding];

    }

/* Value for manufacturer name received */

    elseif ([characteristic.UUIDisEqual:[CBUUIDUUIDWithString:@"2A29"]])

    {

self.manufacturer = [[NSString alloc] initWithData:characteristic.value encoding:NSUTF8StringEncoding];

    }

}


Lessons Learned

While everything works in development, the real world is a less forgiving place. I had spent the evenings (after midnight) running the streets so that my teammate Clint Vosloo could fine-tune the HANA scripts; the timezone difference between the US East coast and South Africa came into play here. After midnight there are few people running around wearing fitness devices that could interfere; the same was not true at TechEd. Sitting in the Monday evening keynote I noticed quirks with the app: it would not always connect to my devices. Pulling out the LightBlue Bluetooth testing app showed our issue. There were over 100 wearables in the audience, including other HRMs, and my code was picking them all up because I had not added the specific device registration I needed. Good that we found this the night before the demo and not during it.

 

If you ever plan to have a demo that is tuned to your heart rate, make sure you take into account your own stage presence. During our many hours of tuning (I ran a lot) we collected plenty of profile data on my heart rate and tuned the alerts to the expected profile. What we had not accounted for was that my heart rate was already over 20 bpm above my usual resting rate when we started; nervous on stage, perhaps? This meant the system was detecting that I was working much harder than I physically was. While the metric was still correct, in that my heart rate was in an abnormal zone, it definitely had an effect on the timing of the demo.

 

Many thanks to Greg Myers, Clint Vosloo and Eric Vallo for teaming up with me on a fun time.

Setting up Oracle Gateway to access HANA via ODBC


Firstly thanks to my customer for the situation that forced me to explore this scenario.

 

I was unable to find a simple document on this explaining such a setup. Hence this blog.

 

The scenario is to access HANA tables from within Oracle via database links.

 

Source database : SAP HANA SP 08 Rev 85

Target database : Oracle 11.2.0.3 running on RHEL 6.x X64

In the middle: Oracle Gateway running on RHEL 6.x x64

 

The target Oracle database will be migrated to HANA in 6 months, but in the interim we need to access HANA data seamlessly from within Oracle to refresh a few materialized views.

 

After evaluating options such as CSV file export/import, ETL tools, SAP SLT, Data Services, etc., the favoured option was Oracle Gateway.

 

Getting Oracle Gateway running for the first time was quite a bit of work, hence this blog to help others.

 

The way it should work is:

 

From within the Oracle database, if a SQL statement like SELECT * FROM DUMMY@H1X; is fired, it should bring back the data from the SAP HANA database (H1X).

 

First some basics, which are important to understand.

 


How does it work?

 

SQL commands are fired from an Oracle database (sqlplus) and reach Oracle Gateway via a DBLINK > tnsnames.ora.

Oracle Gateway has its parameters in its init<sid>.ora file and loads the unixODBC libraries from its LD_LIBRARY_PATH.

unixODBC loads the HANA ODBC driver and goes through the DSN settings to read data from the HANA database.

 

Meaning Oracle DB > DBLINK > tnsnames.ora > Oracle Gateway > unixODBC drivers > Data source DSN (odbc.ini) > HANA odbc drivers > HANA DB

 

Notice that Oracle Gateway and the HANA ODBC driver do not talk to each other directly; instead they talk through the unixODBC driver.

 

This is the most important point to understand; otherwise you will not be able to set this up correctly.

 


Step by step - How to set up the above scenario


 

Step 1 - The first step is to get unixODBC working

 

Install the unixODBC RPMs (both 32 and 64 bit) on the RHEL machine where you will run Oracle Gateway.

 

unixODBC-2.2.14-11.el6.x86_64

unixODBC-devel-2.2.14-11.el6.i686

unixODBC-devel-2.2.14-11.el6.x86_64

unixODBC-2.2.14-11.el6.i686

 

 

Step 2 - Install the SAP HANA client (64-bit) on the RHEL machine where you will run Oracle Gateway. Please refer to the HANA client installation guide.

 

 

Step 3 - Create /etc/odbc.ini. The contents look like below:

 

[H1X]

Driver=/usr/sap/hdbclient/libodbcHDB.so

ServerNode=serverhana:30015

 

 

Step 4 - Install the Oracle Gateway software. You can google for a step-by-step guide on installing it using the Oracle Universal Installer.

 

 

Step 5 - Set the environment variables of the user running Oracle Gateway. In my case:

 

     LD_LIBRARY_PATH=/usr/lib64:/usr/sap/hdbclient:/oracle/BW1/112_64/lib

 

It's very important to have the 64-bit unixODBC library path (/usr/lib64/libodbc.so) in LD_LIBRARY_PATH because we run RHEL x64 and 64-bit Oracle software.

 

 

Step 6 - Create init<sid>.ora. In my case I will call this as dg4odbc (initdg4odbc.ora). The content should like like below.

 

HS_DB_NAME = H1X

HS_FDS_CONNECT_INFO = H1X                                   <===== This is the DSN name that comes from step 3 (/etc/odbc.ini)

HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so

HS_LANGUAGE=AMERICAN_AMERICA.UTF8

HS_NLS_NCHAR=UCS2

HS_FDS_TRANSACTION_MODEL=READ_ONLY

set ODBCINI=/etc/odbc.ini

 

 

Step 7 - Create listener.ora

 

LISTENER =

(ADDRESS_LIST=

(ADDRESS=(PROTOCOL=tcp)(HOST=oragwhost)(PORT=1551))

)

 

SID_LIST_LISTENER=

  (SID_LIST=

   (SID_DESC=

    (SID_NAME=dg4odbc)

     (ORACLE_HOME=/oracle/BW1/112_64)

      (PROGRAM=dg4odbc)

       (ENV="LD_LIBRARY_PATH=/usr/lib64:/oracle/BW1/112_64/lib:")

      )

  )

 

 

Step 8 - Start listener

 

lsnrctl start

 

 

Step 9 - Let's first test that unixODBC is working

 

Log in as the user which will run Oracle Gateway, check LD_LIBRARY_PATH (refer to step 5 above) and use the commands below.

 

isql -v <DSN name from step 2 /etc/odbc.ini> <hana user name> <password>

 

For example isql -v H1X SYSTEM password

 

+---------------------------------------+

| Connected!                            |

|                                       |

| sql-statement                         |

| help [tablename]                      |

| quit                                  |

|                                       |

+---------------------------------------+

 

SQL> select * from dummy

+------+

| DUMMY|

+------+

| X    |

+------+

SQLRowCount returns 1

1 rows fetched

 

If you see this output you are halfway there; unixODBC is working.

 

 

Now it's time to work on the Oracle database from which the data will be read with SELECT statements.

 

 

Step 10 - Add entries in tnsnames.ora. In my case it looks like below.

 

dg4odbc  =

  (DESCRIPTION=

   (ADDRESS=(PROTOCOL=tcp)(HOST=oragwhost)(PORT=1551))

   (CONNECT_DATA=(SID=dg4odbc))

   (HS=OK)

  )

 


And test it with tnsping

 

 

 

Step 11 - Create a DB link in the Oracle database and run a SELECT command. The commands look like this.

 

CREATE PUBLIC DATABASE LINK H1X CONNECT TO

"SYSTEM" IDENTIFIED BY "password" USING 'dg4odbc';

 

 

SQL> select * from dummy@H1X;

 

 

DUMMY

--------

X

 

 

 

 

 

Hope this helps someone in need!

Supplier & Product Rating based on Customer Feedback : HANA Sentiment analysis


Introduction


With this quick blog I explain how to implement a supplier rating application using the Text Analysis capabilities of SAP HANA. I use the sentiment analysis feature to rate a particular product and its suppliers based on customer feedback from various web sites.

 

Assumptions

We have customer feedback collected for several suppliers and arranged in the following format (the data is copied from multiple online suppliers; product and supplier names are masked).

data strucure.JPG

 

Modelling your source Table

Now we need to load the data into SAP HANA. For that, we create a table of the same structure, along the lines of the sketch below.

Query.JPG
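The statement behind the screenshot is along these lines — a minimal sketch only, assuming a hypothetical SENTIMENT schema and illustrative column names (only the FeedbackText column is taken from the description below):

CREATE COLUMN TABLE "SENTIMENT"."CUSTOMER_FEEDBACK" (
    "ID"           INTEGER PRIMARY KEY,     -- key of the feedback record (assumed)
    "PRODUCTNAME"  NVARCHAR(100),           -- masked product name (assumed)
    "SUPPLIERNAME" NVARCHAR(100),           -- masked supplier name (assumed)
    "FEEDBACKTEXT" NVARCHAR(5000)           -- free-text customer feedback to be analyzed
);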

 

Once the data is loaded into the source table, we create a full text index on the FeedbackText column using the standard configuration "EXTRACTION_CORE_VOICEOFCUSTOMER", along the lines of the sketch below.
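A minimal sketch of that index creation, reusing the hypothetical table and schema names from above:

CREATE FULLTEXT INDEX "FEEDBACK_SENTIMENT_IDX"
    ON "SENTIMENT"."CUSTOMER_FEEDBACK" ("FEEDBACKTEXT")
    CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'
    TEXT ANALYSIS ON;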

Voice of the customer content includes a set of entity types and rules that address requirements for extracting customer sentiments and requests. You can use this content to retrieve specific information about your customers' needs and perceptions when processing and analyzing text. The configuration involves complex linguistic analysis and pattern matching that includes processing parts of speech, syntactic patterns, negation, and so on, to identify the patterns to be extracted.

Once the index is created, a new columnar table (the $TA_<index name> table) is generated in the same schema as the source table. This table holds the extracted tokens as well as the corresponding sentiment values.

 

Tree.JPG

 

Now we can generate some visualizations on the extracted sentiment data, for example by aggregating the sentiment types as sketched below.
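A rough aggregation over the text-analysis result table — a sketch only; the $TA_ table name follows from the hypothetical index above, while TA_TYPE is the standard type column of a $TA table:

SELECT "TA_TYPE", COUNT(*) AS "MENTIONS"
  FROM "SENTIMENT"."$TA_FEEDBACK_SENTIMENT_IDX"
 WHERE "TA_TYPE" LIKE '%Sentiment%'     -- keep only the sentiment entity types
 GROUP BY "TA_TYPE"
 ORDER BY "MENTIONS" DESC;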

 

Analysis.JPG

 

SAP Lumira For Reporting


Now we have the sentiment data available in our table. That means we have the product details, the supplier details, and the type of feedback given by the customers, which can be combined as sketched below. Now we can build a dashboard (real-time dashboards in Lumira, if we integrate it with live feedback data from multiple web sites).
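One way to bring supplier, product and sentiment type together for such a dashboard — again only a sketch under the same assumptions about schema, table, index and key column names; the $TA_ table carries the key of the source row, so the join here is on the assumed ID column:

SELECT f."SUPPLIERNAME",
       f."PRODUCTNAME",
       t."TA_TYPE"  AS "SENTIMENT_TYPE",
       COUNT(*)     AS "MENTIONS"
  FROM "SENTIMENT"."CUSTOMER_FEEDBACK" AS f
  JOIN "SENTIMENT"."$TA_FEEDBACK_SENTIMENT_IDX" AS t
    ON t."ID" = f."ID"
 WHERE t."TA_TYPE" LIKE '%Sentiment%'
 GROUP BY f."SUPPLIERNAME", f."PRODUCTNAME", t."TA_TYPE";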

 

 

 

Pie Chart.JPG

And I love this part

I love this part.JPG

(This blog was created for DataGeek.)

 

Sreehari

Hadoop MapReduce as Virtual Functions in HANA


Hadoop offers large-scale, low-cost distributed storage called the Hadoop Distributed File System (HDFS). When a large file is stored in Hadoop, it is broken into multiple fixed-size blocks and replicated across data nodes. Since the redundancy is at the level of these relatively small blocks, recovery is faster and distribution is optimal. In order to process these blocks coherently, Hadoop introduced a programming paradigm called MapReduce (MR). By integrating with Hadoop, you can combine the in-memory processing power of SAP HANA with Hadoop's ability to store and process huge amounts of data, regardless of structure.

Before SAP HANA SPS09, you could use Smart Data Access to create virtual tables through the Hive ODBC driver for data federation scenarios. Starting from SPS09, SAP HANA supports integration with Hadoop MapReduce (MR) jobs written in Java. You can create a new type of user-defined function with direct access to HDFS, and these vUDFs (virtual user-defined functions) can be invoked directly from SQL. This helps customers reuse their investments in MapReduce and solve problems that don't fit the typical Hive usage patterns via SDA.

 

Here is the architecture diagram of the integration: 

1.png

1. Install Hadoop adapter controller

You should have your Hadoop system installed already. SAP has created an adapter that is installed as a delivery unit in the HANA XS engine and will be pushed to Hadoop later. This installation has already been done in the system, but it is good for you to understand how it works.

The controller can be downloaded from the SAP Marketplace. After that, you need to assign the sap.hana.xs.lm.roles::Administrator role to your HANA user and then start the HANA Application Lifecycle Manager to import it as a delivery unit.

 

The HANA Application Lifecycle Manager URL looks like below; replace the host and instance number. After login, click Delivery Unit -> Import -> Browse, locate the tgz file and import it.

http://<yourhost>:80<instanceNumber>/sap/hana/xs/lm/

2.png

On the Hadoop side, the controller.jar should have been deployed at /sap/hana/mapred/<Revision>/controller/package in HDFS.

 

3.png

 

You will also need to put these two jar files at the path /sap/hana/mapred/lib in HDFS:

 

4.png

 

2. Create the MapReduce java project

 

You can ask for the project from this blog, or if you want to start from scratch, follow the steps below.

In HANA Studio/Eclipse, create a new Java project named song_count, add the following jar files to a lib subdirectory and add them to the class path of the project. You should see something like below after that.

 

5.png

Create the Mapper class SongCountMapper.java in the package com.sfp and copy in the source code below. In the mapper, we emit one record per song, keyed by artist.

package com.sfp;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SongCountMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);

    @Override
    public void map(Object key, Text value, Context output) throws IOException,
            InterruptedException {
        String[] song = value.toString().split(",");
        output.write(new Text(song[3]), one);
    }
}


Create the Reducer class SongCountReducer.java in the package com.sfp and copy in the source code below. In the reducer, we aggregate the number of songs for each artist.

package com.sfp;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SongCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context output)
            throws IOException, InterruptedException {
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        output.write(key, new IntWritable(count));
    }
}

Create the Application class SongCountApplication.java and copy in the source code below. The application wires together the mapper and reducer and launches a MapReduce job that processes data from the input path /sfp/song/data and puts the result into the path /sfp/job_result in HDFS.

package com.sfp;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class SongCountApplication extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new SongCountApplication(), args);
        System.exit(res);
    }

    @Override
    public int run(String[] args) throws Exception {

        Job job = Job.getInstance(new Configuration());
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(SongCountMapper.class);
        job.setReducerClass(SongCountReducer.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path("/sfp/song/data"));
        FileOutputFormat.setOutputPath(job, new Path("/sfp/job_result"));

        job.setJarByClass(SongCountApplication.class);

        job.submit();
        return 0;
    }
}

 

Compile the Java code and make sure there are no syntax errors. The project should now look like below.

 

6.png

 

3. Upload the MapReduce Jobs Archive into HANA repository

 

In order to make HANA aware of the MapReduce code, we need to upload the jar file into the HANA repository. HANA Studio can generate the jar file on the fly from your source code.

We first need to create a new package in the HANA repository in which to store the jar file. Go to the HANA repository (create a new one if you have not already) and create a new package sfp.song. Create a new general project in HANA Studio/Eclipse named song and share the project with the HANA repository. You should see a project like this afterwards. The project is really just a placeholder for the MapReduce job archive; it is not necessary to create the .xsapp or .xsaccess files as you would for other XS-based projects deployed in the HANA repository.

 

7.png

 

Select the project, click New—>Other.. and click SAP HANA—>Database Development—>Hadoop MR Jobs Archive in the pop-up window.

 

8.png

 

Click Next, enter the file name song_count and then click Next.

 

9.png

 

Select the Java project song_count, set the target schema to your own schema (here SFP) and click Finish.

 

10.png

 

That will create a song_count.hdbmrjobs file. Activate it, and you can verify that it has been uploaded as a jar file in the HANA repository by executing a query such as the one below.
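The screenshots that follow show such a check; a minimal sketch, assuming the uploaded archive shows up in the SYS.VIRTUAL_FUNCTION_PACKAGES view that is used again in step 5:

SELECT * FROM "SYS"."VIRTUAL_FUNCTION_PACKAGES";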

11.png

 

12.png

4. Create the Remote Data Source

 

Now it’s the time to create the remote data source by running the SQL statement below in SQL console of HANA Studio. You will need to replace the <FQDN> with your own Full Qualified Domain Name of your host that you can find it by running command hostname -f.


CREATE REMOTE SOURCE HADOOP_SOURCE
ADAPTER "hadoop"
CONFIGURATION 'webhdfs_url=http://<FQDN>:50070;webhcat_url=http://<FQDN>:50111'
WITH CREDENTIAL TYPE 'PASSWORD'
USING 'user=hdfs;password=hdfs';


Now you should see the Remote Source.

 

13.png

 


 

5. Create the Virtual Function

 

To create the virtual function, run the following SQL statement in the SQL console of HANA Studio. The RETURNS TABLE structure needs to use the same data types as the output structure of the reducer; the package is the one you can find in the view "SYS"."VIRTUAL_FUNCTION_PACKAGES"; and the configuration needs to specify the input path in HDFS where you put the data files for the MapReduce job, as well as the class names of the mapper and reducer.

 

CREATE VIRTUAL FUNCTION HADOOP_SONG_COUNT()
RETURNS TABLE ("song_artist" NVARCHAR(400), "count" INTEGER)
PACKAGE SYSTEM."sfp.song::song_count_job"
CONFIGURATION 'enable_caching=true;mapred_jobchain=[{"mapred_input":"/sfp/song/data","mapred_mapper":"com.sfp.SongCountMapper","mapred_reducer":"com.sfp.SongCountReducer"}]'
AT HADOOP_SOURCE;


6. Run the Virtual Function

 

Now you can simply run the virtual function like below.

SELECT * FROM HADOOP_SONG_COUNT();


It will trigger the MapReduce job, execute it in the Hadoop cluster, and populate the result into the output path of your reducer; for this reducer that is /sfp/job_result in HDFS.

 

13.png

 

And you can find the job records in the job history UI at http://<your_hadoop_host>:19888/jobhistory

 

14.png
