
Hana Smart Data Integration - Fun with Transformation Services


This post is part of an entire series

Hana Smart Data Integration - Overview

 

By combining Hana Table Type objects as source, target or both, complex data transformations can be designed easily in the FlowGraph editor and provided as services to users. As these services are all based on Hana SQL, they can be consumed by all Hana applications: ABAP programs, XS applications, procedures, ...

 

To quote Sheldon Cooper from the Big Bang Theory episode "Fun with flags":

 

Now, today's episode of 'Fun with Flags' is not fun, but it is important.

 

 

The Hana Smart Data Integration option consists of multiple building blocks, including Hana objects as sources/targets (tables) and Transformations. What do you get when combining Hana Table Types and SDI Transformations? An infrastructure for transformation services.

 

I would like to provide a service in Hana to cleanse addresses. The user should be able to input any address, and the service returns the same data plus the corrected address.

cleanse1.png

This is pretty much the same kind of dataflow as when reading and writing table data, except this time a TableType is used. I have created a table type to be used as input by executing the command

create type address_input as table (
  customer_id varchar(8), street nvarchar(60), city nvarchar(30),
  postalcode nvarchar(20), country_code nvarchar(3));

and the result can be seen in the Hana catalog.

 

 

 

Similarly, a table type to be used as output:

create type cleansed_address as table (
  customer_id varchar(8), street nvarchar(60), city nvarchar(30),
  postalcode nvarchar(20), country_code nvarchar(3), standard_city nvarchar(30),
  standard_postalcode nvarchar(30), standard_street nvarchar(60), standard_house_number nvarchar(10));

 

The goal of this example is to take the input, derive more data from the input values and output the additional information. Seen as a transformation, that is a normal FlowGraph: one that reads a table - well, a table type in this case - performs transformations and writes the results.

So the only difference is the source and the target: each is an "Input Type" instead of a "Data Source", and when adding it, we are asked to provide a table type structure from the catalog.

cleanse3.png

 

What kind of transformations are done between source and target is secondary; in this case it is just copying ADDRESS_INPUT.STREET to the additional STANDARD_STREET output column.

 

When this is activated, a task is created which has two table type parameters. As with stored procedures, the data has to be prepared first and then the procedure - well, the task - is called.

cleanse3a.png

 

As a table type is just a structure, nothing that can hold data, two temporary tables are created with the ADDRESS_INPUT and CLEANSED_ADDRESS structure. A record is added to the input table, then the task is invoked, and finally the result is selected from the output table.

cleanse3b.png
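In SQL, this sequence looks roughly like the sketch below. The task and schema names as well as the sample values are assumptions for illustration; the actual task name is the one generated when the FlowGraph was activated.

-- Temporary tables matching the two table types defined above
create local temporary table #address_in (
  customer_id varchar(8), street nvarchar(60), city nvarchar(30),
  postalcode nvarchar(20), country_code nvarchar(3));
create local temporary table #address_out (
  customer_id varchar(8), street nvarchar(60), city nvarchar(30),
  postalcode nvarchar(20), country_code nvarchar(3), standard_city nvarchar(30),
  standard_postalcode nvarchar(30), standard_street nvarchar(60), standard_house_number nvarchar(10));

-- One input record (values made up for the example)
insert into #address_in values ('C0000001', 'Innsbrucker Straße 34', 'Salzburg', '5070', 'AT');

-- Invoke the activated task, binding the temporary tables to its two table type parameters
start task "MYSCHEMA"."mypackage::cleanse_address" procedure parameters (#address_in, #address_out);

-- Read the cleansed result
select * from #address_out;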

 

Now it is time to play with the transformations. Obviously the Data Cleanse transform is much better suited for the goal, so I removed the Filter node ("Transformation1") and replaced it with a Data Cleanse.

 

cleanse6.png

 

When dropping the transform onto the canvas, it provides a simple wizard-like interface to configure the input and output.

 

cleanse4.png

The Data Cleanse transform needs to know which kind of content, relevant for the cleansing task, it can find in which column. Since this information was not provided yet, the content type for each column can be set by clicking on Edit Defaults. CustomerID is Unknown, meaning it has no impact on the cleansing process; the column STREET contains text like "Innsbrucker Straße 34", so it is more than just a street name, it is an entire address line.

 

The next step is to configure the output of the transform, i.e. the additional columns it should produce. To help with that, it provides typical examples to pick from. Since the output structure should have a separate house number column, the 7-column address was selected.

cleanse5.png

 

 

The transform outputs all input columns plus the seven additional address fields it generates. So once the transform is connected to the target table type, the available transform output fields are mapped to the various target fields.

cleanse7.png

 

Executing the task again shows a better result. The transform was able to break the STREET column (content type: address line) apart into street name and house number, and it corrected the non-existent postcode 5070 to 5071. It could not do more because the World Wide Directory files were the only reference data available.

cleanse8.png

 

After copying the address directory files for Austria, postal information down to street and house number level became available, and a few more changes were made to the input address.

cleanse9.png

It correctly found out that the official street name is "Innsbrucker Bundesstraße" and not "Innsbrucker Straße", and although the street runs from Salzburg City (postcode 5020) into Salzburg-Wals (postcode 5071), the house number 34 indicates we are talking about Salzburg City.

 

 

Trying the same with Google Maps leads to the same result; Google changes the postcode to 5020 as well.

cleanse10.png

 

 

So what is the impact of all that?

 

First of all, it shows the openness of the Hana Smart Data Integration solution. The same kind of transformations used for data provisioning can be used for providing services as well. In the above example an extreme case was used, one where no Hana table was involved at all. But by modifying the dataflow, all kinds of use cases can be solved:

  • A service that data is sent into, looked up against a Hana table, and the result returned. Example: Search for customers with this address in the customer master data table - maybe this record already exists and should not be created a second time.
  • A service that data is sent into and, after transformations, loaded into Hana tables. Example: Create this record as a new customer in the master data table.
  • A service to be customized by the user. Example: Instead of writing tons of ABAP or JS code, the user can modify a pre-built dataflow to add his own transformations and configurations.
  • Use the above services via an XS application
  • Or via OData
  • Or in an ABAP program
  • ...

SPLASH - providing valuable feedback


In this blog post, I wanted to share my experience with using the SPLASH tool, specifically how to create a Study. The reason I wanted to share this is that in the several projects I have worked on this year, as well as in others in the past, I have had live demo sessions which are effective; however, I believe that SPLASH Studies would have saved me some development time prior to the demo itself. A few times in the demos, the feedback we received was minor but crucial to the UAT. Further, I think that if I had used a tool such as SPLASH prior to the demo, I would have been in a better place and saved a few days of development for those minor, cosmetic things that the business is usually not content with and that require more iterations from the developers to tweak.

 

First of all, it is important to know that BUILD and SPLASH are currently not generally available - I had to create a beta account at https://www.experiencesplash.com/splashapp/ and provide my information in order to receive access to the beta version of the tool. I am very impressed with what is there so far!

 

Once I created an account and acknowledged the confirmation email, I was able to start on my first SPLASH project. I selected the Study (survey) because, as I mentioned before, it is something we (developers) have to do on every project to get feedback from the business. It does not matter which technology it is or even what field you come from. Getting feedback is a very important and fundamental step in successful project deliveries.

 

Here are the steps to create a survey and send out the link for users to provide feedback.

1) create project

createProject.JPG

2) create survey

createSurvey.JPG

3) provide title for survey

surveyTitle.JPG

4) upload some screenshots - in my case I am using pictures of people here, but it could be wireframes or actual project screenshots

 

(drag and drop into screen - I have hidden the faces of some people purposely)


photos.JPG

5) once pictures are uploaded, publish - notice that once a survey is published, it cannot be edited.

6) once published, SPLASH will provide a link that can be sent out to access the study, or you can provide email addresses that will receive the survey.

 

 

After receiving the survey, I provided some feedback for some of the pictures, such as the ones below


sam.JPG


thomas.JPG

 

 

 

As seen in these screenshots, I have provided some feedback; within the feedback you can see the comment as well as the icon (face) indicating the mood of the comment.

 

More than just sending out surveys, I believe this tool has more to offer, such as advanced feedback prior to finalizing a project, mood feedback based on the icons, and most importantly, it could be used for more than just SAP projects. I think we have received a great tool to get feedback on reporting, DB artifacts, UIs, etc. This tool is very easy to use and follow for non-technical users, as well as when tech leads need to provide feedback to the development team.

 

 

I look forward to hearing from others how they are using SPLASH, and hopefully others can also benefit from this small post.

CRUD Operations with XSJS - SAP HANA Native Application


Hello Everyone,

 

This blog explains how to create an SAP HANA native application with CRUD (Create, Read, Update & Delete) operations using SAPUI5 and XSJS services. We will also see how to consume XSJS.

 

 

As we all know, data can be transferred over HTTP using either the Atom (XML) or the JSON (JavaScript) format. Here I am using JSON to pass the data.


Creating the User Interface

1. Create a project
2. Set up the SAPUI5 bootstrap and required libraries
3. Create view components
4. Implement XSJS services
   4.1. Read
   4.2. Create
   4.3. Update
   4.4. Delete


Prerequisites


  • You have to install the SAPUI5 Application Development Tool into your Eclipse
  • UI Development Toolkit for HTML5 (SAPUI5) version 1.28


1. Create the XS project.

 

            Structure of your project will look like this:

     1.png

 

2. Create a simple HTML page, say index.html

    

           

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
  <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
  <title>CRUD Operation with XSJS</title>
  <script src="js/jquery.min.js"></script>
  <script src="https://sapui5.hana.ondemand.com/resources/sap-ui-core.js"
          id="sap-ui-bootstrap"
          data-sap-ui-libs="sap.ui.ux3,sap.ui.commons,sap.ui.table,sap.viz,sap.suite.ui.commons,sap.m"
          data-sap-ui-theme="sap_goldreflection">
  </script>
  <script>
    sap.ui.localResources("views");
    var view = sap.ui.view({id:"id1", viewName:"views.sample", type:sap.ui.core.mvc.ViewType.JS});
  </script>
</head>
<body>
  <div id="content" style="height: 400px; width: 400px; border: 1px solid #AAA"></div>
</body>
</html>

        

3. Create one view.js file; I named it sample.view.js

 

sap.ui.jsview("views.sample", {    /** Specifies the Controller belonging to this View.     * In the case that it is not implemented, or that "null" is returned, this View does not have a Controller.     * @memberOf views.Companies     */    getControllerName: function() {        return null;    },    /** Is initially called once after the Controller has been instantiated. It is the place where the UI is constructed.     * Since the Controller is given to this method, its event handlers can be attached right away.     * @memberOf views.Companies     */    createContent: function(oController)    {        // create a simple matrix layout        var oLayout = new sap.ui.commons.layout.MatrixLayout({    id : "matrix1",        columns : 5,        width : "100%",        widths : [ "15%", "30%","10%","30%","15%"],     layoutFixed : false     });        var fname="",lname="",id,dob,ph_no;          var oLabel1 = new sap.ui.commons.Label("l1");  oLabel1.setText("Employee ID");  var oLabel2 = new sap.ui.commons.Label("l2");  oLabel2.setText("First Name");  var oLabel3 = new sap.ui.commons.Label("l3");  oLabel3.setText("Last Name");  var oLabel4 = new sap.ui.commons.Label("l4");  oLabel4.setText("DOB");  var oLabel5 = new sap.ui.commons.Label("l5");  oLabel5.setText("Phone Number");  var oInput1= new sap.ui.commons.TextField('input1');        oInput1.setValue("1");       // oInput1.setEditable(false);        oInput1.setTooltip("ID");         oInput1.attachChange(function(){            id= oInput1.getValue();            });  var oInput2 = new sap.ui.commons.TextField('input2');        oInput2.setValue("Amit");        oInput2.setTooltip("please enter first name");        oInput2.attachChange(function(){            fname= oInput2.getValue();            });          var oInput3 = new sap.ui.commons.TextField('input3');        oInput3.setValue("Gupta");        oInput3.setTooltip("please enter last name");        oInput3.attachChange(function(){            lname= oInput3.getValue();            });              var oInput4 = new sap.ui.commons.TextField('input4');        oInput4.setValue("1975-03-27");        oInput4.setTooltip("please enter Dob");        oInput4.attachChange(function(){            dob= oInput4.getValue();            });    var oInput5 = new sap.ui.commons.TextField('input5');        oInput5.setValue("98196636666");        oInput5.setTooltip("please enter phone number");        oInput5.attachChange(function(){            ph_no= oInput5.getValue();            });              var oButton1 = new sap.ui.commons.Button({    text : "Read",    tooltip : "This is a Read operation button",         press : function() {              var jurl="/services/read.xsjs";              jQuery.ajax({    url: jurl,              async :false,              TYPE: 'POST' ,              method: 'GET',              dataType: 'JSON',              success: function(data) {                     oInput1.setValue(data[0].ID);              oInput2.setValue(data[0].FirstName);              oInput3.setValue(data[0].LastName);              oInput4.setValue(data[0].DoB);                 oInput5.setValue(data[0].Phone_No);                       }                          });             }         });          var oButton2 = new sap.ui.commons.Button({    text : "Insert",    tooltip : "This is a Insert operation button",         press : function() {          var data1=                 {                 ID:id ,                 FirstName:fname,                 LastName:lname,                 DoB:dob,                     Phone:ph_no, 
                 };             var datavalue=JSON.stringify(data1);             alert(datavalue);             var jurl="/services/insert.xsjs";                  jQuery.ajax({           url: jurl,              async :false,              TYPE: 'POST' ,              data:{dataobject:datavalue},              method: 'GET',              dataType: 'text',              success: function(data) {              alert(data);              console.log(data);                       }                          }); }        });        var oButton3 = new sap.ui.commons.Button({    text : "Update",     tooltip : "This is a Update operation button",        press : function() {                var data1=                 {                 ID:id ,                 FirstName:fname,                 LastName:lname,                 DoB:dob,                     Phone:ph_no,                  };                         var datavalue=JSON.stringify(data1);             var jurl="/services/update.xsjs";                  jQuery.ajax({             url: jurl,              async :false,              TYPE: 'POST' ,              data:{dataobject:datavalue},              method: 'GET',              dataType: 'text',              success: function(data) {              alert(data);              console.log(data);                       }                          });                }        });          var oButton4 = new sap.ui.commons.Button({    text : "Delete",     tooltip : "This is a delete operation button",        press : function() {           var jurl="/services/delete.xsjs";           jQuery.ajax({           url: jurl,              async :false,              TYPE: 'POST' ,              data:{                  "ID":id                  },              method: 'GET',              dataType: 'text',              success: function(data) {              alert(data);              console.log(data);                       }                          });                }             });        oLayout.createRow("", oLabel1,"",oInput1,"" );        oLayout.createRow("", oLabel2,"", oInput2,"" );        oLayout.createRow("", oLabel3,"", oInput3,"" );        oLayout.createRow("", oLabel4,"", oInput4,"" );        oLayout.createRow("", oLabel5,"", oInput5,"" );        oLayout.createRow("", oButton1,oButton2, oButton3,oButton4 );
// attach it to some element in the page        oLayout.placeAt("content");      }
});


4. Create the XSJS services:


  • Insert.xsjs

              

$.response.contentType = "text/plain";
var aCmd = $.request.parameters.get('dataobject');
var obj = JSON.parse(aCmd);
var id = obj.ID;
var firstname = obj.FirstName;
var lastname = obj.LastName;
var dob = obj.DoB;
var phone = obj.Phone;
var tx_data_query = "";

function getTxtData() {
    var connection = $.db.getConnection();
    var statement = null;
    var rowCount = 0;
    // Values are concatenated into the statement as in the original example;
    // for productive code, prepared-statement bind parameters should be used instead.
    tx_data_query = 'INSERT INTO EMP_DETAILS (ID, FIRST_NAME, LAST_NAME, DOB, PHONE_NUMBER) VALUES (' +
        id + ',\'' + firstname + '\',\'' + lastname + '\',\'' + dob + '\',\'' + phone + '\')';
    try {
        statement = connection.prepareStatement(tx_data_query);
        // INSERT is a DML statement, so executeUpdate (not executeQuery) is the right call
        rowCount = statement.executeUpdate();
        connection.commit();
    } finally {
        statement.close();
        connection.close();
    }
    return rowCount;
}

function doGet() {
    try {
        $.response.setBody("Rows inserted: " + getTxtData());
    } catch (err) {
        $.response.setBody("Error while executing query: [" + err.message + "]");
        $.response.returnCode = 200;
    }
}
doGet();


Before Insertion:

 

11.png

 

12.png

 

     After Insertion :

 

     13.png

  • Update.xsjs


 

$.response.contentType = "text/plain";
var aCmd = $.request.parameters.get('dataobject');
var obj = JSON.parse(aCmd);
var id = obj.ID;
var firstname = obj.FirstName;
var lastname = obj.LastName;
var dob = obj.DoB;
var phone = obj.Phone;
var tx_data_query = "";

function getTxtData() {
    var connection = $.db.getConnection();
    var statement = null;
    var rowCount = 0;
    // Build the UPDATE statement; adjust the schema/table name to your environment.
    tx_data_query = 'UPDATE EMP_DETAILS SET FIRST_NAME=\'' + firstname + '\', LAST_NAME=\'' + lastname +
        '\', DOB=\'' + dob + '\', PHONE_NUMBER=\'' + phone + '\' WHERE ID=' + id;
    try {
        statement = connection.prepareStatement(tx_data_query);
        rowCount = statement.executeUpdate();
        connection.commit();
    } finally {
        statement.close();
        connection.close();
    }
    return rowCount;
}

function doGet() {
    try {
        $.response.setBody("Rows updated: " + getTxtData());
    } catch (err) {
        $.response.setBody("Error while executing query: [" + tx_data_query + " " + err.message + "]");
        $.response.returnCode = 200;
    }
}
doGet();

After Update:

14.PNG

 

  • Delete.xsjs


         

$.response.contentType = "text/plain";
var id = $.request.parameters.get('ID');
var tx_data_query = "";

function getTxtData() {
    var connection = $.db.getConnection();
    var statement = null;
    var rowCount = 0;
    tx_data_query = 'DELETE FROM "_150949"."EMP_DETAILS" WHERE ID=' + id;
    try {
        statement = connection.prepareStatement(tx_data_query);
        rowCount = statement.executeUpdate();
        connection.commit();
    } finally {
        statement.close();
        connection.close();
    }
    return rowCount;
}

function doGet() {
    try {
        $.response.setBody("Rows deleted: " + getTxtData());
    } catch (err) {
        $.response.setBody("Error while executing query: [" + err.message + "]");
        $.response.returnCode = 200;
    }
}
doGet();

     

  After Delete:

15.PNG



  • Read.xsjs

   


$.response.contentType = "text/plain";
var aCmd = $.request.parameters.get('ID');
var tx_data_query = 'SELECT * FROM "_150949"."EMP_DETAILS" WHERE ID=' + aCmd;

// Close result set, statement and connection in one go
function close(closables) {
    var closable;
    var i;
    for (i = 0; i < closables.length; i++) {
        closable = closables[i];
        if (closable) {
            closable.close();
        }
    }
}

function getTxData() {
    var txlist = [];
    var connection = $.db.getConnection();
    var statement = null;
    var resultSet = null;
    try {
        statement = connection.prepareStatement(tx_data_query);
        resultSet = statement.executeQuery();
        var txitem;
        while (resultSet.next()) {
            txitem = {};
            txitem.ID = resultSet.getString(1);
            txitem.FirstName = resultSet.getString(2);
            txitem.LastName = resultSet.getString(3);
            txitem.DoB = resultSet.getString(4);
            txitem.Phone_No = resultSet.getString(5);
            txlist.push(txitem);
        }
    } finally {
        close([resultSet, statement, connection]);
    }
    return txlist;
}

function doGet() {
    try {
        $.response.setBody(JSON.stringify(getTxData()));
    } catch (err) {
        $.response.setBody("Error while executing query: [" + err.message + "]");
        $.response.returnCode = 200;
    }
}
doGet();

 

After Reading:

16.PNG

 

 

 

I hope you find this blog useful and helpful.

 

 

Happy Coding!

 

Thanks,

Kapil Jain

How to access HANA using Excel?


Dear Friends,

 

We have seen many queries on SCN about accessing HANA using Microsoft Excel. Many of us have faced a lot of issues/errors while installing the ODBC driver, registering the MDX library, and connecting to HANA from Excel.

 

The scope of this blog is just to help you set up the HANA connection through Excel.

 

Here is my configuration, which most of you will have and with which you may face many issues when creating the ODBC connection.

 

OS: Windows 7 64 bit

Office Version: Office 2013 32 bit

HANA Studio: Version: 2.1.15

 

Below are the steps (it's very simple).

 

Step 1: Download the SAP HANA Client from the Service Marketplace (HANA Studio does not provide any libraries to connect to HANA from other clients, for example Excel, Java UIs, etc.; it is just a user interface for accessing HANA artifacts). Since Office is 32-bit here, don't download the 64-bit HANA Client.

 

Installations and Upgrades - H --> SAP In-Memory (SAP HANA) --> HANA Platform Edition --> SAP HANA PLATFORM EDITION --> SAP HANA PLATFORM EDIT. 1.0 (Support Packages and Patches) --> Entry by Component --> HANA client

 

Download the latest version for your OS and install it. This also works on 32-bit Windows.

 

Step 2: Make sure your HANA server is up and running. Enter the URL in your browser, e.g. http://99.99.100.100:8010 (here the instance number is 10 and the host is 99.99.100.100).

 

Step 3: After installing the HANA Client, the ODBC driver for HANA gets installed automatically; you can notice this during the installation.

            To make sure, please follow the steps below.

         --Go to Control Panel --> Administrative Tools --> Data Sources (ODBC) --> this is the ODBC Administration Console

 

        HANA ODBC 1.png

Step 4: Create an ODBC DSN for HANA as below.

           --Click on Finish after step 3, enter the connection name, host and port details, and test.

           Hint: To get your hostname and port, go to the Systems view -> right-click -> Properties -> Database User Logon -> Additional Properties

 

        HANA ODBC2.png     HANA ODBC 3.png

 

Step 5: After step 4, you will see a new entry in the Data Source list of the ODBC Administration Console. We are halfway there now. Close all the windows.

 

Step 6: Map your system files/libraries to the local machine (in layman's terms).

             Go to Start, enter regedit and change the directory path as shown below.

              Value: %SystemRoot%\system32

          HANA ODBC 4.png

          

Step 7: Now locate the SAP HANA ODBC MDX library, which is nothing but "SAPNewDBMDXProvider.dll". We just need to register this library so that Office can access the HANA DSN through it.

             Go to: C:\Program Files (x86)\SAP\SAP HANA Database Client for MS Excel\SAPNewDBMDXProvider.dll

 

If you do not find the above file, reinstall the HANA Client and look for it again.

 

Step 8: You can register the MDX library manually, without using the "HANA Client for Excel" software.

            Go to Start and enter CMD. Right-click and run as administrator.

             HANA ODBC 5.png

          To register the HANA MDX library, run regsvr32.exe with the file name as below.

 

           C:\Windows\system32>regsvr32.exe "C:\Program Files (x86)\SAP\SAP HANA Database Client for MS Excel\SAPNewDBMDXProvider.dll"

       HANA ODBC 6.png

 

 

Step 9: Open Excel, go to the DATA ribbon and click on From Other Sources -> From Data Connection Wizard --> Others/Advanced ->

 

             HANA ODBC 7.png

 

Step 10: Connect to HANA: enter all the system details and login credentials and access your artifacts.

             

              HANA ODBC 8.png

 

I hope you will now be able to access HANA from Excel.

 

Thanks for visiting this blog; I am looking forward to your suggestions and comments.

 

Sincerely,

Shreyansh

Debugging HANA stored procedures from a different catalog schema - get auths right!


I've been working to build a deliverable package in HANA (UI5 application and supporting back-end) to send out to customers. And part of the development process, as many readers will know, is debugging stored procedures. In some cases, you're awesome, and you don't need to do this - "I know it works, I wrote it". But sometimes, just sometimes, it's helpful to turn to the debugger to see why a particular piece of functionality is not working as expected. But what to do when you don't have the auths to debug your own stored procedure?! Eeeeep!

 

To set the scene, I'm writing a UI5 app, with an OData HANA-only (i.e. no Gateway here) back-end, and have in place a number of OData modification exits. The modification exits are mostly stored procedures (.hdbprocedure files). Read up on these over here: Custom Exits for OData Write Requests - SAP HANA Developer Guide for SAP HANA Studio - SAP Library

 

I've got an update modification exit on one of my OData entities, Forecasts.

 

service {
  "fa.apps.hpc.xs.views.forecasts::forecasts_by_user_view" as "Forecasts" key ("id")
  navigates(
    "ForecastFields" as "Fields",
    "ForecastDataSet" as "DataSet",
    "ForecastRuns" as "Runs"
  )
  create using "fa.apps.hpc.xs.procedures.forecast::createForecast"
  delete using "fa.apps.hpc.xs.procedures.forecast::delimitForecast"
  update using "fa.apps.hpc.xs.procedures.forecast::updateForecast";
}

 

That'd be this guy, in bold. If you're savvy, you'll know that an update modification exit stored procedure needs a particular signature.

 

PROCEDURE "FA_APPS"."fa.apps.hpc.xs.procedures.forecast::updateForecast" (

IN  new <TABLE_DEFN>,

IN  old <TABLE_DEFN>,

OUT outtab <HTTP_ERRORS DEFN>

)

LANGUAGE SQLSCRIPT

SQL SECURITY INVOKER

-- DEFAULT SCHEMA <SCHEMA>

AS

-- READS SQL DATA AS

BEGIN

 

-- Weave your update magic here

 

END;

 

I wanted to debug it, just for fun (actually, for a very real reason). The trouble I found, however, is that because this stored procedure exists in a different schema from my user's, debugging is no longer automatically available. In Eclipse (I use Eclipse with the HANA add-ons, not HANA Studio, although it's effectively the same thing) you'll still be able to set the breakpoint, as per below. Nice blue dot. Perfect! Breakpoint set.

Capture1.JPG


But when you start debugging said procedure, you'll notice a little cross over your breakpoint.

 

capture2.jpg

Nooooooooooooooooooo!!!!!!!!

 

That didn't happen before when the stored proc belonged to my schema. But of course, that was my schema. My user owns it and everything in it, including debug privileges for all stored procedures. So I messed around a bit, not quite grasping what was required here. I also followed the inimitable Rich Heilman's instructions, which you can peruse over here: Unable to Debug a activated Procedure in HANA XS environment - but they didn't quite do what I needed. Presumably, I already had sufficient auths in HANA to debug a procedure, but I didn't have auths to debug this procedure. So I persevered. As you do. And keep doing. And keep on doing... until eventually it clicked! Hunting through the Object Privileges I could apply to my user, lo and behold, I see my stored procedure in the list:

 

capture3.jpg

 

"Hmmmmm" I think to myself. "I wonder what will happen if I..."

capture4.jpg

 

And there you have it. Once you add the stored proc to your user under Object Privileges, you then determine what specific privileges you should have (note: I'm logged in here as a user with user admin auths, so I can grant privileges to my user, but you could just as well use SYSTEM or whatever if this is not an important HANA instance).
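The equivalent SQL, if you prefer granting it directly rather than clicking through the UI, would look roughly like this (the grantee name is a placeholder):

GRANT EXECUTE, DEBUG ON "FA_APPS"."fa.apps.hpc.xs.procedures.forecast::updateForecast" TO MY_DEV_USER;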

 

So that's lovely to know. I can add stored proc auths directly to my user. But this is going to be a deliverable package/app. It follows that I should define such auths in an HDB Role artifact - heck, I'm doing that already for all other components of this app, so best keep on point. So, for the purposes of this post, I'll demonstrate adding this to my existing HDB Role artifact. Were you to do this productively, you might consider creating a separate .hdbrole file for debug/administration of the app. I'll keep everything all together for now. I should note, that I am not saving my user with the privileges above. I don't want to have to apply individual privs to each and every user - I already have an .hdbrole file, so I'll use that. Please also note that, shortly after writing this post, I found Rich's post on pretty much exactly this topic. New SQLScript Features in SAP HANA 1.0 SPS7 - specifically, the last section Debugging Authorisation Concept. I've basically just expounded on what's covered in this post, and the one I linked to earlier.
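For reference, repository roles such as the USER_ROLE shown below are not assigned with a plain GRANT statement but via the _SYS_REPO procedure; a quick sketch (the grantee name is a placeholder):

CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('fa.apps.hpc.xs.roles::USER_ROLE', 'MY_DEV_USER');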

 

http://assets.diylol.com/hfs/eca/a9b/bfd/resized/x-all-the-things-meme-generator-reblog-all-the-things-205a3d.jpg

 

Right then, to my HDB Role! Now, this role is already applied to my user. So any changes to it should automatically apply to my user too.

 

role fa.apps.hpc.xs.roles::USER_ROLE {

  // This is a very broad smattering of privileges for a schema, and will be pared back once the supporting
  // stored procedures are in place.
  schema fa.apps.resources:FA_APPS.hdbschema: SELECT, INSERT, UPDATE, DELETE, EXECUTE, DEBUG;

  // Lots of guff here

  // Here's the stored procedure declaration with the required privileges - EXECUTE and DEBUG
  catalog sql object "FA_APPS"."fa.apps.hpc.xs.procedures.forecast::updateForecast" : EXECUTE, DEBUG;

  // Lots of guff here

}

 

Add our procedure privileges as above (including DEBUG on the schema), activate the role, restart debugging of the procedure and...

capture5.jpg

Et voila. Our red cross has happily transformed into a green tick and we're off debugging! I don't know about you, but there's a special, warm place in my heart reserved for green ticks.

 

#happysigh

 

In the debugger...

capture6.jpg

We are breaking exactly at our breakpoint now. Excellent.

 

This is just a smattering of information about the gauntlet that can be HANA auths. The thing is, they're not really complicated. But it can be hard sometimes to figure out what's required. As this post ironically proves.

 

For completeness:

 

My instance is HANA Developer Edition, hosted in AWS via CAL, version 1.00.091 (SPS9)

Eclipse is Mars, running the latest HANA and UI5 add-ons

 

Credit for both references to Rich Heilman

Hana Smart Data Integration - The File Adapter


The SDI File Adapter is one of the adapters preinstalled with every SDI Agent. Its purpose is to make any file - fixed width or delimited, in any character encoding and format - available to Hana as a virtual table, to be selected from as easily as any other Hana table.

 

Concept

 

Once the File Adapter is made visible in Hana, it provides multiple tables out of the box, e.g. FILEDIRECTORY to get a list of all available files, or FILECONTENT with the file content as one large BLOB. As this might allow any Hana user with the create-remote-source privilege to read files, some security needs to be built around it.

When deploying the adapter in the agent, a root directory has to be set. The adapter never allows reaching outside of this root directory, so sensitive files like /etc/passwd or C:\Windows cannot be read. Second, there is a password (or AccessToken, as it is labeled) to make sure the person creating the remote source in Hana is allowed to use this adapter.

The administrator installing the agent decides on these settings by deploying the FileAdapter and changing the preferences of this adapter in the AgentConfig tool.

 

fileadapter_preferences.png

As one virtual table specifies the file format definition and multiple files can match that structure, there is no 1:1 relationship between files and virtual tables. So what should the adapter return when browsing the list of available virtual tables? To provide this browsing option, the adapter scans a File Format Root Directory for files named *.cfg, each of which contains the format information. This is the second directory to configure.

 

With this information the remote source and the virtual tables can be created and the file content selected from.

 

Remote Source

 

When creating a remote source, the root and file format directories are set by default to the directories of the adapter and can be further restricted here. The idea is that maybe one remote source points to the plan data subdirectory, another to the employee data (both subdirectories of the adapter root directory), and users are allowed to use either the one or the other remote source - just in case more fine-grained security is needed.

The AccessToken has to match the token entered in the adapter preferences, as a security measure.

fileadapter-createsource.png

 

Alternatively the remote source can be created via SQL as well

 

CREATE REMOTE SOURCE "FileSource" ADAPTER "FileAdapter" AT LOCATION AGENT "agent_local"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8"?>
<ConnectionProperties name="ConnectionInfo">
  <PropertyEntry name="rootdir">/usr/sap/FileAdapter/FileServer</PropertyEntry>
  <PropertyEntry name="fileformatdir">/usr/sap/FileAdapter/FileformatDefinitions</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
  '<CredentialEntry name="AccessTokenEntry">
    <password>1234</password>
  </CredentialEntry>';

 

The built-in tables

 

The following tables are always provided by the adapter. They are meant to help build applications using the File Adapter (a small SQL sketch follows the list below).

fileadapter-builtintables.png

 

  • FILEDIRECTORY: List all files of the remote source, including the sub directories. Supports pushdown of LIKE and = on the PATH and NAME column.
    fileadapter-filedirectory.png
  • FILECONTENT: Allows reading a single file as is. It requires the PATH and NAME columns to be restricted to a single file using a where clause and returns the entire file content as one BLOB.
    fileadapter-filecontent.png
  • FILECONTENTTEXT: As above, but this time the file content is returned as NCLOB. That means the file is read with a specific character encoding. This can either be the default - based on the Byte Order Mark of the data file or the OS default - or provided as a dataprovisioning parameter. In this example an ASCII file is read with the UTF-32 codepage, hence producing wrong characters.
    fileadapter-filecontenttext.png
  • FILECONTENTROWS: Again returns a BLOB but this time one row per line. A line is supposed to end with a \n (ASCII newline) character/byte.
    fileadapter-filecontentrows.png
  • CODEPAGES: Returns the list of all supported values for codepages by querying the adapter's JVM installation.
  • LOCALE: Returns the locales the JVM does support.
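To give an idea of how these built-in tables are used, here is a minimal sketch. It assumes the remote source created above and that the built-in tables are exposed under the same remote path as the file format tables (as in the create virtual table example further below); the virtual table name is made up.

-- Expose the built-in FILEDIRECTORY table and list the matching files
create virtual table "v_filedirectory" at "FileSource"."<NULL>"."<NULL>"."FILEDIRECTORY";
select * from "v_filedirectory" where "NAME" like 'plan%';  -- LIKE on NAME is pushed down to the adapter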

 

Defining file formats

 

One option for creating a virtual table for a given file format is to execute the create virtual table statement manually and provide the format information as additional metadata. Below is an example of the syntax; it also lists all possible parameters. Only some of them are mandatory; most can be omitted to leave them at a useful default.

 

create virtual table "v_plan" at "FileSource"."<NULL>"."<NULL>"."v_plan"

REMOTE PROPERTY 'dataprovisioning_parameters'=

'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

<Parameters>

<Parameter name="FORCE_FILENAME_PATTERN">plan%.txt</Parameter>

<Parameter name="FORCE_DIRECTORY_PATTERN"></Parameter>

<Parameter name="FORMAT">CSV</Parameter>

<Parameter name="CODEPAGE">UTF-8</Parameter>

<Parameter name="ROW_DELIMITER">\r\n</Parameter>

<Parameter name="COLUMN_DELIMITER">;</Parameter>

<Parameter name="TEXT_QUOTES"></Parameter>

<Parameter name="SKIP_HEADER_LINES">0</Parameter>

<Parameter name="QUOTED_TEXT_CONTAIN_ROW_DELIMITER">false</Parameter>

<Parameter name="ESCAPE_CHAR"></Parameter>

<Parameter name="TEXT_QUOTES_ESCAPE_CHAR"></Parameter>

<Parameter name="ERROR_ON_COLUMNCOUNT"></Parameter>

<Parameter name="LOCALE">de_DE</Parameter>

<Parameter name="DATEFORMAT">dd. MMMM yyyy</Parameter>

<Parameter name="TIMEFORMAT">HH:mm:ss</Parameter>

<Parameter name="SECONDDATEFORMAT">yyyy.MM.dd HH:mm:ss</Parameter>

<Parameter name="TIMESTAMPFORMAT">yyyy.MM.dd HH:mm:ss</Parameter>

<Parameter name="COLUMN">YEAR;Integer</Parameter>

<Parameter name="COLUMN">COST_CENTER;varchar(8)</Parameter>

<Parameter name="COLUMN">PLAN_AMOUNT;decimal (15, 2)</Parameter>

<Parameter name="COLUMN">CHANGE_SECONDDATE;seconddate</Parameter>

</Parameters>';

 

The other option is to create a .cfg file inside the directory of file format definitions or a subdirectory thereof. The content of the file is logically the same. The file has to have a first line with a description and then lists all parameters as key-value pairs. The above example as a cfg file would look like:

 

Fileformat to read a CSV file with plan data
FORCE_FILENAME_PATTERN=plan%.txt
FORCE_DIRECTORY_PATTERN=
FORMAT=CSV
CODEPAGE=UTF-8
ROW_DELIMITER=\r\n
COLUMN_DELIMITER=;
TEXT_QUOTES=
SKIP_HEADER_LINES=1
QUOTED_TEXT_CONTAIN_ROW_DELIMITER=false
ESCAPE_CHAR=
TEXT_QUOTES_ESCAPE_CHAR=
ERROR_ON_COLUMNCOUNT=false
LOCALE=de_DE
DATEFORMAT=dd. MMMM yyyy
TIMEFORMAT=HH:mm:ss
SECONDDATEFORMAT=yyyy.MM.dd HH:mm:ss
TIMESTAMPFORMAT=yyyy.MM.dd HH:mm:ss
COLUMN=YEAR;Integer
COLUMN=COST_CENTER;varchar(8)
COLUMN=PLAN_AMOUNT;decimal (15, 2)
COLUMN=CHANGE_SECONDDATE;seconddate

 

In the end both versions have the same result: a virtual table with all these parameters as virtual table attributes. The only difference is that with the cfg files, browsing shows something and then the virtual table is created, whereas in the first option the table is created directly.

 

Note: Changing the cfg file after creating the virtual table has no effect. These parameters are really copied into the Hana data dictionary.

 

fileadapter-tableproperties.png

 

In case the file is a fixed width file, i.e. the column start and end positions are always the same, the position information is needed in addition.

 

Fixed width demo
FORMAT=FIXED
FORCE_FILENAME_PATTERN=plan%.txt
CODEPAGE=UTF-8
ROW_DELIMITER=\r\n
COLUMNSSTARTENDPOSITION=0-3;4-11;12-37;38-53
ROWLENGTH=54
COLUMN=YEAR;Integer
COLUMN=COST_CENTER;varchar(8)
COLUMN=PLAN_AMOUNT;decimal (15, 2)
COLUMN=CHANGE_SECONDDATE;seconddate

 

Parameters in detail

 

Global

  • FORCE_FILENAME_PATTERN, FORCE_DIRECTORY_PATTERN: A user might execute a simple "select * from virtualtable" without any where clause on the directory and name of the file. In that case every single file in the root directory and its subdirectories will be read and parsed according to this virtual table's format definition. That might take a while and produce many errors. But if the virtual table matches only files in a certain directory, directory tree or with certain file names, it makes sense to hard-wire this information into the virtual table directly. Examples:
    • FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata: Reads all files in that directory only
    • FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata/%: Reads all files in that directory and subdirectories
    • FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plan%: Reads all files in directories that start with "plan"
    • FORCE_FILENAME_PATTERN=plan%.txt: Reads files like plan20120101.txt
    • FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata  FORCE_FILENAME_PATTERN=plan%.txt: Files inside the directory and matching the provided name pattern
  • FORMAT: Is either CSV (default) or FIXED
  • CODEPAGE: The character encoding the file is read with. By default the operating system default is used. In case the file has a Byte Order Mark this codepage is used always. Valid values of the Java installation can be found by creating a virtual table for CODEPAGE and querying its contents.
  • ROW_DELIMITER: A character sequence indicating the end of a row in the file. In case these are non-printable characters, they can be provided encoded as \d65536 or \xFFFF or in Unicode notation \u+FFFF. Alternatively the typical \r and \n are supported as well. Examples:
    • \n Unix standard
    • \r\n Windows standard
    • \d13\d10 Windows standard but characters provided as decimal number
    • \x0D\x0A Windows standard but characters provided as hex number
  • SKIP_HEADER_LINES: In case the file has a header, the number of lines to be skipped is entered here.
  • ERROR_ON_COLUMNCOUNT: By default a row with fewer columns than defined is considered okay. By setting this parameter to true, it is expected that all rows of the file have as many columns as defined.
  • LOCALE: The decimal and date conversion from the strings in the file into native numbers or dates might be locale specific. For example, the text "3,1415" is a perfectly nice value for pi in the German notation, where the comma is the decimal separator. With an English locale it would be the large integer number 31415. The same goes for dates in case the month or day name is used: "14. Oktober 2000" is fine with a German locale, for all other languages it is not. Again, valid values for the locale can be found by querying a virtual table based on the LOCALE table of the adapter.
  • DATEFORMAT, TIMEFORMAT, SECONDDATEFORMAT, TIMESTAMPFORMAT: The file format can use these datatypes for the date/time related values. Each can have a different format string. The syntax of the format string is the Java SimpleDateFormat syntax.
  • COLUMN: multiple entries each consist of the columnname;datatype where the datatype is any normal Hana datatype.

 

CSV only

  • COLUMN_DELIMITER: The character sequence indicating the next column. If non-printable characters are used then either one of the encodings will work \d65536, \xFFFF or \u+FFFF
    • ; Meaning the ; is the column separator, so a line looks like 2000;IT Costs;435.55
    • | using the pipe character as delimiter
    • \d09 using an ASCII tab character as delimiter
  • TEXT_QUOTES: Sometimes text data is enclosed in quotes so a column delimiter inside the text does not break the format. The line 2000;IT Costs; software related only;435.55 would appear as 4 columns as the text contains a semicolon as well. If the file was created with quotes like 2000;"IT Costs; software related only";435.55 then there is no such issue but the file parser needs to act more carefully and not just search for the next column delimiter. It needs to check if the text is inside the text quote character or outside.
  • ESCAPE_CHAR: Another way to deal with inline special characters is to escape those, like in 2000;IT Costs\; software related only;435.55. Here the \ char is an escape char and indicates that the subsequent character is to be taken literally, not as e.g. column delimiter.
  • TEXT_QUOTES_ESCAPE_CHAR: That leaves the question of how to make quotes appear inside the text, like in IT Costs; "software related" only. One option the file creator might have used is to simply use the global escape character: 2000;"IT Costs; \"software related\" only";435.55. Another popular method is to have the quotes escaped by another quote, like in 2000;"IT Costs; ""software related"" only";435.55. In that case both TEXT_QUOTES=" and TEXT_QUOTES_ESCAPE_CHAR=" are set to the " character.
  • QUOTED_TEXT_CONTAIN_ROW_DELIMITER: The default value is false and tells the parser that, regardless of any quotes or escape characters, the text inside a row never contains the row delimiter character sequence. In this case the parser can break the file into rows much faster - it only needs to search for the character sequence - and only the column parsing has to consider the escape and quote characters. If set to true, parsing will be slower.

 

Fixed Width only

  • COLUMNSSTARTENDPOSITION: In a fixed width file the column positions need to be specified for each column. Example:
    • 0-3;4-11;12-37;38-53 defines that the first 4 characters are the first column, the next 8 contain the data for the second column etc. Columns need to be in the proper order.
    • 0;4;12;38 is equivalent to above example, the last column ends with the line end
    • 0;4;12;38-53 can be used as well. In fact every single column can either specify the start and end position or just the start.
  • ROWLENGTH: In fixed width files there does not need to be a row delimiter. Often the file has one anyway, and then it needs to be stripped away. Examples, assuming the last data character is at index 53 as specified above:
    • ROWLENGTH=56 ROW_DELIMITER= would work for a file that has a row delimiter. The payload text ranges from 0..53 and is hence 54 characters long, plus two characters for \r\n. But the last column does not contain the \r\n as it is told to end at index 53.
    • ROWLENGTH=54 ROW_DELIMITER=\r\n is equivalent to the above. Each row is expected to be 54 characters plus 2 characters long. The main advantage of this notation is that COLUMNSSTARTENDPOSITION=0;4;12;38 would work as well, as the trailing \r\n is stripped away. In the previous example the last column would start at 38 but end at index 55 due to ROWLENGTH=56 and hence contain the \r\n characters.

 

 

 

Using the virtual table

 

When the virtual table is created it has 5 additional columns:

  • PATH: The path name of the file being read
  • NAME: The file name this row belongs to
  • ROWNUMBER: The line number inside the file
  • PARTITION: All rows get evenly distributed into 10 partitions (ROWNUMBER modulo 10)
  • ERROR: A string with all column parsing errors

 

fileadapter-query.png

 

Executing a select * from virtualtable will return all rows of all files the virtual table is allowed to read, that is all files or just the matching ones based on the FORCE_DIRECTORY_PATTERN and FORCE_FILENAME_PATTERN settings. Therefore each row tells which file it belongs to in the PATH and NAME columns. As this text can be quite lengthy, it might be a good idea not to select those columns unless the information is actually needed.

This is actually an important capability of the adapter: it supports projections. Therefore only the columns actually needed are transferred, parsed and converted into the proper datatype. The "top" clause is pushed down to the adapter as well; it would read the first lines and then finish.

The adapter also supports filters on PATH and NAME, hence all where clauses with equals, in-lists or like are pushed to the adapter and it can navigate to the matching files directly.
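Putting the last points together, a typical query against the v_plan virtual table defined earlier could look like the following sketch (the file name is made up to match the plan%.txt pattern):

-- Only the selected columns are transferred, parsed and converted, and the filter on NAME
-- is pushed down so the adapter opens just the matching file
select "ROWNUMBER", "YEAR", "COST_CENTER", "PLAN_AMOUNT", "ERROR"
from "v_plan"
where "NAME" = 'plan20120101.txt';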

 

When a parsing error occurs, reading still continues, but the information about all errors in that row is put into the ERROR column - things like "Cannot convert to seconddate using the provided format and locale (abcd)". Hence reading the file without checking the ERROR column is dangerous.

 

In order to simplify finding issues in the file, the line number is returned in the ROWNUMBER column. In the above example the first data row has row number 2, as the first line is the header line.

 

In case the file is very large and has lots of expensive data conversions, the virtual table provides a PARTITION column to read only a subset of the file. This can be used, for example, to conveniently execute two insert...select statements in parallel, each processing half the file.

insert into target select * from v_us_county_census where partition in (0,1,2,3,4);

insert into target select * from v_us_county_census where partition not in (0,1,2,3,4);


Since the flowgraph supports partitioned reading as well, the file adapter works well together with that feature.


OData read exits - a delightful solution


Hi all,

 

Some months back, I started a discussion on OData read modification exits, and had a bit of a moan on SCN about the fact that they didn't exist. I was building a UI5 app and needed to be able to filter out values in an OData entity on the server. I also wanted to make sure that

  1. the records returned for any given entity belonged to the user, and
  2. to ensure that one user couldn't retrieve the records of another.

Relying on OData filters was not a suitable solution. It seemed, however, that a read modification exit in the OData service definition would suit this need perfectly! So why wasn't there such a mechanism?

 

Now, to begin the story...

 

I have a table MYTABLE, with the columns ID, VALUE, USER. This table records sensitive values in the VALUE column, and each row belongs to a user, denoted by the USER column.

 

Originally, I built my OData service definition like this:

 

service {
    "SCHEMA"."MYTABLE" as "SecureValues"("ID");
}

 

Too easy. Expose my table via an entity, and read only the records I need by filtering on USER. Unfortunately, this meant that anyone with the service URL (i.e. http://server/path/to/service.xsodata) could merely replay a query from my UI5 app to get access to all of the values in this table/entity. Even if I was filtering by USER in the app, this could easily be subverted to obtain all values in a raw OData response. So that was a BIG problem.

 

Then came my moaning. Mostly about the fact that OData entity definitions support update, create, and delete modification exits, but not read. I was like, "WHY?!?!?!?!?!" In my head it seemed that would be the easiest thing ever! So I continued hunting around, mostly in vain, to find a solution. Finally, I settled on using an intermediary server (Node.js) as a reverse-proxy to filter records in requests. In effect, this intermediary performed some work on the URL to apply the username to the query string, and forwarded this on to HANA. It was inelegant and didn't really solve the security problem, but did mean I no longer had to hard code the username query string parameter into URLs. Small win. Or was it? I had just introduced another server into the chain. With more code. And more bugs... That didn't really simplify anything, especially where this was to be deployed within a corporate network. In fact, it made it less appealing to interested parties. Back to the drawing board.

 

So I then pored through the documentation for Views in HANA, and analytic view privileges surfaced as a possible solution. But these brought with them a host of other issues (not to do with the technology, but to do with my usage of them). All I was trying to do was filter out values on the server. By using an analytic view, I was also introducing sums/counts/aggregations into the process, and I didn't need any of it.

 

I JUST WANTED TO FILTER BY USER ON THE SERVER

 

Analytic views also didn't allow for unions. For a union, I'd need an Attribute view. So where I needed a union between two (or more) tables, I'd have the tables unioned in an Attribute view. Overhead. The Attribute view (read: union) would then be embedded in an Analytic view with Analytic privileges. Overhead++. The Analytic privilege used a stored procedure to identify the current user. Overhead++.

 

Basically, that didn't work. And waaaaaaay too complicated for what I needed. "There has to be an easier way".

 

While hunting around SCN for solutions to another problem, I arrived at it. It was like a bolt of lightning struck me square in the forehead. Or, for those Despicable Me fans, a light bulb moment.

 

despicable me.gif

 

 

The other problem - Configuring an App Site

I was trying to configure a new App Site for our HANA Fiori Launchpad. Not a difficult procedure, but there are a few steps. You can read up on that if you're keen...

 

Two articles from Wenjun Zhou

Creating news tiles with SAP HANA UI Integration Services (UIS)

Creating custom tiles with SAP HANA UI Integration Services (UIS)

 

And another useful article from Ian Henry

Exposing HANA Calc Views via OData to Fiori Tiles

 

Lastly, some other threads helped me get that all sorted, one of which I contributed to with a vital piece of information that I had continually ignored: Create new Fiori launchpad in CAL-HANA | SCN

 

Anyway, back to the story. So in troubleshooting HANA Fiori Launchpads, I browsed through a lot of SAP-delivered code on HANA. And I discovered a beautiful thing. I discovered the HANA table function, or table UDF for short (something Rich Heilman covered way back in 2012! - Table User Defined Functions (Table UDF) in HANA). Goodness I felt dumb.

 

And when used in combination with the very simple HDBView artifact (.hdbview), you end up with a view dependent on a user-scripted SQL procedure. In other words, you now get to write the code to read the table data! I also knew, from hunting through SAP code, that an .hdbview could be used in an XSOData entity definition. By this stage, I was well excited. "To HANA Studio!" I declared in the office, probably a bit loudly. I did get looks.

 

To recap, the mission is to filter out values that do not belong to the currently authenticated user. The app resides on HANA only, and uses HANA native form authentication to log in. Which means we can access the current, and session, user from HANA SQL Script.
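As a quick illustration, the authenticated user is available straight from SQL:

select CURRENT_USER, SESSION_USER from DUMMY;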

 

So our table function (this is not compiled code, so bear this in mind when using it yourself - you will need to adjust this if you're using CDS, for example):

 

FUNCTION "SCHEMA"."GET_VALUES_BY_USER" ( )

     RETURNS "SCHEMA"."MYTABLE"

     LANGUAGE SQLSCRIPT

     SQL SECURITY INVOKER AS

BEGIN

     RETURN SELECT * FROM "SCHEMA"."MYTABLE"

          WHERE "USER" = CURRENT_USER;

 

END;

 

That's get_values_by_user.hdbtablefunction.

Quite clearly, this function reads from MYTABLE, but crucially, only reads the values that belong to the current user. It returns a table type of MYTABLE which you now know will only contain rows that match the currently logged in user. So that's the first part.

 

Now we wrap the .hdbtablefunction artifact up into an .hdbview. This is really simple.

 

schema = "SCHEMA";

query = "select * from \"GET_VALUES_BY_USER\"()";

depends_on_view=["GET_VALUES_BY_USER"];

 

And that's get_values_by_user_view.hdbview. Note, the view must have a different name to the table function. That's why I append _view to the name.

Activate those two. Now we have a view that only returns records belonging to a user! Ha! Awesome.

 

Last step - throw it into your .xsodata service definition. You will need to specify the key now with the keyword key, because the view doesn't have one of its own (perhaps you can add this somehow, I don't know).

 

service {
    "SCHEMA"."get_values_by_user_view" as "SecureValues" key ("ID");
}


Now, when you call this .xsodata service entity, SecureValues, you will only get the rows belonging to the authenticated user. Bingo! What's more, all the OData filter parameters still work, but they will only be performed on the records returned by your view. So all of your UI5 model filters, and so on, will not need to change (other than to stop worrying about filtering on USER). One more thing - because your OData entity is no longer bound to a physical table (it's now bound to a view), you will need to implement create, update and delete modification exits for all OData entities for which you do this. No biggie. Some simple INSERT, UPDATE/UPSERT and DELETE (or delimit) SQL Script will sort that out - or XSJS if that's your thang.
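To make that last point concrete, here is a minimal sketch of what a create handler could look like. The procedure name and the VALUE column are made up for illustration (only ID and USER appear above), and this is deliberately not the exact parameter signature the .xsodata create exit expects - please check the XS OData reference for that part:

-- conceptual sketch of a "create" handler that stamps the row with the current user
-- (procedure name and VALUE column are hypothetical; adjust to your own table)
CREATE PROCEDURE "SCHEMA"."CREATE_VALUE" (IN in_id INTEGER, IN in_value NVARCHAR(100))
     LANGUAGE SQLSCRIPT
     SQL SECURITY INVOKER AS
BEGIN
     INSERT INTO "SCHEMA"."MYTABLE" ("ID", "VALUE", "USER")
          VALUES (:in_id, :in_value, CURRENT_USER);
END;
-- in the .xsodata file such a handler is referenced with a clause along the lines of
-- create using "package::CREATE_VALUE"; (again, see the XS OData reference for the exact form)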


I hope, hope, hope that others hunting around for solutions to server-side record filtering in OData will stumble across this post. I've attempted to wrap all of the articles/threads that helped me get to a workable solution into the one posting for this purpose.


Best o' luck,

Hagen

Hana Smart Data Integration - Architecture

$
0
0

This post is part of an entire series

Hana Smart Data Integration - Overview

 

 

The SDI product consists of three main building blocks: The Hana Index Server, the Data Provisioning Server and the Data Provisioning Agent.

While the first two are processes of the Hana instance itself, the agent is an external process and as such can be installed anywhere.

 

SDI architecture.png

 

SDA use case

 

As the architecture involves quite a few components, it is best to start with an example and follow its way through all the steps.

The user executes a SQL statement like "select columnA, substring(columnB, 1, 1) from virtual_table where key = 1" (a sketch of how such a virtual table is created follows the numbered list below).

SDI architecture - SDA.png

  1. This SQL statement is part of a user session in Hana and enters the SQL Parser. The first thing the parser needs is the required metadata - does a virtual table of that name even exist, what are its columns, etc. All of this is stored in data dictionary tables in Hana.
  2. The SQL Optimizer tries to push down as much of the logic as possible. To keep adapter development simple, it cannot simply push down the full SQL; instead it looks at the metadata to see what kinds of statements the adapter declared it supports. In this example the adapter shall be a very simple one: all it supports are select statements reading all columns, no functions, and simple equal where clauses. Hence the optimizer will rewrite the statement into something like "select columnA, substring(columnB, 1, 1) from (select * from virtual_table where key = 1)". This statement returns the same result as the original one, but now it becomes obvious which parts are done inside the adapter and what has to be done in Hana. The inner "select * ... where ..." is sent to the adapter and the adapter will return the row with key = 1 but with all columns. Hana will then take that row, read columnA and columnB, and return columnA plus the result of the substring function to the user.
  3. The SQL Executor is responsible for getting the actual data, so it will tell the Federation Framework to retrieve the data of the optimized SQL.
  4. The Federation Framework is responsible for combining the data coming from Hana and the remote system. In this simple example the SQL select reads remote data only, so it will call the appropriate methods in the adapter: an open call to establish the connection to the source (if not done already for that session), an executeStatement call to provide the inner SQL to the adapter, and then fetch calls until there is no more data.
  5. Since the Federation Framework cannot talk via the network to that remote adapter directly, it has a Delegator. This component calls the equivalent methods in the Adapter Framework of the Data Provisioning Server.
  6. The Adapter Framework itself sends the command via the network to the agent, where the corresponding methods in the Adapter are called. The responsibility of that component is to route the commands to the correct adapter and deal with error situations such as the agent not being reachable.
  7. The adapter acts as a bridge. When it gets an open call with all the parameters provided by the remote source object, it should open a connection to the source system. It receives the SQL command in the executeStatement method and translates it into the equivalent call for the given source. When its fetch method is called, the adapter should return the next batch of data by reading the source data and translating the values into Hana data type values.
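As a side note, the virtual table used in this example would have been created beforehand with a statement along these lines; the remote source, database, schema and table names here are purely illustrative:

-- expose a remote table as a virtual table in Hana (names are placeholders)
CREATE VIRTUAL TABLE virtual_table
  AT "MY_REMOTE_SOURCE"."<database>"."<schema>"."REMOTE_TABLE";

-- afterwards it can be queried like any local table, e.g. the statement from the example
SELECT columnA, substring(columnB, 1, 1) FROM virtual_table WHERE key = 1;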

 

Realtime push

 

The definition of which remote tables the user wants to subscribe to follows the same path as SDA from step 1 to 5. A SQL statement is executed - create remote subscription on ... - and all the validations (does the table exist, does the adapter support the required capabilities, etc.) are performed. A sketch of these statements follows the numbered list below.

SDI architecture - Realtime.png

  1. The interesting part starts when the subscription is made active. With the "alter remote subscription ... queue" command the Realtime Provisioning Client is told to start the subscription. There the first checks are performed, e.g. preventing an already started subscription from being started again.
  2. The Realtime Provisioning Manager inside the DP Server decides what needs to be done in order to get the change data. Basically this means one of two things: either telling the adapter to start sending the changes or, if the adapter is already sending the changes for another subscription and its data can be reused, simply consuming that stream as well.
  3. If the adapter has to be notified about the request to get change data, the Adapter Framework forwards that request to the agent and from there to the adapter.
  4. The adapter does whatever needs to be done in order to capture the requested changes. This is really source specific: for databases it might mean reading the database transaction log, for other sources it could be implemented as a listener to which the source pushes changes, and in the worst case the adapter has to poll the source for changes frequently. From then on the adapter keeps sending all change information for the requested data back to the Adapter Framework and from there into the Change Data Receiver.
  5. The Change Data Receiver has to deal with various situations. If the subscription is still in queue state, the initial load of the table is in progress and hence no change information should be loaded into the target yet; the receiver has to remember these rows somewhere, the Change Data Backlog Store. In case the source adapter does not support re-reading already sent data, all received data is put into that Backlog Store as well, to allow the source to send more data even if the data has not yet been committed in Hana. In all other cases the receiver hands the data to the Applier for processing.
  6. The Change Data Applier loads all data into Hana in the proper order and using the same transactional scope in which the data was changed in the source. It is the Applier that deals with the case where one change record is used for multiple subscriptions and then loads the data into all targets. In case the target is a table, it interprets the received opcode (insert, update, delete, ...) and performs the proper action on the target table. And in case of a Task, it makes the incoming change set unique per primary key before sending the data to the Task object (insert+update+update = insert row with the values of the last update statement).
  7. The Applier currently creates regular SQL statements like "insert ... select from ITAB" or "start task using parameters ...", and these statements are executed by the Index Server like any other SQL statement.
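The subscription lifecycle mentioned above boils down to a handful of SQL statements. The following is only a sketch with illustrative object names (a virtual table V_MYTABLE and a target table T_MYTABLE); see the SQL interaction page linked further down for the authoritative syntax:

-- subscribe to changes of a virtual table and apply them to a target table
CREATE REMOTE SUBSCRIPTION SUB_MYTABLE
  ON V_MYTABLE
  TARGET TABLE T_MYTABLE;

-- start capturing changes; they are buffered while the initial load runs
ALTER REMOTE SUBSCRIPTION SUB_MYTABLE QUEUE;

-- ... perform the initial load of T_MYTABLE here ...

-- apply the buffered changes and keep applying new ones from now on
ALTER REMOTE SUBSCRIPTION SUB_MYTABLE DISTRIBUTE;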

 

Data Dictionary Metadata

 

One important aspect of the architecture is that all, really all, information is stored in Hana itself and nothing in the adapter. The reason is simple: there might be multiple agents for the same source system for failover, the adapter/agent might stop working and upon restart needs to know where to pick up the work, or the adapter might get reinstalled.

All this data is stored in Hana tables and can be quite interesting when debugging a problem. Usually these tables are not queried directly; instead a public synonym pointing to a view, which has the row level security implemented inside, is used. Here is a list of such objects (a few example queries follow the list):

  • AGENTS: Returns the list of all known Data Provisioning Agents and how to reach them.
  • ADAPTERS: Returns the list of adapters known by the Hana database. New entries are added whenever an agent does deploy an adapter previously not known. When one of these adapters has the flag IS_SYSTEM_ADAPTER = true, then it is an Adapter based on ODBC and executed by the Index Server. All other adapters are the SDI adapters.
  • ADAPTER_LOCATIONS: As one adapter can be hosted on one agent but not the other or on multiple agents, this table tells the relationship.
  • REMOTE_SOURCES: For each created remote source one line is returned.
  • VIRTUAL_TABLES: All created virtual tables can be found in there.
  • VIRTUAL_TABLE_PROPERTIES: Additional metadata the adapter requires at runtime are stored in Hana and can be seen via this view.
  • VIRTUAL_COLUMNS: The columns of each virtual table.
  • VIRTUAL_COLUMN_PROPERTIES: Additional metadata can be added to columns as well.
  • REMOTE_SUBSCRIPTIONS: The list of all remote subscriptions and their state.
  • REMOTE_SUBSCRIPTION_EXCEPTIONS: In case a subscription has an error, the exception number and the reason can be found here and using the exception id a recovery can be triggered manually.
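Purely as an illustration, a couple of typical checks against these public synonyms look like this; since the exact column lists can differ between releases, the sketch sticks to SELECT *:

-- which agents does this Hana instance know, and where is each adapter hosted?
SELECT * FROM AGENTS;
SELECT * FROM ADAPTER_LOCATIONS;

-- state of all realtime subscriptions, plus any exceptions waiting for a manual retry
SELECT * FROM REMOTE_SUBSCRIPTIONS;
SELECT * FROM REMOTE_SUBSCRIPTION_EXCEPTIONS;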

 

The SQL statements used can be found here: Hana Adapter SDK - Interaction via SQL

 

Monitoring

 

Using the above metadata tables, all monitoring can be done with pure SQL commands. But there is also a Hana Cockpit based set of screens, shipped as a separate distribution unit found on Service Marketplace. It is very handy to install these as well, since they additionally show the status of the Task framework, the calculation engine based data transformation framework that is part of the SDI solution.

 

Agent Configuration Tool

 

To help with setting up the agent and the adapters, the Agent Configuration Tool, which is part of any agent installation, can be used. It executes the SQL commands for adding agents, adapters, etc. and edits the local configuration files.

 

SAPDB Adapters

 

Above, the only adapters explained were the SDI adapters. If the ODBC based adapters of the Index Server are used, then no Data Provisioning Server, agent, etc. is required. Instead the Federation Framework accesses these ODBC adapters via the SAPDB Adapters component. Although more and more adapters will be moved to SDI, this path will continue to exist, but for SAP-owned databases only, Sybase ASE and IQ for example. These are databases where going through multiple hops and an SDK would introduce limitations of some kind.

 

C++ Adapters

 

Speaking of multiple hops and SDK, one question might be why the Agent is required even if the source is local. For this case there is a C++ version of the SDK available as well and such an adapter can be accessed by the Data Provisioning Server directly. The only adapter implemented currently is the OData adapter. And as this SDK is the regular Adapter SDK, such an adapter could be deployed on the Agent as well. Then a local source would be accessed directly via the DP server, a remote source via the agent.

The normally used Adapter SDK is the Java version, however, and as Hana is written in C++ it cannot run the Java code directly; there has to be some inter-process communication, which the Agent handles. Hence, for a scenario where the source is local, installing the Agent on the Hana box could be an option, but the Agent is still needed.


SAP HANA Text Analysis - Custom Configuration/Dictionary - Dynamic Query

$
0
0

The base idea was to share my exploration of SAP HANA Text Analysis through this blog. In the process, I have used a custom configuration and a custom dictionary to build my full text analysis index tables, and used a dynamic query in the procedure to fetch a variable number of columns from a table.

 

If there are no entries in the text analysis table, the query returns the fields maintained in the configuration table; otherwise it adds to the query the new column that was requested and stored in the text table. The idea was that the same procedure should return the data with X or Y columns without changing the procedure code.



YouTube: https://www.youtube.com/watch?v=XzC5vpinNNM


 

STEP 1: Creating a Custom Configuration.hdbtextconfig:


Custom dictionary is used to define our own tokens or variants. Variants will be categorized by TA_TYPE.

In this case, I have built a custom dictionary which is based on database table and fields.

  1. In order to do this, I have created a new file customconfig.hdbtextconfig
  2. I have copied the code inside EXTRACTION_CORE_VOICEOFCUSTOMER.hdbtextconfig
  3. Added the below line in my custom configuration file, customconfig.hdbtextconfig

 

 

<string-list-value>amohas97.session.textproj::prod.hdbtextdict</string-list-value>

Path for TA:

Path for TA.jpg

 

 

 

Code in Custom Configuration:

Code - VoC.jpg

 

 

 

STEP 2: Creating a Custom Dictionary .hdbtextdict:


I have added the lines below to the file to create a custom dictionary. When entries are populated in the table that has a full text index created with customconfig.hdbtextconfig, text analysis reads this dictionary to identify the TA_TOKEN and TA_TYPE. I have maintained the table name "amohas97.session.data::srnum" and the field name "status" in the custom dictionary under different categories.

 


<?xml version="1.0" encoding="utf-8" ?><dictionary xmlns="http://www.sap.com/ta/4.0">    <entity_category name="querytable">      <entity_name standard_form="amohas97.session.data::srnum">            <variant name ="serial numbers"/>      </entity_name>  </entity_category>          <entity_category name="queryfields">      <entity_name standard_form="queryfields">            <variant name ="status"/>      </entity_name>  </entity_category>      </dictionary>



STEP 3: Tables & Text Analysis full text index:


  1. Table for default selection "amohas97.session.data::srnumpr".

 

        The table was created to maintain the default query. It has 3 fields: schema name, table name, and the default fields. This query gets executed when there are no entries in the text analysis table.

 

srnumpr.jpg


    2. Table to fetch the data. Maintain some sample records. In this case "amohas97.session.data::srnum"

srnum.jpg



    3. Table to store the requests for the selection of the fields from the above table. In this case "WRK_SCH"."amohas97.session.data::sptxt"

 

    4. Create a full text index on the table "WRK_SCH"."amohas97.session.data::sptxt". This table will hold the newly requested FIELDS. Refer to the custom configuration file when creating the index (a sample query against the generated index table follows the CREATE statement below).

 



CREATE FULLTEXT INDEX TXTIDX2 ON "WRK_SCH"."amohas97.session.data::sptxt" ("TTXT")
CONFIGURATION 'amohas97.session.textproj::customconfig'
TOKEN SEPARATORS '\/;,.:-_()[]<>!?*@+{}="&'
TEXT ANALYSIS ON;
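Once the index is active and rows arrive in the source table, the generated text analysis table (here "WRK_SCH"."$TA_TXTIDX2") can be inspected directly, for example:

-- look at the tokens extracted for the two dictionary categories used in this example
SELECT TA_TOKEN, TA_NORMALIZED, TA_TYPE
  FROM "WRK_SCH"."$TA_TXTIDX2"
 WHERE TA_TYPE IN ('querytable', 'queryfields');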



STEP 4: Procedure:

  1. Get the default query maintained in the "amohas97.session.data::srnumpr" to fetch the data from "amohas97.session.data::srnum"
  2. Check whether there are entries in the text analysis table
  3. If entries exist, get the values of TA_TOKEN & TA_NORMALIZED. These entries will be the values of the table and the field.
  4. Build a new query using the step 3 information
  5. Execute the query which was built in step 4


CREATE PROCEDURE dquery ( )
     LANGUAGE SQLSCRIPT as
     --SQL SECURITY INVOKER
     --DEFAULT SCHEMA <default_schema_name>
     --READS SQL DATA AS
BEGIN
declare lv_query nvarchar(100) := null;
declare lv_new_q nvarchar(100) := null;
declare lv_old_q nvarchar(100) := null;
declare lv_q_fin nvarchar(1000) := null;
declare lv_table nvarchar(100) := null;
declare lv_fields nvarchar(100) := null;
declare lv_count nvarchar(3) := null;
/*****************************
     Write your procedure logic
*****************************/
-- Configurable table where the fields are maintained for default selection
SELECT QUERY INTO lv_old_q FROM "WRK_SCH"."amohas97.session.data::srnumpr"
WHERE schema_name = 'WRK_SCH' AND TABLE_NAME = 'amohas97.session.data::srnum';

-- Index table
select count(*) into lv_count from "WRK_SCH"."$TA_TXTIDX2" where ta_type = 'querytable';

if lv_count > 0 then
    -- To get the table name: QUERYTABLE is the category name, the text "serial numbers" is the token that identifies the table
    select top 1 ta_normalized into lv_table from "WRK_SCH"."$TA_TXTIDX2" where ta_type = 'querytable' order by time desc;
    -- To get the field name: QUERYFIELDS is the category name, the text "status" is identified as the field
    select top 1 ta_token into lv_fields from "WRK_SCH"."$TA_TXTIDX2" where ta_type = 'queryfields' order by time desc;
    -- To get the COLUMN name, i.e. to check whether the column exists
    select column_name into lv_new_q from TABLE_COLUMNS where schema_NAME = 'WRK_SCH' and table_name = lv_table and upper(comments) = upper(lv_fields);
    -- Concatenate the field name with the existing query
    lv_query := lv_old_q || ',' || lv_new_q;
    -- Prepare the final query
    lv_q_fin := ' select ' || lv_query || ' from "WRK_SCH"."amohas97.session.data::srnum" ';
else
    -- Existing query in the table
    lv_q_fin := ' select ' || lv_old_q || ' from "WRK_SCH"."amohas97.session.data::srnum" ';
end if;

-- Execute the QUERY
EXECUTE IMMEDIATE (:lv_q_fin);
END;



STEP 5: Execution:

 

Run the procedure in a SQL console: CALL "amohas97.session.ta::dquery"();

When we first execute the procedure, the query returns the data/entries with three columns from the table "amohas97.session.data::srnum", as the default selection of the fields is maintained in the "amohas97.session.data::srnumpr" table. In that table I have maintained only 3 columns, so the query returned only 3 columns.


r1.jpg


1.      Now, in the SQL console, I will insert a new entry with a statement in English into the "amohas97.session.data::sptxt" table. I have created a full text index on this table, so whenever an entry is created in it, my text analysis table will get entries with categorized TA_TOKEN and TA_NORMALIZED values.


insert into "."amohas97.session.data::sptxt""WRK_SCH"."amohas97.session.data::sptxt" values('1','shahid','give me the status of serial numbers');

This is how the text table looks like

r1_sptxt.jpg



2.      Now run the procedure again; this time the procedure will return the data with 4 columns.

 


r2.jpg

 

 

Thank you for reading.

what does HANA and MLK have in common?

$
0
0

Today, we celebrate an important day in US history, Martin Luther King's birthday (MLK), which is officially on January 15th; however, in the US it is celebrated on the 3rd Monday in January. MLK was an important figure in our history, amongst other things because in August 1963 he delivered the famous speech called "I have a dream".

 

Similarly to MLK’s speech, many IT professionals have had a dream. A dream that one day, data from different sources, colors, structures, and backgrounds can all be gathered together, analyzed, and served under one central repository. That central repository which now houses data together (persisted or virtually) is called SAP HANA.

 

SAP, the HANA team and other SAP technologies have made this “dream” a reality for the IT world and the enterprises.

 

1) SAP Landscape Transformation (SLT) is a tool provided by SAP that facilitates near real time data replication from sources such as ECC, MSSQL, Oracle (and others) into HANA. Also, via SLT we can create virtual models and utilize those models in HANA.  http://scn.sap.com/community/replication-server/blog/2013/06/10/starter-blog-for-sap-lt-replication-server

 

2) Smart Data Access– SDA makes it possible to access remote data, without having to replicate the data to the SAP HANA database beforehand. https://help.sap.com/saphelp_nw74/helpdata/en/bb/df686dc0d94f779b5c11d06753da95/frameset.htm

 

3) Smart Data Integration - SAP HANA smart data access (SDA) functionality is further enhanced in SAP HANA smart data integration to provide virtual access for non-critical data https://blogs.saphana.com/2015/09/11/simplifying-sap-hana-data-integration-landscape/

 

4) Data Services– is an ETL tool that provides a solution for data integration, data quality, data profiling, and text data processing that allows you to integrate, transform, improve, and deliver trusted data to critical business processes

http://help.sap.com/bods

 

5) HANA Extended Application Services, aka HANA XS. With HANA XS, we are able to serve data via http(s) to other sources. Moreover, XS is the engine which allows web applications to be built and with the help of sapui5 (JS library) we can build modern and responsive applications. Further, the consumption of external data is also possible via outbound connectivity.

 

Please feel free to ask questions about any of the HANA topics above.

Thank you for reading this short blog and happy MLK day!

Visualizing Data with Jupyter and SAP HANA Vora

$
0
0

In this blog post, we will explain, how to set up Jupyter as a browser-based frontend to easily query and visualize your data.

 

Jupyter is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text, see Project Jupyter.

 

This tutorial consists of two parts.

You are currently reading part one, which explains the basic steps how to set up and configure Jupyter.

It is essential to complete part one before continuing with part two!

 

Part two demonstrates how to run queries in Python and how to visualize data using matplotlib.

 

 

 

Prerequisites

 

Before starting this tutorial, please make sure your cluster is up and running.

You should have at least once started the spark shell and run some queries to test its functionality.

 

To complete part 2 of this tutorial, you need sample data, which can be downloaded here:

https://mdocs.sap.com/mcm/public/v1/open?shr=LMV6pH_012dtA13N-rtiwGZUAnulqt2zX4MSfGXQ51w

This file contains TPC-H sample data at scale factor 0.001.

Please download the file and extract its content to your HDFS.

 

Alternatively, you may generate the sample data on your own by downloading and compiling DBGEN:

http://www.tpc.org/tpch/tools_download/dbgen-download-request.asp

 

 

Installation

 

To get started, we need to install several packages that should come bundled with your Linux distribution.

Please run the following commands on a RedHat-based machine:

sudo yum install python-pip
sudo yum install python-matplotlib
sudo yum install gcc-c++
sudo pip install --upgrade pip
sudo pip install jupyter

You may install Jupyter on a jumpbox outside the cluster, for example, on an Ubuntu-based system.
Then, the first two commands are slightly different:

sudo apt-get install python-pip
sudo apt-get install python-matplotlib
sudo apt-get install g++
sudo pip install --upgrade pip
sudo pip install jupyter

 

 

Environment

 

Next, we need to set some environment variables to inform Jupyter about our Spark and Python settings.

Please adjust the paths and version number below according to your local environment, then either run these commands on the shell as the "vora" user, or put them in your ".profile", to have them loaded every time you log in:

export PYTHONPATH=/home/vora/vora/python:$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip
export ADD_JARS=/home/vora/vora/lib/spark-sap-datasources-<version>-assembly.jar
export SPARK_CLASSPATH=$ADD_JARS
export PYSPARK_SUBMIT_ARGS="--master yarn-client --jars $ADD_JARS pyspark-shell"

 

 

Configure Jupyter

 

Please run this command as the user "vora" to generate the initial configuration for Jupyter:

jupyter notebook --generate-config

Now, open an editor and edit the file "~/.jupyter/jupyter_notebook_config.py"

Since we are running on a remote machine with no Window Manager, we configure Jupyter to not open up a webbrowser on startup.

Please uncomment the line

# c.NotebookApp.open_browser = False

Uncomment means removing the pound sign at the beginning of the line.

 

To be able to access Jupyter from remote, we need to uncomment the following line as well:

# c.NotebookApp.ip = '*'

Notice: This will give everyone access to the Jupyter webinterface.

In a production environment, you might want to set up access control.

Please refer to this guide, how to secure your Jupyter installation:

Securing a notebook server

 

After applying the above changes to the config file, please save your changes and close the editor.

 

Notice:

Usually, cloud providers and IT departments are very restrictive and may block access to Jupyter's TCP port (default: 8888).

Please make sure to include a rule in the firewall configuration allowing access to the port on the machine running Jupyter.

Consult the provider's documentation or your IT department for details.

Running Jupyter

 

To run Jupyter, first, create an empty folder where you want to store your notebooks, and go into that folder.

Then run the following command as the user "vora", e.g.:

mkdir notebooks
cd notebooks
jupyter notebook

This will start a Jupyter notebook server, listening on port 8888 for connections.

The console output will be similar to this:

[I 09:39:29.176 NotebookApp] Writing notebook server cookie secret to /run/user/1000/jupyter/notebook_cookie_secret
[W 09:39:29.200 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
[W 09:39:29.200 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using authentication. This is highly insecure and not recommended.
[I 09:39:29.204 NotebookApp] Serving notebooks from local directory: /home/d062985/notebooks
[I 09:39:29.204 NotebookApp] 0 active kernels
[I 09:39:29.204 NotebookApp] The IPython Notebook is running at: http://[all ip addresses on your system]:8888/
[I 09:39:29.204 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).

Now we can fire up a webbrowser on another machine and navigate to the URL of the host running Jupyter, e.g. http://jumpbox.yourcluster.internal:8888/

You should see a website like this:

0.png

 

By clicking New, you can start a new notebook, that is waiting for your input:

1.png


After clicking, the empty notebook will open up:

2.png

 

Now, we can start submitting queries by entering the query into a paragraph and hitting the play button on top.
This will then execute the snippet in the background and return results to the webpage.

 

 

Submitting queries and plotting data

 

The final part of this tutorial will take place in Jupyter.

Please download the attached Jupyter Notebook "PythonBindings.ipynb.zip", unzip it, and copy it to the notebook folder on your machine running jupyter.

Then, open the file in the Jupyter webinterface in your webbrowser.

DB2 to HANA Migration -DB2 Translate () function achivement in SAP HANA

$
0
0

DB2 to SAP HANA Migration - DB2 TRANSLATE() function in SAP HANA


DB2 Query:

 

select PHONE_NUMBER,

(TRANSLATE(PHONE_NUMBER,'','1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz()-/')) as PHNE

from  "Phone_test"


In the above query, wherever it finds any character from the string '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz()-/' in the phone number, it replaces that character with a space (' ').


Result (output):


Db2 Image.PNG

In SAP HANA there is no TRANSLATE function or a similar function available, but there is a workaround using regular expressions for this scenario (a sketch follows the screenshot below).

SAP HANA query with output:

HANA Image.png
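Since the screenshot is not reproduced here, the following is only a sketch of the kind of regular expression workaround meant above, using HANA's REPLACE_REGEXPR function; the character class mirrors the DB2 from-string, and you may need to adjust it (and check the availability of the function on your HANA revision):

SELECT PHONE_NUMBER,
       REPLACE_REGEXPR('[0-9A-Za-z()/-]' IN PHONE_NUMBER WITH ' ' OCCURRENCE ALL) AS PHNE
  FROM "Phone_test";
-- every digit, letter, parenthesis, slash or hyphen is replaced by a blank,
-- which is what the DB2 TRANSLATE call above does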

 

Thanks,

Asif

SDA Setup for SQLServer 12

$
0
0

Smart Data Access (SDA) is a slightly older feature in HANA. I recently got a chance to set it up for SQL Server 12. I read a lot of documents/blogs/YouTube videos but still ran into issues and couldn't implement it in the first go. It took me two or three iterations to get it installed correctly. I decided to put down the exact steps one should follow to implement SDA for SQL Server. I believe with these steps one should be able to set up SDA for SQL Server in the first go without wasting much time.


SDA Definition:

 

SAP HANA smart data access enables remote data to be accessed as if it were stored in local tables in SAP HANA, without copying the data into SAP HANA. Specifically, in SAP HANA you can create virtual tables which point to remote tables in other data sources. It is possible to write queries in HANA combining HANA native tables and virtual tables.


SDA Architecture:


fig_1.png


We are configuring HANA Server SH1 and connecting to SQLSERVER and another HANA Database.


Steps to Configure SDA :

 

a.      Linux Users

b.      Download / INSTALL unixODBC Driver Manager

c.      Download / INSTALL SQL Server Drivers

d.      SETUP .odbc.ini file

e.      Test

f.      Setup SDA for SQL server in HANA STUDIO

 


 


a. Linux Users

I am sure everyone knows about users, but I just wanted to reiterate. When you install HANA with the root user, it automatically creates a user <SID>User. If you don't know the <SID>User, you can run the following command to find it.


>> cat /etc/passwd

 

Root User:

 

With the root user you will set up the complete configuration of the HANA system, including unixODBC and other ODBC drivers.

 

<SID>user:

 

The <SID>User may not have permissions to set up configuration files unless given special permissions. Generally, all HANA configurations are done with the root user.

HANA Studio is owned by the <SID>User, so some of the configuration files you created as the root user cannot be seen by the other users on the system.

 

 

b. Download / INSTALL the unixODBC 2.3.0 Driver Manager

      

Download the unixODBC driver manager version 2.3.0 from:

 

http://www.unixodbc.org/


fig_2.png

 

 

Once you have downloaded this driver (unixODBC-2.3.0.tar.gz), move it to the HANA server (you can use FileZilla or other file transfer software) and go to the directory where you copied the file.

 

Login as root user and go to the directory where you copied the drivers.

>>  gunzip unixODBC-2.3.0.tar.gz

>>  tar xvf  unixODBC-2.3.0.tar

>>  ls 

 

This will show the unixODBC-2.3.0 folder. Type the following commands to install the unixODBC driver manager:

 

>> cd unixODBC-2.3.0

>> ./configure

>> make

>> make install

 

Once the binaries are built, check if the driver manager installed properly:

>> isql --version

The output should be unixODBC 2.3.0


c. Download / Install SQL Server Drivers:

 

Download MS SQL SERVER Driver for Suse Linux:


(google for other flavors of Linux drivers for SQLSERVER)

 

http://www.microsoft.com/en-us/download/details.aspx?id=34687


fig_3.png

Once downloaded, copy the SQL Server drivers to the HANA server (maybe to the same location where you downloaded/installed the unixODBC file, using FileZilla or other file transfer software).

 

INSTALL MS SQL server drivers:  unzip the file:

 

Login as root user

>> gunzip msodbcsql-11.0.2260.0.tar.gz

>> tar -xvf msodbcsql-11.0.2260.0.tar

 

This will create a folder named msodbcsql-11.0.2260.0 (in this example under /mnt/drivers).

To check whether the SQL Server drivers were extracted properly, change to the msodbcsql directory:

 

>> cd /mnt/drivers/msodbcsql-11.0.2260.0

 

Verify the version and install:

 

>> ./install.sh verify


fig_4.png


>> ./install.sh install


fig_5.png

Check again to see if it’s installed.

 

>> ./install.sh verify

fig_6.png


d. SETUP .odbc.ini file

 

Check the odbc.ini file by using the command

 

Login as root user

 

>> odbcinst -j

 

fig_7.png


The odbc.ini file is visible at two locations: one in the root user's home directory (/root/.odbc.ini) and one at /etc/unixODBC/odbc.ini.

 

Note: if you want all users to have access to the .odbc.ini configuration, add the entries to the "/root/.odbc.ini" file as the root user.

 

 

>> vi  /root/.odbc.ini       -- open the file, enter the following, and save it

 

----------------------------  MS SQL SERVER DRIVERS --------------------------

[MSSQL]

Server=SQLSVR,1433

Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2260.0

 

----------------------------------------------------------------------------------------------

 

 

e.  TESTING

 

Log in to the HANA server with the root user (in PuTTY) and test connectivity to SQL Server:

 

Login As root user

>> isql  -v  MSSQL sa  Welcome1


fig_8.png

This works for root user .


f.   Add data source from HANA studio

 

fig_9.png

  1. Error: SAP DBTech JDBC (403): internal error: Cannot get remote source objects: [unixODBC] Data source name not found and no default driver specified.

 

If you see the above error, it suggests there is a problem with the unixODBC drivers, but the error message is misleading. It means that HANA Studio cannot see some of the configuration files in the HANA home directory. Remember, HANA Studio is owned by the <SID>USER.

 

Do the following

 

Log in to the HANA server as the root user in PuTTY

 

>> cp /root/.odbc.ini  /usr/sap/<SID>/home    (ex:  cp /root/.odbc.ini  /usr/sap/SHI/home)

 

Log out of the HANA server, go back to HANA Studio, and try to connect to SQL Server again. This worked for me. Once the connection succeeds, you can expose remote tables as virtual tables with plain SQL, as sketched below.
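A minimal sketch, assuming the remote source was named MSSQL and using an illustrative schema/table; adjust all names to your own landscape:

-- create a virtual table pointing at a table in the remote SQL Server source
CREATE VIRTUAL TABLE "MYSCHEMA"."V_ORDERS"
  AT "MSSQL"."<database>"."dbo"."ORDERS";

-- query it like a local table, or join it with native HANA tables
SELECT TOP 10 * FROM "MYSCHEMA"."V_ORDERS";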

 

   

Configuring another HANA server (server SID: FDC)

 

Login to HANA Server (SH1 ) as root user

 

>> vi /root/.odbc.ini

----------------------------  MS SQL SERVER DRIVERS --------------------------

[MSSQL]

Server=SQLSVR,1433

Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2260.0

 

--------------------------------- HANA SYSTEM (FDC) ---------------------------------------

Server=FDC:30015

Driver=/usr/sap/FDC/hdbclient/libodbcHDB.so

 

>> cp /root/.odbc.ini  /usr/sap/SHI/home ( For HANA Studio to see this file )

 

 

 

Connecting Other Databases: I will update this section when I configure them.

 

 

 

Conclusion:

 

Now we have connected to SQL Server and to a different HANA server. This worked for me and I hope it works for you as well on your first go. Whenever I add other databases as sources, I will update the process with my findings in this blog. Thanks for reading.

I-O-T : INTERNET Of THINGS

$
0
0

This is my 2nd blog on HANA discussing new applications built on the SAP HANA platform, and this one covers one of today's hottest topics: applications for IoT.

We are all foreseeing a new connected world where sensors and networking chips are everywhere. In this world, everything is digitized (from light bulbs, to the human body, to industrial equipment) and real-time data continuously flows across the network and through software systems. Applications that harness these data flows can greatly transform the way we live and do business. These IoT applications will allow us to create unprecedented new personalized offers and services, innovative business processes and models, and greater business opportunities. For example, adaptive logistics applications can leverage sensors that deliver geo-spatial information on truck, train, and vehicle movement to help businesses optimize operations at their logistic hubs, with predictive maintenance on capital equipment or real-time fleet management. SAP does provide multiple ready-to-use IoT solutions today, and in this blog I will focus on how SAP HANA advanced data provisioning technology can help you develop new IoT applications to satisfy your unique business requirements.

 

 

 

 

Two of the most fundamental characteristics of IoT applications are the "continuous data input" and the size of the data generated. Additionally, not all data is relevant and as such it needs to be selectively processed and stored. SAP HANA advanced data provisioning technology helps your applications cope with these new data flows. We actually introduced SAP HANA advanced data provisioning technology and how it adapts to different scenarios in the previous blog. The following diagram shows different options, such as real-time data replication among relational databases (SAP Replication Server), complex and high-performance extract, transform and load operations (SAP Data Services), and data exchange with Business Suite (SAP Landscape Transformation), all used together. In this blog, I want to emphasize the following two features:

 

 

1. Smart Data Streaming: As mentioned, not all streaming data is useful, but it is typically critical to process and store out-of-range data immediately. For example, if a patient's pacemaker data suddenly shows an irregular heart beat, the nurse might need to take immediate action. In such situations we should process the data before storing it.

 

 

 

SAP HANA provides the smart data streaming component that allows the processing of high-velocity, high-volume event streams in real time. Specifically, with smart data streaming your application can filter, aggregate, and enrich raw data before storing it into your database. Depending on the situation, the filters you apply to actively monitor the data stream can be very simple or include a complex set of rules that aggregate or correlate incoming events. In this way, you can truly take advantage of live streams of data and implement continuous intelligence solutions.

 

 

2. SAP Remote Data Sync: In some situations, sensors are not connected to the Internet all the time, and applications have to find ways to store data at the edge of the network and periodically synchronize it with other data stores for consistency. For example, with big drill devices used in underground wells, how can we connect devices when the network is unreliable? So, IoT applications might need to interact not only with incoming streams but also with a number of different applications that deal with streaming data locally and store streaming data in remote data repositories. Efficiently moving data between SAP HANA and other applications or data stores is critical. This remote data sync capability was recently released in SPS10. The feature leverages SAP SQL Anywhere/UltraLite databases to capture data at thousands of locations at the edge of the network and provides transactionally consistent data synchronization to the SAP HANA database. In this way your IoT application can easily support an offline operation mode.

 

 

 

HANA scenario for IoT

 

There are typically 4 types of IoT scenarios. Let me propose this taxonomy:

 

 

M2A -> Machine to Analytic -> When a “Thing” or “Machine” needs to be monitored, it typically requires taking a look at sensor information. For example, in Oil & Gas, you would monitor the pressure of a well by reading the sensors of parts that regulate pressure. In automotive—let’s take race cars–you would monitor the engine temperature and status of electrical equipment such as lights. There are many scenarios where you need real-time monitoring of a device. I would even argue that the “Find My Phone” app for an iphone is a simple version of this. The thing is the phone and the analytic is a map that tells you where your phone is. This doesn’t require storing large amounts of data in Hadoop or looking back at data. The value is seeing what’s going on now. This could even be things like Dropcam where the machine is a video camera and the analytic is a “screen” for viewing a video. Lots of home security applications here. Also, there are modern analytical use cases leveraging predictive for fault tolerance or predictive maintenance, but you get the idea that it could be operational analytics to see what’s going on now or predictive analytics to predict future pattern.

 

 

M2M -> Machine to Machine -> This is the scenario people talk about the most. Imagine your alarm clock going off and it triggering your coffee maker and toaster to get started making breakfast. Very cool! Very consumer oriented. However, this is not new. In reality, Machines have been talking to each other (without human intervention) for decades. In manufacturing-back in the 1980s– there was always a conveyor belt and a mechanical arm. If the mechanical arm broke, the conveyor belt stopped. If the belt broke, the arm stopped. So there you go – a machine to machine scenario. It wasn’t using the internet, but does anyone care? This was two machines talking to each other. What’s different now, with the introduction of the internet and some standardization, is that the type and amount of data you can collect is utterly enriched and it is much cheaper for two machines to talk to each other, which makes the technology cheap enough for consumer scenarios (like the alarm clock/toaster scenario). Cheaper is a result of adoption/scale as well as Internet connectivity. To put this into perspective I’ll ask the question you’ve all heard before: “if a tree falls in a forest and no one is there to hear it, does it make any sound?”. The fact that two machines talked in the past was limited to those two machines and possibly the operators on the shop floor. Now, everyone from the floor operator to the business executive can be made aware of the event (possibly with different alert levels) and suitably adapt their operations and processes.

 

M2D -> Machine to Data Lake -> This one, I believe, is where most people go with IoT. They believe that machines are collecting a ton of data and if they don’t capture and store that data, they won’t be able to derive value in the future because they missed collecting all of it. Sometimes the idea and value of capturing the data is well understood up front, but most often, I find that people don’t identify the value of capturing information up front and therefore, spend a ton of time/money building out data lakes to look for future value. It’s a “fear driven” approach. I do find that a lot of folks who buy Hadoop do this for their IoT driven Big Data Scenario. Then they unleash data scientists and try to find the magic value. Usually, something meaningful can be derived, but it takes time.

 

M2P -> Machine to Process -> The last scenario, where I believe there is a ton of value, is when machines can communicate directly with business processes and be part of that process. What if your washing machine broke and could trigger a warranty claim or a service technician call if you’re not in warranty? 9 times out of 10, the “process side” of this is already handled in an SAP system. The question is connecting the machine to generate the business process action.

 

 

 

Now some IoT scenarios may have an element of multiple categories. For example, you may want to do real-time analytics off a car engine (M2A), as well as store the data (M2D) to look for maintenance failure patterns over time, as well as trigger a service appointment if the engine fails (M2P), and finally display a light on the car dashboard that the engine died (M2M). So, some scenarios may only be categorized via one of the above taxonomies, but sometimes, a scenario will involve multiple approaches.

Useful tips for Node.js

$
0
0

1. Windows users: remember to use "Node.js command prompt" and NOT "Node.js": When you download and install Node.js for Windows, along with the Node.js environment the installation also adds icons for quick launch of the Node.js console. If you use the quick launcher "Node.js", it opens a console which appears to be the Node.js console; however, when you try to run any command, it ends up in a weird error.

 

 

 

Error.png

 

NodeJs.png

 

Complete list of installers is available here.

 

 

2. Install packages; Node.js does not contain everything: Depending on the logic, your Node.js code may use various libraries. Don't be under the impression that the initial setup makes everything available to you. You may need to add those packages explicitly to your environment. NPM (the Node.js package manager) is a console command which allows you to install different packages.

 

Packages like express, jQuery, bower, underscore, debug, request, etc. need to be installed explicitly. The complete list of NPM packages is available here

 

 

The syntax is like below

 

npm install <packagename>

 

OR


npm i  <packagename>  (in case you feel too lazy to type the complete spelling of install)



3. Want to test Node.js online? Try Windows Azure: Windows Azure provides a one-month free trial to host web applications online. Azure comes with many useful features to manage web content. It also supports integration with GitHub, which is used for versioning of source code.



Azure.png


Important: If you want to deploy a Node.js web application on Azure, you will have to use the ports supported by Azure; in other words, a hardcoded port number may not work on Azure.


Exposing HTTP Access to Multitenant Database Containers in SAP HANA SPS11

$
0
0

Introduction

 

I have a HANA XS application that can be deployed to separate tenant databases when my HANA system is set up as Multitenant Database Containers (MDC).  Naturally, I require HTTP access to those tenant databases, and I was running into problems configuring this for both my SystemDB and my tenant databases.

 

I found the excellent post by Wenjun Zhou Exposing HTTP access to multitenant database containers in SAP HANA SPS09 and although the procedure is different for HANA SPS11, it pointed me in the right direction.  The documentation also has a section on this topic (Configure HTTP(S) Access to Multitenant Database Containers - SAP HANA Administration Guide - SAP Library) that was useful, but there were a few steps that needed extra attention to complete the process, which I'll describe in this blog post.

 

Note: I'll only discuss HTTP access here, but the procedure is similar to configure HTTPS access.

 

Assumptions

 

  • The HANA databases in your system should already be set up for access via DSN alias hostnames.
  • The Web Dispatcher is set for automatic configuration.
  • You have already created one or more tenant databases in your HANA MDC system.

 

There are a few other prerequisites and the list is found in the documentation link above.

 

Accessing the XS Engine

 

By default, you can access the XS engine for the SystemDB by opening the URL http://<fqdn_hostname>:80<instance>, where "fqdn" is the fully qualified domain name (FQDN).  For example, if my HANA system is "mdc.internal.sap.com" and instance number is 00, then the XS engine URL for the SystemDB is http://mdc.internal.sap.com:8000.

 

Now, if you try to access the XS engine for one of your tenant databases (assuming you've set up its DSN alias hostname), you'll immediately get the following error:


503 Service not available


For example, if you created a tenant database called "TN1" with alias "mdc-tn1", then the XS engine for that database is http://mdc-tn1.internal.sap.com:8000 - however you'll get that 503 error when you try to open this URL.


Configure HTTP Access for the Tenant Database


What you must do now is specify the public URLs for all your tenant databases.  Here's how you do this in HANA Studio (example with SQL is below as well):


  1. Connect to the SYSTEMDB (not the tenant database) as SYSTEM and open Administration.
  2. Switch to the Configuration tab and expand xsengine.ini | public_urls.  Double-click on "public_urls" to invoke the Change Configuration Values dialog.

    xsengine-ini.png

  3. Click the Databases drop-down box, select the tenant database(s) you want to configure and click OK.

    SelectTenantDB.png

  4. Provide the new URL using the alias hostname and click Save.

    TN1-Config.png


You are now able to access the XS engine for the tenant database (e.g. http://mdc-tn1.internal.sap.com:8000) without getting the 503 error.


If you want to use SQL instead of HANA Studio, you can execute the following statement in a SQL Console.  Don't forget to connect to the SYSTEMDB (not the tenant database!) as the SYSTEM user.


ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'database', 'TN1')
SET ('public_urls', 'http_url') = 'http://mdc-tn1.internal.sap.com:8000'
WITH RECONFIGURE;


Re-Configure HTTP Access for the SystemDB


If you now try to access the SystemDB XS engine using the same URL as before (e.g. http://mdc.internal.sap.com:8000), you'll find that you'll get (oh no!):


503 Service not available


The documentation clearly states that the URL for the SystemDB changes once you've configured the URLs for the tenant databases.  Instead of using the FQDN, the SystemDB XS engine is available under http://<localhost>:80<instance>.  In our example, this means that the XS engine would now be found at http://mdc:8000.


What you need to do now is execute the following SQL statement to re-configure the SystemDB URL to use the alias host name:


ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'system')
SET('public_urls', 'http_url') = 'http://mdc.internal.sap.com:8000'
WITH RECONFIGURE;


Note that if you try to make this change in HANA Studio by editing the xsengine.ini configuration file, the change will not take effect.
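If you want to double-check what is currently configured, you can query the inifile contents monitoring view as SYSTEM; this is just a convenience check on my part and not part of the documented procedure:

-- show all public_urls entries currently configured across the ini files
SELECT FILE_NAME, SECTION, KEY, VALUE
  FROM M_INIFILE_CONTENTS
 WHERE SECTION = 'public_urls';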


Voila!  You now have access to the XS engines for the SystemDB and tenant databases using the alias hostnames.


Final Thoughts

 

You can spend a lot of time trying to figure out how to enable HTTP access to your SystemDB and tenant databases.  Usually once you configure the tenant databases properly, you scratch your head determining why the SystemDB URL no longer works.  The procedure to fix this is quite straightforward, once you know where to look.  At a minimum, following these steps will allow you to open each tenant database's HANA Cockpit, as that's also a HANA XS application.

HANA Connect to Hive by EIM Option

$
0
0

Note: all the steps below are done with the HANA SYSTEM user. If you use another HANA user, you may need to grant the required roles/privileges.

 

  • Enable Data Provisioning Server

1.jpg

  • Import Data Provisioning Delivery Unit

Download HANA SDI 1.0 and install the Data Provisioning Delivery Unit. You can use the UI in this delivery unit to monitor the replication task status later.


  • Grant Data Provisioning Monitor privilege to SYSTEM

          Log on HANA using SYSTEM and execute:

CALL GRANT_ACTIVATED_ROLE('sap.hana.uis.db::SITE_DESIGNER','SYSTEM');
CALL GRANT_ACTIVATED_ROLE('sap.hana.im.dp.monitor.roles::Monitoring','SYSTEM');

 


  • Test DP monitor cockpit

         Then you should be able to access the UI of the Data Provisioning Delivery Unit:

http://your_hana_server:8000/sap/hana/im/dp/monitor/?view=DPAgentMonitor

http://your_hana_server:8000/sap/hana/im/dp/monitor/?view=DPSubscriptionMonitor

http://your_hana_server:8000/sap/hana/im/dp/monitor/?view=IMTaskMonitor


  • Install Data Provisioning Agent in your Hadoop server

Note: you may install the agent in a separate Linux server.

Log on to the Linux server as the root user and add a new OS user named dpagent.

 

Extract the installation file in /home/dpagent/installation:

./SAPCAR_617-80000935.EXE -xvf ./IMDB_DPAGENT100_01_0.sar

 

Install Data Provisioning agent:

cd /home/dpagent/installation/HANA_DP_AGENT_20_LIN_X86_64
./hdbinst --silent --batch --path="/usr/sap/dpagent" --user_id=dpagent

                --path: specify the directory of dpagent installation.

                --user_id: specify which user runs the dpagent service

 

 

  • Start and Connect DPAgent to SAP HANA

Use root user:

cd /usr/sap/dpagent
vi ./dpagentconfig.ini

Modify the following configurations:

agent.name=mydpagent
hana.port=30015
hana.server=vhhan110

Start the agent:

nohup ./dpagent &

 

  • Register DPAgent

Start Xming on your terminal server.

Log in to the RedHat server running the Data Provisioning Agent as the root user:

cd /usr/sap/dpagent/configTool
./dpagentconfigtool

Now you should see the dialog below to configure the Data Provisioning Agent:

2.jpg

Click ‘Connect to HANA’.

HANA database user: SYSTEM

HANA database user password: password_of_SYSTEM

Then you should connect to HANA:

3.jpg

Click Register Agent, and input:

Agent Name: mydpagent   (you could use your own name instead of this)

Agent Hostname: vcRHL660 (use your own server name instead of this)

4.jpg

Click Register, then your agent should be registered successfully:

5.jpg

Now we need to register the HiveAdapter distributed by SAP.

Select HiveAdapter and click ‘Register Adapter’:

6.jpg

The ‘Registered with HANA’ should be ‘Yes’:

7.jpg

  • Copy Hive JDBC to /usr/sap/dpagent/lib/

Now let’s configure the HiveAdapter.

Download Hive 0.13.1 from the below link:

https://hive.apache.org/downloads.html

               Note: please download one version of 0.12.0, 0.13.0 or 0.13.1.

Use the root user to log on to the Linux server running the Data Provisioning Agent and copy the jar files into /usr/sap/dpagent/lib/

 

8.jpg

 

  • Create Remote Source using ‘SYSTEM’

Log on to HANA in HANA Studio as SYSTEM and create a new remote source that points to Hive through your Data Provisioning Agent:

9.jpg

Please input the information according to your own environment:

Source Name:            HIVE_EIM (you could input your own source name)

Adapter Name:           HiveAdapter

Source Location:        mydpagent (your agent configured in the above)

Hive Version:             0.13.1

Host:                         your own Hive server name

Port:                          10000 (Port of Hive)

Database Name:        test (your own Hive database name)

Use SSL:                  false

Credentials Mode:      Technical User

Username:                 hive

Password:                  hive

 

  Then please save and activate it. Now you should see your HIVE_EIM under Remote Sources:

10.jpg

You can create a virtual table based on your remote table (a SQL alternative to the context menu is sketched after the screenshot below):

11.jpg
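If you prefer SQL over the context menu, a virtual table can in principle also be created with a statement along these lines. Treat it purely as a sketch: the table name is made up, and the exact four-part remote object path depends on how the Hive adapter exposes its browsing hierarchy in your system:

-- sketch: expose a Hive table from the HIVE_EIM remote source as a virtual table
CREATE VIRTUAL TABLE "MYHIVETEST"."V_MYHIVETABLE"
  AT "HIVE_EIM"."<NULL>"."test"."myhivetable";

SELECT * FROM "MYHIVETEST"."V_MYHIVETABLE";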

Data Preview from the virtual table:

12.jpg

  • Grant Role and Privilege to ‘SYSTEM’

If you would like to create a replication task in HANA, you should grant the role and privilege as below:

Use SYSTEM log in SAP HANA:

 

Grant 'sap.hana.xs.ide.roles::EditorDeveloper' to SYSTEM user:

CALL GRANT_ACTIVATED_ROLE('sap.hana.xs.ide.roles::EditorDeveloper','SYSTEM');

 

Grant SELECT,CREATE on MYHIVETEST(owner is SYSTEM) to _SYS_REPO:

GRANT SELECT,CREATE ANY ON SCHEMA MYHIVETEST TO _SYS_REPO WITH GRANT OPTION;
  • Create Replication Task

open

http://your_hana_server:8000/sap/hana/ide/editor/

log in using SYSTEM and create Replication Task:

13.jpg

Input your replication task name, such as 'HIVE2HANA'. Then please fill in the information as below:


14.jpg

15.jpg

 

 

When saved and activated successfully, please log on to HANA in HANA Studio using the 'SYSTEM' user. You should find the objects below created by HANA automatically:

16.jpg

HANA will create one virtual table that points to your remote Hive table; you can review the data through this virtual table. One destination table is also created by HANA, which is empty for now. You can execute the procedure created by your replication task to load the remote Hive data into your local HANA table. Double-click the procedure to review its code:

17.jpg

Then open a SQL console and execute the procedure:

 

CALL"MYHIVETEST"."MyRep::HIVE2HANA.START_REPLICATION"()

After it has executed successfully, you can preview the data in your local table.


18.jpg
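
For a quick plausibility check you can compare row counts between the virtual table and the loaded target table. The object names below are purely hypothetical; use the virtual and target table names that your replication task actually generated in the MYHIVETEST schema:

-- Hypothetical names: replace with the objects created by your replication task
SELECT COUNT(*) FROM "MYHIVETEST"."V_HIVE_SOURCE";
SELECT COUNT(*) FROM "MYHIVETEST"."HIVE_TARGET";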


  • Monitor the replication tasks

You can also monitor the Data Provisioning Agent and the replication tasks in the UI of the Data Provisioning Delivery Unit you imported above.

Please open this link to monitor the Data Provisioning Agent status:

http://your_hana_server:8000/sap/hana/im/dp/monitor/?view=DPAgentMonitor

 

19.jpg

Open this link to monitor the replication task status:

 

http://your_hana_server:8000/sap/hana/im/dp/monitor/?view=IMTaskMonitor

 

20.jpg
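
Besides the browser-based monitors, some status information is also available through SQL. This is a hedged sketch, assuming your revision provides the smart data integration monitoring views REMOTE_SUBSCRIPTIONS and M_REMOTE_SUBSCRIPTIONS; they are mainly relevant for real-time subscriptions and may be empty for pure batch loads:

-- Optional: inspect replication/subscription status from a SQL console
SELECT * FROM "SYS"."REMOTE_SUBSCRIPTIONS";
SELECT * FROM "SYS"."M_REMOTE_SUBSCRIPTIONS";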

Finding SAP Notes


SAP Notes are great sources of information on issues occurring in the code or documentation of a product. Use them to find out what issues impact a particular release and how you can work around them.

 

When can SAP notes help you?

 

  • You’re considering whether to upgrade your system to a newer version and you want to know what issues affect that version. Look up the SAP Notes for the product so you can plan and work around them quickly and efficiently.
  • You recently upgraded your system and you suddenly see an error message you’ve never seen before. Search for SAP Notes with that error message text to find out how to get around the issue.
  • You’re stuck on a particular task and you’re not sure what to do next. Look at the SAP Notes for the product – it’s possible that one of the notes details something missing from the documentation that can help you work around the issue.
  • You’re looking for sizing information - dynamic tiering, for example, has an SAP Note that summarizes current sizing recommendations.

 

Where can you find SAP notes?

  1. Go to the Support Portal.
  2. Select Knowledge Base & Incidents in the top menu, then SAP Note & Knowledge Base Article Search.
  3. Click the Launch the SAP Note and KBA search button.
  4. Enter the product name in the Search Term field to find all notes for that product.

 

dt_search_screen.JPG

 

OR, if you know the number of the note you’re looking for, simply type it in the Number field in the top right corner.

 

TIP: Dynamic tiering has a central SAP Note for each release that lists all notes associated with that product version. For example, if you want to find the central note for the SPS 11 release, search for “dynamic tiering” and look for the note titled “SAP HANA Dynamic Tiering SPS 11 Release Note.”

 

dt_master_note.JPG

 

You can also find SAP Notes on the Help Portal

 

The Help Portal has a direct link to the current release’s central SAP Note. To find the central SAP Note for dynamic tiering:

  1. Go to the SAP HANA Dynamic Tiering documentation on the Help Portal.
  2. You can find a link to the central SAP Note for the release under the What’s New – Release Notes section. This note has direct links to all issues actively impacting this product version.

For example, for SAP HANA dynamic tiering SPS 11, the central note is http://service.sap.com/sap/support/notes/2194442.

 

Did you know that…?

 

Once an issue is resolved in a particular version, the SAP Note is archived and is no longer publicly available through that version’s central SAP Note?

 

What has your experience been with SAP Notes?

 

Have you had issues finding them? Do you find the content helpful or do you think it could be improved in some way? Feel free to post any comments or feedback regarding smart data streaming SAP Notes.

Implementing IoT Streaming Analytics Scenarios With Streaming Lite


Streaming Lite is a lightweight version of the Smart Data Streaming ("SDS") server designed to run on IoT gateways and process data close to the source. A great example use case for applying complex event stream processing at the edge is a Worker Safety demo that the SAP IoT team has built in cooperation with Intel. In this reference sample, data from gas, motion and heart rate sensors is continuously processed through a Streaming Lite project to detect and react to unsafe situations. The first level of alert is generated when a dangerous gas level is detected; in this case an alert is raised both for the workers to evacuate the area and for the supervisor to initiate a response. The second level of alert is generated when a dangerous gas level is detected and the motion sensor then indicates that the worker is prone. This combination of events results in a more urgent alert to respond to the immediate worker safety event.


Intel has provided a solution brief titled Intel and SAP IoT Bring Actionable Insights to the Edge that describes this reference use case in more detail.


To help you learn how to start building Streaming Lite projects, we have put together a comprehensive set of tutorials available in Learning how to use Streaming Lite with HANA Smart Data Streaming.



Fine Tune Sentiment Analysis Using Custom Dictionary


    Sentiment analysis can be fine-tuned using custom dictionaries, for example to change sentiment polarity or to extract additional sentiments. In this blog I will explain how to achieve something different: linking a sentiment to the correct topic.

    I ran Voice of Customer text analysis against some reviews with the standard EXTRACTION_CORE_VOICEOFCUSTOMER configuration. I found that "microwave safe bowl" was incorrectly interpreted as a WeakPositiveSentiment.

TA_1.jpg

TA_2.jpg

The "safe" sentiment was linked to the wrong topic, i.e. "a microwave" instead of "bowl". I do not care about the bowl as such, and the sentiment just skews my analysis. Excluding the adjective "safe" via a custom dictionary looks like a simple solution, but it solves one problem and creates another (what if I need to extract "microwave is safe" as a WeakPositive sentiment?). The right solution is to define "microwave safe" as a new neutral-sentiment adjective in a custom dictionary. This way the "safe" sentiment is linked to the right topic, i.e. "bowl".

TA_3.jpg

Then reference the custom dictionary in a custom configuration

TA_4.jpg

and finally recreate the text index

TA_5.jpg

If I query the $TA table again, I can see that the sentiments are now correctly extracted:

TA_7.jpg
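
For reference, the whole cycle can also be scripted in SQL. This is only a sketch with assumed names: a REVIEW table in the TA_DEMO schema with a TEXT column, an index called REVIEW_IDX, and a custom configuration object ta_demo::REVIEW_CONFIG that references the custom dictionary; adjust them to the objects in your own system.

-- Drop and recreate the full-text index with the custom configuration (names are assumptions)
DROP FULLTEXT INDEX "TA_DEMO"."REVIEW_IDX";

CREATE FULLTEXT INDEX "REVIEW_IDX" ON "TA_DEMO"."REVIEW"("TEXT")
  CONFIGURATION 'ta_demo::REVIEW_CONFIG'
  TEXT ANALYSIS ON;

-- Inspect the extracted sentiments in the generated $TA table
SELECT TA_TOKEN, TA_TYPE FROM "TA_DEMO"."$TA_REVIEW_IDX" WHERE TA_TYPE LIKE '%Sentiment%';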

     You can try the described demo in your own system. Import the TA_DEMO.tgz transport (rename TA_DEMO.tgz.txt to TA_DEMO.tgz first). The ta_demo package will be created:

TA_9.jpg

and finally execute the install.sql script. It creates the REVIEW table with data, grants the proper authorization on the TA_DEMO schema, and creates a text index with the standard EXTRACTION_CORE_VOICEOFCUSTOMER configuration.

TA_8.jpg

TA_10.jpg
