Channel: SAP HANA Developer Center

How to update your sandbox HANA system to SPS 12


Since I have always kept my sandbox HANA system up to date with the latest Revision, I definitely wanted to upgrade it as soon as possible to the new stable SPS 12 announced during SAPPHIRE NOW.

 

Please be aware that there is no Datacenter Service Point (DSP) Revision (an SAP HANA Revision verified in SAP production systems) for HANA SPS 12 yet, so this is only recommended for making yourself familiar with it in a sandbox environment!

 

There are many ways to check for HANA component updates, but I like the respective function of the HANA Studio:

Check for SAP HANA component updates.png

This check can take a while:

Scanning Software Download Center.png

But eventually the available component updates for download are displayed:

Select Available Component Updates.png

On Next, I can select where to store my downloads:

Download Selected Component.png

On Finish, the components are downloaded, which again can take a while:

Downloading components.png

Since it is always a good idea to work with the latest HOSTAGENT, I update this first, which is pretty straightforward (my HANA server runs on VMware, so your path to the downloaded archive and SAPCAR version might differ):

GNOME Termainal 1.png

Then I initiate the automatic HOSTAGENT upgrade with rm .upgrading. This can take a few minutes, depending on your configuration.

 

Next, I start the graphical HANA Life Cycle Manager from the mounted upgrade volume:

hdblcmgui.png

I confirm SAP HANA Database 1.0 SPS 12 Revision 120 as the target:

Select Software Component Locations.png

And choose to update my system:

Choose system to update.png

I choose the component to be updated:

Choose components.png

And provide the required passwords:

Specify authorization data.png

I check and confirm the update summary:

Summary.png

And the update completes without any issues:

Updating Software.png

With a final success message:

SAP HANA components updated.png

In my next blog, I will go into more details of HANA SPS 12, especially around more SAP HANA XS Advanced features, but for now, I hope that this has done the trick for you:

SAP HANA Cockpit.png


HANA Tips & Tricks: issue #1a - Addendum: Importing Columns for Scripted Calculation Views


Last week's post by Roland Bouman mentions a cool program he built to easily generate column definitions in a scripted calculation view.

 

Creating columns with matching names and datatypes manually can obviously be tedious and error prone.

 

Later I remembered that a standard option for this exists in HANA Studio as of SPS 9: "Import columns from".

After selecting the script node in the editor, open the drop-down box in the output area at the top right.

Add_column_from.png

 

This could be a nice time-saver, instead of entering columns manually.

 

If you enjoyed this, I would definitely recommend the "What's New" series, for example: http://www.slideshare.net/SAPTechnology/sap-hana-sps-09-hana-modeling

They are filled with handy tips like this.

Using Rank node to Obtain the most recent record using a date field


Hi,

 

This builds on the post by Monissha Agil:

 

http://scn.sap.com/docs/DOC-63775

 

Below are instructions on how to obtain, from a table that holds different states of a record, the latest modification based on the modification date and other categorical variables such as the record type.

 

Use case: pick the amount of the latest record for a specific organization and type of liability.

 

The data:

data1.png

As can be observed for the first organization, there are different amounts for the same type of liability, differing only in when they were updated; we are interested in obtaining the most recent one.

 

The view:

data_2.JPG

The Rank node definition:

data_3.JPG

Here is where the real trick happens: since in this case we are interested in just the newest record, we sort the rank in descending order and limit the threshold to a constant 1, which returns the top record of the descending list.

Equally important is defining the partitions over which the rank is executed. In this case we are interested in the latest record for a specific organization AND type of liability, so both columns are required; it is critical to define this correctly.
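For readers who prefer to think in SQL, the Rank node configured this way is roughly equivalent to the following window-function query (a minimal sketch; table and column names are hypothetical):

SELECT "ORGANIZATION", "LIABILITY_TYPE", "AMOUNT", "UPDATED_ON"
  FROM (
        SELECT "ORGANIZATION", "LIABILITY_TYPE", "AMOUNT", "UPDATED_ON",
               RANK() OVER (PARTITION BY "ORGANIZATION", "LIABILITY_TYPE"
                            ORDER BY "UPDATED_ON" DESC) AS "RNK"
          FROM "LIABILITIES"
       )
 WHERE "RNK" = 1;  -- threshold of 1 on the descending sort keeps only the newest record per partition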

 

The outcome:

data_4.JPG

Using the organization that had the most duplicates, we end up with a clean result of one record per organization and type, based on the latest update for that combination.

Make SAP Agile Data Preparation work for you


Having read about SAP Agile Data Preparation, I wanted this tool immediately. In this blog I describe how you can make it work for you. Please refer to the respective SAP help site for more detailed information.

 

First you have to enable the Script Server, if you plan to use SAP Smart Data Quality:

scriptserver.png

Next you have to enable the XS Job Scheduler:

scheduler.png
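As a side note, both of the settings above can also be applied with SQL. A minimal sketch, assuming a single-container SPS 12 system (the ini sections and parameters are the commonly documented ones; adapt them to your landscape):

-- Enable the Script Server (daemon.ini); assumption: single-container system
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
  SET ('scriptserver', 'instances') = '1' WITH RECONFIGURE;

-- Enable the XS Job Scheduler (xsengine.ini)
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')
  SET ('scheduler', 'enabled') = 'true' WITH RECONFIGURE;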

Then you have to download and import the SAP Enterprise Semantic Services Delivery Unit:

HANA_IM_ESS.png

Subsequently you download and import the SAP Agile Data Preparation Delivery Unit:

HANA_IM_ADP.png

Please verify that both Delivery Units have been deployed successfully:

Recent Activities.png

To activate ESS, run the ESS setup:

Run ESS setup.png

To enable ADP, you grant yourself the ADP Administrator role:

INSTALLER_USER.png

With this you activate ADP and create an ADP Administrator user:

ADPUSER.png

As a result, when logging into ADP you are greeted with a welcome page:

What's New.png

And finally you can get productive:

ADP.png

Federating with SAP HANA


Introduction

Every five to six years, there comes a technology wave, and if you are able to catch it, it will take you a long way. Throughout my career, I’ve ridden several of these waves. MPP data warehouses brought us incredible speed for analytics and a few headaches for data integration. We’re seeing in-memory analytics reducing disk latency. Hadoop based technologies are opening up new solutions every day for storage and compute workloads while our source systems are still generating varying degrees of velocity, volume, and variety.

As a traditional ETL developer, I would usually try to figure out the best solution to acquire, cleanse, and store this data in an optimal format for analytics…usually a data warehouse. Depending on the business need, number of sources, and complexity, this approach is a long one and quite labor intensive. Source systems create new data faster than we can consume them in traditional models. Hence, we see many organizations adopting a Data Lake approach. Here, we are simply concerned with optimizing the acquisition and storage of any data source. We worry about consumption later.

While data federation has been around for years, traditional technologies typically dealt with federating a relational source with a tabular, single file extract. Today, we’re asking federation to handle relational stores, API’s, HDFS, JSON, AVRO, logs, and unstructured text. It’s a tough task, but I was pretty impressed with SAP HANA’s approach and implementation of data federation.

This post is not about SAP HANA, but rather focuses on its data federation capabilities. I will try to explain basics, best practices, and few tips and tricks I came across during my experience working with data federation in HANA.

Smart Data Access

SAP's data federation capability is built into the HANA database and is known as Smart Data Access (SDA). SDA eliminates the need to replicate data into SAP HANA; instead, it lets you query remote sources from HANA. SAP calls this ability to weave in a network of data sources the in-memory data fabric. SDA allows you to create virtual tables, which point to tables in remote sources, and to write SQL queries in HANA that operate on these virtual tables. The HANA query processor optimizes such a query, executes only the relevant part of it in the target database, returns the results to HANA, and completes the operation there.
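To make this concrete, here is a minimal sketch (all object and source names are hypothetical, and the remote source is assumed to have been created already via HANA Studio or CREATE REMOTE SOURCE):

-- Expose a remote table as a virtual table in HANA
CREATE VIRTUAL TABLE "MYSCHEMA"."V_ORDERS"
  AT "MY_REMOTE_SOURCE"."<NULL>"."dbo"."ORDERS";

-- The filter below is pushed down to the remote database where possible;
-- only the relevant rows travel back to HANA for the join and aggregation.
SELECT c."CUSTOMER_NAME", SUM(o."AMOUNT") AS "TOTAL_AMOUNT"
  FROM "MYSCHEMA"."CUSTOMERS" AS c
  JOIN "MYSCHEMA"."V_ORDERS"  AS o
    ON o."CUSTOMER_ID" = c."CUSTOMER_ID"
 WHERE o."ORDER_DATE" >= '2016-01-01'
 GROUP BY c."CUSTOMER_NAME";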

Supported Connectivity

SDA was first introduced in SPS06. Its features matured over several releases, and it supports connectivity to ASE, Teradata, IQ, HANA, Oracle, SQL Server, Netezza, DB2, MaxDB and Hadoop. There are just a few one-time setup steps involved when setting up remote sources for the first time.

transactions_analytics

All relational databases can be set up using ODBC drivers and the RDBMS drivers on the UNIX server where HANA is installed. Once the drivers are installed, create the remote sources using HANA Studio. Refer to the SAP administration guide for version details.

There are a few different ways to set up a Hadoop remote source. The most common way is to use ODBC drivers and Hive/Spark drivers on the UNIX server where HANA is installed. Once the drivers are installed, create the remote source using HANA Studio. Other options include connecting via archive file/virtual UDFs from HANA Studio and via the Spark controller on Hadoop.

SDA Best Practices

Sometimes it is difficult to determine the optimal way to federate data, especially when dealing with Hadoop sources. We recommend a divide-and-conquer method: let your remote sources process data and query them from HANA as needed. For example, you would push high-volume data processing to your Hadoop system; this way you take advantage of commodity hardware and its cheaper processing power. You can leverage cheaper storage options and keep data in those databases, while only bringing the data that satisfies your analytical needs into HANA via SDA.

SDA submits the query to the remote server, so performance depends on how powerful the remote source's configuration is. This may or may not be adequate for your use case, and you might choose to copy the data into HANA instead.

Leveraging Statistics – HANA has the ability to calculate statistics on remote data sources. These statistics help the query optimizer decide how to join two tables, including remote tables, and in which order to join them. There are two types of statistics you can enable: the histogram type only saves counts, while the simple type saves information such as counts, distinct counts, min and max. Depending on your needs you can enable either type to improve performance.
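A hedged example of enabling statistics on a virtual table (table and column names are hypothetical):

-- Simple statistics (counts, distinct counts, min, max)
CREATE STATISTICS ON "MYSCHEMA"."V_ORDERS" ("CUSTOMER_ID") TYPE SIMPLE;

-- Histogram statistics
CREATE STATISTICS ON "MYSCHEMA"."V_ORDERS" ("ORDER_DATE") TYPE HISTOGRAM;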

Querying Hadoop – When federating data from Hadoop, there are a few tips and tricks we can use for better performance:

  • Remote caching capabilities – Frequently accessed queries on the Hadoop system should be cached. HANA provides remote caching capabilities for Hadoop systems, which save frequently accessed queries in a separate table for faster execution and avoid executing a MapReduce job on the Hadoop system every time the same query is executed via HANA.
  • Using ORC files – Use the ORC file format for every Hive table. Hive supports ORC, a table storage format that optimizes speed through techniques like predicate push-down and compression. You might run into issues when querying a table with a billion-plus records via SDA; this approach resolves them (see the Hive sketch after this list).
  • Use of vectorization – Vectorized query execution improves the performance of operations like scans, aggregations, filters and joins by performing them in batches of 1024 rows at a time instead of a single row each time.
  • Cost-based query optimization – Cost-based optimization performs further optimizations based on query cost, resulting in potentially different decisions: how to order joins, which type of join to perform, the degree of parallelism, and others.
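On the Hive side, a minimal sketch of the ORC, vectorization, and cost-based optimization settings mentioned above (table and column names are hypothetical; the exact settings available depend on your Hive version):

-- Store the table in ORC format
CREATE TABLE weblogs_orc (
  user_id   STRING,
  page_url  STRING,
  hit_time  TIMESTAMP
)
STORED AS ORC;

-- Enable vectorized execution and cost-based optimization for the session
SET hive.vectorized.execution.enabled = true;
SET hive.cbo.enable = true;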

 

 

Smart Data Access: Capabilities and Limitations

  • Capabilities
    • You may create attribute views, analytic views and calculation views, and leverage HANA's tools and capabilities on remote tables, just as if they were in the HANA database. This extends the ability to execute queries using HANA's calculation engine, which can perform better than normal SQL execution in other databases.
    • In the latest version of HANA, SDA allows users to insert/update/delete data on remote sources; SDA also works with certain data types like BLOB/CLOB, which was not possible in the initial version.
  • Limitations
    • HANA is limited to the capabilities of Hive when querying Hadoop.

 

Note: Please check out the iOLAP Blog for other BI and cool technology-related blogs.

HANA Tips & Tricks: issue #1 - Hacking information views


About this post

 

At Just-BI we just launched a knowledge-sharing initiative where our consultants and developers discuss any issues and share tips & tricks concerning SAP HANA development. Our monthly meetings are company internal, but we decided to share any items that might be interesting to other SAP HANA professionals in public. Since SCN is already the go-to hub for all things HANA, we felt that this is the most appropriate place to do this.

 

So, here it is - our first post!

 

We plan to publish one immediately following our monthly meetings, and we will tag it using the hanatipsandtricks tag. We hope that our tips, tricks and discussions are useful to you. Feel free to chime in, or to share your insights. Thanks in advance - we welcome your interest and participation!

 

Editing XML source of Information Views

glenn-cheung.jpg


Glenn Cheung kicked off the meeting with a very useful and powerful tip: editing the XML source code of SAP HANA information views.

 

Information views (Analytical-, Attribute- and Calculation Views) are typically created and edited using the SAP HANA View Editor (also known as the Modeler). This is essentially a query builder that allows you to use drag and drop to graphically build a query out of nodes representing things like database schema objects (tables or views), other information views, and query operators (such as join, union, aggregation, and so on). The models you build this way are stored as XML files in the repository. Activation of these models generates runtime objects, which are basically stored procedures that implement the query according to the model.

 

While the SAP HANA View Editor is the tool of choice when developing new information views, it can get in the way when performing certain tasks. For example, sometimes it may be convenient to build an information view against a personal database schema where you keep only a few objects just for development purposes. Once you're happy with how your information view works, you'll want it to work against the objects from the actual application database schema. (There are many similar scenarios like this, such as updating the package name if you're referencing CDS objects).

 

While the view editor does offer a "Replace With Datasource" option (available in the right-click menu on the item), this quickly becomes a rather tedious and time-consuming task, especially if your model contains many nodes, or if you have many information views that you want to point to the other schema. You can save yourself quite a bit of time by opening the view in a text editor and using search/replace to change the schema name. You can even do this without leaving SAP HANA Studio: simply right-click the information view in the project explorer, and choose "Open With" > "Text Editor". For real bulk operations, you need not even open the file in an editor; you can use a command-line tool like sed to perform a regular-expression-based text substitution.

 

openwith.png

Of course, you should always be very cautious when editing the XML sources directly. Unlike the SAP HANA View Editor, your text editor or command line tools do not validate the changes you make to the model. Always make a backup of your source files or make sure you have some other way of restoring them should your raw edits render the models invalid.

 

Cross Join in Information Views

 

Another tip from Glenn is how to create Cross Joins in information views. A Cross Join is a type of join operation that returns the Cartesian product of the joined tables (that is, the combination of all rows). While there is rarely need for a true Cartesian product in analytical queries, a use case sometimes does pop up when developing custom database applications.

While the SQL standard has a separate keyword for it (like it has keywords for INNER, LEFT OUTER, RIGHT OUTER, etc.), SAP HANA Studio does not offer a special join type for it. Note that in SAP HANA Studio you must set the join type in the properties page that becomes active when you select the edge that connects the joined columns. The property page does not have an option for Cross Join, nor can it have one, since a cross join does not have any joined columns. The solution is however very straightforward: when you add your data sources to your join node in the View Editor, simply don't connect any columns. The resulting join will still be valid and SAP HANA will generate a Cartesian product as the result.
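In plain SQL terms, the join without any connected columns corresponds to a cross join (the table names here are purely for illustration):

-- Cartesian product of two tables: every date combined with every product
SELECT d."CALENDAR_DATE", p."PRODUCT_ID"
  FROM "MYSCHEMA"."CALENDAR_DATES" AS d
 CROSS JOIN "MYSCHEMA"."PRODUCTS"  AS p;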

 

Note however that this behaviour can bite you as well. I recently encountered a situation where I needed to clean up a Calculation View. As part of the clean-up, I was removing columns "downstream" of a join node. While SAP HANA studio will warn that the column is used by any upstream nodes, it is very easy to miss the fact that you might be removing a column which is used to define a join. If that is the case, then it's very easy to end up with an unintentional cross join.

 

Adding Nodes mid-stream

scott-wery.png

 

Scott Wery provided a very useful tip on adding nodes to existing Calculation views. Let's consider an example: It's not uncommon to work on a Calculation View that contains a number of joins. In many cases, the number of joins grows organically as the development process progresses and user requirements evolve - the requirement to "look up" a few extra columns is a very common one.

 

Once you have opened your existing view in the SAP HANA View Editor and identified between which two nodes you want the new join node, you might proceed by deleting the edge that connects those two existing nodes, adding the new join node, and then re-creating the edges between the nodes. This would be fine except for the fact that when you break the edge between two nodes, any columns upstream of the broken edge that originate downstream of it are simply removed. You would have to recreate all those columns after re-establishing the edges from and to the new join node.

 

While that is of course possible, there is a much better way: if you first click the edge that connects the two nodes where you want the new join to appear in between, it will be selected. If you then drag the new join node onto the selected edge, a message box pops up, asking you whether you want to insert the new node in between the existing nodes. If you confirm, the new join node will automatically be inserted there, splitting up the existing edge and connecting the existing nodes with the new join node, without removing any columns. This avoids a lot of tedious and error-prone work!

 

insertjoin.png


Generating Scripted Calculation Views

 

The following tip is by yours truly. This past week, my co-worker Ivo Moor was creating a few Scripted Calculation Views. (A Scripted Calculation View is a Calculation View that is defined by user-entered SQL script.) One rather tedious aspect of creating Scripted Calculation Views is that you have to manually define the output columns of the view, entering the names of the output columns as well as specifying their data types. Again, this is totally doable, but it is not a lot of fun. Apart from the fact that it can be time-consuming, it can be error-prone too - if you accidentally enter a data type or data type parameters (like length, precision, or scale) that do not correspond to the runtime type of the column, then you might encounter run-time errors when executing the view.

 

I decided to spend a little time to see if I could make this easier. What would be ideal is if SAP HANA Studio offered some kind of wizard or integrated generator that you could invoke from the SQL editor, and which would open the SAP HANA View Editor with a newly generated Scripted Calculation View, based on the code that was inside the SQL editor, with all its output columns generated based on the runtime types of the query. While I appreciate that such a generated view might still require editing, it would give a considerable head start. I looked into it a bit and quickly realized that actually modifying SAP HANA Studio to add such a feature would cost me considerably more time than I am currently willing to spend.

 

So, as a really quick and, admittedly, dirty alternative, I came up with an XSJS web application that can at least generate the Calculation View code and offer the user a download link, which can be used to download the .calculationview file and save it in an existing SAP HANA project.

 

Here's a screenshot of the application frontend:

scvg.png

The way it works is, you enter your SQL query (or at least, the query that will produce the output for your scripted calculation view) in the SQL textarea. You can enter the name for your view in the Object Name field, and enter a version number as well. If the SQL code contains parameter or variable references, the tool will generate inputs for those so that you can enter values. Finally, you can also choose the database schema against which any database object identifiers are resolved.


After entering or changing data in the form, the application sends the query to an XSJS service, which takes the query, appends a LIMIT 0 clause to it (so as to prevent doing any actual work as much as possible) and then executes it in order to obtain the result set metadata. This result set metadata is then used to fill in a calculation view template with both column definitions and variable definitions. The result of the filled-in template is then exposed via a download link at the bottom of the page. Clicking the link will prompt the user to download a .calculationview file which you should be able to save to your HANA project and then activate.
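As a small illustration of the LIMIT 0 trick (the table and column names are hypothetical):

-- If the query entered in the form were, say:
--   SELECT "CUSTOMER_ID", "REVENUE" FROM "MYSCHEMA"."SALES"
-- the service would execute roughly:
SELECT "CUSTOMER_ID", "REVENUE" FROM "MYSCHEMA"."SALES" LIMIT 0;
-- This returns no rows, but the result set metadata still exposes the
-- column names and data types needed to generate the column definitions.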

 

If you want to try this yourself, feel free to download or fork the code from the just-bi/scvg repository on github. It's free and open and I hope it will be useful to you. If you're interested in these kinds of productivity tools, then stay tuned - The Just-BI development team is currently looking into possibilities to create tools like these and integrate them into HANA Studio. I can't really say when we'll have time to make this happen since this is not really core business but I can promise that once we have some of these tools we will publish them and contribute them back to the SAP HANA Developer community just like we are doing now.

 

Update on Generating Scripted Calculation Views


After our knowledge exchange session, Scott Wery remembered a trick that makes it somewhat easier to define columns for your calculation view. I would recommend everybody to check out his write-up on that topic: HANA Tips & Tricks: issue #1a - Addendum: Importing Columns for Scripted Calculation Views.

 

Thanks Scott! Much obliged :)

 

Finally


I hope you enjoyed our tips and tricks! We'll be back a month from now - just track the hanatipsandtricks tag to stay tuned.

How to consume XSJS service in Gateway


How do we consume XS services (hosted on a different domain) in our SAPUI5 application (hosted on a Gateway server)? Generally this shouldn't be a problem, but as UI5 apps usually run in Chrome, all UI5 developers will have faced the dreaded CORS issue at some time while consuming an XS service.

 

CORS (Cross-Origin Resource Sharing) is, to explain in brief, a web security measure to ensure that services residing outside our domain can only be used if there is trust between them (like an SSL connection). This creates a small roadblock when UI5 developers try to consume XS services: say the XS service is hosted on http://xs.engine.com while the UI5 app resides on a Gateway hosted at http://gateway.com; as the Gateway domain and the XS domain aren't the same, we get the dreadful CORS error. Don't get me wrong, CORS has probably saved a lot of enterprise sites from being hacked, so it is quite necessary for web security.

 

Moving on, in this blog I will describe how to consume only XSJS services in Gateway, not XSODATA services.

 

There are three possible solutions:

1. Use a proxy over XS engine so that the Gateway can make a secure connection to consume the services.

2. Use JSONP data type to consume XSJS while making an ajax call from SAPUI5

3. Create an ABAP object (RFC preferably) in Gateway that will consume the XSJS service and publish the data using an ICF service (created on the Gateway)

 

I have implemented both solutions 2 and 3. I will describe solution 2 in another blog. Although using JSONP is much easier than solution 3, there are a few drawbacks, which will be explained in the next blog:


How to consume XSJS as JSONP in SAPUI5 application.

 

For Steps 1 and 2, I referred to a very useful blog by Amaresh Pani: http://scn.sap.com/community/abap/connectivity/blog/2014/11/09/calling-an-external-restful-service-from-abap--http-method-get

 

Note the XSJS service here is a GET service. I will be exploring this more in the next few days for POST services as well.

The service I am trying to hit is : http://<XS engine host>:<port>/JohanXSJS/XSJS/Profit_Center.xsjs?comp=XXXX

 

Step 1: Create an RFC destination with connection type G (i.e. to External Server) to your XS engine

 

RFC destination for XSJS.PNG

You can maintain the DB user credentials in the Logon tab, which ensures that you can access the XSJS service without a logon prompt.

 

Step 2: Create an RFC to use this RFC destination and call any XSJS service dynamically.

 

FUNCTION z_xsjs_get_data.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(IV_URL) TYPE  STRING
*"  EXPORTING
*"     VALUE(JSON_STRING) TYPE  STRING
*"----------------------------------------------------------------------

  DATA : lo_http_client TYPE REF TO if_http_client,
         lo_rest_client TYPE REF TO cl_rest_http_client,
         lo_response    TYPE REF TO if_rest_entity,
         lv_url         TYPE        string.

* Open an HTTP connection using the RFC destination created in Step 1
  cl_http_client=>create_by_destination(
    EXPORTING
      destination              = 'HANAXS'    " Logical destination (specified in RFC destination above)
    IMPORTING
      client                   = lo_http_client    " HTTP Client Abstraction
    EXCEPTIONS
      argument_not_found       = 1
      destination_not_found    = 2
      destination_no_authority = 3
      plugin_not_active        = 4
      internal_error           = 5
      OTHERS                   = 6
  ).

  CREATE OBJECT lo_rest_client
    EXPORTING
      io_http_client = lo_http_client.

* Set the relative URI of the XSJS service to be called
  cl_http_utility=>set_request_uri(
    EXPORTING
      request = lo_http_client->request    " HTTP Framework (iHTTP) HTTP Request
      uri     = iv_url                     " URI String (in the Form of /path?query-string)
  ).

* Execute the GET request and read the response body as a string
  lo_rest_client->if_rest_client~get( ).

  lo_response = lo_rest_client->if_rest_client~get_response_entity( ).

  json_string = lo_response->get_string_data( ).

ENDFUNCTION.

 

Step 3: Make a handler class that will be consumed by ICF service node

 

Add the interface shown below to the class to make it a handler class for the ICF service:

Class Interface.PNG

This provides the below method definition:

 

Class method.PNG

The method definition provided by this interface supplies the necessary parameters. Write the below code in that method:

 

METHOD if_http_extension~handle_request.

  DATA : lv_string TYPE string,
         lv_url    TYPE string.

* Relative URI of the XSJS service to be called
  lv_url = '/JohanXSJS/XSJS/Profit_Center.xsjs?comp=XXXX'.

* Fetch the JSON payload from the XS engine via the RFC created in Step 2
  CALL FUNCTION 'Z_XSJS_GET_DATA'
    EXPORTING
      iv_url      = lv_url
    IMPORTING
      json_string = lv_string.

* Return the JSON payload as the response of this ICF service
  CALL METHOD server->response->set_cdata( data = lv_string ).

ENDMETHOD.

 

Step 4: Make an ICF service that will call the above RFC.


Go to  transaction SICF and select the below hierarchy :

 

SICF node.PNG

 

Click on the Create Host/Service button at the top left corner to create a new node under this hierarchy.


In the Handler tab enter the name of the class you've created in Step 3 :

 

SICF Handler.PNG

 

Right-click on the node, activate it, and then test it in the browser.


This service will now be available as follows: http://<gateway host>:<port>/sap/bc/zxsjs?sap-client=XXX.


You can add query parameters that can be captured in your handler class. I will be updating the blog with how to call two or more services with this one handler class.


In your UI5 code you can access this service with an AJAX call. Hope this helps all the UI5 developers out there!

SAP HANA Developer Edition 1.00 SPS11


Screen Shot 2016-06-30 at 18.52.04.png

 

After several months we finally have the SPS11 SAP HANA Developer Edition live, with the brand new SAP HANA XSA configured and running. Unfortunately, being early meant we had a few problems along the way, so hopefully you'll all be happy with the results - we certainly are!

 

The new version has all the goodies we normally include for the world of XSC which is still alive and well in this version (it will be shut down at some point in the future though).

 

Screen Shot 2016-06-30 at 18.52.13.png

We've also updated some of the sample applications, so there is certainly new stuff in there for those wanting to check out XSC.

 

Screen Shot 2016-06-30 at 18.57.16.png

 

For those chomping at the bit and wanting to get their hands on XSA, well we have that for you as well!

 

Screen Shot 2016-06-30 at 18.52.18.png

It's preconfigured and ready to use out of the box! We've even uploaded an application ready for you to use and explore, as well as lots of other items to work with.

 

Screen Shot 2016-06-30 at 18.52.27.png

 

With the changes and the addition of XSA, you will also need to know that there are a lot more ports open on the server now than before; this is related to how XSA works. So keep that in mind when you launch the instance. You will also need to work with a "hostname" and not the IP address anymore; we've added instructions for that into the system.

 

Screen Shot 2016-06-30 at 19.01.01.png

 

Happy coding!


XS Advanced features: Using Synonyms; Using non-HDI container schema objects in HDI container.


This blog will give you information on how to use objects from a non-HDI container or stand-alone schema in your container.


A word about HDI Containers


As we enter the world of XS Advanced, we come across many new terms and one of them is "HDI container".

You can think of it as a database schema. It abstracts the actual physical schema and provides schema-less development. All the objects you create will sit in a container. You can read more about them in the blog written by Thomas Jung. Please visit http://scn.sap.com/community/developer-center/hana/blog/2015/12/08/sap-hana-sps-11-new-developer-features-hdi

 

The key points that we need to emphasize while working with the HDI containers are:

  • A database schema and a technical user also get created in the HANA database for every container. All the runtime objects from the container, like tables, views, procedures, etc., sit in this schema and not in the schema bound to your database user.
  • All the database object definitions and access logic has to be written in a schema-free way.
  • Only local object access is allowed, which means that you can only access the objects local to your container. You can also access the objects of other containers and non-HDI container schemas (foreign schemas), but only via synonyms, and only as long as the technical user of the HDI schema has been granted access to the foreign schema.

 

Creating Synonyms

 

Now you will be looking at an example of creating a synonym for the objects of a non-HDI container schema (foreign schema) in your container.

This example is based on SPS 12 and uses both XS command line tool and SAP Web IDE for SAP HANA (XS Advanced) tool.

 

Prerequisites:

  • You should have a database user who can access the XSA Web IDE tool.
  • Your database user should have the authorization (WITH GRANT OPTION) on the foreign schema.

 

Let's start with the example step by step.

 

Create a user provided service.

You have to create a user-provided service for your foreign schema. Open the XSA client tools and log in with your user by issuing the 'xs login' command.

Now create the user-provided service by issuing the 'xs create-user-provided-service' or 'xs cups' command.

You can use the following syntax:

xs cups <service-name> -p "{\"host\":\"<host-name>\",\"port\":\"<port-number>\",\"user\":\"<username>\",\"password\":\"<password>\",\"driver\":\"com.sap.db.jdbc.Driver\",\"tags\":[\"hana\"],\"schema\":\"<foreign schema name>\"}"

 

 

Modifying mta.yaml file.

You have to correctly configure all services including the user provided service in the mta.yaml file. This allows using the user provided service within the project.

 

Add an entry for the user-provided service you created in the 'resources' section of the mta.yaml file. Use the below sample code as a reference.

mta1.JPG

Figure 1: Entry of user provided service in mta.yaml file example

 

Also, add a dependency on this service in the HDB module of your project. Use the below sample code as a reference.

mta2.JPG

Figure 2: Service dependency in HDB module (mta.yaml file example)



Creating .hdbsynonymgrantor file.

This file specifies the necessary privileges to access external tables. Open the XSA Web IDE and, under the HDB module of your project, create a new folder named 'cfg'. Just like the 'src' folder, its name is special: it tells the HDI deployer that this folder contains configuration files, which it treats appropriately.

Create your .hdbsynonymgrantor file under this folder. Sample content of this file might be:

grantor_file.JPG

Figure 3: .hdbsynonymgrantor file example



Creating synonym for external object

Create a .hdbsynonym file in 'src' folder of your HDB module. In one .hdbsynonym file you can define multiple synonyms to be used in your project.

Please use the below code sample as your reference for creating synonyms.

synonym.JPG

Figure 4: .hdbsynonym file example


Now, you should be able to use those external tables in your container using these synonyms.
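Purely as an illustration of what a deployed synonym amounts to at runtime, it is conceptually similar to a plain SQL synonym; in HDI, however, you declare it in the .hdbsynonym file rather than issuing SQL yourself (the names below are hypothetical):

-- Conceptual runtime equivalent of a deployed .hdbsynonym entry
CREATE SYNONYM "MY_SYNONYM" FOR "FOREIGN_SCHEMA"."CUSTOMER_MASTER";

-- The container's objects can then reference the external table via the synonym
SELECT COUNT(*) FROM "MY_SYNONYM";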

The various Data Provisioning Options for Hana


An often asked question is how to get data into Hana as there are so many different options. Let me try to differentiate the various methods from a technical point of view.

 

Standalone products:

  • SAP Data Services
  • SAP LT Replication Server (SLT)
  • Sybase Replication Server (SRS)
  • Direct Extractor Connection (DXC)
  • SAP Process Orchestration/Integration (SAP PI, SAP XI)
  • SAP BW

 

and the Hana options:

  • Hana Smart Data Access (SDA)
  • Hana Smart Data Integration (SDI)
  • Hana Smart Data Streaming (SDS)

 

The first and most important point to consider here is the question itself: how to get data into Hana. If Hana is just one of many sources and targets, or there is no Hana in the picture at all, then the standalone products do make more sense. Or, to argue in the reverse direction, there are so many different options because of the non-Hana scenarios. Okay, it would be odd if the standalone solutions could load everything but Hana, hence they have that capability as well, which creates the confusion.

 

Example: My task is to integrate various systems; the sources are an SAP ERP system, 5 Oracle databases, flat files of various formats, SQL Server, ... and the data is to be loaded into Teradata, Oracle and BW on Hana.

So Hana plays little or no role in it; this sounds like a perfect SAP Data Services scenario. Many sources, many targets and Data Services in the middle.

 

Hana is the sole target

 

A few years ago, if a customer wanted to load e.g. flat files into Hana, he had to install Data Services: a full-blown ETL tool just to load a few files from a mainframe (Cobol Copybook), CSV files with a non-default format and other sources. Only Data Services as an ETL tool provides reading capabilities for essentially every source system and allows the data to be transformed easily so it can be loaded into the target structures. An installation of a full-blown ETL tool just for that.

If realtime is required as well, then SLT or SRS would have to be installed in addition.

In case the source data should be just made available and not copied - the federated/virtual data model use case - SDA would need to be configured as well.

 

So in the end, for e.g. a SQL Server database as source, three products are needed, all with their own installation, their own connector to the source, a totally different look and feel, and different capabilities.

  • Customer wants to perform Data Services like transformations in Realtime? Not possible.
  • Customer wants to perform Data Services like transformations in a CalcView for virtual data models? Not possible.
  • Customer wants to try out one style, e.g. a virtual data model, and then switch to batch data integration for performance reasons and then to realtime data integration for accuracy? No way.
  • Customer wants to administer, maintain and monitor all from Hana? No chance.
  • Customer wants to read from something less common, say MySQL? Only Data Services has that possibility.

 

For these reasons the Hana EIM option Smart Data Integration was developed. Take the best concepts of all the products, merge those with existing Hana features, and quickly you can develop the most powerful and easy-to-use product from the ground up. Yes, SDI does not reuse any code from the old products.

see Playing Lego with SDI

 

Essentially, SDI is an extension of SDA which enhances it by

  • Adapters
    • Running outside of the Hana kernel - hence are not a stability threat for Hana
    • Provide an Adapter SDK to write new adapters easily - every customer has a few common sources SDI provides adapters for plus one exceptional. They can write a new adapter for that within hours/days.
    • Support onpremise and cloud deployments - a Hana cloud can read onpremise data as if it was local
  • Realtime Push
    • Adapters do support select and insert/update/delete but also a realtime push of change data
  • Transformations
    • All Data Services and Hana transformations are available in Hana natively
  • UIs
    • A Data Services like UI for the assembly of dataflows and the configuration of the individual transforms
    • Supports batch reading, realtime transformations and virtual data models
    • Hana Cockpit for monitoring and administration

 

The various SDI adapters

 

Ideally one source would have one adapter. One Twitter adapter to read Tweets. One Facebook adapter to read Facebook posts. One Oracle adapter to read Oracle data.

Some sources do provide different APIs to get data, especially when it comes to realtime. Take Oracle as a first example. To read from it executing SQL selects via JDBC is sufficient. But JDBC certainly does not support realtime. What are the realtime options for Oracle?

  • Adding triggers to the source tables manually
  • Oracle CDC API creating triggers internally
  • Oracle CDC API using streams technology
  • Oracle Streams used directly
  • Oracle GoldenGate
  • Oracle LogMiner

 

It does not make sense to have one adapter supporting all options or one adapter per realtime method. Hence the SDI OracleLogReader adapter is using the LogMiner technology for realtime plus the JDBC driver for normal reads.

Later we might add more adapters to support other realtime APIs as well, in case these are preferred by a customer.

 

SAP ERP is another example where multiple technologies could be used. There is certainly no JDBC driver for SAP ERP, so even reading tables is a challenge already.

  • ABAP Adapter: Reads SAP tables via ABAP, also can read from Extractors and call BAPIs.
  • ECCAdapter: This is a variant of a database adapter, so it does use the database transaction log to get the changes for realtime. But it returns the data in the ABAP data types and deals with pool/cluster tables correctly.
  • SLT Adapter?? This one is missing as of today. It would totally make sense to utilize SLT and its trigger-based realtime approach to get the changes from ABAP tables. See the Idea Place to vote it up if you agree.

What is the best method for loading Hana?

 

As the goal of SDI was to provide a one stop solution for all Data Integration problems with Hana, the correct answer should be SDI. Only SDI ...

  • supports Batch and Realtime and Virtual access
  • is fully integrated with Hana development UIs
  • is integrated with Hana Monitoring Cockpit
  • allows virtual table access (SDA) to its sources
  • provides access to a large number of different sources, databases, SAP systems, applications, cloud apps, internet sources,...
  • supports cloud and onpremise deployment options without any compromise
  • does even support realtime transformations
  • simplifies delta loads thanks to the realtime push of change data
  • allows complex transformations to be performed easily (Data Services like)
  • is using Hana repo for moving-to-production
  • ...

 

Also keep in mind that there are certain features missing as of today (SPS11), like Workflows. This is high on the priority list. The product was first delivered with Hana SPS09, so it is relatively young compared to all the others.

 

But again, SDI is the supposed optimal solution only because we have reduced the question to "loading into Hana".

There are more than enough use cases, even in the Hana world where other tools have the edge.

 

 

 

SAP Data Services

 

For Data Integration from many to many systems with all the related requirements, Data Services is and will be a good choice. Its focus is on batch performance, transformations and connectivity to every possible source/target.

 

SDV12-3jw14-44.JPG

 

SAP LT Replication Server (SLT)

 

The product's sweet spot is ABAP-to-ABAP realtime replication. Supported sources and targets are the databases ABAP runs on, Hana being one of them. Unlike the SDI ECCAdapter, SLT adds database triggers on the original source tables to capture the changes, which has downsides but also upsides.

For ABAP savvy users this is certainly the preferred option still. From a technical point of view 90% of the logic of SLT is what you would call an adapter in SDI. That is the reason why adding an SLT adapter to SDI would make so much sense.

 

SLT_LTRC_DataTransferMonitorAndStatistics.png

 

 

Sybase Replication Server

 

The main use case for SRS is multi-source/target replication. It supports only the most common databases.

 

 

Direct Extractor Connection

 

This had been a simple method to consume Extractors from Hana; it has been completely replaced by the SDA ABAPAdapter.

 

 

SAP Process Orchestration (SAP PI)

 

This tool actually plays in another league: it is an Enterprise Application Integration tool, not a Data Integration tool. Of course the difference between the two types is often marginal; e.g. if a customer master record consists of a single row in the application, there is no difference between moving the record (= row) and moving the master record object (= row). It gets interesting in the complex cases, where one business object consists of many different types of information.

 

Figure_2.gif

 

 

SAP BW

 

With SAP BW on Hana the user is in a very comfortable position. For SAP extractors, for example, SAP BW has native support, so why go another route? Hana data can be read as well. And for non-Hana data, BW uses Hana virtual tables, hence the SDI solution.

Therefore BW should not be considered an alternative; instead, BW can utilize all the other loading options and take advantage of them.

 

 

Hana SDA

 

This feature enables the use of virtual tables in Hana. These are Hana objects that point to an external system in order to blend the remote data into Hana. For example, a SQL Server table CUSTOMER_MASTER can be blended into Hana as the virtual table V_CUSTOMER_MASTER. This virtual table is nothing but a pointer to the remote table, so it does not store any data on the Hana side; all queries are forwarded to the remote system and the data is retrieved from there.
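As a rough sketch of that example (the remote source name is hypothetical and assumed to be configured already):

-- Create the virtual table pointing to the SQL Server table
CREATE VIRTUAL TABLE "V_CUSTOMER_MASTER"
  AT "MSSQL_SOURCE"."<NULL>"."dbo"."CUSTOMER_MASTER";

-- The query is forwarded to SQL Server; only the result comes back to HANA
SELECT COUNT(*) FROM "V_CUSTOMER_MASTER";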

 

SDA provides a set of adapters, all running in the Hana IndexServer and using ODBC drivers. Nowadays the SDI-provided adapters are used instead.

 

fileadapter-browse.png

 

 

Hana SDI

 

This solution is a natural part of Hana and extends Hana SDA. Therefore all Hana products and applications can take full advantage of its capabilities, just as SDI utilizes other Hana features.

 

reptask2.png

 

Hana SDS

 

Hana Smart Data Streaming is a somewhat different case compared to the other data provisioning options. Although it has adapters as well, its main goal is to aggregate the incoming stream of data, mostly by time slices. A good example is the streaming of weblog changes. Instead of storing billions and billions of raw weblog records in Hana, it might make more sense to extract key information from the raw data and store that instead. How many users have viewed a web page within 10-minute intervals?

Therefore the SDS-provided adapters are focused more on these kinds of sources, which produce millions of rows per second, and not on database sources. SDS does support reading from databases as well, but just to look up master data, not to subscribe to database changes.

 

maxresdefault.jpg

Make the Web IDE work on your local HANA platform


Inspired by Chaim Bendelac’s excellent blog Developing with XS Advanced: A TinyWorld Tutorial I wanted to explore developing with XS Advanced further.

 

One precondition is the Web IDE for HANA; with Make SAP HANA XS Advanced work for you and How to update your sandbox HANA system to SPS 12 already in the bag, this was relatively easy to achieve.

 

In fact, I used this opportunity to update my HANA system to Revision 121 and my XS Advanced Runtime to version 1.0.28 as well as XS Monitoring, XS Services and HANA Runtime Tools to service pack 2.

 

One important difference to a standard HANA installation or update is the requirement of parameter xs_components as per Note 2304873 - SAP Web IDE for SAP HANA SPS12 - Release Note:

Terminal.png

Then I select the Web Ide and DI CORE packages, together with the other packages mentioned above and start the update process:

Select Software Component Locations.png

This runs for a while, but eventually the Web Ide 1 component gets installed:

SAP Web Ide 1.png

Eventually all finishes successfully:

SAP HANA components updated.png

And from the HANA XS Advanced Command Line tool, I can see that my Web IDE for HANA is now available:

webide.png

So, I log on and start developing:

Web IDE.png

Have fun

Troubleshooting Dynamic Tiering Connections to SAP HANA


Sometimes you may get a connection error when converting a table to extended storage:

 

ALTER TABLE "DT_SCHEMA"."TABLE_A" USING EXTENDED STORAGE

 

[4863] {207439} [95/10731633] 2016-05-06 17:50:26.455132 e FedTrace

odbcaccess.cpp(03672) : ODBC error: connected:  1 state: HY000 cide: -65 [SAP]

[ODBC Driver] Unable to connect to server 'HANA': [SAP AG] [LIBODBCHDB SO] [ HDBODBC]

General error;1033 error while parsing protocol


In this case, the dynamic tiering host can't communicate with SAP HANA to fetch the contents of the table needed for the conversion.

 

Enable Traces

 

A first step is to enable traces. Enable the traces below, then run the ALTER TABLE statement again:

 

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('authentication', 'SapLogonTicketTrace') = 'true' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('trace', 'saptracelevel') = '3' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('trace', 'authentication') = 'debug' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('trace', 'crypto') = 'debug' WITH RECONFIGURE;

 

Collect the required traces and contact SAP Support to resolve the communications problem.

 

Check for Different Certificate Sizes

 

Certificate sizes on the dynamic tiering and SAP HANA hosts should match. Certificate mutual authentication and the SSL protocol secure internal communications between the hosts. Different certificate sizes indicate inconsistent certificates that could cause connection problems.

 

Log in as SIDadm on the dynamic tiering host and enter these commands:

  1. cdglo
  2. cd security/rsecssfs/data
  3. ls -al sap_system_pki_instance.*

Note the file sizes. The file sizes should match those on the HANA host.

 

Log in as SIDadm on the SAP HANA host and enter these commands:

  1. cdhdb
  2. cd $SECUDIR
  3. ls -al sap_system_pki_instance.*

Do the file sizes match those on the dynamic tiering host? If not, contact SAP Support.

 

Note: For these certificates to work correctly, you need to synchronize the clocks on the SAP HANA and dynamic tiering hosts. Run the date command on each host.

HANA Tips & Tricks: Issue #2a - Digging into Table Variables


About this post

Following the success of our first "HANA Open Mic" session (sounds cool, doesn't it?) at Just-BI, initiated by Glenn Cheung, we had a second round of discussions and, as promised, here is another blog post touching upon table variables in stored procedures. As part of our second series of blog posts, Roland Bouman has also written an interesting article about Calculating Easter and related holidays with a HANA scalar function.


Working with Table Variables in Stored Procedures

 

Table variables are often used as output parameters in Stored Procedures and Scripted Calculation Views. If it is just one record/row to be inserted into this table, it can be quite simple. But if multiple records need to be inserted (after some data manipulation), then this needs a slightly different approach. Roland Bouman initiated this discussion as he thought: "How difficult can this be? A simple INSERT statement should do the trick". But he ultimately found out that an INSERT statement cannot be used on table variables. Too bad!

 

The below example illustrates this use-case and rather unexpectedly, running the CREATE PROCEDURE statement results in a compile-time error.

 

create procedure pr_test_table_var0(
  out p_tab table (
    id integer,
    name varchar(32)
  )
)
language sqlscript
sql security invoker
as
begin
  declare v_index integer;

  for v_index in 1..3 do
    insert
      into p_tab (id, name)
    values (v_index, 'name'||v_index);
  end for;

end;

 

The error message stated the following:

Could not execute 'create procedure pr_test_table_var0( out p_tab table ( id integer, name varchar(32) ) ) language ...' in 55 ms 29 µs .

  SAP DBTech JDBC: [259] (at 239): invalid table name:  Could not find table/view P_TAB in schema RBOUMAN: line 15 col 14 (at pos 239)

 

We then discussed the options available to achieve this and figured out that there are multiple solutions to this. Here is a brief write-up about these options with examples.

 

Option 1: Union All

This approach relies on assigning the result of a SELECT statement to the table variable. The SELECT statement itself uses a UNION ALL to append the required rows, to the contents of the table variable. For a newbie like me, it may also be worth mentioning that in order to SELECT the contents of a table variable, the name of the table variable must be prefixed by a colon. The drawback of this approach is repeated copying of all the contents of the table variable.

 

drop procedure pr_test_table_var1;

create procedure pr_test_table_var1(
  out p_tab table (
    id integer,
    name varchar(32)
  )
)
language sqlscript
sql security invoker
READS SQL DATA
as
begin
  declare v_index integer;

  for v_index in 1..3 do
    p_tab = select *
            from :p_tab
            union all
            select v_index as id
            ,      'name'||v_index as name
            from dummy
            ;
  end for;

end;

call pr_test_table_var1(?);

 

Option 2: Arrays

In this option, an array first needs to be created for each column of the table variable; these arrays are filled with scalars, one for each row, and finally all the arrays are merged into a table variable with UNNEST. This solution can get quite complex and tricky, and may not be the best for all requirements.

drop procedure pr_test_table_var2;

create procedure pr_test_table_var2(
  out p_tab table (
    id integer,
    name varchar(32)
  )
)
language sqlscript
sql security invoker
READS SQL DATA
as
begin
  declare v_index integer;
  declare v_ids integer array;
  declare v_names varchar(32) array;

  for v_index in 1..3 do
    v_ids[v_index] = v_index;
    v_names[v_index] = 'name'||v_index;
  end for;

  p_tab = unnest(:v_ids, :v_names)
          as (id, name);
end;

call pr_test_table_var2(?);


Option 3: Local Temporary Table

In this option, a local temporary table is created with the same structure as the table variable. An INSERT is allowed on temporary tables. Thus, all the data manipulation can be performed and the rows can first be inserted into a temporary table. The final result set can then be simply assigned to the table variable using a SELECT statement. I would then DROP this temporary table because if you do not, then you will be surprised to find out that you get a run-time error if the table already exists. Another syntax worth noting here is that a local temporary table name should always start with a '#'.

drop procedure pr_test_local_temp_table;

create procedure pr_test_local_temp_table(
  out p_tab table (
    id integer,
    name varchar(32)
  )
)
language sqlscript
sql security invoker
as
begin
  declare v_index integer;

  create local temporary table "#TAB" as (
    select * from :p_tab
  );

  for v_index in 1..3 do
    insert into #TAB (id, name)
    values (v_index, 'name'||v_index);
  end for;

  p_tab = select * from "#TAB";

  drop table "#TAB";
end;

call pr_test_local_temp_table(?);

 

Notice that the "UNION ALL" and the "Array" options are read-only. So in that sense, the "Local temporary table" solution has the disadvantage that it cannot be read-only.

 

To summarize, option 3 seems to be an efficient and easy way of working with table variables. This is an approach we would follow even with ABAP. For example, I could correlate this option to an ABAP function module, wherein an internal table is defined, data is manipulated/derived and stored in this internal table and ultimately, this internal table is assigned to the export parameter of the function module.

 

That's all for now about table variables. Hope you enjoyed this post. Do watch out for more by tracking the hanatipsandtricks tag.

Custom MTA Fiori Worklist template


For those building apps with on-premise native HANA, the Web IDE is now available from SPS11 onwards.

See the Developing with XS Advanced: A TinyWorld Tutorial  for a great intro.


The minor downside, at the moment, is the limited number of templates currently available.

Below is a screen shot, from SPS12, showing only the MTA project template available.


In HCP by comparison there is a larger list of templates, including my personal favourite for desktop applications, the 'SAP Fiori Worklist application'


 


In this blog I've copied the Worklist code from HCP and included a simplified version of the 'Northwind' example OData service, for Orders, with mock data for testing.


The complete code is available to download at worklistTemplateNative.zip - Google Drive


Below is a screen shot of the Fiori Worklist App, running in SPS12 with the Mock data for testing.


The HCP templates are also set up with a nice framework to start building out your test cases.

No excuses now anyone ;-)

 

 

 

Below is the folder structure of this hybrid template, which includes a simplified version of the Northwind tables and a custom OData service.



Below is the Northwind Order table definition and Odata service.



For testing with the Mockserver I've included the Northwind metadata.xml and Orders.json test data.


I'm still working out the best way to do this, but to avoid hardcoding the SAPUI5 libraries in each of the HTML files, I've added properties in the mta.yaml as follows.


I've defined the 'ui5_library' property as a variable in the xs-app.json file.


Finally, in each of the HTML files I use the ui5_library variable to substitute the library at run time.


For testing I typically used the "/test/flpSandboxMockServer.html"


However, to get the full experience with the functioning tables and OData service, use the main 'index.html', where you may hit errors similar to

"Error: Unsupported content type: multipart/mixed;boundary". See the similar SCN issue https://scn.sap.com/thread/3904438


Following some helpful guidance from Thomas Jung: this is a known issue which may occur in the Web IDE, but it can be overcome by deploying the application along the lines of the steps in '2.1.4 Deploy the "Tiny World" Application' in the XSA Dev guide. http://help.sap.com/hana/SAP_HANA_Developer_Guide_for_SAP_HANA_XS_Advanced_Model_en.pdf


I hope you find this Template useful, while we wait for more Templates to be delivered in subsequent releases.

HANA TA for Hybris Ecommerce - Why Google??


Context Setting

Alright, so let's pick an ecommerce site, say Levi's Great Britain: http://www.levi.com/GB/en_GB/category/men/collections/levi-collections-whats-new-men

Screen Shot 2016-07-13 at 1.49.00 PM.png

On the top right we see a generic search, which may use Apache Solr. Below it we see the structured search.

If I wanted to search for "Levis Great Britain Mens new arrival Slim 32", I add "Great Britain" only to set the regional context. So let's analyze the different ways in the current system:

1. On the Levi's site, the search "Mens New Arrival Slim 32" does not provide any results

Screen Shot 2016-07-13 at 1.53.53 PM.png

2. On the Levi's site, the search "Slim 32" provides a lot of data, much of it unrelated. For example, "Boot Cut" is displayed whereas I was looking for "Slim"

Screen Shot 2016-07-13 at 1.54.19 PM.png

3. What if we use Google? Many do this for most online shopping. The top 10 results provide a lot of context

Screen Shot 2016-07-13 at 1.58.42 PM.png

Screen Shot 2016-07-13 at 1.58.53 PM.png

 

So, the question is: what can be done here? Why send users to Google, when we can implement a change that keeps them from leaving the site and possibly getting distracted by competing offers?

Proposal

Using Text Analysis in HANA:

  1. Create dictionaries and rules for the product filter dimensions and their values (a SQL sketch of the index setup is shown after this list)
  2. Dictionaries help identify the filter dimension from the value typed in. Example: "Slim" would automatically map to "FIT".
  3. Rules come into action when a value does not uniquely identify the filter dimension and natural language processing is needed. Example: in "Slim 32", "32" could stand for waist or length, so unless qualified it will be used to filter both waist and length. With a rule, "32" is mapped to "waist" only for phrases like "Slim and Waist 32" or "Slim with 32 waist", etc.
  4. Use the XS API at runtime to apply the configuration to the query and derive the filter route
  5. If a filter route is derived, feed the converted route from the unstructured search into the structured search
  6. If no filter route is derived because the query was not identified, feed the string to the free-text search of the underlying framework
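
As a rough illustration of steps 1, 2 and 4: Text Analysis in HANA is typically wired up through a full-text index that references a custom text analysis configuration, which in turn bundles the custom dictionaries and rules. The schema, table and configuration names below are assumptions for illustration only, not the actual POC artefacts.

-- minimal sketch, assumed names: run text analysis on the incoming search phrases
create column table "SHOP"."SEARCH_QUERY" (
  query_id   integer primary key,
  query_text nvarchar(200)
);

create fulltext index "IDX_SEARCH_QUERY" on "SHOP"."SEARCH_QUERY"("QUERY_TEXT")
  configuration 'shop.search::filterDimensions'  -- assumed custom configuration referencing the dictionaries and rules
  text analysis on;

The extracted tokens then land in the generated "$TA_IDX_SEARCH_QUERY" table, which the XS layer can evaluate to derive the filter route.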

POC Snap Shots:


 

Scenario 1: Searching on "Slim 32" results in "Slim" being assigned to "FIT" and "32" being assigned to both "WAIST" and "LENGTH". Conversion from unstructured to structured search happens.

Screen Shot 2016-07-13 at 2.09.34 PM.png

Screen Shot 2016-07-13 at 2.09.56 PM.png
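
Under the assumed setup sketched in the proposal section, the token output that drives this scenario can be inspected in the generated $TA table; a query along these lines (standard $TA columns, the dictionary types FIT/WAIST/LENGTH are assumptions) shows what the XS layer has to work with.

-- minimal sketch: inspect the tokens extracted for the phrase 'Slim 32'
select ta_counter, ta_token, ta_normalized, ta_type
from "SHOP"."$TA_IDX_SEARCH_QUERY"
where query_id = 1
order by ta_counter;
-- illustrative outcome only: 'Slim' carries the type FIT, while '32'
-- shows up for both WAIST and LENGTH until a rule disambiguates it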

 



Scenario 2: Searching on "Originals waist 32" results in "Originals" being normalized to its base form "Original", which maps to "FIT", while "32" is assigned only to "WAIST". Conversion from unstructured to structured search happens.

Screen Shot 2016-07-13 at 2.10.32 PM.png

Screen Shot 2016-07-13 at 2.10.25 PM.png

 


 

Scenario 3: Searching on "Shirts" results in a handover to the existing platform search environment, as this item type is currently not mapped within Text Analysis in HANA.

Screen Shot 2016-07-13 at 2.10.50 PM.png

Screen Shot 2016-07-13 at 2.11.00 PM.png
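
The fallback in this scenario can be expressed as a simple check on the $TA output (again a sketch under the assumed names from above): if none of the extracted tokens carries one of the mapped dimension types, the original string is handed over to the platform's free-text search.

-- minimal sketch: no mapped dimension types found -> hand over to the existing search
select count(*) as mapped_tokens
from "SHOP"."$TA_IDX_SEARCH_QUERY"
where query_id = 3
  and ta_type in ('FIT', 'WAIST', 'LENGTH');
-- mapped_tokens = 0 means no filter route; pass the search string through unchanged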

 

 


 

Please see the video of the POC in action:


Integrating Pivotal Cloud Foundry to HANA using SAP Cloud HANA Service Connector


Introduction

 

Lately we have been looking into the possibilities of tapping into a HANA database platform from our on-premise Pivotal Cloud Foundry (PCF) PaaS for an upcoming project. There are a number of ways we can read data from HANA in PCF. One of them is to use a cloud connector which connects to HANA with the JDBC API. Luckily SAP have provided a HANA service connector for this:

 

spring-cloud-sap/spring-cloud-cloudfoundry-hana-service-connector at master · SAP/spring-cloud-sap · GitHub

 

If, like me, you are a newbie to PCF, it may not seem obvious at the outset how to use it. In this blog I will demo how it can be utilized. At a high level we will do the following:

  • Create a user provided service which will contain our HANA db connection details and credentials.
  • Create a Spring Boot application which reads data from the HANA db and displays it in a simple HTML page.
  • Bind the HANA db service to our application.

 

Prerequisites

 

  • A HANA database
  • A PCF instance
  • Spring Tool Suite (STS)

 

HANA DB service

 

The first step is to create a user provided service in PCF for the HANA database. This will contain our connection details and user credentials for connecting to HANA.


See here for detailed information on creating user provided services in PCF - http://docs.pivotal.io/pivotalcf/1-7/cf-cli/getting-started.html#user-provided


  • Log in to your PCF instance with the cf CLI in a command prompt and execute the following command:


               cf cups hana-db -p "hostname, port, user, password, schema, url"


    • This will prompt you to enter the hostname, port, user, password, schema and url for the HANA db that you want to connect to.
    • The url should be in the form of jdbc:sap://<server>:<port>[/?<options>] e.g. "jdbc:sap://myserver:30015/?autocommit=false"
      • The port should be 3<instance number>15, for example 30015 if the instance number is 00 (a query to look up the SQL port is sketched below the note).


Note: the url parameter is important, as this is what the HANA service connector uses to identify the datasource.
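
If you are not sure which port the HANA SQL interface is listening on, it can be looked up on the HANA system itself; a query along these lines (run, for example, in HANA Studio) returns the SQL port per service:

-- the indexserver row shows the port to use in the JDBC url, e.g. 30015 for instance 00
select host, service_name, sql_port
from sys.m_services
where sql_port <> 0;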


Once created, if you log in to your Apps Manager you should see the below entry in your list of Services:



hana-db user provided service.PNG

 

Demo spring boot application that uses the HANA service connector

 

  • In STS create a new project as follows:
    • File -> New -> Spring Starter Project

Spring Starter Project.PNG

 

    • Click Next and on the next screen click Finish
    • This will create the project with below structure:

project structure.PNG

 

  • Update the pom.xml to include the required dependencies (specifically the dependencies with the following artifactIds):
    • spring-boot-starter-thymeleaf
    • spring-cloud-cloudfoundry-connector
    • spring-cloud-spring-service-connector
    • spring-cloud-core
    • spring-cloud-cloudfoundry-hana-service-connector
    • commons-dbcp

 

<?xml version="1.0" encoding="UTF-8"?><project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">  <modelVersion>4.0.0</modelVersion>  <groupId>com.pm.hana</groupId>  <artifactId>demo-hana-service-connector</artifactId>  <version>0.0.1-SNAPSHOT</version>  <packaging>jar</packaging>  <name>demo-hana-service-connector</name>  <description>demo-hana-service-connector</description>  <parent>  <groupId>org.springframework.boot</groupId>  <artifactId>spring-boot-starter-parent</artifactId>  <version>1.3.6.RELEASE</version>  <relativePath/> <!-- lookup parent from repository -->  </parent>  <properties>  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>  <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>  <java.version>1.7</java.version>  </properties>  <dependencies>  <dependency>  <groupId>org.springframework.boot</groupId>  <artifactId>spring-boot-starter</artifactId>  </dependency>  <dependency>  <groupId>org.springframework.boot</groupId>  <artifactId>spring-boot-starter-test</artifactId>  <scope>test</scope>  </dependency>  <dependency>            <groupId>org.springframework.boot</groupId>            <artifactId>spring-boot-starter-thymeleaf</artifactId>        </dependency>  <dependency>     <groupId>org.springframework.cloud</groupId>     <artifactId>spring-cloud-cloudfoundry-connector</artifactId>     <version>1.2.0.RELEASE</version>  </dependency>  <dependency>     <groupId>org.springframework.cloud</groupId>     <artifactId>spring-cloud-spring-service-connector</artifactId>  </dependency>  <dependency>     <groupId>org.springframework.cloud</groupId>     <artifactId>spring-cloud-core</artifactId>  </dependency>  <dependency>     <groupId>com.sap.hana.cloud</groupId>     <artifactId>spring-cloud-cloudfoundry-hana-service-connector</artifactId>     <version>1.0.4.RELEASE</version>  </dependency>  <dependency>     <groupId>commons-dbcp</groupId>     <artifactId>commons-dbcp</artifactId>     <version>1.4</version>  </dependency>  </dependencies>  <build>  <plugins>  <plugin>  <groupId>org.springframework.boot</groupId>  <artifactId>spring-boot-maven-plugin</artifactId>  </plugin>  </plugins>  </build></project>

 

  • You should see the Maven Dependencies libraries are now updated.
  • Add a manifest.yml file with the following details.
    • Take a note of the "hana-db" defined under services. This will bind the hana-db service to this application.

 

applications:
- name: demo-hana-service-connector
  instances: 1
  host: demo-hana-service-connector
  services:
    - hana-db
  env:
    SPRING_PROFILES_DEFAULT: cloud
  • Add the HANA JDBC Driver ngdbc.jar file in a folder called lib under src/main/resources. Note: if you have HANA Client installed you can get the ngdbc.jar from your Program Files.

 

ngdbc_jar.PNG

 

  • Add a Configuration class where we will define our dataSource bean. Note the "hana-db" service is passed as a String parameter to retrieve the dataSource. @Configuration indicates that the class can be used by the Spring IoC Container as a source of bean definitions. @Profile("cloud") ensures the configuration is loaded only in a cloud environment.

 

package com.pm.hana;
import javax.sql.DataSource;
import org.springframework.cloud.CloudException;
import org.springframework.cloud.config.java.AbstractCloudConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
@Configuration
@Profile("cloud")
public class CloudConfig extends AbstractCloudConfig
{
    @Bean
    public DataSource dataSource()
    {
        DataSource retVal = null;
        try
        {
            return connectionFactory().dataSource("hana-db");
        }
        catch (CloudException ex)
        {
            ex.printStackTrace();
        }
        return retVal;
    }
}
  • Create a simple HanaDatabase object class to store the database information we retrieve from the HANA db.

 

package com.pm.hana;
public class HanaDatabase {

    String databaseName;
    String usage;

    public String getDatabaseName() {
        return databaseName;
    }

    public void setDatabaseName(String databaseName) {
        this.databaseName = databaseName;
    }

    public String getUsage() {
        return usage;
    }

    public void setUsage(String usage) {
        this.usage = usage;
    }
}

 

  • Create a Controller class to handle the web requests. Note that the dataSource defined in the CloudConfig class is autowired into our class. The home method is annotated with @RequestMapping("/home") to process requests to the "/home" path. Among other things, this calls the getHanaDatabaseInfo method, which pulls data from the sys.m_database table and adds it to the model attributes before returning the home view (home.html).

 

 


package com.pm.hana;
import java.net.URI;
import java.net.URISyntaxException;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
@SpringBootApplication
@Controller
public class HomeController
{
    private static Logger log = LoggerFactory.getLogger(HomeController.class);

    @Autowired(required = false)
    DataSource dataSource;

    @RequestMapping("/home")
    public String home(Model model)
    {
        Map<Class<?>, String> services = new LinkedHashMap<Class<?>, String>();
        HanaDatabase hanaDatabase = null;
        if (dataSource != null)
        {
            services.put(getClass(dataSource), toString(dataSource));
            hanaDatabase = getHanaDatabaseInfo();
        }
        model.addAttribute("services", services.entrySet());
        model.addAttribute("hanaDatabase", hanaDatabase);
        return "home";
    }

    private HanaDatabase getHanaDatabaseInfo()
    {
        HanaDatabase hanaDatabase = null;
        Connection conn = null;
        try
        {
            conn = dataSource.getConnection();
            Statement stmt = conn.createStatement();
            ResultSet resultSet = stmt.executeQuery("select database_name, usage from sys.m_database");
            resultSet.next();
            hanaDatabase = new HanaDatabase();
            hanaDatabase.setDatabaseName(resultSet.getString(1));
            hanaDatabase.setUsage(resultSet.getString(2));
        }
        catch (SQLException ex)
        {
            log.info("SQLException: " + ex);
        }
        finally
        {
            if (conn != null)
            {
                try
                {
                    conn.close();
                }
                catch (SQLException e) {} // we are screwed!
            }
        }
        return hanaDatabase;
    }

    private String toString(DataSource dataSource)
    {
        if (dataSource == null)
        {
            return "<none>";
        }
        else
        {
            Connection conn = null;
            try
            {
                conn = dataSource.getConnection();
                DatabaseMetaData metaData = conn.getMetaData();
                return stripCredentials(metaData.getURL());
            }
            catch (Exception ex)
            {
                return "<unknown> " + dataSource.getClass();
            }
            finally
            {
                if (conn != null)
                {
                    try
                    {
                        conn.close();
                    }
                    catch (SQLException e) {
                        log.info("SQLException: " + e);
                    } // we are screwed!
                }
            }
        }
    }

    private String stripCredentials(String urlString)
    {
        try
        {
            if (urlString.startsWith("jdbc:"))
            {
                urlString = urlString.substring("jdbc:".length());
            }
            URI url = new URI(urlString);
            return new URI(url.getScheme(), null, url.getHost(), url.getPort(), url.getPath(), null, null).toString();
        }
        catch (URISyntaxException e)
        {
            System.out.println(e);
            return "<bad url> " + urlString;
        }
    }

    private static Class<?> getClass(Object obj)
    {
        if (obj != null)
        {
            return obj.getClass();
        }
        else
        {
            return null;
        }
    }
}

  • Under src/main/resources add a templates folder and add the home.html file.

 

<!DOCTYPE HTML>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>Getting Started: Serving Web Content</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <link rel="stylesheet" href="https://netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css"/>
</head>
<body>
  <div class="container">
    <br/>
    <div class="jumbotron">
      <h1>PCF HANA Service Connector</h1>
      <p>Demo PCF Spring application that uses <a href="http://projects.spring.io/spring-cloud">Spring Cloud</a> and
         <a href="https://github.com/SAP/spring-cloud-sap/tree/master/spring-cloud-cloudfoundry-hana-service-connector">HANA Service Connector</a>
         to connect to a HANA database</p>
    </div>
    <h2>Cloud Services</h2>
    <table class="table table-striped">
      <thead>
        <tr>
          <th>Service Connector Type</th>
          <th>Connection address</th>
        </tr>
      </thead>
      <tbody>
        <tr th:each="service : ${services}">
          <td><strong th:text="${service.key.name}"></strong></td>
          <td th:text="${service.value}" />
        </tr>
      </tbody>
    </table>
    <h2>Database Info</h2>
    <div class="row">
      <strong class="col-sm-2">Database Name:</strong>
      <p class="col-sm-10" th:text="${hanaDatabase.databaseName}">databaseName</p>
    </div>
    <div class="row">
      <strong class="col-sm-2">Usage:</strong>
      <p class="col-sm-10" th:text="${hanaDatabase.usage}">usage</p>
    </div>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
    <script src="https://netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
  </div>
</body>
</html>
  • Your final project structure should look like this:

Final structure.PNG

 

 

  • Deploy the application to PCF by dragging the project to your PCF Server in STS (or via the cf CLI).

 

deploy 1.PNG

  • In the next screen, note that I have set the Memory Limit of the application to 4 GB, which resulted in a heap size of 3 GB. This is to allow for the heap size requirements of the HANA JDBC connection; at the default of 512 MB I was getting "Out of Memory - java heap space" errors.

 

deploy 2.PNG

  • The next screen shows the available services that can be bound to the application. "hana-db" will be selected by default, as specified in our manifest.yml file.

deploy 3.PNG

deploy 4.PNG

 

  • After the application is deployed, you should see the below in your console logs, showing that the application has started:

 

2016-07-13 14:39:30.016  INFO 14 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2016-07-13 14:39:30.024  INFO 14 --- [           main] .p.h.DemoHanaServiceConnectorApplication : Started DemoHanaServiceConnectorApplication in 8.624 seconds (JVM running for 9.498)
healthcheck passed
Exit status 0
Container became healthy
[Application Running Check] - Application appears to be running - demo-hana-service-connector.
  • Now let's go to our PCF Apps Manager again and inspect our new application.

 

  • As we can see the app is up

 

demo-hana-service-connector app.PNG

 

 

  • If we go to the Services tab we can see that the hana-db service is bound to the app:

 

demo-hana-service-connector app - bound services.PNG

 

 

 

  • And if we look at the Env variables tab we can now see the user provided credentials supplied by the hana-db service:

 

demo-hana-service-connector app - env variables.PNG

 

 

Demo test.PNG

 

Browsing to the application's /home path renders the home.html page. We can see that it displays the Cloud Services and the Database Information that we read from the HANA db. Success!

 

While this was implemented on Pivotal Cloud Foundry, it should be applicable to any Cloud Foundry enabled PaaS.

 

References

 

GitHub - SAP/spring-cloud-sap: Spring Cloud Connectors for SAP HANA Cloud Platform (HCP) and SAP HANA DB platform

spring-cloud-sap/spring-cloud-cloudfoundry-hana-service-connector at master · SAP/spring-cloud-sap · GitHub

GitHub - SAP/cloud-hello-spring-cloud: Simple sample demonstrating the usage of Spring Cloud Connectors

Pivotal Docs

Connect to SAP HANA via JDBC - SAP HANA Developer Guide for SAP HANA Studio - SAP Library

Introducing Spring Cloud

My first experiences with the on premise Web IDE for HANA


After having made the Web IDE work on my local HANA platform, I started developing with XS Advanced following the TinyWorld tutorial. In this blog I describe my first experiences with the Web IDE for HANA.

 

Project

Creating an XS Advanced (XSA) project is supported by templates, but currently only one template exists, the Multi-Target Application Project, which simply creates a Multi-Target Application (MTA) descriptor file. Templates like the Fiori Master Detail Application, which I like because it fits the requirements of many simple out-of-the-box scenarios, are currently missing, and the MTA descriptor file has to be edited manually for the application configuration.

 

HANA database (HDB) modules

The full integration of Core Data Services (CDS), including the build services, into the Web IDE for HANA makes it easy to create any type of supported HDB module, especially with the option to seamlessly switch between the graphical and the text editor:

CDS Artifact.png

The icing on the cake that would ease working with the text editor would be a beautify function to augment the rudimentary syntax highlighting, as there is for the JavaScript artifacts.

 

Business Logic

Since the only available project template currently does not create any business logic, this has to be added as either Node.js or SQLScript code. The integrated JavaScript editor has syntax highlighting and code completion with fully integrated deployment functionality, and therefore reduces the need for and the usefulness of an external editor.

 

User Interface

Since the only available project template currently also does not create a user interface, this has to be added in either simple HTML or, of course, SAPUI5. The integrated editor for this is rather basic, but the deployment is again very nicely integrated.

 

OData

Exposing an OData service based on a graphically modelled Calculation View is a matter of simply adding an xsodata file. Unfortunately, however, the currently created metadata file is very basic and does not support the addition of OData4SAP annotations:

metadata.png

Frameworks like the Fiori Overview Page (OVP) Cards, however, depend on specific annotations, for example the sap:label annotation for the axis labels in OVP Line Chart Cards.

 

Summary

With the Web IDE for HANA, SAP have come a long way from the HANA SPS11 days, where XSA artefacts had to be created solely with external tools and deployed via the XS Advanced Command Line Interface (CLI). Especially the integration of the build and deployment tools into the Web IDE for HANA is excellent. To simplify the development of real-world applications, however, more templates would be required, like the ones currently available on the HANA Cloud Platform (HCP). A user interface for the application configuration would also reduce the probability of errors when manually editing the MTA descriptor file. Finally, OData4SAP annotation support would be needed to leverage frameworks like the Fiori OVP Cards.



<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>