
Announcing the Free download of SAP HANA + SUSE Linux Virtual Machine


Doing a startup is a risky business. According to a study by Mashable, 75% of all startups and 90% of all products fail. At SAP Startup Focus, we are in the business of helping startups improve those odds considerably. We do this by providing them with an amazing software platform, SAP HANA, on which they can build their solutions, and by helping them with the go-to-market side of things. Currently we have over 1,700 startups from more than 55 countries building solutions on our HANA platform, with 130 validated solutions available for sale to customers.

 

Compared to a few years ago, it has indeed become a lot easier and a lot cheaper to build an application you can take to market or shop around to investors. With the widespread adoption of the Cloud and the increasing use of Mobile as a consumption mechanism, even scrappy startups can now develop sophisticated products without taking out a second mortgage on the house.

 

And that is where our very valued partners at SUSE come in. They understand unequivocally that for startups to succeed in a brutally competitive global environment, they not only need access to cutting-edge technology like SAP HANA but also need it in a manner that is uniquely suited to their geographic and financial situation. For example, in several emerging markets access to commercial-grade bandwidth remains an ongoing issue, which means that developing on a platform that is only available in the Cloud remains a logistical challenge.

 

Hence, we are very proud to announce that starting right now, qualified startups in the SAP Startup Focus program will be eligible to receive, for a 6-month period, a comprehensive developer version of SAP HANA on SUSE Linux Enterprise Server as a single, downloadable Virtual Machine (VM). A single VM reduces the barriers to adoption for the SAP HANA platform, and will allow startups to quickly evaluate and rapidly prototype solutions while running a developer version of SAP HANA on their local machine. This will significantly reduce cost and increase the agility of a startup in the initial product development phase of its lifecycle.

 

Additionally, startups will receive free training via SUSE’s Certified Linux Administrator (CLA) Academy, a $2,375 USD value. Pre-sales electronic support and membership in SUSE’s partner program operated by PartnerNet®, which provides access to additional SUSE software, are also included.

 

To download the VM, please visit the SAP HANA Marketplace, the destination for startups to discover, try and buy the SAP HANA platform as well as applications based on SAP HANA, located at http://marketplace.saphana.com/p/3377


Trigger based approach in SLT


SLT uses a trigger-based replication concept.

DB triggers on the source tables control the delta mechanism.

A logging table records the last data record loaded, which keeps the delta handling working properly during SLT replication.

The read module pushes the data into the replication server.

If any transformation is needed, it is handled by the transformation engine, and the write engine pushes the data into the application tables in SAP HANA.
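As a purely conceptual illustration (SLT generates its own triggers and logging tables automatically; the table, trigger and column names below are hypothetical), a recording trigger on a source table might look roughly like this:

CREATE TABLE LOG_MARA (
  MATNR     NVARCHAR(18),   -- key of the changed record
  OPERATION CHAR(1)         -- 'I' insert, 'U' update, 'D' delete
);

CREATE TRIGGER TRG_MARA_UPDATE
AFTER UPDATE ON MARA
REFERENCING NEW ROW newrow
FOR EACH ROW
BEGIN
  -- the read module later picks up the logged keys and replicates those rows
  INSERT INTO LOG_MARA VALUES (:newrow.MATNR, 'U');
END;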

 

 

 

 

Hope this helps understand the trigger-based approach.



How to Install HANA Studio on MAC OS X Yosemite


a03.png

 

SAP HANA Studio for MAC = eclipse + SAP plugins

 

Well, you have probably landed here because you cannot find the HANA Studio installer for the OS X platform which has mysteriously disappeared from the SAP HANA download website. Surprise, surprise...

 

Until it gets listed again (I would not hold my breath), another way to run HANA Studio on a Mac is to install SAP HANA Tools. I tried it out myself and found some issues that I will highlight here, since the installation should otherwise be a simple and straightforward process. After all, HANA Studio is basically eclipse + some plugins (i.e. SAP HANA Tools).

 

Below, I have listed the steps to follow in order to get HANA Studio up and running on Mac running Yosemite (OS X 10.10.1).

 

 

Step 1: Prerequisites

 

The JAVA runtime environment is required to run eclipse, so you will have to install it first. The question is: which one? On the SAP HANA Tools website, it is recommended to use JRE version 1.6+. But for Mac users, it is not as straightforward as on Windows, because Apple supplies its own version of Java for Java versions 6 and below. If prompted by the OS, do not install JAVA for OS X (http://support.apple.com/kb/dl1572) as it is not compatible with eclipse Luna.

 

Since I have a general preference for using the latest version of JAVA (security), I will be installing JAVA SE 8u25. The JRE should do just fine to run eclipse but I am going with the JDK since I need it for other software development purposes as well.

 

 


b01.png

Step 2: Installing eclipse


The next step is to install the eclipse IDE.

 

c01.png

 


Step 3: Installing SAP HANA Tools

 

Third and final step, you will need to install SAP HANA Tools in eclipse.

 

  • Open Eclipse IDE
  • Select a workspace at the prompt
  • Click on Help> Install New Software

d01.png

  • If you do not see "The Eclipse Project Updates" in the dropdown menu of available sites, add it ("http://download.eclipse.org/eclipse/updates/4.4") and then select it.
  • Make sure the checkbox is checked for "Contact all update sites..."
  • Select Eclipse SDK. If you do not, you will encounter a Missing requirement AFL connector error when you install the HANA Tools later on in the process.
  • Click Next, agree to the license agreement then Finish

e01.png

  • SAP Network users: you will encounter a Repository not found error if you are on the SAP-Corporate network as access to the repository is blocked. Therefore you will have to switch to another network (i.e. SAP-Internet) and repeat the previous steps if needed. Others can proceed to the next step.
  • If successful, you will be prompted to restart eclipse
  • Open eclipse again
  • Select a workspace at the prompt
  • Click on Help> Install New Software
  • Add the following repository: https://tools.hana.ondemand.com/luna
  • Make sure the checkbox is checked for "Contact all update sites..."
  • Select SAP HANA Tools. If you did not install the eclipse SDK plugin in the previous step, you will encounter a Missing requirement AFL connector error here. Otherwise, you should be able to install SAP HANA Tools without any problem.
  • Click Next, agree to the license agreement then Finish

f01.png

 

You will be prompted to re-start eclipse, which means that the installation completed successfully.

 

Now, you can simply connect to a HANA system and get started with the fun part of development.

It’s here! HANA SPS09 – check out some of the awesome new options


Detect and Act: Insight from Event Streams

SAP HANA smart data streaming allows you to extract insight from real-time information streams and respond immediately!

 

SAP HANA smart data streaming lets you capture, analyze, and act on the continuous flow of new information that streams into your business, identify emerging threats and opportunities as they happen, and respond immediately. SAP HANA smart data streaming is a highly scalable event processor for the SAP HANA platform that enables you to capture and process streams of events from many sources in real-time to provide sub-second response to incoming data.

 

 

 

 

Transform your Business: Manage all your data cost effectively with the performance you demand!

SAP HANA dynamic tiering allows you to extend HANA memory with disk-backed column store technology.

SAP HANA dynamic tiering is a highly scalable, integrated option that gives application developers centralized operational control to cost effectively manage very large data sets – terabytes to petabytes – allowing users to classify data into temperature tiers and move the data from one temperature tier to another within the same database. With SAP HANA dynamic tiering you can transform your business by managing all your data cost effectively with the performance you demand.

 

 

For more information on the HANA SPS09 release, read the blog by Mike Eacrett on “What’s New for HANA SPS09”.

 

 

MY EXPERIENCE OF ORACLE TO HANA MIGRATION PROJECT-PART III


Best Practice For Hana Performance Optimization (PART I):

Hi everyone, I am sharing my experience of working on an Oracle to HANA migration project. Below are a few points about performance optimization of SAP HANA code.


Points to consider when you are writing a SAP HANA procedure:

Please see the steps mentioned below.

 

1.      Always select only the required columns instead of selecting all the columns.


Example: Suppose there are three tables TABLE_A, TABLE_B and TABLE_C with the structures below.


TABLE_A Structure:


Name | Age | Emp_Id | Department | Salary


TABLE_B Structure:


 

Name | Department | Job_Grade | Company_Name | Company_Type

 

TABLE_C Structure:


 

Department | Emp_Id | Designation | Job_Location


Now suppose that in your procedure you have to select only the Name, Salary and Designation from these three tables, based on the join condition, and use them to populate some target table TABLE_T.

 

So, for the given scenario you should not use the SQL statement below; if you do, it will degrade the performance of the procedure.


                                              F_5.png
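The screenshot is not reproduced here; a query of the discouraged form might look roughly like the following sketch (the join conditions are assumed for illustration, and the result would feed TABLE_T):

SELECT *                               -- every column of all three tables
  FROM TABLE_A A
  JOIN TABLE_B B
    ON A.Name = B.Name
   AND A.Department = B.Department
  JOIN TABLE_C C
    ON A.Emp_Id = C.Emp_Id
   AND A.Department = C.Department;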

 

If you use a query like the one above, you are selecting more columns than required. It is always better to select only the required columns, which will improve the performance of your SQL procedures.


                                                 F_4.png
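Again only as a hedged sketch of the recommended form, with the same assumed join conditions:

SELECT A.Name,
       A.Salary,
       C.Designation                   -- only the three columns actually needed
  FROM TABLE_A A
  JOIN TABLE_B B
    ON A.Name = B.Name
   AND A.Department = B.Department
  JOIN TABLE_C C
    ON A.Emp_Id = C.Emp_Id
   AND A.Department = C.Department;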

 

 

2.  Always try to use the "NOT EXISTS" and "EXISTS" keywords in your procedures instead of "NOT IN" and "IN", because using "NOT IN" or "IN" inside a procedure will slow down its performance.

    

     Example: I want to delete all the records from COMPONENT_A whose ENTERPRISE, SITE and PRODUCTION ORDER are not in HEADER_A.

    

     Using the DELETE statement below will slow down the performance.

                               DE_1.png
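The statement in the screenshot is not reproduced; a NOT IN variant of the delete might look roughly like this sketch (the column names are assumed from the description above, and the concatenation is only one way to compare the three columns together):

DELETE FROM COMPONENT_A
 WHERE ENTERPRISE || '~' || SITE || '~' || PRODUCTIONORDER NOT IN
       (SELECT ENTERPRISE || '~' || SITE || '~' || PRODUCTIONORDER
          FROM HEADER_A);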

    

     So, it is always advisable to use a NOT EXISTS statement like the one below, which will improve the performance.

                                  DE_2.png
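A corresponding NOT EXISTS version, again with assumed column names:

DELETE FROM COMPONENT_A
 WHERE NOT EXISTS
       (SELECT 1
          FROM HEADER_A H
         WHERE H.ENTERPRISE      = COMPONENT_A.ENTERPRISE
           AND H.SITE            = COMPONENT_A.SITE
           AND H.PRODUCTIONORDER = COMPONENT_A.PRODUCTIONORDER);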


3.     Always try to avoid using HDBSEQUENCE in your procedures, because it will slow down your procedure's performance.

    

     Example: Suppose I have a SALES table with the structure below.

        

Item | Production_Order | Sales_Name | Sales_Organisation | Status | Scenario
A_1  |                  |            |                    |        | 0
B_2  |                  |            |                    |        | 0


Now I want to select all the items from the sales table and add a suffix to each item; Scenario is one of the sales table columns and its value is constant.


Solution: The first solution that comes to mind is to create an HDBSEQUENCE and concatenate that sequence to the Item column of the SALES table.


The steps are as follows:

I.     Create an HDBSEQUENCE.

          a.     Go to the project and follow the steps below to create the sequence.

                   

                         seq_2.png          

II.     Now, using the sequence created, we can write the procedure for our scenario. Please see the procedure below using the sequence.


                SEQ_3.PNG
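The procedure in the screenshot is not reproduced; a sketch of the sequence-based approach could look like this (the sequence "MYSCHEMA"."ITEM_SEQ" and the target table SALES_SUFFIXED are hypothetical):

CREATE PROCEDURE ITEM_WITH_SEQUENCE
  LANGUAGE SQLSCRIPT
  SQL SECURITY INVOKER
  AS
BEGIN
  -- NEXTVAL of the HDBSEQUENCE is evaluated once per selected row
  INSERT INTO SALES_SUFFIXED (ITEM, SCENARIO)
  SELECT ITEM || '_' || TO_VARCHAR("MYSCHEMA"."ITEM_SEQ".NEXTVAL),
         SCENARIO
    FROM SALES;
END;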


So, my observation was that when I called this procedure it took around 1 minute to execute. So I tried the approach below.

If you have any column in your table which is constant throughout the process, then you can use the ROW_NUMBER function to achieve the same functionality, which will not affect the execution time at all. Like below.


                     SEQ_4.PNG
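Again hedged, the ROW_NUMBER-based alternative (same hypothetical target table) could look roughly like this:

CREATE PROCEDURE ITEM_WITH_ROWNUMBER
  LANGUAGE SQLSCRIPT
  SQL SECURITY INVOKER
  AS
BEGIN
  -- SCENARIO is constant across the table, so partitioning by it simply
  -- numbers every row; no sequence object is needed
  INSERT INTO SALES_SUFFIXED (ITEM, SCENARIO)
  SELECT ITEM || '_' || TO_VARCHAR(ROW_NUMBER() OVER (PARTITION BY SCENARIO ORDER BY ITEM)),
         SCENARIO
    FROM SALES;
END;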


So, when I executed the above procedure it took only a few seconds.

So if anyone has a better idea for removing sequences from HANA procedures, please share your thoughts.



 

4.     Always try to avoid combining the join engine and the calculation engine of HANA. Although HANA executes calculation statements in the calculation engine and join statements in the join engine, it is always better to separate join engine and calculation engine statements to get better execution times.


Example: In the HANA procedure below I have used a table variable in which we store the data from a join of three tables, and there is a calculation happening in the same join expression. This means we are combining the join engine and the calculation engine, resulting in more execution time.


CREATE PROCEDURE TEST_PROC
  LANGUAGE SQLSCRIPT
  SQL SECURITY INVOKER
  AS
BEGIN

JN_DATA = SELECT T1.RUNTIME,
                 T2.ITEM,
                 T3.LOCATION
            FROM DETAILS T1,
                 ROUTING T2,
                 RESOURCES T3
           WHERE T1.BOR = T2.BOR
             AND T1.LOCATION = T2.LOCATION
             AND T1.SCENARIO = T3.SCENARIO
             AND T2.ITEM = T3.NAME
             AND T1.BOR LIKE '%BOR_ALT%'
             AND T2.BOS NOT LIKE '%_TMP_%'
             AND (T3.ITEM = 'N' OR T3.ITEM IS NULL);

INSERT INTO TABLE_COMPONENTS (SELECT * FROM :JN_DATA);

END;


Below is the procedure where we have separated the logic of the two engines, which results in faster execution.

 

CREATE PROCEDURE TEST_PROC1
  LANGUAGE SQLSCRIPT
  SQL SECURITY INVOKER
  AS
BEGIN

EXP_DETAIL = SELECT RUNTIME,
                    LOCATION,
                    SCENARIO,
                    BOR
               FROM DETAILS
              WHERE BOR LIKE '%BOR_ALT%';

EXP_ROUTING = SELECT ITEM,
                     LOCATION,
                     BOR
                FROM ROUTING
               WHERE BOS NOT LIKE '%_TMP_%';

EXP_RESOURCES = SELECT NAME,
                       RESOURCE,
                       SCENARIO,
                       LOCATION
                  FROM RESOURCES
                 WHERE ITEM = 'N' OR ITEM IS NULL;

JOIN_DATA = SELECT T1.RUNTIME,
                   T2.ITEM,
                   T3.LOCATION
              FROM :EXP_DETAIL T1,
                   :EXP_ROUTING T2,
                   :EXP_RESOURCES T3
             WHERE T1.BOR = T2.BOR
               AND T1.LOCATION = T2.LOCATION
               AND T1.SCENARIO = T3.SCENARIO
               AND T2.ITEM = T3.NAME;

INSERT INTO TABLE_COMPONENTS (SELECT * FROM :JOIN_DATA);

END;

 

So in the above procedure we first select all the columns, and only then use them for the join, which is why the two engines execute these statements separately, resulting in better performance.

 

 

5.     Creating separate read and write procedures is always better in terms of performance. So always try to split your logic into a read procedure and a write procedure.

      

       Example: Just for the example, I am showing a procedure which takes more time because we read and write in the same procedure.

 

CREATE PROCEDURE HISTORY_DATA
  LANGUAGE SQLSCRIPT
  SQL SECURITY INVOKER
  AS
BEGIN

DATA_1 = (SELECT SCENARIOID,
                 LINENUM,
                 SITE,
                 NAME
            FROM HISTORY);

-- ******************************************************
-- Many other transactions on the data not shown here
-- ******************************************************

INSERT INTO SHIPMENT_HISTORY
   (SCENARIOID,
    LINENUM,
    SITE,
    NAME)
   (SELECT * FROM :DATA_1);

DATA_2 = (SELECT SCENARIOID,
                 SHIPPED,
                 DATESHIPPED,
                 SOURCE,
                 CREATEDDATE
            FROM HISTORY);

-- ******************************************************
-- Many other transactions on the data not shown here
-- ******************************************************

INSERT INTO SHIPMENT_HISTORY
   (SCENARIOID,
    SHIPPED,
    DATESHIPPED,
    SOURCE,
    CREATEDDATE)
   (SELECT * FROM :DATA_2);

END;

 

So, the above procedure takes around 1:36 minutes when we run it; that is the reason I separated it into a read procedure and a write procedure.

 

READ PROCEDURE: A read procedure in HANA does not allow any DML statements inside the procedure. So we just read the data after all the transactions and pass that data to the output parameters of the procedure; an output parameter of a procedure can be a scalar variable or a table variable.

 

The following steps have to be followed to create the read and write procedures.


STEP I - First create an HDBTABLETYPE with the same columns which you are passing to the output parameter. To create the table type, we first have to declare artifacts of the different data types which we can use in the table type, as shown in the screenshot below.

               new_read.PNG

STEP II - Now create the table type using these artifacts, like below.

             READ_2.PNG
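The design-time artifacts are shown only as screenshots; as a hedged catalog-level equivalent (the column data types are assumed), the two table types used below could be declared like this:

CREATE TYPE HDBTYPE_HISTORY_1 AS TABLE (
  SCENARIOID INTEGER,
  LINENUM    INTEGER,
  SITE       NVARCHAR(10),
  NAME       NVARCHAR(60)
);

CREATE TYPE HDBTYPE_HISTORY_2 AS TABLE (
  SCENARIOID  INTEGER,
  SHIPPED     INTEGER,
  DATESHIPPED DATE,
  SOURCE      NVARCHAR(20),
  CREATEDDATE DATE
);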


         

STEP III - Create a read procedure and pass the data to output variables of the above table types.

    

CREATE PROCEDURE HISTORY_DATA_READ (
    OUT OUT_DATA_1 FULL_PATH_OF_HDBTYPE_HISTORY_1,
    OUT OUT_DATA_2 FULL_PATH_OF_HDBTYPE_HISTORY_2)
  LANGUAGE SQLSCRIPT
  READS SQL DATA
  SQL SECURITY INVOKER
  AS
BEGIN

-- ******************************************************
-- Many other transactions on the data not shown here
-- ******************************************************

-- final data to be sent to the first out parameter
OUT_DATA_1 = (SELECT SCENARIOID,
                     LINENUM,
                     SITE,
                     NAME
                FROM HISTORY);

-- ******************************************************
-- Many other transactions on the data not shown here
-- ******************************************************

-- final data to be sent to the second out parameter
OUT_DATA_2 = (SELECT SCENARIOID,
                     SHIPPED,
                     DATESHIPPED,
                     SOURCE,
                     CREATEDDATE
                FROM HISTORY);

END;

              

 

WRITE PROCEDURE: Now that the read procedure is created, we create another procedure which calls the read procedure, reads the data into table variables, and uses them to insert into the target tables.

 

CREATE PROCEDURE HISTORY_DATA
  LANGUAGE SQLSCRIPT
  SQL SECURITY INVOKER
  AS
BEGIN

-- call the read procedure to store the data into two table variables
CALL HISTORY_DATA_READ (DATA_1_IN, DATA_2_IN);

INSERT INTO SHIPMENT_HISTORY
   (SCENARIOID,
    LINENUM,
    SITE,
    NAME)
   (SELECT * FROM :DATA_1_IN);

INSERT INTO SHIPMENT_HISTORY
   (SCENARIOID,
    SHIPPED,
    DATESHIPPED,
    SOURCE,
    CREATEDDATE)
   (SELECT * FROM :DATA_2_IN);

END;

 

So now, after separating the logic into read and write procedures, it took only 2.01 seconds to execute. The conclusion is that it is always better to use separate read and write procedures.

 

So, these are some points from my work experience on an Oracle to HANA migration project.

Please share your thoughts about the post; advice for further improvement is most welcome :)

I will release Part II very soon :)

Happy reading :)

Hana SP9 Data Provisioning - Overview


Prior to Hana SP9, SAP suggested using different tools to get data into Hana: Data Services (DS), System Landscape Transformation (SLT), Smart Data Access (SDA), Sybase Replication Server (SRS), Hana Cloud Integration - DS (HCI-DS), ... to name the most important ones. You used Data Services for batch transformations of virtually any source, SLT for realtime replication of a few supported databases with little to no transformation, HCI-DS when it comes to copying database tables into the cloud, etc.
With the Hana Smart Data Integration feature you get all of that in one package, plus any combination of it.

 

The user however has very simple requirements when it comes to data movement these days:

  • Support batch and realtime for all sources
  • Allow transformations on batch and realtime data
  • There should be no difference between loading local on-premise data and loading over the Internet into a cloud target other than the protocol being used
  • Provide one connectivity that supports all
  • Provide one UI that supports all

 

The individual tools like Data Services still make sense for all those cases where the requirement matches the tool's sweet spot. For example, a customer not running Hana, or one where Hana is just yet another database, will always prefer a best-of-breed standalone product like Data Services. Customers who need to merge two SAP ERP company codes will use SLT for that; it is built for this use case. All of these tools will continue to be enhanced as standalone products. In fact this is the larger and hence more important market! But to get data into Hana and to use the Hana options, it becomes hard to argue why multiple external tools should be used, each with its own connectivity and capabilities.

 

In addition to that the Hana SDI feature tries to bring the entire user experience and effectiveness to the next level, or lays the groundwork for that at least.

 

Designing Transformations

 

Let's start with a very simple dataflow: I want to read news from CNN, check if the text "SAP" is part of the news description and put the result into a target table. Using Hana Studio, we create a new Flowgraph Model repository object, and I dragged in the source, a first simple transformation and the target table. Then everything is configured and can be executed. So far nothing special; you would do the same thing with any other ETL tool.

RSSDataflow.png

But now I want to deal with the changes. With any ETL tool on the market today, I would need to build another dataflow handling changes for the source table, possibly even multiple ones in case deletes have to be processed differently. And how do I identify the changed data in the first place?

 

RSSDataflow-RTFlag.png

With Smart Data Integration all I do in above dataflow is to check the realtime flag, everything else happens automatically.

 

How are changes detected? They are sent in realtime by the adapter.

What logic needs to be applied to the change data in order to get it merged into the target table? The same as in the initial load, considering the change type (insert/update/delete) and its impact on the target.

The latter is very complex of course, but when looking at what kind of dataflows users have designed for that, we were able to come up with algorithms for each transformation.

 

The complexity of what happens under the covers is quite huge, but that is the point. Why should I do that for each table when it can be automated for most cases? Even if it works for only 70% of the cases, that is already a huge time saver.

 

Ain't that smart?

 

The one thing we have not been able to implement in SP9 is joins, but that was just a matter of development time. The algorithms exist already and will be implemented next.

 

 

Adapters

 

How does Hana get the news information from CNN? Via a Java adapter. That is the second major enhancement we built for SP9. Every Java developer can now extend Hana by writing new Adapters with a few lines of code. The foundation of this feature is Hana Smart Data Access. With this you can create virtual tables, which are views on top of remote source tables and read data from there.

For safety reasons these adapters do not run inside Hana but are hosted on one or many external computers running the Hana Data Provisioning Agent and the Adapters. This agent is a very small download from Service Market Place and can be located on any Windows/Linux computer. Since the agent talks to Hana via either TCP or https, the agent can even be installed inside the company network and load into a Hana cloud instance!

Using that agent and its hosted adapters, Hana can browse all available source tables (in the case of an RSS feed there is just a single table per RSS provider) and a virtual table can be created based on that table structure.

Now that is a table just like any other, I can select from it using SQL, calculation views or whatever and will see the data as provided by the adapter. The user cannot see any difference to a native Hana table other than reading remote data will be slower than reading data from Hana.

That covers the batch case and the initial load.

For realtime, Hana got extended to support a new SQL command "create remote subscription <name> using (<select from virtual table>) target <desired target>". As soon as such a remote subscription gets activated, the Adapter is asked to listen for changes in the source and send them as change rows to Hana for processing. The way RSS changes are received is by querying the URL frequently and pushing all found rows to Hana. Other sources might support streaming of data directly, but that is up to the adapter developer. As seen from Hana the adapter provides change information in realtime; how the adapter produces that, we do not care.
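As a hedged sketch only (the schema, virtual table and target table names are hypothetical, and the exact clauses may differ between revisions), such a subscription on a virtual table could look like this:

CREATE REMOTE SUBSCRIPTION "SUB_CNN_NEWS"
  AS (SELECT * FROM "MYSCHEMA"."VT_CNN_RSS")     -- VT_CNN_RSS is a virtual table
  TARGET TABLE "MYSCHEMA"."CNN_NEWS";

-- activating the subscription starts capturing changes and applying them
ALTER REMOTE SUBSCRIPTION "SUB_CNN_NEWS" QUEUE;
ALTER REMOTE SUBSCRIPTION "SUB_CNN_NEWS" DISTRIBUTE;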

 

 

This concludes a first overview about Hana Smart Data Integration. In subsequent posts I will talk about the use cases this opens up, details of each component and the internals.

All databases are in-memory now.....aren't they?


Over the last few years the reputation of Hana grew constantly, and other people found various arguments why Hana is nothing better than existing databases. I, being an Oracle expert for more than 20 years, was among them, I have to admit. Looking back, it was rather a lack of understanding on my side and being trapped in SAP's marketing statements. You think that is a lame excuse? You are probably right.

So let me take you on my journey with Hana and share some internals you have never read elsewhere before.

 

The first argument came in 2010, mostly from Oracle, and back then was - understandably - "Of course keeping everything in your head is much faster, but that is simply not doable." If I may translate that, the statement was: memory is too limited and too expensive. True enough, even today. What is the price of a hard disk with 1TB and, in comparison, how much does a server with 1TB of memory cost? A completely reasonable argument.

Actually, I just dug up a YouTube video and it is fun to watch, even today.

 

 

SAP was arguing at the time that you compress the data and hence you do not need as much memory. We all know how compression works and the costs involved, I found that not very convincing.

 

What struck me even more however was the fact that traditional databases do cache the data in RAM as well, so they are in-memory so to speak, except that only the frequently accessed data is cached, the archived data does not eat into your memory budget.

 

What I hated most about Hana was the row versus column storage. Marketing touted that thanks to the columnar storage you can aggregate data much faster, and when confronted with the question of reading an entire row with all columns, the response was "we have row storage as well". Excellent answer. Now I would need both a row store and a columnar store for each table? You cannot be serious.

 

With this kind of mindset I started to look at Hana, did typical performance tests and quickly found out, there is something severely wrong with my assessments of Hana. Thankfully the Hana developers took the time to engage with me and provided me with internals that explained what I missed before.

 

Let me try to show.....

 

 

 

The three technologies

According to marketing, the benefit of Hana comes from the top three technologies shown below. I am sure you have been bombarded with the same arguments: Hana is In-Memory and therefore it is fast. And Columnar. It does Compression!

I can fully understand now why people, including myself, were skeptical. All of these technologies are nothing new, as Larry Ellison stated in the above video, mentioning Oracle's TimesTen product as an example.

The secret to the Hana performance is not the three technologies as such, in my opinion; it is the intelligent combination of these three plus the insert-only approach. The reason I am saying that is that when looking at each technology individually, all have advantages but severe disadvantages as well. Memory is limited. Compression is CPU intensive. Columnar storage puts column values closely together, whereas row storage keeps each row's data together as one block. Insert-only requires dropping outdated versions from time to time.

 

In-Memory (ram.png)
Compression (compression.png)
Columnar Storage (row column storage.png)
Insert-Only (insert only.png)

 

 

 

 

 

In-Memory ram.png

The basic idea is that memory is much faster than disk. Actually it is many times faster. A 2014 CPU has a memory bandwidth of 10GByte/sec and higher, a single disk around 150MByte/sec - a difference of factor 70. If your program is using the same memory frequently, it is cached inside the CPU's L1 or L2 cache, speeding up the memory bandwidth by another factor of 10. On the contrary, the disk speed of 150MB/sec is for sequential read access only; random access is many times worse for a disk system, whereas it has no negative impact on RAM.
The downside of memory is the cost of the memory chips themselves (7USD/GByte for RAM compared to 0.05USD/GByte for disks as of 2014), and the hardware platform you need in order to cope with more memory is getting increasingly more expensive as well.
On the other hand, if I need 1TB of RAM that would be 7,000USD. While this is much money compared to a single 100USD disk, it is not much in absolute numbers.
But you can turn my argument around and simply say: if you have a 1TB database on disk, use a server with 1TB of memory so all data can be cached.

So the argument "in-memory!" cannot be the entire truth.

Compression compression.png

The idea of compression is simple: a single CPU is much faster than the memory bus and the disk, not to mention that multiple CPUs share the same bus, hence compressing data in order to reduce the amount of data is beneficial as long as the overhead of that is not too huge. Therefore every major database supports compression. It is not very popular though, as compressing a database block and decompressing it takes its toll. The most obvious cost overhead is when data is updated inside a database block. You have to uncompress the database block, make the change and then compress it again. Hopefully it still fits into the same database block, otherwise you need a second one.

So the argument "Compression!" cannot be the entire truth.

Columnar storage row column storage.png

For a simple select sum(revenue) the columnar storage is just perfect. You have to read one column only, hence just a fraction of the whole table data is needed. Imagine you have all the data of one column in one file; this will be much faster than with traditional row-oriented tables, where all the table data is in one file (or database object, to be more precise) and you have to read the entire table in order to figure out each row's column value.
In case you want to see all columns of a single row, as is typical for OLTP queries, the row storage is much better suited.

So the argument "Columnar Storage!" cannot be the entire truth.

 

Insert only insert only.png

A real database should have read consistency in the sense that when I execute my select statement I get all committed data of the table, but neither will I see data that has been committed after the query started, nor will my long running query fail just because the old value was overwritten by the database writer.
The only major database I know supporting that since the beginning is Oracle (SQLServer has an option to turn that on/off), but you have to pay a price for that consistency. Whenever the database writer overwrites a block with new data, the old version has to be kept somewhere - in the rollback segments of the database in the case of Oracle. So a simple update or insert into an existing block requires two operations, the actual change plus saving the old version.
With insert-only, the idea is a different one: in each table the old data is never overwritten, new versions are only appended. If you execute an update against an existing row, a new row is written at the end of the table with the same primary key value plus a transaction number. When a select statement is executed, it will see multiple versions and use the one with the highest transaction number less than or equal to the currently active global transaction number in the database.
There is a tradeoff hence: the tables grow fast with this approach, especially when you update just a single byte, but on the other hand you are faster.
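As a purely hypothetical illustration (the table, values and transaction numbers are made up): suppose the row with ID 42 was inserted with transaction 17 and later "updated" with transaction 25. Under insert-only the table then holds both versions:

ID | REVENUE | TX_ID
42 |  100.00 |    17    (original insert)
42 |  120.00 |    25    (update, appended as a new version)

A query that started at global transaction number 20 reads the version with TX_ID 17; a query starting at 30 reads the version with TX_ID 25; neither blocks the other.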

So the argument "Insert only!" cannot be the entire truth.

 

Combining the technologies

Okay, this was my starting point. All technologies are good ideas, other companies tried these as well and have built proper niche products, mostly around the analytic use cases. One database is a pure in-memory database that needs a regular database to persist the data; into others you can insert data but not update or delete it. Many support compression but usually it is turned off by customers.

The claim of Hana is to be a database that supports analytic and transactional use cases and is better than other databases in all areas. That should be easy to be put into perspective, I thought.

However, the one thing I did overlook at that time was how these technologies benefit from each other. So let us go through a couple of mutual benefits to get the idea.

Combination of Compression with Columnar Storage compression.png + row column storage.png

Compression works best whenever there is a repetitive pattern. Let us have a look at a real example, the material master table.

 

MANDT | MATNR              | ERSDA      | VPSTA    | LVORM | MTART | MBRSH | MATKL
800   | 000000000000000023 | 23.01.2004 | K        | false | ROH   | 1     |
800   | 000000000000000038 | 04.09.1995 | KDEVG    | true  | HALB  | M     | 00107
800   | 000000000000000043 | 23.01.2004 | KBV      | false | HAWA  | 1     |
800   | 000000000000000058 | 05.01.1996 | KLBX     | false | HIBE  | M     |
800   | 000000000000000059 | 05.01.1996 | KLBX     | false | HIBE  | M     |
800   | 000000000000000068 | 12.01.1996 | KEDPLQXZ | false | FHMI  | A     | 013
800   | 000000000000000078 | 10.06.1996 | KVX      | true  | DIEN  | M     |

 

What can you compress better, the entire file or a single column?
The answer is obvious, but nevertheless I exported these columns from the MARA table, all 20'000 rows of my system, into a CSV file (1'033KB big) and zipped the one file with all data plus nine files with one column each.

a.png

Obviously the primary key cannot be compressed much, it is half of the data in fact, but all other columns are: the MANDT file is 303 bytes large, the ERSDA file with all the many create dates is 12'803 bytes big.

But this is not a fair comparison, as the zip algorithm favors larger datasets since it can look for patterns more easily, and it is a fairly aggressive algorithm. In databases your compression is lighter in order to require fewer CPU cycles and, more importantly, each file is split into database blocks. Meaning that if you have a fairly wide table, one database block might include only one row of data to compress - hence almost no compression is possible at all.
With columnar storage we have no issue with that side effect.

So as you see, the technology of using columnar storage has a huge positive side effect on the degree compression is possible.

Combination of Compression with In-Memory compression.png + ram.png

That's an easy one. The more compression we do, the less memory we need. Before, we said that the cost ratio between disk and RAM is about a factor of 700. Thanks to compression, usually a factor of 10, the disk is just 70 times cheaper. Or more importantly, for your 10TB database you do not need a server with 10TB of RAM, which would be very expensive.

Combination of Compression with Insert-only compression.png+insert only.png

Compression has one important downside as well, however: what if a row is updated or even deleted? The compression spans multiple rows, so when you change a value you have to uncompress the entire thing, change the value and compress it again. With traditional databases you do exactly that. Okay, you do not uncompress the entire table, the table data is split into pages (database blocks) and hence only the impacted page has to be recompressed, but still you can virtually watch how much this recompression slows down updates/deletes in traditional databases.
So what does Hana do? Quite simple: it does not update and delete the existing data. It appends the change as a new version with a transaction ID, and when you query the table, you will read, for each row, the most recent version that matches the query execution start time. So suddenly no recompression is needed anymore: data is appended uncompressed to the end of the table, and once the uncompressed area exceeds a limit, it gets compressed and new data is inserted into a new page of the table.
The other advantage of that approach: if a single row is updated multiple times, which row will that be? A booking made 10 years ago? Very unlikely. It will be a recent one, likely one that is still in the uncompressed area.

Combination of Columnar Storage with Insert-only row column storage.png+insert only.png

As the data is only inserted at the end, finding the data of a row in the various columns is very simple. According to the current transaction id, the row to read was the version at table position 1'234'567, hence the task is to find, for each column, the value at that position.

Just imagine an Excel sheet. What is faster: Reading the cells (A,10) + (B,10) + (C,10), in other words the row 10 with the three columns A, B and C?

Or reading the cells (J,1) + (J,2) + (J,3), in other words the column J with the values in the three rows 1, 2 and 3?

It does not make a difference. None at all. The entire argument that reading row-wise is better is actually based on the assumption that it is faster to read horizontally than vertically. Which is true if the data is stored on disk in a horizontal fashion: then the data of one row is closely together and hence read in one pass from the disk cylinder. But in a memory system it does not matter at all.

Putting all four together compression.png+ram.png+row column storage.png+insert only.png

What was the issue with compression?

  • Compression works best on similar data -> one column often has similar values -> solved
  • Recompression in case something does change -> we do no change data but insert only -> solved
  • Compression is CPU expensive -> not a pkzip-like compression is used but dictionary and pattern compression -> faster than reading the plain data

What was the issue with memory?

  • More expensive than disk -> thanks to the compression that factor is dampened -> reality has proven Hana can run even large enterprise ERP systems

What was the issue with Columnar Storage?

  • You need to locate the row value for each column individually -> But actually, for memory it does not matter if you read two words from nearby or far away memory pointers. With compression this might even be faster!
  • Changing values requires to change the entire column string -> True, hence Hana does not change values, it appends data only.

What is the issue with Insert-only?

  • More and more old data is present and needs to be removed or memory consumption grows fast -> The most recent data is not added into the compressed storage, it is kept in the delta storage. All changes within that delta storage are handled to allow updates.
  • Above problem is faced only if changes are made on rows in the compressed storage area -> less likely but possible.

 

Comparing Hana with other databases

Meanwhile other database vendors have been faced with these realities too, and we have seen various announcements of plans to jump on the in-memory bandwagon. Given your knowledge of the Hana internals now, you should be able to quantify the advantage for yourself.

 

Oracle: With Oracle 12c there is an in-memory option available. This option allows you to store the data, in addition to the traditional disk-based way, in an in-memory area as well. Personally I have mixed feelings about that. On the one hand it is very convenient for existing Oracle users: you execute a couple of statements and suddenly your queries are many times faster.

But this assumes that the traditional disk based storage does have advantages and I tried to show above it does not. Hence it is a workaround only, kind of what I criticized when the response was "Hana has column and row storage". And it leaves room for questions like

  • What are the costs to store the data twice when inserting/updating data?
  • To what degree is the database slowed down if part of the memory used for database block caching is now reserved for in-memory storage?
  • No question this will be faster on the query side, but the OLTP load worries me.

To me that sounds like the dinosaur's way, trying to sound flexible when all you do is actually double the costs. Obviously I lack the hands-on experience, but sorry Oracle, I was a big fan of yours and that does not sound like a mastermind's plan to me.

Nice reading I found: Rittman Mead Consulting: Taking a Look at the Oracle Database 12c In-Memory Option

 

Do you concur with my opinion?

BIG DATA & SAP HANA-Part1


Big Data is about VOLUME, VARIETY and VELOCITY of data. Let us see how the SAP HANA platform fulfills the requirements of the 3 V's (Volume, Variety and Velocity) challenges of Big Data.

VOLUME

The volume of data is increasing day by day, and by 2020 it will reach 40 zettabytes. So for Big Data the challenge now is to store high volumes of data. SAP HANA has successfully addressed the volume aspect of Big Data by fortifying the SAP HANA platform. The following are two game-changing features in the SAP HANA platform related to data volume.


      • SAP HANA and HADOOP integration
      • Dynamic Tiering

SAP HANA and HADOOP integration

            HADOOP facilitates storing an infinite volume of data using a distributed file system. With its release of SP09, SAP integrated very tightly with Hadoop. The following are the SAP HANA and HADOOP integration options:

      • SDA (Smart Data Access)
      • SAP Data Services
      • SAP BO-IDT (Information Design Tool)
      • HANA XS Engine and Hadoop Hbase


SMART DATA ACCESS: Smart Data Access (SDA) provides SAP HANA with data virtualization capabilities. This technology allows you to create virtual tables to combine SAP HANA data with other heterogeneous data sources such as HADOOP, TERADATA, MS SQL SERVER, ORACLE, SAP Sybase ASE, SAP Sybase IQ and SAP HANA.

            

              In SAP HANA SPS07, HANA connects to HIVE:



    CREATE REMOTE SOURCE HIVE
      ADAPTER "hiveodbc"
      CONFIGURATION 'DSN=HIVE'
      WITH CREDENTIAL TYPE 'PASSWORD'
      USING 'user=hive;password=hive';


        • Create a virtual table on the HIVE remote data source and consume it in the HANA catalog (a minimal sketch of the statement follows below).
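    The blog does not show the SQL for this step; as a hedged sketch (the schema, virtual table name and Hive object names are hypothetical), it could look like:

    CREATE VIRTUAL TABLE "MYSCHEMA"."VT_HIVE_SALES"
      AT "HIVE"."<NULL>"."default"."sales";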


                In SAP HANA SPS08, HANA connects to Apache SPARK:

                           

        • SQL Script to create Remote Data Source to HADOOP SPARK


    CREATE REMOTE SOURCE HIVE
      ADAPTER "hiveodbc"
      CONFIGURATION 'DSN=SPARK'
      WITH CREDENTIAL TYPE 'PASSWORD'
      USING 'user=hive;password=SHa12345';


        • Create a virtual table on the SPARK remote data source and consume it in the HANA catalog.

     


                In SAP HANA SPS09, HANA connects directly to Hadoop HDFS:


        • Create a Map Reduce Archives package in the SAP HANA Development Perspective using JAVA
        • Create Remote Data Source directly to Hadoop HDFS


    CREATE REMOTE SOURCE HADOOP_SOURCE
      ADAPTER "hadoop"
      CONFIGURATION 'webhdfs_url=<url:port>;webhcat_url=<url:port>'
      WITH CREDENTIAL TYPE 'PASSWORD'
      USING 'user=hive;password=hive';


        • Create Virtual Function


    CREATE VIRTUAL FUNCTION HADOOP_WORD_COUNT
      RETURNS TABLE ("word" NVARCHAR(60), "count" INTEGER)
      PACKAGE DEV01."DEV01.HanaShared::WordCount"
      CONFIGURATION 'enable_remote_cache;mapred_jobchain=[{"mapred_input":"/data/mockingbird","mapred_mapper":"com.sap.hana.hadoop.samples.Wordmapper","mapred_reducer":"com.sap.hana.hadoop.samples.WordReducer"}]'
      AT HADOOP_SOURCE;


        • Create Virtual UDF to directly connect to HDFS file.


    CREATE VIRTUAL FUNCTION HADOOP_PRODUCT_UDF()
      RETURNS TABLE ("product_class_id" INTEGER, "product_id" INTEGER, "brand_name" VARCHAR(255))
      CONFIGURATION 'datetime_format=yyyy-MM-dd HH:mm:ss;date_format=yyyy-mm-dd HH:mm:ss;time_format=HH:mm:ss;enable_remote_caching=true;cache_validity=3600;hdfs_location=/apps/hive/warehouse/dflo.db/product'
      AT HADOOP_SOURCE;


     

                                      

             

    CONNECT TO HADOOP USING SAP DATA SERVICES

        • Select the File Format tab from the Local Object Library -> right click on HDFS File and click New

                                       SDA4.png

        • Provide following parameter values in HDFS File Format editor
          • Name: HDFS
          • Namenode host: <host name of hadoop installation>
          • Namenode port: <hadoop port>
          • Root Directory: < Hadoop file path>=</user/hadoop/input>
          • File Name: hdfs_data.txt

                                                  SDA5.png

        • Click on Save&Close and double click on the created HDFS file again to view the file format.

                                                    SDA6.png

        • Create Project -> Job -> Data Flow
        • Drag the HDFS file onto the canvas and make it the source -> drag a Query transformation and a target table onto the data flow canvas and join them.

                                            SDA7.png

        • Double click on the Query transformation and map schema IN to schema OUT

                                             SDA8.png

        • Execute the job and view the data brought into HANA from Hadoop.

                                             SDA10.png


    SAP BO(IDT)-HADOOP INTEGRATION

     

                                            HADOOP_IDT.png

    HANA XSENGINE AND HADOOP HBASE

        

                        HANA XSEngine can talk to Hadoop HBase via server-side JavaScript. Please refer to the following article for more details.

                                                 XSEngine.png

     

                        Streaming Real-time Data to HADOOP and HANA

        

    DYNAMIC TIERING

     

              Dynamic tiering is SAP HANA extended storage, an SAP IQ ES server integrated with the SAP HANA node. Dynamic tiering has been included in SPS09. Hot data resides in SAP HANA in-memory and warm data resides on the IQ ES server's columnar petabyte disk storage. It provides an environment to extend terabyte-scale SAP HANA in-memory capacity to petabyte-scale columnar disk storage without using Hadoop.

     

         HOT & WARM Table creation:

     

    CREATE TABLE "SYSTEM".SalesOrder_HOT" (

    "ID" INTEGER NOT NULL,

    "CUSTOMERID" INTEGER NOT NULL,

    "ORDERDATE" DATE NOT NULL,

    "FINANCIALCODE CHAR(2) NULL,

    "REGION" CHAR(2) NULL,

    "SALESREPRESENTATIVE" INTEGER NOT NULL,

    PRIMARY KEY("ID")

    );

     

     

    CREATE TABLE "SYSTEM".SalesOrder_WARM" (

    "ID" INTEGER NOT NULL,

    "CUSTOMERID" INTEGER NOT NULL,

    "ORDERDATE" DATE NOT NULL,

    "FINANCIALCODE CHAR(2) NULL,

    "REGION" CHAR(2) NULL,

    "SALESREPRESENTATIVE" INTEGER NOT NULL,

    PRIMARY KEY("ID")

    )USING EXTENDED STORAGE;

     

     

    Reference Document:

    SAP HANA SPS 09 - Dynamic Tiering.pdf

     

    Reference SAP HANA Academy Video:

     

    SAP HANA Academy - SAP HANA Dynamic Tiering : Installation Overview [SPS 09] - YouTube

    SAP HANA Academy - SAP HANA Dynamic Tiering: Introduction [SPS09] - YouTube


    Table T006D for SHINE


    The EPM sample data included with the SHINE demo application is a helpful resource for working on development and modeling scenarios.

     

    Included in the sample data are supporting tables that are required to perform Unit of Measure and Currency conversions in analytic and calculation views, as stated in the SAP HANA Modeling Guide.

    Image1.png

    However, the sample data is missing table T006D.  Without this table, an error is generated when attempting a Unit of Measure conversion.

     

    Image2.png

     

    The attached file has the necessary SQL DDL and DML scripts to create and load a column table (and create a related synonym) for T006D.  The scripts can be executed from a SQL Console view in the SAP HANA Modeler perspective.

     

    Image3.png

     

    After refreshing the schema, T006D is displayed and can be referenced for Unit of Measure conversions.

     

    Image4.png

    Image6.png

     

    (Note: T006D table structure and default data obtained from SAP BW 7.4 SP8.)

    Enabling HANA Made easy with Linux Deployment Toolkit


    The success of Linux adoption within SAP

     

    Background

     

    2 years ago, SAP Global IT Infrastructure Service – Service Center Labs IT took on the challenge of simplifying Linux OS deployments (OSD) in the area of developer workspace environments.

     

    Until then, there was neither a Linux OSD service nor Linux support provided by SAP Global IT in this area. This meant that each developer who needed access to a Linux OS spent valuable time installing his own Linux system.

    From an IT management perspective, there was no control over these systems – they were not secure, did not conform to any guidelines, and were not managed or inventoried.

     

    Together with Senior Linux consultant Shay Cohen of G.S.R. IT consulting Ltd., Labs IT designed and built a flexible and scalable service to manage and deploy Linux systems in an automated way.

     

    When we designed the service, our goal was to provide the end user with a system which is preconfigured and ready for use out of the box. We focused on two main aspects:

    1. Conformity with SAP Global IT standards (e.g. systems naming conventions, security requirements, system settings)
    2. Simplicity:
      1. For IT to deploy
      2. For end user to request and use

     

    How did we achieve this?

     

    Using native Linux tools, the Linux Deployment Toolkit was built to support the following process:

     

    LDT_Process.jpg

    The first step of the process, after the end user submits a service request, is the key to the auto-configuration and the out-of-the-box experience we wanted to achieve. In this step, an IT technician enters an LDT deployment task. In order to enter it, the following data has to be provided:

     

    1. User ID from Active Directory which will be defined with elevated permissions on the system.
    2. MAC address of the target system for Linux OSD.
    3. Location of the target system
    4. Equipment No.(Internal asset number) of the target system. This will be used to configure the hostname according to SAP IT naming convention.
    5. System type – VM, Desktop or Server – this will affect the way the system will be configured, e.g. different hostname, VMware tools installed/not installed etc.
    6. SWAP File size.
    7. Required Linux distribution (SUSE/Redhat etc.)
    8. Profile – preconfigure set of tools which will be installed on the system.

     

    With this information in the DB the system can be fully automatically installed and configured – ready for use out of the box!

     

    This process enables us to reach the goals we set:

    1. Conformity with SAP Global IT standards:
      1. Each Linux system which is deployed via LDT is automatically configured – hostname, DNS settings, input locale etc. are set according to the deployment task which is entered via the SAP IT OS deployment portal.
      2. A McAfee Anti-Virus agent is installed and managed centrally by the SAP Global IT Client Protection team.
      3. The LDT agent is installed. This agent is the system management interface for Labs IT. It checks periodically for tasks waiting for the system and reports back to the LDT DB with the system information, heartbeat, Anti-Virus agent status and task execution results.
      4. The root password is scrambled, and a local rescue account with a periodically changing password enables IT support login.
      5. Integration with the SAP Active Directory domain.
    2. Simplicity:
      1. For IT to deploy – all that is required from the IT support technician who deploys Linux is to enter the required information in the SAP IT OSD Portal and create an LDT deployment task. Afterwards, the OSD process runs automatically once the technician boots the system with the LDT boot ISO.
      2. For the end user to request and use – all it takes for the end user to request a Linux system is to enter an IT service request with his user ID and the equipment number of his system. Afterwards, he is shipped a system which is ready for use out of the box – just log in with your domain account and password and start working!

     

     

    Adoption of the service

     

    The service was very successfully adopted by IT teams as well as our customers – SAP HANA developers and other development/QA teams who need to work with Linux.

    Since the service went live in October 2012, over 1,400 LDT OSDs took place. Below, the monthly deployment trend is presented for the last 5 months of 2013. The screenshot is captured from the LDT inventory portal: Statisrics.jpg

     

    In the LDT portal, we can also track the number of live systems. These are systems which reported back in the last 24 hours. This dashboard presents the number of live systems, broken down by geographical region, distribution and type:

    Dashboard.jpg

    Summary

     

    As SAP HANA took its place in SAP's strategy, the demand from HANA developers for Linux systems increased drastically, especially for SUSE Linux.

    With the LDT service in place, SAP Global IT was ready to support this growing demand with a simple-to-use service.

     

    HANA developers have access to Linux systems at their fingertips, reducing the time it takes them to set up these systems from a few hours to a few minutes.

    New SQLScript Features in SAP HANA 1.0 SPS9


    Semantic Code Completion in SAP HANA Studio

     

    The fact that we have such long names for tables, views and table types in HANA, has been a pain point for many for some time now.  To help alleviate this issue, we have built semantic code completion into the SQLScript editor in SAP HANA Studio.  Now when a developer needs to do a SELECT against a particular table, he can hit CTRL+SPACE and get a list of tables to choose from.   The list is compiled of relevant objects based on the context of the statement, so if you have a SELECT statement and have entered the name of the schema already, and hit CTRL+SPACE, you will only get a listing of tables from that particular schema. This also works when calling procedures, or defining parameters with global table types.


    1.png

     

    Check out the demo video here.


     

     

    SQLScript Editor & Debugger in the SAP Web-Development Workbench

     

    Prior to SPS9, there was no way to maintain procedures from the SAP Web-based Development Workbench. You were forced to use the SAP HANA Studio for this.  Now as of SPS9, we have a basic SQLScript editor for maintaining .hdbprocedure files.  The .procedure file format is not supported here.  This editor has basic keyword code hints and syntax highlighting.

     

    2.png

     

    Since we can now create procedures from the SAP HANA Web-Based Development Workbench, it makes sense that we should be able to debug them.  As of SPS9, we also have a SQLScript procedure debugger as well. Currently, you must set breakpoints in the runtime object in the catalog, and then CALL your procedure from the SQLConsole in order to debug it.  We have plans to make it possible to debug a design time artifact directly without having to drop to the runtime object.  Within the debugger, you can of course single step through the code, and evaluate input/output parameters as well as intermediate scalar and table variables.

    3.png

    See a demo video here.


     

     

    Table Type Definitions for Parameters


    In previous support packages, we’ve had several different ways to create and reference table types when defining input/output parameters in our procedures.  For the .procedure file format, we had “local table types” which really were not local, which is why we did not support them in the new .hdbprocedure file format.  For .hdbprocedure files, we recommended creating your table types globally via CDS (.hdbdd file).  While I will still recommend creating table types via CDS for global type scenarios, I am pleased to announce that we now have the possibility to declare local table types inline for parameters.  In the screen shot below you will see that I have an OUT parameter called EX_PRODUCT_SALE_PRICE which has a table type definition using the keyword TABLE followed by the column list with associated simple types.  These type declarations are truly local and cannot be used across procedures.  For situations where you know that your table type will not be reused frequently, it might make sense and be a little easier to simply define the structure inline as opposed to creating it via CDS.

     

    4.png
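    A minimal sketch of the same idea in plain SQLScript is shown below; the procedure, table and column names here are made up for illustration and are not taken from the screen shot:

    CREATE PROCEDURE get_product_sale_price (
      IN  im_product_id         NVARCHAR(10),
      OUT ex_product_sale_price TABLE (
            productid NVARCHAR(10),
            category  NVARCHAR(40),
            saleprice DECIMAL(15,2) )
    )
    LANGUAGE SQLSCRIPT SQL SECURITY INVOKER READS SQL DATA AS
    BEGIN
      -- the OUT parameter uses an inline table type that is local to this procedure
      ex_product_sale_price = SELECT productid, category, price * 0.9 AS saleprice
                              FROM products
                              WHERE productid = :im_product_id;
    END;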

     

     

    Table Type Definitions for Variables

     

    In previous support packages, intermediate table variables were simply typed by the result set of the data selection, such as a SELECT statement: whatever columns were in the field list became the structure of the intermediate table variable.  The issue with this approach is that there is some performance cost associated with the type conversions at runtime.  Also, it could cause some ambiguity in the code.  As of SPS9, we can now explicitly define the structure of an intermediate table variable in the DECLARE statement.  As shown below, I have an intermediate table variable called LT_PRODUCTS which is defined as a TABLE, followed by the column list and associated simple types.  This allows the developer to have strict typing within the procedure and avoid any unnecessary performance costs from type conversions.

     

    5.png
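    For reference, a declaration of this kind looks roughly as follows in SQLScript; the table and column names are again purely illustrative:

    DECLARE lt_products TABLE (
      productid NVARCHAR(10),
      category  NVARCHAR(40),
      price     DECIMAL(15,2)
    );
    -- the assignment is checked against the declared structure,
    -- so no implicit type conversion is needed at runtime
    lt_products = SELECT productid, category, price
                  FROM products
                  WHERE category = :im_category;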

     

    Autonomous Transactions

     

    Another new language feature in SPS9 is Autonomous Transactions.  An autonomous transaction allows the developer to create an isolated block of code which runs as an independent transaction.  This feature is particularly helpful when executing logging-type tasks.  Committed statements inside the autonomous transaction block will be persisted regardless of a rollback of the main transaction.  The keywords COMMIT and ROLLBACK are only allowed within the autonomous transaction block and not in the main line of the procedure.  If any tables are updated within the main body of the procedure, those tables are not allowed to be accessed from within the autonomous transaction block.

     

    6.png
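    As a rough sketch of the syntax (the logging table used here is hypothetical), an autonomous block inside a procedure body looks like this:

    CREATE PROCEDURE log_step (IN iv_message NVARCHAR(200))
    LANGUAGE SQLSCRIPT AS
    BEGIN
      BEGIN AUTONOMOUS TRANSACTION
        -- this INSERT is persisted even if the calling transaction rolls back
        INSERT INTO message_log VALUES (CURRENT_TIMESTAMP, :iv_message);
        COMMIT;
      END;
    END;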

     

    See a demo video of the new language features here.  And for more information regarding the SQLScript language itself, please check out the SQLScript Reference Guide.


     

     

    Use of CE Functions within Procedures & Scripted Calculation Views

     

    Although not specific to SPS9, I’d like to close with some clarification around the use of CE functions. Calculation Engine (CE) functions, also known as Plan Operators, are an alternative to writing SQL.  At one time, it was recommended to always use CE functions over SQL in both SQLScript stored procedures as well as scripted calculation views, as they performed better than SQL. This is no longer the case.  The recommendation moving forward is to use SQL rather than CE functions within SQLScript. The execution of CE functions is currently bound to processing within the calculation engine and does not allow the possibility of using alternative execution engines, such as L native execution. As most CE functions are converted internally and treated as SQL operations, the conversion requires multiple layers of optimizations. This can be avoided by using SQL directly. Depending on your system configuration and the version you use, mixing CE functions/Plan Operators and SQL can lead to significant performance penalties when compared to a plain SQL implementation. Please note that the recommendation/behavior described above only applies to calculation engine functionality exposed by SQLScript.  Therefore only SQLScript-related artifacts such as procedures, table functions and scripted calculation views are affected.
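    To make the recommendation concrete, here is a small illustration of the same projection written once with CE functions and once as plain SQL; the schema, table and column names are invented for the example:

    -- CE function style (no longer recommended)
    lt_all      = CE_COLUMN_TABLE("MYSCHEMA"."PRODUCTS", ["PRODUCTID", "CATEGORY", "PRICE"]);
    lt_filtered = CE_PROJECTION(:lt_all, ["PRODUCTID", "PRICE"], '"PRICE" > 500');

    -- plain SQL style (recommended)
    lt_filtered = SELECT "PRODUCTID", "PRICE"
                  FROM "MYSCHEMA"."PRODUCTS"
                  WHERE "PRICE" > 500;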

    SAP HANA SPS 09: New Developer Features


    In this blog, I will collect the various smaller blogs that detail all the new developer related features in SAP HANA SPS 09.  This will be a "living" document which is updated as new blogs are released.

     

    HANA Programming Model

     

    Miscellaneous Improvements

    The following are various items that don't really have a category of their own.

    • New Mozilla VM (currently version 28)
    • Relaxed strict mode settings of the JavaScript VM
    • New threading model implementation internally which lays the foundation for future features. No changes to the programming model itself due to these changes yet in SPS 09.  Some general performance improvements thanks to this change.

     

    Miscellaneous Security Features

    In this blog we will have a first look at the new miscellaneous security features added to development model in SAP HANA SPS 09.

    SAP HANA SPS 09: New Developer Features; Miscellaneous Security Features

     

    New XSJS Database Interface

    In this blog we will have a first look at the new XSJS database interface in SAP HANA SPS09.  This is a completely redesigned and rebuilt database interface which replaces the current implementation in the $.db package.  This new interface, which is available as a separate API in $.hdb, focuses on several key areas of improvements.

    SAP HANA SPS 09: New Developer Features; New XSJS Database Interface

     

    New Core XSJS APIs

    Already we have looked at the new XSJS Database Interface in SPS 09. However this isn't the only new core XSJS API in SPS 09.  There are several other new core XSJS APIs which we will now explore further in this blog.
    SAP HANA SPS 09: New Developer Features; New Core XSJS APIs

     

    New XSODATA Features

    In this blog we will look at new features in the XSODATA service framework in SAP HANA SPS 09.

    SAP HANA SPS 09: New Developer Features; New XSODATA Features

     

    SQLScript

    New features in SQLScript language and tools:
    New SQLScript Features in SAP HANA 1.0 SPS9

     

    XS Admin Tools

    Coming Soon

     

    HANA Test Tools

    Coming Soon

     

    Core Data Services

    Coming Soon

     

    XSDS (XS Data Services)

    Coming Soon

     

    Repository REST API

    Coming Soon

     

    SAP River

    Coming Soon

     

    HANA Development Tools

     

    SAP HANA Web-based Development Workbench

    With SPS 09 we continue to enhance the browser based development tools adding support for a larger number of development artifacts as well as enhancing and improving the editors which already existed in previous releases.
    SAP HANA SPS 09: New Developer Features; SAP HANA Web-based Development Workbench

     

    SAP HANA Studio

    While we see major investment in the web-based tooling around SAP HANA, SAP also continues to make improvements and additions to the Eclipse based SAP HANA Studio as well. In this blog we will detail the enhancements to the SAP HANA Studio.

    SAP HANA SPS 09: New Developer Features; SAP HANA Studio

    SAP HANA SPS 09: New Developer Features; Miscellaneous Security Features


    This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

     

    In this blog we will have a first look at the new miscellaneous security features added to development model in SAP HANA SPS 09.

     

    Full CORS (Cross-Origin Resource Sharing) Support.

     

    Since SPS 06 we've had basic CORS support which could be configured at the package level.  That support only allowed you to enable or disable CORS; in SPS 09 we expand the configuration options to allow filtering by origins, headers and HTTP methods.

     

    HANABlog1.png
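    The configuration itself still lives in the package's .xsaccess file. The sketch below only shows the general shape; the extended key names (allowOrigin, allowMethods, allowHeaders) are written from memory and should be treated as assumptions to be verified against the SPS 09 .xsaccess reference:

    {
      "exposed": true,
      "cors": [{
        "enabled": true,
        "allowOrigin":  ["https://trusted.example.com"],
        "allowMethods": ["GET", "POST", "OPTIONS"],
        "allowHeaders": ["Content-Type", "X-CSRF-Token"]
      }]
    }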

     

    Custom Headers/X-Frame

    This new feature allows you to control whether the browser should allow a page served from HANA to be rendered within a frame, iframe, or object on another page.  This helps to avoid clickjacking attacks by keeping your content from being embedded within a malicious site.

     

    Possible values:

     

    • DENY: The page cannot be displayed in a frame, regardless of the site attempting to do so.
    • SAMEORIGIN: The page can only be displayed in a frame on the same origin as the page itself.
    • ALLOW-FROM uri: The page can only be displayed in a frame on the specified origin.

    In other words, if you specify DENY, not only will attempts to load the page in a frame fail when loaded from other sites, attempts to do so will fail when loaded from the same site. On the other hand, if you specify SAMEORIGIN, you can still use the page in a frame as long as the site including it in a frame is the same as the one serving the page.

    HANABlog2.png
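    This is also driven from the .xsaccess file. Again, a rough sketch only – the headers/customHeaders key names are from memory and should be checked against the documentation:

    {
      "exposed": true,
      "headers": {
        "enabled": true,
        "customHeaders": [
          { "name": "X-Frame-Options", "value": "SAMEORIGIN" }
        ]
      }
    }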

     

    Various Authentication Features

    • Secure HTTP Session Cookies
    • Support for Client Certificates from F5's BIG-IP
    • SAML Single Logout (SLO) support
    • SAML Authentication in Authorization header

     

    Support for Virus Scan Interface (VSI) for applications

    New XSJS API ($.security.AntiVirus) to access and use the SAP Virus Scan Interface from your server side JavaScript coding.

     

    • The scan needs a Virus Scan Adapter (VSA) to be installed on the host
    • The setup and configuration is available with SAP note 2081108
    • This class uses the SAP certified interface NW-VSI 2.00 (see SAP note 1883424)
    • For a list of the AV products supported, see SAP note 1494278

     

    Code Sample for using the new Virus Scan Interface from XSJS:

     

    try {
      //create a new $.security.AntiVirus object using the default profile
      var av = new $.security.AntiVirus();
      av.scan($.request.body);
    } catch (e) {
      $.response.setBody(e.toString());
    }

    SAP HANA SPS 09: New Developer Features; New XSJS Database Interface


    This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

     

    In this blog we will have a first look at the new XSJS database interface in SAP HANA SPS09.  This is a completely redesigned and rebuilt database interface which replaces the current implementation in the $.db package.  This new interface, which is available as a separate API in $.hdb, focuses on several key areas of improvements.

     

    Performance gains

    • Achieves higher throughput on both read and write operations of a single session
    • Better scale out support and usage of multiple nodes in a HANA scale out scenario
    • Reduce the amount of remote process communication even in distributed query scenarios

     

    Usability improvements

    • Simple and easy to use JavaScript interface that accepts and returns JavaScript variables and JSON objects. No more type specific getter/setters
    • Code reduction
    • No boilerplate code

     

    Light-weight architecture

    • Based upon a new, thin C++ client library
    • No SQL processing in the XS Layer itself. Push all SQL processing into the Index Server
    • Uses internal HANA communication protocol optimizations

     

    The usage of this new API is best explained with a few samples.

     

    The old database interface

    First lets look at the pre-SPS 09 database interface.

     

    var productId = $.request.parameters.get("ProductId");
    productId = typeof productId !== 'undefined' ? productId : 'HT-1000';
    var conn = $.db.getConnection();
    var query = 'SELECT * FROM "SAP_HANA_EPM_NEXT"."sap.hana.democontent.epmNext.data::EPM.Purchase.Item" ' +
                ' WHERE "PRODUCT.PRODUCTID" = ? ';
    var pstmt = conn.prepareStatement(query);
    pstmt.setString(1, productId);
    var rs = pstmt.executeQuery();
    var body = '';
    while (rs.next()) {
      var gross = rs.getDecimal(6);
      if (gross >= 500) {
        body += rs.getNString(1) + "\t" + rs.getNString(2) + "\t" +
                rs.getNString(3) + "\t" + rs.getDecimal(6) + "\n";
      }
    }
    rs.close();
    pstmt.close();
    $.response.setBody(body);
    $.response.contentType = 'application/vnd.ms-excel; charset=utf-16le';
    $.response.headers.set('Content-Disposition', 'attachment; filename=Excel.xls');
    $.response.status = $.net.http.OK;

    Notice in this example how you build the query string but then must set the input parameters via a separate setString function. Not only is this extra code, it is also error prone because you must use the correct function call for the data type being set.

     

    More troublesome, however, is the result set object returned from the query.  This rs object is a special object that can only be iterated over once, in order, with no direct index support.  Its contents aren't visible in the debugger and you have to use similar type-specific getters to retrieve individual column values.

     

    The new database interface

    Now for the same example rewritten with the new database interface in SPS 09.

     

    var productId = $.request.parameters.get("ProductId");
    productId = typeof productId !== 'undefined' ? productId : 'HT-1000';
    var conn = $.hdb.getConnection();
    var query = 'SELECT * FROM "SAP_HANA_EPM_NEXT"."sap.hana.democontent.epmNext.data::EPM.Purchase.Item"' +
                ' WHERE "PRODUCT.PRODUCTID" = ?';
    var rs = conn.executeQuery(query, productId);
    var body = '';
    for (var i = 0; i < rs.length; i++) {
      if (rs[i]["GROSSAMOUNT"] >= 500) {
        body += rs[i]["HEADER.PURCHASEORDERID"] + "\t" + rs[i]["PURCHASEORDERITEM"] + "\t" +
                rs[i]["PRODUCT.PRODUCTID"] + "\t" + rs[i]["GROSSAMOUNT"] + "\n";
      }
    }
    $.response.setBody(body);
    $.response.contentType = 'application/vnd.ms-excel; charset=utf-16le';
    $.response.headers.set('Content-Disposition', 'attachment; filename=Excel.xls');
    $.response.status = $.net.http.OK;

    The most striking difference is the removal of the need for type-specific getters or setters.  Now you simply pass in your JavaScript variable and the interface determines the type.  The result set is no longer some special object type, but a JSON object.  You process it in your JavaScript as you would any other JSON (direct index access, easy looping, or a combination of both), accessing the columns by name.  The other advantage is that although the result set object looks and acts like a JSON object, it is in fact rather special: it doesn't materialize the data into the JavaScript VM.  Instead only pointers to the data are maintained in the JavaScript VM as long as only read operations are performed on the data.  This helps to keep the memory requirements of the JavaScript VM lower.

     

    This also means that you can view the contents of this result set object easily within the debugger.

    HANABlog3.png

     

    Another excellent advantage of this new interface is that because the result set object of a query is JSON, it is ready for output. So often most of the processing in an XSJS service was just converting the result set to JSON so it could be passed to the client side. Now we can take these results and insert them directly into the response object.

     

    var connection = $.hdb.getConnection();
    var results = connection.executeQuery(
      'SELECT * FROM "sap.hana.democontent.epmNext.data::EPM.MasterData.Employees" ' +
      'WHERE LOGINNAME <> ?', 'EPM_USER');
    $.response.setBody(JSON.stringify(results));

     

    But this new interface doesn't just help with SQL statements.  It also provides similar benefits to calling SQLScript stored procedures from XSJS. This interface creates what appears to be a JavaScript function to serve as a proxy for calling the stored procedure.  We can then easily pass in/out JavaScript variables and JSON objects for the procedure interface. No more having to insert data into temporary tables just to pass it into a procedure call.

     

    var connection = $.hdb.getConnection();
    var partnerRole = $.request.parameters.get("PartnerRole");
    partnerRole = typeof partnerRole !== 'undefined' ? partnerRole : '01';
    var getBpAddressesByRole = connection.loadProcedure("SAP_HANA_EPM_NEXT", 
    "sap.hana.democontent.epmNext.procedures::get_bp_addresses_by_role");
    var results = getBpAddressesByRole(partnerRole);
    //Pass output to response
    $.response.status = $.net.http.OK;
    $.response.contentType = "application/json";
    $.response.setBody(JSON.stringify(results));

    SAP HANA SPS 09: New Developer Features; New Core XSJS APIs


    This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

     

    Already we have looked at the new XSJS Database Interface in SPS 09. However this isn't the only new core XSJS API in SPS 09.  There are several other new core XSJS APIs which we will now explore further in this blog.

     

    SMTP

    Probably the most requested API was an SMTP one.  Sending email from XSJS is an obviously valuable feature, and it has therefore been on our backlog since SPS 06. For one reason or another it never quite shipped – until now.  With SPS 09 we add a rather full-featured SMTP implementation as its own XSJS API. This includes multi-part messages, attachments, secure sockets, etc.

     

    Here is a simple example of the API:

     

    //create email from JS Object and send
    var mail = new $.net.Mail({    sender: {address: "demo@sap.com"},    to: [{ address: "demo@sap.com"}],    subject: "XSJS Email Test",    parts: [ new $.net.Mail.Part({        type: $.net.Mail.Part.TYPE_TEXT,        text: "The body of the mail.",        contentType: "text/plain"    })]
    });
    var returnValue = mail.send();
    var response = "MessageId = " + returnValue.messageId + ", final reply = " + returnValue.finalReply;

     

    Here is a slightly more complex example that also processes a file attachment:

     

    //create email from JS Object and send
    var mail = new $.net.Mail({
        sender: { address: "demo@sap.com" },
        to: [{ address: "demo@sap.com" }],
        subject: "XSJS Email Test",
        parts: [ new $.net.Mail.Part({
            type: $.net.Mail.Part.TYPE_TEXT,
            text: "Attachment Test",
            contentType: "text/plain"
        })]
    });
    //getImage() is assumed to return the binary content of the attachment
    mail.parts.push(new $.net.Mail.Part({
        type: $.net.Mail.Part.TYPE_ATTACHMENT,
        data: getImage(),
        contentType: "image/jpg",
        fileName: "myPicture.jpg"
    }));
    var returnValue = mail.send();
    var response = "MessageId = " + returnValue.messageId +
                   ", final reply = " + returnValue.finalReply;

     

    And of course there is a new configuration screen as part of the XS Admin tool to setup your SMTP server connection. This also gives you some idea of the many authentication types and SMTP settings we support.

    HANABlog4.png

     

    ZIP

    On the subject of often requested features, another very popular request was for ZIP/GZIP support.  SPS 09, thankfully, also delivers this feature.  We add an XSJS API which allows you to create/read/process ZIP and GZIP archives. It also has some nice optimizations which allow you to process a large ZIP from within the database result set or the request body without having to copy the byte array into the JavaScript. The actual ZIP processing will take place in the kernel and only a single part/object can be returned and materialized in the JavaScript VM.  This helps to process larger ZIP archives without needing to extend the memory allocation of the JavaScript VM.

     

    In this simple example you can see how the $.util.Zip library works.  You can create folder structures in the ZIP simply by specifying the full path as you create the content:

     

    var zip = new $.util.Zip();
    zip["folder1/demo1.txt"] = "This is the new ZIP Processing in XSJS";
    zip["demo2.txt"] = "This is also the new ZIP Processing in XSJS";
    $.response.status = $.net.http.OK;
    $.response.contentType = "application/zip";
    $.response.headers.set('Content-Disposition', "attachment; filename = 'ZipExample.zip'");
    $.response.setBody(zip.asArrayBuffer());

     

    But we can also process the ZIP directly from the Web Request object; avoiding any materialization of the content into the JavaScript VM.

     

    var zip = new $.util.Zip($.request.body);

     

    Similarly we can directly access the ZIP contents from within the Result Set object of a database query:

     

    statement = conn.prepareStatement("select data from EXAMPLETABLE where id=1");
    var rs = statement.executeQuery();
    if (rs) {
      while (rs.next()) {
        //Load Zip from ResultSet (column 1 holds the archive)
        var loadedZip = new $.util.Zip(rs, 1);
      }
    }

    XML

    Another new API in SPS 09 is an expat-based SAX XML Parser.  It supports parsing from a JavaScript String, JavaScript Array Buffer (with encodings in US-ASCII, UTF-8, and UTF-16), external entity, or $.web.Body object. Like the ZIP API, this allows for XML processing without always having to transfer the source XML into the JavaScript VM when working from the web body, external entity, or database result set.

     

    Here is a simple example where we parse from a hard-coded JavaScript string. This example shows how you can register JavaScript functions as callback handlers for parsing events.

     

    //create a new $.util.SAXParser object
    var parser = new $.util.SAXParser();

    //parse XML from String
    var xml = '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>\n' +
              '<!-- this is a note -->\n' +
              '<note noteName="NoteName">' +
                  '<to>To</to>' +
                  '<from>From</from>' +
                  '<heading>Note heading</heading>' +
                  '<body>Note body</body>' +
              '</note>';
    var startElementHandlerConcat = "";
    var endElementHandlerConcat = "";
    var characterDataHandlerConcat = "";

    parser.startElementHandler = function(name, atts) {
        startElementHandlerConcat += name;
        if (name === "note") {
            startElementHandlerConcat += " noteName = '" + atts.noteName + "'";
        }
        startElementHandlerConcat += "\n";
    };
    parser.endElementHandler = function(name) {
        endElementHandlerConcat += name + "\n";
    };
    parser.characterDataHandler = function(s) {
        characterDataHandlerConcat += s;
    };
    parser.parse(xml);

    var body = 'Start: ' + startElementHandlerConcat + '</br>' +
               'End: ' + endElementHandlerConcat + '</br>' +
               'Character: ' + characterDataHandlerConcat + '</br>';
    $.response.status = $.net.http.OK;
    $.response.contentType = "text/html";
    $.response.setBody(body);

    Improved Multi-Part support

    When working with complex HTTP request or response objects, multi-part objects are often used. For example in an OData batch request, each record in the batch is a separate part of the request object.  The existing entity object APIs in XSJS have been extended in SPS 09 to help with the processing of multi-part entities.

     

    // Handling of multipart requests and responses in xsjs files:
    var i;
    var n = $.request.entities.length;
    var client = new $.net.http.Client();
    for (i = 0; i < n; ++i) {
       var childRequest = $.request.entities[i].body.asWebRequest();
       client.request(childRequest, childRequest.headers.get("Host") + childRequest.path);
       var childResponse = client.getResponse();
       var responseEntity = $.response.entities.create();
       responseEntity.setBody(childResponse);
    }

    Asynchronous Request Completion

    In SPS09 we added a new field to the response object where a follow-up JavaScript function event handler can be registered for additional processing upon request completion.

     

    $.response.contentType = "text/html";
    var output = "Hello, World! <br><br>";
    var conn = $.db.getConnection();
    var pstmt = conn.prepareStatement('select * from DUMMY');
    var rs = pstmt.executeQuery();
    if (!rs.next()) {
      $.response.setBody("Failed to retrieve data");
      $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
    } else {
      output += "This is the response from my SQL: " + rs.getString(1);
      $.response.setBody(output);
      //register a follow-up function to run after the request completes
      $.response.followUp({
         uri : "playground.sp9.followUp:other.xsjs",
         functionName : "doSomething",
         parameter : {
             a : "b"
         }
      });
    }

     

    Text Access API

    Translatable text objects are defined in the HANA Repository as design-time objects called HDBTEXTBUNDLE. We already have an API in SAPUI5 to access these text objects from the client, and the stored procedure TEXT_ACCESSOR for SQL/SQLScript access. In SPS09 we now offer the same functionality as an XSJS API.  Please note that unlike all the other APIs listed in this blog, this one is not implemented in the $ namespace. Instead it is written as an XSJSLIB available from the sap.hana.xs.i18n package.

     

    var textAccess = $.import("sap.hana.xs.i18n","text");
    var bundle = textAccess.loadBundle("playground.sp9.textAccess","demo1");
    var singleText = bundle.getText("demo");
    var replaceText = bundle.getText("demo2",['1001']);
    var oAllTexts = bundle.getTexts();
    //$.response.setBody(singleText);
    $.response.setBody(replaceText);
    //$.response.setBody(JSON.stringify(oAllTexts));

    Support for Virus Scan Interface (VSI) for applications

    New XSJS API ($.security.AntiVirus) to access and use the SAP Virus Scan Interface from your server side JavaScript coding.

     

    • The scan needs a Virus Scan Adapter (VSA) to be installed on the host
    • The setup and configuration is available with SAP note 2081108
    • This class uses the SAP certified interface NW-VSI 2.00 (see SAP note 1883424)
    • For a list of the AV products supported, see SAP note 1494278

     

    Code Sample for using the new Virus Scan Interface from XSJS:

     

    try {
      //create a new $.security.AntiVirus object using the default profile
      var av = new $.security.AntiVirus();
      av.scan($.request.body);
    } catch (e) {
      $.response.setBody(e.toString());
    }

    Secure Store

    The secure store API can be used to securely store data in name/value form. Applications can define a secure store object file and refer to this design-time object in the application coding. The XS Engine takes care of the encryption and decryption and also provides the persistence for the data. There are two visibility options for the data: a) visible application-wide – all users of the application share the same data and can encrypt/decrypt it, e.g. passwords for a remote system; b) visible application-wide but separated at user level – every user of the application can have their own encrypted data which can only be decrypted by that user, e.g. credit card numbers, PIN codes, etc.


    function store() {
      var config = {
        name: "foo",
        value: "bar"
      };
      var aStore = new $.security.Store("localStore.xssecurestore");
      aStore.store(config);
    }

    function read() {
      var config = {
        name: "foo"
      };
      try {
        var store = new $.security.Store("localStore.xssecurestore");
        var value = store.read(config);
      } catch (ex) {
        //do some error handling
      }
    }

    var aCmd = $.request.parameters.get('cmd');
    switch (aCmd) {
    case "store":
      store();
      break;
    case "read":
      read();
      break;
    default:
      $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
      $.response.setBody('Invalid Command');
    }

    SAP HANA SPS 09: New Developer Features; New XSODATA Features


    This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

     

    In this blog we will look at new features in the XSODATA service framework in SAP HANA SPS 09.

     

    Configurable Cache-settings for the $metadata request

    When calling OData services, the $metadata document is often requested over and over again, yet changes to the underlying entity definitions are relatively rare. Therefore in SPS 09 we add the option to configure caching of the $metadata document in order to avoid repeatedly reprocessing the metadata for these redundant requests.

     

    ETag Support

    The OData specification uses ETags for optimistic concurrency control.  You can read more about the specification here: http://www.odata.org/documentation/odata-version-2-0/operations/#ConcurrencycontrolandETags

     

    If the developer wants to support this feature in XSODATA, they have to enable it per entity in the .xsodata file. Example:

    service {
      entity "sap.test.odata.db.views::Etag" as "EtagAll"
        key ("KEY_00") concurrencytoken;
      entity "sap.test.odata.db.views::Etag" as "EtagNvarchar"
        key ("KEY_00") concurrencytoken ("NVARCHAR_01","INTEGER_03");
    }


    If only “concurrencytoken” is specified, all properties except the key properties are used to calculate the ETag value. If specific properties are given, only those are used for the calculation. “concurrencytoken” cannot be used on aggregated properties with aggregation method AVG (average).


    Nullable Properties

    During a create operation, all entity properties are automatically generated by the XSODATA layer and, since they are not nullable, the consumer is forced to pass dummy values into these properties.

     

    However, OData supports $filter and $orderby conditions on the “null” value. This means that it is now possible to treat “null” as a value, if the developer enables it. This behavior can only be enabled for the whole service, not per entity. Example:


    service {
      …
    }
    settings {
      support null;
    }


    Only if this support is enabled are $filter requests like $filter=NVARCHAR_01 eq null possible; otherwise “null” is rejected with an exception.

    If the support is not enabled, the default behavior applies: all null values are ignored in comparisons and the respective rows are removed from the result (i.e. common database behavior).


    OData Execution Tracking Utility

    In order to ease supportability and performance analysis of OData requests in HANA, we've added functionality to profile the performance of request processing (the executed queries and the time spent in different OData components) for both read and write requests. The requested profiling info is accessible only if the XS engine is in debug mode.


    Main usage is tracking performance by:
    1. Add a query parameter named 'profile' to the OData request, which notifies the server to produce the report
    2. If the parameter is present and the engine is in debug mode, OData profiling is done
    3. The OData response is skipped and the server returns:
         an HTML page with the collected information in case of profile=html (default)
         a JSON response with the collected info in case of profile=json
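    For example, a profiled read request could look like this (the service path and entity set are placeholders):

    GET /path/to/your/service.xsodata/MyEntitySet?$top=10&profile=html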

     

    HANABlog5.png


    OData Explorer

    The SAP River Application Explorer has been rebuilt as a general SAP OData Explorer in SPS 09. It allows for general testing and data generation of XSODATA-based services. You can view all records in a service or create/delete/edit individual records. It also supports mass generation of multiple records at once with random value generation. It can be launched from the Web-based Development Workbench (via the context menu option on an XSODATA service) or directly via the URL:

    /sap/hana/ide/editor/plugin/testtools/odataexplorer/index.html?appName=<xsodata service path>


    HANABlog6.png



    SAP HANA SPS 09: New Developer Features; SAP HANA Web-based Development Workbench


    This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

     

    With the inclusion of browser-based development tools in HANA SPS 06, you are no longer required to install the SAP HANA Studio and Client if you only need to do some basic development object creation or editing in the SAP HANA Repository. This means you can be coding your first application within seconds of launching a SAP HANA instance. The usage of such browser-based development tools is particularly appealing in cloud-based SAP HANA development scenarios, like SAP HANA One.  You only need access to the HTTP/HTTPS ports of the SAP HANA server and avoid the need for any additional client-side software installation. A browser pointed at the SAP HANA server is all you need to begin development.

     

    With SPS 09 we continue to enhance the browser based development tools adding support for a larger number of development artifacts as well as enhancing and improving the editors which already existed in previous releases.

     

    New Design and Foundation

     

    The SAP HANA Web-based Development Workbench is now based upon the same core libraries as the SAP Web IDE. This brings several key advantages.

     

    • A new visual design for the IDE which matches the design of the SAP Web IDE.

    • Integration of some of the web tools of the SAP Web IDE. With a shared foundation we can add more of the functionality of the core web editing tooling of the SAP Web IDE over time as well.  I think you will eventually see these two tools merge into one integrated development experience for database-to-UI development in one place.

    • Contextual help via links to the online documentation.


    HANABlog7.png


    This new foundation brings with it some technical changes as well.  For example there are new URL paths for all the individual tools:

    /sap/hana/ide/editor

    /sap/hana/ide/catalog

    /sap/hana/ide/security

    /sap/hana/ide/trace


    Yet all the old URLs should redirect automatically to the new paths. 


    Likewise there are new roles. The old roles, however, are still valid as they include the new roles.


    Function Flow


    While editing, we introduce the Outline view.  This is a new panel-based, responsive UI with persisted user settings. It contains your navigation history along with alphabetical sorting, collapse all/expand all, and a function list.

    HANABlog8.png

    As part of the improvements to the function flow, we also introduce code navigation. This feature supports cross-file navigation for both client and server JavaScript.  With a Ctrl+Click you can jump to the function definition from its usage even if this definition is contained in a separate file.  We also have popin code preview and support for JSDoc as you mouse over JavaScript functions.


    HANABlog9.png


    New Templates


    The Web-based Development Workbench has always had support for new application templates.  In SPS09 we extend the list of available templates to include a complete Fiori example application.

    HANABlog10.png


    There are also new code snippets for XSJS, XSODATA, HDBPROCEDURE, and other development artifacts.

    HANABlog11.png


    Application Preview


    One of the additional advantages of moving to the SAP Web IDE foundation is that the SAP HANA Web-based Development Workbench now shares the same Application Preview tools as the SAP Web IDE.  This tool allows for HTML page testing in various form factors and screen orientations.

    HANABlog12.png


    XSODATA


    The XSODATA Editor within the SAP HANA Web-based Development Workbench receives improvements via syntax highlighting and keyword code completion.

    HANABlog13.png


    The SAP River Application Explorer has been rebuilt as a general SAP OData Explorer in SPS 09. It allows for general testing and data generation of XSODATA-based services. You can view all records in a service or create/delete/edit individual records. It also supports mass generation of multiple records at once with random value generation. It can be launched from the Web-based Development Workbench (via the context menu option on an XSODATA service) or directly via the URL:

    /sap/hana/ide/editor/plugin/testtools/odataexplorer/index.html?appName=<xsodata service path>


    HANABlog14.png


    HANABlog15.png



    SQLScript


    One of the new development artifacts with support in the SAP HANA Web-based Development Workbench in SPS 09 is the HDBPROCEDURE. Not only do we get a basic editor, but it also has advanced features such as keyword code completion and syntax highlighting.

    HANABlog16.png


    In addition to editing, we can now also debug SQLScript procedures from the SAP HANA Web-based Development Workbench. Here you can set breakpoints in the runtime object in the Catalog tool.  You then call the procedure from the SQL console.  You have resume and step over functions as well as scalar and table variable/parameter previews.

    HANABlog17.png


    Performance Analysis in the SQL Console


    A new feature of the SQL Console in the SAP HANA Web-based Development Workbench is to allow for performance measurements. You see an expanded detail of the performance trace. You also have the option to perform repeated calls to the same operation and graph the performance over time.

    HANABlog18.png


    HANABlog19.png


    Form based Role Editor


    With SPS 09, the Web-based Development Workbench introduces a supported editor for creating and maintaining designtime roles.  Unlike the HANA Studio, this is not a source-code-based editor; instead it is a form-based editor with similar functionality to the older runtime role editor in the HANA Studio. This makes editing roles much easier for security administrators who might not be that familiar with coding.

    HANABlog20.png


    HANA Test Tools Integration

    With SPS 09, SAP ships optional Unit Test and Mock Framework tools. If these tools are installed on your HANA instance, then the option to trigger these tests will also show up in the SAP HANA Web-based Development Workbench. In addition to just running the unit tests, you can also choose to perform a code coverage analysis.  The results will be displayed in the editor by highlighting the lines of code which were touched by the unit test.

    HANABlog21.png

    HANABlog22.png



    Calculation View Editor

    Another important addition to the list of supported development artifacts in the SAP HANA Web-based Development Workbench is the inclusion of Calculation Views. This is the first of the modeled views to be supported by the web development tooling and offers many options and features of the advanced modeling environment. The editor supports both scripted and graphical Calculation views.

    HANABlog23.png

    HANABlog24.png


    Analytic Privilege Editor

    To complement the support for Calculation Views, we also introduce a new editor in the Web-based Development Workbench for Analytic Privileges.

    HANABlog25.png


    Smart Data Access

    SPS 09 also introduces a new editor for Smart Data Access integrated into the Catalog tool of the SAP HANA Web-based Development Workbench. It allows you to define and edit Remote Sources and to create and maintain Virtual Table definitions.

    HANABlog26.png


    Replication Task Editor

    The last of the major new editors introduced in SPS 09 is the Replication Task editor.  This tool allows you to define replication tasks and to perform target mapping.

    HANABlog27.png


    CDS/HDBDD Editor

    The CDS/HDBDD Editor also received several improvements in SPS 09.  The editor now supports syntax highlighting and local code completion.

    HANABlog28.png


    It also integrates the data preview function. This allows you to select an entity within an HDBDD view and generate a SELECT SQL Statement to preview the contents in the underlying table.

    HANABlog29.png





    SAP HANA SPS 09: New Developer Features; SAP HANA Studio


    This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

     

    While we see major investment in the web-based tooling around SAP HANA, SAP also continues to make improvements and additions to the Eclipse based SAP HANA Studio as well. In this blog we will detail the enhancements to the SAP HANA Studio.

     

    Project Creation Wizard

     

    The project creation wizard was first introduced with SPS 07.  Initially it focused on streamlining project creation and then sharing the project to the Repository. In SPS 09, it is further enhanced to include functionality to generate the initial .xsaccess and .xsapp files.  It also optionally allows for the generation of a Schema, an HDBDD file (for creating tables and views) and an XSJS service. This takes much of the functionality of the Application Creation Wizard of the HALM tool and moves it into the HANA Studio, thereby streamlining the project creation workflow.

    HANABlog30.png

     

    Navigation to XS Admin tool

     

    Before SPS 09, when you wanted to edit the settings of xsjob, xssqlcc, xshttpdest, or xsaccess files you had to manually open the XS Admin tool in a web browser and navigate to the package path that contains the development artifact.  Now in SPS 09 we can navigate directly to the XS Admin web tool from the context menu of the project explorer.  This opens the XS Admin tool at the correct development artifact, in-place within the SAP HANA Studio.

    HANABlog31.png

     

    Debugging

     

    We heard feedback from developers that starting the debugger in the HANA Studio in the past was cumbersome. You had to choose the XS Session ID, which in turn could only be determined by looking at cookies within the web browser.  Therefore one of the major goals for SPS 09 was to improve the overall debugging experience.

     

    To that end we now introduce one-click debugging in SPS 09.  No longer do you have to choose the XS Session ID. Instead the target of the debug session will launch in an external web browser or run in-place within the HANA Studio when you choose debugging.  This also means that we needed to provide tools within the debugger for stubbing in HTTP headers, body, etc.  In the past developers often used 3rd party tools like Postman to simulate service calls. Now you can do all of this from within the HANA Studio as you start debugging.

    HANABlog32.png

     

    But the debugging improvements don't end there.  We now also support XSJS/SQLScript integrated end-to-end debugging.  From an XSJS that calls into a SQLScript stored procedure we can now step from the XSJS debugger seamlessly into the SQLScript debugger.

    HANABlog33.png

     

    Direct Editing from the Repository Browser

    When we talk with developers we often ask them what they like and don't like about each of the development tools.  One of the most common positive things we heard about the SAP HANA Web-based Development Workbench was the streamlined workflow: in the web tooling you can just directly edit a file without having to create a project or perform any content check-out.  Therefore we decided to bring this same streamlined workflow option into the HANA Studio as well.

     

    No longer is it required to check out content or even have a project to create or edit development artifacts.  All objects are directly editable simply by selecting them in the Repository browser. The creation of new packages and development artifacts is also enabled from the Repository browser.

    HANABlog34.png

     

    Yet we retain the functionality to work with projects as well.  For intensive daily work on a sub-set of development artifacts, the efficiency of the project view is still available. However for quick one-off changes you now have the fast option to edit directly.

     

    Repository Workspaces

    To complement the improved workflow of direct editing in the Repository browser, we have also improved the process for creating workspaces. Now when entering the Repositories view for the first time you immediately see all the system connections which you've created previously.  You can right-click on a system connection and choose Import Remote Workspace, and the local folder and all other settings are set up with a single click.

     

    There are also new options for administrators to delete other users' workspaces. This helps with cleaning up workspaces that are no longer needed (i.e. the user has been deleted) and keeps orphaned inactive objects from causing any problems.

    HANABlog35.png

     

    Refactoring Service

    Another major pain point for developers was the impact of renaming or moving development objects.  For example if you move a table definition between two packages, this also impacts the table name.  When the table name changes, all the views which use this table are broken.  While the where-used services introduced in SPS 07 help find these dependencies, SPS 09 goes a step farther by not only finding all impacted objects but also proposing solutions and automatically adjusting package references in the source and impacted objects.

    HANABlog36.png

     

    HDBDD Template

     

    While many development artifacts received creation wizards and templates in SPS 07, HDBDD received only a basic template.  In SPS 09 the template options were significantly enhanced. You can now choose from four different common scenarios to use as starting templates.

    HANABlog37.png

     

    SQLScript

    The biggest enhancement to the SQLScript Editor is, without a doubt, the introduction of Semantic Code Completion in SPS 09.  When you press CTRL+Space you trigger code completion which will list all relevant objects based upon your current context.  This searches for matches of tables, schemas, other procedures, basically any database object.

    HANABlog38.png

     

    Check out the demo video here.



    WebBridge

     

    With the heavy investment in the SAP HANA Web-based Development Workbench, it may sometimes be the case that a new editor is only developed in the web-based tooling.  But if you are working primarily within the SAP HANA Studio you might not want to switch over to the web-based tooling for just one editor.  This is why in SPS 09 we introduce the WebBridge.

     

    The WebBridge allows editors which only exist in the SAP HANA Web-based Development Workbench to run within the SAP HANA Studio.  They still use the Studio Save, Activate, and other menu and toolbar options.  You use the Open With option and then choose Embedded Web Editor.

    HANABlog39.png

    Searching ESP 5.1 SP09 Docs

    Fuzzy search your Hana DB with NodeJS on CloudFoundry


    Dear Cracks,

     

    I wrote this code more than 6 months ago and, as it is still not legacy, I decided to write this blog post.

    In this tutorial, I will show you how easily you can get data out of your HANA database with NodeJS. The application is written in a way that you can deploy it on CloudFoundry.

     

    Why Hana?

    Hmm... I think it is not necessary to describe here why to use HANA. I mainly use it because it is super fast and has a lot more to offer than just the database.

     

    Why NodeJS?

    "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices." Holger Koser @kos wrote a NodeJS HANA driver, which is also open source and available on GitHub. The driver allows you to connect to your HANA DB and write powerful applications in plain JavaScript.

     

    Why CloudFoundry?

    In the cloud era, the application platform will be delivered as a service, often described as Platform as a Service (PaaS). PaaS makes it much easier to deploy, run and scale applications. Some PaaS offerings have limited language and framework support, do not deliver key application services, or restrict deployment to a single cloud. CloudFoundry is the industry's open PaaS and provides a choice of clouds, frameworks and application services. As an open source project, there is a broad community both contributing to and supporting CloudFoundry, and you can find the whole code on GitHub. Internally we have CloudFoundry running at production grade, which is mainly why I go with CF here.

     

    Let's get the party started

    This tutorial shows you how to deploy an application which uses openUI5 as frontend, NodeJS as backend and SAP HANA as database. Regular SAP systems and SAP databases need a huge landscape which is not really fast and needs a lot of customization. A big advantage of this use case is the very fast SAP HANA database. We use it directly from our NodeJS backend without any middleware. Because of this architecture we're able to request our data really fast.

    This is the architectural overview:

    https://camo.githubusercontent.com/8bf6a0e9bd808ca364f31df77eeafbc6694c4400/68747470733a2f2f73332d65752d776573742d312e616d617a6f6e6177732e636f6d2f7377697373636f6d2d6974732f6e6f646a732d68616e612f6172636869746563747572652e706e67

    Note that you can use any CloudFoundry-based PaaS, not only our Swisscom Cloud but also e.g. Pivotal Web Services.

     

    To demonstrate the fast usage of HANA we provide a freestyle fuzzy search (Wikipedia) over three columns and a big amount of data; a rough code sketch follows below. Follow the tutorials to understand how it works.
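    To give a feel for what this looks like in code, here is a minimal sketch that connects with the node-hdb driver and runs a fuzzy CONTAINS query; host, credentials, schema, table and column names are placeholders you would replace with your own:

    var hdb = require('hdb');

    var client = hdb.createClient({
      host: 'my-hana-host',   // placeholder
      port: 30015,            // placeholder (instance 00)
      user: 'MYUSER',
      password: 'MyPassword1'
    });

    client.connect(function (err) {
      if (err) { return console.error('Connect error:', err); }
      // freestyle fuzzy search over three columns, best matches first
      var sql = 'SELECT SCORE() AS score, * FROM "MYSCHEMA"."ADDRESSES" ' +
                'WHERE CONTAINS(("NAME", "CITY", "STREET"), ?, FUZZY(0.8)) ' +
                'ORDER BY score DESC LIMIT 20';
      client.prepare(sql, function (err, statement) {
        if (err) { return console.error('Prepare error:', err); }
        statement.exec(['hambrug'], function (err, rows) {
          if (err) { return console.error('Query error:', err); }
          console.log(rows);
          client.end();
        });
      });
    });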

     

    Watch a demo here.

     

    There are two different types of this tutorial, they are linked on github as I only want to maintain it at one point:

    • Kickstarter steps: a short introduction on how to use this repository and what you have to do when you clone it -> Link.
    • Step by step tutorial: you learn to setup a NodeJS application and how to use a database service in Cloudfoundry -> Link.

     

    As an enhancement you can also deploy a mobile frontend in an additional layer; see the source code and steps here.

     

    What do you think about the combination of SAP HANA and NodeJS?

     

    I also wrote a demo app with which you can search through your PDFs. I would be happy to write a blog post about that if you're interested in NodeJS and HANA.

     

    Looking forward to your feedback,

    Lukas

    @Github

    @Twitter
