Channel: SAP HANA Developer Center

SAP HANA Idea Incubator - Multiple Process Improvement Proposals SAP BW


Here are some of my ideas and suggestions within the existing setup which could significantly benefit end users and BW consultants.

 

Data Modelling

 

  • Default handling of junk characters upon loading: BW data models always have to include their own custom code to handle junk characters, and built-in handling is a long-standing wish from developers.

 

  • There should be a provision to compare BW objects and reports across systems to identify inconsistencies.

 

  • There is no standard way to perform data integrity checks against the source system. SAP should define standard processes for data reconciliation, or provide data models to compare data with source systems.

 

  • Similar to the metadata repository, SAP could provide a repository with built-in tools. It would also be nice if SAP provided standard tools for certain tedious operations which currently need custom workarounds, for example:
    • Identifying BEx reports associated with roles
    • Reports associated with web templates and workbooks
    • Where-used list of variables in reports
    • Reports with last-used information

Reporting

  • Provide one single button to remove all drilldowns from the report output.

 

  • If a report has 30+ fields available for drilldown, it is tedious to add them to the report output because users have to scroll to the bottom; enable search functionality for the fields available for drilldown.

 

  • Enable adding multiple drilldown fields to the report output at once.

 

 

https://ideas.sap.com/SAPHANAIdeaIncubator/user-friendly-options-in-bw-report---pro

 

 

Thanks

Abhishek Shanbhogue


Hana SPS09 Smart Data Integration - Realtime Sources with Transformations


This post is part of an entire series

Hana SPS09 Smart Data Integration - Overview

 

The classic way to work with external data in Hana is to replicate the source tables to Hana and then transform the data in the user queries, e.g. by adding a calculation view on top. But does it make sense to apply the same transformations to the same rows every time a user queries the data? It would make more sense to apply the transformations only once to each row and make the calculation view very simple. In other words, the target of the realtime subscription receiving the changes should be a transformation task, not a table like in the Twitter example before.

 

And actually, that is quite simple utilizing the Hana Smart Data Integration feature. To be more precise, it is two checkboxes....

 

 

The batch dataflow

In the previous example of a batch dataflow we read from two database tables, joined them together, and loaded the result into a target table. For a change, the source this time should be an RSS feed from CNN.com.

So I created an RSS feed adapter, created a remote source pointing it to the URL rss.cnn.com/rss/cnn_latest.rss, and by creating a virtual table on this remote source we can read all CNN news from this feed, i.e. the 50 most recent articles.
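
Just for illustration, such a virtual table can then be queried like any local table, for example from an xsjs service. The schema, virtual table and column names in this sketch are assumptions, not the actual structure produced by the adapter:

// Hedged sketch: "DEMO"."V_RSSFEED" and its columns are made-up placeholders.
var conn = $.hdb.getConnection();
var rs = conn.executeQuery('SELECT TOP 5 "URL", "TITLE", "PUBDATE" FROM "DEMO"."V_RSSFEED" ORDER BY "PUBDATE" DESC');
var it = rs.getIterator();
while (it.next()) {
    var row = it.value();
    // e.g. trace the five most recent headlines
    $.trace.debug(row.PUBDATE + ' ' + row.TITLE);
}
conn.close();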

The primary key of a news article is its URL, hence this column is marked as the PK of the source table, and the target table should have the same structure. To avoid a primary key violation when running the dataflow a second time, a table comparison transform compares the incoming data with the data already loaded, inserts new rows, updates changed rows, and discards everything that was loaded already.

 

rssfeed1.png

 

 

The realtime dataflow

Executing that dataflow frequently would be one option, but the RSS adapter was actually built to support realtime subscriptions and has optimizations built in; for example, it already checks the HTTP header for the page's last change date. Therefore it is better to let the adapter push changes to Hana.

To accomplish that, all we have to do is check the realtime boxes in the above dataflow.

rssfeed2.png

 

There are two of them, one is on container level (above screenshot) and the second is a property of the source table.

On table level the realtime box has to be set in case there are multiple source tables and only some of them should be read in realtime, e.g. you join this V_RSSFEED table with a flat file virtual table, which is static, not realtime.

And on container level the realtime flag is needed to generate a realtime task, even if just table types are used as source, no virtual table at all.

 

That's it. You execute the above dataflow once, and from then on all changes are pushed by the adapter through this transformation into the final table.

 

Granted, the above transformation is not a particularly complex one, but nothing prevents us from building more complex transformations, say performing text data processing on the news headlines to categorize the text into topic areas, companies named and people named, and loading those into Hana.

Then the calculation view is a simple select on these tables, instead of performing the transformations on all data every time a user queries something.
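
To make that concrete, a query against such pre-processed entity tables could be as trivial as the following sketch; the schema, table and column names are invented for illustration and are not generated by the dataflow:

// Hypothetical example: the heavy lifting happened at load time, so the query stays simple.
var conn = $.hdb.getConnection();
var rs = conn.executeQuery(
    'SELECT "COMPANY", COUNT(*) AS "MENTIONS" FROM "DEMO"."HEADLINE_ENTITIES" ' +
    'GROUP BY "COMPANY" ORDER BY "MENTIONS" DESC');
var it = rs.getIterator();
var top = [];
while (it.next()) { top.push(it.value()); }
conn.close();
$.response.contentType = 'application/json';
$.response.setBody(JSON.stringify(top));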

 

 

Under the hood

Obviously the realtime checkbox changes a lot from the execution point of view. Most importantly, two tasks are now generated by the activation plugin of the hdbtaskflow. One is the initial load, the batch dataflow; the other is a realtime task.

The interesting part is the realtime flow. It is more or less the same as the initial load task, except that the source is not a virtual table but a table type.

rssfeed3.png

 

 

 

The activation plugin has also created a remote subscription, this time with target TASK instead of TABLE as before.

rssfeed4.png

 

 

 

When executing the initial (batch) load, the realtime subscription is activated. We can see that in the stored procedure that is used to start the dataflow.

rssfeed5.png

 

 

Compare the above with any other ETL tool. With those it takes you, say, one hour to create the initial load dataflow and multiple days for the delta logic. The goal here is to reduce that effort to a mouse click and support realtime delta.

Try hanatrial using Python or nodejs


A step-by-step example of how to connect to a hanatrial instance using the Python or Node.js open source clients, PyHDB or node-hdb, and the HANA Cloud Platform console client.

 

1. Download the SDK of choice from SAP Development Tools for Eclipse repository

1_sdk.png

 

Unpack it, start a bash shell on Linux or OS X, or the Command Prompt on Windows, and go to the tools subfolder of the SDK. This example was tested on Linux (Ubuntu 14.04); on Windows it should work the same way.

 

 

2. If you are behind a proxy, configure the proxy in your shell

following the readme.txt in the tools folder.

 

 

3. Open the tunnel to the hanatrial instance

following the SAP HANA Cloud documentation, for example:

 

Pass.png

 

You can check your username, account name and HANA instance name in the hanatrial cockpit.

 

Account.png

 

Instance.png

 

 

After entering the password, the tunnel is opened and a localhost proxy is created for access to the hanatrial instance:

 

TunnelParams.png

 

The default local port is 30015, but check whether yours is different.

 

 

4. Connect from Python or nodejs Client

Use the displayed parameters to connect from the Python client and, for example, display table names:

 

import pyhdb

connection = pyhdb.connect('localhost', 30015, 'DEV_4C55S55VRW5Z1W3STRMBCWLE0', 'Gy5q95tQaGnOZbz')
cursor = connection.cursor()
cursor.execute('select * from tables')
tables = cursor.fetchall()
for table in tables:
    print table[1]

 

Screen Shot 2015-04-24 at 15.09.46.png

 

It works the same way for the Node.js client, with only the connection parameters adapted from the node-hdb Getting Started example:

var hdb    = require('hdb');
var client = hdb.createClient({
  host     : 'localhost',
  port     : 30015,
  user     : 'DEV_4C55S55VRW5Z1W3STRMBCWLE0',
  password : 'Gy5q95tQaGnOZbz'
});
client.on('error', function (err) {
  console.error('Network connection error', err);
});
client.connect(function (err) {
  if (err) {
    return console.error('Connect error', err);
  }
  client.exec('select * from DUMMY', function (err, rows) {
    client.end();
    if (err) {
      return console.error('Execute error:', err);
    }
    console.log('Results:', rows);
  });
});

 

Screen Shot 2015-04-24 at 15.35.02.png

Real time train information to the passengers who are travelling..


My idea is that people should get real-time messages on their mobile phones with the details of the long-distance trains they are travelling on. Passengers should have real-time information from the time they board until the end of their journey. This helps passengers look for an alternative if the train has been cancelled. Suppose the train is at station W, there is a track problem that takes 2 hours to repair, and a passenger has to board at Y to take an exam. If he has no information about the train and waits for it, he misses his exam. But if the passenger learns that the train will leave W only after 2 hours, he can take an alternative to his destination and avoid last-minute chaos. Having experienced this situation myself, I would suggest looking at whether SAP HANA cloud computing can solve this problem.

Join us at the SAP HANA Developers Expert Summit!



blogimage.jpg

The SAP HANA Product Management team is inviting developers who are actively developing applications on SAP HANA to this free one-day event, where we are interested in hearing from you firsthand about your SAP HANA application development experiences and challenges, and what is and is not working for you.   If you’ve been spending your days banging away at the keyboard and building killer apps on SAP HANA, we want to talk to you.   We need to talk to the developers out there who have been neck deep in creating tables and views via CDS (Core Data Services), writing SQLScript stored procedures, and exposing services such as OData and server-side JavaScript.   We’d also like to talk to the developers who are creating applications leveraging some of the HANA-specific features such as spatial, predictive, and text analysis. So if you have worked with any of these topics, please consider joining us in Newtown Square or Palo Alto for this free event.   The plan is to have a set of brief update presentations for each topic, followed by a round of feedback sessions where you will be invited to sit down with the product manager responsible for that topic and let them know what your greatest successes have been and what your worst pain points are.   This is exclusive access to the product managers within SAP who can help you influence the direction of the product for the better. Don’t miss this opportunity.


This is an interactive event with a small number of experienced hands-on customer experts, networking together and providing direct and candid feedback to SAP. To this end the number of registrations will be limited; all attendance requests will be given careful consideration, and we will contact you with a confirmation email and the next steps.


We are running this event in two locations, register today for an invitation!  Please register for one location only.


Register here for an invitation for Palo Alto - June 24th, 2015

Register here for an invitation for Newtown Square - September 2nd, 2015
 

The tentative agenda is as follows:



Time                    Agenda Topic                          Speaker
8:00 am – 9:00 am       Breakfast & Check-In
9:00 am – 9:20 am       Welcome & Introduction                Mike Eacrett
9:30 am – 10:30 am      Topic Update Presentations I
                          Tooling & Lifecycle Management      Mark Hourani / Ron Silberstein
                          Core Data Services                  Thomas Jung
                          Modeling                            Lori Vanourek
                          SQLScript                           Rich Heilman
10:30 am – 12:10 pm     Break Out Feedback Sessions I
12:10 pm – 1:00 pm      Lunch
1:00 pm – 2:15 pm       Topic Update Presentations II
                          SAP HANA XS                         Rich Heilman
                          The Future of SAP HANA XS           Thomas Jung
                          Predictive                          Mark Hourani
                          Spatial                             Balaji Krishna
                          Text                                Anthony Waite
2:15 pm – 2:30 pm       Afternoon Break
2:30 pm – 4:35 pm       Breakout Feedback Sessions II
4:45 pm – 5:00 pm       Closing Remarks                       Mike Eacrett
5:00 pm – 8:00 pm       Networking Reception & Dinner

 

Sapphire: WHO'S WITH ME?


When I played rugby back in the day, if I was running with the ball and one of my teammates was on the outside, ready to turn the corner and score a try, they would yell out "I'M WITH YOU!"

 

 

england-rugby-wallpapers-new-300x188.jpg

 

 

 

As you and I get ready for a great week next week at Sapphire - I want to ask you this question: "WHO's WITH ME?" The HANA Platform and the HANA Cloud Platform teams have put together an awesome agenda and a comprehensive set of content for Sapphire, and we want you to come along with us. For the HANA Platform and HANA Cloud Platform, there will be:

 

  • 15 customer theater presentations, from marquee brands such as Coca-Cola, Under Armour, Lockheed Martin, Schlumberger, Siemens and others.
  • 4 customer panels, one hosted by Steve Lucas and one by Irfan Khan
  • 46 Microforums, discussing everything from HANA Best Practices, to How to get up and running with BW on HANA, to the value of extending your applications with SAP's PaaS offering (HCP)
  • 63 Demo Theaters, from Dynamic Tiering to IoT to Big data to hybrid cloud to cloud extensions and more
  • 19 demo stations, with HANA experts ready to "show and tell" all kinds of cool things HANA and the HANA Cloud Platform can do for your business

 

 

If you want to join me for my personal session participation (WHO'S WITH ME?), I will have a few things going on my agenda where you can join me:

 

  • Tuesday 12:30-1:30pm (BI12324): The HANA Platform Roadmap session.  I am fortunate to be co-presenting with Mike Eacrett, SAP VP of HANA Product Management
  • Tuesday 1:30-2pm (SID 20299): Join Craig Parker from Genband to hear how they revolutionized their customers' experience with the HANA Cloud Platform
  • Tuesday 4:30-5pm:  Come join me in the SUSE booth as we discuss ways SAP and SUSE will help you migrate off of Oracle (boo!) onto an SAP database (yea!)
  • Wednesday 11-11:45am (PT20275):  Hear from Cisco and from SAP consultants how you can choose an optimal use case for your company to get started with HANA.
  • Wednesday 4:30-5pm (PT20258): Theater session with Lockheed Martin's Stephan Gerali to understand how HANA is helping Lockheed Martin innovate
  • Thursday 8-9am (RT1630): Dialog and Q&A with the HANA Innovation Award Winners.  Don't miss this opportunity to hear from customers who have transformed their business with HANA
  • Thursday 5pm-5:30pm (SID 20301): Hear Prakash Darji, GM of our HCP unit, discuss HCP success with three customers in an interactive panel discussion

 

Over and above my sessions, I want to invite you to join me at two very special events (WHO's WITH ME?):

 

 

  • HANA Innovation Awards celebration (Tuesday night, 6:30pm, Orlando Hilton):  Come celebrate with us as we recognize top customers using HANA to simplify, accelerate and innovate their businesses.  Tuesday evening in the Orlando Ballroom at the Orlando Hilton (across the street from the Convention Center).  Everyone who attends gets the new Hasso Plattner/Bernd Leukert book, and another special giveaway.  Reserve your spot now by sending an email to nina.hunter@sap.com.
  • HANA Cloud Ice Breaker Reception (Wednesday night, 6:30pm, Minus5 Ice Bar):  Chill out at the Minus5 Ice Bar and mingle with SAP HCP and other SAP Cloud customers.  Don't know much about SAP's cloud initiatives?  Find out at the Ice Bar on Wed night.

 

I'm looking forward to finding out WHO'S WITH ME? in my activities in Orlando at Sapphire.  Together we will score a try!!!

 

 

NZ try.jpg

 

 

As an added bonus, just as I believe HANA is the greatest in-memory data management and application platform ever, in my humble opinion this is the greatest Rugby Try ever scored:  1973 New Zealand vs. Barbarians (Gareth Edwards).  What is your opinion - about HANA and about the greatest Rugby try?

 

 

Typescript definitions for HANA XS engine


Hi All,

I've written a TypeScript definitions file for use with the SAP HANA XS engine.

 

This allows you to create your XS applications in TypeScript: fire up your favorite TypeScript editor along with the definitions file:

 

ts.png

 

This shows Atom with its TypeScript plugin, which provides syntax highlighting and typeahead features.
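
As a minimal sketch of what such a file could look like, assuming the definitions file declares the global $ namespace used by XS (the file name in the reference path and the exact shape of the typings are assumptions):

/// <reference path="xsjs.d.ts" />
// minimal.xsjs.ts (hypothetical name) - after compilation the output is a plain .xsjs file
const greeting: string = 'Hello from TypeScript on XS';
$.response.contentType = 'text/plain';
$.response.status = $.net.http.OK;
$.response.setBody(greeting);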

Saving the file compiles the TypeScript to JavaScript:

 

compiled.png

 

The compiled JavaScript can be uploaded to the HANA server and run there.

 

 

Future Work:

  1. Improving the definitions file
  2. Working with TypeScript in library files (xsjslib) is a mess; the TypeScript files can be compiled into one JavaScript file using the compiler flag --out

 

https://github.com/larshp/xsjs.d.ts

Using #SQLServer 2014 Integration Services (#SSIS) with #SAPHANA – Part 1


Now that SAP has awarded me SAP HANA Distinguished Engineer status, it’s time to get some new content out to the community. Over the last year, I’ve been working with Microsoft to see what works and what doesn’t work when using SQL Server 2014 Integration Services (SSIS) with SAP HANA. After all, SAP HANA supports ODBC connections and starting with SPS 9, developers can use the SAP HANA Data Provider for Microsoft ADO.NET. For customers using SQL Server as their data platform for SAP Business Suite / NetWeaver solutions, SSIS is part of your licensed version of SQL Server. So why pay for an additional Enterprise Information Management system, when just about everything you need is with your SQL Server license? Check out the “Enterprise Information Management using SSIS, MDS, and DQS Together [Tutorial]”. Along the way, I’ve learned what worked well and what you might want to avoid when using SSIS with SAP HANA.

 

In this blog series, I’ll share my experiences over the last year using a video blog with videos hosted on YouTube. While they may not match the quality of the SAP HANA Academy YouTube videos in terms of production, they won’t have any marketing spin – this is developer to developer. Here is what you can expect in the series.

    1. Getting started with a free trial of Microsoft Azure. I’ll use Microsoft Azure’s 30-day free trial to create a client virtual machine for the tools. This means I have to complete this entire blog series within 30 days. This is the topic for this blog post.
    2. Creating a virtual machine with SQL Server 2014 and the Visual Studio Data Tools.
    3. Creating your first SSIS solution to import a flat file into SAP HANA.
    4. Best practices in security when creating SSIS packages.
    5. Copying data from a SQL Server database into an SAP HANA star-schema.
    6. Using SQL Server change data capture to incrementally update fact and dimension tables on SAP HANA.
    7. Copying data from SAP HANA into a SQL Server database.
    8. Using SSIS to extract data from SAP Business Suite into SAP HANA.
    9. Using SSIS to load data files in parallel to improve data loading performance.
    10. Trade-offs in using ODBC versus the SAP HANA Data Provider for Microsoft ADO.NET when loading text files.
    11. Using SQL Server Agent to run SSIS jobs.
    12. How to monitor SSIS job execution.
    13. How to do error handling during a connection failure to SAP HANA.
    14. How to do SSIS logging to debug issues and audit packages.

 

I’ll try to keep the videos to five minutes in length through creative editing of long-running operations. I’ll let you know when I edit out large chunks and how long the operation really took.

 

Without further ado, here is the first video on the series.

Please let me know if you like this approach with the videos (with your ratings, of course). If there is a particular topic you would like to see sooner, let me know. I hope you enjoy the series.

 

Regards,
Bill Ramos, SAP HANA Distinguished Engineer

Follow me on:

Twitter - http://twitter.com/billramo

LinkedIn - https://www.linkedin.com/in/billramo


SAP TechEd (#SAPtd) Lecture of the Week: Big Data Analytics Using SAP HANA Dynamic Tiering


Greetings, TechEd enthusiasts!  I am pleased to re-introduce my presentation from TechEd 2014 on HANA dynamic tiering. HANA dynamic tiering is a feature that enhances HANA with an integrated disk-backed storage and processing tier – a warm store - for managing less frequently accessed data.  But wait – hasn’t SAP always promoted memory ONLY for real-time transaction processing and analytics?  Essentially, yes - SAP’s mantra is now memory FIRST.  For “hot” data, there is nothing better than HANA, which has been architected from the ground up as an all in memory solution with blazing performance.  However, not all data requires real-time access, and economies of scale are achieved through use of storage tiering technologies integrated with the HANA platform.  This broadens the HANA platform to encompass Big Data and large volume warm/cold data for a 360 degree enterprise view.

 

The first version of HANA dynamic tiering was released in the fall of 2014 as an add-on option to HANA SPS09.  My 2014 TechEd presentation gave an overview of the feature – the motivation for developing it, the use cases it is designed for, and technical details:

 

 

 

 

 

HANA dynamic tiering in its first incarnation was targeted primarily at SAP BW on HANA customers who were looking to reduce their HANA footprint by moving less critical data out of memory and onto cheaper storage.  SAP BW integrated HANA dynamic tiering to automatically reposition potentially large persistent staging area tables and write-optimized DSOs to disk for a significantly reduced memory footprint.  The SP10 version of HANA dynamic tiering – currently under development - will bring improved HA/DR, query speed, and data aging capabilities which will make the feature attractive to HANA developers who are building applications that manage large amounts of data – much of which does not need the low latency performance of continuous in-memory residence.


I am excited to look back to last year’s overview of HANA dynamic tiering, and also to look ahead to upcoming, improved versions of the capability.  Dynamic tiering will extend HANA’s reach into new problem spaces, and open up new opportunities for our customers.

SAP TechEd (#SAPtd) Strategy Talk: SAP's Platform-as-a-Service Strategy


In this Strategy Talk, recorded at SAP TechEd Bangalore 2015, Ashok Munirathinam, Director PaaS APJ, speaks about

how SAP intends to position the SAP HANA Cloud Platform for customers, partners, and developers to build new applications, extend on-premise applications, or extend cloud applications. In this session you can get an understanding of the platform today and the direction SAP is headed, as well as key partnerships and use cases for the platform today and future capabilities being developed. You will also understand the value and simplicity of cloud extensibility, and how to engage with SAP in a simple way.


Realtime Business Intelligence with Hana


The desire to enable Business Intelligence on current data has always been present, and multiple approaches have been suggested. One thing they had in common: they failed miserably because they never met expectations.

With Hana we do have all the building blocks from a technical point of view to finally implement that vision.

 

Requirements

 

  1. Speed: A Business Intelligence solution that has response times greater than a second will not be used by customers
  2. External data: Reporting within an application is not Business Intelligence, it is plain reporting. BI means comparing data, and the more there is to compare with, the more intelligent the findings will be.
  3. Historically correct data: If past data is reported on, the result should stay the same, even if the master data changed. For example, last year's revenue per customer region should remain the same although a customer moved to a different location.
  4. Data consistency: When data is filtered or grouped by a column, this column should have well-defined values, not duplicates with different spellings. Consistency between tables is also important, e.g. a sales line item without a matching sales order row would be a bad thing.

 

The goal should obviously be all green in each of the categories: Speed, External Data, Historically Correct Data, Data Consistency.

 

 

What had been suggested in the past

 

To accomplish realtime Business Intelligence, two major approaches have been suggested: near-realtime loads and EII.

 

The idea of a near-realtime data warehouse is simple: instead of loading the Data Warehouse once every night, load it every hour. Not very "near" realtime? Then load it every 5 minutes, every minute, even every second.

This approach is feasible down to a certain frequency, but how long does a Data Warehouse delta run take? One factor is certainly the data volume. But assuming the data is loaded so frequently that most of the time there were no changes in the source system at all, this factor can be reduced to zero. Most of the time is usually spent finding out what has changed. One table has a timestamp-based delta, hence a query reading all rows with a newer timestamp is executed. For other tables a change log/transaction log is read. And the majority of tables do not have any change indicator at all, hence they are read entirely and compared with the target.

The above logic not only takes time, it costs resources as well. The source is constantly queried, "Is there a change?" "Is there a change?" "Is there a change?", for every single table.

While this approach has all the advantages of the Data Warehouse (fast query response times, no issue adding external data, no issue preserving historical data), it is simply not feasible to build, aside from exceptional cases.


 

 

Another idea that became popular in the mid-2000s was to create a virtual data warehouse: you create a simple-to-understand data model via views, but data is not loaded into that data model; instead, data is queried from the various sources on request. This is therefore called Enterprise Information Integration (EII). All the complexity of the transformations is done inside the database views instead of in the ETL tool. As the source data is queried directly, it returns current data by definition, and the entire delta logic can be spared.

This works as long as the queries against the source systems are highly independent, e.g. System 1: select quarter, sum(revenue); System 2: select quarter, business_year, and as long as the source systems can produce the results quickly enough.

For typical Business Intelligence queries, neither condition is usually fulfilled.

Also, you often have to cut down on the amount of transformation being done, else query speed would suffer even more. A common example would be standardizing search terms or finding duplicates in the master data. These things are either done during data entry - slowing down the person entering the data - or not done at all, with a negative impact on the decisions being made due to wrong assumptions.

Hence, although the idea as such has its merits, it died quickly due to the bad query performance.


 

 

 

The situation with Hana

 

Data Federation - EII

From a technology point of view Hana supports EII; there it is called Smart Data Access (Data Federation). The pros and cons of EII remain the same, however. When reading from a Hana virtual table, the required data is requested from the remote database, hence the overall query performance depends on the amount of data to be transferred, how long the remote database needs to produce the data, and the time to create the final query result in Hana.

And as only data that is currently available can be queried, and changes in an ERP system are usually just that, changes, all too often there is no history available.


 

Sidecar - S/4Hana

As a temporary workaround, until the ERP system itself runs on Hana and therefore benefits from Hana query performance, the side-by-side scenario is used. The idea is to copy the source database to Hana and keep it updated in realtime; all the queries that would take too long on the other database are executed on that Hana box instead. And once the entire ERP system runs on Hana, those queries can be kept unchanged but now run against the ERP system tables.

So basically this is reporting on the ERP tables directly. Due to the raw computing power of Hana the speed is much better and this becomes feasible again, but it is not as fast as a data model optimized for queries. I have listed the reasons for this in this blog post: Comparing the Data Warehouse approach with CalcViews - Overview

Another issue is again the history of changes. If a sales order entered last month gets updated and the amount reduced from 400 USD to 300 USD, the sum of revenue for last month will be different from what it was yesterday. In BW you would see the old amount of 400 USD in last month and another row with the amount -100 USD for today; hence the data is historically correct.


 

Realtime Data Warehouse

One feature Hana gained with the Smart Data Integration option is the ability to combine realtime feeds with transformations. Previously this was not possible with any other tool because of the complexity: realtime had been used as a synonym for replication, meaning the source data is copied 1:1 into the target, just like in the sidecar approach above, which carries over the downsides of EII. But with Hana a realtime subscription can push the data into a task instead of a table; inside the task the data is transformed and loaded into the query-optimized Data Warehouse data model.

Therefore the advantages of realtime and Data Warehouse are combined without introducing more complexity.

  1. The query speed is based on Hana and all complex transformations are done whenever the data is changed, not every single time somebody queries the data.
  2. External data is no problem, new sources can be added and harmonized with the other data easily.
  3. Historically correct data is possible as well: either a change triggers an update in Hana, or the change information is added as a new row. In other words, a task might either load the target table directly, or a History Preserving transform is used prior to the target table.
  4. Data consistency is no problem either. A realtime push of the source data preserves the source transaction, so if a sales line item gets added and hence the sales order's total amount is updated, both changes are applied in one transaction in the source and in Hana - the Smart Data Integration feature takes care of that. Also, all transforms to standardize the data are available. Their execution takes a while, but that does not matter as they process only the changed data, not all of it, and only once, not every time somebody queries the data.

 

S/4Hana with external data

Using the above technologies, Federation and realtime transformations, external data can be added to the S/4Hana database in a separate schema. This allows picking the proper technology for each case; e.g. it was said that Federation works only for cases where the amount of data returned is small. Very often the remote dataset is tiny anyhow, hence Federation is perfect. And if it is not, the data can be brought into Hana in realtime, either by simply copying the data (and hence having to do all the harmonization with the ERP data at query time) or, even better, by pushing the realtime changes into a task object which does all the harmonization already. The resulting view is then as simple as a union-all of two identical table structures, both already in Hana.

While this approach allows full flexibility for the external data, the local ERP data has the same issues as before: missing history, data consistency concerns, and suboptimal speed due to the number of transformations done in the view.

Theoretically, realtime replication from the ERP system into another schema of the very same Hana database could be enabled to preserve the history, but that will not be liked a lot.


Configure HTTPS for HANA XS on SP9


Hello All,

I recently had to configure my server to use HTTPS with a signed certificate (signed by a certificate authority, not only self-signed). It took me quite a while because none of the tutorials and manuals seemed to work for me on SP9. After lots of research I finally made it work. I would like to share my findings with you so that you can save days of research. Please note that I'm working on the SAP internal cloud platform, therefore the steps below might need adjusting depending on your server location and configuration.

 

I found this post very helpful; it explains many things in detail and is a good place to start:

http://scn.sap.com/community/developer-center/hana/blog/2013/11/02/outbound-https-with-hana-xs-part-1--set-up-your-hana-box-to-use-ssltls

 

 

The difference in SP9 is that it is configured by default to use HTTPS with a self-signed certificate, therefore you and other users will get red warning messages everywhere warning that the connection is not safe. In SP9 you do not need to import sapgenpse or libsapcrypto.so because they are already there. You do not need to configure the web dispatcher to use SSL and those libraries because that is already done.

 

My system is internal SAP server hosted on SAP cloud platform. If you are using different platform/server the certificate request might be different for you.

In my commands below I'm using placeholders. Please replace them with the data of your server:

[host_name] – in my case: mo123456

[host_url] – in my case: mo123456.mo.sap.corp

[instance_number] – in my case 00

[instance]- in my case MV1

 

First, upload SAPNetCA_G2.cer to /usr/sap/[instance]/HDB[instance_number]/[host_name]/sec

I use the WinSCP tool for this.

If you are working on SAP internal system you can find certificates here:

https://security.wdf.sap.corp/SAPNetCA_G2/

If you use this link to sign certificates, please make sure that you set the response encoding to PKCS#7, because X.509 did not work for me.

 

Log on to your system via PuTTY. Log on as the admin user [instance]adm, for example mv1adm.

Define 2 variables to shorten up the script later:

export SECUDIR=/usr/sap/[instance]/HDB[instance_number]/[host_name]/sec

This is the folder with signed certificates and where the requests file will be placed.

 

export TEMPEXELIB=/usr/sap/[instance]/exe/linuxx86_64/HDB_1.00.090.00.1416514886_1804508

This is the location of sapgenpse. As of SP9 you don't need to copy those files manually like in the previous tutorial; you can run the program from this directory. Please note that the location might be slightly different in your case depending on the HANA version. Please check the folder /usr/sap/[instance]/exe/linuxx86_64/ for subfolders; in my case it is HDB_1.00.090.00.1416514886_1804508. Please check that sapgenpse is in that folder.

 

SP9 comes with self-signed certificates (at least mine did), therefore you need to delete them before you import new certificates signed by a certificate authority. Please delete the following files from the security folder /usr/sap/[instance]/HDB[instance_number]/[host_name]/sec: SAPSSLS.pse, sapsrv.pse, sapcli.pse.

 

Run sapgenpse to generate request:

$TEMPEXELIB/sapgenpse get_pse -p $SECUDIR/SAPSSLS.pse -x '' -r $SECUDIR/SAPSSLS.req "CN=[host_url], OU=00, O=SAP, C=DE"

 

It’s important that the request and PSE files are named SAPSSLS; in other tutorials I found a different name and that did not work for me. The web dispatcher is already configured to look for the certificate with the SAPSSLS name, therefore it's easier just to replace those files.

 

View the request:

cat $SECUDIR/SAPSSLS.req

Copy the text, sign it at your certificate authority, and copy the response text.

Create new file for the response:

vi $SECUDIR/SAPSSLS.cert

Press “i” to start text editing.

Paste the response into the command line (in PuTTY it's just a right mouse click).

Press escape key and type:

:wq

Press enter/return key.

Alternatively, you can copy the response text into a text file on your local PC and upload it to the server as $SECUDIR/SAPSSLS.cert. However, I read in a couple of other posts that there might be a problem with the way Windows editors encode the newline character, therefore it's recommended to create the file under Linux.

Import the certificate:

$TEMPEXELIB/sapgenpse import_own_cert -c $SECUDIR/SAPSSLS.cert -p $SECUDIR/SAPSSLS.pse -x '' -r $SECUDIR/SAPNetCA_G2.cer

Check the message to see whether the operation was successful.

 

Create credentials for the file:

$TEMPEXELIB/sapgenpse seclogin -p $SECUDIR/SAPSSLS.pse -x '' -O [instance]adm

Make sure that only admin has access to this file:

chmod 600 $SECUDIR/cred_v2

 

Follow similar steps for sapsrv.

 

$TEMPEXELIB/sapgenpse get_pse -p $SECUDIR/sapsrv.pse -x '' -r $SECUDIR/sapsrv.req "CN=[host_url], OU=00, O=SAP, C=DE"

cat $SECUDIR/sapsrv.req

Copy the text, sign it at your certificate authority, and copy the response text.

vi $SECUDIR/sapsrv.cert

Press “i”.

Paste response text.

Press esc, type  :wq

 

$TEMPEXELIB/sapgenpse import_own_cert -c $SECUDIR/sapsrv.cert -p $SECUDIR/sapsrv.pse -r $SECUDIR/SAPNetCA_G2.cer

 

$TEMPEXELIB/sapgenpse seclogin -p $SECUDIR/sapsrv.pse -x '' -O [instance]adm

 

Create request for sapcli

$TEMPEXELIB/sapgenpse gen_pse -p $SECUDIR/sapcli.pse -x '' "CN=[host_url], OU=00, O=SAP, C=DE"

In the previous post I did not see that this request was signed therefore I just left it like this.

 

 

There is no need for additional web dispatcher configuration.

Afterwards it’s important to restart web dispatcher; I personally prefer to restart the whole server.

You can check the link https://[host_url]:43[instance_number]/sap/hana/xs/admin/

If the certificate was imported successfully you should not see any red warning messages.
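
If you prefer a scripted check instead of a browser, a quick sanity check is possible with Node.js. This is just a sketch: host and port use the placeholders from above, and certificate verification is disabled so the served certificate can be inspected either way.

// Optional sanity check of the served certificate (Node.js assumed to be installed on the client).
var https = require('https');
https.get({
    host: '[host_url]',          // e.g. mo123456.mo.sap.corp
    port: 4300,                  // 43 + [instance_number], e.g. 4300 for instance 00
    path: '/sap/hana/xs/admin/',
    rejectUnauthorized: false    // inspect whatever certificate is served
}, function (res) {
    var cert = res.socket.getPeerCertificate();
    console.log('Subject:', cert.subject);
    console.log('Issuer :', cert.issuer);
}).on('error', console.error);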

 

Best regards and good luck,

Marcin

BAPI's on HANA


Hi

 

We use the following BAPIs today on ECC 6. Would any of these be affected when our clients migrate to HANA?

 

When posting Journal Entries:

1. BAPI_ACC_DOCUMENT_CHECK - Description: Simulate journal entry posting prior to actual posting
2. BAPI_ACC_DOCUMENT_POST - Description: Post the journal entry (only after a successful CHECK)
3. BAPI_USER_EXISTENCE_CHECK - Description: If desired, verify that the username is a valid SAP user
4. BAPI_ACC_DOCUMENT_REV_POST - Description: Performs reversing entries
5. RFC_READ_TABLE - Description: Pull the document number from SAP into the journal entry after a successful POST

 

When retrieving General Ledger month-end balances:

1. BAPI_COMPANYCODE_GETLIST
2. API_COMPANYCODE_GET_PERIOD
3. BAPI_COMPANYCODE_GETDETAIL
4. BAPI_GL_ACC_GETLIST
5. BAPI_GL_ACC_GETDETAIL
6. BAPI_GL_GETGLACCPERIODBALANCES
7. BAPI_COMPANYCODE_EXISTENCECHK

Internet of Things Foosball - Part 1


With a long history of playing foosball inside the walls of the company I work for, we needed something that could take our game to a new level. Over the years there has been continuous disagreement among the players about who is the best and who has the highest winning rate. We wanted this to be sorted out.

 

Out of these disagreements, ideas kept evolving about what could be done to sort things out and how we could achieve it. Every day and every game played, the ideas kept stacking up, whether they were simple or crazy.

The company had its own HANA box for in-house development, but we were lacking ideas for a use case/business case to experiment with the power of SAP HANA.

 

So when we visited SAP TechEd in Berlin (2014) we saw the light. There was this "new" concept for creating big data. We saw it in the keynote, and there were some sessions introducing the Internet of Things. After visiting the Hackers Lounge we were sure that these two, foosball and HANA, could bring out the best in each other.

 

Now we had the vision, but nothing happened.

 

We were lacking time. The developers' workload was already high with other projects for our customers, and we knew this could take days to implement. We could not remove developers from billed projects, as that would lower our income. So we had a new headache.

 

It was then very convenient when we got an email from Reykjavik University. They were asking companies in Iceland to submit proposals for final projects for undergraduates in the BSc in Computer Science. We decided to submit our project: Foosball IoT.

 

We decided to present this idea with a simple approach. The idea was this:

  • Capture the "game" with sensors using an ARM-based computer (Raspberry Pi/Arduino).
  • Everything sensed by the sensors should be pushed into HANA, because we want as much data as possible.
  • Use the HANA XS platform for the application API (see the sketch after this list).
  • Use SAPUI5 for the UI.
  • The UI should allow players to create a game, follow the score, and browse through a variety of statistics.
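
Purely as an illustration of what the XS side could look like, here is a hedged sketch of a goal-recording endpoint; the schema, table, file name and URL parameters are invented for this sketch and are not the project's actual design:

// recordGoal.xsjs (hypothetical) - called by the sensor board, e.g. .../recordGoal.xsjs?game=42&team=BLACK
var gameId = $.request.parameters.get("game");
var team   = $.request.parameters.get("team");
var conn   = $.hdb.getConnection();
conn.executeUpdate(
    'INSERT INTO "FOOSBALL"."GOAL_EVENTS" ("GAME_ID", "TEAM", "SCORED_AT") VALUES (?, ?, CURRENT_TIMESTAMP)',
    gameId, team);
conn.commit();
conn.close();
$.response.status = $.net.http.OK;
$.response.setBody("goal recorded");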

 

Next steps: the implementation and final result, in part 2.

Testing UI5 Apps


Hi All,

 

I've been developing some apps in SAP HANA for desktop and mobile viewports. Here's a tip for testing those apps without using the mobile phone simulator to check the rendering for mobile screens:

 

In the <head> tag of the html file there is a block of code to initialise sap.ui libraries, themes and others. It looks like this:

 

<script id='sap-ui-bootstrap' type='text/javascript'        src='https://sapui5.netweaver.ondemand.com/resources/sap-ui-core.js'          data-sap-ui-theme='sap_bluecrystal'        data-sap-ui-libs='sap.m'></script>

 

In order to enable testing for mobile screens, the following attribute should be added inside the opening <script> tag, before its closing angle bracket:

 

data-sap-ui-xx-fakeOS='ios'

 

This line allows simulation of an iPhone/iPad viewport.
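
Putting the two snippets above together, the bootstrap tag would then look like this (same URL, theme and libraries as in the original snippet, with only the extra attribute added):

<script id='sap-ui-bootstrap' type='text/javascript'
        src='https://sapui5.netweaver.ondemand.com/resources/sap-ui-core.js'
        data-sap-ui-theme='sap_bluecrystal'
        data-sap-ui-libs='sap.m'
        data-sap-ui-xx-fakeOS='ios'></script>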

 

Hope this helps all SAP HANA starters.

 

Regards,

 

Alejandro Fonseca

Twitter: @MarioAFC


Part 2 - Creating a client VM on Azure with Visual Studio 2013 & SQL Server Data Tools for BI


Ok, so the title doesn't include SAP or HANA, but I'm getting there. In this video blog, I will walk you through the steps to create an Azure virtual machine with the free Visual Studio 2013 Community edition pre-installed. I then go through the process of downloading and installing the SQL Server Data Tools for BI. The video is almost 17 minutes in length, but the overall process took about 1 hour and 10 minutes. To go back to the index for the blog series, check out the Part 1 – Using #SQLServer 2014 Integration Services (#SSIS) with #SAPHANA.

 

NOTE: SSIS is not yet certified by the SAP ICC group. However, the content of this blog series is based on the certification criteria.

 

On with the show!

 

Check me out at the HDE blog area at: The SAP HDE blog

Follow me on twitter at:  @billramo

SAP HANA Developer Center Updates: New Landing Page, New Tutorials


The SAP HANA Developer Center has a new landing page full of new content for you. Check it out at developers.sap.com/hana.


SAP HANA Landing Page2.png


The new homepage offers you a quick and easy way to access the latest developer info on SAP HANA, sign up for your free developer edition and get started building your first app.


You’ll find information about how SAP HANA works including technical aspects, core features and developer tools. 

You’ll also get an overview of the different options available for you to get started: you can sign up for your free developer edition via SAP HANA Cloud Platform (you get a free instance) or you can sign up for your free developer edition via AWS or Microsoft Azure.


In addition, you’ll find step-by-step tutorials to help you build your first app. The tutorials cover everything from creating your developer environment to building your first app, accessing data, and more.


The page also includes links to resources and tools, the community, other related documentation, education and training, certification, etc.


So, take a look and bookmark the page: developers.sap.com/hana.

XS Project Not showing After SAP HANA tools Installation


XS Project Not appearing in Eclipse


I encountered this problem and tried a lot of things, but one basic step solved it. Before getting to that, you need to follow the steps below properly.


To install SAP HANA Tools, proceed as follows:


  1. Get an installation of Eclipse Luna (recommended) or Eclipse Kepler.
  2. In Eclipse, choose in the menu bar Help > Install New Software...
  3. For Eclipse Luna (4.4), add the URL https://tools.hana.ondemand.com/luna.
    For Eclipse Kepler (4.3), add the URL https://tools.hana.ondemand.com/kepler.
  4. Press Enter to display the available features.
  5. Select the desired features and choose next.
  6. On the next wizard page, you get an overview of the features to be installed. Choose Next.
  7. Confirm the license agreements and choose Finish to start the installation.


After installation, open Eclipse and follow the steps below to see whether XS Project appears:


  1. In Eclipse, go to Window--Open Perspective--Other (see Fig. 1).

1.jpg

 

Fig 1

  2. Select the SAP HANA Development perspective.
  3. Now there will be 3 tabs as shown below.

 

2.jpg

 

  4. Now go to the Project Explorer view and in the top left corner click File--New--Project as shown below.

 

3.jpg

 

  5. Now in the New Project wizard, select SAP HANA--Application Development--XS Project as shown below.

 

4.jpg

 

Now, if XS Project does not appear, follow the steps below:


  1. Exit Eclipse.
  2. Find the path where your Eclipse is installed, for example D:\SAP HANA\eclipse.
  3. Now go to the command prompt.
  4. Switch to the Eclipse folder and type eclipse -clean.
  5. Eclipse will open automatically, and you should now see XS Project under Application Development.

$.hdb vs $.db Interface - Performance/Problems


Hi folks,

 

I want to share my experience concerning the two xsjs-engine database connection implementations:

  • $.hdb (since SPS 9)
  • $.db

 

The Story:

 

Some days ago I used the new HDB interface implementation of the xsjs engine to process and convert a result set in an xsjs service. The problematic part of this service is the size of the result set. I am not very happy with the purpose of the service, but we somehow need this kind of service.

 

The result set contains about 200.000 rows.

 

After setting everything up and running multiple tests with small result sets of < 10.000 rows, everything worked fine with the new $.hdb implementation. But requesting the first real-sized set caused heavy trouble on the machine (all xsjs connections) and the request never terminated.

 

As a result I found myself implementing a very basic xsjs service to get all files in the HANA repository (because by default there are more than 40.000 elements in it). I duplicated the service to get one $.db and one $.hdb implementation with almost the same logic.

 

The Test:

 

HDB - Implementation

 

// >= SPS 9 - HDB connection
var conn = $.hdb.getConnection();
// values to select
var keys = [
    "PACKAGE_ID", "OBJECT_NAME", "OBJECT_SUFFIX", "VERSION_ID", "ACTIVATED_AT", "ACTIVATED_BY",
    "EDIT", "FORMAT_VERSION", "DELIVERY_UNIT", "DU_VERSION", "DU_VENDOR"
];
// query
var stmt = conn.executeQuery(' SELECT ' + keys.join(", ") + ' FROM "_SYS_REPO"."ACTIVE_OBJECT"');
var result = stmt.getIterator();
// result
var aList = [];
while (result.next()) {
    var row = result.value();
    aList.push({
        "package": row.PACKAGE_ID, "name": row.OBJECT_NAME, "suffix": row.OBJECT_SUFFIX,
        "version": row.VERSION_ID, "activated": row.ACTIVATED_AT, "activatedBy": row.ACTIVATED_BY,
        "edit": row.EDIT, "fversion": row.FORMAT_VERSION, "du": row.DELIVERY_UNIT,
        "duVersion": row.DU_VERSION, "duVendor": row.DU_VENDOR
    });
}
conn.close();
$.response.status = $.net.http.OK;
$.response.contentType = "application/json";
$.response.headers.set("Content-Disposition", "attachment; filename=HDBbench.json");
$.response.setBody(JSON.stringify(aList));

DB - Implementation

 

// < SPS 9 - DB connection
var conn = $.db.getConnection();
// values to select
var keys = [
    "PACKAGE_ID", "OBJECT_NAME", "OBJECT_SUFFIX", "VERSION_ID", "ACTIVATED_AT", "ACTIVATED_BY",
    "EDIT", "FORMAT_VERSION", "DELIVERY_UNIT", "DU_VERSION", "DU_VENDOR"
];
// query
var stmt = conn.prepareStatement(' SELECT ' + keys.join(", ") + ' FROM "_SYS_REPO"."ACTIVE_OBJECT"');
var result = stmt.executeQuery();
// vars for iteration
var aList = [];
var i = 1;
while (result.next()) {
    i = 1;
    aList.push({
        "package": result.getNString(i++), "name": result.getNString(i++), "suffix": result.getNString(i++),
        "version": result.getInteger(i++), "activated": result.getSeconddate(i++), "activatedBy": result.getNString(i++),
        "edit": result.getInteger(i++), "fversion": result.getNString(i++), "du": result.getNString(i++),
        "duVersion": result.getNString(i++), "duVendor": result.getNString(i++)
    });
}
result.close();
stmt.close();
conn.close();
$.response.status = $.net.http.OK;
$.response.contentType = "application/json";
$.response.headers.set("Content-Disposition", "attachment; filename=DBbench.json");
$.response.setBody(JSON.stringify(aList));

 

The Result:

 

  1. Requesting the DB implementation: the file download for all 43.000 rows starts within 1500 ms.
  2. Requesting the HDB implementation: requesting all rows leads to an error, so I trimmed the result set by adding a TOP to the select statement.
    • TOP  1.000 : done in 168ms
    • TOP  2.000 : done in 144ms
    • TOP  5.000 : done in 297ms
    • TOP 10.000 : done in 664ms
    • TOP 15.000 : done in 1350ms
    • TOP 20.000 : done in 1770ms
    • TOP 30.000 : done in 3000ms
    • TOP 40.000 : The request is pending for minutes (~5 min), then responds with 503. The session of the logged-in user expires.

 

In summary: the new $.hdb implementation performs worse than the old one, and there is a threshold in $.hdb that leads to significant problems on the system.

 

I appreciate every comment on that topic.

 

Best,

Mathias

"No buffer space available" when running ping


Hi there,

 

I recently installed the latest SLES version provided by SAP for Business One (at the moment SLES 11 PL 3), which works just fine. After some weeks of continuous operation, I discovered some packet loss when running ping, with the result message "No buffer space available".

 

It turned out that the allocated buffer memory was reaching its maximum. The solution was to increase the value in the file

 

/proc/sys/net/core/wmem_max

 

Then restart the network interface for the change to take effect.

 

Hope this was useful.

 

Regards,

 

Alejandro Fonseca

Twitter: @MarioAFC
