The SAP Cloud Connector is a critical piece of your SAP Cloud landscape. Among other things, it allows you to connect your on-premise tools to the SAP Cloud Platform.
One thing to remember is that you are allowed two instances of the Cloud Connector as part of your SAP Cloud infrastructure. Below, we go into the details of the landscapes this makes possible and look at how to size and set up the Cloud Connectors so that you get the best possible access to your data on the SAP Cloud Platform.
A typical landscape sets up the Cloud Connectors based on where your users are. In the example below, one Cloud Connector serves mainland USA, and the customer also has a user base in the APAC region. Setting up the second Cloud Connector for APAC ensures that APAC users do not have to connect through the US connector, thereby preserving bandwidth.
The SAP Cloud Connector comes in two flavors, Windows and Linux, and you can choose the option best suited to your landscape in terms of existing infrastructure and its capabilities.
Single Node Configuration
Here you install the SAP CC on a single dedicated machine with a static IP and configure it there. If you have users across the globe with regional VPNs, you can run one SAP CC instance for mainland US and one for APAC so that any network latency can be addressed.
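To judge whether a second regional instance is worth it, a quick latency probe from each candidate host toward the cloud endpoint helps. The sketch below measures plain TCP connect time with the Python standard library; the endpoint hostname you probe is your own subaccount's region host, which is an assumption here, not something this post prescribes.

```python
import socket
import time

def tcp_connect_latency(host, port, attempts=3, timeout=5.0):
    """Return the average TCP connect time in milliseconds to host:port."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        # create_connection opens a TCP socket; the context manager closes
        # it again, so we time only the connect handshake.
        with socket.create_connection((host, port), timeout=timeout):
            samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples)
```

Run it from each candidate Cloud Connector host against your region's SCP endpoint (for example `tcp_connect_latency("<your-region-host>", 443)`) and compare the numbers before deciding where the second instance should live.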
There is a high-availability option for the SAP CC, where you set it up as a high-availability pair with a master and a shadow instance. The obvious benefits are:
- HA ensures that there is connectivity even if one of the machines goes down
- If, for instance, someone runs a SELECT * on the HANA DB and the Cloud Connector goes down because it is flooded with data, the shadow instance ensures that other users can continue to work
- If you have configured downstream systems like R or on-premise PAL, the data requests are going to be huge, and failover ensures that connectivity is not disturbed
- When SAP updates the Cloud Connector, you can upgrade the HA pair without downtime
The SAP CC is a gateway to SCP and hence does not require much disk space; the common failure points are network bandwidth, CPU and RAM. The rule of thumb is that more CPU and RAM allow the CC to process higher volumes of data: when the SAP CC gets large data requests, it needs to buffer them in RAM and handle encryption and decryption.

Another thing to look at is installing the machine on your network backbone within your data center, rather than on a desktop or server within office premises. This ensures good network connectivity and minimizes latency between the cloud system and your backend systems.
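As a back-of-the-envelope way to reason about RAM, you can multiply your expected peak payload by the number of concurrent consumers and an overhead factor for buffering and TLS copies. The factor and base figures below are illustrative assumptions for a rough sketch, not SAP guidance.

```python
def estimated_peak_ram_mb(peak_payload_mb, concurrent_requests,
                          overhead_factor=2.5, base_os_mb=2048):
    """Rough RAM estimate (MB) for a Cloud Connector host.

    overhead_factor covers buffering plus encryption/decryption copies of
    in-flight payloads; both it and base_os_mb are illustrative
    assumptions, not SAP-published numbers.
    """
    return base_os_mb + peak_payload_mb * concurrent_requests * overhead_factor

# e.g. 200 MB peak result sets with 4 concurrent consumers
print(estimated_peak_ram_mb(200, 4))  # 2048 + 200 * 4 * 2.5 = 4048.0
```

Treat the result as a starting point for the VM experiment described below, not a final answer.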
For instance, if you are accessing your HANA cloud database from Tableau, the closer the CC is in network terms to your Tableau server, the better the performance, because network hops are minimized.
One way of determining the sizing that works for you is to use virtual machines: install them with minimal settings to begin with, then simulate large data requests from SCP into your network and monitor the performance. Since these machines are virtual, you can bump up the RAM or CPU and iterate until you arrive at a sizing that works for you.
The reason we cannot apply standard T-shirt sizing to the CC is that SCP workloads involve multiple data sets, both big and small. You do not need much horsepower for the small data sets, but you do need the additional horsepower for the occasional big one, while keeping the system available in the process.
Alternate Options – Dedicating CC Nodes for High Volume
However, if you have requirements that necessitate high data volumes, it might be worthwhile to dedicate a node to them and have those systems access that URL for high-volume requests. This protects regular users while still providing the access needed.
This option, however, comes with the additional requirement of enforcing the use of the high-volume node for high-volume requests: for example, an R server uses the high-volume node while regular users use the other one. But nothing stops users from connecting to either node, so IT oversight is required to manage which Cloud Connector users connect to.
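Since nothing in the product enforces which node a client uses, the routing policy has to live in your own tooling. A minimal sketch of such a policy follows; the hostnames and the 100 MB threshold are hypothetical values for illustration, not part of any SAP configuration.

```python
# Hypothetical hostnames; substitute your actual Cloud Connector hosts.
REGULAR_CC = "cc-regular.example.corp"
HIGH_VOLUME_CC = "cc-highvolume.example.corp"

def pick_connector(expected_mb, threshold_mb=100):
    """Route an extraction to the dedicated high-volume node when its
    expected payload crosses the threshold; the cutoff is an illustrative
    assumption to be tuned to your landscape."""
    return HIGH_VOLUME_CC if expected_mb >= threshold_mb else REGULAR_CC

print(pick_connector(500))  # bulk R job -> high-volume node
print(pick_connector(5))    # interactive user -> regular node
```

Embedding a rule like this in the connection logic of your R server or ETL jobs is what turns the "dedicated node" idea into something users cannot quietly bypass.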
Even though the two-instance limit seems draconian, it can actually be used optimally to ensure reliable and fast access to your data across the enterprise.