Persistent cache plays a vital role in BODS when handling large datasets. It helps us look up or join large volumes of data efficiently.

A persistent cache can be created in Data Services Designer as an on-disk database, which gives it a performance edge over normal cache in lookups, since the cached data does not have to be reloaded from the source for every job.

Create a separate dataflow to load data from the source table into the persistent cache target template table. Whenever you load or reload data into a persistent cache template table, the persistent cache database is truncated and recreated in the configured cache directory. Once the table is created, you can use it in your actual job.

Persistent cache datastore – Process flow


Configuring persistent cache datastore

As mentioned earlier, we first need to create a datastore for the persistent cache.

  • Go to the Local Object Library -> Datastore tab -> right-click -> select New.
  • Name your datastore -> choose the datastore type Database -> choose the database type Persistent cache.
  • Set the cache directory to a valid path on the Data Services job server machine; this is where your persistent tables will store their data.


Creating the persistent cache table and loading data from the source table

    • Create a dataflow inside the job. Inside the dataflow, drag in the source table and load its data into the persistent cache template table.


    • In this dataflow, we load data from the TCUST source table into the persistent cache table TCUST.
    • When you create the template table, a pop-up opens. Name your persistent cache table and, for the datastore, choose the persistent cache datastore created earlier.


    • Once you create and connect the table, you must define its primary keys.
    • Double-click the persistent cache table; you will see three tabs: Target, Options and Keys.
      • Target -> contains details about the table.
      • Options -> contains the column-comparison setting and the Include duplicate keys flag. Compare by name matches columns by their names, while compare by position matches them by their order in the table. It is recommended to leave Include duplicate keys checked unless your data is guaranteed unique.
      • Keys -> it is very important to choose a primary key for each persistent table you create. Use the primary key of the source table as the key of the persistent table.


  • In the persistent table I have created, the file name and the file ID are the columns selected as keys.


The persistent cache table has been created successfully. We can now use it in lookups and joins; it appears as a regular table in Designer for further use.
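As a rough sketch of how the table might be used afterwards, a lookup against a persistent cache table can be written in a Query transform mapping with the standard lookup_ext function. The datastore name PC_DS, the cache table TCUST_PC and the columns CUST_ID/CUST_NAME below are assumptions for illustration, not names from the steps above:

```
lookup_ext(
    [PC_DS.TCUST_PC, 'PRE_LOAD_CACHE', 'MAX'],   # lookup table, cache spec, return policy
    [CUST_NAME],                                  # column(s) to return
    ['NONE'],                                     # default value(s) if no match
    [CUST_ID, '=', TCUST.CUST_ID])                # lookup condition
```

With PRE_LOAD_CACHE, the lookup table is read once before the rows are processed; because it is a persistent cache table, that read comes from the on-disk cache rather than the original source database.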
