
Sunday, February 23, 2014

Migrating Planning 11.1.1.3 artifacts to 11.1.2.3 without upgrading to 11.1.1.4

Disclaimer: The steps mentioned in this post are a workaround and in no way intend to replace the standard migration steps published in the Oracle documentation. Please take the necessary backups before following these steps. They may help import artifacts when a configured 11.1.1 environment is not available to follow the standard upgrade path.

I wanted to set up a Hyperion Planning 11.1.2.3 environment to try out the new features in this release, and thought of creating an application from the extract files of an app I had used long back for training and proof-of-concept work. I created a new Planning app shell similar to the app I was about to import.

Since LCM in 11.1.2.3 offers the ability to upload an LCM extract from the Shared Services console into the File System directory (import_export by default), I used that option to upload the extract.



On trying to open the app under the File System node, none of the artifacts were displayed in the detail pane. Initially I thought it could be a system error and restarted the services once, but that didn't help. After digging into the documentation and online forums, and with help from Nitesh, I realized that starting with 11.1.2, LCM has undergone changes in how artifacts are imported and exported. Artifacts from 11.1.1.3 need to be upgraded to 11.1.1.4 first before migrating to 11.1.2, and that is what was happening with this extract too. The SourceInfo.xml in the extract provides details about the LCM version used to create it.
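To confirm which release produced an extract, you can simply look at SourceInfo.xml inside the zip. Here is a minimal Python sketch (not an official utility) that locates and prints that file; the extract file name is a placeholder and the internal path may differ in your environment.

    # Minimal sketch: locate SourceInfo.xml inside an LCM extract zip and print it
    # so the LCM version that created the extract can be inspected.
    # "RETL_POC.zip" is a placeholder file name.
    import zipfile

    EXTRACT = "RETL_POC.zip"   # path to the 11.1.1.3 LCM extract

    with zipfile.ZipFile(EXTRACT) as z:
        for name in z.namelist():
            if name.lower().endswith("sourceinfo.xml"):
                print("---", name, "---")
                print(z.read(name).decode("utf-8", errors="replace"))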




A few of the changes that can be easily identified:
  • The console no longer has the option to 'Define Migration' while importing or exporting artifacts
  • The LCM export in 11.1.2 stores the import and export definition XMLs.



The app extract I used was from 11.1.1.3, and I did not have any configured 11.1.1.3 environment to upgrade it to 11.1.1.4 and re-export. The alternative is to modify the extract itself to make it ready for 11.1.2.3.

Let's export the newly created app shell and compare the two extracts to find out how they differ. Let's name the LCM extract of the app shell RETL_POC_APP.


  • The 11.1.2.3 extract has an Import.xml along with info and resource directories, none of which existed in the old extract. This is because the old release had us create the migration definition through the console's 'Define Migration' option instead.

  • The listing.xml in 11.1.1.3 has an "id" attribute on the folder tag which does not exist in the listing.xml from 11.1.2.3. This field does not seem to hold any value that would cause a problem during import, so let's ignore it for now. (A quick way to compare the contents of the two archives is sketched below.)
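For reference, here is a small Python sketch of how the two archives can be compared to spot such differences; the zip file names are placeholders for my extracts.

    # Minimal sketch: compare the file listings of the two LCM extract zips to
    # see which files and directories exist in one but not the other.
    # "RETL_POC.zip" and "RETL_POC_APP.zip" are placeholder names.
    import zipfile

    def names(path):
        with zipfile.ZipFile(path) as z:
            return set(z.namelist())

    old = names("RETL_POC.zip")      # 11.1.1.3 extract
    new = names("RETL_POC_APP.zip")  # 11.1.2.3 app shell extract

    print("Only in the 11.1.2.3 extract:", sorted(new - old))
    print("Only in the 11.1.1.3 extract:", sorted(old - new))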



There does not seem to be any major difference between the extracts. All I did was copy the Import.xml into the 11.1.1.3 extract zip without changing any other XML, zip it up again and place it back in the import_export directory. This time it worked !!!
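If you want to script that step instead of editing the zip by hand, a rough Python sketch is shown below. The archive names and the internal location of Import.xml are assumptions; depending on how the folders are named inside your extracts, you may also need to adjust the paths referenced inside Import.xml itself.

    # Minimal sketch: copy Import.xml from the 11.1.2.3 app shell extract into
    # the 11.1.1.3 extract, leaving every other XML untouched.
    import posixpath
    import zipfile

    SHELL = "RETL_POC_APP.zip"   # exported from the 11.1.2.3 app shell (placeholder)
    OLD   = "RETL_POC.zip"       # original 11.1.1.3 extract (placeholder)

    with zipfile.ZipFile(SHELL) as shell:
        # locate Import.xml wherever it sits inside the shell extract
        src = next(n for n in shell.namelist()
                   if posixpath.basename(n) == "Import.xml")
        import_xml = shell.read(src)

    # appending to the existing archive leaves the other entries as they are;
    # the target path may need changing if your folder names differ
    with zipfile.ZipFile(OLD, "a") as old:
        old.writestr(src, import_xml)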


Step 1: Select all artifacts and import
I selected all artifacts and started the import process, but it failed with errors while importing User Preferences.xml, Exchange Rates, Period, Planning Units and the Security XMLs for groups and users.



Step 2: Deselect Standard Dimensions - Period & Year
Since the standard dimensions already exist in the target application, I deselected the standard dimensions Period and Year under Plan Type/PnL/Standard Dimensions. When I ran the import again, it still failed.


Step 3: Deselect User Preferences.xml and Security XMLs
The errors indicated that the import process was unable to find the groups and users. Since these users and groups do not exist in Shared Services, the errors are valid. User Preferences.xml also holds information about users, who must exist in Shared Services. This time the import failed with an error for Planning Units.


Deselecting Planning Units.xml and importing again led to success. Voila !!!

All artifacts except security, Planning Units and User Preferences were imported successfully. If we create the users and groups in Shared Services first, Planning security would also get imported. I still need to investigate why the Planning Units import failed even though the XML did not contain any Planning Unit details.


Just to summarize,
  • Once the Import.xml (migration definition file) was provided, LCM 11.1.2.3 was able to identify the objects to import.
  • Users and groups must exist in Shared Services to get migrated.
  • The standard dimensions Period and Year may not get imported, as the number of years and the Period hierarchy depend on how they are defined in the target.
As said earlier, these steps are an alternate way to import artifacts when a configured 11.1.1 environment is not available to follow the upgrade path.

Hope this helps !!!

Tuesday, February 11, 2014

ODI Series – Developing Efficient Integration

It has been a long time since I posted anything, so I thought of writing about something I have been working with lately – ODI.
Recently, I was training a few newbies in ODI when I was asked a question – “How can we be sure that the interface we have developed is an efficient one?”
I guess there is no standard answer to this question. It all depends on the technology we are working with and the transformation we intend to achieve. We need to remember that ODI utilizes the underlying technology to perform its transformations, and as developers we need to make use of it in the best way. In this post, I will walk through the process of analyzing our data sources and technology to improve the integration process.
Let’s consider the following scenario and data sources:

A flat file called Products.txt which stores information about product codes


A staging database which stores a Translation table. This table has the alias names for all the product codes, which should be populated in the target


A target database which stores product information pulled from both the flat file and the translation table, with the following table structure


We proceed with the assumption that we know how to reverse-engineer flat files and RDBMS tables and bring them into ODI as models/datastores. Once reversed, the datastores can be dragged onto the interface as source and target and mapped as required. The interface should look something like this:


One easy way to optimize your integration is to minimize the number of steps involved in moving and transforming data between source and target. This may not matter much with small data files or record sets, but it can severely affect integration time when huge volumes of data are involved. In the following scenarios we will see how this can be handled well.

Scenario 1: Staging layer is neither source nor target
Let’s consider a scenario where you choose a staging area which is neither the source nor the target. In this interface we have a flat file and an RDBMS table (schema DBStaging) as sources and an RDBMS table (schema DBTarget) as the target, so we choose the In-Memory Engine SUNOPSIS_MEMORY_ENGINE as our staging area.



The Flow tab diagram shows the flow of data from source to staging and then to target. The execution takes 15 steps to complete the data movement.


Scenario 2: Staging layer is one of the source schemas
Since one of our sources is an RDBMS (the Translation table’s schema), we can use that same schema as our staging layer. This removes the step in the previous scenario of moving data from the RDBMS table into SrcSet0, as the table already exists in the staging schema.



The execution now takes 11 steps to complete the data movement; we have skipped the steps that Scenario 1 needed to move data from the Translation table into SrcSet0.

Scenario 3: Explore Knowledge Modules
It's always good to explore all the available Knowledge Modules and understand their features. In the above two scenarios we used IKM SQL Control Append to populate data into the target; for this scenario we will use IKM SQL to SQL Control Append.
IKM SQL Control Append can only read data from a single source, which is why we used LKM SQL to SQL at the staging layer: it performs the join between the SrcSets and loads the data into a temp table, from which IKM SQL Control Append then pulls the data. IKM SQL to SQL Control Append, on the other hand, can read data from multiple sources, so LKM SQL to SQL is no longer required, as can be observed in the flow diagram below.



The execution now takes just 7 steps to complete the data movement. By using IKM SQL to SQL Control Append we skipped the steps that load data into a temp table at the staging layer, reducing the number of steps in the overall integration.
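To make the difference concrete, here is a rough, heavily simplified sketch of the two patterns – not the code ODI actually generates – using made-up table and column names and an in-memory SQLite database in Python.

    # Simplified illustration (not ODI-generated code) of staged loading versus
    # reading multiple sources in one pass. All names here are illustrative.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE PRODUCTS   (PRODUCT_CODE TEXT);              -- rows from Products.txt
        CREATE TABLE TRANSLATION(PRODUCT_CODE TEXT, ALIAS TEXT);  -- staging translation table
        CREATE TABLE TGT_PRODUCT(PRODUCT_CODE TEXT, ALIAS TEXT);  -- target table
        INSERT INTO PRODUCTS    VALUES ('P100'), ('P200');
        INSERT INTO TRANSLATION VALUES ('P100', 'Laptop'), ('P200', 'Monitor');
    """)

    # Staged pattern (roughly what LKM SQL to SQL + IKM SQL Control Append do):
    # join the sources into a work table first, then insert from that table.
    con.executescript("""
        CREATE TABLE C_WORK AS
            SELECT p.PRODUCT_CODE, t.ALIAS
            FROM PRODUCTS p JOIN TRANSLATION t ON t.PRODUCT_CODE = p.PRODUCT_CODE;
        INSERT INTO TGT_PRODUCT SELECT * FROM C_WORK;
        DROP TABLE C_WORK;
    """)

    con.execute("DELETE FROM TGT_PRODUCT")  -- reset target before the second pattern

    # Multi-source pattern (roughly what IKM SQL to SQL Control Append allows):
    # join the sources directly while inserting into the target, skipping the work table.
    con.execute("""
        INSERT INTO TGT_PRODUCT
        SELECT p.PRODUCT_CODE, t.ALIAS
        FROM PRODUCTS p JOIN TRANSLATION t ON t.PRODUCT_CODE = p.PRODUCT_CODE
    """)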

As said earlier, all these optimization steps depend on the underlying technology used in the integration and on the transformation activity. Since the target here was an RDBMS schema, we were able to make use of IKM SQL to SQL Control Append; had the target been Essbase or Planning, we would be bound to the specific knowledge modules and would need to look for alternate ways to optimize our steps.

Hope this helps !!!