Posts

Showing posts from 2014

ODI Series - Digging into ODI repository

I believe it is a familiar situation for any ODI developer: having to go through a long list of ODI objects to identify their usage in interfaces or packages, the LKMs/IKMs used within interfaces, variables and their usage, getting rid of unwanted objects, and so on. I had been in such a situation and wanted to keep track of such objects. I was able to beg, borrow, steal or develop SQL queries against the ODI work repository, and the following list of queries can be used to put together a list of objects and their associations:
Query 1: Variables used in mappings
Query 2: Table names used in Source schema & Datastore tabs
Query 3: Interface information
Query 4: Interfaces not used in packages
--- Updated May 12th, 2014 ---
Query 5: Procedures used in packages
Query 6: Procedures not used in packages
Query 7: Scheduling information
This scheduling query fetches all scheduling information except repetition details ...
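The general shape of such repository queries can be sketched with a mock of the work repository. The table and column names below (SNP_VAR.VAR_NAME, SNP_TXT.TXT) are assumptions drawn from a typical ODI 11g work repository, used only to illustrate the "variables used in mappings" idea; verify names against your repository version before running anything like this for real.

```python
import sqlite3

# Mock two work-repository tables: SNP_VAR (declared variables) and
# SNP_TXT (stored expression text). Names are assumptions for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE SNP_VAR (VAR_NAME TEXT)")
cur.execute("CREATE TABLE SNP_TXT (I_TXT INTEGER, TXT TEXT)")
cur.executemany("INSERT INTO SNP_VAR VALUES (?)",
                [("LOAD_DATE",), ("BATCH_ID",)])
cur.executemany("INSERT INTO SNP_TXT VALUES (?, ?)",
                [(1, "WHERE SRC.DT = #LOAD_DATE"),
                 (2, "SELECT 1 FROM DUAL")])

# Variables referenced as #VAR_NAME inside stored expression text
cur.execute("""
    SELECT DISTINCT v.VAR_NAME
    FROM SNP_VAR v
    JOIN SNP_TXT t ON t.TXT LIKE '%#' || v.VAR_NAME || '%'
""")
used_vars = [row[0] for row in cur.fetchall()]
print(used_vars)  # only variables actually referenced somewhere
```

The same pattern (declared objects joined against the text that might reference them) also drives the "not used in packages" style of query, by flipping the join into a NOT EXISTS.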

ODI Series - Pulling file names and timestamp from directory

Recently, I was asked a question by my colleague Linga (he always comes up with something or the other): can we pull file names from a directory and process the latest one, while keeping a history of processed files? I recalled that another colleague, Parasuram PJP, had done something similar. When I tried out his code, it did not work for me for some reason. Anyway, he was kind enough to share some pearls of wisdom, and I sat down searching for details on using Jython to fetch file names and additional information. To find the latest file, we need to capture the timestamp of its last modification. Since we need to keep a log of all files that get processed, we will maintain an RDBMS table (SQL Server in my case), FILENAMES, with the following structure. Along with this comes a twist: the initial part of the file names will be the same, with different suffixes. Just to simulate the scenario, I could think of some files which were ...
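The core idea can be sketched in plain Python (the post itself uses Jython inside ODI): list the directory, skip anything already logged, and pick the file with the latest modification time. An in-memory SQLite table stands in for the SQL Server FILENAMES table here, and the column names (FILE_NAME, PROCESSED_ON) are assumptions, since the excerpt does not show the real structure.

```python
import glob
import os
import sqlite3
import tempfile
import time

def latest_unprocessed(directory, conn):
    """Return the newest file in `directory` not yet logged in FILENAMES."""
    processed = {r[0] for r in conn.execute("SELECT FILE_NAME FROM FILENAMES")}
    candidates = [f for f in glob.glob(os.path.join(directory, "*"))
                  if os.path.basename(f) not in processed]
    if not candidates:
        return None
    return max(candidates, key=os.path.getmtime)  # latest modification time wins

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FILENAMES (FILE_NAME TEXT, PROCESSED_ON TEXT)")

with tempfile.TemporaryDirectory() as d:
    base = time.time()
    for i, name in enumerate(["data_001.txt", "data_002.txt"]):
        path = os.path.join(d, name)
        open(path, "w").close()
        os.utime(path, (base + i, base + i))  # force distinct, ordered mtimes
    latest = os.path.basename(latest_unprocessed(d, conn))
    print(latest)  # → data_002.txt
    # Log it so the next run skips this file
    conn.execute("INSERT INTO FILENAMES VALUES (?, date('now'))", (latest,))
```

Run again, the logged file is excluded and the next-newest (or None) comes back, which is exactly the "keep a history of processed files" behavior the question asked for.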

Automation of SmartView reports

The most commonly used reporting tool in a Hyperion setup is Financial Reporting, aka HFR. HFR brings the ability to design reports against various data sources: Essbase, Planning, SAP BW, MSOLAP and HFM. It has been quite popular for bursting static PDF reports using the inbuilt scheduler or the command-line utility for Essbase, Planning and HFM. Along with HFR, SmartView is another tool used for reporting. SmartView provides options to download HFR reports into spreadsheets and carry out analysis, as most analysts love to do. One common requirement which HFR does not offer as an out-of-the-box solution is the ability to burst reports as spreadsheets. This post delves into an alternate approach to cater to such requirements. SmartView has lately been enhanced with a lot of new capabilities (retaining Excel-based formatting, butterfly reports, multi-grid reports, etc.) which, combined with its macro functions, help analysts design, automate and burst spreadsheet-based reports ...

Migrating Planning 11.1.1.3 artifacts to 11.1.2.3 without upgrading to 11.1.1.4

Disclaimer: The steps mentioned in this post are a workaround and in no way intend to replace the standard migration steps published in the Oracle documentation. Please take the necessary backups before following these steps. They may help to import artifacts when you do not have a configured 11.1.1 environment from which to follow the upgrade path. I wanted to set up a Hyperion Planning 11.1.2.3 environment to try out the new features in this release, and thought of creating an application from the extract files of an app I had used long back for training and proof of concept. I created a new Planning app shell similar to the app I was about to import. Since 11.1.2.3 LCM offers the ability to upload an LCM extract from the Shared Services console into the File System directory (import_export by default), I followed these steps to upload the LCM extract. On trying to open the app under the File System, it did not display any of the artifacts in the detail pane. I...

ODI Series – Developing Efficient Integration

It has been a long time since I posted anything, so I thought of writing about something I have been working with lately: ODI. Recently, I was training a few newbies in ODI when I was asked a question: "How can we be sure that the interface we have developed is an efficient one?" I guess there is no standard answer to this question. It all depends on the technology we are working with and the transformation we intend to achieve. We need to remember that ODI utilizes the underlying technology to perform its transformations, and as developers we need to make the best use of it. In the following post, I will talk about the process of analyzing our data sources and technology to improve the integration process. Let's consider the following scenario and data sources: a flat file called Products.txt which stores information about product codes, and a staging DB which stores a Translation table. This table has the alias names for all the product codes which should be popu...
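The efficiency point in the scenario above can be sketched outside ODI: rather than looking up an alias for each file row one at a time, load the flat file into a staging table and let the database resolve every code in one set-based join, which is how ODI leverages the underlying technology. SQLite stands in for the staging DB, and the column names (PRODUCT_CODE, ALIAS_NAME) are assumptions, as the excerpt does not show the real structure.

```python
import sqlite3

# Staging DB with the Translation table from the scenario; names assumed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TRANSLATION (PRODUCT_CODE TEXT, ALIAS_NAME TEXT)")
conn.execute("CREATE TABLE STG_PRODUCTS (PRODUCT_CODE TEXT)")
conn.executemany("INSERT INTO TRANSLATION VALUES (?, ?)",
                 [("P100", "Widget"), ("P200", "Gadget")])

# Simulate loading Products.txt into the staging table first
file_rows = ["P100", "P200", "P100"]
conn.executemany("INSERT INTO STG_PRODUCTS VALUES (?)",
                 [(c,) for c in file_rows])

# One set-based join resolves all aliases at once, instead of per-row lookups
rows = conn.execute("""
    SELECT s.PRODUCT_CODE, t.ALIAS_NAME
    FROM STG_PRODUCTS s
    JOIN TRANSLATION t ON t.PRODUCT_CODE = s.PRODUCT_CODE
""").fetchall()
print(rows)
```

The design choice mirrors the post's argument: the transformation belongs where the data and the engine are, so the flat file is staged into the DB and the join is pushed down rather than iterated in the integration tool.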