Profitability and Cost Management



  • Don Bean


    For larger applications, the overall data size can start impacting calculation times. You'll notice this when (as you described) calc times for the same rules and same data grow after you add data to other POVs.

    There are two ways to address this:

    1 - Keep the environment where you are actively calculating as slim as possible. You can do this by moving data from older POVs to a PCM environment used for reporting only.

    2 - Request the Logical Clear enhancement for your system. The reason calc times slow down is that the method Essbase uses for clearing the results of previous calc runs gets slower as overall application size increases. There are two methods for clearing data: physical and logical. Physical is the default method PCM employs, but if needed it can be switched to Logical, which is much faster but has some limitations.

    If you choose to request the Logical Clear enhancement option, please log an SR describing your calculation degradation and attach execution stats reports for two comparative runs. Describe the differences in data volumes present during each run.

  • Timothy Hennessy

    Thanks Amit.

    I have explored this option, but I have not yet implemented it.

  • Amith Kumar
    That's a bug; we have logged an SR with Oracle for this issue. They will be releasing a patch fix sometime next year, even though it was raised as a defect in early 2019. For the time being, you can try using the merge slice functionality through EPM Automate or a REST call. We were given a couple of interim patches with the performance fix, which helped with timings.
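    As a sketch of the interim workaround Amith mentions, merging the application's incremental data slices can be scripted with EPM Automate. The service URL, user, and application name below are placeholders, and the exact `mergeSlices` syntax should be confirmed against the EPM Automate reference for your release:

    ```shell
    # Log in to the PCM instance (URL, user, and password are placeholders).
    epmautomate login serviceAdmin MyPassword https://example-pcm.epm.us2.oraclecloud.com

    # Merge all incremental data slices into the main database slice.
    # keepZeroCells=false also discards zero-value cells left behind by
    # repeated clear-and-calculate cycles, which is usually what you want.
    epmautomate mergeSlices MyPcmApp keepZeroCells=false

    epmautomate logout
    ```

    The same operation is exposed through the PCM REST API, so it can be scheduled (for example, nightly after the calculation window) rather than run by hand.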
  • Ramesh Balasubramanian

    Attached is an image from the PCMCS 17.02 update. I am not sure whether more functions have been enabled in the last two years.


  • Parmit Choudhury

    Thanks Law and Alecs.

    I would also like to know whether we can use the "Manage Queries" tile in a PCMCS application to generate Source, Driver (if applicable), and Destination Offset reports for various rule sets.

    Best Regards


  • Parmit Choudhury

    Hi Alecsandra,

    You are right: we have assigned attributes to base-level members of the Entity dimension. In the Source tab of a rule we are applying an attribute filter to Entities in an alternate hierarchy (a shared-members structure), and it does not work, in the sense that it throws the warning I posted. The filter does work when we apply it on the primary/base hierarchy, but not on the shared/alternate hierarchy. Please find the filter setup attached.

    Best Regards




  • Alecsandra Mlynarzek

    Hi Parmit, 

    We've used filters successfully on both UDAs and Attribute dimensions, so before I venture an answer, let me see if I understand your issue correctly: you have assigned attributes to base-level members of the Entity dimension. In the Source tab of a rule you are applying an attribute filter to Entities in an alternate hierarchy (a shared-members structure), and it does not work, in the sense that it throws the warning you posted.

    So the warning says there are no members in your selection once the filter criteria are applied. Is the attribute filter = (equal) to a value or <> (not equal) to a value? And are all of the members in that section of the hierarchy associated with the attribute in your filter criteria, or only some of them?

    Do you mind sharing with us the actual filter criteria setup (a screenshot would work) so we can ask more questions and assist you with the issue? 

    Kind regards,


  • Alecsandra Mlynarzek

    Hi Parmit, 

    I completely agree with Law - if you have granular reports, how do you know they are truly balancing out?

    So you want to start from the top: check total amounts first, and then have templates to drill into the low-level detail numbers. You can use the Rule Balancing report for a lot of quick reporting - just set up multiple Model Views that can be selected from the Rule Balancing dropdown. These reports are dynamic and accessible to your PCM users, so they should be a pretty good first choice.

    Whenever you set up granular reports, the biggest issue will be maintenance. Unless they are dynamic, you are better off focusing on top-level validations and training your users on how to drill through to bottom-level members.

    I hope this helps.

    Kind regards,


  • Alecsandra Mlynarzek

    Hey there John, Aamir,

    We generally get around the summing up to total Account by excluding the Driver node with the ignore consolidation operator (~). Also, if your Account dimension is Label Only and the top Account node is your Expense + Revenue total, then the "Account" total should be accurate without any further work.

    If you need drivers by accounts (in case you have account to account allocations) you will be forced to have a separate Driver dimension.

    But if you don't, and your app is already fairly large, adding another dimension may not be the best option. So weigh your volumes - not just the number of metadata members in a dimension, but also the number of dimensions and the data granularity - and then make a choice.

    Finally, one more aspect to consider: check your integration options. Is it easy to add a driver column to the data source if you add a Driver dimension, or will separating the Driver into a new dimension negatively impact your existing integrations?

    Just some ideas to consider when you perform your analysis.

    Best of luck!


  • Aamir Zaveri

    Hi John,

    Thanks for your response. It's very helpful.


  • John Sturgeon

    We also chose to separate out the drivers from accounts.

    I agree that traceability is important, and the other reason we chose to separate out driver data from the account dimension was to better segregate data for more intuitive analysis by users across our firm - a guiding principle for our implementation. For example, by having only account data in the account dimension, a user can pull up the parent member of that dimension and immediately know the sum of (in our case) costs for the given period, rather than having to filter out the stat accounts.

    The separate dimension also gives you the ability to input driver data by account in the future if needed, which wouldn't be an option if both data types were contained in the same dimension.

    The only downside we could think of is that it adds another dimension to the application, which we deemed minimal since users can always use the "no driver" member when querying costs.

    Hope that helps!

  • John Sturgeon

    That did the trick, thank you!

  • Alecsandra Mlynarzek

    Hi there John,

    I remember I had a similar issue about 2 years ago and the solution was to set up a data grant for the group on at least one dimension to enable read at top level. By default, the Viewer and User groups cannot see data unless you give them access to some area of the application.

    Kind regards,

  • Law

    Hi Parmit.

    I suppose the answer would probably depend on what the "Correct Data Location" is. If it's known, I'd produce an FRS report to give a bird's-eye summary and have the customer drill in where discrepancies exist.

    As an example, for a customer I've worked with, we expected all values to be allocated out of a number of products, so we created a few high-level rec reports:

    1. Report 1 - Pre- and post-allocation totals
      1. The aim was simply to show that pre- and post-allocation totals match
    2. Report 2 - Unallocated amounts
      1. The aim was to show amounts left unallocated and amounts landing in unexpected products
    3. Report 3 - High-level summary of the allocated amount in each product, with a % split and variance against prior
      1. The aim was to provide an indicative view of the allocated proportions and how they differed from previous post-allocated amounts

    You've mentioned the client wants to check 3,000 items. How do they know the result is indicatively correct if they look at values at such a granular level?

  • Aamir Zaveri

    Hi Alecs,

    Thanks for your reply.
    If I understand correctly, you mean using POV dimensions (e.g., Scenario or Version) to hold region members such as Actual-USA and Actual-Canada.

    I'll also raise this on the Idea Lab to consider rule security as an enhancement in a future release.