Reporting and Analytics for Service



    Kenny Tietz
    Topic posted March 26, 2015 by Kenny Tietz, last edited March 21, 2016
    Best Practice Guide - How to Improve Reporting Performance in Oracle Service Cloud

    Attached to this post is a revised best practices document that deals specifically with optimization and report performance within Oracle Service Cloud. Included are more than 25 best practices, along with new feature additions that will help make report optimization easier to handle.

    This document begins with an overview of key terminology, relevant concepts, and a discussion of the relevant threshold governors. With this background in place, the document provides an in-depth discussion of best practices to make Oracle Service Cloud Analytics reports run as fast and efficiently as possible, to improve scalability when large data volumes or processing-intensive report queries are involved, and to determine appropriate next steps when encountering warning messages indicating that thresholds will be exceeded.

    We hope you find this revised guide both an insightful and practical tool for developing, tuning and managing reports and analyses within your Oracle Service Cloud instance.



    • Heena Karir

      Thank you Kenny for sharing such a useful document!

      ~ Heena Karir

    • Mike Griffiths

      Hi Kenny,

      Thank you for the information, this is useful. 

      Regarding the Database Row Thresholds, I find (anecdotally, of course) that Service Cloud is often inconsistent in its behaviour. Sometimes a report will outright refuse to run as it claims it has too many rows to analyse, whereas on other occasions, a report runs fine even though I know there are too many rows. Just yesterday I needed to run a report on Question Responses and did so by joining from Incidents to Accounts (using a Custom Field), from Accounts to Question Sessions, and from there to the responses provided. There were a total of 62 Incidents filtered into the report that had question sessions, and yet the report insta-refused to run due to exceeding the thresholds (something I don't believe to be true).

      As another example, one of our customers has ~5 million Contacts, with ~8 million Incidents associated with them, which is clearly way above the thresholds set on the reporting database, let alone the operational one, and yet reports can usually (though not always) be run against this. Another example: sometimes I need to query the Clickstreams table, but given the huge amount of information stored in there, this is sometimes also impossible.

      I suppose part of the issue is that it's not clear what "analyse" means in this context. Is this the total number of rows that exist in the table pre-filter, or is it the number of rows that exist after filtering? Either way, I believe the limits should be the number of rows returned, rather than the number of rows analysed.

      I'd like to hear your (and others) thoughts on this, as the row limits often cause me a great deal of annoyance!



    • Justin

      If you are running an incident report, you need to use a date filter. The table join hierarchy you used (incidents > contacts > question sessions) could be huge without a filter to limit the date range, or without join filters if you aren't using inner joins.


      Most of the time, the filters you use and the way you join the tables will determine the rows analyzed. You generally don't want to query the entire population of contacts and/or incidents. If you do, then I would point you to using ROQL and one of the APIs.
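      To illustrate the ROQL route Justin mentions, here is a minimal sketch of building a date-bounded ROQL query for the Connect REST API's queryResults resource. The site URL, API version, and the specific date and row limit are all assumptions for illustration; substitute your own instance and credentials (the request itself would also need authentication, which is omitted here).

      ```python
      from urllib.parse import quote

      # Hypothetical site URL -- replace with your own Service Cloud instance.
      BASE = "https://yoursite.custhelp.com/services/rest/connect/v1.4"

      def build_query_url(base, roql):
          """URL-encode a ROQL statement for the queryResults resource."""
          return f"{base}/queryResults?query={quote(roql)}"

      # A date filter plus LIMIT keeps the number of rows scanned small,
      # mirroring the date-filter advice above.
      roql = ("SELECT Incident.ID, Incident.Subject FROM Incident "
              "WHERE Incident.CreatedTime > '2017-01-01T00:00:00Z' LIMIT 1000")

      url = build_query_url(BASE, roql)
      print(url)
      ```

      The resulting URL can then be issued with any HTTP client that supports your site's authentication scheme.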

    • Pavol Procka



      I must agree with Mike; it seems a bit inconsistent. I have a problem with one of our scheduled reports. It is supposed to show an aggregate of all agent achievements in monthly intervals for the last 6 months. This used to work until about 4 weeks ago (we have not changed anything since), but lately it stopped delivering the report with the error:

      "Unable to deliver scheduled report: Executing view query failed.

      Query processes too much data - add filters to limit the query or remove tables that are not required."

      Normally I would understand why (the error message says it: the amount of data), but I am wondering why this was not a problem before. We have been using RightNow for about 4 years now, and the report used to arrive as scheduled, once a week, until 12/06/2017 (that is the last one that was sent).
      I am able to run it with a date range up to -110 days, but with -115 or more (I would need -180) it gives the error.

      Also, if I just create a simple report showing all transactions from the last 180 days (I was trying to see what that big data is), it works and returns circa 18 million rows, so I doubt it is really the amount of data that is causing the problem in the filtered one. Even if I remove the other columns and leave only Response Sent (next to Agent and Date/Time Interval), the report simply will not run.

      I am attaching the report definition, in case somebody has a suggestion for what to replace the Response Sent, Trashed, Queued, and Created column definitions with.

    • Pavol Procka

      The report started coming again... we did not make any changes... it seems Service Cloud has a mind of its own here.

    • Narendra Muttineni

      Thanks Kenny for sharing this document. This document helps a lot.