Comments

  • Glen Ryen

    And some results:

  • Glen Ryen

    Are you a Procurement Agent, such that you can see the POs in the purchasing work area?  If not, that could be the issue.  But the join is working for me, so you may want to open an SR (if you haven't already).  If this isn't a security issue, perhaps it's an instance issue.

    SET VARIABLE PREFERRED_CURRENCY='User Preferred Currency 1';SELECT
       0 s_0,
       "Procurement - Procure To Pay Real Time"."Invoice"."Invoice" s_1,
       "Procurement - Purchasing Real Time"."- Purchase Order Distribution Detail"."PO Distribution Id" s_2,
       "Procurement - Purchasing Real Time"."Purchase Order Header Detail"."Order" s_3,
       "Procurement - Procure To Pay Real Time"."Invoice Measures"."Matched Amount" s_4,
       "Procurement - Purchasing Real Time"."Purchase Order Header"."Count" s_5
    FROM "Procurement - Procure To Pay Real Time"
    WHERE
    ("Invoice"."Invoice" IS NOT NULL)
    ORDER BY 1, 4 ASC NULLS LAST, 3 ASC NULLS LAST, 2 ASC NULLS LAST
    FETCH FIRST 75001 ROWS ONLY

     

  • Glen Ryen

    Hi Angela,

    From that guidelines whitepaper, make sure you include some measures:

    "Always include a measure from each subject area that is being used in your report. You don’t have to display measures or use them. But you should include a measure in the report nonetheless and hide it if it is not needed in the report."

    If you do that, do you still get the error?

    Glen

  • Glen Ryen

    Hi Madhu,

    The call to the erpIntegrationService web service would invoke the submitESSJobRequest operation and pass the job package and definition in the SOAP envelope as follows:

             <typ:jobPackageName>/oracle/apps/ess/financials/payments/shared/runFormat</typ:jobPackageName>
             <typ:jobDefinitionName>IBY_FD_POS_PAY_FORMAT</typ:jobDefinitionName>

    Does that help?  Just be aware that when you run the ESS job, it will complete with a status of ERROR if there's no data found.  But that could also be an expected condition, if there are no new checks printed since your last PosPay run.

    Hope that helps,

    Glen

  • Glen Ryen

    Do you get the same error if you include PO Number from only one subject area, not both?  I suspect you want PO Distribution ID to be the only dimension that you pull from both subject areas, but let me know if that works for you.

    Glen

  • Glen Ryen

    Hi Jen,

    Check the PPP (Payment Process Profile) used to create that first batch of checks and look for the completion point.  I'm guessing it's set to complete when printed.  If so, did you confirm the printed numbers in the first batch (PPR) before attempting to print the second batch?  You have to mark what's been printed vs. spoiled in order to complete the PPR and release the Payment Document for the next batch of checks.

    A second thing to check is your definition of those Payment Documents under 'Manage Bank Accounts'.  Do they have a value defined in the 'Payment Document Category' field?  If so, that will trigger the need for a document sequence - and that's separate from the printed check number ranges.  If you're not using document sequencing, you probably want that 'Payment Document Category' field blank.  Not sure if that's the issue here, but it's a possibility.

    Hope that helps,

    Glen

  • Glen Ryen

    Hi Kalpita,

    That subprocess log pointed to two separate issues on two different statements.  The two errors may or may not be for the same bank account, but they're on different statement days at least:

    [CE_DUP_STMT_IN_ACCT_NUMBER], Statement_Header_Id[300000064966739], [The statement ID 2019-03-26 with the bank account number XXXX1197 already exists in the application.]

    [TRX_CODE_NOT_DERIVED], Statement_Header_Id[300000064966692], Line_Number [1], [The transaction code 254 could not be derived from code mapping. Define either a code mapping or the transaction code.]

    The first is a seeded validation preventing duplicate data.  You've already loaded that particular end-of-day statement.  You can either (1) delete the existing statement and then reprocess the statement in this BAI2_20190327.txt file, or (2) purge the statement from this file.  Generally you want #2, unless there was something wrong with the 2019-03-26 statement already loaded for that XXXX1197 account.

    The second error is bank activity that you're seeing for the first time.  Go to the FSM task 'Manage Bank Transaction Codes' and define that missing 254 BAI2 code.  Name and Type would be something representing the data in your file, probably 'Posting Error Correction Credit' and 'Miscellaneous' respectively.  Once you do that, you can retry loading the failed row.

    Both purging and retrying the statements in error are done from the main 'Manage Bank Statements and Reconciliation' page, in the 'Process Warnings and Error' table at the top.  Does that help?

    Glen

  • Glen Ryen

    Hi Bernice,

    Is there still no way to specify a default sort order?  To Wayne's point, it makes it much harder to track longer-running conversations.  Responses to old posts are similarly buried a few pages deep.  You can sort each forum by latest comment as you say, but you have to do that from the second page of results, once per forum, and every single time you visit that forum.  It makes focusing on recent activity extremely difficult.

    Thanks,

    Glen

  • Glen Ryen

    Hi Lori,

    What Swathi said - that's the seeded report for tracking outstanding checks and feeding the escheatment process.  It runs by bank account with an as-of date option, and you can republish the results to Excel.  Does that give you what you need?

    Hope that helps,

    Glen

  • Glen Ryen

    Got it - you need CSV output for a dynamic (and potentially large) number of columns.  You could check whether that first character is actually a CR or LF rather than a space; you may still be limited in your options for stripping it out of a CLOB.
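
    If it does turn out to be a CR or LF, and assuming you're producing that CLOB in a SQL data set, a quick check and cleanup could look something like this (the view and column names here are made up, so substitute your own):

        -- check what the first character of the CLOB actually is (13 = CR, 10 = LF, 32 = space)
        SELECT ASCII(DBMS_LOB.SUBSTR(category_list, 1, 1)) AS first_char_code
        FROM   party_categories_v;

        -- strip any CR/LF characters before building the CSV line
        SELECT REPLACE(REPLACE(category_list, CHR(13)), CHR(10)) AS category_list_clean
        FROM   party_categories_v;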

    I suspect a better approach might be to make your output an XML file, then transform the variable number of child nodes (categories) under each parent (party) into columns in your layout template.  There's RTF template functionality for Dynamic Columns (https://docs.oracle.com/middleware/12212/bip/BIPRD/GUID-B9C4322A-4BBF-4D28-B8F3-435E527EE5E6.htm#GUID-E3A18C01-F517-49CB-B073-21FF13967534) that might be able to get you to CSV.  Or it may be simpler to code that transform in XSLT.

    It strikes me as a clunky output format, so I'd also ask if that's the only option. Assuming that's going to some destination system, could they accept XML or JSON?

    Hope that helps,

    Glen

  • Glen Ryen

    Sorry, I can't follow what you're doing and reconcile it with the last error message you sent.  The SQL issued there (bottom of the screenshot) shows you selecting the invoice number and the CAPEX DFF, but not the PO Distribution ID.  Could you take screenshots of each step in the process and post them in a Word doc?  I really need to see which columns you're adding, from which folders, and at what point it goes from working to that specific error, so I can see where the disconnect is.

    Glen

  • Glen Ryen

    Hi Mohammed,

    Any reason why you wouldn't use the LISTAGG analytical function OVER the party name?  I think that would be far more readable code (and almost certainly quicker) than casting through XML.
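
    Just to illustrate what I mean, a rough sketch (with made-up table and column names, so adjust to your actual data model):

        -- list the categories next to each party in a single pass,
        -- instead of aggregating through XMLAGG/XMLCAST
        SELECT party_name,
               category_name,
               LISTAGG(category_name, ',')
                 WITHIN GROUP (ORDER BY category_name)
                 OVER (PARTITION BY party_name) AS all_categories
        FROM   party_categories;

    The plain aggregate form (LISTAGG ... WITHIN GROUP ... with a GROUP BY party_name) works just as well if you only need one row per party.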

    Glen

  • Glen Ryen

    Hi Adi,

    Have you seen the seeded 'General Ledger to Subledger Account Analysis' report?  That's a very good starting point for all costs hitting an expense account, though I've had lots of clients prefer a custom layout on top of that (for Excel vs. PDF output).  Does that help?

    Glen

  • Glen Ryen

    Hi Kalpita,

    Could you confirm, first, that you're starting with a valid BAI2 file, with no spaces in the file name, inside a zip file that you've uploaded to the UCM fin/cashmanagement/import account (via Navigator, File Import and Export)?  Second, that you've submitted the 'Load and Import' job pointing to the zip file you uploaded?  And finally, what does the log file for that 262724 subprocess say?  That one should have given some details of the error.

    Glen

  • Glen Ryen

    Did you remove PO Distribution ID between steps #2 and #3?  I don't see it in the SQL you posted.  What did the SQL look like after you were successful in step #2?