Hi,
I am trying to upload a CSV file to Salesforce through the Salesforce Adapter using the Bulk Data upsert operation. The upsert itself seems to work fine and generates the proper XML to send to Salesforce, but when the flow reaches my FinalBatch operation, it fails with an "invalid JobID" error coming back from Salesforce.
I am using a flow similar to the one shown in the Oracle documentation here:
Attached is a picture of my integration flow. It fails at FinalBatch. The "mapping to FinalBatch" step does nothing, because the upsert response gives me nothing I can use to pass a job ID to the FinalBatch operation.
Am I missing something? Do I need to create a Job ID first somehow?
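For context, the adapter's bulk operations wrap the Salesforce Bulk API v1 lifecycle: create a job, add batches against that job's id, then close the job. The 400 InvalidJob in the log below means the close (FinalBatch) call is hitting a job id Salesforce no longer recognises as open. A minimal sketch of the endpoint shapes involved (the instance host is a placeholder; only the URL structure comes from the log and the public Bulk API):

```python
# Sketch of the Salesforce Bulk API v1 job lifecycle that the adapter
# drives under the covers. "instance" is a placeholder hostname; only
# the URL shapes are taken from the diagnostic log / public Bulk API.
API_VERSION = "46.0"

def job_url(instance: str) -> str:
    # POST here (with a jobInfo XML body) creates the job and returns its id.
    return f"https://{instance}/services/async/{API_VERSION}/job"

def batch_url(instance: str, job_id: str) -> str:
    # POST here adds a batch of records to an *open* job.
    # A 400 InvalidJob response means job_id is unknown or already closed.
    return f"https://{instance}/services/async/{API_VERSION}/job/{job_id}/batch"

def close_url(instance: str, job_id: str) -> str:
    # POST here with <state>Closed</state> finalises the job; no further
    # batches can be added to it afterwards.
    return f"https://{instance}/services/async/{API_VERSION}/job/{job_id}"
```

This is only an illustration of the protocol, not the adapter's internals; the adapter normally tracks the job id for you, which is why an empty FinalBatch request payload (as seen in the log) is suspicious.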
This is the last part of the diagnostic log:
]]
[2019-07-24T19:34:37.570+00:00] [oicyyz3I_server_2] [ERROR] [] [oracle.soa.adapter] [tid: [ACTIVE].ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: xxxx] [ecid: 9287e245-2972-42c6-8df1-72487b165c7e-00196901,1:20730:3] [partition-name: DOMAIN] [tenant-name: GLOBAL] [oracle.soa.tracking.FlowId: 60] [oracle.soa.tracking.InstanceId: 3532] [oracle.soa.tracking.SCAEntityId: 434] [composite_name: SITE_ID_CATALOGUE!01.00.0000] [FlowId: 0000Mk_UItgEwG55zRs1yW1TA0NF0000pc] JCABinding <outbound> Integration Payload : [[
<finalBatch xmlns="http://xmlns.oracle.com/cloud/adapter/salesforce/FinalBatch_REQUEST"/>
]]
[2019-07-24T19:34:37.572+00:00] [oicyyz3I_server_2] [ERROR] [OSB-381990] [oracle.osb.transports.jca.jcatransport] [tid: [ACTIVE].ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: xxxx] [ecid: 9287e245-2972-42c6-8df1-72487b165c7e-00196901,1:20730:3] [partition-name: DOMAIN] [tenant-name: GLOBAL] [oracle.soa.tracking.FlowId: 60] [oracle.soa.tracking.InstanceId: 3532] [oracle.soa.tracking.SCAEntityId: 434] [composite_name: SITE_ID_CATALOGUE!01.00.0000] [FlowId: 0000Mk_UItgEwG55zRs1yW1TA0NF0000pc] Invoke JCA outbound service failed with application error, exception: <genericRestFault><errorCode>400</errorCode><errorPath><![CDATA[POST https://xxxx.***.xx.my.salesforce.com/services/async/46.0/job/7503D0000039sPFQAY/batch returned a response status of 400 Bad Request]]></errorPath><instance><![CDATA[<?xml version="1.0" encoding="UTF-8"?><error[[
xmlns="http://www.force.com/2009/06/asyncapi/dataload">
<exceptionCode>InvalidJob</exceptionCode>
<exceptionMessage>Invalid job id: 7503D0000039sPFQAY</exceptionMessage>
</error>]]></instance></genericRestFault>
]]
[2019-07-24T19:34:39.043+00:00] [oicyyz3I_server_2] [ERROR] [ESS-07004] [oracle.as.ess] [tid: ESS Execute Thread [RequestId:3581]] [userId: <WLS Kernel>] [ecid: 9287e245-2972-42c6-8df1-72487b165c7e-00196901,0:140:1:100009653] [APP: ESSAPP] [partition-name: DOMAIN] [tenant-name: GLOBAL] [ESS_JobMetadataID: JobDefinition://oracle/apps/ess/seeded/ics/IcsFlowJob] [ESS_RequestID: 3581] [FlowId: 0000Mk_UItgEwG55zRs1yW1TA0NF0000pc] [ESS_Module: Processor] Execution error for request 3581. Reason: ESS-07033 Job logic indicated a system error occurred while executing an asynchronous java job for request 3581. Job error is: com.oracle.bpel.client.BPELFault: faultName: {{http://schemas.oracle.com/bpel/extension}remoteFault}[[
messageType: {{http://schemas.oracle.com/bpel/extension}RuntimeFaultMessage}
parts: {{
CODE=<code>{http://schemas.oracle.com/bpel/extension}remoteFault</code>
,SUMMARY=<summary>ICS runtime execution error</summary>
,DETAIL=<DETAIL><detail><ICSfaultVar/><reason>Error sending bytes: Sending salesforce batch file failed with an error.
:Application Error</reason><operation>finalBatch</operation></detail></DETAIL>}
cause: {null}
.
]]
When passing records to a CORE Upsert operation via the Salesforce adapter, if more than 200 records are passed in one call, Salesforce returns this upsert error:
EXCEEDED_ID_LIMIT: record limit reached. cannot submit more than 200 records into this call
I realise this is an UPSERT limitation on the Salesforce side, and I can most likely cater for it by splitting the payload into batches of fewer than 200 records within the integration, e.g. with a for-each loop and an XSLT.
But is there anything out of the box in the OIC adapter that could handle this batching for me?
PS. I also know I could update the integrations to use the Bulk API, but I would rather not do that, since payloads of 200 records or more will be rare; they nevertheless need to be handled.
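Assuming the splitting does have to be done inside the integration, the slicing logic itself is simple. A sketch in Python, purely to illustrate the chunking (the real implementation would be the XSLT for-each described above):

```python
from typing import Iterator, List

SALESFORCE_CORE_LIMIT = 200  # max records per CORE upsert call

def chunk(records: List[dict], size: int = SALESFORCE_CORE_LIMIT) -> Iterator[List[dict]]:
    """Yield successive slices of at most `size` records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

# Each yielded batch would then feed one upsert invocation
# inside a for-each loop.
batches = list(chunk([{"Id": i} for i in range(450)]))
# 450 records -> batches of 200, 200, 50
```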
Many Thanks
Rob
I am trying to create a sample SOAP connection with the No Security option, using a publicly available endpoint.
I am getting an "unable to create connection" error, with the cause reported as CASDK errors.
Any ideas?
Service Type: Oracle Integration Classic
Hi Team,
I have noticed that NOT ALL instances that faulted can be recovered or resubmitted for reprocessing. Based on my experience, I have been able to resubmit/recover some (scheduled integrations) but not others (app-driven integrations).
I'd like to know under which specific conditions an instance is considered recoverable when a fault is encountered.
Thanks ahead,
Moon
Dear folks,
I am integrating the Salesforce and Engagement Cloud Account entities using Oracle Integration Cloud. The request mapping succeeds, but the response mapping fails with the error message below. Could someone please let me know how to fix it?
Client received SOAP Fault from server: Unable to find a deserializer for the type common.api.soap.wsdl.QueryResult. Error Id: 1285309289-28170 (-1896990142). Cause: Problem could be in the request mapping. Solution: All mapped elements should belong to the parent object.
Thanks & Regards,
Kannan Ranganathan