Comments

  • Chris Rogers

    Hi Rajan, thanks for your suggestions so far :)

    In this specific situation I am trying to get the DataType for each column in the report, so that when I run the report I can convert the results into the correct format in the receiving code:

     "dataType": {
        "id": 3,
        "lookupName": "INT"
      }
    

    However, I would rather not have to make a web call for every column before actually running the report!

     

    I do note that the ability to expand sub-collections seems to be missing across the board, eg

    /services/rest/connect/v1.4/contacts/123?expand=emails

    /services/rest/connect/v1.4/contacts/123?fields=emails.address&expand=all

    These just give you the item links, rather than the actual details:

    "emails": {
        "items": [
          {
            "rel": "canonical",
            "href": "/services/rest/connect/v1.4/contacts/123/emails/0"
          }
        ],
        "links": [
          {
            "rel": "self",
            "href": "/services/rest/connect/v1.4/contacts/123/emails"
          },
          {
            "rel": "canonical",
            "href": "/services/rest/connect/v1.4/contacts/123/emails"
          },
          {
            "rel": "describedby",
            "href": "https://service.elsevier.com/services/rest/connect/v1.4/metadata-catalog/contacts/emails",
            "mediaType": "application/schema+json"
          }
        ]
      },
    

    However, as you suggest, this could be achieved with ROQL, but there doesn't seem to be a way to 'autodiscover' ROQL in the way there is for the different endpoints (the schema+json).

    Wonder if it might be worth me creating an idea in the ideas lab?

  • Chris Rogers

    Hi Rajan

    As listed above, I have already tried /services/rest/connect/v1.4/analyticsReports/1/columns?expand=all 

    The expand parameter seems to work for 'normal' collections (eg analyticsReports), but not sub-collections (eg analyticsReports/columns)

     

    SELECT AnalyticsReport.Columns.* FROM AnalyticsReport WHERE ID = 1
    

    Also doesn't seem to work, unless I've made a mistake in the ROQL?
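
    For reference, this is the sort of call I'm making to run that ROQL (just a sketch; the site URL and credentials are placeholders, and I'm assuming the generic queryResults endpoint here):

    $query = "SELECT AnalyticsReport.Columns.* FROM AnalyticsReport WHERE ID = 1";
    $ch = curl_init("https://yoursite.custhelp.com/services/rest/connect/v1.4/queryResults?query=" . urlencode($query));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);           // return the body rather than echoing it
    curl_setopt($ch, CURLOPT_USERPWD, 'apiuser:apipassword'); // an account with REST API access
    $results = json_decode(curl_exec($ch), true);
    curl_close($ch);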

     

  • Chris Rogers

    Have you tried installing davfs2 manually on the server? It seems like it might have worked in your pipeline, and is just showing some info about how you can reconfigure it.

    You clearly shouldn't have to install davfs2 every time you run the job, unless something very weird is happening on your Jenkins server?!

  • Chris Rogers

    Is that IP address / service accessible from the outside world?

    Something like https://httpstatus.io/ suggests not (it says connection timed out)

    Also try running on a normal HTTP port (80, 443, 8080); it is possible Oracle blocks outbound access to non-standard ports.

  • Chris Rogers

    An update: after many iterations, this is what we now do for Incidents (and the same principle for Contacts and other things), in case anyone else has this problem!

    We now have a new custom object (QueuedIncident) that holds the IncidentID, Operation, Failures, FailureMessage, LastRequeued (datetime) and LastFailed (datetime).

    The (synchronous) Incident CPM then becomes very simple: it just creates a new QueuedIncident with the IncidentID and Operation (create/update).

    We then have an asynchronous CPM on the QueuedIncident custom object. When it succeeds it deletes the QueuedIncident; if it fails it increments Failures by 1, records the time in LastFailed, and stores the error in FailureMessage.
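
    The async handler boils down to something like this (a simplified sketch only: the CPM's declarative header comment and the required test-harness class are omitted, and the actual incident processing is left out):

    use \RightNow\Connect\v1_4 as RNCPHP;
    use \RightNow\CPM\v1 as RNCPM;

    class QueuedIncidentProcessor implements RNCPM\ObjectEventHandler
    {
        public static function apply($runMode, $action, $queued, $cycles)
        {
            try {
                // Look up the real incident and do whatever the queued operation requires.
                $incident = RNCPHP\Incident::fetch($queued->IncidentID);
                // ... the actual create/update processing goes here ...

                $queued->destroy();                    // success: remove the queue entry
            } catch (\Exception $e) {
                // Failure: record the details so the hourly script can requeue it later.
                $queued->Failures       = (int) $queued->Failures + 1;
                $queued->LastFailed     = time();
                $queued->FailureMessage = $e->getMessage();
                $queued->save(RNCPHP\RNObject::SuppressExternalEvents); // don't immediately re-trigger ourselves
            }
        }
    }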

    We then have a quick custom script that queries for entries that have failed and haven't been requeued since they last failed. It updates the LastRequeued date, and that update triggers the async CPM above to try again! We asked Oracle to run this hourly. If something fails more than 5 times it stops getting requeued and is flagged in a report for manual intervention.
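
    The requeue script itself is tiny; roughly this (again just a sketch, and I've assumed the custom object package is called CO):

    use \RightNow\Connect\v1_4 as RNCPHP;

    // Find queue entries that have failed but not yet hit the retry limit.
    $rows = RNCPHP\ROQL::queryObject(
        "SELECT CO.QueuedIncident FROM CO.QueuedIncident WHERE Failures > 0 AND Failures <= 5")->next();

    while ($queued = $rows->next()) {
        // Skip anything already requeued since its last failure.
        if ($queued->LastRequeued !== null && $queued->LastRequeued >= $queued->LastFailed) {
            continue;
        }
        $queued->LastRequeued = time();
        $queued->save();       // this update is what re-triggers the async CPM
    }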

    We now have a very reliable system that handles failures gracefully, retries them automatically, and flags anything that keeps failing for manual intervention!
    It has the added benefit that we can run it against a specific incident on demand by just creating a new QueuedIncident by hand, or even via the import wizard!

  • Chris Rogers

    You shouldn't have to do any encryption on the Customer Portal side, just on the side that provides the PTA link.

    Having said that, I created a PTA tester in the Customer Portal, but it is quite dangerous as it lets us log in as anyone!

    I used phpseclib to make things easy

    Then it was just a case of something like this:

    require_once 'Crypt/Rijndael.php';                            // phpseclib 1.x

    $cipher = new Crypt_Rijndael(CRYPT_RIJNDAEL_MODE_CBC);        // Rijndael/AES in CBC mode
    $cipher->setKey(Config::getConfig(PTA_SECRET_KEY));           // shared secret from the PTA_SECRET_KEY config
    $cipher->setIV(str_repeat("\x0", $cipher->getBlockLength())); // zero IV
    $encrypted = $cipher->encrypt($toEncrypt);
    $encrypted = base64_encode($encrypted);
    $encrypted = strtr($encrypted, array('+' => '_', '/' => '~', '=' => '*')); // URL-safe substitutions for the PTA string
    

    The exact encryption settings and algorithm will depend on what settings you have used though.
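
    For context, the $toEncrypt string is just the usual PTA name/value pairs, and the encoded result goes on the end of the PTA login URL. Something like this (a sketch with placeholder values, assuming the standard p_li format):

    // Build the login data to encrypt (add whatever fields your PTA setup expects).
    $toEncrypt = 'p_userid=' . $login . '&p_passwd=' . $password . '&p_email.addr=' . $email;

    // ... encrypt and encode as above ...

    // Append the encoded string to the PTA login URL ('home' is the page to land on after login).
    $ptaUrl = 'https://yoursite.custhelp.com/ci/pta/login/redirect/home/p_li/' . $encrypted;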

     

    HTTPS is probably a good idea on both sides; if anyone steals the link they can log in as the user!

     
    
  • Chris Rogers

    I didn't have any problems mounting using davfs

    In /etc/fstab I have something along the lines of:

    https://interface.custhelp.com/dav /mnt/jenkins/interface davfs rw,user,noauto 0 0
    

    Then I have the username/password stored in ~/.davfs2/secrets.
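
    The secrets file is just one line per mount (and davfs2 wants it readable only by you), something like:

    # ~/.davfs2/secrets  (chmod 600)
    https://interface.custhelp.com/dav  webdav_username  webdav_password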

    http://ajclarkson.co.uk/blog/auto-mount-webdav-raspberry-pi/ is a pretty good tutorial.

     

    I have used Jenkins pipeline jobs to pretty much script the uploading of files. It is in a Jenkinsfile that we store in git as well, so we can track changes to the script.

    I have a job for each interface, using parameters in the job to tell the script where to upload the files. You could do the same and have a trigger job in Jenkins which fires off the rest, or just have one job to upload to them all in one go.

  • Chris Rogers

    Our Jenkins is running on a Linux box so it can do everything locally. No reason you couldn't write a script to connect to a different Linux box to do it though.

  • Chris Rogers

    We use WebDAV, the same way you would normally upload files.

    There are lots of ways to connect to WebDAV from Linux, from cadaver to davfs.

    Again, work out what is best for you; we use davfs.

  • Chris Rogers

    We've got 5 language interfaces at the moment (with another 3 in the pipeline), so are also beginning to feel your pain.

    We haven't had much time to look into automating staging/promote, if you have any luck please let us know!

    Cheers

    Chris

  • Chris Rogers

    Our git repo contains all the files for the customer portal, so offhand the script looks something like:

    git checkout live                                           # deploy from the live branch
    mount /mount/webdav                                         # davfs mount defined in /etc/fstab
    rsync -rv src/customer-portal/ /mount/webdav/cp/customer/   # options trimmed here; ours are more involved
    umount /mount/webdav
    

     

    There are loads of options for rsync, so you will want to work out what works best for you; we spent a couple of days fine-tuning it for our specific cases (eg ignoring some files, etc).

    There are also lots of ways of mounting WebDAV on Linux; again, it's probably best you work out what works best for you.

     

    Yes; we have set up rsync to only upload files that have changed, so the stage/promote pages only show the files that have changed.

    We had to get everything in git that we wanted to upload though, and do a large deployment first to make sure git was identical to the uploaded files.

     

  • Chris Rogers

    Yes, so we develop on tst sites. We then use rsync to upload the files to our 'live' (production) site. The files go into the normal dev mode environment like they would if you manually uploaded them.

    We then have to manually run through the stage / promote steps via the UI.

  • Chris Rogers

    We use Jenkins to deploy our code to our production environments.

    It just checks out the entire codebase then rsyncs it with the production environment, thus only updating files that have changed.

    We still go through the Customer Portal admin pages to deploy once the code has been uploaded, though.

    If you dig around you can find the admin pages' source; you might be able to make something custom from them to actually deploy the files, but I don't feel that's the greatest idea.

    An alternative would be to write a script to go through the deployment process web pages.

  • Chris Rogers

    In this case we were acting as the external app so that we could test that we had set up PTA correctly, that the user pages look OK, etc. The external apps hadn't implemented the PTA encryption at their end, so we were a bit stuck! Obviously this won't be available to end users; they will have to log in through the external apps!


    We ended up using phpseclib, much easier to get working!

  • Chris Rogers

    If you do some jiggery-pokery with reflection you can get some of the details of \RightNow\Internal\Utils\Url and the other internal classes out, but you are better off using the core references files (eg RightNow\Utils\Url) in cp\core\framework\Utils and cp\core\framework\Libraries as I have discovered that you cannot actually call methods from the most of the Internal classes directly sad