
Oracle BPM 12c just got Groovy – A Webcenter Content Transformation Example


Introduction

On 27 June 2014 we released Oracle BPM 12c, which included some exciting new features.
One of the less talked about new features is support for BPM Scripting, which incorporates the Groovy 2.1 compiler and runtime.

So what is Groovy anyway?

Wikipedia describes Groovy as an object-oriented programming language for the Java platform; you can read the full definition here.

In short, it is a Java-like scripting language that is simple to use. If you can code a bit of Java then you can write a bit of Groovy, and most of the time only a bit is required.

If you can't code in Groovy yet, don't worry: you can simply write plain Java and that will work most of the time too.
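As a minimal illustration (not taken from the demo later in this post), the first two lines below are plain Java-style code that compiles unchanged as Groovy, while the last two lines show the shorter Groovy idiom:

//Plain Java-style code is also valid Groovy
String subject = "There is a cow in the road";
System.out.println(subject.toUpperCase());

//The idiomatic Groovy equivalent is shorter
def shout = subject.toUpperCase()
println shout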

With great power comes great responsibility?

The benefits and possibilities of being able to execute snippets of Groovy code during BPM process execution are almost limitless. We must therefore use it responsibly, deciding in each case whether it makes sense from a BPM perspective, and always implement best practices that leverage the strengths of the BPM execution engine infrastructure.

If you can easily code, then it is easy to write code to do everything. But this goes against what BPM is all about. We must always first look to leverage the powerful middleware infrastructure that the Oracle BPM execution engine sits on, before we look to solve our implementation challenges with low-level code.

One benefit of modelled BPM over scripting is visibility. Ideally, BPM processes should be modelled by the business analysts and implemented by the IT department.

Business process logic should therefore be modelled into the business process directly, not implemented as low-level code that the business will neither understand nor be aware of at runtime. In this manner the logic always stays easily visible to, and understood by, the business. Overuse of logic in scripting quickly leads to a solution that is hard to debug or understand in problem resolution scenarios.

If one argues that the business logic of your business process cannot be modelled directly in the BPM process, then one should revisit the business process analysis and review whether the design really makes sense and can be improved.

 

What could be a valid use case for Groovy in BPM?

One valid use case for Groovy scripting is complex and dynamic data transformation. In Oracle BPM 12c we have the option to use the following mechanisms for transformations:

Data Association

Good for:

  • Top level transformations of the same or similar types
  • Simple transformations of a few elements
  • Lists and arrays
  • Performance

XSL transformation

Good for:

  • Large XML schema elements
  • Assignment of optional XML schema elements and attributes
  • Lists and arrays
  • Reuse

Groovy Scripting

Good for:

  • Generic XML schema types like xsd:any
  • Dynamic data structures
  • Complex logic
  • Error handling
  • Reuse

Java callouts using a mediator or Spring component

Good for:

  • Pure Java implementation requirements
  • Large batch processing

Each method has its own benefits and downsides, but in combination you can transform any payload. What to use is largely a question of:

  • Best practice within your organization
  • Best practice for BPM
  • The level of organized structure of your schemas

In practice, an efficiently implemented BPM process will use a combination of data associations, XSLT and BPM scripts.

 

Tip: Always try to solve transformation tasks using a data association first, before turning to XSLT or Groovy. Use the right tool in your toolkit for the right job.

 

 Upgrading from BPM 10g

The inclusion of BPM scripting will also aid the upgrade of BPM 10g processes. This should be seen as an opportunity to review and improve the implementation, as opposed to blindly copying the existing functionality. That process is beyond the scope of this post.

 

A Complex and Dynamic Webcenter Content SOAP Example

Invoking very generic SOAP services is one instance where Groovy can save the day. When a SOAP service is well defined, it is very easy to create a mapping using the XSL or data association mappers. But what if the element definition is left wide open through the use of schema elements like xsd:any, xsd:anyType or xsd:anyAttribute?

Solving this transformation in XSLT could be complex, with lots of hand-written, hard-to-read code.

The GenericRequest of the Webcenter Content SOAP service is an example of such a generic SOAP service. The flexibility of its use means that the payload required is very dynamic.

The actual schema element looks like this.

 

content.xsd

 

Now consider the situation where this payload for the GenericRequest needs to look like this and could potentially have lots of required logic.

 

soapui

This might be accomplished using a complex, hand-coded XSLT transformation.

Alternatively, if you don't have any XSLT world champions on the team, anyone on your development team who can code Java can do this easily with Groovy scripting.

Building the Transformation Demo

To demonstrate the transformation capabilities of Groovy scripting we are going to create a simple synchronous BPM process based on the above use case.

We send an Incident as the request and receive the transformed GenericRequest as the response. In this manner it is easy to see the whole transformed payload that we would normally send to Webcenter Content.

The finished process looks like this.

 

FinishedProcess


Create a new BPM Application and define Data Objects and Business Objects

We will create a new BPM application and define the:

  • Input arguments as an Incident
  • Output argument as a Webcenter GenericRequest

 

1) Download the schema zipfile called docs and extract it to a local location. Then open Studio (JDeveloper) and from the top menu choose Application->New->BPM Application

 

NewApplication


2) Click OK, use the application name GroovyDemoApp and click Next

 

AppName


3) Use the Project Name GroovyDemo, then click Next

 

ProjectName


4) Now choose the Synchronous Service, name the process GroovyDemoProcess and click Next

 

SyncProcess


Now we need to define and add the input and output arguments. Here we use some predefined schema elements in schema files that I provide. Firstly we define these as Business Objects, then we use these Business Objects as a definition for our arguments and Data Objects in the process itself.

 

5) Click on the green add icon to add a new argument, name the argument incidentARG

 

incidentARG


6) Choose Browse under Type and then click the Create Business Object Icon

 

CreateBO


7) Use the name IncidentBO and click the magnify icon to choose a Destination Module

 

DestModule2


8) Click the Create Module icon and use the name Domain

 

Domain


9) Click OK twice to return back to the Create Business Object window


10) Select the checkbox Based on External Schema and click the magnifying glass icon to choose a Type

 

TypeChooser


11) Click the Import Schema File icon, select the incidents.xsd schema file and click OK


12) Click OK to localize the schema files to your composite project

 

localize


13) Select the Incident element from the Type Explorer and OK twice to return to Browse Types

 

type_explorer


14) Select the IncidentBO type and OK

 

IncidentBOSelect


15) To complete the In argument creation click OK

 

InArgumentFinal


16) Now click the Output tab to define the GenericRequest type as an output

 

InArgComplete3


17) Using the same procedure as before create an output argument using the following values:

 

Output Argument Name: GenericRequestARG
Type: GenericRequestBO
Schema Filename: content.xsd
Module: Domain
Element: GenericRequest

 

OutArg5


18) Click Finish to complete the initial definition of the GroovyDemoProcess BPM process.

 

DefinitionProcess


We have created a synchronous GroovyDemoProcess BPM process that has an Incident as the request and a GenericRequest as the response.

Next we need to define process variables based on the business objects that we have already created. These will be used to store the payload data in the BPM process.

 

19) Ensure the GroovyDemoProcess is selected in the Application Navigator, then in the Structure Window right-click the Process Data Objects icon. Use the name incidentDO and select IncidentBO as the Type.


20) Similarly create another process data object called genericRequestDO of Type GenericRequestBO

 

GenericRequestDO


Performing Data Associations of the Data Objects

Now we have to assign the payload of the incidentARG argument to the data object we have just created. We do this in the Catch activity.

 

21) Right-click the Start catch activity and select Properties. Select the Implementation tab and click the Data Associations link.

 

DataAssociations

 

Now we need to assign the incidentARG argument to the incidentDO data object.

Since we defined these with the same type, this is easy: all we need is a top-level assignment, without even worrying about optional sub-elements.

21) Drag from the incidentARG to the incidentDO nodes and click OK twice to complete and close the Start node property definition.


Now we need to associate the GenericRequestDO data object to the response.

This is in the Properties of the Throw End node.

22) Create a Copy association from the genericRequestDO to the GenericRequestARG nodes.


Defining the Groovy Expression in the BPM Script

Now at last we are ready to start defining the groovy code that will be responsible for the transformation.

Drag a Script Activity and place it just after the Start node. Rename it to Transform Request.

 

transform


Transform2


23) Right-click the Transform Request Script Activity and select Go To Script 

 

 

GoToScript


Tip: The Script Activity must not have any implementation defined when it is being used for Groovy scripting. It functions purely as a container for the Groovy script.

 

Before we can start scripting we have to define the imports for the script, similar to what we would do in Java. First, let's take a look at the Scripting Catalog to see what is already there. This will help us understand what we need to import.

 

24) In the Scripting Catalog expand the oracle->scripting nodes to see what is already available to us.

 

Here we can see the Business Objects we have already created and all the elements that are included in the schema files that we imported.

 

ScriptingCatalog


Now we need to recall the format of the GenericRequest, which is the target data structure of our transformation. We need to know this so we can choose the correct imports for our Groovy script.

 

soapui


Above we can see that a GenericRequest contains the following elements:

 

  • Service->Document->Field
  • Service->Document->File->Contents

 

25) Now return to the Scripting tab and enter the following in the Script Editor window. As you can see, this is just a comment and a statement that prints output to the console; the output appears directly in the WebLogic Server diagnostic log.

 

//You can add comments like this
//You can print to the console like this during your development/testing procedures 
println("Starting transformation of Incident to Generic Request")

 

Tip: Printing to the console log like this should only be used in development scenarios and should be removed for production. Alternatively, we could add logic to log messages only conditionally, for example based on a payload value or a composite MBean.
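As a minimal, hedged sketch of such conditional logging (the debugFlag element is hypothetical and not part of the Incident schema used in this demo), the idea could look like this:

//Hedged sketch: only print when a (hypothetical) debugFlag element of the
//incoming incident payload is set to "true"
boolean debugEnabled = "true".equalsIgnoreCase(this.incidentDO.debugFlag)
if (debugEnabled) {
    println("Starting transformation of Incident to Generic Request")
}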

 

Selecting the Scripting Imports

Now we need to add in the imports for the elements that we will be using.

26) Click the Select Imports button on the top right of the editor to open the Select Imports window

SelectImports


27) Click the green Add icon and click with the mouse cursor in the new import row that appears

 

SelectImports2


28) Type oracle. (oracle and a dot)

 

OracleDot


The context menu will now open up to help you find the correct package path.

 

ConextMenu


Tip: Do not use the cursor keys until you have clicked inside the context menu with your mouse; otherwise the context menu will disappear.

 

29) Now use the cursor keys to choose oracle.scripting.xml.com.oracle.ucm.type.Service, or type it in directly, then click the Add icon to add another import.

 

Imports


30) Add the following imports and click OK

 

oracle.scripting.xml.com.oracle.ucm.type.Service
oracle.scripting.xml.com.oracle.ucm.type.File
oracle.scripting.xml.com.oracle.ucm.elem.Field
oracle.scripting.xml.com.oracle.ucm.type.Service.Document

 

Writing the Groovy Expression

31) Return to the Groovy Script editor window.

 

Now we need to define the classes we will use to build our GenericRequest: a Service, a Document, a Field, a File and two arrays for the lists of files and fields.

 

Tip: In essence we are just instantiating POGOs (plain old Groovy objects) that are a Groovy representation of our GenericRequest element.

 

32) Now enter the following code after the debug code you entered earlier

 

//Define the message element types for data population

//The Service element
Service service = new Service()
//The Document element
Document document = new Document()
//The File element (base64 message embedded attachment)
File file = new File()
//The Field element
Field field = new Field()
//An array of type Field
List<Object> fields = new ArrayList()
//An array of type File
List<Object> files = new ArrayList()

 

We have now created our POGO objects and need to populate them with real data. Since we are transforming an Incident into a GenericRequest, most of our data comes from the data object incidentDO, which we populated from the argument.

We will start by creating each of the individual Field elements and assigning them to the array, since these constitute the bulk of our message.

Our first field looks like this.

 

FirstField

 

It contains an XML schema attribute called name and a value, which is the internal BPM process ID of the in-flight process.

Type field.set (field dot set) in the expression editor to show the context list of the available methods for the field object. We can see that the methods to set and get data from the field POGO already exist.

 

FieldDot


32) Type in the following expression to populate the first Field element and add it to the array at position 0 (appears first in the message)

 

//dDocName element containing the BPM process instance ID
field.setName("dDocName")
field.setInner_content(predef.instanceId)
fields.add(field)

 

Tip: We could get the BPM process instance ID by executing an XPath expression in a data association. However, BPM 12c conveniently provides several predefined variables, available from predef, some of which can also be updated in a Groovy expression. See the full list here.

 

The next field that we need to populate in the GenericRequest is the dDocTitle, which comes from the incident subject.

The transformed element looks like this.

 

SecondField

 

This time we get the value from the process data object incidentDO by directly calling the get method.

 

33) Add the following expression to the end of the script.

 

//dDocTitle from the incident subject
field = new Field()
field.setName("dDocTitle")
field.setInner_content(this.incidentDO.subject)
fields.add(field)

 

Now, this is really straightforward, right? With the power of Groovy expressions it really is.

Now imagine that you wanted to implement some complicated if/then logic to include some elements only conditionally. All you need to do is write that logic into the script. Perhaps you need to format some dates, concatenate some string values or convert some data types: again, easy as pie, as sketched below.
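As a hedged sketch of such logic (the xSummary field name and the fallback text are illustrative only and not part of the demo profile; subject and incidentType do come from the Incident business object), conditional defaulting and string concatenation could look like this:

//Illustrative only: default the subject when it is empty, then build a
//concatenated value for a hypothetical xSummary field
String summary = this.incidentDO.subject
if (summary == null || summary.trim().isEmpty()) {
    summary = "No subject provided"
}
field = new Field()
field.setName("xSummary")
field.setInner_content("${this.incidentDO.incidentType}: ${summary}")
fields.add(field)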

Consider the xIncidentDate field below. Here we take a date and convert it into the format Webcenter Content requires, in just a few lines.

 

ConvertDate


34) Now add the remaining field definitions to the expression.

 

field = new Field()
field.setName("dDocAuthor")
field.setInner_content(this.incidentDO.reporter)
fields.add(field)
   
field = new Field()
field.setName("dDocAccount")
field.setInner_content("incident");
fields.add(field)
  
field = new Field()
field.setName("dSecurityGroup")
field.setInner_content("webcenter")
fields.add(field)
  
field = new Field()
field.setName("dDocType")
field.setInner_content("Incident")
fields.add(field)
  
field = new Field()
field.setName("xClbraRoleList");
field.setInner_content(":CaseMgr(RW),:CaseWorker(RW),:ActionOfficer(RW)");
fields.add(field)
  
field = new Field()
field.setName("xClbraUserList");
field.setInner_content("&${this.incidentDO.getReporter()}(RW)");
fields.add(field)
  
field = new Field()
field.setName("xIdcProfile")
field.setInner_content("IncidentRecord")
fields.add(field)
  
field = new Field()
field.setName("xComments")
fields.add(field)
  
field = new Field()
field.setName("xCitizenName")
field.setInner_content(this.incidentDO.name);
fields.add(field)
  
field = new Field()
field.setName("xEMail")
field.setInner_content(this.incidentDO.email);
fields.add(field)
  
field = new Field()
field.setName("xCity")
field.setInner_content(this.incidentDO.city)
fields.add(field)
  
field = new Field()
field.setName("xGeoLatitude")
field.setInner_content(this.incidentDO.geoLatitude)
fields.add(field)
  
field = new Field();
field.setName("xGeoLongitude");
field.setInner_content(this.incidentDO.geoLongitude);
fields.add(field);

field = new Field()
field.setName("xIncidentDate")
Calendar nowCal = this.incidentDO.getDate().toGregorianCalendar()
Date now = nowCal.time
String nowDate = now.format('M/d/yy HH:mm aa')
field.setInner_content(nowDate)
fields.add(field)
  
field = new Field()
field.setName("xIncidentDescription")
field.setInner_content(this.incidentDO.description)
fields.add(field)
  
field = new Field()
field.setName("xIncidentStatus")
field.setInner_content(this.incidentDO.incidentStatus)
fields.add(field);
  
field = new Field()
field.setName("xIncidentType")
field.setInner_content(this.incidentDO.incidentType)
fields.add(field)
  
field = new Field();
field.setName("xLocationDetails")
field.setInner_content(this.incidentDO.locationDetails)
fields.add(field)
  
field = new Field()
field.setName("xPhoneNumber")
field.setInner_content(this.incidentDO.phoneNumber.toString())
fields.add(field)
  
field = new Field()
field.setName("xStreet")
field.setInner_content(this.incidentDO.street)
fields.add(field)
  
field = new Field();
field.setName("xStreetNumber");
field.setInner_content(this.incidentDO.streetNumber);
fields.add(field);
  
field = new Field()
field.setName("xPostalCode")
field.setInner_content(this.incidentDO.getPostalCode());
fields.add(field)
  
field = new Field()
field.setName("xTaskNumber")
field.setInner_content(this.incidentDO.taskNumber)
fields.add(field)

 

The next element to add is the embedded base64 attachment. We add this in a similar fashion.

 

34) Add the following expression.

 

file.setContents(this.incidentDO.attachment.file)
file.setName("primaryFile")
file.setHref(this.incidentDO.attachment.name)
files.add(file)

 

Now we have nearly finished our Groovy script. All that remains is to:

 

  • Add the arrays to the Document element
  • Add the Document element to the Service element
  • Add the Service to the process data object genericRequestDO

 

35) Add the following expression for the Document, Service and genericRequestDO

//Add Field and Files
document.setField(fields)
document.setFile(files)

//Add Document to Service
service.setDocument(document)
service.setIdcService("CHECKIN_UNIVERSAL")

//Add the Service element to data object genericRequestDO
genericRequestDO.setWebKey("cs")
genericRequestDO.setService(service)

 

The BPM script is now complete and your Studio Application should look similar to this.


Deploying the Process

Now we need to deploy the BPM process to our BPM server so we can test it. We are going to deploy to the new BPM 12c Integrated WebLogic Server that comes with Studio, but another server can be used if preferred.

 

Tip: If this is the first deployment to the Integrated WebLogic Server, Studio will ask for parameters and create the domain before deployment.

 

36) In the Application Explorer right-click the GroovyDemo project and select Deploy->GroovyDemo->Deploy to Application Server->Next->Next->IntegratedWeblogicServer->Next->Next->Finish

 

deploy1


deploy2


The deployment log should complete successfully.


Testing the Deployed Process

Now it is time to test the process. We will invoke our BPM process through the web service test page.

37) Open a browser window, go to the Web Services Test Client page at http://localhost:7101/soa-infra/ and log in with the weblogic user.

Click on the Test GroovyDemoProcess.service link.


38) Click on the start operation

 

teststartopp


39) Click on the Raw Message button to enter a raw XML SOAP payload.

 

raw

 

In the text box, paste the following sample SOAP request containing an Incident payload.

 

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:gro="http://xmlns.oracle.com/bpmn/bpmnProcess/GroovyDemoProcess" xmlns:v1="http://opengov.com/311/citizen/v1">
   <soapenv:Header/>
   <soapenv:Body>
      <gro:start>
         <v1:Incident>
            <v1:Name>Joe Bloggs</v1:Name>
            <v1:Email>joe.blogs@mail.net</v1:Email>
            <v1:PhoneNumber>12345</v1:PhoneNumber>
            <v1:Reporter>03a7ee8a-ae3f-428b-a525-7b50ac411234</v1:Reporter>
            <v1:IncidentType>Animal</v1:IncidentType>
            <v1:IncidentStatus>OPEN</v1:IncidentStatus>
            <v1:Date>2014-09-17T18:49:45</v1:Date>
            <v1:Subject>There is a cow in the road</v1:Subject>
            <v1:Description>I have seen a big cow in the road. What should I do?</v1:Description>
            <v1:GeoLatitude>37.53</v1:GeoLatitude>
            <v1:GeoLongitude>-122.25</v1:GeoLongitude>
            <v1:Street>500 Oracle parkway</v1:Street>
            <v1:StreetNumber>500</v1:StreetNumber>
            <v1:PostalCode>94065</v1:PostalCode>
            <v1:City>Redwood City</v1:City>
            <v1:LocationDetails>Right in the middle of the road</v1:LocationDetails>
            <v1:Attachment>
               <v1:File>aGVsbG8KCg==</v1:File>
               <v1:Name>hello.txt</v1:Name>
               <v1:Href/>
            </v1:Attachment>
         </v1:Incident>
      </gro:start>
   </soapenv:Body>
</soapenv:Envelope>

 

40) Click the Invoke button in the bottom right hand corner

 

invoke


41) Scroll down to the bottom to see the Test Results


Congratulations! We can see that the Incident request we sent to Oracle BPM 12c has been transformed to a Webcenter Content GenericRequest using Groovy Scripting.

 

Tip: The Web Services Test Client is a lightweight way to test deployed web services without using Enterprise Manager. For full instance debugging and instance details use Enterprise Manager or the Business Process Workspace.

 

If we track this instance in Enterprise Manager, we can see what happened at runtime in graphical form.

 

graph


We can also look at the log from the Integrated WebLogic Server in Studio, which shows the debug message we included.


Tip: This process could easily be remodelled to be asynchronous or reusable, and the transformed GenericRequest could be used in the input association of a Service Activity to actually invoke the Webcenter Content SOAP service.

The actual implemented process that this example comes from, in the B2C scenario, looks like this. It is a reusable process that waits for the upload to Webcenter Content to complete before querying the final document details and returning to the main BPM process.

CreateContent


Summary

In this blog we introduced Groovy BPM Scripting in BPM 12c. First we learned how to model a synchronous BPM process based on predefined XML schema types.

We then covered the following aspects of BPM Scripting:

  • Where and how we should use BPM scripting in a BPM process.
  • How to import classes
  • Instantiate and declare groovy objects
  • Print debug messages to the weblogic log file
  • Use process data objects
  • Use predefined variables
  • Format data
  • Dynamically build data object data structures
  • Programmatically transform data between different XML schemas types
  • Deploy and test using the Web Services Test Client tool

 

In the next blog in this series I will demonstrate how to define and use BPM scripting in Business Objects, and how to handle exceptions in BPM scripting.

 

Tip: For more information on BPM Scripting (e.g. the list of predefined variables), see the section Writing BPM Scripts in the official BPM documentation.


The Parking Lot Pattern


The parking lot pattern is a strategy in Oracle SOA Suite to stage data in an intermediary store prior to complete processing by SOA Suite itself.  This pattern was spearheaded years ago by Deepak Arora and Christian Weeks of the Oracle SOA A-Team.  It has been implemented in various verticals to address processing challenges which include batch, complex message correlation/flows, throttling, etc.  To detail the pattern, this write-up discusses the components of a batch-related implementation.

The Parking Lot

The implementation of the “parking lot” can be done using various storage technologies like JMS, database, or Coherence (just to mention a few).  However, Oracle strongly recommends that a database table be used for simplicity.  The table structure typically contains state tracking and metadata relating to the payload that will be processed.  In our batch-processing example the table would contain: a row identifier column, a batch identifier column, a state column, maybe a type identifier column, maybe a priority indicator column, and finally the data/payload column.

 

  • LOT_ID: The identifier for the parking lot row, usually some sort of sequence identifier. Primary key for the table.
  • BATCH_ID: An identifier for the batch, shared across all rows within the batch.
  • STATE: The state of the row; commonly a single character representing the states the row transitions through. This field is usually used by the database adapter’s polling functionality as a “logical delete” indication. Example values: N (new), R (reserved), P (processing), C (complete).
  • SUBTYPE (optional): An optional subtype indicator, some sort of meta property about the input row. Note: don’t overload this to process both new orders and bulk inventory updates; there should be separate parking lots for truly separate types.
  • PRIORITY (optional): An optional priority indication to allow the database adapter to pull these rows first for processing.
  • DATA (alternative 1): A CLOB containing a string of the data in XML form. See discussion below.
  • DATA (alternative 2): A reference to data populated elsewhere in the system. For example, the order could be stored in a separate “pending orders” table and this column could hold an identifier for that row. See discussion below.
  • DATA (alternative 3): Inline the data as columns directly within the parking lot table (effectively combining the table from alternative 2 with the parking lot table). See discussion below.

 

Some things to note:

  • There should be one parking lot per general type; do not overload a single parking lot with multiple types (for example orders and inventory updates).
  • The parking lot table is anticipated to be busy. Ensure you clean up stale data through regular purging.

Data Representation Within the Parking Lot

There are at least three possible alternatives for storing the actual data within the parking lot.  Each option has different properties that need to be considered:

1. Store the data as a CLOB in XML form. This is the simplest approach, especially for complex data types. It adds some additional overhead writing and reading the CLOB, as well as transforming between the XML and the CLOB. Note that these costs would apply to XMLTYPE as well, and since there is no need for visibility into this data while it is in the database, XMLTYPE doesn’t provide any benefit.
2. Store the data separately in other tables with fully realized columns. This solution is most appropriate if the application is already doing it. That is, if the de-batching process is already copying the input payload to a tabular format in the database table, then this data format could be leveraged for the parking lot.
3. Combine the table that might otherwise exist in #2 with the parking lot itself. While this solution might prove to be the most performant, it can only work for simple data structures in the parking lot.

Database Adapter Usage

The parking lot process would be implemented as a SOA composite with a database adapter and a BPEL process.  The database adapter would read and dispatch individual rows to the BPEL process, creating an instance per order.

The database adapter supports various polling strategies.  Oracle recommends using the “logical delete” strategy, whereby a particular value of the STATE column would be asserted as part of the polling operation: SELECT <column list> FROM PARKING_LOT WHERE STATE=’N’.  The query is additionally enhanced with pessimistic locking functionality that allows for parallel execution from many separate nodes simultaneously, so this works seamlessly in a cluster. Finally, a “reserved value” should be specified for full distributed polling support (the reserve value is updated during the poll so that the row is no longer a candidate on other nodes, until the transaction can complete).

There is an alternative database polling approach known as “SKIP LOCKING” (see http://docs.oracle.com/cd/E21764_01/integration.1111/e10231/adptr_db.htm#BGBIJHAC and DB Adapter – Distributed Polling (SKIP LOCKED) Demystified ).  While the skip locking approach has several advantages, it does not allow the intermediate states to be committed to the database.  The result is that it does not give the same stateful visibility to other processes that may be interested in the current state within the parking lot; for example, an OSB status monitoring service that provides the user with a means to check the status of the batch they submitted.

The database adapter supports various tuning properties that give very fine-grain control over its behavior, such as the number of poller threads, the number of rows to read per cycle, the number of rows to pass to the target BPEL process, and so on.  For more information about the database adapter, please refer to http://docs.oracle.com/cd/E21764_01/integration.1111/e10231/adptr_db.htm.  The Oracle Fusion Middleware Performance and Tuning Guide also covers database adapter tuning at http://docs.oracle.com/cd/E21764_01/core.1111/e10108/adapters.htm#BABDJIGB.

B2B Event Queue Management for Emergency


Executive Overview

Many customers face a crisis in a production system when, for some reason, they end up with several B2B messages stacked up in the system that may not be of high priority to process at that point in time. In other words, it would greatly help many customers if, in such critical situations, they had an option to flush the backed-up messages from the system for later resolution and simply continue with processing of the current messages.
A-Team has been involved with different customers worldwide, helping them implement such a solution for emergency use. Without getting into too much technical detail, a high-level approach for such a solution is discussed here. The methodology accomplishes two key tasks that are of primary importance during an emergency within a B2B production gateway:

  • Allows the event queue to be flushed while the gateway is down, so that the gateway can be brought up quickly
  • Allows introspection of the messages extracted from the event queue for resubmission or rejection

The primary objective of this framework is to allow the B2B engine to come back up quickly after flushing the messages from the event queue. The recovery or resubmission of messages is usually reviewed manually by the operations and business teams off-line and takes a longer cycle to complete. But this should not affect the down-time of the production system after the fast removal of the messages from the event queue. The downtime, thus encountered, is only driven by the first task, as listed above.

Solution Approach

Overview

The solution consists of immediate cleanup of messages from the system. The entries will be stored in files. After the files are created, the gateway will be ready for normal processing without any impact of messages that were previously present in the system.
After the gateway is opened for normal business, the analysis of the file contents can be carried out, in parallel, to decide which messages will be resubmitted or discarded. This analysis can be done via scripts to extract relevant pieces of business data for the messages removed. The scripts are decoupled for various types of transient message data and built on basic query utilities. The basic building blocks for data introspection are typically custom scripts, that are created based on specific business needs for analysis.
The analysis will create 2 lists of message IDs – one for resubmission and the other for rejection. Existing command-line utilities can be invoked to resubmit the messages in a scripted loop with configurable delays in between the resubmissions. For rejection, there is typically no processing required. However, the list of IDs will be used to update the database to reflect a final state for the appropriate messages.

Tasks and Activities

The following sections describe the tasks in greater detail. Sections I and II cover the activities that need to be completed while the gateway is down. Sections III and IV cover the post-mortem phase for analysis of the messages removed from the system.
The flowchart below can be used as a reference for the critical cleanup tasks covered in Sections I and II.

eventq

I. Preparation of Environment

If the gateway is down, it is important to bring it up in a maintenance mode so that the cleanup of transient messages in the system can be completed. Otherwise, if the gateway is running, it has to be restarted to enable maintenance mode. This can be achieved with the following sequence:

  • If the SOA/B2B environment is not up and running, start the Admin Server. Otherwise, this step can be skipped.
  • Pause the consumption of messages coming in to the B2B engine via external and internal listening channels.
  • Change the startup mode of SOA managed server to ADMIN mode.
  • Change the startup mode of SOAJMSServer to pause at server startup.
  • For a running environment, stop SOA managed servers and restart Admin Server. Otherwise, this step can be skipped.
  • Start SOA Managed Servers.

II. Cleanup of Transient Messages

There are four areas that require attention when there is a gateway outage and the whole B2B cluster is down. The four areas are:

  • B2B Event Queue – Weblogic JMS Queue, B2B_EVENT_QUEUE
  • SOA Quartz Scheduler – SOA Repository Database Table, SOAQTZ_JOB_DETAILS
  • B2B Sequence Manager – SOA Repository Database Table, B2B_SEQUENCE_MANAGER
  • B2B Pending Batch Jobs – SOA Repository Database Table, B2B_PENDING_MESSAGE

These four areas require attention since they contain information about in-flight messages that have not been processed to their final states. Depending on the specific environment, the cleanup is at most a four-step process, where only the first step is mandatory.

  • The B2B Event queue contents will be exported to a file for later analysis and the queue contents will be purged thereafter.
  • The SOA Quartz Scheduler tables key contents will be exported to a file for later analysis and purged (optional – only applicable to message retries).
  • The B2B Sequence Manager table key contents will be exported to a file for later analysis and purged (optional – only applicable to scheduled Partner downtime).
  • The B2B Pending Batch table key contents can be exported to a file for later analysis and purged (optional – only applicable to batching use cases)

After the above-mentioned 4 steps are completed, the B2B gateway can be started in normal processing mode. One of the key metrics for the solution will be how soon these 4 steps can be completed, so that the gateway can be brought up for ongoing business. Only step 1 above requires the preparation described in Section I (Preparation of Environment).
Steps 2, 3, and 4 can be performed with only the database up (i.e. while the Admin and Managed Servers are both down).

III. Message Data Analysis

After the gateway is up and running, the analysis of all the entries backed up can be carried out for further resubmission or rejection. The main objective of the analysis phase is to gather sufficient business data for each message ID to help operational analysis. The analysis for the backed up messages will be addressed based on the source.
The flowchart below can be used as a reference for the message data analysis tasks covered in Sections III and IV.

eventq2

A. B2B Event Queue, JMS Queue – Mandatory
  • Shell script based utilities can be used to read message IDs from the JMS export file, generated in Section II.
  • Entries existing in b2b_instancemessage view: Message IDs can be joined with the view to get desired information about messages for business analysis (for the most part, new incoming or outgoing messages referenced by the B2B Event Queue would not be available in the b2b_instancemessage view)
  • Entries not existing in the b2b_instancemessage view: All such message IDs can be scanned to save the payload into a file, that can be processed by a customized shell script to extract any field for further analysis.
  • Other system level entries (optional): Can be put back in the event queue via JMS Import utility in Weblogic console.
B. SOA Quartz Scheduler, SOA Repository Table – Optional
  • Message IDs from SOAQTZ_JOB_DETAILS table can be joined with b2b_instancemessage view for data analysis via custom script utilities.
C. B2B Sequence Manager, SOA Repository Table – Optional
  • Message IDs from B2B_SEQUENCE_MANAGER table can be joined with b2b_instancemessage view like shown in Section B above.
D. B2B Pending Batch Messages, SOA Repository Table – Optional
  • Message IDs from B2B_PENDING_MESSAGE table can be joined with b2b_instancemessage view like shown in Section B above.

IV. Message Resubmission/Rejection

At the end of the analysis phase, the list of Message IDs for resubmission and rejection will be available. The resubmission list can then be read by custom shell scripts to process individual messages via existing command-line utility, driven by parameters to control pause interval and looping criterion.
In general, no further action should be required for rejected messages. In certain exceptional situations, a database script can be run to change the state of such messages to a final state.

Summary

The above approach has been successfully implemented and used in production systems by customers for many years and is a well-proven technique. The entire package has been delivered as a consulting solution and the customer is responsible for all the scripts and artifacts developed. However, as newer versions of B2B are released, there could be other alternate options available as well. For further details, please contact the B2B Product Management team or SOA/B2B group within A-Team.

Acknowledgements

B2B Product Management and Engineering teams have been actively involved in the development of this solution for many months. It would not have been possible to deliver such a solution to the customers without their valuable contribution.

Submitting an ESS Job Request from BPEL in SOA 12c


Introduction

SOA Suite 12c added a new component: Oracle Enterprise Scheduler Service (ESS). ESS provides the ability to run different job types distributed across the nodes in an Oracle WebLogic Server cluster. Oracle Enterprise Scheduler runs these jobs securely, with high availability and scalability, with load balancing and provides monitoring and management through Fusion Middleware Control. ESS was available as part of the Fusion Applications product offering. Now it is available in SOA Suite 12c. In this blog, I will demonstrate how to use a new Oracle extension, “Schedule Job”, in JDeveloper 12c to submit an ESS job request from a BPEL process.

 

Set up a scheduled job in Enterprise Scheduler Service

1. Create a SOA composite with a simple synchronous BPEL process, HelloWorld.
2. Deploy HelloWorld to Weblogic.
3. Logon to Fusion Middleware Enterprise Manager.
4. Go to Scheduling Services -> ESSAPP -> Job Metadata -> Job Definitions. This takes you to the Job Definitions page.

2

 

5. Click the “Create” button; this takes you to the Create Job Definition page. Enter:

Name: HelloWorldJob

Display Name: Hello World Job

Description: Hello World Job

Job Type: SyncWebserviceJobType

Then click “Select Web Service…”. It pops up a window for the web service.

39

6. On the “Select Web Service” page, select Web Service Type, Port Type, Operation, and Payload. Click “Ok” to finish creating job definition.

8

Secure the Oracle Enterprise Scheduler Web Service

The ESS job cannot be run as an anonymous user, so you need to attach a WSM security policy to the ESS Web Service:

1. In Fusion Middleware Enterprise Manager, go to Scheduling Services -> ESSAPP, right click, select “Web Services”.

3

2. In Web Service Details, click on the link “ScheduleServiceImplPort”.

4

3. Open tab “WSM Policies” and click on “Attach/Detach”.

5

4. In “Available Policies”, select “oracle/wss_username_token_service_policy”, click “Attach” button to attach the policy and then click on “Ok” to finish the policy attachment.

6

5. You should see the policy attached and enabled.

7

Create a SOA Composite to Submit a HelloWorldJob

1. Create a new SOA Application/Project with an asynchronous BPEL (2.0) process, InvokeEssJobDemo, in JDeveloper 12c.

2. Create a SOA_MDS connection.

14

3. Enter SOA MDS database connection and test connection successfully.

15

4. Add a Schedule Job from Oracle Extensions to InvokeEssJobDemo BPEL process.

16

5. Double click the newly added Schedule Job activity. This brings up the Edit Schedule Job window.

6. Enter Name “ScheduleJobHelloWorld”, then click “Select Job” button.

17

7. This brings up the Enterprise Scheduler Browser. Select the MDS Connection and navigate down the ESS Metadata to find and select “HelloWorldJob”.

18

8. To keep it simple, we did not create a job schedule. So there is no job schedule to choose. If you have job schedules defined and would like to use them, you can choose a Schedule from the MDS connections.

9. Set Start Time as current date time, and click OK.

19

10. You may see this pop up message.

20

11. Click “Yes” to continue. In the next several steps we will fix this by replacing the WSDL URL with a concrete binding on the reference binding.

12. In EM, go to Scheduling Services -> Web Services.

21

13. Click on link “SchedulerServiceImplPort”

22

14. Click on link “WSDL Document SchedulerServiceImplPort”.

23

15. It launches a new browser window displaying the ESSWebService wsdl. WSDL URL is in the browser address.

24

16. Update EssService WSDL URL.

25

17. You need to attach WSM security policy to EssService request.

26

18. Add Security Policy: oracle/wss_username_token_client_policy.

27

19. Setting up the credential store for the policy framework is beyond the scope of this blog. We will use a shortcut: the default weblogic user and password, set as Binding Properties on the EssService reference binding to which the security policy is attached.

40

 

20. Build and deploy InvokeEssJobDemo.

21. Test InvokeEssJobDemo web service.

29

22. It should show that the web service invocation was successful.

34

23. Launch flow trace. We can see that Job 601 was successfully submitted.

32

24. Go to ESSAPP -> Job Requests -> Search Job Requests and find Job 601. The job was executed successfully.

35

 

Summary

In this blog, we demonstrated how to set up a SOA web service ESS job and how to invoke the ESS web service to submit a job request from a BPEL process in SOA Suite 12c.

 

MFT – Setting up SFTP Transfers using Key-based Authentication


Executive Overview

MFT supports file transfers via SFTP. Often MFT customers receive a public key from their partners and want to use it to receive files via SFTP. This blog describes the setup required to enable such an MFT flow, receiving files from partners using key-based authentication.

MFT includes an embedded SFTP server. We will configure it with the supplied public key to receive files from remote partners. Upon receipt of a file, a simple MFT transfer will initiate and place the file in a pre-defined directory within the local filesystem.

Solution Approach

Overview

The overall solution consists of the following steps:

  • Generate public-private key pair on the remote machine and copy the public key to MFT server
  • Generate public-private key pair on the machine running MFT server
  • Import the private key in MFT keystore
  • Import the public key from partner in MFT keystore
  • Configure SFTP server with private key alias
  • Configure MFT users and corresponding SFTP directories to be used by remote partners
  • Enter SSH Keystore password
  • Restart MFT Server
  • Create Embedded SFTP Source
  • Create File Target
  • Create an MFT transfer using the above source and target
  • Deploy and Test

Task and Activity Details

The following sections will walk through the details of individual steps. The environment consists of the following machines:

  • VirtualBox image running MFT 12c on OEL6 (oel6vb)
  • Remote Linux machine used for initiating the transfer via SFTP client (slc08vby)

I. Generate public-private key pair on the remote machine and copy the public key to MFT server

To generate a private-public key pair, we use the command-line tool ssh-keygen. The tool creates 2 files for private and public key. For our purposes in this exercise, we will only be using the public key by copying it to the MFT machine from here. As a best practice, all the key files are saved in $HOME/.ssh/authorized_keys directory. A transcript of a typical session is shown below.

[slahiri@slc08vby authorized_keys]$ pwd
/home/slahiri/.ssh/authorized_keys
[slahiri@slc08vby authorized_keys]$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/slahiri/.ssh/id_rsa): sftpslc
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in sftpslc.
Your public key has been saved in sftpslc.pub.
The key fingerprint is:
56:db:55:48:4c:db:c4:e1:8b:70:40:a8:bf:12:07:94 slahiri@slc08vby
The key’s randomart image is:
+–[ RSA 2048]—-+
|        . oo +o++|
|       E .  . +=.|
|      . . .. .o..|
|       o . oo.. .|
|        S . .. . |
|       o o       |
|        o .      |
|       . .       |
|        .        |
+—————–+
[slahiri@slc08vby authorized_keys] ls
sftpslc  sftpslc.pub
[slahiri@slc08vby authorized_keys] scp sftpslc.pub oracle@10.159.179.84:/home/oracle/.ssh/authorized_keys
oracle@10.159.179.84’s password:
sftpslc.pub                                   100%  398     0.4KB/s   00:00
[slahiri@slc08vby authorized_keys]

II. Generate public-private key pair on the machine running MFT server

As shown in the previous step, ssh-keygen is used on the MFT machine to generate a key pair. From the pair generated here, we will only be using the private key for our exercise. The session transcript is shown below.

[oracle@oel6vb authorized_keys]$ pwd
/home/oracle/.ssh/authorized_keys
[oracle@oel6vb authorized_keys]$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): sftpmft
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in sftpmft.
Your public key has been saved in sftpmft.pub.
The key fingerprint is:
36:a8:ac:a7:0c:bd:34:c9:bd:cd:1b:fe:05:a8:1d:47 oracle@oel6vb
The key’s randomart image is:
+–[ RSA 2048]—-+
| |
| |
| E |
| + |
| + S |
| o + + + o |
|. * = o . |
| + +.= . . |
| =o. =o. |
+—————–+
[oracle@oel6vb authorized_keys]$ ls
sftpmft sftpmft.pub
[oracle@oel6vb authorized_keys]$

III. Import the private key in MFT keystore

The private key from Step II is imported into MFT keystore using WLST utility. It must be noted that for MFT, a different version of WLST is shipped and installed with the product. It is found in /mft/common/bin directory. The version of WLST in this directory must be used. The WLST session should be connected to the MFT Server port using an Administrative credential. A typical session transcript is shown below.

[oracle@oel6vb authorized_keys]$ cd /u01/oracle/SOAInstall/mft/common/bin
[oracle@oel6vb bin]$ ./wlst.sh
CLASSPATH=:/u01/oracle/SOAInstall/mft/modules/oracle.mft_12.1.3.0/core-12.1.1.0.jar

Initializing WebLogic Scripting Tool (WLST) …

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline> connect("weblogic","welcome1","t3://localhost:7003")
Connecting to t3://localhost:7003 with userid weblogic ...
Successfully connected to managed Server "mft_server1" that belongs to domain "base_domain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

wls:/base_domain/serverConfig> importCSFKey('SSH', 'PRIVATE', 'MFTAlias', '/home/oracle/.ssh/authorized_keys/sftpmft')
CSF key imported successfully.
wls:/base_domain/serverConfig> listCSFKeyAliases('SSH', 'PRIVATE')
Key Details
--------------------------------------------------------------------------
'MFTAlias', Format PKCS#8, RSA

IV. Import the public key from partner in MFT keystore

The same WLST session can be used to import the public key copied over from the remote machine in Step I. It must be noted that the public key alias used here should be the same as the userID that is to be used by the remote SFTP client to connect to the embedded SFTP server. Transcript of a sample session is shown below.

wls:/base_domain/serverConfig> importCSFKey('SSH', 'PUBLIC', 'MFT_AD', '/home/oracle/.ssh/authorized_keys/sftpslc.pub')
CSF key imported successfully.
wls:/base_domain/serverConfig> listCSFKeyAliases('SSH', 'PUBLIC')
Key Details
--------------------------------------------------------------------------
'MFT_AD', Format X.509, RSA

wls:/base_domain/serverConfig> exit()

Exiting WebLogic Scripting Tool.

[oracle@oel6vb bin]$

V. Configure SFTP server with private key alias

After logging in to MFT UI, go to Administration Tab. Under Embedded Servers, go to sFTP tab and complete the following:

  1. enable SFTP
  2. set PublicKey as authenticationType
  3. set KeyAlias to the private key alias set during import in Step III.
  4. save settings

Example screenshot is shown below.

BSrvr

VI. Configure MFT users and corresponding SFTP directories to be used by remote partners

From the MFT UI, under the Administration tab, configure the user and the SFTP root directory that will be used by the remote SFTP client session. Note that the userID will be the same as the public key alias used while importing the public key in Step IV.

Sample screenshots for user and directory are shown below.

BUser

VII. Enter SSH-Keystore Password

From the MFT UI, go to Administration tab and select KeyStore node in the left navigator tree.

Enter the password for SSH-Keystore as the same passphrase used during key pair generation on local machine in Step II.

Example screenshot is given below.

BKstr

VIII. Restart MFT Server

MFT Server should be restarted for most of the changes made in the earlier steps to take effect. This wraps up the administrative setup necessary for the exercise. The following sections are part of a simple MFT design process to create a source, target and transfer.

IX. Create Embedded SFTP Source

From MFT UI, go to the Designer tab. Create a SFTP Source pointing to the directory created in Step VI. Sample screenshot is shown below.

BSrc

X. Create File Target

For the sake of simplicity, a local file directory is chosen as the directory. From the MFT UI, navigate to the Designer tab and create a target as shown below.

BTrgt

XI. Create a transfer using the above source and target

From the Designer tab within MFT UI, create a transfer using the source and target created in Steps IX and X. Sample screenshot is shown below.

BTrfr

XII. Deploy and Test

After deploying the transfer, we are ready to test the entire flow.

We initiate the test by starting a simple, command-line SFTP client in the remote machine (slc08vby) and connecting to the embedded SFTP server running within MFT. The userID is the one specified in Step IV and VI (MFT_AD). The passphrase is the same as that used in generating the key pair in the remote machine during Step I.

After the sftp session is established, we put a file into the SFTP root directory of the user on MFT server machine, as specified in Step VI. The transcript from a sample SFTP client session is shown below.

[slahiri@slc08vby ~]$ cat ~/.ssh/config.sftp
Host 10.159.179.84
Port 7522
PasswordAuthentication no
User MFT_AD
IdentityFile /home/slahiri/sftpslc
[slahiri@slc08vby ~]$

[slahiri@slc08vby ~]$ sftp -F ~/.ssh/config.sftp 10.159.179.84
Connecting to 10.159.179.84…
Enter passphrase for key ‘/home/slahiri/sftpslc':
sftp> pwd
Remote working directory: /MFT_AD
sftp> put sftptest.txt
Uploading sftptest.txt to /MFT_AD/sftptest.txt
sftptest.txt                                  100%   24     0.0KB/s   00:00
sftp> quit
[slahiri@slc08vby ~]$

After the SFTP operation is completed, the MFT transfer takes over. MFT picks up the file from the embedded SFTP source and places it in the directory within the local file system, defined as target. Example screenshot from Monitoring Tab of MFT UI is shown below.

BFlow

Finally, we verify that our test file is saved in the local directory specified as the target in Step X.

[oracle@oel6vb in]$ pwd
/home/oracle/in
[oracle@oel6vb in]$ ls
sftptest.txt
[oracle@oel6vb in]$

Summary

The test case described here is one way to establish secure transfers with MFT. There are other use cases as well, which will be discussed in other parts of this blog series on MFT. For further details, please contact the MFT Product Management team or the SOA/MFT group within A-Team.

Acknowledgements

MFT Product Management and Engineering teams have been actively involved in the development of this solution for many months. It would not have been possible to deliver such a solution to the customers without their valuable contribution.

Throttling in SOA Suite via Parking Lot Pattern


The Parking Lot Pattern has been leveraged in many Oracle SOA Suite deployments to handle complex batching, message correlation, and complex processing flows. One scenario that is a frequent topic of discussion is throttling SOA Suite so as not to overwhelm slower downstream systems. Most often this is accomplished via the tuning knobs within SOA Suite and WebLogic Server. However, there are times when the built-in tuning cannot be tweaked enough to stop flooding slower systems. SOA design patterns can be leveraged when product features do not address these edge use cases. This blog will focus on using The Parking Lot Pattern as one implementation for throttling. Also note a working example is provided.

Throttling Parking Lot

The key piece of this pattern is the database table that will be used for the parking lot. The table is very simple and consists of three columns:

ID (NUMBER) – The unique ID/key for the row in the table.

STATE (VARCHAR) – Used for state management and logical delete with the database adapter. There are three values this column will hold:
1. N – New (not processed)
2. P – Processing (in-flight interaction with the slower system)
3. C – Complete (the slower system responded to the interaction)
The database adapter will poll for 'N'ew rows and will mark a row as 'P'rocessing when it hands it over to a BPEL process.

PAYLOAD (CLOB) – The message that would normally be associated with a component is stored here as an XML CLOB.
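
For reference, a minimal sketch of the parking lot table DDL, assuming an Oracle database (the column sizes and the constraint name are illustrative and should be adapted to your own standards):

-- Minimal parking lot table (illustrative sketch only)
CREATE TABLE THROTTLE_PARKINGLOT (
    ID       NUMBER        NOT NULL,                   -- unique key for the row
    "STATE"  VARCHAR2(1)   DEFAULT 'N' NOT NULL,       -- N = New, P = Processing, C = Complete
    PAYLOAD  CLOB,                                     -- message stored as an XML CLOB
    CONSTRAINT THROTTLE_PARKINGLOT_PK PRIMARY KEY (ID)
);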

The Use Case Flow

Without the parking lot, the normal flow for this use case would be:

1. Some client applications call SOA Suite via Web Service, JMS, etc.
2. An asynchronous BPEL instance is created and invokes the slower system for every client request within the tuning parameters of the SOA engine
3. The slower system cannot handle the volume and gets flooded

How the flow is changed with the parking lot:

1. Some client applications call SOA Suite via Web Service, JMS, etc.
2. Each client request is inserted into the parking lot table as an XML clob with STATE = ‘N’.
3. A composite containing a polling database adapter will select one row with STATE = ‘N’, provided the count of rows with STATE = ‘P’ is less than a throttle value (e.g., 5).
4. If the in-flight interactions with the slower system are less than the throttle value, the database adapter gets the next available row and marks it as being processed (STATE = ‘P’).
5. This row is handed off to an asynchronous BPEL process that will invoke a different BPEL process responsible for interacting with the slower system.
6. When the slower system responds and this response propagates back to the initiating BPEL process, the row is marked as complete (STATE = ‘C’).
7. Go to step 3 until all records have been processed.

The throttle control value represents the maximum number of in-flight BPEL processes that are interacting with the slower system. We will see later how this value can be changed at runtime through the SOA Suite Enterprise Manager console.

Configuring the Polling Database Adapter

The database adapter is the gate for flow control via the polling SQL statement. An “expert polling” configuration is required in order to set up the appropriate SQL statement. This configuration is a combination of getting artifacts created in JDeveloper using the DBAdapter Configuration Wizard and then manually tweaking the generated artifacts. The important steps in the wizard consist of:

1. Operation Type: Poll for New or Changed Records in a Table
2. After Read: Delete the Row(s) that were Read
3. Make sure Distributed Polling is checked
4. Add Parameter: MaxRowsProcessing

When the wizard finishes and the artifacts are created, there will be a file with the following naming convention: [Service Name]-or-mappings.xml (please note that you may have to edit this file outside of JDeveloper with 12c). It is in this file we will make changes that are considered “expert polling” configuration steps. The steps are not complicated:

1. Locate the <query …> element. If there are any child <criteria …></criteria> elements, remove them and all their children elements.
2. Between the <query …> element and <arguments> element, add <call xsi:type="sql-call"></call>
3. Within the <call …> element add a <sql></sql>
4. Within the <sql> element add the polling query. The blog example looks like:
SELECT
    ID,
    "STATE",
    PAYLOAD
FROM
    THROTTLE_PARKINGLOT 
WHERE
    (((SELECT COUNT("STATE") FROM THROTTLE_PARKINGLOT WHERE "STATE" = 'P') &lt; #MaxRowsProcessing) AND
    ("STATE" = 'N'))
ORDER BY ID ASC FOR UPDATE SKIP LOCKED
5. Locate the closing queries element (</queries>)
6. Between the </queries> element and </querying> element insert <delete-query></delete-query>
7. Within the <delete-query> element, add a <call xsi:type="sql-call"></call>
8. Within the <call …> element add a <sql></sql>
9. Within the <sql> element add the logical delete query. The blog example looks like:
<delete-query>
    <call xsi:type="sql-call">
      <sql>
      UPDATE THROTTLE_PARKINGLOT SET "STATE" = 'P' WHERE (ID = #ID)
      </sql>
    </call>
</delete-query>

Other Components

Now that the polling adapter is configured, we need an asynchronous BPEL process to handle the state management of the message. In the blog example, it is a very straightforward process:

1. Convert CLOB into payload for the slow system
2. Invoke the slow system
3. Receive the response from the slow system
4. Update row in the database with a complete state

ThrottleParkingLot12c_001

The state update is done through another DBAdapter configuration where the Operation Type is Update Only and the column is the STATE column. The state management BPEL process simply updates the STATE to ‘C’ using the row ID it already has as the key.
The blog example has one more BPEL process called SlowSystemSimulatorBPELProcess. This is an asynchronous BPEL process that will randomly generate a wait time in seconds between 20 and 240. It then uses a Wait activity to simulate a very slow and sporadic downstream system.

The Example

I have provided two SOA Suite 12c projects for the example:

1. ThrottleParkingLotTableLoader (sca_ThrottleParkingLotTableLoader_rev1.0.jar)
2. ThrottleParkingLotBlogExample (sca_ThrottleParkingLotBlogExample_rev1.0.jar)

Each project contains the necessary SQL scripts to get things setup in the database. Once the user and the table are set up, you will have to configure your database adapter for accessing the THROTTLE_PARKINGLOT table via the ATeam_Example user. To make it easier on you, use eis/DB/ATeamExample as the JNDI Name for the DBAdapter. Otherwise this will need to be changed in the .jca files before deploying the projects to your SOA server.

Once the projects are deployed, you can run a stress test on the ThrottleParkingLotTableLoader / AddPayloadToParkingLotMediator_ep to fill the parking lot with records. Once the parking lot has records, they should start being processed by the ThrottleParkingLotBlogExample composite. The initial setting for the MaxRowsProcessing property is 5, so the number of in-flight instances will be limited to 5:

ThrottleParkingLot12c_002

Within the SOA Suite Enterprise Manager, we can change the value of MaxRowsProcessing:

ThrottleParkingLot12c_003

Now we see that the number of in-flight instances has changed:

ThrottleParkingLot12c_004

This will allow runtime tweaking of the load on the downstream system. The value for MaxRowsProcessing can also be set to 0 (zero) to stop messages flowing to the downstream system. If you noticed, the polling query also leverages SKIP LOCKED, which should allow this to work in a clustered environment. However, I have not tested this, so feel free to try it out and provide feedback on your findings.

I do hope you find this a valuable option for finer grained throttling within SOA Suite.

BPM Workspace Login with libOVD and LDAP, Part 2: Login


Introduction

In Part 1, we looked at the initialization of libOVD at server startup. Now let’s examine what happens inside libOVD when you actually click on the login button in BPM Workspace. Again, we are looking at BPM 11g PS5 BP7 with Patch 17315336.

The Workspace login is a two-step process. The first step is checking the username and password against the LDAP. This step is performed in the WLS security layer rather than the JPS layer. This means that even if you turn on tracing in libOVD, you won’t see any trace log messages related to this step.

The second step starts after the user credentials are successfully verified. In this step, the Workspace first constructs a Workflow context object, which triggers a user lookup query against the LDAP, and then it makes a request to the BPM server which invokes the following method:

oracle.bpel.services.workflow.common.provider.WorkflowWSProvider.processMessage

 

This call triggers a second user lookup against the LDAP. The LDAP queries performed in this step are executed through the JPS and libOVD layers (not the WLS security layer). This post will focus on this step.

 

1. Building the WorkflowContext Object

This process includes three sub-steps: lookup user, find reportees and populate user details.

1.1. Lookup User

– LDAP query received by libOVD:

Base:
Scope: 2
Filter: (&(loginid=user1)(objectclass=person))
Attributes: [mail, sn, cn, description, usernameattr, orclguid, givenname, loginid, objectclass, displayname, usernameattr]

– Adapters selected:

Adapter#SchemaAdapter (does nothing, can be ignored)
Adapter#DefaultAuthenticator
Adapter#ODSEE
Adapter#RootAdapter (does nothing, can be ignored)

– Query Adapter#DefaultAuthenticator
        — Connect to the WLS embedded LDAP server at localhost:7001
        — Build a connection pool if specified
        — Map query to (&(uid=user1)(objectclass=person))
        — Change the LDAP search base to: ou=people,ou=myrealm,dc=bpmdomain
        — Perform the query (but yields no result in this case).

– Query Adapter#ODSEE
        — Connect to the WLS embedded LDAP server at localhost:7001
        — Build a connection pool if specified
        — Map query to (&(uid=user1)(objectclass=person))
        — Change the LDAP search base to: ou=people,dc=migration,dc=test
        — Perform the query which yields 1 result.

– Post search processing, which does nothing in this case.
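
For troubleshooting, the mapped query above can be reproduced manually against the backend directory with a standard LDAP client. A hedged sketch using ldapsearch follows; the host, port and bind DN here are placeholders, not values taken from the trace:

# Reproduce the mapped 1.1 lookup against the ODSEE backend (placeholders for host/port/bind DN)
ldapsearch -H ldap://ldap.example.com:389 \
  -D "cn=Directory Manager" -W \
  -b "ou=people,dc=migration,dc=test" -s sub \
  "(&(uid=user1)(objectclass=person))" cn sn mail givenname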

 

1.2. Get Reportees

– LDAP query received by libOVD:

BindDN:
Base: ou=people,ou=myrealm,dc=bpmdomain
Scope: 2
Attributes: [displayname, orclguid, manager, description, givenname, loginid, usernameattr, sn, cn, mail, objectclass, uid]
Filter: manager=cn=user1,ou=people,dc=migration,dc=test

– The rest of the process is the same as in 1. Lookup User
— Map query to: (manager=cn=user1,ou=people,dc=migration,dc=test)
— No result returned in my test case.

1.3. Populate User Details

– LDAP query received by libOVD:

Base: cn=user1,ou=people,dc=migration,dc=test
Scope: 0
Filter: objectclass=*
Attributes: []

– Adapter selected:

        Adapter#ODSEE

– Query Adapter#ODSEE
        — Map query to (objectclass=*)
        — Perform the query which yields no result in my test.

2. oracle.bpel.services.workflow.common.provider.WorkflowWSProvider.processMessage

As one can see below, the LDAP queries performed in this step are exactly the same as in step 1, building the WorkflowContext. It is not clear from the trace log why the same queries have to be repeated.

2.1 Lookup User

– Same as 1.1 above

2.2 Populate User Details

– Same as 1.3

Note: there is no query to get reportees in this process.

 

Other Issue

During testing, it was found that the property

<property name="use.group.membership.search.config" value="INDIRECT_ONLY"/>

has to be removed from jps-config.xml. Otherwise, the Administration link in the Workspace will be missing even for an administrator user.

 

In the next post of this series, we will look at the same login process in 11g PS6 and 12c, where some different behaviors were observed with some customers.

 

Getting Started with the REST Adapter in OEP 12c


Introduction

It is undeniable that we are living in the age of data. With the explosion of the internet, mobile-based devices, and social networks, data is becoming the most abundant resource on earth. Another factor that is massively contributing to this trend is IoT, an acronym for “Internet of Things”. Within the IoT realm, there are more devices than ever before, and increased connectivity leads to higher volumes of data being generated at faster rates. While most companies spend millions of dollars every year to store that data for future analysis, some companies are already leveraging this data in real-time to gain valuable information.

Getting insight through in-motion data analysis is paramount for companies that want to stay ahead of the competition, and there is a high demand for solutions that can transform these huge amounts of data into something meaningful. But if the idea is to capture events from the internet, mobile-based devices, social networks, and IoT, you will rarely come across a case that does not involve the REST architectural style. Thus, it becomes imperative that Oracle Event Processing (OEP) developers get familiar with the new 12c REST adapter.

This article will provide a step-by-step guide to implementing OEP applications that leverage the 12c REST adapter, and demonstrate how to set up the support for handling CSV, JSON and XML payloads.

Case Study: Blog Data Processing

During this article, an application that processes blog entries will be created. The purpose of this application is to receive events about blog entries via REST, with the data available in different types of payloads. After processing the events, the application prints each event to the output. Despite its simplicity, this scenario shows how the 12c REST adapter can be used in OEP applications.

Setting Up the OSGi Dependencies

OEP is an OSGi-based technology. After you create a new OEP project in Oracle Fusion Middleware JDeveloper, it is a best practice to define all your package dependencies in the MANIFEST.MF file before starting development. This practice will save you time when dealing with Java import statements and during application deployment.

To start using the REST adapter, there is no special package to be imported, but the REST adapter by itself does not do much without having the capability to handle payloads like CSV, JSON and XML. For this reason, you need to import the following packages:

- com.oracle.cep.mappers.csv

- com.oracle.cep.mappers.jaxb

It is important to note that importing all of them is not necessary unless, of course, you are going to handle all supported media types. For instance, if you do not intend to handle CSV payloads, you can definitely remove that package from the MANIFEST.MF file. However, for the purposes of this article, all the packages need to be imported.
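
For illustration, the corresponding fragment of the Import-Package header in MANIFEST.MF might look like the following sketch (your generated header will already list other packages, version attributes are omitted here, and continuation lines in a manifest must start with a single space):

Import-Package: com.oracle.cep.mappers.csv,
 com.oracle.cep.mappers.jaxb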

Design your Event Types using Java

As you probably know, event types in OEP can be defined in two ways: through the declarative mode or by using a Java class. It is true that the declarative mode gives you the ability to change the event type without re-compiling any source code; on the other hand, using a Java class allows you to implement the event type behavior with more flexibility. While the discussion about the pros and cons of these two approaches applies to regular OEP projects, when using the REST adapter you have no choice but to use a Java class. This is necessary because, behind the scenes, OEP uses Java Architecture for XML Binding (JAXB) to provide support for JSON and XML. Listing 1 shows the Java class implementation for the event type that will be used throughout this entire article.

package com.oracle.ateam.fmw.soa.samples;

import java.io.Serializable;

public class BlogEntry implements Serializable {
    
    private String guid;
    private String title;
    private String link;
    private int likes;

    // Getters and setters methods were
    // omitted for clarity purposes.
    
    @Override
    public boolean equals(Object object) {
        if (this == object) {
            return true;
        }
        if (!(object instanceof BlogEntry)) {
            return false;
        }
        final BlogEntry other = (BlogEntry) object;
        if (!(guid == null ? other.guid == null : guid.equals(other.guid))) {
            return false;
        }
        if (!(title == null ? other.title == null : title.equals(other.title))) {
            return false;
        }
        if (!(link == null ? other.link == null : link.equals(other.link))) {
            return false;
        }
        if (likes != other.likes) {
            return false;
        }
        return true;
    }

    @Override
    public int hashCode() {
        final int PRIME = 37;
        int result = 1;
        result = PRIME * result + ((guid == null) ? 0 : guid.hashCode());
        result = PRIME * result + ((title == null) ? 0 : title.hashCode());
        result = PRIME * result + ((link == null) ? 0 : link.hashCode());
        result = PRIME * result + likes;
        return result;
    }

}

Listing 1: Java class implementation for the event type.

To understand what can be done to customize the event type when using JSON and XML payloads, please consult the JAXB specification. Oracle also provides a good JAXB tutorial for beginners.

Setting Up the REST Adapter in OEP

With the OEP project open in JDeveloper, you will notice that there is no REST adapter component available in the components palette, at least not in the 12.1.3 version used during the writing of this article. In this case, you should manually add an adapter entry in the OEP assembly file. Listing 2 shows an inbound REST adapter that receives blog entries on a given context path.

<wlevs:adapter id="restAdapter" provider="rest-inbound">
   <wlevs:instance-property name="eventTypeName" value="BlogEntry" />
   <wlevs:instance-property name="contextPath" value="/insertBlogEntry" />
</wlevs:adapter>

Listing 2: REST adapter configuration for the OEP assembly file.

Once you add the adapter entry in the OEP assembly file, the EPN diagram will start to show it along with the other elements. Figure 1 shows the REST adapter being used in the OEP project.

Figure 1: REST adapter being used in the OEP project.

A REST adapter in OEP is nothing more than a Servlet, exposed through the Jetty web server that accepts HTTP requests sent to the configured contextPath property. During deployment, the OEP engine dynamically injects an OSGi HTTP Service reference into the adapter implementation. The adapter then uses this reference to register itself as a Servlet in order to start handling HTTP requests in runtime.

The REST adapter also needs to be associated with an event type. During each request, the adapter will try to map the incoming payload wrapped in the HTTP request to the event type configured in the eventTypeName property, using one of the associated mappers. Which mapper to use is determined by analyzing the Content-Type header field. For this reason, always make sure that valid media type values are used by the upstream systems that send the events.

Overview about Payload Mappers

One of the most interesting things about REST is the fact that any media type can be used as the payload. This is different from SOAP, which was originally designed to handle only XML payloads. Using REST, you can use any valid internet media type as the payload, although CSV, JSON and XML are the most popular ones due to their maturity in the computer software industry.

The REST adapter was designed to handle only HTTP interactions. For this reason, it delegates all tasks related to payload handling to custom implementations called mappers. A mapper is a Java class that implements payload marshalling and unmarshalling. For instance, in the case of the inbound REST adapter, the mapper is used to unmarshall the received payload and generate an event type instance; in the case of the outbound REST adapter, the mapper is used to marshall the event type instance into the payload that is going to be sent.

OEP provides out-of-the-box mappers for CSV, JSON and XML. When using the REST adapter, you need to configure at least one of these mappers in the assembly file, and associate it with the REST adapter. If you fail to configure at least one of these mappers you will not be able to deploy your OEP application because the REST adapter will proactively throw an exception.

The next sections will cover the configuration details for the CSV, JSON and XML mappers. For testing, it is recommended that you use some type of browser-based UI that can perform REST requests. For Chrome users, there is an extension in Chrome Web Store called Advanced REST Client that fits our requirements.

Using the REST Adapter with CSV

The Comma-Separated Values (CSV) format is based on tabular data that can be a mix of numbers and text. There is no limit to the number of records that a CSV payload can have. The only restriction is that each record needs to contain the field values separated by a comma. To start using the CSV mapper, you need to configure in the assembly file a bean for the following class: com.oracle.cep.mappers.csv.CSVMapper.

Listing 3 shows the REST adapter with the CSV mapper associated.

<bean id="csvMapper" class="com.oracle.cep.mappers.csv.CSVMapper" />

<wlevs:adapter id="restAdapter" provider="rest-inbound">
   <wlevs:instance-property name="eventTypeName" value="BlogEntry" />
   <wlevs:instance-property name="contextPath" value="/insertBlogEntry" />
   <wlevs:instance-property name="csvMapper" ref="csvMapper" />
</wlevs:adapter>

Listing 3: REST adapter configuration with support for CSV.

It is important to note that the CSV payload must include a message header containing the field names, and this message header needs to be the first record. The field names must match the event type field names. Figure 2 shows an example of a REST request using CSV as the payload.

Figure 2: REST request using CSV as payload.

All fields need to be set in the CSV payload, in both the message header and the records containing the values. If not all values are available for a record, you need to fill the missing field positions with default values that match the data types used in the event type.
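
For illustration, a hypothetical CSV payload matching the BlogEntry event type could look like this (the values are made up):

guid,title,link,likes
b1c2d3e4,My First Blog Entry,http://blogs.example.com/my-first-entry,42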

Using the REST Adapter with JSON

The JavaScript Object Notation (JSON) is an open standard format that uses human-readable text to transmit data objects using attribute-value pairs, with code for parsing and generating JSON readily available in a large variety of programming languages. OEP uses JAXB to handle JSON payloads, so in order to understand what an event type looks like in the JSON format, you need to serialize it using JAXB. To start using the JSON mapper, you need to configure in the assembly file a bean factory for the following class: com.oracle.cep.mappers.jaxb.JAXBMapperFactory.

Listing 4 shows the REST adapter with CSV and JSON mappers associated.

<bean id="csvMapper" class="com.oracle.cep.mappers.csv.CSVMapper" />

<bean id="jsonMapper" class="com.oracle.cep.mappers.jaxb.JAXBMapperFactory"
   factory-method="create">
   <property name="eventTypeName" value="BlogEntry" />
   <property name="mediaType" value="application/json" />
</bean>

<wlevs:adapter id="restAdapter" provider="rest-inbound">
   <wlevs:instance-property name="eventTypeName" value="BlogEntry" />
   <wlevs:instance-property name="contextPath" value="/insertBlogEntry" />
   <wlevs:instance-property name="csvMapper" ref="csvMapper" />
   <wlevs:instance-property name="jsonMapper" ref="jsonMapper" />
</wlevs:adapter>

Listing 4: REST adapter configuration with support for CSV and JSON.

Since OEP uses JAXB to handle JSON payloads, it is necessary to inform the JSON mapper which event type is being handled. This is accomplished through the eventTypeName property. The mediaType property allows you to set which value to expect in the Content-Type header field. Figure 3 shows an example of a REST request using JSON as the payload.

Figure 3: REST request using JSON as payload.

Unlike the CSV mapper, which supports multiple records in the same payload, the JSON mapper supports only one record. In the JSON world this record is called the object root. Fortunately, the object root can contain child objects, so you are not restricted to sending all the data in the object root.
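
For illustration, a hypothetical request sending a JSON payload might look like the sketch below (the host and port depend on your OEP server's Jetty configuration, and the flat JSON shape assumes the default JAXB/MOXy mapping of the BlogEntry fields):

curl -X POST http://oep-host:9002/insertBlogEntry \
  -H "Content-Type: application/json" \
  -d '{"guid":"b1c2d3e4","title":"My First Blog Entry","link":"http://blogs.example.com/my-first-entry","likes":42}'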

Using the REST Adapter with XML

The eXtensible Markup Language (XML) is an open standard specification defined by the W3C for a markup language, understandable both by humans and machines, used for the representation of arbitrary data structures. OEP uses JAXB to handle XML payloads, so in order to understand what an event type looks like in the XML format, you need to serialize it using JAXB. To start using the XML mapper, you need to configure in the assembly file a bean factory for the following class: com.oracle.cep.mappers.jaxb.JAXBMapperFactory.

Listing 5 shows the REST adapter with CSV, JSON and XML mappers associated.

<bean id="csvMapper" class="com.oracle.cep.mappers.csv.CSVMapper" />

<bean id="jsonMapper" class="com.oracle.cep.mappers.jaxb.JAXBMapperFactory"
   factory-method="create">
   <property name="eventTypeName" value="BlogEntry" />
   <property name="mediaType" value="application/json" />
</bean>

<bean id="xmlMapper" class="com.oracle.cep.mappers.jaxb.JAXBMapperFactory"
   factory-method="create">
   <property name="eventTypeName" value="BlogEntry" />
   <property name="metadata" value="blogEntryMetadata.xml" />
</bean>

<wlevs:adapter id="restAdapter" provider="rest-inbound">
   <wlevs:instance-property name="eventTypeName" value="BlogEntry" />
   <wlevs:instance-property name="contextPath" value="/insertBlogEntry" />
   <wlevs:instance-property name="csvMapper" ref="csvMapper" />
   <wlevs:instance-property name="jsonMapper" ref="jsonMapper" />
   <wlevs:instance-property name="xmlMapper" ref="xmlMapper" />
</wlevs:adapter>

Listing 5: REST adapter configuration with support for CSV, JSON and XML.

Since OEP uses JAXB to handle XML payloads, it is necessary to inform the XML mapper which event type is being handled. This is accomplished through the eventTypeName property. Also, the XML mapper needs a special configuration file, called the JAXB bindings file, that the OEP developer can use to customize the XML payload. You can set which JAXB bindings file to use through the metadata property.

According to the JAXB specification, XML bindings can be implemented using the JAXB bindings file, via Java annotations or using a mix of both approaches. However, OEP ignores any Java annotation set in the event type implementation. For this reason, the JAXB bindings file is mandatory.

In the <ROOT>/wlevs/mappers/jaxb folder, create a file called blogEntryMetadata.xml just as shown in listing 6. Note that JDeveloper does not create the /mappers/jaxb folder during the project creation, so you will need to manually create this folder.

<?xml version="1.0"?>

<xml-bindings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/oxm">
   <java-types>
      <java-type name="com.oracle.ateam.fmw.soa.samples.BlogEntry">
         <xml-root-element name="blog-entry"/>
      </java-type>
   </java-types>
</xml-bindings>

Listing 6: JAXB bindings file used to customize the XML payload.

As you can see in listing 6, the root element of the XML payload was changed from BlogEntry (which is essentially the Java class name of the event type) to blog-entry. There are a lot of customizations that can be done in the JAXB bindings file, but covering all of them goes beyond the scope of this article. Figure 4 shows an example of a REST request using XML as the payload.

Figure 4: REST request using XML as payload.

Like the JSON mapper, and unlike the CSV mapper that supports multiple records in the same payload, the XML mapper supports only one record. In the XML world this record is called the root node. Fortunately, the root node can contain child nodes, so you are not restricted to sending all the data in the root node.
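
For illustration, a hypothetical XML payload for this configuration could look like the following (the root element name comes from listing 6, while the child element names assume the default JAXB mapping of the BlogEntry fields):

<?xml version="1.0" encoding="UTF-8"?>
<blog-entry>
   <guid>b1c2d3e4</guid>
   <title>My First Blog Entry</title>
   <link>http://blogs.example.com/my-first-entry</link>
   <likes>42</likes>
</blog-entry>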

You can download the final implementation of the project used in this article here.

Conclusion

The REST architectural style is becoming a common standard for system-to-system, things-to-system and things-to-things interactions, due to its simplicity, portability and small footprint. These characteristics attract more and more developers who choose to build APIs using REST. Also, with data being generated at faster rates, companies are looking for solutions that can help them identify business opportunities or threats through the analysis of in-motion data. Looking ahead at what the industry needed, Oracle introduced the REST adapter in the OEP 12c version.

This article showed a step-by-step guide that aims to help developers to leverage the REST adapter in OEP applications, providing best practices for development and explaining the adapter internals.


Inside Fusion Middleware 12c: Increasing Scalability with JMS Adapter 12c


JMS Adapter (also known as Oracle JCA Adapter for JMS) is a component available with Oracle SOA Suite or Oracle Service Bus (OSB) which provides a very powerful way to use the Java Message Service (JMS) for sending or receiving messages.

The most important goals for optimizing an SOA Suite or OSB environment are to make sure that

  • Each node of the cluster makes effective use of Java threads and other resources.
  • The cluster will be able to scale efficiently when adding new nodes.

In this article, we will show how the JMS Adapter in the new 12c release can be configured in high-throughput scenarios to use far fewer threads than in earlier versions. This means that adding new nodes to the cluster will not require additional threads on all other nodes. As a result, a cluster with many nodes will perform and scale much better.

There are 2 main use cases of JMS Adapter with SOA Suite or OSB:

  • Inbound activation of a composite by receiving messages from a JMS destination (queue or topic)
  • Publishing messages from a composite to a JMS destination

We will focus in this article on the first use case, receiving messages. Only in that scenario – listening to a queue – will the activation framework of the Java EE Connector Architecture (JCA) layer start a number of threads for the JMS Adapter.

First, we will describe how many threads will be created by default or in versions prior to 12c, and then how this changes when using the new feature available with 12c.

Default thread creation for JMS Adapter

For the first example, we will assume a scenario with a 2-node SOA cluster where a Uniform Distributed Queue (UDQ) “jms/TestQueue” is defined and one SOA composite with an inbound JMS Adapter is listening to this queue. See line 2 of the JMS destination overview in Weblogic server in the following table:

jms_queue

With the default behavior, the number of threads used to read messages from this queue in each Java Virtual Machine (JVM) is one. This is derived from the default of the corresponding configuration parameter:

adapter.jms.receive.threads = 1

The JMS Adapter creates this specified number of threads for each node in the cluster, so that it listens for incoming messages on all nodes. The overall number of threads in each JVM in this example will therefore be two, as shown in the following picture.

2-node-1-thread

We can verify that by taking a thread dump. The thread dump of soa_server1 shows 2 waiting threads like this:

  • "DaemonWorkThread: '1' of WorkManager: 'default_Adapters'" Id=253 WAITING
  • "DaemonWorkThread: '0' of WorkManager: 'default_Adapters'" Id=228 WAITING

In the next step, we examine how this picture will change when messages need to be dequeued in parallel. This can be required, for example, if a very high throughput needs to be achieved or if each dequeued message starts a transaction which takes a longer time. Parallel processing can be enabled by increasing the listening threads of each JMS Adapter instance. The property “adapter.jms.receive.threads” can be changed in EM console:

jms adapter properties

After changing this property to 10, the thread count in each thread dump will increase to 20 on each node (40 altogether):

2-node-with-10-threads

To show how this multiplies, we will examine the result in a larger SOA cluster. In a 10-node cluster with 10 receive threads, the number of threads will increase to 100 on each node and 1,000 altogether in the cluster (lines in the picture are accurate for the first 3 nodes only):

10-node-unoptimized

The huge negative impact is that adding a new node increases the number of threads on all other nodes as well. This leads to a non-linear increase in the total number of threads needed. In a cluster scenario with a lot of queues, this can result in a very high overall number of threads in a single JVM, causing too many CPU context switches and slowing down performance significantly.
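
To put numbers on this, with the default behavior the thread usage grows quadratically with cluster size:

threads per node    = UDQ members × adapter.jms.receive.threads = N × T
threads per cluster = N × N × T

For N = 10 nodes and T = 10 receive threads, this gives 100 threads per node and 1,000 threads in the cluster, matching the picture above.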

JMS Adapter threads in SOA Suite 12c

The new 12c feature to control the thread creation is provided as a new configuration property:

adapter.jms.DistributedDestinationConnectionEveryMember=false  (default is true)

The documentation describes the usage of this property (see chapter 8.3.1.6 in  “Understanding Technology Adapters”):

When true, the JMS Adapter creates a consumer/subscriber for each member of the Distributed Destinations (author’s note: this is the default and was the setting in 11g). If set to false, the JMS Adapter creates a consumer/subscriber for only local members of the distributed destination.
When the JMS Adapter is connecting to a distributed destination on a remote cluster or a server in a remote domain, the property 'adapter.jms.DistributedDestinationConnectionEveryMember' should always be set to true. When the JMS Adapter is connecting to a distributed destination on the local cluster, the property can be set to either true or false. If set to true, the JMS Adapter behavior remains the same as before (that is, a consumer is created for each Distributed Destination member). If set to false, the JMS Adapter only creates a consumer/subscriber for the local members.

We recommend setting this property to false for distributed destinations on a local cluster.

As written in the JMS Adapter documentation (see Appendix), this change is only possible in the composite.xml, not through the Enterprise Manager (EM) web console:

<service name="JMSConsumer" ui:wsdlLocation="WSDLs/JMSConsumer.wsdl">
   <interface.wsdl interface="http://xmlns.oracle.com/pcbpel/adapter/jms/JMSConsumerApp/JMSConsumer/JMSConsumer#wsdl.interface(Consume_Message_ptt)"/>
   <binding.jca config="Adapters/JMSConsumer_jms.jca">
      <property name="useRejectedMessageRecovery" type="xs:string" many="false" override="may">true</property>
      <property name="adapter.jms.receive.threads" type="xs:string" many="false" override="may">10</property>
      <property name="adapter.jms.DistributedDestinationConnectionEveryMember" type="xs:string" many="false" override="may">false</property>
   </binding.jca>
</service>

After changing this property to false, the number of threads used for JMS Adapter in each JVM will drop to 10. All 10 threads on node 1 are listening only to the local UDQ member (on the same node), and so on. The architecture will be much more scalable in a large cluster, as shown in the following picture:

10-node-optimized

It is important to notice that this property cannot be changed via the Enterprise Manager (EM) console in 12.1.3 – it is also not visible in the EM console, even after setting or changing the value in composite.xml.

Receiving messages only from a local queue will in most cases not impact the functional behavior. By using the WebLogic JMS features of message forwarding and load balancing, you can still ensure that all messages are distributed to cluster members with active consumers, without the need to connect to every destination member with the JMS Adapter.

Conclusion

In summary, using the property "DistributedDestinationConnectionEveryMember" enables a user to considerably reduce the number of threads created for inbound JMS Adapter instances in a cluster environment. The larger the cluster, the greater the potential reduction in the number of threads.

In our previous example of a 10-node cluster with 10 inbound JMS Adapter receive threads, the thread count was reduced by a factor of 10.

This will considerably increase the scalability of the architecture and the performance of each JVM, especially in large clusters with high throughput requirements.

Appendix

References:

Purging and partitioned schemas


SOA Suite 11g and 12c both require regular database maintenance for optimal performance. A key task in managing your SOA Suite database is a regular purging strategy. You should be doing this, so read the Oracle SOA Suite database growth management strategy if you haven’t already: http://www.oracle.com/technetwork/middleware/bpm/learnmore/soa11gstrategy-1508335.pdf

One of the best practices for managing large SOA Suite applications is to use Oracle Database partitioning. In 11g this is usually a fairly ad-hoc setup, though the whitepaper has everything you need to know about setting it up; in 12c, the “LARGE” RCU profile is partitioned (with monthly partitions).

Purging a partitioned schema usually involves running the check and move scripts, to ensure your partitions don’t contain “LIVE” data (based on your retention policy), followed by dropping the “OLD” partitions and rebuilding the indexes.

However, there are times where you may want to run a purge to clean up data that doesn’t neatly align with the partitions, for example in a load testing environment. The purge scripts, by default, won’t touch any table that is partitioned. If your favourite table isn’t mentioned in the purge debug log output (example below), then it is probably because it is partitioned.

To force the purge scripts to consider partitioned tables, you need to enable the “purge_partitioned_component” flag when calling the “delete_instances” purge function (see below). The purge script will then purge partitioned tables as well.

Obviously, this is not intended for regular production use and it should never be used there.

An example invocation with the flag set:

soa.delete_instances(max_runtime => 300, min_creation_date => to_timestamp('2000-01-01','YYYY-MM-DD'), max_creation_date => to_timestamp('2000-12-31','YYYY-MM-DD'), purge_partitioned_component => TRUE);

The example output below is from a soa.delete_instances run that has a partition on composite_instance. Note that there is no mention of composite_instance in the output.

There are several tables which can be partitioned, as well as whole units (such as BPEL). The purge script will skip any that have a partition. (If you are interested, you can search the PLSQL packages in a SOAINFRA schema for ‘is_table_partitioned’ to see which tables are checked and which columns it considers for partitioning).
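
If you just want a quick list of the partitioned tables in your SOAINFRA schema, a simple data dictionary query will show them (run it as the SOAINFRA user):

-- List partitioned tables owned by the current (SOAINFRA) user
SELECT table_name, partitioning_type, partition_count
  FROM user_part_tables
 ORDER BY table_name;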

01-JAN-2000 12:00:00 : procedure delete_instances
01-JAN-2000 12:00:00 : time check
01-JAN-2000 12:00:00 : sysdate = 01/JAN/2000:12/00
01-JAN-2000 12:00:00 : stoptime = 01/JAN/2000:12/00
01-JAN-2000 12:00:00 : checking for partitions
01-JAN-2000 12:00:00 : done checking for partitions
01-JAN-2000 12:00:00 : composite_dn =
01-JAN-2000 12:00:00 : loop count = 1
01-JAN-2000 12:00:00 : deleting non-orphaned instances
01-JAN-2000 12:00:00 Number of rows in table ecid_purge Inserted = 1
01-JAN-2000 12:00:00 : calling soa_orabpel.deleteComponentInstances
01-JAN-2000 12:00:00 Number of rows in table temp_cube_instance Inserted = 1
01-JAN-2000 12:00:00 Number of rows in table temp_document_ci_ref Inserted = 1
01-JAN-2000 12:00:00 Number of rows in table temp_document_dlv_msg_ref Inserted = 1
01-JAN-2000 12:00:00 Number of rows in table HEADERS_PROPERTIES purged is : 1
01-JAN-2000 12:00:00 Number of rows in table AG_INSTANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table TEST_DETAILS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table CUBE_SCOPE purged is : 1
01-JAN-2000 12:00:00 Number of rows in table AUDIT_COUNTER purged is : 1
01-JAN-2000 12:00:00 Number of rows in table AUDIT_TRAIL purged is : 1
01-JAN-2000 12:00:00 Number of rows in table AUDIT_DETAILS purged is : 1
01-JAN-2000 12:00:00 Number of rows in table CI_INDEXES purged is : 0
01-JAN-2000 12:00:00 Number of rows in table WORK_ITEM purged is : 1
01-JAN-2000 12:00:00 Number of rows in table WI_FAULT purged is : 1
01-JAN-2000 12:00:00 Number of rows in table XML_DOCUMENT purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DOCUMENT_DLV_MSG_REF purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DOCUMENT_CI_REF purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DLV_MESSAGE purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DLV_SUBSCRIPTION purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DLV_AGGREGATION purged is : 0
01-JAN-2000 12:00:00 Number of rows in table CUBE_INSTANCE purged is : 1
01-JAN-2000 12:00:00 Number of rows in table BPM_AUDIT_QUERY purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_MEASUREMENT_ACTIONS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_MEASUREMENT_ACTION_EXCEPS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_AUDITINSTANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_TASKPERFORMANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_PROCESSPERFORMANCE purged is : 0
01-JAN-2000 12:00:00 : completed soa_orabpel.deleteComponentInstances
01-JAN-2000 12:00:00 : calling workflow.deleteComponentInstances
01-JAN-2000 12:00:00 : workflow.deleteComponentInstance begins
01-JAN-2000 12:00:00 : workflow.truncate_temp_tables
01-JAN-2000 12:00:00 Number of rows in table temp_wftask_purge workflow.deleteComponentInstance Inserted = 0
01-JAN-2000 12:00:00 : workflow.delete_workflow_instances begins
01-JAN-2000 12:00:00 : Purging WFTask_TL
01-JAN-2000 12:00:00 Number of rows in table WFTask_TL Purge WFTask_TL0
01-JAN-2000 12:00:00 : Purging WFTaskHistory
01-JAN-2000 12:00:00 Number of rows in table WFTaskHistory Purge WFTaskHistory0
01-JAN-2000 12:00:00 : Purging WFTaskHistory_TL
01-JAN-2000 12:00:00 Number of rows in table WFTaskHistory_TL Purge WFTaskHistory_TL0
01-JAN-2000 12:00:00 : Purging WFComments
01-JAN-2000 12:00:00 Number of rows in table WFComments Purge WFComments0
01-JAN-2000 12:00:00 : Purging WFMessageAttribute
01-JAN-2000 12:00:00 Number of rows in table WFMessageAttribute Purge WFMessageAttribute0
01-JAN-2000 12:00:00 : Purging WFAttachment
01-JAN-2000 12:00:00 Number of rows in table WFAttachment Purge WFAttachment0
01-JAN-2000 12:00:00 : Purging WFAssignee
01-JAN-2000 12:00:00 Number of rows in table WFAssignee Purge WFAssignee0
01-JAN-2000 12:00:00 : Purging WFReviewer
01-JAN-2000 12:00:00 Number of rows in table WFReviewer Purge WFReviewer0
01-JAN-2000 12:00:00 : Purging WFCollectionTarget
01-JAN-2000 12:00:00 Number of rows in table WFCollectionTarget Purge WFCollectionTarget0
01-JAN-2000 12:00:00 : Purging WFRoutingSlip
01-JAN-2000 12:00:00 Number of rows in table WFRoutingSlip Purge WFRoutingSlip0
01-JAN-2000 12:00:00 : Purging WFNotification
01-JAN-2000 12:00:00 Number of rows in table WFNotification Purge WFNotification0
01-JAN-2000 12:00:00 : Purging WFTaskTimer
01-JAN-2000 12:00:00 Number of rows in table WFTaskTimer Purge WFTaskTimer0
01-JAN-2000 12:00:00 : Purging WFTaskError
01-JAN-2000 12:00:00 Number of rows in table WFTaskError Purge WFTaskError0
01-JAN-2000 12:00:00 : Purging WFHeaderProps
01-JAN-2000 12:00:00 Number of rows in table WFHeaderProps Purge WFHeaderProps0
01-JAN-2000 12:00:00 : Purging WFEvidence
01-JAN-2000 12:00:00 Number of rows in table WFEvidence Purge WFEvidence0
01-JAN-2000 12:00:00 : Purging WFTaskAssignmentStatistic
01-JAN-2000 12:00:00 Number of rows in table WFTaskAssignmentStatistic Purge WFTaskAssignmentStatistic0
01-JAN-2000 12:00:00 : Purging WFTaskAggregation
01-JAN-2000 12:00:00 Number of rows in table WFTaskAggregation Purge WFTaskAggregation0
01-JAN-2000 12:00:00 : Purging WFTask
01-JAN-2000 12:00:00 Number of rows in table WFTask Purge WFTask0
01-JAN-2000 12:00:00 : workflow.delete_workflow_instances ends
01-JAN-2000 12:00:00 : workflow.deleteComponentInstance ends
01-JAN-2000 12:00:00 : completed workflow.deleteComponentInstances
01-JAN-2000 12:00:00 : calling mediator.deleteComponentInstances
01-JAN-2000 12:00:00 Number of rows in table temp_mediator_instance Inserted = 0
01-JAN-2000 12:00:00 Number of rows in table mediator_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table mediator_deferred_message purged is : 0
01-JAN-2000 12:00:00 Number of rows in table mediator_document purged is : 0
01-JAN-2000 12:00:00 Number of rows in table mediator_case_detail purged is : 0
01-JAN-2000 12:00:00 Number of rows in table mediator_case_instance purged is : 0
01-JAN-2000 12:00:00 Number of rows in table mediator_instance purged is : 0
01-JAN-2000 12:00:00 : completed mediator.deleteComponentInstances
01-JAN-2000 12:00:00 : calling decision.deleteComponentInstances
01-JAN-2000 12:00:00 Number of rows in table temp_brdecision_instance Inserted = 0
01-JAN-2000 12:00:00 Number of rows in table BRDecisionFault purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BRDecisionUnitOfWork purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BRDecisonInstance purged is : 0
01-JAN-2000 12:00:00 : completed decision.deleteComponentInstances
01-JAN-2000 12:00:00 : calling fabric.deleteComponentInstances
01-JAN-2000 12:00:00 Number of rows in table reference_instance_purge inserted = 1
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 1
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 1
01-JAN-2000 12:00:00 Number of rows in table reference_instance purged is : 1
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 0
01-JAN-2000 12:00:00 Number of rows in table rejected_msg_native_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table composite_instance_fault purged is : 1
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 1
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 1
01-JAN-2000 12:00:00 Number of rows in table composite_sensor_value purged is : 0
01-JAN-2000 12:00:00 Number of rows in table composite_instance_assoc purged is : 1
01-JAN-2000 12:00:00 Number of rows in table component_instance_purge inserted = 0
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 0
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table component_instance purged is : 0
01-JAN-2000 12:00:00 Number of rows in table attachment purged is : 0
01-JAN-2000 12:00:00 Number of rows in table attachment_ref purged is : 0
01-JAN-2000 12:00:00 : completed fabric.deleteComponentInstances
01-JAN-2000 12:00:00 : time check
01-JAN-2000 12:00:00 : sysdate = 01/JAN/2000:12/00
01-JAN-2000 12:00:00 : stoptime = 01/JAN/2000:12/00
01-JAN-2000 12:00:00 : loop count = 2
01-JAN-2000 12:00:00 : deleting orphaned instances
01-JAN-2000 12:00:00 : calling soa_orabpel.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 Number of rows in table temp_document_dlv_msg_ref Inserted no cikey 1
01-JAN-2000 12:00:00 Number of rows in table temp_cube_instance Inserted = 0
01-JAN-2000 12:00:00 Number of rows in table temp_document_ci_ref Inserted = 0
01-JAN-2000 12:00:00 Number of rows in table temp_document_dlv_msg_ref Inserted = 0
01-JAN-2000 12:00:00 Number of rows in table HEADERS_PROPERTIES purged is : 1
01-JAN-2000 12:00:00 Number of rows in table AG_INSTANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table TEST_DETAILS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table CUBE_SCOPE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table AUDIT_COUNTER purged is : 0
01-JAN-2000 12:00:00 Number of rows in table AUDIT_TRAIL purged is : 0
01-JAN-2000 12:00:00 Number of rows in table AUDIT_DETAILS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table CI_INDEXES purged is : 0
01-JAN-2000 12:00:00 Number of rows in table WORK_ITEM purged is : 0
01-JAN-2000 12:00:00 Number of rows in table WI_FAULT purged is : 0
01-JAN-2000 12:00:00 Number of rows in table XML_DOCUMENT purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DOCUMENT_DLV_MSG_REF purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DOCUMENT_CI_REF purged is : 0
01-JAN-2000 12:00:00 Number of rows in table DLV_MESSAGE purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DLV_SUBSCRIPTION purged is : 0
01-JAN-2000 12:00:00 Number of rows in table DLV_AGGREGATION purged is : 0
01-JAN-2000 12:00:00 Number of rows in table CUBE_INSTANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_AUDIT_QUERY purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_MEASUREMENT_ACTIONS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_MEASUREMENT_ACTION_EXCEPS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_AUDITINSTANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_TASKPERFORMANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_PROCESSPERFORMANCE purged is : 0
01-JAN-2000 12:00:00 : completed soa_orabpel.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 : calling workflow.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 : workflow.deleteNoCompositeIdInstances begins
01-JAN-2000 12:00:00 : workflow.truncate_temp_tables
01-JAN-2000 12:00:00 : workflow.deleteNoCompositeIdInstances populates temp_wftaks_purge using createdDate between
min_date=01-JAN-00 12.00.00.000000 AMand max_date=31-JAN-00 12.00.00.000000 AM
01-JAN-2000 12:00:00 : workflow.deleteNoCompositeIdInstances done. No WFTask instances were found
01-JAN-2000 12:00:00 : completed workflow.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 : calling mediator.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 Number of rows in table temp_mediator_instance Inserted = 0
01-JAN-2000 12:00:00 : No Mediator instances found with composite instance id as null or zero
01-JAN-2000 12:00:00 Number of rows in table temp_mediator_instance Inserted = 0
01-JAN-2000 12:00:00 : No Mediator instances found from mediator_resequencer_message
01-JAN-2000 12:00:00 Number of rows in table temp_mediator_instance Inserted = 0
01-JAN-2000 12:00:00 : No Mediator instances found in mediator_deferred_message
01-JAN-2000 12:00:00 : completed mediator.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 : calling decision.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 Number of rows in table temp_brdecision_instance Inserted = 0
01-JAN-2000 12:00:00 : No Decision instances found with null composite instance ids
01-JAN-2000 12:00:00 : completed decision.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 : calling fabric.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 Number of rows in table reference_instance_purge inserted = 0
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 0
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table reference_instance purged is : 0
01-JAN-2000 12:00:00 Number of rows in table composite_fault_purge inserted = 0
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 0
01-JAN-2000 12:00:00 Number of rows in table rejected_msg_native_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table composite_instance_fault purged is : 0
01-JAN-2000 12:00:00 Number of rows in table component_instance purged is : 0
01-JAN-2000 12:00:00 : completed fabric.deleteNoCompositeIdInstances

Fusion HCM Cloud Bulk Integration Automation


Introduction

Fusion HCM Cloud provides a comprehensive set of tools, templates, and pre-packaged integrations to cover various scenarios using modern and efficient technologies. One of the patterns is bulk integration to load and extract data to and from the cloud. The inbound tool is the File Based Loader (FBL), which is evolving into HCM Data Loader (HDL). HDL supports data migration for full HR, incremental loads to support co-existence with Oracle applications such as E-Business Suite (EBS) and PeopleSoft (PSFT), and the ability to bulk load into configured flexfields. HCM Extracts is an outbound integration tool that lets you choose data, then gathers and archives it. This archived raw data is converted into the desired format and delivered to supported channels and recipients.

HCM Cloud implements Oracle WebCenter Content, a component of Fusion Middleware, to store and secure data files for both inbound and outbound bulk integration patterns. This post focuses on how to automate data file transfer with WebCenter Content to initiate the loader. The same APIs can be used to download data files delivered to WebCenter Content through the extract process.

WebCenter Content replaces SSH File Transfer Protocol (SFTP) server in the cloud as a content repository in Fusion HCM starting with Release 7+. There are several ways of importing and exporting content to and from Fusion Applications such as:

  • Upload using “File Import and Export” UI from home page navigation: Navigator > Tools
  • Upload using WebCenter Content Document Transfer Utility
  • Upload programmatically via Java Code or Web Service API

This post provides an introduction, with working sample code, on how to programmatically export content from Fusion Applications to automate the outbound integration process to other applications in the cloud or on-premise. A Service Oriented Architecture (SOA) composite is implemented to demonstrate the concept.

Main Article

Fusion Applications Security in WebCenter Content

The content in WebCenter Content is secured through users, roles, privileges and accounts. The user could be any valid user with a role such as “Integration Specialist.” The role may have privileges such as read, write and delete. The accounts are predefined by each application. For example, HCM uses /hcm/dataloader/import and /hcm/dataloader/export respectively.

Let’s review the inbound and outbound batch integration flows.

Inbound Flow

This is a typical Inbound FBL process flow:

 

HDL_loader_process

The data file is uploaded to WebCenter Content Server either using Fusion HCM UI or programmatically in /hcm/dataloader/import account. This uploaded file is registered by invoking the Loader Integration Service – http://{Host}/hcmCommonBatchLoader/LoaderIntegrationService.

You must specify the following in the payload:

  • Content id of the file to be loaded
  • Business objects that you are loading
  • Batch name
  • Load type (FBL)
  • Imported file to be loaded automatically

Fusion Applications UI also allows the end user to register and initiate the data load process.

 

Encryption of Data File using Pretty Good Privacy (PGP)

All data files transit over a network via SSL. In addition, HCM Cloud supports encryption of data files at rest using PGP.
Fusion supports the following types of encryption:

  • PGP Signed
  • PGP Unsigned
  • PGPX509 Signed
  • PGPX509 Unsigned

To use this PGP Encryption capability, a customer must exchange encryption keys with Fusion for the following:

  • Fusion can decrypt inbound files
  • Fusion can encrypt outbound files
  • Customer can encrypt files sent to Fusion
  • Customer can decrypt files received from Fusion

Steps to Implement PGP

  1. Provide your PGP Public Key.
  2. Oracle’s Cloud Operations team provides you with the Fusion PGP Public Key.

Steps to Implement PGP X.509

  1. Self-signed Fusion key pair (default option):
    • You provide the public X.509 certificate
  2. Fusion key pair provided by you:
    • Public X.509 certificate uploaded via Oracle Support Service Request (SR)
    • Fusion key pair for Fusion’s X.509 certificate in a keystore, with the keystore password.

Steps for Certificate Authority (CA) signed Fusion certificate

      1. Obtain a Certificate Authority (CA) signed Fusion certificate
      2. Public X.509 certificate uploaded via SR
      3. Oracle’s Cloud Operations exports the Fusion public X.509 CSR certificate and uploads it to the SR
      4. Using the Fusion public X.509 CSR certificate, the customer provides the signed CA certificate and uploads it to the SR
      5. Oracle’s Cloud Operations provides the Fusion PGP Public Certificate to you via an SR

 

Modification to Loader Integration Service Payload to support PGP

The loaderIntegrationService has a new method called “submitEncryptedBatch” which has an additional parameter named “encryptType”. The valid values to pass in the “encryptType” parameter are taken from the ORA_HRC_FILE_ENCRYPT_TYPE lookup:

  • NONE
  • PGPSIGNED
  • PGPUNSIGNED
  • PGPX509SIGNED
  • PGPX509UNSIGNED

Sample Payload

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
 <soap:Body>
  <ns1:submitEncryptedBatch
     xmlns:ns1="http://xmlns.oracle.com/apps/hcm/common/batchLoader/core/loaderIntegrationService/types/">
   <ns1:ZipFileName>LOCATIONTEST622.ZIP</ns1:ZipFileName>
   <ns1:BusinessObjectList>Location</ns1:BusinessObjectList>
   <ns1:BatchName>LOCATIONTEST622.ZIP</ns1:BatchName>
   <ns1:LoadType>FBL</ns1:LoadType>
   <ns1:AutoLoad>Y</ns1:AutoLoad>
   <ns1:encryptType>PGPX509SIGNED</ns1:encryptType>
  </ns1:submitEncryptedBatch>
 </soap:Body>
</soap:Envelope>
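
For a quick test outside of SoapUI, the payload above can also be posted from a small Node.js script. The sketch below is illustrative only: the host name, user and password are hypothetical placeholders, and it assumes the WS-Security UsernameToken over SSL option accepted by the OWSM policy protecting the service; adjust the security header to whatever your pod mandates.

// Minimal sketch: invoke submitEncryptedBatch from Node.js over SSL.
// Host and credentials are hypothetical placeholders; the UsernameToken over SSL
// header is one way to satisfy the OWSM policy protecting the service.
var https = require('https');

var host = 'hcm-your-pod.oraclecloud.com';   // placeholder: your HCM Cloud host
var user = 'integration.user';               // placeholder: user with the loader role
var pass = 'your-password';                  // placeholder

var envelope =
  '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
  ' <soap:Header>' +
  '  <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" soap:mustUnderstand="1">' +
  '   <wsse:UsernameToken>' +
  '    <wsse:Username>' + user + '</wsse:Username>' +
  '    <wsse:Password>' + pass + '</wsse:Password>' +
  '   </wsse:UsernameToken>' +
  '  </wsse:Security>' +
  ' </soap:Header>' +
  ' <soap:Body>' +
  '  <ns1:submitEncryptedBatch xmlns:ns1="http://xmlns.oracle.com/apps/hcm/common/batchLoader/core/loaderIntegrationService/types/">' +
  '   <ns1:ZipFileName>LOCATIONTEST622.ZIP</ns1:ZipFileName>' +
  '   <ns1:BusinessObjectList>Location</ns1:BusinessObjectList>' +
  '   <ns1:BatchName>LOCATIONTEST622.ZIP</ns1:BatchName>' +
  '   <ns1:LoadType>FBL</ns1:LoadType>' +
  '   <ns1:AutoLoad>Y</ns1:AutoLoad>' +
  '   <ns1:encryptType>PGPX509SIGNED</ns1:encryptType>' +
  '  </ns1:submitEncryptedBatch>' +
  ' </soap:Body>' +
  '</soap:Envelope>';

var req = https.request({
  host: host,
  port: 443,
  path: '/hcmCommonBatchLoader/LoaderIntegrationService',
  method: 'POST',
  headers: {
    'Content-Type': 'text/xml; charset=utf-8',
    'Content-Length': Buffer.byteLength(envelope)
  }
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () { console.log(res.statusCode, body); });
});

req.on('error', function (e) { console.error(e); });
req.write(envelope);
req.end();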

 

Outbound Flow

This is a typical Outbound batch Integration flow using HCM Extracts:

extractflow

The extracted file could be delivered to the WebCenter Content server. HCM Extract has an ability to generate an encrypted output file. In Extract delivery options ensure the following options are correctly configured:

  1. Set the HCM Delivery Type to “HCM Connect”
  2. Select an Encryption Mode from the four supported encryption types, or select None
  3. Specify the Integration Name – this value is used to build the title of the entry in WebCenter Content

 

Extracted File Naming Convention in WebCenter Content

The file will have the following properties:

  • Author: FUSION_APPSHCM_ESS_APPID
  • Security Group: FAFusionImportExport
  • Account: hcm/dataloader/export
  • Title: HEXTV1CON_{IntegrationName}_{EncryptionType}_{DateTimeStamp}

 

Programmatic Approach to export/import files from/to WebCenter Content

In Fusion Applications, the WebCenter Content Managed server is installed in the Common domain Weblogic Server. The WebCenter Content server provides two types of web services:

Generic JAX-WS based web service

This is a generic web service for general access to the Content Server. The context root for this service is “/idcws”. For details of the format, see the published WSDL at https://<hostname>:<port>/idcws/GenericSoapPort?WSDL. This service is protected through Oracle Web Services Security Manager (OWSM). As a result of allowing WS-Security policies to be applied to this service, streaming Message Transmission Optimization Mechanism (MTOM) is not available for use with this service. Very large files (greater than the memory of the client or the server) cannot be uploaded or downloaded.

Native SOAP based web service

This is the general WebCenter Content service. Essentially, it is a normal socket request to Content Server, wrapped in a SOAP request. Requests are sent to the Content Server using streaming Message Transmission Optimization Mechanism (MTOM) in order to support large files. The context root for this service is “/idcnativews”. The main web service is IdcWebRequestPort and it requires JSESSIONID, which can be retrieved from IdcWebLoginPort service.

The Remote Intradoc Client (RIDC) uses the native web services. Oracle recommends that you do not develop a custom client against these services.

For more information, please refer to “Developing with WebCenter Content Web Services for Integration.”

Generic Web Service Implementation

This post provides a sample of implementing generic web service /idcws/GenericSoapPort. In order to implement this web service, it is critical to review the following definitions to generate the request message and parse the response message:

IdcService:

IdcService is an attribute of the <service> node that names the predefined service to be executed, for example CHECKIN_UNIVERSAL, GET_SEARCH_RESULTS, GET_FILE, CHECKOUT_BY_NAME, etc.

User

User is a subnode within a <service> and contains all user information.

Document

Document is a collection of all the content-item information and is the parent node of all the data.

ResultSet

ResultSet is a typical row/column based schema. The name attribute specifies the name of the ResultSet. It contains a set of <row> subnodes.

Row

Row is a typical row within a ResultSet, which can have multiple <row> subnodes. It contains sets of Field objects.

Field

Field is a subnode of either <document> or <row>. It represents document or user metadata such as content Id, Name, Version, etc.

File

File is a file object that is either being uploaded or downloaded.

For more information, please refer to Configuring Web Services with WSDL, SOAP, and the WSDL Generator.

Web Service Security

The genericSoapPort web service is protected by Oracle Web Services Manager (OWSM). In Oracle Fusion Applications cloud, the OWSM policy is: “oracle/wss11_saml_or_username_token_with_message_protection_service_policy”.

In your SOAP envelope, you will need the appropriate “wsse” security headers. This is a sample:

<soapenv:Header>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" soapenv:mustUnderstand="1">
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion" MajorVersion="1" MinorVersion="1" AssertionID="SAML-iiYLE6rlHjI2j9AUZXrXmg22" IssueInstant="2014-10-20T13:52:25Z" Issuer="www.oracle.com">
<saml:Conditions NotBefore="2014-10-20T13:52:25Z" NotOnOrAfter="2015-11-22T13:57:25Z"/>
<saml:AuthenticationStatement AuthenticationInstant="2014-10-20T14:52:25Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
<saml:Subject>
<saml:NameIdentifier Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">FAAdmin</saml:NameIdentifier>
<saml:SubjectConfirmation>
<saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:sender-vouches</saml:ConfirmationMethod>
</saml:SubjectConfirmation>
</saml:Subject>
</saml:AuthenticationStatement>
</saml:Assertion>
</wsse:Security>
</soapenv:Header>

Sample SOA Composite

The SOA code provides a sample on how to search for a document in WebCenter Content, extract a file name from the search result, and get the file and save it in your local directory. The file could be processed immediately based on your requirements. Since this is a generic web service with a generic request message, you can use the same interface to invoke various IdcServices, such as GET_FILE, GET_SEARCH_RESULTS, etc.

In the SOA composite sample, two external services are created: GenericSoapPort and FileAdapter. If the service is GET_FILE, then it will save a copy of the retrieved file in your local machine.

Export File

The GET_FILE service returns a specific rendition of a content item, the latest revision, or the latest released revision. A copy of the file is retrieved without performing a check out. It requires either dID (content item revision ID) for the revision, or dDocName (content item name) along with a RevisionSelectionMethod parameter. The RevisionSelectionMethod could be either “Latest” (latest revision of the content) or “LatestReleased” (latest released revision of the content). For example, to retrieve file:

<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_FILE">
<ucm:Document>
<ucm:Field name="dID">401</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>

Search File

The dID of the content could be retrieved using the service GET_SEARCH_RESULTS. It uses a QueryText attribute in the <Field> node. The QueryText attribute defines the query and must be XML encoded. You can append values for title, content Id, and so on, in the QueryText, to refine the search. The syntax for QueryText can be challenging, but once you understand the special character formats, it is straightforward. For example, to search content by its original name:

<ucm:Service IdcService="GET_SEARCH_RESULTS">
<ucm:Document>
<ucm:Field name="QueryText">dOriginalName &lt;starts&gt; `Test`</ucm:Field>
</ucm:Document>
</ucm:Service>

In plain text, the query is dOriginalName <starts> `Test`. The angle-bracket operator format (for example <starts> or <substring>) is mandatory. You can further refine the query by adding more parameters.

This is a sample SOA composite with two external references, genericSoapPort and FileAdapter.

ucmComposite

This is a sample BPEL process flow that demonstrates how to retrieve the file and save a copy to a local directory using the File Adapter. If the IdcService is GET_SEARCH_RESULTS, the file is not saved. In a real scenario, you would search, check out, and start processing the file.

 

ucmBPEL1

The original file name is preserved when copying it to a local directory by passing the header property to the FileAdapter. For example, create a variable fileName and use assign as follows:

1. get file name from the response message in your <assign> activity as follows:

<from expression="bpws:getVariableData('InvokeGenericSoapPort_GenericSoapOperation_OutputVariable','GenericResponse','/ns2:GenericResponse/ns2:Service/ns2:Document/ns2:ResultSet/ns2:Row/ns2:Field[@name=&quot;dOriginalName&quot;]')"/>
<to variable="fileName"/>

Please make note of the XPath expression as this will assist you to retrieve other metadata.

2. Pass this fileName variable to the <invoke> of the FileAdapter as follows:

<bpelx:inputProperty name="jca.file.FileName" variable="fileName"/>

Note: For the QueryText syntax to work, add the property AllowNativeQueryFormat=true manually to the ../CommonDomain/ucm/cs/config/config.cfg file and restart the managed server.
Otherwise, the typical error is: “StatusMessage”>Unable to retrieve search results. Parsing error at character xx in query….”

Testing SOA Composite:

After the composite is deployed in your SOA server, you can test it either from Enterprise Manager (EM) or using SoapUI. These are the sample request messages for GET_SEARCH_RESULTS and GET_FILE.
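
The request messages themselves can be sketched as below, shown here as Node.js strings so they can be pasted into SoapUI or sent programmatically. The dID value (401) is a placeholder taken from the earlier example, the ucm namespace is assumed to be http://www.oracle.com/UCM as seen in the response sample further below, and the OWSM security header shown previously must be added to the SOAP header before posting to /idcws/GenericSoapPort.

// Illustrative request messages for testing /idcws/GenericSoapPort.
// dID 401 is a placeholder; add the required OWSM security header to <soapenv:Header>.
var searchRequest =
  '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"' +
  '                  xmlns:ucm="http://www.oracle.com/UCM">' +
  ' <soapenv:Header/>' +
  ' <soapenv:Body>' +
  '  <ucm:GenericRequest webKey="cs">' +
  '   <ucm:Service IdcService="GET_SEARCH_RESULTS">' +
  '    <ucm:Document>' +
  '     <ucm:Field name="QueryText">dOriginalName &lt;starts&gt; `Test`</ucm:Field>' +
  '    </ucm:Document>' +
  '   </ucm:Service>' +
  '  </ucm:GenericRequest>' +
  ' </soapenv:Body>' +
  '</soapenv:Envelope>';

var getFileRequest =
  '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"' +
  '                  xmlns:ucm="http://www.oracle.com/UCM">' +
  ' <soapenv:Header/>' +
  ' <soapenv:Body>' +
  '  <ucm:GenericRequest webKey="cs">' +
  '   <ucm:Service IdcService="GET_FILE">' +
  '    <ucm:Document>' +
  '     <ucm:Field name="dID">401</ucm:Field>' +
  '    </ucm:Document>' +
  '   </ucm:Service>' +
  '  </ucm:GenericRequest>' +
  ' </soapenv:Body>' +
  '</soapenv:Envelope>';

console.log(searchRequest);
console.log(getFileRequest);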

The following screens show the SOA composites for “GET_SEARCH_RESULTS” and “GET_FILE”:

searchfile

getfile

Get_File Response snippet with critical objects:

<ns2:GenericResponse xmlns:ns2="http://www.oracle.com/UCM">
<ns2:Service IdcService="GET_FILE">
<ns2:Document>
<ns2:Field name="dID">401</ns2:Field>
<ns2:Field name="IdcService">GET_FILE</ns2:Field>
....
<ns2:ResultSet name="FILE_DOC_INFO">
<ns2:Row>
<ns2:Field name="dID">401</ns2:Field>
<ns2:Field name="dDocName">UCMFA000401</ns2:Field>
<ns2:Field name="dDocType">Document</ns2:Field>
<ns2:Field name="dDocTitle">JRD Test</ns2:Field>
<ns2:Field name="dDocAuthor">FAAdmin</ns2:Field>
<ns2:Field name="dRevClassID">401</ns2:Field>
<ns2:Field name="dOriginalName">Readme.html</ns2:Field>
</ns2:Row>
</ns2:ResultSet>
</ns2:ResultSet>
<ns2:File name="" href="/u01/app/fa/config/domains/fusionhost.mycompany.com/CommonDomain/ucm/cs/vault/document/bwzh/mdaw/401.html">
<ns2:Contents>
<xop:Include href="cid:7405676a-11f8-442d-b13c-f8f6c2b682e4" xmlns:xop="http://www.w3.org/2004/08/xop/include"/>
</ns2:Contents>
</ns2:File>
</ns2:Document>
</ns2:Service>
</ns2:GenericResponse>

Import (Upload) File for HDL

The above sample can also be used to import files into the WebCenter Content repository for inbound integration or other use cases. The service name is CHECKIN_UNIVERSAL.

Summary

This post demonstrates how to secure and automate the export and import of data files in WebCenter Content server implemented by Fusion HCM Cloud. It further demonstrates how integration tools like SOA can be implemented to automate, extend and orchestrate integration between HCM in the cloud and Oracle or non-Oracle applications, either in Cloud or on-premise sites.

The SOA sample code is here.

REST Adapter and JSON Translator in SOA/OSB 12.1.3


If you are using the REST adapter in SOA/OSB 12.1.3, you will probably encounter requirements where you need to respond with a JSON array that has no object name or name/value pairs, and that must be valid according to the RFC4627 specification. For example:

["USA","Canada","Brazil","Australia","China","India"]

In SOA/OSB 12.1.3, the REST adapter requires you to design an XML schema in order to generate the JSON format you require. If you want to generate the above JSON format, you need to understand how the JSON translator works in 12.1.3.

JSON and XML, although different, have some similarities. Hence, JSON constructs can be mapped to XML and vice-versa. The inherent differences between these two formats are handled by following some pre-defined conventions. The convention used in SOA 12.1.3 is based on the BadgerFish convention. Here are some of the rules:

 

Each rule below shows the XML input, the resulting JSON, and a comment:

  • XML: <AccountName>Peter</AccountName>
    JSON: { "AccountName" : "Peter" }
    Comment: XML elements are mapped to JSON object properties.

  • XML: <AccountName isOpen="true">Peter</AccountName>
    JSON: { "AccountName" : { "@isOpen" : true, "$" : "Peter" } }
    Comment: XML attributes are mapped to JSON object properties, with property names starting with the @ symbol. When elements have attributes defined in the XML schema, text nodes are mapped to an object property with the property name $. This is true even if at runtime the attributes do not occur.

  • XML: <Address><City>San Francisco</City></Address>
    JSON: { "Address" : { "City" : "San Francisco" } }
    Comment: Nested elements become nested objects.

  • XML: <Name>Peter</Name><Name>John</Name>
    JSON: { "Name" : [ "Peter", "John" ] }
    Comment: Elements with maxOccurs > 1 in their schemas (repeating elements) become JSON arrays.

  • XML: <RootElement><Country>USA</Country></RootElement>
    JSON: { "Country" : "USA" }
    Comment: XML root elements are dropped when converting to JSON. In the reverse direction, a root element is added when converting JSON to XML; in such cases the name of the root element is obtained from the schema. This is done because JSON can have multiple top-level object properties, which would result in multiple root elements, which is not valid in XML.

  • XML: <Integer>10</Integer><String>string-value</String><Boolean>true</Boolean>
    JSON: { "Integer" : 10, "String" : "string-value", "Boolean" : true }
    Comment: The JSON data types – boolean, string and number – are supported. When converting XML to JSON, the appropriate JSON type is generated based on the type defined in the XML schema.

  • XML: <RootElement xmlns="http://xmlns.oracle.com/country"><Country>USA</Country></RootElement>
    JSON: { "Country" : "USA" }
    Comment: The RootElement and all namespace information (namespace declarations and prefixes) are dropped when converting XML to JSON. On converting the JSON back to XML, the namespace information (obtained from the schema) is added back to the XML.

  • XML: <customers><customer>Peter</customer><customer>John</customer></customers>
    JSON: [ "Peter", "John" ]
    Comment: Top-level arrays – the nxsd annotation nxsd:jsonTopLevelArray="true" can be set in the schema to indicate that the JSON will have a top-level array.

The following scenarios are not handled by the JSON translator:

  • A choice group with child elements belonging to different namespaces having the same (local) name, and a sequence group with child elements having duplicate local names. This is because all namespace information is dropped when converting XML to JSON, which would translate to a JSON object with duplicate keys – not a valid format according to the RFC4627 specification. For example,
    <productList>
    	<products>
    		<product>
    			<productCode>1</productCode>
    			<productDesc>product 1</productDesc>
    		</product>
    		<product>
    			<productCode>2</productCode>
    			<productDesc>product 2</productDesc>
    		</product>
    	</products>
    </productList>
  • Arrays within arrays, for example: [ [ "Harry", "Potter"] , ["Peter", "Pan"]]
  • Mixed arrays, for example:  [ [ "Harry", "Potter"] , “”, {“Peter”  : “Pan” }]
  • Handling JSON null
  • XML Schema Instance (xsi) attributes are not supported.

In order to generate the required JSON array format: ["USA","Canada","Brazil","Australia","China","India"], you need to have a XML schema similar to the example as shown below:

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns="http://TargetNamespace.com/ListOfValues_countries_response"
            targetNamespace="http://TargetNamespace.com/ListOfValues_countries_response" 
            elementFormDefault="qualified"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" 
            nxsd:version="JSON" nxsd:jsonTopLevelArray="true"
            nxsd:encoding="UTF-8">
    <xsd:element name="Root-Element">
        <xsd:complexType>
            <xsd:sequence>
                <xsd:element name="topLevelArray" maxOccurs="unbounded" type="xsd:string"/>
            </xsd:sequence>
        </xsd:complexType>
    </xsd:element>
</xsd:schema>

When the JSON translator converts the XML to JSON format, the XML root elements are dropped. In the reverse direction, a root element is added when converting JSON to XML; in such cases the name of the root element is obtained from the schema. This is because JSON can have multiple top-level object properties, which would result in multiple root elements, which is not valid in XML. An nxsd annotation nxsd:jsonTopLevelArray="true" can be set in the schema to indicate that the JSON will have a top-level array.

One of the options to generate the required XML schema is to use the Native Format Builder. Read this blog about using the Native Format Builder: http://www.ateam-oracle.com/introduction-to-fmw-12c-rest-adapter/, and the Oracle documentation: http://docs.oracle.com/middleware/1213/soasuite/develop-soa/soa-rest-integration.htm#SOASE88861

Invoke Fusion Cloud Secured RESTFul Web Services


Introduction

The objective of this blog is to demonstrate how to invoke secured RestFul web services from Fusion Cloud using Oracle Service Oriented Architecture (SOA) as an Integration hub for real time integration with other clouds and on-premise applications. SOA could be on-premise or in the cloud (PAAS). The SOA composites deployed in on-premise SOA can be migrated to SOA in cloud.

What is REST?

REST stands for Representational State Transfer. It ignores the details of implementation and applies a set of interaction constraints. Web service APIs that adhere to the REST architectural constraints are called RESTful. HTTP-based RESTful APIs are defined with the following aspects:

  • Exactly one entry point – For example: http://example.com/resources/
  • Support of media type data – JavaScript Object Notation (JSON) and XML are common
  • Standard HTTP Verbs (GET, PUT, POST, PATCH or DELETE)
  • Hypertext links to reference state
  • Hypertext links to reference related resources

Resources & Collections

The Resources can be grouped into collections. Each collection is homogeneous and contains only one type of resource. For example:

  • /api/ – API entry point. Example: /fusionApi/resources
  • /api/:coll/ – Top-level collection :coll. Example: /fusionApi/resources/department
  • /api/:coll/:id – Resource ID inside a collection. Example: /fusionApi/resources/department/10
  • /api/:coll/:id/:subcoll – Sub-collection. Example: /fusionApi/resources/department/10/employees
  • /api/:coll/:id/:subcoll/:subid – Sub-resource ID. Example: /fusionApi/resources/department/10/employees/1001

 

Invoking Secured RestFul Service using Service Oriented Architecture (SOA)

SOA 12c supports REST Adapter and it can be configured as a service binding component in a SOA Composite application. For more information, please refer to this link. In order to invoke a secured RestFul service, Fusion security requirements must be met. These are the following requirements:

Fusion Applications Security

All external URLs in the Oracle Fusion Cloud, for RESTful Services, are secured using Oracle Web Security Manager (OWSM). The server policy is “oracle/http_jwt_token_client_policy” that allows the following client authentication types:

  • HTTP Basic Authentication over Secure Socket Layer (SSL)
  • Oracle Access Manager(OAM) Token-service
  • Simple and Protected GSS-API Negotiate Mechanism (SPNEGO)
  • SAML token

JSON Web Token (JWT) is a light-weight implementation for web services authentication. A client with a valid JWT token is allowed to call the REST service until the token expires. The existing OWSM policy “oracle/wss11_saml_or_username_token_with_message_protection_service_policy” includes the JWT over SSL assertion. For more information, please refer to this.

The client must provide one of the above policies in the security headers of the invocation call for authentication. In SOA, a client policy may be attached from Enterprise Manager (EM) to decouple it from the design time.
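
As an illustration of the first option, HTTP Basic Authentication over SSL, the following minimal Node.js sketch calls a Fusion Cloud REST resource. The host name, resource path and credentials are hypothetical placeholders and must be replaced with values valid for your environment.

// Minimal sketch: HTTP Basic Authentication over SSL against a Fusion Cloud REST resource.
// Host, resource path and credentials below are hypothetical placeholders.
var https = require('https');

var options = {
  host: 'hcm-your-pod.oraclecloud.com',          // placeholder: your Fusion Cloud host
  port: 443,
  path: '/hcmCoreApi/resources/latest/emps',     // placeholder: an example HCM REST resource
  method: 'GET',
  headers: {
    'Authorization': 'Basic ' + Buffer.from('integration.user:your-password').toString('base64'),
    'Accept': 'application/json'
  }
};

var req = https.request(options, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log('Status: ' + res.statusCode);
    console.log(body);                            // JSON payload returned by the resource
  });
});

req.on('error', function (e) { console.error(e); });
req.end();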

Fusion Security Roles

The user must have appropriate Fusion Roles including respective data security roles to view or change resources in Fusion Cloud. Each product pillar has respective roles. For example in HCM, a user must have any role that inherits the following roles:

  • HCM REST Services Duty – Example: “Human Capital Management Integration Specialist”
  • Data security Roles that inherit “Person Management Duty” – Example: “Human Resource Specialist – View All”

 

Design SOA Code using JDeveloper

In your SOA composite editor, right-click the Exposed Services swimlane and select Insert > REST. This action adds REST support as a service binding component to interact with the appropriate service component.

This is the sample SOA composite with the REST Adapter using a Mediator component (you can also use BPEL):

rest_composite

The following screens show how to configure the REST Adapter as an external reference:

REST Adapter Binding

rest_adapter_config_1

REST Operation Binding

rest_adapter_config_2

The REST Adapter converts the JSON response to XML using the Native Format Builder (NXSD). For more information on configuring NXSD from JSON to XML, please refer to this link.

generic_json_to_xml_nxd

Attaching Oracle Web Service Manager (OWSM) Policy

Once the SOA composite is deployed to your SOA server, the HTTP Basic Authentication OWSM policy is attached as follows:

Navigate to your composite from EM and click on policies tab as follows:

 

rest_wsm_policy_from_EM_2

 

Identity Propagation

Once the OWSM policy is attached to your REST reference, the HTTP token can be passed using the Credential Store. Please create a credential store key as follows:

1. Right-Click on  SOA Domain and select Security/Credentials.

rest_credential_1

2. Please see the following screen to create a key under oracle.wsm.security map:

 

rest_credential_2

Note: If oracle.wsm.security map is missing, then create this map before creating a key.

 

By default, the OWSM policy uses the basic.credentials key. To use the newly created key from above, override the default key using the following instructions:

1. Navigate to REST reference binding as follows:

rest_wsm_overridepolicyconfig

rest_wsm_overridepolicyconfig_2

Replace basic.credentials with your new key value.

 

Secure Socket Layer (SSL) Configuration

In Oracle Fusion Applications, the OWSM policy mandates the HTTPS protocol. For an introduction to SSL and detailed configuration, please refer to this link.

The cloud server certificate must be imported in two locations as follows:

1. keytool -import -alias slc08ykt -file /media/sf_C_DRIVE/JDeveloper/mywork/MyRestProject/facert.cer -keystore /oracle/xehome/app/soa12c/wlserver/server/lib/DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase

This is the output:

Owner: CN=*.us.mycompany.com, DC=us, DC=mycompany, DC=com
Issuer: CN=*.us.mycompany.com, DC=us, DC=mycompany, DC=com
Serial number: 7
Valid from: Mon Apr 25 09:08:55 PDT 2011 until: Thu Apr 22 09:08:55 PDT 2021
Certificate fingerprints:
MD5: 30:0E:B4:91:F3:A4:A7:EE:67:6F:73:D3:E1:1B:A6:82
SHA1: 67:93:15:14:3E:64:74:27:32:32:26:43:FF:B8:B9:E6:05:A8:DE:49
SHA256: 01:0E:2A:8A:D3:A9:3B:A4:AE:58:4F:AD:2C:E7:BD:45:B7:97:6F:A0:C4:FA:96:A5:29:DD:77:85:3A:05:B1:B8
Signature algorithm name: MD5withRSA
Version: 1
Trust this certificate? [no]: yes
Certificate was added to keystore

2. keytool -import -alias <name> -file /media/sf_C_DRIVE/JDeveloper/mywork/MyRestPorject/facert.cer -trustcacerts -keystore /oracle/xehome/app/jdk1.7.0_55/jre/lib/security/cacerts

This is the output:

Enter keystore password:
Owner: CN=*.us.mycompany.com, DC=us, DC=mycompany, DC=com
Issuer: CN=*.us.mycompany.com, DC=us, DC=oracle, DC=com
Serial number: 7
Valid from: Mon Apr 25 09:08:55 PDT 2011 until: Thu Apr 22 09:08:55 PDT 2021
Certificate fingerprints:
MD5: 30:0E:B4:91:F3:A4:A7:EE:67:6F:73:D3:E1:1B:A6:82
SHA1: 67:93:15:14:3E:64:74:27:32:32:26:43:FF:B8:B9:E6:05:A8:DE:49
SHA256: 01:0E:2A:8A:D3:A9:3B:A4:AE:58:4F:AD:2C:E7:BD:45:B7:97:6F:A0:C4:FA:96:A5:29:DD:77:85:3A:05:B1:B8
Signature algorithm name: MD5withRSA
Version: 1
Trust this certificate? [no]: yes
Certificate was added to keystore

You must restart Admin and SOA Servers.

 

Testing

Deploy the above composite in your SOA server. The SOA composite can be invoked from EM or using tools like SOAPUI. Please see the following link to test REST adapter using HTTP Analyzer.

Conclusion

This blog demonstrates how to invoke secured REST services from Fusion Applications cloud using SOA. It provides detailed configuration on importing cloud keystores and attaching OWSM policies. This sample supports multiple patterns such as cloud-to-cloud, cloud-to-OnPremise, cloud-to-BPO, etc.

 

 

 

Implementing Upsert for Oracle Service Cloud APIs


Introduction

Oracle Service Cloud provides a powerful, highly scalable SOAP based batch API supporting all the usual CRUD style operations. We have recently worked with a customer who wants to leverage this API at large scale but requires the ability to have ‘upsert’ logic in place, i.e. either create or update data in Oracle Service Cloud (OSvC) depending on whether an object already exists in OSvC or not. At this time the OSvC API does not provide native support for upsert, but this article will show an approach to accomplish the same leveraging Oracle SOA Suite. It also provides data points regarding the overhead and the scalability in the context of high-volume interfaces into OSvC.

Main Article

Why Upsert?

One might ask why one would need upsert logic in the first place. Aside from the fact that this is common practice in some well-established applications such as Siebel, there are situations where upsert capabilities come in very handy. For example, if one cannot rely on the source system feeding data into a target application to be able to tell whether some data has been provided earlier or not, it’s useful to be able to determine this on the target side and take the right action, i.e. create a new record or object in the target or update an existing record/object with new data. Clearly, creating duplicate information in the target is the one thing to be avoided most.

Maintaining Cross-References

In order to determine if a particular source record has already been loaded into OSvC previously or not, cross-reference information must be maintained at some place. There are different approaches to this, depending on system capabilities this could be either in the source system, the integration middleware, or the target system. There are specific advantages for each approach, but this is outside the scope of this article. In this case we want to leverage OSvC extensibility capabilities to provide additional attributes that can hold the references to the source record in the source system. A common practice is to use a pair of attributes such as (SourceSystem,SourceSystemId) for this purpose. With the OSvC object designer it’s a straightforward task to do this, e.g as shown for the Contact object below:

Custom Cross-Reference Attributes

Performance and scalability really matter in this scenario, so we have to make sure that the queries to determine whether a record already exists will perform well. We will ultimately construct ROQL queries that translate to point lookup queries in the OSvC database in order to verify whether a set of (SourceSystem, SourceSystemId) pairs exists in the OSvC database. Therefore, having a custom index on these two custom attributes will allow the OSvC database to execute such queries in a performant way, avoiding full table scans. In the OSvC object designer, defining a custom index is straightforward:

Custom Index

With that in place (after deploying the updated object to the system) we have all we need to store, maintain, and query the cross-references to the source record in OSvC. In the next section we will discuss how this can be leveraged in a SOA implementation to realise the upsert logic.

SOA Implementation

As we are looking at a batch-style interface here with the need to process large volumes of records, it certainly does not make sense to query OSvC for each record to determine whether we need to execute a Create or Update operation. Instead, as we want to process a bulk of say 100 objects in one service invocation against OSvC, we rather design it in the following way to keep round trips to a minimum:

Step 1: SOA composite receives a bulk of 100 records.

Step 2: BPEL process constructs a single ROQL query to determine for all records in one go whether they already exist in OSvC or not. This ROQL will be executed via the queryCSV API method. Running individual object queries would not scale very well for this scenario.

Step 3: BPEL constructs the bulk API payload for OSvC by combining Create and Update operations.

Step 4: BPEL invokes the OSvC batch API and processes the response e.g. for a reply to the source system.

In other words, we have two interactions with OSvC. The first one is to retrieve the cross-referencing information held in custom attributes and the second one does the actual data processing taking the cross-referencing into account.
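
Purely as an illustration of this decision logic (the article implements it in BPEL/XSLT, as shown in the following sections), a simple Node.js style sketch of the create-versus-update split could look like the following, where queryCsvRows stands for the rows returned by the cross-reference lookup:

// Illustrative only: split a batch into Create and Update operations based on the
// cross-reference rows ("<osvcId>,<sourceSystemId>") returned by the queryCSV lookup.
function splitBatch(records, queryCsvRows) {
  // Build a map: SourceSystemId -> OSvC object ID
  var xref = {};
  queryCsvRows.forEach(function (row) {
    var parts = row.split(',');                  // e.g. "12466359,15964985"
    xref[parts[1]] = parts[0];
  });

  return records.map(function (record) {
    var osvcId = xref[record.sourceSystemId];
    return osvcId
      ? { operation: 'UpdateMsg', id: osvcId, data: record }   // already exists in OSvC
      : { operation: 'CreateMsg', data: record };              // new object in OSvC
  });
}

// Example with rows similar to the sample queryCSV response shown below:
var rows = ['12466359,15964985', '12466369,15964987'];
var records = [{ sourceSystemId: '15964985' }, { sourceSystemId: '15964986' }];
console.log(splitBatch(records, rows));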

Upsert BPEL Process

As stated previously, in Step 2 above we need to construct a single ROQL query that takes care of looking up any cross-references for the list of records currently processed by the BPEL process. This is accomplished by string concatenation, adding a criterion to the ROQL where clause per record. The condition ensures that the core structure of the query ‘SELECT … FROM … WHERE’ is set for the first record, while for each subsequent record it just adds another OR clause.

  <xsl:variable name="whereclause">
    <xsl:for-each select="/ns0:LoadDataCollection/ns0:LoadData">
      <xsl:choose>
        <xsl:when test="position() = 1">
          <xsl:value-of select="concat (&quot;select c.id, c.CustomFields.CO.SourceSystemId from Contact c where (c.CustomFields.CO.SourceSystem='LegacyApp1' and c.CustomFields.CO.SourceSystemId='&quot;, ns0:cdiId, &quot;')&quot; )"/>
        </xsl:when>
        <xsl:otherwise>
          <xsl:value-of select="concat (&quot; or (c.CustomFields.CO.SourceSystem='LegacyApp1' and c.CustomFields.CO.SourceSystemId='&quot;, ns0:cdiId, &quot;')&quot; )"/>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:for-each>
  </xsl:variable>
  <xsl:template match="/">
    <tns:QueryCSV>
      <tns:Query>
        <xsl:value-of select="$whereclause"/>
      </tns:Query>
      <tns:PageSize>10000</tns:PageSize>
      <tns:Delimiter>,</tns:Delimiter>
      <tns:ReturnRawResult>false</tns:ReturnRawResult>
      <tns:DisableMTOM>true</tns:DisableMTOM>
    </tns:QueryCSV>
  </xsl:template>

This results in a ROQL query in the following structure:

select c.id, c.CustomFields.CO.SourceSystemId 
from Contact c 
where (c.CustomFields.CO.SourceSystem='LegacyApp1' and c.CustomFields.CO.SourceSystemId='15964985') 
or (c.CustomFields.CO.SourceSystem='LegacyApp1' and c.CustomFields.CO.SourceSystemId='15964986') 
or (c.CustomFields.CO.SourceSystem='LegacyApp1' and c.CustomFields.CO.SourceSystemId='15964987')
etc.

The corresponding result from running this ROQL against OSvC using the QueryCSV operation provides us entries for all source records that already exists based on the SourceSystemId criteria. Vice versa, for non-existing references in OSvC there isn’t a result record in the queryCSV response:

         <n0:QueryCSVResponse xmlns:n0="urn:messages.ws.rightnow.com/v1_2">
            <n0:CSVTableSet>
               <n0:CSVTables>
                  <n0:CSVTable>
                     <n0:Name>Contact</n0:Name>
                     <n0:Columns>ID,SourceSystemId</n0:Columns>
                     <n0:Rows>
                        <n0:Row>12466359,15964985</n0:Row>
                        <n0:Row>12466369,15964987</n0:Row>
                        <n0:Row>12466379,15964989</n0:Row>
                        <n0:Row>12466387,15965933</n0:Row>
                        <n0:Row>12466396,15965935</n0:Row>
                        <n0:Row>12466404,15965937</n0:Row>
                     </n0:Rows>
                  </n0:CSVTable>
               </n0:CSVTables>
            </n0:CSVTableSet>
         </n0:QueryCSVResponse>

So in the case of the example we can conclude that for the record referencing 15964985, it would have to be an update, while it would be a create for reference 15964986.

In the next Step 3 this result needs to be merged with the actual data to construct the payload for the OSvC Batch API. We conditionally construct either a CreateMsg or UpdateMsg structure depending on whether the previous ROQL query has retrieved the source application key or not. And if it’s an update, it’s essential to include the OSvC object identifier in the RNObjects structure so that the API is pointed to the right object in OSvC for update.

  <xsl:template match="/">
    <ns1:Batch>
      <xsl:for-each select="/ns0:LoadDataCollection/ns0:LoadData">
        <xsl:variable name="appKey" select="ns0:appKey"/>
        <xsl:choose>
          <xsl:when test="count($InvokeLookup_QueryCSV_OutputVariable.parameters/ns1:QueryCSVResponse/ns1:CSVTableSet/ns1:CSVTables/ns1:CSVTable/ns1:Rows/ns1:Row[contains(text(),$appKey)]) = 0 ">
            <ns1:BatchRequestItem>
              <ns1:CreateMsg>
                <ns1:RNObjects xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="v13:Contact">
                  <v13:CustomFields xmlns:n3="urn:generic.ws.rightnow.com/v1_2">
                    <n3:GenericFields name="CO" dataType="OBJECT">
                      <n3:DataValue>
                        <n3:ObjectValue xsi:type="n3:GenericObject">
                          <n3:ObjectType>
                            <n3:Namespace/>
                            <n3:TypeName>ContactCustomFieldsCO</n3:TypeName>
                          </n3:ObjectType>
                          <n3:GenericFields name="SourceSystem" dataType="STRING">
                            <n3:DataValue>
                              <n3:StringValue>LegacyApp1</n3:StringValue>
                            </n3:DataValue>
                          </n3:GenericFields>
                          <n3:GenericFields name="SourceSystemId" dataType="STRING">
                            <n3:DataValue>
                              <n3:StringValue>
                                <xsl:value-of select="$appKey"/>
                              </n3:StringValue>
                            </n3:DataValue>
                          </n3:GenericFields>
                        </n3:ObjectValue>
                      </n3:DataValue>
                    </n3:GenericFields>
                  </v13:CustomFields>
                  <v13:Name>
                    <v13:First>
                      <xsl:value-of select="ns0:firstName"/>
                    </v13:First>
                    <v13:Last>
                      <xsl:value-of select="ns0:lastName"/>
                    </v13:Last>
                  </v13:Name>
                </ns1:RNObjects>
                <ns1:ProcessingOptions>
                  <ns1:SuppressExternalEvents>true</ns1:SuppressExternalEvents>
                  <ns1:SuppressRules>true</ns1:SuppressRules>
                </ns1:ProcessingOptions>
              </ns1:CreateMsg>
            </ns1:BatchRequestItem>
          </xsl:when>
          <xsl:otherwise>
            <ns1:BatchRequestItem>
              <ns1:UpdateMsg>
                <ns1:RNObjects xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="v13:Contact">
                  <ID xmlns="urn:base.ws.rightnow.com/v1_2">
                    <xsl:attribute name="id">
                      <xsl:value-of select="substring-before($InvokeLookup_QueryCSV_OutputVariable.parameters/ns1:QueryCSVResponse/ns1:CSVTableSet/ns1:CSVTables/ns1:CSVTable/ns1:Rows/ns1:Row[contains(text(),$appKey)]/text(),',')"/>
                    </xsl:attribute>
                  </ID>
                  <v13:CustomFields xmlns:n3="urn:generic.ws.rightnow.com/v1_2">
                    <n3:GenericFields name="CO" dataType="OBJECT">
                      <n3:DataValue>
                        <n3:ObjectValue xsi:type="n3:GenericObject">
                          <n3:ObjectType>
                            <n3:Namespace/>
                            <n3:TypeName>ContactCustomFieldsCO</n3:TypeName>
                          </n3:ObjectType>
                          <n3:GenericFields name="SourceSystem" dataType="STRING">
                            <n3:DataValue>
                              <n3:StringValue>LegacyApp1</n3:StringValue>
                            </n3:DataValue>
                          </n3:GenericFields>
                          <n3:GenericFields name="SourceSystemId" dataType="STRING">
                            <n3:DataValue>
                              <n3:StringValue>
                                <xsl:value-of select="$appKey"/>
                              </n3:StringValue>
                            </n3:DataValue>
                          </n3:GenericFields>
                        </n3:ObjectValue>
                      </n3:DataValue>
                    </n3:GenericFields>
                  </v13:CustomFields>
                  <v13:Name>
                    <v13:First>
                      <xsl:value-of select="ns0:firstName"/>
                    </v13:First>
                    <v13:Last>
                      <xsl:value-of select="ns0:lastName"/>
                    </v13:Last>
                  </v13:Name>
                </ns1:RNObjects>                  
                <ns1:ProcessingOptions>
                  <ns1:SuppressExternalEvents>true</ns1:SuppressExternalEvents>
                  <ns1:SuppressRules>true</ns1:SuppressRules>
                </ns1:ProcessingOptions>
              </ns1:UpdateMsg>
            </ns1:BatchRequestItem>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:for-each>
    </ns1:Batch>
  </xsl:template>

The outcome of this transformation is a list of UpdateMsg and CreateMsg elements that are passed to the OSvC API in a single invocation.

From the perspective of pushing data into the SOA layer, this now transparently provides upsert logic and cross-referencing is maintained in OSvC. The next question one might ask is what is the performance overhead of the above? Or in other words: how does the extra round trip impact the overall throughput of the interface?

Performance Analysis

In order to understand the performance implications we have tested a batch interface into OSvC to synchronise Contact information with and without this upsert logic. The below diagram illustrates the throughput, i.e. number of processed Contact objects per time period for a different set of scenarios. We were executing batches of 1 million records each time with a concurrency of 50 parallel client threads.

Upsert performance results

The first, blue bar represents the throughput when the upsert logic is not in place, i.e. there is no extra round trip and all 1M records translate to Create operations in OSvC. The second bar also represents 1M create operations, but this time with the upsert logic in place. It turns out that the overhead for doing the extra round trip is negligible in such a scenario, as all the heavy lifting is done during the actual data processing. The fast lookup queries (<1s for a batch of 100 records) are practically irrelevant for this specific use case.

We have conducted further tests with a growing proportion of update operations as opposed to create operations. The throughput keeps increasing as there are more updates and fewer creates. The simple reason is that the updates in our test case were rather light (updating 5 attributes of the object) compared to the creation of the full object with a much higher number of standard and custom attributes.

Conclusion

This article has provided an approach for implementing upsert capabilities for Oracle Service Cloud APIs. We have chosen to maintain cross-referencing information in OSvC and to use Oracle SOA Suite as the integration technology. We have also provided test results indicating the performance impact of the proposed design in high-volume scenarios.

Fusion HCM Cloud – Bulk Integration Automation Using Managed File Transfer (MFT) and Node.js


Introduction

Fusion HCM Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the bulk integration to load and extract data to/from the cloud.

The inbound tool is the File Based data loader (FBL) evolving into HCM Data Loaders (HDL). HDL is a powerful tool for bulk-loading data from any source to Oracle Fusion Human Capital Management (Oracle Fusion HCM). HDL supports one-time data migration and incremental load to support co-existence with Oracle Applications such as E-Business Suite (EBS) and PeopleSoft (PSFT).

HCM Extracts is an outbound integration tool that lets you choose HCM data, gather it from the HCM database and archive it as XML. This archived raw XML data can be converted into a desired format and delivered to recipients over supported channels.

HCM cloud implements Oracle WebCenter Content, a component of Fusion Middleware, to store and secure data files for both inbound and outbound bulk integration patterns.

Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal systems and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can leverage more resources to manage the transfer of files. The built-in extensive reporting capabilities allow you to get a quick status of a file transfer and resubmit it as required.

Node.js is a programming platform that allows you to execute server-side code that is similar to JavaScript in the browser. It enables real-time, two-way connections in web applications with push capability, allowing a non-blocking, event-driven I/O paradigm. Node.js is built on an event-driven, asynchronous model. The in-coming requests are non-blocking. Each request is passed off to an asynchronous callback handler. This frees up the main thread to respond to more requests.

This post focuses on how to automate HCM Cloud batch integration using MFT (Managed File Transfer) and Node.js. MFT can receive files, decrypt/encrypt files and invoke Service Oriented Architecture (SOA) composites for various HCM integration patterns.

 

Main Article

Managed File Transfer (MFT)

Oracle Managed File Transfer (MFT) is a high performance, standards-based, end-to-end managed file gateway. It features design, deployment, and monitoring of file transfers using a lightweight web-based design-time console that includes file encryption, scheduling, and embedded FTP and sFTP servers.

Oracle MFT provides built-in compression, decompression, encryption and decryption actions for transfer pre-processing and post-processing. You can create new pre-processing and post-processing actions, which are called callouts.

The callouts can be associated with either the source or the target. The sequence of processing action execution during a transfer is as follows:

  1. Source pre-processing actions
  2. Target pre-processing actions
  3. Payload delivery
  4. Target post-processing actions

Source Pre-Processing

Source pre-processing is triggered right after a file has been received and has identified a matching Transfer. This is the best place to do file validation, compression/decompression, encryption/decryption and/or extend MFT.

Target Pre-Processing

Target pre-processing is triggered just before the file is delivered to the Target by the Transfer. This is the best place to send files to external locations and protocols not supported in MFT.

Target Post-Processing

Post-processing occurs after the file is delivered. This is the best place for notifications, analytic/reporting or maybe remote endpoint file rename.

For more information, please refer to the Oracle MFT document

 

HCM Inbound Flow

This is a typical Inbound FBL/HDL process flow:

inbound_mft

The FBL/HDL process for HCM is a two-phase web services process as follows:

  • Upload the data file to WCC/UCM using WCC GenericSoapPort web service
  • Invoke “LoaderIntegrationService” or “HCMDataLoader” to initiate the loading process.

The following diagram illustrates the MFT steps with respect to “Integration” for FBL/HDL:

inbound_mft_2

HCM Outbound Flow

This is a typical outbound batch Integration flow using HCM Extracts:

extractflow

 

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM either by user or through Enterprise Scheduler Service (ESS) – this report is stored in WCC under the hcm/dataloader/export account.
  • MFT scheduler can pull files from WCC
  • The data file(s) are either uploaded to the customer’s sFTP server as pass through or to Integration tools such as Service Oriented Architecture (SOA) for orchestrating and processing data to target applications in cloud or on-premise.

The following diagram illustrates the MFT orchestration steps in “Integration” for Extract:

 

outbound_mft

 

The extracted file could be delivered to the WebCenter Content server. HCM Extract has an ability to generate an encrypted output file. In Extract delivery options ensure the following options are correctly configured:

  • Set the HCM Delivery Type to “HCM Connect”
  • Select an Encryption Mode from the four supported encryption types, or select None
  • Specify the Integration Name – this value is used to build the title of the entry in WebCenter Content

 

Extracted File Naming Convention in WebCenter Content

The file will have the following properties:
Author: FUSION_APPSHCM_ESS_APPID
Security Group: FAFusionImportExport
Account: hcm/dataloader/export
Title: HEXTV1CON_{IntegrationName}_{EncryptionType}_{DateTimeStamp}

 

Fusion Applications Security

The content in WebCenter Content is secured through users, roles, privileges and accounts. The user could be any valid user with a role such as “Integration Specialist.” The role may have privileges such as read, write and delete. The accounts are predefined by each application. For example, HCM uses /hcm/dataloader/import and /hcm/dataloader/export respectively.
The FBL/HDL web services are secured through Oracle Web Service Manager (OWSM) using the following policy: oracle/wss11_saml_or_username_token_with_message_protection_service_policy.

The client must satisfy the message protection policy to ensure that the payload is encrypted or sent over the SSL transport layer.

A client policy that can be used to meet this requirement is: “oracle/wss11_username_token_with_message_protection_client_policy”

To use this policy, the message must be encrypted using a public key provided by the server. When the message reaches the server it can be decrypted by the server’s private key. A KeyStore is used to import the certificate and it is referenced in the subsequent client code.

The public key can be obtained from the certificate provided in the service WSDL file.

Encryption of Data File using Pretty Good Privacy (PGP)

All data files transit over a network via SSL. In addition, HCM Cloud supports encryption of data files at rest using PGP.
Fusion HCM supports the following types of encryption:

  • PGP Signed
  • PGP Unsigned
  • PGPX509 Signed
  • PGPX509 Unsigned

To use this PGP Encryption capability, a customer must exchange encryption keys with Fusion for the following:

  • Fusion can decrypt inbound files
  • Fusion can encrypt outbound files
  • Customer can encrypt files sent to Fusion
  • Customer can decrypt files received from Fusion

MFT Callout using Node.js

 

Prerequisites

To automate HCM batch integration patterns, the following components must be installed and configured:

 

Node.js Utility

A simple Node.js utility, “mft2hcm”, has been developed as an MFT server callout for uploading files to or downloading files from the Oracle WebCenter Content server and for initiating the HCM SaaS loader service. It utilizes the node “mft-upload” package and provides SOAP substitution templates for WebCenter (UCM) and the Oracle HCM Loader service.

Please refer to the “mft2hcm” node package for installation and configuration.

RunScript

The RunScript is configured as “Run Script Pre 01”, a callout that can be injected into MFT pre- or post-processing. This callout always sends the following default parameters to the script:

  • Filename
  • Directory
  • ECID
  • Filesize
  • Targetname (not for source callouts)
  • Sourcename
  • Createtime

Please refer to “PreRunScript” for more information on installation and configuration.

MFT Design

MFT Console enables the following tasks depending on your user roles:

Designer: Use this page to create, modify, delete, rename, and deploy sources, targets, and transfers.

Monitoring: Use this page to monitor transfer statistics, progress, and errors. You can also use this page to disable, enable, and undeploy transfer deployments and to pause, resume, and resubmit instances.

Administration: Use this page to manage the Oracle Managed File Transfer configuration, including embedded server configuration.

Please refer to the MFT Users Guide for more information.

 

HCM FBL/HDL MFT Transfer

This is a typical MFT transfer design and configuration for FBL/HDL:

MFT_FBL_Transfer

The transfer could be designed for additional steps such as compress file and/or encrypt/decrypt files using PGP, depending on the use cases.

 

HCM FBL/HDL (HCM-MFT) Target

The MFT server receives files from any Source protocol such as SFTP, SOAP, local file system or a back end integration process. The file can be decrypted, uncompressed or validated before a Source or Target pre-processing callout uploads it to UCM then notifies HCM to initiate the batch load. Finally the original file is backed up into the local file system, remote SFTP server or a cloud based storage service. An optional notification can also be delivered to the caller using a Target post-processing callout upon successful completion.

This is a typical target configuration in the MFT-HCM transfer:

Click on target Pre-Processing Action and select “Run Script Pre 01”:

MFT_RunScriptPre01

 

Enter “scriptLocation” where node package “mft2hcm” is installed. For example, <Node.js-Home>/hcm/node_modules/mft2hcm/mft2hcm.js

MFTPreScriptUpload

 

Do not check “UseFileFromScript”. This property replaces the inbound (source) file of MFT with the file from the target execution. In FBL/HDL, the response (target execution) does not contain a file.

 

HCM Extract (HCM-MFT) Transfer

An external event or scheduler triggers the MFT server to search for a file in WCC using a search query. Once a document id is identified, it is retrieved using a “Source Pre-Processing” callout which injects the retrieved file into the MFT Transfer. The file can then be decrypted, validated or decompressed before being sent to an MFT Target of any protocol such as SFTP, File system, SOAP Web Service or a back end integration process. Finally, the original file is backed up into the local file system, remote SFTP server or a cloud based storage service. An optional notification can also be delivered to the caller using a Target post-processing callout upon successful completion. The MFT server can live in either an on-premise or a cloud iPaaS hosted environment.

This is a typical configuration of HCM-MFT Extract Transfer:

MFT_Extract_Transfer

 

In the Source definition, add “Run Script Pre 01” processing action and enter the location of the script:

MFTPreScriptDownload

 

“UseFileFromScript” must be checked because the source scheduler is triggered with the mft2hcm payload (UCM-PAYLOAD-SEARCH) to initiate the search and get operations against WCC. Once the file is retrieved from WCC, this flag tells the MFT engine to substitute the source file with the file downloaded from WCC.

 

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using MFT and Node.js. The Node.js package could be replaced with WebCenter Content native APIs and SOA for orchestration. This process can also be replicated for other Fusion Applications pillars such as Oracle Enterprise Resource Planning (ERP).


HCM Atom Feed Subscriber using Node.js


Introduction

HCM Atom feeds provide notifications of Oracle Fusion Human Capital Management (HCM) events and are tightly integrated with REST services. When an event occurs in Oracle Fusion HCM, the corresponding Atom feed is delivered automatically to the Atom server. The feed contains details of the REST resource on which the event occurred. Subscribers who consume these Atom feeds use the REST resources to retrieve additional information about the resource.

For more information on Atom, please refer to this.

This post focuses on consuming and processing HCM Atom feeds using Node.js. The assumption is that the reader has some basic knowledge on Node.js. Please refer to this link to download and install Node.js in your environment.

Node.js is a programming platform that allows you to execute server-side code that is similar to JavaScript in the browser. It enables real-time, two-way connections in web applications with push capability, allowing a non-blocking, event-driven I/O paradigm. It runs on a single-threaded event loop and leverages asynchronous calls for various operations such as I/O. This is an evolution from the stateless web based on the stateless request-response paradigm. For example, when a request is sent to invoke a service such as REST or a database query, Node.js will continue serving new requests. When a response comes back, it will jump back to the respective requestor. Node.js is lightweight and provides a high level of concurrency. However, it is not suitable for CPU-intensive operations as it is single threaded.

Node.js is built on an event-driven, asynchronous model. The in-coming requests are non-blocking. Each request is passed off to an asynchronous callback handler. This frees up the main thread to respond to more requests.
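
A trivial sketch of this non-blocking model: the callback for the file read below runs only when the I/O completes, while the main thread continues immediately.

// Minimal illustration of the non-blocking, callback-based model.
var fs = require('fs');

fs.readFile('updateDate', 'utf8', function (err, data) {
  // This callback runs later, once the file I/O completes.
  if (err) {
    console.log('No timestamp file yet: ' + err.message);
  } else {
    console.log('Last feed timestamp: ' + data);
  }
});

// Execution continues immediately; the read above does not block this line.
console.log('Reading timestamp file asynchronously...');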

For more information on Node.js, please refer this.

 

Main Article

Atom feeds enable you to keep track of any changes made to feed-enabled resources in Oracle HCM Cloud. For any updates that may be of interest for downstream applications, such as new hire, terminations, employee transfers and promotions, Oracle HCM Cloud publishes Atom feeds. Your application will be able to read these feeds and take appropriate action.

Atom Publishing Protocol (AtomPub) allows software applications to subscribe to changes that occur on REST resources through published feeds. Updates are published when changes occur to feed-enabled resources in Oracle HCM Cloud. The following are the primary Atom feeds:

Employee Feeds

New hire
Termination
Employee update

Assignment creation, update, and end date

Work Structures Feeds (Creation, update, and end date)

Organizations
Jobs
Positions
Grades
Locations

The above feeds can be consumed programmatically. In this post, Node.js is implemented as one of the solutions consuming “Employee New Hire” feeds, but design and development is similar for all the supported objects in HCM.

 

Refer to my blog on how to invoke secured REST services using Node.js.

Security

The RESTFul services in Oracle HCM Cloud are protected with Oracle Web Service Manager (OWSM). The server policy allows the following client authentication types:

  • HTTP Basic Authentication over Secure Socket Layer (SSL)
  • Oracle Access Manager(OAM) Token-service
  • Simple and Protected GSS-API Negotiate Mechanism (SPNEGO)
  • SAML token

The client must provide one of the above policies in the security headers of the invocation call for authentication. The sample in this post is using HTTP Basic Authentication over SSL policy.

 

Fusion Security Roles

REST and Atom Feed Roles

To use Atom feed, a user must have any HCM Cloud role that inherits the following roles:

  • “HCM REST Services and Atom Feeds Duty” – for example, Human Capital Management Integration Specialist
  • “Person Management Duty” – for example, Human Resource Specialist

REST/Atom Privileges

 

Privilege Name – Resource and Method:

  • PER_REST_SERVICE_ACCESS_EMPLOYEES_PRIV – emps (GET, POST, PATCH)
  • PER_REST_SERVICE_ACCESS_WORKSTRUCTURES_PRIV – grades (get), jobs (get), jobFamilies (get), positions (get), locations (get), organizations (get)
  • PER_ATOM_WORKSPACE_ACCESS_EMPLOYEES_PRIV – employee/newhire (get), employee/termination (get), employee/empupdate (get), employee/empassignment (get)
  • PER_ATOM_WORKSPACE_ACCESS_WORKSTRUCTURES_PRIV – workstructures/grades (get), workstructures/jobs (get), workstructures/jobFamilies (get), workstructures/positions (get), workstructures/locations (get), workstructures/organizations (get)

 

 

Atom Payload Response Structure

The Atom feed response is in XML format. Please see the following diagram to understand the feed structure:

 

AtomFeedSample_1

 

A feed can have multiple entries. The entries are ordered by “updated” timestamp of the <entry> and the first one is the latest. There are two critical elements that will provide information on how to process these entries downstream.

Content

The <content> element contains critical attributes such as Employee Number, Phone, Suffix, CitizenshipLegislation, EffectiveStartDate, Religion, PassportNumber, NationalIdentifierType, EventDescription, LicenseNumber, EmployeeName, WorkEmail, NationalIdentifierNumber. It is in JSON format, as you can see from the above diagram.

Resource Link

If the data provided in <content> is not sufficient, the RESTful service resource link is provided to get more details. Please refer to the above diagram for the employee resource link of each entry. Node.js can invoke this RESTful resource link to retrieve the full resource.
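As a sketch of that follow-up call (the resource URL below is a placeholder for the 'href' value extracted from an entry's link element, and the credentials are placeholders too), the link can be fetched with the same Basic Authentication settings used for the feed itself:

var https = require('https');
var url = require('url');

// Placeholder: in practice this value comes from ./entry/link/[@rel="related"] in the feed
var resourceUrl = 'https://HCMHostname/path/from/entry/link';
var parsed = url.parse(resourceUrl);

var options = {
  host: parsed.hostname,
  port: 443,
  path: parsed.path,
  headers: {
    'Authorization': 'Basic ' + new Buffer('username' + ':' + 'password').toString('base64')
  }
};

https.get(options, function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() {
    console.log('Employee resource payload: ' + body);
  });
}).on('error', function(e) {
  console.log('Resource call failed: ' + e.message);
});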

 

Avoid Duplicate Atom Feed Entries

To avoid consuming feeds with duplicate entries, one of the following parameters should be provided so that only entries published since the last poll are consumed:

1. updated-min: returns entries in the collection where Atom:updated > updated-min

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-min=2015-09-16T09:16:00.000Z – returns entries published after "2015-09-16T09:16:00.000Z".

2. updated-max: returns entries in the collection where Atom:updated <= updated-max

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-max=2015-09-16T09:16:00.000Z – returns entries published at or before "2015-09-16T09:16:00.000Z".

3. updated-min & updated-max: returns entries in the collection where Atom:updated > updated-min and Atom:updated <= updated-max

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-min=2015-09-11T10:03:35.000Z&updated-max=2015-09-16T09:16:00.000Z – returns entries published after "2015-09-11T10:03:35.000Z" and at or before "2015-09-16T09:16:00.000Z".
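For example, a client that polls on a fixed window can derive both parameters from JavaScript Date values, since Date.toISOString() produces the same timestamp format shown above (the host name is a placeholder):

// Build a feed URL that only returns entries updated in the last 24 hours
var now = new Date();
var yesterday = new Date(now.getTime() - 24 * 60 * 60 * 1000);

var feedPath = '/hcmCoreApi/Atomservlet/employee/newhire' +
    '?updated-min=' + yesterday.toISOString() +
    '&updated-max=' + now.toISOString();

console.log('https://HCMHostname' + feedPath);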

Node.js Implementation

Refer to my blog on how to invoke secured REST services using Node.js. The following are things to consider when consuming feeds:

Initial Consumption

When you subscribe for the first time, you can invoke the resource without query parameters to get all the published feeds, or use the updated-min or updated-max arguments to filter the entries in the feed to begin with.

For example, the invocation path could be /hcmCoreApi/Atomservlet/employee/newhire or /hcmCoreApi/Atomservlet/employee/newhire?updated-min=<some-timestamp>

After the first consumption, the "updated" element of the first entry must be persisted and used in the next call to avoid duplication. In this prototype, the "/entry/updated" timestamp value is persisted in a file.

For example:

//persist timestamp for the next call
if (i == 0) {
  fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
    if (fserr) throw fserr;
  });
}

 

Next Call

In the next call, read the updated timestamp value from the persisted file and generate the path as follows:

//Check if updateDate file exists and is not empty
var lastFeedUpdateDate = '';
try {
  lastFeedUpdateDate = fs.readFileSync('updateDate');
  console.log('Last Updated Date is: ' + lastFeedUpdateDate);
} catch (e) {
  // handle error - e.g. the file does not exist yet on the very first run
}

if (lastFeedUpdateDate.length > 0) {
  pathUri = '/hcmCoreApi/Atomservlet/employee/newhire?updated-min=' + lastFeedUpdateDate;
} else {
  pathUri = '/hcmCoreApi/Atomservlet/employee/newhire';
}

 

Parsing Atom Feed Response

The Atom feed response is in XML format as shown previously in the diagram. In this prototype, the "node-elementtree" package is used to parse the XML. You can use any library, as long as the following data is extracted for each entry in the feed for downstream processing.

var et = require('elementtree');

//Request call
var request = http.get(options, function(res) {
  var body = "";
  res.on('data', function(data) {
    body += data;
  });
  res.on('end', function() {

    //Parse Feed Response - the structure is defined in section: Atom Payload Response Structure
    feed = et.parse(body);

    //Identify if feed has any entries
    var numberOfEntries = feed.findall('./entry/').length;

    //if there are entries, extract data for downstream processing
    if (numberOfEntries > 0) {
      console.log('Get Content for each Entry');

      //Get Data based on XPath Expression
      var content = feed.findall('./entry/content/');
      var entryId = feed.findall('./entry/id');
      var updateDate = feed.findall('./entry/updated');

      for (var i = 0; i < content.length; i++) {

        //get Resource link for the respective entry
        console.log(feed.findall('./entry/link/[@rel="related"]')[i].get('href'));

        //get Content data of the respective entry, which is in JSON format
        console.log(feed.findall('content.text'));

        //persist timestamp for the next call
        if (i == 0) {
          fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
            if (fserr) throw fserr;
          });
        }
      }
    }
  });
});

One and Only One Entry

Each entry in an Atom feed has a unique ID. For example: <id>Atomservlet:newhire:EMP300000005960615</id>

In target applications, this ID can be used as one of the keys or lookups to prevent reprocessing. The logic can be implemented in your downstream applications or in the integration space to avoid duplication.
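A minimal sketch of that lookup on the consumer side, which simply reuses the fact that the sample prototype code below writes each entry's content to a file named after its ID:

var fs = require('fs');

// entryId is the value of the <id> element, e.g. "Atomservlet:newhire:EMP300000005960615"
function isAlreadyProcessed(entryId) {
  // The prototype in this post writes each entry's content to a file named after its ID,
  // so the presence of that file indicates the entry was already handled
  return fs.existsSync(entryId);
}

console.log(isAlreadyProcessed('Atomservlet:newhire:EMP300000005960615'));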

 

Downstream Processing Pattern

A Node.js scheduler can be implemented to consume feeds periodically (a simple polling sketch is shown after the diagram below). Once the message is parsed, there are several patterns to support various use cases. In addition, you could have multiple subscribers such as employee new hire, employee termination, locations, jobs, positions, etc. For guaranteed transactions, each feed entry can be published to Messaging Cloud or Oracle Database to stage all the feeds. This pattern provides global transactions and recovery when downstream applications are unavailable or throw errors. The following diagram shows the high level architecture:

nodejs_soa_atom_pattern
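A simple way to schedule the polling in Node.js, without any additional packages, is to wrap the feed consumption in a function and call it on a fixed interval; consumeNewHireFeed below is only a placeholder for the logic shown in the sample prototype code at the end of this post:

// Placeholder for the feed consumption logic from the sample prototype code below
function consumeNewHireFeed() {
  console.log('Polling HCM Atom feed at ' + new Date().toISOString());
}

// Poll every 15 minutes; a cron-style scheduler could be used instead
var POLL_INTERVAL_MS = 15 * 60 * 1000;
setInterval(consumeNewHireFeed, POLL_INTERVAL_MS);

// Run once immediately at startup
consumeNewHireFeed();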

 

Conclusion

This post demonstrates how to consume HCM Atom feeds and process them for downstream applications. It provides details on how to consume only new feeds (avoiding duplication) since the last poll. Finally, it describes an enterprise integration pattern from consuming feeds to downstream application processing.

 

Sample Prototype Code

var et = require('elementtree');

var uname = 'username';
var pword = 'password';
var http = require('https'),
fs = require('fs');

var XML = et.XML;
var ElementTree = et.ElementTree;
var element = et.Element;
var subElement = et.SubElement;

var lastFeedUpdateDate = '';
var pathUri = '';

//Check if updateDate file exists and is not empty
try {
var lastFeedUpdateDate = fs.readFileSync('updateDate');
console.log('Last Updated Date is: ' + lastFeedUpdateDate);
} catch (e) {
// add error logic
}

//get last feed updated date to get entries since that date
if (lastFeedUpdateDate.length > 0) {
pathUri = '/hcmCoreApi/atomservlet/employee/newhire?updated-min=' + lastFeedUpdateDate;
} else {
pathUri = '/hcmCoreApi/atomservlet/employee/newhire';
}

// Generate Request Options
var options = {
ca: fs.readFileSync('HCM Cert'), //get HCM Cloud certificate - either through openssl or export from web browser
host: 'HCMHostname',
port: 443,
path: pathUri,
"rejectUnauthorized" : false,
headers: {
'Authorization': 'Basic ' + new Buffer(uname + ':' + pword).toString('base64')
}
};

//Invoke REST resource for Employee New Hires
var request = http.get(options, function(res){
var body = "";
res.on('data', function(data) {
body += data;
});
res.on('end', function() {

//Parse Atom Payload response 
feed = et.parse(body);

//Get Entries count
var numberOfEntries = feed.findall('./entry/').length;

console.log('...................Feed Extracted.....................');
console.log('Number of Entries: ' + numberOfEntries);

//Process each entry
if (numberOfEntries > 0) {

console.log('Get Content for each Entry');

var content = feed.findall('./entry/content/');
var entryId = feed.findall('./entry/id');
var updateDate = feed.findall('./entry/updated');

for ( var i = 0; i < content.length; i++ ) {
console.log(feed.findall('./entry/link/[@rel="related"]')[i].get('href'));
console.log(feed.findall('content.text'));

//persist timestamp for the next call
if (i == 0) {
fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
if (fserr) throw fserr; } );
}

fs.writeFile(entryId[i].text,content[i].text, function(fserr) {
if (fserr) throw fserr; } );
}
}

})
res.on('error', function(e) {
console.log("Got error: " + e.message);
});
});

 

 

HCM Atom Feed Subscriber using SOA Cloud Service


Introduction

HCM Atom feeds provide notifications of Oracle Fusion Human Capital Management (HCM) events and are tightly integrated with REST services. When an event occurs in Oracle Fusion HCM, the corresponding Atom feed is delivered automatically to the Atom server. The feed contains details of the REST resource on which the event occurred. Subscribers who consume these Atom feeds use the REST resources to retrieve additional information about the resource.

For more information on Atom, please refer to this.

This post focuses on consuming and processing HCM Atom feeds using Oracle Service Oriented Architecture (SOA) Cloud Service. Oracle SOA Cloud Service provides a PaaS computing platform solution for running Oracle SOA Suite, Oracle Service Bus, and Oracle API Manager in the cloud. For more information on SOA Cloud Service, please refer to this.

Oracle SOA is the industry's most complete and unified application integration and SOA solution. It transforms complex application integration into agile and re-usable service-based connectivity to speed time to market, respond faster to business requirements, and lower costs. SOA facilitates the development of enterprise applications as modular business web services that can be easily integrated and reused, creating a truly flexible, adaptable IT infrastructure.

For more information on getting started with Oracle SOA, please refer to this. For developing SOA applications using SOA Suite, please refer to this.

 

Main Article

Atom feeds enable you to keep track of any changes made to feed-enabled resources in Oracle HCM Cloud. For any updates that may be of interest to downstream applications, such as new hires, terminations, employee transfers and promotions, Oracle HCM Cloud publishes Atom feeds. Your application will be able to read these feeds and take appropriate action.

Atom Publishing Protocol (AtomPub) allows software applications to subscribe to changes that occur on REST resources through published feeds. Updates are published when changes occur to feed-enabled resources in Oracle HCM Cloud. The following are the primary Atom feeds:

Employee Feeds

New hire
Termination
Employee update

Assignment creation, update, and end date

Work Structures Feeds (Creation, update, and end date)

Organizations
Jobs
Positions
Grades
Locations

The above feeds can be consumed programmatically. In this post, SOA Cloud Service is implemented as one of the solutions consuming "Employee New Hire" feeds, but the design and development are similar for all the supported objects in HCM.

 

HCM Atom Introduction

For Atom "security, roles and privileges", please refer to my blog HCM Atom Feed Subscriber using Node.js.

 

Atom Feed Response Template

 

AtomFeedSample_1

SOA Cloud Service Implementation

Refer to my blog on how to invoke secured REST services using SOA. The following diagram shows the patterns used to subscribe to HCM Atom feeds and process them for downstream applications that may have either web service or file-based interfaces. Optionally, all entries from the feeds could be staged in either a database or a messaging cloud before processing, to handle events such as a downstream application being unavailable or throwing system errors. This provides the ability to consume the feeds but hold the processing until downstream applications are available. Enterprise Scheduler Service (ESS), a component of SOA Suite, is leveraged to invoke the subscriber composite periodically.

 

soacs_atom_pattern

The following diagram shows the implementation of the above pattern for Employee New Hire:

soacs_atom_composite

 

Feed Invocation from SOA

Although the HCM Cloud feed is an XML representation, the media type of the payload response is "application/atom+xml". This media type is not supported by the built-in REST Adapter at this time, so the following Java embedded activity is used in the BPEL component:

Once the built-in REST Adapter supports the Atom media type, the Java embedded activity can be replaced, further simplifying the solution.

try {

String url = "https://mycompany.oraclecloud.com";
String lastEntryTS = (String)getVariableData("LastEntryTS");
String uri = "/hcmCoreApi/atomservlet/employee/newhire";

//Generate URI based on last entry timestamp from previous invocation
if (!(lastEntryTS.isEmpty())) {
uri = uri + "?updated-min=" + lastEntryTS;
}

java.net.URL obj = new URL(null,url+uri, new sun.net.www.protocol.https.Handler());

javax.net.ssl.HttpsURLConnection conn = (HttpsURLConnection) obj.openConnection();
conn.setRequestProperty("Content-Type", "application/vnd.oracle.adf.resource+json");
conn.setDoOutput(true);
conn.setRequestMethod("GET");

String userpass = "username" + ":" + "password";
String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes("UTF-8"));
conn.setRequestProperty ("Authorization", basicAuth);

String response="";
int responseCode=conn.getResponseCode();
System.out.println("Response Code is: " + responseCode);

if (responseCode == HttpsURLConnection.HTTP_OK) {

BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));

String line;
String contents = "";

while ((line = reader.readLine()) != null) {
contents += line;
}

setVariableData("outputVariable", "payload", "/client:processResponse/client:result", contents);

reader.close();

}

} catch (Exception e) {
e.printStackTrace();
}

 

The following are things to consider when consuming feeds:

Initial Consumption

When you subscribe for the first time, you can invoke the resource without query parameters to get all the published feeds, or use the updated-min or updated-max arguments to filter the entries in the feed to begin with.

For example, the invocation path could be /hcmCoreApi/Atomservlet/employee/newhire or /hcmCoreApi/Atomservlet/employee/newhire?updated-min=<some-timestamp>

After the first consumption, the "updated" element of the first entry must be persisted and used in the next call to avoid duplication. In this prototype, the "/entry/updated" timestamp value is persisted in Database Cloud Service (DBaaS).

This is the sample database table:

create table atomsub (
id number,
feed_ts varchar2(100) );

For initial consumption, keep the table empty or add a row with the value of feed_ts to consume initial feeds. For example, the feed_ts value could be “2015-09-16T09:16:00.000Z” to get all the feeds after this timestamp.

In the SOA composite, you will update the above table to persist the "/entry/updated" timestamp in the feed_ts column of the "atomsub" table.

 

Next Call

In the next call, read the updated timestamp value from the database and generate the URI path as follows:

String uri = "/hcmCoreApi/atomservlet/employee/newhire";
String lastEntryTS = (String)getVariableData("LastEntryTS");
if (!(lastEntryTS.isEmpty())) {
uri = uri + "?updated-min=" + lastEntryTS;
}

The above step is done in a Java embedded activity, but it could also be done in SOA using <assign> expressions.

Parsing Atom Feed Response

The Atom feed response is in XML format as shown previously in the diagram. In this prototype, the feed response is stored in an output variable as a string. The following expression in an <assign> activity converts it to XML:

oraext:parseXML($outputVariable.payload/client:result)


Parsing Each Atom Entry for Downstream Processing

Each entry has two major elements, as mentioned in the Atom payload response structure.

Resource Link

This contains the REST employee resource link to get the Employee object. This is a typical REST invocation from SOA using the REST Adapter. For more information on invoking REST services from SOA, please refer to my blog.

 

Content Type

This contains selected resource data in JSON format. For example: {"Context": [{"EmployeeNumber": "212", "PersonId": "300000006013981", "EffectiveStartDate": "2015-10-08", "EffectiveDate": "2015-10-08", "WorkEmail": "phil.davey@mycompany.com", "EmployeeName": "Davey, Phillip"}]}

In order to use the above data, it must be converted to XML. The BPEL component provides a Translate activity to transform JSON to XML. Please refer to the SOA Development document, section B1.8 – doTranslateFromNative.

 

The <Translate> activity syntax to convert above JSON string from <content> is as follows:

<assign name="TranslateJSON">
  <bpelx:annotation>
    <bpelx:pattern>translate</bpelx:pattern>
  </bpelx:annotation>
  <copy>
    <from>ora:doTranslateFromNative(string($FeedVariable.payload/ns1:entry/ns1:content), 'Schemas/JsonToXml.xsd', 'Root-Element', 'DOM')</from>
    <to>$JsonToXml_OutputVar_1</to>
  </copy>
</assign>

This is the output:

jsonToXmlOutput

The following provides detailed steps on how to use Native Format Builder in JDeveloper:

In Native Format Builder, select the JSON format and use the above <content> as a sample to generate a schema. Please see the following diagrams:

JSON_nxsd_1JSON_nxsd_2JSON_nxsd_3

JSON_nxsd_5

 

One and Only One Entry

Each entry in an Atom feed has a unique ID. For example: <id>Atomservlet:newhire:EMP300000005960615</id>

In target applications, this ID can be used as one of the keys or lookups to prevent reprocessing. The logic can be implemented in your downstream applications or in the integration space to avoid duplication.

 

Scheduler and Downstream Processing

Oracle Enterprise Scheduler Service (ESS) is configured to invoke the above composite periodically. At present, SOA Cloud Service is not provisioned with ESS, but refer to this to extend your domain. Once the feed response message is parsed, you can process it for downstream applications based on your requirements or use cases. For guaranteed transactions, each feed entry can be published to Messaging Cloud or Oracle Database to stage all the feeds. This provides global transactions and recovery when downstream applications are unavailable or throw errors.

The following diagram shows how to create a job definition for a SOA composite. For more information on ESS, please refer to this.

ess_3

SOA Cloud Service Instance Flows

First invocation without updated-min argument to get all the feeds

 

soacs_atom_instance_json

Atom Feed Response from above instance

AtomFeedResponse_1

 

Next invocation with updated-min argument based on last entry timestamp

soacs_atom_instance_noentries

 

Conclusion

This post demonstrates how to consume HCM Atom feeds and process them for downstream applications. It provides details on how to consume only new feeds (avoiding duplication) since the last poll. Finally, it describes an enterprise integration pattern from consuming feeds to downstream application processing.

 

Sample Prototype Code

The sample prototype code is available here.

 

soacs_atom_composite_1

 

 

Custom Transports in Oracle Service Bus 12.2.1


Oracle Service Bus (or Service Bus for short) provides a very powerful set of APIs that allow experienced Java developers to create custom transport providers. This is called the Service Bus Transport SDK. By using this SDK, it is possible to create custom transport providers that handle both inbound and outbound messaging for specific protocols, without having to worry about the internal details of Service Bus.

fig-01

The objective of this post is not to explain how the Service Bus Transport SDK works, nor to provide examples of how to use it; that is covered in detail in the Service Bus documentation. Instead, we are going to cover the specifics of creating custom transport providers for Service Bus 12.2.1. This post will walk through the changes and challenges introduced by this new version, which may help people who want to port their custom transports from previous versions of Service Bus to 12.2.1.

Changes in the Classpath

No matter which IDE you commonly use to develop the code for custom transport providers, when you try to open your project you will face some annoying classpath issues. This happens because the 12.2.1 version of Service Bus changed many of its JAR files, in an attempt to create a more consistent system library classpath. This is also true for some JAR files that belong to WebLogic, and many others from the Fusion Middleware stack.

Therefore, you will have to adapt your classpath to be able to compile your source-code again, either compiling the code from the IDE or using the Ant javac task. The XML snippet below is an Eclipse user library export with some of the most important JARs that you might need while working with Service Bus 12.2.1.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<eclipse-userlibraries version="2">

    <library name="Java EE API" systemlibrary="false">
        <archive path="/oracle/mw-home/wlserver/server/lib/javax.javaee-api.jar"/>
    </library>

    <library name="Service Bus Transport SDK" systemlibrary="false">
        <archive path="/oracle/mw-home/wlserver/server/lib/weblogic.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.xml.xmlbeans.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.kernel-api.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.configfwk.jar"/>
        <archive path="/oracle/mw-home/osb/lib/transports/main-transport.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.common.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.services.sources.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.services.core.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.platform.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.utils.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.jmspool.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.resources.svcaccount.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.descriptor.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.descriptor.j2ee.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.descriptor.application.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.descriptor.wl.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.resources.service.jar"/>
        <archive path="/oracle/mw-home/wlserver/server/lib/wls-api.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.utils.full.jar"/>
    </library>

    <library name="WebLogic JMS API" systemlibrary="false">
        <archive path="/oracle/mw-home/wlserver/server/lib/wljmsclient.jar"/>
    </library>

    <library name="WebLogic WorkManager API" systemlibrary="false">
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.weblogic.workmanager.jar"/>
    </library>

</eclipse-userlibraries>

You might need to change your Ant script as well:

fig-02

Changes in the Kernel API

Although minimal, there were changes in the Service Bus Kernel API that can prevent your code from compiling. Specifically, you will see some compiler errors in the classes that handle the UI part of your custom transport provider. The first noticeable change is the removal of the setRequired() method from the com.bea.wli.sb.transports.ui.TransportEditField class. It seems to have vanished in the 12.2.1 version.

fig-03

Similarly, the Kernel API removed the DISPLAY_LIST constant from the TransportUIFactory.SelectObject inner class:

fig-04

However, if you compile the source code using the Ant javac task, it works. Moreover, all the missing parts are still available in the 12.2.1 version of Service Bus and work at runtime after you install the custom transport provider. For this reason, it can be considered safe to ignore those compiler errors.

Targeting to JDK 1.8

Service Bus 12.2.1 is certified to run on top of Fusion Middleware 12.2.1, which in turn is certified to run on top of JDK 1.8. Thus, it might be a good idea to change your compiler settings to generate JDK 1.8 compliant bytecode.

fig-05

This is not a requirement of course, since the JVM allows the execution of code compiled for earlier versions. But to promote better alignment with the Fusion Middleware certification matrix, it can be considered a best practice. Besides, you might be interested in using some of the new JDK 1.8 features such as lambda expressions, streams, and default methods.

Issues with the Service Bus Console

The Service Bus Console had its UI changed in version 12.2.1. It now uses the Oracle Alta UI, the same look-and-feel found in major Cloud offerings such as Integration Cloud Service and SOA Cloud Service. While this is good because it provides a better experience for users of the Service Bus Console, it brings an additional challenge when you deploy your custom transport provider.

The challenge is that even after having the custom transport provider installed, you will notice that it will not be available in the Service Bus Console. At first, you will think that the custom transport provider was not installed properly, but if you strictly follow the instructions about how to deploy custom transport providers, you can be certain that it will be installed correctly.

The issue here is a bug in the Service Bus Console regarding internationalization. All transports must have an entry in a properties file that maintains the descriptions of the resources created in the Service Bus Console. For the transports that come with Service Bus, these entries are already set. But for custom transport providers, you will have to manually create these entries in the properties file in order to have your custom transport provider working with the Service Bus Console. The instructions below will help you to solve this issue.

Firstly, locate the following file:

$MW_HOME/osb/lib/osbconsoleEar/webapp/WEB-INF/lib/adflib_osb_folder.jar

Secondly, open this JAR and edit a properties file that is contained inside. The file to be edited is:

/oracle/soa/osb/console/folder/l10n/FolderBundle.properties

This file generically handles internationalized messages when no language is specified in the browser. You might need to change other files to make your custom transport provider available when specific languages (e.g., Brazilian Portuguese, Spanish, Japanese) have been set.

You will have to create two, maybe three entries in this file. The first entry provides a generic description to your custom transport provider. If your custom transport provider has inbound support, then it must have an entry for the Proxy Service description. If your custom transport provider has outbound support, then it must have an entry for the Business Service description.

The example below shows the entries for a custom transport provider named kafka, that has both inbound and outbound support:

desc.res.gallery.kafka=The Kafka transport allows you to create proxy and business services that communicate with Apache Kafka brokers.

desc.res.gallery.kafka.proxy=The Kafka transport allows you to create proxy services that receive messages from Apache Kafka brokers.

desc.res.gallery.kafka.business=The Kafka transport allows you to create business services that route messages to Apache Kafka brokers.

Save all the changes made in the properties file, and save this file back into the JAR file. You will need to restart Service Bus for this change to take effect. After restarting Service Bus, you will notice that the Service Bus Console now allows your custom transport provider to be used.

fig-06

The Oracle support and engineering teams are aware of this bug, and hopefully future versions of Service Bus will eliminate the need to manually create these entries. This issue has no impact if you develop Service Bus applications using Fusion Middleware JDeveloper.

Oracle HCM Cloud – Bulk Integration Automation Using SOA Cloud Service


Introduction

Oracle Human Capital Management (HCM) Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the batch integration to load and extract data to and from the HCM cloud. HCM provides the following bulk integration interfaces and tools:

HCM Data Loader (HDL)

HDL is a powerful tool for bulk-loading data from any source to Oracle Fusion HCM. It supports important business objects belonging to key Oracle Fusion HCM products, including Oracle Fusion Global Human Resources, Compensation, Absence Management, Performance Management, Profile Management, Global Payroll, Talent and Workforce Management. For detailed information on HDL, please refer to this.

HCM Extracts

HCM Extract is an outbound integration tool that lets you select HCM data elements, extract them from the HCM database, and archive them as XML. This archived raw XML data can be converted into a desired format and delivered to recipients over supported channels.

Oracle Fusion HCM provides the above tools with comprehensive user interfaces for initiating data uploads, monitoring upload progress, and reviewing errors, with real-time information provided for both the import and load stages of upload processing. Fusion HCM provides the tools, but additional orchestration is required, such as generating the FBL or HDL file, uploading these files to WebCenter Content, and initiating the FBL or HDL web services. This post describes how to design and automate these steps leveraging Oracle Service Oriented Architecture (SOA) Cloud Service deployed on Oracle's cloud Platform As a Service (PaaS) infrastructure. For more information on SOA Cloud Service, please refer to this.

Oracle SOA is the industry's most complete and unified application integration and SOA solution. It transforms complex application integration into agile and re-usable service-based components to speed time to market, respond faster to business requirements, and lower costs. SOA facilitates the development of enterprise applications as modular business web services that can be easily integrated and reused, creating a truly flexible, adaptable IT infrastructure. For more information on getting started with Oracle SOA, please refer to this. For developing SOA applications using SOA Suite, please refer to this.

These bulk integration interfaces and patterns are not applicable to Oracle Taleo.

Main Article

 

HCM Inbound Flow (HDL)

Oracle WebCenter Content (WCC) acts as the staging repository for files to be loaded and processed by HDL. WCC is part of the Fusion HCM infrastructure.

The loading process for FBL and HDL consists of the following steps:

  • Upload the data file to WCC/UCM using WCC GenericSoapPort web service
  • Invoke the “LoaderIntegrationService” or the “HCMDataLoader” to initiate the loading process.

However, the above steps assume the existence of an HDL file and do not provide a mechanism to generate an HDL file for the respective objects. In this post we will use a sample use case in which we receive a data file from a customer, transform the data to generate an HDL file, and then initiate the loading process.

The following diagram illustrates the typical orchestration of the end-to-end HDL process using SOA cloud service:

 

hcm_inbound_v1

HCM Outbound Flow (Extract)

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM either by user or through Enterprise Scheduler Service (ESS)
  • Report is stored in WCC under the hcm/dataloader/export account.

 

However, the report must then be delivered to its destination depending on the use cases. The following diagram illustrates the typical end-to-end orchestration after the Extract report is generated:

hcm_outbound_v1

 

For an HCM bulk integration introduction, including security, roles and privileges, please refer to my blog Fusion HCM Cloud – Bulk Integration Automation using Managed File Transfer (MFT) and Node.js. For an introduction to WebCenter Content integration services using SOA, please refer to my blog Fusion HCM Cloud Bulk Automation.

 

Sample Use Case

Assume that a customer receives benefits data from their partner in a file with CSV (comma separated value) format periodically. This data must be converted into HDL format for the "ElementEntry" object and the loading process initiated in Fusion HCM Cloud.

This is the sample source data:

E138_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,23,Reason,Corrected all entry value,Date,2013-01-10
E139_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,33,Reason,Corrected one entry value,Date,2013-01-11

This is the HDL format of the ElementEntry object that needs to be generated based on the above sample file:

METADATA|ElementEntry|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|EntryType|CreatorType
MERGE|ElementEntry|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|E|H
MERGE|ElementEntry|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|E|H
METADATA|ElementEntryValue|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|InputValueName|ScreenEntryValue
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Amount|23
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected all entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-10
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Amount|33
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected one entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-11
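To make the row-to-HDL mapping concrete before walking through the SOA implementation, here is a small stand-alone JavaScript sketch (purely illustrative, not part of the SOA composite, and omitting the METADATA header lines) that expands one source row into one ElementEntry record and three ElementEntryValue records:

// Illustration only: expand one row of the sample source data into HDL record lines
var row = 'E138_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,23,Reason,Corrected all entry value,Date,2013-01-10';
var c = row.split(',');

// One ElementEntry record per row (first eight columns)
var entry = ['MERGE', 'ElementEntry', c[1], c[2], c[0], c[3], c[4], c[5], c[6], c[7]].join('|');

// Three ElementEntryValue records per row: (InputValueName, ScreenEntryValue) pairs in the remaining columns
var values = [];
for (var i = 8; i < c.length; i += 2) {
  values.push(['MERGE', 'ElementEntryValue', c[1], c[2], c[0], c[3], c[4], c[5], c[i], c[i + 1]].join('|'));
}

console.log(entry);
console.log(values.join('\n'));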

SOA Cloud Service Design and Implementation

A canonical schema pattern has been implemented to design the end-to-end inbound bulk integration process – from the source data file to generating the HDL file and initiating the loading process in HCM Cloud. The XML schema of the HDL object "ElementEntry" is created, the source data is mapped to this HDL schema, and SOA activities generate the HDL file.

Having a canonical pattern automates the generation of the HDL file, and it becomes a reusable asset for various interfaces. The developer or business user only needs to focus on mapping the source data to this canonical schema. All other activities, such as generating the HDL file, compressing and encrypting the file, uploading the file to WebCenter Content, and invoking web services, need to be developed only once and then become reusable assets.

Please refer to Wikipedia for the definition of Canonical Schema Pattern

The following are the design considerations:

1. Convert source data file from delimited format to XML

2. Generate Canonical Schema of ElementEntry HDL Object

3. Transform source XML data to HDL canonical schema

4. Generate and compress HDL file

5. Upload a file to WebCenter Content and invoke HDL web service

 

Please refer to SOA Cloud Service Develop and Deploy for introduction and creating SOA applications.

SOA Composite Design

This is a composite based on above implementation principles:

hdl_composite

Convert Source Data to XML

“GetEntryData” in the above composite is a File Adapter service. It is configured to use native format builder to convert CSV data to XML format. For more information on File Adapter, refer to this. For more information on Native Format Builder, refer to this.

The following provides detailed steps on how to use Native Format Builder in JDeveloper:

In Native Format Builder, select the delimited format type and use the source data as a sample to generate an XML schema. Please see the following diagrams:

FileAdapterConfig

nxsd1

nxsd2_v1 nxsd3_v1 nxsd4_v1 nxsd5_v1 nxsd6_v1 nxsd7_v1

Generate XML Schema of ElementEntry HDL Object

A similar approach is used to generate the ElementEntry schema. It has two main objects: ElementEntry and ElementEntryValue.

ElementEntry Schema generated using Native Format Builder

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/GetEntryHdlData" targetNamespace="http://TargetNamespace.com/GetEntryHdlData" elementFormDefault="qualified" attributeFormDefault="unqualified" nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
  <xsd:element name="Root-Element">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="Entry" minOccurs="1" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="METADATA" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="ElementEntry" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="EffectiveStartDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="EffectiveEndDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="AssignmentNumber" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="MultipleEntryCount" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="LegislativeDataGroupName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="ElementName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="EntryType" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="CreatorType" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <xsd:annotation>
    <xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
    <xsd:appinfo>USEHEADER=false</xsd:appinfo>
  </xsd:annotation>
</xsd:schema>

ElementEntryValue Schema generated using Native Format Builder

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/GetEntryValueHdlData" targetNamespace="http://TargetNamespace.com/GetEntryValueHdlData" elementFormDefault="qualified" attributeFormDefault="unqualified" nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
  <xsd:element name="Root-Element">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="EntryValue" minOccurs="1" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="METADATA" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="ElementEntryValue" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="EffectiveStartDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="EffectiveEndDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="AssignmentNumber" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="MultipleEntryCount" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="LegislativeDataGroupName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="ElementName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="InputValueName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
              <xsd:element name="ScreenEntryValue" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <xsd:annotation>
    <xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
    <xsd:appinfo>USEHEADER=false</xsd:appinfo>
  </xsd:annotation>
</xsd:schema>

In Native Format Builder, change the "|" separator to "," in the sample file and change it to "|" for each element in the generated schema.

Transform Source XML Data to HDL Canonical Schema

Since we are using a canonical schema, all we need to do is map the source data appropriately, and the Native Format Builder will convert each object into the HDL output file. The transformation could be complex depending on the source data format and the organization of data values. In our sample use case, each row has one ElementEntry object and 3 ElementEntryValue sub-objects.

The following provides the organization of the data elements in a single row of the source:

Entry_Desc_v1

The main ElementEntry attributes are mapped from each respective row, but the ElementEntryValue attributes are located at the end of each row; in this sample, that results in 3 entries per row. This can be achieved easily by splitting and transforming each row with different mappings as follows:

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns "1" from the above diagram

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns "2" from the above diagram

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns "3" from the above diagram

 

Metadata Attribute

The most common use case is to use the "merge" action for creating and updating objects. In this use case it is hard-coded to "merge", but the action could be made dynamic if the source data row carries this information. The "delete" action removes the entire record and must not be used with a "merge" instruction for the same record, as HDL cannot guarantee the order in which the instructions will be processed. It is highly recommended to correct the data rather than delete and recreate it using the "delete" action, as deleted data cannot be recovered.

 

This is the sample XSL transformation developed in JDeveloper to split each row into 3 rows for the ElementEntryValue object:

<xsl:template match="/">
  <tns:Root-Element>
    <xsl:for-each select="/ns0:Root-Element/ns0:Entry">
      <tns:Entry>
        <tns:METADATA>
          <xsl:value-of select="'MERGE'"/>
        </tns:METADATA>
        <tns:ElementEntry>
          <xsl:value-of select="'ElementEntryValue'"/>
        </tns:ElementEntry>
        <tns:EffectiveStartDate>
          <xsl:value-of select="ns0:C2"/>
        </tns:EffectiveStartDate>
        <tns:EffectiveEndDate>
          <xsl:value-of select="ns0:C3"/>
        </tns:EffectiveEndDate>
        <tns:AssignmentNumber>
          <xsl:value-of select="ns0:C1"/>
        </tns:AssignmentNumber>
        <tns:MultipleEntryCount>
          <xsl:value-of select="ns0:C4"/>
        </tns:MultipleEntryCount>
        <tns:LegislativeDataGroupName>
          <xsl:value-of select="ns0:C5"/>
        </tns:LegislativeDataGroupName>
        <tns:ElementName>
          <xsl:value-of select="ns0:C6"/>
        </tns:ElementName>
        <tns:EntryType>
          <xsl:value-of select="ns0:C9"/>
        </tns:EntryType>
        <tns:CreatorType>
          <xsl:value-of select="ns0:C10"/>
        </tns:CreatorType>
      </tns:Entry>
    </xsl:for-each>
    <xsl:for-each select="/ns0:Root-Element/ns0:Entry">
      <tns:Entry>
        <tns:METADATA>
          <xsl:value-of select="'MERGE'"/>
        </tns:METADATA>
        <tns:ElementEntry>
          <xsl:value-of select="'ElementEntryValue'"/>
        </tns:ElementEntry>
        <tns:EffectiveStartDate>
          <xsl:value-of select="ns0:C2"/>
        </tns:EffectiveStartDate>
        <tns:EffectiveEndDate>
          <xsl:value-of select="ns0:C3"/>
        </tns:EffectiveEndDate>
        <tns:AssignmentNumber>
          <xsl:value-of select="ns0:C1"/>
        </tns:AssignmentNumber>
        <tns:MultipleEntryCount>
          <xsl:value-of select="ns0:C4"/>
        </tns:MultipleEntryCount>
        <tns:LegislativeDataGroupName>
          <xsl:value-of select="ns0:C5"/>
        </tns:LegislativeDataGroupName>
        <tns:ElementName>
          <xsl:value-of select="ns0:C6"/>
        </tns:ElementName>
        <tns:EntryType>
          <xsl:value-of select="ns0:C11"/>
        </tns:EntryType>
        <tns:CreatorType>
          <xsl:value-of select="ns0:C12"/>
        </tns:CreatorType>
      </tns:Entry>
    </xsl:for-each>
    <xsl:for-each select="/ns0:Root-Element/ns0:Entry">
      <tns:Entry>
        <tns:METADATA>
          <xsl:value-of select="'MERGE'"/>
        </tns:METADATA>
        <tns:ElementEntry>
          <xsl:value-of select="'ElementEntryValue'"/>
        </tns:ElementEntry>
        <tns:EffectiveStartDate>
          <xsl:value-of select="ns0:C2"/>
        </tns:EffectiveStartDate>
        <tns:EffectiveEndDate>
          <xsl:value-of select="ns0:C3"/>
        </tns:EffectiveEndDate>
        <tns:AssignmentNumber>
          <xsl:value-of select="ns0:C1"/>
        </tns:AssignmentNumber>
        <tns:MultipleEntryCount>
          <xsl:value-of select="ns0:C4"/>
        </tns:MultipleEntryCount>
        <tns:LegislativeDataGroupName>
          <xsl:value-of select="ns0:C5"/>
        </tns:LegislativeDataGroupName>
        <tns:ElementName>
          <xsl:value-of select="ns0:C6"/>
        </tns:ElementName>
        <tns:EntryType>
          <xsl:value-of select="ns0:C13"/>
        </tns:EntryType>
        <tns:CreatorType>
          <xsl:value-of select="ns0:C14"/>
        </tns:CreatorType>
      </tns:Entry>
    </xsl:for-each>
  </tns:Root-Element>
</xsl:template>

BPEL Design – “ElementEntryPro…”

This is a BPEL component where all the major orchestration activities are defined. In this sample, all the activities after transformation are reusable and can be moved to a separate composite. A separate composite may be developed only for transformation and data enrichment that in the end invokes the reusable composite to complete the loading process.

 

hdl_bpel_v2

 

 

SOA Cloud Service Instance Flows

The following diagram shows an instance flow:

ElementEntry Composite Instance

instance1

BPEL Instance Flow

audit_1

Receive Input Activity – receives the delimited data converted to XML format through the Native Format Builder using the File Adapter

audit_2

Transformation to Canonical ElementEntry data

Canonical_entry

Transformation to Canonical ElementEntryValue data

Canonical_entryvalue

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using SOA Cloud Service. It shows how to convert a customer's data to HDL format and then initiate the loading process. This process can also be replicated for other Fusion Applications pillars such as Oracle Enterprise Resource Planning (ERP).

How to find purgeable instances in SOA/BPM 12c


If you are familiar with SOA/BPM 11g purging, then after you have upgraded to or implemented SOA/BPM 12c you will find that most of the 11g SQL can no longer be used to determine the purgeable instances. This is because SOA/BPM 12c no longer uses the composite_instance table for composite instance tracking.

In SOA/BPM 12c, a common component is used to track the state associated with a business flow and report audit information. This design reduces the instance tracking data generated and stored in the database, and improves purge performance by minimizing the number of tables that need to be accessed. Component instance state is no longer stored in individual tables for instance tracking purposes; the overall flow state is stored in the SCA_FLOW_INSTANCE table.

In the SCA_FLOW_INSTANCE table, the "active_component_instances" column keeps track of how many component instances are still in a running/active state. These are the instances in one of the following states:

  • RUNNING
  • SUSPENDED
  • MIGRATING
  • WAITING_ON_HUMAN_INTERVENTION

When the "active_component_instances" value reaches 0, the Flow is no longer executing. Another column, "recoverable_faults", keeps track of how many faults can be recovered. This information, together with "active_component_instances", is used to determine whether the Flow can be purged or not.

The SCA_FLOW_ASSOC table records the association between the original Flow that creates the BPEL component instance and the correlated Flow. It is used by the purge logic to ensure that all correlated Flows are purged together when none of the flows is in an active state.

Another important thing to note: if you create the SOAINFRA schema with the LARGE database profile, all transactional tables will be created with range partitioning. If you run SOA purging with the purge script, either manually by running the stored procedure or by using the auto purge function that can be configured in Oracle Enterprise Manager Fusion Middleware Control, you will need to set purge_partitioned_component => true (the default is false); otherwise the purge logic will skip all partitioned tables when the purge script runs and no flow instances will be purged. You can find all the partitioned tables in your SOAINFRA schema by using the following SQL:

select table_name from user_tables where partitioned = 'YES';

You can use the following sample PL/SQL to determine whether the SCA_FLOW_INSTANCE has been partitioned and the number of purgeable flow instances in your SOAINFRA schema.

set serveroutput on;
DECLARE
  MAX_CREATION_DATE TIMESTAMP;
  MIN_CREATION_DATE TIMESTAMP;
  batch_size        INTEGER;
  retention_period  TIMESTAMP;
  purgeable_instance INTEGER;
  table_partitioned INTEGER;
BEGIN
  MAX_CREATION_DATE := to_timestamp('2015-12-27','YYYY-MM-DD');
  MIN_CREATION_DATE := to_timestamp('2015-12-01','YYYY-MM-DD');
  retention_period  := to_timestamp('2015-12-27','YYYY-MM-DD');
  batch_size        := 100000;
 
  if retention_period < max_creation_date then
    retention_period := max_creation_date;  
  end if;
 
  select count(table_name) into table_partitioned from user_tables where partitioned = 'YES' and table_name='SCA_FLOW_INSTANCE';
 
  if table_partitioned > 0 then
   DBMS_OUTPUT.PUT_LINE ('SCA_FLOW_INSTANCE is partitioned ');
  else
   DBMS_OUTPUT.PUT_LINE ('SCA_FLOW_INSTANCE is not partitioned ');
  end if;
 
  SELECT Count(s.flow_id) into purgeable_instance
  FROM sca_flow_instance s
  WHERE s.created_time            >= MIN_CREATION_DATE
  AND s.created_time              <= MAX_CREATION_DATE
  AND s.updated_time              <= retention_period
  AND s.active_component_instances = 0
  AND s.flow_id NOT IN  (SELECT r.flow_id FROM temp_prune_running_insts r)
  AND s.flow_id IN
    (SELECT c.flow_id FROM sca_flow_to_cpst c, sca_entity e, sca_partition p WHERE c.composite_sca_entity_id = e.id)
  AND rownum <= batch_size;
   DBMS_OUTPUT.PUT_LINE ('Total purgeable flow instance: ' ||  purgeable_instance);
END;
/