[SOLVED] Larger XML...

Collapse
This topic is closed.
X
X
 
  • Filter
  • Time
  • Show
Clear All
new posts

  • chrisplough
    replied
    Stefano,

    OTM's default behavior is to process integration in parallel, if it is posted in parallel. By this, I mean that each XML file is a separate HTTP post from the integration server. However, if several XML files are sent to the OTM server in one long post, then they should be processed in serial.

    If you absolutely need your posts to be processed in serial (for instance, the order of status updates matters - you can't process a signature event until a delivery event has occurred - or something similar), then you can utilize OTM thread groups - which provide this functionality. However, I strongly recommend trying to solve this issue on the integration server, rather than in OTM for best performance and scalability.
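    For illustration, here is a minimal Python sketch of the "one post per file" approach on the integration-server side. The servlet path and credentials below are placeholders, not taken from this thread, so treat this as a starting point and adjust it for your environment:

    ```python
    # Sketch only: the endpoint path and credentials below are placeholders.
    import base64
    import concurrent.futures
    import urllib.request
    from pathlib import Path

    OTM_URL = "http://otm.company.com/GC3/WMServlet"   # placeholder servlet path
    AUTH = base64.b64encode(b"INTEGRATION_USER:password").decode()

    def post_xml(xml_file: Path) -> int:
        """POST one XML file to OTM as its own transmission and return the HTTP status."""
        req = urllib.request.Request(OTM_URL, data=xml_file.read_bytes(), method="POST")
        req.add_header("Content-Type", "text/xml")
        req.add_header("Authorization", "Basic " + AUTH)
        with urllib.request.urlopen(req) as resp:
            return resp.status

    def post_all(files: list[Path], workers: int = 4) -> None:
        """One HTTP post per file, sent from several client threads, so OTM is free to process them in parallel."""
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            for f, status in zip(files, pool.map(post_xml, files)):
                print(f.name, status)
    ```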

    If this is a QA or DEV server, then I agree with reducing the DB Pool size (the PRIMARY_JTS Pool, specifically). If this is production, then I'd recommend leaving the defaults (100 minimum, 150 maximum) because as your users and integration ramp up, you'll see much higher usage from this pool. I was one of the people at G-Log who came up with this minimum value after a lot of performance testing, so maybe I'm a bit biased.

    There are various thread groups in OTM that need to be tuned for greater performance - this is one of the tasks I do most for our clients. You can view the various thread groups and their current status by using the following servlet:

    Tuning OTM is a fairly complex process, though, and usually involves the following (in this order):
    1. Generate a load against the servers that represents your production traffic well (web users, integration, bulk plans, etc.). Good free tools for achieving this are JMeter, Perl, Ruby and Python (see the sketch after this list).
    2. Tune the OS on all OTM and DB servers (web, app, report, db).
    3. Tune the DB as much as possible, including storage. This is the most common OTM bottleneck.
    4. Tune the Java JVM on each OTM server (web, app) including heap size and garbage-collection parameters.
    5. Tune WebLogic on the OTM app server.
    6. Tune the OTM internal threads and queues.
    7. Repeat as necessary - it usually involves multiple iterations, especially as volumes increase.
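    To give a flavour of step 1, here is a toy load-generation harness in Python. The request function in the demo is just a stand-in (it only sleeps); swap in a real HTTP post against a test server before drawing any conclusions:

    ```python
    # Toy load harness: times concurrent calls to any request function.
    import concurrent.futures
    import statistics
    import time

    def run_load(request_fn, payloads, concurrency=8):
        """Run request_fn over payloads with a thread pool and print simple latency stats."""
        latencies = []

        def timed(payload):
            start = time.perf_counter()
            request_fn(payload)
            latencies.append(time.perf_counter() - start)

        wall_start = time.perf_counter()
        with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
            list(pool.map(timed, payloads))
        wall = time.perf_counter() - wall_start

        print(f"requests: {len(latencies)}  wall: {wall:.1f}s  "
              f"throughput: {len(latencies) / wall:.1f}/s")
        print(f"latency p50: {statistics.median(latencies):.3f}s  "
              f"max: {max(latencies):.3f}s")

    if __name__ == "__main__":
        # Stand-in request that just sleeps; replace with a real post to your test server.
        run_load(lambda p: time.sleep(0.1), payloads=[b"<Transmission/>"] * 50, concurrency=8)
    ```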
    I hope this helps!
    --Chris



  • Stefano
    replied
    Thanks Chris for your answer.

    We can also confirm that parallel processing now increases upload performance.
    Our HTTP post process is actually serial, but once it finishes we have seen that OTM processes the interface files in "fresh" status in parallel.

    We saw that it is possible to change the DB session pool configuration, and we reduced it for standard UI access, because keeping 100 sessions open with only 10 users is not very useful and only consumes resources without any advantage (even if the Oracle people say that 100 is the minimum...).

    It seems that there is a session pool dedicated to the import process; in your opinion, would increasing the sessions in this pool increase parallel processing?

    Thanks in advance.

    Regards.



  • chrisplough
    replied
    Stefano,

    You're very welcome. I agree with your concerns - having to split up the integration files is difficult and not always possible, depending on your integration software.

    On the plus side, I was just working with a very large client who was seeing this exact issue, but was also having significant performance issues as a result of their volumes. By splitting up the XML transmissions into many more, smaller files and posting several of them in parallel using multiple threads, they were able to increase their performance significantly.
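    For illustration, a rough sketch of that kind of splitting is below. The element names ("Transmission" and "Transaction") are placeholders, since the real GLogXML tag names depend on your interface, so treat this as a starting point rather than a drop-in script:

    ```python
    # Sketch: split one large XML transmission into smaller files of N child
    # elements each. "Transaction" is a placeholder tag; substitute the tag
    # your GLogXML actually uses.
    import xml.etree.ElementTree as ET
    from pathlib import Path

    def split_transmission(src: Path, child_tag: str = "Transaction", chunk_size: int = 100):
        """Write src as a series of smaller files, each holding chunk_size child elements."""
        root = ET.parse(src).getroot()
        children = root.findall(child_tag)

        for i in range(0, len(children), chunk_size):
            # Rebuild a root with the same tag/attributes and a slice of the children.
            part = ET.Element(root.tag, root.attrib)
            part.extend(children[i:i + chunk_size])
            out = src.with_name(f"{src.stem}_part{i // chunk_size + 1:03d}.xml")
            ET.ElementTree(part).write(out, encoding="utf-8", xml_declaration=True)
            print("wrote", out)
    ```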

    So - I know the effort in splitting up the integration files is tedious and time-consuming, but it will help you avoid serious problems down the road.

    Hope this helps!

    Chris



  • Stefano
    replied
    Thanks Chris for your answer.

    At the moment we have already implemented solution 1, but to do it we had to change parts of our integration software.

    Unfortunately we need agent post-processing, so solution 2, which we could not find in the GC3 documentation, is not applicable.

    So we will stay with solution 1, hoping that our split files don't become larger than 10 MB.
    Unfortunately we have to split the files not by size but by record status, so we can't control the split process...

    Regards



  • chrisplough
    replied
    Stefano,

    There is a file-size limit with the normal WMServlet, due to the way that it processes integration. At this point, you have two options that I'm aware of:
    1. If you need agents and other processes to kick off or act on your integration, then you'll need to continue to use WMServlet. In this case, the best option is to break up the XML on your integration server and post multiple, smaller files. Another benefit is that this will speed up your integration, allowing OTM to process it using multiple threads. I'd recommend this option first, as long as you don't need to maintain a serial order to your integration.
    2. If you don't need agents to process against your XML, you can utilize the LargeTransmissionServlet, which accepts much larger files. To use this, just post your integration to http://otm.company.com/GC3/glog.inte...missionServlet (v5.0 and above) or http://otm.company.com/servlets/glog...missionServlet (v4.5 and below) instead of the normal WMServlet.
    As I said, your best bet is to break up the integration files on your integration server, though, and this will help you avoid future performance issues as your volumes ramp up.
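    As a rough illustration of the two options, here is a small Python helper that routes a payload to one servlet or the other based on its size. Both URLs are placeholders (the full servlet paths are truncated above), and the 10 MB cutoff is just the figure discussed in this thread:

    ```python
    # Sketch: choose the endpoint by payload size. Both paths are placeholders;
    # use the actual WMServlet / LargeTransmissionServlet URLs for your OTM version.
    import base64
    import urllib.request

    WM_SERVLET_URL = "http://otm.company.com/GC3/WMServlet"                    # placeholder
    LARGE_SERVLET_URL = "http://otm.company.com/GC3/LargeTransmissionServlet"  # placeholder
    SIZE_LIMIT = 10 * 1024 * 1024  # ~10 MB, the limit discussed in this thread

    def post_transmission(xml_bytes: bytes, user: str, password: str) -> int:
        """Post to the normal servlet for small payloads, the large-file servlet otherwise."""
        url = WM_SERVLET_URL if len(xml_bytes) < SIZE_LIMIT else LARGE_SERVLET_URL
        req = urllib.request.Request(url, data=xml_bytes, method="POST")
        req.add_header("Content-Type", "text/xml")
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        req.add_header("Authorization", "Basic " + token)
        with urllib.request.urlopen(req) as resp:
            return resp.status
    ```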

    Has anyone else used a different approach to solve this?

    Hope this helps!

    --Chris



  • Stefano
    started a topic [SOLVED] Larger XML...


    Hello,
    I have a problem uploading XML files larger than 10 MB to OTM.
    The problem happens when using the UI function and also via HTTP post.

    Has anyone solved this?

    Thanks in advance...