
Extremely Large SHIPMENT_STATUS table causing performance issues?


  • Extremely Large SHIPMENT_STATUS table causing performance issues?

    Hi everyone,

    Due to a sudden spike in volume, my shipment_status table grew from 3 million to almost 7 million rows in 1 week.

    My archive policy is 40 days, so it will be a while before these shipments and their statuses get moved to the archive schema. In the meantime, this is slowing down a lot of my queries and I believe it is hurting overall system performance as well.

    I would like to find out if the following are possible options and what are the risks:

    1) Can I remove certain shipment statuses in my domain if I am not using them?
    2) Removing shipment statuses will make the shipment_status table smaller - but what are the risks?
    3) What do I need to watch out for when removing an unused shipment status?
    4) Where can I check whether a particular shipment status is used by the system / workflow?

    Would appreciate feedback from anyone with experience trimming shipment statuses.
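For question 4, a usage report over the status table itself is a reasonable first check before touching anything. The sketch below uses SQLite via Python purely to illustrate the shape of the query; the column names (status_type_gid, status_value_gid) are assumptions and may differ from the actual OTM schema, so verify them against your own data dictionary first:

```python
import sqlite3

# Minimal stand-in for the OTM shipment_status table; the column names
# here (status_type_gid, status_value_gid) are assumptions, not the
# verified OTM schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE shipment_status (
        shipment_gid     TEXT,
        status_type_gid  TEXT,
        status_value_gid TEXT
    )
""")
conn.executemany(
    "INSERT INTO shipment_status VALUES (?, ?, ?)",
    [
        ("S1", "ENROUTE", "ENROUTE_NOT STARTED"),
        ("S1", "SECURE RESOURCES", "SECURE RESOURCES_ACCEPTED"),
        ("S2", "ENROUTE", "ENROUTE_NOT STARTED"),
    ],
)

# Usage report: which status values actually occur, and how often.
# A value that never appears (or appears only with its default) is a
# candidate for investigation, not automatic deletion.
rows = conn.execute("""
    SELECT status_type_gid, status_value_gid, COUNT(*) AS n
    FROM shipment_status
    GROUP BY status_type_gid, status_value_gid
    ORDER BY n DESC
""").fetchall()
for type_gid, value_gid, n in rows:
    print(type_gid, value_gid, n)
```

The same GROUP BY shape runs unchanged against Oracle; it only tells you what is populated, not what the application or workflow depends on.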


  • #2
    Re: Extremely Large SHIPMENT_STATUS table causing performance issues?

    I'm not an expert on trimming shipment statuses, but my rule of thumb is 25 statuses per shipment, which means you had about 160,000 shipments created this week. Not bad!

    I can imagine this impacted performance, but it can't have been just because of the status table. If, for example, you have an agent running on shipment creation and that agent takes a second, OTM would have needed about 9 hours per working day just to process those agents. And that doesn't yet take into account the orders you would have to create, integration, etc.

    Even if you were able to delete 10 of the 25 statuses per shipment (please note I don't recommend doing this), it would only solve a very tiny bit of your worries.
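For what it's worth, the back-of-envelope numbers above can be reproduced like this (the 25-statuses-per-shipment and 1-second-per-agent figures are the rules of thumb from this reply, not measurements):

```python
# Back-of-envelope check: ~4M new status rows at ~25 statuses per
# shipment, with a hypothetical 1-second agent per shipment.
new_status_rows = 7_000_000 - 3_000_000   # table growth from the post
statuses_per_shipment = 25                # rule of thumb, not measured
shipments = new_status_rows // statuses_per_shipment
print(shipments)                          # 160000

agent_seconds = shipments * 1             # assumed 1 s of agent work each
hours_per_working_day = agent_seconds / 3600 / 5   # spread over 5 days
print(round(hours_per_working_day, 1))    # ~8.9, i.e. roughly 9 hours
```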


    • #3
      Re: Extremely Large SHIPMENT_STATUS table causing performance issues?

      Hi Lourens,

      Yeah ... 160,000 was only the first week ... over the last 3 weeks it averaged 20,000 per week. Operations had been working on shifts over the last 4 weeks to clear the shipments ... and there are some more from our customer ...

      I guess my immediate action was to reduce the retention period for shipments, orders and order bases and I am still monitoring the performance based on this action.

      It does seem like a waste of tablespace to have shipment statuses when you don't need them.

    But I will heed your advice and refrain from the increasing temptation to delete unused shipment statuses.

    Thing is, have you encountered this before, and what did you do to resolve the issue? I know Eric (Rosenbloom) had a PowerPoint on optimising SQL queries - but I am reluctant to disable VPDs for the SHIPMENT_STATUS table as I am not sure what may break in OTM - and if I were to upgrade, what would the impact be? (Upgrade failure, data corruption, etc.)

      Would like to hear your thoughts as well as others reading this post!



      • #4
        Re: Extremely Large SHIPMENT_STATUS table causing performance issues?

        Hi Ian,

        We currently deal with shipment_status volumes of 20 million and don't really have too many problems with the app.

        What queries are causing concern? Are they custom or is it the application?

        What kind of hardware/software combination do you run?



        • #5
          Re: Extremely Large SHIPMENT_STATUS table causing performance issues?


          I feel your pain. We went through the same thing. I had about 10 million rows in ShipmentStatus and OrderReleaseStatus tables (my refnum tables are worse).

    I have a lot of unused status values as well, and I asked the same question of Oracle Support. The response was that I should NOT remove the unused rows: the application may expect them to be there for certain functionality, and removing them may cause a problem. I wish Oracle would make this configurable so that, if you aren't using certain statuses, it doesn't create them.

          We ended up reducing our retention from 6 months down to 45 days in order to keep the status and refnum tables small enough so that queries would run.

    Our problem is the queries run and/or saved from the Shipment and Order Finder screens. We have many queries that use 2-4 status values (e.g. checking for Tender, Schedule, and Enroute status in one query). The generated queries are VERY INEFFICIENT, as a sub-query is used for each status value. Add the VPD on top of an inefficient query and it only gets worse.
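The sub-query-per-status pattern described above can be sketched with a toy example (SQLite via Python here, purely illustrative; the real OTM schema, status GIDs, and generated SQL differ). Both queries find shipments carrying all three status values, but the hand-tuned version makes one pass over the status table instead of one correlated probe per value:

```python
import sqlite3

# Toy reproduction of the finder-query pattern: table and status names
# are illustrative, not the real OTM schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shipment (shipment_gid TEXT PRIMARY KEY);
    CREATE TABLE shipment_status (shipment_gid TEXT, status_value_gid TEXT);
""")
conn.executemany("INSERT INTO shipment VALUES (?)", [("S1",), ("S2",), ("S3",)])
conn.executemany(
    "INSERT INTO shipment_status VALUES (?, ?)",
    [
        ("S1", "TENDER"), ("S1", "SCHEDULE"), ("S1", "ENROUTE"),
        ("S2", "TENDER"), ("S2", "SCHEDULE"),
        ("S3", "TENDER"),
    ],
)

# Generated style: one correlated EXISTS sub-query per status value.
generated = conn.execute("""
    SELECT s.shipment_gid FROM shipment s
    WHERE EXISTS (SELECT 1 FROM shipment_status t
                  WHERE t.shipment_gid = s.shipment_gid
                    AND t.status_value_gid = 'TENDER')
      AND EXISTS (SELECT 1 FROM shipment_status t
                  WHERE t.shipment_gid = s.shipment_gid
                    AND t.status_value_gid = 'SCHEDULE')
      AND EXISTS (SELECT 1 FROM shipment_status t
                  WHERE t.shipment_gid = s.shipment_gid
                    AND t.status_value_gid = 'ENROUTE')
""").fetchall()

# Tuned style: a single pass with IN + GROUP BY/HAVING.
tuned = conn.execute("""
    SELECT shipment_gid FROM shipment_status
    WHERE status_value_gid IN ('TENDER', 'SCHEDULE', 'ENROUTE')
    GROUP BY shipment_gid
    HAVING COUNT(DISTINCT status_value_gid) = 3
""").fetchall()

print(generated, tuned)  # both return only S1
```

On a large Oracle table the two shapes can plan very differently, which is the kind of gap the tuning experiments mentioned below were measuring.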

          More than a year ago, I did some experimenting, timing the generated queries and executing them directly through TOAD or SQL+, with and without VPD. I then tuned the queries and found I could get much better results than the Generated Queries.

    I planned to use these tuned queries with the Parameterized Query functionality that became available in 5.5 CU5 (?). Note that timings when running the query through the application were much longer than when running it directly against the database, so not all of your time is spent in the database. For us, the difference is significant. (Mine is most likely related to platform - AIX/WebSphere.)

          The Parameterized Query allows you to write your own query for the screen set, but it is not really production ready in my opinion. It's very awkward to use, so we abandoned that idea.

          The only way we found to make performance acceptable is to purge down to 45 days, which gives me 3 million or less rows in these tables.

          If you find a way to get better performance with high volumes of Status or Refnum rows, I'd love to hear about it.


    I'd be interested in how you get good performance with 20+ million rows. Do your users use OTM-generated queries that go after multiple status or refnum values? Have you done any database tuning, indexing, etc. on these tables? Do you reorg/shrink the tables often?

          I've been thinking about dropping VPD from this table, but I haven't tried it yet.

    We are currently running Oracle 10g on HP-UX Itanium B.11.23.




          • #6
            Re: Extremely Large SHIPMENT_STATUS table causing performance issues?

            Hi Janice.

    Yes, our users use OTM-generated queries, and for the most part the database performs quite well. There are certain situations where they might put in unselective LIKE conditions via the "contains" dropdown, where performance isn't great and there's not much you can do.

    The business might have a different view on performance ; ) . We had OTM-generated queries on the shipment status table which originally ran for 30-60 seconds, but we have got them down to under 5 seconds - and that's with 20 million records.

    We have made tweaks to the buffer cache and made sure the optimizer is choosing the right plan by setting the index cost adjustment parameter (optimizer_index_cost_adj).

    We've gone through a lot of the queries generated by OTM and used AWR and all the other database tools available to identify contention and see what we can do. Are physical reads a problem? Latch contention? Are our disks slow? And so on.

            No extra indexes. We don't reorg/shrink the tables.

            It's been an iterative process of identifying little things and improving them and the sum of it gives us a system which works ok... well, from a database perspective anyway.