

Deadtime versus Lag

The dynamic response of self-regulating processes can be described reasonably accurately with a simple model consisting of process gain, deadtime, and lag (a.k.a. time constant). The process gain describes how much the process responds to a change in controller output, while the deadtime and time constant describe how quickly it responds.
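As a concrete illustration of this model, the sketch below simulates the process-variable change following a step in controller output. The gain, deadtime, and time-constant values are illustrative assumptions, not from any particular process:

```python
import math

def fopdt_step(t, gain=2.0, deadtime=5.0, tau=10.0, step=5.0):
    """PV change (in %) at time t after a `step` % change in controller output.

    All parameter values are illustrative assumptions.
    """
    if t < deadtime:
        return 0.0  # the process has not yet begun to respond
    # First-order rise toward the final change of gain * step
    return gain * step * (1.0 - math.exp(-(t - deadtime) / tau))
```

With these numbers the PV does nothing for the first 5 time units, then rises toward a total change of gain x step = 10%; at t = deadtime + tau it has covered 63.2% of that change, which is exactly the P63 point used in the measurement procedure described below.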

Note: Lag is a phenomenon and time constant is its measurement in time. But these two terms are often used interchangeably.

Although the deadtime and time constant both describe aspects of the process’s dynamic response, there are several fundamental differences in how they affect a control loop. The first difference is that deadtime describes how long it takes a process to begin responding to a change in controller output, while the time constant describes how fast the process responds once it has begun moving.

Measuring the Deadtime and Time Constant of a Process

Let’s begin with the measurement of deadtime and time constant of a self-regulating process. Typically, one will place the controller in manual control mode, wait for the process variable to settle down, and then make a step change of a few percent in the controller output. At first the process variable does nothing (deadtime) and then it begins changing (time constant) until finally it settles out at a new level.


Measuring deadtime and time constant

To measure the deadtime and time constant, draw a horizontal line at the same level as the original process variable. We’ll call this the baseline. Then find the maximum vertical slope of the process variable response curve. Draw a line tangential to the maximum slope all the way to cross the baseline. We’ll call this crossing the intersection.

– The process deadtime is measured along the time axis as the time spanned between the step change in controller output and the intersection.

Next, measure the total change in process variable. Then find the point on the process response curve where the process variable has changed by 0.63 of the total change in process variable. We’ll call this point P63.

– The process time constant is measured along the time axis as the time spanned between the intersection (described previously) and P63.
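For readers who log their step-test data, the graphical procedure above can be sketched in code. This is a minimal illustration, not a production identification routine; the array names are my own, and it assumes a clean, upward response:

```python
import numpy as np

def estimate_fopdt(t, pv, t_step):
    """Tangent/63% estimate of deadtime and time constant from step-test data.

    Assumes an upward response sampled in arrays t and pv, with the step
    in controller output made at time t_step. Names are illustrative.
    """
    baseline = pv[0]                # PV level before the step
    total = pv[-1] - baseline       # total change in PV
    slope = np.gradient(pv, t)      # numerical derivative of the curve
    i = int(np.argmax(slope))       # point of maximum slope
    # Extend the tangent at the steepest point back to the baseline:
    # baseline = pv[i] + slope[i] * (t_x - t[i])  ->  t_x is the intersection.
    t_intersect = t[i] + (baseline - pv[i]) / slope[i]
    deadtime = t_intersect - t_step
    # P63: first sample where PV has covered 63% of its total change.
    j = int(np.argmax(pv - baseline >= 0.63 * total))
    tau = t[j] - t_intersect
    return deadtime, tau
```

On a noise-free simulated first-order response this recovers the deadtime and time constant to within roughly one sample interval; real plant data usually needs filtering before the derivative is taken.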

Deadtime versus Time Constant

We can draw a chart with a continuum of deadtime through time constant (see figure below). Processes with dynamics consisting of pure deadtime will be on the left and pure lag (time constant) on the right. In the middle the process deadtime will equal its time constant.

We’ll find that flow loops and liquid pressure loops fall just about in the middle of the continuum, because their deadtime and time constant are almost equal. Gas pressure and temperature loops will be located more toward the right – they are lag (time constant) dominant. Serpentine channels in water treatment plants and conveyors with downstream mass meters will appear on the left side – they are deadtime dominant.

Level loops should actually be treated differently because they are modeled without a time constant, but they can be approximated on the continuum by placing them all the way to the right, as if they have infinitely long time constants.

The ratio of deadtime to time constant affects the usefulness of derivative control, the tuning rules we use, the controllability of the process, and the shortest possible loop settling time.

Dead Time versus Time Constant

A continuum from pure deadtime (td) to pure lag (tau)

Controller Modes

The derivative control mode works well where process variables continue to move in the same direction for some time, i.e. lag-dominant processes. Derivative control does not work well on processes where the process variable changes sporadically – typically processes with relatively short time constants, located in the middle and to the left on the continuum.

Applicability of Tuning Rules

Most tuning rules will work on lag-dominant processes. However, the Ziegler-Nichols rules have only a narrow range of applicability. Lambda tuning rules apply to a broader spectrum of processes, while Cohen-Coon has the widest coverage. The Deadtime tuning rule applies to deadtime-dominant processes on the left, as its name implies.

Controllability

Lag-dominant loops are easier to control than deadtime-dominant loops. Operators find that lag-dominant processes respond much more intuitively than deadtime-dominant processes and are easier to control in manual mode.

Loop Settling Time

When tuning a loop for the shortest possible settling time, one finds that there is a minimum limit on settling time. If you tune the controller any tighter, the loop will begin oscillating, thereby increasing the settling time. The minimum settling time depends mostly on the deadtime in a control loop, and will be between two and four times the length of the deadtime. The ratio of time constant to deadtime determines where the minimum settling time falls between two and four times the process deadtime.
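As a quick numeric sketch of this rule of thumb (the function name is mine, and the rule gives only bounds, not an exact value):

```python
def settling_time_bounds(deadtime):
    """Rule-of-thumb bounds on the shortest achievable loop settling time.

    A well-tuned loop settles no faster than about 2x its deadtime, and
    no slower (at the minimum) than about 4x; where the minimum falls
    within this range depends on the time-constant-to-deadtime ratio.
    """
    return 2.0 * deadtime, 4.0 * deadtime
```

For example, settling_time_bounds(30.0) gives (60.0, 120.0): a loop with 30 seconds of deadtime cannot be tuned to settle in less than about a minute, no matter how aggressive the controller settings.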

Stay tuned!

Jacques Smuts – Author of the book Process Control for Practitioners

14 Responses to “Deadtime versus Lag”

  • Tejaswinee:

    Sir, you explained the method for self-regulating processes. How do we calculate the delay, tau, and Ts for processes which are not self-regulating?

  • Tejaswinee, please see this article on level controller tuning for determining deadtime on non-self-regulating (integrating) processes. For integrating processes, process time constants contribute to the apparent deadtime, so we don’t have to consider them independently. And the estimated minimum closed-loop settling time will be four times as long as the apparent deadtime.
    – Jacques

  • Nay:

    Hi! Please help me with deadtime also. In my case, a pressure control PID (reverse acting):
    at first PV is higher than SP (52), so CV is 100% open, but eventually PV goes down and passes SP (52), for example PV = 51.5 or 51. But the PID does not start closing and takes a long time (5-15 min) to start acting. Currently, the PID parameters are Kp 6.5, Ki 0.3 and Kd 0. Kindly advise me. Thanks in advance

  • Nay, you have to do step-tests and use the process’s dynamic characteristics to calculate appropriate tuning settings.
    See this writeup for more details: Cohen Coon Tuning Rules.

  • Ajay:

    Respected Sir
    1) What is time delay?
    2) For how much time delay can a PID be implemented?
    3) How to control a process with a large time delay?

  • Ajay,
    1. Time delay is another term for deadtime.
    2. There is no limit on deadtime (time delay) for the implementation of a PID controller, but your controller has to be tuned appropriately.
    3. If your process deadtime is significantly longer than the time constant, use the tuning rule described in this article: https://blog.opticontrols.com/archives/275.
    Also note that the derivative control mode becomes ineffective on deadtime-dominant processes, and PI control should be used.

  • Ajay:

    Respected Sir,
    1) what is need of Dahlin PID for deadtime process?
    2) why Smith Predictor can not be implemented in analog version?

  • Ajay:
    1) The Dahlin deadtime compensation algorithm is simply a PID algorithm with an extra term added to “compensate” for deadtime. Both the Dahlin Controller and Smith Predictor allow the use of higher controller gains to obtain faster control responses than what is otherwise possible with deadtime-dominant processes.
    2) Both the Dahlin Controller and the Smith Predictor require past values of the controller output to be stored. This is almost impossible with analog implementations, but very easy with digital ones.
    Both these algorithms are very sensitive to changes in deadtime, and it may be difficult to maintain loop stability under changing process conditions unless the deadtime compensation is updated in real time.
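    The storage requirement in point 2 is easy to see in code. Below is a minimal sketch (the deadtime of 5 samples and the output values are made up for illustration) of the fixed-length buffer a digital implementation would use:

```python
from collections import deque

deadtime_samples = 5   # illustrative: the deadtime spans 5 control cycles
# Fixed-length buffer, pre-filled with zeros; each cycle the newest
# controller output is pushed in and the oldest falls out.
past_outputs = deque([0.0] * deadtime_samples, maxlen=deadtime_samples)

delayed_log = []
for output in [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]:   # made-up outputs
    delayed = past_outputs[0]      # output from deadtime_samples cycles ago
    delayed_log.append(delayed)    # a Smith Predictor would feed this
    past_outputs.append(output)    # value into its process model each cycle
```

    Each controller output re-emerges exactly five cycles later. An analog controller would need a physical delay line to do the same, which is why these schemes only became practical with digital controllers.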

  • Alex:

    Hi Sir,
    Most ways to measure deadtime are like the one you mentioned in the blog, but I have also found some definitions of deadtime as the time at which the output just begins to change. Are both ways fine?
    Thank you!

  • Alex, deadtime should be measured as described in my blog. This method includes the apparent deadtime originating from relatively small lags in the system.

  • Matthew:

    Hello Jacques

    I read that if the lag time of a system (the point where a sample is added to the point where it is analyzed) is too much bigger than the response time of the analyzer, then the control will ‘sawtooth’.

    Is the Lag time the same as the deadtime?

    Is the process loop time the same as the lag time?

    If the process loop time is say 10 minutes but the sample is being continuously analysed, will the actuator adjust every 10 mins or will it be trying to continually adjust?

    Any help with any of the above would be huge help!

    Kind Regards

  • Matthew – The system will not oscillate (sawtooth) if it is tuned properly and the control valve / dosing pump is in good working order. From your description of lag time, it seems to be what I refer to as deadtime. You did not provide enough information for me to know what “process loop time” is.

  • Tony:

    Hi Jacques

    I have your book, congrats on a well written and useful little reference.

    I have a question concerning the lag to deadtime ratio when using a second order model.

    With the introduction of a second lag and with the damping ratio, is this ratio still useful or is there now a better measure of controllability and strategy choice?

    Thanks

    Tony

  • Tony: Although the math gets more complex when adding another lag, the damping ratio will still be a good measure of control loop stability.
