Study on Dynamic Time-Based Scheduling for Hard Real-Time Systems


Abstract
Title of Dissertation: Dynamic Time-Based Scheduling for Hard Real-Time Systems

Seonho Choi, Doctor of Philosophy, 1997

Dissertation directed by: Professor Ashok K. Agrawala, Department of Computer Science

In traditional time-based scheduling schemes for real-time systems, the time line is explicitly managed to obtain a feasible schedule that satisfies all timing constraints. In such a schedule the task attributes, such as task start time, are statically decided off-line and used without modification throughout system operation. However, for dynamic real-time systems, in which new tasks may arrive during operation, or in which tasks may have relative constraints based on information only known at run-time, such static schemes may lack the ability to accommodate dynamic changes. Clearly, a solution for dynamic real-time scheduling has to reflect the knowledge about tasks and their execution characteristics. In this dissertation we present a dynamic time-based scheduling scheme and show its applicability in three problem domains. In the dynamic time-based scheduling scheme, attributes of task instances in the schedule may be represented as functions parameterized with information available at task dispatching time. These functions are called attribute functions and may denote any attribute of a task instance, such as the lower and upper bounds of its start time, its execution mode, etc. Flexible resource management becomes possible by utilizing the freedom provided by the scheme. First, we study the problem of dynamic dispatching of tasks, reflecting relative timing constraints among tasks. The relative constraints may be defined across the boundary of two consecutive scheduling windows as well as within one scheduling window. We present a solution approach with which we are not only able to test the schedulability of a task set, but also able to obtain maximum slack time by postponing static task executions at run-time. Second, a new framework is formulated for designing real-time control systems in which the assumption of a fixed sampling period is relaxed. That is, sampling time instants are found adaptively based on the physical system state, such that a new cost function value, which incorporates computational costs, is minimized. We show, for linear time-invariant control systems, that the computation requirement can be reduced while maintaining the quality of control.

Third, acceptance tests are found for dynamically arriving aperiodic tasks and for dynamically arriving sporadic tasks, respectively, under the assumption that an Earliest Deadline First scheduling policy is used for resolving resource contention between dynamic and static tasks. The dynamic time-based scheduling scheme can be applied as a solution approach for these problems, as will be shown in this dissertation, and its effectiveness will be demonstrated.

Dynamic Time-Based Scheduling for Hard Real-Time Systems
by Seonho Choi
Dissertation submitted to the Faculty of the Graduate School of the University of Maryland in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 1997

Advisory Committee:
Professor Ashok K. Agrawala, Chairman/Advisor
Professor Satish K. Tripathi
Professor Moon-Jhong Rhee
Associate Professor David Mount
Assistant Professor Jeff Hollingsworth

© Copyright by Seonho Choi 1997

Dedication
To my parents and my wife


Contents

List of Tables
List of Figures

1 Introduction
  1.1 Motivation
    1.1.1 Scheduling with Relative Constraints
    1.1.2 Dynamic Adjustment of Timing Constraints
    1.1.3 Scheduling Dynamic Tasks
  1.2 Our Approach
    1.2.1 Dynamic Time-based Scheduling Scheme
    1.2.2 Dynamic Dispatching with Complex Timing Constraints
    1.2.3 Dynamic Dispatching with Variable Sampling Periods
    1.2.4 Scheduling Dynamic Tasks
  1.3 Contributions
  1.4 Outline

2 Prior Work
  2.1 Real-Time Scheduling Theory
    2.1.1 Fixed Priority Scheduling
    2.1.2 Dynamic Priority Scheduling
    2.1.3 Static Time-based Scheduling
  2.2 Scheduling with Relative Timing Constraints
  2.3 Control
  2.4 Scheduling Aperiodic and Sporadic Tasks
    2.4.1 Scheduling Dynamic Tasks in Fixed Priority Systems
    2.4.2 Scheduling Dynamic Tasks in Dynamic Priority Systems
    2.4.3 Scheduling Dynamic Tasks in Time-based Systems
  2.5 Summary

3 Scheduling with Relative Constraints
  3.1 Problem Description
  3.2 Prior Work
    3.2.1 Static Cyclic Scheduling
    3.2.2 Parametric Scheduling
  3.3 Summary

4 Dynamic Dispatching of Cyclic Real-Time Tasks with Relative Constraints
  4.1 Dynamic Cyclic Dispatching
    4.1.1 Transforming a Constraint Set into a Constraint Graph
    4.1.2 Definitions for Constraint Graphs
    4.1.3 Characteristics of Constraint Graphs
    4.1.4 Off-line Component
    4.1.5 Off-line Component with Restricted Standard Constraints
  4.2 Example
  4.3 Summary

5 Design of a Dynamic Temporal Controller
  5.1 Introduction
  5.2 Problem Formulation
  5.3 Temporal Control with Fixed Sampling Times
    5.3.1 Inductive Construction of an Optimal Control Law with T Given
    5.3.2 Dynamic Temporal Control
  5.4 Implementation
  5.5 Example
  5.6 Discussion
  5.7 Summary

6 Scheduling Aperiodic and Sporadic Tasks
  6.1 Introduction
  6.2 Problem Description
  6.3 Dynamic Time-based Scheduling Schemes
    6.3.1 Aperiodic Task Scheduling
    6.3.2 Sporadic Task Scheduling
  6.4 Summary

7 Conclusion
  7.1 Future Research

A
  A.1 Proofs for Chapter 4

B
  B.1 Proofs for Chapter 6

List of Tables


List of Figures

1.1 Overview of dynamic time-based scheduling scheme
2.1 Example case
2.2 No spare capacities can be found
3.1 Example Job Sequence
3.2 Static Cyclic Scheduling
3.3 Limitation of Static Scheduling Scheme
3.4 Parametric Calendar Structure
3.5 Parametric Calendar for Example
4.1 Constraint Graph for $\Gamma^{1,2}$
4.2 Equivalence between Predicates and Graphs
4.3 Elimination of $f^2_2$ and $s^2_2$ from $\Gamma^{1,2}$
4.4 The edge set for $f^2_2$, denoted as dashed edges meeting with a vertical line
4.5 Homogeneous edge sets for $s^2_3$ and $s^2_2$
4.6 Overview of off-line component
4.7 Parametric bound functions found from $sched^{1,4}$
4.8 Asymptotic parametric bound functions for $sched^{1,\infty}$
5.1 Decomposition of $J_M$ into $F_i$
5.2 Two control changing time sets $T_{i1}$ and $T_{i2}$
5.3 Cost differences between the dynamic temporal controller and a traditional controller with 0.05 sampling period; the maximum cost difference is less than 0.03
5.4 Number of control computations performed by a dynamic temporal controller for each initial state; the maximum is less than 20, and for many initial states it is less than 18
5.5 Cost differences between an optimal controller with 0.05 sampling period and an optimal controller with 0.1 sampling period, for each initial state; the maximum cost difference is almost 0.5
5.6 Cost differences shown in Figures 5.3 and 5.5 compared; for almost all initial states the dynamic temporal controller outperforms the traditional controller with sampling period 0.1, even though it performs fewer control computations
5.7 Normalized costs from the dynamic temporal controller and from the traditional controller with sampling period 0.1, normalized by the cost of the traditional controller with sampling period 0.05
5.8 Normalized costs from the two controllers with adjusted threshold values: the dynamic temporal controller and the traditional controller with sampling period 0.1
5.9 Differences of worst case normalized costs between a dynamic temporal controller with threshold 0.01 and a traditional controller with sampling period 0.1; computational delays are randomly generated with a normal distribution, and for each initial state the maximum cost over 100 runs is recorded
5.10 Differences of average normalized costs between a dynamic temporal controller with threshold 0.01 and a traditional controller with sampling period 0.1; computational delays are randomly generated with a normal distribution, and for each initial state the average cost over 100 runs is recorded
6.1 Deriving $\omega_i(0)$ recursively
6.2 est(i) and lst(i) for an example task set
6.3 Joint scheduling of a non-realtime and a hard aperiodic task
6.4 Obtaining maximum slack within a scheduling window of a hard aperiodic task A
6.5 Example Schedules
6.6 Deriving virtual deadlines and release times
6.7 Worst Case for Deadline Determination
6.8 Under-utilization of the transformed sporadic task
6.9 $\omega$ found for an example task set
6.10 $\omega'(t_1, t_2)$ for an example task set
B.1 Busy period
B.2 $\omega''$ is increased or remains the same in the shifted interval
B.3 $\omega$ is increased or remains the same in the shifted interval

Dynamic Time-Based Scheduling for Hard Real-Time Systems
Seonho Choi
July 29, 1997

This comment page is not part of the dissertation. Typeset by LaTeX using the dissertation style by Pablo A. Straub, University of Maryland.


Chapter 1

Introduction
Real-time computer systems are characterized by the existence of timing constraints on the computations they carry out. The timing constraints are statically determined at pre-runtime from the characteristics of the physical systems they interact with. In some real-time systems, called hard real-time systems, a timing failure is considered catastrophic, and a guarantee should be given prior to execution that every timing constraint will be satisfied. Examples are found in application domains such as avionics, process control, automated manufacturing, and robotics. Real-time systems of the next generation will be required to interact with more complex and dynamic systems [40, 2]. In such environments a mechanism must be provided to support a high degree of concurrency and to adapt to a dynamically changing system state. Dynamic tasks such as aperiodic tasks, with or without timing constraints, may arrive at any time instant during system operation. Transient system overload may occur from the dynamic nature of the system. Dynamic resource discovery and allocation methods, and methods of dynamically adapting to changing system conditions to assure or re-negotiate quality of service, have to be supported by real-time systems. In this dissertation, we concentrate on how to achieve flexibility for hard real-time systems without sacrificing the required quality of service. A new scheduling scheme, called dynamic time-based scheduling, is developed for this purpose. This scheme is then applied to three problems. First, we address how to dynamically dispatch tasks in the presence of complex timing constraints such as relative timing constraints. Second, we study dynamic adjustment of timing constraints, such as dynamic selection of task periods based on the physical system state. Finally, we study how to incrementally schedule dynamic tasks such as aperiodic or sporadic tasks. The dynamic time-based scheduling scheme provides a sound basis for realizing the solution approaches derived.

1.1 Motivation

1.1.1 Scheduling with Relative Constraints

In some real-time systems complex timing constraints exist, such as jitter, separation, and relative deadline constraints, in addition to release time and deadline constraints [23]. These constraints are usually specified between event occurrence times and are based on information (e.g. task completion time) which is only available at run-time. Such timing constraints make it more difficult to enhance the system with a capability to dynamically allocate CPU time to dynamic tasks while not hampering the schedulability guarantee given to the tasks with complex timing constraints. In priority-based preemptive systems, one approach to scheduling tasks with jitter constraints is to separate the constrained event in the task into another task and to associate the highest priority with it. By doing this, the event occurrence times in consecutive periods can be made more predictable, since higher priority tasks preempt lower priority tasks [36]. However, this approach cannot be used efficiently when there exist many periodic tasks with jitter constraints, or when other types of relative constraints exist, such as separation or relative deadline constraints. Moreover, it is quite difficult to flexibly incorporate aperiodic task executions by postponing static task executions when possible. It is the lack of ability to explicitly control task executions over a time line that causes these problems in priority-based systems. Some work has been done on scheduling aperiodic tasks and on slack stealing algorithms in priority-based scheduling systems [47, 18, 49, 32, 17, 24, 28, 48, 31]. However, most of this work assumes that only release time and deadline constraints are present, and the complexity of optimal slack stealing algorithms in priority-based systems is high [18, 17].

1.1.2 Dynamic Adjustment of Timing Constraints

Usually, the timing constraints of tasks are statically determined prior to system operation time from the characteristics of the physical system. The periodic task model is widely used and assumed in most real-time systems. One reason for its popularity is that almost every control algorithm is formulated under the assumption of periodicity, since it is easy to derive control laws under that assumption. Regardless of the current state of the system being controlled, the same period is used for a control task. The usual determination rule for a task period is to select a task frequency 5-10 times the corresponding system's characteristic frequency. We study the issue of relaxing the periodicity assumption and propose a new control formulation, called dynamic temporal control, which dynamically decides the periods based on system information such as the current system state. To show the feasibility and benefit of this scheme, a solution approach is presented for linear time-invariant control systems.

1.1.3 Scheduling Dynamic Tasks

A lot of work has been done on scheduling dynamic tasks, such as aperiodic or sporadic tasks, for priority-based scheduling systems [47, 18, 49, 32, 17, 24, 28, 48, 31]. However, only recently have results been reported on scheduling aperiodic tasks on the basis of a time-based scheduling scheme [22] in the presence of release time and deadline constraints, and the solution approach presented in that paper is incomplete, as explained in Chapter 2. We apply a dynamic time-based scheduling scheme to this problem, and develop acceptance tests for dynamically arriving aperiodic tasks and for dynamically arriving sporadic tasks.

1.2 Our Approach
Two categories of tasks are considered in this dissertation:

Static Tasks: Tasks whose invocation times are known at design time. Usually, these are periodic tasks, or tasks that have been converted to periodic tasks as in [38].

Dynamic Tasks: Tasks whose executions are dynamically requested at run-time. These may be aperiodic or periodic.

1.2.1 Dynamic Time-based Scheduling Scheme

In this dissertation every static task is executed in a non-preemptive manner. That is, once the CPU is assigned to a task, no preemption occurs until the task voluntarily releases the CPU or the maximum execution time allowed for the task expires. The dynamic time-based scheduling scheme consists of two components: an off-line scheduler that generates a dynamic calendar for static tasks, and a dynamic dispatcher that is responsible for scheduling both static and dynamic tasks while maintaining the total order for static tasks found by the off-line scheduler. The architecture of this scheduling system is shown in Figure 1.1.

[Figure 1.1: Overview of dynamic time-based scheduling scheme. An off-line scheduler transforms the static task set into a dynamic calendar; at run-time the dynamic dispatcher combines the calendar with dynamic task requests, internal system information (e.g. task execution history), and external system information (e.g. physical system state) to decide task attributes and execute tasks.]

A dynamic calendar is constructed from a totally ordered static task set found by the off-line scheduler. Each task in a dynamic calendar may have functions denoting any of its attributes, such as the task's valid start time range, or its execution mode denoting which version of the task will be executed. These functions are called attribute functions. They may be parameterized with any information available to the system at function evaluation time, such as attribute values of previously executed tasks or the current physical system state. At system operation time, the dynamic dispatcher makes use of any information available to it at the current time instant to dynamically evaluate the attribute functions for the next task in the

dynamic calendar, and it decides the attributes of the next task(s), such as the actual task start time, or the actual execution mode if multiple task versions exist. It then records the decided attributes of the task for future use.
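As an illustration of this mechanism, the following sketch shows a minimal dispatcher loop. The calendar layout, the attribute-function signatures, and all names are our own illustrative assumptions, not the structures developed later in this dissertation.

```python
# Minimal sketch of a dynamic dispatcher (illustrative names and layout).
# Each calendar entry carries attribute functions that are evaluated at
# dispatch time against the history of already-executed tasks.
calendar = [
    {"start_lb": lambda h: 0,              "start_ub": lambda h: 5},
    {"start_lb": lambda h: h[-1]["f"] + 2, "start_ub": lambda h: h[-1]["f"] + 6},
]

def dispatch(calendar, execute):
    history = []
    for entry in calendar:
        lb = entry["start_lb"](history)   # evaluate attribute functions with
        ub = entry["start_ub"](history)   # all information available *now*
        if lb > ub:
            raise RuntimeError("no feasible start time at run-time")
        s = lb            # any value in [lb, ub] is valid; the slack ub - s
                          # could instead be handed to dynamic tasks
        f = execute(s)    # run the task non-preemptively; returns finish time
        history.append({"s": s, "f": f})
    return history
```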

1.2.2 Dynamic Dispatching with Complex Timing Constraints

The problem of scheduling tasks is studied when there exist complex timing constraints, such as relative inter-task constraints. Min/max constraints may be given between the start or finish times of any two tasks. In this dissertation, it is assumed that a lower and an upper bound on each task's execution time are known at pre-runtime, and that the actual execution time may vary within those bounds. The nondeterministic execution times may make it infeasible to assign static start times to tasks at pre-runtime in the presence of relative constraints between start or finish times of tasks. To incorporate realistic relative constraints such as jitter constraints, a cyclic task model is defined in which cyclic constraints may be specified across the boundaries of scheduling windows. To apply the dynamic time-based scheduling scheme to this problem, the following questions have to be answered:

- How can a total ordering among tasks be found by the off-line scheduler?
- How can the schedulability of a totally ordered task set be checked in the presence of complex timing constraints?
- What is the structure of a dynamic calendar, and how should a dynamic calendar be constructed for the total ordering such that all timing constraints imposed on the tasks are guaranteed to be satisfied at run-time?
- How large is the overhead of dynamic dispatching at run-time?

The problem of deterministic scheduling has been well addressed in the literature. The solution approaches are based either on heuristic or approximation algorithms, or on optimal schemes using implicit enumeration methods such as branch and bound search. In this dissertation, it is assumed that a total ordering of static tasks is given, and the rest of the issues mentioned above are addressed. The dynamic time-based scheme is elaborated as follows for this problem: each task in a dynamic calendar has two attribute functions, denoting lower and upper bounds for the task's start time, and the attribute functions may be parameterized with the start and finish times of already executed tasks. With this refined dynamic time-based scheduling scheme, the solution approach is presented in Chapter 4.

1.2.3 Dynamic Dispatching with Variable Sampling Periods

Traditional control systems have been designed to exercise control at regularly spaced time instants. When a discrete version of the system dynamics is used, a constant sampling interval is assumed, and a new control value is calculated and exercised at each time instant. In Chapter 5, we propose a new control scheme, dynamic temporal control, in which we not only calculate the control value at control computation time but also decide the time instant at which the next control computation is done. The system state at control computation time is used for obtaining the next control computation time as well as for calculating the new control value. Taking as an example a discrete, linear, time-invariant system and a cost function which reflects the cost of computing the control values, we show the feasibility of this scheme. We implement the dynamic temporal control scheme for a rigid body satellite control example and demonstrate a significant reduction in cost. The dynamic temporal control technique proposed can be implemented using the dynamic time-based scheduling scheme under the assumptions given in Chapter 5.
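The flavor of such a controller can be sketched as follows; the threshold rule used here for picking the next sampling interval is a stand-in of our own, not the cost-minimizing rule derived in Chapter 5.

```python
import numpy as np

def dynamic_temporal_loop(A, B, K, x0, t_end, h_min=0.05, h_max=0.4, eps=0.01):
    """At each computation instant, apply state feedback u = -K x and choose
    the NEXT sampling interval from the current state: a state near the
    origin earns a long interval, saving control computations."""
    t, x, trace = 0.0, np.asarray(x0, dtype=float), []
    while t < t_end:
        u = -K @ x
        h = h_max if np.linalg.norm(x) < eps else h_min   # made-up rule
        x = x + h * (A @ x + B @ u)     # crude forward-Euler propagation
        t += h
        trace.append((t, h, x.copy()))
    return trace
```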

1.2.4 Scheduling Dynamic Tasks

We present a solution approach in Chapter 6 for scheduling dynamically arriving aperiodic and sporadic tasks. It is assumed that a total ordering among static tasks is given at pre-runtime, and that only release time and deadline constraints are allowed. The total ordering among static tasks given initially is assumed to be maintained at run-time. Under this assumption, an EDF (Earliest Deadline First) scheduling algorithm is used, and acceptance tests are derived for dynamically arriving aperiodic tasks and for dynamically arriving sporadic tasks, respectively. This solution approach seems to be a sound basis for extending the problem to allow more complex timing constraints.

1.3 Contributions
The main contributions of this dissertation are:

- We propose a new dynamic time-based scheduling scheme in which the dispatcher has the capability to dynamically decide the parameters (attributes) of future tasks, such as start time or sampling period, while not affecting the guaranteed schedulability of future tasks. The dynamic decision is made based on the information available to the system at decision time, such as the start times and execution times of already executed tasks, or the current physical system state and current system load.

- We develop a scheduling scheme which handles relative constraints, not only those defined between tasks within a scheduling window but also those cyclically defined across the boundaries of consecutive scheduling windows; the jitter constraint is a good example of such constraints. An algorithm is developed for checking the schedulability of a totally ordered cyclic task set. If the task set is schedulable, a dynamic calendar is constructed during the execution of the algorithm. The algorithm is based on variable elimination techniques. We also show that the run-time dispatching overhead is small: at most O(N) execution time is spent per task instance for evaluating attribute functions, where N is the number of task instances in one scheduling window.

- We present a new method for designing control systems in which the sampling periods are adaptively selected based on system states. Traditionally, control processes are implemented under the assumption of a fixed sampling period. It is shown that, by dynamically selecting the time instants at which new controls are calculated, the computational resource requirement can be greatly reduced without sacrificing the quality of control. A linear time-invariant control system is used as an example to show the feasibility of the scheme. This result can be effectively used in an environment where computational resources become scarce, e.g., in an overloaded situation.

- Acceptance tests are developed for dynamically arriving aperiodic or sporadic tasks when a time-based scheduling scheme is used to schedule static tasks. The EDF scheduling algorithm is assumed to be used to resolve the conflicts between static and dynamic tasks.

1.4 Outline
The rest of the dissertation is organized as follows. We summarize prior work on real-time scheduling theory in Chapter 2. In Chapter 3, we formally introduce the problem of scheduling tasks with relative timing constraints and present in more detail the prior work related to this problem. In Chapter 4, we present a solution approach for the problem defined in Chapter 3 by utilizing the dynamic time-based scheduling scheme. In Chapter 5, the dynamic temporal controller is developed for linear time-invariant control systems. In Chapter 6, acceptance tests are found for dynamic tasks such as aperiodic or sporadic tasks on the basis of the dynamic time-based scheduling scheme. Finally, concluding remarks and directions for future work are presented in Chapter 7.


Chapter 2

Prior Work
In this chapter we review previous work on real-time scheduling and on real-time control systems. In Section 2.1, relevant prior work on real-time scheduling theory is presented. In Section 2.2, prior work on scheduling with relative timing constraints is given. Previous work on the design of real-time control systems is presented in Section 2.3. Finally, previous work on scheduling dynamic tasks in a time-based scheduling scheme is presented in Section 2.4. Some of this prior work is presented in more detail in the second part of Chapter 3, since it is directly related to our solution approach presented in Chapter 4.

2.1 Real-Time Scheduling Theory
Scheduling algorithms in hard real-time systems may be classified in several ways. One way is to classify them in terms of how the scheduling is done. Priority-based scheduling schemes resolve the resource (CPU) contention between different tasks by making use of fixed or dynamic task priorities. Another approach is the time-based scheduling scheme, in which an explicit time line is used to schedule the tasks. In traditional time-based scheduling schemes, all resource requirements are satisfied by statically assigning time intervals to task instances at pre-runtime. Each approach has its own advantages and disadvantages. Even though scheduling in the priority-based approach can be done in a simple manner, it lacks the capability of scheduling tasks with complex constraints, such as precedence relations and relative timing constraints, while the time-based approaches have that capability.

2.1.1 Fixed Priority Scheduling

In this scheme, a fixed priority is assigned to each task and is used at runtime to resolve resource contention. A task with a higher priority can preempt any lower priority task, so the currently executing task has the highest priority among the tasks currently active (released). It is well known that the rate monotonic scheduling algorithm is optimal for scheduling a set of independent periodic tasks with deadlines at the end of their periods [36]. It is optimal in the sense that it can schedule any task set that is schedulable by some fixed priority scheduling scheme. Any set of n tasks is schedulable by the rate monotonic scheduling scheme if the total utilization of the tasks does not exceed n(2^{1/n} - 1), which converges to ln 2 ≈ 0.693 as n goes to infinity. This is a sufficient condition, not a necessary one. The exact schedulability condition, which is necessary and sufficient, is found in [30], with statistical simulation results showing that in general the utilization of schedulable task sets is higher than ln 2. The deadline monotonic scheduling algorithm is shown to be optimal for a set of tasks which have deadlines less than or equal to their periods. It assigns priorities according to deadlines: the shorter the deadline, the higher the priority, regardless of the periods [33, 3]. For a set of tasks with arbitrary deadlines, it is shown that optimal priority assignment cannot be done with a simple assignment rule, but requires a pseudo-polynomial time algorithm [50].

A synchronization protocol becomes necessary when tasks use shared resources such as semaphores. Sharing resources may lead to priority inversion, in which a higher priority task is blocked because a lower priority task holds a resource the higher priority task requires; priority inversion may cause unbounded blocking times. To solve this problem, several synchronization protocols have been developed. In the priority ceiling protocol [45], a priority ceiling is first assigned to each semaphore, equal to the highest priority of the tasks that may use that semaphore. A task τ can then start a new critical section only if τ's priority is higher than the priority ceilings of all semaphores locked by tasks other than τ. In the stack-based protocol [5], the concept of preemption level is used instead of priorities to derive a protocol suitable for both fixed priority and dynamic priority systems; sharing of multiple-unit resources also becomes possible with this protocol. The word "stack" is used in the sense that a task with a higher preemption level can only preempt, and thus block, tasks with lower preemption levels. Preemption levels are found statically, reflecting synchronization constraints and resource requirements.

2.1.2 Dynamic Priority Scheduling

The priorities of tasks in a dynamic priority scheme are decided at runtime. This means that task instances of the same task may have different priorities at runtime, while in the fixed priority scheme the same priority is used for scheduling all of them. The earliest deadline first (EDF) scheduling algorithm, which assigns the highest priority to the task instance with the closest deadline, is known to be optimal for a set of periodic or aperiodic tasks [36, 19]. The necessary and sufficient schedulability condition for a set of independent tasks with deadlines equal to their periods is that the total processor utilization of the tasks be less than or equal to 1 [36]. A dynamic priority ceiling protocol [10] and a stack-based protocol [5] have been developed for dynamic priority systems to enable the use of shared resources and to bound blocking times. Note that the stack-based resource allocation protocol may be used with both fixed priority and dynamic priority scheduling algorithms. Also, in [5], it is shown that the stack-based protocol provides a better schedulability test than the dynamic priority ceiling protocol.

2.1.3 Static Time-based Scheduling

In a static time-based scheduling scheme, a calendar for a set of task instances is constructed at pre-runtime. At runtime this calendar is consulted to execute each task instance at its scheduled time instant. Through off-line scheduling, we can schedule task sets with various constraints, such as complex precedence relations, relative timing constraints, and other synchronization constraints. Even though the complexity of off-line scheduling is NP-complete in general, the scheduling can be done in a reasonable amount of time in most cases using techniques such as branch and bound or heuristic search algorithms [52, 21, 12, 56]. It has been shown that the complexity of non-preemptive scheduling can be dramatically reduced in many cases by a decomposition scheduling approach, in which task instances are decomposed into a sequence of subsets that are scheduled independently [54]. Also, the time-based scheduling scheme can efficiently schedule task sets with relative timing constraints, which cannot be easily accommodated in priority-based systems [23, 12]. For these reasons, it has been claimed that the time-based scheduling scheme is the most appropriate scheduling approach for hard real-time systems [53].
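The utilization-based tests quoted in Sections 2.1.1 and 2.1.2 are easy to state in code. The following sketch checks the sufficient rate monotonic bound and the exact EDF condition for a hypothetical task set given as (C, T) pairs; the task set itself is an invented example.

```python
def rm_utilization_bound(n: int) -> float:
    """Liu & Layland sufficient bound n(2^(1/n) - 1) for rate monotonic."""
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable(tasks) -> bool:
    """Sufficient (not necessary) RM test; tasks = [(C, T), ...]."""
    return sum(c / t for c, t in tasks) <= rm_utilization_bound(len(tasks))

def edf_schedulable(tasks) -> bool:
    """Exact test for independent tasks with deadline equal to period."""
    return sum(c / t for c, t in tasks) <= 1.0

tasks = [(1, 4), (2, 6), (1, 10)]                      # utilization ~ 0.683
print(rm_utilization_bound(3))                         # ~ 0.7798
print(rm_schedulable(tasks), edf_schedulable(tasks))   # True True
```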

2.2 Scheduling with Relative Timing Constraints
In some hard real-time systems, relative timing constraints should be satisfied between event occurrence times, in addition to release time and deadline constraints on tasks. For example, control output events in two successive instances of a periodic task may have to occur with a jitter requirement satisfied: the difference of the two event occurrence times, called jitter, should lie between a lower and an upper bound. The occurrences of events in different tasks may also be constrained by separation or relative deadline constraints arising from the requirements and characteristics of the environment [23]. These relative constraints have to be enforced in many real-time control systems, such as process control systems and flight control systems [9]. For example, in process control systems it has been shown that jitter constraints have more influence on control system performance than frequency constraints [29]. Usually, the relative constraints between events are transformed into relative constraints between start or finish times of the tasks, to make the process of scheduling and dispatching task instances feasible [26, 23]. In [26] a preemptive fixed priority scheduling algorithm is developed to schedule periodic tasks with relative deadline constraints between the finish times of two successive instances of periodic tasks. However, other types of relative constraints are not allowed in that work, and it is not possible to flexibly manage slack times at runtime for dynamic tasks. In [23] the dispatching of a totally ordered, non-preemptive task instance set with arbitrary min/max constraints is studied, and a new scheduling mechanism called parametric scheduling is developed. That paper also shows that the traditional static scheduling approach, in which task instance start times are statically scheduled under the assumption that every task instance spends its worst case execution time, no longer works for task instance sets with general min/max constraints, even when a total ordering among them is given. Furthermore, in the parametric scheduling scheme it is possible to effectively schedule aperiodic tasks at run-time by dynamically managing task instance start times. However, the task instance set in the parametric scheduling scheme consists of a finite number of task instances with a finite number of constraints. This implies that the approach cannot be applied to a periodic task model, since periodic tasks may invoke an infinite number of task instances with an infinite number of relative constraints. In a traditional time-based scheduling scheme the task start times are statically decided in a scheduling window, and this static schedule is cyclically used at run-time. In the presence of jitter constraints between start times of non-preemptive task instances, the problem of finding a static schedule has been addressed in [11]. However, this static cyclic scheduling approach only allows certain types of min/max constraints to be specified, and it only works under low utilization. Moreover, it is very difficult to flexibly manage task start times at run-time to incorporate dynamic tasks, such as aperiodic tasks, into the schedule.

2.3 Control
A rich literature exists on the design of controllers. However, nearly all the results develop control laws under the assumption of equal sampling periods. Taking computation time delay into consideration in real-time computer control has also been studied in several research papers [6, 25, 27, 41, 46, 55]. However, to the best of our knowledge, the dynamic temporal control approach explained in Chapter 5 has not been studied in the past. In dynamic temporal control, the computational cost is incorporated into the cost function, and the time instants for performing control computations are chosen to minimize this cost function. With this new approach, we can achieve the same quality of control with fewer control computations compared to traditional approaches [1].

2.4 Scheduling Aperiodic and Sporadic Tasks
Scheduling of dynamic tasks such as aperiodic or sporadic tasks has been studied extensively for priority-based scheduling systems. In this section, those works are summarized, along with a recent work on the aperiodic task scheduling problem on the basis of a time-based scheduling scheme [22].

2.4.1 Scheduling Dynamic Tasks in Fixed Priority Systems
Hard and non-realtime aperiodic tasks can be scheduled within a fixed priority scheduling scheme. One approach is to utilize the aperiodic server concept, in which a certain percentage of the processor utilization is reserved for servicing aperiodic tasks; that is, one or several periodic tasks are reserved for servicing aperiodic tasks. Several such algorithms have been developed and their performances compared [31, 47]. Another approach is the slack stealing approach, which tries to utilize as much processor time as possible by postponing hard periodic task executions as long as the schedulability of the hard tasks is not affected [18, 32, 49]. The optimal slack stealing algorithm is found to be pseudo-polynomial [18], and several approximation algorithms have been proposed [17].

2.4.2 Scheduling Dynamic Tasks in Dynamic Priority Systems
An aperiodic task scheduling problem has been studied under the assumption that only hard periodic tasks exist [28, 24]. An O(N) acceptance test for a hard aperiodic task is given when a set of independent periodic tasks is scheduled by EDF, where N is the total number of task instances in an LCM (least common multiple) of the periods of the periodic tasks [14, 13, 15]. Aperiodic scheduling schemes for EDF have been proposed and studied, and the Improved Priority Exchange Algorithm is shown to perform well [48].

2.4.3 Scheduling Dynamic Tasks in Time-based Systems
The aperiodic task scheduling problem in a time-based scheduling scheme has been addressed in the paper by Fohler et al. [22]. The initial schedule is assumed to be given, and arriving aperiodic tasks are scheduled at runtime. First, the deadlines of task instances $\tau_j$ in the time-based schedule are sorted, and the schedule is divided into a set of disjoint execution intervals $I_i$. Then spare capacities are defined for these intervals, which may be used to schedule arriving aperiodic tasks. Several task instances with the same deadline constitute one interval, and the end of an interval, $end(I_i)$, is defined to be the deadline of the last task instance in the interval. The earliest start time of an interval is defined to be the minimum of the earliest start times of its constituent task instances, and the start time of the interval, $start(I_i)$, is defined to be the maximum of its earliest start time and the end of the previous interval. The length of an interval, $|I_i|$, is defined to be $end(I_i) - start(I_i)$. Then the spare capacity is defined recursively as:

\[
sc(I_i) = |I_i| - \sum_{\tau_j \in I_i} C_j + \min(sc(I_{i+1}), 0)
\]
\[
sc(I_e) = |I_e| - \sum_{\tau_j \in I_e} C_j
\]

where $C_j$ denotes the worst case execution time of $\tau_j$ and $I_e$ is the last interval in the schedule. Note that the spare capacity may have a negative value, reflecting the fact that spare capacity borrowed from the previous interval is used to schedule the task instances in the current interval. Figure 2.1 shows an example of this. In this example, the spare capacities for $I_2$ and $I_1$ are found to be:

\[
sc(I_2) = 2 - 3 = -1
\]
\[
sc(I_1) = 8 - 3 + \min(-1, 0) = 4
\]

These spare capacities are used to schedule arriving aperiodic tasks, and they are adjusted whenever aperiodic tasks are accepted.

[Figure 2.1: Example case. Two task instances $\tau_1$ and $\tau_2$, with release times $R_1$, $R_2$ and deadlines $D_1$, $D_2$, partition the schedule into execution intervals $I_1$ and $I_2$.]

However, no consideration is given to how to obtain correct spare capacities when the deadlines of the task instances are not in increasing order in the schedule. For example, no correct spare capacity can be obtained in the case shown in Figure 2.2. According to the algorithm presented in that paper, we have two execution intervals, $[10, 12]$ and $[0, 10]$. The spare capacities in these intervals are:

\[
sc([10, 12]) = 2 - 7 = -5
\]
\[
sc([0, 10]) = 10 - 3 - 5 = 2
\]

[Figure 2.2: No spare capacities can be found. The deadlines of $\tau_1$ ($C_1 = 3$, $D_1 = 10$) and $\tau_2$ ($C_2 = 7$, $D_2 = 12$) are not in increasing order in the schedule.]

This result says that in the execution interval $[0, 10]$ a spare capacity of 2 is found. However, as can be seen in Figure 2.2, zero spare capacity should have been found in the interval $[0, 10]$. This shows that their solution approach is incomplete.
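The spare capacity recursion above is easy to reproduce in code. The following sketch, with our own encoding of the intervals as (length, list-of-WCETs) pairs, recomputes the Figure 2.1 values.

```python
def spare_capacities(intervals):
    """intervals: [(|I_i|, [C_j for tau_j in I_i]), ...] in schedule order.
    Returns sc(I_i) per the recursion; a negative value means the interval
    borrows capacity from its predecessor."""
    n = len(intervals)
    sc = [0] * n
    for i in range(n - 1, -1, -1):
        length, wcets = intervals[i]
        sc[i] = length - sum(wcets)
        if i + 1 < n:                   # the last interval has no borrow term
            sc[i] += min(sc[i + 1], 0)
    return sc

# Figure 2.1: |I_1| = 8 with C = 3, |I_2| = 2 with C = 3.
print(spare_capacities([(8, [3]), (2, [3])]))   # [4, -1]
```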

2.5 Summary
We have presented a brief overview of the related work on real-time scheduling and control systems design. The works by Cheng et al. [11] and Gerber et al. [23] are combined and extended in Chapter 4 for scheduling tasks with relative timing constraints. Our solution approach overcomes the limitations of those previous approaches and provides a more flexible and unified way of scheduling tasks with complex timing constraints. Also, Fohler et al. [22] propose a mechanism to flexibly manage slack times in a time-based scheduling scheme. However, their approach is unnecessarily complicated and incomplete, as shown in the previous section. Our approach, presented in Chapter 6, provides a more intuitive and complete solution. Instead of spare capacities, we define slack values which can be obtained by postponing static tasks in the schedule, and we make use of them to schedule dynamic tasks.


Chapter 3

Scheduling with Relative Constraints
We formulate the problem of scheduling a set of tasks with relative constraints and present its solution in the next chapter. We also present some prior work in detail, since our solution approach is based on the parametric dispatching approach [23] developed for a transaction scheduling problem. In Section 3.1, the problem is formally defined. Then two prior approaches to scheduling with relative constraints are presented: the static approach and the dynamic parametric approach. Finally, a brief summary is given.

3.1 Problem Description
A task instance is called a job, and these two terms will be used interchangeably throughout the dissertation. Let $\Gamma_j = \{\tau^j_i \mid i = 1, \ldots, N\}$ denote an ordered set of $N$ jobs to be dispatched sequentially in the $j$-th scheduling window $[(j-1)L, jL]$, where $L$ is a positive integer denoting the scheduling window size. The jobs are executed non-preemptively in this order. At runtime, this job set is cyclically scheduled in consecutive scheduling windows; in other words, $\tau^j_i$ and $\tau^k_i$ are jobs of the same task. Let $\Gamma^{1,k} = \Gamma_1 \cup \Gamma_2 \cup \cdots \cup \Gamma_k$ denote the set of jobs to be executed in the time interval $[0, kL]$. Each job $\tau^j_i$ ($j \ge 1$, $1 \le i \le N$) has the following parameters, which take integer values:

- a runtime variable $s^j_i$ denoting the actual start time of $\tau^j_i$;
- a runtime variable $e^j_i$ denoting the actual execution time spent on $\tau^j_i$;
- a runtime variable $f^j_i = s^j_i + e^j_i$ denoting the actual finish time of $\tau^j_i$;
- a constant $l^j_i$ denoting the minimum execution time of $\tau^j_i$;
- a constant $u^j_i$ denoting the maximum execution time of $\tau^j_i$.

Note that execution times of jobs are simply assumed to be nondeterministic and bounded from above and below, which is a realistic assumption in many real-time systems. Standard constraints, defined next, may be imposed on $\{s^j_i, e^j_i \mid 1 \le j \le k,\ 1 \le i \le N\}$ for $\Gamma^{1,k}$.
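These parameters translate directly into a small record type. The sketch below (the names and layout are ours, not the dissertation's) shows how the jobs of $\Gamma^{1,k}$ could be represented, with the run-time fields left unset until dispatch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Job:
    i: int                     # index within the window, 1 <= i <= N
    j: int                     # scheduling window index, j >= 1
    l: int                     # minimum execution time l_i^j
    u: int                     # maximum execution time u_i^j
    s: Optional[int] = None    # actual start time s_i^j (known at run-time)
    e: Optional[int] = None    # actual execution time e_i^j (run-time)

    @property
    def f(self) -> Optional[int]:
        """Actual finish time f_i^j = s_i^j + e_i^j, once the job has run."""
        return None if self.s is None or self.e is None else self.s + self.e
```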

Definition 3.1 (Standard Constraints) A standard constraint involves the variables of at most two jobs $\tau^j_a$ and $\tau^l_b$ ($1 \le a, b \le N$, $|j - l| \le 1$), where $s^j_a$ (or $s^j_a + e^j_a$) appears on one side of the "$\le$" and $s^l_b$ (or $s^l_b + e^l_b$) appears on the other side. For two jobs $\tau^j_a$, $\tau^l_b$, the following constraints are permitted (where $c_i$ is an arbitrary constant) and are called relative standard constraints:

\[
\begin{array}{ll}
s^j_a \le s^l_b + c_1 & s^l_b \le s^j_a + c_5 \\
s^j_a \le (s^l_b + e^l_b) + c_2 & s^l_b + e^l_b \le s^j_a + c_6 \\
s^j_a + e^j_a \le s^l_b + c_3 & s^l_b \le (s^j_a + e^j_a) + c_7 \\
s^j_a + e^j_a \le (s^l_b + e^l_b) + c_4 & s^l_b + e^l_b \le (s^j_a + e^j_a) + c_8
\end{array}
\tag{3.1}
\]

In addition, each job has release time and deadline constraints. These constraints are called absolute standard constraints. A job $\tau^j_a$ has the following absolute constraints:

\[
c_9 \le s^j_a, \qquad s^j_a + e^j_a \le c_{10}
\tag{3.2}
\]

We also include as standard any constraint that can be rewritten in one of the above forms; e.g., $s^j_a \le s^l_b + e^l_b - e^j_a + c$ falls into this category.
Next, the $k$-fold cyclically constrained job set is formally defined; any $\Gamma^{1,k}$ considered in this dissertation belongs to this class. (Note that $k$ may be equal to $\infty$.)

Definition 3.2 (k-fold Cyclically Constrained Job Set) A job set $\Gamma^{1,k} = \Gamma_1 \cup \Gamma_2 \cup \cdots \cup \Gamma_k$ ($k = 1, 2, \ldots, \infty$) is classified as a $k$-fold cyclically constrained job set if it has the following linear constraints:

1. The set of standard relative constraints:

\[
\forall j \in [1, k) :: A_1 x^j + A_2 x^{j+1} \le a
\tag{3.3}
\]

where $x^j$ is the $2N$-dimensional column vector $[s^j_1, e^j_1, s^j_2, e^j_2, \ldots, s^j_N, e^j_N]^T$, $A_1$ and $A_2$ are $m_1 \times 2N$ ($m_1 \ge 0$) matrices with entries $0$, $1$, or $-1$, and $a$ is an $m_1$-dimensional column vector of integers. Included in the $m_1$ constraints are those denoting the total ordering on jobs:

\[
\forall j \in [1, k] :: \forall i \in [1, N) :: s^j_i + e^j_i \le s^j_{i+1}, \qquad
\forall j \in [1, k) :: s^j_N + e^j_N \le s^{j+1}_1
\]

2. The set of release time and deadline constraints:

\[
\forall j \in [1, k] :: B x^j \le b^j
\tag{3.4}
\]
\[
\forall j \in [1, k] :: D x^j \le d^j
\tag{3.5}
\]

where $b^j$ is an $m_2$-dimensional column vector of non-positive integers satisfying $b^j = b^1 + (1 - j)L$, and $d^j$ is an $m_3$-dimensional column vector of non-negative integers satisfying $d^j = d^1 + (j - 1)L$. We define $\mathcal{C}^{1,k}$ to represent the logical conjunction of the constraints induced by each row of (3.3), (3.4), and (3.5).

In the above definition, the same matrices $A_1$, $A_2$, $B$, $D$ are cyclically used to represent the standard constraints on consecutive job sets. The example job set shown in Figure 3.1 is presented next, with the corresponding matrices and vectors defined in (3.3), (3.4), and (3.5).
[Figure 3.1: Example Job Sequence. Each scheduling window of length 20 contains jobs $\tau_1$ and $\tau_2$; the relative constraints $15 \le f^{j+1}_1 - f^j_1 \le 25$ and $18 \le f^{j+1}_2 - f^j_2 \le 22$ link consecutive windows, a separation of at least 5 is required between $f^j_1$ and $s^j_2$, and absolute constraints confine the jobs of window $j$ to $[20(j-1), 20j]$.]

Example 3.1 Consider the example job set depicted in Figure 3.1. Each job set $\Gamma_j$, $1 \le j \le k$, consists of two jobs, $\tau^j_1$ and $\tau^j_2$ (i.e. $N = 2$), whose execution time bounds are:

\[
l^j_1 = 5, \quad u^j_1 = 8, \qquad l^j_2 = 8, \quad u^j_2 = 10
\]

The standard relative constraints defined within $\Gamma_j$ or within $\Gamma_{j+1}$ are:

\[
s^j_1 + e^j_1 + 5 \le s^j_2, \qquad s^{j+1}_1 + e^{j+1}_1 + 5 \le s^{j+1}_2
\tag{3.6}
\]

Similarly, the standard relative constraints across the boundary of $\Gamma_j$ and $\Gamma_{j+1}$ are:

\[
\begin{array}{l}
s^j_2 + e^j_2 \le s^{j+1}_1 \\
s^j_1 + e^j_1 + 15 \le s^{j+1}_1 + e^{j+1}_1 \le s^j_1 + e^j_1 + 25 \\
s^j_2 + e^j_2 + 18 \le s^{j+1}_2 + e^{j+1}_2 \le s^j_2 + e^j_2 + 22
\end{array}
\tag{3.7}
\]

Finally, the absolute constraints on $\Gamma_j$ and $\Gamma_{j+1}$ are:

\[
\begin{array}{llll}
20(j-1) \le s^j_1, & s^j_1 + e^j_1 \le 20j, & 20j \le s^{j+1}_1, & s^{j+1}_1 + e^{j+1}_1 \le 20(j+1) \\
20(j-1) \le s^j_2, & s^j_2 + e^j_2 \le 20j, & 20j \le s^{j+1}_2, & s^{j+1}_2 + e^{j+1}_2 \le 20(j+1)
\end{array}
\tag{3.8}
\]

With $x^j = [s^j_1, e^j_1, s^j_2, e^j_2]^T$, all standard relative constraints can be denoted by the inequality $A_1 x^j + A_2 x^{j+1} \le a$ with

\[
A_1 =
\begin{bmatrix}
1 & 1 & -1 & 0 \\
1 & 1 & -1 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 \\
-1 & -1 & 0 & 0 \\
0 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 \\
0 & 0 & -1 & -1
\end{bmatrix},
\quad
A_2 =
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
1 & 1 & -1 & 0 \\
1 & 1 & -1 & 0 \\
-1 & -1 & 0 & 0 \\
1 & 1 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & -1 & -1 \\
0 & 0 & 1 & 1
\end{bmatrix},
\quad
a =
\begin{bmatrix}
-5 \\ 0 \\ -5 \\ 0 \\ -15 \\ 25 \\ 0 \\ -18 \\ 22
\end{bmatrix}
\]

where the rows encode, in order: the separation and total ordering constraints within $\Gamma_j$, the same two constraints within $\Gamma_{j+1}$, the two jitter bounds on $f_1$, the cross-window ordering $f^j_2 \le s^{j+1}_1$, and the two jitter bounds on $f_2$. And the set of absolute constraints is represented by the inequality:

\[
\begin{bmatrix}
-1 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 \\
1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1
\end{bmatrix}
\begin{bmatrix}
s^j_1 \\ e^j_1 \\ s^j_2 \\ e^j_2
\end{bmatrix}
\le
\begin{bmatrix}
-20(j-1) \\ -20(j-1) \\ 20j \\ 20j
\end{bmatrix}
\]
One traditional approach to scheduling with complex timing constraints is a time-based scheduling scheme that assigns static start times to the jobs in the scheduling window such that the relative constraints are satisfied when the static schedule is cyclically repeated at runtime. However, this approach cannot be used in the presence of arbitrary relative constraints between start or finish times of jobs [23]. This approach also suffers from a loss of schedulability: some task sets are not schedulable under it even though they are schedulable if our approach is employed, as will be explained through an example later. To cope with some of these limitations, the parametric scheduling scheme was developed in the context of real-time transaction scheduling [23]. However, as far as we know, no solution approach has been found for general periodic task models in which jobs in different scheduling windows may have relative constraints. The objective of the next chapter is to develop a schedulability test for $\Gamma^{1,\infty}$, and to develop a flexible job dispatching mechanism for schedulable job sets $\Gamma^{1,\infty}$.
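Before turning to prior work, the following sketch makes the matrix encoding of Example 3.1 concrete. The row ordering of $A_1$, $A_2$, and $a$ matches the constraint list above, and the checker simply verifies $A_1 x^j + A_2 x^{j+1} \le a$ for concrete start and execution times; the sample schedule is our own.

```python
# x^j = [s1, e1, s2, e2]; rows follow the constraint ordering given above.
A1 = [[ 1,  1, -1,  0],   # f1^j + 5  <= s2^j      (separation, window j)
      [ 1,  1, -1,  0],   # f1^j      <= s2^j      (total ordering, window j)
      [ 0,  0,  0,  0],
      [ 0,  0,  0,  0],
      [ 1,  1,  0,  0],   # f1^j + 15 <= f1^{j+1}
      [-1, -1,  0,  0],   # f1^{j+1}  <= f1^j + 25
      [ 0,  0,  1,  1],   # f2^j      <= s1^{j+1}
      [ 0,  0,  1,  1],   # f2^j + 18 <= f2^{j+1}
      [ 0,  0, -1, -1]]   # f2^{j+1}  <= f2^j + 22
A2 = [[ 0,  0,  0,  0],
      [ 0,  0,  0,  0],
      [ 1,  1, -1,  0],   # separation, window j+1
      [ 1,  1, -1,  0],   # total ordering, window j+1
      [-1, -1,  0,  0],
      [ 1,  1,  0,  0],
      [-1,  0,  0,  0],
      [ 0,  0, -1, -1],
      [ 0,  0,  1,  1]]
a = [-5, 0, -5, 0, -15, 25, 0, -18, 22]

def satisfied(xj, xj1):
    """Check A1 x^j + A2 x^{j+1} <= a for one pair of adjacent windows."""
    return all(
        sum(r1[k] * xj[k] + r2[k] * xj1[k] for k in range(4)) <= ai
        for r1, r2, ai in zip(A1, A2, a))

# tau_1 runs [0, 5], tau_2 runs [12, 20]; shifted by 20 in the next window.
print(satisfied([0, 5, 12, 8], [20, 5, 32, 8]))   # True
```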

3.2 Prior Work
In this section, we briefly describe two scheduling schemes closely related to ours. The first is the static cyclic scheduling scheme [11], and the second is the parametric scheduling scheme [23].


3.2.1 Static Cyclic Scheduling
The static cyclic scheduling problem has been studied in [11]. The periodic task model is used, which means that every job has release time and deadline constraints, and only jitter constraints between two job start times are allowed. An important assumption made in that work is that the start times of jobs in $\Gamma_j$ are statically determined as offsets from the start of the $j$-th scheduling window $[(j-1)L, jL]$, and this schedule is invoked repeatedly by wrapping around the end point of the current schedule to the start point of the next. In other words, $s^{j+1}_i = s^j_i + L$ holds for all $j \ge 1$. In the presence of jitter constraints, the job start times should be chosen carefully such that the jitter constraints, as well as the absolute constraints, are satisfied at run-time. Obtaining the ordering and the job start times is an NP-hard problem, since the non-preemptive scheduling problem with release time and deadline constraints is NP-hard. Several priority-based non-preemptive scheduling algorithms are presented and their performances compared in [11]. Suppose that a job $\tau^j_{i_1}$ belongs to $\Gamma_j$, a job $\tau^{j+1}_{i_2}$ belongs to $\Gamma_{j+1}$, and they have jitter constraints $c_1 \le s^{j+1}_{i_2} - s^j_{i_1} \le c_2$ ($0 < c_1 \le c_2 \le L$). From the above assumption, $s^{j+1}_{i_2} = L + s^j_{i_2}$ holds. Thus, a new constraint is created, $c_1 - L \le s^j_{i_2} - s^j_{i_1} \le c_2 - L$, which is equivalent to $L - c_2 \le s^j_{i_1} - s^j_{i_2} \le L - c_1$. Therefore, the jitter constraints across the boundary of $\Gamma_j$ and $\Gamma_{j+1}$ are transformed into jitter constraints between two jobs in $\Gamma_j$. As a consequence, if we can find a static schedule for $\Gamma_j$ that satisfies the transformed constraints as well as the constraints between jobs within $\Gamma_j$, it is clear that all timing constraints will be satisfied if that schedule is repeatedly used at run-time. This approach is depicted in Figure 3.2.
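As a concrete instance of this folding (the numbers are illustrative, not taken from [11]):

\[
c_1 = 15,\; c_2 = 25,\; L = 20:\qquad
15 \le s^{j+1}_{i_2} - s^{j}_{i_1} \le 25
\;\Longrightarrow\;
-5 \le s^{j}_{i_2} - s^{j}_{i_1} \le 5,
\]

i.e., a jitter constraint between two jobs of the same window.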
[Figure 3.2: Static Cyclic Scheduling. Start times $s^j_1, \ldots, s^j_N$ in window $j$ are reused, shifted by $L$, in window $j+1$; a relative constraint across the window boundary is folded back into constraints within a single window.]
However, this approach suffers from the following limitations:

- The relative constraints allowed are limited to jitter-type constraints between the start times of two jobs.
- The schedulability of job sets is reduced due to the static start time assignments.
- It is very difficult to effectively incorporate dynamic tasks, such as aperiodic tasks, into a schedule by dynamically adjusting the start times of the jobs.

In some real-time applications, the jitter constraints may be imposed between the finish times of jobs rather than between the start times [26]. Furthermore, a periodic task may be decomposed into several subtasks, and any kind of standard constraint may be defined between these subtasks [23]. In these cases this static scheduling approach is no longer applicable without sacrificing schedulability [23]. By transforming the jitter constraints across the boundary of $\Gamma_j$ and $\Gamma_{j+1}$ into constraints between jobs within $\Gamma_j$, we are reducing the schedulability of job sets. We will show that, under our new scheduling scheme, in which this transformation is not necessary, the schedulability of job sets is increased; i.e., some job sets are not schedulable under this scheme but are schedulable under ours.

3.2.2 Parametric Scheduling
Gerber et al. 23] proposes a parametric scheduling scheme in the context of transaction scheduling, in which any standard constraints may be given between jobs in one transaction. Let =< 1 ; : : :; N > denote a sequence of jobs constituting one transaction with a set of standard constraints, C . Also, let li and ui denote a lower and upper bound of i 's execution time, respectively. In the presence of standard constraints between start or nish times of tasks, it may not be possible to statically assign start times to tasks in the scheduling window by using the maximum execution time(uj i ) as the worst case execution time for each job. This is due to the nondeterministic execution times of tasks and the existence of standard constraints involving the nish times of tasks. This is well explained with the following example 42].
s2

0

s1

s1 +2
<= 3

s1 +6

L

Figure 3.3: Limitation of Static Scheduling Scheme Consider a simple example shown in Figure 3.3 which consists of two jobs, 1 and 2 . Suppose that l1 = 2, u1 = 6, and there exists a constraint s2 f1 3. In this example, it is not possible to statically assign start times for two jobs due to large variability of rst job's execution time and due to the existence of relative deadline constraint between rst job's nish time and second job's start time. However, if we allow the start time s2 for 2 be parameterized with f1 , then all the constraints are satis ed under all execution scenarios. In 42], a parametric schedulability of is de ned as follows:

Sched 9s1 :: 8e1 2 l1; u1] :: : : : :: 9sN :: 8eN 2 lN ; uN ] :: C

(3.9)

From this Sched predicate, parametric lower and upper bound functions for each start time si are obtained by eliminating the variables in an order eN , sN , : : :, ei . The parametric lower and max upper bound functions, denoted as Fsmin i and Fsi , are parameterized in terms of the runtime 18

variables, s1 , e1 , : : :, si 1 , ei shown in gure 3.4.

1

of already executed jobs. The parametric calendar structure is

Fsmin(s ; e )
1 2

Fsmin()
1 1

Fsmin N (s ; e ; s ; : : :; sN ; eN )
1 1 2 1 1

. . .

s1 s2 sN

. . .

Fsmax() Fsmax(s ; e )
1 2

1

1

Fsmax N (s ; e ; s ; : : :; sN ; eN )
1 1 2 1 1

Figure 3.4: Parametric Calendar Structure This parametric calendar is obtained from an o -line component of the algorithm by applying variable elimination techniques that will be given later in this section, and the actual bounds of si are found at runtime by evaluating the parametric functions in the parametric calendar by using the start and nish times of already executed jobs, 1, : : :, i 1 . The actual form of these parametric functions are given in the following proposition which is obtained from the paper by Saksena et al. 23]. This proposition will be used in deriving our solution approach in Chapter 4.

sj is of the following form:

Proposition 3.1 (Parametric Bound Functions 23]) A parametric lower bound function for
= max(p1 + c1; p2 + c2 ; : : :; pa + ca; min j )

Fsmin j (s ; f ; : : :; sj ; fj )
1 1 1 1

(3.10)

where each pi , 1 i a, belongs to fs1 ; f1; : : :; sj 1 ; fj 1 g, and ci is an arbitrary constant.2 And, max is a non-negative integer. j Similarly, a parametric upper bound function for sj is of the following form:

= min(q1 + d1 ; q2 + d2; : : :; qb + db ; max j )
where each qi , 1 i b, belongs to fs1; f1 ; : : :; sj 1 ; fj 1 g, and di is an arbitrary constant..

Fsmax j (s ; f ; : : :; sj ; fj )
1 1 1 1

(3.11)

The main result obtained is that, for an arbitrary set of standard constraints on = f 1; : : :; N g, we can nd the parametric calendar in O(N 3) time and the run-time evaluation of each bound function can be carried out in O(N ) time. By applying this parametric scheduling scheme, we are not only able to schedule any sequence of jobs with standard constraints, but also able to take advantage of the exibility o ered by the scheme. That is, the job start times may be decided dynamically at runtime to incorporate other dynamic activities in the system. Even though this scheme is directly applicable to our k-fold cyclically constrained job sets, if the number of jobs in 1;k becomes large, the bounds need to be found on the size of parametric functions and for the memory requirements for them. In the rest of this section, the parametric scheduling scheme is presented with an example.
2

Note that fi = si + ei .

19

Consider a set of linear constraints C in n variables (x1; x2; : : :; xn), C Hx h which must be satis ed with respect to some de ned existential and universal quanti cation over the variables. In this section we show how an innermost universally quanti ed variable xn , with associated lower (ln ) and upper (un ) bounds can be eliminated to obtain a new set of equivalent constraints. The set of constraints C may be partitioned into three subsets, depending on whether the coe cient of xn is positive, negative or zero. Thus, C CP ^ CN ^ CZ where CP fxn Di(x0); 1 i pg CN fxn Ej (x0); 1 j qg CZ f0 Fk (x0); 1 k rg Di (x0); Ej(x0); Fk (x0) are linear functions of x0 = x1; ; xn 1]T . The elimination of variable xn leads to a new system of constraints C 0 obtained from C by substituting xn with ln or un , depending on its coe cient: xn n C0 (C P )x ln ^ (C N )un ^ (C Z ) The following lemma has been derived and proved in the paper by Saksena et al. 23], and shows the validity of the above variable elimination process. Lemma 3.1 ( 23]) Let C be a system of linear constraints and let C0 be the resulting set of constraints after eliminating a universally quanti ed variable xn with lower bound ln and upper bound un . Then the sentence 8xn 2 ln; un] :: C holds if and only if C0 holds. The existential quanti er can be eliminated by using Fourier-Motzkin variable elimination technique 16].

Elimination of Quanti ed Variables

Fourier-Motzkin Elimination. Consider a system of linear constraints C in n variables (x ; x ; : : :; xn ). We wish to nd a system of linear constraints C 0 over x0 = x ; : : :; xn ]T , such that x0 is a solution to C 0 if and only if x0 is a solution to 9xn :: C . As before, the constraints
1 2 1 1

in C may be partitioned into three subsets.
C
8 > < xn

xn > : 0
(

Di(x0); 1 i p Ej (x0); 1 j q Fk (x0); 1 k r
0

The elimination of variable xn leads to a new system of constraints:
C0

9xn :: C

Di(x0)

Ej (x0); 1 i p; 1 j q Fk (x0); 1 k r

Again, the following lemma has been derived and proved in the paper by Saksena et al. 23], and shows the validity of the above variable elimination process. 20

Lemma 3.2 ( 23]) Let C be a set of linear constraints. Let C0 represent the set of constraints as
a result of eliminating xn using Fourier Motzkin elimination as described above. Then,

9xn :: C
holds if and only if C0 holds.

Example
This example is based on the work presented in the paper by Saksena et al. 23]. Here, the variable elimination technique is applied to

9s :: 8e 2 5; 8] :: 9s :: 8e 2 8; 10] :: 9s :: 8e 2 5; 8] :: 9s :: 8e 2 8; 10] :: C ; where C ; is a constraint set given on ; in Example 3.1. Initially, since e is the innermost universally quanti ed variable, it can be eliminated rst. The constraints involving e in C ; are:
1 1 2 2 3 3 4 4 12 12 12 4 4 12

18 s4 + e4 (s2 + e2 ) The elimination of e4 from these constraints results in the following derived constraints: s4 30 (e4 := u4 = 10) s4 (s2 + e2 ) 12 (e4 := u4 = 10) 10 s4 (s2 + e2 ) (e4 := l4 = 8) Therefore, these three constraints are substituted for the original constraints containing e4 . Thus, the complete set of constraints is given below: 0 s1 s2 (s1 + e1 ) 5 s2 + e2 20 15 s (s1 + e1 ) 3 + e3 20 s3 s3 + e3 (s1 + e1 ) 25 s4 30 (3.12) 10 s ( s 4 2 + e2 ) s1 + e1 s2 s4 (s2 + e2 ) 12 s2 + e2 s3 s4 (s3 + e3 ) 5 s3 + e3 s4 Next, an existentially quanti ed variable s4 , which is the innermost one, is eliminated. The constraints containing s4 in the above constraint set are:

s4 + e4 s4 + e4 (s2 + e2 )

40 22

30 From these constraints, the parametric lower and upper bound functions are obtained as follows: F4min(s1; e1; s2; e2; s3; e3) = max(s3 + e3; s2 + e2 + 10) F4max(s1; e1; s2; e2; s3; e3) = min(s2 + e2 + 12; s3 + e3 + 5; 30) 21

s2 + e2 + 10 s3 + e3

s4 s4

s4 s4 s4

s2 + e2 + 12 s3 + e3 + 5

(3.13)

And, as a result of eliminating s4 , the constraints in (3.13) are replaced by the following constraints: 10 12 s2 + e2 + 10 s3 + e3 + 5 s2 + e2 20 s2 + e2 + 10 30 s + e + 5 s 2 2 3 + e3 (3.14) ; s3 + e3 s2 + e2 + 12 s3 + e3 s2 + e2 + 12 s3 + e3 30 0 5 s3 + e3 30 If we continue this process until s1 is eliminated, then we will obtain all the parametric bound functions, or the predicate will turn out to be false during the process. Figure 3.5 shows the obtained parametric bound functions. 0 max(8; s1 + e1 ) max(20; s1 + e1 + 10; s2 + e2 ) max(s3 + e3; s2 + e2 + 10)

s1 s2 s3 s4

2 min(10; s1 + e1 + 5) min(22; s1 + e1 + 17; s2 + e2 + 4) min(30; s2 + e2 + 12; s3 + e3 + 5)

Figure 3.5: Parametric Calendar for Example

3.3 Summary
We have formally de ned the problem whose solution approach will be given in Chapter 4. The kfold cyclically constrained job set was de ned that allows standard constraints to be speci ed across the boundaries of two consecutive scheduling windows as well as within one scheduling window. Also, prior works were presented in detail on scheduling with relative constraints, including the parametric scheduling scheme 23].

22

Chapter 4

Dynamic Dispatching of Cyclic Real-Time Tasks with Relative Constraints
In this chapter, we present an o -line algorithm to check the schedulability of a job set, 1;1 . And, if they are schedulable, the parametric lower and upper bound functions for each job start time are found in terms of the start or nish times of the previous jobs. These bounds can be evaluated at runtime within O(N ) time. Suppose that ij belongs to j , then the parametric lower and upper bound functions of sj i , are parameterized in terms of the start and nish times of already executed jobs in j 1 and j . Another important result is that only N pairs of parametric bound functions have to be stored and cyclically used at runtime. The o -line algorithm has a pseudo-polynomial complexity O(n2 N 3), where n is the number of jobs in one scheduling window that have relative constraints with jobs in the next scheduling window. If only jitter constraints on periodic tasks are allowed, it can be shown that the o -line and online components require O(n4 N ) and O(n) times, respectively. Also, it is shown that, for a certain class of standard constraints, called restricted standard constraints, the o -line algorithm requires at most O(N 3 + n5 ). The rest of this chapter is organized as follows. In Section 4.1, the parametric scheduling approach is developed by using the quanti er elimination techniques, and by transforming the constraint set into an equivalent constraint graph. In Section 4.2, example job sequences are given with parametric calendars found from the o -line algorithm. Finally, a summary of the chapter follows in Section 4.3.

4.1 Dynamic Cyclic Dispatching
As in the parametric scheduling approach developed for transaction scheduling 23], we want to devise a schedulability test and an e cient dispatching mechanism when an 1-fold cyclically constrained job set, 1;1 , is given with its constraint matrices and vectors. We say 1;k , is schedulable if there exists any method which can successfully dispatch the jobs in 1;k .
schedulable if the following predicate holds:

De nition 4.1 (Schedulability of
sched1;k
1 1

1

;k ) The k-fold cyclically constrained job set
1 1 1 1 1 1 1 2 1 2 1 2 1 2

1

;k (1

k) is
(4.1)

9s :: 8e 2 l ; u ] :: 9s :: 8e 2 l ; u ] :: : : : k k k ;k 9sk N :: 8eN 2 lN ; uN ] :: C k where C ;k is a set of standard constraints de ned on fs ; e ; : : :; sk N ; eN g.
1 1 1 1 1 1

23

Proposition 4.1

Then, the following proposition holds for all k 1.

8k 1 :: sched ;k =) sched ;k
1 +1 1

sched1;k in (4.1). Hence, once sched1;k turns out to be False, then all sched1;j , k
proposition, the schedulability of 1;1 is de ned.

Proof: Obvious from the de nition of a cyclically constrained job set and from the de nition of
j , are False, too. By this

De nition 4.2 (Schedulability of

1

In 23], it is shown that checking Predicate (3.9) is not trivial because of the nondeterministic job execution times and because of the existence of standard relative constraints among the jobs. This applies to the above sched1;k predicate, too. The variable elimination techniques are used in 23] to eliminate variables from Predicate (3.9). At the end of the variable elimination process parametric bound functions for si , that are parameterized in terms of the variables in fs1 ; e1; : : :; ei 1g, are found as well as the predicate value. However, if we want to apply the variable elimination technique to sched1;k , the following problems have to be addressed rst: j j j 1 1. On which subset of fs1 1 ; e1 ; : : :; si 1 ; ei 1 g does the parametric bound functions for si depend? 2. Is it required to store parametric bound functions for every job in 1;k ? 3. What parametric bound functions have to be used if k is not known at pre-runtime and dynamically decided at runtime? denote parametric lower and upper bound functions for sj and Fsmax;k Let Fsmin;k j j i , respectively, i i 1;k that are found after the variable elimination algorithms are applied to sched . If the number of is parameterized, then it is not possible to or Fsmax;k variables is unbounded with which Fsmin;k j j i i evaluate them at run-time within bounded computation times. Also, if it is required that parametric bound functions for every job in 1;k be stored at runtime, the scheme is not implementable for large k because of memory requirements. Finally, if the value of k is not known at pre-runtime and is decided dynamically at runtime, which is true in most real-time applications, the parametric bound functions to be used have to be selected. In this section, the answers to the above questions are sought by rst transforming sched1;k into a constraint graph and by investigating the properties of such graphs. In section 4.1.1 the transformation rule is presented with lemmas showing the equivalence relationship between sched1;k and its constraint graph with respect to variable elimination process. In section 4.1.2 several terminologies are de ned for constraint graphs, and in section 4.1.3 the properties of constraint graphs are investigated. Then, in section 4.1.4 a complete o -line algorithm is presented to check sched1;1 and to obtain parametric bound functions for job start times if it is schedulable. In addition, for a certain class of standard constraints, it is shown in section 4.1.5 that the o -line algorithm can be executed within O(N 3 + n5 ) time by pre-eliminating certain nodes from the constraint graph. 24

k!1

lim

;1 )

;1 is schedulable if and only if sched1;k = True
1

We want to apply the variable elimination algorithms to sched1;k for some xed k, and want to nd out answers to the previously raised three questions. For that purpose, we rst transform the predicate into a constraint graph and apply node elimination algorithms(corresponding to the variable elimination algorithms) to the graph. Then, the properties of the constraint graphs created during the node elimination process are examined. Working on constraint graphs, instead of constraint sets themselves, makes it easier to infer and prove useful properties. In this section, the transformation rules are given for a set of jobs and its associated constraint set. Let = f 1; 2; : : :; N g be a nite set of jobs with a set of standard constraints, C . Consider eliminating quanti ed variables from the following predicate:

4.1.1 Transforming a Constraint Set into a Constraint Graph

nating variables.

9s :: 8e 2 l ; u ] :: : : : 9sN :: 8eN 2 lN ; uN ] :: C Then, predicates on subsets of fs ; e ; : : :; sN ; eN g are de ned next that are found after elimiSched
1 1 1 1 1 1 1 1

De nition 4.3 Sched(sa)(1 a N ) is de ned to be a predicate on a set of variables fs ; e ; : : :; sag
that are found after eliminating variables of < fN ; sN ; : : :; fa > from Sched. Sched(ea) is de ned similarly.

That is, Sched(sa) can be expressed as

Sched(sa)

9s :: 8e 2 l ; u ] :: : : : 9sa :: C (sa)
1 1 1 1

It will be shown that Sched(or Sched(sa), or Sched(ea)) can be transformed into a directed graph, which is called a constraint graph, such that the variable elimination process can be mapped into a corresponding node elimination operation in the graph. Note that, in the following de nition of a constraint graph, semi-exclusive-ORed edges are de ned, which will be used in de ning w restricted paths in constraint graphs. Also, v1 ! v2 denotes an edge from a node v1 to a node v2 with a weight w, and v1 w! v2 w! : : : w ! vi > denotes a path from a node v1 to a node vi with P< w 1 a weight sum w = ij =1 wj . v1 ; vi denotes that there exists a path from v1 to vi, and v1 ; vi denotes that there exists a path from v1 to v2 whose weight sum is w. The following rule is used to transform a predicate into a constraint graph. Here, the semiexclusive-ORed edges denote a pair of edges that cannot be arbitrarily placed in a restricted path that will be de ned in De nition 4.6.
1 2 i 1

De nition 4.4 (Constraint Graph) A constraint graph G(V; E ) is found from Sched (or Sched(sa), or Sched(ea)) as follows: 1. node set V is obtained as follows: v0 2 V si , fi 2 V for 1 i N where fi = si + ei . 2. edge set E is obtained as follows: For each tuple < si ; fi >, add the following semi-exclusive-ORed edges to E :
25

(a) si l! fi u (b) fi ! si For each constraint in C that can be converted to: c (a) vi vj c (vi ; vj 2 fsi ; fi j 1 i N g): add vj ! vi to E . c (b) vi c: add v0 ! vi to E . c (c) vi c: add vi ! v0 to E .
i i

De nition 4.5 The constraint graph found from Sched(sa) is denoted as G(sa). Similarly, G(fa)
represents a graph found from Sched(ea).
1

Figure 4.1 shows a graph created from the example job set 1;2 de ned in Example 3.1. Note that v0 is an extra node created to represent a constant 0 that is used to specify absolute constraints such as the release time and the deadline constraints. In the gure, the edges connected by are semi-exclusive-ORed edges.
25 5 8 5 22 8
1 s1

)+
-8

f1 1

5 0

s1 2

)+
-10 -15

f2

1

0

s2 1

)+
-8

f1

2

5 0

2 s2

)+
-10

f2

2

-18 20 -20 40

0

v0

Figure 4.1: Constraint Graph for 1;2 Note that there may exist only one edge from one node to another from the uniqueness of inequality in the constraint set. For example, if there are two constraints v1 v2 c1 and v1 v2 c2 in C , then one of them is redundant. Therefore, we can denote an edge from v1 to v2 in a constraint graph as v1 ! v2 without its weight speci ed. Also, note that any edge from fi to si is semiexclusive-ORed to any edge from si to fi . That is, even if any of these two edges is created from another constraint in C rather than from the minimum or maximum execution time constraint, they are semi-exclusive-ORed.

De nition 4.6 (Restricted Path) In a constraint graph, a path, < v ! v ! : : :vi
1 w1 2 w2

vi >, is called a restricted path from v1 to vi if the following is satis ed: If fj ! sj appears in the path, then its semi-exclusive-ORed edge sj ! fj may appear at most
once in the path, and vice versa. If two semi-exclusive-ORed edges, fj ! sj and sj ! fj , appear in the path, then they belong to a sub-path < fj ! sj ! fj >.
1

1

wi 1

!

The full notation would be G(sa )(V; E ). But, if no confusion is caused, G(sa ) will be used in this chapter.

26

Note that if a sub-path < fj ! sj ! fj > appears once in the path, then neither fj ! sj nor sj ! fj should appear at another place in the path, and vice versa.

De nition 4.7 (Restricted Cycle) A restricted cycle in a constraint graph is de ned to be a
cycle2 such that 1. it satis es the de nition of a restricted path. 2. it is not a sub-path of < fj ! sj ! fj > for any 1 j

N.

For example, a path < fj ! sj ! fj ! sl ! fj > is a restricted cycle while a path < fj ! sj ! fj > is not. Also, a restricted path without any restricted cycle in it is called an acyclic restricted path. The elimination algorithm of a node fa from a graph G(fa) is presented next.

Algorithm 4.1 (Elimination of fa from a Graph G(fa)) Elimination of fa from G(fa) is performed by the following algorithm.
1 1 2

1. For each edge pair, < y w ! fa ; fa w! sa >, that are not semi-exclusive-ORed in G(fa):
2

create an edge y w +w ! sa . (a) If y = sa and w1 + w2 < 0, then return False.3 (b) If y = sa and w1 + w2 0, then remove this edge.4 0 (c) If there already exists an edge y w ! sa before creating y w +w ! sa, then the edge with less weight remains, while the other is removed.
1 2

2. For each edge pair, < sa w! fa ; fa w ! z >, z 6= sa, that are not semi-exclusive-ORed in G(fa):
1 2

create an edge sa w +w ! z.
1 2

(a) If there already exists an edge sa w! z before creating sa w +w ! z, then the edge with less weight remains, while the other is removed.
1 2

00

3. Set V = V

ffag and remove all edges to or from fa in G(fa).

Let Elim(G(fa); fa) denote a new graph created after eliminating fa from the graph G(fa) according to Algorithm 4.1 in case False is not found. In this case, the following lemma proves the equivalence, with regards to the graph transformation rule, between the elimination of an universal quanti er from the predicate and the elimination of a node, fa , from the constraint graph.

Lemma 4.1 Elim(G(fa); fa) is equal to G(sa).
2 3

A cycle is de ned to be a path < y ! v1 : : : ! vi ! y > where i 1, or to be a path < y ! y >. This is because y y = 0 w1 + w2 < 0 is a contradiction. 4 This is because y y = 0 w1 + w2 is a tautology.

27

Proof: Given in appendix.
Next, we show how a node corresponding to an existential quanti er sa may be eliminated from the graph G(sa ).
formed by the following algorithm.
1 1 2

Algorithm 4.2 (Elimination of sa from a Graph G(sa)) Elimination of sa from G(sa) is per1. For each edge pair, < y w ! sa ; sa w! z >, in G(sa):
2

create an edge y w +w ! z. (a) If y = z and w1 + w2 < 0, then return False. (b) If y = z and w1 + w2 0, then remove this edge. 0 (c) If there already exists an edge y w ! z before creating y w +w ! z, then the edge with less weight remains, while the other is removed.
1 2

2. Set V = V

fsag and remove all edges to or from sa in G(sa).

equivalence between the elimination of a node in the graph and the elimination of an existential quanti er from the constraint set.

Similarly, let Elim(G(sa); sa) denote a new graph created after eliminating sa from the graph G(sa) according to Algorithm 4.2 in case False is not found. Then, the following lemma shows the

Lemma 4.2 Elim(G(sa); sa) is equal to G(fa ). Proof: Given in appendix.
1

By inductively applying Lemma 4.1 and 4.2, the equivalence relationship between node elimination and variable elimination processes can be established. This relationship is shown in Figure 4.2 with respect to the constraint graph derivation rules.
Graph Transform

Sched(e q)
Variable Elimination

G(f q)
Node Elimination

Sched(s q)

G(s q)

Sched(e q-1 )

G(f q-1 )

Figure 4.2: Equivalence between Predicates and Graphs The elimination process of nodes, fa and sa , from the graph G(fa ) can be viewed as preserving the connectivity between any two nodes in fv0; s1; f1 ; : : :; sa 1 ; fa 1g through fa and sa in G(fa ). That is, if there exists any restricted path from y to z only through sa and fa in G(fa), then a 28

new edge from y to z is created to maintain the connectivity from y to z even after fa and sa are eliminated. This is formally proved in Lemma 4.3. Figure 4.3 shows a graph and its node elimination processes for sched1;2 that is derived from 1;2 in Example 3.1.
25
1

22 5 5 8

5

5
1 f1

s1

)+
-8

1 s2

8

)+
-10 -15

f2

1

0

s2 1

)+
-8

f1
0 -18

2

s2 2

)+
-10

f2

2

0

20 0

-20

40

v0

25 5 5
1 f1 1 s2

12

s1

1

)+
-8

)+
-10 -15

8

1 f2

0

s2 1

5

)+
-8 -10

f1
0

2

5

s2 2

0

20

-20

30

v0

25 5 5
1 f1 1 s2

s1

1

8

12

)+
-8

)+
-10

f2

1

0

0 -5

s2 1

)+
-8

5

f1

2

-15 20 0 -20

30

v0

2 1;2 Figure 4.3: Elimination of f2 and s2 2 from

The following proposition describes a necessary condition for Sched to be true in terms of its constraint graph. Proposition 4.2 If a constraint graph for Sched has a negative weight restricted cycle, then Sched = False. Proof: Given in appendix. The following lemma shows how the connectivity is maintained during the node elimination process, which is quite an useful property that will be frequently used throughout this chapter.
that is found after eliminating nodes of < fN , sN , fN 1 , sN 1 , : : :, fa+2 , sa+2 , fa+1 , sa+1 > from G(fN ). Also, assume that no contradiction has been found yet. Then, the following two conditions are equivalent: 1. y w ! z 2 G(fa)

Lemma 4.3 Let fv ; s ; f ; s ; f ; : : :; sa ; fa ; sa; fag, 1 a N , denote a node set of G(fa)
0 1 1 2 2 1 1

29

w 2. there exists a minimum weight acyclic restricted path y ; z in G(fN ) where all intermediate5 6 nodes of the path belong to fsa+1 ; fa+1 ; : : :; sN ; fN g.

Proof: Given in appendix.
In the next corollary it is assumed that v and v 0 denote any two nodes that are located consecutively in a sequence < v0; s1 ; f1; : : :sN ; fN >.

Corollary 4.1 Let fv ;0s ; f ; : : :; vg, denote a node set of G(v) that is found after eliminating
0 1 1

nodes of < fN ; sN ; : : :; v > from G(fN ). Also, assume that no contradiction has been found yet. If an edge from y to z0 exists in G(v ), then there exists a path from y to z in G(fN ) whose intermediate nodes belong to fv ; : : :; sN ; fN g.

Proof: Given in appendix.
2 1 12 For example, in the example shown in gure 4.3, after eliminating ff2 ; s2 ! f12 is 2 g an edge f2 1 22 created since, in the initial graph, there exists a minimum weight acyclic restricted path < f2 ! 10 0 2 2 2 2 2 f2 ! s2 ! f1 > whose weight sum is 12 and whose intermediate nodes belong to fs2; f2 g. 2 1 Also, an edge f2 ! f21 is created in G1;2(f12), since there exists a minimum weight restricted path 22 10 8 18 < f21 ! f22 ! s2 ! f22 ! f21 > without any intermediate restricted cycle. 2

In this section, we de ne several terms for constraint graphs. They will be used in deriving and proving the properties of constraint graphs in the next section. In this section, it is assumed that an initial graph is obtained from the predicate sched1;k that is de ned in (4.1) for 1;k . Before de ning terminologies for constraint graphs, the following function is de ned on node sets of constraint graphs. De nition 4.8 (g ) g is an one-to-one mapping

4.1.2 De nitions for Constraint Graphs

v0 if v = v0. + g (v ) = > sj if v = sj i i where 1 i N . : f j + if v = f j where 1 i N . i i g (V ) on a node set V is de ned to be a set of g (v ) where v is an element of V .
fv1 ; v2 ; : : : ; vig is a set of intermediate nodes of a path < y ! v1 ! v2 : : : vi ! z > where i 1, or fg is an intermediate node set if the path consists of one edge. 6 y ! z may also be considered as a path whose intermediate nodes belong to fsa+1 ; fa+1 ; : : : ; sN ; fN g.
5

by the following rule:

fsji ; fij j 1 i N ^ max(1; + 1) j g ! fsji ; fij j 1 i N ^ max( + 1; 1) j g
8 > <

30

j can be related to a node sj in j by For example, sj i in i
1 1 2 2

sj i = g( j
2

2

j1 j1 ) (si )

1;k j where C 1;k (sj i ) is a set of standard constraints obtained after variable elimination. sched (ei ) is de ned similarly. Also, as in De nition 4.5, the graphs found from the above predicates are denoted as follows: 1;k j G1;k(sj i ) denotes a graph constructed from sched (si ). G1;k(fij ) denotes a graph constructed from sched1;k (ej i ). 1;k j Note that, from C 1;k (sj i )(or G (si )), we can nd out the parametric lower and upper bound functions for sj i in the forms presented in Proposition 3.1. First, several terms are de ned for constraint graphs. Let E denote a subset of edges in a graph j 1;k G1;k (sj i ), (or G (fi )) in the following two de nitions.

j j In this case sj i is called a corresponding node of si in a job set , and vice versa. As in De nition 4.3, sched1;k (sj i ) (1 i N ^ 1 j k) is de ned to be a predicate on a set j j 1 1 k of variables fs1 ; e1; : : :; si g that is obtained after eliminating the variables, ek N , sN , : : :, ei , from sched1;k . That is, it can be expressed as j 1 1 1 1;k j sched1;k (sj 9s1 (si ) 1 :: 8e1 2 l1 ; u1 ] :: : : : 9si :: C i)
2 1 2

edge in E .

De nition 4.9 (Node Set from E ) Node(E ) denotes a set of nodes that are connected by any

De nition 4.10 (Preceding Node Set from E ) PrecNode(E ) is de ned to be a subset of Node(E ) in the graph such that v 2 PrecNode(E ) if and only if
j j 1 there exists a node v 0 that lies after v in the sequence < v0; s1 1 ; f1 ; : : :; si (; fi ) > satisfying:

v ! v0 2 E _ v0 ! v 2 E

1 2 2 In the example constraint graph shown in Figure 4.1 let E be ff2 ! f22, s2 2 ! f2 , v0 ! f2 g. 1 2 2 Then, a node set from E , Node(E ) is found to be fv0 ; f2 ; s2; f2 g. Also, the preceding node set, PrecNode(E ), is fv0 ; f21; s2 2 g. In the following de nition, let y and z denote any two consecutive nodes in the sequence 1 1 1 1 1 2 2 k k k k < v0 ; s 1 1 ; f1 ; s2 ; f2 ; : : :; sN ; fN ; s1 ; f1 ; : : :; : : :; s1 ; f1 ; : : :; sN ; fN >.

De nition 4.11 (Crossing Edge Set over a Node y) A crossing edge set ;k (y) is de ned to k ) satisfying either of the following two conditions: be a set of edges v ! v in G ;k (fN k 1. v 2< v ; s ; f ; : : :; y > and v 2< z; : : :; sk N ; fN >. k 2. v 2< v ; s ; f ; : : :; y > and v 2< z; : : :; sk N ; fN >.
1 1 2 1 1 0 1 1 1 1 2 2 0 1 1 1 1 1

1 For example, in Figure 4.4, 1;2(f2 ) is shown with dashed edges. Informally speaking, any 1;k k edges created in G (y ) after eliminating nodes < z; : : :; sk N ; fN > may connect only the nodes that belong to PrecNode( 1;k (y )). This is proved in Proposition 4.4.

31

25 5
1

22 5 8

s1 1

)+
-8

f1

5 0

s2

1

8

)+
-10

f2

1

0

s2 1

)+
-8

f1

2

5 0

s2

2

)+
-10

f2

2

-15 20 0 -20 40

-18

v0

Figure 4.4:

12

;

1 (f2 ) is denoted as dashed edges meeting with a vertical line.

De nition 4.12 (Created Edge Set in G ;k(fij )) A created edge set ;k (fij ), 1 j k 1, is de ned to be a set of edges v ! v in G ;k (fij ) where v , v satisfy the following condition:
1 1

k ) such that there exists a path v1 ; v2 in G1;k (fN

1

w

2

1

1

2

1. it has at least one intermediate node. 1 k k 2. all of its intermediate nodes belong to fv0 , s1 1 , f1 , : : :, sN , fN g
1

;k (sj ) is de ned similarly. i

fv , s , f , : : :, fij g.
0 1 1 1 1

That is, a created edge set in G1;k (fij ) contains edges that could be created during the variable elimination process. Note that, if a newly created edge is implied by an already existing edge k ) with a less weight and thus removed during the elimination process as explained in in G1;k (fN Algorithm 4.1 and 4.2, then the already existing edge is included into the created edge set instead of the removed one that is actually created during the variable elimination process. In gure 4.5, the constraint graph is shown corresponding to Example 3.1. Dashed edges are used to represent 1;3 3 (s2) and 1;3 (s2 2 ). Next, the semi-homogeneity and homogeneity relationships are de ned between two edge sets in two constraint graphs that are found during variable elimination processes from two job sets, 1;k and 1;l (k l), respectively.

De nition 4.13 (Semi-homogeneous Edge Sets) Let jE and E be subsets of j j j
1 2
1 2 1 2

edges in G1;k (fi ) and G1;l(fi ) (or, G1;k (si ) and G1;l(si )), respectively, where k k ^ j2 l. Then, E1 is semi-homogeneous to E2 if and only if

l ^ j1

j E j=j E j
1 2

^

(v1 ! v2 2 E1) =) (g(j j )(v1 ) ! g(j j ) (v2) 2 E2 )]
2 1 2 1

Here, note that, if E1 is semi-homogeneous to E2 , then (v3 ! v4 2 E2) =) (g(j j )(v3 ) ! g(j j ) (v4) 2 E1) holds, too, since j E1 j=j E2 j and E1 is mapped onto E2 under the index function g(j j ) which is one-to-one. The homogeneity relationship is de ned next which is stronger than semi- homogeneity relationship. Again, let E1 and E2 be subsets of edges in G1;k (fij ) and G1;l(fij ) (or, G1;k (sj i ) and G1;l(sj )), respectively, where k l ^ j k ^ j l . 1 2 i
1 2 1 2 2 1 1 2 1 2

32

De nition 4.14 (Homogeneous Edge Sets) E is homogeneous to E , denoted as E
and only if 1. E1 and E2 are semi-homogeneous. 2. For two nodes v1(6= v0 ), v2(6= v0 ), (v1 w ! v2 2 E1) () (g(j j )(v1) w ! g(j j )(v2) 2 E2) 3. For two nodes v0, v2, where v2 6= v0 , (v0 w ! v2 2 E1) () (v0 w+(j !j )L g(j j )(v2) 2 E2) 4. For two nodes v1, v0, where v1 6= v0 , (v1 w ! v0 2 E1) () (g(j j )(v1) w (j !j )L v0 2 E2)
2 1 2 1 2 1 2 1 2 1 2 1

1

2

1

E , if
2

Homogeneity relations are commutative and transitive, i.e., (E1 2 E3 2 3 1 which can be easily proved from the de nition of homogeneity. Two homogeneous created edge 1;3 2 sets, 1;3(s3 (s2 ), are shown in Figure 4.5 with dashed edges where L = 20. 2 ) and A constant, n, is de ned next that will be used in obtaining a complexity bound of our algorithm.

E E () E E E ) ^ (E E ) =) E
1 2 2 1

cyclically constrained job set and the de nition of a preceding node set. n 1 is the number of jobs in one scheduling window that have standard relative constraints with jobs in the next scheduling window.

De nition 4.15 (n) n =j PrecNode( ;k(fN )) j; k 2 j )) j is same for all 2 k and all 1 j k 1 from the de nition of a Note that j PrecNode( ;k (fN
1 1 1

From now on, the properties of constraint graphs will be examined that remain true during the node elimination process. Note that, from Proposition 4.2 if a negative weight restricted cycle exists in the constraint graph, Algorithm 4.1 or 4.2 will detect it and return False. In this case the predicate sched1;k is false and the job set 1;1 is not schedulable as well as 1;k . If a constraint graph appears in any of the following propositions, it is assumed that no contradiction has been found in the process of obtaining that graph. First, it is shown that the parametric bound functions j 1;k for sj i found from a constraint graph G (si ) depend on the start or nish times of the jobs in j 1 and j that are already executed. This means that the number of jobs it may actually be dependent on is shown to be bounded by O(N ). This bounds the number of variables to O(N ) that have to be used in evaluating parametric bound functions at runtime.

4.1.3 Characteristics of Constraint Graphs

Proposition 4.3 In a graph G ;k(sji ), if sji is connected to a node v, then v 2 PrecNode( ;k (sj i )) P j ;k k ;k k where P = fy j y 2< v ; sj ; f j ; : : :; fij > ^ (y ! sj i 2 G (fN ) _ si ! y 2 G (fN ))g
1 1 0 1 1 1 1 1 1 1

33

25 5 5 8 5

12 5
3 f1 3 s2

2 s1

)+
-8

f1
0 -18

2

s2 2

)+
-10

f2
-15

2

0

s3 1

)+
-8 -10

0

-20

40

-40

50 -48

v0

25 5 5 8 5

12 5

s1

1

)+

-8

f1 1
0

s1 2

)+
-10 -15

f2

1

0

2 s1

)+
-8 -10

f1
0

2

s2 2

20 0

30 -20

-28

v0

Figure 4.5: Homogeneous edge sets,

13

;

(s3 2 ) and

13

;

(s2 2)

Proof: Given in appendix.
Similar result holds for a graph G1;k (fij ). Then, the following proposition implies that the set of nodes, to which additionally created j 1;k 1;k j edges in G1;k (sj (si ))(or i )(or G (fi )) may be connected, is a subset of the set, PrecNode( j 1;k PrecNode( (fi ))).

Proposition 4.4

Node(
Also, j
1

;k (f j ) j N

Node( n(n 1) holds.

;k (sj )) i 1;k (fij ))
1

PrecNode( PrecNode(

1

;k(sj )) i 1;k (fij ))

Proof: Given in appendix.
These two propositions give an upper bound on the actual number of nodes sj i may be connected to in G1;k (sj ), which is O ( N ). If only jitter constraints are allowed from periodic tasks, it is i j 1;k j easy to see that si in G (si ) is connected to at most O(n) number of jobs. This is because j PrecNode( 1;k(sji )) j n and j P j 2. j ), is given in the Then, an interesting property of an additionally created edge set, 1;k (fN following proposition. After eliminating 4N variables of the last 2N jobs (belonging to k and k 1 ) from sched1;k , we periodically obtain semi-homogeneous created edge sets once eliminating each 2N variables for j , 2 j k 2.

Proposition 4.5 An edge set

1

;k (f j ) is semi-homogeneous to N

1

;k (f j N

1

) for 2 j k 2.

34

Proof: Given in appendix.
In addition, the edge weight change patterns between two semi-homogeneous edge sets, j ), are presented in the following proposition. and 1;k (fN
1 1 1 w 2 1 ( 1) 1 w ( 1) 2 1 1

;k (f j N

1

)

Then, the following is satis ed: 1. If v1 6= v0 and v2 6= v0 ,

j ) and ;k (f j ), Proposition 4.6 Consider two semi-homogeneous created edge sets, 0 ;k (fN N j j ;k ;k where 2 j k 2. Suppose v ! v 2 (fN ) and g (v ) ! g (v ) 2 (fN ).
1 1

w0 w 2. If v1 = v0 and v2 6= v0 , w0 w L 3. If v1 6= v0 and v2 = v0 , w0 w + L

Proof: Given in appendix.
j ) and 1;k (f j 1 ) for some j , then the Once we nd two homogeneous created edge sets, 1;k (fN N following proposition enables us to stop the variable elimination process, since homogeneous created edge sets will be found to the ones already obtained, if the node elimination process continues.

Proposition 4.7 If an edge set
j k 2, then

1

;k (f j ) is homogeneous to an edge set N
1

1

;k (f j N

1

), where 2

8l : 2 l j 1 :: 8i : 1 i N ::

;k (f l ) i

1

;k (f j ) i

^

1

;k (sl ) i

1

;k (sj ) i

Proof: Given in appendix.
1

;k (f j ) and N

More generalized result is presented next which holds whenever two homogeneous edge sets, j 1 ), are found during the variable elimination process. 1;k (fN
1

Proposition 4.8
;k (f j N
1

)

1

;k (f j ) N

=) (8i : 1 i ::

1

;k+i (f (j N

1)+

i)

1

;k+i (f j +i )) N
1 + +

k ) and G ;k i (f k i ), Proof: This is obvious from the cyclic structures of constraint graphs, G ;k(fN N
1

and from Proposition 4.6 and 4.7. From the de nition of homogeneity between edge sets in constraint graphs, the following proposition is derived. Proposition 4.9 Suppose 1;k (sji ) 1;k (sli) holds. Then,
1 2

j 1;k l 1;k l 2. the set of edges from sj i in G (si ) is homogeneous to the set of edges from si in G (si ).
1 2

j 1;k l 1;k l 1. the set of edges to sj i in G (si ) is homogeneous to the set of edges to si in G (si ).
1 2

35

j 1;k Note that from the set of edges to sj i in G (si ) we can obtain the parametric upper bound j j 1;k function Fsmax;k for sj j i , and from the set of edges from si in G (si ) we can obtain the parametric i lower bound function Fsmin;k for sj j i by inversely transforming the edge set into constraints. Two i l parametric lower (upper) bound functions for sj i and si are de ned to be homogeneous if they satisfy min;k condition 1(2) in the above proposition. If Fsj and Fsmin;k are homogeneous, it is denoted as: l
1 1 1 1 1

We have the following lemma from Proposition 4.7 and 4.8. j ) holds for 2 j k 2, then j 1) 1;k (fN Lemma 4.4 If 1;k (fN ^ Fsmax;k 1. 8l : 2 l j 1 :: 8i : 1 i N :: Fsmin;k Fsmax;k Fsmin;k j j l l
+a 2. 8a : 1 a :: Fsmin;k j a
(

Fsmin;k j i

i

i

2

1

Fsmin;k l i

2

1 and F max;1, This lemma enables us to obtain asymptotic 7 parametric bound functions, Fsmin; j sj i i once we nd two homogeneous created edge sets during node elimination process from the constraint graph. By using asymptotic parametric bound functions at run-time we can guarantee that the constraint set C 1;k will be satis ed with any arbitrary value of k. 1 and F max;1 , are parameterized in Note that asymptotic parametric bound functions, Fsmin; j sj i i 1 j 1 ; : : :; f j g and in terms of the index variable j . By knowing terms of the variables in fsj ; f 1 1 i 1 the scheduling window(job set) index j at run-time, only one pair of asymptotic parametric bound functions need to be stored for all sj i where i is xed and j 2. In addition to this, another pair of parametric bound functions needs to be stored for s1 i.

i

1)+

Fsmin;k j a i
+

+

a

a F max;k ^ Fsmax;k a j a sj i
(

i

i

i

1)+

+

+

+

a

i

i

In this section, a 4N -node graph, called basis graph, is obtained to which we can cyclically apply k ) for variable elimination algorithm without explicitly obtaining a large constraint graph G1;k (fN large k. That is, by recursively applying variable elimination algorithm to this smaller graph, it j ), j = k; k 1; : : :, will converge or can be decided whether the created edge set sequence, 1;k (fN not. 2 De nition 4.16 (Basis Graph) A basis graph Gb(Vb; Eb) is de ned as a subgraph of G1;2(fN ) as 8 follows. 1. Vb = Vb;1 Vb;2 fv0 g where: 1 Vb;1 = PrecNode( 1;2(fN )) fv0 g 2 2 2 Vb;2 = fs2 1 ; f1 ; : : :; sN ; fN g
7 \Asymptotic" means \converging" in the sense that homogeneous parametric bound functions will be found to the ones already obtained, if the variable elimination process continues. 8 1; 2 2 G (f N ) is found from 1;2 .

4.1.4 O -line Component

36

2 2. All edges in G1;2(fN ) connecting any two nodes in Vb are included into Eb .

k ) can be transformed into an equivThen, the variable elimination process for a graph G1;k (fN alent one by using a basis graph as follows:

Algorithm 4.3 Cyclic algorithm to obtain G ;k (fN ).
1 2

Input: k, Basis Graph Gb (Vb; Eb) 2 Output: G1;k (fN ) 1. Initialize i = 1. 1 2. Initialize G1 in (Vb ; Ein) = Gb (Vb ; Eb). 3. From i = 1 to i = k 2 repeat the following: i ), the nodes of Vb;2 by alternately using Algorithm 4.1 and (a) Eliminate, from Giin (Vb; Ein 4.2. (b) If False is returned from Algorithm 4.1 or 4.2, then return False. i ) denote the resulting graph. (c) Let Giout (Vb;1 fv0 g; Eout i ) = Gi 1 (Vb;1 fV0g; E i 1), then return Gi (Vb; E i ). (d) If i 2 and Giout (Vb;1 fv0 g; Eout out out in in i +1 i +1 (e) Let Gin (Vb; Ein ) = Gb (Vb; Eb) i ), (f) For each edge v1 w! v2 in Giout (Vb;1 fv0g; Eout i. If v1 6= v0 and v2 6= v0 , add an edge g(1)(v1) w! g(1)(v2) to i+1). +1 (Vb; Ein Giin +L +1 i+1). ii. If v1 = v0, add an edge g(1)(v1) w ! g(1)(v2) to Giin (Vb ; Ein +1 i+1). iii. If v2 = v0, add an edge g(1)(v1) w !L g(1)(v2) to Giin (Vb ; Ein (g) Set i = i + 1.
12 12 12 12

i ) is returned. By utilizing Proposition 4.7, this graph can At step 3 (d) the graph Giin (Vb; Ein 1;k 2 be shown to be equal to G (fN ). Once we nd homogeneous created edge sets on Vb;1 fv0 g at step 3 (d), asymptotic parametric bound functions for job start times can be found from the graph 2 2 2 2 G1;k (fN ). From this graph the variables in the sequence < fN ; s2 N ; : : :; f1 ; s1 > are eliminated to 2 obtain the parametric bound functions for each si , 1 i N . During this elimination process, the weights of edges connected to or from v0 have to be modi ed appropriately to re ect scheduling window index j 2 as well as the node index of the graph. For example, j 2 2 2 if an edge v0 w ! s2 i is obtained after eliminating < fN ; sN ; : : :; fi >, then a formula si w + (j 2)L must be used in deriving asymptotic parametric bound functions for sj i. w 2 2 2 if an edge s2 i ! v0 is obtained after eliminating < fN ; sN ; : : :; fi >, then a formula w + j (j 2)L sj i must be used in deriving asymptotic parametric bound functions for si .
w j 2 2 2 2 if an edge s1 a ! si , is obtained after eliminating < fN ; sN ; : : :; fi >, then a formula si j 1 sj w must be used in deriving asymptotic parametric bound functions for si . a

37

After obtaining asymptotic parametric bound functions for sj i , 2 j , we can also nd parametric 1 1;k 1 bound functions for by eliminating nodes from G (fN ). Note that, at each iteration in the above algorithm, no explicit transformation of node indices are performed by using g( 1). This is because our purpose is to check the schedulability and obtain asymptotic parametric bound functions, and this may be done without explicit knowledge of node indices. The key property that this algorithm makes use of is that the basis graph is recursively used and transformed until the schedulability is checked. It is clear that this algorithm produces exactly the same result (True or False) and graph as the node elimination algorithm applied to k ) does. G1;k (fN The following theorem provides an upper bound on the number of loop iterations in Algorithm 4.3 that have to be performed before the schedulability is checked.
is not schedulable.

Theorem 4.1 If Algorithm 4.3 doesn't terminate within n n + 2 loop iterations, then sched ;1
2 1

Proof: Given in appendix.
Therefore, we obtain the nal algorithm for checking sched1;1 and deriving asymptotic parametric bound functions if 1;1 is schedulable. The overview of o -line component is shown in Figure 4.6.
) G b( V b ,E b

Algorithm 3 with k=n 2-n+4

Homogeneous Created Edge Sets are found G (f N )
2

No

Not Schedulable

Yes

Eliminate variables from G (f N )
2

Parametric Bound Functions
1 for s i , i=1,2,...,N

Parametric Bound Functions for s i , j=2,3,... , i=1,2,...,N
j

Figure 4.6: Overview of o -line component From Theorem 4.1 the total complexity of the o -line algorithm is O(n2N 3 ), since each loop iteration of Algorithm 4.3 may take at most O(N 3) computation time 23]. If only jitter constraints are allowed from periodic tasks, then the o -line algorithm will be nished within O(n4N ) time where n is the number of periodic tasks that have jitter constraints, since each loop iteration in this case takes at most O(n2N ) time. This is because j PrecNode( 1;k(sj i )) P j n + 2 holds, and because from Proposition 4.3 we know that at most O(n) number of edges exist in G1;k (sj i) 38

j 1;k j that are connected to or from sj i . This implies that the elimination of si from the graph G (si ) will require at most O(n2 ) time, and eliminating nodes of one job set requires O(n2N ) time. Also, the on-line component in this case requires at most O(n) execution time.

For a certain class of standard constraints, called restricted standard constraints, it will be shown that the o -line component can be carried out in O(N 3 + n5 ) time instead of O(n2 N 3) time.
constraints:

4.1.5 O -line Component with Restricted Standard Constraints

De nition 4.17 (Restricted Standard Constraints) For two jobs, aj and bl, where (j = l 1) _ (j = l ^ a < b), the following constraints are de ned as restricted standard
sj a j sa + ej a slb slb c1 c2 slb l sb + elb sj a sj a c3 c4
(4.2)

Also, as in the de nition for standard constraints, release time and deadline constraints can also be classi ed as restricted standard constraints. We also include as restricted standard any constraint that can be rewritten in one of the above forms.

For this class of constraints the following lemma makes it possible to pre-process the basis graph and to obtain a smaller graph that can be used in the o -line algorithm instead of the basis graph. This graph is called a compact basis graph.
and only if it is schedulable for the maximum execution times of the jobs.

Lemma 4.5 ( 42]) If
Sched

1

;k is constructed with restricted standard constraints, it is schedulable if

Let the following be a predicate representing a schedulability for a job set .

restricted standard constraints.
1

9s :: 8e 2 l ; u ] :: : : : 9si :: 8ei 2 li; ui] :: 9sN :: 8eN 2 lN ; uN ] :: C From Lemma 4.5 this predicate is equivalent to the following predicate where C only consists of
1 1 1 1

9s :: : : : :: 9si :: : : : 9sN :: 9sN :: C ej =uj : 1 j N ]
1

where ej =uj denotes a substitution of uj for a variable ej . In other words, Sched can be checked by rst replacing every universally quanti ed variable ej with uj for 1 j N , and then by eliminating existentially quanti ed variables sN , : : :, s1 . However, eliminating the existentially quanti ed variables, sN , sN 1 , : : :, si+1 , in any order will produce the same constraint graph G(si ). This is because there exists no exclusively-ORed edges between nodes in fv0 ; si+1 ; si+2; : : :; sN g after substituting the maximum execution times for the variables ej , 1 j N , and because any minimum weight acyclic restricted paths through the nodes of fsi+1 ; : : :; sN g are preserved in the remaining constraint graph after eliminating the variables sj , i + 1 j N , regardless of the elimination order. 2 This property is used to nd a compact basis graph from sched1;2(fN ) as follows: 39

4. Let Gcb (Vcb ; Ecb) be a subgraph of G00 (V 00 ; E 00 ): (a) Vcb = Vcb;1 Vcb;2 fv0 g where: 0 1 1 1 Vcb;1 = fs1 (sN )) 1 ; s2 : : :; sN g \ PrecNode( Vcb;2 = g(1)(Vcb;1) (b) All edges in G00 (V 00 ; E 00 ) connecting two nodes of Vcb de nes Ecb . We can apply Algorithm 4.3 to this compact basis graph instead of the basis graph. This limits the complexity of obtaining homogeneous created edge sets to O(N 3 + n5 ) instead of O(n2 N 3). Once we nd homogeneous created edge sets on Vcb;1, asymptotic parametric bound functions 2 can be found by rst unrolling the nal graph from the algorithm to obtain G1;1(fN ) and then 2 2 2 2 by eliminating from this graph the nodes in the sequence < fN ; sN ; : : :; f1 ; s1 >. During this elimination process, as in Section 4.1.4 the weights of edges connecting v0 have to be modi ed appropriately to re ect scheduling window index as well as the node indices of the graph.

Algorithm 4.4 (Compact Basis Graph) Algorithm to obtain a compact basis graph. 2 Input: sched1;2(fN ) Output: Compact Basis Graph Gcb (Vcb; Ecb ) 1. Let G0 (V 0 ; E 0 ) denote a graph from a predicate that is found by substituting uj for each 2 universally quanti ed variable ej in sched1;2(fN ). 0 0 0 1 2. Let 0 (s1 N ) denote a crossing edge set of sN found from G (V ; E ). 3. Let G00 (0 V 00 0; E 000) denote a graph found after eliminating the following nodes from G (V ; E ). 2 2 fs2 g(1)(PrecNode( 0 (s1 1 ; s2 ; : : :; sN g N )))

4.2 Example

The asymptotic parametric bound functions are found for the job set, 1;1 , in Example 3.1. Figure 4.7 shows the parametric bound functions found from 1;4 , and Figure 4.8 shows asymptotic parametric bound functions for sched1;1 . It is clear from this gure that the following hold:

1 Note that n =j PrecNode( 1;4(f2 )) j= 3, and n2 n + 2 = 8 is the iteration bound given in Theorem 4.1. But, Algorithm 4.3 found homogeneous created edge sets after 3 loop iterations. This shows that the upper bound on the number of loop iterations given in Theorem 4.1 is not tight in general.

Fsmin; Fsmax; Fsmin; Fsmax;
2 1 2 1 2 2 2 2

4 4

4 4

Fsmin; Fsmax; Fsmin; Fsmax;
3 1 3 1 3 2 3 2

4 4

4 4

40

max(8; s 1 1 max(20; s2 + e1 2 ; s1 + e 2 2 max(28; s1 + e1; s1 2 +e 2 2 max(40; s2 + e2; s2 1 +e 3 2 max(48; s3 + e ; s 1 1 2 +e 3 3 max(60; s2 + e2; s3 1 +e 4 3 max(s4 + e ; s 1 1 2 +e

1 1 1 1 1 2 2 1 2 2 3 1 3 2

0 +e ) + 10) + 10) + 10) + 10) + 10) + 10)
1 1

s1 1 s1 2 s2 1 s2 2 s3 1 s3 2 s4 1 s4 2

2 1 min(10; s1 1 + e1 + 5) 1 1 1 min(22; s1 1 + e1 + 17; s2 + e2 + 4) 1 1 2 min(30; s2 + e2 + 12; s1 + e2 1 + 5) 2 2 2 min(42; s1 + e1 + 17; s2 + e2 2 + 4) 2 3 3 min(50; s2 + e + 12 ; s + e 2 2 1 1 + 5) 3 3 3 min(62; s1 + e1 + 17; s2 + e3 2 + 4) 3 4 4 min(70; s3 + e + 12 ; s + e 2 2 1 1 + 5)

Figure 4.7: Parametric bound functions found from sched1;4

Fsmin Fsmax Fsmin Fsmax Fsmin j max Fsj Fsmin j max Fsj
1 1 1 1 1 2 1 2 1 1 2 2

= = = = = = = =

0 2 1 max(8; s1 1 + e1 ) 1 min(10; s1 1 + e1 + 5) 1 1 1 1 max(20 + (j 2)20; sj + ej ; sj + ej + 10) 2 2 1 1 1 1 1 1 min(22 + (j 2)20; sj + ej + 17; sj + ej + 4) 1 1 2 2 j j j 1 j 1 max(28 + (j 2)20; s1 + e1 ; s2 + e2 + 10) 1 1 j min(30 + (j 2)20; sj + ej + 12; sj 2 2 1 + e1 + 5)

Figure 4.8: Asymptotic parametric bound functions for sched1;1

4.3 Summary
In this chapter, we presented a solution approach for the problem de ned in Chapter 3. A new technique, called dynamic cyclic dispatching, is developed based on the dynamic time-based scheduling scheme introduced in Chapter 1. A schedule(ordering) of N jobs is assumed to be given on a scheduling window, and it is required that this schedule be repeated at run time. The relative constraints may be cyclically de ned across the boundaries of the scheduling windows as well as between jobs in one scheduling window. Unlike static approaches which assign xed start times to jobs in a scheduling window, our approach not only allows us to exibly manage the slack times with the schedulability of a job set not a ected, but also yields an guaranteed schedulability in the sense that, if other dispatching mechanism can dispatch the job sequences satisfying all given constraints, then our mechanism can also schedule them. A pseudo-polynomial time o -line algorithm is presented to check the schedulability of a cyclically constrained job set and to obtain parametric lower and upper bound functions for each job start time. The o -line algorithm requires at most O(n2 N 3) time where n is the number of jobs in a scheduling window that have relative constraints with jobs in the next scheduling window. Then, the parametric bound functions for each start time can be evaluated by an on-line algorithm within O(N ) time. In addition, with restricted standard constraints it is shown that the o -line component requires at most O(N 3 + n5 ) execution time. 41

Chapter 5

Design of a Dynamic Temporal Controller
5.1 Introduction
In this chapter, we consider the issue of how a dynamic temporal controller can be constructed. In dynamic temporal control, the regular sampling interval assumption is relaxed, and computational costs are incorporated into the cost function. At run-time new controls are computed and exercised at chosen time instants such that the cost function is minimized. The feasibility of this new scheme is demonstrated by obtaining dynamic temporal control laws for linear time-invariant control systems. In Section 5.2, we formulate the dynamic temporal control problem and introduce computation cost into performance index function. The solution approach for linear time-invariant systems is discussed in Section 5.3. In Section 5.4, implementation issues are addressed. We provide an example of controlling rigid body satellite in Section 5.5 . In this example, a dynamic temporal controller is designed. Results show that the dynamic temporal control approach performs better than the traditional sampled data control approach with the same number of control exercises. Section 5.6 discusses the issues arising from the application of dynamic temporal controls to the design of real-time control systems. Finally, Section 5.7, we present a summary.

5.2 Problem Formulation
In dynamic temporal control, the control changing time instants are chosen such that a cost function is minimized which incorporates computational costs as well as state, input costs. We consider a steady state control problem on a nite time line 0; Tf ]. To formulate the dynamic temporal control problem for a discrete, linear time-invariant system, we rst discretize the time interval 0; Tf ] into M subintervals of length = Tf =M . Let DM = f0; ; 2 ; : : : (M 1) g denote M time instants that are regularly spaced. Here, control exercising time instants are restricted within DM for the purpose of simplicity. The linear time-invariant controlled process is described by the di erence equation: x(k + 1) = Ax(k) + Bu(k) (5.1) where k is the time index. One unit of time represents the subinterval , whereas x 2 Rn and u 2 Rl are the state and input vectors, respectively. It is well known that there exists a steady state optimal control law 20, 39] uo (i) = fi x(i)] i = 0; 1; :::; M 1 (5.2) 42

that minimizes the quadratic performance index function (Cost)

JM =

M X1 k=0

xT (k)Qx(k) + uT (k)Ru(k)] + xT (M )Qx(M )

(5.3)

where Q 2 Rn n is positive semi-de nite and R 2 Rl l is positive de nite. As we can see, traditional controller exercises control at every time instant in D. However, in temporal control, we are no longer constrained to exercise control at every time instant in D. In dynamic temporal control we require that the control be exercised with the following steps: At time ti , ti 2 DM and 1 i, 1. Compute a current state x(ti ) 2. Compute (ti ) 3. Compute and apply u(ti ) to the system 4. Repeat the process at ti+1 = ti + (ti ) Note that ti , 1 i, denote control changing time instants, and (ti ) denotes the time interval between i-th control exercise and (i + 1)-th control exercise. For the purpose of simplicity, dual mode dynamic temporal control is considered. That is, (ti ) may take one of the following two values:

    aΔ or bΔ

where a and b are positive integers (a < b) such that b is an integer multiple of a. It is also assumed that b divides M without remainder. bΔ is called the base sampling period and aΔ the rapid sampling period. Let M = γb, where γ is a positive integer. In addition to the above assumptions, we further assume that new controls are computed at all time instants in {0, bΔ, 2bΔ, ..., (γ-1)bΔ}. Let each time interval [(i-1)bΔ, ibΔ] of size bΔ be called a frame, for 1 ≤ i ≤ γ. The sampling period decision function is evaluated only at time instants that are start times of frames, and once δ(ibΔ) is decided it is enforced during the next time frame [ibΔ, (i+1)bΔ]. In other words, if δ(ibΔ) = aΔ, the control computations will be done at ibΔ, (ib+a)Δ, ..., (ib+b-a)Δ; and if δ(ibΔ) = bΔ, the control computation will be done only at ibΔ in [ibΔ, (i+1)bΔ). Under these assumptions, the steps performed by a dynamic temporal controller can be summarized as follows. At time ibΔ, 0 ≤ i ≤ γ-1:

1. Compute the current state x(ibΔ).
2. Compute δ(ibΔ) = g_i(x(ibΔ)).
   (a) If δ(ibΔ) = aΔ: at t_j = (ib + ja)Δ for 0 ≤ j ≤ b/a - 1, compute and apply u(t_j) = h_{i,j}(x(t_j)).
   (b) If δ(ibΔ) = bΔ: compute and apply u(ibΔ) = h_i(x(ibΔ)).

3. Repeat the process at (i+1)bΔ if i < γ-1.

This new formulation of dynamic temporal control makes it possible to find a good approximation to optimal control laws, as can be seen in later sections of this chapter. We want to find a feedback control law, g_i, h_i, and h_{i,j} for i = 0, 1, ..., γ-1 and j = 0, 1, ..., b/a - 1, that minimizes a new performance index function

    J'_M = J_M + αβ        (5.4)

Here, α is the computation cost of exercising control with a rapid sampling period instead of a base sampling period in one frame, and β denotes the number of frames in [0, M] executed with a rapid sampling period. Hence, exercising controls with a rapid sampling period increases the cost term αβ. So, if exercising control with a rapid sampling period does not reduce the term J_M by more than this increase, exercising control with a base sampling period is likely to be a better choice. This is the key idea of the solution approach given in the next section. This new cost function differs from J_M in two respects. First, the concept of computational cost is introduced in J'_M as the term αβ, to regulate the number of frames with rapid sampling periods. If we did not take this computation cost into consideration, β would tend to become γ; if the computation cost is high (i.e., α has a large value), then β is likely to be small in order to minimize the total cost function. Second, in dynamic temporal control, we seek not only the control law u(x(t)) but also the control exercising time instants and the number of control changes. In the next section, we present in detail specific techniques for finding a dynamic temporal control law with performance close to optimal solutions.
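Concretely, for a recorded trajectory the two terms of (5.4) can be evaluated as in the following sketch; the function name and the array-based trajectory encoding are ours, not the dissertation's.

    import numpy as np

    def j_prime(xs, us, Q, R, alpha, beta):
        """Evaluate J'_M = J_M + alpha*beta, per (5.3) and (5.4).

        xs: states x(0), ..., x(M); us: the input held at each of the M ticks
        (u(k) repeats the last computed control between control changes);
        alpha: computation cost of one rapid-sampled frame; beta: number of
        frames executed with the rapid sampling period."""
        J = sum(x @ Q @ x + u @ R @ u for x, u in zip(xs[:-1], us))
        J += xs[-1] @ Q @ xs[-1]        # terminal cost x^T(M) Q x(M)
        return J + alpha * beta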

5.3 Temporal Control with Fixed Sampling Times

Let T = {t_0, t_1, t_2, ..., t_{ν-1}} denote a set of control changing time instants, where t_0 = 0, t_1 = n_1 Δ, ..., t_{ν-1} = n_{ν-1} Δ. That is, n_0, n_1, ..., n_{ν-1} are the indices of the control changing time instants; we also let n_ν = M. In this section, an optimal control law is derived, when T is given, which minimizes the cost function J_M. In the next section, the results developed in this section will be used in devising good heuristics for deciding δ values minimizing J'_M. Assume that T is given. Then a new control input calculated at t_i is applied to the actuator for the time interval from t_i to t_{i+1}. Our objective here is to determine the optimal control law

    u^o(n_i) = f_i[x(n_i)],   i = 0, 1, ..., ν-1        (5.5)

that minimizes the quadratic performance index function (cost) J_M defined in (5.3). The principle of optimality, developed by Richard Bellman [7, 8], is the approach used here: if a closed-loop control u^o(n_i) = f_i[x(n_i)] is optimal over the interval t_0 ≤ t ≤ T_f, then it is also optimal over any sub-interval t_m ≤ t ≤ T_f, where 0 ≤ m ≤ ν. As can be seen from Figure 5.1, the total cost J_M can be decomposed into the F_i for 0 ≤ i ≤ ν, where

    F_i = x^T(n_i) Q x(n_i) + x^T(n_i + 1) Q x(n_i + 1) + x^T(n_i + 2) Q x(n_i + 2) + ... + x^T(n_{i+1} - 1) Q x(n_{i+1} - 1) + (n_{i+1} - n_i) u^T(n_i) R u(n_i)        (5.6)

Figure 5.1: Decomposition of J_M into the F_i.

That is, from (5.1),

    F_i = x^T(n_i) Q x(n_i)
          + (A x(n_i) + B u(n_i))^T Q (A x(n_i) + B u(n_i))
          + (A^2 x(n_i) + A B u(n_i) + B u(n_i))^T Q (A^2 x(n_i) + A B u(n_i) + B u(n_i))
          + ...
          + (A^{n_{i+1}-n_i-1} x(n_i) + A^{n_{i+1}-n_i-2} B u(n_i) + ... + A B u(n_i) + B u(n_i))^T Q (A^{n_{i+1}-n_i-1} x(n_i) + ... + A B u(n_i) + B u(n_i))
          + (n_{i+1} - n_i) u^T(n_i) R u(n_i)        (5.7)

This can be rewritten as

    F_i = x^T(n_i) Q x(n_i) + \sum_{j=1}^{n_{i+1}-n_i-1} [\bar{A}_j x(n_i) + \bar{B}_j u(n_i)]^T Q [\bar{A}_j x(n_i) + \bar{B}_j u(n_i)] + (n_{i+1} - n_i) u^T(n_i) R u(n_i)        (5.8)

where \bar{A}_j = A^j and \bar{B}_j = \sum_{k=0}^{j-1} A^k B. Then J_M can be expressed as

    J_M = F_0 + F_1 + F_2 + ... + F_ν        (5.9)

where the last term F_ν = x^T(M) Q x(M) is the terminal cost. Let S_m be the cost from i = ν-m+1 to i = ν:

    S_m = F_{ν-m+1} + F_{ν-m+2} + ... + F_{ν-1} + F_ν,   1 ≤ m ≤ ν+1.        (5.10)

These cost terms are illustrated in Figure 5.1. Therefore, by applying the principle of optimality, we can first minimize S_1 = F_ν, then choose u^o(n_{ν-1}) to minimize S_2 = F_{ν-1} + F_ν = S_1^o + F_{ν-1}, where S_1^o is the optimal cost incurred at t_ν. We can continue by choosing u^o(n_{ν-2}) to minimize S_3 = F_{ν-2} + F_{ν-1} + F_ν = F_{ν-2} + S_2^o, and so on, until S_{ν+1} = J_M is minimized. Note that S_1 = F_ν = x^T(n_ν) Q x(n_ν) is determined only from x(n_ν), which is independent of any other control inputs.

5.3.1 Inductive Construction of an Optimal Control Law with T Given

We inductively derive an optimal controller which changes its control at the time instants t_0, t_1, ..., t_{ν-1}. As shown in the previous section, the inductive procedure goes backwards in time, from S_1^o to S_{ν+1}^o. Since S_1 = F_ν = x^T(n_ν) Q x(n_ν) + u^T(n_ν) R u(n_ν) and x(n_ν) is independent of u(n_ν), we can let u^o(n_ν) = u^o(M) = 0 and S_1^o = x^T(n_ν) Q x(n_ν), where Q is symmetric and positive semi-definite.

Induction Basis: S_1^o = x^T(n_ν) Q x(n_ν), where Q is symmetric.

Inductive Assumption: Suppose that S_m^o = x^T(n_{ν-m+1}) P(ν-m+1) x(n_{ν-m+1}) holds for some m, where 1 ≤ m ≤ ν and P(ν-m+1) is symmetric.
For brevity, let d = n_{ν-m+1} - n_{ν-m}. We can write S_m^o as

    S_m^o = [\bar{A}_d x(n_{ν-m}) + \bar{B}_d u(n_{ν-m})]^T P(ν-m+1) [\bar{A}_d x(n_{ν-m}) + \bar{B}_d u(n_{ν-m})]        (5.11)

From the definition of S_m and (5.8),

    S_{m+1} = S_m^o + F_{ν-m}
            = S_m^o + x^T(n_{ν-m}) Q x(n_{ν-m})
              + \sum_{j=1}^{d-1} [\bar{A}_j x(n_{ν-m}) + \bar{B}_j u(n_{ν-m})]^T Q [\bar{A}_j x(n_{ν-m}) + \bar{B}_j u(n_{ν-m})]
              + d u^T(n_{ν-m}) R u(n_{ν-m})        (5.12)

and, substituting (5.11), the above equation becomes

    S_{m+1} = [\bar{A}_d x(n_{ν-m}) + \bar{B}_d u(n_{ν-m})]^T P(ν-m+1) [\bar{A}_d x(n_{ν-m}) + \bar{B}_d u(n_{ν-m})]
              + x^T(n_{ν-m}) Q x(n_{ν-m})
              + \sum_{j=1}^{d-1} [\bar{A}_j x(n_{ν-m}) + \bar{B}_j u(n_{ν-m})]^T Q [\bar{A}_j x(n_{ν-m}) + \bar{B}_j u(n_{ν-m})]
              + d u^T(n_{ν-m}) R u(n_{ν-m})        (5.13)

If we differentiate S_{m+1} with respect to u(n_{ν-m}), then

    ∂S_{m+1}/∂u(n_{ν-m}) = 2 \bar{B}_d^T P(ν-m+1) \bar{A}_d x(n_{ν-m}) + 2 \bar{B}_d^T P(ν-m+1) \bar{B}_d u(n_{ν-m})
                           + \sum_{j=1}^{d-1} [2 \bar{B}_j^T Q \bar{A}_j x(n_{ν-m}) + 2 \bar{B}_j^T Q \bar{B}_j u(n_{ν-m})]
                           + 2 d R u(n_{ν-m})        (5.14)

                         = 2 {\bar{B}_d^T P(ν-m+1) \bar{A}_d + \sum_{j=1}^{d-1} \bar{B}_j^T Q \bar{A}_j} x(n_{ν-m})
                           + 2 {\bar{B}_d^T P(ν-m+1) \bar{B}_d + \sum_{j=1}^{d-1} \bar{B}_j^T Q \bar{B}_j + d R} u(n_{ν-m})        (5.15)

Note that P(ν-m+1) is symmetric, and that the following three rules are applied to differentiate S_{m+1} above:

    ∂(x^T Q x)/∂x = 2 Q x,    ∂(x^T Q y)/∂x = Q y,    ∂(x^T Q y)/∂y = Q^T x.

Setting ∂S_{m+1}/∂u(n_{ν-m}) = 0, from Lemma 5.1 and Lemma 5.2 given later we can obtain the u^o(n_{ν-m}) which minimizes S_{m+1}, and thus obtain S_{m+1}^o:

    u^o(n_{ν-m}) = - {\bar{B}_d^T P(ν-m+1) \bar{B}_d + \sum_{j=1}^{d-1} \bar{B}_j^T Q \bar{B}_j + d R}^{-1}
                     {\bar{B}_d^T P(ν-m+1) \bar{A}_d + \sum_{j=1}^{d-1} \bar{B}_j^T Q \bar{A}_j} x(n_{ν-m})
                 = - K(ν-m) x(n_{ν-m})        (5.16)

where K(ν-m) is defined by (5.16). Therefore, we can write

    \bar{A}_d x(n_{ν-m}) + \bar{B}_d u^o(n_{ν-m}) = [\bar{A}_d - \bar{B}_d K(ν-m)] x(n_{ν-m}).        (5.17)

If we use (5.16) and (5.17), we have

    S_{m+1}^o = {[\bar{A}_d - \bar{B}_d K(ν-m)] x(n_{ν-m})}^T P(ν-m+1) {[\bar{A}_d - \bar{B}_d K(ν-m)] x(n_{ν-m})}
              + x^T(n_{ν-m}) Q x(n_{ν-m})
              + \sum_{j=1}^{d-1} {[\bar{A}_j - \bar{B}_j K(ν-m)] x(n_{ν-m})}^T Q {[\bar{A}_j - \bar{B}_j K(ν-m)] x(n_{ν-m})}
              + d [K(ν-m) x(n_{ν-m})]^T R [K(ν-m) x(n_{ν-m})]        (5.18)

This equation can be rewritten as

    S_{m+1}^o = x^T(n_{ν-m}) { [\bar{A}_d - \bar{B}_d K(ν-m)]^T P(ν-m+1) [\bar{A}_d - \bar{B}_d K(ν-m)] + Q
              + \sum_{j=1}^{d-1} [\bar{A}_j - \bar{B}_j K(ν-m)]^T Q [\bar{A}_j - \bar{B}_j K(ν-m)]
              + d K^T(ν-m) R K(ν-m) } x(n_{ν-m})
              = x^T(n_{ν-m}) P(ν-m) x(n_{ν-m})        (5.19)

where P(ν-m) is obtained from K(ν-m) and P(ν-m+1) as in (5.19). Also note that knowing P(ν-m+1) is enough to compute K(ν-m), because the other terms of (5.16) are known a priori. Therefore, we have found a symmetric matrix P(ν-m) satisfying S_{m+1}^o = x^T(n_{ν-m}) P(ν-m) x(n_{ν-m}). From (5.16) and (5.19), we have the following recursive equations for obtaining P(ν-m) from P(ν-m+1), where m = 1, 2, ..., ν and d = n_{ν-m+1} - n_{ν-m}:

    K(ν-m) = {\bar{B}_d^T P(ν-m+1) \bar{B}_d + \sum_{j=1}^{d-1} \bar{B}_j^T Q \bar{B}_j + d R}^{-1}
               {\bar{B}_d^T P(ν-m+1) \bar{A}_d + \sum_{j=1}^{d-1} \bar{B}_j^T Q \bar{A}_j}        (5.20)

    P(ν-m) = [\bar{A}_d - \bar{B}_d K(ν-m)]^T P(ν-m+1) [\bar{A}_d - \bar{B}_d K(ν-m)] + Q
             + \sum_{j=1}^{d-1} [\bar{A}_j - \bar{B}_j K(ν-m)]^T Q [\bar{A}_j - \bar{B}_j K(ν-m)]
             + d K^T(ν-m) R K(ν-m)        (5.21)

Also, we know that at each time instant n_{ν-m}

    u^o(n_{ν-m}) = - K(ν-m) x(n_{ν-m}).        (5.22)

Hence, with P(ν) = Q, we can obtain K(i) and P(i) for i = ν-1, ν-2, ..., 1, 0 recursively using (5.20) and (5.21). At each time instant n_i, i = 0, 1, 2, ..., ν-1, the new control input value is obtained using (5.22) from K(i) and x(n_i), where x(n_i) is the estimate of the system state at n_i. Also note that the optimal control cost is J_M^o = S_{ν+1}^o = x^T(0) P(0) x(0), where P(0) is found from the above procedure.
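As an illustration, the backward recursion (5.20)-(5.21) can be coded directly; the sketch below is ours (function names and data layout are assumptions), uses numpy, and omits numerical refinements such as re-symmetrizing P(i) at each step.

    import numpy as np

    def bars(A, B, j):
        """Abar_j = A^j and Bbar_j = sum_{k=0}^{j-1} A^k B (zero for j = 0)."""
        Abar = np.linalg.matrix_power(A, j)
        Bbar = sum((np.linalg.matrix_power(A, k) @ B for k in range(j)),
                   np.zeros_like(B))
        return Abar, Bbar

    def backward_gains(A, B, Q, R, n):
        """Gains for control-change indices n = [n_0, ..., n_{nu-1}, M].

        Returns K(0..nu-1) and P(0); u^o(n_i) = -K[i] @ x(n_i), and the
        optimal cost is x(0)^T P(0) x(0)."""
        nu = len(n) - 1
        P = Q.copy()                          # P(nu) = Q
        K = [None] * nu
        for i in range(nu - 1, -1, -1):       # i = nu-1, ..., 0
            d = n[i + 1] - n[i]
            Ad, Bd = bars(A, B, d)
            V = Bd.T @ P @ Bd + d * R         # bracketed inverse of (5.20)
            W = Bd.T @ P @ Ad
            for j in range(1, d):
                Aj, Bj = bars(A, B, j)
                V += Bj.T @ Q @ Bj
                W += Bj.T @ Q @ Aj
            K[i] = np.linalg.solve(V, W)      # exists by Lemma 5.2
            Acl = Ad - Bd @ K[i]
            Pn = Acl.T @ P @ Acl + Q + d * (K[i].T @ R @ K[i])
            for j in range(1, d):
                Aj, Bj = bars(A, B, j)
                Cl = Aj - Bj @ K[i]
                Pn += Cl.T @ Q @ Cl           # state costs inside the hold interval
            P = Pn                            # P(i), per (5.21)
        return K, P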

To prove the optimality of this control law we need the following lemmas.

Lemma 5.1 If Q is positive semi-definite and R is positive definite, then the matrices P(i), i = ν, ν-1, ..., 1, 0, are positive semi-definite. Hence, the P(i) are symmetric from the definition of a positive semi-definite matrix.

Proof Since P(ν) = Q, by assumption P(ν) is positive semi-definite. Assume that for k = i+1, P(k) is positive semi-definite; we use induction to prove that P(i) is positive semi-definite. Note that Q is positive semi-definite and R is positive definite. From (5.21), with d_i = n_{i+1} - n_i, we have

    P(i) = [\bar{A}_{d_i} - \bar{B}_{d_i} K(i)]^T P(i+1) [\bar{A}_{d_i} - \bar{B}_{d_i} K(i)] + Q
           + \sum_{j=1}^{d_i-1} [\bar{A}_j - \bar{B}_j K(i)]^T Q [\bar{A}_j - \bar{B}_j K(i)]
           + (n_{i+1} - n_i) K^T(i) R K(i)        (5.23)

Since P(i+1) and Q are positive semi-definite, R is positive definite, and (n_{i+1} - n_i) > 0, it is easy to verify that y^T P(i) y ≥ 0 for all y ∈ R^n. This means that P(i) is positive semi-definite. This inductive procedure proves the lemma.

Lemma 5.2 Given T, the inverse matrix in (5.20) always exists.

Proof Let V = \bar{B}_d^T P(ν-m+1) \bar{B}_d + \sum_{j=1}^{d-1} \bar{B}_j^T Q \bar{B}_j + d R, with d = n_{ν-m+1} - n_{ν-m}. From Lemma 5.1, P(ν-m+1) is positive semi-definite. Therefore y^T V y > 0 for all y ≠ 0, because Q is positive semi-definite, R is positive definite, and d > 0. This implies that V is positive definite. Hence the inverse matrix exists.

Theorem 5.1 Given T, the gains K(i) (i = 0, 1, 2, ..., ν-1) obtained from the above procedure are the optimal feedback gains which minimize the cost function J_M (and J'_M) on [0, M].

Proof Note that, given T, J_M is a convex function of the u(n_i), i = 0, 1, ..., ν-1. Thus the above feedback control law is optimal.

Now suppose that T_1 and T_2 denote two sets of control changing time instants.

Lemma 5.3 If T_1 ⊆ T_2, then J_{M,1}^o ≥ J_{M,2}^o, where J_{M,1}^o and J_{M,2}^o are the optimal costs of controls which change controls at the time instants in T_1 and T_2, respectively.

Proof Suppose that J_{M,1}^o < J_{M,2}^o. Then, in controlling the system with T_2, if we do not change controls at the time instants in T_2 - T_1 and change controls at the time instants in T_1 to the same control inputs that were exercised to get J_{M,1}^o with T_1, we obtain a cost Ĵ_{M,2} which is equal to J_{M,1}^o. This contradicts the fact that J_{M,2}^o is the minimum cost obtainable with T_2, since we have found Ĵ_{M,2}, which is equal to J_{M,1}^o and therefore less than J_{M,2}^o. Hence, J_{M,1}^o ≥ J_{M,2}^o.

This lemma implies that if we do not take the computation cost αβ into consideration, then the more control exercising points there are, the better the controller is (the less the cost). With the computation cost included in the cost function, this statement is no longer true. Therefore we need to search for an optimal T which minimizes the cost function J'_M. The following sections provide a detailed discussion of the search for such an optimal solution. Note that if we let T = D_M, then the optimal temporal control law is the same as the traditional linear feedback optimal control law.

5.3.2 Dynamic Temporal Control

In this section, we design a dynamic temporal controller by introducing a heuristic for the sampling period decision function δ(ibΔ). The heuristic estimates how much performance gain (reduction of the J_M term in J'_M) and how much performance loss (increase of the αβ term) will be incurred if a rapid sampling period is used in the next frame. If the estimated performance gain is greater than or equal to a given threshold θ, then δ(i) = aΔ; otherwise δ(i) = bΔ.

By making use of the results developed in the previous section, we can obtain an optimal control law for T_i^1 = {ibΔ, (i+1)bΔ, ..., (γ-1)bΔ} on the time interval [ibΔ, γbΔ], where 0 ≤ i ≤ γ-1. Let K_1(i) and P_1(i) denote the two matrices found from T_i^1 by applying the algorithm given in the previous section. Consider another control changing time instant set T_i^2 = {ibΔ, (ib+a)Δ, ..., (ib+b-a)Δ, (i+1)bΔ, ..., (γ-1)bΔ}, where 0 ≤ i ≤ γ-1, and let K_2(i) and P_2(i) denote the two matrices found from T_i^2 by applying the same algorithm. Also, let K_2(i, j), 0 ≤ j ≤ b/a - 1, denote the gain matrix obtained for the time instant (ib+ja)Δ. The two control changing time sets T_i^1 and T_i^2 are depicted in Figure 5.2.


Figure 5.2: Two control changing time sets T_i^1 and T_i^2.

From Lemma 5.3 we know that x^T(ibΔ) P_2(i) x(ibΔ) is less than or equal to x^T(ibΔ) P_1(i) x(ibΔ). Furthermore, x^T(ibΔ) P_a(i) x(ibΔ) is less than or equal to x^T(ibΔ) P_1(i) x(ibΔ), where P_a(i) is a matrix found from any arbitrary control changing time instant set on [ibΔ, γbΔ] conforming to the assumptions given in the problem formulation section, i.e., with the same sampling period enforced within each frame. In addition, the cost x^T(ibΔ) P_b(i) x(ibΔ) is less than or equal to x^T(ibΔ) P_2(i) x(ibΔ), where P_b(i) is a matrix found from any arbitrary conforming control changing time instant set on [ibΔ, γbΔ] that contains the time instants ibΔ, (ib+a)Δ, ..., (ib+b-a)Δ, i.e., in which a rapid sampling period is used in the first frame [ibΔ, (i+1)bΔ]. From these facts, it can be said that the cost x^T(ibΔ) P_1(i) x(ibΔ) bounds the costs found from any conforming control changing time instant sets on [ibΔ, γbΔ], and that the cost x^T(ibΔ) P_2(i) x(ibΔ) bounds the costs found from any conforming sets that enforce a rapid sampling period in the first frame [ibΔ, (i+1)bΔ).

In our solution approach, these costs are used at time ibΔ to estimate the performance gain of using a rapid sampling period in the next frame [ibΔ, (i+1)bΔ]. This is a heuristic approach, and its effectiveness is validated through an example in a later section. We present a heuristic dynamic temporal control law which performs the following steps at each frame start time (a code sketch follows this list):

1. Compute the current state x(ibΔ).
2. If x^T(ibΔ)(P_1(i) - P_2(i))x(ibΔ) < θ, let δ(i) = bΔ; otherwise, let δ(i) = aΔ.
   (a) If δ(i) = aΔ: at each time instant t_j = ibΔ + jaΔ, 0 ≤ j ≤ b/a - 1, apply u(t_j) = - K_2(i, j) x(t_j).
   (b) If δ(i) = bΔ: apply u(ibΔ) = - K_1(i) x(ibΔ).
3. Repeat the process at (i+1)bΔ.
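A minimal sketch of this run-time rule follows, assuming the matrices P_1(i), K_1(i), P_2(i), and K_2(i, j) have been precomputed and stored (e.g., with the backward_gains sketch of Section 5.3.1); all names are ours.

    def frame_decision(i, x, P1, P2, K1, K2, theta):
        """At the start of frame i, pick the sampling period and the gains.

        P1[i], K1[i]: cost/gain matrices for T_i^1 (base period in frame i);
        P2[i], K2[i][j]: matrices for T_i^2 (rapid period in frame i).
        Returns ('base', [K1[i]]) or ('rapid', K2[i]): one gain per control
        computation inside the coming frame."""
        gain = x @ (P1[i] - P2[i]) @ x      # estimated reduction of J_M
        if gain < theta:                    # not worth the computation cost
            return 'base', [K1[i]]
        return 'rapid', list(K2[i])         # b // a gains, one every a*Delta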
The following theorem proves that dynamic temporal control using the above control law guarantees the cost term J_M of J'_M to be less than or equal to x^T(0) P_1(0) x(0), which is the cost for T_0^1, i.e., with only a base sampling period enforced on the entire interval [0, T_f].

Theorem 5.2 If the above dynamic temporal control law is used, the cost J_M of J'_M is less than or equal to x^T(0) P_1(0) x(0), where P_1(0) is obtained from T_0^1.

Proof Suppose that C^d(x_0) denotes the set of time instants at which new controls are exercised according to the above dynamic temporal control law for a given initial state x_0. Let I^d(x_0) ⊆ {i | 1 ≤ i ≤ γ} denote the set of frame indices at which a rapid sampling period is used. Also, let i_1 ∈ I^d(x_0) denote the smallest index in I^d(x_0), i_2 ∈ I^d(x_0) the second smallest, and so on. Consider two control changing time sets, T_0^1 and T_0^′, where in T_0^′ only the i_1-th frame uses a rapid sampling period. Also, suppose that for these two control changing time sets, K_1(l) is used if the l-th frame uses a base sampling period, and K_2(l, j) is used if the l-th frame uses a rapid sampling period. Under these assumptions, it is clear that the control cost (without computation cost) for T_0^1 is greater than or equal to that for T_0^′ when the same initial state x_0 is used. Next, consider the two control changing time sets T_0^′ and T_0^″, where in T_0^″ the i_1-th and i_2-th frames use a rapid sampling period. Under the same assumptions, it is clear that the control cost (without computation cost) for T_0^′ is greater than or equal to that for T_0^″ when the same initial state x_0 is used. If we apply this process transitively, we can conclude that, for the same initial state x_0, the control cost (without computation cost) for T_0^1 is greater than or equal to that obtained by applying the dynamic temporal control law. This proves the theorem.

5.4 Implementation

To implement dynamic temporal control, we need to calculate and store the K_1(i) and K_2(i, j) matrices, and use them when controlling the system. The number of matrices that need to be stored is O(γ + (b/a)γ), which is O((b/a)γ). Note that in traditional optimal linear control a similar matrix is obtained and used at every time instant in D_M to generate the control input value. In dynamic temporal control, there is a CPU time overhead for calculating x^T(ibΔ)(P_1(i) - P_2(i))x(ibΔ) at the start of each frame. This calculation can be done in O(n²) time, and it has to be done only once per frame; more discussion of this overhead is presented in the discussion section. In order to implement temporal control we require an operating system that supports scheduling control computations at specific time instants and allows dynamic selection of sampling periods. The Maruti system developed at the University of Maryland is a suitable host for the implementation of dynamic temporal control [43, 35, 34]. In Maruti, all executions are scheduled in time, and the time of execution can be modified dynamically if so desired. This is in contrast with the traditional cyclic executives often used in real-time systems, which have a fixed, cyclic operation and are well suited only for sampled data control systems operating in a static environment. It is the availability of a system such as Maruti that allows us to consider the notion of dynamic temporal control, in which time becomes an emergent property of the system.

5.5 Example
To illustrate the advantages of the dynamic temporal control scheme, let us consider a simple example: the rigid body satellite control problem [51]. The system state equations are as follows:


    x(k+1) = [ 1  1 ; 0  1 ] x(k) + [ 0.00125 ; 0.05 ] u(k)
    y(k)   = [ 1  1 ] x(k)

where k represents the time index and one unit of time is the discretized subinterval of length Δ = 0.05. The linear quadratic performance index J'_M in (5.4) is used here with the following parameters:

    Q = [ 1  0 ; 0  1 ],   R = 0.0001,   M = 40,   Δ = 0.05,   a = 1,   b = 4        (5.24)
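For reference, the data of this example can be written down directly. The following snippet only instantiates (5.24) together with the system matrices as reconstructed above; it is a sketch, and the matrices should be checked against the original source [51].

    import numpy as np

    Delta = 0.05                  # subinterval length
    A = np.array([[1.0, 1.0],     # one unit of time = Delta (reconstructed)
                  [0.0, 1.0]])
    B = np.array([[0.00125],      # Delta^2 / 2
                  [0.05]])        # Delta
    Q = np.eye(2)
    R = np.array([[0.0001]])
    M, a, b = 40, 1, 4            # 10 frames of length b = 4
    theta = 0.01                  # threshold of (5.25)

    # Base-period gains for the whole horizon, T_0^1 = {0, 4, 8, ..., 36}:
    # K1, P1 = backward_gains(A, B, Q, R, list(range(0, M, b)) + [M])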

The objective of the control is to drive the satellite to the zero position; the desired goal state is x_f = [0, 0]^T. We applied the dynamic temporal control law over the initial state space {(x_1, x_2) | 0.2 ≤ x_1, x_2 ≤ 0.8} with the following parameter:

    θ = 0.01        (5.25)

The performance of the dynamic temporal controller is compared to that of a traditional optimal controller with sampling period 0.05. In Figure 5.3, the cost differences between the dynamic temporal controller and the traditional optimal controller are depicted for each initial state (x_1, x_2); note that the maximum cost difference is less than 0.03. In Figure 5.4, the number of control computations performed by the dynamic temporal controller is shown for each initial state; note that the maximum number of control computations is less than 20, and for many initial states it is less than 18. To estimate how much cost reduction is achieved through dynamic temporal control, we compare its performance with that of a traditional optimal controller with a 0.1 sampling period, i.e., with sampling done at 20 regularly spaced time instants. In Figure 5.5, the cost differences between the optimal controller with 0.05 sampling period and the optimal controller with 0.1 sampling period are depicted for each initial state (x_1, x_2); note that the maximum cost difference is almost 0.5. The cost differences shown in Figure 5.3 and Figure 5.5 are compared in Figure 5.6. Note that for almost all initial states the dynamic temporal controller outperforms the traditional optimal controller with sampling period 0.1, even though the number of control computations done by the dynamic temporal controller is smaller. If we normalize the costs from the dynamic temporal controller and from the traditional controller with sampling period 0.1, by dividing by the cost from the traditional controller with sampling period 0.05, we obtain the two graphs shown in Figure 5.7: one for the normalized costs from the dynamic temporal controller and the other for the normalized costs from the traditional controller with a sampling period of 0.1.

Figure 5.3: Cost differences between the dynamic temporal controller and the traditional controller with 0.05 sampling period. The maximum cost difference is less than 0.03.



Figure 5.4: Number of control computations performed by the dynamic temporal controller for each initial state. The maximum number of control computations is less than 20, and for many initial states it is less than 18.



Figure 5.5: Cost differences between the optimal controller with 0.05 sampling period and the optimal controller with 0.1 sampling period for each initial state. The maximum cost difference is almost 0.5.



Figure 5.6: Cost differences shown in Figure 5.3 and Figure 5.5, compared together. For almost all initial states the dynamic temporal controller outperforms the traditional controller with sampling period 0.1, even though the number of control computations done by the dynamic temporal controller is smaller.



Figure 5.7: Normalized costs from the dynamic temporal controller and from the traditional controller with sampling period 0.1. Costs are normalized by dividing by the cost from the traditional controller with sampling period 0.05.



Figure 5.8: Normalized costs from two controllers with adjusted threshold values: one from the dynamic temporal controller and the other from the traditional controller with sampling period 0.1.

Note that for some initial states the traditional optimal controller outperforms the dynamic temporal controller. However, this comes from using a uniform threshold value over the entire initial state space. As a result of using one threshold value, the number of control computations over the initial state space is non-uniform, as can be seen in Figure 5.4. By adjusting the threshold values for some initial states, we can obtain more uniform graphs. This can be seen in Figure 5.8, which is obtained after using different (smaller) threshold values for the initial states that resulted in higher normalized costs in Figure 5.7. The differences between the normalized costs shown in Figure 5.8 are small, less than 0.01.

However, the advantage of the dynamic temporal scheme is seen more clearly in the following experiment. Usually, in concurrent real-time systems, the actual control update time instants of a periodic control task vary over consecutive periods. This results from variations in task execution times and from resource contention between different tasks. The delay of a control update from its ideal updating time instant is called computational delay, and it has an adverse effect on a control algorithm's performance. Figure 5.9 shows the differences of worst-case normalized costs between a dynamic temporal controller with θ = 0.01 and a traditional controller with a sampling period of 0.1.

Figure 5.9: Differences of worst-case normalized costs between a dynamic temporal controller with θ = 0.01 and a traditional controller with a sampling period 0.1. The computational delays are randomly generated with a normal distribution. For each initial state, the control trajectories are found 100 times, and the maximum cost among them is recorded.

The computational delays are randomly generated with a normal distribution in [0, Δ], and they are injected into the system trajectory. For each initial state, the control trajectories are found 100 times, and the maximum cost among them is recorded. The graph shows that using a dynamic temporal controller reduces the worst-case normalized costs. Finally, Figure 5.10 shows the differences of average normalized costs between a dynamic temporal controller with θ = 0.01 and a traditional controller with a sampling period of 0.1. Again, the computational delays are randomly generated with a normal distribution; for each initial state, the control trajectories are found 100 times, and the average cost is recorded. This figure shows that the differences are not as large as in Figure 5.9, since they come from average costs.



Figure 5.10: Differences of average normalized costs between a dynamic temporal controller with θ = 0.01 and a traditional controller with a sampling period 0.1. The computational delays are randomly generated with a normal distribution. For each initial state, the control trajectories are found 100 times, and the average cost is recorded.


5.6 Discussion
In the previous section, we showed through an example that the number of control computations can be dramatically reduced by using a dynamic temporal control law without sacrificing the quality of control. Employing the dynamic temporal control methodology in concurrent real-time embedded systems will have a significant impact on the way computational resources are utilized by control tasks. A minimal amount of control computation can be found for a given regulator with which we can achieve almost the same control performance as that of a traditional controller with an equal sampling period. This significantly reduces the CPU time of each controlling task and thus increases the number of real-time control functions which can be accommodated concurrently in one embedded system. In particular, in a hierarchical control system, if dynamic temporal controllers are employed for the lower-level controllers, the higher-level controllers will have a great degree of flexibility in managing resource usage by adjusting the computational requirements of each lower-level controller. For example, in emergency situations the higher-level controller may force the lower-level controllers to run as infrequently as they possibly can (thus freeing computational resources for handling the emergency). In contrast, during normal operations the temporal control tasks may run as necessary, and the additional computation time can be used for higher-level functions such as monitoring, planning, etc. As mentioned in Section 5.4, there is a CPU overhead associated with the dynamic temporal controller: at the start of each frame the sampling period decision has to be made, which requires O(n²) execution time. However, this computation is required only once every frame, and we gain benefits by reducing the number of context switches in concurrent real-time systems. More work needs to be done on the effects of computational delays and variations on control system performance when dynamic temporal controls are used.

5.7 Summary
In this chapter, we proposed a dynamic temporal control technique based on a new cost function which takes into account computational cost as well as state and input costs. In this scheme, new control input values are defined at time instants which are not necessarily regularly spaced. For the linear control problem, we showed that almost the same quality of control can be achieved while using far less computation than a traditional controller. The proposed formulation of dynamic temporal control is likely to have a significant impact on the way concurrent embedded real-time systems are designed. In a hierarchical control environment, this approach is likely to result in designs which are significantly more efficient and flexible than traditional control schemes. As it uses fewer computational resources, the lower-level temporal controllers make resources available to the higher-level controllers without compromising the quality of control.


Chapter 6

Scheduling Aperiodic and Sporadic Tasks
6.1 Introduction
In this chapter we develop an approach to the problem of incremental scheduling of dynamic tasks in a hard real-time system. Traditionally, the scheduling problem considered for real-time systems is that of generating a schedule for n tasks. In practice, however, a system may have to accept additional tasks during its operation. Here, we study the problem of incremental scheduling in a dynamic time-based environment. We assume that we are given a set of n tasks, T (and all their task instances), along with a schedule for their execution, and we consider adding a task to the schedule. To add a new task, we first have to analyze its acceptability. If the task cannot be scheduled without violating the constraints of any of the tasks in T, it is not accepted. If it can be scheduled, we not only accept the task but also add it to the schedule. In Section 6.2, the incremental scheduling problem is formally defined within a time-based scheduling scheme. The results on incremental scheduling of aperiodic and sporadic tasks are presented in Section 6.3. Finally, a summary follows in Section 6.4.

6.2 Problem Description
The main problem addressed in this chapter is how to incrementally accept and schedule tasks without sacrificing the schedulability of the tasks already accepted. A task in a real-time system may invoke its corresponding task instances by informing the system of the release time, deadline, and execution time of each task instance. Tasks in real-time systems may be classified into single instance tasks and multiple instance tasks. A single instance task, also called an aperiodic task, invokes its task instance only once, while a multiple instance task invokes its instances repeatedly. Multiple instance tasks are further divided into periodic tasks and sporadic tasks. A periodic task invokes its instances at regular time intervals (its period), whereas a sporadic task invokes its instances at arbitrary time instants with a defined minimum inter-arrival time between two consecutive invocations. Any arriving task belongs to one of these classes. A periodic task P is characterized by the invocation of a sequence of task instances. The following characteristics are assumed to be known at the arrival time, A^p, of the periodic task P:

- task invocation time I^p, from which the task starts to invoke its instances;
- task termination time X^p, at which the task is terminated;
- period p (the invocation time of the j-th task instance is defined to be I_j^p = I^p + (j-1)p);
- relative deadline d^p, which implies that the absolute deadline of the j-th task instance is I_j^p + d^p;
- worst case execution time c^p.

A hard aperiodic task A invokes its task instance only once. A has the following set of parameters:

- arrival time of the request, A^a;
- ready time R^a, from which the task instance can start its execution;
- relative deadline d^a, which implies that the absolute deadline is D^a = R^a + d^a;
- worst case execution time c^a.

A sporadic task S is characterized by an invocation of its task instances with a minimum inter-arrival time. The following characteristics are assumed to be known at the arrival time, A^s, of the sporadic task S:

- task invocation time I^s, from which the task instances can be invoked;
- task termination time X^s, at which the task is terminated;
- minimum inter-arrival time μ (the invocation time of the j-th task instance, I_j^s, can be any time instant satisfying I_j^s ≥ I_{j-1}^s + μ);
- relative deadline d^s (≤ μ), which implies that the absolute deadline of the j-th task instance is I_j^s + d^s;
- worst case execution time c^s.

In addition to these, the system may be called upon to handle non-realtime tasks which have no deadlines; instead, they require as fast a completion time as possible (best effort). For a set of task instances to be scheduled, a traditional time-based scheduling scheme first finds a complete schedule for them in a given scheduling window. This schedule contains a static start time, s_i, for each task instance, which is decided based on the worst case execution time c_i and reflects all task dependencies. However, to enhance the scheduler with the ability to schedule dynamically arriving tasks, it may change s_i at run-time, while conforming to all constraints, such as release time r_i, deadline d_i, precedence relations, relative constraints, etc. Clearly, this additional information has to be kept with the schedule for each task instance. If a new task arrives, it needs to be decided, based on the current schedule, whether the new task can be accepted by the system; and if it can be accepted, a new schedule has to be constructed to incorporate it.

In a hard real-time environment, tasks may be executed in a preemptive or non-preemptive manner. When a task is executed non-preemptively, it begins execution at time s_i and is assured CPU access for the time c_i without any interruption or preemption. In preemptive execution, the task execution may be preempted at some defined time instant and resumed at a later time instant. Note that the task preemption and resumption times may be decided dynamically. We extend the static time-based scheduling scheme into a dynamic time-based scheduling scheme that enables any dynamically arriving aperiodic, periodic, or sporadic task to be incrementally scheduled. In a traditional static time-based scheduling scheme, every resource requirement is met by assigning explicit start times to the task instances. In this dynamic time-based scheduling scheme, the start times no longer have to be statically determined. Instead, the schedule includes a mechanism for determining the time when a task instance will be started or resumed, based on the information available prior to its start time.

6.3 Dynamic Time-based Scheduling Schemes
Two variations of dynamic time-based scheduling scheme are presented here. In Section 6.3.1, a mechanism is presented to incrementally schedule aperiodic tasks over a schedule for static tasks found at pre-runtime. In Section 6.3.2, a mechanism is presented to incrementally schedule sporadic(periodic) tasks. In both sections, it is assumed that a valid schedule of static tasks is initially given with start times of the task instances. We develop acceptance tests for dynamically arriving aperiodic(or sporadic) tasks under the assumption that the total ordering among the static tasks is maintained, and EDF scheduling policy is assumed to be used for resolving the CPU contentions between static and dynamic tasks. Between static tasks, the time-based scheduling scheme is used in a sense that a total ordering among them is maintained at run-time, and between static(dynamic) and dynamic tasks, EDF scheduling algorithm is used. In this section, a mechanism is presented to schedule arriving aperiodic tasks. The key idea of this mechanism is to make use of the fact that the task executions may be dynamically shifted to the left or to the right in a time line as long as the timing constraints of the tasks can be satis ed. All task instances in this section are assumed to be preemptable.

6.3.1 Aperiodic Task Scheduling

Task Model
We assume that an initial schedule of task instances is given in a scheduling window [0, L], and that this schedule is used by the dispatcher at run-time. Let Γ = {τ_1, τ_2, ..., τ_N} be the set of task instances in the initial schedule. It is assumed that τ_i is scheduled before τ_{i+1} in the schedule. Each task instance τ_i has the following parameters in the schedule:

- release time R_i;
- absolute deadline D_i (D_i ≤ L for all 1 ≤ i ≤ N);
- worst case execution time C_i;

- runtime variable e_i, denoting the processing time already spent on τ_i up to the current time instant;
- runtime variable ω_i, denoting the latest start (or resume) time of τ_i, which is a function of the current time t and the value of e_i;
- earliest start time est(i);
- latest start time lst(i).

A hard aperiodic task A is defined the same way as in Section 6.2, except that the ready time is assumed to be equal to its arrival time, i.e., A^a = R^a. Also, the task instances in Γ are assumed to be preemptable by an aperiodic task, and any aperiodic task is assumed to be preemptable by a task instance in Γ. The values of est(i) and lst(i), i = 1, 2, ..., N, are found as follows:

    est(1) = R_1
    est(i) = max(R_i, est(i-1) + C_{i-1})   for i = 2, 3, ..., N
    lst(N) = D_N - C_N
    lst(i) = min(D_i, lst(i+1)) - C_i       for i = N-1, N-2, ..., 1

If D_i ≤ lst(i+1), then the value of lst(i) is decided from D_i; and if D_i > lst(i+1), then lst(i) is decided from lst(i+1). Figure 6.1 shows an example of these relationships.

Figure 6.1: Deriving ω_i(0) recursively

Also, Figure 6.2 shows an example set of task instances with their est(i) and lst(i).
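The two recursions above translate directly into code; the following sketch (names ours) computes est and lst for the totally ordered schedule.

    def est_lst(R, D, C):
        """Earliest/latest start times for task instances tau_1..tau_N, given
        release times R, deadlines D, and worst-case execution times C
        (0-indexed lists, in schedule order)."""
        N = len(C)
        est = [0.0] * N
        est[0] = R[0]
        for i in range(1, N):
            est[i] = max(R[i], est[i - 1] + C[i - 1])   # predecessor runs first
        lst = [0.0] * N
        lst[N - 1] = D[N - 1] - C[N - 1]
        for i in range(N - 2, -1, -1):
            lst[i] = min(D[i], lst[i + 1]) - C[i]       # own deadline or successor's lst
        return est, lst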
Figure 6.2: est(i) and lst(i) for an example task set

Note that the run-time variable e_i is initialized to 0, and ω_i to lst(i).

Γ and a set of arriving aperiodic tasks A_1, ..., A_i are said to be feasible if and only if there exists a schedule which satisfies all the timing constraints on Γ and the aperiodic tasks. The optimality of a scheduling algorithm is defined as:

Definition 6.1 (Optimality) A scheduling algorithm is optimal if and only if the following is satisfied: it can schedule Γ and the arriving aperiodic tasks whenever there exists a feasible schedule.

Scheduling of Non-realtime Tasks
We can efficiently schedule any non-realtime tasks, in the sense that the maximum processor time can be found and used to service non-realtime tasks at any time instant, by delaying the executions of task instances as much as possible. The non-realtime tasks are assumed to be processed using a FIFO (First In First Out) scheduling policy. At a current time instant t_1, let τ_j denote the task instance in Γ which is just finished or partially executed. Also, let t_0 denote the last time instant at which the dispatcher took control before t_1, and let t_2 denote the run-time variable denoting the future time instant at which the dispatcher will take control. The dispatcher takes control whenever a non-realtime task or a task instance in Γ is finished, or whenever t_1 = t_2 holds. Then, at a current time instant t_1 when the dispatcher takes control:

- If τ_j was executed in [t_0, t_1], then let e_j = e_j + (t_1 - t_0) and ω_j = ω_j + (t_1 - t_0).
- If τ_j is finished, then let j = j + 1 and t_2 = ω_j.
- If t_1 < ω_j, then:
  - if there exists a non-realtime task pending, give the processor to the first non-realtime task in the queue;
  - else, if R_j ≤ t_1, give the processor to τ_j;
  - else, let the processor be idle.
- Else, give the processor to τ_j.

If no non-realtime task is pending, the next (or partially executed) task τ_j is executed if possible, i.e., if its release time has been reached. Whenever there exists a non-realtime task waiting in the queue and the latest start (or resume) time ω_j of τ_j has not been reached, the non-realtime task is executed (after preempting τ_j if it has already started) until it finishes or ω_j is reached. If it continues its execution until ω_j, the non-realtime task is preempted and τ_j resumes or starts its execution. In other words, the non-realtime tasks have higher priority until the latest start (or resume) time of τ_j is reached. An example case is shown in Figure 6.3.
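The dispatcher's choice at a decision point can be sketched compactly as below; the encoding is ours, and the bookkeeping of e_j, ω_j, j, and t_2 described in the first two rules is assumed to have been done on entry.

    def dispatch_decision(t1, omega_j, R_j, nonrt_pending):
        """What to run at dispatch time t1, for the next or partially
        executed static instance tau_j (a sketch of the rules above)."""
        if t1 < omega_j:                    # tau_j's latest start/resume not reached
            if nonrt_pending:
                return "non-realtime"       # non-realtime work has priority here
            return "tau_j" if R_j <= t1 else "idle"
        return "tau_j"                      # omega_j reached: tau_j must run now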
Figure 6.3: Joint scheduling of a non-realtime task and Γ

Acceptance Test for A Hard Aperiodic Task
In some real-time systems, there may exist aperiodic tasks that arrive in the system at arbitrary time instants. At their arrival times, tests should be performed to decide whether they can be accepted by the system. Once an aperiodic task is accepted and started, it must be completed before its hard deadline. If it is rejected, then a higher-level entity in the application may decide the steps that follow the rejection message; for example, the higher-level task may decide to re-invoke the aperiodic task until it is finally accepted. In this section, an acceptance test is developed that is performed at the arrival times of hard aperiodic tasks. It is assumed that context switch overheads are small, and they are not taken into account in our work. The relative deadline of an aperiodic task A is assumed to be less than or equal to the scheduling window size L. The approach taken in this section treats arriving aperiodic task instances in FIFO order; this assumption will be removed in the next section. The acceptance test algorithm follows. Assume that τ_i is the next or partially executed task when the hard aperiodic task A arrives at time R^a. At the arrival time R^a of the aperiodic task A:

    TotalCapacity = ω_i - R^a
    k = i + 1
    while TotalCapacity < c^a and lst(k) ≤ R^a + d^a:
        TotalCapacity = TotalCapacity + lst(k) - lst(k-1) - C_{k-1}
        if TotalCapacity ≥ c^a: return Success
        k = k + 1
    TotalCapacity = TotalCapacity + max(0, R^a + d^a - lst(k-1) - C_{k-1})
    if TotalCapacity ≥ c^a: return Success
    else: return Fail

At the arrival time R^a of an aperiodic task, the acceptance test can be done in O(M) time within this framework, where M denotes the total number of task instances τ_j (i ≤ j) which satisfy R^a ≤ lst(j) ≤ R^a + d^a. In this case, the total amount of available processor time for A in [R^a, R^a + d^a] is given by the following formula:
    δ(R^a, R^a + d^a) = ω_i - R^a + \sum_{k=i}^{j'-1} (lst(k+1) - lst(k) - C_k) + max(0, R^a + d^a - lst(j') - C_{j'})        (6.1)

where j' (i ≤ j') is the last index satisfying lst(j') ≤ R^a + d^a. An example case is depicted in Figure 6.4, where j' = 5.
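The acceptance test and formula (6.1) can be sketched as follows (names ours); the loop accumulates the slack between consecutive latest start times until either c^a is covered or the window [R^a, R^a + d^a] is exhausted.

    def accept_aperiodic(Ra, da, ca, i, omega_i, lst, C):
        """Acceptance test for a hard aperiodic task A = (Ra, da, ca).

        i: index (0-based) of the next or partially executed instance tau_i;
        omega_i: its current latest start/resume time; lst, C: latest start
        times and worst-case execution times of the static schedule."""
        total = omega_i - Ra                       # slack before tau_i must run
        k = i + 1
        while k < len(C) and total < ca and lst[k] <= Ra + da:
            total += lst[k] - lst[k - 1] - C[k - 1]
            if total >= ca:
                return True
            k += 1
        total += max(0.0, Ra + da - lst[k - 1] - C[k - 1])
        return total >= ca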

Figure 6.4: Obtaining maximum slack within a scheduling window of a hard aperiodic task A.

Acceptance Test for A Set of Hard Aperiodic Tasks
In this section, we address the problem of scheduling aperiodic tasks when several such tasks may arrive at arbitrary time instants. In this generalized scheduling model, we need to decide which scheduling policy is to be used for resolving the resource conflicts between the task instances in Γ and the aperiodic tasks, as well as the conflicts among the aperiodic tasks. For example, we could assign higher priorities to aperiodic tasks than to the task instances in Γ as long as the latest start times of the latter are not reached, and use an earliest deadline first scheduling algorithm among the aperiodic tasks. However, this algorithm is not optimal, as can be seen from Figure 6.5. The figure shows an example task set which is not schedulable according to this approach, although there exists a feasible schedule for it, as shown at the bottom of the figure. In the following subsections, we develop an optimal scheduling algorithm.

Figure 6.5: Example Schedules

Deriving Virtual Deadlines and Virtual Release Times
As a first step, we derive a virtual deadline and a virtual release time for each task instance τ_i in Γ. This process is necessary to enforce the total order on Γ when we employ the EDF scheduling policy to resolve resource conflicts in a unified manner for all task instances. The virtual deadline of τ_i is defined by the following recursive equation, where D_i^o is the original deadline of τ_i:

    D_N = D_N^o
    D_i = min(D_{i+1} - C_{i+1}, D_i^o)   for i = N-1, N-2, ..., 1

If a virtual deadline is missed by some task τ_i, then either the deadline of that task itself is missed or at least one of the following tasks misses its deadline. It is clear that the virtual deadline is always less than or equal to the original one, and that the virtual deadline D_i is always less than D_{i+1} by a difference of at least C_{i+1}, i.e., D_i ≤ D_{i+1} - C_{i+1}. Also, the virtual release time of τ_i is defined by the following recursive equation, where R_i^o is the original release time of τ_i. The virtual release time is necessary to impose a total order on Γ when an EDF scheduling algorithm is used to schedule the tasks; Figure 6.6 illustrates the virtual release times and deadlines of the example tasks.

    R_1 = R_1^o
    R_i = max(R_{i-1}, R_i^o)   for i = 2, 3, ..., N
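Both recursions are directly computable; a sketch (names ours) is given below.

    def virtual_times(Ro, Do, C):
        """Virtual release times and deadlines enforcing the total order of
        Gamma. Ro, Do: original release times/deadlines; C: worst-case
        execution times (0-indexed, in schedule order)."""
        N = len(C)
        D = Do[:]
        for i in range(N - 2, -1, -1):
            D[i] = min(D[i + 1] - C[i + 1], Do[i])   # leave room for successors
        R = Ro[:]
        for i in range(1, N):
            R[i] = max(R[i - 1], Ro[i])              # releases non-decreasing
        return R, D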
Figure 6.6: Deriving virtual deadlines and release times
This reduction of the scheduling window of each task to [R_i, D_i] from [R_i^o, D_i^o] by the introduction of the virtual deadlines is the result of imposing a total order on Γ. The following proposition establishes the equivalence, in terms of schedulability, between the original task set and the transformed task set with virtual deadlines and release times, when EDF is used to schedule Γ and an additional set of aperiodic tasks. Here, it is assumed that the total order of the task instances in Γ must be kept.

Proposition 6.1 Γ and a set of additional aperiodic tasks are schedulable by EDF if and only if Γ with virtual deadlines and release times is schedulable with the additional aperiodic tasks by EDF.

Proof The proof can be derived from the theorem in [15].

Optimal Scheduling Algorithm
In this section, the optimal scheduling algorithm is presented and its optimality is proved. We assume that the task instances in Γ have virtual deadlines and virtual release times instead of the original ones. The optimal scheduling algorithm assigns a higher priority to a task instance with a closer deadline, in a unified manner. At any time instant t, let A^{old}(t) = {A_1^{old}, A_2^{old}, ..., A_m^{old}} denote the set of active aperiodic tasks. Here, an active aperiodic task is an aperiodic task that was accepted before t and still needs to be

executed. It is obvious that the deadlines of these aperiodic tasks are greater than t. The tasks in A^{old}(t) are assumed to be sorted in increasing order of deadlines. In addition, A_t^{new} denotes a newly arrived aperiodic task at time t. The first step in testing the acceptability of A_t^{new} is to insert A_t^{new} into A^{old}(t), producing A(t) = {A_1, A_2, ..., A_{m+1}}, in which the tasks are sorted according to their deadlines in increasing order. Also, let e_i^a(t) denote the processor time already spent on A_i up to time t; obviously, e_i^a(t) = 0 if A_i = A_t^{new}. At this point, we derive the following lemmas and theorem, which prove the optimality of the EDF scheduling algorithm proposed above. The following lemma specifies the necessary condition for A(t) to be schedulable. Here, let D_i^a (1 ≤ i ≤ m+1) denote the deadline of the i-th aperiodic task, A_i, in A(t).

Lemma 6.1 Let A(t) denote the set of aperiodic tasks defined above. If there exists a feasible schedule for A(t), then

    ∀ 1 ≤ i ≤ m+1 :: δ(t, D_i^a) ≥ \sum_{j=1}^{i} (c_j^a - e_j^a(t)).        (6.2)

Proof Suppose (6.2) is not satisfied for some 1 ≤ k ≤ m+1; then

    δ(t, D_k^a) < \sum_{j=1}^{k} (c_j^a - e_j^a(t)).

This means that the processor demand in [t, D_k^a] required by A(t) exceeds the maximum processor time in [t, D_k^a] available for A(t). The un-schedulability of A(t) follows.
by A(t) exceeds the maximum processor This means that the processor demand in a] available for A(t). The un-schedulability of A(t) follows. time in t; Dk
Lemma 6.2 Let A(t) denote the set of aperiodic tasks defined above. Then A(t) can be scheduled under the proposed EDF if

    ∀ 1 ≤ i ≤ m+1 :: δ(t, D_i^a) ≥ \sum_{j=1}^{i} (c_j^a - e_j^a(t)).
Proof The proof can be easily derived from Theorems 3.2 and 3.3 in the paper by Chetto et al. [13].

Theorem 6.1 Let A(t) denote the set of aperiodic tasks defined above. Then the proposed EDF scheduling algorithm is optimal, and the schedulability condition is:

    ∀ 1 ≤ i ≤ m+1 :: δ(t, D_i^a) ≥ \sum_{j=1}^{i} (c_j^a - e_j^a(t)).

Proof From Lemma 6.1 and Lemma 6.2, this theorem follows.
Clearly, the condition of the above theorem can be checked within O(M + m) time by utilizing formula (6.1), where M denotes the total number of task instances in Γ whose deadlines are greater than t and less than or equal to D_{m+1}^a, i.e., the task instances in Γ which may be executed within the range [t, D_{m+1}^a]. The first step is to insert the newly arrived aperiodic task into the set of active aperiodic tasks so that the aperiodic tasks are ordered in increasing order of their deadlines. Then, the maximum slack times δ(t, D_i^a) are found from i = 1 to i = m+1, making use of the δ(t, D_{i-1}^a) already found. If multiple aperiodic tasks arrive at t, we have to prioritize these aperiodic tasks to decide which one is to be accepted and scheduled first. In this case, the above acceptance test is repeated for each aperiodic task, from the one with the highest priority to the one with the lowest importance. The total complexity in this case is O(K(M + m)), where K denotes the number of aperiodic tasks arriving at t.
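Putting Theorem 6.1 together with the slack function, the run-time test can be sketched as below; slack(t, D) stands for the δ(t, D) of (6.1), and all names are ours.

    def accept_new_aperiodic(t, active, new_task, slack):
        """EDF acceptance test of Theorem 6.1 (sketch).

        active: list of (deadline, c, e) for active aperiodic tasks;
        new_task: (deadline, c), with e = 0; slack(t, D): maximum processor
        time available to aperiodic tasks in [t, D], i.e., delta(t, D)."""
        tasks = sorted(active + [new_task + (0.0,)])   # merge by deadline
        demand = 0.0
        for deadline, c, e in tasks:
            demand += c - e                # remaining demand up to this deadline
            if slack(t, deadline) < demand:
                return False               # condition of Theorem 6.1 violated
        return True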

6.3.2 Sporadic Task Scheduling

One of the drawbacks of a time-based scheduling scheme is that sporadic task scheduling becomes very difficult. An algorithm to transform a sporadic task into an equivalent pseudo-periodic task has been proposed by Mok [38]. From the definition of sporadic tasks, the events which invoke the sporadic task instances may occur at any time instant, subject to the minimum inter-arrival time μ; and once the task is invoked, it has to be finished within its relative deadline d^s from the invocation time. The first step of the transformation is to decide the relative deadline of the pseudo-periodic task, d^p, which is less than or equal to d^s. Then the period of the pseudo task, prd^p, is found from the equation prd^p = min(d^s - d^p + 1, μ). This is justified by the worst case scenario, which can be seen in Figure 6.7.

Figure 6.7: Worst case for deadline determination

However, this approach may lead to significant under-utilization of processor time, especially when d^s is small compared to μ, since a great amount of processor time has to be reserved statically at pre-runtime for servicing the dynamic requests from sporadic tasks. This is illustrated in Figure 6.8 through a simple example in which an equivalent periodic task is to be found for a sporadic task whose worst case execution time is c^s = 4, whose relative deadline is d^s = 8, and whose minimum inter-arrival time is μ = 8. If we employ Mok's algorithm, the corresponding periodic task has a worst case execution time c^p = c^s = 4, a relative deadline d^p = 4 (≤ d^s), and a period prd^p = min(d^s - d^p + 1, μ) = 5. The processor utilization of this new periodic task is 4/5 = 0.8.
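The transformation and its utilization penalty are easy to reproduce; the sketch below (names ours) follows the construction just described and reproduces the numbers of the example.

    def pseudo_periodic(cs, ds, mu, dp=None):
        """Transform a sporadic task (cs, ds, mu) into a pseudo-periodic task.

        dp defaults to cs (the smallest feasible relative deadline);
        returns (cp, dp, prd) with prd = min(ds - dp + 1, mu)."""
        dp = cs if dp is None else dp
        prd = min(ds - dp + 1, mu)
        return cs, dp, prd

    cp, dp, prd = pseudo_periodic(cs=4, ds=8, mu=8, dp=4)
    print(cp / prd)   # utilization 0.8, versus cs/mu = 0.5 for the sporadic task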

Figure 6.8: Under-utilization of the transformed sporadic task

In our proposed scheduling approach, the incremental scheduling of hard periodic and sporadic tasks may be decomposed into two steps. We assume that the initial schedule of task instances is given in a scheduling window [0, L], as in the previous sections. The release times and deadlines of those task instances are then transformed into virtual ones, as was done in Section 6.3.1. At runtime, every time a new sporadic task arrives, a schedulability check is performed to see whether the already accepted tasks and this new sporadic task can be scheduled using the EDF scheduling algorithm. Also at runtime, the hard task instances from the schedule and the sporadic tasks are scheduled according to EDF. This can be viewed as merging two task instance streams, one from hard tasks and the other from sporadic tasks.

Extended Task Model
As in Section 6.3.1, an initial schedule of task instances is assumed to be given in a scheduling window [0, L] and denoted as Γ. Let Γ = {τ_1, τ_2, ..., τ_N} be a set of task instances where τ_i appears earlier than τ_{i+1}. Each τ_i has the following set of parameters in the schedule:

- virtual release time R_i;
- virtual deadline D_i (≤ L);
- worst case execution time C_i.

The virtual deadlines and virtual release times are obtained from the original ones as in Section 6.3.1. Let S = {S_1, S_2, ..., S_{m_s}} be a set of sporadic tasks which have to be scheduled with Γ. For each sporadic task S_i, the minimum inter-arrival time μ_i, the maximum execution time c_i^s, and the relative deadline d_i^s (≤ μ_i) are assumed to be given. It is also assumed that the S_i are ordered in increasing order of their relative deadlines, i.e., d_i^s ≤ d_{i+1}^s. The objective of this section is to develop an optimal scheduling algorithm and its schedulability test for Γ and S together. Some additional terms are defined in the following:

- The extended scheduling window for Γ and S is [0, LCM], where LCM is the least common multiple of L and the minimum inter-arrival times of the tasks in S.
- N' denotes the total number of hard task instances scheduled in [0, LCM]; N' = N(LCM/L), where [0, L] is the original scheduling window.
- The extended schedule in an extended scheduling window [0, k·LCM] is found by repeating the schedule k times, and is denoted as Γ^k.

We need to check the schedule in an extended window [0, 2·LCM] to verify the schedulability of Γ and S according to the following scheduling model.

Scheduling Model
The CPU contention among tasks in Γ is resolved naturally from the total order among the tasks. This can be done by using an earliest deadline first scheduling algorithm together with the virtual deadlines introduced earlier, since R_i ≤ R_{i+1} and D_i < D_{i+1}. But mechanisms to resolve the resource contention between tasks from S and those from Γ must also be provided to enable them to be scheduled at run-time. We assume that those contentions are also resolved through the same scheduling algorithm (EDF), leading to a uniform scheduling policy for S and Γ. We denote a subset {τ_a, τ_{a+1}, ..., τ_b} of Γ in [0, LCM] as Γ_i if:

- 1 ≤ a ≤ b ≤ N';
- est(j+1) = est(j) + C_j for j = a, a+1, ..., b-1;
- est(a) > est(a-1) + C_{a-1} if 1 < a;
- est(b+1) > est(b) + C_b if b+1 ≤ N'.

In this case, we divide the set of task instances in [0, LCM] into disjoint subsets Γ_1, Γ_2, ..., Γ_ρ satisfying the above conditions. Let est(Γ_i) denote the earliest start time of the first task instance in Γ_i, and let eft(Γ_i) denote the earliest finish time of Γ_i. Figure 6.9 shows an example case.
Figure 6.9: Γ_i found for an example task set

In addition, we define δ'(t_1, t_2) (0 ≤ t_1 < LCM ∧ t_1 < t_2 < 2·LCM) as the maximum slack time obtainable in [t_1, t_2] under the assumption that, from time 0 up to time instant t_1, task instances only from Γ have been executed with their maximum execution times, i.e., the Γ tasks have started at their earliest start times and spent their worst case execution times. Then δ'(t_1, t_2) can be obtained as follows. The first step is to find the task instance τ_i satisfying

    est(i-1) + C_{i-1} ≤ t_1 ∧ t_1 ≤ est(i) + C_i.

If t_1 ≤ est(1) + C_1, then let i = 1. Then,

    δ'(t_1, t_2) = lst(i) - t_1 + max(0, t_1 - est(i)) + \sum_{k=i}^{j'-1} (lst(k+1) - lst(k) - C_k) + max(0, t_2 - lst(j') - C_{j'})        (6.3)

where j' (≥ i) is the last index satisfying lst(j') ≤ t_2. This process is similar to the one used in the acceptance test for aperiodic tasks in Section 6.3.1. An example case is depicted in Figure 6.10.

Figure 6.10: δ'(t_1, t_2) for an example task set

Schedulability Test

The following proposition specifies the necessary condition for Γ and S to have a feasible schedule.

Proposition 6.2 If there exists a feasible schedule for Γ and S, then

    ∀ i ∈ [1, ρ] :: ∀ t ∈ [est(Γ_i), est(Γ_i) + LCM] :: δ'(est(Γ_i), t) ≥ \sum_{k=1}^{m_s} c_k^s ⌊(t - est(Γ_i) + μ_k - d_k^s) / μ_k⌋        (6.4)

Proof This is proved in the appendix.

The following theorem specifies the sufficient and necessary schedulability condition for the task sets Γ and S. The extended schedule in [0, 2·LCM] is assumed to be given.

Theorem 6.2 Γ and S are schedulable according to EDF if and only if

    ∀ i ∈ [1, ρ] :: ∀ t ∈ [est(Γ_i), est(Γ_i) + LCM] :: δ'(est(Γ_i), t) ≥ \sum_{k=1}^{m_s} c_k^s ⌊(t - est(Γ_i) + μ_k - d_k^s) / μ_k⌋        (6.5)

Proof By Proposition 6.2 and Proposition B.5.

From the above proposition and theorem, we know that EDF is optimal for scheduling Γ and S. Finally, we obtain a condition equivalent to (6.5) of Theorem 6.2, which enables us to reduce the complexity of the schedulability check. This corollary specifies that only the time instants equal to a deadline of some task instance in S need to be examined at or after est(Γ_i) when checking condition (6.5) of Theorem 6.2.

Corollary 6.1 The following two conditions are equivalent to each other:

(1) ∀ i ∈ [1, ρ] :: ∀ t ∈ [est(Γ_i), est(Γ_i) + LCM] :: δ'(est(Γ_i), t) ≥ \sum_{k=1}^{m_s} c_k^s ⌊(t - est(Γ_i) + μ_k - d_k^s) / μ_k⌋

(2) ∀ i ∈ [1, ρ] :: ∀ d_j ∈ [est(Γ_i), est(Γ_i) + LCM] :: δ'(est(Γ_i), d_j) ≥ \sum_{k=1}^{m_s} c_k^s ⌊(d_j - est(Γ_i) + μ_k - d_k^s) / μ_k⌋

where d_j is the deadline of some task instance in S.

Therefore, the total complexity of the schedulability check algorithm is reduced to O(M′), where M′ = ν·(N + Σ_{i=1}^{m_s} (LCM/π_i)) + Σ_{i=1}^{m_s} (LCM/π_i) · log(Σ_{i=1}^{m_s} (LCM/π_i)). The first step is to obtain the deadlines d_j of the task instances from S in the window [0, LCM] and sort them in increasing order. Then, for each est(Γ_i) (1 ≤ i ≤ ν), the second condition of the above corollary is checked in O(N + Σ_{i=1}^{m_s} (LCM/π_i)) time using the deadlines obtained in the first step. This process is similar to the one used in Section 6.3.1.
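As a sketch of how the two steps fit together, the fragment below implements condition (2) of Corollary 6.1 on top of the omega_prime sketch given earlier. Here pi, c and d stand for the minimum inter-arrival times π_k, worst case execution times c_k^s and relative deadlines d_k^s of the sporadic tasks; all of these names are ours rather than the dissertation's.

```python
# Illustrative sketch of the reduced schedulability check of Corollary 6.1.

from math import floor

def sporadic_demand(t, t0, pi, c, d):
    # Right-hand side of (6.5): worst case processor demand of S in [t0, t]
    # when every sporadic task starts releasing at t0 at its maximum rate.
    return sum(c_k * max(0, floor((t - t0 + pi_k - d_k) / pi_k))
               for pi_k, c_k, d_k in zip(pi, c, d))

def schedulable(group_starts, LCM, est, lst, C, pi, c, d):
    # First step: deadline offsets of sporadic instances within one LCM,
    # sorted in increasing order.
    offsets = sorted(k * pi_k + d_k
                     for pi_k, d_k in zip(pi, d)
                     for k in range(LCM // pi_k))
    # Second step: test condition (2) at each deadline, for each est(Gamma_i).
    for t0 in group_starts:
        for off in offsets:
            t = t0 + off
            if omega_prime(t0, t, est, lst, C) < sporadic_demand(t, t0, pi, c, d):
                return False
    return True
```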

6.4 Summary
In this chapter, we addressed the issue of incremental scheduling on the basis of the time-based scheduling scheme. Acceptance tests were developed for dynamically arriving aperiodic tasks and for dynamically arriving sporadic tasks, respectively. A mixed scheduling policy was used such that the total ordering among static tasks is maintained. By making use of this property, we can extend the approach to the case where complex timing constraints, such as the standard relative constraints, exist between static tasks.


Chapter 7

Conclusion
A new dynamic time-based scheduling scheme has been developed in this dissertation, and it has been applied as a solution approach to several problems. In the new scheme, task attributes in the schedule may be represented as functions parameterized with information available at task dispatching time. By doing so, more freedom is available to the task dispatcher, and flexible resource management becomes possible at system operation time.

In Chapters 3 and 4, we addressed the problem of scheduling tasks in the presence of relative timing constraints in addition to release time and deadline constraints. Applying the dynamic time-based scheduling scheme as a solution approach to this problem enables us not only to check the schedulability of a given cyclically constrained job set, but also to flexibly manage slack times at system operation time.

In Chapter 5, we addressed the problem of designing a dynamic temporal controller for linear time-invariant control systems. In the dynamic temporal control technique, the fixed sampling period assumption is relaxed and sampling periods are adaptively decided based on the current physical system state. It is shown that this new technique allows us to greatly reduce the computational resource requirement while maintaining the quality of control. When multiplexing multiple concurrent control tasks, especially when a transient overload has occurred, this new scheme provides a sound basis for increasing system performance by efficiently distributing computational power among tasks. This technique may be implemented by applying the dynamic time-based scheduling scheme, for example, by parameterizing the task execution mode.

Finally, in Chapter 6, the incremental scheduling problem is addressed on the basis of the time-based scheduling scheme. That is, the total ordering among static tasks is maintained during system operation time, while dynamic tasks are executed in the slack times available from static tasks. Only release time and deadline constraints are assumed to exist, and EDF is assumed to be used in resolving resource contention between dynamic and static tasks, as well as among dynamic tasks.

It is shown in this dissertation that the dynamic time-based scheduling scheme may be effectively used as a solution approach to the problems arising in dynamic real-time systems.

7.1 Future Research
In this dissertation, a new dynamic time-based scheduling scheme is presented and its applicability has been shown through examples. In the presence of relative timing constraints, each entry in the dynamic calendar is parameterized with start or finish times of previous task instances. However, this restriction may be removed, and an entry in the dynamic calendar may be an arbitrary function parameterized with any information available to the system. With this generalization, other extensions may be possible, especially in the presence of inter-task dependencies or fault-tolerance requirements. Clearly, such functions lead to highly state-dependent dynamic schedules.

For example, the dynamic time-based scheduling scheme may be applied to cope with the transient overloads that occur in many real-time systems [4]. In fixed priority-based systems, some work has been done on this issue [44]. However, as far as we know, no systematic work has been done on this problem, especially for time-based scheduling schemes. The dynamic time-based scheduling scheme seems to be an appropriate framework for this problem.

In Chapters 3 and 4, it is assumed that the task order remains fixed throughout the system operation time. When a new task is to be added to a schedule, the original order may no longer be the best or most appropriate. In the presence of relative timing constraints, a new task order generated at run-time should be validated such that every timing constraint will be satisfied. This may require O(n²N³) time in the worst case if our algorithm is applied. But, if only a few task instances in the near future are allowed to change their orders, it may be possible to develop an algorithm with less complexity by utilizing that fact.

In Chapters 3 and 4, it is also assumed that a total ordering among tasks is found at pre-runtime by an off-line scheduler. Previous work by Cheng et al. [11] and Mok et al. [37] uses a heuristic approach called smallest latest start time first to schedule task instances with relative constraints. However, their heuristics do not fully reflect the relative timing constraints. Improved heuristic functions may be developed if the constraint graph structure is utilized.

We considered the scheduling of tasks in uni-processor systems where tasks may have relative timing constraints. However, if we want to extend the dynamic dispatching approach to distributed systems, where tasks located on different nodes may have relative constraints, several issues have to be addressed further, such as what kind of information has to be sent to other nodes and how the parametric functions can be found.

In Chapter 5, a new controller design method is presented, while its implementation issues were not addressed. As was mentioned in Chapter 5, the dynamic time-based mechanism may be utilized to implement the scheme by creating a variable for each task instance designating its execution mode, i.e., whether that specific instance will be invoked or not. More work needs to be done on how the parametric functions can be found in this case.

In Chapter 6, the solution approach is found under the assumption that every task is preemptable. An extension of the work needs to be made for non-preemptive tasks.


Appendix A

A.1 Proofs for Chapter 4

Proof of Lemma 4.1: It is obvious that there exists a one-to-one correspondence between an edge pair set in G(f_a) from which a new edge will be created after f_a is eliminated, and a constraint in Sched(e_a) to be changed after eliminating e_a. Also, it is clear that a new constraint created in Sched(s_a) will correspond to a new edge created in G(s_a). Therefore, Elim(G(f_a), f_a) is equal to G(s_a).

Proof of Lemma 4.2: The proof for this lemma is similar to that of Lemma 4.1, and is omitted.

Proof of Proposition 4.2: Let ρ be a negative weight restricted cycle in G(f_N) satisfying: no restricted cycle appears as a proper sub-cycle of ρ. If there exists a negative weight restricted cycle in G(f_N), then such a ρ also exists in G(f_N). Also, let y be the node in ρ that appears first in the sequence < v_0, s_1, f_1, …, s_N, f_N >. Then, ρ can be denoted as

< y →[w_1] v_1 →[w_2] v_2 … v_i →[w_{i+1}] y >

where Σ_{j=1}^{i+1} w_j < 0 (here x →[w] y denotes an edge from x to y with weight w, and x ⇝[w] z a path with weight sum w). By eliminating the nodes that lie after y in the node sequence < v_0, s_1, f_1, …, s_N, f_N >, we will obtain a negative weight edge y →[w′] y where w′ < 0. This is clear from the path preserving property of the node elimination algorithms. Then, from the equivalence relationship between constraint graphs and predicates, a contradiction is obtained during the elimination of the variables from Sched. Therefore, Sched is equal to False.
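The path preserving property invoked here is easiest to see in code. The following is a generic sketch of one elimination step (our own illustration; the dissertation's Algorithm 4.3 tracks restricted paths and parametric bounds on top of this basic operation): removing a node v replaces every two-edge path x → v → y by a direct edge carrying the summed weight, keeping the minimum, so minimum path weights among the remaining nodes are unchanged, and a negative self-loop surfacing during elimination is the graph-side image of Sched collapsing to False.

```python
# Generic node elimination on a weighted constraint graph (illustrative).

def eliminate(graph, v):
    """graph: dict node -> dict of successor -> edge weight."""
    preds = [(x, es[v]) for x, es in graph.items() if x != v and v in es]
    succs = [(y, w) for y, w in graph[v].items() if y != v]
    for x, w_in in preds:
        for y, w_out in succs:
            w = w_in + w_out                    # shortcut for x -> v -> y
            if x == y:
                if w < 0:                       # negative cycle through v
                    raise ValueError("constraints are unsatisfiable")
            elif w < graph[x].get(y, float("inf")):
                graph[x][y] = w                 # keep the tighter edge
    del graph[v]
    for es in graph.values():                   # drop dangling edges into v
        es.pop(v, None)
    return graph
```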
Proof of Lemma 4.3: Claim 1: If y →[w] z ∈ G(f_a) holds where y ≠ z, then there exists an acyclic restricted path y ⇝[w′] z in G(f_N) where w′ ≤ w and all its intermediate nodes belong to {s_{a+1}, f_{a+1}, …, s_N, f_N}. (For the case when y = z, it can be similarly shown that a restricted path without any intermediate restricted cycle, i.e., excluding y and z, is obtained, even though the resulting restricted path is not acyclic.)

If a = N, then the claim holds. Suppose that there exists an edge y →[w] z in G(f_a) where 1 ≤ a ≤ N−1.

Assume that there exists an acyclic restricted path in G(f_b) with a weight sum w, a ≤ b ≤ N−1,

< y →[w_{b,1}] v_1 →[w_{b,2}] v_2 … →[w_{b,i}] v_i →[w_{b,i+1}] z >    (A.1)

where i ≥ 0, and v_j ∈ {s_{a+1}, f_{a+1}, …, s_b, f_b} for 1 ≤ j ≤ i. If all edges constituting this path exist in G(f_{b+1}) with the same weights, then there exists an acyclic restricted path in G(f_{b+1}) with a weight sum w where all its intermediate nodes belong to {s_{a+1}, f_{a+1}, …, s_{b+1}, f_{b+1}}. So, assume that at least one of these edges is created in G(f_b) just after eliminating f_{b+1} and s_{b+1} from G(f_{b+1}). Let J = {j_1, j_2, …, j_k}, where 1 ≤ k ≤ i+1 and 1 ≤ j_l ≤ i+1 for 1 ≤ l ≤ k, denote the index set of edges in the above path which are newly created in G(f_b). The indices in J are assumed to be increasing. Each edge v_{j_l−1} →[w_{b,j_l}] v_{j_l}, for 1 ≤ l ≤ k, is created just after f_{b+1} and s_{b+1} are eliminated from G(f_{b+1}). (For the purpose of convenience, v_0 denotes the node y, and v_{i+1} denotes the node z.)

Fact 1: In G(f_{b+1}) the weight of the edge s_{b+1} → f_{b+1} is equal to l_{b+1}, and the weight of f_{b+1} → s_{b+1} is equal to u_{b+1}. If the fact is not true, then a contradiction should have been derived, which is against the assumption.

From the node elimination algorithm we know that the edge v_{j_l−1} →[w_{b,j_l}] v_{j_l} is created from one of the following restricted paths in G(f_{b+1}) whose weight sum is w_{b,j_l}:

1. < v_{j_l−1} →[w1] s_{b+1} →[w2] v_{j_l} >
2. < v_{j_l−1} →[w1] f_{b+1} →[u_{b+1}] s_{b+1} →[w2] v_{j_l} >
3. < v_{j_l−1} →[w1] s_{b+1} →[l_{b+1}] f_{b+1} →[w2] v_{j_l} >
4. < v_{j_l−1} →[w1] f_{b+1} →[u_{b+1}] s_{b+1} →[l_{b+1}] f_{b+1} →[w2] v_{j_l} >

where w1 and w2 abbreviate w^{b+1}_{1,j_l} and w^{b+1}_{2,j_l}, respectively.

We can extend the path in (A.1) into a path in G(f_{b+1}) by replacing each edge in (A.1) with an index in J by one of the above paths via s_{b+1} and f_{b+1}. If k = 1, i.e., only one edge is created after eliminating f_{b+1} and s_{b+1} from G(f_{b+1}), then it is obvious that the extended path is also a restricted path with weight w in G(f_{b+1}). So, assume that k ≥ 2. In this case, there exists a cycle in the extended path.

First, consider two edges, v_{j_1−1} →[w_{b,j_1}] v_{j_1} and v_{j_2−1} →[w_{b,j_2}] v_{j_2}. For all 16 possible combinations of the above 4 paths from which these two edges may be created, a restricted cycle is obtained after extending these two edges in (A.1). For example, if both of these two edges are created from paths of the form 4, then the extended path will be of the following form:

< y → v_1 → v_2 … v_{j_1−1} → < f_{b+1} → s_{b+1} → f_{b+1} → v_{j_1} … v_{j_2−1} → f_{b+1} > → s_{b+1} → f_{b+1} → v_{j_2} … v_i → z >

The inner path, < f_{b+1} → s_{b+1} → f_{b+1} → v_{j_1} … v_{j_2−1} → f_{b+1} >, is a restricted cycle, since the sub-path < v_{j_1} … v_{j_2−1} > is a restricted path and neither s_{b+1} nor f_{b+1} appears in this sub-path.

Then, from Proposition 4.2 the weight sum of this restricted cycle is non-negative. If it were negative, then False should have been derived during the elimination of the nodes in {f_N, s_N, …, f_{a+1}, s_{a+1}}, which is a contradiction to the assumption. Therefore, if we reduce this restricted cycle into a single node f_{b+1}, then we obtain the following restricted path whose weight sum is less than or equal to w:

< y → v_1 → v_2 … v_{j_1−1} → f_{b+1} → s_{b+1} → f_{b+1} → v_{j_2} … v_i → z >

As a result, the two edges v_{j_1−1} →[w_{b,j_1}] v_{j_1} and v_{j_2−1} →[w_{b,j_2}] v_{j_2} are merged into one sub-path

v_{j_1−1} → f_{b+1} → s_{b+1} → f_{b+1} → v_{j_2}

Similarly, for the other combinations of the two edges v_{j_1−1} → v_{j_1} and v_{j_2−1} → v_{j_2}, similar results can be obtained. If we continue this merging process for the edge v_{j_3−1} →[w_{b,j_3}] v_{j_3} and for the sub-path < v_{j_1−1} → f_{b+1} → s_{b+1} → f_{b+1} → v_{j_2} > found above, we will obtain a merged acyclic sub-path from v_{j_1−1} to v_{j_3} through f_{b+1} or s_{b+1}. Therefore, after k−1 iterations of the above process, we will obtain an acyclic restricted path in G(f_{b+1}) whose intermediate nodes belong to {s_{a+1}, f_{a+1}, …, s_{b+1}, f_{b+1}} and whose weight sum is less than or equal to w.

Therefore, by inductively applying the above argument, we know that there exists an acyclic restricted path in G(f_N) whose intermediate nodes belong to {s_{a+1}, f_{a+1}, …, s_N, f_N} and whose weight sum is less than or equal to w.

Claim 2: If there exists an acyclic restricted path y ⇝[w] z in G(f_N) whose intermediate nodes belong to {s_{a+1}, f_{a+1}, …, s_N, f_N}, then y →[w′] z ∈ G(f_a) holds where w′ ≤ w. (For the case when y = z, it can be similarly shown that the claim holds for a restricted path without any intermediate restricted cycle, i.e., excluding y and z.)

The proof for this claim is similar to that for Proposition 4.2, and is omitted. From Claims 1 and 2 the lemma is proved.

Proof of Corollary 4.1: Suppose that an edge y → z exists in G(v). If v = f_a for some a, then from Lemma 4.3 it is obvious that there exists a path y ⇝ z in G(f_N) whose intermediate nodes belong to {v′, …, s_N, f_N}. So, assume that v = s_a for some a in [1, N]. If there exists an edge from y to z in G(f_a), then condition 2 holds. Hence, further assume that the edge y → z is created just after eliminating f_a from G(f_a). From the node elimination algorithm, the edge is created from either of the following paths:

1. y → f_a → s_a
2. s_a → f_a → z

From Lemma 4.3 we know that there exist two acyclic restricted paths whose intermediate nodes belong to {s_{a+1}, f_{a+1}, …, s_N, f_N}. By merging these paths, we obtain a path from y to z whose intermediate nodes belong to {f_a, s_{a+1}, f_{a+1}, …, s_N, f_N}.

Proof of Proposition 4.3: If there exists an edge connecting s_i^j and v in G^{1,k}(f_N^k), then it is obvious that v belongs to the node set P. So, assume that there exists no such edge in G^{1,k}(f_N^k). Two cases must be considered.

Case 1: v →[w] s_i^j ∈ G^{1,k}(s_i^j). From Corollary 4.1 there exists a path from v to s_i^j in G^{1,k}(f_N^k) whose intermediate nodes belong to {f_i^j, …, s_N^k, f_N^k}. Note that this path has at least one intermediate node. From the definition of the crossing edge set Λ^{1,k}(s_i^j), it is clear that v ∈ PrecNode(Λ^{1,k}(s_i^j)).

Case 2: s_i^j →[w] v ∈ G^{1,k}(s_i^j). Similarly, the proposition can be proved in this case.

Proof of Proposition 4.4: Suppose that v belongs to Node(Λ^{1,k}(s_i^j)). Then, there exists a node v′ ∈ {v_0, s_1^1, f_1^1, …, s_i^j} such that an edge v → v′ exists in G(s_i^j). Then from Corollary 4.1, we know that there exists a path v ⇝ v′ in G^{1,k}(f_N^k) where all intermediate nodes in the path belong to {f_i^j, …, s_N^k, f_N^k}. From the definition of Λ^{1,k}(s_i^j) there exist two edges v → v_1 and v_2 → v′ in v ⇝ v′ where v_1 and v_2 belong to {f_i^j, …, s_N^k, f_N^k}. Note that v_1 may be equal to v_2. This means that v is an element of PrecNode(Λ^{1,k}(s_i^j)). Thus, Node(Λ^{1,k}(s_i^j)) ⊆ PrecNode(Λ^{1,k}(s_i^j)) is proved. The second assertion, Node(Λ^{1,k}(f_i^j)) ⊆ PrecNode(Λ^{1,k}(f_i^j)), can be proved in a similar way. Also, from these we know that the maximum number of edges in Λ^{1,k}(f_N^j), 1 ≤ j ≤ k−1, is less than or equal to n(n−1), since n is the number of nodes in PrecNode(Λ^{1,2}(f_N^1)).

Proof of Proposition 4.5: Claim 1: If there exists an edge from v_1 to v_2 in Λ^{1,k}(f_N^{j−1}), then there also exists an edge from g_{(1)}(v_1) to g_{(1)}(v_2) in Λ^{1,k}(f_N^j).

First suppose that v_1 → v_2 ∈ Λ^{1,k}(f_N^{j−1}) where 1 ≤ j−1 ≤ k−3. Then, from the definition of a created edge set, there exists a path from v_1 to v_2 that has at least one intermediate node and whose intermediate nodes belong to {s_1^j, f_1^j, …, s_N^k, f_N^k}. By applying a technique similar to the one used in Claim 1 of the proof of Lemma 4.3, we can reduce this path into an acyclic restricted path from v_1 to v_2 that has at least one intermediate node. Let this reduced path be denoted as < v_1 → x_1 → x_2 … → x_l → v_2 >, l ≥ 1, where every intermediate node x_h (1 ≤ h ≤ l) belongs to {s_1^j, f_1^j, …, s_N^k, f_N^k}. If all nodes x_h, 1 ≤ h ≤ l, belong to {s_1^j, f_1^j, …, s_N^{k−1}, f_N^{k−1}}, then it is clear from the cyclic nature of constraint graphs that there exists an acyclic restricted path from g_{(1)}(v_1) to g_{(1)}(v_2) in G^{1,k}(f_N^k) whose intermediate nodes belong to {s_1^{j+1}, f_1^{j+1}, …, s_N^k, f_N^k}. Hence, assume that there exists at least one x_m, 1 ≤ m ≤ l, that belongs to {s_1^k, f_1^k, …, s_N^k, f_N^k}. Note that x_1, x_l ∈ {s_1^j, f_1^j, …, s_N^j, f_N^j}. There are two possible cases to be considered:

1. x_1 is located later than x_l in the node sequence < s_1^j, f_1^j, …, s_N^j, f_N^j >. In this case there exists an acyclic restricted path < v_1 → x_1 ⇝ x_l → v_2 > whose intermediate nodes belong to {s_1^j, f_1^j, …, s_N^j, f_N^j}. This is because every node in a constraint graph has an edge to its previous node in the node sequence < s_1^1, f_1^1, …, s_N^k, f_N^k >. In other words, g_{(1)}(< v_1 → x_1 ⇝ x_l → v_2 >) is an acyclic restricted path from g_{(1)}(v_1) to g_{(1)}(v_2) in G^{1,k}(f_N^k) whose intermediate nodes belong to {s_1^{j+1}, f_1^{j+1}, …, s_N^k, f_N^k}. (Here g_{(a)}(< y_1 → y_2 … → y_i >) is defined to be < g_{(a)}(y_1) → g_{(a)}(y_2) … → g_{(a)}(y_i) >.) Hence, from Lemma 4.3 there exists an edge g_{(1)}(v_1) → g_{(1)}(v_2) in G^{1,k}(f_N^j). Because there exists a path from g_{(1)}(v_1) to g_{(1)}(v_2) satisfying the condition given in the definition of a created edge set, this edge belongs to Λ^{1,k}(f_N^j).

2. x_1 is located before x_l. Let the reduced path be denoted as < v_1 → x_1 ⇝ x_i ⇝ x_m → x_l → v_2 > where x_i is the first node appearing in this path that lies after x_1 in the node sequence < s_1^j, f_1^j, …, s_N^j, f_N^j >. Note that x_i ∈ {s_1^{j+1}, f_1^{j+1}, …, s_N^k, f_N^k}. Again, since every node has a path to its predecessor in the node sequence, there exists an acyclic restricted path < v_1 → x_1 ⇝ x_i ⇝ x_l → v_2 > that doesn't have a node for a job in the k-th window. Hence, there exists an acyclic restricted path g_{(1)}(< v_1 → x_1 ⇝ x_i ⇝ x_l → v_2 >) whose intermediate nodes belong to {s_1^{j+1}, f_1^{j+1}, …, s_N^k, f_N^k}. This means that g_{(1)}(v_1) → g_{(1)}(v_2) ∈ G^{1,k}(f_N^j). Also, because the above path satisfies the definition of a created edge set, this edge belongs to Λ^{1,k}(f_N^j).

Claim 2: If there exists an edge from v_3 to v_4 in Λ^{1,k}(f_N^j), then there also exists an edge from g_{(−1)}(v_3) to g_{(−1)}(v_4) in Λ^{1,k}(f_N^{j−1}).

Suppose that there exists an edge from v_3 to v_4 in Λ^{1,k}(f_N^j). Then, from the definition of a created edge set, there exists a path from v_3 to v_4 that has at least one intermediate node and whose intermediate nodes belong to {s_1^{j+1}, f_1^{j+1}, …, s_N^k, f_N^k}. By applying the technique in Claim 1 of the proof of Lemma 4.3, we can reduce this path into an acyclic restricted path from v_3 to v_4 that has at least one intermediate node. Let this path be denoted as < v_3 ⇝ v′ ⇝ v_4 > where v′ belongs to {s_1^{j+1}, f_1^{j+1}, …, s_N^k, f_N^k}. In this case, the path g_{(−1)}(< v_3 ⇝ v′ ⇝ v_4 >) is also an acyclic restricted path in G(f_N^k) whose intermediate nodes belong to {s_1^j, f_1^j, …, s_N^{k−1}, f_N^{k−1}}. Then, from Lemma 4.3 there exists an edge g_{(−1)}(v_3) → g_{(−1)}(v_4) in G^{1,k}(f_N^{j−1}). Also, because the path g_{(−1)}(< v_3 ⇝ v′ ⇝ v_4 >) satisfies the condition in the definition of a created edge set, this edge belongs to Λ^{1,k}(f_N^{j−1}), too.

From Claims 1 and 2, we conclude that Λ^{1,k}(f_N^{j−1}) is semi-homogeneous to Λ^{1,k}(f_N^j) for 2 ≤ j ≤ k−2.

Proof of Proposition 4.6: From Lemma 4.3 there exists a minimum weight acyclic restricted path ρ_1 = < v_1 ⇝[w] v_2 > whose intermediate nodes belong to {s_1^{j+1}, f_1^{j+1}, …, s_N^k, f_N^k}, and a minimum weight acyclic restricted path ρ_2 = < g_{(−1)}(v_1) ⇝[w′] g_{(−1)}(v_2) > whose intermediate nodes belong to {s_1^j, f_1^j, …, s_N^k, f_N^k}. Three cases must be examined:

Case 1: v_1 ≠ v_0 and v_2 ≠ v_0. In this case it is clear that w′ is less than or equal to w, since the set of acyclic restricted paths from v_1 to v_2 in G^{1,k}(f_N^k) whose intermediate nodes belong to {s_1^{j+1}, f_1^{j+1}, …, s_N^k, f_N^k} is a subset of the set of acyclic restricted paths from v_1 to v_2 in G^{1,k}(f_N^k) whose intermediate nodes belong to {s_1^j, f_1^j, …, s_N^k, f_N^k}.

Case 2: v_1 = v_0. The path g_{(−1)}(ρ_1) is also an acyclic restricted path. The weight of the path g_{(−1)}(ρ_1) is equal to w − L, since every edge weight in this new path is the same as that of the corresponding edge in ρ_1 except for the first edge v_0 → g_{(−1)}(x_1) of g_{(−1)}(ρ_1), where x_1 denotes the first node appearing after v_0 in ρ_1. The weight of this edge is L less than that of v_0 → x_1, which is the first edge of ρ_1. This implies w′ ≤ w − L from Lemma 4.3.

Case 3: v_2 = v_0. The path g_{(−1)}(ρ_1) is also an acyclic restricted path. The weight of the path g_{(−1)}(ρ_1) is equal to w + L, since every edge weight in this new path is the same as that of the corresponding edge in ρ_1 except for the last edge g_{(−1)}(x_l) → v_0 of g_{(−1)}(ρ_1). The weight of this edge is L more than the weight of x_l → v_0, which is the last edge of ρ_1. This implies w′ ≤ w + L from Lemma 4.3.

Proof of Proposition 4.7: Note that the two created edge sets, Λ^{1,k}(f_i^{j−1}) and Λ^{1,k}(f_i^j), can be shown to be semi-homogeneous by employing a proof similar to that for Proposition 4.5, where 2 ≤ j ≤ k−2. The following claim is proved where i is any integer satisfying 1 ≤ i ≤ N.

Claim 1: Λ^{1,k}(f_i^{j−1}) is homogeneous to Λ^{1,k}(f_i^j).

First, suppose that v_1 →[w_1] v_2 ∈ Λ^{1,k}(f_i^{j−1}), where v_1 ≠ v_0 and v_2 ≠ v_0. Consider the graph G^{1,k}(f_N^{j−1}). From Lemma 4.3, we can find a minimum weight acyclic restricted path within this graph, ρ_1 = < v_1 ⇝[w_1] v_2 >, whose intermediate nodes belong to {s_{i+1}^{j−1}, f_{i+1}^{j−1}, …, s_N^{j−1}, f_N^{j−1}}. From the assumption of homogeneity between Λ^{1,k}(f_N^{j−1}) and Λ^{1,k}(f_N^j), every edge x_1 → x_2 in G^{1,k}(f_N^{j−1}), where x_1, x_2 ∈ {s_{i+1}^{j−1}, …, s_N^{j−1}, f_N^{j−1}}, has the same weight as the edge g_{(1)}(x_1) → g_{(1)}(x_2) in G^{1,k}(f_N^j). This one-to-one correspondence between created edge sets implies that the acyclic restricted path g_{(1)}(ρ_1) has the same weight w_1 as that of ρ_1, and g_{(1)}(ρ_1) is a minimum weight acyclic restricted path among the acyclic restricted paths in G^{1,k}(f_N^j) whose intermediate nodes belong to {s_{i+1}^j, …, s_N^j, f_N^j}. Hence, g_{(1)}(v_1) →[w_1] g_{(1)}(v_2) ∈ G^{1,k}(f_i^j) holds from Lemma 4.3. Because Λ^{1,k}(f_i^{j−1}) and Λ^{1,k}(f_i^j) are semi-homogeneous, this edge also belongs to Λ^{1,k}(f_i^j).

Second, suppose that v_3 →[w_2] v_4 ∈ Λ^{1,k}(f_i^j), where v_3 ≠ v_0 and v_4 ≠ v_0. Consider the graph G^{1,k}(f_N^j). From Lemma 4.3, we can find a minimum weight acyclic restricted path within this graph, ρ_2 = < v_3 ⇝[w_2] v_4 >, whose intermediate nodes belong to {s_{i+1}^j, f_{i+1}^j, …, s_N^j, f_N^j}. Again, from the one-to-one correspondence between created edge sets, the path g_{(−1)}(ρ_2) has the same weight w_2 as that of ρ_2, and the path is also a minimum weight acyclic restricted path among the acyclic restricted paths in G^{1,k}(f_N^{j−1}) whose intermediate nodes belong to {s_{i+1}^{j−1}, …, s_N^{j−1}, f_N^{j−1}}. Hence, g_{(−1)}(v_3) →[w_2] g_{(−1)}(v_4) ∈ G^{1,k}(f_i^{j−1}) holds from Lemma 4.3. Because Λ^{1,k}(f_i^{j−1}) and Λ^{1,k}(f_i^j) are semi-homogeneous, this edge also belongs to Λ^{1,k}(f_i^{j−1}).

Therefore, the following is proved where v_1 ≠ v_0 and v_2 ≠ v_0:

(v_1 →[w] v_2 ∈ Λ^{1,k}(f_i^{j−1})) ⟺ (g_{(1)}(v_1) →[w] g_{(1)}(v_2) ∈ Λ^{1,k}(f_i^j))

For the cases where one of v_1 or v_2 is equal to v_0, condition 3 or 4 in the definition of homogeneous edge sets may be proved in a similar way by using the definition of homogeneity between created edge sets and Lemma 4.3. Therefore, Claim 1 is proved. Then, from the transitivity of homogeneous relations, it is clear that the following holds:

∀l : 2 ≤ l ≤ j−1 :: ∀i : 1 ≤ i ≤ N :: Λ^{1,k}(f_i^l) is homogeneous to Λ^{1,k}(f_i^j)

Claim 2: ∀l : 2 ≤ l ≤ j−1 :: ∀i : 1 ≤ i ≤ N :: Λ^{1,k}(s_i^l) is homogeneous to Λ^{1,k}(s_i^j).

For fixed l and i, we know that Λ^{1,k}(f_i^l) is homogeneous to Λ^{1,k}(f_i^j) from Claim 1. From this homogeneity, it is clear that Λ^{1,k}(s_i^l) is homogeneous to Λ^{1,k}(s_i^j) from the node elimination algorithms. That is, Λ^{1,k}(f_i^l) is obtained after eliminating f_i^l from G(f_i^l), and Λ^{1,k}(f_i^j) is obtained after eliminating f_i^j from G(f_i^j).

Proof of Theorem 4.1: Let G_b(V_b, E_b) denote a basis graph obtained from an initial constraint graph for a cyclically constrained job set.

Claim: If Algorithm 4.3 applied to G_b(V_b, E_b) doesn't terminate within n² − n + 2 loop iterations, then there exists a negative weight cycle in G^{1,k}(f_N^k) for k ≥ n².

Suppose that the algorithm doesn't terminate within n² − n + 2 loop iterations. From Proposition 4.5, we know that Λ^{1,k}(f_N^{k−2}), Λ^{1,k}(f_N^{k−3}), …, Λ^{1,k}(f_N^1) are semi-homogeneous. Thus, the graphs G_out^i(V_{b,1} ∪ {v_0}, E_out^i), 2 ≤ i ≤ n² − n + 2, are semi-homogeneous, too. This means that after each loop iteration for i ≥ 3 in the algorithm, there exists at least one edge in G_out^i(V_{b,1} ∪ {v_0}, E_out^i), 3 ≤ i ≤ n² − n + 2, whose weight has been reduced from the corresponding one in G_out^{i−1}(V_{b,1} ∪ {v_0}, E_out^{i−1}). If not, then the algorithm should have been completed within n² − n + 2 loop iterations at step 3-(d), because homogeneous created edge sets are already found, which is against the assumption.

For the purpose of clarity, each node v_i (∈ V_b) used in this proof will be denoted as v_i^j to represent that v_i belongs to the node set V_b of a graph G_in^j(V_b, E_in^j), or to the node set V_{b,1} ∪ {v_0} of a graph G_out^j(V_{b,1} ∪ {v_0}, E_out^j).

Let v_1^{n²−n+2} → v_2^{n²−n+2}, v_1, v_2 ∈ V_{b,1} ∪ {v_0} (v_1 ≠ v_2), denote one such edge in G_out^{n²−n+2}(V_{b,1} ∪ {v_0}, E_out^{n²−n+2}) whose weight is less than that of the corresponding edge in G_out^{n²−n+1}(V_{b,1} ∪ {v_0}, E_out^{n²−n+1}). Equivalently, from the cyclic operation performed at step 3-(f) in Algorithm 4.3 we can say that v_1^{n²−n+2} → v_2^{n²−n+2} is an edge in G_in^{n²−n+2}(V_b, E_in^{n²−n+2}) whose weight is less than or equal to:

w − 1, if the edge doesn't connect v_0;
w − L − 1, if the edge is from v_0;
w + L − 1, if the edge is to v_0;

where w is the weight of the edge (g_{(1)}(v_1))^{n²−n+2} → (g_{(1)}(v_2))^{n²−n+2} of G_in^{n²−n+2}(V_b, E_in^{n²−n+2}).

Let p_1 denote a minimum weight acyclic restricted path from v_1^{n²−n+2} to v_2^{n²−n+2} with a weight w_12 in G_in^{n²−n+2}(V_b, E_in^{n²−n+2}) whose intermediate nodes belong to V_{b,2}. Note that no intermediate node, if any exists, is equal to v_0. The path p_1 exists by Lemma 4.3. Then, after the (n² − n + 2)-th loop iteration, the weight of v_1^{n²−n+2} → v_2^{n²−n+2} will be changed to w_12 in G_out^{n²−n+2}(V_{b,1} ∪ {v_0}, E_out^{n²−n+2}).

Sub-claim 1: In G_in^{n²−n+2}(V_b, E_in^{n²−n+2}), p_1 has at least one edge connecting two different nodes that belong to g_{(1)}(V_{b,1}) ∪ {v_0}.

Suppose the claim is not true. Then p_1 is also a minimum weight acyclic restricted path from v_1^{n²−n+1} to v_2^{n²−n+1} with weight w_12 in G_in^{n²−n+1}(V_b, E_in^{n²−n+1}), since only the weights of edges connecting two different nodes of g_{(1)}(V_{b,1}) ∪ {v_0} may be reduced after each loop iteration of the algorithm. This contradicts the definition of the path p_1.

Then, the following is proved. Here, it is assumed that v_3, v_4 (v_3 ≠ v_4) belong to g_{(1)}(V_{b,1}) ∪ {v_0}, and thus g_{(−1)}(v_3) and g_{(−1)}(v_4) belong to V_{b,1} ∪ {v_0}.

Sub-claim 2: There exists at least one edge in p_1, v_3^{n²−n+2} →[w′_34] v_4^{n²−n+2}, v_3, v_4 ∈ g_{(1)}(V_{b,1}) ∪ {v_0}, satisfying

w′_34 < w_34

where w_34 is the weight of the edge v_3^{n²−n+1} → v_4^{n²−n+1} in G_in^{n²−n+1}(V_b, E_in^{n²−n+1}).

Suppose that the claim is not true. Then, all edges lying in p_1 that connect two nodes of g_{(1)}(V_{b,1}) ∪ {v_0} fail the above condition. In other words, none of the edge weights of p_1 in G_in^{n²−n+2}(V_b, E_in^{n²−n+2}) is reduced compared to the edge weights of p_1 in G_in^{n²−n+1}(V_b, E_in^{n²−n+1}). This means that p_1 is also a minimum weight acyclic restricted path with weight w_12 in G_in^{n²−n+1}(V_b, E_in^{n²−n+1}), which is clear from Proposition 4.6. From Lemma 4.3 this implies that the weight of v_1^{n²−n+1} → v_2^{n²−n+1} in G_out^{n²−n+1}(V_{b,1} ∪ {v_0}, E_out^{n²−n+1}) is equal to w_12. This contradicts the definition of the path p_1. Therefore, sub-claim 2 is proved.

Hence, we know that in the path p_1 there exists an edge v_3 →[w′_34] v_4 whose weight is less than that of the corresponding edge v_3^{n²−n+1} →[w_34] v_4^{n²−n+1} in G_in^{n²−n+1}(V_b, E_in^{n²−n+1}).

From the cyclic operation performed at step 3-(f) in Algorithm 4.3 and from Lemma 4.3, we know that there exists a minimum weight acyclic restricted path from g_{(−1)}(v_3) to g_{(−1)}(v_4) in G_in^{n²−n+1}(V_b, E_in^{n²−n+1}) whose intermediate nodes belong to V_{b,2} and which is equal to one of the following forms:

1. If v_3 ≠ v_0 and v_4 ≠ v_0: (g_{(−1)}(v_3))^{n²−n+1} ⇝[w′_34] (g_{(−1)}(v_4))^{n²−n+1}
2. If v_3 = v_0 and v_4 ≠ v_0: v_0^{n²−n+1} ⇝[w′_34 − L] (g_{(−1)}(v_4))^{n²−n+1}
3. If v_3 ≠ v_0 and v_4 = v_0: (g_{(−1)}(v_3))^{n²−n+1} ⇝[w′_34 + L] v_0^{n²−n+1}

Note that, if any edge weight in the above minimum weight acyclic restricted path is reduced, the weight of the edge v_3 → v_4 in G_in^{n²−n+2}(V_b, E_in^{n²−n+2}) will also be reduced by at least the same amount after the (n² − n + 1)-th loop iteration of Algorithm 4.3. Hence, p_1 can be denoted as:

< v_1^{n²−n+2} ⇝ v_3^{n²−n+2} →[w′_34] v_4^{n²−n+2} ⇝ v_2^{n²−n+2} >

where the edge v_3^{n²−n+2} →[w′_34] v_4^{n²−n+2} can be replaced by one of the above minimum weight paths. Then p_1 can be denoted as:

< v_1^{n²−n+2} ⇝ < (g_{(−1)}(v_3))^{n²−n+1} ⇝ (g_{(−1)}(v_4))^{n²−n+1} > ⇝ v_2^{n²−n+2} >

where the inner path, < (g_{(−1)}(v_3))^{n²−n+1} ⇝ (g_{(−1)}(v_4))^{n²−n+1} >, will be reduced to an edge v_3^{n²−n+2} →[w′_34] v_4^{n²−n+2} after the (n² − n + 1)-th loop iteration if Algorithm 4.3 is applied to the above extended path.

Note that applying Algorithm 4.3 to this new path will produce an edge v_1^{n²−n+2} →[w_12] v_2^{n²−n+2}, and if some edge weight is reduced, w_12 will be reduced, too. From the above result and from w′_34 < w_34, we know that the edge weight of g_{(−1)}(v_3) → g_{(−1)}(v_4) in the graph G_out^{n²−n+1}(V_{b,1} ∪ {v_0}, E_out^{n²−n+1}) is:

w′_34 − 1 or less, if v_3 ≠ v_0 and v_4 ≠ v_0;
w′_34 − L − 1 or less, if v_3 = v_0;
w′_34 + L − 1 or less, if v_4 = v_0;

where w′_34 is the edge weight of v_3 → v_4 in G_in^{n²−n+1}(V_b, E_in^{n²−n+1}).

This enables us to repeatedly apply the same procedure to a new minimum weight acyclic restricted path g_{(−1)}(v_3) ⇝ g_{(−1)}(v_4) in G_in^{n²−n+1}(V_b, E_in^{n²−n+1}). Therefore, we obtain the following extension of the path p_1:

< v_1^{n²−n+2} ⇝ < (g_{(−1)}(v_3))^{n²−n+1} ⇝ < (g_{(−1)}(v_5))^{n²−n} ⇝ (g_{(−1)}(v_6))^{n²−n} > ⇝ (g_{(−1)}(v_4))^{n²−n+1} > ⇝ v_2^{n²−n+2} >

where the intermediate nodes of < (g_{(−1)}(v_5))^{n²−n} ⇝ (g_{(−1)}(v_6))^{n²−n} > belong to V_{b,2} of G_in^{n²−n}(V_b, E_in^{n²−n}). And this extension may be continued until the following is obtained:

< v_1^{n²−n+2} ⇝ < (g_{(−1)}(v_3))^{n²−n+1} ⇝ < (g_{(−1)}(v_5))^{n²−n} ⇝ … < (g_{(−1)}(v_{2(n²−n+1)−1}))² ⇝ (g_{(−1)}(v_{2(n²−n+1)}))² > … ⇝ (g_{(−1)}(v_6))^{n²−n} > ⇝ (g_{(−1)}(v_4))^{n²−n+1} > ⇝ v_2^{n²−n+2} >

Consider the following set of node pairs in V_{b,1} ∪ {v_0} of G_in^j(V_b, E_in^j), 2 ≤ j ≤ n² − n + 2, that have been included in the extension of the path p_1 at each iteration of the process:

{ (v_1^{n²−n+2}, v_2^{n²−n+2}), ((g_{(−1)}(v_3))^{n²−n+1}, (g_{(−1)}(v_4))^{n²−n+1}), …, ((g_{(−1)}(v_{2(n²−n+1)−1}))², (g_{(−1)}(v_{2(n²−n+1)}))²) }

Note that this set has n² − n + 1 node pairs. Because there exist n nodes in V_{b,1} ∪ {v_0}, there may exist only n² − n distinct node pairs. Hence, there should exist at least one node pair that appears twice in the above node pair set. Let (v_{i_1}^j, v_{i_2}^j) and (v_{i_3}^l, v_{i_4}^l), l < j, denote two such node pairs where i_1 = i_3 ∧ i_2 = i_4. Therefore, in the extension process of p_1 performed above, we should have encountered the following path:

< v_{i_1}^j ⇝ < v_{i_1}^l → v_{i_2}^l > ⇝ v_{i_2}^j >    (A.2)

Because the extension process chooses an edge v_{i_1}^j → v_{i_2}^j in G_in^j(V_b, E_in^j) whose weight is less than that of v_{i_1}^{j−1} → v_{i_2}^{j−1} at the (n² − n + 2 − j + 1)-th iteration of Algorithm 4.3, we know that the weight of the edge v_{i_1}^l → v_{i_2}^l is greater than the weight of the edge v_{i_1}^j → v_{i_2}^j since j > l. This implies that there exists a path that reduces the edge weight of v_{i_1}^j → v_{i_2}^j from that of v_{i_1}^l → v_{i_2}^l after the j-th loop iteration in Algorithm 4.3. Then, from Proposition 4.6 we know that, after l + k(j − l) loop iterations in Algorithm 4.3, where k ≥ 1, the edge weight of v_{i_1} → v_{i_2} in the resulting graph will be reduced from the corresponding edge weight in the graph found after l + (k−1)(j − l) loop iterations. This means that the edge weight of v_{i_1} → v_{i_2} would be decreased infinitely often. But, since every job has a release time and a deadline constraint, this repeated process will eventually create a negative weight cycle during the variable elimination process applied to a constraint graph for sched^{1,∞}. This contradicts the assumption, and proves the claim and the theorem.

Appendix B

B.1 Proofs for Chapter 6

The proof of Theorem 6.2 is presented here.

Proposition B.1 If Γ and S are schedulable, then the following condition is satisfied:

∀i ∈ [1, ν] :: ∀t ∈ [est(Γ_i), est(Γ_i) + LCM] :: ω′(est(Γ_i), t) ≥ Σ_{k=1}^{m_s} c_k^s ⌊(t − est(Γ_i) + π_k − d_k^s) / π_k⌋

Proof: Suppose that S and Γ are schedulable and the above condition doesn't hold. Let t_v be the first time instant at which the condition is not satisfied. That is, the following holds for some i_v ∈ [1, ν]:

ω′(est(Γ_{i_v}), t_v) < Σ_{k=1}^{m_s} c_k^s ⌊(t_v − est(Γ_{i_v}) + π_k − d_k^s) / π_k⌋

However, from this we can conclude that the task set is not schedulable when all the sporadic tasks start to be invoked at time est(Γ_{i_v}) with their minimum inter-arrival times. This is because the processor demand by S in [est(Γ_{i_v}), t_v] exceeds the processor time in [est(Γ_{i_v}), t_v] available for tasks in S. Therefore, if Γ and S are schedulable, the condition is satisfied.

We define a busy period for a given task τ, which belongs to Γ or S, and denote it as BP = [a, f], where f is the actual finish time of the task τ at run-time. Let D denote the deadline of τ. Then, let η be the last task satisfying the following conditions:

(1) η ∈ Γ or η ∈ S.
(2) η starts its execution before f.
(3) η starts its execution at its release time r_η.
(4) No idle period exists between r_η and f.
(5) No task whose deadline is greater than D is executed between r_η and f.

Then, the following proposition claims that such a task η exists for any given task τ.

Proposition B.2 If EDF is used at run-time to schedule Γ and S, then for any given task τ (τ ∈ Γ or τ ∈ S) the task η (η ∈ Γ or η ∈ S) exists.

Proof: It is clear that at the end of the last idle period before f, the conditions (1), (2), (3), and (4) hold for some task η_0 whose release time is equal to the end of that idle period. If there is no idle period before τ, then let η_0 denote the first task, which starts its execution at time 0. Let η_1 denote the last task which starts its execution between the end of the idle period and f, and which satisfies all of the conditions from (1) to (4). In this case, η_1 is the last task in [r_{η_1}, f] which starts its execution at its release time. In other words, every task executed between r_{η_1} and f has started its execution some time after its release time, except η_1. Suppose that condition (5) is not satisfied in [r_{η_1}, f], and let σ denote a task whose start time is between r_{η_1} and f, and which has a deadline D_σ greater than D. But, because D is less than D_σ and EDF is used to schedule the tasks, a contradiction is obtained: σ should never have been executed between r_{η_1} and f, since the task τ has a higher priority than σ. Therefore, the task instance η_1 satisfies the conditions from (1) to (5).

Then, the start time a of the busy period for τ is defined to be r_{η_1}, which is found in the above proof procedure. An example busy period is depicted in Figure B.1.

Figure B.1: Busy period

Here, the earliest finish time of τ_i is defined as est(i) + C_i.

Proposition B.3 The following is satisfied for every i ∈ [2, ν+1]:

∀t_1 ∈ [eft(Γ_{i−1}), est(Γ_i)] :: ∀l > 0 :: ω′(t_1, t_1 + l) ≥ ω′(est(Γ_i), est(Γ_i) + l)    (B.1)

Proof: If the time interval [est(Γ_i), est(Γ_i) + l] is shifted to the left by the amount of est(Γ_i) − t_1, which results in a new time interval [t_1, t_1 + l], the slack time is increased by the amount of est(Γ_i) − t_1 and decreased by an amount less than or equal to est(Γ_i) − t_1. This is depicted in Figure B.2.

Figure B.2: ω′ is increased or remains the same in the shifted interval

Proposition B.4 The following is satisfied for every i ∈ [1, ν]:

∀t_1 ∈ (est(Γ_i), eft(Γ_i)) :: ∀l > 0 :: ω′(t_1, t_1 + l) ≥ ω′(est(Γ_i), est(Γ_i) + l)    (B.2)

Proof: If the time interval [est(Γ_i), est(Γ_i) + l] is shifted to the right by the amount of t_1 − est(Γ_i), which results in a new time interval [t_1, t_1 + l], the maximum slack time ω′ is increased or at least remains the same, as can be seen from Figure B.3. This proves the proposition.
Figure B.3: ω′ is increased or remains the same in the shifted interval

Proposition B.5 If Γ and S satisfy the condition of Proposition 6.2, then they are schedulable by the EDF scheduling algorithm.

Proof: Suppose that the condition is satisfied for S and Γ and some task can't be finished by its deadline. Let's call that task τ (τ ∈ Γ or τ ∈ S) and the deadline of that task D_τ. And, let BP = [t_i, f_τ] denote a busy period for τ. In this case, the actual finish time of τ, f_τ, is greater than D_τ. Then there are two cases to be considered.

Case 1: D_τ − t_i > LCM. Note that the maximum processor demand in [t_i, t_i + LCM] by task instances from S is less than or equal to ω′(t_i, t_i + LCM) from condition (6.4). In this case, at t_i + LCM a new task instance starts its execution whose release time is equal to t_i + LCM. Then, it is obvious that the start time of the busy period, t_i, should be greater than or equal to t_i + LCM, which is a contradiction.

Case 2: D_τ − t_i ≤ LCM. Let ι be the first task in [t_i, D_τ] which belongs to Γ. First, suppose that ι exists. Then, let Γ_j denote the task group containing ι. From the definition of a busy period we know that the release time of ι, r_ι, is greater than or equal to t_i. Then from Propositions B.3 and B.4,

∀l > 0 :: ω′(t_i, t_i + l) ≥ ω′(est(Γ_j), est(Γ_j) + l)

This means that if the tasks in S start to invoke their task instances from t_i with their minimum inter-arrival times, then they are schedulable with Γ. This implies that the task instances invoked at or after t_i are schedulable, since the worst case scenario is that every S_i ∈ S starts to be invoked at t_i with inter-arrival time π_i, which is proven to be schedulable. This contradicts the assumption that τ misses its deadline at D_τ.

Second, suppose that ι doesn't exist. In this case all the task instances executed in the interval [t_i, D_τ] ⊆ [t_i, f_τ] are from S. It is clear in this case from condition (6.4) that

∀l > 0 :: ω′(t_i, l) ≥ Σ_{k=1}^{m_s} c_k^s ⌊(l − t_i + π_k − d_k^s) / π_k⌋

From this, we can conclude that every task instance in [t_i, D_τ] is schedulable, which contradicts the assumption that τ misses its deadline at D_τ.

Bibliography
[1] Ashok K. Agrawala, Seonho Choi, and Leyuan Shi. Designing temporal controls. Technical Report CS-TR-3504, UMIACS-TR-95-81, Department of Computer Science, University of Maryland, July 1995.
[2] N. C. Audsley, A. Burns, R. I. Davis, and A. J. Wellings. Integrating best effort and fixed priority scheduling. In Proceedings of the 1994 Workshop on Real-Time Programming, Lake Constance, Germany, June 1994.
[3] N. C. Audsley. Deadline monotonic scheduling. YCS 146, University of York, Department of Computer Science, October 1990.
[4] T. Baker and A. Shaw. The Cyclic Executive Model and Ada. Real-Time Systems, 1(1):7-25, September 1989.
[5] T. P. Baker. A Stack-Based Resource Allocation Policy for Real-Time Processes. In Proceedings, IEEE Real-Time Systems Symposium, 1990.
[6] A. Belleisle. "Stability of systems with nonlinear feedback through randomly time-varying delays". IEEE Transactions on Automatic Control, AC-20:67-75, February 1975.
[7] R. Bellman. Adaptive Control Process: A Guided Tour. Princeton, NJ: Princeton University Press, 1961.
[8] R. Bellman. Bellman special issue. IEEE Transactions on Automatic Control, AC-26, October 1981.
[9] T. Carpenter, K. Driscoll, K. Hoyme, and J. Carciofini. ARINC 659 scheduling: Problem definition. In Proceedings, IEEE Real-Time Systems Symposium, San Juan, PR, December 1994.
[10] M. Chen and K. Lin. Dynamic Priority Ceilings: A Concurrency Control Protocol for Real-Time Systems. Real-Time Systems, 2(4):325-346, 1990.
[11] S. Cheng and Ashok K. Agrawala. Scheduling of periodic tasks with relative timing constraints. Technical Report CS-TR-3392, UMIACS-TR-94-135, Department of Computer Science, University of Maryland, December 1994.
[12] S. T. Cheng and Ashok K. Agrawala. Allocation and scheduling of real-time periodic tasks with relative timing constraints. Technical Report CS-TR-3402, UMIACS-TR-95-6, Department of Computer Science, University of Maryland, January 1995.
[13] H. Chetto and M. Chetto. Scheduling Periodic and Sporadic Task in a Real-Time System. Information Processing Letters, 30(4):177-184, 1989.
[14] H. Chetto and M. Chetto. Some Results of the Earliest Deadline First Algorithm. IEEE Transactions on Software Engineering, SE-15(10):1261-1269, October 1989.
[15] H. Chetto, M. Silly, and T. Bouchentouf. Dynamic Scheduling of Real-Time Tasks under Precedence Constraints. Real-Time Systems, 2:181-194, 1990.
[16] G. Dantzig and B. Eaves. Fourier-Motzkin Elimination and its Dual. Journal of Combinatorial Theory (A), 14:288-297, 1973.
[17] R. I. Davis. Approximate slack stealing algorithms for fixed priority pre-emptive systems. Technical Report YCS 217 (1993), Department of Computer Science, University of York, England, November 1993.
[18] R. I. Davis, K. W. Tindell, and A. Burns. Scheduling slack time in fixed priority pre-emptive systems. In Proceedings, IEEE Real-Time Systems Symposium, pages 222-231. IEEE Computer Society Press, December 1993.
[19] M. Dertouzos. Control Robotics: the Procedural Control of Physical Processes. Proceedings of the IFIP Congress, pages 807-813, 1974.
[20] P. Dorato and A. Levis. "Optimal linear regulators: The discrete time case". IEEE Transactions on Automatic Control, AC-16:613-620, December 1971.
[21] G. Fohler and C. Koza. Heuristic Scheduling for Distributed Real-Time Systems. MARS 6/89, Technische Universität Wien, Vienna, Austria, April 1989.
[22] Gerhard Fohler. Joint scheduling of distributed complex periodic and hard aperiodic tasks in statically scheduled systems. In Proceedings, IEEE Real-Time Systems Symposium. IEEE Computer Society Press, December 1995.
[23] R. Gerber, W. Pugh, and M. Saksena. Parametric Dispatching of Hard Real-Time Tasks. IEEE Transactions on Computers, 44(3), March 1995.
[24] T. M. Ghazalie and T. P. Baker. Aperiodic servers in a deadline scheduling environment. Real-Time Systems, 9:31-67, 1995.
[25] A. Gosiewski and A. Olbrot. "The effect of feedback delays on the performance of multivariable linear control systems". IEEE Transactions on Automatic Control, AC-25(4):729-734, August 1980.
[26] C. Han, C. Hou, and K. Lin. Distance-Constrained Scheduling and Its Applications to Real-Time Systems. IEEE Transactions on Computers. To appear.
[27] K. Hirai and Y. Satoh. "Stability of a system with variable time delay". IEEE Transactions on Automatic Control, AC-25(3):552-554, June 1980.
[28] X. Homayoun and P. Ramanathan. Dynamic priority scheduling of periodic and aperiodic tasks in hard real-time systems. Real-Time Systems, 6(2), March 1994.
[29] Seung H. Hong. Scheduling Algorithm of Data Sampling Times in the Integrated Communication and Control Systems. IEEE Transactions on Control Systems Technology, 3(2):225-230, June 1995.
[30] J. P. Lehoczky, L. Sha, and Y. Ding. The Rate Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behavior. In Proceedings, IEEE Real-Time Systems Symposium, pages 166-171, December 1989.
[31] J. P. Lehoczky, L. Sha, and J. K. Strosnider. Enhanced aperiodic responsiveness in hard real-time environments. In Proceedings, IEEE Real-Time Systems Symposium, pages 261-270, December 1987.
[32] John P. Lehoczky and Sandra Ramos-Thuel. An optimal algorithm for scheduling soft-aperiodic tasks in fixed-priority preemptive systems. In Proceedings, IEEE Real-Time Systems Symposium, pages 110-123. IEEE Computer Society Press, December 1992.
[33] J. Y. Leung and J. Whitehead. On the Complexity of Fixed-Priority Scheduling of Periodic, Real-Time Tasks. Performance Evaluation, 2(4):237-250, 1982.
[34] S. T. Levi, Satish K. Tripathi, Scott Carson, and Ashok K. Agrawala. "The MARUTI hard real-time operating system". ACM Symp. on Op. Syst. Principles, Op. Syst. Review, 23(3), July 1989.
[35] Shem-Tov Levi and Ashok K. Agrawala. Real Time System Design. McGraw Hill, 1990.
[36] C. L. Liu and J. Layland. Scheduling Algorithm for Multiprogramming in a Hard Real-Time Environment. Journal of the ACM, 20(1):46-61, January 1973.
[37] A. Mok, D. Tsou, and R. Rooij. The MSP.RTL real-time scheduler synthesis tool. In Proceedings, IEEE Real-Time Systems Symposium, December 1996.
[38] A. K. Mok. Fundamental Design Problems for the Hard Real-time Environments. PhD thesis, MIT, May 1983.
[39] C. L. Phillips and H. Troy Nagle. Digital Control System: Analysis and Design, chapter 10. Linear Quadratic Optimal Control, pages 356-399. Prentice Hall, 1990.
[40] K. Ramamritham and J. A. Stankovic. Scheduling algorithms and operating systems support for real-time systems. Proceedings of the IEEE, 82(1):55-67, January 1994.
[41] Z. Rekasius. "Stability of digital control with computer interruptions". IEEE Transactions on Automatic Control, AC-31:356-359, April 1986.
[42] M. Saksena. Parametric Scheduling for Hard Real-Time Systems. PhD thesis, University of Maryland, College Park, MD 20742, 1994.
[43] Manas Saksena, James da Silva, and Ashok K. Agrawala. "Design and implementation of Maruti-II", chapter 4. Prentice Hall, 1995. In Advances in Real-Time Systems, edited by Sang H. Son.
[44] L. Sha, J. P. Lehoczky, and R. Rajkumar. Solutions for some practical problems in prioritized preemptive scheduling. In Proceedings, IEEE Real-Time Systems Symposium, pages 181-191, December 1986.
[45] L. Sha, R. Rajkumar, and J. P. Lehoczky. Priority Inheritance Protocols: An Approach to Real-Time Synchronization. IEEE Transactions on Computers, 39(9):1175-1185, September 1990.
[46] K. G. Shin and H. Kim. "Derivation and application of hard deadlines for real-time control systems". IEEE Transactions on Systems, Man and Cybernetics, 22(6):1403-1413, November 1992.
[47] B. Sprunt, L. Sha, and J. Lehoczky. Aperiodic task scheduling for hard-real-time systems. Real-Time Systems, 1(1):27-60, June 1989.
[48] Marco Spuri and Giorgio C. Buttazzo. Efficient aperiodic service under earliest deadline scheduling. In Proceedings, IEEE Real-Time Systems Symposium, pages 2-11. IEEE Computer Society Press, December 1994.
[49] Sandra R. Thuel and John P. Lehoczky. Algorithm for scheduling hard aperiodic tasks in fixed-priority systems using slack stealing. In Proceedings, IEEE Real-Time Systems Symposium, pages 22-33. IEEE Computer Society Press, December 1994.
[50] K. Tindell, A. Burns, and A. Wellings. An Extendible Approach for Analyzing Fixed Priority Hard Real-Time Tasks. Real-Time Systems, 6(2), March 1994.
[51] G. S. Virk. Digital Computer Control Systems, chapter 4. McGraw Hill, 1991.
[52] J. Xu and D. L. Parnas. Scheduling processes with release times, deadlines, precedence, and exclusion relations. IEEE Transactions on Software Engineering, SE-16(3):360-369, March 1990.
[53] J. Xu and D. L. Parnas. On Satisfying Timing Constraints in Hard-Real-Time Systems. In Proceedings of the ACM SIGSOFT'91 Conference on Software for Critical Systems, pages 132-146, December 1991.
[54] X. Yuan, M. Saksena, and A. Agrawala. A Decomposition Approach to Real-Time Scheduling. Real-Time Systems, 6(1), 1994.
[55] K. Zahr and C. Slivinsky. "Delay in multivariable computer controlled linear systems". IEEE Transactions on Automatic Control, pages 442-443, August 1974.
[56] W. Zhao and K. Ramamritham. Simple and Integrated Heuristic Algorithms for Scheduling Tasks with Time and Resource Constraints. Journal of Systems and Software, pages 195-205, 1987.


