State-of-the-art Versus Time-triggered Object Tracking in Advanced Driver Assistance Systems

Most state-of-the-art driver assistance systems cannot guarantee that real-time images of object states are updated within a given time interval, because the object state observations are typically sampled by uncontrolled sensors and transmitted via an indeterministic bus system such as CAN. To overcome this shortcoming, a paradigm shift toward time-triggered advanced driver assistance systems based on a deterministic bus system, such as FlexRay, is under discussion. In order to prove the feasibility of this paradigm shift, this paper develops different models of a state-of-the-art and a time-triggered advanced driver assistance system based on multi-sensor object tracking and compares them with regard to their mean performance. The results show that while the state-of-the-art model is advantageous in scenarios with low process noise, it is outmatched by the time-triggered model in the case of high process noise, i.e., in complex situations with high dynamics.


I. INTRODUCTION
In 2009, 397,448 people were injured and 4,154 people were killed in road accidents in Germany. Most of the fatalities were caused by situations in which a driver did not react properly or quickly enough to an unexpected event [10]. To make roads safer, many automotive original equipment manufacturers and suppliers work on the development of advanced driver assistance systems based on object tracking [26]. Advanced driver assistance systems consist of one or multiple sensor(s), an object tracking subsystem and one or multiple feature service subsystem(s) interconnected via a bus system.
As the number and potential of advanced driver assistance system features grow, the question of how to guarantee the correctness of their services becomes more and more important [52], [53]. Although advanced driver assistance system feature services "only" assist while the driver remains in full control, an incorrect advanced driver assistance system feature service can undoubtedly cause dangerous situations, as the capability of human beings to adapt quickly to unexpected events is restricted [18], [61].
The basis for achieving a correct advanced driver assistance system feature service is an exact assessment of the surrounding environment. This requires the tracking of all relevant objects within a feature service specific range and maintaining real-time (RT) images of the object states whose deviations from reality do not exceed a feature specific upper bound (feature specific accuracy demand) [59]. As real-time images of evolving object states are invalidated by the progression of time, they have to be updated within a well-defined time interval (accuracy interval) with object state observations that satisfy a well-defined accuracy level [32]. As a result, the lowest possible accuracy level of object state observations, the maximum object state evolution and the maximum system latency that can occur in an advanced driver assistance system have to be taken into account when determining which feature specific accuracy demand can be satisfied [33]. Because the accuracy level of an object state observation from a single sensor may be subject to fluctuations [48], [7], [27], single-sensor advanced driver assistance systems are often limited to low feature service specific accuracy demands. One approach to deal with this problem comprises updating the real-time images of the object states with redundant object state observations derived from heterogeneous sensors [14]. (During this work, M. Koplin was with Volkswagen AG, Germany.)
In contrast to single-sensor advanced driver assistance systems, where it is common to use point-to-point connections between sensor and object tracking subsystem, the use of multiple heterogeneous sensors in multi-sensor advanced driver assistance systems leads to the use of a bus system that interconnects the sensors and the object tracking subsystem [46]. In most state-of-the-art multi-sensor advanced driver assistance systems, the object state observations are transmitted over a controller area network (CAN) bus system [58], which is the dominant bus system in the automobile industry. However, the transmission of object state observations from a sensor to the object tracking subsystem may be delayed by other data traffic transmitted over the bus system, leading to unpredictable transmission delays [37]. Because of this, it is impossible to guarantee an update of object state observations within a predefined time interval. To overcome this shortcoming, a paradigm shift toward time-triggered multi-sensor advanced driver assistance systems based on the principles of the time-triggered architecture presented by Kopetz et al. [34] seems feasible. According to said principles, a time-triggered deterministic bus system establishes a global time base and synchronizes the clocks of all nodes, which allows for deterministic sensor scheduling, measurement transmission and processing, and thus leads to guaranteed accuracy intervals, bounded detection latency for timing and omission errors, replica determinism and temporal composability. However, this paradigm shift is expected to affect the mean system performance, as the gained temporal determinism may introduce additional delays and demand supplementary hardware resources [31], [46].
It is the objective of this paper to study how the mean system performance is affected by the paradigm shift toward time-triggered multi-sensor advanced driver assistance systems. Due to the difficulty of accomplishing reproducible conditions for the high number of test drives that would be necessary to produce statistically meaningful results for a set of scenarios in field tests [21], this paper tackles the posed question through simulation.

II. RELATED WORK

A. Sensor Scheduling
The scheduling of sensors has received considerable attention over the last years, especially in the fields of military [54] and robotics [20]. This is due to the fact that in both fields multiple sensors provide object state observations for one or multiple feature services under a dynamically changing environment.
If environmental conditions or the demand for object state observations changes drastically over time, the activation of the most appropriate sensor set can lead to improved results [57], [60] or the reduction of sensor usage costs [38].
In [44], Mehra uses different norms of the observability and the Fisher information matrix [51] as criteria for the optimization of measurement scheduling and shows that it is preferable to cluster measurements around specific design points t_k.
Avitzour and Rogers [2] present a theory of optimal measurement scheduling for least squares estimation which is based on the assumption that the cost of a measurement is inversely proportional to the variance of measurement noise.
In [45], Mourikis et al. compute the localization uncertainty of a group of mobile robots wherein the localization uncertainty is determined by the covariance matrix of the equivalent continuous-time system at a steady state.
However, the existing literature lacks a study of how the mean system performance is affected by a paradigm shift from an indeterministic scheduling and transmission concept, where sensors run freely and sample measurements at the highest rate, to a time-triggered scheduling and transmission concept, where sensors have a fixed sampling rate and measurement time stamps can be controlled.

B. Out-of-Sequence-Measurements
An object tracking subsystem processes object state observations provided by sensors and provides real-time images of the object states to the feature service subsystem. The fusion of object state observations and related processes are usually triggered by incoming measurements and the demand for outgoing real-time images of the object states.
If the time stamp of an object state observation is not more recent than the instant which the associated object state represented before a retrodiction, the corresponding measurement is classified as an out-of-sequence measurement (OOSM). Figure 1 depicts a situation with an out-of-sequence measurement problem which is independent of communication system issues, i.e., the transmission times of object state observations from both sensors to an object tracking subsystem, ∆t_PT^{etb1/ttb1} and ∆t_PT^{etb2/ttb2}, are approximately equal. Due to different observation preprocessing times, ∆t_PT^sens1 > ∆t_PT^sens2, the measurement originating from sensor 2 is received earlier at the object tracking subsystem than the measurement originating from sensor 1, although the measurement from sensor 2 represents a more recent snapshot of the surrounding environment. To deal with out-of-sequence measurements, two approaches have been extensively explored throughout the fusion community, i.e., the buffered (BUFF) approach and the advanced algorithms (ADVA) approach.
1) BUFF approach: The BUFF approach is based on storing measurements in a measurement buffer. In the buffer, the measurements are sorted chronologically and the oldest information is provided for fusion.
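The buffering-and-sorting behavior described above can be sketched in a few lines; the class and variable names below are purely illustrative and not taken from any implementation in the paper:

```python
import heapq

class MeasurementBuffer:
    """Minimal sketch of the BUFF approach: measurements are held in a
    min-heap keyed by their time stamp, so that the chronologically
    oldest observation is always released for fusion first."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker for measurements with equal time stamps

    def push(self, timestamp, measurement):
        heapq.heappush(self._heap, (timestamp, self._seq, measurement))
        self._seq += 1

    def pop_oldest(self):
        """Return (timestamp, measurement) of the oldest buffered entry."""
        timestamp, _, measurement = heapq.heappop(self._heap)
        return timestamp, measurement

buf = MeasurementBuffer()
buf.push(0.30, "z_sensor2")   # arrives first, but carries a newer time stamp
buf.push(0.25, "z_sensor1")   # arrives later, but carries an older time stamp
t, z = buf.pop_oldest()       # the older measurement is provided for fusion first
```

Note how the measurement from sensor 1 is released first even though it was received second, which is exactly the reordering that avoids out-of-sequence fusion at the cost of added latency.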
Kaempchen et al. [28] discuss the maximum latency (here defined as the time difference between the instant of measurement fusion and the measurement time stamp) that arises when the BUFF approach is used to guarantee the fusion of chronologically ordered measurements.
The time needed to process these object state observations usually depends on the complexity of the surrounding environment, i.e., the number of object state observations and the number of possible associations. In peak load scenarios, the increasing computational load caused by the growing number of tracked objects may reach a critical level. As a result, the time during which incoming measurements have to be kept in the buffer before they can be processed constantly increases.
2) ADVA approach: In the ADVA approach, received out-of-sequence measurements are directly fused using advanced algorithms, which exploit the correlation between the current Kalman filter state and the object state observations arriving too late.
There are several ADVA approaches that deal with one-lag and multi-lag delays, filtering and tracking, linear and non-linear systems as well as single-model and multi-model systems (in the following, t_κ refers to the out-of-sequence measurement time stamp and t_k refers to the time stamp of the measurement which updated the fusion before the out-of-sequence measurement was received).
Larsen et al. present a suboptimal multi-lag filtering algorithm for linear systems [36]. If a measurement is expected to arrive out of sequence, a correction term derived from the object state observation error covariance matrices and the estimated object state error covariance matrix is set up after the last measurement representing the surrounding environment at a time point before t_κ is fused. Said correction term is then updated whenever measurements are fused until the out-of-sequence measurement is available. As soon as the delayed measurement is available, the correction term is used to update the current object state estimate with the delayed measurement.
Bar-Shalom presents an optimal one-lag tracking algorithm for linear systems [3]. The delayed measurement is incorporated by computing the update of an object state at time point t_k with the residual of the out-of-sequence measurement and the state retrodicted to the time point t_κ as well as the covariance matrices between the object states at t_k and t_κ. In [6], [5], Bar-Shalom et al. extend the presented one-lag algorithm to deal with multi-lag out-of-sequence measurements by virtually compressing the information of the measurements between t_κ and t_k into one update. This approach is further extended to a multi-model approach in [4].
Mallick et al. describe an extension to the algorithm presented in [3] toward a multi-lag, single-model and a one-lag, multi-model approach [39]. In [41], Mallick et al. present a multi-lag, single-model algorithm that includes data association, likelihood computation and hypothesis management, and in [40] a particle filter for out-of-sequence measurement treatment.

III. MODEL OF A STATE-OF-THE-ART MULTI-SENSOR ADVANCED DRIVER ASSISTANCE SYSTEM
In the following, it is assumed that the advanced driver assistance system consists of two sensors, an object tracking subsystem, and a feature service subsystem, interconnected via a bus system, as schematically depicted in Figure 2.

A. Sensors
In an automotive environment, many obstacle detection systems achieve good results with a combination of active sensors such as radars and lasers and passive sensors such as cameras [12]. Thus, sensor 1 is an abstraction of an automotive vision sensor providing position observations, z_1, and sensor 2 is an abstraction of an automotive radar or laser sensor providing position and velocity (Doppler) observations, z_2, which are calculated with reference to a Cartesian coordinate frame.
The object state observation vectors can be decomposed into quantities of the true object state vector x mapped by an observation matrix H and a Gaussian distributed error vector r with zero mean [49], as shown in (1) and (2):

z_1 = H_1 x + r_1,    (1)
z_2 = H_2 x + r_2.    (2)
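As a one-dimensional sketch, the two observation models can be simulated as follows; the state vector layout [position, velocity, acceleration] matches the white-noise jerk model used later, but all variance values are illustrative placeholders rather than the figures from the cited sensor-accuracy papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# True object state x = [position, velocity, acceleration]^T (1-D sketch).
x = np.array([10.0, 2.0, 0.5])

# Sensor 1 (vision): position only.  Sensor 2 (radar/laser): position + Doppler.
H1 = np.array([[1.0, 0.0, 0.0]])
H2 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])

# Assumed position-independent observation error covariances (illustrative).
R1 = np.diag([0.5 ** 2])
R2 = np.diag([0.3 ** 2, 0.1 ** 2])

# Zero-mean Gaussian observation errors r_i and observations z_i = H_i x + r_i.
r1 = rng.multivariate_normal(np.zeros(1), R1)
r2 = rng.multivariate_normal(np.zeros(2), R2)
z1 = H1 @ x + r1
z2 = H2 @ x + r2
```

The dimensions make the asymmetry explicit: z_1 is a scalar position observation while z_2 stacks position and velocity, which is why the two sensors contribute differently to the fusion.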
The object state observation error covariance matrices, R_1 = E[r_1 r_1^T] and R_2 = E[r_2 r_2^T], are assumed to consist of position-independent variance values (an example for annotating distance sensor values with variance values is given in [15]; particular values for the accuracy of vision sensors can be found in [47], [43], [42]; for the accuracy of radar or laser sensors see [17], [19], [22]; for the conversion of range and bearing measurements to Cartesian coordinate measurements see [13]).
The object state observation error covariance matrices are assumed to be slightly higher than specified in the cited papers. This is due to the fact that the specified precision of both sensors refers to measuring coordinates of points or edges of the non-planar contour of a vehicle.
However, in scenarios where the measured coordinates of points or edges are used for estimating a vehicle's geometrical center, observations of the vehicle's dimensions such as width and length are additionally required [55]. When estimating the vehicle's geometrical center using width and length observations, the potential inaccuracy of the width and length observations has to be taken into account.
Furthermore, the reflection of a laser scanner or radar beam on a vehicle contour or the edges that a vision sensor detects when analyzing a vehicle contour may shift during a maneuver due to changing aspect angles. This shifting adds further uncertainty to the estimation of the vehicle's geometrical center and has to be taken into account in the tracking process, for example, by increasing the object state observation error covariance matrices.
The preprocessing times of the sensors are assumed to depend on the complexity of the surrounding environment. It is assumed, however, that there are upper bounds for the sensor preprocessing times, as each sensor detects no more than a maximum number of objects. Accordingly, the preprocessing time of sensor 1 is assumed to vary within a range of c · 160 ms to 160 ms and the preprocessing time of sensor 2 within a range of c · 80 ms to 80 ms due to changes in the complexity of the environment [56], where c accounts for different complexity variances.
Furthermore, it is assumed that the sensors do not continuously provide object state observations, but tend to lose an object from time to time, which can result, for example, from object occlusions, difficulties in the observation preprocessing or a badly working association process. The recognition ability is modeled for both sensors independently by a Markov process with binary states j = 0 and j = 1, where 0 indicates that a sensor has not observed an object and 1 indicates that a sensor has observed an object; the Markov process is governed by a transition probability matrix over these two states.
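The two-state recognition process described above can be simulated as follows. The paper's concrete transition probability matrix is not reproduced here, so the probabilities below are illustrative placeholders only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state Markov chain for one sensor's recognition ability:
# state 0 = object not observed, state 1 = object observed.
# P[j, k] = Pr(next state = k | current state = j); values are illustrative.
P = np.array([[0.3, 0.7],
              [0.1, 0.9]])

def simulate_recognition(steps, state=1):
    """Simulate the binary recognition state over a number of sensor cycles."""
    states = []
    for _ in range(steps):
        state = rng.choice(2, p=P[state])
        states.append(state)
    return states

seq = simulate_recognition(1000)  # e.g. modulates which cycles deliver a measurement
```

Running one independent chain per sensor then gates whether that sensor's observation is delivered to the tracking subsystem in a given cycle.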

B. Bus System
The bus system within the state-of-the-art model is assumed to be a CAN which operates event-triggered using a carrier sense multiple access/collision resolution scheme. Furthermore, it is assumed that the CAN is exclusively used for transmitting object state observations. The time for transmitting the object state observation vectors from a sensor to the object tracking subsystem is assumed to be ∆t_PT^etb = 2 ms.

C. Object Tracking Subsystem
It is further assumed that associated in-sequence object state observations and predicted images of the object states are fused by a Kalman filter algorithm using a white-noise jerk model [50]. The time required for fusing all object state observations from one sensor is assumed to depend on the complexity of the environment, as every additional object increases the required fusion time.
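The fusion step can be sketched as follows. The transition matrix F and process noise covariance Q below are the textbook discretization of the white-noise jerk model for a state [position, velocity, acceleration]; they may differ in detail from the paper's own (not reproduced) equations, and all numeric values are illustrative:

```python
import numpy as np

def wnj_matrices(T, q):
    """State transition F and process noise covariance Q of the discretized
    white-noise jerk model for x = [p, v, a]^T, sampling interval T and
    process noise power spectral density q."""
    F = np.array([[1.0, T, T**2 / 2],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                      [T**4 / 8,  T**3 / 3, T**2 / 2],
                      [T**3 / 6,  T**2 / 2, T]])
    return F, Q

def kf_step(x, P, z, H, R, F, Q):
    """One Kalman filter predict + update cycle for an in-sequence measurement."""
    x = F @ x                        # predict state to the measurement time
    P = F @ P @ F.T + Q              # predict error covariance
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # update with the observation residual
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

F, Q = wnj_matrices(T=0.08, q=1.0)   # e.g. an 80 ms sensor cycle
H = np.array([[1.0, 0.0, 0.0]])      # position-only observation
R = np.array([[0.09]])               # illustrative observation variance
x, P = np.zeros(3), np.eye(3)
x, P = kf_step(x, P, np.array([0.5]), H, R, F, Q)
```

After the update, the position variance P[0, 0] has dropped well below both its predicted value and the observation variance, which is the mechanism by which redundant observations raise the achievable accuracy level.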
As the maximum number of object state observations is assumed to be restricted, there exists an upper bound for the time required to fuse in-sequence measurements, ∆t_PT^fusISM ≤ UB_fus. In order to be applicable to typical sensor configurations, the upper bound is assumed to range between 2 ms and 25 ms, UB_fus ∈ {2, 5, 10, 15, 20, 25} ms. Furthermore, it is assumed that ∆t_PT^fusISM varies within a range of c · UB_fus to UB_fus depending on the complexity of the surrounding environment, where c accounts for different complexity variances as modeled in subsection V-A.
The occurrence of out-of-sequence measurements is either dealt with by the BUFF approach (buffering and chronologically sorting measurements) or the ADVA approach as presented by Bar-Shalom in [5]. The ADVA approach is assumed to demand additional processing time, ∆t_PT^fusOOSM. Furthermore, the object tracking subsystem does not maintain a buffer of object state observations; newer observations replace older observations from the same sensor.

At predefined points in time, the object tracking subsystem starts to predict images of the object states in order to generate real-time images of the object states which are provided to the feature service subsystem. The time required for predicting real-time images of the object states is assumed to be ∆t_PT^pre. The real-time images of the object states are then transmitted to the feature service subsystem. It is assumed that the control loop performed within the feature service subsystem has a frequency of 25 Hz, which is a typical value for vehicle control [35], [24], [25].

Within the state-of-the-art model as depicted in Figures 3(a) and 3(b), the two sensors ("sensor 1" and "sensor 2") measure with cycle times, ∆t_CT^sens1 and ∆t_CT^sens2, that vary over real-time and are equal to the corresponding sensor preprocessing times, ∆t_PT^sens1 and ∆t_PT^sens2. The sensor preprocessing times are not constant due to the complexity variance as described in subsection V-A. The phases of the sensors are uncontrolled as the internal sensor clocks are not synchronized.

D. State-of-the-art Model Schedule
The transmission of an object state observation is indicated in Figure 3(a) and Figure 3(b) by bars labeled "activity of bus system".
As soon as object state observations are received by the object tracking subsystem and no task is processed simultaneously, the object position observations can be fused with associated images of the object states ("fusion task"), thereby taking into account the particulars of out-of-sequence measurements.
In Figure 3(a), the received object state observations are sorted chronologically within an object state observation buffer, which allows the fusion of all object state observations without the use of advanced algorithms. However, as can be seen from Figure 3(a), the buffering of object state observations adds delays to the system. In Figure 3(b), the received object state observations are fused as soon as sufficient processing resources are available. The fusion process task interval, ∆t_PT^fusISM, varies as described above.
Every ∆t_CT^pre, real-time images of the object states are generated ("prediction cycles") and transmitted over the bus system to the feature service subsystem.

IV. PARADIGM SHIFT TO TIME-TRIGGERED MODEL

A. Sensors
The sensors in a time-triggered multi-sensor advanced driver assistance system are assumed to have fixed sensor cycle times that are equal to the maximum sensor preprocessing times, ∆t_PT^sens1 = 160 ms and ∆t_PT^sens2 = 80 ms, i.e., the sensors are scheduled to account for the worst case execution time of observation preprocessing [30]. The sensor phase, ∆t_PH^sens2, can be controlled and chosen by a system designer in order to arrive at an optimal schedule.

B. Bus System
The bus system within the time-triggered model is assumed to be time-triggered using a TDMA scheme, which results in well-defined transmission slots and bounded transmission jitter. ∆t_CT^ttb is chosen to be a factor of ∆t_CT^sens1 and ∆t_CT^sens2 and has the typical value of 10 ms [16], [23]. The time for transmitting object state observation vectors from a sensor to the object tracking subsystem is assumed to be ∆t_PT^ttb = 2 ms [16]. Please note that the transmission delays introduced by the event-triggered bus system as described in subsection III-B and the time-triggered bus system are assumed to be equal. This assumption seems feasible as the focus of this paper is not on any particular event-triggered or time-triggered bus system but on the paradigm shift toward time-triggered advanced driver assistance systems.
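The divisibility constraint between the bus cycle and the sensor cycles can be made concrete with a small sketch; the slot layout and function names are illustrative, not part of the paper's model:

```python
# Minimal sketch of the TDMA timing relation: the bus cycle (10 ms) must
# divide both sensor cycle times (160 ms and 80 ms) so that every sensor
# transmission falls into a well-defined, pre-assigned slot.
BUS_CYCLE_MS = 10
SENSOR_CYCLES_MS = {"sensor1": 160, "sensor2": 80}

# Divisibility check: each sensor cycle is an integer multiple of the bus cycle.
assert all(c % BUS_CYCLE_MS == 0 for c in SENSOR_CYCLES_MS.values())

def transmission_slots(sensor_cycle_ms, horizon_ms):
    """Start times (ms) of the bus cycles in which the sensor transmits."""
    return list(range(0, horizon_ms, sensor_cycle_ms))

slots1 = transmission_slots(160, 320)   # sensor 1 transmits at 0 ms and 160 ms
slots2 = transmission_slots(80, 320)    # sensor 2 transmits every 80 ms
```

Because the slot times are fixed at design time, transmission delay is bounded by the slot offset plus ∆t_PT^ttb, which is precisely the temporal determinism the event-triggered CAN cannot provide.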

C. Object Tracking Subsystem
The object tracking subsystem fuses the incoming object state observations with associated images of the object states, taking into account the particulars of out-of-sequence measurement processing.
The time-triggered model schedule is set up according to the upper bound for the fusion process task interval, UB_fus, which is assumed to vary between 2 ms and 25 ms, depending on the hardware resources of the object tracking subsystem.
The occurrence of out-of-sequence measurements is either dealt with by a BUFF approach or an ADVA approach as presented by Bar-Shalom in [5].
At predefined points in time, the object tracking subsystem starts to predict images of the object states in order to generate real-time images of the object states. The scheduling of the prediction can be chosen by a system designer in order to arrive at an optimal schedule. The real-time images of the object states are then transmitted to the feature service subsystem. Please note that in the time-triggered synchronized configuration as depicted in Figure 5, the received object state observations are fused as soon as sufficient processing resources are available, as out-of-sequence measurements are avoided by design.

D. Time-Triggered Model Schedule
Every ∆t_CT^pre, real-time images of the object states are predicted from the fused images of the object states and then transmitted via the bus system to the feature service subsystem ("prediction cycles"). For the time-triggered unsynchronized ADVA configuration, the prediction cycle phase is ∆t_PH^pre = ∆t_PT^fusOOSM + 2 · ∆t_PT^ttb.
For the time-triggered unsynchronized BUFF configuration, the prediction cycle phase is chosen depending on whether 2 · ∆t_PT^fusISM + ∆t_PT^pre is smaller or greater than ∆t_CT^pre. Due to the deterministic nature of the time-triggered approach and the fact that the jitter of all processes is assumed to be sufficiently small compared to the cycle times and can therefore be neglected, the whole system schedule is defined by the constant cycle times and the phases of all processes.

V. ENVIRONMENT MODEL
The environment is modeled with regard to two aspects: the variance of its complexity, i.e., how the preprocessing times of the sensors and the object tracking subsystem depend on the environment, and the process noise, which is a measure of how well the employed Kalman filter prediction model describes reality.

A. Modeling Environment Complexity
The changes in the complexity of the environment are modeled by a random walk with step size 1 ms. The varying object observation preprocessing times and the varying object observation fusion time are modeled by Markov chains comprising states from c · 160 ms to 160 ms, c · 80 ms to 80 ms, and c · UB_fus to UB_fus, respectively.

B. Process Noise
The process noise of the object state evolution is assumed to be white with power spectral density q, and to account for modeling errors, e.g., higher order derivatives that are not contained in the object state vector. In an automotive environment, the choice of a single specific q is problematic since, for example, q in a traffic jam environment will be much smaller than q in a freeway environment.
As a result, q is assumed to vary in the range q ∈ [0.01, 100] m²/s⁵ to compensate for not modeling the derivative of acceleration in the white-noise jerk model of subsection III-C (see also [1], [9], [29], [11]).

VI. PERFORMANCE MEASURE
As mentioned in the introduction, the basis for achieving a correct advanced driver assistance system feature service is a correct assessment of the surrounding environment. The correctness of this assessment depends on the deviations between the real-time images of the object states and reality, which have to be smaller than a feature specific upper bound.
Assuming that all relevant objects are detected by the sensors and that the number of false positives ("ghost" objects) and false negatives (non-detects) is negligible (otherwise the sensors would not be suited for use in advanced driver assistance systems), the mean performance of both models can be expressed by the mean error covariance matrix trace of the real-time images of the object states (for the error covariance matrix trace see also [8]). Since the state-time (ST) of the images of the object states is delayed due to object state observation preprocessing, transmission and fusion, it is assumed that the real-time images of the object states are predicted from the state-time images of the object states using the object state evolution model of the Kalman filter, with t_RT = ∆t_PH^pre + n · ∆t_CT^pre and t_ST = t_ST(t_RT).
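This performance measure can be sketched as follows: the state-time covariance is propagated over the interval t_RT − t_ST with the same evolution model used by the filter, and the traces are averaged. The matrices reuse the textbook white-noise jerk discretization, and all numeric values are illustrative:

```python
import numpy as np

def predicted_rt_covariance(P_st, dt, q):
    """Propagate a state-time error covariance P_st forward by dt = t_RT - t_ST
    using the white-noise jerk evolution model (x = [p, v, a]^T), yielding the
    real-time error covariance whose trace serves as the performance measure."""
    F = np.array([[1.0, dt, dt**2 / 2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[dt**5 / 20, dt**4 / 8, dt**3 / 6],
                      [dt**4 / 8,  dt**3 / 3, dt**2 / 2],
                      [dt**3 / 6,  dt**2 / 2, dt]])
    return F @ P_st @ F.T + Q

# Mean performance over a (hypothetical) sequence of prediction cycles:
P_st = np.diag([0.05, 0.2, 0.5])       # illustrative state-time covariance
delays = [0.04, 0.06, 0.08]            # state-time-to-real-time intervals in s
traces = [np.trace(predicted_rt_covariance(P_st, d, q=1.0)) for d in delays]
mean_trace = sum(traces) / len(traces)
```

The traces grow monotonically with the delay, which is why configurations with longer state-time-to-real-time intervals are penalized more heavily as q increases, the effect analyzed in Section VIII.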

VII. SIMULATION RESULTS
We have compared the state-of-the-art and time-triggered configurations for different regions of the parameter space (spanned by the environment parameters and the upper bound for the fusion processing time). Figure 6 depicts a three-dimensional parameter space grid spanned by the parameters for complexity variance, c, upper bound for the fusion processing time, UB_fus, and process noise power spectral density, q. Therein, every grid point is identified by a symbol indicating the configuration which is best for the respective parameter set with regard to the simulated mean performance. The symbols are referenced by numbers 1 to 5 in the figure legend, the numbers referring to the model configurations:

A. Best Configurations
• State-of-the-art BUFF (1);
• State-of-the-art ADVA (2);
• Time-triggered unsynchronized BUFF (3);
• Time-triggered unsynchronized ADVA (4); and
• Time-triggered synchronized (5).

Figure 6 shows that the state-of-the-art ADVA configuration (indicated by blue squares) is best for most grid points in the three-dimensional parameter space grid spanned by complexity variance, upper bound for the fusion processing time, and process noise power spectral density.
However, there are boundary grid points where the state-of-the-art ADVA configuration is outperformed by other configurations.
For big to medium complexity variance in combination with a slow object tracking subsystem and small process noise, the state-of-the-art BUFF configuration is best. For small complexity variance in combination with a medium-slow to medium-fast object tracking subsystem and medium to high process noise, the time-triggered unsynchronized ADVA configuration (indicated by red triangles) is best.
The time-triggered synchronized configuration (indicated by yellow stars) is best for small complexity variance in combination with a fast or slow object tracking subsystem, and small to high process noise power spectral density.
It is also noteworthy that the time-triggered unsynchronized BUFF configuration is suboptimal over the whole parameter region. In the top two-dimensional parameter grids, every grid point is identified by a symbol indicating the respective best-suited configuration with respect to the mean performance.

B. Best State-of-the-art and Time-Triggered Configurations
The three-dimensional figures depict the ratio of the best state-of-the-art configuration's mean performance to the best time-triggered configuration's mean performance.
Every set of three subfigures represents one of q = 0.01 m²/s⁵, q = 0.1 m²/s⁵, q = 1 m²/s⁵, q = 10 m²/s⁵ and q = 100 m²/s⁵. Figures 7(a), 8(a), 9(a), 10(a), and 11(a) show that the state-of-the-art BUFF configuration outmatches the state-of-the-art ADVA configuration for slow object tracking subsystems, UB_fus = 25 ms, in combination with small to medium process noise power spectral density, q ∈ {0.01, 0.1, 1} m²/s⁵. However, for the remaining parameter grid points, the state-of-the-art ADVA configuration yields better results than the state-of-the-art BUFF configuration.
Regarding the comparison of the best state-of-the-art and the best time-triggered configurations, Figures 7(b), 8(b), 9(b), 10(b), and 11(b) show that the difference between the best state-of-the-art configurations and the best time-triggered configurations ranges from −15% to +6% of the mean real-time error covariance matrix trace of the respective best time-triggered configuration.
For low process noise power spectral density, q = 0.01 m²/s⁵, the difference ranges from −15% to +1%, and for high process noise power spectral density, q = 100 m²/s⁵, the difference ranges from −10% to +2%. It should be noted, however, that the biggest difference of +6% is found for medium process noise power spectral density, q = 1 m²/s⁵.

VIII. ANALYSIS OF SIMULATION RESULTS
As the time-triggered configurations schedule all processes in accordance with their worst case execution time, the mean performance measures of the time-triggered model configurations are unaffected by a decrease of the lower bounds for sensor and fusion preprocessing times, indicated by a decrease of the complexity variance parameter. As a state-of-the-art configuration may start a new task as soon as the preceding task has been finished, the state-of-the-art configurations benefit from such a decrease.

The time-triggered synchronized configuration is able to fuse all object state observations but has greater values in the sequence of intervals between state-time and real-time compared to the state-of-the-art ADVA configuration. Accordingly, an increase in the process noise power spectral density, which increases the sequence of integrated process noise traces to a greater extent than the sequence of object state state-time image error covariance matrix traces, is unfavorable for the time-triggered synchronized configuration, as the time-triggered synchronized configuration has the greater values in the sequence of intervals between state-time and real-time and therefore the greater integrated process noise traces. The reason why the state-of-the-art ADVA configuration is outmatched by the time-triggered synchronized configuration for medium process noise power spectral density lies in the fact that the state-of-the-art ADVA configuration cannot fuse all object state observations of sensor 1. When the process noise power spectral density decreases, the influence of the integrated process noise traces is diminished and the focus shifts toward the sequence of object state state-time image error covariance matrix traces. Here, the state-of-the-art BUFF configuration outmatches the time-triggered synchronized configuration due to the higher number of object state observation sets that are fused.
The reason why this behavior is also observed for small lower bounds for sensor and fusion preprocessing times is obvious when considering that the long times required to fuse an object state observation set and the high number of uncoordinated object state observation sets may lead to fusion "jams".
The time-triggered unsynchronized ADVA configuration has a sequence of object state state-time image error covariance matrix traces that is unaffected by a variation in the upper bound for the fusion processing time but reacts with a 1.5 times greater variation in the sequence of intervals between state-time and real-time. The time-triggered synchronized configuration experiences a jump in the sequence of object state state-time image error covariance matrix traces when the upper bound for the fusion processing time changes from UB_fus = 15 ms to UB_fus = 20 ms. Furthermore, its sequence of intervals between state-time and real-time varies proportionally for UB_fus ≥ 20 ms and twice as strongly for UB_fus ≤ 15 ms. Accordingly, the sequence of intervals between state-time and real-time increases more strongly in the time-triggered synchronized configuration than in the time-triggered unsynchronized configuration for UB_fus ≤ 15 ms and less strongly for UB_fus ≥ 20 ms, which leads to the observed behavior.
The observed interrelation derives from the influence of the sequence of integrated process noise traces, which increase with increasing process noise power spectral density. In this regard, the jump in the sequence of object state state-time image error covariance matrix traces, which reacts unfavorably to the upper bound for the fusion processing time changing from UB_fus = 15 ms to UB_fus = 20 ms, becomes greater and makes it impossible for the time-triggered synchronized configuration to remain competitive.

IX. CONCLUSION
In this paper a state-of-the-art model and a time-triggered model for multi-sensor advanced driver assistance systems have been compared. In the state-of-the-art model, the sensor phases are not controllable and the sensor cycle times are equal to the sensor preprocessing times, which vary within a given range according to a Markov chain with a given transition probability matrix. The state-of-the-art model can be operated in two configurations: a state-of-the-art BUFF configuration, where object state observations are buffered and chronologically sorted before fusion, and a state-of-the-art ADVA configuration that directly fuses out-of-sequence measurements (OOSMs) using an ADVA approach.
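The difference between the two OOSM treatments can be illustrated with a minimal buffer sketch. The class below is a hypothetical Python illustration, not the simulated implementation: it mimics the BUFF idea by holding back observations for a fixed horizon and releasing them in chronological order, whereas an ADVA-style fusion would process each observation immediately on arrival.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Observation:
    t: float                          # sampling time (state-time), sort key
    sensor: int = field(compare=False)

class ChronoBuffer:
    """Minimal BUFF sketch: buffer observations until a delay horizon has
    passed, then release them chronologically sorted for fusion."""
    def __init__(self, horizon):
        self.horizon = horizon  # assumed maximum out-of-sequence delay
        self.heap = []

    def push(self, obs):
        heapq.heappush(self.heap, obs)

    def release(self, now):
        # Release every observation old enough that no earlier
        # observation can still arrive.
        out = []
        while self.heap and self.heap[0].t <= now - self.horizon:
            out.append(heapq.heappop(self.heap))
        return out

# Observations arriving out of sequence leave the buffer in order.
buf = ChronoBuffer(horizon=0.05)
for t, s in [(0.30, 1), (0.10, 2), (0.20, 1)]:
    buf.push(Observation(t, s))
print([o.t for o in buf.release(now=1.0)])
```

The price of the chronological ordering is the extra buffering delay, which is exactly the latency the ADVA-style direct fusion avoids.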
In the time-triggered model, the sensor phases are controllable, the sensor cycle times are fixed and equal to the sensors' worst-case preprocessing times. Furthermore, a time-triggered bus system with fixed transmission slots is used that transmits the object position observations from the sensors to the object tracking subsystem. The time-triggered model can be operated in various configurations, from which three phase-aligned configurations are selected for further analysis: a time-triggered unsynchronized BUFF configuration, a time-triggered unsynchronized ADVA configuration, and a time-triggered synchronized configuration, where the object state observation sampling of both sensors is either unsynchronized or synchronized.
The mean performance of both models has been evaluated by simulations with multiple configurations differing in the sensor and bus system schedules and the treatment of OOSMs. The results show that for the chosen parameter space, the state-of-the-art ADVA configuration yields the best results. However, the results also show that there are points in parameter space where the state-of-the-art ADVA configuration is outmatched by the state-of-the-art BUFF configuration, the time-triggered unsynchronized ADVA configuration or the time-triggered synchronized configuration.
Accordingly, the state-of-the-art configurations are favorable when the sensor preprocessing times show very high variations. However, with decreasing sensor preprocessing time variation, the time-triggered configurations outmatch the state-of-the-art configurations for two reasons. The first reason is the increasing mean of the sequence of intervals between state-time and real-time. The second reason is that the time-triggered configurations show a smaller variation in the sequence of intervals between state-time and real-time, which is advantageous considering the higher-order dependence of the mean trace of the integrated process noise on this interval. As a result, the state-of-the-art configurations show weaknesses in situations of high risk potential, because such situations are characterized by a high number of objects, which leads to low sensor preprocessing time variation, and/or a fast changing environment, which is represented by a high process noise power spectral density.
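The higher-order dependence mentioned above can be made concrete: for a constant-velocity model (an illustrative assumption, not necessarily the paper's motion model), the position component of the integrated process noise grows with the cube of the extrapolation interval, so by Jensen's inequality a jittery interval sequence accumulates more process noise on average than a steady one with the same mean.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 1.0  # process noise power spectral density (arbitrary units)

def mean_position_noise(deltas):
    # Position component of the integrated process noise of a 1-D
    # constant-velocity model: q * delta**3 / 3, convex in delta.
    return np.mean(q * deltas**3 / 3)

# Two interval sequences with the SAME mean (0.05 s) but different spread.
steady  = np.full(100_000, 0.05)
jittery = rng.uniform(0.0, 0.10, 100_000)

# Because delta**3 is convex, the jittery sequence accumulates roughly
# twice the mean position noise of the steady one (Jensen's inequality):
# E[delta^3] = 2.5e-4 for Uniform(0, 0.1) vs. 1.25e-4 for constant 0.05.
print(mean_position_noise(steady), mean_position_noise(jittery))
```

This is why a small variation in the state-time-to-real-time interval, as guaranteed by the time-triggered configurations, pays off disproportionately when the process noise power spectral density is high.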
Given the aforesaid, it can be concluded that the time-triggered control paradigm is well suited for advanced driver assistance systems equipped with sensors of the current generation, as positive features such as guaranteed accuracy intervals, bounded detection latency for timing and omission errors, replica determinism, and temporal composability are achieved at the cost of only a minimal degradation of the mean system performance.

ACKNOWLEDGMENTS
This work was supported by Lakeside Labs GmbH, Klagenfurt, Austria, and funding from the European Regional Development Fund and the Carinthian Economic Promotion Fund (KWF) under grant 20214/21532/32604 and by the Austrian FWF project TTCAR under contract No. P18060-N04. Special thanks go to Kornelia Lienbacher for proofreading the paper.