• First half of the Introduction completely re-written

    From E Douglas Jensen@21:1/5 to All on Mon Aug 13 07:46:33 2018
    Introduction (to the Preview)

    This is a work-in-progress, highly condensed, and less formal preview of my work-in-progress book An Introduction to Fundamental Principles of Timeliness in Dynamically Real-Time Systems [Jensen 201x].

    The purpose of this preview is to provide early access to some abbreviated and simplified content of the book, with the hope of sooner fostering new research on its topics.

    This Introduction first briefly summarizes the keystone ideas that the book and its preview discuss, and why I am writing about them. Then it describes the topic of each chapter. It is prerequisite reading for the rest of the preview.

    The preview includes in-line references to freely accessible material on the Internet. The book instead has conventional published references.

    The primary objective of the book and this preview is to introduce a scholarly yet practical foundation for timeliness and predictability of timeliness at both the action (e.g., task) and system levels. Those are the core properties of all real-time systems. Here, they are the primary properties that distinguish the general case of dynamic real-time systems from the conventional subset of static ones, and thus are the basis for this foundation.

    Such a foundation is important (and this one has proven invaluable) for generalizing conventional real-time system models and practices from their historical focus on the narrow special case of predominantly static, periodic-based (informally termed “hard”) real-time systems.

    That generalization encompasses all real-time systems—defined in this preview’s Introduction (temporarily informally) to be systems for which timeliness and predictability of timeliness are part of the logic of the system.

    Although the real-time computing field erroneously dismisses non-hard systems as “soft” in various ad hoc and ambiguous ways, this foundation reveals non-hard (soft, as will be precisely defined herein) to be not just the general, but also the most
    common, case of real-time systems.

    Informally, timeliness of a “happening”—an event (e.g., task completion) or information (e.g., data change)—is a measure of the extent to which its occurrence time can be related to the happening’s usefulness in specific circumstances.

    In the conventional real-time computing system special case, timeliness metrics are ordinarily binary: whether a task completion deadline is met, or whether the latency of an interrupt response or system call response is below a least upper bound.

    However, in principle (cf. scheduling theory) and in real-world practice, the logic of a real-time system includes relationships between a happening’s lateness (earliness or tardiness) with respect to a deadline or least upper bound and the consequent usefulness of the happening when it occurs.

    The book’s foundation furthermore provides for the frequent cases of even richer relationships evident in an application, e.g., ones in which usefulness is not a linear function of lateness, or in which completion time constraints are not deadlines or least upper bounds, using Jensen’s time/utility functions and application-specific utility accrual optimality criteria. The time/utility function methodology’s major challenge has been predictability of timeliness, which this foundation addresses.
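
    To make the idea concrete, here is a minimal, hypothetical Python sketch (mine, not the book’s): each task has a time/utility function mapping its completion time to a utility, and one possible utility accrual criterion simply sums the utilities of the completed tasks. The particular function shapes and the summation criterion are illustrative assumptions only.

        # Illustrative sketch (not from the book): a time/utility function (TUF)
        # maps a task's completion time to the utility of that completion,
        # generalizing a binary deadline. The shapes and the "sum of utilities"
        # accrual criterion are assumptions chosen only for illustration.

        def hard_deadline_tuf(completion_time, deadline, utility=1.0):
            """Conventional binary case: full utility if the deadline is met, else none."""
            return utility if completion_time <= deadline else 0.0

        def linearly_decaying_tuf(completion_time, critical_time, max_utility, zero_time):
            """Utility is maximal up to critical_time, then decays linearly to zero."""
            if completion_time <= critical_time:
                return max_utility
            if completion_time >= zero_time:
                return 0.0
            fraction_late = (completion_time - critical_time) / (zero_time - critical_time)
            return max_utility * (1.0 - fraction_late)

        def accrued_utility(completion_times, tufs):
            """One possible utility accrual criterion: the sum of per-task utilities."""
            return sum(tuf(t) for t, tuf in zip(completion_times, tufs))

        if __name__ == "__main__":
            tufs = [
                lambda t: hard_deadline_tuf(t, deadline=10.0),
                lambda t: linearly_decaying_tuf(t, critical_time=5.0, max_utility=4.0, zero_time=20.0),
            ]
            print(accrued_utility([9.0, 12.0], tufs))  # 1.0 + ~2.13 = ~3.13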

    Continuing informally, something (notably here, timeliness) is predictable in the manner and to the extent that it can be known a priori. In conventional (i.e., “hard”) real-time computing system models, predictability is static and almost exclusively binary, i.e., it is presumed that all (especially task) completion time constraints will always be met (unless there is a failure). In that field, predictability is often mistakenly thought to mean “deterministic.”

    That mistake can be recognized and remedied by considering predictability to be a continuum. In that type of model, the maximum predictability end point is deterministic a priori knowledge, and the minimum predictability end point is the total absence of a priori knowledge. Everywhere on the predictability continuum between those end points are degrees of non-determinism. Typically, happenings in dynamic real-time systems have predictability between the two end points of that continuum, according to some formal model- and context-specific metric. A simplified example, using a statistical continuum for predictability: the maximum predictability (deterministic) end point is represented by the constant distribution; the minimum predictability (maximum non-determinism) end point is represented by the uniform distribution; and the metric is expressed in terms of some model instantiation properties of probability distributions, such as shape, dispersion, etc.
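
    A hypothetical numerical sketch of that continuum (my illustration, not the book’s formal model) scores a discrete completion-time distribution by its normalized Shannon entropy, one possible dispersion metric: the constant distribution scores 1.0 (the deterministic end point), the uniform distribution scores 0.0 (the maximally non-deterministic end point), and other distributions fall in between.

        # Illustrative sketch only: score the predictability of a discrete
        # completion-time distribution in [0, 1] using normalized Shannon entropy
        # as the dispersion metric. The choice of entropy is an assumption made
        # purely for illustration.

        import math

        def predictability_score(probabilities):
            """1.0 for a constant distribution, 0.0 for a uniform one, else in between."""
            n = len(probabilities)
            if n <= 1:
                return 1.0
            entropy = -sum(p * math.log(p) for p in probabilities if p > 0.0)
            return 1.0 - entropy / math.log(n)

        if __name__ == "__main__":
            constant = [0.0, 1.0, 0.0, 0.0]      # completion time known a priori
            uniform = [0.25, 0.25, 0.25, 0.25]   # no a priori knowledge
            skewed = [0.7, 0.2, 0.05, 0.05]      # typical dynamic case: in between
            for name, dist in [("constant", constant), ("uniform", uniform), ("skewed", skewed)]:
                print(name, round(predictability_score(dist), 3))  # 1.0, 0.0, ~0.372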

    Predictability is a vastly deeper and more complex topic than that intuitive continuum model.

    Predictability consists of reasoning about kinds and degrees of uncertainty. That usually first brings to mind probability theory, but there are multiple interpretations and theories of “probability.”

    The familiar frequentist interpretation may seem applicable to predicting timeliness (e.g., task completion times) in static periodic-based real-time systems, but even there it can be misleading—and it is not at all applicable to dynamic real-time
    systems.

    The term “dynamic” in the context of the book encompasses two distinct departures from almost all conventional static real-time systems. Both departures are common for timeliness and predictability of timeliness in the real world.

    First, the system models’ defaults include that parameters (in particular, those which may be used for scheduling) such as task arrivals, task execution durations, task time constraints, task resource conflicts, etc. are not necessarily static. (Such system models are sometimes called “open” ones.) Instead, parameters may be aleatoric, i.e., statistically uncertain ones which may vary (e.g., stochastically) during system execution and from one run of the system to the next. Even in most static real-time computing systems, modern processor and computer architectures force task execution times to be a dynamic parameter. Fitting that as “worst-case execution time” into the conventional static orthodoxy is very challenging.
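
    As a purely illustrative sketch (the class, its names, and its distributions are my assumptions, not the book’s model), aleatoric scheduling parameters can be represented as samplers rather than constants, so each arrival, and each run of the system, may see different values:

        # Illustrative sketch only: in a dynamic system model, scheduling
        # parameters such as inter-arrival times and execution durations are
        # aleatoric -- drawn from distributions rather than fixed constants.
        # The particular distributions below are arbitrary assumptions.

        import random

        class DynamicTaskModel:
            def __init__(self, name, interarrival_mean=50.0, exec_mu=2.0, exec_sigma=0.4):
                self.name = name
                self.interarrival_mean = interarrival_mean
                self.exec_mu = exec_mu
                self.exec_sigma = exec_sigma

            def sample_interarrival(self):
                """Arrivals are not periodic; here they are Poisson-like."""
                return random.expovariate(1.0 / self.interarrival_mean)

            def sample_execution_time(self):
                """Execution times vary run to run (e.g., due to caches and pipelines)."""
                return random.lognormvariate(self.exec_mu, self.exec_sigma)

        if __name__ == "__main__":
            task = DynamicTaskModel("track_update")
            now = 0.0
            for _ in range(3):
                now += task.sample_interarrival()
                print(f"arrival at t={now:7.1f}, sampled execution time {task.sample_execution_time():5.2f}")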

    Second, there may be inherent epistemic uncertainties (i.e., ignorance) about aspects of the system and its environment. They are not objectively statistical in nature, in the sense that statistical characterizations do not capture their values and
    meaning. Reasoning about these uncertainties requires employing an appropriate, often application-specific, theory of uncertainty from the various extant ones.

    The Bayesian theory of probability and inference for reasoning about uncertainty is popular in many fields. Bayesian probability can be assigned to any statement whatsoever, even when no random process is involved, to represent its subjective plausibility or the degree to which the statement is supported by the available evidence. However, it has major limitations, most notably that it cannot model ignorance, uncertain data, conflicting data, or scarce data. Those limitations make it unsuitable for predictability of timeliness in many dynamic real-time systems (and for various other applications). They are overcome by several more expressive theories to which the book broadly applies the term belief function theories.
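
    For concreteness, here is a minimal, hypothetical Bayesian update (my sketch, with arbitrary numbers) of the probability that a task will meet its deadline given one piece of evidence; note that the prior must be a single fully committed number, which is one way the inability to represent ignorance shows up.

        # Illustrative sketch only: a Bayesian update of the subjective probability
        # of H = "the task will meet its deadline" given evidence E. All numbers
        # are arbitrary assumptions. Bayesian reasoning requires a fully specified
        # prior; probability mass cannot be left "uncommitted" to model ignorance.

        def bayes_update(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
            """Return P(H | E) from P(H), P(E | H), and P(E | not H)."""
            evidence = (likelihood_e_given_h * prior_h
                        + likelihood_e_given_not_h * (1.0 - prior_h))
            return likelihood_e_given_h * prior_h / evidence

        if __name__ == "__main__":
            prior = 0.5   # even "don't know" must be expressed as a committed number
            posterior = bayes_update(prior, likelihood_e_given_h=0.9, likelihood_e_given_not_h=0.3)
            print(round(posterior, 3))  # 0.75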

    Very simply: Belief function theories for reasoning about the uncertainty of a hypothesis combine evidence from different sources to arrive at an updated degree of belief represented by a mathematical object called a belief function. Different rules for
    combining evidence and updating beliefs have been devised to better accommodate varied uncertainty characterizations (e.g., conflicting and contradictory evidence). Belief function theories began with Dempster-Shafer theory, also called the mathematical
    theory of evidence. Evolved theories currently include (in roughly increasing order of expressiveness and generality) the Theory of Hints, the Transferable Belief Model, and Dezert-Smarandache theory.
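
    As a minimal, hypothetical illustration of that idea (mine, not the book’s), the sketch below applies Dempster’s rule of combination to two mass functions over a tiny frame of discernment; because mass can be assigned to sets of hypotheses, including the whole frame, ignorance can be represented explicitly. The frame and the numbers are assumptions for illustration only.

        # Illustrative sketch only: Dempster's rule of combination. A mass function
        # assigns belief mass to subsets of the frame of discernment; mass on the
        # whole frame expresses ignorance. The frame and numbers are arbitrary.

        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
            combined = {}
            conflict = 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                intersection = a & b
                if intersection:
                    combined[intersection] = combined.get(intersection, 0.0) + ma * mb
                else:
                    conflict += ma * mb  # mass that would fall on the empty set
            if conflict >= 1.0:
                raise ValueError("totally conflicting evidence: Dempster's rule undefined")
            return {s: m / (1.0 - conflict) for s, m in combined.items()}

        if __name__ == "__main__":
            frame = frozenset({"meet", "miss"})
            # Source 1: fairly confident the deadline will be met, some ignorance left.
            m1 = {frozenset({"meet"}): 0.6, frame: 0.4}
            # Source 2: weak evidence it will be missed, mostly ignorance.
            m2 = {frozenset({"miss"}): 0.3, frame: 0.7}
            for subset, mass in dempster_combine(m1, m2).items():
                print(sorted(subset), round(mass, 3))  # meet ~0.512, miss ~0.146, frame ~0.341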

    Belief function theories are natively expensive in computation time (e.g., Dempster’s combination rule is #P-complete) and memory space, especially in comparison with reasoning based on classical probability theory and even Bayesian conditional probability. Reasoning time frames can be much longer than those in predominantly static systems (recall that time frame magnitudes are not part of the definition of real-time).

    Fortunately, the attractiveness of belief function theories for more comprehensively reasoning about uncertainty has engendered an active field of research and development on efficient algorithms, data structures, and even hardware (GPU, FPGA) implementations for reducing those costs. Even so, the theories are usually instantiated in application-specific ways for maximal cost-effectiveness. Any specific application may have kinds and degrees of uncertainties (e.g., of dynamic system model parameters) that exceed the reach of all resource-feasible formal reasoning approaches. Ultimately there will always be cases for which the only viable approach is to restructure an application so that uncertainty is manageable, just as is true for simple static real-time systems (e.g., worst-case execution times may be infeasible to ascertain or tolerate).

    Belief function theories are increasingly used in a broad range of different fields, despite initial misunderstandings and mistakes in both research and practice.

    The framework in the book is novel in accommodating Bayesian and belief function theories, in addition to classical probability theory, for predicting timeliness (i.e., schedules’ outcomes) of dynamic real-time systems. In particular, it provides for
    innovative approaches to the historical challenge of predicting the individual and collective completion times and consequent utilities for time/utility functions.

    Generically, scheduling employs system model parameters which may be numerical constants (in static systems) or numerical variables (whether random or not). The scheduling algorithm produces a schedule (before or during system operation) according to one or more optimality criteria, while respecting constraints such as dependencies (e.g., task precedence, resource access conflict resolution rules). The optimality criteria for real-time scheduling explicitly or implicitly express the desired timeliness and predictability of timeliness of the scheduled entities (e.g., tasks) individually and collectively. In general, sequencing (scheduling, dispatching) is an FNP-complete problem, so application-specific heuristics which are computationally feasible and acceptably suboptimal are normally used.
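
    One common family of such heuristics, sketched hypothetically below (the greedy “utility density” ordering and the single-resource model are my assumptions, not the book’s algorithm), dispatches ready tasks in decreasing order of expected utility per unit of expected execution time and accrues the resulting utilities.

        # Illustrative sketch only: a greedy utility-accrual heuristic under a
        # simple single-resource model. It orders ready tasks by "utility density"
        # (expected utility per unit of expected execution time). This heuristic
        # is an assumption for illustration, not the book's algorithm.

        def greedy_utility_accrual_schedule(tasks, now=0.0):
            """tasks: list of dicts with 'name', 'expected_exec', and 'tuf' (a callable)."""
            remaining = list(tasks)
            schedule, accrued = [], 0.0
            while remaining:
                def density(task):
                    completion = now + task["expected_exec"]
                    return task["tuf"](completion) / task["expected_exec"]
                best = max(remaining, key=density)
                remaining.remove(best)
                now += best["expected_exec"]
                utility = best["tuf"](now)
                accrued += utility
                schedule.append((best["name"], now, round(utility, 2)))
            return schedule, accrued

        if __name__ == "__main__":
            step = lambda deadline: (lambda t: 1.0 if t <= deadline else 0.0)
            tasks = [
                {"name": "A", "expected_exec": 4.0, "tuf": step(5.0)},
                {"name": "B", "expected_exec": 2.0, "tuf": step(3.0)},
            ]
            print(greedy_utility_accrual_schedule(tasks))  # B meets its deadline; A does not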

    Real-time scheduling with belief functions involves algorithms which formulate mathematical beliefs about timeliness and predictability of timeliness by combining beliefs about uncertain system model parameters with any available certain parameter information. The beliefs about those uncertain scheduling parameters are based on combining evidence from one or more pertinent (application, system) sources. Beliefs are combined and updated according to a (usually) application-specific belief updating rule, and they are revised as the evidence changes.
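
    A minimal, hypothetical sketch of one such step (my illustration; the intervals, masses, and constraint are arbitrary assumptions): given evidence about a task’s uncertain execution time expressed as a mass function over intervals, derive the belief (lower bound) and plausibility (upper bound) that the task will meet its completion time constraint.

        # Illustrative sketch only: from a mass function over execution-time
        # intervals (a belief function built from evidence), compute belief and
        # plausibility that the task completes by its deadline.

        def meets_constraint_bounds(mass_over_intervals, release_time, deadline):
            """mass_over_intervals: dict mapping (lo, hi) execution-time intervals to mass."""
            slack = deadline - release_time
            belief = plausibility = 0.0
            for (lo, hi), mass in mass_over_intervals.items():
                if hi <= slack:   # the whole interval certainly meets the constraint
                    belief += mass
                if lo <= slack:   # the interval at least might meet the constraint
                    plausibility += mass
            return belief, plausibility

        if __name__ == "__main__":
            # Evidence (e.g., measurements plus expert judgment) about execution time;
            # the last focal element spans the whole range and represents ignorance.
            evidence = {(2.0, 4.0): 0.5, (3.0, 8.0): 0.3, (0.0, 20.0): 0.2}
            print(meets_constraint_bounds(evidence, release_time=0.0, deadline=6.0))
            # -> (0.5, 1.0): belief 0.5 and plausibility 1.0 that the deadline is met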

    When belief functions are used with time/utility functions, the utility functions may be belief functions, and the utility accrual procedure is normally a belief function.

    The book includes a dynamic real-time application case study with examples of employing and comparing Bayesian and belief function theories, in addition to classical probability theory, in that application’s context.

    For case studies (and in practice), the choice of application contexts should balance their need (e.g., their relative societal or economic importance, and the difficulty of timeliness predictability due to uncertainties) against the adequacy of their time frames for the complexity of reasoning about schedules’ timeliness. Available application subject matter expertise is also essential.

    The context of the book’s example case study is a military combat system, namely battle management (e.g., defense against cruise and ballistic missiles). Such systems have a variety of real-time behaviors with dynamic kinds and degrees of uncertainty, plus static behaviors. Despite that dynamism, the missions can be as human safety-critical as imaginable, with constituent real-time functionalities whose individual mission-criticalities vary innately and circumstantially in terms of mission operational effectiveness. The time frames for dynamic real-time scheduling include ones on the order of seconds and minutes. Subject matter expertise is drawn from the book author’s and other (collaborator and independent) persons’ work in that application context, including the successful use of the book’s framework in that context.
