This dissertation concerns automatic memory management for hard real-time and embedded systems. The main objective is to make it possible to build such systems using modern languages. Since these languages typically rely on automatic memory management, or garbage collection (GC), which historically has had unpredictable runtime behavior, we can either eliminate the dependence on GC through manual approaches or develop GC techniques suited to these systems. Because GC is a robust tool for eliminating memory-related programming errors, we chose to develop solutions that make GC usable in hard real-time and embedded systems. During the course of this work, three other GC approaches for such systems have been published. The key advantage of our work over these three is that memory utilization is improved by roughly 50 %. We have also developed an optimization for incremental garbage collectors and a static garbage collector designed to remove the dependence on runtime garbage collection.
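The contents below include a section on eliminating recursive freeing (4.3) as part of the real-time reference counting design. As background only, the following hypothetical C sketch (not the dissertation's RTRC code) shows why plain reference counting is problematic in a hard real-time setting: releasing the last reference to a long chain reclaims the entire chain in one call, so the worst-case cost of a single release is proportional to the amount of garbage it exposes.

    /* A minimal, hypothetical sketch of plain (naive) reference counting
       in C. This is NOT the dissertation's RTRC algorithm; it only
       illustrates the problem addressed in Section 4.3: dropping one
       reference can reclaim an unbounded amount of memory at once. */
    #include <stdlib.h>

    typedef struct node {
        int refcount;
        struct node *child;      /* one outgoing reference, for brevity */
    } node_t;

    static node_t *node_new(node_t *child) {
        node_t *n = malloc(sizeof *n);
        if (n != NULL) {
            n->refcount = 1;
            n->child = child;    /* takes over the caller's reference */
        }
        return n;
    }

    static void node_release(node_t *n) {
        /* Freeing a node exposes its child as garbage, so a single
           release call may walk and free an arbitrarily long chain. */
        while (n != NULL && --n->refcount == 0) {
            node_t *child = n->child;
            free(n);
            n = child;
        }
    }

    int main(void) {
        node_t *head = NULL;
        for (int i = 0; i < 100000; i++)   /* build a chain of 100000 nodes */
            head = node_new(head);
        node_release(head);                /* one release reclaims the whole chain */
        return 0;
    }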
Contents
1 Introduction
1.1 Perspective
1.2 Problem Definition
1.3 Contributions
1.4 Thesis Organization
1.5 Publications
2 Real-Time Systems
2.1 Definition
2.2 Categorizing Real-Time Systems
2.2.1 Interactive Systems
2.2.2 Soft Real-Time
2.2.3 Hard Real-Time
2.3 Predictability
2.3.1 Execution Time
2.3.2 Memory Usage
2.4 Scheduling
2.4.1 Cyclic Executive
2.4.2 Pre-emptive Priority Scheduling
2.5 Which Systems Are Hard?
3 Garbage Collection Techniques
3.1 Terminology
3.2 Reference Counting
3.2.1 Lazy Reference Counting
3.2.2 Cyclic Reference Counting
3.2.3 Bobrow’s Approach to Reclaim Cycles
3.2.4 Deferred Reference Counting
3.3 Mark-and-Sweep
3.3.1 Incremental Mark-and-Sweep
3.3.2 Yuasa’s Algorithm
3.3.3 Dijkstra’s Algorithm
3.4 Mark-and-Compact
3.4.1 Compaction Methods
3.4.2 Steele’s Incremental Mark-and-Compact Algorithm
3.4.3 Bengtsson’s Mark-and-Compact Algorithm
3.5 Copying Algorithms
3.5.1 Cheney’s Algorithm
3.5.2 Incremental Copying Algorithms
3.6 Generation Scavenging
3.6.1 Inter-generational References
3.6.2 Promotion Strategies
3.6.3 The Train Algorithm
3.6.4 Beltway Collectors
3.7 Replication Copying
3.8 The Treadmill
3.9 Hardware-Supported Garbage Collection
3.10 Requirements of a Hard Real-Time GC
3.11 Algorithm Analysis
3.11.1 Reference Counting
3.11.2 Mark-and-Sweep
3.11.3 Mark-and-Compact
3.11.4 Two Sub-Heap Copying
3.11.5 Generational Scavenging
3.11.6 Replication Copying
3.11.7 Baker’s Treadmill
3.11.8 Hardware Solutions
3.12 Summary
4 Real-Time Reference Counting
4.1 Drawbacks of Standard Reference Counting
4.2 Eliminating External Fragmentation
4.2.1 Selecting the Block Size
4.3 Eliminating Recursive Freeing
4.4 Improving WCET of Allocation
4.5 Manually Reclaiming Cycles
4.5.1 Manually Breaking Cycles
4.5.2 Weak References
4.5.3 Balloon Types
4.6 Automatically Reclaiming Cycles
4.6.1 Mark-and-Sweep Backup
4.7 Design
4.7.1 Configuration
4.7.2 Type Definitions
4.7.3 Global State
4.7.4 Initialization
4.7.5 Allocation
4.7.6 Increasing Allocation Performance
4.7.7 Releasing References
4.7.8 Public Interface
4.8 Complexity and Overhead
4.8.1 Execution Time
4.8.2 Memory
4.9 Emitted GC Code
4.9.1 Allocations
4.9.2 Reference Assignments
4.9.3 Method Calls
4.9.4 Methods
4.10 RTRC vs. The Competition
4.10.1 Execution Time
4.10.2 Memory Overhead
4.11 Summary
5 Object Ownership
5.1 Basic Idea
5.2 Owning an Object
5.3 Static Analysis
5.3.1 Supporting Separate Compilation
5.4 Benchmarks
5.5 Extensions
5.5.1 Explicit Freeing
5.5.2 Overlapping References
5.5.3 Supporting Other GC Techniques
5.6 Summary
6 Static Garbage Collection
6.1 Overview
6.2 Optimizing Memory Management
6.2.1 Static Allocation
6.2.2 Thread Local Allocation
6.2.3 Stack Allocation
6.2.4 Explicit Freeing
6.2.5 Variable Sized Objects
6.2.6 An Example
6.3 Extended Escape Analysis
6.3.1 The Data Flow Graph
6.3.2 Building the Data Flow Graph
6.3.3 Converting the Call Graph into a Tree
6.3.4 The Escape Analysis
6.4 Code Generator Extensions
6.5 Limitations
6.5.1 Handling Fields
6.5.2 Large Systems Cannot Be Analyzed
6.5.3 Objects Are Kept Alive
6.5.4 Exceptions Are Not Handled
6.5.5 Finalizers Are Not Handled
6.6 Overcoming Limitations
6.6.1 Fields
6.6.2 Large Systems
6.6.3 Calling Methods from Different Contexts
6.7 Summary
7 Implementation
7.1 C Implementation of RTRC
7.2 CoSy Implementation
7.2.1 CoSy
7.2.2 The Reference Counter Engine
7.2.3 The Record Splitter Engine
7.2.4 The Runtime System
7.2.5 The Compiler
7.2.6 The CCMIR Engines
7.2.7 Future Work
7.3 The Jamaica VM
7.3.1 Real-Time Reference Counting
7.3.2 Static Garbage Collection
7.3.3 Debug Output
7.4 Summary
8 Benchmarks
8.1 Benchmarking a Garbage Collector
8.2 Control System Application
8.3 Java Grande Benchmarks
8.3.1 Low Level Operations
8.3.2 Kernels
8.3.3 Large Scale Applications
8.4 Summary
9 Related Work
9.1 Real-Time Garbage Collection
9.1.1 One-Pass Real-Time Mark-and-Sweep
9.1.2 Henriksson’s Scheduling Strategy
9.1.3 Siebert’s Real-Time Mark-and-Sweep
9.1.4 Mostly Non-copying GC
9.2 GC Optimization
9.2.1 Deferred and Anchored Pointers
9.2.2 Reference Escape
9.3 Static Garbage Collection
9.3.1 Escape Analysis
9.4 Region Inference
10 Future Work
10.1 Real-Time Runtime Garbage Collection
10.1.1 Worst Case Memory Requirements Analysis
10.1.2 Reclaiming Cycles
10.1.3 Mark-and-Compact GC
10.1.4 Optimizations
10.2 Static Garbage Collection
10.2.1 Inter-Procedural Def-Use Analysis
10.2.2 Reusing Objects in Loops
10.2.3 Object Inlining
10.2.4 Supporting Separate Compilation
10.3 Dynamically Updated Systems
11 Conclusion
11.1 Garbage Collection in Real-Time
11.2 Selecting a Base Algorithm
11.3 Contributions
11.3.1 Real-Time Reference Counting …
Source: Linköping University