Code Density Test
A code density test is a software analysis technique used to measure and evaluate the proportion of essential, functional code within a program relative to its total size, often to identify and quantify inefficiencies like bloat, redundancy, or unused components. This form of testing is a critical aspect of software maintenance and optimization, falling under the broader disciplines of static code analysis and software quality assessment. Its primary importance lies in its ability to reveal hidden technical debt and resource consumption, as software systems often accumulate unused or inefficient code over time, which can significantly impact maintainability, performance, and energy efficiency [4][8]. By providing a quantitative metric for code "leanness," these tests help developers and architects make informed decisions about refactoring, specialization, and system design.

The core characteristic of a code density test is its focus on the ratio of utilized to total code. It works by analyzing source code or binaries to identify sections that are never executed, are redundant, or represent overly generic implementations that generate excessive machine code. Key methodologies include static analysis to detect dead code, profiling to uncover unused features, and specialized metrics like "moved lines," which track the rearrangement of code into reusable modules as an indicator of consolidation efforts [1].

A major type of bloat targeted by these tests is that caused by overly generic programming constructs; for example, in C++, the Standard Template Library (STL) uses templates where each instance can generate a completely separate piece of code, potentially leading to significant binary expansion if not managed carefully [5][6]. Other specialized analyses, such as application debloating and strategic inlining, also rely on density measurements to guide optimization [2][7].
The applications of code density testing are widespread in both industrial and academic settings for improving software sustainability. It is fundamentally significant for controlling system complexity, reducing attack surfaces by eliminating unnecessary code, and improving energy proportionality in hardware-software systems [8]. In modern software development, its relevance has grown with the rise of resource-constrained environments like embedded systems, mobile computing, and cloud infrastructure, where efficient resource use is paramount. The historical context of hardware limitations, exemplified by the widely disputed quote attributed to Bill Gates that 640KB of memory ought to be enough for anyone, underscores a perennial drive for efficiency that code density testing supports [3]. Today, these tests are integral to combating the compounding technical debt from AI-generated code and ensuring that software remains adaptable, performant, and cost-effective to maintain throughout its lifecycle [1][4].
Overview
A code density test is a specialized software analysis methodology designed to evaluate the efficiency and compactness of source code, typically within the context of software engineering, compiler optimization, and system performance. Unlike simple line-of-code (LOC) metrics, which can be misleading indicators of functionality or quality, code density assessments aim to quantify the functional payload per unit of code, analyzing how effectively a given algorithm or module achieves its purpose relative to its size and complexity [13]. This analysis is critical for identifying software bloat—the phenomenon where software becomes progressively larger and more resource-intensive without a commensurate increase in useful functionality—which can degrade performance, increase energy consumption, and exacerbate system bottlenecks [14]. The concept extends beyond mere static analysis to encompass dynamic behaviors, such as how code changes over time during development and refactoring cycles, and how its structure impacts hardware utilization, particularly in energy-proportional computing environments [14].
Defining and Measuring Code Density
Code density is a multi-dimensional metric that resists a single, universal definition. At its core, it represents the ratio of useful computational work or semantic meaning to the physical or logical volume of code required to express it. High-density code performs more operations or encapsulates more logic with fewer instructions, variables, and control structures. Measuring this requires moving beyond raw line counts to consider factors such as:
- Instruction-level efficiency: How many machine or intermediate language instructions are generated from a source construct [13].
- Algorithmic complexity: Whether the implemented algorithm is optimal for the problem space.
- Data structure overhead: The memory and access pattern efficiency of chosen data representations.
- Abstraction overhead: The runtime cost introduced by layers of abstraction, interfaces, and design patterns.

For example, compiler optimizations like function inlining directly target code density by replacing a function call site with the body of the called function. This eliminates the overhead of the call sequence (e.g., pushing registers, branching) but increases the size of the calling function. A code density analysis would evaluate whether the performance gain from eliminating call overhead outweighs the potential negative impacts of increased code size, such as instruction cache misses. Research into inlining strategies seeks to automate this trade-off analysis to produce denser, faster code [13].
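The instruction-level dimension can be probed directly in an interpreted setting. The sketch below is a rough illustration rather than a production metric: it uses Python's standard dis module to count bytecode instructions per source line carrying code, a crude stand-in for the machine-instruction counts a compiled-language density test would measure. The example functions sum_loop and sum_builtin are invented for the comparison.

```python
import dis

def bytecode_density(func):
    """Crude density proxy: bytecode instructions per source line with code.

    A higher ratio means each line expands into more low-level work;
    comparing two implementations of the same task hints at where
    abstraction or hand-written control flow adds instruction volume.
    """
    n_instr = sum(1 for _ in dis.get_instructions(func))
    # findlinestarts() enumerates source lines that emit bytecode,
    # so no source file is needed -- only the compiled code object.
    n_lines = max(1, len(list(dis.findlinestarts(func.__code__))))
    return n_instr / n_lines

def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)

print(bytecode_density(sum_loop), bytecode_density(sum_builtin))
```

Running this shows the explicit loop emits noticeably more instructions than the delegating one-liner for identical behavior, the same kind of gap a density test surfaces at binary level.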
Code Density in the Context of Software Evolution and Bloat
The significance of code density tests is magnified when examining software evolution. As software is maintained and extended over years, it often accumulates cruft—unused code, inefficient patches, and layers of compatibility shims. This process leads to software bloat, a well-documented issue where applications grow in size and resource demands faster than their feature sets expand [14]. Bloat is not merely a storage concern; it has cascading effects on system performance and energy efficiency. Bloated code can strain CPU caches, increase memory bus traffic, and force the processor into higher-power states for longer durations, directly contradicting the principles of energy-proportional computing where system energy use should scale closely with utilization [14]. Code density tests serve as a diagnostic tool to pinpoint bloat. By analyzing modules or commits for low functional yield relative to their added mass, developers can identify refactoring candidates. This relates to metrics like "moved lines," which track the rearrangement of existing code into reusable modules. While moving code does not directly reduce total lines, it is a refactoring action aimed at improving structural density—consolidating functionality to reduce duplication and improve organization, which can facilitate future optimizations and reduce the likelihood of bloat in subsequent development cycles.
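A toy version of such a moved-lines measurement can be sketched with Python's standard difflib. This is a simplified, exact-match, single-file illustration of the idea, not the algorithm any particular analytics product uses; real tools also match near-identical lines and track moves across files.

```python
import difflib

def moved_line_count(old: str, new: str) -> int:
    """Count lines that disappear from one location and reappear elsewhere.

    A line counts as 'moved' when an identical non-blank line is both
    deleted and added somewhere by the diff.
    """
    diff = difflib.ndiff(old.splitlines(), new.splitlines())
    removed, added = [], []
    for line in diff:
        text = line[2:].strip()
        if not text:
            continue
        if line.startswith('- '):
            removed.append(text)
        elif line.startswith('+ '):
            added.append(text)
    # Pair each removed line with at most one matching added line.
    moved = 0
    for text in removed:
        if text in added:
            added.remove(text)
            moved += 1
    return moved

old = "a = 1\ndef util():\n    return 42\n"
new = "def util():\n    return 42\na = 1\n"
print(moved_line_count(old, new))  # one line relocated
```

Tracked per commit and aggregated over time, a count like this distinguishes consolidation work (code relocated into shared modules) from pure addition, which raw line counts cannot.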
Interplay with Hardware and System Performance
The impact of code density is not isolated to software metrics; it directly interfaces with hardware behavior. Modern processors rely on deep cache hierarchies and speculative execution to achieve high performance. Low-density, bloated code can negatively interact with these microarchitectural features in several ways:
- Instruction Cache Thrashing: Large, sprawling code bodies may exceed the capacity of the L1 instruction cache, causing frequent misses and stalls while fetching instructions from slower L2 or L3 caches or main memory [13].
- Poor Branch Prediction: Excessively complex control flow, sometimes a byproduct of poorly structured code, can reduce branch prediction accuracy, leading to pipeline flushes and wasted work cycles.
- Memory Footprint: Inefficient data structures and algorithms increase the working set size, leading to more data cache misses and increased memory bandwidth consumption [14].

Consequently, a comprehensive code density test must consider the target hardware architecture. Code that is dense on one microarchitecture (e.g., with a large, forgiving cache) might perform poorly on another (e.g., a constrained embedded system). The interplay between software bloat and "hardware energy proportionality" is a critical research area. Systems struggle to achieve energy proportionality when software inefficiencies force hardware components like CPUs, memory, and disks to remain active or in high-power states to service poorly optimized code, thereby creating system-level bottlenecks that limit performance per watt [14].
Methodologies and Applications
Implementing a code density test involves both static and dynamic analysis techniques. Static analysis might involve parsing source code or intermediate representations to build control flow graphs, calculate cyclomatic complexity, and estimate the path length and resource usage of functions. Dynamic analysis, using profiling tools, measures actual runtime characteristics such as instruction retirement rates, cache miss ratios, and energy consumption correlated with specific code segments. These tests are applied in several key domains:
- Compiler Development: To evaluate and tune optimization passes, ensuring they genuinely improve performance density rather than just reducing one metric at the expense of another [13].
- Code Review and Refactoring: To identify modules that are candidates for simplification, optimization, or replacement.
- Embedded Systems and IoT: Where memory and computational resources are severely constrained, high code density is a non-negotiable requirement for functionality.
- Performance Engineering: To diagnose systemic performance issues rooted in inefficient code patterns that consume disproportionate resources.

In summary, a code density test provides a sophisticated lens for evaluating software quality, far surpassing simplistic size metrics. It connects software engineering practices directly to tangible outcomes in execution speed, hardware resource utilization, and system energy efficiency. By quantifying the relationship between code mass and computational value, it addresses fundamental challenges of software bloat and its hardware ramifications, making it an essential practice for developing sustainable, high-performance software systems [13][14].
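As a minimal illustration of the dynamic side, the Python sketch below records which functions actually execute during a workload and flags the rest as dead-code suspects. The helper find_uncalled and the example functions are invented for this sketch; production tools such as coverage.py or gcov work at statement level and aggregate many runs before declaring anything unused.

```python
import sys
import types

def find_uncalled(candidates: dict, workload) -> set:
    """Run a workload under a profile hook; report candidate functions
    that were never called -- dead-code suspects pending manual review."""
    called = set()

    def tracer(frame, event, arg):
        if event == 'call':
            called.add(frame.f_code.co_name)

    previous = sys.getprofile()
    sys.setprofile(tracer)
    try:
        workload()
    finally:
        sys.setprofile(previous)  # restore any pre-existing profiler

    defined = {name for name, obj in candidates.items()
               if isinstance(obj, types.FunctionType)}
    return defined - called

def parse(data):
    return data.strip()

def legacy_export(data):  # no caller in this workload
    return data.upper()

suspects = find_uncalled({'parse': parse, 'legacy_export': legacy_export},
                         lambda: parse('  hi  '))
print(suspects)
```

The caveat baked into real dead-code analysis applies here too: absence from one workload proves nothing by itself, so such results feed review and further profiling rather than automatic deletion.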
History
The concept of code density testing emerged from the long-standing engineering challenge of optimizing software to fit within constrained hardware resources, a practice that dates to the earliest days of computing. While the formalization of "code density" as a specific metric for testing is a more recent development, its intellectual and practical foundations are deeply rooted in the evolution of programming languages, compiler design, and system architecture.
Early Foundations and Hardware Constraints (1940s–1970s)
The imperative for dense, efficient code was inherent in the first programmable computers. Early machines like the ENIAC (1945) and the Manchester Baby (1948) had severely limited memory capacities, measured in words or bytes [15]. Programmers wrote directly in machine code or assembly language, where every instruction had a direct and critical impact on the program's memory footprint. This era established the fundamental principle that code density—understood as the amount of functionality achievable within a given memory size, or conversely, the memory required for a specific functionality—was a primary determinant of a program's feasibility [15]. The development of higher-level languages like FORTRAN (1957) and C (1972) introduced compilers, which automated the translation of human-readable code into machine instructions. A key measure of a compiler's quality became the efficiency and compactness of its output, making code generation a central battleground for optimization. During this period, the relationship between algorithmic efficiency, data structure choice, and memory consumption was forged, though often addressed in an ad-hoc manner rather than through systematic testing.
The Rise of Software Bloat and the Need for Metrics (1980s–1990s)
The 1980s and 1990s saw a paradigm shift with the advent of personal computers and graphical user interfaces, which offered vastly increased memory and storage capacities. This led to a phenomenon often termed "software bloat," where applications grew in size and resource consumption faster than the underlying functionality warranted. Digital libraries such as IEEE Xplore would later become key repositories for research examining this trend, including studies on the interplay between expanding software, hardware energy proportionality, and emerging system bottlenecks. As software complexity exploded, the informal optimization practices of earlier decades proved insufficient. The need for objective, quantifiable metrics to assess and control code efficiency became apparent. This period also saw the birth of specialized tools for code compression and optimization. Notably, UPX (the Ultimate Packer for eXecutables), released in the late 1990s, provided a practical tool for improving effective code density by compressing executable files, and its license allowed free distribution even with commercial applications, facilitating widespread adoption [16].
Formalization of Code Density Analysis (2000s–2010s)
The 2000s marked the beginning of code density's formalization as a distinct quality attribute to be tested and measured. The proliferation of embedded systems, mobile devices, and battery-powered hardware created a renewed commercial imperative for efficiency. Research into software bloat evolved to consider not just static size but dynamic runtime behavior and energy consumption. Academic and industrial efforts began to develop standardized benchmarks and profiling methodologies to measure code density in a repeatable way. This era saw the integration of density considerations into integrated development environments (IDEs) and continuous integration pipelines. Furthermore, the rise of version control systems, particularly Git, enabled new forms of historical analysis. Companies like GitClear pioneered metrics such as "Moved Lines," which tracked the rearrangement of code—an action typically performed to consolidate previous work into reusable modules. Analyzing trends in this metric provided insight into development hygiene and long-term maintainability, connecting code movement to architectural quality.
Modern Era and Holistic Assessment (2020s–Present)
In the contemporary software landscape, code density testing has matured into a multifaceted discipline. It is no longer solely concerned with binary size but encompasses a holistic view of resource efficiency, including:
- Runtime memory patterns
- CPU cache utilization
- Energy consumption per operation
- Cloud computing resource costs
Modern toolchains incorporate sophisticated static and dynamic analysis. For instance, coverage analysis tools can help find unused functions in C/C++ code, allowing developers to eliminate dead code and improve density.

However, new challenges have emerged. Analyses of development trends, such as those tracking "Moved Lines," have revealed a concerning pattern in some organizations: a year-on-year decline in code movement suggests developers are becoming less likely to refactor and reuse previous work. This represents a marked shift from established industry best practices like DRY (Don't Repeat Yourself) and would, if widespread, lead to more redundant systems with less consolidation of functions.

The context of application domains also critically informs density requirements. While web browsers have morphed from hypertext document renderers into complete virtual computing environments with massive codebases, the core functions of a word processor, text editor, or image editor would remain recognizable to a developer from decades past, implying a more stable and potentially more optimizable density profile for such tools [15].

Today, code density testing is an integral part of performance engineering, security (where smaller attack surfaces are desirable), and sustainability-focused computing. It represents the ongoing synthesis of historical constraint-driven programming with modern data-driven software analytics.
Description
A code density test is a quantitative assessment of a software system's efficiency in terms of its source code organization, compiled binary size, and runtime resource utilization relative to its functionality. It evaluates the principle that well-designed software should achieve its intended purpose with minimal, non-redundant code, avoiding unnecessary abstraction layers, unused features, and inefficient data representations that contribute to "software bloat" [1][14]. This concept extends beyond mere binary size to encompass the structural quality and maintainability of the source code itself, where poor density can manifest as increased complexity and technical debt [1].
Metrics and Methodologies for Assessing Density
Code density analysis employs several key metrics. Source code density can be evaluated by tracking code movement, a metric devised to quantify the rearrangement of code into reusable modules [1]. A year-on-year decline in this movement suggests developers are becoming less likely to consolidate previous work, a marked shift from industry best practices that can lead to more redundant systems [1]. At the binary level, tests compare the size of compiled executables for equivalent functionality. For instance, canonical "Hello World" programs can vary dramatically in size across programming languages due to differing runtime libraries and abstraction models; some languages perform console output through extensive stream abstractions, while others use more direct system calls [17]. Furthermore, static analysis tools, such as coverage analyzers, can identify unused functions within codebases (e.g., calcSum or Foo::run in C++ header files), which are direct contributors to bloat [5][14].
The Impact of Programming Language and Abstraction
The choice of programming language fundamentally influences potential code density. Interpreted languages like Python often prioritize developer productivity and maintainability over runtime performance and binary size [19]. This trade-off is evident in Python's object representation, which differs from languages like C or Rust, necessitating a conversion process that introduces overhead in C extensions [20]. Conversely, languages designed for extreme minimalism, such as FORTH, demonstrate high density by operating with minimal abstractions. The milliForth project, for example, implements a real programming language in only 340 bytes, its design converging unintentionally with the principles of the sectorFORTH project [18]. This contrasts with the trajectory of many mainstream applications. While core tools like text editors or image editors remain functionally recognizable over decades, other platforms like web browsers have evolved from hypertext renderers into comprehensive virtual computing environments, inherently incorporating vast layers of functionality that challenge traditional density metrics [3].
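The per-object cost of that representation difference can be observed directly with Python's standard sys.getsizeof. This is an illustrative sketch: the 4-byte figure assumed for a C int is typical but platform-dependent, and the exact CPython number varies by version and build.

```python
import sys

# A C int typically occupies 4 bytes. A CPython int is a full heap
# object carrying a reference count, a type pointer, and a digit
# count, so even a small integer is several times larger.
c_int_size = 4                    # typical C int width (assumed)
py_int_size = sys.getsizeof(1)    # e.g. 28 bytes on 64-bit CPython

print(f"CPython int: {py_int_size} bytes, "
      f"roughly {py_int_size / c_int_size:.0f}x a C int")

# The boxing cost applies element-wise: a list of a million small ints
# holds a million pointers to boxed objects, which is why C extensions
# must convert ("unbox") each value at the language boundary.
```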
Consequences of Low Code Density
Low code density has significant operational and economic repercussions. As noted earlier, inefficient data structures and algorithms increase memory footprint, but the problems extend further. Bloated software exacerbates system bottlenecks and conflicts with hardware energy proportionality, as larger, less efficient codebases consume more processor cycles and memory bandwidth even during idle or low-activity states [1]. This results in higher energy consumption and reduced performance on all hardware, but particularly on resource-constrained devices. Furthermore, systems with low source code density—characterized by poor reuse and high redundancy—become more difficult to maintain, test, and secure, compounding technical debt over time [1].
Strategies for Improvement and Testing
Improving code density requires deliberate engineering practices. Regular use of static analysis tools to eliminate dead code is essential [14]. Developers can adopt a mindset of "code subtraction," actively seeking to remove features, dependencies, and abstraction layers that do not provide commensurate value. Building on the imperative for efficient code discussed in earlier foundations, modern testing involves benchmarking not only execution speed but also binary size for equivalent features and monitoring source code metrics like the rate of code movement and consolidation [1][17]. The interplay between software bloat, hardware efficiency, and system bottlenecks underscores that code density is not merely a stylistic concern but a critical factor in sustainable software engineering [1].
Significance
The significance of code density testing extends beyond mere binary size measurement to encompass fundamental questions about software efficiency, developer productivity, and the systemic consequences of code expansion. It provides a quantitative framework for analyzing the trade-offs inherent in modern software development practices, from template metaprogramming to dependency management. As software systems have grown in complexity, the tools and methodologies for assessing their compactness have evolved from simple byte counts to sophisticated analyses of runtime behavior and maintainability metrics.
Quantifying Abstraction and Template Overhead
Code density tests reveal the concrete costs of programming language abstractions that are often treated as theoretically zero-cost. For instance, C++ templates, while powerful for generic programming, can cause significant code duplication known as "template bloat" [21]. When a function template like calcSum is instantiated for multiple types, the compiler generates a separate machine-code instance of the function for each distinct template parameter type used [21]. This compilation model means that using calcSum<int>, calcSum<float>, and calcSum<double> in the same program typically creates three complete, independent function bodies in the final executable, directly expanding binary size proportionally to the number of distinct instantiations [21]. This expansion occurs despite potential logical redundancy, as the algorithmic operations might be identical across types at the machine instruction level after optimization.

Modern C++ has introduced features aimed at mitigating such issues while maintaining type safety. The std::optional and std::variant types provide standardized mechanisms for handling optional values and type-safe unions, respectively, which can reduce ad-hoc implementations that contribute to bloat [6]. However, their implementation complexity within the standard library itself represents a trade-off, encapsulating sophisticated machinery that must be linked into the binary. Code density testing against benchmarks that utilize these features helps quantify their actual footprint versus handwritten alternatives, providing data-driven guidance for performance-critical applications where binary size constraints are strict, such as embedded systems or WebAssembly modules.
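One way to make template bloat visible is to count instantiations in a binary's demangled symbol table. The Python sketch below parses output in the style of `nm -C`; the symbol lines are fabricated here for illustration (real addresses and signatures will differ), and calcSum echoes the example template named above.

```python
import re
from collections import Counter

# Sample demangled symbol-table output (as from `nm -C`), fabricated
# for illustration; a real audit would capture it from the binary.
NM_OUTPUT = """\
0000000000001130 W int calcSum<int>(int const*, unsigned long)
00000000000011a0 W float calcSum<float>(float const*, unsigned long)
0000000000001220 W double calcSum<double>(double const*, unsigned long)
00000000000012b0 T main
"""

def instantiation_counts(nm_text: str) -> Counter:
    """Count distinct instances each function template produced.

    Every `name<T>(...)` line is a separate body in the binary -- the
    'template bloat' a density test flags when instantiation counts
    outstrip the number of genuinely different algorithms.
    """
    pattern = re.compile(r'(\w+)<[^>]+>\(')
    counts = Counter()
    for line in nm_text.splitlines():
        match = pattern.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

print(instantiation_counts(NM_OUTPUT))
```

Multiplying each template's instantiation count by its per-instance code size (also readable from the symbol table) gives a first estimate of how much binary mass the template accounts for.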
Benchmarking and the Canonical Experience
The methodology of code density testing is itself a subject of significance, as it establishes standardized baselines for comparison across languages and toolchains. The "canonical Hello World" benchmark, compiled with standard, default settings, serves as a fundamental metric for the minimal practical footprint of a language's runtime and toolchain [17]. The motivation for these standardized tests is to measure the out-of-the-box developer experience, reflecting the baseline overhead a user accepts when choosing a particular ecosystem [17]. This canonical measurement is crucial because it represents the default scenario for most projects, especially during initial prototyping, and highlights inherent overheads that might be obscured by aggressive post-hoc optimization. These tests extend to extreme cases that explore the theoretical limits of expressivity per byte. Projects like milliForth, a FORTH implementation in 340 bytes, demonstrate the extreme density achievable in a Turing-complete language by stripping away all but the most essential abstractions [18]. Its @ word, whose stack signature ( addr -- value ) denotes fetching a value from a memory address, exemplifies the minimalistic, direct mapping of language primitives to machine operations that maximizes density [18]. Comparing such minimalist implementations against mainstream language outputs via code density tests illustrates the vast spectrum of design priorities in programming language development, from maximal abstraction to maximal compactness.
Performance Correlations and System-Level Impact
While often focused on static binary size, code density has profound implications for dynamic runtime performance and energy efficiency. Denser code typically exhibits better instruction cache utilization, reducing misses and improving execution throughput. This relationship is particularly critical in large-scale object-oriented applications, where pervasive abstraction can lead to "software bloat" characterized by excessive layers of indirection and delegation that degrade performance [14]. Analysis of such bloat involves finding, removing, and preventing performance problems that stem not from algorithmic inefficiency but from systemic architectural choices that inflate code paths [14].

The interplay between software bloat, hardware energy proportionality, and system bottlenecks creates complex feedback loops [14]. Less dense code increases the working set size, potentially overwhelming CPU caches and memory bandwidth. This forces the hardware to activate more silicon and draw more power to service the same logical computation, moving systems away from energy-proportional operation where power consumption scales linearly with utilization. Performance measurement tools like IPython's %timeit magic function are essential for correlating static code characteristics with dynamic behavior, allowing developers to profile hotspots where abstraction overhead materializes as tangible latency [20]. Comparative language benchmarks, such as CPU-intensive tests run across multiple languages and versions, further contextualize density within the broader performance landscape [19].
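Outside IPython, the same correlation can be sampled with the standard timeit module. The sketch below is illustrative rather than a rigorous benchmark: it times a direct expression against the same addition reached through an invented delegation layer (Wrapper over Adder) to show abstraction overhead materializing as latency. Absolute numbers vary by machine and interpreter.

```python
import timeit

class Adder:
    def add(self, a, b):
        return a + b

class Wrapper:
    """A pure pass-through layer -- the kind of indirection that
    bloat analyses flag when it accumulates across a codebase."""
    def __init__(self):
        self._impl = Adder()

    def add(self, a, b):
        return self._impl.add(a, b)

direct = timeit.timeit('3 + 4', number=200_000)
wrapped = timeit.timeit('w.add(3, 4)', globals={'w': Wrapper()},
                        number=200_000)

print(f"direct: {direct:.4f}s  via delegation: {wrapped:.4f}s")
```

The result is the same logical computation paying for two extra attribute lookups and two extra call frames per invocation, a per-layer tax that compounds when delegation chains grow deep.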
Maintainability and the Hidden Cost of Churn
Beyond compilation and execution, code density intersects with software maintainability and the economics of development. High-level languages and expansive SDKs accelerate feature delivery but can introduce long-term maintenance burdens through dependency chains and opaque abstractions. The historical trend toward UI complexity and feature accretion in applications like Microsoft Office exemplifies how user-facing functionality can drive underlying codebase expansion, sometimes beyond the minimal requirements for core operations [22]. For instance, full utilization of modern collaboration suites may necessitate hardware like video cameras and microphones, with supporting code for device management and media processing integrated into the core application bundle [22]. Modern development analytics have begun to quantify aspects of code evolution that affect maintainability. Metrics like "Moved" lines track the rearrangement of existing code, typically to consolidate functionality into reusable modules or improve structure [14]. While not directly reducing line count, such refactoring aims to increase logical density—packing more coherent functionality into well-organized modules—which can reduce future duplication and ease comprehension. This reflects a shift from measuring only static size to assessing organizational quality, recognizing that a smaller, convoluted codebase may be more costly to maintain than a slightly larger, well-factored one. Developers bear responsibility for ensuring that the SDKs and dependencies they incorporate comply with relevant regulations and do not introduce unnecessary bloat or vulnerability [14]. Code density testing, when applied to dependency trees, can help audit this incorporated third-party code, distinguishing essential functionality from superfluous additions that increase attack surface and maintenance overhead without commensurate benefit.
Applications and Uses
Code density tests serve as critical diagnostic and evaluative tools in modern software engineering, with applications spanning from legacy system maintenance to contemporary mobile application development. Their primary use lies in identifying and quantifying unnecessary code expansion, known as bloat, which can degrade performance, increase resource consumption, and introduce security vulnerabilities [21][7]. The proliferation of third-party libraries and software development kits (SDKs) has made these tests indispensable for developers who must integrate external code while maintaining application efficiency and compliance [8].
Quantifying Feature Creep in Commercial Software
A principal application of code density analysis is in assessing the evolution of large-scale commercial software suites. Historical analysis of applications like Microsoft Office reveals a trajectory of increasing UI complexity and feature accumulation, often disproportionate to core functionality gains [2]. This phenomenon is not merely aesthetic; it manifests in tangible system requirements. For instance, while basic document editing requires minimal resources, modern suites specify requirements such as a "standard built-in camera or USB 2.0 video camera" for features like video conferencing, illustrating how ancillary functions drive overall application bulk [22]. Code density tests can isolate the contribution of such features to the overall binary size and memory footprint, providing metrics to guide specialization or debloating efforts [2]. These tests help answer questions about efficiency, such as why adding a simple paragraph of text to a document can trigger disproportionate processing overhead, causing complex graphical elements to destabilize or "explode into pieces" [7]. By measuring the ratio of functionality delivered to code volume, these tests provide an objective basis for deciding whether to refactor, modularize, or remove features.
Managing Third-Party SDK Integration in Mobile Development
In mobile application ecosystems, code density tests are routinely employed to audit the impact of integrated third-party SDKs. At their core, SDKs are code written by external parties to provide specific functionality within an application, such as analytics, advertising, or social media integration [8]. While integration is a standard practice, each added SDK increases the application's final size, complexity, and attack surface [8]. Density testing here involves benchmarking the application's size and performance before and after SDK integration, often revealing that a significant portion of the delivered binary is attributable to external code. Furthermore, developers are responsible for ensuring that all SDKs used in their applications comply with relevant data privacy and security regulations (e.g., GDPR, CCPA) [8]. Code density analysis aids this compliance vetting by helping to identify redundant or non-compliant code paths introduced by SDKs, allowing for more informed vendor selection or the pursuit of custom, leaner implementations.
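A first-order version of that before-and-after audit simply weighs directory trees. In the Python sketch below, tree_size and sdk_report are invented helpers, and the layout (vendored SDKs as subdirectories of the unpacked bundle) is an assumption; real mobile audits work on packaged artifacts such as APK or IPA files and their size reports.

```python
from pathlib import Path

def tree_size(root) -> int:
    """Total bytes under a directory tree -- a first-order proxy for
    the footprint a vendored SDK adds to an application bundle."""
    return sum(p.stat().st_size
               for p in Path(root).rglob('*') if p.is_file())

def sdk_report(app_dir, sdk_dirs) -> dict:
    """Each SDK's share of the total bundle size, as a fraction."""
    total = tree_size(app_dir)
    return {sdk: tree_size(Path(app_dir) / sdk) / total
            for sdk in sdk_dirs}
```

Pointing sdk_report at an unpacked bundle with, say, analytics/ and ads/ subdirectories yields each vendor's fraction of the shipped bytes, a number worth tracking across SDK version bumps.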
Enabling Application Specialization and Debloating
An advanced use case for code density metrics is driving automated or semi-automated application specialization. Tools and research frameworks, such as those discussed in the context of "Trimmer," use density profiles to create specialized application variants from a generalized, bloated codebase [2]. The process typically involves:
- Profiling the application to establish a baseline code density map.
- Tracing execution paths for specific, common user workflows.
- Identifying and safely removing code modules, libraries, and dependencies that are never invoked during those targeted workflows [2].

This application of density testing moves beyond measurement to active remediation, effectively creating leaner, more secure, and faster versions of software tailored for particular use cases. It directly addresses the "astronomical" bloat observed in contemporary software, where general-purpose applications bundle vast libraries to cover every possible scenario, resulting in inefficiency for most individual users [23].
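At module granularity, the profiling and tracing steps can be approximated in Python by checking which modules a representative workflow actually pulls in. This sketch is far coarser than function-level specializers like Trimmer, and the candidate module names are arbitrary examples chosen for illustration.

```python
import sys

def unimported_after(workflow, candidate_modules):
    """Run a representative workflow, then report which candidate
    modules never appear in sys.modules -- coarse debloating leads.

    Whole-module granularity only; real specializers trace execution
    at function or basic-block level before removing anything.
    """
    workflow()
    return {m for m in candidate_modules if m not in sys.modules}

def report_workflow():
    import json  # this workflow only needs json
    return json.dumps({"density": 1.0})

leads = unimported_after(report_workflow,
                         {"json", "ftplib", "xml.dom.minidom"})
print(leads)
```

Modules that never load across the full set of traced workflows become candidates for exclusion from a specialized build, subject to the safety analysis the specialization literature emphasizes.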
Benchmarking and Comparative Analysis
Code density tests provide essential data for benchmarking compilers, programming languages, and development frameworks. For example, analyses of C++ code bloat examine how high-level abstractions, template metaprogramming, and inline expansions translate into machine code volume, sometimes with exponential growth in output size from minimal input changes [21]. These benchmarks guide language evolution and compiler optimization efforts. Similarly, density tests allow for comparative analysis between competing libraries or frameworks claiming to offer similar functionality, quantifying the efficiency trade-off between convenience and bulk. This is particularly relevant in service-oriented contexts, where, as noted by developers, many competing services offer identical core functions but are backed by software clients or web interfaces of vastly different sizes and performance characteristics [23][24]. A developer choosing between such services can use density analysis of their respective integration packages as a key decision metric.
Informing System Requirements and Deployment Planning
The results of code density testing directly influence the formal system requirements for software deployment. As noted earlier, inefficient structures can increase memory footprint, but density tests also predict storage needs, startup times, and update bandwidth requirements. For enterprise software like Microsoft 365, the aggregated bloat from decades of feature addition dictates minimum hardware specifications for acceptable performance [22][14]. System administrators use density profiles to plan hardware refreshes, virtual machine resource allocation, and deployment strategies, especially in constrained environments like virtual desktop infrastructure (VDI) or government systems with strict configuration controls [22]. By modeling how code density correlates with resource utilization, organizations can optimize their IT infrastructure spending and user experience. In summary, code density tests transition the conceptual understanding of software bloat into actionable data. Their applications range from the micro-scale of choosing a library to the macro-scale of managing an enterprise software portfolio. They serve as a foundational practice for developing efficient, maintainable, and compliant software in an era characterized by abundant computing resources but also by escalating complexity and security demands [2][7][8].