The Army tactical network currently comprises multiple, individually federated transport mechanisms. Almost all warfighting functions, along with other specialized services (e.g., medical), maintain a dedicated network communication infrastructure. While this provides some redundancy[i], it also impedes collaboration and data sharing, and greatly increases complexity and cost, size, weight, and power (C-SWaP) requirements across all tactical echelons.
The U.S. Army Cyber Center of Excellence has recently introduced a plan to converge these Command Post (CP) network architectures, promoting the concept of a single transport layer as a means to increase efficiency and enable the sharing of data across all mission functions. Achieving this degree of integration has numerous challenges. This article will focus on just one – critical information delivery assurance.
Given that within this network model, all data must share a single finite capacity communication transport layer, how do we ensure that critical information is provided some assurance of guaranteed delivery and responsiveness? To achieve this, we make the case that a converged tactical network must support a comprehensive Quality of Service (QoS) implementation as well as graceful degradation mechanisms.
As related to computer networking, QoS is a means of prioritizing amongst various data flows such that some degree of assured service can be maintained. Simply put, QoS can be thought of as a contract between the application (user) and the network, ensuring some agreed-upon minimum level of service. QoS is predicated on the fact that not all data streams are as susceptible to high latency or bit error rate conditions as others, or that not all communication streams are of equivalent importance. By categorizing and tagging such data, QoS techniques[ii] can ensure that the highest priority information, and the most susceptible communications streams, receive preferential treatment.
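The effect of categorizing and tagging traffic can be illustrated with a strict-priority scheduler. The following is a minimal sketch, assuming the traffic classes and priority values shown (they are illustrative for this article, not a DSCP standard or a fielded Army scheme):

```python
import heapq
from itertools import count

# Illustrative traffic classes; lower value = higher priority.
# This mapping is an assumption for the sketch, not a standard marking scheme.
PRIORITY = {"voice": 0, "medical": 1, "transactional": 2, "bulk": 3}

class QoSQueue:
    """Strict-priority scheduler: always dequeues the highest-priority packet."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker preserving FIFO order within a class

    def enqueue(self, traffic_class, payload):
        # Unrecognized classes default to the lowest (bulk) priority.
        prio = PRIORITY.get(traffic_class, max(PRIORITY.values()))
        heapq.heappush(self._heap, (prio, next(self._seq), payload))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = QoSQueue()
q.enqueue("bulk", "logistics report")
q.enqueue("voice", "call for fire")
q.enqueue("transactional", "position update")
print(q.dequeue())  # the voice payload is served first
```

Real QoS implementations typically add per-class bandwidth guarantees and policing on top of such a scheduler, so that low-priority streams are not starved indefinitely.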
As depicted in Figure 1, communication failures can occur as a result of three basic conditions. First, the communication channel can be degraded, increasing the bit error rate to the point where transmitted data can no longer be decoded at the receiver. Second, within the network core, congestion from multiple competing communication streams can introduce routing delays and cause packets to be “dropped” by routers too overwhelmed to handle the demand. Lastly, congestion can also occur at the application layer; if a server is overtaxed by multiple simultaneous requests, the result is added latency and potential denial of service for waiting clients.
Figure 1: End-to-End network model with potential sources of service degradation
Internet QoS concepts have been studied for some time; however, few have gained mass adoption. Today’s Internet, for the most part, still provides only a “best effort” delivery guarantee. In other words, any packet sent across the Internet is treated the same as any other, regardless of its content. The reason is primarily economic. Internet Service Provider (ISP) networks utilize highly reliable optical and wired transport links; a packet traversing the global Internet can undergo dozens of routing hops and still reach its destination very reliably. As a result of this architecture, network-level congestion is by far the most prevalent cause of Internet service degradation or disruption (up to 99% by some estimates). For these reasons, it has simply been more convenient and less expensive for telecommunication companies to overprovision the capacity of the Internet backbone than to integrate complex QoS algorithms across a global network spanning a multitude of different service providers.[iii]
Relevance to Army Tactical Networks
Military tactical networks cannot follow this same strategy. Our transport mechanisms are highly reliant on Radio Frequency (RF) communications links that are prone to disconnection and interference (intentional or otherwise), making high bit error rates a significant contributing cause of service disruptions. Furthermore, tactical networks are significantly more bandwidth (BW) constrained. Because of this, we cannot simply overprovision, as is done in the commercial sector, to ensure data availability.
On a positive note, our tactical networks have some advantages with respect to implementing QoS algorithms. One of the greatest challenges associated with deploying QoS on the Internet is ensuring that service guarantees can be consistently maintained from source to destination. This is especially difficult when traffic flows must cross multiple service provider networks, each with its own customer base and data prioritization scheme. This is not the case for Army tactical networks, which are, for the most part, fully controlled by U.S. government entities. As a result, implementing QoS techniques is a viable and necessary solution to ensure that the highest priority communications receive preferential service.
For QoS to function, data must be prioritized in some way. This in itself is challenging. Generally, prioritization can be subdivided into four distinct categories, each of which has clear parallels to tactical military networks:
- Data type: Ensures that all data streams of a certain type are given priority. This is particularly useful for delay-intolerant data streams such as streaming voice/video, or transactional protocols such as online banking or e-commerce. Tactical networks must also support multiple types of delay-intolerant and transactional data streams that require a fixed minimum throughput to function properly.
- Application: This categorization gives traffic from a particular application or service priority over others. For instance, data sent by emergency service applications could be given automatic preferential service. A clear parallel can be made for application data traversing tactical networks; for instance, medical or fires information is likely more important (or time sensitive) than logistics data in most cases.
- User: This prioritization differentiates based on user type. In the commercial sector, this could be implemented as a customer tier system (silver, gold, platinum) that provides increasing quality of service based on a service level agreement. From a military tactical network perspective, users may be differentiated based on rank or job function (e.g., should emails from a senior leader be of higher priority than those of a private?).
- Relevance: This prioritization attempts to assign value to the data itself. For instance, not all emails are created equal: many are low priority and will not be impacted by significant delays, while others relate to something time sensitive and critical. For military tactical systems, if data messages can be categorized by mission criticality, then higher priority mission-related data should obviously be given preferential service.
It should be fairly self-evident that none of the data prioritization schemes described above can adequately manage the transmission of all network data independently. As such, a sound prioritization scheme must blend all of them. Unfortunately, deciding what data is to be prioritized above other data, under some finite set of conditions, is not trivial, and will require research and coordination across the community to accomplish.[iv] Creating this prioritization will necessitate both operational and technical expertise, and will require both static and dynamic[v] prioritization constructs to be viable. It is paramount to note that no QoS technique can succeed if this first step is not properly accomplished.
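One way such a blending could work is a weighted score across the four categories, with a dynamic multiplier applied for mission context. This is a hedged sketch only: the weights, scores, and context table below are illustrative assumptions, not an established Army prioritization schema.

```python
# Static weights over the four prioritization categories described above.
# Values are illustrative assumptions; setting them is the hard operational problem.
STATIC_WEIGHTS = {"data_type": 0.2, "application": 0.3, "user": 0.2, "relevance": 0.3}

# Dynamic multipliers keyed by (mission context, application), e.g., escalate
# medical traffic during direct engagement. Hypothetical values for this sketch.
CONTEXT_BOOST = {
    ("engagement", "medical"): 2.0,
    ("movement", "logistics"): 1.5,
}

def priority_score(scores, application, context):
    """scores: per-category values in [0, 1]; higher score = more important."""
    base = sum(STATIC_WEIGHTS[c] * scores[c] for c in STATIC_WEIGHTS)
    return base * CONTEXT_BOOST.get((context, application), 1.0)

# A medical message during direct engagement gets its static score doubled.
msg = {"data_type": 0.5, "application": 0.9, "user": 0.4, "relevance": 0.8}
print(priority_score(msg, "medical", "engagement"))
```

The static table corresponds to rules set a priori; the context multipliers are the dynamic, situation-dependent piece, which is where the real S&T challenge lies.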
QoS alone, however, is not sufficient to meet all mission requirements. There will be times when network conditions (which can vary significantly over time) are strained to the point that data streams of equivalent importance will not have sufficient resources to all be fully satisfied. Under such conditions, a mechanism that allows for graceful degradation[vi] of services is essential. In the absence of graceful degradation techniques, all equivalent-priority services will compete for resources, resulting in either degraded performance for all or complete denial of service for some. Neither outcome is appealing. Additionally, if congestion reaches a critical mass, certain communication streams (specifically those that are latency constrained) will cease to function altogether. We refer to such data streams as inelastic, in that they must have some fixed amount of consistent bandwidth available or they lose all functionality (video is a classic example). The strategy here should be to convert inelastic communication protocols into elastic[vii] ones, thus relaxing their latency constraints and simultaneously reducing their bandwidth requirements. Figure 2 provides a hierarchical taxonomy of various data types for both elastic and inelastic communication streams.
Converting inelastic data applications into elastic ones can have significant BW reduction benefits. For example, a digitized voice channel requires approximately 4 Kbit/s of continuous throughput to remain viable. If we wish to transmit a 4-second message, “flank left and lay down suppressive fire,” this would require a total of 16 Kbits. Conversely, if this message were converted to simple text, it would require only approximately 600 bits, and just 150 bit/s to achieve an equivalent response time (though this too can be extended for an even lower throughput requirement). Streaming video can also be transformed into an elastic data type by simply converting it into a set of periodic still images that have significantly lower BW requirements and no network latency dependencies.[viii]
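The arithmetic behind the voice-versus-text example above can be made explicit. The rates and message sizes are the article's approximations:

```python
# Back-of-the-envelope comparison: delivering the same 4-second message as
# digitized voice vs. plain text. Figures are the article's approximations.

VOICE_RATE_BPS = 4_000      # ~4 Kbit/s continuous throughput for a voice channel
MESSAGE_SECONDS = 4

voice_bits = VOICE_RATE_BPS * MESSAGE_SECONDS   # total bits as voice: 16 Kbits
text_bits = 600                                 # same message as text: ~600 bits
text_rate_bps = text_bits / MESSAGE_SECONDS     # rate for equivalent delivery time

print(f"voice: {voice_bits} bits total")
print(f"text:  {text_bits} bits total at {text_rate_bps:.0f} bit/s")
print(f"bandwidth reduction: {voice_bits / text_bits:.1f}x")
```

The text form is not only ~26x smaller in total, it is also elastic: unlike the voice stream, it degrades into longer delivery time rather than failing outright when throughput drops below 150 bit/s.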
Implementation of such graceful degradation techniques will allow for more efficient network resource utilization during degraded network conditions, while still allowing some useful level of functionality. Another advantage of such data transformation techniques is that they can be implemented either at the application layer or at the network layer as part of an integrated QoS architecture.
Figure 2: Inelastic Vs Elastic data streams and associated throughput requirements
Existing QoS algorithms can be adapted to function within tactical networks; however, an overarching data prioritization scheme must first be agreed upon and established based on commander and operational priorities. Additionally, graceful degradation techniques will be essential to ensure that applications that traditionally require consistent throughput and latency can maintain some level of functionality even when network capacity falls below minimal bandwidth requirements. Implementation of QoS and graceful degradation algorithms within Army tactical networks will be necessary to ensure priority service can be provided to mission critical data. This is especially important if we are to transition to a converged network architecture that will be required to support many different information streams spanning numerous mission applications.
About the authors
Mr. Giorgio Bertoli currently serves as Senior Scientific Technology Manager (SSTM) of Offensive Cyber Technologies for the U.S. Army Intelligence and Information Warfare Directorate (I2WD), a part of the U.S. Army Research Development and Engineering Command (RDECOM). Mr. Bertoli has extensive government experience in the areas of Cyber, Electronic Warfare, and military tactics, both as a civilian and as a former active duty soldier. His primary research areas include the development of advanced Electronic Warfare (EW), Computer Network Operations (CNO), Cyber and Quick Reaction Capability (QRC) technologies. Mr. Bertoli holds a BSEE and MSEE from the New Jersey Institute of Technology, and an MS in Computer Science from the University of Massachusetts Amherst. Mr. Bertoli is also a Certified Information Systems Security Professional (CISSP). During his six-and-a-half-year military career, Mr. Bertoli served as a combat engineer and was stationed in Germany, Ft. Bragg, NC, and Korea, as well as being deployed as part of Operations Desert Shield and Desert Storm.
Dr. Paul G. Zablocky currently serves at the Office of Naval Research (ONR) as the Division Director of the Complex Hybrid Warfare Sciences Division (Code 301) within the Expeditionary Maneuver Warfare and Combating Terrorism Science and Technology Department. He is responsible for leading and directing an integrated portfolio of basic research, applied research, and advanced technology development science and technology (S&T) efforts in support of the United States Marine Corps (USMC). Dr. Zablocky had over 20 years of research and development experience in Electrical Engineering prior to joining the government in 2005. Paul has extensive experience in managing teams of engineers and scientists, quick reaction capability development efforts, mobile app development, and cellular and mobile data technology research.
 W. Zhao, D. Olshefski, H. Schulzrinne, Internet Quality of Service: an Overview, Columbia University Research Report CUCS-003-00, 2000.
 X. Xiao, L. Ni, Internet QoS: A Big Picture, IEEE Network March/April 1999.
 V. Firoiu, J.-Y. Le Boudec, D. Towsley, Theories and Models for Internet Quality of Service, Proceedings of the IEEE, Vol. 90, No. 9, Sep 2002.
 M. Andreolini, S. Casolari, M. Colajanni, A Distributed Architecture for Gracefully Degradable Web-Based Services, Proceedings of the Fifth IEEE International Symposium on Network Computing and Applications, 2006.
 F. Mata, J. Aracil, Quality of Service Analysis of Internet Links with Minimal Information, IFIP, 2013.
[i] By having a different data transport mechanism, the loss of one network may not impact another.
[ii] This document does not elaborate on the various types of QoS algorithms that have been devised; please refer to the references for a good overview of some of the most common.
[iii] Recently, there are signs that this may be changing. Applications such as Netflix, and other streaming video content providers, are now beginning to stress core backbone capacity limits. This was brought into the political forefront in early 2015 with a proposal that would have allowed service providers to charge companies such as Netflix a premium in return for improved and guaranteed service. This, however, has political and economic ramifications related to “net neutrality,” which are beyond the scope of this document, and has so far been stymied.
[iv] The complexities and potential solutions available to establish such a data prioritization schema are beyond the scope of this document, but can be presented as part of a more technical follow on document.
[v] Static prioritization is constant and set a priori. This usually takes the form of a simple rule set that places a consistent priority weight based on some constant attribute. Dynamic prioritization is situationally dependent and can fluctuate based on current events. For instance, during direct adversary engagement, medical data might be escalated in importance. The ability to dynamically alter the prioritization of data based on mission context is by far one of the most challenging aspects necessitating S&T investment.
[vi] Within the context of this paper, graceful degradation is defined as: “The ability of a computer, machine, electronic system or network to maintain limited functionality even when a large portion of it has been destroyed or rendered inoperative”.
[vii] Elastic data communication streams are those that can theoretically still function with any amount of bandwidth above zero. For instance, a file download can still occur under very low throughput conditions; it will just take a long time.
[viii] Such a transformation (video to still images) is relatively simple and is already supported by existing commercial SW products. S&T efforts are currently investigating mechanisms to convert video streams into even smaller data representations, such as textual scene descriptions (e.g., distill a 30-minute video into “White pick-up truck with license plate number XXXXX positioned at grid coordinates LAT/LONG seen at 16:00 Zulu, and traveled to x, y, z… locations”). This not only greatly reduces the necessary communication BW, but also has the added benefit of reducing the analyst’s cognitive load.