Technology has greatly improved near real-time Alert, Warning and Notification (AWN) systems, which can enhance the safety and situational awareness of all sectors of our society. To date, these systems have focused on quickly communicating hazard information from an authoritative source to interested stakeholders. Typically, both ends of this flow are humans, requiring thoughtful message design, review and careful wording to ensure that those at risk take decisive, protective actions.
As these systems mature, a second wave of AWN is emphasizing a more automated approach to protect critical infrastructure and, indirectly, the citizens that depend on it. For these automated exchanges to work best, the AWN messaging needs to be trustworthy, timely and specific, and the response by the infrastructure being alerted must be prescribed, mitigative and failsafe. With these elements in place, such a system can take advantage of the fact that electronic messages travel at nearly the speed of light and most hazards do not. An electronic AWN message, barring delays in creation, validation and sending, can be dispatched and received before the first mechanisms of a hazard arrive at locations distant from its source.
These system-to-system or machine-to-machine AWN efforts can allow sensor networks such as stream gauges and seismic networks, often enhanced by models, to broadcast on a threshold-driven basis to pre-defined constituent systems that can automatically perform actions such as stopping elevators and trains, raising flood walls or lowering traffic barriers. These thresholds can be established based upon levels or trends in the data. An example of the latter could be a stream gauge that is starting to show a rapid rise in water. This trend could trigger an alert, as could a threshold where the water exceeds a certain pre-determined level.
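A threshold- and trend-based trigger of this kind can be sketched in a few lines. The water levels, thresholds and window size below are illustrative assumptions, not values from any operational gauge network.

```python
# Sketch of a threshold- and trend-driven alert trigger for a stream gauge.
# All numeric values here are illustrative assumptions.

def should_alert(readings, level_threshold=4.5, rise_threshold=0.5):
    """Trigger on absolute stage or on rapid rise across the window.

    readings: water levels (meters) sampled at a fixed interval, newest
    last. Returns "level", "trend", or None.
    """
    if not readings:
        return None
    current = readings[-1]
    if current >= level_threshold:
        return "level"              # stage exceeds the pre-determined level
    rise = current - readings[0]
    if rise >= rise_threshold:
        return "trend"              # rapid rise across the sampling window
    return None

print(should_alert([2.0, 2.1, 2.2]))   # slow creep: returns None
print(should_alert([2.0, 2.4, 2.8]))   # rapid rise: returns "trend"
print(should_alert([4.4, 4.6, 4.7]))   # high stage: returns "level"
```

In practice the rise threshold would be calibrated per gauge, since a rise that is alarming in a narrow ravine may be routine on a large river.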
This text will explore key technical aspects of such a system, develop qualitative criteria to evaluate communications protocols in this light and directly compare two of the leading options available for use by such systems. This effort builds on previous work to bring the AWN architecture to wider audiences and is a follow-on to the “Geospatial Considerations of Public Alerts, Warnings and Notifications” document shared on the DHS Geospatial Concept of Operations (GeoCONOPS) website1.
1.2 AWN Interoperability:
1.2.1: Current State
The state of conventional alerting systems is maturing steadily, expanding in scope and capability. FEMA’s Integrated Public Alert and Warning System (IPAWS) continues to be the national alerting backbone, with a growing number of operators supporting it. Other systems are employed by local communities, university campuses and large employers to facilitate this timely and critical communication.
The delivery of AWNs has been bolstered by the well-established Common Alerting Protocol (CAP) standard, which allows application developers to confidently build tools that support this need. By leveraging this standard, developers can focus on creating software or extensions to existing applications to provide this capability to their customers. This has opened the market to non-traditional players and lowered the burden of adopting this technology. One example is the Blackboard Mass Notification System, an extension to the academic learning management system used in many universities. This system leverages its connection to the students and faculty with accounts and profiles in the program, but also generates CAP messages to reach other infrastructure the university may have available for disseminating alerts.
The general trend in alerting technology is to leverage standards to allow for integration of more modular or a la carte approaches where the community builds its architecture rather than buying it from a single vendor. While this adds complexity, the cost savings, flexibility and improved user interfaces are proving to be very beneficial. Additionally, software tools that allow for the development of pre-plans for specific scenarios also help with expediting alerts and improving their targeting.
1.2.2: Innovative Conceptual Model
While the CAP standard is enabling growth in the use of conventional alerting tools, more robust standards must be developed and employed to provide additional content and capability to a new class of alerting systems. This text is focused, more specifically, on the systems under development that will employ hazard sensor networks, models, communication mechanisms and constructive actions that are under consideration for innovation and development. Without a thoughtfully endorsed set of standards for this discipline, this innovation will be inhibited by requiring researchers to develop their own standards or adopt standards that are not widely applicable and have limited utility for future integration opportunities yet to be considered. Despite this forward focus, however, the issues are entirely relevant to existing systems that are considering enhancements to their exchange protocols.
The graphic on the following page (Figure 1) demonstrates the nature of these new systems, which aim to use electronic transmission speeds to trigger protective actions among facilities and individuals before a no-notice, rapid-onset disaster arrives. Unlike conventional alerts, these can be almost fully automated and can trigger automated behaviors by utilities, transportation infrastructure and other industrial systems.
The graphic depicts time through the middle horizontal stripe, geographic distance on the bottom and the alerting process on the top. Once the hazard force reaches the sensor, the race is on to alert the facility in advance of the hazard. In this hypothetical example, the nuclear facility gets 45 seconds to prepare for the hazard. If systems are linked to initiate prescribed protective actions, the system can use that time to execute those actions and possibly save lives and property.
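The margin in this hypothetical example is simple arithmetic: the hazard's travel time to the facility minus the end-to-end alerting latency. The wave speed, distance and latency below are illustrative values chosen to reproduce a 45-second margin like the one in the example, not figures from any deployed system.

```python
# Back-of-the-envelope warning margin: electronic alerts are effectively
# instantaneous compared to the hazard front, so the usable margin is the
# hazard's travel time minus the end-to-end alerting latency.
# All numbers below are illustrative assumptions.

def warning_margin_s(distance_km, hazard_speed_km_s, alerting_latency_s):
    travel_time = distance_km / hazard_speed_km_s
    return travel_time - alerting_latency_s

# e.g. seismic S-waves at roughly 3.5 km/s, a facility 175 km from the
# epicentral sensors, and 5 s of detection/processing/transmission delay:
margin = warning_margin_s(175, 3.5, 5.0)
print(f"{margin:.0f} s of warning")  # 45 s of warning
```

A negative result means the facility is inside the "blind zone" where the hazard outruns the alert, which is why sensor placement near likely sources matters so much.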
Figure 1 – Workflow of a new generation alerting system leveraging the speed of light communications to provide short, but useful margins of safety.
From this graphic, we also want to clarify some components and terminology. These include the “upstream component,” where the sensor data is processed into an alert and transmitted. This would typically be a scientific agency or other responsible party such as a chemical facility manager. The “downstream component” is where the alert is received and acted upon. This could be any organization that has assets that could benefit from even small margins of time to enhance safety, such as an airport, nuclear facility or hospital.
Currently, the most mature end-to-end system under development in the United States is the USGS ShakeAlert system for California, currently in prototype form (see Appendix B). This system uses a network of strategically distributed seismic sensors to provide instantaneous alerting of any shaking to a central computer, which runs a model of ground shaking propagation, or pulls from a matching previously run model, and transmits specific warnings to subscribing users and a public notification interface.
2 PROPOSED SOLUTION: CONSIDERATIONS AND METHODOLOGY
2.1.1: Sensor Networks
To provide such a system with reliable data, each hazard under consideration must have a suitable network of interconnected sensors located in areas where such hazards are expected to be generated. Such locations could be ravines for flash floods, steep, unstable slopes for landslides, areas near chemical storage facilities or zones around active seismic faults.
Many such sensor networks exist, but most were developed for other purposes and have limitations in the data they collect, the frequency of transmission and other parameters. To serve such a high velocity mission, each sensor network will have to be assessed and likely refined to the task. These networks are typically sensors with a direct connect to a node, but research and development is advancing sensor webs. These webs connect the sensors to each other either to propagate their signals to a centralized node or to provide some more sophisticated interaction such as validation and expedited interpolation of broader events2.
2.1.2: Communications Infrastructure
Each sensor situated in the landscape needs some means to communicate with a centralized entity to process this information into meaningful information products. This infrastructure can be wired or wireless and needs to be optimized for the anticipated use cases. These may be partially or fully automated and may report periodically, provide constant feeds, or report upon request.
One example is the traffic cameras situated around many cities. Traffic cameras typically provide a series of images pushed at some set time interval rather than a constant video stream. These burst transmissions can be more efficient in terms of energy, bandwidth and storage, but if the gap interval is too great, important data may be missed. In our example, if the intent is to monitor traffic flow, 30-second intervals are fine; however, if the goal is to track specific vehicles, the interval will have to be much shorter. These use cases must be factored into the infrastructure design.
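The bandwidth side of this trade-off is easy to quantify. The image size, interval and video bitrate below are rough assumptions for illustration only.

```python
# Rough daily-bandwidth comparison for the traffic-camera example:
# periodic image bursts versus a continuous video stream.
# Image size, interval and bitrate are assumed, illustrative values.

SECONDS_PER_DAY = 86400

def daily_mb_images(image_kb=200, interval_s=30):
    """Data volume (MB/day) for one still image every interval_s seconds."""
    frames_per_day = SECONDS_PER_DAY // interval_s
    return frames_per_day * image_kb / 1024

def daily_mb_video(bitrate_mbps=2.0):
    """Data volume (MB/day) for a continuous stream at bitrate_mbps."""
    return bitrate_mbps * SECONDS_PER_DAY / 8

print(f"bursts: {daily_mb_images():.0f} MB/day")
print(f"video:  {daily_mb_video():.0f} MB/day")
```

Even with conservative numbers, periodic bursts come in well under a continuous stream, which is why the interval is a deliberate design decision rather than a default.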
Once the data is consolidated within the sponsoring agency's database, it must often be processed to validate and screen out obviously corrupt data, and then it is often run through computer models to create new information products. These models are also often spatial in nature. For example, the ground shaking measured by seismometers can be interpolated into a surface map of ground shaking when all sensors are processed together. This surface map can then be used to estimate shaking intensity at any location within the modeled area.
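A minimal version of this kind of spatial interpolation is inverse distance weighting, where each sensor's reading contributes in proportion to its proximity to the point of interest. The coordinates and intensity values below are made up for illustration; an operational system would use far more sophisticated ground motion models.

```python
# Minimal inverse-distance-weighted (IDW) interpolation: estimating
# shaking intensity at an arbitrary point from surrounding sensor
# readings. Coordinates and intensities are invented for illustration.

import math

def idw(point, sensors, power=2):
    """sensors: list of ((x, y), value) pairs. Returns estimate at point."""
    num = den = 0.0
    for (x, y), value in sensors:
        d = math.hypot(point[0] - x, point[1] - y)
        if d == 0:
            return value            # exactly at a sensor: use its reading
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

sensors = [((0, 0), 6.0), ((10, 0), 4.0), ((0, 10), 5.0)]
# A point close to the first sensor is dominated by its value of 6.0:
print(round(idw((1, 1), sensors), 2))
```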
Models can be processor intensive and can take time to run; in real time, seconds count. Where possible, it is best to identify likely scenarios and run them in advance, keeping the model results accessible within the system for use when a similar event strikes. This suggests the possibility of pushing these proactive model outputs, or localized threshold levels, into the sensor units and having the alert originate from the sensor. This would expedite things further, although field infrastructure is typically more difficult to maintain and validate. This approach would work well for highly localized hazards, like flash floods, where there is little need for interpretation. Broader hazards, like hurricanes or earthquakes, require input from a number of sensors to fully understand the scope and magnitude of a major event. As these systems develop, sensor networks can permit peer-to-peer communication between sensors, opening additional possibilities for supporting larger scale hazard alerting.
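The pre-run scenario approach amounts to a nearest-match lookup at alert time. The scenario library, parameters and distance metric below are illustrative assumptions, not part of any real system.

```python
# Sketch of precomputed-scenario matching: instead of running a model
# when seconds count, pick the closest pre-run scenario by comparing
# event parameters. Library contents and the metric are illustrative.

def match_scenario(event, library):
    """event and library entries: dicts with 'magnitude' and 'depth_km'.
    Returns the key of the library scenario nearest the observed event."""
    def distance(params):
        return (abs(params["magnitude"] - event["magnitude"])
                + abs(params["depth_km"] - event["depth_km"]) / 10.0)
    return min(library, key=lambda k: distance(library[k]))

library = {
    "M6.5-shallow": {"magnitude": 6.5, "depth_km": 8},
    "M7.0-deep":    {"magnitude": 7.0, "depth_km": 30},
}
print(match_scenario({"magnitude": 6.7, "depth_km": 10}, library))
# prints "M6.5-shallow": the shallow scenario is the closer match
```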
The model outputs can be used to generate highly specific alerts that consider hazard intensity, impacts and geographic location to push these critical communications to the right audience. The design of these messages is critical to the efficient utilization of the available communications modes and to the needs of the individual or system receiving the message. Too much information can be as problematic as too little, as it risks higher latency in low bandwidth situations.
The conventional process of dissemination originates in an alerting agency, often as a CAP message fed into an aggregation/dissemination platform like IPAWS for targeted broadcast to predetermined receivers (cell phones, road signs, etc.). As sensors become more ubiquitous, affordable and connected, there is a need for even more speed which, as mentioned previously, could manifest as connecting sensors directly to the alerting aggregator/disseminator.
Careful consideration must be given to how such sensor/aggregator integrations would be enabled and what communication protocol should be applied. In the near term, these sensors are likely to trigger limited actions with little risk of false positive alerting, such as those currently employed at train crossings. More sophisticated communication protocols are under development to enable more nuanced interactions with systems and machines. The Open Geospatial Consortium (OGC) is currently developing standards within its Sensor Web Enablement (SWE)3 program to facilitate just such information exchanges, including the Sensor Alert Service (SAS) standard4.
2.1.5: Scenario Matching
One of the critical and unique aspects of a system to system alerting interface is the ability to match the alert to a scenario and a prescribed set of associated protective actions. As described previously, these could range from closing traffic gates to shutting down reactors. Some of these interventions may have significant costs associated with them. Since there is always a possibility of a false positive alert, failsafe actions should be selected for full automation, with more aggressive actions requiring validation by a human or an independent system.
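One way to encode this failsafe-versus-validated split is a simple action registry consulted at dispatch time. The action names and failsafe flags below are illustrative, not drawn from any real control system.

```python
# Sketch of tiered protective actions: failsafe, low-cost actions execute
# automatically; costly or hard-to-reverse actions are queued for
# validation by a human or an independent system. Names are illustrative.

ACTIONS = {
    "lower_traffic_barriers":  {"failsafe": True},
    "stop_elevators_at_floor": {"failsafe": True},
    "shut_down_reactor":       {"failsafe": False},
}

def dispatch(alert_actions):
    """Split requested actions into auto-executed and pending-validation."""
    executed, pending_validation = [], []
    for name in alert_actions:
        if ACTIONS.get(name, {}).get("failsafe"):
            executed.append(name)              # safe to fully automate
        else:
            pending_validation.append(name)    # requires confirmation
    return executed, pending_validation

done, pending = dispatch(["lower_traffic_barriers", "shut_down_reactor"])
print(done)     # ['lower_traffic_barriers']
print(pending)  # ['shut_down_reactor']
```

Note that an unrecognized action defaults to pending validation, the conservative choice for a safety system.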
Integration is the meaningful connection of two systems for the exchange of information, through either a two-way or a one-way connection. In this case, we are looking at the one-way integration of Sensor-Model-Alert systems, the “upstream” component, with the “downstream” industrial systems potentially controlled by automated processes triggered by these external alerts and matched to prescribed scenarios and their associated actions. AWN integrations are typically one way, from the alerting system to the recipient, and must contain predefined essential information, as well as metadata that ensures validation, accountability and proper data management.
For system to system alerting integrations, just as with alerts informing individuals or populations, the owners of the receiving system must retain the responsibility to ingest and act upon the information within the AWN, in a manner reflective of their own and their stakeholders’ interests. This ownership ensures that the alerting system does not make the flood gate rise or the train stop any more than it can make a person evacuate. The downstream system needs to have protocols in place and codified to ensure protective action is taken and to retain appropriate accountability and responsibility. Therefore, to be effective, the upstream (alerting) system must convey sufficient information on the hazard, as well as a level of confidence in the information provided, to allow the downstream partner to confidently take the appropriate action.
Another integration to consider is horizontal alerting between different systems: for example, a university with its own alerting system, within a city with another system, all within the United States with its national system. Ideally, these systems would have some means to coordinate their messaging, with the most local providing the greatest level of detail in both geo-targeting and suggested protective action options.
2.2.2: Technical Challenges to Automated Alerting Systems
To achieve the goal of evaluating alerting protocols for this purpose, it is important to understand the existing or potential limitations that may arise from a sub-optimal choice. While no one solution will be perfect for all use cases, it is important that the stakeholders work toward an industry standard to help promote innovation and to ensure these systems are developed in a manner that will not isolate them in the future.
Current potential challenges for alerting protocols are as follows:
- Latency in increasingly complex and interconnected AWN systems – The desire to bring more capability can increase message size and lead to over-complexity
- Delays from human in the loop processing and interfaces – Given the consequences of errors, organizations are hesitant to remove humans from the decision loops on both the upstream and downstream components
- Lack of horizontal interconnectivity and information exchange between AWN systems
- Interconnectivity between Modeling Tools and AWN systems – In the upstream component, there is also a need to formalize the protocols between model outputs and AWN systems for optimization.
2.2.3: Solution Criteria
To properly select an information exchange protocol, it is helpful to define some weighted selection criteria to facilitate the selection. Many of these are at odds with each other; for example, excessive message richness can reduce transmission efficiency, so a balance must be struck.
- Metadata – This is background information about the specific alert that assists in ensuring appropriate use, archiving and discovery.
- Security – Systems, and the messages contained within, should have sufficient cyber-security rigor to ensure a very low likelihood of spoofing, jamming or corruption.
- Richness of Content – The message content must have enough detail to facilitate the downstream decisions to be made.
- Transmission efficiency – The data element must be small enough to navigate the minimal expected bandwidth efficiently.
- Openness – This must not be a proprietary format or it will inhibit innovation.
- Scalable – It must be amenable to use in small to large systems and adapt as user groups expand and contract.
- Extensible – It should be structured in such a way to allow it to convey additional information to satisfy the needs of key stakeholder groups while retaining the core information for wider use.
- Backward Compatibility – This reflects the ability of newer versions of a standard to continue working with systems and messages built to a previous version.
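These criteria lend themselves to a simple weighted-scoring exercise. The weights and 1-to-5 scores below are placeholders; a real evaluation would assign both based on the stakeholders' specific use case.

```python
# Sketch of weighted criteria scoring for protocol selection.
# Weights and the 1-5 candidate scores are placeholder assumptions.

WEIGHTS = {"security": 3, "richness": 2, "efficiency": 3,
           "openness": 2, "scalability": 2, "extensibility": 1}

def weighted_score(scores):
    """Sum of each criterion's score multiplied by its weight."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

candidate_a = {"security": 4, "richness": 2, "efficiency": 5,
               "openness": 5, "scalability": 4, "extensibility": 3}
candidate_b = {"security": 4, "richness": 5, "efficiency": 3,
               "openness": 4, "scalability": 3, "extensibility": 5}

print(weighted_score(candidate_a), weighted_score(candidate_b))  # 52 50
```

Shifting the weights, say toward richness over efficiency, can flip the ranking, which is exactly why the criteria must be weighted per use case before any comparison is made.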
2.2.4: Existing Protocols
Currently there are two main competing standards that we need to understand and assess against these solution criteria. These are the Common Alerting Protocol (CAP) and the Distribution Element 2.0 (DE 2.0). Both of these standards are maintained and governed by the Organization for the Advancement of Structured Information Standards (OASIS).
CAP is a simple XML-based messaging standard developed as part of the Emergency Data Exchange Language (EDXL) suite5 of standards designed to support integration of emergency management related information systems. A typical CAP message will convey all critical information about the alert, including location, timing and recommended actions, but it has limitations that relate to its format and mode of dissemination. It is currently employed by FEMA’s Integrated Public Alert and Warning System (IPAWS) and a number of commercial applications. The following is the abstract provided by OASIS, the governing body that manages the standard:
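A minimal CAP 1.2 message can be assembled with nothing more than a standard XML library. The element names below follow the CAP 1.2 schema; the identifier, sender, timestamp and area values are invented for this sketch.

```python
# Assembling a minimal CAP 1.2 alert with the Python standard library.
# Element names follow the CAP 1.2 schema; all values are illustrative.

import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"
ET.register_namespace("", CAP_NS)

def elem(parent, tag, text=None):
    e = ET.SubElement(parent, f"{{{CAP_NS}}}{tag}")
    if text is not None:
        e.text = text
    return e

alert = ET.Element(f"{{{CAP_NS}}}alert")
elem(alert, "identifier", "EXAMPLE-2024-0001")          # made-up ID
elem(alert, "sender", "gauges@example.gov")             # made-up sender
elem(alert, "sent", "2024-05-01T12:00:00-00:00")
elem(alert, "status", "Exercise")
elem(alert, "msgType", "Alert")
elem(alert, "scope", "Public")
info = elem(alert, "info")
elem(info, "category", "Met")
elem(info, "event", "Flash Flood")
elem(info, "urgency", "Immediate")
elem(info, "severity", "Severe")
elem(info, "certainty", "Observed")
elem(info, "headline", "Flash flood detected upstream")
area = elem(info, "area")
elem(area, "areaDesc", "Lower canyon corridor")

print(ET.tostring(alert, encoding="unicode")[:60] + "...")
```

The event, urgency, severity and certainty elements are what a downstream system would key its scenario matching on.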
“The Common Alerting Protocol (CAP) is a simple but general format for exchanging all-hazard emergency alerts and public warnings over all kinds of networks. CAP allows a consistent warning message to be disseminated simultaneously over many different warning systems, thus increasing warning effectiveness while simplifying the warning task. CAP also facilitates the detection of emerging patterns in local warnings of various kinds, such as might indicate an undetected hazard or hostile act. And CAP provides a template for effective warning messages based on best practices identified in academic research and real-world experience6.”
The Distribution Element 2.0 (DE 2.0) is also XML-based but contains additional capabilities and sophistication that may factor positively or negatively depending upon the use case. These features include the ability to embed additional rich content elements and the ability to embed routing address information within the record itself. DE 2.0 leverages XLink, the embedded XML protocol that facilitates linkages to related external content, including other alerts, multi-media and textual content. The following is the abstract provided by OASIS, the governing body that manages the standard:
“This Distribution Element 2.0 (DE 2.0) specification describes a standard message distribution format for data sharing among emergency information systems. The DE 2.0 serves two important purposes:
1. The DE 2.0 allows an organization to wrap separate but related pieces of emergency information, including any of the EDXL message types, into a single “package” for easier and more useful distribution;
2. The DE 2.0 allows an organization to “address” the package to organizations or individuals with specified roles, located in specified locations or those interested in specified keywords.
This version of the DE expands the ability to use local community-defined terms, uses a profile of the Geographic Markup Language (GML) , follows best practices for naming conventions, provides the capability to link content objects, supports extensions, and is reorganized for increased flexibility and reuse of common types. The DE 2.0 packages and addresses emergency information for effective distribution with improved standardization and ability to be tailored for user needs7.”
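The "package and address" pattern the abstract describes can be sketched abstractly as an envelope carrying routing metadata plus heterogeneous content objects. The field names below are illustrative only and do not follow the actual DE 2.0 XML schema.

```python
# Abstract sketch of the DE "package and address" pattern: one envelope
# carrying addressing metadata plus several content objects (a CAP
# alert, a sensor reading, a link to video). Field names here are
# illustrative only; the DE 2.0 XML schema defines its own elements.

def make_envelope(sender, recipient_roles, content_objects):
    return {
        "sender": sender,
        "sent": "2024-05-01T12:00:05-00:00",      # illustrative timestamp
        "addressing": {"roles": recipient_roles},  # role-based addressing
        "contents": content_objects,               # heterogeneous payloads
    }

envelope = make_envelope(
    sender="gauges@example.gov",                   # made-up sender
    recipient_roles=["dam-operator", "rail-control"],
    content_objects=[
        {"type": "cap-alert", "payload": "<alert>...</alert>"},
        {"type": "sensor-reading", "payload": {"stage_m": 4.7}},
        {"type": "video-link", "payload": "https://example.org/cam42"},
    ],
)
print(len(envelope["contents"]))  # 3
```

The point of the pattern is that one distribution carries several related payloads to role-addressed recipients, something a bare CAP message cannot do on its own.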
2.3 Comparative Analysis
The following comparative analytical framework is not intended to determine a superior standard, but to break out the features of each for consideration by system owners for their specific use cases. Additionally, a key factor is not just what standard is used, but how it is used, as both allow for some customization. This analysis will compare the relative strengths of the two standards, as few objective metrics exist to rank them. This is not an exhaustive list of comparative aspects, but it provides a starting point for consideration.
| Criterion | Common Alerting Protocol (CAP) v1.2 | Distribution Element 2.0 |
| --- | --- | --- |
| Security | Relies upon the encryption of the system; cannot be encrypted at the record level | Could allow for encrypted content |
| Richness of Content | Limited to text-based information | Both can be expanded, but the ability to link additional content elevates DE 2.0 |
| Transmission Efficiency | CAP is likely a smaller data packet under normal circumstances | Embedded objects can be of varying sizes |
| Openness | CAP is less complex and non-proprietary | DE 2.0 could hold proprietary formats |
| Scalability | Format limitations allow for more reliable scaling | May have difficulty scaling depending upon the embedded objects |
| Extensibility | Extensible, but limited to the XML format and standard constraints | The ability to add objects to the record can be a powerful feature |
| Backward Compatibility | As the older standard, CAP is not able to adopt other standards with additional capabilities or XML content | DE 2.0 can either contain CAP information within the XML or carry a CAP message as a linked object |
Sensors play an increasingly important role in our lives, from tracking people’s health to monitoring the comfort of our homes, and they will bring important changes to how first responders deal with natural disasters. From the modeling and simulation community to the first responders who combat the disasters, sensors and the vital information they provide can bring a new level of analysis and alerting to these groups.
System to system alerting will continue to be developed as it has significant potential to save lives and property. The appropriateness of the standard selected would be specific to the nature of the hazard, the upstream stakeholders and the downstream needs as currently identified and desired in the foreseeable future.
The Distribution Element standard allows for an elegant solution to combine the information required to deliver DE messages along with sensor information and CAP messages. The additional information added to DE 2.0 would not break existing applications that use the data standard. The sensor information, a link to any videos produced by a sensor, and a CAP message can all be added to the DE message for delivery. The benefit of using DE versus CAP comes down to how the data standard can handle the new content: DE is set up to handle this additional information, while CAP would require changes to manage it and still be compatible with existing CAP implementations.
The solution discussed above would give the modeling and simulation community additional information to understand historical trends and plan for future disasters; however, this may come at a cost of efficiency and reliability. Understanding how sensor information and alerting worked together during a disaster would allow better models to be constructed and more accurate portrayals of response. Near real-time alerting can help first responders understand changing conditions and plan accordingly. The combination of alerting and sensor information can prove to be a powerful tool in combatting the unknown.
APPENDIX A – KEY TERMS
The following are key terms for understanding concepts discussed in this document:
- Interoperability is the ability of a system or product to work with other systems or products without special effort on the part of the customer8.
- A sensor is a mechanical device sensitive to light, temperature, radiation level, or the like, that transmits a signal to a measuring or control instrument9.
- Standards are documented agreements containing technical specifications or other precise criteria to be used consistently as rules, guidelines, or definitions of characteristics to ensure that materials, products, processes, and services are fit for their purpose (ISO)10.
- Alerts, Warnings, and Notifications (AWN) typically contain information about a particular hazard such as its intensity, a timeframe for when the hazardous condition is expected to occur (and diminish), the locations likely to feel impacts, and sometimes a recommended action that the recipient should take to protect his or her life and/or property. (Source: AWN Appendix)
- The Common Alerting Protocol (CAP) is a digital format for exchanging emergency alerts that allows a consistent alert message to be disseminated simultaneously over many different communications systems11.
- The Distribution Element (DE) 2.0 is a data standard that packages information together with the recipients to whom it must be delivered.
- The Integrated Public Alert and Warning System (IPAWS) is a modernization and integration of the nation’s alert and warning infrastructure and will save time when time matters most, protecting life and property12.
APPENDIX B – SHAKEALERT: EARTHQUAKE EARLY WARNING SYSTEM
Given the suddenness and destructive potential of earthquakes, research is under way to find ways to enhance alerting and provide options for improving public safety. One approach is to provide early warning after an earthquake has started, but before the damaging waves of shaking energy reach distant infrastructure and populations. Such a system does not anticipate the event, but leverages the speed of light and electronic transmissions to outpace the ground shaking waves to the alert recipient, allowing trains to stop, automated systems to shift refineries, reactors and other assets into protective modes, and populations to be notified. The goal is to reduce damage and fatalities when these events occur, just like any other alerting system, only faster.
A partnership between the United States Geological Survey and a number of universities is innovating upon and leveraging work done in Japan to design an experimental system in Southern California to determine whether this is feasible and sustainable in the United States. The system works by tying together a dense network of ground shaking sensors near the areas where earthquakes are likely to be generated. When the sensors detect a possible earthquake, the system rapidly models the distribution patterns, intensity and timing of the event throughout the at-risk area and transmits an alert that provides early warning on the order of minutes, but more likely tens of seconds, which affords the at-risk facilities or populations the opportunity to take immediate protective actions.
Figure 2 – A screenshot from the video: ShakeAlert – Earthquake Early Warning. How does it work? (https://youtu.be/WWl3m4OyU44)
While the technical aspects of the system seem quite feasible, the costs are currently challenging, as the system requires a large number of custom-built sensors to be distributed and maintained over a large area. Once the system is established, a key goal will be to find solutions to these high costs through mass production and cost sharing strategies.
For more information on this system, and to follow its progress, see the ShakeAlert website: https://www.shakealert.org/