Intelligent Robot-Assisted Humanitarian Search and Rescue System

The unprecedented scale and number of natural and man-made disasters in the past decade have urged international emergency search and rescue communities to seek novel technologies to enhance operational efficiency. Tele-operated search and rescue robots that can navigate deep into rubble to search for victims and transfer critical field data back to the control console have gained much interest among emergency response institutions. In response to this need, a low-cost autonomous mini robot equipped with a thermal sensor, accelerometer, sonar, pin-hole camera, microphone, ultra-bright LED, and wireless communication module has been developed to study the control of a group of decentralized mini search and rescue robots. The robot can navigate autonomously between voids to look for living body heat and can send back audio and video information to allow the operator to determine whether the found object is a living human. This paper introduces the design and control of a low-cost robotic search and rescue system based on an immuno-control framework developed for controlling decentralized systems. The design and development of the physical prototype and the immunity-based control system are described in this paper.


Introduction
Emergency search and rescue is an integral part of most large-scale humanitarian operations. With rising awareness of world humanitarian crises, NGO (Non-Governmental Organization) humanitarian relief operations are gaining more financial support from the public, and their annual expenditures on relief missions around the globe are comparable to the military expenditures of many countries. The ICRC (International Committee of the Red Cross), for example, has spent 958 … . These figures are increasing annually; the ICRC, for example, has increased its mission expenditures by 16% since the year 2000, and MSF (Médecins Sans Frontières) increased its expenditures by 13% from 2000 to 2004. In contrast to these rich financial resources, search and rescue technology to date still relies on search dogs and camera-mounted probes: mainly technology that has been in service for decades. With the increasing demand for expendable agents to enter dangerous environments, robots have been identified as good candidates to stand in for human rescuers. Robots equipped with advanced sensors have therefore become increasingly popular in the search and rescue theatre. This paper begins with brief introductions to emergency search and rescue operations and robotic search and rescue systems, then describes the design principles of the newly developed robot prototype and the details of its mechatronic design. The design of the distributed control system is also discussed before conclusions are drawn at the end of the paper.

Emergency Search and Rescue Operations
One of the most common field environments for humanitarian search and rescue operations is the collapsed building. Earthquakes, typhoons, tornados, weapon strikes, and catastrophic explosions can all generate damaged buildings on a large scale. These structures are unstable and may collapse at any time; thus heavy equipment cannot get close enough to the core of the site, and is often blocked by twisted steel and extrusive objects. The use of heavy machinery is prohibited because it would destabilize the structure, risking the lives of rescuers and victims buried in the rubble (Fig. 1 & 2). Pulverized concrete, glass, furniture, and other debris must instead be removed by hand (Fig. 3 & 4). Over the past decade, natural and human-induced disasters claimed millions of lives and destroyed enormous sums of assets around the world. Natural disasters such as the Oakland Hills Firestorm in 1991 (Parker 1991), Hurricane Marilyn in 1995 (Centers for Disease Control and Prevention 1995), the Oklahoma Tornado in 1999 (National Severe Storms Laboratory 1999), the Indian Ocean Earthquake (Zubair 2004), Hurricane Katrina in 2005 (Federal Emergency Management Agency 2005), and the Pakistan Earthquake in 2005 (Birsel 2005) all exacted deadly and costly tolls on the affected communities. Though human-induced disasters are generally smaller in scale than natural ones, their chronic effects may last much longer and worsen as the disaster prolongs. The 9/11 attack on the World Trade Center (WTC) and the subsequent war in Afghanistan are two recent examples most people have not yet forgotten. The economic loss and death toll during the tragic incident and its aftermath were enormous. Rescue specialists use trained search dogs, cameras, and listening devices to search for victims from above ground.
Though search dogs are effective in finding humans underground, they are as limited as humans in the depth they can reach below the surface of the rubble, and they cannot provide a general description of the physical environment in which the victim is located. Camera-mounted probes can give search specialists a visual image beyond voids that neither dogs nor humans can navigate through; however, their effective range is no more than 2-4 meters along a straight line below the ground surface. Listening devices can pick up very weak sounds a victim makes, but it is often difficult to determine the victim's exact location. In addition to these limiting factors, search dogs require search specialists to supervise them at all times, and probes mounted with cameras and listening devices likewise require operators to attend to them full time on the spot. Since collapsed structures are potentially unstable, every extra search specialist deployed increases the risk of further collapse. Hence, the maximum number of people the rubble can safely support at one time is also the maximum number of search devices that can operate simultaneously in that area, which is often far too few. Also, for safety reasons, excavation and search operations must begin from the outermost perimeter, work toward the centre, and proceed layer by layer from top to bottom. This constraint further lowers the possibility of reaching survivors buried at the center of the field within the critical timeframe.

Robot-assisted Search and Rescue Systems
Mobile robots designed for search and rescue operations (Lau and Ko 2007) are rugged in design and offer many features that address the above constraints. Search and rescue robots can navigate through voids and crevices that are too small for search dogs, and can zigzag between obstacles to reach areas that straight probes cannot. Though search and rescue robots also require trained personnel to monitor and operate them at all times, tele-operated robots can be deployed simultaneously to any point in the site while the operator stays safely outside the perimeter of the ruin. Hence, search specialists do not have to stand above unstable ruins, and the maximum number of searching units operating on site is not limited by the number of people the unstable rubble can support. Being able to deploy robots anywhere also means that victims buried at the centre of the field have the same chance of being found as those buried near the perimeter. Search and rescue robots equipped with cameras and two-way voice communication allow the operator to see and hear the victim and his or her surroundings. Moreover, once the location of the victim is identified, other robots can deliver water and food to prolong the victim's life. With the advancement of sensor miniaturization and the exponential increase in the speed and capability of microcontrollers, rescue robots small enough to thread through rubble are rolling out of experimental laboratories into catastrophic areas. The first real research on search and rescue robots began in the aftermath of the Oklahoma City bombing in 1995 (Murphy 2004a). Robots were not used in the bombing response, but suggestions as to how robots might have been applied were gathered. In 2001, the first documented use of urban search and rescue robots took place during the 9/11 World Trade Center (WTC) disaster, where mobile robots of different sizes and capacities were deployed.
These robots ranged from tethered to wirelessly operated, and from the size of a lunch box to the size of a lawnmower (Snyder 2001). Their primary functions were to search for victims and for paths through the rubble that would be quicker to excavate, to perform structural inspection, and to detect hazardous material. During the WTC response, four teams of scientists coordinated by the Center for Robot-Assisted Search and Rescue (CRASAR) were called to the operation. These groups came from a variety of sectors, including the University of South Florida (USF), the Space and Naval Warfare Systems Command (SPAWAR), Foster-Miller Inc., and iRobot Inc. Although many robots were present with these groups, only two types (three robots) were officially used on the rubble pile (Murphy 2004b): the Micro-VGTV (Variable Geometry Tracked Vehicle) by Inuktun Service Ltd. (Micro-VGTV 2006), and the Solem and Talon by Foster-Miller Inc. (2007). Though current search and rescue robots are far from perfect and are not likely to replace human rescuers and search dogs in the near future, they have demonstrated promising abilities to assist human rescuers.

Design Principles
During field operations, existing robots deployed into rubble to search for victims, or into minefields to locate bombs, are often deterred by narrow passages. Smaller robots can often reach deeper into a rubble pile than larger ones; however, smaller robots are more likely to be lost when they fall into large openings. Since search and rescue robots are typically deployed to work in highly unstructured environments, damage to and loss of robots due to uncontrollable external factors should not be considered failures; instead, all losses should be treated as a calculable risk and incorporated into the normal operating cost. Following this line of thought, to minimize operating cost, small low-cost search and rescue robots that can be deployed in high volume are more suitable for search and rescue missions in unstructured environments than large sophisticated units. The search and rescue robot system discussed in this paper consists of an operation console and two autonomous robots. The robots are essentially two mechatronically loaded aluminum cases, each fitted with two tread-belts driven by two separate gear-motors. Each robot is equipped with a Thermal Array Sensor (TAS), a camera for transmitting visual images to the operator, a microphone for picking up sound under the rubble, an accelerometer to determine the orientation of the robot with respect to the gravitational pull, a sonar range finder for obstacle avoidance, a high-intensity LED for lighting, a 6V rechargeable battery, a custom-built AIS control board designed for executing different AIS control algorithms on the physical prototype (Fig. 5), and a ZigBee wireless network module for communication between the operator and the robots.

Onboard Mechatronic Devices
The AIS control board is equipped with an on-board 5V regulator and can operate between 5V and 12V through the Main. The Main switch can cut off power supplied through the Main; power from the Main is split into two channels, one maintaining the original voltage and the other regulated to 5V. The two 2A on-board motor channels, controlled by an L293D, can be set to operate from regulated or non-regulated power by relocating a pin. Six servo controllers for typical hobbyist RC servo motors are aligned at the south end of the board; these channels can also be used for digital I/O control. A two-channel tri-color LED indicator for testing and debugging and a smaller LED power indicator are also installed. A 5V I²C port that can support up to eight I²C-enabled instruments and an RS232 serial port for programming are also available (Fig. 6). The prototype is equipped with the camera, TAS, and high-intensity LED encased in the aluminum case; the sonar is to be encased in the second prototype. The operation console consists of a mini-monitor for displaying video images obtained from the robots, a ZigBee wireless network module for communication, and a remote control unit for interfacing human inputs to the mechatronic system. The basic layout of the system is illustrated in Fig. 7. The primary function of the robots is to navigate autonomously into rubble to search for living bodies using the TAS mounted at the front of the robot. The TAS is a thermopile array that detects infrared in the 2µm to 22µm range. The unit has eight thermopiles arranged in a row and can measure the temperature of 8 adjacent points simultaneously. These thermopiles are identical to those used in non-contact infrared thermometers, and can detect the heat generated by a human body from 2 meters away regardless of lighting conditions.
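The victim-detection logic this sensor enables can be sketched as follows. This is a minimal illustration, not the actual firmware: the temperature band for "human-body-like" heat is an assumed 30-40°C, and the function name is invented for illustration.

```python
# Hypothetical sketch: scan the 8 simultaneous readings of the
# thermopile array (TAS) for human-body-like heat. The band below
# is an assumption, not a value taken from the paper.

HUMAN_TEMP_MIN_C = 30.0  # assumed lower bound for body-surface heat
HUMAN_TEMP_MAX_C = 40.0  # assumed upper bound

def detect_body_heat(tas_pixels):
    """Return True if any of the 8 adjacent temperature points
    falls inside the human-body-like band."""
    if len(tas_pixels) != 8:
        raise ValueError("the TAS reports exactly 8 adjacent points")
    return any(HUMAN_TEMP_MIN_C <= t <= HUMAN_TEMP_MAX_C for t in tas_pixels)
```

Because the row of thermopiles measures eight adjacent points at once, a single scan line is enough to flag a warm object as the robot sweeps past it.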
The robots can avoid obstacles and find passages under rubble autonomously using their sonar range finders, but the operator can, at any time, choose to control each robot individually using the remote controller with the assistance of the control console's mini-monitor. This alternative control scheme enables the human operator to help the robot solve navigation problems based on real-time visual images. Without it, the robots would require many more onboard sensors and higher computational power to achieve comparable results, which in turn would add to the weight and cost of the robots. The robots are equipped with two separate communication channels to minimize power consumption. The A/V channel for audio and video uses 2.4GHz transmission and is turned off by default to save power. When the robot detects an object with human-body-like temperature, or when it has difficulty navigating out of a trap, it generates a request for the operator to turn on the A/V channel and assist it to navigate or to determine whether the object is a human being. The data channel is for sharing data and transmitting commands between the robots and the operator. This is done through the second channel using the ZigBee communication modules (ZigBee 2007), custom-designed by Sengital (2007) (see Fig. 8). ZigBee is a low-power, short-distance wireless standard based on 802.15.4 that was developed to address the needs of wireless sensing and control applications. ZigBee supports many network topologies, including mesh. Mesh networking can extend the range of the network through routing, while self-healing increases the reliability of the network by re-routing a message in case of a node failure. This feature is highly desirable for search and rescue robots operating in unstructured environments. The ZigBee communication channel can also be turned off to save power, and can be woken wirelessly with a single command; in fact, it is programmed to stay in standby mode whenever it is not transmitting or receiving data.
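The dual-channel power policy described above can be sketched as a small state holder. This is an illustrative sketch only: the class, method names, and the request message are invented here, not taken from the robot's actual software.

```python
# Illustrative sketch of the two-channel power policy: the 2.4 GHz
# A/V link stays off until an exception occurs, and the ZigBee data
# link idles in standby between transmissions. All names are
# hypothetical, chosen for illustration.

class RadioManager:
    def __init__(self):
        self.av_on = False           # A/V channel off by default
        self.zigbee_standby = True   # data channel idles in standby

    def request_operator_help(self):
        """On detecting body heat or a trap, wake the data link and
        ask the operator to power up the A/V channel."""
        self.zigbee_standby = False       # wake ZigBee to transmit
        return "REQUEST_AV_CHANNEL"       # message sent to the console

    def operator_enables_av(self):
        self.av_on = True                 # operator grants A/V power

    def idle(self):
        """Return to the low-power state once the exception is handled."""
        self.av_on = False
        self.zigbee_standby = True
```

The point of the design is that the expensive 2.4GHz video link is only ever powered at the operator's explicit request, while the cheap ZigBee link is always reachable with a single wake command.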

Behavioral Design
To effectively achieve their designated function, the robots behave in two distinct modes in response to external stimuli. These two modes govern the robots' actions in victim searching and in exception handling. In search mode, the robot uses its sonar to identify open passages and navigates autonomously into the rubble to look for possible victims using its TAS. While in this mode, the robot shuts down all onboard devices that are not directly related to its objective in order to conserve energy for navigation and exploration; in practice, the A/V system and the high-intensity LED for illumination are deactivated. When the robot identifies a possible victim based on data obtained from the TAS, or when the robot believes it is trapped in rubble, it switches to exception-handling mode to request operator assistance. In exception-handling mode, the robot first sends all data related to its current situation (i.e., the current set of data from the TAS and sonar) plus its current status (i.e., possible victim identified, or trapped) to the operation console. Then it shuts down all energy-consuming devices, puts the ZigBee communication module into standby mode, and waits for the operator's assistance. The human operator can reactivate the robot wirelessly by responding through the console. Once reactivated in exception-handling mode, the robot reinitializes the A/V device, the LED, the sonar, the TAS, and the motor controllers to help the human operator determine whether the identified object is a living human body. The human operator can also remotely control the robot to navigate out of a trap with the assistance of the video feedback. The robot switches back to search mode at the operator's command. External interruptions (operator commands or help requests) received through the ZigBee communication module can also cause the robot to enter exception-handling mode.
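The two-mode behavior above is, in effect, a small state machine. The sketch below is a hedged illustration under assumed names: the event strings, device names, and class are inventions for this example, not the robot's real event vocabulary.

```python
# Minimal sketch of the two behavioral modes described above
# (search vs. exception handling). Event and device names are
# illustrative assumptions, not taken from the actual controller.

SEARCH, EXCEPTION = "search", "exception"

class ModeController:
    def __init__(self):
        self.mode = SEARCH
        # in search mode only navigation sensors stay powered
        self.devices = {"av": False, "led": False, "sonar": True, "tas": True}

    def on_event(self, event):
        if self.mode == SEARCH and event in ("victim_found", "trapped",
                                             "operator_interrupt"):
            self.mode = EXCEPTION
            # report, then power everything down and await the operator
            self.devices = {k: False for k in self.devices}
        elif self.mode == EXCEPTION and event == "operator_reactivate":
            # bring all devices up so the operator can inspect the scene
            self.devices = {k: True for k in self.devices}
        elif self.mode == EXCEPTION and event == "resume_search":
            self.mode = SEARCH
            self.devices = {"av": False, "led": False,
                            "sonar": True, "tas": True}
```

Keeping the mode logic this small is what lets the robot spend almost all of its battery budget on locomotion rather than on radios and lighting.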

Distributed Control System
The search and rescue robot control system presented in this paper is based on the GSCF, developed for controlling decentralized systems (Ko et al. 2004b). The General Suppression Control Framework (GSCF) (Ko et al. 2005) is built around the analogy of the immunological suppression hypothesis in discrimination theory (Aickelin et al. 2003). The framework consists of five major components (Fig. 9): the Affinity Evaluator, Cell Differentiator, Cell Reactor, Suppression Modulator, and Local Environment. Their functions are explained below.

1. Affinity Evaluator - evaluates information in the Local Environment against the objective and outputs an affinity index.

2. Cell Differentiator - evaluates inputs from the Affinity Evaluator and the Suppression Modulator to determine the type of behavior with which to react.

3. Cell Reactor - reacts to the cellular signal from the Cell Differentiator and executes the corresponding behaviors that take effect in the Local Environment.

4. Suppression Modulator - is a collection of Suppressor Cells that are sensitive to predefined external stimulants.

5. Local Environment - is where interactions between different components take place, and a theoretical space in which to integrate the physical objects and the abstract system in an analyzable form.

The first step in designing a GSCF-based control system is to identify the system objective and system constraints. For the search and rescue robots in this research, the primary objective is to search for human bodies under rubble using their TAS; therefore, searching for human-body-temperature-like heat-generating objects is the system objective. Next, for the robot to navigate through rubble to search for heat, it must be able to avoid obstacles and to ask for help when it is stuck. Obstacle avoidance is thus a crucial condition the robot must satisfy before pursuing the system objective; hence it is a system constraint. In addition, operator commands and help requests made by other robots within the system, received through the ZigBee communication module, are also treated as external constraints.
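The constraints identified above can be pictured as suppressor cells, each watching one sensor. The sketch below is an assumption-laden illustration: the 0/1 indices, the 0.3m safe range, and the 30-40°C heat band are invented thresholds, and the paper's suppressor cells may well produce graded rather than binary outputs.

```python
# Hedged sketch of encoding the identified constraints as GSCF
# suppressor cells. Thresholds and function names are assumptions
# made for illustration only.

def obstacle_suppressor(sonar_range_m, safe_range_m=0.3):
    """High suppression when an obstacle is closer than the safe range."""
    return 1.0 if sonar_range_m < safe_range_m else 0.0

def heat_suppressor(tas_pixels, lo=30.0, hi=40.0):
    """High suppression when any TAS pixel shows human-body-like heat."""
    return 1.0 if any(lo <= t <= hi for t in tas_pixels) else 0.0

def suppression_modulator(sonar_range_m, tas_pixels):
    """Aggregate the suppressor-cell outputs; here the maximum
    serves as the overall suppression index."""
    return max(obstacle_suppressor(sonar_range_m),
               heat_suppressor(tas_pixels))
```

Adding a new constraint, say a tilt limit from the accelerometer, would then amount to adding one more suppressor function to the modulator's aggregate.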
With the system objective and constraints identified, the next step is to organize these conditions into a form the system can solve. In GSCF, the fundamental idea is to let the Affinity Evaluator decide whether there is a problem to solve (a system objective to pursue), and then consult the Cell Differentiator to decide whether the system has the resources to solve the problem under the imposed constraints. For the search and rescue robot, the Affinity Evaluator is responsible for monitoring the status of the system objective. The system objective is said to have been achieved when a human-body-temperature-like heating object is detected. The Affinity Evaluator produces a high affinity index when the system objective is achieved, encouraging the system to behave aggressively: a robot in aggressive mode remains in its position and performs a series of actions to alert the operator for assistance. Otherwise, the Affinity Evaluator produces a low affinity index to allow the system to continue exploring the surroundings in search of heating objects. When the affinity index is low, the Cell Differentiator actively evaluates various system constraints to determine how the robot should behave. The constraints being evaluated may be predefined system constraints or newly developed constraints arising from changes in the environment. GSCF defines these constraints as suppressor cells (SC); these cells may evolve to adapt to new changes and may proliferate to increase their sensitivity to specific stimulants. The search and rescue robots under discussion have two main sensors that determine the robots' behaviors: the sonar range finder helps the robot avoid obstacles, and the TAS helps it locate heating objects. Suppressor cells with high sensitivity to these sensors are situated in the Suppression Modulator.
The Suppression Modulator is a very important component in GSCF; it contains suppressor cells that are sensitive to particular sensors and can be viewed as representations of external constraints reacting inside the control system. The function of the Cell Differentiator, on the other hand, is similar to the biological cell differentiation mechanism, in which cells develop aggressive or tolerant behavior in response to the type of cytokines present in the immune system. Like the Suppression Modulator, the Cell Differentiator is an important component of GSCF; it is responsible for integrating complex information from different sources into simple instructions and for converting intricate problems into quantitative outputs. The decision flow of the Cell Differentiator is summarized in the flow chart shown in Fig. 10. The suppression indices from the suppressor cells have priority over all others and are evaluated first to see whether the robot is blocked by obstacles or has found a heating body. If the suppression index is high, meaning the system has detected something unusual, the Suppression Modulator can force the robot to behave in aggressive mode instantly. On the other hand, when the suppression index is low, the system checks the affinity index and follows the normal procedure to determine how the robot should behave. Since the Cell Differentiator in GSCF is only responsible for producing high-level behavioral instructions such as "sound the alarm", "stand fast", and "search for heat", there has to be a component that interprets these high-level commands into lower-level commands for the mechanical controllers. This component is the Cell Reactor. Since mechanical control schemes vary greatly between operation platforms, GSCF delegates this work to the Cell Reactor, so that the high-level design of the other components can remain platform independent.
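The decision flow summarized in Fig. 10 reduces to a short priority check, sketched below. The 0.5 threshold and the behavior labels are assumptions for illustration; the actual flow chart may use different cut-offs and a richer set of output behaviors.

```python
# Sketch of the Cell Differentiator's decision flow: the suppression
# index is checked first and, when high, forces aggressive behavior;
# otherwise the affinity index selects the behavior. The threshold
# and labels are illustrative assumptions.

def differentiate(suppression_index, affinity_index, threshold=0.5):
    if suppression_index >= threshold:
        # something unusual detected: force aggressive mode instantly
        return "aggressive"
    if affinity_index >= threshold:
        # objective achieved: hold position and alert the operator
        return "aggressive"
    # nothing unusual: keep exploring for heat sources
    return "search_for_heat"
```

Giving the suppression index first priority is what lets a constraint such as an imminent obstacle override the normal objective-driven behavior without any extra coordination.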

System Evaluation
To evaluate the performance of the field system and to determine points for improvement, the distributed search and rescue robot system was put to the test in a semi-unstructured environment. The test environment was a dumpsite for old furniture and equipment; the piles of chairs, broken pallets, and construction debris resemble conditions close to an earthquake-affected indoor environment. The purpose of the test was to observe how well the robots navigate between rubble and under what conditions they navigate best or worst. The user-friendliness of the control interface through which the human operator interacts with the robot was also of interest to the research. The robot deployed into the test environment performed as designed. It navigated autonomously into the rubble to search for objects emitting heat close to the temperature emitted by a living human body. The robot stopped and switched into tolerant mode after it detected the operator's hand. The operator then took over control of the robot, navigated it to a different location, and let the robot resume its patrol. The mobility of the robot is biased towards certain terrain. The small size of the robot inherently handicaps its mobility over terrain with large holes, as the robot would simply fall through them as it rolls over; in front of narrow passages, however, the robot demonstrated good mobility. The equipped accelerometer helped the robot determine whether it was flipped over and allowed the control system to change its motor directions accordingly, so the robot could continue to move in the same direction after being accidentally flipped. This feature proved very useful over rough terrain and in narrow passages, as the operator does not need to know which side of the robot faces up to drive the robot forward.
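The flip-recovery behavior can be sketched as a sign flip on the motor command. This is an illustrative sketch under assumed conventions: the sign of the accelerometer's vertical axis, and the function names, are inventions here; the real controller may use a different axis convention or filtering.

```python
# Illustrative sketch of flip recovery: the accelerometer's vertical
# axis tells the controller whether the chassis is upside down, and
# both tread-belt motors are inverted so the robot keeps moving in
# the same world direction. Axis convention is an assumption.

def is_flipped(accel_z_g):
    """Assume gravity reads about -1 g on the z axis when upright
    and about +1 g when the chassis is upside down."""
    return accel_z_g > 0.0

def motor_command(forward_speed, accel_z_g):
    """Return (left, right) tread speeds, inverting both motors
    when the chassis is upside down."""
    sign = -1.0 if is_flipped(accel_z_g) else 1.0
    return (sign * forward_speed, sign * forward_speed)
```

Because the correction is applied below the operator's commands, "forward" on the remote controller always means forward on the ground, whichever side of the robot is up.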
Since this is only a prototype for testing the concept of controlling low-cost autonomous search and rescue robots with a GSCF-based system, it is fair to say that the performance of the robot is in line with design expectations and that the GSCF-based control system works well as the backbone of the system. To further develop the current prototype, certain improvements can be made; suggestions and recommendations derived from this research are discussed in the concluding section.

Conclusions
This paper presented a low-cost search and rescue robot system that can navigate into voids in rubble, avoid obstacles, detect living-human body temperature, transfer video images, and communicate over a low-power ZigBee network. The robot system consists of two robots and one operator console and can be expanded to include any number of robots. The immunity-based control system enables the system to be controlled in a decentralized manner using simple commands and limited communication power. In spite of the technological challenges and the natural human mistrust of new technologies, search and rescue robots will become an indispensable tool in future rescue operations. Developing and fielding search and rescue robots alongside regular rescue teams can help scientists better understand the strengths and weaknesses of different robot designs in different situations. Having robots work in parallel with regular rescue teams can also help scientists investigate how robots should behave to comply with their operators' instructions and to best assist the rescue effort in general. A user-friendly operation control interface allows amateur rescuers to be trained to operate the robot in a shorter period of time, eliminating the need to occupy limited professionals to look after each robot. Low manufacturing cost allows robots to be deployed in large quantities to increase the chance of finding survivors. The battery is the heart of the robot, and lighter, smaller, and more powerful batteries are an important constituent of effective search and rescue robots. An emergency wireless network is also important for coordinating actions between robots, collecting visual images from the robots, and communicating with a victim when a robot finds one.

Acknowledgements
This work was supported in part by the Research Grant Council of the Hong Kong Special Administrative Region under the CERG Project No. HKU7142/06E.