Vision

Fig. 1: Victim - Doll

The main task in the Rescue League is to detect victims, map possible routes into and out of the building, and send important information to search and rescue teams. The victims are simulated by dolls that show signs of life such as movement, body heat, speech, or at least breathing. Hazmat labels and eye charts are placed close to the victims, so the robot should be able to detect them and send this information to the rescue team.

To detect these hazmat labels and QR codes, the robots use simple USB cameras, and a database of the different hazmat labels was created. With the OpenCV library and other open-source tools, many different computer-vision algorithms can be tried out.

Fig. 2: HazMat Labels

Fig. 3: SIFT 1

Fig. 4: SIFT 2


Thermovision

Fig. 5: Doll detection with thermal camera

The core of the thermovision system is a FLIR infrared thermal camera (A320 or A65), which operates in the wavelength range of 7.5 µm to 13 µm. The camera uses an uncooled microbolometer to detect the infrared radiation emitted by the objects in the observed area. It runs at 30 fps, which also allows the movement of objects to be detected precisely. The sampled data is sent via wireless LAN to the main computer, where a program analyses the live stream. Figure 5 shows the image from the thermal camera compared to that of a standard camera.
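With a radiometric stream, a first analysis step is to flag pixels in a plausible human body-temperature band. The snippet below is a minimal sketch under the assumption that each frame arrives as a NumPy array of temperatures in degrees Celsius; the band limits are illustrative, not calibrated values.

```python
import numpy as np

def victim_mask(temp_frame, lo=30.0, hi=40.0):
    """Boolean mask of pixels within an assumed human body-temperature
    band (°C). Limits are illustrative, not calibrated values."""
    return (temp_frame >= lo) & (temp_frame <= hi)

# Synthetic 8x8 frame: 20 °C background with one warm 35 °C region
frame = np.full((8, 8), 20.0)
frame[2:5, 2:5] = 35.0      # 3x3 simulated warm area
mask = victim_mask(frame)
print(mask.sum())           # 9 warm pixels
```

The same thresholding with a higher band (e.g. above 60 °C) would flag dangerous heat sources instead of victims.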

It is planned to implement a smart algorithm that detects interesting objects such as victims automatically. For this purpose, the picture is scanned for conspicuous areas, and the objects in the picture are located using the temperature information: on the one hand, victims can be classified by a characteristic body temperature; on the other hand, dangerous heat sources can be localized. The next step in the development process will be to acquire detailed information about the location of a detected object and to mark it as a point of interest in the created map. The distance between the robot and the detected object will be calculated using a “depth of focus” algorithm.