Recently, Pakistan experienced a devastating flood that left a third of the country underwater and killed thousands of people, many of whose deaths were never even reported. Many of these precious lives could have been saved if rescuers had acted sooner and had better resources to carry out effective rescue operations. Now that most survivors have been rescued and housed in relief camps, we must not forget this disaster; instead, we should arm ourselves with better plans and strategies so that, God forbid, if another such event occurs we are better equipped to cope with it. So what do autonomous drones have to do with floods in Pakistan?
Today we would like to present a strategy that many countries around the world have adopted to great benefit: using autonomous drones to carry out search-and-rescue missions after such catastrophes. The equipment is relatively expensive, but it is also high-value, limiting damage in ways that few other techniques can match. During hurricanes, flash floods, and other disasters, it can be extremely dangerous to send in first responders, even though people may badly need help.
In Pakistan, rescuers use helicopters in some cases, but most are flown manually by individual pilots and need a whole crew to operate. That limits how quickly rescuers can survey an entire affected area, and it can delay aid from reaching victims. Autonomous drones could cover more ground faster, especially if they could identify people in need and notify rescue teams. Even manually piloted, remote-controlled drones are not used in Pakistan, although first-world countries have already moved beyond them.
Floods and other disaster areas are often cluttered with downed trees, collapsed buildings, torn-up roads, and other disarray that can make spotting victims in need of rescue very difficult. 3D lidar sensor technology, which uses light pulses, can detect objects hidden by overhanging trees.
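To illustrate the idea, here is a minimal sketch of how lidar returns can reveal objects beneath overhanging trees. It assumes a hypothetical point cloud already expressed as x, y, z coordinates in metres; real systems use dedicated point-cloud libraries, but a simple height filter conveys the principle.

```python
import numpy as np

def returns_below_canopy(points, canopy_height=3.0):
    """Keep lidar returns whose height is below a canopy threshold.

    points: (N, 3) array of x, y, z coordinates in metres.
    A lidar pulse can produce multiple returns, so some points land
    beneath overhanging foliage; filtering by height exposes
    ground-level objects that an aerial photo would miss.
    """
    points = np.asarray(points, dtype=float)
    return points[points[:, 2] < canopy_height]

# Hypothetical scan: two canopy hits and one ground-level return.
scan = np.array([[0.0, 0.0, 8.2],   # treetop
                 [0.1, 0.0, 7.9],   # branch
                 [0.1, 0.1, 0.4]])  # possible person under the tree
print(returns_below_canopy(scan))   # only the ground-level return survives
```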
Autonomous drones can carry an artificial neural network system running on an onboard computer. This system emulates some of the ways human vision works: it analyzes images captured by the drone's sensors and communicates notable findings to human supervisors. First, the system processes the images to improve their clarity. Just as humans squint to adjust their focus, this technology takes detailed estimates of the darker regions in a scene and computationally lightens the images.
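One simple way to lighten dark regions, sketched below, is gamma correction: raising normalised pixel values to a power below one boosts dark pixels much more than bright ones. This is only an illustrative stand-in for the more sophisticated enhancement the article describes.

```python
import numpy as np

def lighten_dark_regions(image, gamma=0.5):
    """Brighten an image with gamma correction (gamma < 1 lightens).

    image: array of pixel intensities in [0, 255].
    Dark values are raised proportionally more than bright ones,
    mimicking the computational 'squint' described above.
    """
    normalised = np.clip(np.asarray(image, dtype=float), 0, 255) / 255.0
    return (normalised ** gamma) * 255.0

dark_pixel = lighten_dark_regions(np.array([25.0]))    # shadowed area
bright_pixel = lighten_dark_regions(np.array([225.0])) # well-lit area
```

Running this, the shadowed pixel gains far more brightness than the well-lit one, so detail hidden in shadow becomes visible without blowing out the rest of the scene.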
In a rainy environment, human brains use a brilliant strategy to see clearly: by noticing the parts of a scene that don't change as the raindrops fall, people can see reasonably well despite the rain. The technology uses the same strategy, continuously investigating the contents of each location in a sequence of images to get clear information about the objects in that location.
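The "ignore what keeps changing" strategy above can be sketched as a per-pixel temporal median over a short frame sequence: a raindrop occupies any one pixel only briefly, so the median keeps the stable scene and discards the streaks. This is a generic toy version, not the lab's actual algorithm.

```python
import numpy as np

def temporal_median(frames):
    """Per-pixel median across a sequence of frames.

    Transient occlusions like raindrops affect each pixel in only a
    few frames, so the median over time recovers the static scene.
    """
    return np.median(np.stack(frames), axis=0)

# Toy 1x3 'video': a static scene with a bright raindrop
# passing over a different pixel in each frame.
frames = [np.array([[10.0, 20.0, 30.0]]),
          np.array([[255.0, 20.0, 30.0]]),   # drop over pixel 0
          np.array([[10.0, 255.0, 30.0]]),   # drop over pixel 1
          np.array([[10.0, 20.0, 255.0]]),   # drop over pixel 2
          np.array([[10.0, 20.0, 30.0]])]
clean = temporal_median(frames)
print(clean)  # the raindrops vanish: [[10. 20. 30.]]
```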
Autonomous drone technology is not alien to Pakistan: undergraduates from NUST have built Pakistan's first autonomous drone and represented the country in various international competitions and exhibitions. They have won many competitions worldwide and are impressing the world with their creativity and hard work.
The students involved in making these drones include Muhammad Hassan Khan, Hafiz Hamza Jalil Qureshi, Ali Shair Muhammad Bhutta, Muhammad Taaha Rana, and many others. They were fourth-year mechanical engineering students at CEME when they built the drones in 2016.
The group has won nine international awards since 2018 while representing Pakistan worldwide, including the Unmanned Aircraft Systems (UAS) Challenge 2021 and its best safety award. Clearly they have produced a fine product, and the government could invest in these students and equip them with better resources before the next flood or other disaster comes around.
Other researchers have also invested their efforts in developing drones, especially drones that can help prevent flood damage. Vijayan Asari at the University of Dayton Vision Lab has been designing autonomous systems intended to eventually help spot people who might be trapped by debris. Their multi-sensor technology mimics the behavior of human rescuers: it looks broadly across wide areas, quickly chooses specific regions to focus on, examines them more closely, and determines whether anyone needs help.
The deep learning technology they use mimics the structure and behavior of a human brain in processing the images captured by the 2D and 3D sensors embedded in the drones. It can process large amounts of data simultaneously to make decisions in real time.
They also use a different strategy: computing 3D models of people and rotating the shapes in all directions, training the autonomous machine to perform as a human rescuer would. That allows the system to identify people in various positions, such as lying prone or curled in the fetal position, even from different viewing angles and in varying lighting and weather conditions.
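Rotating a modelled body shape to generate many training views is a standard data-augmentation idea. The sketch below, a simplified assumption about how such augmentation might work rather than the lab's actual pipeline, rotates a hypothetical 3D point set about the vertical axis at regular angles.

```python
import numpy as np

def rotate_about_z(points, angle_deg):
    """Rotate a 3D point cloud about the vertical (z) axis.

    Generating many rotated copies of a body shape lets a detector
    learn to recognise the same pose from any viewing direction.
    """
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    return np.asarray(points, dtype=float) @ rot.T

# Hypothetical body shape (two landmark points, in metres):
body = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 1.7]])

# Augment the training set with views every 30 degrees.
augmented = [rotate_about_z(body, a) for a in range(0, 360, 30)]
```

Each rotated copy is the same shape seen from a different heading, so a detector trained on the augmented set is less sensitive to the drone's approach angle.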
The system can also be trained to detect and locate a leg sticking out from under the rubble, a hand waving at a distance, or a head popping up above a pile of wooden blocks. It can tell a person or animal apart from a tree, bush, or vehicle.