Research Activities

I consider myself a roboticist with a strong passion for building practical robotic systems that can positively impact our lives and improve our quality of life.

Having worked extensively in the robotics, autonomous systems, and artificial intelligence domains, I have developed a strong interest in advancing the state of robotics and autonomous systems to enable their deployment in practical applications.

Coverage Path Planning

Coverage path planning focuses on moving a robotic platform through a particular environment in order to achieve the coverage required by the intended application. For instance, domestic robotic vacuum cleaners attempt to achieve full coverage by moving around the house in order to clean it completely. Similarly, a painting robot uses a coverage algorithm in order to paint a surface completely. Many newer applications, however, focus on either 3D reconstruction or inspection of large structures, buildings, or areas (e.g., bridges, wind turbines, planes, power plants, etc.).
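
To make the full-coverage idea concrete, below is a minimal Python sketch (my own illustration; the grid size and the cell-by-cell waypoint representation are assumptions, not taken from any specific system) of the back-and-forth sweep a simple coverage planner might produce over a rectangular grid of free cells.

# Minimal boustrophedon (back-and-forth) sweep over a rectangular grid of cells.
# The grid dimensions and the (row, col) waypoint representation are illustrative.

def boustrophedon_path(rows, cols):
    """Return a list of (row, col) cells that visits every cell exactly once."""
    path = []
    for r in range(rows):
        cells = range(cols) if r % 2 == 0 else reversed(range(cols))
        path.extend((r, c) for c in cells)
    return path

if __name__ == "__main__":
    # A 4 x 5 grid is swept row by row, alternating direction on each row.
    waypoints = boustrophedon_path(4, 5)
    print(len(waypoints), "waypoints, starting with", waypoints[:6])

Practical planners typically decompose the obstacle-free space into sweepable cells and connect sweeps of this kind, but the underlying motion pattern is the same.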

The focus of this research was coverage path planning for 3D reconstruction and inspection of geometrically complex structures and infrastructure. Coverage path planning in these cases requires gathering visual information about the surfaces of the target objects or environments. Gathering this surface information can consume a significant amount of time and typically requires human intervention; autonomously planning an effective coverage path to improve efficiency and reduce cost is therefore critical and highly desirable. For coverage tasks, autonomous robotic systems are equipped with vision sensors such as RGB cameras, depth sensors, laser scanners, and thermal cameras to collect visual information about an object's surface. These sensor measurements are subject to constraints such as mounting angle, occlusions, minimum and maximum clipping distances, and Field of View (FOV). The coverage path is then an optimized collection of viewpoints that the robot has to follow in order to achieve the desired coverage performance.
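
As a rough illustration of how such viewpoint-based planning can work (a sketch under assumed sensor parameters and candidate poses, with occlusion checking omitted; it is not the exact method used in this research), the Python snippet below tests candidate viewpoints against clipping-distance and FOV constraints and greedily selects a subset that covers most of a target point cloud.

# Sketch: greedy viewpoint selection for surface coverage. A surface point is
# counted as visible from a viewpoint if it lies within the sensor's clipping
# range and FOV cone; occlusion checking is omitted for brevity. The point
# cloud, candidate poses, and sensor parameters are illustrative assumptions.
import numpy as np

def visible_points(cam_pos, cam_dir, points, d_min, d_max, half_fov_rad):
    """Indices of points inside the sensor frustum (cam_dir must be a unit vector)."""
    vecs = points - cam_pos                              # rays from camera to points
    dists = np.linalg.norm(vecs, axis=1)
    in_range = (dists > d_min) & (dists < d_max)
    cos_angle = (vecs @ cam_dir) / np.maximum(dists, 1e-9)
    in_fov = cos_angle > np.cos(half_fov_rad)
    return np.flatnonzero(in_range & in_fov)

def greedy_viewpoints(candidates, points, d_min, d_max, half_fov, target_ratio=0.95):
    """Greedily pick (position, direction) pairs until the target coverage is reached."""
    covered = np.zeros(len(points), dtype=bool)
    plan = []
    while covered.mean() < target_ratio:
        best, best_gain = None, 0
        for pos, direction in candidates:
            vis = visible_points(pos, direction, points, d_min, d_max, half_fov)
            gain = np.count_nonzero(~covered[vis])
            if gain > best_gain:
                best, best_gain = (pos, direction, vis), gain
        if best is None:                                 # nothing adds coverage; stop
            break
        pos, direction, vis = best
        covered[vis] = True
        plan.append((pos, direction))
    return plan, covered.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    surface = rng.normal(size=(500, 3))                  # assumed target: a unit sphere
    surface /= np.linalg.norm(surface, axis=1, keepdims=True)
    candidates = []
    for _ in range(40):                                  # candidates 3 m out, looking inwards
        pos = rng.normal(size=3)
        pos = 3.0 * pos / np.linalg.norm(pos)
        candidates.append((pos, -pos / np.linalg.norm(pos)))
    plan, ratio = greedy_viewpoints(candidates, surface, 0.5, 3.0, np.radians(30))
    print(f"{len(plan)} viewpoints selected, {ratio:.0%} of surface points covered")

Ordering the selected viewpoints into a short tour (e.g., with a travelling-salesman heuristic) then yields the coverage path the robot actually flies or drives.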

Collaborators

Students

Relevant Publications

Search and Rescue

Robotic Search and Rescue (RSAR) is a challenging yet promising technology area with the potential for high-impact practical deployment in real-world disaster scenarios. Response time in disaster environments is a key factor that requires a careful balance between rapid and safe intervention. The SAR response should be fast in order to maximize the number of detected survivors and to locate all sources of danger in a timely manner, and it should be organized and synchronized in order to save as many victims as possible, which is the primary objective of search and rescue operations. In general, responders have approximately 48 hours to find trapped survivors; beyond that, the likelihood of finding victims alive drops substantially. Moreover, SAR sites are usually hazardous, making the SAR team more vulnerable and forcing it to operate in an unstructured environment with limited access to medical supplies, power sources, and other essential tools and utilities.

The research work conducted in this area mainly focuses on:

  • Exploring indoor environments in order to identify and locate victims using multiple sensing modalities (e.g., electro-optical, thermal, wireless, and stereo imaging); a rough fusion sketch follows this list
  • Coordinating a team of robots and humans during search and rescue missions
  • Semantic environment mapping that labels hazards and ranks them
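
As a rough illustration of the multi-sensor idea in the first item (a sketch with assumed sensor layers, grid size, and fusion weights; it is not the published multi-layer mapping method itself), the snippet below fuses per-sensor detection-probability grids into a single victim-likelihood map and ranks the most promising cells.

# Illustrative fusion of per-sensor victim-detection grids into one likelihood map.
# The sensor names, grid shape, and fusion weights are assumptions for the sketch;
# the cited multi-layer mapping work defines its own layers and update rules.
import numpy as np

def fuse_layers(layers, weights):
    """Weighted average of per-sensor probability grids (same shape, values in [0, 1])."""
    total = sum(weights.values())
    return sum(weights[name] * grid for name, grid in layers.items()) / total

def top_cells(fused, k=3):
    """Return the k grid cells with the highest fused victim likelihood."""
    flat = np.argsort(fused, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, fused.shape)) for i in flat]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (20, 20)                               # assumed 2D grid over the search area
    layers = {"thermal": rng.random(shape),        # stand-ins for per-sensor detections
              "vision": rng.random(shape),
              "wireless": rng.random(shape)}
    weights = {"thermal": 0.5, "vision": 0.3, "wireless": 0.2}
    print("Most likely victim cells:", top_cells(fuse_layers(layers, weights)))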

Collaborators

Students

Relevant Publications

UAV-based Victim Localization

@article{goian2019victim,
  title={Victim localization in USAR scenario exploiting multi-layer mapping structure},
  author={Goian, Abdulrahman and Ashour, Reem and Ahmad, Ubaid and Taha, Tarek and Almoosa, Nawaf and Seneviratne, Lakmal},
  journal={Remote Sensing},
  volume={11},
  number={22},
  pages={2704},
  year={2019},
  publisher={Multidisciplinary Digital Publishing Institute}
}

Semantic Risk 3D Mapping

Exploration using Adaptive Grid Sampling

@article{almadhoun2019guided,
  title={Guided next best view for 3D reconstruction of large complex structures},
  author={Almadhoun, Randa and Abduldayem, Abdullah and Taha, Tarek and Seneviratne, Lakmal and Zweiri, Yahya},
  journal={Remote Sensing},
  volume={11},
  number={20},
  pages={2440},
  year={2019},
  publisher={Multidisciplinary Digital Publishing Institute}
}

Exploration for Semantic 3D Mapping

Dual UAV collaborative transportation

Dual UAV payload transportation using RTDP

Multi-UAV coordinated payload transport

Site Inspection Drone

UAV payload transportation via RTDP

UAV wireless charging and tethering

Energy Estimation and Distribution

Next Best View (NBV) with profiling

Simultaneous Localization and Mapping (SLAM)

This work focused on reducing SLAM position estimation error by utilizing modern deep learning techniques and semantic scene understanding for richer feature extraction.
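
One common pattern in this direction (shown here as a minimal sketch with assumed class labels and inputs, not the exact pipeline developed in this work) is to use a semantic segmentation mask to discard visual features that fall on dynamic object classes before pose estimation, which tends to reduce drift in scenes containing people or vehicles.

# Minimal sketch: reject visual features that land on dynamic semantic classes
# (e.g., people, cars) before pose estimation. The class ids, mask, and keypoint
# format are assumptions; a real front end would obtain the mask from a
# segmentation network and the keypoints from a detector such as ORB.
import numpy as np

DYNAMIC_CLASSES = {11, 12, 13}   # assumed label ids for person, rider, car

def filter_static_keypoints(keypoints, semantic_mask):
    """Keep only (x, y) keypoints whose pixel is not labelled as a dynamic class."""
    kept = []
    for x, y in keypoints:
        label = semantic_mask[int(round(y)), int(round(x))]
        if label not in DYNAMIC_CLASSES:
            kept.append((x, y))
    return kept

if __name__ == "__main__":
    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[100:200, 300:400] = 11                      # a "person" region in the mask
    keypoints = [(320.0, 150.0), (50.0, 50.0), (610.2, 400.7)]
    print(filter_static_keypoints(keypoints, mask))  # the first keypoint is dropped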

Collaborators

Students

Relevant Publications

Human Robot Interaction (HRI) in Assistive Robotics

Intention prediction - wheelchair navigation

Minimal Interaction during Intention prediction

Other Activities

Strain sensor control

Robot control using a flexible strain sensor made from tissue paper

Indoor UAV Swarms

Swarm formation - 5 Drones - GPS Denied

Robot Manipulation

Robot object grasping

Haptics Teleoperation

Haptic-based UAV Teleoperation

Autonomous Indoor Navigation

Semantic 3D mapping

3D Mapping with Image snapshots