The ability to navigate autonomously in the surrounding space (to build a map and to locate oneself on it) is one of the key capabilities without which no mobile robot, and in particular no unmanned aerial vehicle, can operate on its own. It is not surprising that roughly 60% of the topics discussed at any major robotics conference relate, in one way or another, to this question, to which an exact and complete answer has yet to be found.
In robotics, the tasks of building a map (mapping) and determining one's position on it (localization) are usually combined into a single problem of simultaneous localization and mapping (SLAM). The SLAM problem can be solved in many different ways, depending on what information about the environment is available to the agent, which in turn depends on the sensors the agent is equipped with.
In our research, we focus on small (up to 50 cm in diameter) multi-rotor aircraft, which, owing to design constraints and low payload capacity (as well as a low thrust-to-weight ratio), carry only compact video cameras. We therefore solve the SLAM problem by processing the video stream (so-called vSLAM, visual SLAM).
As part of this research direction, you are invited to participate in the development of effective methods for mapping and localization based on video data.
Requirements:
Proficiency in C++.
Experience with Linux (OpenCV and ROS are a plus).
Ability to read scientific and technical literature (articles, textbooks, technical documentation, manuals) in English.