Research and Development of autonomous intelligent systems

Deep learning, computer vision, NLP and much more

Who we are

AUTONOMOUS GROUP was established in 2011, with the research and development of autonomous air, land and naval systems as its core business. Other domains of interest include cloud and mobile software development, artificial intelligence, surveillance of critical infrastructure, man-made structures and areas affected by natural disasters, support for interventions in emergencies, and autonomous systems testing.

What we do

Autonomous has a project-oriented organizational structure, which ensures high flexibility and productivity. The experts working with Autonomous are specialized in various domains: mobile software development, cloud computing, R&D robotics, computer vision, digital image processing, forensics, UAV training, UAV services, and technological partnerships. Together with private companies and public authorities, we have carried out research and development projects that make a key contribution to society. From industrial robots to autonomous underwater systems, we increase efficiency and bring innovation to public safety, transport, public services, industry, and energy and utilities.

Our strengths

  • Strong research and development expertise in computer vision and robotics
    Our researchers and engineers develop state-of-the-art object recognition systems using the latest machine learning technologies, with a focus on aerial image semantic segmentation and object detection (a minimal sketch follows this list). Besides developing application-specific computer vision systems, we also publish our results at top computer vision and machine learning conferences.
  • Team of experts in system design and system integration
    We have consolidated a team of experts in system design and system integration. This capability allows our team to easily integrate a large range of equipment and sensors into our configurations, or to integrate our systems into higher-level systems. This also includes the integration of UAV systems into a number of specialized Command and Control Centres.
  • Consistent background in designing integrated systems for mobile phones and tablets, with back-office capability in Google Cloud or Microsoft Azure
  • “Mesh Concept” implementation that generates a cost-effective and efficient mesh of unmanned platforms for the surveillance, detection and monitoring of any type of incident
    This approach allows long-tested technologies to be adapted to the specific requirements of the area to be surveyed, and increases monitoring and surveillance capability, particularly in remote and difficult-to-access locations.
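
To make the first strength concrete, here is a minimal, illustrative sketch of per-pixel aerial image labelling (semantic segmentation) with a small fully convolutional network in PyTorch. The tiny architecture and the three-class list are hypothetical placeholders, not our production models.

    # Minimal illustrative sketch: semantic segmentation of an aerial tile
    # with a tiny fully convolutional network (PyTorch). The architecture
    # and the class list are hypothetical placeholders.
    import torch
    import torch.nn as nn

    NUM_CLASSES = 3  # e.g. background, road, building (assumed classes)

    class TinySegNet(nn.Module):
        def __init__(self, num_classes):
            super().__init__()
            # Encoder: two stride-2 convolutions downsample the input 4x.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Decoder: two stride-2 transposed convolutions restore the
            # resolution, producing one score map per class.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, num_classes, 2, stride=2),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = TinySegNet(NUM_CLASSES).eval()
    tile = torch.rand(1, 3, 256, 256)       # stand-in for an RGB aerial tile
    with torch.no_grad():
        logits = model(tile)                # shape (1, NUM_CLASSES, 256, 256)
    labels = logits.argmax(dim=1)           # per-pixel class indices
    print(labels.shape)                     # torch.Size([1, 256, 256])

A real system would train a much deeper network of this kind on annotated aerial imagery; the sketch only shows the per-pixel labelling interface.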

Reference projects

A2I2
Automated Aerial Image Interpretation: from geometric alignment to semantic segmentation and recognition of object categories

Computer Vision will play an important role in the world of tomorrow, with the potential to improve quality of life and to enable future technologies.

We are committed to developing such smart vision systems, capable of operating in close relationship with various areas of robotics, such as autonomous aerial vehicles.
The domain of Computer Vision studies and develops computational methods and systems that can perceive the world through images and videos in a smart manner, as close as possible to the level of human visual perception. Despite being a relatively new subfield of Artificial Intelligence and Robotics, Computer Vision currently enjoys fast-growing development in both scientific research and industry.

We aim to develop high-performance prototypes through scientific research, as well as technological systems with immediate usability. In doing so, we will try to discover new aspects of the connections between the eye, sight and thinking, and to develop computing systems that can support such complex cognitive processes.

Kraken
Hybrid automated machine integrating concurrent manufacturing processes

KRAKEN is a project whose aim is to integrate manufacturing technologies into the largest and most accurate combined subtractive and additive device in the world.
KRAKEN will develop a hybrid manufacturing machine that equips industry with an all-in-one affordable machine for the customised design, production/repair and quality control of functional parts (made of aluminum, thermoset, or both materials combined) from 0.1 m to 20 m, through subtractive and novel additive technologies.

The integration of Key Enabling Technologies (KETs) such as 3D printing, robotics, 7-DoF (degrees of freedom) real-time control, complex monitoring and advanced control algorithms, supported by innovative CAM software, will make KRAKEN the largest 3D printer in the world for metallic and non-metallic materials, printing high-performance industrial products with greatly improved accuracy and final-product quality.

Muros
Multisensor Robot System for Aerial Monitoring of Critical Infrastructure

With the MUROS (Multisensor Robot System for Aerial Monitoring of Critical Infrastructure) project, we offer a surveillance and monitoring system built on unmanned aerial platforms for monitoring, preventing and reducing the effects of incidents that affect critical infrastructure, such as oil pipelines, railways, electric-power transmission lines and highways.

The activities developed within this R&D project contribute to its scientific value at various levels:
  • the configuration of an experimental model for monitoring and securing critical infrastructure;
  • the adaptation of the product to the research requirements of ESA (the European Space Agency);
  • educational and demonstration activities carried out for the multidisciplinary training of experts in relevant fields such as UAVs, sensors, sensor networks and advanced data-processing algorithms;
  • the development, at the national level, of the elements needed to sustain the material and human resources required in the long run for space exploration activities.

Swarms
Smart and Networking Underwater Robots in Cooperation Meshes

The primary goal of the SWARMs project is to expand the use of underwater and surface vehicles (AUVs, ROVs, USVs) to facilitate the conception, planning and execution of maritime and offshore operations and missions. This will reduce operational costs, increase the safety of tasks and of the people involved, and expand the offshore sector.

The SWARMs project aims to make AUVs, ROVs and USVs more accessible and useful, making autonomous maritime and offshore operations a viable option for new and existing industries by:
  • enabling AUVs/ROVs to work in a cooperative mesh, opening up new applications and ensuring reusability by promoting heterogeneous standard vehicles that can combine their capabilities, rather than further costly specialised vehicles (a toy illustration follows below);
  • increasing the autonomy of AUVs/USVs and improving the usability of ROVs for the execution of simple and complex tasks, contributing to more sophisticated mission operations.
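
As a toy illustration of the cooperative-mesh idea (not the project's actual planner, whose details are not described here), the Python sketch below greedily assigns each task to the nearest free vehicle that offers the required capability; all vehicle and task data are hypothetical.

    # Toy illustration of a cooperative mesh of heterogeneous vehicles:
    # each task goes to the nearest free vehicle with the needed capability.
    # Vehicles, capabilities and tasks are hypothetical examples.
    from dataclasses import dataclass
    from math import dist

    @dataclass
    class Vehicle:
        name: str
        kind: str                  # "AUV", "ROV" or "USV"
        position: tuple
        capabilities: set
        busy: bool = False

    @dataclass
    class Task:
        name: str
        position: tuple
        needs: str                 # required capability, e.g. "sonar"

    def assign(tasks, vehicles):
        """Greedy nearest-capable-vehicle assignment (illustrative only)."""
        plan = {}
        for task in tasks:
            candidates = [v for v in vehicles
                          if not v.busy and task.needs in v.capabilities]
            if not candidates:
                continue           # no capable vehicle is free for this task
            best = min(candidates,
                       key=lambda v: dist(v.position, task.position))
            best.busy = True
            plan[task.name] = best.name
        return plan

    fleet = [
        Vehicle("auv-1", "AUV", (0.0, 0.0), {"sonar", "camera"}),
        Vehicle("rov-1", "ROV", (5.0, 1.0), {"manipulator", "camera"}),
        Vehicle("usv-1", "USV", (2.0, 8.0), {"radio-relay"}),
    ]
    missions = [
        Task("survey-pipeline", (1.0, 1.0), "sonar"),
        Task("close-valve", (4.0, 2.0), "manipulator"),
    ]
    print(assign(missions, fleet))
    # -> {'survey-pipeline': 'auv-1', 'close-valve': 'rov-1'}

The point of the sketch is the design choice it encodes: standard vehicles advertise capability sets and are combined per mission, instead of building a costly specialised vehicle per task.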

Scientific publications

  • Aerial image geolocalization from recognition and matching of roads and intersections
Abstract: Aerial image analysis at a semantic level is important in many applications with strong potential impact in industry and consumer use, such as automated mapping, urban planning, real estate and environment monitoring, or disaster relief. The problem is enjoying a great interest in computer vision and remote sensing, due to increased computer power and improvement in automated image understanding algorithms. In this paper we address the task of automatic geolocalization of aerial images from recognition and matching of ...
    D Costea, M Leordeanu
    British Machine Vision Conference (BMVC)
    2017
  • A Local-Global Approach to Semantic Segmentation in Aerial Images (Master's Thesis, University Politehnica Bucharest)
Abstract: Aerial images are often taken under poor lighting conditions and contain low resolution objects, many times occluded by other objects. In this domain, visual context could be of great help, but there are still very few papers that consider context in aerial image understanding and still remains an open problem in computer vision. We propose a dual-stream deep neural network that processes information along two independent pathways. Our model learns to combine local and global appearance in a complementary ...
    AE Marcu
    arXiv preprint arXiv:1607.05620
    2016
  • Dual Local-Global Contextual Pathways for Recognition in Aerial Imagery
Abstract: Visual context is important in object recognition and it is still an open problem in computer vision. Along with the advent of deep convolutional neural networks (CNN), using contextual information with such systems starts to receive attention in the literature. At the same time, aerial imagery is gaining momentum. While advances in deep learning make good progress in aerial image analysis, this problem still poses many great challenges. Aerial images are often taken under poor lighting conditions and contain low resolution ...
    A Marcu, M Leordeanu
    arXiv preprint arXiv:1605.05462
    2016
  • Labeling the Features Not the Samples: Efficient Video Classification with Minimal Supervision
Abstract: Feature selection is essential for effective visual recognition. We propose an efficient joint classifier learning and feature selection method that discovers sparse, compact representations of input features from a vast sea of candidates, with an almost unsupervised formulation. Our method requires only the following knowledge, which we call the "feature sign": whether or not a particular feature has on average stronger values over positive samples than over negatives. We show how this can be estimated using as few as ...
    M Leordeanu, A Radu, S Baluja, R Sukthankar
    AAAI Conference on Artificial Intelligence
    2016
  • How hard can it be? Estimating the difficulty of visual search in an image
Abstract: We address the problem of estimating image difficulty, defined as the human response time for solving a visual search task. We collect human annotations of image difficulty for the PASCAL VOC 2012 data set through a crowd-sourcing platform. We then analyze what human interpretable image properties can have an impact on visual search difficulty, and how accurate are those properties for predicting difficulty. Next, we build a regression model based on deep features learned with state of the art convolutional ...
    R Tudor Ionescu, B Alexe, M Leordeanu, M Popescu, DP Papadopoulos, ...
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
    2016

© Copyright 2023 Autonomous. All Rights Reserved.