ASUR Projects


Machine Perception, Reasoning and Intelligence

Picture Quality Metadata Extraction

0913_C1_Ph1_002 : QinetiQ

The picture quality of imagery captured by Unmanned Air System (UAS) camera sensors can be limited by transient events, such as the platform encountering turbulence and inducing motion blur, or camera autofocus lag (for zoomed-in views), especially for smaller UAS where mechanical stabilisation of camera sensors is impossible due to Size, Weight and Power (SWaP) or cost considerations.

This can significantly reduce the intelligence value of the imagery acquired and may inhibit the performance of other autonomous algorithms (e.g. automatic target or visual waypoint detection). With continuous operation of video cameras, the same ground area is likely to be viewed multiple times in succession, which can be wasteful of storage or communications bandwidth for an autonomous system.

There are significant benefits in providing on-platform assessment of picture quality immediately following acquisition, including:

  • Imagery can be ranked by quality to prioritise processing and communication of the best data;
  • Mission plans can be dynamically changed: failure to initially acquire good-quality imagery of an important area could automatically trigger an immediate overfly and re-collect, rather than necessitating a separate, subsequent mission (with cost and timeliness implications). Conversely, if good imagery is known to have been acquired rapidly, the mission might be altered to include additional, lower-priority, “stretch” activities;
  • Measurements of relevant picture quality attributes can be used to dynamically optimise camera system performance. The integration time (“shutter speed”) and sensitivity/electronic gain (“ISO speed”) of cameras provide trade-offs between contrast, noise and motion blur. The camera parameters can be dynamically altered to optimise image gathering based on attributes such as platform stability or visibility of a specific ground area (as opposed to conventional simplistic criteria such as light level);
  • Possible inaccuracies in subsequent processing arising from image degradation can be automatically identified.
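The quality-ranking idea in the first bullet can be illustrated with a standard sharpness measure. The sketch below ranks frames by the variance of a Laplacian response, a common blur/focus metric; it is a toy illustration on small intensity grids, not QinetiQ's method, and all names are invented.

```python
# Sketch: rank images by sharpness using variance of a Laplacian response.
# This is a standard focus/blur measure, shown here on toy 2D intensity
# grids; a real system would operate on full sensor frames.

def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian over the image interior."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def rank_by_quality(frames):
    """Return frame ids ordered sharpest first."""
    return sorted(frames, key=lambda fid: laplacian_variance(frames[fid]),
                  reverse=True)

# Toy example: a frame with a sharp diagonal edge vs a smooth gradient
# (a crude stand-in for motion blur).
sharp   = [[0, 0, 0, 9], [0, 0, 9, 9], [0, 9, 9, 9], [9, 9, 9, 9]]
blurred = [[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
frames = {"frame_a": sharp, "frame_b": blurred}
print(rank_by_quality(frames))  # sharpest frame first
```

A blurred frame flattens local intensity differences, so its Laplacian responses (and their variance) shrink towards zero, pushing it down the ranking.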

Graph-based Trajectory Mining for Knowledge Discovery of Vehicular Movements

0913_C1_Ph1_009 : Centre for Secure Information Technologies (CSIT), Queen's University Belfast

A summary of this task is not available

Real-time Navigational Meta-data

0913_C1_Ph1_013 : Plextek Consulting, trading as Plextek Services Ltd

This project addresses extraction of meta-data from sensor data (Challenge 1). Plextek aim to automatically generate meta-data (in the form of linear features or vectors) from sensors such as IR and visible cameras, in real-time, to permit remote operation of an Unmanned Air System (UAS) by a human. The degree of autonomy may be scaled (i.e. operation may range from piloting the UAS through to mission commander tasks).

Deep Methods for Unsupervised Metadata Extraction

0913_C1_Ph1_017 : BAE Systems

A summary of this task is not available

Intelligent Video CODEC

0913_C1_Ph2_001 : Roke Manor Research Ltd

A summary of this task is not available

Bio-Inspired Stabilised Payload, Environmental Calibration and Control Suite – 1 (Bio-SPECCS 1)

SUAS_2013_009 : Blue Bear Systems Research Ltd

A summary of this task is not available

Picture Quality Metadata Extraction

0913_C1_Ph2_002 : QinetiQ

A summary of this task is not available

Bio-inspired Stabilised Payload, Environmental Calibration and Control Suite – 2 (BioSPECCS 2)

1014_PH2_001 : Blue Bear Systems Research Ltd

The BioSPECCS gaze-stabilisation technology developed during the successful delivery of Phase 1 has been shown to improve the performance and capability of autonomous systems whose sensors are subject to high degrees of motion, particularly small air vehicles. BioSPECCS achieves this by improving the quality of sensor data through mechanical stabilisation of the sensor suite, and by increasing the available bandwidth of the sensor system through the use of rich vision. This performance improvement enables high manoeuvrability during challenging operations within complex urban environments. BioSPECCS 2 seeks to build on the Phase 1 work in order to address specific problems relating to Challenge 3.

Two key recommendations were agreed following the completion of Bio-SPECCS 1: firstly, to close the outer visual loops with wide-field machine vision to further explore the performance benefits of this system; secondly, to optimise for size, weight, inertia and actuation requirements to explore the potential for direct drives and compliant structures within the actuation system. BioSPECCS 2 will therefore focus on optimising the size and weight of the system delivered during Phase 1, to widen its potential for application on small platforms, and to undertake a safe, mission-based landing demonstration. This landing demonstration will enable the system to close the guidance and control loop for given landing profiles (through landing-site tracking) around the rich vision system, and will determine the suitability and safety of the landing zone. The wider outcomes and benefits from BioSPECCS 2, which relate more generally to robust, dependable autonomy of Unmanned Air Systems (UAS), are expected to be relevant across all other challenges and to the wider community relating to unmanned flight, including areas such as GPS-denied flight and airspace integration.

Visual Signature Reduction using Flexible Electronic Paper

1014_C3_PH1_045 : Plextek Consulting

Mini and micro Unmanned Air Systems (UAS) are potentially highly visible, especially where the apparent background (e.g. sky, ground) is changing. A camouflage scheme may be used to reduce visible signature, but it must be selected to suit a particular scenario (e.g. light grey sky, green/brown ground). If the environment or the weather changes, the camouflage loses some of its effectiveness.

Plextek Consulting are developing an adaptive visual camouflage technology for a fixed-wing UAS. With our proposed solution, the camouflage scheme displayed will be adjusted during a mission to suit the environment in which the UAS is operating, ensuring that the visual signature is always minimised. The technology has the potential to provide this capability in a very low Size, Weight and Power (SWaP) package, consistent with the constraints of a typical mini- or micro-UAS.

Building the Urban Picture Using Multi Sensor Data Gathering

1014_C1_Ph1_120 : Callen-Lenz Associates

A summary of this task is not available

Autonomous Multi-Criteria Decision Making for Integrated Power Systems

1014_C4_PH1_064 : Rolls-Royce Plc and Sheffield University

Integrated Power Systems use a combination of gas turbines, electric generators, fuel cells, super-capacitors, electric motors and batteries to meet the power needs of marine and aerospace systems. Intelligent control of these systems, by constructing a power schedule covering the supply and delivery plan for the entire operation of the system, has the potential to significantly increase efficiency, availability and life. This is achieved through a decision-making system within a Power Management System (PMS).

By constructing desirable power schedules through optimal decisions, the PMS delivers desirable characteristics such as maximising performance, maximising life or minimising fuel consumption. The diversity of operating environments for defence applications poses particular challenges above those experienced in civil Unmanned Air System (UAS) applications, introducing disturbances to the system. The PMS is expected to perform well in this environment, coping with the presence of noise and uncertain information.

This project sets out to demonstrate the feasibility of real-time, autonomous Multi-Criteria Decision Making (aMCDM) in the presence of real-world constraints, with application to optimising power management. Decision-making is a process that results in the selection of a single belief or course of action from amongst several alternative possible solutions. The evaluation of solution efficacy is subject to uncertainty and to competing objectives (life, performance, mission risk, etc.), which make the definition of what ‘best’ means non-trivial. For power management, the past and future sequence of actions has a significant influence on the current solution, which must be delivered in real-time. We propose the development of an approach founded in optimisation which addresses these challenges pertinent to defence applications.
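The selection step at the heart of multi-criteria decision-making can be sketched with the simplest scalarisation of competing objectives, a weighted sum. The candidate schedules, criteria and weights below are illustrative assumptions, not values from the project, and the real aMCDM approach handles uncertainty and sequential effects that this toy omits.

```python
# Minimal sketch of multi-criteria selection over candidate power schedules.
# A weighted sum is the simplest scalarisation of competing objectives;
# the schedule names, criteria and weights are invented for illustration.

def score(candidate, weights):
    """Higher is better; cost-like criteria enter with a negative weight."""
    return sum(weights[k] * candidate[k] for k in weights)

def choose_schedule(candidates, weights):
    """Pick the candidate schedule with the highest weighted score."""
    return max(candidates, key=lambda name: score(candidates[name], weights))

candidates = {
    "battery_heavy": {"performance": 0.6, "life": 0.9, "fuel_burn": 0.2},
    "turbine_heavy": {"performance": 0.9, "life": 0.5, "fuel_burn": 0.8},
    "balanced":      {"performance": 0.8, "life": 0.7, "fuel_burn": 0.5},
}
# Favour life and penalise fuel burn over raw performance.
weights = {"performance": 1.0, "life": 1.5, "fuel_burn": -1.0}

print(choose_schedule(candidates, weights))
```

Changing the weights changes which schedule wins, which is exactly why the definition of ‘best’ is non-trivial when objectives compete.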

A Neuro-inspired Multisensory Perceptual Decision-making Model with Decision Confidence

1014_C4_PH1_071 : University of Ulster and University of Wolverhampton

There is an important practical need and challenge for autonomous systems to optimise multisensory decision-making and to report the system’s decision confidence or “trust” level in real-time. This project will develop a brain-inspired computational model of human decision-making capable of making optimal decisions by integrating uncertain multisensory information while providing real-time covert/internal information about its decision confidence. This integrated model will be tested in various complex simulated environments and implemented in reconfigurable and programmable hardware for real-time applications in autonomous mobile robots. At the end of the project, we will establish an artificially intelligent machine that mirrors the human ability to perform optimal multisensory decision-making while being equipped with self-assessment of its potential decisions, i.e. an optimal decision-making machine with metacognition.

Interaction with hierarchical planning systems

0913_C4_Ph1_002 : University of Birmingham

This project directly addresses challenge 4 – “Operator System Decision Making Partnership”. The project will address how to include human input within a hierarchical decision making system by adjusting the level of detail modelled at different layers of the system. This will allow complex multi-vehicle, multi-level reasoning to be carried out in a timely manner by making use of both system responses and human input to adjust the complexity of the planning system.

This directly addresses the issues of optimising the human input to the system and the need to provide time-bounded solutions by dynamically varying the complexity and depth of search used to solve different elements of a problem. This allows the experience and insight of the human operator to guide the planning effort to the critical elements of a specific problem.

Specifically we intend to take an existing multi-vehicle task assignment, planning and scheduling system and incorporate varying levels of detail within the models used to calculate the effects of decisions taken at different levels within the system.

For example when assigning tasks to multiple vehicles, the time taken to travel between targets may initially be represented simply as an estimated value based on distance, as in a conventional Travelling Salesman problem. Further, more detailed, levels of decision making then identify obstacle free trajectories between the targets and dynamically adjust speeds to maintain separation between vehicles. If the area is not highly congested then the original abstraction will be accurate and thus the planned trajectories will meet the estimated timing assumption. However if parts of the area are highly congested, it may only be possible to accurately represent the expected time by planning obstacle free paths and co-ordinating the trajectories of multiple vehicles. As a result the task assignment algorithm may only be able to accurately know the travel times (and therefore the costs/benefits of a particular set of task assignments) by running detailed trajectory planning. This comes at the cost of a much higher computational load which in turn reduces the number of alternatives that can be considered in a time-bounded system.

A time-bounded decision system therefore has to strike a balance between using accurate but costly models and simple but potentially inaccurate ones. The information needed to choose the level of detail to work at can come from two sources: experience gathered during execution of current and previous plans, and a human operator. We believe that a human operator is well placed to identify areas where the simple model is struggling and to adjust the model appropriately. In this way the overall system makes maximum use of the human ability to abstract and reason about a problem, while making best use of the computer’s ability to rapidly compare large numbers of different options.
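The balance described above can be sketched as a two-level cost model: a cheap straight-line estimate is used by default, and a detailed planner (stubbed here as a fixed detour penalty) is invoked only where congestion has been flagged by experience or the operator. All names and numbers are illustrative, not from the Birmingham system.

```python
# Sketch of a two-level travel-time model for task assignment: a cheap
# Travelling-Salesman-style estimate by default, with a stand-in for full
# obstacle-avoiding trajectory planning used only in congested areas.
import math

def coarse_cost(a, b, speed=10.0):
    """Cheap estimate: straight-line distance over cruise speed."""
    return math.dist(a, b) / speed

def detailed_cost(a, b, speed=10.0, detour_factor=1.6):
    """Stand-in for detailed trajectory planning, modelled here as a
    fixed detour penalty over the straight-line time."""
    return detour_factor * math.dist(a, b) / speed

def travel_time(a, b, congested):
    """Spend computation on the detailed model only where it matters."""
    return detailed_cost(a, b) if congested else coarse_cost(a, b)

a, b = (0.0, 0.0), (30.0, 40.0)            # waypoints 50 m apart
print(travel_time(a, b, congested=False))  # 5.0 s coarse estimate
print(travel_time(a, b, congested=True))   # 8.0 s with detour penalty
```

In the real system the detailed level would run an actual path planner and multi-vehicle deconfliction; the point of the sketch is only the switch between model fidelities.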

Precise Navigation of UAVs in Complex Environment – 2 (PUCE-2)

SUAS_2013_013 : Blue Bear Systems Research Ltd

A summary of this task is not available

Precise Navigation of UAVs in Complex Environment-2 (PUCE-2 Phase 2)

1014_PH2_02 : Blue Bear Systems Research Ltd

This project builds upon the previous Dstl-funded PUCE-2 project to demonstrate robust operation of Small Unmanned Air Systems (SUAS) in the urban environment, in collaboration with Createc (UK SME), Oxford University, Imperial College and Prox Dynamics. The GPS-denied problem for Nano-Unmanned Air Systems (UAS) in complex environments is challenging for a number of reasons, including the variety of operating environments and integration issues onboard a platform with minimal scope for extra processing or payload. The work carried out under PUCE-2 developed a framework for fully automated navigation within the complex environment; this project builds on this work and integrates other promising techniques developed outside the PUCE-2 project to provide a step change in capability for the Black Hornet (BH) UAS (PD-100 Block-2) currently used by UK MOD. This project is in scope for Challenge 2 of the SUAS call as it considers operation of UAS in the urban environment where GPS is not available, and will enable autonomous missions and support remote piloting.

Optic Flow and Radar Stabilisation, Sense and Avoid for Nano UAVs

1014_C2_PH1_031 : Advanced Nano Technology and Scientific (ANTS) Ltd

ANTS is creating a truly autonomous Nano Unmanned Air Vehicle (NUAV) with the ability to sense and avoid objects and humans, and to fly safely in complex spaces. This work will lead to fully intelligent air vehicles that can intelligently map their environment and operate missions on their own, with safe return to the operator.

Insect-inspired Sense and Avoid Strategies: Surface Detection in the Dark using Induced Flow Field Modulation

1014_C2_PH1_036 : Royal Veterinary College, University of London

Unmanned Air Vehicles (UAV) and animals share similar guidance and control challenges when airborne but, to date, they manage those challenges in fundamentally different ways. Mosquitoes and other insects perceive the world around them through an array of sensors. Alongside the visual and olfactory systems is an extensive suite of mechano-sensors, which are known to be incredibly sensitive to air flow, vibrations and strains. They are also extremely fast to respond, detecting important information at frequencies much higher than the eyes could manage. Best of all, they even work at night and do not require any sort of emitter other than the propulsive mechanism itself. Could a biologically-inspired engineered scheme be used to control Unmanned Air Vehicles in cluttered, confined and dark environments where GPS and optical control fail?

In this project we aim to describe in detail the aerodynamic information which is available to animals and UAVs alike, using a nocturnal mosquito as our animal model. The project requires a range of techniques, including high speed videography to extract the intricate motion of the wingbeat, aerodynamic measurement of insects in flight and complementary computational simulations. Finally, we will test prototypes of the novel device on small rotorcraft.

Micro-Radar for Small UAS

1014_C3_PH1_044 : Plextek Consulting

A small Unmanned Air System (SUAS) operating in a complex environment (e.g. a congested and cluttered urban scenario) must have collision avoidance as its primary objective. Unfortunately, conventional sensors such as camera systems only provide range (depth) perception after significant and complex processing of the sensor data, which is not easily compatible with a small platform. Fast reaction to closing obstacles is therefore difficult to obtain in this way.

Plextek Consulting’s project underpins the development of a mm-wave radar sensor that can directly provide a ‘view of the world’ that is useful (including range and Doppler) and timely, yet can be sufficiently small to be fitted to a SUAS. It has the additional benefit of avoiding the need for complex, processor-intensive image processing.
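For background, the way an FMCW radar of this kind recovers range and radial velocity directly can be sketched from the standard beat- and Doppler-frequency relations. The parameter values below are illustrative (a generic 77 GHz-style chirp), not Plextek's design.

```python
# Sketch of how an FMCW radar recovers range and radial velocity from
# beat and Doppler frequencies, using the textbook relations
#   R = c * f_b * T_chirp / (2 * B)   and   v = c * f_d / (2 * f_c).
# All parameter values are illustrative, not a real sensor design.
C = 299_792_458.0  # speed of light, m/s

def range_from_beat(f_beat, chirp_time, bandwidth):
    """Target range from the beat frequency of one linear chirp."""
    return C * f_beat * chirp_time / (2.0 * bandwidth)

def velocity_from_doppler(f_doppler, carrier_freq):
    """Radial velocity from the Doppler shift at the carrier frequency."""
    return C * f_doppler / (2.0 * carrier_freq)

# A 1 GHz sweep over 100 us: a 200 kHz beat tone then corresponds to ~3 m,
# and at 77 GHz a ~5.14 kHz Doppler shift corresponds to ~10 m/s closing.
r = range_from_beat(f_beat=200e3, chirp_time=100e-6, bandwidth=1e9)
v = velocity_from_doppler(f_doppler=5.14e3, carrier_freq=77e9)
print(round(r, 2), "m;", round(v, 2), "m/s")
```

Because range and Doppler fall straight out of frequency measurements, the processing load is far lighter than extracting depth from camera imagery, which is the benefit the summary describes.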

Insect-inspired Sense and Avoid Strategies: Surface Detection in the Dark Using Induced Flow Field Modulation

1014_C2_PH2_036 : Royal Veterinary College, University of London

Flying animals must sense and avoid obstacles, often in sensory-deprived environments, and this capability provides a rich source of inspiration for small Unmanned Air Systems (UAS). In particular, the nocturnal Culex mosquito avoids surfaces even in darkness. We hypothesise that this is mediated by the mechanosensory array detecting deformations of its own induced flow field as it experiences ground or wall effect. Such a strategy would also be useful for small UAS as they operate in enclosed areas.

Inspired by the Culex mosquito, we have successfully demonstrated that a quadrotor platform can sense nearby walls and floors through monitoring pressure changes at key locations around the vehicle in its own induced flow field. The present project will take this proven method of detecting obstacles and refine and implement it into a free‐flying quadrotor platform that will be capable of evading floors and walls using monitored pressure readings alone. This sense and avoid system is particularly suited to small pico‐UAS, as this detection strategy works for different scales, would add negligible mass through small lightweight components, and be more efficient and computationally less expensive than other methods.
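The detection principle described above can be sketched as a simple deviation test on pressure readings against their free-air baseline. The tap layout, baseline values and threshold below are illustrative assumptions, not the project's measured values.

```python
# Sketch of surface detection from induced-flow pressure changes: each
# pressure tap is compared against its free-air baseline, and a sustained
# deviation flags a nearby surface (ground or wall effect). The tap names,
# baselines and threshold are invented for illustration.

def detect_surface(readings, baseline, threshold=12.0):
    """Return the taps whose deviation from the free-air baseline
    exceeds the threshold (Pa)."""
    return [tap for tap, p in readings.items()
            if abs(p - baseline[tap]) > threshold]

# Free-air baseline pressures (Pa) recorded away from any surface.
baseline = {"below": 101325.0, "left": 101325.2, "right": 101325.1}
# Ground effect raises pressure under the rotors as the floor nears.
readings = {"below": 101343.5, "left": 101325.4, "right": 101325.0}

print(detect_surface(readings, baseline))  # ['below'] -> floor nearby
```

The appeal noted in the summary is visible even in this toy: the logic is a handful of comparisons per tap, so it scales down to pico-UAS where vision-based depth sensing would be too heavy and too expensive computationally.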

Interaction with Hierarchical Planning Systems

0913_C4_Ph2_002 : QinetiQ and University of Birmingham

A summary of this task is not available

Micro-Radar for Small UAS

1014_C3_PH2_044 : Plextek Consulting

A summary of this task is not available

UWB Zigbee Precision Landing System

1014_C3_PH1_043 : Roke Manor Research Ltd

Within this project, Roke intend to demonstrate a technology that will open up precision automatic landing as a capability for even the smallest Unmanned Air Vehicles (UAV), which could not consider this using existing technology. The system is based on a genuinely disruptive technology that provides the location of tagged objects, both indoors and outdoors, to within 10 cm over ranges of up to 300 m, all from a tiny chip that uses little power. Marrying this core capability with Roke’s proven positioning technologies unlocks the possibilities for Unmanned Air Systems (UAS), paving the way for applications that have full 3D position information available.

This project will provide strong foundations for the future development of this technology by exploring the fundamental capabilities, providing a ground based demonstrator showing the core functionality and exploring the wider system issues.
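For background, the way ranges to fixed tags or anchors yield a position can be sketched with textbook trilateration: three anchor ranges are linearised against the first anchor and solved directly. This is a generic 2D illustration, not Roke's positioning algorithm, and the coordinates are invented.

```python
# Sketch of 2D trilateration from ranges to three fixed anchors, as might
# surround a landing pad. Subtracting the first range equation from the
# others linearises the system, which is then solved with Cramer's rule.
import math

def trilaterate(anchors, ranges):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # Linearised system: A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2**2 - x1**2) + (y2**2 - y1**2) - (r2**2 - r1**2)
    b2 = (x3**2 - x1**2) + (y3**2 - y1**2) - (r3**2 - r1**2)
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Anchors at three corners of a 20 m landing area; vehicle truly at (5, 5).
anchors = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0)]
truth = (5.0, 5.0)
ranges = [math.dist(truth, a) for a in anchors]
print(trilaterate(anchors, ranges))  # recovers ~(5.0, 5.0)
```

With 10 cm range accuracy, errors of similar order propagate into the solved position, which is what makes centimetre-class automatic landing plausible from such measurements.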

STARTLE – Enhanced Technology Demonstrator

DF_2013_006 : Selex ES, now Leonardo

A summary of this task is not available


Human/Autonomous Systems Interaction and Collaboration

Using Natural Language to define Interaction

0913_C5_Ph1_010 : BAE Systems

This project addresses Challenge 5 – “Interaction Techniques”. As the level of autonomy in an Unmanned Air System (UAS) increases, the control interface becomes less concerned with ‘piloting’ and more concerned with ‘command’. A consequence of this is that the content of any control messages, and hence the content and layout of the control interface, will tend to become mission-specific. Add to this the possibility that a mission complement may comprise several UAS and perhaps manned vehicles or ground personnel as well, and it becomes less likely that any single generic interface will suffice. If a sufficiently complex interface were designed to cope with multiple mission profiles, assets and targets, it would likely be extremely cumbersome to use, potentially overloading the operator/commander and causing errors in execution. We propose to investigate and develop a proof-of-concept demonstration of mission-specific control interfaces which are automatically constructed from a mission plan written in a controlled subset of English.

The advantages of this approach would be:

  • The constructed interface is specific to the mission, eliminating irrelevant controls and displays, and thus reducing operator load.
  • By using a natural language (written) mission definition, we eliminate the requirement for technical (e.g. software) experts during the mission planning stage.

Note that some mission parameters may be map-based. We assume that location coordinates will be used. We plan to use a template-based approach to the generation of mission-specific interfaces. The rationale behind this is not to simplify the process, but to ensure that all generated interfaces have a consistent ‘look and feel’. We suggest that this is essential to avoid operator overload, in this case through unfamiliarity with the layout of the generated interface. By using a controlled subset of English, we can demonstrate an unambiguous one-to-one correspondence between the specification in English and a corresponding specification in a formal design language. By this means we hope to show a practicable path to certification of the generated interfaces. For the purposes of this Phase 1 project we are limiting our attention to computer-based interfaces following the Window-Icon-Mouse-Pointer (WIMP) paradigm – although in general the mouse may be replaced by some other form of ‘pointing’ device such as a joystick.
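The one-to-one mapping from controlled English to an interface specification can be sketched with a single-pattern grammar fragment. The sentence pattern and widget names below are hypothetical, invented purely for illustration; the project's actual controlled subset of English and template scheme are richer.

```python
# Sketch of mapping a controlled-English mission sentence onto an
# interface-template description. The grammar fragment (one sentence
# pattern) and the widget names are hypothetical.
import re

PATTERN = re.compile(
    r"^Task (?P<asset>\S+) to (?P<action>survey|track) "
    r"area (?P<area>\S+) at (?P<time>\d{4})$"
)

def interface_from_sentence(sentence):
    """Return a list of widget descriptions for one tasking sentence."""
    m = PATTERN.match(sentence)
    if m is None:
        raise ValueError("sentence is outside the controlled subset")
    task = m.groupdict()
    # Each captured field becomes a dedicated, mission-specific control,
    # so irrelevant controls never appear in the generated interface.
    return [
        {"widget": "asset_selector", "value": task["asset"]},
        {"widget": "action_button",  "value": task["action"]},
        {"widget": "map_area",       "value": task["area"]},
        {"widget": "time_field",     "value": task["time"]},
    ]

widgets = interface_from_sentence("Task UAV-1 to survey area ALPHA at 0900")
print([w["widget"] for w in widgets])
```

Because the grammar is restricted, every sentence either parses into exactly one interface description or is rejected outright, which is the property that makes the English-to-formal-design correspondence unambiguous.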

Self Aware Trustworthiness Levels and Information Dissemination (SATLID)

1014_C4_PH1_060 : Frazer-Nash Consultancy

With increasing levels of autonomy in systems, the humans who will be responsible for an autonomous system need to be able to understand and have confidence in (i.e. trust) its decision-making rationale. The concept of trust in humans is subjective and situational; therefore, to remove this potential disconnect between humans and autonomous systems, the concept of Trustworthiness Levels is introduced.

By enabling unmanned vehicles to become self-aware of their own sense of trustworthiness, and exploiting this information in joint airspace, the work will provide ways to measure trust by ‘independent observation’ of behaviour and improve the safety of operation of unmanned vehicles of the future. In doing so, it is the intention of this work to provide an open framework for the determination of trust which is comprehensible for the general public.

Assessing Autonomous Decision Making by Subjective Logic

1014_C4_PH1_110 : QinetiQ

A summary of this task is not available

Bio-inspired Robust Distributed Heterogeneous Sensor Management

1014_C1_PH1_007 : Roke Manor Research Ltd

A summary of this task is not available

Feasibility Study of the use of a “Mixed Reality”, Low Infrastructure C2 Environment for UAS Ground Stations

1014_C1_PH1_009 : BAE Systems MAI and University of Birmingham

A summary of this task is not available

Autonomous Swarm-Based Mission Planning and Management Systems

1014_C1_PH1_010 : University of Salford

Unmanned Aerial Vehicles (UAV) have a wide range of uses but, as they generally require one human operator per vehicle, their operation can be labour-intensive. This project will develop a swarm-based mission management and mission planning system capable of handling multiple fleets of UAVs involved in multiple missions simultaneously. Although the platform is to be scalable, for the purpose of demonstration the system will be designed to require only one operator in the loop governing four missions simultaneously. A ‘Virtual Intelligent Operator (VIO)’ – an on-screen avatar – will assist operators in their mission management tasks, with the ability to detect and announce unforeseen contingencies and propose solutions to operators.

From Data Gathering, through Mission Planning to Execution

1014_C1_PH1_017 : BISim and Partners

This novel project draws together simulation, military and data gathering experts to demonstrate how Unmanned Systems operations might be conducted in future. Novel data gathering and processing techniques will be combined with state of the art simulation technology to provide the environment for military experts to explore future autonomous systems operations. The resulting output has the potential to advance the development of autonomous systems far beyond their current application.

Real-Time Adaptive Software Defined Datalink Network Management

1014_C1_PH1_019 : Integrated Surveillance Systems Ltd (ISSL)

A summary of this task is not available

Multi-objective Optimal Motion Planning and Formation Control of UAVs in Complex Urban Environments

1014_C1_PH1_021 : University of Kent

A summary of this task is not available

Multi-UxV – Global Planner for Navigation and Communications

1014_C1_PH1_022 : Blue Bear Systems Research Ltd

This project addresses the mission execution aspect of Challenge 1. In particular, this project will seek to demonstrate optimised mission execution within a dynamic environment through prioritisation of both task and communications goals. The key focus of this work is to provide:

  • Global mission planning capable of providing effective co-operation and co-ordination between vehicles
  • Communications planning (spectrum management) to ensure availability of data links between assets and ground stations
  • Sharing of prioritised situational awareness through smart bandwidth management across the network of vehicles to maintain a common operating picture.

This work will conclude with a live demonstration combining real hardware and synthetic components.

Cooperative Surveillance Planning for Multiple Autonomous UAVs

1014_C1_PH1_025 : QinetiQ

A summary of this task is not available

Maintaining Network Connectivity and Performance Using Multi-layer ISR System

1014_C1_PH1_028 : Loughborough University

A summary of this task is not available

Atmospheric Aware Mission Planning (AAMP)

1014_C3_PH1_053 : Blue Bear Systems Research Ltd

This project investigates and demonstrates how small Unmanned Air System (UAS) platforms (<7 kg) can increase persistence through energy harvesting, under Challenge 3 of the ASUR call. There is a significant body of research looking at component technologies and strategies for energy harvesting; however, this research tends to discuss the benefits of these technologies independently of the concept of operation and is typically based around a UAS designed specifically for this approach. This work will develop in simulation a number of representative vignettes and will seek to quantify the benefit of a number of these approaches to energy harvesting.

Episkopos Project – A Co-Evolving Team Mission Manager

1014_C5_PH1_002 : The Great Circle Ltd

A summary of this task is not available

Planning Distributed Search Operations

1014_C5_PH1_087 : King's College, London

There are multiple contexts in which several diverse vehicles or other sensor-equipped systems might be deployed to carry out a search for targets. These include search-and-rescue, search-and-detect and search-and-track missions; examples include mine countermeasures and searches for shipwreck survivors. Efficient use of assets in these missions requires planned coordination, sometimes using multiple assets in a cooperative task, as well as sharing responsibilities and coverage of search areas. This project will explore the problem of coordinating search underwater in the face of intermittent communications, where adaptive planning of the behaviours of the assets involves each element of the search planning its own activity in the light of prior commitments, both to other agents and from other agents, that underpin the cooperative achievement of the mission goals.

The project will also include consideration of land- and air-based assets that might contribute to the search mission and engage in coordinated or synchronised behaviour in cooperation with sea-based assets. As plans are revised, the integrity of the mission can be threatened by poor communications and this project will seek to address this problem by providing robust strategies for re-planning, relying on shared commitments that remain a priority for the agents that are partners in activities that require coordination or synchronised behaviours.

Personalising Autonomous Systems (PAuSe)

0913_C4_Ph1_004 : Queen's University Belfast

A summary of this task is not available

Human-Agent Collaboration for Multi-UAV Coordination

0913_C4_Ph1_006 : University of Southampton

This project addresses Challenges relating to an Unmanned Air System (UAS) comprising a human operator and multiple Unmanned Air Vehicles (UAV). Within this system the human operator specifies a mission objective (goals, priorities, etc.) and the UAVs employ agent-based computing methods to perform a coordinated set of actions (move, sense, etc.) that are aligned to the objective. The operator and the UAV agents interact during the mission and, in so doing, form a partnership to monitor performance and ensure the tactical and strategic mission objectives are met. The specific challenges to be considered by this project are the following:

  • How do the UAV agents re-plan their actions when the human specifies changes to the mission objectives?
  • What is the dynamic interaction between the agents and the human during the mission; how are agents’ decisions presented to the human; how flexible is the autonomy; when and how should the operator override the agents’ decisions?
  • How could a meta-agent be designed to monitor the UAV agents and partner with the human to ensure the right balance of autonomy in the system to satisfy performance objectives and achieve strategic goals?
  • How does the Pilot Authority and Control of Tasks (PACT) taxonomy map to the above problems; do they suggest extensions or adaptations of the taxonomy to accommodate multi-agent situations?
  • What is the interface through which the human specifies the mission objective to the UAV agents; how does this accommodate multiple objectives, priorities and deadlines?
  • How do the UAV agents decompose the mission objective into a set of coordinated actions, and what is the interface through which the UAV agents communicate their actions to the human?

From these challenges, following a scoping exercise at the beginning of the project, we will focus on those considered to be of greatest importance and also achievable within the project timeline.

This project meets the scope of the call in the following regards:

  • It focuses on the interaction between the UAV agents (to coordinate actions in response to an objective) and the interaction between the human and the agents to ensure the objective is current and the actions are safe, reliable, and efficient; the interface technology, though considered, is less important in this project.
  • It demonstrates how an existing agent-based computing method for coordination (the max-sum method) should function within a human-agent system to maximise the interaction between the low-level tactical decision making capability of agents and the high-level strategic reasoning provided by the human operator.
  • It meets the requirement for demonstrable technology by leveraging work developed in related EPSRC programmes (ORCHID, SEAS DTC, and MOSAIC in particular) and adapting/extending it to match the scope of the ASUR programme.

Operator-Driven Optimal Mission Execution (ODOME) with Online Constraint and Outlay Map Evolution (OCOME)

0913_C4_Ph1_010 : Blue Bear Systems Research Ltd

This project addresses Challenge 4 – “Operator-System Decision Making Partnership”. The aim of this research is to develop a greater understanding of the dynamic variation of authority between the Unmanned Air System (UAS) and the operator. It seeks to identify dynamic variation of authority scenarios that are viable for future trialling/modelling under the ASUR programme in terms of:

  • The complexities of dynamically shifting between managing different numbers of UAS (or groups of UAS).
  • The acceptability of the step changes in autonomy levels with respect to human performance (and the implications of this for operator performance, situational awareness, workload etc.).
  • Shifting authority dynamically due to a reduction in the technology, capability or information available (i.e. degraded states).

UAS Ground Manoeuvre, 3D Mapping and Visualisation

0913_C5_Ph1_005 : General Dynamics UK Ltd

Remote operation of Unmanned Air Systems (UAS) leaves the operator with reduced awareness of obstacles to the sides of the aircraft. This increases the risk of collision with stationary obstructions while taxiing and taking off. Any collision will cause cost and down-time as the aircraft has to be taken out of operation for repairs, and also risks damage to other aircraft, vehicles and airfield infrastructure.

GDUK and Blue Bear Systems Research will team to investigate and develop an innovative operator’s situational awareness enhancement, using 3D mapping data to generate a real time graphical and audio user interface.

SUMS – Sharing UAV Metadata System

0913_C2_Ph1_005 : TEKEVER

Typically, small Unmanned Air Vehicle (UAV) platforms relay data to a single Ground Control Station (GCS). Data is stored at the GCS and sharing of that information is achieved through connecting the GCS to existing communications systems, which may not have been designed with UAV data feeds in mind. The situational awareness of the UAV operator is limited to the field of view of the sensors mounted to the platform, but decision-making could potentially be improved by allowing an operator to see data from another platform, even if there is not the bandwidth to receive an entire stream. How can sensor data be shared effectively, enhancing an operator’s awareness of events? Key events may be occurring outside the field of view of the sensor fitted to the platform being controlled, but how can the operator be cued to that event? What advantages to the communications environment does sharing small volumes of metadata present, compared to the sharing of large amounts of raw sensor data?

These are the questions that TEKEVER will address with this project, proposing a new concept and structure to share metadata across a number of UAV platforms in a formation. The goal is to reduce the required amount of information flowing in the network, optimising communications and saving resources. The project will also focus on the definition of the relevant subset of metadata to share across platforms and on clearly associating each subset with a UAV and its position. Feasibility is the driver of the project research, thus potential solutions to the problem need to be implementable, especially considering the dynamic environments that these future UAV formations will be subjected to. Considering these factors, it is clear that this project addresses most of the concerns in challenge 2 and fits within the scope of the call.

Exploitation, Prioritisation and Adaptation Through Meta-data Reasoning in a Distributed Autonomous Environment

0913_C2_Ph1_008 : Envitia

A summary of this task is not available

Report Generation from Meta-data

0913_C2_Ph1_009 : Heriot-Watt University

A summary of this task is not available

Using Mission Phase Information and Meta-data for Smart Bandwidth Control

0913_C2_Ph1_012 : QinetiQ

A synthetic environment, developed under previous research programmes to model multiple unmanned air vehicles performing a mission, has been adapted to create a system capable of running in batch mode with a model fulfilling the function of the human operator. The simulation was developed to test the hypothesis that a system of image prioritisation, tailored to meet the requirements of different mission phases, would enable faster identification and prosecution of targets for a given communications system bandwidth. Three different image prioritisation schemes have been evaluated: Last In/First Out (LIFO), First In/First Out (FIFO) and a managed, mission phase dependent scheme. The schemes have been tested in three different target laydown scenarios and for six image transmission times.
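The three prioritisation schemes just described can be modelled as simple queue disciplines. The sketch below is purely illustrative — the mission phase names and priority ordering are assumptions, not the study’s actual parameters:

```python
from collections import deque
import heapq

class LifoQueue:
    """Last In/First Out: the newest image is transmitted first."""
    def __init__(self):
        self._items = []
    def push(self, image):
        self._items.append(image)
    def pop(self):
        return self._items.pop()

class FifoQueue:
    """First In/First Out: the oldest image is transmitted first."""
    def __init__(self):
        self._items = deque()
    def push(self, image):
        self._items.append(image)
    def pop(self):
        return self._items.popleft()

class ManagedQueue:
    """Mission-phase-dependent scheme: images tagged with the phase the
    mission currently values most are transmitted first; ties preserve
    arrival order. Phase names here are hypothetical examples."""
    def __init__(self, phase_priority):
        self._phase_priority = phase_priority  # e.g. {'prosecute': 0, 'track': 1, 'search': 2}
        self._heap = []
        self._counter = 0  # tie-breaker: FIFO within the same phase
    def push(self, image, phase):
        heapq.heappush(self._heap,
                       (self._phase_priority[phase], self._counter, image))
        self._counter += 1
    def pop(self):
        return heapq.heappop(self._heap)[2]
```

Under constrained bandwidth the three policies deliver the same images in very different orders, which is what drives the differences in time-to-identify targets that the study measured.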

The relative performance of these schemes has been evaluated and recommendations made for future work. The study has also indicated the usefulness of a batch operating mode for the autonomous system synthetic environment for use in parametric studies.

Exploitation, Prioritisation and Adaptation through Metadata Reasoning in a Distributed Autonomous Environment

0913_C2_Ph2_008 : Envitia

A summary of this task is not available

Enhanced Awareness and Forward Operating Capability for Unmanned Air Systems (EA-FOCUS)

1014_C1_PH2_023 : Blue Bear Systems Research Ltd

EA-FOCUS 2 is a collaborative project between Blue Bear Systems Research and Deep Vision. This project will demonstrate efficient autonomous real-time detection, identification, handover and reacquisition between multi-layer unmanned systems. This phase of the project will demonstrate real time, on-board automated selection of targets and hand-off to other unmanned systems across domains.

Multi-UxV – Global Planner for Navigation and Communications

1014_C1_PH2_022 : Blue Bear Systems Research Ltd

There is currently no benchmark to work from for situations in which multiple autonomous unmanned platforms operate within a system-of-systems. This project, building on M-UXV, seeks to research how best to command and control multiple autonomous behaviours in unmanned platforms during intelligence, surveillance and reconnaissance operations in the urban environment.

It is proposed that flexible (ad hoc) squads made up of different autonomous unmanned platforms should be selected for a certain task based on their specific capabilities. For each task the make-up of the squad will potentially differ in order to get the best results for the commander. In addition to this, the autonomous behaviours of each of the platforms may also need to be controlled differently depending on how the commander wants to employ those assets.

This project will help to identify where these behaviours are best controlled from and by whom. The project will culminate in a demonstration which will combine simulation and live flying in order to exercise the flexible nature of ‘ad hoc’ squads and configurable autonomy.

Report Generation from Meta-data

0913_C2_Ph2_009 : Heriot-Watt University, Edinburgh

A summary of this task is not available

Self-Aware Trustworthiness Levels and Assurance against Operational Policy

1014_C3_PH2_060/073/110 : Frazer-Nash Consultancy, QinetiQ and Leicester University

A summary of this task is not available

Modelling Human Interaction with Autonomous Systems Using Integrated Cognitive Architectures

0913_C6_Ph1_009 : University of Huddersfield

A summary of this task is not available


Scalable teaming of autonomous systems and autonomous systems technology

120005 – Interface Technologies for Teams (INTFORT)

0913_C5_Ph1_012 : BAE Systems

A summary of this task is not available

Managing the Interplay of Link Degradation and Aircraft Control (MILDAC)

SUAS_2013_010 : Blue Bear Systems Research Ltd

A summary of this task is not available

Advanced Digital Waveform Techniques for Through Wall Comms

SUAS_2013_022 : Integrated Surveillance Systems Ltd (ISSL)

A summary of this task is not available

The Use of LTE with Interleaved Spectrum for SUAS Comms

SUAS_2013_028 : Roke Manor Research Ltd

A summary of this task is not available

SUAS Acoustic Emissions Collection

SUAS_2013_029 : Leonardo

A summary of this task is not available

Quantised Updates for Partial Information Delivery (QUPID)

SUAS_2013_034 : University of Liverpool

A summary of this task is not available

Sub Low Frequency Packet Radio and Adaptive Compression Compensation for Low Bandwidth

SUAS_2013_047 : Torquing

A summary of this task is not available

Low SWaPC SUAS 60 GHz Antenna

SUAS_2013_053 : Plextek Consulting, trading as Plextek Services Ltd

This project addresses the challenge of providing an enhanced, high bandwidth and covert communications capability for Small Unmanned Aerial Vehicles (SUAS) operating in Non-Line Of Sight (NLOS) environments, such as urban canyons or inside buildings. The concept will be suitable for mini and micro SUAS, and potentially for Nano Unmanned Aerial Vehicles (NUAS).

We intend to provide a key enabling technology for a 60 GHz RF data link between an operator and a SUAS. 60 GHz is well suited to urban canyons and indoor environments. It is strongly reflected by most surfaces, and by reflecting off many surfaces it can penetrate deeply into a building, essentially following the same route as a SUAS which has infiltrated the building. 60 GHz transmission is also covert because the oxygen absorption line at this frequency reduces the scope for detection at longer ranges.

The commercial exploitation of the 60 GHz Industrial, Scientific and Medical (ISM) bands is well underway, with single chip solutions based on SiGe technology that address both the mm-wave RF and the baseband modem. These solutions are low cost, consume relatively little power and are light in weight. The modem technology (notably Orthogonal Frequency-Division Multiplexing (OFDM)) is well suited to difficult propagation conditions (for example indoor channels with many multipath elements) and offers a high data throughput suitable for video transmission from the SUAS.

The main challenge, now that the problem of low Size, Weight, Power and Cost (SWaPC) RF implementation has effectively been solved, is the antenna. It is necessary to achieve significant antenna gain at both ends of the link in order to meet the requirements of the link budget. This requires a narrow (highly directional) beam, and for the highly mobile SUAS application, the beam formed by these antennas needs to be constantly adjusted. Conventional ‘phased array’ methods are not easily compatible with a low SWaPC implementation because they require many individual transmit/receive paths. Whilst suitable for an ‘anchor’ node, wirelessly connected to the SUAS, these methods are unlikely to be suitable for the SUAS itself. The novel methods proposed here promise a much lower SWaPC alternative which is likely to find application for high bandwidth communications with SUAS in non-line of sight environments.
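The link budget argument can be made concrete with a simple free-space calculation: Friis path loss plus an oxygen absorption term (roughly 15 dB/km near 60 GHz). All the figures below are illustrative assumptions, not project parameters:

```python
import math

def link_budget_db(tx_power_dbm, gain_tx_dbi, gain_rx_dbi,
                   range_m, freq_hz=60e9, o2_absorption_db_per_km=15.0):
    """Received power (dBm) over a free-space 60 GHz link.
    Free-space path loss (Friis): FSPL = 20*log10(4*pi*d/lambda),
    plus a linear-with-distance oxygen absorption term."""
    wavelength = 3e8 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * range_m / wavelength)
    o2_db = o2_absorption_db_per_km * range_m / 1000.0
    return tx_power_dbm + gain_tx_dbi + gain_rx_dbi - fspl_db - o2_db

# Illustrative: 10 dBm transmit power over a 100 m urban/indoor path.
# With isotropic antennas the received power is around -100 dBm, far
# below typical OFDM receiver sensitivities; 15 dBi of gain at each
# end of the link recovers 30 dB.
p_iso = link_budget_db(10, 0, 0, 100)    # no antenna gain
p_dir = link_budget_db(10, 15, 15, 100)  # 15 dBi at each end
```

The 100 m free-space loss at 60 GHz is already about 108 dB, which is why significant gain is needed at both ends even at short range, and why that gain must be steerable on a manoeuvring platform.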

Enabling Technologies for Pico SUAS Operations

1014_C2_PH1_001 : Integrated Surveillance Systems Ltd (ISSL)

A summary of this task is not available

Low SWaPC SUAS 60 GHz Communication

SUAS_2013_053_PH2 : Plextek Consulting, trading as Plextek Services Ltd

This project aims to underpin communications with future Small Unmanned Air System (SUAS) platforms. The main objective of this work is to de-risk the key technologies to a point where it is possible to scope a subsequent phase that will demonstrate the technology at scale on a small flying platform. This will require integration of several different strands of technology both produced by this thread of work and developed by other parts of the ASUR project. A notable contribution to this is the work being undertaken by IQHQ Ltd, which is developing wireless modem techniques.

The use of LTE with Interleaved Spectrum for SUAS Comms

SUAS_2013_028_ph2 : Roke Manor Research Ltd

A summary of this task is not available


Test, Evaluation, Validation and Verification (TEV&V)

Verification and Validation of Autonomous Systems

DF_2013_004 : QinetiQ

Agent-based software is typically used in decision-making systems such as routing algorithms. There are a number of concerns regarding certification of such agent-based systems, not least of which is how to verify that safety properties are met in all circumstances. Traditional testing techniques may not be tenable on time/cost grounds because the number of test cases for real-world interactions is extremely large. Formal Methods hold the potential to prove that such systems will do what is required and that undesired behaviour will never manifest. They provide an alternative means of specifying and verifying system behaviours through model-based development, which may benefit this type of system, where a range of outputs can all be deemed acceptable provided they satisfy a common set of bounding criteria, and they are capable of generating the evidence required to support certification.

However, a number of challenges remain. The ability to model agent-based decision-making algorithms in an appropriate design medium needs to be confirmed. Appropriate safety properties (or requirements) also have to be defined. It has to be shown that the design modelling enables automated checking against the safety properties. Finally, it has to be validated that the approach can be used as a basis for platform certification, potentially using civil standards such as DO-178C/DO-333.

A short initial study has been undertaken into the application of automated Formal Methods based techniques to support the certification of agent-based software. This included a workshop with key stakeholders to discuss clearance issues associated with agent-based systems and an initial investigation into whether Formal Methods might be applied to agent-based systems and planning algorithms. The length of the study precluded application of the Formal Methods tools discussed and comparison with other possible clearance methods; these are recommended as areas for further work.

EVIRE: An Evidence-Driven Reasoning Framework to Support the Transparent Control, Verification, and Validation of Autonomous Systems

1014_C4_PH1_073 : University of Leicester

A summary of this task is not available

Formal Methods for Learning and Reconfiguration in Autonomous Systems

1014_C6_PH1_104 : University of Southampton

Autonomous Systems play an important role in helping society to tackle global challenges in security, economic development and environmental sustainability. Autonomous Systems, such as Unmanned Air Vehicles (UAV), are used in defence and policing and are increasingly being used for environmental monitoring such as coastal and flood monitoring.

Organisations such as Google and Facebook are planning the use of thousands of UAVs to provide internet access to large parts of the developing world that currently have limited or no access to the internet. Autonomous features, such as lane centring and adaptive cruise control, are already standard in high-end cars, and huge progress is being made in developing fully autonomous cars.

For advanced Autonomous Systems to be trusted and accepted by society, especially in civilian applications, they will need to have impeccable safety records while also being affordable to construct, operate and maintain. This project is undertaking research that will enable autonomous systems to be engineered to high assurance levels at affordable cost. Building on powerful mathematical modelling and analysis methods, we will develop innovative methods for the precise definition of safety requirements, together with automated mathematical analysis tools that manufacturers can use to ensure that safety requirements are fully addressed during the design and operation of Autonomous Systems.

ASUR – Aurelian

0913_C3_Ph1_003 : Innovation Works Department, EADS UK Ltd

This power and heat management project will focus on the topic of fuel temperature management coupled with on-board energy load management and mission planning. On Unmanned Air Vehicles (UAV), fuel temperature is a major issue: fuel can become too hot while on the ground in warm climates or during low altitude/high speed operation, especially when fuel is used as a heat sink, and too cold when the UAV operates at high altitude and low speed. On today’s UAVs this limits the kinds of mission that can be flown.

The aim of the work to be conducted in this project is to combine advanced predictive functional control with mission planning and environmental data to better manage fuel temperature during one or multiple successive missions.

Strategies could include the transfer of fuel between wing and fuselage tanks as a function of the mission and environmental conditions: to the wing to cool the fuel at high altitude, in anticipation of hot ground conditions or high payload utilisation generating excess heat. When multi-mission data is available, it could be possible to allow higher fuel temperatures on the ground before take-off when the first phase of the mission is to be a high altitude one. For long endurance missions, temperature could be managed taking the sun cycle into account, letting the fuel warm up during daytime in anticipation of the night.

119994 – Shaping Emergence

0913_C6_Ph1_011 : BAE Systems

This project is researching emergent behaviour in complex autonomous systems. The objective is to develop a framework for predicting and controlling emergent behaviour. Prediction and control are traditionally done by requiring a user to interact with an agent-based model simulation, which can be non-intuitive and slow.

The framework will enable the user to interact more quickly with a meta-model that is learned off-line from training samples generated by the agent-based model. Predictions will be made by applying a regression algorithm to the training data; a Gaussian Process method will be used since it returns errors on its predictions, which can be used to drive an intelligent sampling strategy for creating the training samples. Control will be done by performing a reverse mapping from the system space to the agent configuration space.

The project will explore a range of mathematical methods (e.g. inverse regression, optimal design, and optimisation) for performing this mapping. The research will be validated by demonstrating that it is better (faster, more convenient) for a user to interact with a learned system-level model than with an agent-based model. It will be verified by comparing estimated values from the meta-model with true values from the agent-based simulation model.
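The core loop — fit a Gaussian Process meta-model to samples of an expensive simulation, then query the simulator where the meta-model is most uncertain — can be sketched minimally as follows. The kernel, its hyperparameters, and the stand-in "simulator" are all illustrative assumptions, not the project's actual models:

```python
import numpy as np

def rbf_kernel(a, b, length=0.5, var=1.0):
    """Squared-exponential covariance between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """GP regression: posterior mean and pointwise variance at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    Kss = rbf_kernel(x_test, x_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

def expensive_simulation(x):
    """Stand-in for the agent-based model (hypothetical toy response)."""
    return np.sin(3 * x)

# Intelligent sampling: repeatedly run the simulator at the pool point
# where the GP's predictive variance (its reported error) is largest.
x_pool = np.linspace(0, 2, 50)
x_train = np.array([0.0, 2.0])
for _ in range(5):
    y_train = expensive_simulation(x_train)
    _, var = gp_predict(x_train, y_train, x_pool)
    x_train = np.append(x_train, x_pool[np.argmax(var)])
```

The returned variance is what makes the Gaussian Process suitable here: it tells the sampler where the meta-model is least trustworthy, so expensive simulation runs are spent where they reduce uncertainty most.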

Hybrid Power Packs to Suit Both High Power and High Energy Uses with the Associated Control System

DF_2013_001 : Rolls-Royce Plc

As future aerospace applications move to more electric solutions, there is an opportunity to consider energy storage as a means to offer enhanced or new platform capability, to replace some existing systems with more flexible, lower cost solutions, and to reduce the system transient power requirements, reducing the cost and size of systems.

This capability has not been considered in detail during prior studies, specifically the real requirement for systems to have the flexibility to support both high power and high energy uses. Utilising a hybrid battery/supercapacitor system, it would be possible to realise the benefits offered by both of these technologies to deliver the capabilities discussed above in applications where both high power and high energy are required in a weight optimised solution. This project aims to develop a demonstrator capability to showcase the integration of high power and high energy electrical provision and the associated control system required to enable the deployment of such a system.

Specifically, Rolls-Royce’s interest is in the integration of such systems and the specification of the battery and supercapacitor, rather than in the energy storage device technology itself.

Designing UAS for High Velocity Manned Aircraft Impacts

1014_C2_PH1_040 : Iannucci Innovation Ltd

A summary of this task is not available

Low Mass Lithium-ion Cells for Micro-UAS Application

1014_C2_PH1_121 : QinetiQ

A summary of this task is not available

“Crash-Happy” Nano Aerial Vehicles: From Obstacle Avoidance to Obstacle Robustness

1014_C5_PH1_080 : Swarm Systems Ltd (SSL), with Aerial Robotics Laboratory and Imperial College

It is desirable to use small Unmanned Air Systems (UAS) in environments of increasing complexity and unpredictability; specifically, inside buildings or urban canyons where they can be used to sense the environment and collect potentially life-saving information. These environments present challenges for flying platforms because of the high number of obstacles, the absence of GPS, the unstructured nature of the environment and poor visual conditions.

Techniques exist for avoiding obstacles based on vision or distance sensing. However, they require advanced sensors and significant on-board processing. Further, there are always situations where the environment is too complex, making collisions unavoidable. Even nature’s most agile flyers (e.g. insects) are not able to avoid all collisions, and are often seen bumping into windows, low-contrast walls or moving obstacles. While insects recover after a collision, existing UAS are often out of commission after a single collision.

“Crash-happy” UAS are an innovative way to allow UAS to operate in complex GPS-denied environments. They offer several advantages over conventional UAS, including reduced need for advanced sensing and processing; increased operational tempo, since UAS can complete missions faster and capture situational awareness more quickly; and enhanced robustness to unexpected situations, such as moving obstacles or sensor failures. The objective of this project is to investigate, develop and demonstrate the benefits of “crash-happy” UAS.

The Aerial Robotics Laboratory at Imperial College, in conjunction with SSL, will investigate practical impact protection mechanisms and operational requirements, and conduct systematic impact tests. A proof of concept will be integrated onto SSL’s ‘Nano Owl’ UAS. “Crash-happy” UAS will be assessed according to criteria generated by SSL’s experience operating similar systems. This assessment will inform a second development phase to investigate reaction to and recovery from impacts, as well as to deliver prototypes at higher technology readiness levels for use in training exercises and outdoor environments.

Novel Power Generation and Energy Management using Hydrogen Pellet System

1014_C5_PH1_081 : Cranfield University/Cella Energy

Cella Energy has developed a solid state hydrogen storage system that, when combined with a fuel cell, produces 2 to 3 times the energy per kilogram of a lithium-polymer battery, which for these UAS equates to 2 to 3 times the flight time. However, to use this technology efficiently it is necessary to hybridise the Cella power system with a high power battery in a smart way that can adapt to the specific mission. Cranfield University has extensive experience in developing hybrid power systems and has the capability to simulate and test power systems. The aim of this project is to take Cranfield’s concept for a smart Power Manager and develop both the system and the algorithms so that together we can exploit the Cella power system in different UAS and for different mission profiles.

As the system matures and becomes less expensive, it has the potential to be exploited in sectors outside aerospace, for example soldier portable power and non-military off-grid applications such as electric bicycles or scooters.

Structurally Integrated Battle Damage and Structural Health Monitoring

1014_C5_PH1_113 : Roke Manor Research Ltd

A summary of this task is not available

Resilient Electrical Architectures (Phase 1 Project)

1014_C5_PH1_118 : Rolls-Royce Plc

Smaller Unmanned Air Vehicles (UAV) are moving towards an all-electric propulsion system, enabled by a substantial reduction in the weight of batteries, electric motors and power electronics. Consequently, the robustness of the electrical power system to battle damage or component failure is critical to the ability of the platform to either continue with its mission or return safely.

This project will combine previous research in the fields of high-speed electrical fault protection and reconfigurable electrical architectures to demonstrate how these concepts can be applied to improving the resilience of UAV power systems during adverse events caused by internal or external factors.

Formal Modelling of Autonomous Systems with STPA and Human-in-the-Loop Techniques

0913_C6_Ph1_010 : Critical Software Technologies Ltd

Critical Software provides solutions, services and technologies for mission- and business-critical processes. Our success lies in bringing timely and cost-effective quality and innovation to our customers’ IT systems. Critical Software is keen to exploit current technology to provide further cost-effective quality to our customers. One such area that is being exploited is the application of more rigorous requirements engineering during the software development life-cycle.

During this project, the requirements engineering framework Prova will be extended with support for the high-level safety-oriented STPA technique. Prova allows requirements engineers to use natural language to define requirements, then automatically translates them into mathematical statements and analyses them using automated theorem proving. This helps ensure the correctness, completeness and consistency of the requirements before any development. A fundamental design principle of Prova, which makes it an ideal candidate for this work, is to provide an excellent user experience. The integration of STPA into Prova will further expand the formal portion of the Prova framework with support for modelling hardware, software, liveware (the human component) and domain knowledge, and the interactions between these elements. Research shall be undertaken into the semantics behind linking the STPA technique onto these models, and into the definition of user interfaces that maintain the excellent user experience.

There are further advantages, from both the user and technical perspectives, to the explicit modelling of the hardware, software, liveware and domain knowledge. It will allow the user to be presented with customised interfaces, possibly graphical, for each different type of model. The technical advantage of this approach is that it becomes clear where assumptions are made, i.e. in the hardware model or domain knowledge, and that the interactions between the human component and the system become clear and naturally formalised.

Formal Methods for Learning and Reconfiguration in Autonomous Systems

1014_C6_PH2_104 : University of Southampton

A summary of this task is not available


Enabling Technologies

Architectures for Autonomous Systems – Phase 1

DF_2014_003 : QinetiQ

A summary of this task is not available

Aerodynamic Simulation of Avian Flapping Flight

SUAS_2013_004 : BAE Systems

A summary of this task is not available

Free Flying Autoresonant NAV

SUAS_2013_019 : Esotechnic Ltd

This project addresses the scope of Challenge 3 by providing a significant increase in the capability of hovering Nano Air Vehicles (NAV) for troop level Intelligence, Surveillance and Reconnaissance (ISR). This work seeks to further optimise and refine the flapping wing NAV demonstrator previously developed by the applicant (CDE12748) to an untethered, free flying state. This insect inspired flapping platform uses bio-inspired auto-resonance to ensure efficient operation by removing the inertial load, allowing the advantageous elements of flapping flight (manoeuvrability, gust response, acoustic signature) to be exploited without loss of efficiency. The result is the basis for a highly practical platform using conventional, well tested electric motor technology which would exceed the capabilities of current rotary wing NAVs.

Enhanced Agility and Coping with High Gusts – CFD – Phase 2

1014_PH2_04 : BAE Systems

This project addresses the challenge of how nature inspired aeroelastic and morphable wing geometry can be used to enhance the manoeuvrability and gust tolerance of a micro UAV. Small UAVs are increasingly being required to operate close to the ground and close to, or even within, urban environments. Such terrain is usually highly cluttered with objects that have to be avoided through agile manoeuvring. These environments also give rise to atmospheric conditions where ambient wind speeds and gusts are often significantly amplified by the influence of buildings, trees and other features.

Birds have evolved to be capable of operating effectively in these environments through their ability to rapidly reshape their wing (e.g. wing sweep, twist angle, dihedral, and aspect ratio) and tail geometry to alter both their aerodynamic efficiency (e.g. lift curve slope, drag modulation, control effectiveness) and their static and dynamic stability characteristics. There is also evidence that they use the aeroelastic nature of their structure (feathers, tendons and connective tissues) to passively alleviate the impact of gusts on both their structural loading and their dynamic response.

The approach to this challenge is to evaluate how birds achieve the levels of agility and gust responsiveness that they do. This knowledge, combined with the application of the transient, large amplitude motion aerodynamic prediction tools validated in Phase 1 of this project will then be used to design a morphable geometry, aeroelastically tailored wing which will be tested in a wind tunnel to explore both passive and active agility and gust response.

Strain Sensing for Improved Flight in Turbulent Conditions

1014_C3_PH1_049 : Department of Aerospace Engineering, University of Bristol

Birds, bats and insects are all able to sense the forces acting on their wings using sense organs which measure how much different parts of the wing are being stretched. These types of sensors are known as strain sensors, and these give the animals information about the aerodynamic forces acting on their wings. It is thought that this information helps animals to deal with the windy, gusty conditions that they often experience, allowing them to tailor their wing movements to the moment by moment wind conditions.

Current small scale robotic fixed wing aircraft, or Unmanned Air Systems (UAS), particularly those at the scale of birds and insects, have limited ability to cope with windy, gusty conditions, such as those found in cities due to the presence of large buildings. This project sets out to test whether the performance of these aircraft can be improved by giving them the ability to sense the forces acting on their wings using arrays of strain sensors. The information that these sensors provide will first be measured by flying gliders in an indoor flight arena through a controlled gust created by a large industrial fan. We will then measure what they sense when a small powered radio controlled aircraft is flown in windy outdoor conditions. Automatic control schemes based on these signals will then be developed, aimed at enabling more stable flight. By giving the aircraft the ability to “feel” the effects of wind on its wings, as animals do, we hope to develop new automatic control systems that allow aircraft to operate more effectively in windy conditions.

Gust Alleviation for mini UAVs using Disturbance Rejection Flight Control

1014_C3_PH1_056 : Loughborough University

Due to their small size and light weight, mini Unmanned Air Vehicles (UAVs) are vulnerable to wind gusts and other disturbances, such as the wakes of buildings, when performing urban overwatch and similar missions. This project aims to improve the gust tolerance of mini-UAVs using the latest nonlinear Disturbance Observer Based Control (DOBC) technique. In the DOBC framework, a nonlinear disturbance observer is designed to estimate gusts and other uncertainties, and is then integrated with a baseline controller to form a composite flight control strategy. Initial flight tests on both fixed-wing aircraft and helicopters have demonstrated promising gust attenuation performance.
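The composite structure described above — a disturbance observer estimating the gust, whose estimate is subtracted alongside a baseline control term — can be sketched for a toy one-state plant. The model, gains, and simulation parameters below are illustrative assumptions, not the project's actual nonlinear design:

```python
# Minimal DOBC sketch for the toy plant x_dot = u + d, where d is an
# unknown constant gust. The observer estimate d_hat = z + L*x converges
# to d; the composite law adds gust compensation to a proportional
# baseline controller. All gains and time steps are illustrative.
def simulate_dobc(d_true=2.0, L=5.0, kp=4.0, dt=0.001, steps=5000):
    x, z = 0.0, 0.0                # plant state, observer internal state
    for _ in range(steps):
        d_hat = z + L * x          # disturbance estimate
        u = -kp * x - d_hat        # baseline control + gust compensation
        z += dt * (-L * (z + L * x) - L * u)  # observer update
        x += dt * (u + d_true)     # plant: x_dot = u + d
    return x, d_hat
```

For this linear toy case the estimation error obeys d_hat_dot = L(d − d_hat), so the estimate converges exponentially and the state settles near zero despite the unmeasured gust.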

This project will refine and improve the initial DOBC design for mini-UAVs by explicitly exploiting the properties of gusts and the structural characteristics of small UAVs, and will provide a detailed assessment of its performance in realistic operational scenarios. The nonlinear disturbance observer will be redesigned to achieve good robustness and estimation performance, and the performance and stability of the composite nonlinear control strategy will be analysed. It is expected that the proposed flight control scheme will significantly improve the flight stability and performance of mini-UAVs in adverse weather conditions. The proposed method could also be integrated with new airframe or actuation mechanisms to further improve the gust tolerance of both small and large UAVs. It would find a wide range of applications for small UAVs, such as aerial photography, and would extend their flight envelope and range of use.

Acoustic Wind Sensor for Improved Wind Velocity Sensing on SUAS

1014_C3_PH1_114 : Roke Manor Research Ltd and Southampton University subcontract

The weather conditions under which fixed-wing Small Unmanned Air Systems (SUAS) can currently operate severely limit their operational usefulness. High winds and gusts are particularly problematic. By using an estimate of the wind velocity together with the GPS-derived velocity of the SUAS, it would be possible to compensate for local wind and gust conditions in the flight control system and extend the range of conditions under which a fixed-wing SUAS can operate.
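The arithmetic underlying this compensation is the standard wind-triangle relation: the wind velocity is the GPS-derived ground velocity minus the air-relative velocity measured by the on-board sensor. A minimal sketch, using an assumed 2-D east/north representation:

```python
# Wind-triangle sketch: wind = ground velocity (GPS) - air-relative
# velocity (on-board wind sensor). The 2-D (east, north) representation
# and function name are illustrative simplifications.
def estimate_wind(v_ground, v_air):
    """Both arguments are (east, north) velocities in m/s."""
    return (v_ground[0] - v_air[0], v_ground[1] - v_air[1])

# e.g. a ground track of (12, 3) m/s with a sensed air-relative velocity
# of (10, 0) m/s implies a (2, 3) m/s wind.
```

The resulting wind estimate could then be fed forward to the flight control system to counter the gust before it perturbs the trajectory.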

The work proposes a novel approach to measuring wind velocity on an SUAS using an acoustic sensor. The particular sensor being proposed has the potential for a combined role, also providing acoustic direction-finding information.

Systems Approach to Safety Engineering for Autonomous Systems

0913_C6_Ph1_008 : BMT Isis Ltd

A summary of this task is not available