Drone Simulation and Control Using MATLAB and Simulink
Quadcopters and other styles of drones are extremely popular. Even a lot of entry-level vehicles
have sophisticated control systems programmed into them that allow them to be stable and fly
autonomously with very little human intervention. Their four propellers are spun in precise ways
to control the quadcopter in 6 different degrees of freedom. In this series, we’re going to walk
through the process of designing a control system that will get a drone to hover at a fixed
altitude.
Even if you don’t plan on writing your own drone controller, it’s worth understanding the
process because the workflow that we're going to follow is similar to the workflow you'll need
for almost any control project.
With that being said, this series is using a drone rather than some other platform because of how
accessible the hardware is and the existing infrastructure available for programming and
modeling drones. Also, they are just really fun to fly.
So with that in mind, let’s head over to the blackboard and set up our problem. I’m Brian and
welcome to a MATLAB Tech Talk.
In this series, we’re focusing on the control strategies for a quadcopter, named so because of
their four rotating propellers. Similarly, you can add more rotors and they take on names like
hexacopter and octocopter. But all of these drone style flying machines are part of an entire
family of rotating wing aircraft called rotorcraft. This includes the familiar helicopter and the
less familiar autogyro as well as any other flying machine that uses a rotating wing rather than a
fixed wing to generate lift. Even though these are all rotary wing vehicles, they have different
dynamics and therefore different control strategies. For this series, we’re going to be designing a
control system for a quadcopter, the Parrot Minidrone.
Now, in order to set up the control problem, we need to spend a little time understanding our
hardware. In this case, the hardware already exists, so I don’t have the ability to easily change
the sensors or actuators in any meaningful way. If you’re the control engineer for a quadcopter
development program, that wouldn’t be the case. You would probably be expected to guide and
influence the design process so that the hardware is appropriately designed to meet your control
requirements.
Since our hardware is already built, we have to deal with what’s given to us. So let’s take a look
at the Minidrone and see what we have to work with. We’ll start with the sensors. On the bottom,
there are two sensors that you can see. The one with the grid is an ultrasound sensor that is used
to measure vertical distances. It sends out a high frequency sound pulse and measures how long
it takes that pulse to bounce off the floor and return to the sensor. From the measured time, the
distance between the floor and the drone can be calculated, at least up to about 13 feet in
altitude. After that, the reflected sound is too soft for the sensor to pick up.
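As a quick numerical sketch of that time-of-flight calculation (the echo time here is a made-up value, not a real sensor reading):

c = 343;            % speed of sound in air at about 20 degrees C, m/s
tEcho = 0.004;      % measured round-trip time of the pulse, s (hypothetical)
d = c * tEcho / 2   % divide by two for the down-and-back trip: 0.686 m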
The other sensor is a camera. It’s taking images at 60 frames per second and using an image
processing technique called optical flow to determine how objects are moving between one
frame and the next. From this apparent motion, the Minidrone can estimate horizontal motion
and speed.
Inside the minidrone there is a pressure sensor which is indirectly measuring altitude. As the
drone climbs in altitude the air pressure drops slightly. We can use this slight change in pressure
to estimate how the altitude of the minidrone is changing - is it going up or down?
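A rough sketch of that pressure-to-altitude relationship, using the hydrostatic approximation near the ground (the pressure change is a made-up value):

rho = 1.225;          % air density near sea level, kg/m^3
g   = 9.81;           % gravitational acceleration, m/s^2
dP  = -8;             % measured change in pressure, Pa (hypothetical)
dh  = -dP/(rho*g)     % about 0.67 m of climb for an 8 Pa pressure drop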
The last sensor is an Inertial Measurement Unit, or IMU for short. This is made up of a 3-axis
accelerometer which measures linear acceleration and a 3-axis gyroscope that measures
angular rate. From the IMU, and our knowledge of acceleration due to gravity, we can estimate
the Minidrone’s attitude relative to gravity and how fast it’s rotating.
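As a sketch of that attitude-from-gravity idea, here is one common tilt formula for a stationary accelerometer; the z-up sign convention and the sample values are assumptions, and the estimator on the real drone is more involved:

a = [0.4; -0.2; 9.78];                        % accelerometer sample at rest, m/s^2 (hypothetical)
roll  = atan2(a(2), a(3));                    % lean about the body x-axis
pitch = atan2(-a(1), sqrt(a(2)^2 + a(3)^2));  % lean about the body y-axis
rad2deg([roll, pitch])                        % roughly [-1.2, -2.3] degrees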
And that’s it for the sensors. We have these four sensors to work with. We can use ultrasound
and pressure to determine altitude and the IMU and camera to determine rotational and
translational motion.
With the sensors covered, let's talk about our actuators. We have four motors, each with its
own propeller. The front two are white and the back two are black, but the colors are just there to
indicate to the operator which way the drone is facing while they are flying it. The most
important thing to recognize with these motors is their configuration and spin direction. These 4
motors are laid out in an X configuration, as opposed to the plus configuration. The only
difference between these two is which motors you send commands to when pitching and rolling
the minidrone. In general, the underlying concepts are the same for both.
The most ingenious part of a quadcopter’s motor configuration is the spin direction. Opposing
motors spin in the same direction as each other, but the opposite direction as the other pair. This
is necessary to make sure that thrust, roll, pitch, and yaw can be commanded independently of
each other. That means we can command one motion without affecting the others. We’ll see why
the configuration makes this true, at least to a first order, in just a bit. In reality, complex
fluid dynamics around the drone means that all motion is coupled together in some small way,
but for our purpose we can ignore that detail and eventually let our feedback control system
correct for that error.
OK, now we can talk about the overview of the control problem. We have our plant - the drone
itself - and we have our four actuators that inject forces and torques into the system. So the
question is, how do we inject the right inputs into our system so that the output is the result we
want? That is, can we figure out how to manipulate these four motors in very precise ways so that
the drone can rotate and maneuver around in 3 dimensional space?
To help us, we have our set of sensors that we can use to directly or indirectly estimate the state
of our minidrone. System states are things like angular position and rates, altitude, and horizontal
velocity. The states that we are estimating depend on the control architecture and what we are
trying to accomplish. We'll flesh that out in more detail throughout this series.
Finally, with knowledge of the state of the system and an understanding of what we want our
minidrone to do, we can develop a controller, basically an algorithm that runs in software, that
takes in our set point and estimated state and calculates those precise motor commands that will
inject the necessary forces and torques. That’s the whole problem, but as you might imagine,
coming up with that algorithm isn't straightforward for a quadcopter.
The first thing we should notice is that this is an underactuated system. We only have 4
actuators, but we have 6 degrees of freedom - three translational directions, up and down, left
and right, forward and backwards, and three rotational directions, roll, pitch, and yaw. Since we
don't have an actuator for every motion, we already know that some directions are
uncontrollable at any given time. As an example, our minidrone is not capable of moving left, at
least not without first rotating in that direction. The same goes for forward and backward motion
as well.
We’ll get around this underactuation problem by developing a control system that couples
rotations and thrust to accomplish the overall goals.
So now let’s walk through how we generate thrust, roll, pitch, and yaw with just 4 motors, and
why the spin direction allows us to decouple one motion from the other.
A motor produces thrust by spinning a propeller which pushes air down, causing a reaction force
that is up. If the motor is placed so that the force is applied through the center of
gravity of an object, then that object will move in pure translation, no rotation at all. And if the
force of the thrust is exactly equal to and opposite of the force of gravity then the object will
hover in place.
A force at a distance from the center of gravity produces both a translational motion as well as a
torque, or rotating moment, around the center of gravity. If our motor is attached to a bar, then as
the bar rotates, the torque will stay constant since the distance between the force and the center of
gravity stays the same. But the force is no longer always opposite the direction of gravity, and therefore
our bar will begin to move off to the side and fall out of the sky. Now, if there is a counter force,
on the opposite side of the center of gravity, and each force is half of the force of gravity, then
the object will again stay stationary because the torques and forces will cancel each other out.
But our actuators don't generate pure force when they generate thrust. Since a motor produces
thrust by spinning and torquing a propeller, which has mass, it also generates a
reaction torque in the opposite direction. And if both of our motors are spinning in the
same direction, then the torque is doubled and our bar would start to rotate.
To counter this torque, we could spin the two motors in opposite directions. That would work
just fine in two dimensions, but a bar with only two motors would not be able to generate torques in
the third dimension, that is we wouldn’t be able to roll this bar. So we add a second bar with two
more motors to create our quadcopter.
With this configuration, we can hover by accelerating each motor until they produce a force 1/4
that of gravity. And as long as we have two counter rotating motors, the torques from spinning
the propellers will balance and the drone will not spin. It doesn’t matter where you put the
counter rotating motors, as long as there are two in one direction and two in the other.
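To put a number on that, here is a back-of-the-envelope hover calculation; the mass is only roughly that of a Parrot minidrone, and the thrust coefficient is a made-up illustrative value:

m  = 0.063;                % vehicle mass, kg (approximate, an assumption)
g  = 9.81;                 % gravitational acceleration, m/s^2
kT = 1e-7;                 % thrust coefficient in F = kT*w^2, N/(rad/s)^2 (illustrative)
Fmotor = m*g/4             % each motor carries a quarter of the weight: ~0.155 N
wHover = sqrt(Fmotor/kT)   % required propeller speed: ~1243 rad/s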
But quadcopter developers settled on a configuration with opposing motors spinning the same
direction, so there must be a reason for this. And there is. It’s because of how yaw, or the flat
spinning motion, interacts with roll and pitch.
To understand why this is true, let’s look at how we would command yaw.
We have counter-rotating motors specifically so that there is no yaw torque on the system with
all motors spinning at the same speed. So, it makes sense that if we want to spin the drone about
the vertical axis, or we want the vehicle to yaw, then we'd need to create a yaw torque by
slowing two motors down that are running the same direction and speed the other two up. By
slowing two down and speeding two up appropriately, we can keep the same total force through
the maneuver so that we’re still hovering and counteracting gravity, but the summation of the
motor torques is non-zero and the vehicle will spin. So we can yaw without affecting thrust.
Now let’s see if yaw affects roll and pitch. If the rotating motor pairs are on the same side, then
slowing one pair down and increasing the other pair will cause an imbalance of forces about the
center of gravity and the vehicle will either pitch or roll depending on which side the motor pairs
are on. However, if we separate the two motors and place them on opposite sides of the drone,
then the forces will balance each other out. This is why the motor configuration and spin
directions are so critical. We can now send commands to the four motors in such a way that the
vehicle will yaw, but not roll, pitch, or change its thrust.
Similarly, we can look at roll and pitch. To roll, we’d decrease one of the left/right pairs and
increase the other causing a rolling torque, and to pitch we’d decrease one of the front/back pairs
and increase the other causing a pitching torque. Both of these motions would have no effect on
yaw since we are moving counter rotating motors in the same direction, and their yaw torque
would continue to cancel each other out.
To change thrust, we need to increase or decrease all four motors simultaneously. In this way,
roll, pitch, yaw, and thrust are the 4 directions that we have direct control over. And the
commands to the motors would be a mix of the amount of thrust, roll, pitch, and yaw required.
As we now know, we can command thrust by setting all four motors to the same speed. Then we
can create yaw by increasing two motors spinning the same direction and decreasing the other two.
Pitch is created by increasing or decreasing the front motor pair and then commanding the back
pair in the opposite direction. Roll is the same, but with a left/right pair. This is our simple motor
mixing algorithm that can convert between the intuitive roll, pitch, yaw, and thrust, and the less
intuitive motor speeds.
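A minimal sketch of what a motor mixing algorithm like this can look like in code; the signs depend on your motor numbering and spin directions, so treat this matrix as one plausible choice rather than the Parrot's actual mapping:

%            thrust  roll  pitch  yaw
mix = [ 1     -1      1     1;     % motor 1: front-left
        1      1      1    -1;     % motor 2: front-right
        1      1     -1     1;     % motor 3: rear-right
        1     -1     -1    -1 ];   % motor 4: rear-left
cmd = [0.5; 0.02; -0.01; 0];       % thrust, roll, pitch, yaw commands
motorCmds = mix * cmd              % one speed command per motor

Notice that with this sign pattern a pure roll, pitch, or yaw command sums to zero net torque in the other two axes, which is exactly the decoupling described above.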
As I said earlier, moving forward, backward, left, and right are unactuated motions. And the
way we get around that is by first rotating into an attitude where the thrust vector is partially in
the gravity direction and partially in the direction of travel to accelerate the drone in that
direction. If we wanted to maintain altitude, then we’d increase the thrust so that the vertical
component is still canceling out the downward pull of gravity.
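The amount of extra thrust needed while tilted follows from keeping the vertical component of thrust equal to the weight; a small sketch with an assumed mass and tilt angle:

m = 0.063; g = 9.81;          % assumed mass, kg, and gravity, m/s^2
tilt = deg2rad(10);           % tilt away from vertical
Thover  = m*g;                % hover thrust when level: ~0.618 N
Ttilted = m*g/cos(tilt)       % ~1.5 percent more thrust at 10 degrees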
So now we know that manipulating the four motors in specific ways will allow us to control the
drone in 3D space, we have a set of sensors that we can use to estimate the state of the system,
and we have an onboard processor that can run our controller logic.
The control system development will ultimately be done in Simulink, where we will build and
simulate the quadcopter model, tune the controller, test it in a closed loop simulation, and finally
automatically generate flight code that we will load into the onboard microcontroller on the
Parrot minidrone. The very next step is to figure out how we want to set up the control system
architecture.
So if you don’t want to miss the next tech talk video, don’t forget to subscribe to this channel.
Also, if you want to check out my channel, control system lectures, I cover more control theory
topics there as well. Thanks for watching, I’ll see you next time.
How Do You Get a Drone to Hover? | Drone Simulation and Control, Part 2
In the last video, I showed that we can manipulate the four motors of a quadcopter to maneuver it
in 3D space by getting it to roll, pitch, yaw and change its thrust. We also covered the four
sensors that we have at our disposal to estimate the system states. In this video, we’re going to
use that knowledge to design a control system architecture for hovering a quadcopter. That
means, we're going to figure out which states we need to feed back, how many controllers we
need to build and how those controllers interact with each other. I’m Brian, and welcome to a
MATLAB Tech Talk.
In the control system we're developing, the plant is the minidrone itself. It takes four motor
speeds as inputs, which then spin the propellers, generating forces and torques that affect its
output state. The output that we want is to have the minidrone hover at a fixed altitude. So
the question becomes, how do we command these four motors autonomously so that happens?
Well, to start, let’s assume that rather than do it autonomously, we want to command the
minidrone manually. That is, we have a remote control with toggles that directly control all four
motor speeds. The left toggle would control the two front motors and the right toggle would
control the two rear motors. This puts you in the feedback path because you could see where the
drone is, and then react to its position and attitude by moving the four toggles in very specific
ways.
If you want to increase thrust, you’d speed up all four motors by moving the two toggles in this
direction. Yaw requires that two opposing motors increase speed and the other two decrease
speed so that yawing left, for example, would require this kind of toggle motion. Then to roll the
vehicle, you would increase one of the left/right pairs, and decrease the other. And to pitch the
vehicle, you would increase one of the front/back pairs and decrease the other. In this way, you,
as the feedback controller, could get the drone to hover by expertly changing the commands to
these four motors.
While possible, thinking in terms of motor speed seems really hard, and we want to make our job
easier, so instead we can use the motor mixing algorithm we created in the last video and
command thrust, roll, pitch, and yaw directly. When we command thrust, for example, this single
thrust command gets split evenly to all four motors, the yaw command is distributed positive to
two motors and negative to the other two, and so on.
Now our remote control toggles are aligned with the intuitive roll, pitch, yaw, and thrust rather
than mind-bending motor speeds. This is the controller configuration that a lot of operators use
when manually flying their quadcopters. But it turns out, thinking in terms of roll, pitch, yaw,
and thrust is also beneficial when we’re developing an autonomous control system. So we’ll
keep the motor mixing algorithm and build a feedback control system with it in the loop as well.
Alright, let’s get rid of the human operator and think about how we can accomplish the same
thing autonomously. We'll start by focusing on the thrust command. Thrust is always in the
same direction relative to the drone airframe; it's along the z-axis of the minidrone. That means
that if the drone is flying level, and the z-axis is aligned with the gravity vector, then increasing
thrust causes the drone to increase its altitude rate, which is how fast it is rising, and decreasing
thrust drops the altitude rate. That is pretty straightforward. However, if our drone is flying at a
steep pitch or roll angle, then increasing thrust is coupled to both the altitude rate and the
horizontal speed.
If we’re building a controller for a racing drone that is likely to fly at extreme roll and pitch
angles then we need to take this coupling into account. However, for our simple hover
controller, I’m going to assume that the roll and pitch angles are always really small. This way,
changing the thrust only meaningfully impacts altitude rate and nothing else. With this
information, we can start our control design.
To begin, let’s build a controller that uses thrust to adjust the altitude. If we are able to measure
the drone altitude then we can feed it back to compare it to an altitude reference. The resulting
error is then fed into an altitude controller that uses that information to determine how to
increase or decrease thrust. For now, we can think of this controller as some form of a PID
controller and we’ll talk about tuning it in a future video in this series.
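As a placeholder for that controller, here is one discretized PID step in plain MATLAB; the gains, sample time, and measurements are all made-up values, not the tuned numbers from the model:

Kp = 0.8; Ki = 0.1; Kd = 0.3; dt = 0.005;  % illustrative gains and sample time
altRef = 0.7; altMeas = 0.55;              % hypothetical reference and measurement, m
errInt = 0; errPrev = 0;                   % controller states, persist between steps
err = altRef - altMeas;                    % positive when the drone is too low
errInt = errInt + err*dt;                  % accumulate the integral
errDer = (err - errPrev)/dt;               % finite-difference derivative
thrustCmd = Kp*err + Ki*errInt + Kd*errDer % command passed on to the motor mixing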
With this controller, if the drone is hovering too low, the altitude error will be positive and the
thrust command will increase causing all four motors to speed up at the same time, and the drone
will rise. If the drone is too high, then all four motors will slow down so that the drone will
descend.
So that’s it, right? Our job is done. We’ve developed our simple altitude controller which will
hover our minidrone. And it’s perfect! Except, you know that’s not the case, because there are
disturbances, like wind gusts, that will induce a little roll or pitch into the system and when that
happens, the thrust will not only adjust altitude but also create some horizontal motion and the
drone will start to move away from you. And what good is a hover controller that maintains
altitude but requires you to chase after it or it crashes into walls? That’s hardly hovering.
Nope, we clearly need a better control system architecture, and we should start by trying to
maintain level flight by controlling roll and pitch angles to zero degrees. If we can keep the mini
drone level, then thrust once again will only impact altitude and the drone won’t wander away.
We know from the last video, that we are able to command thrust, roll, pitch and yaw
independently, that is, we can command one action without affecting the others. Knowing this,
we can create three more feedback controllers, one each for roll, pitch, and yaw, exactly the same
way we did for thrust. To give us a little more room, I’m going to redraw the block diagram and
will condense the motor mixing algorithm block to just say MMA.
Now, the output of the plant is more than just altitude, we’re also going to need to measure or
estimate the roll, pitch, and yaw angles as well. I’ll feed back all of the system states and then
label which state I’m using for each controller so hopefully this all makes sense to you. I’m
feeding the estimated roll angle into the roll controller, and the yaw angle into the yaw controller,
and so on.
We have four independent or decoupled controllers, one for thrust, which is really an altitude
controller since we’re claiming small roll and pitch angles. And then three controllers that are
trying to maintain 0 degrees in roll, pitch, and yaw, respectively. This should maintain altitude,
keep the minidrone facing forward and level with the ground.
This is a better hover controller than our first one. But again, it’s still not perfect. To understand
why, let’s play our hypothetical wind gust through this system. The wind might initially
introduce a little roll angle, but our roll controller will remove that error and get the drone back
to level flight. However, for a very brief time during the roll the thrust vector is not straight up
and down and therefore the drone will have moved horizontally a little bit. Then another gust
comes and causes another roll or pitch error and the drone walks away a little bit more. So even
though the drone won’t run away continuously like our first controller, this controller will still
allow it to meander away slowly. There is nothing in our control system architecture that will
bring the drone back to its starting position.
Let’s improve this control system to do just that. I’m going to once again clean up our block
diagram to make some room for additional control logic. OK, now that we have some room, let’s
think about what roll and pitch are doing while hovering. It’s tempting to say that both angles
should be zero and that’s how we set up this current version of the controller. However, they
may need to be non-zero in order to hover. For example, if we want to hover in a strong constant
wind, then the drone will have to lean into the wind, at some angle, to maintain its ground
position. So rather than specifying that we want level flight, really what we need is a ground
position controller, something that will recognize when the drone is wandering off and make the
necessary corrections to bring it back to the reference point X and Y.
But just because we don’t want to pick specific roll and pitch angles, that doesn’t mean we don’t
need the roll and pitch controllers. Remember from the first video, that a quadcopter is incapable
of moving left, right, forward, or backward without first rolling or pitching into the desired
direction of travel. So our control system needs to couple position errors with roll and pitch.
This is a complicated sounding set of maneuvers, but luckily the resulting controller is pretty
simple to understand. We can feed back the minidrone's measured X, Y position and compare it
to the reference to get the position error. For now, we’ll say that the reference position is 0, 0.
This way our controller will cause the drone to hover right above the takeoff point.
Our position controller takes the position error as an input, and then outputs roll and pitch angles.
These are the reference angles that our roll and pitch controllers are trying to follow. So instead
of us, as the designer, picking roll and pitch reference angles, we’re letting the position controller
create them for us. In this way, the position controller is the outer loop and it is generating the
reference commands for the inner loop roll and pitch controllers. These are cascaded loops.
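A minimal sketch of that outer loop, assuming small angles and a PD structure; the gains, the sign conventions, and the saturation limit are all assumptions rather than the model's actual values:

Kp = 0.3; Kd = 0.15;                      % illustrative outer-loop gains
posErr = [0.4; -0.1];                     % body-frame X, Y position error, m
velEst = [0.05; 0];                       % estimated body-frame velocity, m/s
angRef = Kp*posErr - Kd*velEst;           % maps to [pitch; roll] references, rad
angRef = max(min(angRef, 0.35), -0.35)    % clamp to about 20 degrees for safety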
As a quick side note, the measured yaw angle also feeds into the position controller. The reason,
very briefly, is that the X, Y position error is relative to the ground, or the world reference frame,
whereas roll and pitch are relative to the body of the drone. Therefore, pitch doesn’t always
move the drone in the X world direction and roll doesn’t always move the drone in the Y world
direction. It depends on how the drone is rotated, or its yaw angle. So, if we need to move the
drone to a very specific spot in the room, then it needs to know yaw in order to determine
whether roll, pitch, or some combination of the two is needed to achieve that. So our position controller
uses yaw to convert between the world X, Y frame and the body X, Y frame.
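That world-to-body conversion is just a planar rotation by the yaw angle; a small sketch with an assumed yaw of 30 degrees:

yaw = deg2rad(30);               % estimated yaw angle
Rwb = [ cos(yaw)  sin(yaw);      % rotation of a world-frame vector
       -sin(yaw)  cos(yaw)];     % into the body frame, about the z-axis
errWorld = [1; 0];               % 1 m of position error along world X
errBody  = Rwb*errWorld          % the split that pitch and roll must correct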
Alright, let’s walk through a thought exercise to see how all of these controllers work together to
maintain position and altitude. Let’s say the mini drone is flying level at the correct altitude but
a little too far left of where it took off. This will result in a position error that feeds into the
position controller. The proportional part of the controller will multiply that error by a constant
which will request that the drone roll to the right. The roll controller will see there is a roll error
because the drone is still level and request a roll torque. This will play through the motor mixing
algorithm and request that the motors on the left side of the drone speed up and the motors on the
right side slow down. This will roll the drone to the commanded angle. Now the drone will
begin to move to the right, but since the vertical component of thrust is slightly smaller when
rolled, the drone will also start to lose altitude. The altitude controller will sense this error and
increase the thrust command accordingly.
As the drone continues moving right, the position error is dropping and therefore the requested
roll angle through the proportional path is also dropping, bringing the drone back level. But if
the drone is traveling too fast, then there is some horizontal momentum in the direction of travel
that needs to be removed. This is where the derivative term in the position controller is useful.
It can sense that the drone is closing in on the reference position quickly, and start to remove roll
earlier, even causing it to roll in the opposite direction in order to slow the drone down and stop
right where we want it to.
This is now a good architecture for our hover control system, but there are two glaring obstacles
to creating and tuning it. First, this requires having a way to estimate the states yaw, roll, pitch,
altitude, and XY position. These are the signals that we’re feeding back. And second, we need
to tune six PID controllers that all interact with each other, with four of them directly
coupled in cascaded loops. The way that we're going to handle the first problem is by combining
the measurements from the four sensors we have and in some cases using a model and a Kalman
filter to estimate each of those feedback states. For the second problem, we need a good model
of our system so that we can use Model-Based Design and MATLAB and Simulink to tune our
six PID controllers.
In fact, if we just look at the controller portion of this feedback system you can see that the
Simulink model has pretty much the exact same controller architecture that we’ve built up in this
video. The outer loop XY position controller is generating the reference pitch and roll angles for
the inner loop pitch/roll controller. There are also the yaw and altitude controllers, and each of
them feeds into the motor mixing algorithm. In the next video, we're going to leave the world of
concepts and drawings and move over to this Simulink model to look at the actual software for
this control system and the plant and environment dynamics.
So if you don’t want to miss the next tech talk video, don’t forget to subscribe to this channel.
Also, if you want to check out my channel, control system lectures, I cover more control theory
topics there as well. Thanks for watching, I’ll see you next time.
How to Build the Flight Code | Drone Simulation and Control, Part 3
We are well on our way to designing a control system for a quadcopter. At this point in the
series, we’ve learned how quadcopters generate motion with their four propellers and we’ve
stepped through a control system architecture that we think is capable of getting our drone to
hover. However, there are still a few more steps we need to take before we can actually get the
drone up and flying. First, we need to code the control logic in a way that we can put it on the
minidrone. We’ll call this the flight code. And second, we’ll need to tune and tweak the flight
code until the hover performance is what we’re looking for. To do that we’ll use Model-Based
Design where we’ll use a realistic model of the quadcopter and the environment to design our
flight code and simulate the results. So we’ll have two different bits of software that we’ll write:
the actual flight code that runs on the quadcopter, and the model code that we use to simulate the
real world. In this video, we’re going to explore the flight code in more detail. I’m Brian, and
welcome to a MATLAB Tech Talk.
Flight control software is just a small part of the overall flight code that will exist on the Parrot
minidrone. There’s also code to operate and interface with the sensors and process their
measurements. There’s code to manage the batteries, the Bluetooth® interface, and LEDs,
there's code to manage the motor speeds, and so on. There's a bunch.
One option to implement the flight controller would be to write the C code by hand and then
compile the entire flight code with your changes to the flight controller and finally load the
compiled code to the minidrone. This is a perfectly reasonable approach to creating flight code,
but it has a few drawbacks when developing feedback control software. With this method, we
don’t have an easy way to tune the controllers except by tweaking them on the hardware. We
could develop our own tuning tools and a model of the system and simulate how it would behave
that way. But my experience is that designing and modeling control systems in C code makes it
hard to explain the architecture to other people and it’s more difficult to understand how changes
impact the whole system than it is with graphical methods.
So we’re going to use a second option, describing the flight controller graphically using block
diagrams. With this option, we’ll develop the flight controller in Simulink, then auto code it into
C code, where if we wanted to we could make changes manually, then compile that C code and
load it onto the minidrone. The benefit of adding this extra step is that I think the Simulink code
is easier to understand, plus over the next two videos, we’ll talk about how we can build a model
of the drone and the environment in Simulink so that we can simulate the performance of our
flight controller and use existing tools to tune the controller gains.
We won’t need to worry about writing most of the flight code because we are going to use the
Simulink Support Package for Parrot Minidrones to program our custom flight control software.
This package loads new flight firmware onto the vehicle in a way that keeps all of the normal
operating functions of the drone in place but lets us replace just the control portion of the
firmware. As long as we keep the same input and output signals intact, then when we program
the minidrone through Simulink, any code we write will be placed in the correct spot and can
interface with the rest of the minidrone firmware.
So in this video, here's what we're trying to do: design a Simulink model that takes external
commands and the measured sensor readings as inputs, and outputs the commanded motor
speeds along with a safety flag that will shut the minidrone down if set. This flag is our
protection in case our code causes the drone to run away or go crazy in some way. Then we can
build the C code from that model and fly the actual drone with that software to see how it does.
We have a pretty good start on the flight control system from the architecture that we developed
in the last video, but it’s not all of the software that we need to write. This is only the controller
part of the control system. For example, the drone has a sensor that measures air pressure and
this air pressure reading is what is passed into our flight control system. However, we’re not
trying to control the drone to a particular pressure, we’re trying to control an altitude. Therefore,
there is additional logic needed to convert all of the measured states coming from the sensors
into the control states needed by the controller. We’ll call this block the state estimator.
In addition to the state estimator and the controllers, there’s also logic we have to write that
determines whether or not to set the shutdown flag. We could leave this code out, but then we’d
run the risk of the drone flying away or causing harm to nearby observers. We could check for
things like whether the rotation rate of the drone is above some threshold or whether the position
or altitude is outside of some boundary. Creating this code is relatively easy and can really save us from
damaging the hardware or other people.
Lastly, we need to think about data logging. All of the firmware that exists on the minidrone
records data that we have access to during and after the flight. Since the minidrone doesn’t know
about the software that we’ve written, we need to make sure we have data logging set for the
variables that we’re interested in. Since we’ll be using Simulink to write our flight code, we can
easily create logic that will store data as a .mat file locally on the drone that we can download to
MATLAB® after the flight.
These are the four main subsystems that we need to develop in Simulink in order to have safe
and functioning flight code. But we’re not going to build the entire Simulink model of this code
from scratch in this video, it would just take too much time. Luckily, we don’t have to.
Aerospace Blockset™ in MATLAB has a quadcopter project based on the Parrot minidrone that
we’re going to use as a starting point. There are some good webinars that I’ll link to in the
description that describe how to open this model and use it, so I’m not going to cover much of
that here. Instead, I’m going to show you how this Simulink code matches the control
architecture that we developed in the last video and point out how it also accomplishes state
estimation, data logging, and fault protection. We’ll end by auto coding the Simulink model and
seeing it in action by flying the minidrone.
One thing to note when looking at this Simulink model is that it was developed at MIT for the
lab portion of a control theory course and, therefore, is set up in a way to teach the underlying
theory. In some cases, it has more logic than we need to perform our relatively simple hover
maneuver. I’m going to start from this stock model so that it looks the same to you when you
open it, but I’m going to modify it slightly as we go along to have it be clearer for our purpose.
Alright, let’s get to it. As you can see, this top level of the model has several subsystems: FCS,
airframe, environment, and so on. It might not immediately look like it, but this is our classic
feedback control system. In the top left, we have the system that is generating the reference
signals or the set points that we want the drone to follow. There is the flight control system
block, where the error term is generated and the PID controllers live. This is the flight code block
that gets auto coded and loaded onto the minidrone and where we’re going to spend the majority
of the rest of this video.
The outputs from the FCS block are the motor commands that are played through the plant, the
airframe dynamics. The visualization block up in the corner just plots signals and runs the
3D visualizer while the simulation runs; it's outside of our feedback loop. There is an
environment block that models things like the gravity vector and air pressure for the plant and
the sensors. And finally, the sensor model block, which simulates the noise and dynamics of the
four sensors that are on the minidrone. So you can see the characteristic feedback loop at this top
level. The important thing to realize for this video is that the FCS block is the flight code, and
everything else is part of the model used for simulation.
So now, let’s go into the FCS block and see what’s there. First off, you’ll notice that there are
three inputs instead of the two that I mentioned. This is because the software as written makes
use of the camera and image processing to help with precision landing. While precision landing
is useful, this complicates our hover controller so I’m going to remove it for now and get back to
just the two inputs.
OK, let’s open the flight control system block where things will start to look familiar. And we’ll
begin with the heart of our control system, the controller itself. As a quick reminder, the
controller subsystem takes the reference signal and compares it to the estimated states to get the
error signals. The error is then fed into several PID controllers to ultimately generate the required
motor commands. So let’s open up the subsystem block and see what it looks like.
Graphically, it looks different than the controller architecture we covered in the last video, but I
assure you that the logic is mostly the same, it's just organized in a slightly different way and
this logic allows us to command special take-off behaviors as well as control the roll and pitch
angles directly for landing. We won't need either of these capabilities for our simple hover
maneuver. To show you that the remaining logic matches the architecture we developed before, I'm
going to zip through some reorganization so that what we’re left with matches our expectation.
There we go.
You can now see that we have the XY position outer loop controller feeding into the roll pitch
inner loop controller. And independent of those, we have the yaw and altitude controllers.
Overall, there are six PID controllers that work together to control the position and orientation of
the minidrone.
If we take a look at just the altitude controller, which is set up as proportional and derivative,
you'll see that it might be implemented slightly differently than you're used to.
Rather than a single altitude error term feeding into the P and D branches, the P gain is applied to
the altitude error derived from the ultrasound, whereas the D gain is applied to the vertical rate
measurement directly from the gyro.
This way, we don't have to take a derivative of a noisy altitude signal; we already have a rate
measurement from a different sensor, one that is less noisy than differentiating the ultrasound
signal.
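The difference is easy to see in a two-line sketch; here the derivative path uses a measured rate directly, with made-up gains and measurements:

Kp = 0.8; Kd = 0.3;              % illustrative gains
altErr = 0.7 - 0.62;             % reference minus ultrasound altitude, m
vz = 0.12;                       % measured vertical rate, m/s (hypothetical)
thrustCmd = Kp*altErr - Kd*vz    % minus sign: the rate term damps the motion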
I'm going to talk more about the benefits and drawbacks of setting up your PID controller this
way, as far as tuning goes, in a future video in this series. For now, we'll just accept that this is the way
it is and move on.
The outputs of these PID controllers are force and torque commands, which all feed into the
mixing algorithm. This produces the required thrust per motor and then that thrust command is
converted into a motor speed command through this block.
All in all this subsystem is executing the logic that we built in the last video.
Let’s leave the controller now and move on to the state estimator because there is some really
cool stuff going on in this block that we should talk about.
There are two steps involved in taking the raw sensor measurements and generating the
estimated states. First, we process the measurements and then we blend them together with
filters to estimate the control states. Let’s look at the details of the sensor processing block. This
looks daunting at first, but the underlying concept is pretty simple. Along the top, the
acceleration and gyro data are calibrated by subtracting off the bias that has been previously
determined. By removing the bias, zero acceleration and zero angular rate should result in a zero
measurement. The next step is to rotate the measurements from the sensor reference frame to the
body frame. And lastly, filter the data through a low pass filter to remove the high frequency
noise.
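Those three steps are only a few lines each; here is a sketch for one gyro sample, where the bias values, the mounting rotation, and the filter constant are all illustrative assumptions:

gyroRaw  = [0.021; -0.013; 0.002];         % raw rate sample, rad/s (hypothetical)
gyroBias = [0.020; -0.010;  0];            % bias from a prior calibration
Rsb = eye(3);                              % sensor-to-body rotation (identity here)
wBody = Rsb*(gyroRaw - gyroBias);          % calibrated, body-frame angular rate
alpha = 0.2; wFiltPrev = zeros(3,1);       % low-pass weight and previous output
wFilt = alpha*wBody + (1-alpha)*wFiltPrev  % first-order low-pass filtered rate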
Similarly, the ultrasound sensor has its own bias removed. And the optical flow data just has a
pass/fail criterion. If the optical flow sensor has a valid position estimate, and we want to use that
estimate, then this block sets the validity flag, TRUE. There’s always more sensor processing
that can be done, but we’ll see shortly that our drone hovers quite nicely with just this simple
amount of bias removal, coordinate transformation, and filtering.
Now that we have filtered and calibrated data, we can begin the task of combining measurements
to estimate the states we need for the controllers. The two orange blocks are used to estimate
altitude and XY position. If you look inside these blocks, you'll see that each of them uses a
Kalman filter to blend together the measurements and a prediction of how we think the system is
supposed to behave in order to come up with an optimal estimation. There is already a MATLAB
Tech Talk series that covers Kalman filtering so I'm not going to spend any more time on them
here, but I recommend watching that series if you’re not familiar with it and I’ve left a link in the
description of this video.
The other non-orange block estimates roll, pitch, and yaw, and it does so using a complementary
filter instead of a Kalman filter. A complementary filter is a simple way to blend measurements
from two sensors together, and it works really well for this system. In the description of this
video, I also linked a complementary video on complementary filters that I posted to my
channel, if you are interested.
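One update step of a complementary filter is only a line or two; this sketch blends a gyro-propagated roll angle with an accelerometer-derived one, with all of the numbers made up for illustration:

dt = 0.005; tau = 0.5;              % sample time and filter time constant, s
k = tau/(tau + dt);                 % ~0.99: trust the gyro at short timescales
rollPrev = 0.05; gyroRate = 0.02;   % previous estimate, rad; measured rate, rad/s
rollAccel = 0.048;                  % noisy accelerometer-derived roll, rad
rollEst = k*(rollPrev + gyroRate*dt) + (1-k)*rollAccel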
With the state estimation and controller subsystems described, we can now move on to the other
important, but less flashy subsystems.
There is the logging subsystem that is saving a bunch of signals like the motor commands and
position references to .mat files. These are the values that we can download and plot after the
flight. We also have the crash predictor flag subsystem. The logic in here is just checking the
sensed variables like position and angular velocity for outliers. When that happens, it sets the
flag that shuts down the minidrone. This is where you could add additional fault protection logic if
you wanted to.
There is also the sensor data group, which is just pulling individual sensor values off of the
sensor bus so that we have access to them elsewhere in the code.
And finally, there is the landing logic block. This block will overwrite the external reference
commands with landing commands if the landing flag is set. Once again, I’ll remove the switch
and landing portion to simplify the logic since we don’t want to execute a precision landing.
I have to change one other thing here because the reference block from the top level of the model
isn't part of the code that gets auto coded and run on the drone. So it won't get loaded onto the drone and
execute. But that’s OK because I can move this logic into the flight code right now. Since I know
I just want the drone to hover, I’m going to hardcode the reference values in this block. There we
go. This will keep the drone at an X Y position of 0 and 0 and an altitude of -0.7 meters.
Remember, the z-axis is down in the drone reference frame, so this is an altitude of 0.7 meters
up.
OK, so this is no longer the landing logic, but instead is the block that is generating our reference
commands. And we don’t need these inputs anymore since the reference commands are now
hardcoded values.
That completes the very quick walkthrough of the entire flight control software that is in this
quadcopter model. And you should now sort of understand how each of these subsystems
contribute to getting our minidrone to hover safely, whether it’s the sensor processing and
filtering or the various feedback controllers or even the logic that sets the stop flag. We need it
all to have a successful control system. I haven’t yet spoken about how to tune all of these
controllers, that will be a future video in this series. For now, we can rely on the fact that the
default gains delivered with the model are already tuned pretty well. OK, enough looking at
Simulink models, I think it’s about time we see this default flight code in action by flying it on a
Parrot minidrone.
We need the Simulink Support Package for Parrot Minidrones installed in order to allow
Simulink to communicate with the drone. I already have this package so all I need to do is pair
my drone to my computer via Bluetooth and hit the build model button at the top of the model.
Again, the webinar linked in the description describes how to set all of this up if you’re
interested in doing this at home.
While the software is building, let me revisit what is going on behind the scenes. We have all of
this flight code in Simulink, which at the top level has the necessary interfaces for the rest of the
minidrone firmware. We’re now in the process of auto coding C code from the Simulink block
diagrams. If you have Simulink Coder™ installed like I do, then you will have access to the C
code and can make changes if you like. If you don't have Simulink Coder, then you just can't see
the code but it is still generated. The C code is then compiled on your computer and the
compiled binary code is sent to the minidrone via Bluetooth and placed in the correct spot in the
firmware. Once it’s ready to fly, this GUI interface pops up which allows us to start the code on
the drone, and more importantly, stop it. I’ve set up my computer and drone in an area that’s safe
to fly, but don't forget to grab your safety goggles. Now we just sit back, hit start, and watch our
feedback control system in action.
I told you the default gains are pretty good. In the next video, I’ll deep dive into the models so
that you have a pretty good idea of how we’re simulating the real world and then how we’re
going to use those models to tune the controllers ourselves. If you don’t want to miss the next
Tech Talk video, don’t forget to subscribe to this channel. Also, if you want to check out my
channel, control system lectures, I cover more control theory topics there as well. Thanks for
watching, I’ll see you next time.
How to Build a Model for Simulation | Drone Simulation and Control, Part 4
In the last video, we were given functioning flight code in the form of the quadcopter model in
Simulink and we showed that it successfully hovered the Parrot Minidrone. But what if we had
to develop the code ourselves or make changes to it? How could we design and test the code in a
safe way? For that, we need a good model of the mini drone and the environment it’s going to
operate in. That’s what we’re going to talk about in this video. I’m Brian, and welcome to a
MATLAB Tech Talk.
To understand how a model can help us, let’s look at a really simple block diagram. In this first
block, we have the flight control software. This represents all of the control system software that
we reviewed in the previous video. This code has to interface with the rest of the mini drone
firmware and so, as we talked about before, it has two inputs, the raw sensor readings and the
reference commands or set points, and two outputs, the motor speed commands and the stop
flag. Remember though, in the last video, we moved the reference commands to inside the flight
code so really this input goes away and we’re just left with the one.
The second block we'll call 'model' and it represents everything else, anything that isn't the
flight control code. This includes the rest of the mini drone firmware, the hardware, the
atmosphere it’s flying in … everything. But we’ll get into the details of that in just a bit. At a
very basic level, the model takes in motor commands and the stop flag, makes a few calculations,
and outputs sensor measurements. In this way, the model wraps around the flight code and
provides the feedback loop.
Imagine we had this model that was so accurate that it perfectly represented reality. To a
bystander, it would be indistinguishable whether the results came from the actual hardware or if
it came from this perfect model. If this were the case, we could simulate the minidrone
performance using the model and be very confident that when we later run the flight code on the
actual hardware, it will have the same result. As long as we're happy with the simulated
performance, then our design is complete. The nice thing about simulating the performance is
that we can reset the model quickly and put the vehicle in any situation we want to see how it
does. And if it does poorly, we make the necessary changes and we don’t damage any of the
hardware in the process.
As you can imagine though, a perfect model of reality is impossible to create. Luckily, we don’t
need to model everything. The trick is to figure out what to include in the model and what to
leave out. Some of that knowledge comes easily by just understanding your system and how it
will be operated. For example, we probably don't need to model the code that turns on and off
the front LEDs. They won't impact our control system. But there are a lot of other things that
aren’t as obvious and knowing what to model can require a little experience and investigation.
One example is whether to model the airframe structure as a rigid body or as a flexible body. Do
we need to take into account the flexibility and vibrational modes of the structure? Will they
meaningfully impact our sensor measurements or is the additional logic in the model just going
to complicate matters and slow the simulation down without any noticeable benefit?
It’s hard to know exactly what to model and what to leave out initially. Usually what happens is
you start with your best guess, and then over time, the fidelity of your model will grow until you
are satisfied with the match between your experimental results and your simulation.
So simulation is a way to verify the system in situations that are hard or time-consuming to
physically test, as long as the model adequately reflects reality. But we also use a model to design
our control system in the first place. And for control design it would be great if we could use the
linear analysis tools that we already know. Unfortunately, the nonlinear model we use for
simulation doesn't lend itself well to linear analysis and design. For that, we need a linear
model. Essentially, what we need to do is remove the nonlinear components in our model or
estimate them as linear components. The linear model will not reflect reality as accurately as the
nonlinear model, but the hope is that it’s still accurate enough that we can use it to design our
controllers.
We would then have two different models that we can use for Model-Based Design. We have a
lower fidelity, linear model that is useful for determining the controller structure and gains, and
we have a higher fidelity nonlinear model that is useful for simulating the result and verifying the
system.
To summarize, the steps we’d take to design our flight control software using Model-Based
Design looks like this.
1. Create a high fidelity model of everything the flight control software needs to interact with.
More than likely, this will be a nonlinear model.
2. Verify that the model matches reality with a number of test cases.
3. Once we have a model that reflects reality, we create a linear version of it so that we have
both a linear and a nonlinear model.
4. We use the linear model and our linear analysis tools to design and analyze our control
system (a sketch of this step follows the list).
5. We use the nonlinear model to verify the performance of the control system.
6. We feel confident enough to run the flight control software on the actual hardware for final
verification.
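For step 4, Simulink Control Design is one way to get that linear model programmatically; the model and block names below are hypothetical, and the quadcopter project instead ships a ready-made linear airframe variant:

mdl = 'myQuadcopterModel';                      % hypothetical model name
io(1) = linio([mdl '/Controller'], 1, 'input'); % linearization input point
io(2) = linio([mdl '/Airframe'],  1, 'output'); % linearization output point
linsys = linearize(mdl, io);                    % state-space model at the operating point
bode(linsys)                                    % the usual linear tools now apply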
So far, I keep drawing this model as a single block. But rather than thinking about it as a
monolithic set of calculations, it's generally easier to break it up into several smaller models that
represent specific and discrete systems.
For our minidrone, we might break it up into the airframe structure, the actuators, the
environment, and the sensors. And then within these models are even smaller subsystem models
like the gravity model or the IMU model.
There are many reasons to approach modeling in this hierarchical way rather than lumping all of
the behavior together into a single model. For instance, it allows multiple people and teams to
build different parts of the model simultaneously. You can upgrade portions of the model based
on which area needs more fidelity without impacting the rest. And you can choose the modeling
technique that makes the most sense or is the easiest for each system. Then when you put it all
together, you'll have an entire model that you can use for wrapping around your flight control
software and simulating the results.
Just like the last video on the flight software, we don’t have the time in this video to build up the
models from scratch. However, the quadcopter example within the Aerospace Blockset comes
with a model that provides a good starting point for our discussion. So let’s head over to
Simulink and walk through it.
I’ll point out a few interesting things as we go along, but I want you to realize that this is just one
perspective on how to develop a quadcopter model and the different techniques you can use to
generate the different subsystems.
So with that disclaimer out of the way, let’s jump into this particular model.
You can see we have the flight software that we talked about in the last video, and wrapped
around this block is the model that we'll use for simulation. At this level we have the Airframe,
Environment, and Sensors. Airframe is implemented as a variant subsystem, which means that
before you run the model you can select which version of airframe you want to run with; the
nonlinear model that we’ll use in this video for simulating the flight, or the linear airframe model
that we’re going to use in the next video to tune the controllers.
Let's take a peek inside the nonlinear model to see how it's set up. There are two main blocks:
the AC model on the left consists of the actuator models and a model of how the environment
disturbances impact the system. Basically, anything that can create a force or torque on our
minidrone is calculated in this block. The forces and torques are then fed into the 6DOF model. This
is a rigid body model that comes with the Aerospace Blockset. This is an example of using an
existing model rather than going through the effort of writing out the equations of motion for a
rigid body yourself. Of course, you still need to determine the specific parameters for your rigid
system like mass and inertia. More than likely, the developer pulled this information from a
CAD model of the minidrone, however, you could set up a physical test to calculate this
information.
Let’s go back up to the top level of the model and go into the environment block. Here, again, it
is a variant subsystem and we have the option of choosing constant environment variables or
variables that change based on position. We’re going to select the constant variables because
things like gravity and air pressure aren’t going to change between ground level and hovering at
less than a meter. However, if you wanted to simulate how high your minidrone could fly, then
the changing environment model will lower the air pressure and density as the drone gets higher,
which will eventually stall the drone at some maximum altitude. So choosing one model or another
depends on what we’re trying to test for.
OK, lastly, I want to go into the sensors block. And, well, you've guessed it, this is also a variant
subsystem. Here, we can select dynamic sensors with noise or feedthrough sensors. We’ll select
the feedthrough option for tuning in the next video, but for this simulation we want our sensors
to behave as much like the real things as possible. Inside this subsystem, there is some
hardcoded sensor calibration data and a block called sensor system that houses the models for the
camera, the IMU, the ultrasound, and the pressure sensor.
Alright, now at this point, let’s head back up to the top level and run the simulation so that I can
show you what the output looks like. This particular model is set up with a 3D visualization that
renders the mini drone based on the outputs from the simulation. Just like in the last video,
we're trying to get the minidrone to hover about 0.7 meters above the ground. We could check
the simulation data and compare it to the saved data from the actual test we ran in the last video,
but in the interest of time let’s just compare them visually. I’ll play the two results side by side
so that we can see that this model is fairly close to reality, at least for this one test condition.
Now that we have a model that we like, we can use it for things like safely simulating a failure
and seeing how the system does. For example, we can go back to the sensor block and into the
IMU model and change the gyro bias. Let’s say we estimated the bias poorly and it’s really three
times worse than we expect. Now we can simulate the system and see how our controller
responds.
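Before looking at the result, here’s roughly what that change represents. A simple gyro measurement model adds a bias and some noise to the true rate, so tripling the bias looks something like this; all of the values are illustrative, not the model’s actual parameters:

% Simple gyro measurement model: true rate plus bias plus white noise.
w_true     = [0 0 0];                  % true body rates at hover, rad/s
gyro_bias  = deg2rad([0.5 0.5 0.5]);   % nominal bias estimate, rad/s (assumed)
noise_std  = deg2rad(0.2);             % noise level, rad/s (assumed)
bias_scale = 3;                        % simulate a bias 3x worse than estimated
w_meas = w_true + bias_scale*gyro_bias + noise_std*randn(1,3);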
It takes off just about the same, but the gyro bias error quickly causes our drone to roll away from level, and then it just runs away and eventually crashes into the ground. So we’ve learned two things from this: one, if the gyro bias is three times worse, the drone won’t perform well; and two, if we are worried about being this far off on estimating bias, then maybe we should change the stop flag logic to recognize that we’re drifting away and shut the drone down before we hurt the hardware.
Alright, this was a fast walkthrough, but hopefully I’ve given you enough information to review
the model on your own in more detail or to start the process of creating your own drone model.
In the next video, we’re going to see how we can use the linear version of this model to tune the
PID controllers. If you don’t want to miss the next Tech Talk video, don’t forget to subscribe to
this channel. Also, if you want to check out my channel, control system lectures, I cover more
control theory topics there as well. Thanks for watching, I’ll see you next time.
In the last video, we learned how accurate, nonlinear models are great for simulation, but they
don’t lend themselves well to linear analysis and design. For that we need a linear model. It
won’t be as accurate as our simulation model but we will be able to use it for tuning the 6 PID
controllers in our control architecture. And that’s what we’re going to do in this video. I’m
Brian, and welcome to a MATLAB Tech Talk.
We’re going to spend the majority of this video working in Simulink, but rather than start here, I
think everything will make more sense if I set the context first.
So what do we have so far? We have a set of nonlinear models wrapped around a model of the
full flight control software. This is the software that is set up for automatic code generation and it
has the controllers and state estimators that are directly in the control loops, but also other logic
like fault protection and data logging. Every one of these has nonlinear components. So it would make sense that, in order to have a linear model of the entire system, we need to linearize both the flight software and the models that are wrapped around it.
However, in my experience it’s hard to have flight software that is set up well for automatic code
generation as well as for linearization. This is because flight software has if statements and
switches and state machines and all sorts of things that are needed for the code to run but are
difficult or impossible to linearize. Therefore, we usually build a completely separate model for
controller design; specifically, one that is capable of being linearized. So in this video, I’ll start with the full-up quadcopter model that comes with the Aerospace Blockset and then start removing a bunch of stuff that we don’t need for controller design, specifically all of the stuff that makes linearization difficult.
Once we have that stripped-down model, we’ll use the PID Tuner app in Simulink to linearize this model and tune the PID controllers.
If you recall from the 2nd video, our control architecture looks like this with several different
control loops and 6 different PID controllers. We’ll start by tuning just a single loop, the altitude
loop. Remember, this is independent of the other loops so we can tweak and adjust altitude
without affecting roll, pitch, or yaw. To make sure they’re out of the equation completely, we’ll
just set the commands to 0 for roll, pitch, and yaw.
We’ll also make the assumption that the sensor dynamics and noise don’t meaningfully impact
the controller design. If this is true, then we can remove the sensor models and the state
estimation logic as well. Basically, we’ll assume that our controller knows the true altitude of the
drone perfectly. After we complete our controller tuning we’ll test the results on the full
nonlinear model and see if that assumption is good. If it doesn’t work then we’ll add the sensors
back in and try again. I like to start with the simplest model possible and only go more complicated
if necessary.
Then we’ll linearize the altitude loop and adjust the gains to get the altitude performance we’re
after. Now, I’m only going to tune this one controller in this video, but the process is nearly the
same for the others as well.
After the altitude loop I’d move onto tuning the yaw controller, keeping roll and pitch constant
and holding altitude fixed. Once that’s done I’d then move onto roll and then pitch. Once those
inner loop controllers are all tuned then I’d move onto the outer loop position controller and tune
that while the inner loop controllers are active and maintaining orientation. In this way, we
would step through each of the 6 PID controllers and at the end have them all working together.
So that’s the overview of what we’re about to do, I’ll add some more context as we go but that
should be enough to understand what I’m doing as I start moving Simulink blocks around. So
let’s get to it.
The first thing I want to do is just see how well the stock altitude controller does. I’ll pick off the
altitude from the state bus and plot it with a scope. The controller is trying to hold an altitude of
0.7 meters and remember that altitude is measured in the drone reference frame which has
positive Z axis pointing down. So our controller is actually driving the altitude to -0.7. As you
can see, there is a slight overshoot with the way the stock controller is tuned but it settles out
nicely at -0.7. Let’s see if we can tune the controller for different performance.
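One small thing worth keeping straight, since the sign flip trips people up; in this frame, up is negative:

% Altitude sign convention in the drone's NED-style frame (positive Z down).
height_cmd = 0.7;       % desired height above the ground, m
z_cmd = -height_cmd;    % the command the controller actually drives to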
This is our simulation model, and I don’t want to change this. Instead, I’ll copy the whole thing
and paste it in its own model that we can modify and use for controller tuning. Alright, now to
start removing stuff. First, we don’t need any of the visualization or the scope that I just added.
We can also remove the sensor block since we’re just going to feed back a perfect altitude state.
OK, now if I go into the flight control system there’s a lot in here that I don’t need. Remember
we’re just focusing on the altitude controller for now. So inside the flight controller subsystem
I’ll grab just the altitude reference and the controller plus the motor mixing algorithm and the
thrust to motor command block. These will be the only parts of the flight code that we need to
complete the altitude loop.
I’ll bring those blocks up to the top level of the model and remove the rest of the flight control software. The outputs of these blocks are the motor speed commands, which we can feed directly
into the airframe model. And now I’ll set the roll, pitch, and yaw torques to 0, ensuring that, as long as there are no other external forces and torques on the airframe (like wind gusts or something), the airframe can only go up and down. I know that the environment block doesn’t model external disturbances like that, so we’re good with just setting the commands to 0.
Now we need to feed back our perfect altitude state into the altitude controller and we have the
simplified closed loop system that I showed you earlier. So now at this point we can go into the
altitude block and see what to do with the PID controller.
We saw this controller in the 3rd video, but I just want to briefly describe what is going on again.
First off, this is just a PD controller, and the derivative path is not fed by an actual derivative, but
rather by the altitude rate that is estimated by the Kalman filter. Remember, this is a good way to
set up the PD controller because we’re not taking a derivative of a noisy signal; we’re estimating the rate directly. However, this setup doesn’t work when we feed back the actual state rather
than going through the state estimator. That’s because we’re not also feeding back a true rate
state. But that’s OK because, for this model, we have removed the sensor block and the noise associated with it, and have a pretty clean altitude signal. So we can feed that altitude signal
into a Simulink PID block which will take care of the derivative for us as well as add some
filtering if needed.
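In transfer-function terms, what we end up with is a PD controller with a filtered derivative, which you could write at the command line like this (the gains and filter coefficient here are illustrative):

% PD controller with a first-order filter on the derivative term:
% C(s) = Kp + Kd*s/(Tf*s + 1), where Tf = 1/N for filter coefficient N.
Kp = 0.8;  Kd = 0.3;  N = 100;    % illustrative values
C = pid(Kp, 0, Kd, 1/N)           % pid(Kp, Ki, Kd, Tf)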
I’ll set it to a PD controller and set the gains to the current values. Then I’ll remove the existing PD gains and logic and replace them with the PID Controller block. And this is now ready for
autotuning. We have our altitude reference, -0.7 meters, which is compared to the true altitude in
this case. The error is fed into a PD controller and then a feedforward gravity term is added in
afterward. This basically adds in the amount of thrust needed to offset the weight of the drone so
that the PD controller just needs to add positive thrust to go up and negative thrust to go down.
We could remove that feedforward path by adding an integral to our PD controller but we’ll
leave it like this since that’s the way the model is already set up.
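The gravity feedforward itself is easy to sketch out: it’s the thrust, and the corresponding motor speed, that exactly cancels the drone’s weight. The mass here is approximate, and the thrust constant is an assumed name and value:

% Gravity feedforward: the hover thrust, split across the four rotors.
m = 0.068;   g = 9.81;                 % approximate minidrone mass, kg
k_thrust = 1.0e-7;                     % thrust constant, N/(rad/s)^2 (assumed)
T_hover = m*g;                         % total thrust to cancel weight, N
w_motor = sqrt(T_hover/(4*k_thrust));  % per-motor hover speed, rad/s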
Alright, now we can open the PD block and hit the tune button to kick off the autotuner. The first thing the autotuner does is linearize the entire control loop; remember, now that we’ve removed all of the difficult components, this system is capable of being linearized. The tool creates this linear model of the plant, which we could export and use to design our PID gains manually.
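If you did want that plant model outside of the app, Simulink Control Design can extract it at the command line. A sketch, where the model and block names are placeholders rather than the shipped names:

% Linearize the stripped-down tuning model between the controller output
% and the measured altitude. Names below are placeholders.
io(1) = linio('altitudeTuningModel/PID Controller', 1, 'input');
io(2) = linio('altitudeTuningModel/Airframe', 1, 'openoutput');
plant = linearize('altitudeTuningModel', io);   % LTI plant seen by the PD loop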
However, we’re going to stay in the PID Tuner app because it plots the closed-loop response of
the controller and the linearized plant for us and we can simply adjust the response time and
transient behavior of the loop with the sliders at the top. You can see that the stock design, the
dashed line, has a similar behavior to what we saw with the simulation model. It’s not going to
be exactly the same because we removed all of the nonlinear components, but the goal is that it’s
close enough for gain selection. We’ll test that out in just a bit. So now we can move the sliders
around and adjust the behavior. For this design, I’ll target a specific frequency-domain performance by setting the bandwidth to 5 rad/s and the phase margin to 60 degrees.
This produces a proportional gain of about 0.32 and a derivative gain still at 0.3.
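For reference, you can hit the same targets at the command line with pidtune, assuming you exported the linearized plant from the app as plant:

% Tune a filtered-PD controller for a 5 rad/s bandwidth and 60 deg margin.
opts = pidtuneOptions('PhaseMargin', 60);
C = pidtune(plant, 'PDF', 5, opts);   % 'PDF' = PD with filtered derivative
C.Kp, C.Kd                            % compare against the app's 0.32 and 0.3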
So let’s leave our design model now, and head back to the full simulation model and see how
these new gains behave. I’ll go back to the altitude controller and put 0.32 in the proportional
path and leave 0.3 in the derivative path and then head back up to the top level to run the
simulation.
And look at that, with the new gains we’ve changed the behavior of our hover control system.
You’ll notice that the overshoot is gone but based on our linear analysis we would have expected
the drone to still overshoot a little bit. This difference comes from doing our analysis with an imperfect linear model. However, the result is still close to what we designed, so linear
analysis gave us a really good starting point. If we weren’t completely happy with this response,
we could now tweak the gains a little bit to see if we can improve the performance.
The real test, however, is not in a simulation environment but on the real physical hardware. Now, if the hardware behaves exactly like this model, then we know that our Model-Based Design tuning will work for it as well.
But we don’t have to assume: since we can generate code for the Parrot Minidrone from this Simulink model, we can just try out our new gains on the actual hardware.
I mentioned this last time, but it bears repeating. Remember your safety goggles. You can never
be 100% confident that your control law won’t cause the vehicle to go out of control and since
it’s practically a flying lawn mower, that can be dangerous. Alright, here we go. I have the new gains and the flight code loaded onto my minidrone, ready for takeoff.
Well, that didn’t go as planned; the drone struggled to even get off the ground. I’m not 100% sure why yet, but I do know one thing: the hardware doesn’t behave like the model. There is something that the model is missing or has modeled incorrectly for my hardware that gave me the false impression that the control law would work.
the foundation of Model-Based Design. And if you use Model-Based Design, I think you’ll find
that you spend a lot more time creating and validating a model of your system than you will
developing your control law. But that is time well spent, because once you have a good model,
then designing, simulating, and verifying your system becomes so much easier than having to do
all of that with physical hardware.
Some things that definitely aren’t modeled that may be important are how well the ultrasound works at low altitudes, or how the aerodynamics change when flying close to the ground. One problem in my case is that this model was developed for the generic Parrot minidrone, and my specific drone may have parameters that are different: the battery voltage may be low, the mass may be off, the motor torques may be different, and so on.
I would now want to investigate through system identification or other physical tests where this
model differs from reality and make the necessary changes. But, unfortunately, I don’t have the
time in this video to go into a detailed investigation of the model so I’m going to have to leave
that for another time.
OK, I changed my mind, I don’t want to leave this video with a failed experiment. I got to
thinking about the behavior and came up with a possible issue. This feedforward term is
calculated to produce the thrust needed to perfectly cancel the weight of the drone. That way the
PD controller just needs to adjust the thrust slightly up and down to change the altitude. But what
if this term is too low? Put another way, what if the model thinks the weight of the drone is lower, or the thrust produced by the rotors is higher, than it actually is? Then there will be some residual weight that the PD controller needs to handle, and by lowering the proportional gain we’ve reduced its ability to overcome that weight, so the drone will have trouble taking off, like we saw.
So I did a test where I removed the feedback altitude controller and just relied on the gravity
offset term to raise the drone. The stock value with my drone caused it to just sit on the ground
and not take off. I then raised the value by 10% and tried it again, and then by 20%. At 20%, the drone was just barely able to rise off the ground. So, I think this is a more appropriate feedforward term for my hardware.
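As a quick sanity check on the hypothesis, here’s the arithmetic; the numbers are illustrative rather than measured:

% If the feedforward cancels only part of the true weight, the PD loop
% has to lift the residual with what little proportional gain it has.
g = 9.81;
m_model  = 0.068;                 % mass the model assumed, kg (illustrative)
ff_scale = 1.2;                   % the 20% increase found by the ground test
T_ff_old = m_model*g;             % stock feedforward thrust, N
T_ff_new = ff_scale*T_ff_old;     % adjusted feedforward thrust, N
residual = T_ff_new - T_ff_old    % weight the PD loop previously had to lift, N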
I then added the feedback term back in and ran the test one final time and check this out. It
works. Now this was a quick fix to identify the problem. I still need to adjust the model of the
drone to reflect this. And that sounds like a pretty fun project to me. So if you find yourself
trying this at home, I think this will be a good introduction to Model-Based Design and give you
a chance to learn how to take a model that someone else provided to you and tweak it for your
particular circumstances. At the very least, it gives you an excuse to play around with developing
control laws for quadcopters. I hope you find it as exciting as I have.
If you don’t want to miss the next Tech Talk video, don’t forget to subscribe to this channel.
Also, if you want to check out my channel, control system lectures, I cover more control theory
topics there as well. Thanks for watching, I’ll see you next time.