A common problem for some of our customers is that when their products malfunction due to a software bug, solving the issue can be very complicated. Often, the product, e.g. an industrial machine, a cargo ship or similar, is deployed at a site anywhere in the world. The company sends a ”superengineer” to locate the error and fix it – this takes time and resources, often at a high cost. Is there no better solution?
Well, one way could be to develop a Hardware-In-the-Loop (HIL) simulator. In a HIL simulator, the code is run on the target, e.g. a PLC, in real time. Since the code is running in exactly the same way as in a real machine, it expects to receive data from sensors and to send data to actuators (most likely with actuator feedback as well). If the machine hardware is not available, all sensors and actuators need to be modeled in a plant model. This plant model could also include a graphical representation of the machine, which allows the operator to see what happens in the same way as when observing the real machine. The plant model is then deployed to a real-time system – a machine with appropriate IO modules – which communicates with the target. If machine hardware is available, it is possible to interface this as well.
In the HIL simulator, the virtual machine can be configured to run with the exact same settings as when the failure occurred in the real machine. If the machine is modeled properly, the failure will be detected in the simulation and the bug can be fixed. The software update can then be sent to the customer, without ever having had access to the machine. The superengineer is happy, since he or she can focus on delivering the best solution. The machine site is happy, since they can get up and running quicker. The company is happy, since they save money and resources.
How to model the machine depends on how it is being used. For example:
Is the production code autogenerated, e.g. from MATLAB/Simulink, or developed directly in the target environment?
How complex is the plant process? What level of detail does the model require?
How many, and what type of, IO signals are needed?
When using MATLAB/Simulink, two popular options are Speedgoat and dSPACE. Speedgoat is specifically developed to integrate seamlessly with Simulink Real-Time and xPC Target for real-time testing. The systems from dSPACE are modular, with a wide range of IO options, suitable for real-time simulation. Both systems are easy to use with MATLAB/Simulink, and the benefits of each system should be evaluated on a case-by-case basis.
If MATLAB/Simulink is not being used, or if the code is developed in the target environment, e.g. implementing structured text in a PLC vendor system, another option is the systems from National Instruments. National Instruments does support MATLAB/Simulink, as well as other common languages and environments, but these systems are mainly intended to be used with LabVIEW. LabVIEW is an integrated development environment using graphical programming, which also supports creating user interfaces and instrument control.
The scenario described above is also applicable when the machine is not readily available due to testing costs and/or time allocation. The HIL simulator also allows for faster and more extensive testing. For example, it is possible to skip certain machine steps that are not relevant, which saves time, or to simulate settings that would potentially be a risk to the operator if performed in the real machine.
If you have any questions on HIL solutions, or for more information on how Combine can help your company, please contact us.
Model predictive control (MPC) refers to a class of control algorithms that compute a sequence of control moves based on an explicit prediction of outputs within some future horizon. The computed control moves are typically implemented in a receding horizon fashion, meaning only the moves for the current time are implemented and the whole calculation is repeated at the next sample time. In essence, MPC is a feedback control strategy based on repeated calculation of open-loop control trajectories.
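To make the receding-horizon idea concrete, here is a minimal sketch in Python: a finite-horizon problem for a double integrator, re-solved at every sample, with only the first move applied. The model, horizon and weights are all assumed for illustration, and the cost is kept unconstrained so the horizon problem has a closed-form solution – a real MPC would solve a constrained QP at this step instead.

```python
import numpy as np

# Receding-horizon sketch for a double integrator (all numbers assumed).
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator, dt = 0.1 s
B = np.array([[0.005], [0.1]])
N = 20                                    # prediction horizon
Q, R = np.diag([10.0, 1.0]), 0.1          # state and input weights

def predict_matrices():
    """Stack the predictions: [x_1; ...; x_N] = F x_0 + G [u_0; ...; u_{N-1}]."""
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((2 * N, N))
    for i in range(N):
        for j in range(i + 1):
            G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B
    return F, G

F, G = predict_matrices()
H = G.T @ np.kron(np.eye(N), Q) @ G + R * np.eye(N)

def mpc_step(x):
    """Minimise the horizon cost, then apply only the first move."""
    f = G.T @ np.kron(np.eye(N), Q) @ F @ x
    u = np.linalg.solve(H, -f)    # minimiser of 0.5*u'Hu + f'u
    return u[0]                   # receding horizon: first move only

x = np.array([1.0, 0.0])          # start 1 m from the origin, at rest
for _ in range(100):              # the whole calculation repeats each sample
    x = A @ x + B.flatten() * mpc_step(x)
print(x)                          # state driven close to the origin
```

Even though each optimisation produces a whole open-loop trajectory, only the first move survives – that repeated recomputation is what turns the open-loop solutions into feedback.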
In the process industries, serious applications and research on the subject began in the late 1970s, providing effective solutions to difficult process control problems. Owing to its unique ability to handle process interactions and constraints in a unified manner, MPC became popular.
So why is MPC popular? MPC has some advantages:
+ Straightforward formulation, based on well-understood concepts
+ Explicitly handles constraints
+ Explicit use of a model
+ Well-understood tuning parameters: prediction horizon and optimization problem setup
+ Development time much shorter than for competing advanced control methods
+ Easier to maintain: changing the model or specs does not require a complete redesign, and can sometimes be done on the fly.
There are differences between e.g. LQG and MPC: the latter can handle process interactions and constraints within its framework. One interesting feature is the ”funnel” technique. Industrial MPC controllers use four basic options to specify future CV behavior: a set-point, a zone, a reference trajectory or a funnel. In the latter, the reference trajectory is optimized, see here.
What is explicit MPC?
A traditional model predictive controller solves a quadratic program (QP) at each control interval to determine the optimal manipulated variable (MV) adjustments. These adjustments are the solution of the implicit nonlinear function u=f(x).
Explicit model predictive control addresses the main drawback of MPC, namely the need to solve the mathematical program online to compute the control action. This computational constraint results in expensive hardware or limited bandwidth in the control loop.
Explicit model predictive control allows one to solve the optimization problem off-line for a given range of operating conditions of interest. In practice, the control function becomes a table lookup of linear gains.
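The table-lookup idea can be illustrated with a toy one-dimensional example. The regions and the gain below are invented for illustration, not derived from an actual multi-parametric QP – the point is only that the online controller reduces to finding the active region and evaluating its affine law.

```python
# Explicit-MPC flavour: the optimisation is "solved" offline into regions
# with affine laws; online control is just a lookup. Regions and the -0.8
# gain are illustrative, not the output of a real parametric QP solver.
regions = [
    (lambda x: x < -1.0,         lambda x: 1.0),       # input saturated high
    (lambda x: -1.0 <= x <= 1.0, lambda x: -0.8 * x),  # unconstrained gain
    (lambda x: x > 1.0,          lambda x: -1.0),      # input saturated low
]

def explicit_mpc(x):
    """Online evaluation: find the active region, apply its affine law."""
    for in_region, law in regions:
        if in_region(x):
            return law(x)

print(explicit_mpc(0.5))   # inside the unconstrained region
print(explicit_mpc(3.0))   # control move saturated at -1.0
```

In a real explicit MPC the state space is partitioned into polyhedral regions computed offline, but the online cost is the same: a search plus one affine evaluation, which is what makes cheap hardware feasible.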
This is a good overview of the field, and a course is available here.
As most of you probably know, Combine has hosted several master thesis works during the last few years, e.g. the hexacopter and hexapod in Lund, the balancing cube in Gothenburg and the ROV in Linköping. For more info on these, and other, thesis works, go to www.combine.se/projects/. For the students, this is a great opportunity to use what they have learned during their studies to complete a highly challenging project, with the assistance of some of the best engineers in the field, while getting an insight into the working experience at Combine. For Combine, this is a chance to interact with talented students, and to test and introduce new concepts which our clients may benefit from. A real win-win situation! However, not only do we host master thesis works, we also collaborate with students in other projects. One such project is currently being conducted in Lund with students from the Department of Automatic Control at LTH.
The project uses the hexapod as one of two independent robot systems. The other system is a small quadcopter, Parrot BEBOP2, equipped with a full HD camera. The main objective is to enable the quadcopter to land in uneven terrain. To achieve this, the quadcopter acts as master, sending a landing position to the hexapod. The hexapod should move to the landing position and switch to balancing mode, making sure that the quadcopter has a horizontal platform when landing. This is a small-scale, but complex, project with many possible applications. If, for example, the hexapod were equipped with an inductive charging station, the robots could comprise a fully autonomous high-range unit. The balancing of a landing platform could also be applied when landing a helicopter in rough conditions, or as part of a reusable launch system, such as the SpaceX Falcon.
Within this project, the students face several challenges. For the quadcopter, these include position control and position estimation. The position control must take into account both altitude – to maintain a set altitude while flying and to land on the hexapod – and positioning in the horizontal plane, i.e. to reach the landing site where the hexapod is positioned. The position estimation algorithms must estimate the quadcopter position using only the onboard camera, in combination with reference markers in the room and on the hexapod, since no external measurement system is available (as is often the case). A simple pinhole camera model is straightforward enough to solve, but since the quadcopter is free to rotate in yaw, pitch and roll, the complexity increases. When adding image noise and other disturbances, as well as timing requirements, it is evident that it will take much effort to develop efficient algorithms for the position estimation. For the hexapod, the main challenges are controlling the velocity, including direction, to reach the target position, and efficient handling of the balancing mode once the target position has been reached. Finally, the communication between the robots must be solved in an efficient way.
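As a taste of what the students are up against, here is a minimal pinhole projection in Python – the forward model a marker-based position estimator has to invert. The camera intrinsics are assumed values, not the BEBOP2's actual calibration.

```python
import numpy as np

# Pinhole projection sketch; intrinsics (f, cx, cy) are assumed numbers.
f, cx, cy = 700.0, 320.0, 240.0   # focal length and principal point [px]
K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])

def project(point_world, R, t):
    """Rotate/translate a 3-D world point into the camera frame, project it."""
    p_cam = R @ point_world + t   # world -> camera coordinates
    uvw = K @ p_cam               # perspective projection
    return uvw[:2] / uvw[2]       # normalise by depth -> pixel coordinates

# A marker 2 m straight ahead of a level camera lands on the image centre:
uv = project(np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3))
print(uv)   # -> [320. 240.]
```

The estimation problem runs this in reverse: given pixel coordinates of known markers, solve for the rotation and translation – and that inversion is exactly where the free yaw/pitch/roll rotation makes life difficult.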
The students are adopting the model-based design (MBD) workflow, which I am sure will help them immensely. Despite initial delivery problems with the hardware, they are still making progress, since they have been able to develop algorithms and test them in MATLAB/Simulink, applying noise and filters to achieve good performance. Even when the hardware is available, the development will benefit from MBD. For example, testing experimental positioning algorithms in the quadcopter for the first time may prove to be an expensive and physically painful experience. Of course, the code is automatically generated, to reduce development time and to ensure code quality.
The project is not due until January next year and the plan is to revisit the project in another post. I for one am very excited to see the results!
A few weeks back, Mathworks hosted their annual EMAB (European MATLAB Advisory Board). The EMAB is a set of seminars stretching over three days, with the purpose of presenting news from the current MATLAB release, presenting planned features of future releases, and gathering feedback from customers on issues and/or desired functionality. Intended mainly for the industry, Combine is an exception, being the only consulting company invited to participate (at the Nordic site, at least). This post is not intended as marketing for Mathworks products, but since most of you use MATLAB/Simulink at least to some extent, I thought this might be of interest.
Much of the contents from the seminars are confidential and cannot be published in this context, however, I would like to highlight a few features from the current release:
Initialize/Terminate Function block
Just-in-Time (JIT) Acceleration builds
The Initialize/Terminate Function block is a new feature, allowing the user to specify model behaviour at initialization and/or termination. A corresponding built-in feature has not been available before, often resulting in quite complex models for what should be an easy task. Also, these workaround solutions result in two separate initialize/terminate functions in the generated code (implicit and explicit).
The new blocks contain an Event Listener and a State Reader/Writer block that links (not only in a code context, but also with a hyperlink for improved model readability) to states in the model. The event listener can in fact be set to Reset as well as Initialize and Terminate, allowing the user to create proper reset behaviours. There are several benefits to using the new built-in function blocks. From a modeling perspective, the work needed to create initialize/reset/terminate functions is reduced, while the model readability is improved due to the reduced wiring and number of blocks necessary. From a code perspective, the generated code for the blocks is included in the initialize()/reset()/terminate() methods. An added benefit appears when working with components, since the code is aggregated automatically. This means that even though the model may contain multiple function blocks distributed over several sublevels, one single initialize()/reset()/terminate() function is created, within which the code for all sublevels is ordered (lowest level of hierarchy first). The result is more efficient code that is much easier to read, in my opinion.
For a brief introduction to the new blocks, have a look at this video.
Just-in-Time (JIT) Acceleration
Just-in-Time (JIT) compilation has been used in MATLAB code for more than a decade (though, according to Mathworks, greatly improved between the R2015a/R2015b releases) and was introduced in Simulink in release R2015a. With JIT compilation, rather than generating C code or MEX files, an execution engine is generated in memory. This allows for faster simulation startups and rebuilds, while removing the need for a C compiler. New for R2016b is the use of JIT acceleration in the Accelerator simulation mode (not only in Normal mode, as in the last few releases). This will in fact be the default setting when using accelerator mode, so if you upgrade and require the classic accelerator mode, be sure to revert (set_param(0, 'GlobalUseClassicAccelMode', 'on');).
Classic Accelerator mode.
JIT Accelerator mode.
When using JIT for the accelerator mode, execution speed is maintained compared to the classic accelerator mode, but the initialization time is reduced. Also, with the classic accelerator mode, it is possible to monitor signals during simulation using the Signal & Scope Manager and adding test points to signals. Unfortunately, this requires the model to be recompiled when adding test points. With the new JIT accelerator mode, test points can be added without recompilation.
For more information, check out the introduction to new features or the full release notes list. Also, if you have any questions at all, please let me know. I am of course very interested if there are features you are not happy with, features you are missing, or comments in general. EMAB is our best chance to have an impact on future releases of MATLAB/Simulink – let’s make the most of this opportunity!
In a control system, your control algorithm utilizes some states of the system to generate the control input which satisfies your desired output. If you cannot measure all the states in the system, you need an ”observer” (or more precisely a state observer) to estimate the unmeasured states from the outputs. The Kalman filter is one class of observer.
An observer-based control structure could be useful in the case of anti-windup or bumpless transfer, e.g. switching between manual (open loop) and automatic (closed loop) control mode. A more complex control system often uses several control modes, e.g. failsafe or limp-home mode. The control strategies and variables may vary, while the observer remains the same.
How to design an observer in practice? One approach is the High Gain observer.
The High Gain observer is a fast nonlinear (or linear) full-order observer with a high observer gain chosen via pole placement. To protect the system from the destabilizing effect of peaking (a huge estimation error during the short period right after the initial time, or when the output changes abruptly), High Gain observers, as well as any continuous differentiating schemes, have to be followed by saturation of the control output.
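A small sketch of the idea, assuming a double integrator with only the position measured. The gains a1/ε and a2/ε² come from pole placement: with a1 = 2, a2 = 1, both observer poles sit at −1/ε, so a smaller ε gives faster convergence (at the cost of a larger peaking transient). The value of ε and the simulation setup are my own illustrative choices.

```python
import numpy as np

# High-gain observer sketch for a double integrator; only position measured.
dt, eps = 0.001, 0.05
a1, a2 = 2.0, 1.0   # places both observer poles at -1/eps

def simulate(T=2.0, u=0.0):
    x = np.array([1.0, 0.0])    # true state: position 1 m, at rest
    xh = np.array([0.0, 0.0])   # observer starts with no knowledge
    for _ in range(int(T / dt)):
        e = x[0] - xh[0]        # output error (position only)
        # Observer update (Euler step) with high gains a1/eps, a2/eps^2:
        xh = xh + dt * np.array([xh[1] + (a1 / eps) * e,
                                 u + (a2 / eps**2) * e])
        x = x + dt * np.array([x[1], u])   # true plant
    return x, xh

x, xh = simulate()
print(np.abs(x - xh))   # estimation error after the peaking transient
```

Note that the velocity estimate briefly peaks far beyond the true value right after start – this is exactly the transient that motivates saturating the control output.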
See this article for an insightful analysis of the topic.
The Tesla Model S can go from 0 to 60 mph (96 km/h) in 2.5 seconds, in what the company calls ”Ludicrous mode”, Tesla said in a statement. The only commercial cars on the planet that can beat the Tesla Model S, the LaFerrari and the Porsche 918 Spyder, each cost about $1 million and are ”tiny” two-seater roadsters. Is there a limit to how fast it is possible to accelerate?
Traction is what enables a car to accelerate. When the driver pushes the throttle, the engine revs up and puts torque onto the wheels in order to speed up. A tyre works best under a very slight wheel spin. If there is too much wheel spin, the tyres lose traction and acceleration is greatly reduced.
Of course you need anti-spin control, but this is a clever way to figure out what the limit is. What is the maximum braking power with an ABS? See this video for the answer.
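To illustrate the traction idea numerically, here is a toy slip-ratio calculation. The friction curve and its peak values are invented numbers, but the shape is the point: grip rises to a maximum at a small positive slip and then falls off when the wheel spins heavily.

```python
# Toy slip-ratio illustration; the friction curve numbers are invented.
def slip_ratio(wheel_speed, vehicle_speed):
    """Longitudinal slip: 0 = pure rolling, 1 = wheel spinning in place."""
    return (wheel_speed - vehicle_speed) / max(wheel_speed, 1e-6)

def traction_coefficient(slip, mu_peak=1.1, slip_peak=0.15):
    """Rises linearly to mu_peak at slip_peak, then decays past the peak."""
    if slip <= slip_peak:
        return mu_peak * slip / slip_peak
    return mu_peak * slip_peak / slip

print(traction_coefficient(slip_ratio(23.5, 20.0)))  # slight spin: near peak grip
print(traction_coefficient(slip_ratio(40.0, 20.0)))  # heavy spin: grip reduced
```

An anti-spin controller in essence tries to keep the operating point near that peak, just as ABS does on the braking side of the same curve.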
An engineer's perspective of an electric car
Let us assume that you have the right to use a Tesla Model S to travel a very long distance. What is the optimal velocity if the sequence charging, driving is repeated forever? Further, let us assume that only drag is considered, i.e. the rolling resistance is ignored. Then the optimization problem could be described as:

minimize T(v) = 1/v + (ρ · c_dA · v²) / (2P)

where v is the velocity, c_dA the drag area (drag coefficient times frontal area), ρ the air density and P the charging power.
The ”performance parameter” is the ratio of the charging power and the drag coefficient.
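Setting the derivative of the objective to zero gives v_opt = (P/(ρ·c_dA))^(1/3), which indeed only depends on the ratio P/c_dA. A quick numerical check, where the 120 kW charging power is an assumed figure:

```python
# Optimum cruise speed when only drag and charging time matter:
# minimise T(v) = 1/v + rho*cdA*v^2/(2P), giving v_opt = (P/(rho*cdA))^(1/3).
rho = 1.225   # air density at sea level [kg/m^3]
cdA = 0.576   # Tesla Model S drag area [m^2] (figure from the text)
P = 120e3     # assumed charging power [W]

v_opt = (P / (rho * cdA)) ** (1.0 / 3.0)
print(v_opt, v_opt * 3.6)   # [m/s] and [km/h]
```

With these assumptions the optimum lands around 55 m/s (roughly 200 km/h) – charging is so much slower than driving that it pays to drive fast despite the cubic growth of drag power.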
An economist's perspective of an electric car
Let us assume that you want to find the optimal velocity regarding energy consumption vs. traveling time. According to ”Trafikverket”, one hour was worth 108 SEK in 2010. Let us further assume that only drag is considered as the load on the vehicle. The air density is taken at sea level according to the International Standard Atmosphere, the cost of electricity is 1 SEK/kWh, and we compare the Tesla Model S with a Volvo XC90 2015 Diesel with 35% efficiency in the internal combustion engine. The prices for diesel in Göteborg and the US are 12.32 and 5.40 SEK/litre respectively. The difference is in the c_dA parameter (0.576 and 0.92 for the Model S and the XC90).
The optimum velocity is very different in these circumstances. In the case of an electricity price of 5.5 SEK/kWh, the price curves of the Model S and the XC90 in Göteborg are the same.
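The economist's trade-off can be sketched as a cost per metre: the value of time (which favours driving fast) against the drag energy cost (which favours driving slowly). The prices are those quoted above; the minimization itself, and the choice to ignore everything but drag, are the stated simplifications.

```python
# Toy cost-per-metre trade-off: value of time vs. drag-energy cost.
rho = 1.225               # air density at sea level [kg/m^3]
time_value = 108 / 3600   # value of time: 108 SEK/hour in SEK/s

def cost_per_metre(v, cdA, price_per_J):
    """Time cost plus drag energy cost for one metre at speed v [m/s]."""
    return time_value / v + 0.5 * rho * cdA * v**2 * price_per_J

def optimum(cdA, price_per_J):
    # dC/dv = 0  =>  v^3 = time_value / (rho * cdA * price_per_J)
    return (time_value / (rho * cdA * price_per_J)) ** (1.0 / 3.0)

elec = 1.0 / 3.6e6        # 1 SEK/kWh expressed in SEK/J
v = optimum(0.576, elec)  # Model S drag area from the text
print(v * 3.6)            # optimal speed in km/h
```

Swapping in the XC90's c_dA and a diesel energy price shifts the optimum, which is exactly why the two curves can coincide for a particular electricity price.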
Recently, CCS held its annual autumn kick-off. This year, the kick-off was held in the penthouse at Avalon Hotel in Gothenburg. This year's inspirational speech was given by Annelie Pompe, a professional adventurer, who among other things talked about freediving. Therefore, this seems like an excellent opportunity for a deep dive (pun very much intended) into the very exciting topic of diving!
While the allures of freediving may be obvious after listening to Annelie, there are of course potential risks as well. Apart from shallow-water blackouts and barotrauma, there is also the risk of decompression sickness (DCS). DCS has traditionally been associated with scuba diving using compressed gas; however, studies have shown that DCS syndromes may manifest after repeated deep breath-hold dives. While Annelie talked about freediving, the focus of this newsletter will be more on diving using compressed gas.
So, why talk about DCS in a MBD context? Well, the effects behind DCS can be modeled.
Partial Pressure and The Bends (no, not the Radiohead album…)
As you all know, the air we breathe contains approximately 78% nitrogen and 21% oxygen. In decompression theory, these ratios are often expressed as partial pressures instead of percent. At surface level, where the pressure is 1 bar (1 atm), the partial pressures would be 0.78 bar for nitrogen and 0.21 bar for oxygen. However, while the air contains several gases, only oxygen is metabolized in the body. Nitrogen and other gases are not metabolized and are called inert. When breathing, the inert gases are dissolved in the blood by gas exchange in the lungs. The blood is transported to the rest of the body, where the gas is exchanged to the tissue. This exchange continues until the partial pressure of the dissolved gas equals the partial pressure of the gas in the lungs, at which point the tissue becomes saturated. The rate of saturation varies between different types of tissue, e.g. the nervous system saturates fast, while fat and bones saturate slowly.
While at sea level, where the nitrogen partial pressure is 0.78 bar, all tissue in the body is saturated at 0.78 bar (actually slightly less, but we will get to that). Now, if a diver should dive to 30 meters, where the ambient pressure is 4 bar, and stay there for a very long time (the slowest tissue saturates in a few days), all body tissue would be saturated with nitrogen at a partial pressure of 3.12 bar. This is not an issue, provided that the diver has enough oxygen to remain at depth indefinitely, of course. While diving is fun, let us assume for the sake of argument that most divers would like to ascend to the surface at some point. If the diver could ascend from 30 meters to the surface instantaneously, the partial pressure in the tissue would still be 3.12 bar, while the partial pressure in the air would only be 0.78 bar – the tissue is said to be supersaturated. Thus, nitrogen would be released from the tissue to the blood stream in order to equalize the pressure difference, forming micro bubbles which are transported to the lungs and ventilated out of the body. However, just as in a soda bottle, if the gas is released too rapidly, these micro bubbles may grow large enough to get trapped and block blood flow. A small blockage in a joint may not cause any major inconvenience; the risk is of course that bubbles may block the heart or vessels in the brain. This condition is referred to as decompression sickness (DCS) or the bends (since early symptoms include stiff joints).
Supersaturation and the resulting pressure differential are in fact good – they are required to vent out gases. But what is ”too rapidly” when talking about the gas release? This is exactly what decompression theory addresses.
Haldanean Model and beyond
The first to present a decompression theory was John Scott Haldane, in 1908. He proposed the use of body ”compartments”: a mathematical model describing the partial pressure in a hypothetical type of tissue. A compartment is characterized by its half-time, i.e. the time it takes for the tissue to reach half of its saturation level. This compartment model was developed further over several decades and was the dominating model until the 1960s, when it was enhanced with more complex bubble models.
The rate of change of the partial pressure of an inert gas in tissue is proportional to the difference between the partial pressure of the gas in the lungs and the dissolved gas in the tissue, i.e.

dP/dt = k · (P_alv − P)

where P is the partial pressure in the tissue, P_alv is the partial pressure in the lungs (alveoli) and k is a tissue-dependent constant. The constant k can be expressed in terms of the compartment half-time as k = ln(2)/t_half. Assuming constant partial pressure in the lungs (i.e. when the diver remains at a fixed depth), the solution can be expressed as:

P(t) = P_alv + (P_0 − P_alv) · e^(−kt)

where subscript 0 indicates the pressure at t = 0. This is called the Haldane equation. Since instantaneous descents and ascents are uncommon, it is reasonable to extend the equation to include a linear variation in lung pressure,

P_alv(t) = P_alv,0 + R · t

where R is the rate of change of the partial pressure of the gas in the lungs. This addition results in:

dP/dt = k · (P_alv,0 + R · t − P)

with the solution:

P(t) = P_alv,0 + R · (t − 1/k) − (P_alv,0 − R/k − P_0) · e^(−kt)
This extension of the Haldane equation is called the Schreiner equation. But why stop here? When introducing the partial pressure of nitrogen above, 78% nitrogen content in the air gave 0.78 bar partial pressure, assuming 1 bar ambient pressure. Since the diver breathes air at the same pressure as the ambient pressure, i.e. 1 bar at the surface and an additional 1 bar per 10 meters of depth, it would be easy to assume that the pressure in the alveoli equals the ambient pressure. However, a few factors affect the alveolar pressure, in particular:
water vapor pressure, due to humidification in the upper airways, reducing the alveolar pressure by 0.0627 bar (based on water vapor at 37 °C)
oxygen/carbon dioxide exchange, where the ventilation of carbon dioxide reduces the pressure by 0.0543 bar (corresponding to the partial pressure of carbon dioxide in the blood, since the carbon dioxide content of air is negligible)
Thus, the pressure in the alveoli can be expressed as:

P_alv = P_amb − P_H2O − P_CO2

where P_amb is the ambient pressure, P_H2O the water vapor pressure and P_CO2 the carbon dioxide partial pressure.
For the oxygen/carbon dioxide gas exchange, the respiratory quotient, RQ, can be defined as RQ = V_CO2/V_O2, i.e. the ratio of carbon dioxide produced to oxygen consumed. The RQ typically lies in the interval 0.7–1.0 (with 0.9 being used by the US Navy), depending on the level of exertion (and of course physical health and nutrition). Introducing RQ in the equation above, the alveolar partial pressure of the inert gas can be expressed as:

P_alv = F_gas · (P_amb − P_H2O + P_CO2 · (1 − RQ)/RQ)

where F_gas is the fraction of the inert gas in the breathing mix.
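As a quick numerical check of the alveolar-pressure correction, using the water vapour, carbon dioxide and RQ figures quoted above:

```python
# Numerical check of the alveolar-pressure correction; the water vapour,
# carbon dioxide and RQ figures are the ones quoted in the text.
def alveolar_pressure(p_amb, f_gas=0.78, p_h2o=0.0627, p_co2=0.0543, rq=0.9):
    """Alveolar inert-gas partial pressure [bar] at ambient pressure p_amb."""
    return f_gas * (p_amb - p_h2o + (1.0 - rq) / rq * p_co2)

print(alveolar_pressure(1.0))   # at the surface: slightly below 0.78 bar
```

At the surface this gives roughly 0.74 bar rather than 0.78 bar – which is the ”actually slightly less” remark from earlier made precise.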
So, how does this look for an actual dive profile?
Assume that a diver starts at the surface, without any residual nitrogen stored in the body tissue, i.e. saturated at sea-level partial pressure. The diver makes a fast descent to 30 meters (i.e. close to instantaneous) and remains at the same depth for 20 minutes. The diver, overly cautious, then makes a very slow ascent of 2 meters per minute (an ambient pressure change of -0.2 bar/min) before reaching the surface. Focusing on two types of tissue, with half-times of 5 and 40 min respectively, the equations above result in a simulation model for the dive:
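A sketch of such a simulation in Python, applying the Schreiner equation to both the constant-depth phase (R = 0) and the ascent. For simplicity the alveolar corrections are ignored here, i.e. P_alv = 0.78 × ambient pressure – that simplification, and the instantaneous descent, are my own modeling choices for the sketch.

```python
import math

# Dive-profile sketch using the Schreiner equation for both phases.
# Alveolar corrections ignored: P_alv = 0.78 * ambient pressure (assumed).
F_N2 = 0.78
P_SURFACE = 1.0   # [bar]

def schreiner(p0, p_alv0, R, k, t):
    """Tissue pressure after t [min], with P_alv(t) = p_alv0 + R*t."""
    return p_alv0 + R * (t - 1.0 / k) - (p_alv0 - R / k - p0) * math.exp(-k * t)

def simulate(half_times, bottom_time=20.0, depth=30.0, ascent_rate=2.0):
    """Tissue N2 pressure [bar] at surfacing, per compartment half-time."""
    p_amb_bottom = P_SURFACE + depth / 10.0   # 4 bar at 30 m
    ascent_time = depth / ascent_rate         # 15 min at 2 m/min
    R_ascent = -F_N2 * ascent_rate / 10.0     # alveolar N2 rate [bar/min]
    results = {}
    for th in half_times:
        k = math.log(2.0) / th
        p = F_N2 * P_SURFACE                  # saturated at the surface
        # Instantaneous descent, then constant depth (R = 0):
        p = schreiner(p, F_N2 * p_amb_bottom, 0.0, k, bottom_time)
        # Slow ascent with linearly falling alveolar pressure:
        p = schreiner(p, F_N2 * p_amb_bottom, R_ascent, k, ascent_time)
        results[th] = p
    return results

print(simulate([5.0, 40.0]))   # fast vs. slow tissue at surfacing
```

Both compartments surface well above the 0.78 bar equilibrium – supersaturated, as expected – with the fast 5-min tissue slightly higher than the slow 40-min one for this particular profile.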
That was quite a few expressions, but how do the results look for this dive profile?
As can be seen in the plot, the pressure in the compartments increases and would in time reach steady state. At t = 20 min, the nitrogen starts to decompress as the diver ascends, a process that continues during the post-dive part. Still, this only provides a model for calculating the partial pressure in the tissue. What about ”too rapidly”, which was the main question? When is the pressure gradient too large, with a risk of DCS? Well, to make a (very) long story (very) short, Haldane considered a gradient of 2 to be reasonable, i.e. the body can handle a partial pressure of dissolved nitrogen in the tissue that is twice as large as the partial pressure in the alveoli. This ratio actually worked pretty well, but not well enough (especially not for deeper dives). In 1965, Robert D. Workman introduced ”M-values”, which are values for the maximum level of supersaturation that avoid micro bubbles forming:
M = M_0 + ΔM · d

where M (bar) is the maximum level of supersaturation for said compartment, M_0 (bar) is the maximum allowed partial pressure for said compartment at the surface, ΔM (bar/m) is the M-value rate of change and d (m) is the depth. So, how to find the M-values? Well, unfortunately, the only possibility was to test: to test different rates of ascent and see when DCS manifested. The measurements improved greatly with the introduction of Doppler measurements, allowing researchers to detect bubbles before the divers showed symptoms. Several sets of values have been developed by several research teams, using various numbers of compartments, and some of these still serve as the basis for dive computers. As mentioned previously, this is a fairly simple model, and today much more complex algorithms have been developed, for example RGBM (used in Suunto dive computers) and VPM – feel free to have a look at these on your own!
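A toy illustration of how a dive computer would use Workman's linear limit. The M_0 and ΔM numbers below are placeholders chosen for demonstration, not a published table:

```python
# Toy M-value check (Workman's linear form M = M0 + dM*d). The numbers
# below are placeholders for illustration, not a published table.
compartments = {
    # half-time [min]: (M0 [bar], dM [bar/m])
    5.0:  (3.0, 0.08),
    40.0: (1.6, 0.04),
}

def within_limit(p_tissue, half_time, depth):
    """True if the tissue pressure stays below the M-value at this depth."""
    m0, dm = compartments[half_time]
    return p_tissue < m0 + dm * depth

# A 40-min tissue at 1.7 bar violates its surface limit (1.6 bar) but is
# fine at 10 m - the diver has a decompression ceiling to respect:
print(within_limit(1.7, 40.0, 0.0))    # False
print(within_limit(1.7, 40.0, 10.0))   # True
```

The most restrictive compartment at any moment defines the decompression ceiling: the shallowest depth the diver may ascend to without any tissue exceeding its M-value.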
While on the subject, of course it is necessary to have a look at dive technology! As all of you know, only a small fraction of the oxygen in each breath is metabolized in the body. Most is exhaled, which is for instance why CPR works. This means that during a dive, the diver exhales the majority of the oxygen out into the water. Perhaps the fish appreciate it, but it seems like a waste, does it not? What if it were possible to re-circulate the used air? Well, this alone would cause hypercapnia, due to the exhaled carbon dioxide, and hypoxemia, due to the decreasing oxygen levels. Ah, but what if the carbon dioxide was removed from the exhaled air and oxygen was added to compensate for the metabolized oxygen? That is exactly the function of a rebreather!
In a rebreather, the exhaled air passes through a scrubber, reducing the carbon dioxide levels, before oxygen is added from a canister. Rebreathers have several benefits, for example:
increased bottom times, since oxygen-enriched air reduces the nitrogen partial pressure
silent diving without bubbles
warm air (in contrast to scuba diving, where the air is cold and dry, chilling the diver)
Of course, there are risks and downsides with using rebreathers as well. They are very expensive, and should the scrubber fail, carbon dioxide levels would rise, with a risk of death. Another risk is oxygen toxicity. Wait a second, oxygen is not toxic? Well, yes it is – at partial pressures above approximately 1.5–1.6 bar. Thus, diving on pure oxygen would be toxic, and possibly lethal, below 6 meters.
The simplest form of rebreather is purely manual, requiring the diver to monitor the partial pressure of oxygen and add more when needed. However, lately more technically advanced rebreathers have been developed, using electronics and microcontrollers to control the partial pressure. Some of the most advanced models are produced in Gothenburg by Poseidon. These models are fully automatic, including:
automatic pre-dive checks of all sensors
continuous alarm handling, sensor analysis and validation
full redundancy, i.e. should any part of the system fail, the diver must be able to finish the dive safely
Poseidon’s first model, the ”MK6 Rebreather”, uses a network of ATmega microprocessors to control the partial pressure of the oxygen in the loop. This is a novel solution, since most rebreathers use redundant sets of electronics. All nodes communicate through the network and, in case of a failure, other nodes in the network may warn the diver.
All software for the rebreather was developed using state machines, in software called visualSTATE and Embedded Workbench. The pO2 controller uses two redundant O2 sensors to monitor the partial pressure and a solenoid to add more oxygen when needed. The setpoint for the controller depends on:
depth of the diver
setpoint configured by diver
decompression ceiling, i.e. the minimum depth of the diver in order to avoid DCS
The control scenario is complicated further by the fact that the partial pressure of each component in a gas mixture depends on the component fraction and the ambient pressure. This causes problems especially at shallow depths, where large pressure variations occur frequently. Often, too much oxygen is injected, resulting in excessive positive buoyancy.
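A hedged sketch of what such a setpoint/solenoid loop might look like. The 1.4 bar limit, the 0.9 × ambient cap, the deco-ceiling bump and the three-sensor median vote are all assumptions for illustration – not Poseidon's actual logic, which runs as state machines across the sensor network described above.

```python
# Illustrative rebreather pO2 loop; all thresholds are assumed values.
PO2_MAX = 1.4   # conservative upper pO2 limit [bar]

def setpoint(depth_m, diver_setpoint, deco_ceiling_m):
    """Choose a pO2 setpoint from depth, diver config and deco ceiling."""
    p_amb = 1.0 + depth_m / 10.0          # ambient pressure [bar]
    sp = diver_setpoint
    if deco_ceiling_m > 0:
        sp = min(sp + 0.1, PO2_MAX)       # richer mix speeds decompression
    return min(sp, 0.9 * p_amb, PO2_MAX)  # pO2 cannot exceed ambient pressure

def solenoid_on(po2_sensors, depth_m, diver_setpoint, deco_ceiling_m):
    """Inject oxygen when the validated reading is below the setpoint."""
    po2 = sorted(po2_sensors)[len(po2_sensors) // 2]   # median as crude vote
    return po2 < setpoint(depth_m, diver_setpoint, deco_ceiling_m)

print(solenoid_on([1.0, 1.05, 0.4], 20.0, 1.2, 0.0))   # failed sensor outvoted
```

Note how the cap tied to ambient pressure bites at shallow depth: near the surface the achievable setpoint drops below the diver's configured value, which is exactly where the over-injection and buoyancy problems described above appear.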
For more information on the Poseidon rebreather systems, have a look at their website.
A final word…
Using the equations derived in the modeling section and the knowledge from the rebreather section, it should be possible to simulate the full system. Depending on the level of complexity of the model chosen above, the controller could be tested thoroughly before deploying the code and letting divers test the equipment. This is one of those test cases where ”destructive testing” would probably be ”frowned upon”…
Hopefully this newsletter has provided you all with some small insight into the world of diving – a different, but very exciting, application of Model-Based Design!