
Controlled variable

The universe of discourse of the output variable (controlled variable) for both the type I and type II fuzzy systems has five membership functions: large negative (NG), small negative (NP), zero (Z), small positive (PP), and large positive (PG). Constant (singleton) functions are used, each assigning a single value; for the implemented system this value corresponds to the PWM signal, through which the control action on the solenoid valve is exerted. Table 1 and Table 2 record the parameters that delimit each function for the fuzzy sets (a sketch of both parameterizations is given after Table 2).

Table 1

Parameters of type I fuzzy output functions


CONTROL
FUNCTION    VALUE
NG          0
NP          30
Z           82.1
PP          150
PG          255

Source: Prepared by the authors

Table 2

Parameters of type II fuzzy output functions


CONTROL
FUNCTION    TYPE OF FUNCTION    PARAMETERS
NG          SHARP               0
NP          INTERVAL            UPPER 30, LOWER 10
Z           SHARP               82.1
PP          INTERVAL            UPPER 240, LOWER 150
PG          SHARP               255

Source: Prepared by the authors
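To make the two output parameterizations concrete, here is a minimal Python sketch (an illustration only; the authors' implementation is in Simulink/Matlab) encoding the crisp singletons of Table 1 and the sharp/interval outputs of Table 2:

    # Hypothetical encoding of the output fuzzy sets; the authors'
    # implementation uses Simulink/Matlab, not Python.

    # Type I outputs (Table 1): crisp singletons, one PWM value per term.
    TYPE1_OUTPUTS = {
        "NG": 0.0,
        "NP": 30.0,
        "Z": 82.1,
        "PP": 150.0,
        "PG": 255.0,
    }

    # Type II outputs (Table 2): SHARP terms collapse to a point, while
    # INTERVAL terms span [lower, upper], giving the set a footprint of
    # uncertainty around the type I singleton.
    TYPE2_OUTPUTS = {
        "NG": (0.0, 0.0),       # sharp
        "NP": (10.0, 30.0),     # interval: lower 10, upper 30
        "Z": (82.1, 82.1),      # sharp
        "PP": (150.0, 240.0),   # interval: lower 150, upper 240
        "PG": (255.0, 255.0),   # sharp
    }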

Rule base

Table 3 lists the rules, which were designed so that the process does not present overshoot, the response is fast, and the control action is maintained in steady state.

Table 3

Rules for type I and type II fuzzy control systems


IF (ERROR)    AND (ERROR)    THEN
PG            ZE             PG
PG            NG             PG
PG            PG             PG
PP            ZE             PP
PP            NG             PG
PP            PG             NP
ZE            ZE             ZE
ZE            NG             PP
ZE            PG             NP
NP            ZE             NP
NP            NG             NP
NP            PG             NG
NG            ZE             NG
NG            NG             NG
NG            PG             NG

Source: Prepared by the authors
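The sketch below evaluates this rule base with min as the AND operator and a Sugeno-style weighted average over the Table 1 singletons. It assumes the second antecedent is the change of error (the table header repeats "(ERROR)") and that the input membership degrees are supplied externally, since the input sets are not detailed in this excerpt:

    # Rule base from Table 3: (error, change of error) -> output term.
    RULES = {
        ("PG", "ZE"): "PG", ("PG", "NG"): "PG", ("PG", "PG"): "PG",
        ("PP", "ZE"): "PP", ("PP", "NG"): "PG", ("PP", "PG"): "NP",
        ("ZE", "ZE"): "ZE", ("ZE", "NG"): "PP", ("ZE", "PG"): "NP",
        ("NP", "ZE"): "NP", ("NP", "NG"): "NP", ("NP", "PG"): "NG",
        ("NG", "ZE"): "NG", ("NG", "NG"): "NG", ("NG", "PG"): "NG",
    }

    # Output singletons from Table 1 (ZE is the Z term of Table 1).
    SINGLETONS = {"NG": 0.0, "NP": 30.0, "ZE": 82.1, "PP": 150.0, "PG": 255.0}

    def pwm_output(mu_error, mu_delta):
        """Sugeno-style weighted average over the rule base.

        mu_error / mu_delta map each linguistic term to its membership
        degree for the current inputs; the input membership functions
        are not given in this excerpt, so they are supplied externally.
        """
        num = den = 0.0
        for (e, de), out in RULES.items():
            w = min(mu_error.get(e, 0.0), mu_delta.get(de, 0.0))  # AND = min
            num += w * SINGLETONS[out]
            den += w
        return num / den if den else 0.0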

Results

For the implementation of the fuzzy logic controllers, the block diagrams of Figure 4 and Figure 5 were built in Simulink (Matlab).

Figure 4

Block diagram of the type I fuzzy controller


Source: Prepared by the authors

Figure 5

Block diagram of the type II fuzzy controller


Source: Prepared by the authors

Change in reference level value

To validate the robustness of the control systems designed in this work, the control actions were evaluated at different reference levels: one unit above and one unit below the nominal point, that is, reference levels of 6 cm and 4 cm. In Figure 6.c, the type I fuzzy controller response (red line) for a reference level of 6 cm shows that it does not exert its control action, since the desired value is not contemplated within the universe of discourse: crisp fuzzy sets have no margin of tolerance for this kind of uncertainty. The type II fuzzy strategy (blue line), in contrast, does exert its control action at the required reference level. Figure 6.b presents a very good transient stage, highlighting the reduction of the settling time achieved with the type II technique (blue line) with respect to type I. Figure 6.a shows the performance of the control systems for a reference level of 4 cm; both present a good transient stage, again with a shorter settling time for the type II technique (blue line), whereas the type I controller exhibits a slight overshoot that produces an oscillation in its signal around the required reference level.

Figure 6

Response of fuzzy logic controllers for reference levels


Source: Prepared by the authors

As a final validation of the fuzzy control systems, two additional cases were considered: first, adding a noise signal to the control loops to simulate a disturbance external to the measurement system; second, adding a delay block to the control loops. The goal was to establish the differences in response between the type I and type II fuzzy techniques and to observe their robustness in the face of such inaccuracies. As can be seen in Figure 7.a (case 1) and Figure 7.b (case 2), the type II fuzzy controller performs a quick corrective action and remains in the steady state, yielding the better response.
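As a hedged illustration of these two cases, a minimal discrete-time loop (with placeholder controller and plant functions, not the authors' tank model) could inject the noise and the delay like this:

    import random
    from collections import deque

    def simulate(controller, plant_step, setpoint, steps=1000,
                 noise_std=0.0, delay_steps=0):
        """Minimal loop mimicking the two validation cases: additive
        measurement noise (case 1) and a transport delay (case 2).
        `controller` maps an error to a control action; `plant_step`
        maps (level, control) to the next level. Both are placeholders
        for the level-control plant used by the authors."""
        level = 0.0
        buffer = deque([0.0] * (delay_steps + 1), maxlen=delay_steps + 1)
        history = []
        for _ in range(steps):
            measured = level + random.gauss(0.0, noise_std)  # case 1: noise
            buffer.append(measured)
            delayed = buffer[0]                              # case 2: delay
            u = controller(setpoint - delayed)
            level = plant_step(level, u)
            history.append(level)
        return history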

Figure 7

a. Response with noise signal; b. Response with delay


Source: Prepared by the authors

Discussion

Compared with its type I counterpart, the type II fuzzy controller offers a clear advantage: its sets carry a footprint of uncertainty that the controller designer can exploit to obtain a wider range of control operation over the level units, and that can also absorb uncertainties or disturbances that could arise in a real scenario. This explains the strong behavior of the type II fuzzy controller versus the type I controller observed in Figures 7.a and 7.b: good performance of the control system against uncertainties, in addition to a faster settling time (Ts) without overshoot (Mp).

Table 4

Comparison of the responses


REFERENCE LEVEL    TYPE I FUZZY           TYPE II FUZZY
(LEVEL UNITS)      Ts          Mp         Ts          Mp
5                  105 s       —          80 s        0%
6                  NO CONTROL             95 s        —
4                  92 s        0%         67 s        0%

Source: Prepared by the authors

For the control loops with a delay, the difference between the two fuzzy controllers can be observed clearly: the type I technique is affected much more than the type II technique. This clarification is worth making because, under normal conditions at this same reference level, the type I controller presents neither overshoot nor steady-state oscillations (Figure 6.b).

Conclusions

Of the fuzzy logic controllers implemented on the level-control prototype, the type II fuzzy controller presents the best response. Thanks to their secondary membership function, type II fuzzy systems offer a great advantage, since their design ranges can account for uncertainties or events that could occur. In the methodology for developing type II fuzzy controllers, the inference type to be implemented is prioritized: selecting Sugeno inference makes the computation of the control response much cheaper, since the result is not obtained by reducing the type II fuzzy sets to type I. This, together with the improved response of the control system to uncertainties, is its main difference with respect to the type I controller.

The type II fuzzy controller obtained better performance in all the scenarios considered, showing great robustness of this control strategy to noise signals, reference-level changes, and delays in the sensed signal, in contrast to the type I fuzzy controller.


Lane Detection and Trajectory Generation System

Manuel Díaz-Zapata2*, José Miguel Correa-Sandoval1, Juan Perafán-Villota1, γ, Víctor Romero-Cano1,2

1 Departamento de Energética y Mecánica, Universidad Autónoma de Occidente, Santiago de Cali, Colombia

2 CHROMA, French Institute for Research in Computer Science and Automation (INRIA), Grenoble, France

* The first author undertook this work while he was part of Universidad Autónoma de Occidente

γ. Corresponding author: jcperafan@uao.edu.co

Abstract

This paper presents the development of a perception system that enables an Ackermann-type autonomous vehicle to move through urban environments using control commands based on short-term trajectory planning. We propose a lane detection and keeping system based on computationally efficient computer vision techniques. A Kalman filter-based estimation module was also added to gain robustness against illumination changes and shadows. Additionally, the simulation and control of the “Autónomo Uno” robot gave good results, with the robot following the steering commands to keep its position in the lane. In the simulation the controllers had slight noise problems, but the robot executed the given steering commands and followed the road; this behavior was also seen in the physical implementation.

Keywords: Autonomous vehicle, computer vision, lane detection, lane keeping, Ackermann kinematic model.

Related work, motivation and objective

Since the late 20th century there have been studies of systems focused on reducing the number of lives lost in traffic accidents, improving road safety, and increasing vehicle autonomy. These systems are known as Advanced Driver Assistance Systems (ADAS). One of the main features of ADAS is lane detection and tracking, a task where real-time processing is most crucial, since this module and the steering actuation system are the ones controlling where the vehicle is headed with respect to its position in the lane and its surroundings.

The survey by Narote et al. [1] shows that there have been several advances in the development of these techniques. In 2015, Son et al. [2] presented different modules working together to perform lane detection, including color-invariant lane detection under illumination changes, an adaptive region of interest (ROI) based on the vanishing point, Canny edge detection, and lane grouping using least squares. Using these modules in a simulated environment, they obtained a fast and powerful result for real-time application under different weather and lighting conditions; however, roads with cracks and blurred lane markings were this system's weak spots.

A year later, Madrid and Hurtik [3] presented a low-cost lane-departure warning system for vehicles, efficient and affordable for most people. They used the Hough transform for line detection together with a fuzzy image representation, which gave better results than the standard Hough transform with the Sobel operator, and also achieved a reliable and accurate real-time response. This system had difficulties at night and when white vehicles entered the scene.

In 2016, Mammeri et al. [4] presented an in-vehicle computing system that finds the lane markings and presents them to the driver. For line detection they used the Progressive Probabilistic Hough Transform, together with ROIs obtained from MSER blobs, which are then refined using a three-stage algorithm. Using the MSER results, the lane colors (white and yellow) are identified in the HSV color space, and Kalman filtering is used to track both lane lines. This system is limited at night due to road lighting and traffic conditions.

On the other hand, there has been little interest in Colombia in developing these kinds of technologies, due to the economic and infrastructure difficulties present in the country. This lack of interest can be seen in the low adoption and acceptance of technologies associated with renewable energies in the transport sector, such as electric autonomous vehicles. Nevertheless, the studies and work done in this area in leading countries suggest a hopeful future that could bring many benefits to the Colombian territory [5].

These benefits are the main motivation for creating a perception system for vehicles that facilitates lane detection and trajectory generation in urban environments, in the hope that creating software for platforms aimed at developing autonomous vehicles will promote research and development in this area at the national level.

Methods

This work was developed in three stages: first, the Lane Detection system; second, the Trajectory Generation system; and finally, testing of the complete system on the Autónomo Uno robot and, in parallel, in a Gazebo simulation environment. For the development of the Lane Detection system we mainly used videos [6], converted from MP4 format to rosbag so they could be better integrated with ROS, since this environment allows easier integration with robot-control hardware.

A perspective transform was used to create the bird's-eye view (BEV) of the road, based on a fixed ROI, in order to avoid the vanishing-point effect. The BEV also provides better information about the road, which helps create a more accurate lane model; with this transformation it is easier to find the center of the lane and the steering error the car must correct to stay centered. We predefined the ROI, since our own automatic ROI finder was too inefficient, running at an average of 2 FPS on a 30 FPS video. Once the BEV is obtained, the following computer vision techniques were used to enhance the lane markings and filter unwanted data: grayscale conversion and a vertical Haar-like feature filter [7].
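As an illustration of this step, the following Python/OpenCV sketch warps a fixed ROI into a BEV; the corner coordinates and output size are placeholders, not the values used in this work:

    import cv2
    import numpy as np

    def birds_eye_view(frame, src_corners, out_size=(400, 600)):
        """Warp a fixed road ROI to a bird's-eye view (BEV).

        src_corners: four (x, y) ROI points in the input image, ordered
        top-left, top-right, bottom-right, bottom-left. Placeholder
        values; the paper's actual ROI is not given in this excerpt."""
        w, h = out_size
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        M = cv2.getPerspectiveTransform(np.float32(src_corners), dst)
        return cv2.warpPerspective(frame, M, (w, h))

    # Example with made-up ROI corners for a 1280x720 frame:
    # bev = birds_eye_view(frame, [(550, 450), (730, 450), (1180, 720), (100, 720)])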

Next, the image is divided into two parts, one for each lane marking. Each part's histogram is then equalized, and a median filter is applied along with pixel-wise gamma correction using γ = 20. These steps enhance the brighter colors of the lane markings, making it easier to binarize the image with a threshold. Binary thresholding was performed on the value range from 240 to 250 (pixel values run from 0, black, to 255, white); greater values were not selected because they contained noise.
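A minimal sketch of this enhancement chain, assuming a 5×5 median kernel and a power-law gamma mapping (the exact normalization used is not stated here):

    import cv2
    import numpy as np

    def enhance_and_binarize(gray_half, gamma=20.0, lo=240, hi=250):
        """Enhancement chain for one half of the BEV (one lane marking):
        histogram equalization, median filter, pixel-wise gamma
        correction, then binarization on [lo, hi]. The kernel size and
        the power-law form of the gamma mapping are assumptions."""
        eq = cv2.equalizeHist(gray_half)
        smooth = cv2.medianBlur(eq, 5)
        norm = smooth.astype(np.float32) / 255.0
        corrected = np.uint8(255.0 * np.power(norm, gamma))  # gamma = 20 keeps only the brightest pixels
        mask = (corrected >= lo) & (corrected <= hi)         # threshold on [240, 250]
        return np.where(mask, 255, 0).astype(np.uint8)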

The lane model is created by passing a set of sliding windows vertically along each lane marking and finding a set of centroids that describe each marking. The parameters of the sliding windows are their height (Wh), width (Ww), the number of windows (Nw), and the horizontal starting point of the search (Sh). Ww, Nw, and Sh are defined by us, while the height of each window is obtained by dividing the image height (h) by Nw. The centroid of each window is found by averaging the x coordinates of the white pixels in the window's area, and the search is repeated until Nw windows have been placed.
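The following sketch illustrates this bottom-up window search under the definitions above; the default parameter values are illustrative, not the ones used in the paper:

    import numpy as np

    def sliding_window_centroids(binary, s_h, n_w=10, w_w=80):
        """Scan one binarized lane-marking image bottom-up with n_w
        windows of width w_w, starting at horizontal position s_h, and
        return one centroid per window (the mean x of the white pixels
        inside it)."""
        h = binary.shape[0]
        w_h = h // n_w                      # window height: h / Nw
        centroids = []
        x = s_h
        for i in range(n_w):
            y_top = h - (i + 1) * w_h       # windows stack bottom-up
            x_left = max(0, x - w_w // 2)
            window = binary[y_top:y_top + w_h, x_left:x + w_w // 2]
            _, xs = np.nonzero(window)
            if xs.size > 0:
                x = x_left + int(xs.mean())  # recenter on white pixels
            centroids.append((x, y_top + w_h // 2))
        return centroids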

Kalman filtering [8] was used either to filter the changes in the x coordinate of each centroid or to predict the position when no white pixels are found due to broken lane markings. Kalman filtering is not applied to the y coordinate, since it increases by the fixed, known value Wh at each iteration. Finally, the center line was found from the difference between the centroids of the two lane models.
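A minimal sketch of such a per-centroid filter, modeled as a constant-position scalar Kalman filter (the noise parameters q and r are assumptions), follows:

    class CentroidKalman1D:
        """Constant-position Kalman filter for one centroid's x
        coordinate. The y coordinate needs no filtering: it grows by
        the fixed window height Wh at each step."""

        def __init__(self, x0, p0=1.0, q=0.01, r=4.0):
            self.x, self.p, self.q, self.r = float(x0), p0, q, r

        def predict(self):
            self.p += self.q      # state assumed static; uncertainty grows
            return self.x

        def update(self, z):
            k = self.p / (self.p + self.r)   # Kalman gain
            self.x += k * (z - self.x)
            self.p *= (1.0 - k)
            return self.x

    # Per frame: call predict(); if the window saw white pixels, call
    # update(measured_x); otherwise keep the prediction (broken line).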

The Trajectory Generation system is based on the middle line provided by the lane detection system. First, two points of the middle line are selected for the steering vector; then the steering angle is calculated as shown in eq. (1), where pt1 is the point closest to the car, pt2 is the sixth point of the middle line, and the subscripts x and y indicate the respective coordinates of the points.

(1)

The steering angle is used along with the selected points to create the steering vector. This vector describes the short-term trajectory the vehicle needs to follow in order to stay in the middle of its lane. The steering angle is also used to compute the inverse kinematics of the mobile robot, so that the Ackermann-type vehicle can move its steering wheels to execute the trajectory.
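Eq. (1) is not reproduced in this excerpt, so the following Python sketch is only a plausible reading of the definitions above (pt1 nearest to the car, pt2 the sixth middle-line point), not the authors' exact formula:

    import math

    def steering_angle(pt1, pt2):
        """Angle of the steering vector from pt1 to pt2, measured
        against the vertical image axis (image y grows downward, so
        pt1[1] - pt2[1] > 0 when pt2 is farther up the road). Zero
        means straight ahead; the atan2 form is an assumption."""
        return math.atan2(pt2[0] - pt1[0], pt1[1] - pt2[1])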

Finally, the system was integrated with the physical Autónomo Uno platform and with the simulated environment, using ROS along with Gazebo. The inverse kinematics of mobile robots [9] was used to control the Ackermann-type vehicle; in addition, a Proportional-Integral (PI) controller was added to control the response of the vehicle while it follows the steering vector provided by the lane detection system.

One of the main inputs to our simulation system is the robot's heading, which is provided by the vision-based trajectory-planning module. This heading value is translated into steering commands for each front wheel using the instantaneous center of rotation (ICR) distance and the ICR angle, both provided by the inverse kinematics. The ICR distance is computed as shown in eq. (2).

(2)
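Eq. (2) is likewise not reproduced here. The sketch below uses the standard Ackermann relation, where the ICR distance is wheelbase / tan(heading); the sign convention (positive heading turns left) and the wheelbase/track parameters are assumptions:

    import math

    def ackermann_wheel_angles(heading, wheelbase, track):
        """Standard Ackermann geometry: distance from the rear axle to
        the ICR is R = wheelbase / tan(heading). Returns (left, right)
        front-wheel steer angles in radians, assuming positive heading
        means turning left."""
        if abs(heading) < 1e-6:
            return 0.0, 0.0                      # straight: no finite ICR
        r = wheelbase / math.tan(heading)        # ICR distance, eq. (2)-style
        left = math.atan(wheelbase / (r - track / 2.0))   # inner wheel on a left turn
        right = math.atan(wheelbase / (r + track / 2.0))  # outer wheel on a left turn
        return left, right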

Results

The results of the Lane Detection and Trajectory Generation systems can be observed in Figure 1. In Figure 1a, the ROI we defined is delimited in red. Figure 1b shows the BEV and the lane markings after filtering. Figure 1c shows the result of the final lane-enhancement process. Then the sliding-window step is performed and the centroid-based lane model is created, see Figure 1d, where the centroids in red are those detected by the sliding windows, while those in blue or green were predicted by the Kalman filter because the observation was not confident enough.

The center-line model can be observed in Figure 1d, along with the steering vector. The behavior of this steering vector can be tuned by choosing a different point as pt2: selecting pt2 too close to the car makes the system too sensitive to variations in the centroids' x coordinates, while choosing pt2 closer to the top of the image reduces this sensitivity. The sixth point was chosen because it gave stability and a good angle representation in both cases: straight and curved roads.

 

Figure 1

a. Manual ROI defined, b. Filtered perspective transform of a road curve, c. Binary image after thresholding, d. Lane centroids and steering vector


Source: Own elaboration

The Ackermann kinematics model defined in [9] is computed locally on the Autónomo Uno robot, using the two main controller boards installed. The first is a Raspberry Pi 3 B+, which communicates with a main computer in order to receive control commands. The second is an Arduino DUE, which controls the two types of motors used for traction and steering, respectively. Each motor also has its own built-in PID controller that manages its behavior.

On the other hand, the Ackermann kinematics model needed for the Gazebo simulation environment is computed on the main computer. The dynamics involved in the environment also require a controller module, see Figure 2 (left), where the red and pink signals are the responses of the two front wheels to the steering angle.

Figure 2

Left: initial response of the system without a controller. Right: Ackermann system response with a PI controller


Source: Own elaboration

Therefore, a PI controller was designed using the Ziegler-Nichols tuning method [10] for the steering and the velocity of the wheels. Even though the signals with the controller show some noise, Figure 2 (right), the system manages to follow the steering-angle set-point. This behavior was seen both in the simulation environment and on the physical platform. Figure 3 shows the system's response for three principal cases: turn left, go straight, and turn right.
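As an illustration of such a controller (the gains, sampling time, and saturation limits below are placeholders, not the tuned Ziegler-Nichols values), a discrete PI step with simple anti-windup could look like this:

    class PI:
        """Discrete PI controller of the form used for wheel steering
        and velocity; gain values here are illustrative only."""

        def __init__(self, kp, ki, dt, u_min=-1.0, u_max=1.0):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.u_min, self.u_max = u_min, u_max
            self.integral = 0.0

        def step(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            u = self.kp * error + self.ki * self.integral
            # Clamp the output and undo the last integration when
            # saturated (conditional-integration anti-windup).
            if u > self.u_max:
                u, self.integral = self.u_max, self.integral - error * self.dt
            elif u < self.u_min:
                u, self.integral = self.u_min, self.integral - error * self.dt
            return u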

Figure 3

Column a. System behavior for left-curved road, Column b. System behavior for straight road, Column c. System behavior for right-curved road


Source: Own elaboration