Automática y Robótica en Latinoamérica

Hardware implementation with three robotic arms

To implement hardware that allows easy interaction with users, the three robots were integrated at the hardware level: the robotic arms were mounted on a 1 m x 1 m wooden board, positioned so that they can interact with one another (Figure 2). The robots are located 30 cm apart, a distance sufficient to manipulate objects and execute different trajectories. After the arms were assembled, the wiring that powers and controls the robots was installed using 20 AWG cable, which meets the electrical requirements of the arms in terms of current and thermal resistance. Each servo was wired to the control cards beneath the wooden board (black wires to ground, red to the supply, and green to the PWM signal).

Figure 2

Hardware implementation of the three robotic arms and control cards


Source: Own preparation
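
As an illustration of the control side of this wiring, the sketch below shows how a position command could be sent to one servo channel of the Lynxmotion SSC-32u card named in the conclusions, using its documented "#<channel>P<pulse width>T<time>" serial format. This is a minimal sketch, not the authors' code; the port name and baud rate are assumptions that depend on the actual setup.

    // Minimal sketch (not the authors' code): commanding one servo channel
    // on a Lynxmotion SSC-32u card over a serial link. The card's documented
    // command format is "#<channel>P<pulse width in us>T<time in ms>".
    using System.IO.Ports;

    class ServoCommandDemo
    {
        static void Main()
        {
            // Port name and baud rate are assumptions; adjust to the setup.
            using (var port = new SerialPort("COM3", 9600))
            {
                port.Open();
                // Move the servo on channel 0 to 1500 us (center) in 1000 ms.
                port.Write("#0P1500T1000\r");
            }
        }
    }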

3D robots modeling

To assemble the robots in Unity 3D, the parts of each robot must first be imported from a CAD tool. The parts were designed with Autodesk Inventor and Autodesk 3ds Max: the former because it preserves a 1:1 scale, and the latter because it renders the designed objects and exports them to Unity, so that the simulated dynamics resemble the real ones. Figure 3 shows the robots modeled in 3D.

Figure 3

3D modeling of the SainSmart 5-Axis robot, the AL5B robot, and the SainSmart DIY 6 robot


Source: Own preparation

Software implementation - User interface

To develop the graphical interface, the robots modeled in 3D are imported first. A user interface was then created in which the robots appear to scale; it offers different views and perspectives of the robots through two cameras. The interface has a button with the name of each robot; pressing it displays that robot's panel, which shows the channel each servo is connected to, its degrees of freedom, and a slider that drives the movement of each servo. It also has a connect button, which brings the robots to their initial position and connects them to the COM port defined for that purpose. Finally, there is a window where a sequence is built and where the positions of the robots are displayed; it is worth noting that sequences can be saved to be reused later and to repeat a defined movement. This interface can be seen in Figure 4.

Figure 4

Interface of the tool in Unity 3D


Source: Own preparation
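
As a sketch of how such a panel could be wired internally, the following hypothetical Unity script links one slider to one joint: it rotates the corresponding part of the 3D model and forwards the equivalent pulse width to the physical servo. The SerialLink helper, channel numbers, and joint limits are illustrative assumptions, not the actual implementation.

    // Hypothetical sketch: one UI slider drives one joint of the simulated
    // robot and the matching physical servo. SerialLink is an assumed
    // wrapper around the serial port used by the control card.
    using UnityEngine;

    public class JointSlider : MonoBehaviour
    {
        public Transform joint;               // part of the 3D model moved by this servo
        public Vector3 axis = Vector3.up;     // local rotation axis of the joint
        public int channel;                   // control-card channel of the servo
        public float minAngle = -90f, maxAngle = 90f;

        // Hooked to the slider's OnValueChanged event; t is in [0, 1].
        public void OnSliderChanged(float t)
        {
            // Update the simulated joint.
            float angle = Mathf.Lerp(minAngle, maxAngle, t);
            joint.localRotation = Quaternion.AngleAxis(angle, axis);

            // Map the same value to a 500-2500 us pulse for the real servo.
            int pulse = Mathf.RoundToInt(Mathf.Lerp(500f, 2500f, t));
            SerialLink.Send($"#{channel}P{pulse}T500\r");
        }
    }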

Results

To demonstrate the software and hardware implementation and the operation of the user interface developed in Unity 3D, the team proposed a test in which the robots interacted with each other, using a wooden cube. The test consisted of three basic steps. First, one robot had to pick the cube up from the table with its gripper. Then, a trajectory had to be planned so that the cube was passed to the next robot and from there to the third one. Finally, this last robot was in charge of returning the cube to the starting point. To accomplish the exercise, the sliders were first configured movement by movement while the sequence was being saved; the exercise was then repeated using the saved sequence. Figure 5 shows a picture of the process.
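
The record-and-replay behavior used in this test could look like the following sketch, in which each saved step stores the pulse widths of all channels and playback re-sends them with a fixed delay. Class and method names are illustrative, as is the reuse of the assumed SerialLink helper.

    // Hedged sketch of sequence recording and playback; not the authors' code.
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class SequenceRecorder : MonoBehaviour
    {
        readonly List<int[]> steps = new List<int[]>();
        public float stepDelay = 1.5f;   // seconds between replayed steps

        // Called when the user saves the current pose of all servos.
        public void SaveStep(int[] currentPulses)
        {
            steps.Add((int[])currentPulses.Clone());
        }

        // Started with StartCoroutine(Replay()) to repeat the saved sequence.
        public IEnumerator Replay()
        {
            foreach (int[] pulses in steps)
            {
                for (int ch = 0; ch < pulses.Length; ch++)
                    SerialLink.Send($"#{ch}P{pulses[ch]}T1000\r");
                yield return new WaitForSeconds(stepDelay);
            }
        }
    }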

The developed program ran successfully on computers with different specifications and operating systems. It ships with an offline installer that sets up the programs needed for its correct execution. It is important to highlight that, because the code was developed in a standardized way, the program is scalable and more robotic arms can easily be added.
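
The scalability claim can be pictured as data-driven configuration: if each arm is described by an entry rather than by hard-coded logic, adding a fourth robot only requires one more entry. The type and the channel assignments below are assumptions made for illustration.

    // Illustrative sketch of a data-driven robot catalog; channel numbers
    // are assumptions, not the actual wiring.
    using System.Collections.Generic;

    [System.Serializable]
    public class RobotArm
    {
        public string name;
        public int[] channels;   // control-card channels, one per joint
    }

    public static class RobotCatalog
    {
        public static readonly List<RobotArm> Arms = new List<RobotArm>
        {
            new RobotArm { name = "SainSmart 5-Axis", channels = new[] { 0, 1, 2, 3, 4 } },
            new RobotArm { name = "AL5B",             channels = new[] { 5, 6, 7, 8, 9 } },
            new RobotArm { name = "SainSmart DIY 6",  channels = new[] { 10, 11, 12, 13, 14, 15 } },
            // A new arm is one more entry here; no control code changes.
        };
    }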

Figure 5

Manipulation of a cube by the three robots


Source: Own preparation

Thanks to the implementation of the user interface in Unity 3D and the hardware environment of the three robots, the relationship between the simulation and the manipulation of these devices was demonstrated. Moreover, the team observed that the robots operated synchronously and satisfactorily.

Discussion and conclusions

This article presented the implementation of a software tool that allows the manipulation of three educational robots with different characteristics. The commercial robots used were the 5-axis SainSmart robot, the AL5B robot, and the 6-axis SainSmart DIY robot. Although each of the three comes with its own cards and programs, a single type of control card (the Lynxmotion SSC-32u) was used to handle all of them, together with a single program in which the three real robots are simulated and/or controlled. The controller program was built with the Unity 3D tool after importing the robot parts from CAD software. The program moves each joint of the robots through a slider control, and sequences of movements can be recorded and played back at the desired time.

Future work will implement the inverse kinematics equations of the three robots so that they can be moved not only joint by joint but also by specifying the desired three-dimensional position of the end effector. Another pending task is to enable manipulation from a smartphone over a Wi-Fi or Bluetooth connection, in order to expand the capabilities of the tool.
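
To give a flavor of the inverse kinematics mentioned as future work, the following sketch shows the standard closed-form solution for a planar two-link arm; the three robots above have more joints and different geometry, so this is only an illustration of the kind of computation involved.

    // Standard closed-form inverse kinematics for a planar two-link arm
    // (illustration only; not the kinematics of the three robots above).
    using System;

    public static class PlanarIK
    {
        // Returns shoulder and elbow angles (radians) placing the end
        // effector at (x, y) for link lengths l1, l2 ("elbow-down" branch).
        public static (double theta1, double theta2) Solve(
            double x, double y, double l1, double l2)
        {
            double c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2);
            if (c2 < -1 || c2 > 1)
                throw new ArgumentException("Target out of reach.");

            double theta2 = Math.Acos(c2);
            double theta1 = Math.Atan2(y, x)
                          - Math.Atan2(l2 * Math.Sin(theta2),
                                       l1 + l2 * Math.Cos(theta2));
            return (theta1, theta2);
        }
    }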

References

[1] M. Ostanin and A. Klimchik, “Interactive Robot Programing Using Mixed Reality”, IFAC-PapersOnLine, vol. 51, no. 22, pp. 50-55, 2018. doi: 10.1016/j.ifacol.2018.11.517

[2] M. Sharifi, X. Chen, C. Pretty, D. Clucas, and E. Cabon-Lunel, “Modelling and simulation of a non-holonomic omnidirectional mobile robot for offline programming and system performance analysis”, Simul. Model. Pract. Theory, vol. 87, pp. 155-169, 2018. doi: 10.1016/j.simpat.2018.06.005

[3] R. Crespo, R. García, and S. Quiroz, “Virtual Reality Application for Simulation and Off-line Programming of the Mitsubishi Movemaster RV-M1 Robot Integrated with the Oculus Rift to Improve Students Training”, Procedia Comput. Sci., vol. 75, pp. 107-112, 2015. doi: 10.1016/j.procs.2015.12.226

[4] F. A. Candelas et al., “Experiences on using Arduino for laboratory experiments of Automatic Control and Robotics”, IFAC-PapersOnLine, vol. 48, no. 29, pp. 105-110, 2015. doi: 10.1016/j.ifacol.2015.11.221

[5] E. Hortal, E. Iáñez, A. Úbeda, C. Perez-Vidal, and J. M. Azorín, “Combining a Brain–Machine Interface and an Electrooculography Interface to perform pick and place tasks with a robotic arm”, Robot. Auton. Syst., vol. 72, pp. 181-188, 2015. doi: 10.1016/j.robot.2015.05.010

[6] A. Peidró, O. Reinoso, A. Gil, J. M. Marín, and L. Payá, “A Virtual Laboratory to Simulate the Control of Parallel Robots”, IFAC-PapersOnLine, vol. 48, no. 29, pp. 19-24, 2015. doi: 10.1016/j.ifacol.2015.11.207

[7] K. Alisher, K. Alexander, and B. Alexandr, “Control of the Mobile Robots with ROS in Robotics Courses”, Procedia Eng., vol. 100, pp. 1475-1484, 2015. doi: 10.1016/j.proeng.2015.01.519

[8] V. F. Filaretov, and V. E. Pryanichnikov, “Autonomous Mobile University Robots AMUR: Technology and Applications to Extreme Robotics”, Procedia Eng., vol. 100, pp. 269-277, 2015. doi: 10.1016/j.proeng.2015.01.367

[9] B. Zhu, A. Song, X. Xu, and S. Li, “Research on 3D Virtual Environment Modeling Technology for Space Tele-robot”, Procedia Eng., vol. 99, pp. 1171-1178, 2015. doi: 10.1016/j.proeng.2014.12.700

[10] V. Vladareanu, R. I. Munteanu, A. Mumtaz, F. Smarandache, and L. Vladareanu, “The Optimization of Intelligent Control Interfaces Using Versatile Intelligent Portable Robot Platform”, Procedia Comput. Sci., vol. 65, pp. 225-232, 2015. doi: 10.1016/j.procs.2015.09.115

[11] M. Li, H. Wu, H. Handroos, G. Yang, and Y. Wang, “Software protocol design: Communication and control in a multi-task robot machine for ITER vacuum vessel assembly and maintenance”, Fusion Eng. Des., vol. 98–99, pp. 1532-1537, 2015. doi: 10.1016/j.fusengdes.2015.05.058

[12] A. Patwardhan, A. Prakash, and R. G. Chittawadigi, “Kinematic Analysis and Development of Simulation Software for Nex Dexter Robotic Manipulator”, Procedia Comput. Sci., vol. 133, pp. 660-667, 2018. doi: 10.1016/j.procs.2018.07.101

[13] I. J. Henao, J. A. Giraldo, F. A. Meza, C. W. Sánchez, and J. E. Ordoñez, “El brazo robótico como herramienta pedagógica en el aula de clase”, Rev. Lumen Gentium, vol. 1, no. 1, pp. 82-90, 2017 [Online]. Available: https://revistas.unicatolica.edu.co/revista/index.php/LumGent/article/view/10/15

[14] D. A. Triana Archila, S. Roa, and C. A. Forero, “Desarrollo y control de un brazo robótico mediante la adquisición de datos en tiempo real hacia un espacio no real”, in Memorias IV Congreso Internacional de Ingeniería Mecatrónica y Automatización - CIIMA 2015, M. Restrepo, Ed. Envigado, Colombia: Fondo Editorial EIA, 2015, pp. 112-117 [Online]. Available: https://revistas.eia.edu.co/index.php/mem/article/view/821/739

[15] R. Crespo, R. García, and S. Quiroz, “Virtual reality simulator for robotics learning”, in 2015 International Conference on Interactive Collaborative and Blended Learning (ICBL), A. Molina, Ed. Ciudad de México, México: ITESM, 2015, pp. 61-65. doi: 10.1109/ICBL.2015.7387635

[16] J. J. Castañeda, A. F. Ruiz-Olaya, W. Acuña, and A. Molano, “A low-cost Matlab-based educational platform for teaching robotics”, in 2016 IEEE Colombian Conference on Robotics and Automation (CCRA), H. Carrillo, Ed. Bogotá, Colombia: Universidad Sergio Arboleda, 2016, pp. 1-6. doi: 10.1109/CCRA.2016.7811425

[17] C. Bartneck, M. Soucy, K. Fleuret, and E. B. Sandoval, “The robot engine — Making the unity 3D game engine work for HRI”, in 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Y. Nakauchi and M. Imai, Eds. Kobe, Japan: IEEE, 2015, pp. 431-437. doi: 10.1109/ROMAN.2015.7333561

Use of virtual reality for cranial navigation in surgical exploration tasks

Ivanna Melissa Pai Zambranoγ, Mably Bolena Escobar Ortiz, Oscar Andrés Vivas Albán

Dpto. de Electrónica y Telecomunicaciones, Universidad del Cauca, Popayán, Colombia

γ. Corresponding author: melissa.pzmbrn@unicauca.edu.co

Abstract

The appearance of head-mounted displays (HMD) has been fundamental for the development of new applications using virtual reality (VR), since these devices immerse users in a virtual world. This article presents a tool that allows the user to interact with a virtual skull through the Oculus Rift headset. The tool was built with the Unity 3D game engine, and the medical images were obtained from a real patient by means of a computed tomography scan. Tests carried out with five users make it possible to foresee the tool's potential for exploration and surgical diagnosis tasks, taking into account the incidence of simulator sickness in medical students practicing surgery.

Keywords: Virtual reality, Cranial navigation.

Background, Motivation and Objective

Medicine demands precision in each of its analyses and procedures, especially in surgery; therefore, the training of a physician should cover surgical training in detail. Current technological advances provide new training tools such as virtual reality (VR). As Grigore [1] notes, VR is an immersive experience that involves multimodal and multisensory interactions with simulated scenarios through a computer, fundamentally with visual, auditory, and haptic feedback.

With the appearance of head-mounted displays (HMD), computer-generated graphics inside these devices also came into use, and with them not only games and entertainment but also applications in engineering, medicine, and aerospace simulation [2]. However, one of the biggest challenges users face when using HMDs as training devices, whether for surgery or aviation maneuvers, and even when they are used purely for entertainment, is simulator sickness [3]-[6], which occurs in most users with symptoms such as dizziness, headache, blurred vision, and nausea, and may even affect balance.

Training programs using VR are increasingly common and are complemented with HMDs, presenting the user or surgeon in training [7] with a perspective view of their movements, achieving better immersion and making the perception as close to real as possible in order to obtain optimal results in the process. Novice surgeons do not feel confident performing complicated or higher-risk surgery, and this is where VR technology can be used as a method to improve residents' self-confidence and knowledge of a surgery [8]; among the benefits it provides are improved fine motor skills and eye-hand coordination in preclinical settings, with methods that can be implemented at low cost [9].

Among the many existing HMD devices, the Oculus Rift has gained the widest acceptance in medical research, and several studies have already been carried out around it on topics such as the diagnosis of pulmonary disorders [10], immersive training [11], physical rehabilitation [12], [13], and resident education [14], among others. A qualitative study has also evaluated user interaction and/or manipulation in a VR environment using the Oculus Rift with 3D elements [15]; however, it has not yet been explored what this interaction would be like in a hospital or surgical environment, nor what the user's reaction is when faced with a scene of those characteristics.

This article presents the result of combining the Oculus Rift HMD with the Unity programming environment as a support system for medicine. In particular, it qualitatively analyzes the user's interaction with 3D recreations of parts of the human body, such as the skull and the brain, and their reaction when immersed in a scene related to this area of medicine. In this interaction the Oculus controllers play an important role, since they allow the hands to be used in a genuine way so that the manipulation of elements in the VR scene is achieved naturally; the headset, in turn, plays a very important role with regard to simulator sickness, which suggests that this type of tool can be used satisfactorily as part of novice surgeon training and in the detection of said condition.

Creation of the application using hardware and game engine

This project used the Oculus Rift kit, with the headset or visor serving as the HMD. It has two OLED screens with a resolution of 1080x1200 each, which makes the VR fully immersive (Figure 1). It also includes two position sensors that track the position of the user's head and hands, the latter through the game controllers. To run this tool, a powerful graphics card is required; in general, the computer must have the characteristics of a gaming computer in terms of memory and graphics card.

Figure 1

Oculus Rift Kit


Source: Prepared by the authors based on pictures from www.oculus.com.

Unity [16] is a real-time platform used to create video games; more than half of the games that exist to date have reportedly been built with it. It is an easy tool to work with and provides all the facilities needed to create 2D and 3D scenarios. The packages required to work with the Oculus Rift are integrated into Unity, which makes it a good tool for this project. The 3D models used are recreations of three-dimensional objects usually found in a laboratory or clinic, as well as representations of some organs of the human body. For this project, images obtained from a CT scan of a real patient were used, adapted to be correctly represented in the tool (Figure 2).

Figure 2

3D model of the human skull


Source: Own elaboration

Configuration and implementation

The graphic scenario was implemented in the Unity game engine, and the integration with the Oculus Rift was made with the help of the built-in VR support, Oculus Integration, which contains scripts, prefabs, and other resources for working with the device. To navigate through the virtual scene and manipulate its objects, the OVRPlayerController package was used, which provides access to the headset camera and the Oculus controllers.
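
As a minimal illustration of reading the controllers through the OVRInput API that ships with Oculus Integration (the specific axes queried here are a choice made for this sketch):

    // Minimal sketch of polling the Oculus Touch controllers via OVRInput.
    using UnityEngine;

    public class ControllerProbe : MonoBehaviour
    {
        void Update()
        {
            // Right-hand thumbstick as a 2D axis in [-1, 1].
            Vector2 stick = OVRInput.Get(OVRInput.Axis2D.SecondaryThumbstick);

            // Right-hand analog grip trigger in [0, 1].
            float grip = OVRInput.Get(OVRInput.Axis1D.SecondaryHandTrigger);

            if (grip > 0.5f)
                Debug.Log($"Gripping while thumbstick is at {stick}");
        }
    }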

When launching the desktop application with the HMD and the aforementioned controllers, the user starts positioned right in front of the 3D model of the skull with which they will interact. From this position they can move freely around the room, as far as the Guardian boundary allows, and can make a 360° visual tour in which they perceive the details of the virtual space and feel inside a real clinical laboratory, experiencing the immersion that this powerful VR tool generates. All the objects in the scene with which the user can interact have physical properties assigned, such as texture, color, and gravity; they also carry scripts that allow the user to manipulate these elements with the controllers (Figure 3).
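
A hedged sketch of how a scene object can be given those properties from code: a Rigidbody supplies gravity and collisions, and the OVRGrabbable component from Oculus Integration lets the grabber on each hand pick the object up. Doing this at runtime rather than in the Inspector is a choice made here for illustration.

    // Hedged sketch: making an object in the scene grabbable and physical.
    using UnityEngine;

    public class MakeGrabbable : MonoBehaviour
    {
        void Awake()
        {
            var body = gameObject.AddComponent<Rigidbody>();
            body.useGravity = true;                   // falls and rests like a real prop

            gameObject.AddComponent<BoxCollider>();   // needed for grabbing/collisions
            gameObject.AddComponent<OVRGrabbable>();  // from Oculus Integration
        }
    }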

Figure 3

Unity scenario - Enlargement completed


Source: Own elaboration

User interaction

During the development of the application the user mainly interacted with the three-dimensionally modeled skull; however, a scenario was also adapted in which the user could walk all around and thus have a more realistic setting of a conventional office and an operating room (Figure 4).

Figure 4

Conventional office setting


Source: Own elaboration

In the final scene, the action area has the details needed to make the user comfortable during immersion, and other objects in the room can also be manipulated. Regarding the skull model, its parts were given different colors for better perception: the gray matter in beige, the white matter in wine red, and the corpus callosum in brick color (Figure 5).
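
The color coding described above could be applied with a small script like the following; the renderer references and the exact RGB values are assumptions made for illustration.

    // Illustrative sketch of the per-part color coding of the skull model.
    using UnityEngine;

    public class SkullColors : MonoBehaviour
    {
        public Renderer grayMatter, whiteMatter, corpusCallosum;

        void Start()
        {
            grayMatter.material.color     = new Color(0.96f, 0.91f, 0.79f); // beige
            whiteMatter.material.color    = new Color(0.45f, 0.10f, 0.14f); // wine red
            corpusCallosum.material.color = new Color(0.70f, 0.30f, 0.20f); // brick
        }
    }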

Figure 5

Interaction inside the skull


Source: Own elaboration

In this version, the user's hands are simulated with the help of the controllers and the Oculus position sensors, making it possible to visualize the grip of the 3D pieces and, from that grip, to rotate them, zoom in on them, and change their position in the scene. This grip is achieved in a very natural way and makes the immersion more realistic (Figure 6).

Figure 6

Interaction of the Oculus Rift with the application


Source: Own elaboration

Results

The final application achieves user interaction with 3D models of human organs and surgical tools. It was tested by five users, all of whom showed good acceptance of the product and the experience. They reported no dizziness or cybersickness, one of the most frequent effects of using HMD headsets.

All the users were comfortable with the immersion in the VR scene and noted the ease of manipulating the virtual objects, which also have the right size for interaction. They also pointed out the need to set the scene with elements that remind them they are in a hospital environment, and the relevance of being able to perform different actions with the objects, which can be manipulated, rotated, approached, and grasped, and of being able to move around the scene so that the immersion becomes stronger. On the other hand, users stressed that the tracking of head movements during the test was quite good, which in turn allowed good handling of the controllers, represented as the operator's hands. Finally, it is recommended that the workspace be large enough for the user to traverse the entire scene without restrictions. Figure 7 shows a user testing the application.

Figure 7

User testing the simulator


Source: Own elaboration