Programming and control for skill-based robots

This paper considers programming and control of skill-based robots. Robot skills are used to integrate and synchronize robot actions and sensor data in a consistent way. The skill-based approach provides a framework for configurable robot systems, enabling quick setups and start-ups of applications. In the paper we introduce skill programming and skill control concepts in more detail, and describe how they relate to the usage of models and sensors. We also give a practical example of programming and implementing skills for a grinding application.


Introduction
With ever-increasing demands for agility and flexibility, robotized manufacturing faces challenges and barriers to the use of robots because of the high cost of engineering the work cells (i.e., the design, fabrication, and installation of jigs, fixtures, conveyors, and third-party sensors and software). Robots must be able to perform their tasks in environments with greater uncertainty than current systems can tolerate [1], where robustness implies wide usage of a variety of sensor technologies. Robust robot operations can be based on behavioral models supporting the integration of sensors into robot operations; this is how robot "skills" are formulated. A skill is typically defined as a high-level entity, a sensory-motoric decomposition pattern, where skills are decomposed into action- and trajectory-level representations [2]. Skill representations support reusability, and when the skill layer is abstract it provides a non-expert, intuitive programming interface, though it still relies on a device-level setup by an expert user [3]. Reusability is two-fold: it can be supported by simpler, or conversely by more complex, skills [4]. Skills can be hierarchic, and at the lowest level there are atomic actions representing device interfaces or API functions.
The hierarchic structure in skill-based control and programming goes as follows: a "task" is described at the highest level of operation, for example "Single part feeding", "Welding with localization" or "Pick-up with contact-force" [5]. A task can be decomposed into "skills" such as "Locate box", "Pick-up box" and "Place box" [5][6][7], which are formed from the lowest-level entities called "primitives", which can be described as device operations. Further, the primitive layer can be seen to represent the conventional robot work sequence, consisting of commands such as "Move linear", "Open gripper", "Get point-cloud" and "Get force-values". Skills should be designed so that parametrization supports easy and quick re-configurability of the skills for different situations in varying applications, for example phase-dependent locations/points, poses, forces and so on.
To our knowledge, there are no off-line programming tools for skill-based robot systems where both robot path programming (especially with compliant motion control primitives) and sensor programming are integrated. Schou et al. [8] present a similar hierarchical functional modelling and parametrization approach to task-level programming. In their system, however, only some parameters are specified off-line (e.g. velocity, object type, etc.) and all the locations are taught on-line, whereas in our system all skills are fully programmed off-line (with only the specific parameters updated based on information gained from the other skills). In their paper, Schou et al. also differentiate computational procedures from skills as "services", which is a challenging generalization, since almost every task, skill and primitive includes some sort of computational procedure that can be implemented as a software service.
In addition, we have been developing robot skills relying especially on vision feedback and compliant contact motions, for which there are only few, if any, reported references in the literature. Zeiss has reported on modeling principles [9], where force and torque are interconnected in the task representation with a pose-wrench approach (a wrench comprising force/torque parameters), but this considers only static path points and not compliance control.
In this paper we introduce methods and tools for robot skill programming and an example of a skill-based control system. Skill execution control, or dynamics, is specified as UML action diagrams, the implementation of which may be distributed and realized as robot programs or PLC programs. Skills are specified with (configuration) parameters, and our integrated tool supports setting and computing these, using a robot off-line programming tool with our extensions. In the experiments we show examples of skills that rely heavily on off-line programming methods.

Skill models and implementations
Ideally, skill models are designed and programmed by expert users, but they can be re-configured for different situations even by operators who have no experience in robot programming. This is achieved by making skills intuitive, object-centered robot abilities that are applied to physical entities instead of 3D coordinates [5]. In practice, this can be accomplished with a proper parametrization of programs, where a set of inputs defines the whole structure of a program. The program then automatically provides all the necessary parameters for the sensors and control systems. Robot skills are divided into primitive skills and compound skills, the latter composed of primitive skills and other compound skills. These are described further below.

Feature based localization
Fundamentally, a rough global localization of the target object is needed to produce the initial pose parameters for the fine localization. The rough location of the object can be determined based on known features of the object, e.g. eigenvectors and center points of the 3D point clusters, or other known vectors such as surface normals, or interest points such as corners. The non-orthogonal estimate of the rotation matrix R_e can be computed as

R_e = N_2 N_1^+

in which N_1 and N_2 are matrices whose columns are surface normal vectors (they have more columns than rows). N_1^+ denotes the pseudoinverse, which can be computed as

N_1^+ = N_1^T (N_1 N_1^T)^{-1}

provided that the matrix N_1 N_1^T is invertible. The matrix N_1 includes vectors in the object's CAD model coordinate system and the matrix N_2 includes the corresponding vectors in the sensor's coordinate system. The orthogonal estimate of the rotation matrix is constructed from R_e by taking cross products of the columns of the rotation matrix estimate,

v_z = v_x × v_y (5)

in which v_x = R_e(1:3, 1) and v_y = R_e(1:3, 2).
The initial, i.e. rough, rotation matrix is constructed from the orthogonal unit vectors.
The initial translational vector T_init is computed as

T_init = p_c2 − R_init p_c1

in which p_c1 is the corresponding center point in the object's CAD coordinate system and p_c2 is the cluster center point in the sensor's coordinate system.
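The rough localization described above can be sketched as follows. This is a minimal numerical illustration of the pseudoinverse-based rotation estimate and the translation equation, assuming matched normals are stacked as columns of N_1 (CAD frame) and N_2 (sensor frame); the function name and interface are illustrative, not part of the paper's implementation.

```python
import numpy as np

def estimate_pose_from_normals(N1, N2, p_c1, p_c2):
    """Rough pose from matched surface normals and cluster centers.
    Follows R_e = N2 * N1^+, with N1^+ = N1^T (N1 N1^T)^{-1},
    then orthogonalizes R_e by cross products of its columns."""
    N1 = np.asarray(N1, float)   # normals in the CAD model frame, shape (3, m)
    N2 = np.asarray(N2, float)   # the same normals observed in the sensor frame
    # Non-orthogonal estimate of the rotation matrix
    N1_pinv = N1.T @ np.linalg.inv(N1 @ N1.T)
    Re = N2 @ N1_pinv
    # Orthogonalize: keep v_x, derive v_z = v_x x v_y, then re-derive v_y
    vx = Re[:, 0] / np.linalg.norm(Re[:, 0])
    vz = np.cross(vx, Re[:, 1])
    vz /= np.linalg.norm(vz)
    vy = np.cross(vz, vx)
    R_init = np.column_stack((vx, vy, vz))
    # Initial translation maps the CAD center point onto the measured one
    T_init = np.asarray(p_c2, float) - R_init @ np.asarray(p_c1, float)
    return R_init, T_init
```

For exactly matched, noise-free normals this recovers the true rotation; with noisy measurements the cross-product step restores orthogonality of the estimate.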

Verification vision
In the fine localization, the target object's pose is estimated by fitting surface models to the measured 3D-points.
Here the equations for fitting planar surfaces are presented; equations for fitting other surface types have been presented in [14]. The rotation correction ∆R is computed as a product of rotation matrices

∆R = Rot(x, ∆φ_x) Rot(y, ∆φ_y) Rot(z, ∆φ_z) (9)

in which ∆φ_x, ∆φ_y and ∆φ_z are the rotation corrections with respect to the coordinate axes. The translational correction ∆T is defined by three translational components,

∆T = [∆t_x ∆t_y ∆t_z]^T.

The correction vector ∆ is defined as

∆ = [∆φ_x ∆φ_y ∆φ_z ∆t_x ∆t_y ∆t_z]^T

and is calculated from

∆ = (J^T J)^{-1} J^T E

in which J is the Jacobian matrix gathered from the partial derivatives of the error functions e_i with respect to the correction vector ∆.
E is the error vector.
The partial derivatives are computed as follows. If the reference feature is a plane, the error e_i is computed as

e_i = n_p^T (p_Mi − p_p)

in which p_Mi is the i-th measured point and the plane surface is defined by a point p_p and a normal n_p. The partial derivative of e_i with respect to p_Mi is

∂e_i/∂p_Mi = n_p^T.

The partial derivative of p_Mi with respect to ∆ follows from the linearized rigid correction p_Mi → ∆R p_Mi + ∆T,

∂p_Mi/∂∆ = [−[p_Mi]_× I_3]

where [p_Mi]_× is the skew-symmetric cross-product matrix of p_Mi. The updated T and R of the target object pose are calculated as

R ← ∆R R,  T ← ∆R T + ∆T.
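A compact sketch of this Gauss-Newton point-to-plane fitting is given below. It is an illustration of the iteration only, assuming a single reference plane and the linearized Jacobian row n_p^T [−[p]_× I]; the function name and convergence threshold are our own choices, not the paper's implementation.

```python
import numpy as np

def _rot(ax, a):
    """Elementary rotation matrix about one coordinate axis."""
    c, s = np.cos(a), np.sin(a)
    if ax == 'x':
        return np.array([[1., 0., 0.], [0., c, -s], [0., s, c]])
    if ax == 'y':
        return np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def fit_pose_to_plane(points, plane_point, plane_normal, iters=20):
    """Gauss-Newton correction of a rigid pose minimizing the
    point-to-plane errors e_i = n_p . (p_i - p_p).
    Unknowns: delta = (dphi_x, dphi_y, dphi_z, dt_x, dt_y, dt_z)."""
    pts = np.asarray(points, float)
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    pp = np.asarray(plane_point, float)
    R, T = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = pts @ R.T + T
        E = (moved - pp) @ n                       # error vector E
        # Jacobian rows: since n.(dphi x p) = dphi.(p x n), J_i = [(p_i x n)^T, n^T]
        J = np.hstack([np.cross(moved, n), np.tile(n, (len(pts), 1))])
        # Normal-equation step, solved as least squares with the
        # sign convention that reduces the residual
        delta, *_ = np.linalg.lstsq(J, -E, rcond=None)
        dR = _rot('x', delta[0]) @ _rot('y', delta[1]) @ _rot('z', delta[2])
        R, T = dR @ R, dR @ T + delta[3:]
        if np.linalg.norm(delta) < 1e-12:
            break
    return R, T
```

With only one plane the problem is under-determined (in-plane motions are free); the least-squares solver then returns the minimum-norm correction. The full pose estimator uses three non-parallel surfaces to constrain all six degrees of freedom.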

Compliant motion control
The compliant motion control primitive, "Cartesian impedance motion control", lays the basis for motion accuracy and compliant contact motions. The impedance controller regulates the relation between the robot's position x and the contact force F according to the time-domain equation

M_t (ẍ − ẍ_0) + B_t (ẋ − ẋ_0) + K_t (x − x_0) = F

in which M_t is the target mass, B_t the target damping, K_t the target stiffness, x_0 the nominal position trajectory and x the actual position. A schematic diagram of the corresponding one degree-of-freedom impedance-controlled motion is given in Figure 3.
The feasible values for the target model parameters depend on the dynamics of the robot, the stiffness of the environment and the characteristics of the contact task. Examples of derived impedance parameters for two target systems are given in [14]. A detailed description of designing a robust impedance controller for an industrial robot arm is given in [14], and an analysis of the impedance controller's capability to follow inclined and curved surfaces while keeping the contact force within desired limits is given in [15].
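The target model above can be illustrated with a one degree-of-freedom discrete-time update. This is an illustrative discretization (semi-implicit Euler) of the target dynamics, with a constant nominal trajectory, not the robust controller design of [14]; all names and the time step are assumptions for the sketch.

```python
def impedance_step(x, v, x0, f_meas, f_nom, Mt, Bt, Kt, dt):
    """One admittance-style update of the 1-DOF impedance target model
    Mt*e'' + Bt*e' + Kt*e = f_meas - f_nom, where e = x - x0 and the
    nominal trajectory x0 is held constant.
    Returns the new position and velocity of the compliance frame."""
    e = x - x0
    a = (f_meas - f_nom - Bt * v - Kt * e) / Mt   # target acceleration
    v_new = v + a * dt                            # semi-implicit Euler
    x_new = x + v_new * dt
    return x_new, v_new
```

At steady state with a constant measured force f, the position deviation settles at e = (f − f_nom) / K_t, i.e. the nominal force offset pushes the robot into the surface until the target stiffness balances the contact force.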

Skill modeling and compound skills
The compliant motion control primitive, "Cartesian impedance motion control", is outlined as a UML activity model in Figure 1. It has two types of input parameters. The setup, or "configuration", parameters are the nominal force, the compliance frame, and the impedance control parameters (stiffness, damping and inertia). In addition, it has control parameters, i.e. the measured forces/torques, and as output parameters the pose increments. As a sequence it is simple: monitor the contact forces and torques, execute the impedance control compensator, and execute the incremental Cartesian motions with the robot.
Typically, the compliance frame coincides with the tool frame, but certain tasks may require offsetting the compliance frame with respect to the tool frame. The nominal force is a force offset that deviates the position of the robot in the desired direction in compliance with the target impedance model.
The surface processing skill represents a compound skill. There are many tasks where the management and control of the tool's contact with the target object is critical. The surface processing skill relies on the impedance-controlled motion sub-skill. Its setup input parameters are the process- and tool-specific nominal force and the tool coordinate frame, which are further forwarded to the impedance motion control primitive (the compliance frame is set to the contact point, i.e. the tool end tip), as well as the tool-dependent maximum speed. The surface processing skill includes motions for approaching the target, getting into contact, executing the process motions in contact and impedance-controlled, and finally departing from the target. This is outlined in Figure 2.
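The composition above can be sketched as a parametrized compound skill. This is a hypothetical sketch only: the class structure and the robot interface methods (`move_to`, `guarded_move`, `set_impedance`, etc.) are illustrative stand-ins for the actual robot and PLC interfaces, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ImpedanceParams:
    """Setup parameters forwarded to the impedance motion primitive."""
    nominal_force: float            # N, presses the tool against the surface
    stiffness: float
    damping: float
    inertia: float
    compliance_frame: str = "tool_tip"

@dataclass
class SurfaceProcessingSkill:
    """Compound skill: approach, contact, process in contact, depart."""
    impedance: ImpedanceParams
    max_speed: float                # tool-dependent speed limit
    path: list = field(default_factory=list)  # poses from off-line programming

    def execute(self, robot):
        robot.move_to(self.path[0])                    # approach the target
        robot.guarded_move(self.impedance)             # get into contact
        robot.set_impedance(self.impedance)            # enable compliance
        for pose in self.path[1:]:
            robot.move_to(pose, speed=self.max_speed)  # process in contact
        robot.clear_impedance()
        robot.retract()                                # depart from the target
```

The point of the sketch is that re-configuring the skill for another object or tool only means supplying new parameter values and a new path; the execution sequence itself stays fixed.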

Skill programming
Programming of a skill can be based on off-line or on-line programming methods. In on-line programming, skill parameters are taught to the system using the actual object in the real world. Off-line programming methods, on the other hand, rely on using off-line programming tools together with 3D CAD models of the robots, tools, devices, and target objects to configure the skill parameters. Skill parameters vary with different actions; they should define the actions for all the components in a robot system. Our proposal for the data flow of skill programming is given in Figure 4: starting from the skill model and object models (or the real object), the skill parameters are acquired.

Skill control
The robot system control model represents the architecture which enables the robot to integrate and synchronize sensor data and robot actions. The control architecture is shown in Figure 5. First, the work object is localized globally in the robot's workspace with an accuracy of a few centimeters. For this, the sensor system needs a set of reference features, in this case a set of three planar surface segments and some metric parameters of these (e.g., sizes, nominal directions and center points). These reference features are programmed with the robot programming tool. The estimated initial position of the work object is used to program scanning movements for the robot to localize the object more accurately. This means that the robot uses a more accurate 2D-profile scanner to scan local features of the object to get enough data for more precise localization. In the precise localization phase, the measured data are combined with the reference data to produce sub-millimeter-level accuracy for the object localization before applying the grinding motions. Finally, after the object has been localized, the grinding movements can be executed.

Global localization
The aim of global localization is to roughly locate the work object within the robot workspace. The accuracy of the localization depends on the sensor used in the process. Generally, it should be within a few centimeters, depending on the view angle of the scanning sensor and the scanning distance. The global localization sensor used in the experiments is Intel's RealSense D415, a 3D depth sensor that uses stereo vision with an infrared projector to create a depth image of the scene (see Figure 7). 3D depth sensors are used because they offer a suitable range with reasonable accuracy for this purpose. With multiple external sensors, the whole workspace of an industrial 6-DOF robot can be monitored reliably. Since the object was known to be located on a certain side of the robot, a single 3D depth sensor was determined to be sufficient. Localization itself was performed by comparing and matching the reference and measured surfaces. Based on the camera data, the measured surfaces can be represented in the robot coordinate system and matched against the reference surfaces. The reference surface normals, on the other hand, are represented in the 3D model's coordinate system. Thus, the comparison of surface normals yields the position of the 3D model's coordinate system relative to the robot's coordinate system.
The RoboDK tool was used to gather the reference data from the 3D CAD model; it was already used to program the scanning and grinding movements for the robot. The surface data (a 3D point and a surface normal) were obtained by selecting sets of three points, one set from each of three reference surfaces of the 3D model, and calculating the normal and reference point for every surface (see Figure 8). Surface normals and related 3D points from three non-parallel surfaces needed to be gathered to express the position and orientation of the object unambiguously. Adding the points and calculating the normal vectors and the 3D points were done through the Python API. The corresponding measured surfaces (3D points and surface normals), in turn, were solved by segmenting the measured point cloud and using principal component analysis (PCA) on the segments, as seen in Figure 6. The direction of least variance was defined as the surface normal of a segment (blue axis). The main challenge of the segmentation phase is finding suitable segmentation parameters, which depend strongly on the geometric surfaces of the work object.
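The PCA step for the measured segments can be sketched as follows: the segment's surface normal is the eigenvector of the covariance matrix with the smallest eigenvalue, i.e. the direction of least variance. The function name and interface are illustrative.

```python
import numpy as np

def segment_normal(points):
    """Center point and surface normal of a point-cloud segment via PCA.
    The normal is the least-variance direction, i.e. the eigenvector of
    the covariance matrix belonging to the smallest eigenvalue."""
    pts = np.asarray(points, float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # least-variance direction
    return center, normal / np.linalg.norm(normal)
```

Note that the sign of the PCA normal is arbitrary; in practice it must be disambiguated, e.g. by orienting it toward the sensor.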

Precise localization
After global localization had given an estimate of the object's location and orientation, the scanning movements were updated based on the programmed nominal paths. The scanning movements were performed by a robot with a 2D-profile scanner attached to its flange, executing scanning motions in the proximity of the object. 2D-profile scanners were used due to their high accuracy, which increases the accuracy of the localization significantly. The scanning movements resulted in a 3D point cloud of the features that were within the sensor's measurement range, which was then filtered and segmented into different regions using the Point Cloud Library's [13] point cloud processing algorithms.
After filtering and segmentation, the different regions of the point cloud represented different planes of the object. The same reference surfaces (obtained in the same way as for global localization) were sent to the pose estimator, which was integrated into the same software used to capture and process the point cloud. The pose estimator iteratively corrects the pose parameters so that the distance between the measured points and the corresponding surfaces of the 3D model is minimized. The pose estimator provides a more accurate pose of the target object in the robot coordinate system, which was used to update the position of the target object in the robot program ("user coordinates"). The scanning movements, the processed point cloud and the fitted point cloud can be seen in Figure 9 and Figure 10.

Grinding skills
Even though the object was localized quite accurately after precise localization, some localization errors are still present. Even with high-quality equipment and calibration routines, errors occur due to manufacturing defects that make the actual object deviate from what is derived from the 3D CAD model. These localization errors were compensated by applying impedance-controlled compliant motions for the grinding movements. In these force-controlled motions, the robot used a force/torque sensor integrated into its flange to measure the contact forces and torques between the object and the tool. Besides compensating for localization errors, force control was also used to enhance the effect of the grinding itself by applying a nominal force to the surface.
The seam grinding skill is based on four input parameters, i.e. target frames that define the characteristics of the seam. The targets were placed at the end regions of the seam, as shown in Figure 11, so that the target frame pairs were at the start and end of the seam on the adjacent surfaces. All the approach and grinding movements were based on these four input target frames. The target frames were placed easily in RoboDK by using the "Teach target(s) on the surface" feature, which sets a target frame so that its Z-axis is perpendicular to the surface and points downwards. After this, the targets only needed to be fine-tuned in location and in orientation around the Z-axis. Once the targets had been placed, the paths of the grinding skill were created and simulated (see Figure 12). The simulation showed whether the path could not be completed due to crossing singularities or reachability issues. All of this made configuring the skill and validating the program for another seam relatively easy and fast. Seam grinding was designed to proceed in two phases. In the first phase, the seam is ground with long vertical movements with an angle grinder to remove all excess material from the seam. The objective of this phase is a sharp-pointed seam without any extra material left from welding. In the second phase, the seam is rounded using a sander. Both phases use the same four input targets to form the grinding programs. The results can be seen in Figure 13.
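As a small illustration of how a pair of target frames parameterizes the grinding motions, the following hypothetical helper interpolates evenly spaced pass points along one straight seam line (one start/end target pair of the four frames). The real skill also generates approach and depart motions and tool orientations; this sketch covers positions only.

```python
import numpy as np

def seam_pass_points(t_start, t_end, n_passes):
    """Evenly spaced grinding pass points along a straight seam defined
    by its start and end target positions. Returns an (n_passes, 3)
    array of points; orientations are omitted in this sketch."""
    t_start = np.asarray(t_start, float)
    t_end = np.asarray(t_end, float)
    s = np.linspace(0.0, 1.0, n_passes)[:, None]   # interpolation parameter
    return (1.0 - s) * t_start + s * t_end
```

Because the whole path is derived from the four taught frames, re-targeting the skill to another seam only requires teaching four new frames, which is what makes the re-configuration fast.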

Summary
In this paper we have shown the design and implementation of off-line programmed skills.Skills such as global localization, scanning movements, precise localization and straight seam grinding were used to localize and grind the object.Programmed skills were tested successfully in a real robot environment.

Figure 3: The schematic control system transfer functions of the impedance controlled robot

Figure 4: Skill programming can be done off-line based on geometric models or on-line based on sensor data

The control architecture is based on proprietary and open interfaces, the latter of which are in our case based on ROS (Robot Operating System) [10]. Proprietary interfaces are implemented for connecting devices, i.e. sensors and robots, and data interfaces with adapters to ROS. Key functional components, representing the objects (or "swim lanes") in the skill models, include an Object Detector component and a variety of 3D point cloud acquisition components (point cloud sources), based on 3D cameras (time-of-flight or triangulation) or on the cooperation of a set of motion provider components (robots, transfer axes and conveyors) together with 2D laser profiler sensor components. Crosswise usage of these is supported over ROS-based interfaces for 3D point cloud sources and 2D profiles.

Figure 5: A flexible system architecture for integrating sensor data and robot control

Figure 8: Gathering reference (on the left) and corresponding measured surface points and normals [12]

Figure 10: Segmented point cloud in precise localization, obtained by scanning motions and fitted to the 3D model [12]

Figure 11: Defining the location and orientation of the seam [12]

Figure 12: Full path of the grinding program visualized [12]

Figure 13: Grinding skill in different phases, from left to right: seam after welding, seam after vertical grinding, and fully ground seam [12]