Regularized empirical risk minimization plays an important role in machine learning theory. We investigate a broad class of regularized pairwise learning (RPL) methods based on kernels. One example is regularized minimization of the error entropy loss, which has recently attracted considerable interest from the viewpoint of consistency and learning rates. Another example is machine learning for ranking problems. We show that such RPL methods additionally have good statistical robustness properties if the loss function and the kernel are chosen appropriately. We treat two cases of particular interest: (i) a bounded, non-convex loss function and (ii) an unbounded, convex loss function satisfying a certain Lipschitz-type condition. We also give a result on the qualitative robustness of the empirical bootstrap of RPL methods. This is joint work with Prof. Dr. Ding-Xuan Zhou (City University of Hong Kong). The talk is based on the paper "Robustness of Regularized Pairwise Learning Methods Based on Kernels", which has been accepted for publication in the Journal of Complexity.
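To make the setting concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of a kernel-based RPL estimator: it minimizes an empirical pairwise least-squares risk, a standard convex pairwise loss used for ranking, plus an RKHS regularizer over a Gaussian-kernel expansion, via plain gradient descent. All function names, the choice of loss, and all parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def pairwise_ls_objective(alpha, K, y, lam):
    # Empirical pairwise least-squares risk plus RKHS regularizer:
    # (1/n^2) sum_{i,j} ((f(x_i)-f(x_j)) - (y_i-y_j))^2 + lam * ||f||_H^2,
    # with f = sum_k alpha_k K(x_k, .), so ||f||_H^2 = alpha' K alpha.
    f = K @ alpha
    A = (f[:, None] - f[None, :]) - (y[:, None] - y[None, :])
    return (A ** 2).mean() + lam * alpha @ K @ alpha

def fit_rpl(X, y, lam=0.01, gamma=0.5, lr=0.05, steps=500):
    # Gradient descent on the kernel-expansion coefficients alpha
    # (an illustrative solver, not the method analyzed in the paper).
    K = gaussian_kernel(X, X, gamma)
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(steps):
        f = K @ alpha
        A = (f[:, None] - f[None, :]) - (y[:, None] - y[None, :])
        # d(risk)/d f_i = (4/n^2) * sum_j A_{ij}, using antisymmetry of A.
        g_f = 4.0 / n**2 * A.sum(axis=1)
        grad = K @ g_f + 2.0 * lam * (K @ alpha)
        alpha -= lr * grad
    return alpha, K
```

The pairwise structure, with the loss depending on two samples at a time, is what distinguishes RPL from standard (pointwise) regularized empirical risk minimization and is the source of the additional technical work behind the robustness results discussed in the talk.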