
## Pioneering Research on Signal Processing Methods Based on Fixed Point Approximation of Nonexpansive Mappings and Their Applications

1. Hybrid steepest descent method and its applications to inverse problems

Signal processing and the computational fixed point theory of nonexpansive mappings have developed as two sides of the same coin. Many inverse problems in signal processing, such as parameter estimation and signal and image recovery problems, have been solved through elegant translations into fixed point problems; that is, the solution has been characterized as a fixed point of a certain mapping defined in a vector space.

The underlying principle of many such computational algorithms is to apply a certain nonexpansive mapping iteratively, generating a sequence that converges to one of its fixed points. This principle rests on a mathematical discovery: the solution set of such an inverse problem can be expressed completely as the fixed point set, i.e., the set of all fixed points, of a so-called nonexpansive mapping in a real Hilbert space. However, a nonexpansive mapping often has infinitely many fixed points, e.g., in applications to sparsity-aware inverse problems, which have recently attracted much attention. Selecting a specific point from the fixed point set is therefore of great importance for further advanced applications of fixed point theory to signal processing.
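As a minimal illustration of this principle (not taken from the cited works), the composition of orthogonal projections onto two closed convex sets is nonexpansive, and iterating it yields a sequence converging to a fixed point, here a point in the intersection of two lines in the plane; the sets and starting point are hypothetical choices:

```python
import numpy as np

def project_onto_line(x, a, b):
    """Orthogonal projection of x onto the line {p : a.p = b}, with ||a|| = 1."""
    return x - (a @ x - b) * a

a1, b1 = np.array([1.0, 0.0]), 1.0   # line {(u, v) : u = 1}
a2, b2 = np.array([0.0, 1.0]), 2.0   # line {(u, v) : v = 2}

# T is nonexpansive as a composition of projections; its fixed point set
# is the intersection of the two lines, the single point (1, 2).
T = lambda x: project_onto_line(project_onto_line(x, a1, b1), a2, b2)

x = np.array([5.0, -3.0])
for _ in range(50):
    x = T(x)                          # fixed point iteration

print(x)  # converges to the intersection point (1, 2)
```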

Nevertheless, most iterative processes can only return an "unspecified" point of the fixed point set, and even that requires many iterations; hence, by common sense, it seemed unrealistic to hope for an "optimal" point of the fixed point set. In a pioneering effort, Dr. Yamada discovered a simple but innovative iterative process for finding a solution of a convex optimization problem defined over the fixed point set of a nonexpansive mapping. The algorithmic solution was named the hybrid steepest descent method, because it blends key ideas from the steepest descent method and from fixed point theory.
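The flavor of the update can be sketched as follows, under assumed illustrative choices (T a projection in the plane, a simple quadratic objective, and step sizes λ_n = 1/n vanishing but non-summable); the method in the cited papers is far more general:

```python
import numpy as np

def T(x):
    # nonexpansive mapping: projection onto the line {(u, v) : v = 2};
    # its fixed point set Fix(T) is exactly that line
    return np.array([x[0], 2.0])

c = np.array([3.0, 0.0])             # assumed target of the quadratic objective

def grad_theta(x):
    # gradient of the smooth convex objective Theta(x) = 0.5 * ||x - c||^2
    return x - c

mu = 1.0
x = np.array([10.0, -5.0])
for n in range(1, 5001):
    lam = 1.0 / n                    # step sizes: lam -> 0 and sum(lam) = inf
    y = T(x)
    x = y - lam * mu * grad_theta(y) # hybrid steepest descent update

print(np.round(x, 2))  # approaches (3, 2), the minimizer of Theta over Fix(T)
```

Plain steepest descent on Theta alone would converge to c = (3, 0), which is not a fixed point of T; the blended update steers the iterate toward the best point within Fix(T).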

Since this discovery enables us to utilize the full body of findings in the modern fixed point theory of nonexpansive mappings, we can attain the best solution among the infinitely many solutions of many inverse problems. The remarkable applicability of this method to inverse problems in signal and image processing has been examined extensively, not only by his school but also by many international scientists and engineers.

2. Adaptive projected subgradient method and its applications to adaptive signal processing

Many adaptive filtering algorithms based on orthogonal projections update the previous estimate to a better one, closer to unknown desired information, by utilizing a priori knowledge as well as the latest statistical knowledge obtained from observed data. Dr. Yamada invented a unified scheme named the Adaptive Projected Subgradient Method (APSM). This scheme is a time-varying extension of Polyak's subgradient algorithm, originally developed for nonsmooth convex optimization, to the case where the convex objective itself keeps changing throughout the process. Under this simple umbrella, a unified convergence analysis has been established for a wide range of adaptive algorithms. Moreover, the APSM has served as a guiding principle for creating various powerful adaptive algorithms for acoustic systems, wireless communication systems, distributed learning over diffusion networks, online machine learning in reproducing kernel Hilbert spaces, etc.
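One projection-based adaptive update in this spirit can be sketched as sequential projection onto hyperslabs determined by streaming data; the unknown system `h_true`, the slab half-width, and the noiseless data model below are illustrative assumptions, not the setting of any specific cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.5, -1.0, 2.0])    # unknown system (assumed for the demo)
eps = 0.01                             # hyperslab half-width (tolerance)

def project_hyperslab(w, u, d, eps):
    """Projection onto the closed convex set {w : |u.w - d| <= eps}."""
    r = u @ w - d
    if abs(r) <= eps:
        return w                       # already inside: w is left unchanged
    shift = (r - np.sign(r) * eps) / (u @ u)
    return w - shift * u

w = np.zeros(3)                        # initial filter estimate
for _ in range(2000):
    u = rng.standard_normal(3)         # input regressor from observed data
    d = u @ h_true                     # observation (noiseless for simplicity)
    w = project_hyperslab(w, u, d, eps)

print(np.round(w, 2))                  # close to h_true
```

Each datum defines a new convex set consistent with the observation, so the "objective" changes at every step; the estimate is driven toward the intersection of the recent sets, which is the geometric intuition behind such time-varying schemes.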

#### Publications

[1] I. Yamada, "The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings," Studies in Computational Mathematics, vol. 8, pp. 473-504, 2001.

[2] I. Yamada, N. Ogura and N. Shirakawa, "A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems," in Inverse Problems, Image Analysis, and Medical Imaging (Z. Nashed and O. Scherzer, Eds.), Contemporary Mathematics 313, American Mathematical Society, pp. 269-305, 2002.

[3] I. Yamada and N. Ogura, "Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings," Numerical Functional Analysis and Optimization, vol. 25, no. 7&8, pp. 619-655, 2004.

[4] M. Yukawa, K. Slavakis and I. Yamada, "Adaptive parallel quadratic-metric projection algorithms," IEEE Transactions on Audio, Speech and Language Processing, vol. 15, no. 5, pp. 1665-1680, 2007.

[5] K. Slavakis and I. Yamada, "Robust wideband beamforming by the hybrid steepest descent method," IEEE Transactions on Signal Processing, vol. 55, no. 9, pp. 4511-4522, 2007.

[6] R. L. G. Cavalcante and I. Yamada, "Multiaccess interference suppression in OSTBC-MIMO systems by adaptive projected subgradient method," IEEE Transactions on Signal Processing, vol. 56, no. 3, pp. 1028-1042, March 2008.

[7] K. Slavakis, S. Theodoridis and I. Yamada, "Online kernel-based classification using adaptive projection algorithms," IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 2781-2796, 2008.

[8] N. Takahashi and I. Yamada, "Parallel algorithms for variational inequalities over the Cartesian product of the intersections of the fixed point sets of nonexpansive mappings," Journal of Approximation Theory, vol. 153, pp. 139-160, August 2008.

[9] R. L. G. Cavalcante and I. Yamada, "A flexible peak-to-average power reduction scheme by the adaptive projected subgradient method," IEEE Transactions on Signal Processing, vol. 57, no. 4, pp. 1456-1468, Apr. 2009.

[10] N. Takahashi and I. Yamada, "Steady-state mean-square performance of epsilon-hyperslab projection algorithm," IEEE Transactions on Signal Processing, vol. 57, no. 9, pp. 3361-3372, Sep. 2009.

[11] K. Slavakis, S. Theodoridis and I. Yamada, "Adaptive constrained filtering in reproducing kernel Hilbert spaces: the beamforming case," IEEE Transactions on Signal Processing, vol. 57, no. 12, pp. 4744-4764, Dec. 2009.

[12] J.-L. Starck, F. Murtagh and J. M. Fadili, Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity, Cambridge University Press, 2010.

[13] M. Yukawa, K. Slavakis and I. Yamada, "Multi-domain adaptive learning based on feasibility splitting and adaptive projected subgradient method," IEICE Transactions on Fundamentals, vol. E93-A, no. 2, pp. 456-466, Feb. 2010.

[14] S. Theodoridis, K. Slavakis and I. Yamada, "Adaptive learning in a world of projections: a unifying framework for linear and nonlinear classification and regression tasks," IEEE Signal Processing Magazine, vol. 28, no. 1, pp. 97-123, Jan. 2011.

[15] I. Yamada, M. Yukawa and M. Yamagishi, "Minimizing the Moreau envelope of nonsmooth convex functions over the fixed point set of certain quasi-nonexpansive mappings," in Fixed Point Algorithms for Inverse Problems in Science and Engineering (Bauschke, Burachik, Combettes, Elser, Luke and Wolkowicz, Eds.), Springer-Verlag, pp. 345-390, 2011.

[16] R. L. G. Cavalcante, A. Rogers, N. R. Jennings and I. Yamada, "Distributed asymptotic minimization of sequences of convex functions by a broadcast adaptive subgradient method," IEEE Journal of Selected Topics in Signal Processing, vol. 5, Aug. 2011.