Matrix Polynomials

Since its inception, the theory of matrix polynomials has been influenced by its applications to differential equations and vibrating systems. In fact, vibrating systems strongly motivated the first works devoted primarily to matrix polynomials: one by Frazer, Duncan, and Collar in 1938~\cite{Frazer1955} and the other by Lancaster in 1966~\cite{Lancaster1966}. The theory of matrix polynomials has also been influenced by results from matrix theory and complex analysis. For instance, the canonical set of Jordan chains defined in~\cite{Gohberg2009} is a natural generalization of a cycle of generalized eigenvectors, as defined in~\cite{Friedberg2003}. A generalized Rouch\'e theorem is presented in~\cite{Gohberg1990}, and this result is used to prove a generalized Pellet theorem for matrix polynomials~\cite{Melman2013}. These generalizations have benefited the study of matrix polynomials and their spectra, and it is in this spirit that we have focused on similar generalizations from the theory of matrices and polynomials.

In~\cite{Cameron2015}, we use the generalized Rouch\'e theorem to provide spectral bounds for unitary matrix polynomials. This result can be viewed as a generalization of the fact that the eigenvalues of a unitary matrix lie on the unit circle, or of the fact that a scalar polynomial with unimodular coefficients has all of its roots in the annulus $\mathcal{A}(1/2,2)=\{z\in\mathbb{C}\colon~\frac{1}{2}<|z|<2\}$.
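The scalar annulus containment is easy to check numerically. The following sketch (an illustration with a randomly chosen unimodular coefficient sequence, not code from the paper) verifies that every root of such a polynomial lies in $\mathcal{A}(1/2,2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
deg = 20
# random unimodular coefficients a_k = e^{i*theta_k}, so |a_k| = 1
coeffs = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, deg + 1))
roots = np.roots(coeffs)
radii = np.abs(roots)  # every root satisfies 1/2 < |z| < 2
```

In practice the roots of such polynomials cluster near the unit circle, well inside the annulus.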

In~\cite{Cameron2016}, we use a pseudo-inner product to provide a constructive proof that every matrix polynomial can be reduced to a matrix rational function in Hessenberg form. In general, an a priori reduction of a matrix polynomial to a simpler form can reduce the cost of computing its eigenvalues. For additional methods capable of reducing a matrix polynomial to a simpler form, see~\cite{Nakatsukasa2018}.

In~\cite{Cameron2018_DRSMP}, we present a generalization of Descartes' rule of signs for matrix polynomials. This result provides an upper bound on the number of real positive eigenvalues of a self-adjoint matrix polynomial whose coefficients are each positive definite, negative definite, or zero. Furthermore, this result has applications to overdamped vibrating systems, which are discussed in~\cite{Duffin1955,Gohberg2009,Lancaster1966,Tisseur2001}.
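For context, the classical scalar rule that this result generalizes is easy to check directly. The sketch below (an illustrative scalar example, not the matrix-polynomial result itself) compares the number of positive roots with the number of sign variations in the coefficient sequence:

```python
import numpy as np

def sign_changes(coeffs):
    """Number of sign variations among the nonzero coefficients."""
    signs = np.sign([c for c in coeffs if c != 0])
    return int(np.sum(signs[:-1] != signs[1:]))

# p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6: three positive roots
coeffs = [1, -6, 11, -6]
roots = np.roots(coeffs)
pos_real = [r for r in roots if abs(r.imag) < 1e-10 and r.real > 0]
# Descartes: the number of positive roots never exceeds the sign-change count
assert len(pos_real) <= sign_changes(coeffs)
```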

Student Research

Currently, I am working with Davidson College student Michael Robertson on inclusion sets for the eigenvalues of a matrix polynomial. We are studying which inclusion sets give the most information about the spectrum of a matrix polynomial. Applications of this work include initial estimates for iterative methods that compute the eigenvalues of a matrix polynomial.

Previous work includes a project with College of Idaho student Leo Trujillo on the numerical range of a matrix polynomial. Our work consisted of writing Python code to create plots of the numerical range and test a conjecture on Descartes' rule of signs for matrix polynomials.
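To illustrate the kind of computation involved, the sketch below traces boundary points of the numerical range of a constant matrix by the standard rotation technique (the matrix-polynomial case is more involved; this is only the classical building block, and the matrix $A$ is a hypothetical example):

```python
import numpy as np

def numerical_range_boundary(A, n_angles=180):
    """Boundary points of F(A) = {v* A v : ||v|| = 1}: for each rotation
    angle, take an eigenvector of the largest eigenvalue of the rotated
    Hermitian part of A."""
    pts = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
        w, V = np.linalg.eigh(H)      # eigenvalues in ascending order
        v = V[:, -1]                  # eigenvector of the largest eigenvalue
        pts.append(v.conj() @ A @ v)  # boundary point of F(A)
    return np.array(pts)

A = np.array([[1.0, 1.0], [0.0, -1.0]], dtype=complex)
pts = numerical_range_boundary(A)
```

A scatter plot of `pts` in the complex plane (e.g., with matplotlib) traces the boundary of the numerical range.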

Numerical Analysis

Modern-day polynomial root problems, such as those that stem from computer algebra applications or digital signal processing, require software that can solve for the roots of a polynomial of degree well above $100$, and sometimes on the order of several thousand or higher~\cite{Pan1997,Sitton2003}. Root solvers that use an explicit deflation strategy, such as \emph{zroots}~\cite{Press} from Numerical Recipes and \emph{C02AFF}~\cite{NAG} from the NAG library, are therefore not suitable for today's large-degree polynomial equations. For this reason, we have focused on deflation strategies suited to large-degree polynomials and on their application to the polynomial eigenvalue problem.

In~\cite{Cameron2018_PML}, we present a modification of Laguerre's method that uses an implicit deflation strategy similar to the modification proposed by Ehrlich and Aberth for Newton's method~\cite{Aberth1973,Ehrlich1967}. Our method has several strengths, including the ability to compute all roots of a large-degree polynomial, a local fourth-order convergence rate that is observed in practice, and a workload well suited to data parallelism. We use this method to form an algorithm for computing all roots of a polynomial. Our algorithm is implemented in Fortran 90 using a double-precision floating-point format and is available at~\url{https://github.com/trcameron/FPML}.
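The implicit deflation idea is easiest to see in the Ehrlich-Aberth iteration itself: each Newton correction is divided by a term that repels the iterate from all other current approximations, so converged roots never need to be divided out explicitly. A minimal sketch (not the Fortran implementation, and with deliberately crude initial estimates):

```python
import numpy as np

def aberth(coeffs, tol=1e-12, max_iter=500):
    """All roots of a polynomial at once via the Ehrlich-Aberth iteration.
    coeffs are listed from the highest degree down, as in numpy.
    Sketch only: assumes a nonzero constant term and simple roots."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    n = p.order
    # crude initial estimates on a circle; production codes choose these
    # far more carefully
    r = abs(coeffs[-1] / coeffs[0]) ** (1.0 / n)
    z = r * np.exp(2j * np.pi * np.arange(n) / n + 0.4j)
    for _ in range(max_iter):
        N = p(z) / dp(z)  # Newton corrections
        # implicit deflation: repel each iterate from all of the others
        S = np.array([np.sum(1.0 / (z[k] - np.delete(z, k)))
                      for k in range(n)])
        dz = N / (1.0 - N * S)
        z = z - dz
        if np.max(np.abs(dz)) < tol:
            break
    return z

roots = aberth([1.0, -6.0, 11.0, -6.0])  # (x - 1)(x - 2)(x - 3)
```

The modified Laguerre iteration in~\cite{Cameron2018_PML} replaces the Newton correction with a Laguerre-style correction, which raises the local convergence rate.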

In~\cite{Cameron2017}, we discuss the use of Laguerre's method to solve the polynomial eigenvalue problem. Topics include backward error analysis, stopping criteria, initial estimates, and the efficient computation of eigenvectors. Ultimately, it is noted that the standard Laguerre's method does not provide an advantage over the Ehrlich-Aberth method, which is applied in~\cite{Bini2013}. However, our modification of Laguerre's method~\cite{Cameron2018_PML} does provide such an advantage; we are currently revising this work in light of that advance.

In~\cite{Cameron2018_FPT}, we present a lively and inviting treatment of the floating-point format, the IEEE 754 standard, and compensated arithmetic techniques. This work was motivated by a current student project in which we aim to improve the accuracy and speed of our root-solver software~\cite{Cameron2018_PML} by using compensated arithmetic techniques and multithreading, respectively.
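A representative compensated technique is Kahan summation, which carries the rounding error of each addition in a correction term so that the accumulated error stays bounded independently of the number of terms. A minimal sketch (an illustration, not the routines used in our software):

```python
from fractions import Fraction

def kahan_sum(xs):
    """Compensated (Kahan) summation of a sequence of floats."""
    s = 0.0
    c = 0.0
    for x in xs:
        y = x - c          # apply the stored correction
        t = s + y
        c = (t - s) - y    # rounding error of t = s + y
        s = t
    return s

vals = [0.1] * 100_000
naive = sum(vals)
comp = kahan_sum(vals)
exact = float(Fraction(0.1) * 100_000)  # correctly rounded reference
```

Here `comp` stays within a few units in the last place of the exact sum, while `naive` drifts as the rounding errors accumulate.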

Student Research

Currently, I am working with Davidson College student Aidan O'Neill on ways to improve the speed and accuracy of our root-solver software~\cite{Cameron2018_PML}. We are studying the compensated floating-point routines discussed in~\cite{Graillat2008,Graillat2005} to sharpen our stopping criterion and thereby improve the accuracy of our algorithm. Furthermore, we are interested in an implementation that exploits the parallel capabilities of the user's computing system.
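The building block of such compensated routines is an error-free transformation: two floating-point numbers replace the sum $a+b$ by the rounded sum and its exact rounding error. A minimal sketch of Knuth's TwoSum:

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and
    a + b = s + e exactly, using only floating-point operations."""
    s = a + b
    bv = s - a                      # the part of b absorbed into s
    e = (a - (s - bv)) + (b - bv)   # exact rounding error of the addition
    return s, e

# 1e-16 is lost by the plain addition 1.0 + 1e-16, but TwoSum recovers it
s, e = two_sum(1.0, 1e-16)
```

Compensated Horner evaluation, as in~\cite{Graillat2008}, chains such transformations through the Horner recurrence to evaluate a polynomial as if in twice the working precision.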

Previous work includes a group project with College of Idaho students Will Callahan, Sam Chandler, Johanna Mori, and Leo Trujillo on numerical solutions to a differential equation. Our work consisted of using Chebyshev polynomials to approximate a solution of the differential equation. In addition, I worked with Washington State University student Nikolas Steckley on the application of Laguerre's method to the polynomial eigenvalue problem.
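As a small illustration of the approach (not the project code), a smooth function can be interpolated at Chebyshev points with an error that decays rapidly in the degree; a collocation method for a differential equation builds on the same expansion:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# degree-10 interpolant of exp at Chebyshev points on [-1, 1]
p = Chebyshev.interpolate(np.exp, 10)
x = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(p(x) - np.exp(x)))  # decays rapidly with the degree
```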

Current Research

Recently, I have applied my experience in numerical linear algebra and matrix polynomials to research on the rankability of data, inhomogeneous Markov chains, and Householder sets for matrix polynomials. The rankability of data is joint work with Paul Anderson, Tim Chartier, and Amy Langville; the work on inhomogeneous Markov chains is joint with Michael J. Tsatsomeros; and the work on Householder sets for matrix polynomials is joint with Panayiotis J. Psarrakos.

Rankability of Data

The objective of ranking is to sort the objects in a dataset according to some criterion. As a graph problem, this amounts to ordering the vertices of a (weighted) directed graph. We are interested in defining a measure of the rankability of a dataset, that is, the dataset's inherent ability to produce a meaningful ranking of its items.

In essence, a ranking of a dataset is a one-dimensional compression of the data. For this reason, it is reasonable to suspect that the SVD of the adjacency matrix, or of the corresponding centered matrix, will provide a useful measure of rankability. My work focuses on studying this connection and on forming a cost-effective measure of the rankability of a dataset.
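As a toy illustration of this connection (a proxy only, not the measure under development): for a perfectly rankable dominance graph, the leading left singular vector of the adjacency matrix already recovers the ranking, reflecting the one-dimensional structure of the data:

```python
import numpy as np

n = 8
# adjacency matrix of a perfectly rankable dominance graph:
# item i beats every item ranked below it
A = np.triu(np.ones((n, n)), k=1)

U, svals, Vt = np.linalg.svd(A)
u = U[:, 0]
if u.sum() < 0:  # resolve the sign ambiguity of the SVD
    u = -u

ranking = np.argsort(-u)  # order items by the leading left singular vector
```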

Inhomogeneous Markov Chains

Traditional homogeneous Markov chain models are broadly applied in different areas of ecology, such as the behavioral sciences, conservation, and evolutionary and population dynamics. However, they almost always rest on a stationarity assumption. This assumption has clear limitations, since ecological systems rarely remain stable under dynamic environmental conditions.

In our work, we consider time-inhomogeneous transition matrices $M(t)$ that are row stochastic as a function of time $t\geq 0$. With the additional assumption that the entries of $M(t)$ are polynomials in $t$, we can apply matrix polynomial theory to $(I-M(t))$ to study the left eigenvectors (the stationary distribution of the Markov chain as a function of time). Specifically, we can apply Perron-Frobenius theory and linearization techniques from~\cite{Mackey2006,Psarrakos2004} to our study of the left eigenvectors of $(I-M(t))$.
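A minimal sketch with a hypothetical $2\times 2$ chain whose entries are polynomials in $t$: here the stationary distribution $\pi(t)=(b(t),a(t))/(a(t)+b(t))$ is the left Perron eigenvector of $M(t)$, equivalently a left null vector of $I-M(t)$:

```python
import numpy as np

def M(t):
    """A hypothetical row-stochastic M(t) with polynomial entries,
    valid for t in [0, 1]."""
    a = 0.2 + 0.3 * t
    b = 0.5 - 0.2 * t
    return np.array([[1 - a, a], [b, 1 - b]])

def stationary(t):
    """Stationary distribution of M(t): pi(t) = (b(t), a(t)) / (a(t) + b(t))."""
    a = 0.2 + 0.3 * t
    b = 0.5 - 0.2 * t
    return np.array([b, a]) / (a + b)

for t in (0.0, 0.5, 1.0):
    pi = stationary(t)
    assert np.allclose(pi @ M(t), pi)  # pi(t) is stationary for M(t)
```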

Householder Sets for Matrix Polynomials

In 1964, Alston S. Householder presented an elegant derivation of the Gershgorin set of a matrix~\cite{Householder1964}. Richard S. Varga later dubbed the norm-defined set used in Householder's derivation the Householder set~\cite{Varga2004}.

Currently, we are working on a generalization of the Householder set to matrix polynomials. We are interested in applying this generalization to derive the Gershgorin set~\cite{Psarrakos2018}, the pseudospectrum~\cite{Lancaster2005}, and other inclusion sets for the eigenvalues of a matrix polynomial.
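For reference, the classical matrix case that this generalization extends is inexpensive to compute: the Gershgorin disks are centered at the diagonal entries with off-diagonal row sums as radii, and their union contains the spectrum. A sketch with a hypothetical example matrix:

```python
import numpy as np

def gershgorin_disks(A):
    """Centers (diagonal entries) and radii (off-diagonal row sums)
    of the Gershgorin disks of a square matrix."""
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return centers, radii

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, -2.0]])
centers, radii = gershgorin_disks(A)
eigs = np.linalg.eigvals(A)
# every eigenvalue lies in the union of the disks
assert all(np.min(np.abs(l - centers) - radii) <= 1e-12 for l in eigs)
```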