Arindama Singh
Department of Mathematics

Some Interesting Links  

Singular Perturbation

American Math Soc

Math Writing

Problems Unsolved

Open Problems

More Problems

Computational Logic

Computational Learning Theory


Research

I have done research on the numerical solution of singularly perturbed differential equations (NSSPTPBVS), knowledge compilation, and applications of least-squares heuristics. The last topic includes singularly perturbed elliptic problems, image restoration, the mathematics of learning theory, and eigenvalues and eigenvectors of large matrices. Papers in these areas, and in elementary mathematics, are listed in the Papers section, where preprints of quite a few of them can also be found. The other subsections are self-explanatory.

NSSPTPBVS
Knowledge Compilation
Least Squares

 
NSSPTPBVS

I began my research career with the Numerical Solution of Singularly Perturbed Two-Point Boundary-Value Problems. The main problem is a system of ordinary differential equations in which some of the derivatives are multiplied by a small positive parameter. The boundary values for the system are prescribed at two points, possibly in a coupled form. Assuming that the solution is unique, the problem is to compute a numerical approximation to it. Since the parameter multiplying some of the derivatives is small, the solution displays two time scales. Some of the system variables can be captured in real time, whereas the others can only be captured in a scaled time, which is often exponential. Thus the usual numerical methods fail to produce an approximate solution that converges throughout the (time) interval of interest. Further, in a small neighbourhood of the boundary points, the derivatives of a portion of the solution blow up as the parameter tends to zero. This behaviour is referred to as the boundary layer phenomenon.
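A toy scalar example (not one of the systems treated in my thesis) shows the difficulty. For eps u' = 1 - u with u(0) = 0, the exact solution u(t) = 1 - exp(-t/eps) jumps from 0 to nearly 1 inside a layer of width of order eps, and a naive explicit method whose step ignores the layer blows up:

```python
import math

def exact(t, eps):
    # exact solution of eps*u' = 1 - u, u(0) = 0
    return 1.0 - math.exp(-t / eps)

def forward_euler(eps, h, steps):
    # naive explicit Euler applied to the same equation
    u = 0.0
    for _ in range(steps):
        u = u + h * (1.0 - u) / eps
    return u

eps = 1e-3
layer_value = exact(5 * eps, eps)   # already ~0.993 within five layer widths
outer_value = exact(0.5, eps)       # essentially 1 outside the layer

# A step size that ignores the layer (h = 0.01 >> eps) makes Euler diverge,
coarse = forward_euler(eps, 0.01, 50)
# while a step resolving the layer (h = eps/2) converges to the right value.
fine = forward_euler(eps, eps / 2, 1000)
```

The coarse run oscillates with exponentially growing amplitude; resolving the layer everywhere is what makes uniform methods expensive, which motivates the decoupling approach described next.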

My method consists of first decoupling the blowing-up portion of the solution, that is, the fast components, from the system. This is done using a Lyapunov-type transformation. Next, I choose two time domains, the usual one and a transformed one, to discretize the slow and the fast components respectively. Once this is done, even simple methods such as Euler's method and the trapezoidal rule can be used to integrate the system numerically. The methods are further chosen by examining the stability properties of the system; basically, this requires computing the time domains in which the eigenvalues of the system do not change sign. Finally, the discretized system, along with the boundary values, is solved.
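The effect of the two time domains can be sketched on an already-decoupled toy pair, a slow equation s' = -s and a fast one eps f' = -f (stand-ins only; the decoupling transformation itself is not shown). In the stretched time tau = t/eps the fast equation has no small parameter left, so plain Euler works in either domain:

```python
import math

eps = 1e-4

def euler_slow(t_end, h):
    # forward Euler in the original time t for the slow component s' = -s
    s = 1.0
    for _ in range(int(t_end / h)):
        s += h * (-s)
    return s

def euler_fast(t_end, h_tau):
    # forward Euler in the stretched time tau = t/eps for the fast component:
    # in tau the equation is simply df/dtau = -f, with no small parameter
    f = 1.0
    for _ in range(int((t_end / eps) / h_tau)):
        f += h_tau * (-f)
    return f

s_approx = euler_slow(1.0, 0.001)     # approximates exp(-1)
f_layer = euler_fast(5 * eps, 0.01)   # fast part after five layer widths, ~exp(-5)
```

Each component is integrated with an ordinary step size in its own natural time variable, which is the point of choosing the domains separately.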

Nonlinear systems are approximated by a sequence of linear systems using a Newton-type linearization, and I prove that the modified Newton-type linearization converges. The linear systems are then solved numerically. This is a rough summary of the research I carried out during my Ph.D. Most of this work can be found in my Ph.D. thesis, titled Numerical Solution of Two-Point Boundary-Value Systems (1989-1990, IIT Kanpur), written under the supervision of Professor Mohan K Kadalbajoo (IIT Kanpur). You can also find most of it in published form; see the Papers section.
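The linearization idea, stripped of the boundary-value setting, is this: at each step the nonlinear equation f(u) = 0 is replaced by the linear one f(u_k) + f'(u_k)(u - u_k) = 0, and that linear problem is solved instead. A minimal scalar sketch (the thesis works with systems, not scalars):

```python
def newton_linearize(f, df, u0, tol=1e-12, max_iter=50):
    # Solve f(u) = 0 by repeatedly solving the linearized equation
    # f(u_k) + f'(u_k)(u - u_k) = 0 for the next iterate.
    u = u0
    for _ in range(max_iter):
        step = -f(u) / df(u)
        u += step
        if abs(step) < tol:
            break
    return u

# A toy nonlinearity standing in for a nonlinear BVP term: u^2 = 2
root = newton_linearize(lambda u: u * u - 2.0, lambda u: 2.0 * u, 1.0)
```

For a discretized boundary-value system the same recipe produces a sequence of linear algebraic systems, one per iteration.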

Back to Top

Knowledge Compilation

A propositional knowledge base is a body of knowledge in which each piece of knowledge is stored as a clause. A query is a clause whose answer is supposed to follow from the knowledge base. Query answering consists of ascertaining that the answer indeed follows from the knowledge base. This process is exponential in the size of the knowledge base. To speed up the online process of query answering, the knowledge base is first transformed into a suitable form from which it is easier to answer a query. This suitable form is usually taken to be the set of prime implicants or prime implicates. Knowledge compilation refers to transforming the knowledge base into this set in an offline phase. Many algorithms have been developed for this transformation. I devised one such algorithm based on the idea of transversal clauses. Since propositional logic is too restrictive in its expressive power, I also suggested extensions of knowledge compilation to first-order knowledge bases. These extensions were worked out in collaboration with Dr. Manoj K Raut, a research student at IIT Madras. The relevant publications are in the Papers section.
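The offline/online split can be illustrated by a brute-force sketch (this is not the transversal-clause algorithm, just the textbook characterization): compile by closing the clause set under resolution and keeping the subset-minimal clauses, the prime implicates; then a query clause follows from the base exactly when some prime implicate subsumes it.

```python
from itertools import combinations

def resolve(c1, c2):
    # All resolvents of two clauses; a clause is a frozenset of signed
    # integers (p as a positive int, its negation as -p).
    out = []
    for lit in c1:
        if -lit in c2:
            r = (c1 - {lit}) | (c2 - {-lit})
            if not any(-x in r for x in r):  # drop tautologies
                out.append(frozenset(r))
    return out

def prime_implicates(clauses):
    # Offline phase: close under resolution, keep subset-minimal clauses.
    closure = set(clauses)
    while True:
        new = set()
        for a, b in combinations(closure, 2):
            for r in resolve(a, b):
                if r not in closure:
                    new.add(r)
        if not new:
            break
        closure |= new
    return {c for c in closure if not any(d < c for d in closure)}

def entails(pis, query):
    # Online phase: a clause follows iff some prime implicate subsumes it.
    return any(pi <= query for pi in pis)

# Base: { p or q,  not-p or r }
base = {frozenset({1, 2}), frozenset({-1, 3})}
pis = prime_implicates(base)
```

Here the resolvent q or r is a prime implicate, so queries it subsumes are answered by a cheap subset test; the exponential work has been moved offline.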

Back to Top

Least Squares

I wanted to see the boundary layer phenomenon in partial differential equations, but this time using regularization methods instead of the idea of decoupling. For this purpose, I took a perturbed Laplacian in two dimensions, where at least one of the partial derivatives is multiplied by a small positive parameter. Since the operator is unbounded, the usual approach to regularizing the equation fails. My trick was to shift the spectrum by introducing a related operator involving the regularization parameter, chosen so that the spectrum of the new operator does not include zero. One then works with the new operator and transforms back at the end. This gave rise to a simple proof that, in case the concerned operator is self-adjoint and positive (which it was here), the usual regularization methods such as Lavrentiev's, Tikhonov's, Lardy's, and Showalter's converge. The regularized equations can then be solved by any suitable finite difference approximation. This work was carried out in collaboration with Dr. S Sheela (Ph.D. student at IIT Madras).
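In finite dimensions the flavour of Tikhonov's method is easy to show (a toy 2x2 sketch only; the work above concerns unbounded operators, where this naive form is exactly what fails): instead of the ill-conditioned system Ax = b one solves the shifted normal equations (A^T A + alpha I)x = A^T b, whose matrix is safely invertible for alpha > 0.

```python
def tikhonov_2x2(A, b, alpha):
    # Solve (A^T A + alpha*I) x = A^T b explicitly for a 2x2 system.
    (a11, a12), (a21, a22) = A
    # M = A^T A + alpha*I  (symmetric, so only three entries needed)
    m11 = a11 * a11 + a21 * a21 + alpha
    m12 = a11 * a12 + a21 * a22
    m22 = a12 * a12 + a22 * a22 + alpha
    # right-hand side A^T b
    r1 = a11 * b[0] + a21 * b[1]
    r2 = a12 * b[0] + a22 * b[1]
    det = m11 * m22 - m12 * m12
    return ((m22 * r1 - m12 * r2) / det, (m11 * r2 - m12 * r1) / det)

# A nearly singular system whose exact solution is (1, 1).
A = [[1.0, 1.0], [1.0, 1.0000001]]
b = [2.0, 2.0000001]
x = tikhonov_2x2(A, b, 1e-8)
```

Adding alpha shifts the tiny eigenvalue of A^T A away from zero, stabilizing the inversion at the cost of a small, controlled bias; this is the finite-dimensional shadow of the spectrum-shifting trick described above.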

We can view a picture as a collection of colour values at pixels. A two-dimensional image (picture) is thus taken as a function from a rectangle to the set of real numbers. Given a corrupt image, the goal is to restore or improve it as much as possible. Since the operations on images satisfy the semigroup property, we may think of an evolution PDE of which the image is a solution. We add a time component to the two-dimensional image, with the corrupt image corresponding to the initial time, say, time equal to zero. The images corresponding to later time frames then give possibly improved images. With this idea in the background, many models (the PDEs) have been developed for image restoration. I have considered modifying the Perona-Malik model. Its modifications led to many interesting problems in PDEs, particularly proving existence and uniqueness of solutions to those PDEs.
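The flavour of the Perona-Malik model can be sketched on a one-dimensional "image" (a toy explicit scheme, not any of the modifications studied here): the diffusion u_t = (g(|u_x|)u_x)_x with conductivity g(s) = 1/(1 + (s/K)^2) smooths small gradients (noise) while nearly switching off across large gradients (edges).

```python
def perona_malik_step(u, dt=0.1, K=0.2):
    # One explicit time step of 1-D Perona-Malik diffusion:
    # u_t = d/dx( g(|u_x|) u_x ),  g(s) = 1 / (1 + (s/K)^2).
    g = lambda s: 1.0 / (1.0 + (s / K) ** 2)
    new = u[:]
    for i in range(1, len(u) - 1):
        right = u[i + 1] - u[i]          # forward difference
        left = u[i] - u[i - 1]           # backward difference
        flux = g(abs(right)) * right - g(abs(left)) * left
        new[i] = u[i] + dt * flux
    return new

# A noisy step edge: low plateau, a jump, then a high plateau.
u = [0.0, 0.1, 0.0, 1.0, 0.9, 1.0]
for _ in range(20):
    u = perona_malik_step(u)
```

With K = 0.2 the 0.1-sized wiggles are flattened (their gradient is small, so g is near 1) while the unit jump survives almost intact (g is near 1/26 there). The interesting PDE questions begin exactly here: the equation is forward-backward in character, which is why existence and uniqueness are delicate.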

Though instances of learning look diverse, their mathematics seems to be quite simple; at least, the problem is simple to formulate. We have a finite number of (x, y) pairs, where the y-values are supposed to have been obtained from the respective x-values by applying a function f. Since f is unknown, the problem is to find an approximation to it. The approximation can be done in many ways by imposing familiar notions of closeness. Further restrictions, such as approximating f from a given function space, or requiring it to satisfy other statistical criteria such as predictivity, make the mathematics of learning exciting.
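The simplest instance of this formulation is least-squares fitting from the space of affine functions: given the (x, y) pairs, choose a and b minimizing the sum of squared errors of y ≈ a*x + b. A minimal sketch via the normal equations (an illustrative example, not any specific result from my papers):

```python
def fit_line(pairs):
    # Least-squares fit y ~ a*x + b via the normal equations.
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# y-values generated by the "unknown" f(x) = 2x + 1 with a little noise
data = [(0, 1.1), (1, 2.9), (2, 5.1), (3, 6.9)]
a, b = fit_line(data)
```

Replacing the affine family by a richer function space, or the squared-error criterion by a statistical one, gives the variations mentioned above.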

Eigenvalues of a complex square matrix are the roots of its characteristic polynomial. When the order of such a matrix is more than four, there is in general no closed-form expression for these roots, except in some very special cases. However, the physical sciences and engineering disciplines require computation of eigenvalues of large matrices; if not all, then at least some number of the largest (in absolute value) or smallest eigenvalues are required. Approximations to eigenvalues of such matrices are obtained by the Arnoldi method (or the Lanczos method, if the matrix is symmetric), which uses projection onto a Krylov subspace. Though the largest eigenvalues can be approximated by this method, the corresponding eigenvectors usually are not. The refined Arnoldi method has been proposed to approximate the eigenvectors; however, it requires computation of singular values of large matrices. My focus is on whether variants of the Arnoldi method can be devised so that approximations to eigenvectors may be obtained by solving linear systems. This would bring down the computational cost, since solving linear systems is cheaper than computing singular values.
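The ancestor of all such Krylov methods is power iteration, which projects onto a one-dimensional Krylov subspace; the sketch below shows it on a small symmetric matrix (this is the idea underlying Arnoldi, not the Arnoldi or refined Arnoldi methods themselves):

```python
import math

def power_iteration(A, iters=200):
    # Approximate the largest (in absolute value) eigenvalue and its
    # eigenvector by repeated multiplication and normalisation.
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
        # Rayleigh quotient v^T A v as the eigenvalue estimate
        lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n))
                  for i in range(n))
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]  # eigenvalues 3 and 1
lam, v = power_iteration(A)
```

Arnoldi improves on this by orthogonalizing the successive products A v, A^2 v, ... into a basis of a higher-dimensional Krylov subspace, which is what makes several eigenvalues accessible at once.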

Back to Top

Funded Projects

I completed a research project on Knowledge Compilation with Non-propositional Knowledge Bases, funded by DST and carried out during 1998-2001; the cost of the project was around Rupees four lakhs. Later I also executed another project, on Mathematical Aspects of Parallel Computing with Distributed Systems and Supercell Structure, for which Professor R Rama was the principal investigator. This project was also funded by DST, to the tune of Rupees eleven lakhs, and was carried out during 2002-2005.

Back to Top

Some more related material may be found in the Extras section.


© 2022 Arindama Singh