Can anyone imagine carrying out research today without using computers? It is nearly impossible. Beyond the obvious fact that manuscripts are typed on them, computers have far more extensive uses in research. In fact, there are currently far more computational studies being carried out than purely analytical ones done entirely with pen and paper. The strongest studies combine analytical, computational and experimental work, all addressing and cross-verifying the same problem. In the following, a brief overview of computational studies is presented.

In research we often face problems that require solving several coupled equations. If we are lucky, the problem has a proper analytical solution, but obtaining it with pen and paper might take hours. A computer comes in handy here: it can produce the solution in a very short time and with high accuracy. Is that a computational study? Not really; we are merely using the computer to save time and avoid mistakes, so it is still an analytical study. Genuine computational studies deal with problems for which analytical solutions are not possible, for example the dynamics of a gas. In such cases, we use theoretical results to set up a computational model that can then be solved numerically; hence it is a numerical study. Solving such a model computationally yields a set of results as a function of time, e.g. the trajectories of the gas particles, which is why such work is often called a computer simulation study. Often we also use parameters obtained from experimental studies. One can therefore say that computational studies form a bridge between theoretical and experimental studies. In principle, all three approaches must yield identical results and thereby verify one another. The fundamental idea behind computational studies can be understood from the figure below, taken from Wikipedia.
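To make the idea of a numerical solution concrete, here is a minimal sketch (my own toy example, not from the original text) that solves a simple differential equation, dx/dt = -x, with the forward Euler scheme and compares the result against its known analytical solution x(t) = x0 * exp(-t):

```python
import math

def euler(f, x0, t_end, dt):
    """Integrate dx/dt = f(x) numerically with the forward Euler scheme."""
    x = x0
    n = int(round(t_end / dt))  # number of discrete time steps
    for _ in range(n):
        x += dt * f(x)  # advance the solution by one small time step
    return x

# Toy model: dx/dt = -x, whose analytical solution is x(t) = x0 * exp(-t).
numerical = euler(lambda x: -x, x0=1.0, t_end=1.0, dt=1e-4)
analytical = math.exp(-1.0)
print(abs(numerical - analytical))  # small discretization error
```

For this toy problem the analytical answer is available, so the numerical result only saves effort; the same machinery, however, applies unchanged to models with no closed-form solution.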

Computer simulations are nowadays used in almost every branch of science, such as physics, chemistry, biology, geology, earth and environmental sciences, medicine, astronomy, climatology and engineering. They are also used extensively in statistics, economics, psychology, social science and other disciplines. But why are they so popular? The basic reasons are as follows. Simulation studies can be used (a) as a parallel line of study alongside experiment and theory, (b) to verify a theoretical model, (c) to detect faulty experimental findings, (d) before real, complex experiments, to gain insight a priori, (e) to replace dangerous or impossible experiments, and (f) to obtain microscopic insights that are not accessible with state-of-the-art experimental or theoretical techniques.

The field of computer simulation has developed very rapidly, hand in hand with increasingly powerful computers and smarter algorithms. In 1953, Metropolis et al. carried out the first computer simulation of a liquid on the computer named 'MANIAC' at Los Alamos National Laboratory in the United States, where they used the so-called 'Monte Carlo (MC)' technique to solve a hard-sphere model of molecules. In 1957, MC was applied to a model of argon. The 'Molecular Dynamics (MD)' technique was introduced around the same time, and in 1964 Rahman carried out the first MD simulation of liquid argon with a realistic interatomic potential. Computer simulations have grown enormously since then, especially in the last two decades, hand in hand with the growth of computers.

The two main techniques of computer simulation are Monte Carlo (MC) and Molecular Dynamics (MD). In MC simulations, the concepts of probability and random numbers are utilized to solve the problem by generating draws from a probability distribution. Quantities of interest are calculated as ensemble averages over the generated samples (an ensemble is a set of identical copies of the system, so that the average value of an observable is its average over the copies). MD simulation, on the other hand, is based on the numerical solution of Newton's equations of motion, with the time dynamics yielding the trajectories of the atoms. Physical observables are determined as time averages over the system's evolution. The two approaches are equivalent under ergodicity: ergodic theory says that, for a system in equilibrium, the time average of a physical observable equals its ensemble average, provided the time average is carried out for a sufficiently long duration (theoretically, t→∞).
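The MC side of this can be sketched in a few lines. The example below (a toy of my own, not from the original text) uses the Metropolis acceptance rule to sample a single particle in a harmonic potential U(x) = x²/2 at temperature T = 1 (so beta = 1), and estimates the ensemble average ⟨x²⟩, whose exact value is kT = 1:

```python
import math
import random

def metropolis_mc(n_steps, beta=1.0, step=1.0, seed=42):
    """Metropolis Monte Carlo sampling of a particle in a harmonic
    potential U(x) = x^2 / 2 at inverse temperature beta.
    Returns the ensemble average of x^2 (exact value: 1/beta)."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_steps):
        # Propose a random trial move around the current position.
        x_new = x + rng.uniform(-step, step)
        dU = 0.5 * (x_new**2 - x**2)
        # Metropolis rule: accept with probability min(1, exp(-beta * dU)).
        if dU <= 0 or rng.random() < math.exp(-beta * dU):
            x = x_new
        samples.append(x * x)
    return sum(samples) / len(samples)

avg = metropolis_mc(200_000)
print(avg)  # close to the exact value 1/beta = 1.0
```

The same skeleton (propose, evaluate the energy change, accept or reject) underlies MC simulations of far more complicated systems; only the energy function and the move set change.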

Electronic structure calculation methods, chiefly Density Functional Theory (DFT), are among the most widely employed 'computational studies' in modern times. Unlike MD, they are used to obtain a static configuration of the system under study, namely its ground state, the configuration with the minimum energy among all possible configurations. Approximate ground-state wave functions are calculated and then used to compute the properties of interest.
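The spirit of such a calculation, finding an approximate ground state by minimizing the energy, can be illustrated with a toy variational example (my own sketch, far simpler than real DFT): a Gaussian trial wave function for the 1D harmonic oscillator, whose exact ground-state energy is 0.5 in units where hbar = m = omega = 1.

```python
import numpy as np

def trial_energy(alpha, x):
    """Energy expectation value <H> of a Gaussian trial wave function
    psi(x) ~ exp(-alpha * x^2) for the 1D harmonic oscillator
    H = -1/2 d^2/dx^2 + 1/2 x^2, evaluated on a spatial grid."""
    psi = np.exp(-alpha * x**2)
    dx = x[1] - x[0]
    norm = np.sum(psi**2) * dx
    # Kinetic energy via a finite-difference second derivative.
    d2psi = np.gradient(np.gradient(psi, dx), dx)
    kinetic = -0.5 * np.sum(psi * d2psi) * dx
    potential = np.sum(0.5 * x**2 * psi**2) * dx
    return (kinetic + potential) / norm

x = np.linspace(-8.0, 8.0, 4001)
alphas = np.linspace(0.1, 2.0, 200)
energies = [trial_energy(a, x) for a in alphas]
best = min(energies)
print(best)  # close to the exact ground-state energy 0.5
```

Real electronic structure codes minimize a vastly more complicated energy functional over many-electron degrees of freedom, but the principle is the same: the ground state is the configuration that minimizes the energy.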

The basic form of MD uses predefined force fields, or potentials, acting among the atoms, from which the forces and hence the positions are calculated. These are known as classical techniques. Alternatively, forces obtained from electronic structure calculations can be used to sample the dynamics 'on the fly'. Such methods are known as 'on the fly', 'quantum mechanical', 'ab initio' or 'first principles' MD techniques. While classical methods are known for their speed, on-the-fly methods are employed to study complex physical systems at the expense of far greater computational power and time compared to the classical methods. Similarly, quantum mechanical MC techniques have also been formulated, which carry a computational overhead similar to that of ab initio MD.
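A classical MD step is simple at its core: compute the force from the potential, then advance positions and velocities. Below is a minimal sketch (my own toy example, not from the original text) using the velocity Verlet integrator, a standard MD scheme, on a single particle bound by a harmonic "force field" F(x) = -kx with k = m = 1:

```python
def velocity_verlet(x, v, force, dt, n_steps):
    """Velocity Verlet integration of Newton's equation m*a = F(x), m = 1.
    Returns the final (position, velocity)."""
    f = force(x)
    for _ in range(n_steps):
        x += v * dt + 0.5 * f * dt * dt   # update position
        f_new = force(x)                  # recompute force at new position
        v += 0.5 * (f + f_new) * dt       # update velocity with averaged force
        f = f_new
    return x, v

# Toy force field: a harmonic spring, F(x) = -x.
x_final, v_final = velocity_verlet(x=1.0, v=0.0,
                                   force=lambda x: -x,
                                   dt=0.01, n_steps=1000)
# Total energy should be (nearly) conserved at its initial value 0.5.
energy = 0.5 * v_final**2 + 0.5 * x_final**2
print(energy)
```

In a real classical MD code, the force routine evaluates a many-body force field (Lennard-Jones, bonded terms, electrostatics) over thousands of atoms; in ab initio MD, that same routine is replaced by an electronic structure calculation at every step, which is precisely where the huge computational cost comes from.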

There is a funny cartoon explaining the basic idea behind carrying out computer simulations; please have a look at it:

[cite]