Education

Basic concepts of self-organized phenomena in chemical systems

Leonardo Silva-Dias*

Departamento de Química, Universidade Federal de São Carlos, 13565-905 São Carlos - SP, Brasil

Received on 10/08/2020
Accepted on 21/10/2020
Published on the web on 01/12/2020

Correspondence address

*e-mail: leonardosilvadias@estudante.ufscar.br

ABSTRACT

We present the main concepts of nonlinear dynamics and thermodynamics of irreversible processes to introduce chemistry students to the topic of self-organized phenomena. This task is performed by theoretically describing the emergence of self-sustained oscillations, waves, and stationary patterns/Turing patterns in the Belousov-Zhabotinsky (BZ) reaction, through the Oregonator model. We carefully developed such a description, which resulted in long algebraic deductions and rich supplementary material. Considering that, we encourage the use of this material in undergraduate and graduate advanced physical chemistry classes.

Keywords: nonlinear dynamics; irreversible thermodynamics; dissipative structures; Belousov-Zhabotinsky (BZ) reaction.

INTRODUCTION

Ludwig Ferdinand Wilhelmy (1812-1864) set up, for the first time in the history of physical chemistry, a differential equation (DE) to describe the rate of a chemical reaction, more specifically, the acid-catalyzed inversion of sucrose.1 In 1884, the work of the German physicist attracted the attention of Ostwald, who recognized the importance of this approach and spread these ideas through the scientific community.1 As a consequence, other scientists were able to show that the mathematical formalism proposed by Wilhelmy could explain kinetic phenomena of low complexity, i.e., linear rate equations, and of high complexity, i.e., nonlinear rate equations.2 The latter case refers to situations in which DEs with nonlinear terms, i.e., polynomial terms of order higher than one, compose the physical model. Such systems can evolve to organized dynamical states under conditions of constant flux of either matter or energy, i.e., far from thermodynamic equilibrium, giving rise to self-organized phenomena.2,3 These phenomena are driven by the action of fluctuations, which are generated by dissipative processes in nonequilibrium regimes. They are characterized by the formation of spatio-temporal structures, such as oscillations, waves, and stationary patterns.

A significant number of chemical and biochemical reactions are capable of exhibiting complex dynamical behavior in different environments, and most of them are related to life.4 In the biochemical context, self-organized phenomena play a central role in many biological processes, such as metabolism, signaling, and development. In general, they are responsible for regulating essential events of cell physiology, e.g., circadian rhythms, DNA synthesis, and mitosis.5 Among the large variety of examples, the emergence of temporal oscillations in glycolysis, i.e., the metabolic pathway that converts glucose into pyruvate with concomitant production of ATP, is the oldest (1950s) and most studied example of a biological oscillator.2,5 Another relevant example is the spontaneous formation of spatial structures during the development of eukaryotic organisms. Even though the scientific community recognizes the importance of such processes in the early stages of living systems, the mechanism that controls them is still unknown.6 In the chemical context, the Belousov-Zhabotinsky (BZ) reaction is the most famous inorganic chemical reaction able to form different kinds of self-organized structures, e.g., temporal oscillations, complex (mixed-mode) oscillations, traveling waves, stationary patterns, and chaos.2 From studies of the BZ reaction, other chemical reactions with similar dynamical behavior were discovered as well, such as the Briggs-Rauscher, CIMA, and bromate-sulfite-ferrocyanide reactions.7

Despite the profound relevance of self-organized phenomena to the better comprehension of natural processes, as exemplified previously, this subject still receives little or no attention in most undergraduate and graduate physical chemistry courses.8 This fact is related to the difficulties of dealing with the physical and mathematical concepts of irreversible thermodynamics and nonlinear chemical kinetics. Irreversible thermodynamics is the theory capable of explaining the emergence of self-organized structures.3 It is known as the "modern" thermodynamics, and it was formulated by Lars Onsager, Théophile De Donder, Ilya Prigogine, and others. Unlike the "classical" thermodynamics - developed by Carnot, Clausius, Joule, Helmholtz, Kelvin, Gibbs, and others - which treats irreversible transformations only through the Clausius inequality, the modern formulation includes time in the description of irreversible processes, yielding explicit equations for the variation of entropy.3,4,9 This theory shows that a system subject to a constant flux of energy, i.e., kept away from the equilibrium state, can evolve to different steady states (SSs). In those situations, the action of fluctuations can make the SSs unstable, resulting in the emergence of order with increasing entropy.3,10 Most physical chemistry courses cover only topics of classical thermodynamics, so the modern formulation is not presented.

Nonlinear chemical kinetics is associated with the reaction mechanism. This class of processes is described by nonlinear DEs, which in most cases do not have analytical solutions.2,11 Because of that, the study of these equations requires more sophisticated mathematical methods, such as numerical and algebraic approaches.12-14 In general, the direct integration of nonlinear equations can result in many kinds of solutions. Therefore, to overcome that situation, simplifications of the mathematical model are made to characterize the possible dynamical states of the system qualitatively. We do that by applying tools of linear algebra.11-14 In more complicated cases, in which a large number of dependent variables and parameters compose the model, we must consider numerical methods related to bifurcation and dynamical systems theories to evaluate the behavior of the solutions in terms of stability/instability.10-14 The mathematical concepts required are quite involved and, because of that, can be an obstacle to the study of the phenomena discussed in this article.

Considering the critical role of self-organized phenomena in the regulation of natural events, e.g., the emergence of life, we present in this work a detailed description of the mathematical, physical, and chemical concepts of irreversible thermodynamics and nonlinear dynamics applied to the study of self-organization in chemical systems. To accomplish this task, we discuss the emergence of self-sustained oscillations, chemical waves, and stationary patterns/Turing patterns through a well-known model of the Belousov-Zhabotinsky (BZ) reaction, the Oregonator. In the next section, we present a brief historical background of the BZ reaction, some aspects of its mechanism, and the construction of the model considered here. Then, we begin the analysis of the spatio-temporal structures, highlighting the main tools and concepts necessary to work with such processes. The Supplementary Material contains extensive information regarding the algebraic deductions and the Mathematica codes.

 

BZ REACTION: HISTORY, MECHANISM, AND MODEL

The first observation of the oscillatory phenomenon in the BZ reaction was made accidentally by Boris Pavlovich Belousov while investigating inorganic analogs of the Krebs cycle.2,10 Although it was a fortunate discovery, he faced many obstacles that interfered with the dissemination of his results. Belousov began studying this metabolic process in 1950 using a solution composed of bromate, citric acid, and ceric ions (Ce4+). With it, he observed an unexpected periodic change of the solution's color, from yellow (Ce4+) to colorless (Ce3+), which led to a careful investigation of the effects of temperature and initial concentrations on the oscillatory profile, as well as to the observation of traveling waves.2 He tried to publish his results, but they were rejected twice, and he decided to give up on publishing them. Only in 1958 could Belousov publish a short document on his observations in the abstracts of a conference on radiation biology.2,7 Even with no formal publication, Belousov spread his ideas among colleagues in the former USSR, and years later Anatol M. Zhabotinsky had access to the reaction's recipe.7,10

Zhabotinsky is responsible for disseminating Belousov's findings and continuing his work, making significant progress in understanding the mechanism of the BZ reaction.7,10 Zhabotinsky started an in-depth study of the oscillatory phenomenon in the BZ reaction in 1961, following his adviser's suggestion.7 He replaced citric acid with malonic acid, avoiding the formation of precipitates, and also demonstrated that ferroin, used by Belousov to heighten the color change during oscillations, could catalyze the reaction without cerium. More importantly, he identified the necessity of at least two reaction steps for the emergence of oscillations: autocatalysis, which results in the oxidation of Ce3+ via HBrO3, and a reduction of Ce4+.2,7,10 Differently from Belousov, Zhabotinsky and co-workers published their findings, which attracted the attention of other chemists. However, the reports were not well received by many of them.2

The scientific community did not recognize the BZ reaction as a true homogeneous oscillating reaction. They claimed that the oscillatory behavior was either related to heterogeneous phenomena or in violation of the second law of thermodynamics.2,10 Many chemists asserted that the oscillation of chemical concentrations was associated in some way with the presence of dust and the formation of bubbles.2 Others argued that such a phenomenon disobeyed the second law of thermodynamics, mistakenly thinking that oscillating reactions would be analogous to physical pendulums.2,3 The argument is based on the oscillatory profile of the pendulum's total energy as it oscillates around the equilibrium point. Like a pendulum, an oscillating reaction would require an oscillatory behavior of the Gibbs free energy as the reactants were converted into products and then back into reactants, passing through the equilibrium state.2 Following the second law, which states that a system spontaneously evolves from an initial state to a final state with an increase of the total entropy, the behavior described previously is inconceivable.2,3 Such allegations promoted the development of the theoretical basis of nonequilibrium thermodynamics. This theory is capable of showing that a chemical oscillator is different from a pendulum and that self-organized phenomena obey the laws of thermodynamics.

In the 20th century, the notable examples of self-organized phenomena reported theoretically and experimentally demanded a reformulation of thermodynamics as a theory of irreversible processes.4 This modern formulation states that a system far from equilibrium can organize itself, i.e., reach states of lower entropy, by dissipating energy to the surroundings. Such a process results in a positive net change of entropy in the universe.2,3 Considering a chemical oscillator, the intermediates of the reaction can increase and decrease with time as long as the free energy decreases as a result of the continuous conversion of reactants into products.2,3 In fact, this is exactly what happens in the BZ reaction, as proposed by Field, Korös and Noyes.15 The oscillatory behavior of the intermediates, bromide and bromous acid, promotes oxidation and reduction of the cerium ions, which can be seen visually through the solution's color change. However, such a process occurs with the consumption of reactants, i.e., bromate and malonic acid, and the formation of products, carbon dioxide and bromomalonic acid.2,15

In the early 1970s, Field, Korös and Noyes developed a detailed chemical mechanism for the BZ reaction, known as the FKN mechanism.15 It is composed of a large number of elementary steps, which can be best understood through three overall processes:16,17

Process 1: Removal of bromide ions, known as the inhibitor, from the reactor media.

Process 2: Oxidation of the metal catalyst (M) through a reaction autocatalyzed by HBrO2, known as the activator.

Process 3: Reduction of the metal catalyst by the organic reagent (malonic acid MA).

Processes 1 and 2 are negative and positive feedback steps, respectively. They are responsible for regulating the dynamics of the reaction, composing together a classic clock reaction.16 The emergence of sustained oscillatory behavior requires such a class of steps. Basically, in process 1, the bromide ions act as the inhibitor of process 2 by consuming bromate ions, whereas in process 2, the bromous acid catalyzes its own production, activating the reaction. As the concentrations of bromide ions and bromous acid oscillate, the inhibitor consumed in process 1 must be replenished; this occurs through process 3, in which bromide is regenerated with an adjustable stoichiometric factor f.15,17 This last parameter is proportional to the ratio of the rate of Br- production to the rate of Mox consumption during process 3.18

Through the FKN mechanism, Field and Noyes proposed the "Oregonator". It is the first kinetic scheme capable of qualitatively describing the oscillatory behavior observed in the BZ reaction.17 The Oregonator model reads as:
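For reference, the five irreversible steps of the Oregonator are listed below in one common notation (see, e.g., refs. 2 and 17); the exact labeling of the species may differ from one source to another:

A + Y → X + P

X + Y → 2P

A + X → 2X + 2Z

2X → A + P

B + Z → (f/2) Y

Here X = HBrO2 (the activator), Y = Br- (the inhibitor), Z = Mox (the oxidized form of the metal catalyst), A = BrO3-, B represents the organic species (MA and its brominated derivative), and P = HOBr.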

To study the phenomena related to irreversible processes, we assume that the reverse steps of the Oregonator are very slow compared with the forward steps, so the entire mechanism is treated as irreversible.17 Therefore, considering the law of mass action, we can derive the rate equations of the intermediates:

We consider an open system, in order to keep the system away from thermodynamic equilibrium, so the concentrations of the reactants, A and B, are held constant over time. We can propose a transformation of variables, as used by Field and Noyes,17 and rewrite the system in a dimensionless form:

where u, y, and v are dimensionless variables, and k1, k2, k3, and k4 are dimensionless parameters. Such transformations are taken with f = 1, and they are shown in ref. 17. It is also known that the variable y changes in time much faster than u and v.15 Because of that, we can apply the steady-state approximation, i.e., dy/dt ≈ 0, and obtain the reduced Oregonator (RO) form:19,20
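In terms of these dimensionless quantities, and consistent with the functions f(u,v) and g(u,v) written explicitly in the next section, Eqs. (7) and (8) take the form

du/dt = k1(k3v(1 - u)/(1 + u) + u - k2u²)    (7)

dv/dt = k4(u - v)    (8)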

The reduced Oregonator (RO) model, Eqs. (7) and (8), is going to be used in the investigations of this article.

SELF-ORGANIZED PHENOMENA

In this section, we present the techniques and theory necessary to study the emergence of self-organized phenomena in chemical systems. To do that, we separately analyze self-sustained oscillations, waves, and stationary patterns in the Belousov-Zhabotinsky reaction through the reduced Oregonator (RO) model.

We employed the software Mathematica 9.0 to solve the mathematical problems addressed in this work, and considered kinetic parameters obtained from experimental data.17,21

Self-sustained oscillation

Self-sustained oscillation (SSO) in chemical reactions refers to the spontaneous temporal oscillatory behavior of the dependent variables, i.e., the chemical concentrations. The occurrence of this dynamical state is a consequence of the action of fluctuations on the concentrations of the reaction intermediates associated with the nonlinear kinetic mechanism, i.e., the dissipative process. Under specific conditions, the steady states (SSs) become unstable, and these fluctuations drive the system to stable regimes where periodic solutions arise. This event is known as the Hopf bifurcation (also known as the Poincaré-Andronov-Hopf bifurcation).8,22,23

Therefore, from the information presented previously, we need to define the conditions under which the system undergoes a Hopf bifurcation in order to observe the emergence of SSO. We solve this task using a mathematical strategy known as linear stability analysis (LSA). The main idea of LSA is to use the linear version of the corresponding nonlinear dynamical model to characterize the behavior of the problem's solutions qualitatively.12-14 The linear approximation is carried out in the vicinity of a SS and is usually valid because it maintains the most important features of the nonlinear dynamical system's solution in the neighborhood of the SS, differing only by small distortions. As a result, we can algebraically determine conditions in which different dynamical states can exist. However, in some circumstances, LSA can fail, demanding different approaches. For a better explanation, see the Hartman-Grobman theorem.12

Briefly, the general procedure of LSA, used to study all phenomena addressed in this work, can be sketched as follows: 1) Definition of the SSs of the homogeneous dynamical system; 2) Linearization of the dynamic equations; 3) Construction of the linearized system using matrix notation; 4) Determination of the characteristic polynomial; 5) Analysis of the eigenvalues through the components of the characteristic polynomial.22,23

To begin, we are going to consider the RO model. This model can be written in a generic form as follows:

where f(u,v) = k1(k3v(1 - u)/(1 + u) + u - k2u²) and g(u,v) = k4(u - v). Eqs. (9) and (10) compose a system of nonlinear ordinary differential equations (ODEs). As described above, we first need to define the SSs of the system. The SS solution, (u*,v*), is obtained when the reaction rates are equal to zero, i.e., du/dt = dv/dt = 0, resulting in Eqs. (11) and (12):

The Oregonator model has three possible SSs, and we explicitly present them in the Supplementary Material. In this case, u* and v* are given in algebraic form. Because of that, the SSs can assume complex values depending on the numerical choice of the parameters k2 and k3. So, to avoid this problem, we develop an algebraic manipulation to find regions of the k2 × k3 space in which only real SSs are obtained. This procedure is described in detail in the Supplementary Material, and we concluded that if k2 and k3 are chosen in the first quadrant of the k2 × k3 space, which is necessary since they represent physical parameters, we avoid problems of complex SSs.
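For illustration, a minimal Mathematica sketch of this first step of the LSA is given below; the parameter values are those of Figure 1 and are used here only as an example:

f[u_, v_] := k1 (k3 v (1 - u)/(1 + u) + u - k2 u^2);  (* kinetic term of Eq. (9) *)
g[u_, v_] := k4 (u - v);                              (* kinetic term of Eq. (10) *)
params = {k1 -> 77.27, k2 -> 0.02, k3 -> 1.0, k4 -> 10.0};  (* illustrative values *)

(* steady states: f(u*,v*) = g(u*,v*) = 0, Eqs. (11) and (12); keep only the real ones *)
ss = NSolve[({f[u, v] == 0, g[u, v] == 0} /. params), {u, v}];
Select[ss, FreeQ[#, Complex] &]

For k2 and k3 in the first quadrant, the three solutions returned are real, in agreement with the discussion above.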

The following step is to analyze the effects of small perturbations in the neighborhood of the SSs. Therefore, let us suppose that the solution to the problem can be written as:
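That is, the solution is assumed to be the SS plus a small time-dependent deviation, which, consistent with the notation used below, corresponds to Eqs. (13) and (14):

u(t) = u* + δu(t)    (13)

v(t) = v* + δv(t)    (14)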

with δu and δv infinitesimal perturbations. Note that the time evolution of the perturbations helps us to understand the behavior of the solutions near the SSs. Using a Taylor series, we can expand the functions f(u,v) and g(u,v), neglecting terms of second order and higher:

where fu(u*,v*), fv(u*,v*), gu(u*,v*), and gv(u*,v*) denote the partial derivatives of f and g with respect to u and v, evaluated at the SS. From Eqs. (11) and (12), f(u*,v*) = g(u*,v*) = 0, and we obtain the linear approximation for f(u,v) and g(u,v):

Substituting Eqs. (17) and (18) into Eqs. (9) and (10) we get the linearized version of the RO model:

Continuing, we can use the matrix notation and rewrite this system of differential equations as follows:

Note that the 2 × 2 matrix is the Jacobian (J) of (f(u,v), g(u,v))T. In general, given a vector x ∈ ℝn and a vector function F(x): ℝn → ℝm, the Jacobian is the m × n matrix whose entries are the partial derivatives of the components of F with respect to the components of x.23

The solution of a system of homogeneous linear ordinary differential equations is given by:24
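In the notation adopted here (the symbols ξ1, ξ2 for the eigenvectors and c1, c2 for constants fixed by the initial perturbation are an editorial choice), Eq. (22) reads

(δu(t), δv(t))T = c1ξ1eλ1t + c2ξ2eλ2t    (22)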

where (λ1, λ2) are the eigenvalues and (ξ1, ξ2) the associated eigenvectors of matrix J. Substituting Eq. (22) into Eq. (21) results in:

Rearranging the terms of the previous equation, we rewrite it as

with I the identity matrix. Eq. (24) is a linear system of equations and admits nontrivial solutions, i.e., ξ ≠ 0, if and only if the matrix (J - λI) is not invertible; in other words:

Carrying out the calculation of the determinant, we obtain a polynomial, known as the characteristic polynomial. For a 2 × 2 system, it reads as:

In the characteristic polynomial, Tr[J] = fu(u*,v*) + gv(u*,v*) and Det[J] = fu(u*,v*)gv(u*,v*) - fv(u*,v*)gu(u*,v*) are the trace and determinant of the matrix J, respectively. We explicitly present Tr[J] and Det[J] in the Supplementary Material for the model considered in this section. The last step is to define the eigenvalues. Eq. (26) is a second-order polynomial, and its solutions, i.e., the eigenvalues of the Jacobian matrix, are given by the quadratic formula:

From Eq. (27) we can note that the solutions can assume real, complex, and purely imaginary values depending on the trace and determinant of matrix J. Moreover, each of the possible solutions of the characteristic polynomial is directly associated with a dynamical behavior (see the Supplementary Material for an illustration of this relationship). Therefore, from the expressions of the trace and determinant, we define conditions for the emergence of different dynamical states.
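Steps 2-5 of the LSA can be carried out directly in Mathematica. The following minimal sketch evaluates the Jacobian, its trace and determinant, and the eigenvalues at the positive SS, again with the illustrative parameters of Figure 1:

f[u_, v_] := k1 (k3 v (1 - u)/(1 + u) + u - k2 u^2);
g[u_, v_] := k4 (u - v);
params = {k1 -> 77.27, k2 -> 0.02, k3 -> 1.0, k4 -> 10.0};

jac = D[{f[u, v], g[u, v]}, {{u, v}}];               (* Jacobian matrix J *)
ssPos = First[Select[NSolve[({f[u, v] == 0, g[u, v] == 0} /. params), {u, v}],
    (u /. #) > 0 &]];                                (* positive steady state *)
jacSS = (jac /. params) /. ssPos;

{Tr[jacSS], Det[jacSS]}   (* trace and determinant entering Eqs. (26) and (27) *)
Eigenvalues[jacSS]        (* eigenvalues of J, i.e., the roots of Eq. (26) *)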

The Hopf bifurcation is a critical point where the SS loses stability.12-14 Initially, the SS is stable, which requires Re(λi) < 0 and, consequently, Tr[J] < 0 and Det[J] > 0; then, due to changes of parameters, it becomes unstable, which requires Re(λi) > 0 and, consequently, Tr[J] > 0 and Det[J] > 0, see Eq. (27) and Figure 3S of the Supplementary Material. We can see from Figure 3S that, for the SS to reach the unstable regime from the stable one, the system must pass through the condition Tr[J] = 0 with Det[J] > 0, which is characterized by a pair of purely imaginary eigenvalues, i.e., Im(λi) ≠ 0 and Re(λi) = 0, with i = 1, 2, see Eq. (27).25 The point where Tr[J] = 0 and Det[J] > 0 is called critical because it marks where the stability of the SS switches, and it is where the Hopf bifurcation happens.12-14

In our situation, we could show that two of the SSs have Det[J] < 0 for any choice of the parameters, so they are always unstable, whereas the remaining, physically relevant SS has Det[J] > 0, see Figure 1S in the Supplementary Material. Considering that, we just need to use the trace of matrix J to check the system's stability. Basically, we plot Tr[J] = 0 as a function of the parameters k3 and k4 using the software Mathematica, see Figure 1A; the corresponding code can be found in the Supplementary Material. Figure 1A shows two possible dynamical states, which are classified as (I) mono-stable, i.e., two SSs are unstable and one SS is stable, and (II) oscillatory.
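A minimal sketch of how such a stability boundary can be generated in Mathematica is given below; the plotting range is an assumption for the example, and only the positive SS is tracked:

k1 = 77.27; k2 = 0.02;   (* fixed as in Figure 1 *)
f[u_, v_, K3_] := k1 (K3 v (1 - u)/(1 + u) + u - k2 u^2);
(* positive steady state u* = v*, i.e., the positive root of f(u,u) = 0 *)
uStar[K3_?NumericQ] := First[Select[u /. NSolve[f[u, u, K3] == 0, u], # > 0 &]];
(* Tr[J] = fu(u*,v*) + gv(u*,v*) = fu(u*,u*) - k4, so Tr[J] = 0 corresponds to k4 = fu(u*,u*) *)
fuStar[K3_?NumericQ] := D[f[u, uStar[K3], K3], u] /. u -> uStar[K3];
Plot[fuStar[K3], {K3, 0.1, 3}, AxesLabel -> {"k3", "k4"}]

Below the resulting curve (k4 < fu) the trace is positive and the SS is unstable, so oscillations arise, while above it (k4 > fu) the SS is stable.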

 


Figure 1. In all cases k1 = 77.27 and k2 = 0.02. (A) Parameter space of k3 × k4. (B) Limit cycle of the RO model in the phase portrait u × 𝑣. (C) Numerical solution of the RO model with k3 = 1.0 and k4 = 10.0. In the figure, ln(C) is the natural logarithm of the concentration (C), where u is in black and 𝑣 is in gray. (D) Rate of entropy production of the self-sustained oscillation presented in Figure 1C

 

Considering that information, we are able to observe the spontaneous emergence of SSO in the RO model through the numerical integration of the nonlinear ODE system, i.e., Eqs. (9) and (10), by taking any pair of k3 and k4 chosen in the oscillatory region of Figure 1A. We could employ different integration methods to solve this problem, the most common being the class of Runge-Kutta methods.11,24,26 However, in this case, we performed the calculation using the software Mathematica, see the Supplementary Material. We highlight that the integration of DEs in Mathematica is carried out through the command "NDSolve", which automatically chooses among different methods to solve the equations depending on their type.21 Considering that, we keep the automatic option for the integration of the DEs and do not specify any particular method. The solution is presented in Figures 1B and 1C.
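For completeness, a minimal Mathematica sketch of this integration is shown below; the parameter values are those of Figure 1 (k3 = 1.0, k4 = 10.0), and the initial condition is an arbitrary choice for illustration:

k1 = 77.27; k2 = 0.02; k3 = 1.0; k4 = 10.0;
sol = NDSolve[{
    u'[t] == k1 (k3 v[t] (1 - u[t])/(1 + u[t]) + u[t] - k2 u[t]^2),   (* Eq. (9) *)
    v'[t] == k4 (u[t] - v[t]),                                        (* Eq. (10) *)
    u[0] == 1, v[0] == 2},                             (* illustrative initial condition *)
   {u, v}, {t, 0, 100}];

LogPlot[Evaluate[{u[t], v[t]} /. sol], {t, 0, 100}, PlotRange -> All]         (* compare with Figure 1C *)
ParametricPlot[Evaluate[{u[t], v[t]} /. sol], {t, 20, 100}, PlotRange -> All] (* limit cycle, Figure 1B *)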

Figure 1B presents the limit cycle in the phase portrait u × v. A limit cycle is a closed and isolated trajectory, which in general arises when a nonlinear system of ODEs undergoes a Hopf bifurcation.12-14 We can classify such structures according to their stability features. In the case of the RO model, the limit cycle is asymptotically stable because, as can be seen through the vector field, the internal and external trajectories are attracted to it. Figure 1C shows the time evolution of the chemical intermediates, revealing the expected oscillatory behavior.

We go one step further and use the modern formulation of thermodynamics to check that such a process obeys the second law. We can accomplish that by calculating the "rate of entropy production" (σ). Briefly, in the irreversible thermodynamics formalism, the changes in entropy are given by dS = deS + diS. The term deS accounts for the entropy exchanged with the surroundings, and it can be greater than, lower than, or equal to zero. The last term, diS, is the entropy change due to irreversible processes, and it is always greater than or equal to zero, being strictly positive when irreversible processes occur.3,4,9-11,27 From that, σ ≡ diS/dt is physically interpreted as the temporal variation of entropy internally produced by the occurrence of irreversible processes.3,9 In general, σ can be written as a sum of products of thermodynamic forces and fluxes, σ = Σk FkJk, where Fk is a thermodynamic force and Jk is a thermodynamic flux.3,4,9,11,27 From that definition, we can derive explicit equations of σ for each irreversible transformation. In this case, the chemical reactions are the only irreversible processes considered, and σ is given by:4
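Consistent with the definitions given just below, Eq. (28) has the standard form for a set of chemical reactions (refs. 3 and 4):

σ = R Σk (Rfk - Rrk) ln(Rfk/Rrk)    (28)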

where R is the ideal gas constant, and Rfk and Rrk are the forward and reverse rates of the k-th reaction. An explanation of how to use Eq. (28) to calculate σ can be found in the Supplementary Material. We avoid detailed discussions and derivations of this subject since it is well described in the literature.3,9-11,28 Figure 1D represents the time evolution of the rate of entropy production, evidencing an oscillatory behavior, as expected.29 Because in this situation deS = 0, the integration of σ over time implies diS > 0 and, consequently, dS > 0. Therefore, we can conclude that the SSO process obeys the second law of thermodynamics.

Waves

The Belousov-Zhabotinsky reaction is probably best known for the formation of waves, which are inhomogeneous structures of chemical concentration that change periodically in time.2,10 In the BZ reaction, such structures can be seen visually due to the solution's color changes. This dynamical state arises when a temporally uniform SS becomes unstable due to dissipative effects of the nonlinear kinetic mechanism, as in the case of SSO.2

Considering this information, we must find conditions of instability in an extended version of the dynamical system that accounts for the kinetic and the mass transport processes.2 So, the description of the phenomenon addressed in this section is composed of two parts, i.e., two independent variables: time (chemical kinetics) and space (diffusion). Therefore, the model is written in a generic form as follows:
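Consistent with the kinetic and diffusive terms defined just below, Eqs. (29) and (30) read

∂u/∂t = f(u,v) + Du∇²u    (29)

∂v/∂t = g(u,v) + Dv∇²v    (30)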

Eqs. (29) and (30) constitute a system of partial differential equations (PDEs), called a reaction-diffusion model,29 where f(u,v) = k1(k3v(1 - u)/(1 + u) + u - k2u²) and g(u,v) = k4(u - v). Du and Dv are the diffusion coefficients of u and v, respectively, and ∇² is the Laplacian operator in 1D, i.e., ∂²/∂x². The terms Du∇²u and Dv∇²v come from Fick's law of diffusion. This law describes how diffusion changes the concentration field in time and space due to a spatial difference of chemical potential.3,9

The procedure here is quite similar to that carried out in the previous section, i.e., we examine the spatio-temporal behavior of small perturbations around the SS solutions, written as below:

In this case, i.e., the linear case, we assume that the small perturbations are separable, in other words, δC(x,t) = δC(x)δC(t), where C = (u(x,t), v(x,t)) is the vector of chemical concentrations. The temporal part, δC(t), is chosen to be an eigenfunction of the operator ∂/∂t:

It can be verified by simple substitution that δC(t) = αeλt, with α ∈ ℝ, satisfies the imposed condition. The spatial part, δC(x), is chosen to be an eigenfunction of the operator ∇²,

under the zero-flux boundary condition n̂·∇δC = 0 at the borders, where n̂ is the normal vector to the system's boundary. This boundary condition represents the physical situation of impermeable walls.11,26 We note that the solution δC(x) = cos(kx) satisfies these requirements, where k is the wavenumber. This last parameter is related to the wavelength Λ by the simple expression k = 2π/Λ, and it physically represents the wave's spatial frequency, i.e., the number of waves per unit distance.30 Therefore, the spatio-temporal perturbations are given as δC(x,t) = αeλtcos(kx).

Considering the form of the perturbation and substituting Eqs. (31) and (32) into the system of Eqs. (29) and (30), we obtain a linearized system of equations with the following associated Jacobian matrix:
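Consistent with the trace and determinant used below, the matrix of Eq. (35) has the form

J(k) = | fu(u*,v*) - Duk²        fv(u*,v*)        |
       | gu(u*,v*)               gv(u*,v*) - Dvk² |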

As in the last section, we can analyze the stability of the solutions through the direct calculation of the eigenvalues from the characteristic polynomial, which is similar to Eq. (26); see the explicit equations in the Supplementary Material. To do that, we build the parameter space of k × k4 with Tr[J] = 0 and Det[J] = 0, see Figure 2A.

 


Figure 2. In all cases k1 = 77.27 and k2 = 0.02. (A) Parameter space of k × k4. The black curve is given by Det[J] = 0 and the gray curve is given by Tr[J] = 0. (B) Numerical solution of the RO model with mass diffusion, with k3 = 1.0, k4 = 25.0, Du = 6.0 × 10⁻³, Dv = 1.6 × 10⁻³, length L = 1, and Dirichlet boundary conditions, u(0,t) = u(L,t) = v(0,t) = v(L,t) = 9.5 (note that u* = v* ≈ 9.5). The plot is constructed using u(x,t). (C) Rate of entropy production related to the chemical waves presented in Figure 2B

 

Figure 2A presents four possible dynamical states, and the characteristics of each one of them, in terms of stability, are given in Table 1S in the Supplementary Material. Briefly, from that Table and Figure 3S, we can verify that regions I and II are unstable saddle points, region III is a stable spiral, and region IV is an unstable spiral, with Tr[J] > 0 and Det[J] > 0, as in the SSO case. Among these dynamical states, region IV fulfills the requirements for the emergence of waves. Then, taking parameters in that region, we can describe the formation of chemical waves by straightforward integration of the PDE system, i.e., Eqs. (29) and (30). There are different classes of numerical methods used to integrate such equations. A widespread approach is based on finite differences, e.g., the Crank-Nicolson method.11,26,31 However, we use the software Mathematica to solve this problem; a script of the calculation is given in the Supplementary Material. The outcome of the integration is shown in Figure 2B. From that figure, we can see that the waves spontaneously emerge after a transient time.
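A minimal Mathematica sketch of this integration is given below, using the parameters of Figure 2B; the initial perturbation, the integration time, and the grid refinement option are illustrative assumptions and may need to be adjusted:

k1 = 77.27; k2 = 0.02; k3 = 1.0; k4 = 25.0;
du = 6.0*10^-3; dv = 1.6*10^-3; len = 1; uss = 9.51;   (* uss is approximately u* = v* *)
sol = NDSolve[{
    D[u[x, t], t] == du D[u[x, t], x, x] +
      k1 (k3 v[x, t] (1 - u[x, t])/(1 + u[x, t]) + u[x, t] - k2 u[x, t]^2),
    D[v[x, t], t] == dv D[v[x, t], x, x] + k4 (u[x, t] - v[x, t]),
    u[0, t] == uss, u[len, t] == uss, v[0, t] == uss, v[len, t] == uss,  (* Dirichlet BCs *)
    u[x, 0] == uss + 0.5 Sin[Pi x/len], v[x, 0] == uss},                 (* small inhomogeneous perturbation *)
   {u, v}, {x, 0, len}, {t, 0, 20},
   Method -> {"MethodOfLines",
     "SpatialDiscretization" -> {"TensorProductGrid", "MinPoints" -> 201}}];
DensityPlot[Evaluate[u[x, t] /. First[sol]], {x, 0, len}, {t, 0, 20},
  PlotPoints -> 100, FrameLabel -> {"x", "t"}]   (* compare with Figure 2B *)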

We also calculate the rate of entropy production, but, as we already mentioned, in the emergence of chemical waves two irreversible processes take place: chemical reactions and mass diffusion.11,27 Because of that, σ is not given by Eq. (28), which is only related to the occurrence of chemical reactions. We need to add a term associated with the diffusion process. Therefore, the local rate of entropy production (σL) is:
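For an ideal dilute system, Eq. (36) takes the standard form (refs. 3, 9 and 11), with the diffusive contribution written here in terms of the concentration gradients:

σL = R Σk (Rfk - Rrk) ln(Rfk/Rrk) + R Σi Di(∇ci)²/ci    (36)

where the index i runs over the diffusing intermediates (u and v) and ci denotes their concentrations.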

In Eq. (36), the last term refers to the rate of entropy produced by diffusion. Note that σL ≡ σL(x,t), so there is a value of σL at each position in space. Therefore, to get the rate of entropy production of the entire system, we integrate σL over the system's volume:

From Eqs. (36) and (37), we obtain an oscillatory behavior of σ, as shown in Figure 2C. It is interesting to note that, through the rate of entropy production, we can identify the starting point of the self-organization. Thus, it can be a useful quantity to track down nonlinear events, e.g., symmetry-breaking processes.4,11,27 Furthermore, once again σ > 0, so that its integration over time increases the total entropy.

Stationary patterns - Turing Patterns

Alan M. Turing suggested the occurrence of stationary patterns in chemical systems in his famous work, "The chemical basis of morphogenesis," published in 1952.32 In this article, the British mathematician proposed for the first time in the history of dynamical systems theory a reaction-diffusion model, similar to Eqs. (29) and (30), to explain the formation of complex cellular structures, a process known as morphogenesis.2,3,11,33 From his theoretical investigations, Turing predicted the spontaneous emergence of spatially inhomogeneous and stationary structures of chemicals from an initial homogeneous condition, which he correlated to the biological problem.32 In honor of his outstanding work, these structures are called Turing patterns.

Turing patterns arise when the system is stable to homogeneous perturbations (HP) and unstable to inhomogeneous perturbations (IP). Naturally, as the system is stable to HP, i.e., fluctuations of concentration over the entire system, it tends to a SS, as discussed in the SSO subsection. However, the system is simultaneously unstable to IP, i.e., localized fluctuations of concentration, which generate chemical gradients in the SS. Therefore, the dissipative effects of diffusion lead to the spontaneous formation of spatial patterns. Such a scenario can be defined through the LSA approach, as has been done for the other phenomena addressed in this work. In this case, we carry out the analysis by establishing the conditions in which (1) the system is stable to HP (k = 0) and (2) unstable to IP (k ≠ 0). Note that k = 0 implies that the perturbations are spatially independent, so they refer to HP, whereas k ≠ 0 indicates that the perturbations are spatially dependent, and they are related to IP. Such a notation is considered in the LSA developed in the Supplementary Material. To solve this problem, we consider the reaction-diffusion version of the RO model given by Eqs. (29) and (30).

We have already established the conditions that satisfy the first requirement (1) in the subsection "Self-sustained oscillation". There, we used the LSA approach and constructed a parameter space with two dynamical states, one stable and the other unstable, see Figure 1A. Therefore, for the system to be stable to HP, we need to take parameters in region I of Figure 1A.

Now, we proceed with the second requirement, in other words, to define conditions of instability to IP. This is done by identifying parameters for which the system undergoes a Turing bifurcation, following the LSA methodology. However, considering the similarity between the physical models used in this subsection and in the "Waves" subsection, the LSA approach results in the same Jacobian matrix, Eq. (35), and consequently the same characteristic polynomial as in the last subsection. So, we find the conditions for the Turing bifurcation using those equations.

The Turing bifurcation happens precisely when the real and imaginary parts of the eigenvalues are equal to zero with k ≠ 0, i.e., Re(λi) = Im(λi) = 0 and k = kT ≠ 0, where kT is the wavenumber of the Turing instability. From the information of the previous paragraphs, such bifurcation occurs when Det[J(k ≠ 0)] = (fu(u*,v*) - Duk²)(gv(u*,v*) - Dvk²) - fv(u*,v*)gu(u*,v*) = 0, see the Supplementary Material. Note that k must be real and greater than zero; therefore, the lowest wavenumber that satisfies these conditions is kT, which corresponds to the minimum of Det[J(k)] with respect to k².2,32 We highlight that in this case the derivation of kT results in a long algebraic equation, and because of that, we do not present its explicit expression; see the Mathematica code in the Supplementary Material to determine kT. Thus, the region of instability to IP, i.e., the situation that implies eigenvalues with positive real parts, is the one where Det[J(kT)] < 0.
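For illustration, a minimal Mathematica sketch of this test, evaluated at the point used in Figure 3B (k3 = 1.0, k4 = 70.0, Du = 1.0, Dv = 75.0), is:

k1 = 77.27; k2 = 0.02; k3 = 1.0; k4 = 70.0; du = 1.0; dv = 75.0;
f[u_, v_] := k1 (k3 v (1 - u)/(1 + u) + u - k2 u^2);
g[u_, v_] := k4 (u - v);
ssPos = First[Select[NSolve[{f[u, v] == 0, g[u, v] == 0}, {u, v}], (u /. #) > 0 &]];
{fu, fv, gu, gv} = Flatten[D[{f[u, v], g[u, v]}, {{u, v}}]] /. ssPos;

{fu + gv, fu gv - fv gu}       (* k = 0: Tr[J] < 0 and Det[J] > 0, i.e., stable to HP *)

detk[q_] := (fu - du q) (gv - dv q) - fv gu;      (* Det[J(k)] with q = k^2 *)
qT = q /. First[Solve[D[detk[q], q] == 0, q]];    (* minimum of Det[J(k)] *)
{Sqrt[qT], detk[qT]}           (* kT and Det[J(kT)]; Det[J(kT)] < 0 signals Turing instability *)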

Now, we can use Det[J(kT)] and the information of Figure 1A to construct a parameter space k3 × k4 and find regions where Turing patterns arise spontaneously, see Figure 3A. Note that we assume a large difference between the diffusion coefficients of the activator and the inhibitor for the construction of the parameter space. This assumption is a necessary, but not sufficient, condition for the emergence of Turing patterns. It can be easily derived from the LSA approach, see the Supplementary Material.

 


Figure 3. In all cases k1 = 77.27 and k2 = 0.02. (A) Parameter space of k3 × k4. The black curve is given by Det[J(kT)] = 0, related to the IP, and the gray curve is given by Tr[J] = 0, related to the HP, the same of Figure 1A. (B) Numerical solution of the RO model with mass diffusion with k3 = 1.0, k4 = 70.0, Du = 1.0, Dv = 75.0, length L = 15, and periodic boundary conditions. This graphic is constructed using u(x,t). (C) Rate of entropy production related to the Turing patterns presented in Figure 3B

 

From the LSA, we get three different dynamical states. Regions I and III define oscillatory and mono-stable states, respectively, as in Figure 1A. Region II represents the Turing instability. Therefore, we can properly choose the values of k3 and k4 in the Turing region and proceed with simulations of the system of Eqs. (29) and (30). As we have done in the "Waves" subsection, we employed the software Mathematica to solve this task, and the code can be found in the Supplementary Material. The Turing patterns obtained from the numerical integration are shown in Figure 3B. From the outcome of this simulation, we can see that, after a transient time, the system evolves to a spatially organized state, and the structures formed do not change in time.

From Eqs. (36) and (37) we calculate the rate of entropy production. In this case, as we can see in Figure 3C, the rate of entropy production tends toward a plateau. This behavior happens because, as mentioned before, Turing patterns are stationary; as a consequence, σ remains constant while entropy continues to be produced. We highlight again that the continuous change of entropy is greater than zero for this process, as expected.

We emphasize that Turing patterns are well known for the diversity of structures formed in two spatial dimensions. Here we present an example of a Turing pattern in 2D, Figure 4. This figure is the outcome of the numerical integration of the Oregonator model, given by Eqs. (4), (5) and (6). Generally, simulations of nonlinear partial differential equations with spatial dimensions higher than one demand a great computational effort, and because of that, we performed such calculations using the alternating direction implicit method, which is a semi-implicit method based on finite differences. The method was implemented in a code written in Fortran 90; see ref. 11 for details about the numerical method and its implementation.

 


Figure 4. Turing pattern in 2D with dimensions 64 × 64 units of area. In this simulation, the time step is equal to 0.01, the spatial step is equal to 0.75, k1 = 77.27, k2 = 0.02, k3 = 0.95, k4 = 0.25, Du = 0.025, Dy = 0.12, Dv = 0.80 and periodic boundary conditions. The image represents the concentration of u(x,y,t).

 

CONCLUSIONS

In this paper, we present the basic techniques and theory related to nonlinear dynamics and irreversible thermodynamics to theoretically describe the emergence of self-sustained oscillations, chemical waves, and stationary patterns/Turing patterns in the Belousov-Zhabotinsky reaction, through the Oregonator model. We assumed that the main readers, i.e., the students, know a few concepts of differential equations, linear algebra, chemical kinetics, and thermodynamics. Such an assumption resulted in long algebraic deductions and extensive supplementary material, composed of Mathematica scripts and a discussion of the mathematical approaches, helping the students to follow the correct methodological procedure.

Unlike most of the specialized literature, this work provides explicit calculations of the terms associated with thermodynamics to prove that the occurrence of such phenomena does obey the second law. This approach enables an integrated discussion of the main concepts involved, offering the reader a thorough view of the complexity of the problems addressed.

We highlight that the mathematical tools and physical-chemical concepts considered here can be used in a large number of chemical problems, even in situations in which the system does not spontaneously evolve to ordered states. Because of that, we believe that the subject presented in this work is suitable for advanced undergraduate and graduate physical chemistry classes. We expect that, after studying the content of this paper, students will be able to deal with similar problems and to carry out investigations of simpler and more complex kinetic mechanisms by themselves.

 

SUPPLEMENTARY MATERIAL

The supplementary material contains the algebraic deductions related to the LSA, a graphical classification of the dynamical states, the Mathematica codes, and a description of the rate of entropy production calculation. The material can be freely accessed at http://quimicanova.sbq.org.br, in PDF format.

 

ACKNOWLEDGEMENTS

This work was supported by the São Paulo Research Foundation (FAPESP), grants 2019/23205-3 and 2019/12501-0, and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), finance code 001. The author profoundly thanks the reviewers for their valuable suggestions. The author thanks Dr. Anderson José Lopes Catão and Dr. Alejandro López Castillo for helpful comments. The author also thanks the organizing committee of the São Paulo School of Advanced Sciences on Nonlinear Dynamics, which inspired the development of this work.

 

REFERENCES

1. Laidler, K. J.; Archive for History of Exact Sciences 1985, 32, 43.

2. Epstein, I. R.; Pojman, J. A.; An introduction to nonlinear chemical dynamics: oscillations, waves, patterns, and chaos, Oxford University Press: Oxford, 1998.

3. Kondepudi, D.; Prigogine, I.; Modern thermodynamics: from heat engines to dissipative structures, John Wiley & Sons: Hoboken, 2014.

4. Kondepudi, D.; Kapcha, L.; Chirality 2008, 20, 524.

5. Novák, B.; Tyson, J. J.; Nat. Rev. Mol. Cell Biol. 2008, 9, 981.

6. Gross, P.; Kumar, K. V.; Grill, S. W.; Annu. Rev. Biophys. 2017, 46, 337.

7. Faria, R. d. B.; Quim. Nova 1995, 18, 281.

8. Ishizuka, J. J.; Song, H.; Peacock-López, E.; Chem. Educator 2004, 9, 142.

9. De Groot, S. R.; Mazur, P.; Nonequilibrium thermodynamics; Courier Corporation: Chelmsford, 2013.

10. Nagao, R.; Varela, H.; Quim. Nova 2016, 39, 474.

11. Silva-Dias, L.; Dissertação de mestrado, Universidade Federal de São Carlos, São Carlos, SP, 2019.

12. Monteiro, L. H. A.; Sistemas dinâmicos; Editora Livraria da Física: São Paulo, 2002.

13. Wiggins, S.; Introduction to applied nonlinear dynamical systems and chaos; Springer, Science & Business Media: New York, 2003.

14. Nayfeh, A. H.; Balachandran, B. Applied nonlinear dynamics: analytical, computational, and experimental methods; John Wiley & Sons: Hoboken, 2008.

15. Field, R. J.; Korös, E.; Noyes, R. M.; J. Am. Chem. Soc. 1972, 94, 8649.

16. Kitagaki, B. T.; Pinto, M. R.; Queiroz, A. C.; Breitkreitz, M. C.; Rossi, F.; Nagao, R.; PCCP 2019, 21, 16423.

17. Field, R. J.; Noyes, R. M.; J. Chem. Phys. 1974, 60, 1877.

18. Jwo, J.-J.; Noyes, R. M.; J. Am. Chem. Soc. 1975, 97, 5422.

19. Crowley, M. F.; Field, R. J.; J. Phys. Chem. 1984, 88, 762.

20. Bond, R. A.; Martincigh, B. S.; Mika, J. R.; Simoyi, R. H.; J. Chem. Educ. 1998, 75, 1158.

21. Wolfram Research, Inc.; Mathematica, Version 9.0, Champaign, IL, 2012.

22. Peacock-López, E.; Chem. Educ. 2001, 6, 202.

23. Peacock-López, E.; Chem. Educ. 2000, 5, 216.

24. Boyce, W. E.; DiPrima, R. C.; Meade, D. B.; Elementary differential equations; John Wiley & Sons: Hoboken, 2017.

25. Yang, L.; Dolnik, M.; Zhabotinsky, A. M.; Epstein, I. R.; J. Chem. Phys. 2002, 117, 7259.

26. Franco, N. B.; Cálculo Numérico; Pearson, 2006.

27. Silva-Dias, L.; Lopez-Castillo, A.; PCCP 2020, 22, 7507.

28. Ross, J.; Vlad, M. O.; J. Phys. Chem. A 2005, 109, 10607.

29. Dutt, A.; J. Chem. Phys. 1987, 86, 3959.

30. Nussenzveig, H. M.; Curso de Física Básica: fluidos, oscilações e ondas, calor. Vol. 2. Editora Blucher, 2018.

31. Smith, G. D.; Numerical solution of partial differential equations: finite difference methods; Oxford University Press, 1985.

32. Turing, A. M.; Philos. Trans. Royal Soc. 1952, 237, 37.

33. McGhee, E.; Peacock-Lopez, E.; Chem. Educ. 2005, 10, 84.
