Michael Engbers, Mattes Heerwagen, Sebastian Rosmej and Andreas Engel

Work Statistics and Energy Transitions in Driven Quantum Systems


Abstract

Thermodynamic quantities of small systems fluctuate and have to be characterised by their appropriate probability distributions. Within the two-point energy measurement prescription, the distribution of work in a quantum system can be derived from the transition probability from initial to final energy. We consider a simple yet representative model system starting in thermodynamic equilibrium and driven by an external force, and compare two different numerical techniques to determine this transition probability with respect to accuracy and numerical effort. In addition, we perform a semi-classical analysis of the process using the WKB approximation. The results agree well with the numerically exact values if Airy-tails modelling the tunnelling into classically forbidden regions of phase space are properly taken into account.

1 Introduction

The ongoing miniaturisation of technical devices like engines and frigistors as well as the rapid development of new experimental techniques in biophysics aiming at the analysis of single molecules, molecular motors, and ion pumps require new concepts for the theoretical modelling of energy conversion at the nanoscale. Within the last 20 years, thermodynamics has been extended to small systems whose energy turnover is comparable to their energy fluctuations. The key concept is to describe thermodynamic quantities like work, heat, and entropy by their corresponding probability distributions. The new field of stochastic thermodynamics (for introductory reviews, see [1], [2], [3]) has meanwhile proven to provide the appropriate framework for analysing the efficiency of nanomachines [4], [5], [6], estimating free-energy differences in single-molecule experiments [7], and highlighting the role of information as a thermodynamic resource [8].

Focussing on small systems invariably brings quantum effects into play. While stochastic thermodynamics has reached a fairly mature state for classical systems, the thermodynamic description of fluctuating quantum systems is much less complete. One reason is that central thermodynamic quantities like work and heat are not state variables, and their proper quantum definition remains controversial [9], [10], [11], [12]. Work performed on a classical system, for example, depends on the complete trajectory the system follows during the process, but no full quantum analogue of such a trajectory is available. One possible workaround restricts the analysis to isolated systems, for which the work performed or delivered must equal the energy difference of the system, as required by the first law of thermodynamics [12]. Measuring the system's energy twice, once before the driving and once after its end, should then yield a first estimate for the work. Although simple and operational, this prescription has its own deficiencies because it involves two projective measurements that destroy existing quantum correlations. As a result, the system dynamics is effectively replaced by a classical stochastic process whose transition rates alone are derived from quantum mechanics.

After the first energy measurement, the system is in an eigenstate of the initial Hamiltonian. Being isolated from its surroundings, it then follows a unitary evolution specified by the process under consideration. The final state of this evolution determines the probability for the results of the second energy measurement. In the simplest setting, the system starts in equilibrium at some temperature T, so the probability for the first energy value is given by the Gibbs measure. The central quantity of interest determining the histogram of work values is then the transition probability from the initial state to the one corresponding to the value of the energy at the second measurement.
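For later reference, the two-point measurement prescription sketched above can be summarised in one formula. Assuming, as is standard, a canonical initial distribution with partition function $Z_0$ at temperature T (notation ours; this formula is not displayed explicitly in the text), the work distribution reads

$P(W) = \sum_{m,n} \frac{e^{-E^{(0)}_m/k_B T}}{Z_0}\, P(n|m)\, \delta\!\left(W - \big(E^{(f)}_n - E^{(0)}_m\big)\right), \qquad Z_0 = \sum_m e^{-E^{(0)}_m/k_B T},$

so that the transition probabilities $P(n|m)$ indeed carry all the process-specific information.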

In the present paper, we study this transition probability for a simple but representative and not analytically solvable model system introduced in [13]. We compare two different numerical methods for its determination, characterise their accuracy as a function of the various parameters, and recommend suitable choices for these. Finally, we check our results against an approximate analytic treatment within a semi-classical framework.

The paper is organised as follows. Section 2 contains the basic equations and fixes the notation. Section 3 describes the two numerical methods we use for our comparison. In Section 4, we compare the results of these methods with each other and discuss in detail the influence of the various parameters on accuracy and computing time. Section 5 is devoted to the semi-classical analysis. Finally, Section 6 contains our conclusions.

2 Basic Equations

We consider the one-dimensional motion of a quantum particle of mass M in a potential

(1) $V(x) = \lambda(t)\,x^4$

in the time interval $0 \le t \le t_f$, with the parameter $\lambda(t)$ changing from $\lambda_0 := \lambda(0)$ at $t = 0$ to $\lambda_f := \lambda(t_f)$ at the final time $t_f$ [13]. The dynamics is described by the time-dependent Schrödinger equation for the wave function $\psi(x,t)$ of the particle:

(2) $i\hbar\,\partial_t\psi(x,t) = -\frac{\hbar^2}{2M}\,\partial_x^2\psi(x,t) + \lambda(t)\,x^4\,\psi(x,t).$

By the first energy measurement at t = 0, the system is prepared in an energy eigenstate $\psi^{(0)}_m$ of the initial Hamiltonian $H(0)$ satisfying

(3) $-\frac{\hbar^2}{2M}\,\partial_x^2\psi^{(0)}_m(x) + \lambda_0\,x^4\,\psi^{(0)}_m(x) = E^{(0)}_m\,\psi^{(0)}_m(x),$

with $E^{(0)}_m$ denoting the corresponding energy eigenvalue. For t > 0, the state then follows the unitary time evolution described by

(4) $U(t,0) = \mathcal{T}\exp\!\left[-\frac{i}{\hbar}\int_0^t \mathrm{d}t'\,H(t')\right],$

where $\mathcal{T}$ denotes time ordering. The driving of the system is specified by the explicit time dependence of the Hamiltonian $H(t)$. At the final time $t = t_f$, a second energy measurement is performed that projects the state on an eigenstate $\psi^{(f)}_n$ of the final Hamiltonian $H(t_f)$ with corresponding eigenvalue $E^{(f)}_n$.

Our central quantity of interest is the transition probability

(5) $P(n|m) := \big|\langle\psi^{(f)}_n|U(t_f,0)|\psi^{(0)}_m\rangle\big|^2$

for the state to be projected onto $\psi^{(f)}_n$ at $t = t_f$ when starting in $\psi^{(0)}_m$ at t = 0. We will only consider protocols linear in time, i.e.

(6) $\lambda(t) = \lambda_0 + (\lambda_f - \lambda_0)\,\frac{t}{t_f}.$

Introducing

(7) $\left(\frac{\hbar^2}{2M\lambda_0}\right)^{1/6}, \qquad \frac{(2M)^{2/3}}{(\hbar\lambda_0)^{1/3}}, \qquad \text{and} \qquad \left(\frac{\hbar^2}{2M}\right)^{2/3}\lambda_0^{1/3},$

as units for length, time, and energy, respectively, and measuring λ in multiples of $\lambda_0$, we arrive at the dimensionless form of the Schrödinger equation (2)

(8) $i\,\partial_t\psi(x,t) = -\partial_x^2\psi(x,t) + \lambda(t)\,x^4\,\psi(x,t)$

with $\lambda(t)$ starting at $\lambda_0 = 1$.

For the classical $x^4$ oscillator, the dependence of the oscillation period T on the amplitude A is given by

(9) $T(A) = \frac{\Gamma(\tfrac14)\,\Gamma(\tfrac12)}{2\,\Gamma(\tfrac34)}\,\sqrt{\frac{2M}{\lambda}}\;\frac{1}{A}.$

Replacing A by the energy E of the particle, this corresponds in dimensionless units to

(10) $T(E) = \frac{\Gamma(\tfrac14)\,\Gamma(\tfrac12)}{2\,\Gamma(\tfrac34)}\,\frac{1}{\sqrt{\lambda}}\left(\frac{\lambda}{E}\right)^{1/4} \approx 2.622\,(\lambda E)^{-1/4}.$

On the other hand, we get for the energy of the m-th quantum level in the potential $V(x) = \lambda x^4$ from Bohr–Sommerfeld quantisation the approximate expression [14]

(11) $E_m(\lambda) \approx \left(\frac{3\,\Gamma(\tfrac12)\,\Gamma(\tfrac34)}{\Gamma(\tfrac14)}\right)^{4/3}\lambda^{1/3}\left(m + \tfrac12\right)^{4/3} \approx 2.185\,\lambda^{1/3}\left(m + \tfrac12\right)^{4/3}.$

Combining (10) and (11), we find

(12) $T_m := T(E_m) \approx 2.155\,\lambda^{-1/3}\left(m + \tfrac12\right)^{-1/3}$

for the classical oscillation period corresponding to the m-th quantum level of energy. We will mainly be interested in values of λ between 1 and 5 and energy levels up to m = 200. The relevant dimensionless oscillation periods then lie in the range 0.2…1. In order to find appreciable probabilities $P(n|m)$ for off-diagonal transitions, $m \neq n$, the driving process parametrised by $\lambda(t)$ should not be slow on these time scales. Accordingly, for our numerical investigations, we choose $t_f = 0.5$ and $\lambda_f = 5$.
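The numerical constants in (10)–(12) are easily reproduced. The following short Python snippet (a convenience sketch; the function names are ours) evaluates the Bohr–Sommerfeld energy (11) and the corresponding classical period (10), e.g. for the level m = 50 used repeatedly below:

```python
import math

def level_energy(m, lam=1.0):
    """Bohr-Sommerfeld energy (11) of level m in V(x) = lam * x^4 (dimensionless units)."""
    c = (3.0 * math.gamma(0.5) * math.gamma(0.75) / math.gamma(0.25)) ** (4.0 / 3.0)
    return c * lam ** (1.0 / 3.0) * (m + 0.5) ** (4.0 / 3.0)      # c = 2.185...

def classical_period(E, lam=1.0):
    """Classical oscillation period (10) at energy E (dimensionless units)."""
    c = math.gamma(0.25) * math.gamma(0.5) / (2.0 * math.gamma(0.75))
    return c * (lam * E) ** (-0.25)                               # c = 2.622...

for lam in (1.0, 5.0):
    T_m = classical_period(level_energy(50, lam), lam)            # eq. (12)
    print(f"lambda = {lam}: T_50 = {T_m:.3f}")
```

For m = 50 this gives periods of roughly 0.58 ($\lambda = 1$) and 0.34 ($\lambda = 5$), consistent with the window quoted above and with the choice $t_f = 0.5$.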

3 Numerical Methods

We use two different procedures to solve the Schrödinger equation (8) numerically. The first one is the well-known Crank–Nicolson method [15] that provides a direct prescription to compute the wave function $\psi(x,t)$ at times t > 0 given its initial form at t = 0. The second method employs a special ansatz for the wave function $\psi(x,t)$ involving time-dependent coefficients $c_n(t)$ [13]. It transforms the Schrödinger equation (8) into a system of ordinary differential equations for these coefficients that is then solved numerically. In the following, we briefly characterise both methods.

3.1 Crank–Nicolson Method

The central point of the Crank–Nicolson method is the discretisation of the time evolution operator (4) in Cayley form

(13) $U(t + \Delta t, t) = \mathcal{T}\exp\!\left[-i\int_t^{t+\Delta t}\mathrm{d}t'\,H(t')\right] = \frac{1 - \frac{i}{2}H(t)\,\Delta t}{1 + \frac{i}{2}H(t)\,\Delta t} + O(\Delta t^2),$

where $\Delta t$ denotes the temporal step size. The main virtue of this replacement is that the first term on the r.h.s. of (13) is unitary and, therefore, norm preserving. The discretised Schrödinger equation acquires the form

(14) $\left(1 + \frac{i}{2}H(t)\,\Delta t\right)\psi(x, t + \Delta t) = \left(1 - \frac{i}{2}H(t)\,\Delta t\right)\psi(x,t)$

with

(15) $H(t) = -\partial_x^2 + \lambda(t)\,x^4$

as given by (8).

It is convenient to rewrite (14) in terms of a new function

(16) $y(x,t) := \psi(x, t + \Delta t) + \psi(x,t),$

and using the customary notations $y_k(x) := y(x, k\Delta t)$, $\lambda_k := \lambda(k\Delta t)$, and $\psi_k(x) := \psi(x, k\Delta t)$ where $k = 0, \dots, K$, we find

(17) $\frac{\partial^2 y_k(x)}{\partial x^2} = g_k(x)\,y_k(x) + f_k(x)$

with the abbreviations

(18) $g_k(x) := \lambda_k x^4 - \frac{2i}{\Delta t}$

and

(19) $f_k(x) := \frac{4i}{\Delta t}\,\psi_k(x).$

The second-order differential equation (17) has to be solved numerically at every time step. To this end, we also discretise the space coordinate, $x \to x_j = x_0 + j\,\Delta x$, with the spatial step size $\Delta x$ and $j = 0, \dots, J$, and use the notation $y^k_j := y_k(x_j)$, etc.

From its structure, (17) is amenable to the Numerov method, which allows one to improve the accuracy of the solution with very little additional effort. The method is particularly efficient if a recursive procedure is used to perform the necessary matrix inversion (see [16], [17]). The main idea is the replacement

(20) $\frac{\partial^4 y_k}{\partial x^4}(x) = \frac{\partial^2}{\partial x^2}\big(g_k(x)\,y_k(x) + f_k(x)\big)$

as implied by (17) in the approximation

$\frac{\partial^2 y_k}{\partial x^2}(x_j) = \frac{y^k_{j+1} + y^k_{j-1} - 2y^k_j}{\Delta x^2} - \frac{1}{12}\,\frac{\partial^4 y_k}{\partial x^4}(x_j)\,\Delta x^2 + O(\Delta x^4)$

for the second derivative with respect to x. In terms of the auxiliary quantities

(21) $d^k_j := 1 - \frac{\Delta x^2}{12}\,g^k_j$

and

(22) $w^k_j := d^k_j\,y^k_j - \frac{\Delta x^2}{12}\,f^k_j$

one then gets a three-term recursion relation of the form

(23) $w^k_{j+1} + w^k_{j-1} = \left(2 + \Delta x^2\,\frac{g^k_j}{d^k_j}\right)w^k_j + \Delta x^2\,\frac{f^k_j}{d^k_j}$

that approximates the original Schrödinger equation (8) to order $O(\Delta t^2)$ in time and $O(\Delta x^6)$ in space.

The procedure is then as follows. With $\psi^k_j$ given from the k-th time step, $f^k_j$ is determined using (19). The potential $\lambda(t)\,x^4$ fixes $g^k_j$ according to (18) and, subsequently, $d^k_j$ via (21). The recursion (23) therefore allows one to determine all $w^k_j$ from two initial values, $w^k_0$ and $w^k_1$. From the $w^k_j$, we find the $y^k_j$ using (22), which via (16) gives $\psi^{k+1}_j$, the wave function at the next time step:

(24) $\frac{w^k_j}{d^k_j} + \frac{\Delta x^2}{12}\,\frac{f^k_j}{d^k_j} = y^k_j = \psi^{k+1}_j + \psi^k_j.$

The final dodge of the algorithm concerns the point that (17) has to be solved as a boundary value problem, with $\psi^k_j$ (and therefore also $w^k_j$) given for j = 0 and j = J, rather than as an initial value problem fixing $w^k_j$ for j = 0 and j = 1. As detailed in [17], this problem can be dealt with by introducing yet another set of variables $q^k_j$ and $e^k_j$ according to

(25) $w^k_{j+1} =: e^k_j\,w^k_j + q^k_j,$

which transforms the recursion (23) into

(26) $e^k_j = 2 + \Delta x^2\,\frac{g^k_j}{d^k_j} - \frac{1}{e^k_{j-1}}$

(27) $q^k_j = \Delta x^2\,\frac{f^k_j}{d^k_j} + \frac{q^k_{j-1}}{e^k_{j-1}}.$

The initial conditions $e^k_0 \to \infty$ (in practice a very large number) and $q^k_0$ finite ensure $w^k_0 = 0$ and allow one to successively calculate all $e^k_j$ and $q^k_j$ using (26) and (27). From these, all $w^k_j$ can be determined using (25) by starting with $w^k_J = 0$ and decreasing j in each step down to j = 1.
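To make the sweep structure concrete, the following Python sketch implements one Crank–Nicolson/Numerov time step (16)–(27) on a uniform grid with Dirichlet boundary conditions, $\psi = 0$ at both ends. It is a minimal illustration, not the authors' code; the function name, the grid handling, and the choice of a large but finite $e^k_0$ are ours.

```python
import numpy as np

def crank_nicolson_numerov_step(psi, lam_k, x, dt, dx):
    """One time step of i d(psi)/dt = -psi'' + lam_k x^4 psi via eqs. (16)-(27)."""
    J = len(x) - 1
    g = lam_k * x**4 - 2j / dt                 # eq. (18)
    f = 4j / dt * psi                          # eq. (19)
    d = 1.0 - dx**2 / 12.0 * g                 # eq. (21)

    # forward sweep, eqs. (26) and (27); a huge e_0 enforces w_0 = 0
    e = np.empty(J + 1, dtype=complex)
    q = np.empty(J + 1, dtype=complex)
    e[0], q[0] = 1e30, 0.0
    for j in range(1, J + 1):
        e[j] = 2.0 + dx**2 * g[j] / d[j] - 1.0 / e[j - 1]
        q[j] = dx**2 * f[j] / d[j] + q[j - 1] / e[j - 1]

    # backward sweep, eq. (25): w_j = (w_{j+1} - q_j) / e_j, starting from w_J = 0
    w = np.zeros(J + 1, dtype=complex)
    for j in range(J - 1, 0, -1):
        w[j] = (w[j + 1] - q[j]) / e[j]

    # recover y_j from eq. (24) and the new wave function from eq. (16)
    y = w / d + dx**2 / 12.0 * f / d
    psi_new = y - psi
    psi_new[0] = psi_new[-1] = 0.0             # Dirichlet boundaries
    return psi_new
```

Propagation from t = 0 to $t_f$ then simply consists of calling this step K times with $\lambda_k = \lambda(k\Delta t)$ from the protocol (6), and the transition probabilities (5) follow by projecting the final wave function onto the eigenfunctions of $H(t_f)$.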

3.2 Expansion in Instantaneous Eigenstates

The second method to solve the Schrödinger equation (8) builds on an ansatz for the wave function [13]. We introduce the eigenstates $|\psi_n(\lambda_t)\rangle$ and corresponding eigenvalues $E_n(\lambda_t)$ of the Hamiltonian $H(t)$ for a fixed value $\lambda_t$ of the parameter $\lambda(t)$,

(28) $H(t)\,|\psi_n(\lambda_t)\rangle = E_n(\lambda_t)\,|\psi_n(\lambda_t)\rangle.$

Next, we expand the state $|\psi(t)\rangle$ at time t in the basis of these instantaneous eigenstates according to

(29) $|\psi(t)\rangle = \sum_{n=1}^{N} c_n(t)\,|\psi_n(\lambda_t)\rangle\,e^{-i\alpha_n(t)},$

where

(30) $\alpha_n(t) := \int_0^t \mathrm{d}t'\,E_n(\lambda_{t'}).$

The maximal value N of n in this expansion is a crucial parameter for the accuracy of this method (cf. Section 4). Plugging (29) into the Schrödinger equation (8) transforms the latter into a set of linear ordinary differential equations for the time evolution of the expansion coefficients $c_n(t)$:

(31) $\partial_t c_n(t) = -\dot{\lambda}_t \sum_{l} c_l(t)\,\Big\langle\psi_n(\lambda_t)\Big|\frac{\partial\psi_l(\lambda_t)}{\partial\lambda_t}\Big\rangle\,e^{i(\alpha_n(t) - \alpha_l(t))} = -\dot{\lambda}_t \sum_{l \neq n} c_l(t)\,\frac{\langle\psi_n(\lambda_t)|x^4|\psi_l(\lambda_t)\rangle}{E_l(\lambda_t) - E_n(\lambda_t)}\,e^{i(\alpha_n(t) - \alpha_l(t))}.$

Here, the second line follows from (28) and the Hellmann–Feynman theorem.

To solve this system of coupled ordinary differential equations, we need the matrix elements $\langle\psi_n(\lambda_t)|x^4|\psi_l(\lambda_t)\rangle$ and the instantaneous eigenvalues $E_l(\lambda_t)$ for each value of λ. Since the stationary Schrödinger equation is not analytically solvable, this can be a time-consuming numerical task. However, if we rescale the space coordinate x in the initial, i.e. $\lambda_t = 1$, Schrödinger equation (28)

(32) $-\partial_x^2\,\psi_l(x,1) + x^4\,\psi_l(x,1) = E_l(1)\,\psi_l(x,1)$

according to

(33) $x = \lambda_t^{1/6}\,\tilde{x}$

we are left with

(34) $-\partial_{\tilde{x}}^2\,\psi_l(\lambda_t^{1/6}\tilde{x},\,1) + \lambda_t\,\tilde{x}^4\,\psi_l(\lambda_t^{1/6}\tilde{x},\,1) = \lambda_t^{1/3}\,E_l(1)\,\psi_l(\lambda_t^{1/6}\tilde{x},\,1).$

Adapting the normalisation of the eigenfunctions to the rescaled coordinate we therefore find the simple mappings

(35) $\psi_l(x, \lambda_t) = \lambda_t^{1/12}\,\psi_l(\lambda_t^{1/6}x,\,1),$

(36) $E_l(\lambda_t) = \lambda_t^{1/3}\,E_l(1).$

These in turn allow us to rewrite the system (31) in the form

(37) $\partial_t c_n(t) = -\frac{\dot{\lambda}_t}{\lambda_t}\sum_{l \neq n} c_l(t)\,\frac{\langle\psi_n(1)|x^4|\psi_l(1)\rangle}{E_l(1) - E_n(1)}\;e^{i(E_n(1) - E_l(1))\,\gamma(t)}$

with

(38) $\gamma(t) := \int_0^t \mathrm{d}t'\,\lambda_{t'}^{1/3}.$

Hence, we have to solve the stationary Schrödinger equation only once, namely for $\lambda_t = 1$, which can be done, e.g. with the matrix Numerov method [18]. Plugging the results for $|\psi_l(1)\rangle$ and $E_l(1)$ into (37), we obtain a completely determined system of coupled linear differential equations with time-dependent coefficients for the expansion coefficients $c_n$, which has to be solved numerically. For a better comparison between the two methods, we use a second-order Runge–Kutta algorithm for this task, which has the same order of accuracy in the time step as the Crank–Nicolson method.
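A compact sketch of this second route is given below (again an illustration with our own names, not the authors' code). It assumes that the matrix elements X4[n, l] $= \langle\psi_n(1)|x^4|\psi_l(1)\rangle$ and the eigenvalues E1[l] $= E_l(1)$ have already been obtained from the $\lambda_t = 1$ eigenvalue problem, e.g. with the matrix Numerov method, and it uses the closed form of (38) for the linear protocol (6), $\gamma(t) = \tfrac{3 t_f}{4(\lambda_f - 1)}\big(\lambda_t^{4/3} - 1\big)$.

```python
import numpy as np

def expansion_coefficients(m, X4, E1, t_f=0.5, lam_f=5.0, dt=2.5e-5):
    """Integrate the coefficient equations (37) with an explicit midpoint (RK2) rule.
    Initial condition c_n(0) = delta_{nm}; returns c_n(t_f), so P(n|m) = |c_n(t_f)|^2."""
    N = len(E1)
    dE = E1[:, None] - E1[None, :]             # dE[n, l] = E_n(1) - E_l(1)
    K = np.zeros((N, N))
    off = ~np.eye(N, dtype=bool)
    K[off] = X4[off] / (-dE[off])              # <n|x^4|l> / (E_l(1) - E_n(1)), l != n

    lam = lambda t: 1.0 + (lam_f - 1.0) * t / t_f
    lam_dot = (lam_f - 1.0) / t_f
    gamma = lambda t: 3.0 * t_f / (4.0 * (lam_f - 1.0)) * (lam(t) ** (4.0 / 3.0) - 1.0)

    def rhs(t, c):                             # right-hand side of (37)
        phase = np.exp(1j * dE * gamma(t))     # e^{i (E_n(1) - E_l(1)) gamma(t)}
        return -(lam_dot / lam(t)) * ((K * phase) @ c)

    c = np.zeros(N, dtype=complex)
    c[m] = 1.0
    t = 0.0
    while t < t_f - 1e-12:
        h = min(dt, t_f - t)
        k1 = rhs(t, c)
        c = c + h * rhs(t + 0.5 * h, c + 0.5 * h * k1)
        t += h
    return c
```

The number of retained modes is simply the length of E1, i.e. the cutoff N of the expansion (29) discussed in the next section.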

4 Comparison of Results

In the present section, we compare the two numerical methods described above with respect to their accuracy as a function of the various parameters. To this end, we study four representative examples of transition probabilities corresponding to the combinations (n, m) = (46, 50), (50, 50), (58, 50) and (50, 46). Their respective numerically exact values, obtained with the parameter combinations detailed below, are compiled in Table 1. We then change the parameters, e.g. so as to reduce the necessary computation time, and monitor the resulting deviations $\Delta P(n|m)$ from these reference values.

Table 1:

Quantum mechanical and semi-classical transition probabilities $P(n|m)$.

n    m    P_qm(n|m)   P_sc(n|m)   P_ad(n|m)
46   50   0.2695      0.2626      0.0023
50   50   0.1090      0.1089      0.8222
58   50   0.0449      0.0194      2.8·10⁻⁶
50   46   0.2010      0.2009      0.0026

The numerically exact quantum mechanical values $P_{\mathrm{qm}}(n|m)$ were determined by the Crank–Nicolson method with $\Delta x = 7.5\cdot 10^{-4}$ and $\Delta t = 10^{-6}$. The expansion method yields exactly the same values for the digits shown. The semi-classical results $P_{\mathrm{sc}}(n|m)$ were obtained using $10^5$ classical trajectories. The last column, $P_{\mathrm{ad}}(n|m)$, shows the Crank–Nicolson values for a slower process with $t_f = 2.5$.

Let us first look at the influence of the temporal and spatial resolution on the results. Figure 1 shows the dependence of the relative accuracy $\Delta P(n|m)/P(n|m)$ on the spatial step size $\Delta x$. Irrespective of the specific values of m and n, both methods yield accurate results already for $\Delta x \lesssim 10^{-2}$. This is due to the high convergence rate of the Numerov method discussed in Section 3.

Figure 1: Relative accuracy $\Delta P(n|m)/P(n|m)$ of the transition probabilities as a function of the spatial resolution $\Delta x$ for the Crank–Nicolson (top) and the expansion method (bottom). The other parameters are $\Delta t = 2.5\cdot 10^{-5}$ and N = 200.

The dependence on the time step $\Delta t$ is qualitatively similar, but now step sizes of the order of $\Delta t \sim 10^{-4}$ or smaller are necessary to obtain reliable results. This reflects the fact that the chosen time-integration methods are only accurate to order $\Delta t^2$. Nevertheless, both methods are accurate and reliable for sufficiently small increments in x and t.

The plateau values to which $\Delta P/P$ converges in Figures 1 and 2 for small step sizes depend on the value of the complementary parameter, $\Delta t$ and $\Delta x$, respectively, and for the expansion method also on N. To illustrate the interplay of these parameters, Figure 3 shows heat maps of the accuracy in the $\Delta x$–$\Delta t$ plane. The qualitative behaviour is as expected: both $\Delta x$ and $\Delta t$ have to be sufficiently small to obtain high accuracy. But the figure also offers quantitative information. It is again clearly seen that $\Delta t$ has to be significantly smaller than $\Delta x$ to reach a desired accuracy. Moreover, comparison of the two panels shows that for higher values of m and n, a finer resolution in x and t is necessary to achieve the same accuracy. This is in line with intuition, since higher values of m and n imply shorter wavelengths and higher frequencies of the participating states.

Figure 2: Relative accuracy $\Delta P(n|m)/P(n|m)$ of the transition probabilities as a function of the temporal resolution $\Delta t$ for the Crank–Nicolson (top) and the expansion method (bottom). The other parameters are $\Delta x = 10^{-3}$ and N = 200.

Figure 3: Heat maps of the accuracy of the Crank–Nicolson method for (n, m) = (50, 46) (top) and (n, m) = (58, 50) (bottom). Note that the colour code in both figures is the same.

When applying the expansion method, the cutoff N of the expansion (29) is an additional important parameter affecting the accuracy of the results. Although it is obvious that N has to exceed both m and n, it is not so clear by how much, since during the driving of the system between t = 0 and $t = t_f$, a proper approximation of its state $|\psi(t)\rangle$ may well require eigenstates $|\psi_l\rangle$ in the superposition (29) with values of l significantly larger than n and m. Figure 4 gives an impression of how the relative accuracies of the considered transition probabilities depend on N. It shows that in the present context, it is sufficient to include roughly 20 more modes than the maximum of n and m. Note the large error in $P(58|50)$ if N remains below this threshold.

Figure 4: Relative accuracy of the transition probabilities resulting from the expansion method as a function of the number N of modes included in the expansion (29). The other parameters are $\Delta x = 10^{-3}$ and $\Delta t = 2.5\cdot 10^{-5}$.

Guided by the results displayed above, we have chosen the parameters for the numerically exact reference values as given in the caption of Table 1. In the last column of this table, we have included the values of the transition probabilities for $t_f = 2.5$ instead of $t_f = 0.5$, i.e. for five times slower driving. These values show the approach of the transition probabilities to $P(n|m) = \delta_{nm}$, as required by the adiabatic theorem of quantum mechanics, and therefore serve as an additional independent check of our numerical code.
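The accuracy scans underlying Figures 1–3 amount to a simple parameter sweep. A possible skeleton is sketched below; compute_P is a hypothetical wrapper around either of the two solvers described in Section 3 that returns $P(n|m)$ for given resolutions, and the grids of step sizes are illustrative, not those of the figures.

```python
import numpy as np

def accuracy_scan(n, m, P_ref, compute_P,
                  dx_values=np.logspace(-3, -1, 9),
                  dt_values=np.logspace(-6, -3, 10)):
    """Relative deviation |P - P_ref| / P_ref on a grid of (dx, dt) values,
    i.e. the raw data for heat maps like those in Figure 3."""
    err = np.empty((len(dt_values), len(dx_values)))
    for i, dt in enumerate(dt_values):
        for j, dx in enumerate(dx_values):
            err[i, j] = abs(compute_P(n, m, dx, dt) - P_ref) / P_ref
    return err
```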

5 Semi-Classical Analysis

In addition to the numerical methods discussed above, we give in the following a short report on a semi-classical treatment of the problem. Quite generally, semi-classical methods yield approximations to quantum mechanical results by building on classical trajectories [19]. For the problem at hand, the classical equations of motion have again to be solved numerically.

To obtain a semi-classical approximation for the transition probabilities (5), we need semi-classical expressions for both the eigenfunctions $\psi^{(f)}_n$ of the final Hamiltonian and the state $|\psi_m(t_f)\rangle = U(t_f,0)\,|\psi^{(0)}_m\rangle$ into which the initial state $|\psi^{(0)}_m\rangle$ has evolved at the end of the driving. Following [13], we implement WKB wave functions of the form

(39) $\psi^{(t_f)}_{m,\mathrm{sc}}(x) = \sum_b \sqrt{\rho^{(t_f)}_b(x)}\;\exp\!\left[\frac{i}{\hbar}S^{(t_f)}_b(x) - \frac{i\pi}{2}\,\mu^{(t_f)}_b\right],$

(40) $\psi^{(f)}_{n,\mathrm{sc}}(x) = \sum_b \sqrt{\rho^{(f)}_b(x)}\;\exp\!\left[\frac{i}{\hbar}S^{(f)}_b(x) - \frac{i\pi}{2}\,\mu^{(f)}_b\right],$

where $\rho^{(t_f)}_b(x)$ and $\rho^{(f)}_b(x)$ denote the classical phase space densities corresponding to the states $|\psi_m(t_f)\rangle$ and $|\psi^{(f)}_n\rangle$, $S^{(t_f)}_b(x)$ and $S^{(f)}_b(x)$ are their classical actions, and $\mu^{(t_f)}_b$ and $\mu^{(f)}_b$ denote the Maslov indices labelling the different possible branches b of classical transitions [19]. For notational simplicity, we have suppressed the dependence on m and n on the r.h.s. of (39) and (40), respectively.

The scalar product in (5) then acquires the form

(41) $\langle\psi^{(f)}_{n,\mathrm{sc}}|\psi^{(t_f)}_{m,\mathrm{sc}}\rangle = \sum_{b,b'}\int\mathrm{d}x\;\sqrt{\rho^{(f)}_b\,\rho^{(t_f)}_{b'}}\;e^{i\phi_{bb'}(x)},$

with

$\phi_{bb'}(x) = \frac{1}{\hbar}\left(S^{(t_f)}_{b'}(x) - S^{(f)}_b(x)\right) - \frac{\pi}{2}\left(\mu^{(t_f)}_{b'} - \mu^{(f)}_b\right).$

In a semi-classical analysis, one is interested in the leading behaviour for $\hbar \to 0$; the integral in (41) may therefore be performed by the stationary phase approximation. Because of

(42) $\frac{\mathrm{d}S}{\mathrm{d}x} = p(x),$

where $p(x)$ stands for the classical momentum as a function of position, the condition

(43) $\frac{\mathrm{d}}{\mathrm{d}x}\,\phi_{bb'}(x_s) = 0$

for the stationary point $x_s$ translates to leading order into

(44) $p^{(t_f)}_{b'}(x_s) = p^{(f)}_b(x_s).$

The stationary points $x_s$ are therefore nothing but the crossing points between the final classical orbit in phase space and the curve $H_{\mathrm{cl}}(x,p) = E_n$. Here, $H_{\mathrm{cl}}$ denotes the classical Hamilton function.

Expanding the densities $\rho^{(f)}_b(x)$ and $\rho^{(t_f)}_{b'}(x)$ as well as $\phi_{bb'}(x)$ to second order in $(x - x_s)$, we may replace the sum over the branches b and b′ by a sum over the crossing points and find to leading order in ℏ

$\langle\psi^{(f)}_{n,\mathrm{sc}}|\psi^{(t_f)}_{m,\mathrm{sc}}\rangle = \sum_s \sqrt{\rho^{(f)}_s\,\rho^{(t_f)}_s}\;e^{i\phi_s}\int\mathrm{d}x\;e^{\frac{i\kappa_s}{2\hbar}(x - x_s)^2} = \sum_s \sqrt{\rho^{(f)}_s\,\rho^{(t_f)}_s}\;e^{i\phi_s}\,\sqrt{\frac{2\pi\hbar}{|\kappa_s|}}\;e^{i\frac{\pi}{4}\mathrm{sgn}(\kappa_s)},$

with

$\rho^{(f)}_s = \rho^{(f)}_b(x_s), \qquad \rho^{(t_f)}_s = \rho^{(t_f)}_{b'}(x_s), \qquad \phi_s = \frac{1}{\hbar}\left(S^{(t_f)}_{b'}(x_s) - S^{(f)}_b(x_s)\right) - \frac{\pi}{2}\left(\mu^{(t_f)}_{b'} - \mu^{(f)}_b\right), \qquad \kappa_s = \frac{\mathrm{d}^2 S^{(t_f)}_{b'}}{\mathrm{d}x^2}(x_s) - \frac{\mathrm{d}^2 S^{(f)}_b}{\mathrm{d}x^2}(x_s) = \frac{\mathrm{d}p^{(t_f)}_{b'}}{\mathrm{d}x}(x_s) - \frac{\mathrm{d}p^{(f)}_b}{\mathrm{d}x}(x_s).$

Here, b and b′ denote the branches corresponding to the crossing point xs.

Finally, the transition probabilities are given by

(45) $P(n|m) = \big|\langle\psi^{(f)}_{n,\mathrm{sc}}|\psi^{(t_f)}_{m,\mathrm{sc}}\rangle\big|^2 = \Big|\sum_s a_s\,e^{i\theta_s}\Big|^2,$

with

$a_s = \sqrt{\frac{2\pi\hbar}{|\kappa_s|}\,\rho^{(f)}_s\,\rho^{(t_f)}_s}, \qquad \theta_s = \phi_s + \frac{\pi}{4}\,\mathrm{sgn}(\kappa_s).$

The amplitudes $a_s$ and phases $\theta_s$ are fully determined by the properties of the classical trajectories at the intersection points $x_s$. Integrating the classical equations of motion numerically for a certain number of different initial conditions corresponding to the same initial energy, $H_{\mathrm{cl}}(x,p) = E^{(0)}_m$, we determine the values of $\phi_s$ and $\kappa_s$ and produce reliable estimates for the densities $\rho^{(f)}_s$ and $\rho^{(t_f)}_s$. Table 1 shows results for the transition probabilities obtained in this way using $10^5$ initial conditions and compares them to the findings of Section 3.
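The numerically demanding ingredient is the propagation of the classical trajectories through the driving. The sketch below (our own illustration; the sampling of the initial energy shell, the leapfrog integrator, and all names are ours) generates the final phase-space curve whose intersections with $H_{\mathrm{cl}}(x,p) = E^{(f)}_n$ are the crossing points $x_s$; extracting $\rho_s$, $\phi_s$, and $\kappa_s$ from these data, and assembling $P(n|m) = |\sum_s a_s e^{i\theta_s}|^2$, is not shown.

```python
import numpy as np

def leapfrog(x, p, lam_of_t, t0, dt, n_steps):
    """Leapfrog integration of xdot = 2p, pdot = -4*lambda(t)*x^3,
    the classical equations of motion for H_cl = p^2 + lambda(t) x^4."""
    for k in range(n_steps):
        t = t0 + k * dt
        p = p - 2.0 * lam_of_t(t) * x**3 * dt        # half kick: -4*lam*x^3 * dt/2
        x = x + 2.0 * p * dt                         # drift: xdot = 2p
        p = p - 2.0 * lam_of_t(t + dt) * x**3 * dt   # half kick
    return x, p

def final_phase_space_curve(E0, t_f=0.5, lam_f=5.0, n_traj=10_000, n_steps=20_000):
    """Drive n_traj classical trajectories starting on the shell p^2 + x^4 = E0
    through lambda(t) = 1 + (lam_f - 1) t / t_f and return their final (x, p)."""
    # sample the initial shell uniformly in time along one unperturbed orbit
    T0 = 2.622 * E0 ** (-0.25)                       # period (10) at lambda = 1
    xs = np.empty(n_traj)
    ps = np.empty(n_traj)
    x, p = E0 ** 0.25, 0.0                           # start at the right turning point
    for i in range(n_traj):
        xs[i], ps[i] = x, p
        x, p = leapfrog(x, p, lambda t: 1.0, 0.0, T0 / n_traj / 50, 50)
    # propagate all points through the driving protocol
    lam = lambda t: 1.0 + (lam_f - 1.0) * t / t_f
    return leapfrog(xs, ps, lam, 0.0, t_f / n_steps, n_steps)
```

With $E_0 = E^{(0)}_m$ from (11) and of the order of $10^5$ trajectories, as used for Table 1, the returned curve is dense enough to locate the crossing points and to estimate the local densities by binning.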

For $P(46|50)$, $P(50|50)$, and also for $P(50|46)$, the results from the numerical solution of the Schrödinger equation and those coming from the semi-classical analysis agree rather well. For $P(58|50)$, there is a larger discrepancy. It is due to the fact that, classically, only a finite interval of energy transfers can be accomplished, and the difference $E^{(f)}_{58} - E^{(0)}_{50}$ lies just outside this energy window. Building solely on classical trajectories, the semi-classical method as described above must then fail. This is a standard problem of the WKB method that can be remedied by a higher-order expansion of $\phi_{bb'}$, resulting in Airy-tails of the semi-classical wave function extending into the classically forbidden region. Proceeding as described in detail in Section V of [13], we get in this way the refined semi-classical value $P_{\mathrm{sc}}(58|50) = 0.0452$ that agrees much better with the corresponding result of Section 3.

In some circumstances, the semi-classical limit $\hbar \to 0$ is found to be tantamount to the adiabatic limit $t_f \to \infty$. This seems to make a WKB treatment of non-diagonal transition probabilities $P(n|m)$, $n \neq m$, questionable. However, ℏ cannot be compared with a time scale as such, but only with the product of a time scale and a characteristic energy. By considering rather large energy levels, $m \simeq 50$, we chose this energy scale large enough to keep the small-ℏ limit compatible with a comparatively short driving time $t_f$.

6 Conclusion

In the present paper, we have studied two numerical methods to determine the transition amplitude between energy eigenstates of a simple quantum system driven by a time-dependent external protocol. The corresponding transition probability is the pivotal quantity to compile the work statistics of the system, which in turn determines many of its thermodynamic properties. Our motivation was two-fold: on the one hand, we systematically investigated the accuracy of both methods as function of their parameters, in particular of the temporal and spatial resolution used in the discretisation. On the other hand, we compared the two methods with each other with respect to the numerical effort involved.

Technically speaking, the problem consists of solving the time-dependent Schrödinger equation for the one-dimensional model system. No analytical solution is available. Our first procedure implements the standard Crank–Nicolson method for partial differential equations. The second one builds on an expansion of the quantum state into a superposition of instantaneous eigenstates of the Hamiltonian.

Implementation of the Crank–Nicolson method gives rise to a rather universal solver for the initial value problem of the Schrödinger equation with Dirichlet boundary conditions. It is easily implemented, reliable, and yields accurate results. However, it is a purely mathematical prescription to solve a partial differential equation. There is no way to improve its performance by using additional physical insight into the dynamics of the system.

Expanding the searched-for wave function in the instantaneous eigenfunctions of the system, on the other hand, allows one to calculate part of its time dependence analytically. Moreover, studying the dependence of the results on the number of states included provides valuable information on which states contribute substantially to a specific transition and which do not. On the downside, this method requires solving the stationary Schrödinger equation of the system for each value of the protocol parameter $\lambda(t)$ separately. As far as numerical effort is concerned, the method is hence only competitive with the Crank–Nicolson method if a way is found to circumvent the repeated numerical solution of the eigenvalue problem of the Hamiltonian. For our system, this was possible by a scaling transformation, as explained in Section 3.

In both methods, we found the Numerov prescription for the spatial discretisation extremely valuable. With only little additional effort, the accuracy is considerably improved. As a result, rather large spatial step sizes Δ x could be used, which reduced the computation time substantially.

Given appropriate values of their parameters, both methods gave consistent results for the transition probabilities considered. In order to additionally verify their correctness, we compared them with the outcome of a semi-classical analysis using the WKB method. Again, very good agreement was obtained if Airy-tails extending into the classically forbidden region were taken into account.

Acknowledgement

This work has been funded by the Deutsche Forschungsgemeinschaft (DFG; German Research Foundation) – 397082825. We would like to thank Vincent Preut, Tim Utermöhlen, and the members of the DFG Research Unit FOR2692 for fruitful discussions.

References

[1] C. Jarzynski, Annu. Rev. Condens. Matter Phys. 2, 329 (2011).

[2] U. Seifert, Rep. Prog. Phys. 75, 126001 (2012).

[3] K. Sekimoto, Stochastic Energetics, Springer, Berlin 2010.

[4] T. Schmiedl and U. Seifert, EPL 81, 20003 (2008).

[5] M. Esposito, R. Kawai, K. Lindenberg, and C. van den Broeck, Phys. Rev. E 81, 041106 (2010).

[6] P. Chvosta, M. Einax, V. Holubec, A. Ryabov, and P. Maass, J. Stat. Mech. Theor. Exp. 2010, P03002 (2010). http://dx.doi.org/10.1088/1742-5468/2010/03/P03002.

[7] C. Chipot and A. Pohorille, Free Energy Calculations: Theory and Applications in Chemistry and Biology, Springer, Berlin 2007.

[8] J. M. P. Parrondo, J. M. Horowitz, and T. Sagawa, Nat. Phys. 11, 131 (2015).

[9] S. Yukawa, J. Phys. Soc. Jpn. 69, 2367 (2000).

[10] A. Engel and R. Nolte, EPL 79, 10003 (2007).

[11] J. Gemmer, M. Michel, and G. Mahler, Quantum Thermodynamics, Springer, Berlin 2009.

[12] M. Campisi, P. Hänggi, and P. Talkner, Rev. Mod. Phys. 83, 771 (2011).

[13] C. Jarzynski, H. T. Quan, and S. Rahav, Phys. Rev. X 5, 031038 (2015).

[14] L. D. Landau and E. M. Lifshitz, Course of Theoretical Physics III: Quantum Mechanics, Non-Relativistic Theory, Butterworth-Heinemann, Oxford 2005.

[15] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes, Cambridge U. P., Cambridge 2007.

[16] C. A. Moyer, Am. J. Phys. 72, 351 (2004).

[17] A. Goldberg, H. M. Schey, and J. L. Schwartz, Am. J. Phys. 35, 177 (1967).

[18] M. Pillai, J. Goglio, and T. G. Walker, Am. J. Phys. 80, 1017 (2012).

[19] R. G. Littlejohn, J. Stat. Phys. 68, 7 (1992).

Received: 2020-01-17
Accepted: 2020-02-23
Published Online: 2020-04-08
Published in Print: 2020-05-26

© 2020 Walter de Gruyter GmbH, Berlin/Boston