
On the connection between bound and scattering states of finite square-well potentials: a unified approach


Published 26 January 2021 © 2021 European Physical Society
Citation: Ian Morrison et al 2021 Eur. J. Phys. 42 025405. DOI: 10.1088/1361-6404/abcc40

0143-0807/42/2/025405

Abstract

We discuss a general description of the solutions to the 1D time-independent Schrödinger equation that does not a priori distinguish between scattering states and bound states, emphasizing and reinforcing their relationship and connection to each other. This manuscript also introduces the concept of transfer matrices, presented as a logical extension of the traditional approach to evaluating 1D potentials. Using the transfer matrix method together with a finite-step approximation allows for a simple and straightforward numerical solution of arbitrary 1D potentials. It also separates the process of solving the Schrödinger equation from that of selecting physically relevant solutions, which is a critical skill in quantum mechanics and is at the core of physics problems in general.


1. Introduction

One-dimensional (1D) finite potential wells form an essential part of any introduction to quantum mechanics. Such models are used extensively in graduate-level quantum mechanics; they are also featured heavily in modern research and industrial applications of quantum mechanics. Although they are highly simplified models of real quantum-mechanical systems, 1D finite potentials remain technically challenging. It is well known that certain aspects of these models cannot be solved exactly. For instance, even in one of the simplest examples, the finite square-well, the exact bound state energy spectrum is not known in closed form; one must resort to numerical methods, typically involving transcendental equations [1–12]. In addition, common textbook approaches to finite square-well problems employ different techniques for obtaining the wave functions of bound and scattering states. This is useful for getting students comfortable with the complex and counterintuitive aspects of these systems; however, used alone, these approaches can hide interesting physical aspects of the problem, and as a result the analysis of generic 1D potential landscapes is generally inaccessible to undergraduates.

In the following, we present a treatment of 1D potential wells that offers several pedagogical enhancements to common textbook approaches. Our approach is based upon the formalism of transfer matrices [13, 14]. While this approach is certainly not new, it is not commonly employed at the level of a first undergraduate course in quantum mechanics. Our transfer matrix approach to 1D finite potential wells has the following benefits:

  • (a)  
    It cleanly separates the process of solving the Schrödinger equation from the process of selecting physically relevant solutions. The authors have found that the introduction of this approach after courses in classical mechanics and mathematical methods in physics, in particular, helps the students to see the similarity between solving the Schrödinger equation and solving other boundary value physics problems involving differential equations such as the heat diffusion or wave equations. The fundamental fact that the Schrödinger equation is simply a differential equation subject to the same mathematical rules and procedures used to tackle classical physics problems, but where additional conditions are imposed on the solutions due to the postulates of quantum mechanics, can sometimes be overshadowed by the 'mysticism' of quantum theory.
  • (b)  
    It addresses bound and scattering states in a unified manner, which allows for their connection to be seen more transparently.
  • (c)  
    It allows for a relatively simple unified numerical implementation that does not fundamentally distinguish between bound and scattering states.
  • (d)  
    While our approach pays dividends even for the simple finite square-well potential, it can easily be extended to tackle any discretized 1D potential well [16].
  • (e)  
    It has parallels with treatments used in more advanced quantum theory and classical optics. In particular, the transfer matrix is a common ingredient in graduate-level quantum mechanics and optics courses. It is used to determine the transmission and reflection properties of stacked thin-films [19, 20], and it plays a crucial role in describing integrable quantum systems [21]. It is also closely related to the monodromy matrix formalism of integrable field theory [22]. Thus, incorporating transfer matrices into undergraduate classrooms can set a foundation for advanced study in the future.

Our approach has antecedents in the works of Walker, Gilmore, Baym, and others [13, 14, 23–26]. However, these works involve a significant departure from the standard notation and intuition developed in many modern undergraduate quantum mechanics textbooks. Here the transfer matrix method is presented, somewhat through exploration (achieved in the classroom through inquiry-driven assignments and guided class discussions), as an extension of the conventional approach, and it maintains a consistent notation and formalism. In this way, we aim to provide a bridge from a textbook presentation of the finite square-well potential to a transfer matrix technique that may be applied to any 1D finite potential.

We stress that the presented work is based on implementation; the authors have used this technique in the classroom for over a decade. In addition to the benefits listed above, the authors have found that this method allows the students to come to a deeper appreciation for the connection between calculus, Fourier analysis, and more advanced physics problems. For example, most students seem to find the idea that curved potentials can be approximated as a series of square steps, where the analytic solutions can be approximated by a series of sinusoidal or exponential functions, to be unexpected and intriguing. Moreover, given the importance of computational skills in contemporary physics research, there has been an ongoing effort in the physics education community to increase the degree of computation in physics curricula. In this spirit, the presented method relies heavily on computations and simulations, primarily performed in Mathematica™, to allow easier visualization of otherwise tedious calculations. To facilitate easy integration of this approach into the classroom, annotated versions of all Mathematica™ files discussed or used to create figures are provided in the online supplementary materials (https://stacks.iop.org/EJP/42/025405/mmedia).

This paper is organized to mirror the classroom treatment developed and used by the authors. We begin in section 2 by reviewing the concrete example of a particle in a finite square-well potential. We use this section to initiate our discussion using language that is familiar to introductory quantum mechanics students and use numerical values similar to those found in undergraduate modern physics and quantum mechanics textbook problems. The authors have found that this firm foundation facilitates the transition to the more abstract concepts of transfer matrices and later to the use of dimensionless variables. In section 2.1, we compute the transmission coefficient as a function of energy and note that this quantity still has meaning for energies less than zero, i.e., for energies that do not correspond to scattering states. Through exploration, we show that poles in the transmission coefficient correspond to bound state energies [13]. In section 2.2, we continue to use the transmission coefficient to uncover the relationship between scattering resonances and bound states. All of this serves to motivate a more unified approach to addressing scattering and bound states, which we present in section 3. In section 3.1, we derive the transfer matrix for a simple step potential. A key advantage of the transfer matrix approach is that the transfer matrices of several step potentials readily combine to produce the overall transfer matrix of any discrete potential. We then use this fact to re-examine the finite well potential in section 3.2. Finally, in section 3.3, we extend our discussion to arbitrary piecewise-linear 1D potentials and analyze the examples of an inverted 'cityscape' potential, as well as a discretized version of the symmetric Pöschl–Teller potential.

2. The symmetric finite square-well potential

2.1. Derivation and graphical analysis of the transmission coefficient

We start by writing the general solution of the time-independent Schrödinger equation for a symmetric potential well with depth Vo and width 2a, as shown in figure 1. For any coordinate interval where the potential has a constant value V1, the general solution for the time-independent Schrödinger equation is:

Equation (1)

Figure 1.

Figure 1. A diagram of a finite quantum well with a depth of Vo and width of 2a. The general solutions for the wave functions of an electron incident upon the well from the left, and with an energy above the top of the well, are also shown.


It is a standard textbook problem to calculate the transmission coefficient for a particle with an energy above the well [1, 2, 4, 7]. In this case, the general solution consists of three parts that can be written in the same form as equation (1), without assuming anything about the relative magnitude of the energy E compared to the value of the potential over each coordinate interval:

Equation (2)

where

Equation (3)

Equation (4)

Since Vo < 0, k2 will always be real if E is above the bottom of the well. The coefficients A, B, ..., G allow us to construct a variety of solutions subject to the conditions of continuity in Ψ(x) and Ψ'(x) at x = ±a, which provides four equations that relate these coefficients. In keeping with the fact that the Schrödinger equation is a 2nd order ordinary differential equation, one must also impose two boundary conditions at x = ±∞. Imposing the boundary conditions completely defines the coefficients. We choose to examine solutions where G = 0, a choice that ensures that the particle has positive momentum in region III [27]. In many publications and textbooks, this choice is usually described as necessary to ensure that all particles are incident from the left-hand side of the well. While dynamical language such as 'incident from the left' or 'traveling to the right' is commonly used when describing particles in scattering problems, one must remember that we are dealing with the time-independent Schrödinger equation. The wave function defined by equation (2) is a 'steady-state' solution for a particle as it could exist in a time-independent potential. Upon restoring the time-dependence of the wave function, solutions with G = 0 correspond to right-moving plane waves in region III.

Applying the continuity conditions for the wave functions and their derivatives at the boundaries yields the following four equations that relate the coefficients A, B, C, D, and F:

Equation (5)

Equation (6)

Equation (7)

Equation (8)

When E > 0, it is common to explore the effect of the potential on the particle by its transmission through the potential and its reflection off of the potential, which are quantitatively defined in terms of transmission and reflection coefficients. The transmission coefficient is defined as the ratio of the probability flux (or probability current) of the transmitted wave to the incident wave. Analogously, the reflection coefficient is defined as the ratio of the probability flux of the reflected wave to that of the incident wave. In one dimension, the probability flux for a plane wave, such as that of the incident component of our particle's wave function, is [4, 26]

Equation (9)

In our case, the transmission coefficient is then given by [28],

Equation (10)

When E is greater than zero, the wave function cannot be normalized, and it is defined up to an arbitrary constant. One often sets A = 1; we prefer not to do so, in the spirit of keeping this example as general as possible and to allow us to discuss the role of this coefficient explicitly later, so we let A take on any value except zero. Solving for T(E) with this condition on A yields the expression:

Equation (11)

Restricting ourselves to the case that Vo is negative in region II and substituting equations (3) and (4) into equation (11) yields,

Equation (12)

Let us first examine this equation with a concrete example to get a sense of the dimensionality of this type of problem before moving on to more general cases. Take an electron incident on a square-well potential of depth Vo = −9 eV and width 2a = 1 nm. There is nothing in the mathematical derivation of equation (12) that would limit it to the case where E > 0. Figure 2(a) shows a plot of equation (12) as a function of the electron's energy from 0 eV to 10 eV. The standard oscillatory behavior of the transmission into region III is observed, reaching T(E) = 1 for the energies corresponding to scattering resonances, where the particle has a 100% chance of making it into region III [1, 2, 23–26]. Figure 2(b) shows a plot of T(E) for E < 0, which has poles at a set of discrete energy values, with the spacing between these values increasing toward higher energies. For our case, the energy values where T(E) diverges are: −8.706 eV, −7.802 eV, −6.371 eV, −4.403 eV, and −2.020 eV. What do the energies where T(E) goes to infinity represent? It can be shown that each of these poles occurs at a bound state energy eigenvalue [13]. Some students do recognize that these values should be the energy eigenvalues, mostly due to the increased spacing at higher energies, which they recognize from the energy spectrum of the infinite square well; however, they typically cannot correctly explain why this is the case. So why do these poles represent bound states when the problem was initially solved assuming that the particle was above the well, and why do the poles take on values greater than 1 for E < 0?
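These poles are easy to reproduce numerically. The short Python sketch below is our own illustration (the paper's calculations are carried out in the supplementary Mathematica notebooks): it solves the matching conditions of equations (5)–(8) with A set to 1 purely for the numerics (only the ratio F/A matters) and G = 0, using complex wavenumbers so that the same linear system remains valid below the top of the well; the peak-finding bookkeeping at the end is ours.

```python
import numpy as np

# Constants (SI) and the well parameters used in the text: Vo = -9 eV, width 2a = 1 nm.
hbar, m_e, eV = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19
Vo, a = -9.0 * eV, 0.5e-9

def coefficients(E_eV):
    """Solve the matching conditions (equations (5)-(8)) for B, C, D, F with A = 1 and G = 0.
    Complex wavenumbers keep the algebra valid below E = 0."""
    E = E_eV * eV
    k1 = np.sqrt(2 * m_e * E / hbar**2 + 0j)         # regions I and III
    k2 = np.sqrt(2 * m_e * (E - Vo) / hbar**2 + 0j)  # region II
    M = np.array([
        [np.exp(1j*k1*a),        -np.exp(-1j*k2*a),       -np.exp(1j*k2*a),         0],
        [-1j*k1*np.exp(1j*k1*a), -1j*k2*np.exp(-1j*k2*a),  1j*k2*np.exp(1j*k2*a),   0],
        [0,                       np.exp(1j*k2*a),         np.exp(-1j*k2*a),        -np.exp(1j*k1*a)],
        [0,                       1j*k2*np.exp(1j*k2*a),  -1j*k2*np.exp(-1j*k2*a),  -1j*k1*np.exp(1j*k1*a)],
    ])
    rhs = np.array([-np.exp(-1j*k1*a), -1j*k1*np.exp(-1j*k1*a), 0, 0])
    return np.linalg.solve(M, rhs)   # (B, C, D, F)

def T(E_eV):
    return abs(coefficients(E_eV)[3])**2   # T = |F/A|^2 with A = 1 (k3 = k1 here)

# Scan below the top of the well; the tallest local maxima of T approximate the poles.
E_grid = np.linspace(-8.99, -0.05, 40000)
T_vals = np.array([T(E) for E in E_grid])
peaks = np.where((T_vals[1:-1] > T_vals[:-2]) & (T_vals[1:-1] > T_vals[2:]))[0] + 1
top5 = sorted(peaks, key=lambda i: -T_vals[i])[:5]
print(sorted(E_grid[i] for i in top5))   # should approach -8.71, -7.80, -6.37, -4.40, -2.02 eV
```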

Figure 2.

Figure 2. (a) A plot of T(E) for an electron incident on a finite square-well of depth −9 eV over an energy range of 0 eV–10 eV, displaying typical oscillatory behavior. (b) A plot of T(E) for a range of electron energies below the well. There is a notable behavior where T(E) goes to infinity at discrete energies [29].


Although T(E) is typically interpreted as the 'transmission probability' for energies above the well and has an upper limit of unity due to the conservation of probability flux [4], it does not lose significance when E < 0; however, its interpretation requires explanation: in this example, T(E) is technically just a function that defines the ratio between the amplitudes of two solutions to the Schrödinger equation. When V0 < E < 0, the momenta related to k1 in the outer regions (I and III) become imaginary, i.e., k1 → i|k1|, and so the wave functions in these regions are real exponentials. For the overall solution to remain well-behaved in regions I and III, it must decay exponentially to the left and right of the well, respectively. This occurs when A = G = 0 and F = B ≠ 0. Our boundary condition that G = 0 takes care of region III, but A was assumed to be a non-zero value, and so the component of the solution associated with A will diverge as x → −∞. However, if A were to become zero for V0 < E < 0, the function T(E) would diverge, and this makes T(E < 0) a handy quantity, since the divergences indicate where A has vanished.

To understand how A can become zero, but only at certain energies, let us revisit equation (12). Upon inspection, the function T(E) will go to infinity when the term inside the brackets becomes zero. If we define everything inside of the brackets as β, then equation (12) becomes:

Equation (13)

Upon rearranging equation (13), we see that $A=\sqrt{\beta }\left\vert F\right\vert $, and under the condition that β = 0 and F is finite, A vanishes. We never preemptively imposed the physical requirement that A = 0, but this condition is naturally satisfied when β = 0. Consequently, any energy where β = 0 is the energy of an eigenstate, which allows one to go back to equation (2) and obtain the wave function for that eigenstate. It should be noted that the reflection coefficient for this potential would be defined as R(E) = |B/A|² [1, 2, 25] and while R(E) = 1 − T(E) for E > 0, R(E) diverges at the same energies as T(E) for E < 0.

Using the transmission coefficient to determine the eigenvalues of the square-well demonstrates the difference between solving the Schrödinger equation and picking relevant solutions. Equation (11) was derived under the mathematical restriction that the solutions to equation (2) and their derivatives were continuous across the boundaries. This yielded a continuum of mathematically valid solutions for electron energies less than zero. We never imposed the boundary condition that the wave function remained finite, and therefore most of these solutions are physically meaningless. However, for a specific subset of these solutions, the physical requirement of finiteness is naturally satisfied. In the conventional method, one picks out this subset of solutions when the physical condition that A = 0 and G = 0 is imposed. Essentially, one preemptively limits the solutions to only those that correspond to β = 0, before solving the problem for the bound state energies.

2.2. The origin of bound states and their wave functions

We have established that the poles in T(E) represent bound states, which means that the energy values where these occur can be used in equation (2) to obtain their associated wave functions; but how precisely does one need to know these energies? It turns out that a small numerical error in the energy eigenvalue (even if only in the precision of the value) can still result in solutions to equation (2) that are not well-behaved for all values of x. Figure 3 shows a plot of equation (2) for energies around the pole associated with the lowest bound state (E ∼ −8.706 eV). A divergence exists in region I (x < −0.5 nm) that becomes unacceptably close to the barrier of the potential well when the energy is off from the 'exact' pole by just 0.001 eV, or 0.01%. This subtlety is not typically addressed but is critical in making appropriate numerical approximations [13, 14, 23–26]. The function in region I is a superposition of two solutions with coefficients A and B. This divergence originates from the solution associated with A, which is usually set to zero in textbooks when dealing with bound states. However, as we mentioned above, it is unnecessary to set A to zero to get a 'well-behaved' wave function. The amplitude of the divergent component goes to zero naturally, so long as one uses a 'reasonably' precise numerical value for the bound state energy. The fact that true bound states exist only at infinitely precise energies in static problems is an essential point in quantum mechanics. When performing numerical evaluations, one must be mindful of the precision used and the consequences of that decision.
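This sensitivity can be probed with a few lines of code. The following probe is our own illustration and reuses the coefficients() helper from the sketch in section 2.1, where A is fixed to 1; under that choice, 1/|F| is only a rough indicator of the relative weight that the divergent A-component in region I retains once the wave function is normalized.

```python
# A small probe of the precision issue, reusing coefficients() from the sketch in section 2.1
# (there A is fixed to 1, so 1/|F| roughly tracks the relative weight of the divergent
# A e^{i k1 x} term in region I after the wave function is normalized).
for E in (-8.706, -8.705, -8.70):
    B, C, D, F = coefficients(E)
    print(f"E = {E:7.3f} eV,  |F| = {abs(F):10.3e},  divergent-term weight ~ {1/abs(F):.2e}")
```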

Figure 3.

Figure 3. The normalized solution as given by equation (2) for energies close to the ground state eigenenergy of the well, as determined by the lowest energy pole in figure 2(b).


Next, we will follow the methodology introduced by Walker and Gilmore [23–25], but also expand upon it using the treatment described above, which allows us to quickly probe the energies of the bound and scattering states for wells of various depths and widths by simply using equation (12), without any rederivation. Moreover, we can probe the solutions of the function defined by equation (2) and gain useful insight into what happens to these solutions as one approaches the energies corresponding to the poles in T(E). In the following, the term 'wave function' will only be used to describe a solution to equation (2) that also behaves acceptably, i.e., there is no divergence close to the edge of the well (figure 3).

With these considerations in mind, let us investigate T(E) and the associated solutions to equation (2) for a well of fixed width, but increasing depth. In figure 4(a), T(E) is plotted for four different well depths where the number of bound states, as seen from the number of poles, goes from three to four. The green arrow tracks a scattering resonance (for E > 0), which moves closer to the top of the well as the depth of the well is increased. If the plot of T(E) is monitored as the well depth is continuously lowered, it is clear that every new bound state is correlated to a scattering resonance that was 'pulled' into the well (see supplementary material for the movie) [23]. In figure 4(b), the absolute value of equation (2) evaluated at the energy corresponding to the transmission resonance indicated by the arrow for a well depth of −3.32 eV is shown on the right. On the left of figure 4(b), the same is shown for the highest bound state energy at a well depth of ∼−3.39 eV. The part of the solution representing the electron in region II has a similar shape in both cases, except that the bound state wave function has four peaks inside the well. In contrast, for the scattering resonance, the peak of the last oscillation exactly coincides with the boundary of the well.

Figure 4.

Figure 4. (a) A plot of T(E) from −5 eV to 10 eV, for various well depths where the poles corresponding to three bound states can be observed within the well. As the depth increases, the lowest-energy transmission resonance approaches the well, becomes narrower, and finally transitions into a new bound state pole. (b) The absolute value of equation (2) evaluated at the energy corresponding to the transmission resonance indicated by the arrow at a well depth of −3.32 eV (left) and the highest bound state energy at a well depth of ∼−3.39 eV (right).


We have just shown that each new bound state that appears when the well's depth is increased has a one-to-one correspondence to a scattering resonance that has been pulled into the well. Another way to think of this is that the well width and particle mass restrict the values of the wavenumbers that are needed to meet the conditions for a scattering resonance or a bound state. Most of the energies where these conditions are met exist as scattering resonances. However, as the well is deepened, the energies where the required wavenumbers exist move closer to the top of the well until they finally lie within the well, at which point the scattering resonance becomes a bound state, as was also pointed out by Gilmore using a different representation [23]. Overall, once the students associate a pole in T(E) with a bound state, and a peak for E > 0 with a resonance, they can directly observe that each pole (bound state) is always the result of a transmission resonance peak being pulled below the well. They can then probe the wave functions above and below the well at the point of crossover and observe, as Gilmore puts it, that scattering resonances near the top of the well are 'almost bound states' [23], which the presented method makes very clear.

3. Transfer matrix formalism and piecewise potential approximations

Next, we will show that the calculations above can be significantly simplified using a well-known matrix formalism based on the so-called 'transfer matrices'. In essence, transfer matrices generalize the concept of the transmission coefficient, which will allow us to tackle more complex arbitrary piecewise 1D square-well potentials.

3.1. The transfer matrix of a step potential

Let us consider a step potential located at the origin,

Equation (14)

To the left and right of the step, depicted in figure 5, the general solution to the Schrödinger equation has the usual form given in equation (1). After imposing continuity of Ψ(x) and Ψ'(x) at x = 0, we obtain two equations that relate the coefficients A, B, C, D:

Equation (15)

Figure 5.

Figure 5. Diagram of a step potential centered at the origin. The amplitudes of the incoming and outgoing solutions are shown for each region.


These equations can then be compactly written as a matrix equation:

Equation (16)

The matrix $\mathrm{N}_{12}$ is formally known as a transfer matrix, which relates the rightward and leftward traveling waves on either side of the step. The transfer matrix is the fundamental solution to the Schrödinger equation for the potential given by equation (14); it neatly stores the matching conditions that relate the four coefficients A, B, C, D, and that must be satisfied by any solution regardless of whether the energy is above or below V1. While the transfer matrix encodes the step potential information, it is ignorant of additional physical restrictions one might impose, such as boundary conditions at x = ±∞ or normalizability, which is typically achieved by imposing conditions on individual coefficients (i.e., setting them to unity or zero).

If the step potential is located away from the origin, we can shift the step back to the origin by shifting our x coordinate, x → x + a. Under this change of coordinates, the left and right solutions transform as:

Equation (17)

We see that moving the location of the potential merely adds relative phases (in blue) to the coefficients of the solution. Thus, for a step potential located at x = a (and which we wish to shift back to the origin), the relation between coefficients is:

Equation (18)

We can absorb the phases into the transfer matrix, and instead write:

Equation (19)

where the transfer matrix $\mathrm{N}_{12}(a)$ describes a potential step located at x = a:

Equation (20)
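In code, the step transfer matrix is a single 2 × 2 function. The Python sketch below is our own; it follows directly from the matching conditions at x = x0 with the conventions of figure 5, and we believe it is equivalent to equation (20), although the paper's exact arrangement of the entries may differ by convention.

```python
import numpy as np

def step_matrix(k_left, k_right, x0):
    """Transfer matrix of a single potential step at x = x0, relating the coefficients
    (A, B) of A e^{i k_left x} + B e^{-i k_left x} on the left to (C, D) on the right,
    i.e. [A, B] = N(x0) [C, D]. Obtained by imposing continuity of psi and psi' at x0."""
    r = k_right / k_left
    return 0.5 * np.array([
        [(1 + r) * np.exp( 1j * (k_right - k_left) * x0),
         (1 - r) * np.exp(-1j * (k_right + k_left) * x0)],
        [(1 - r) * np.exp( 1j * (k_right + k_left) * x0),
         (1 + r) * np.exp( 1j * (k_left - k_right) * x0)],
    ])

# Sanity check: for k_left = k_right the step disappears and N reduces to the identity.
print(step_matrix(1.0 + 0j, 1.0 + 0j, 0.7))
```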

3.2. A single finite well as the combination of two step potentials

With the transfer matrix $\mathrm{N}_{12}(a)$ in hand, it is easy to build solutions to more complicated piecewise constant potentials. Consider a finite well potential that is not necessarily symmetric, noting that this is just two step potentials put together (figure 6). Initially, we will focus on attaining information on the energies of the well, particularly for the bound states, and then solve for the complete wave functions to which they correspond.

Figure 6.

Figure 6. Generic square-well potential made of two step functions, where the width of the well is 2a and V1, V2, V3 are not necessarily equal. The amplitudes of the incoming and outgoing solutions are shown for x < −a and x > a. The solutions inside the well are not needed to solve for the bound state energy eigenvalues.


Outside the well, the solutions to the Schrödinger equation have the form:

Equation (21)

The coefficients A, B, ..., F are related by transfer matrices at each step of the finite square-well,

Equation (22)

Thus, the coefficients A, B, G, and F can be directly determined, without considering the coefficients C and D, by taking the product of the two step-potential transfer matrices at the respective locations of the steps as [14, 23–26],

Equation (23)

And just like that, we have obtained the matching conditions for the finite well, where the width of the well is automatically encoded via the position of neighboring steps when they are multiplied together properly [30].

In our new formalism, setting G = 0, the bound states correspond to the zeros of the top-left element of the overall transfer matrix,

Equation (24)

In general, T(E) is given by [28]:

Equation (25)

Plotting this and looking for poles at negative energies yields the energy eigenvalues of the bound states of the well. If k3 = k1, it can be shown that the left side of equation (24) reduces to √β from equation (13), and equation (24) is then equivalent to the condition β = 0.
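As a concrete check (our sketch, building on the step_matrix helper from section 3.1), the two step matrices of the symmetric well of section 2 can be multiplied together as in equation (23), and the zeros of the top-left element located by a simple scan:

```python
import numpy as np
# Uses step_matrix() from the sketch in section 3.1.

hbar, m_e, eV = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19
Vo, a = -9.0 * eV, 0.5e-9

def k(E, V):
    return np.sqrt(2 * m_e * (E - V) / hbar**2 + 0j)

def M11(E_eV):
    """Top-left element of the overall transfer matrix for the symmetric well of section 2."""
    E = E_eV * eV
    k1, k2 = k(E, 0.0), k(E, Vo)
    N = step_matrix(k1, k2, -a) @ step_matrix(k2, k1, +a)   # cf. equation (23)
    return N[0, 0]

# Bound states: energies below zero where |M11| -> 0 (poles of T ~ 1/|M11|^2).
E_grid = np.linspace(-8.99, -0.05, 40000)
m11 = np.array([abs(M11(E)) for E in E_grid])
dips = np.where((m11[1:-1] < m11[:-2]) & (m11[1:-1] < m11[2:]))[0] + 1
print(sorted(round(E_grid[i], 3) for i in sorted(dips, key=lambda i: m11[i])[:5]))
# should reproduce the pole energies found in section 2.1
```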

Moreover, after a bit of algebra, one finds that equation (25) is also equivalent to:

Equation (26)

In our notation, this is the general transcendental equation satisfied by the bound state energies of a finite square-well potential, cf. reference [31]. Overall, the condition given by equation (24) leads to equation (26) and is the condition that defines the poles in T(E), as defined by equation (25). This mathematical fact reiterates that the poles of T(E) are the energy eigenvalues of the bound states.

3.2.1. Example: constructing a wave function using the transfer matrix method

The elegance of using the transfer matrix method comes from the fact that we do not need to address solutions inside of the well to determine the bound state energies of the well. The power of the transfer matrix becomes even more apparent when one considers that the coefficients in each region can also be determined without directly addressing the solutions inside the well. Below, we will provide a detailed example of using the transfer matrix method to determine the bound state energies of a two-step asymmetric potential well and compute the associated wave functions. This method can then be directly applied to a potential with more than two steps.

Let us consider the uneven potential well shown in figure 6. With the concrete example of section 2 in hand, we are ready to consider a more generalized approach. We will define the dimensionless quantity x* = (x/a). With this convention, the well has length 2, and the potential goes from x* = −1 to x* = 1. Next, we define a dimensionless potential (V*) and energy (E*):

Equation (27)

In this convention, let the values of the potential in each region be such that V1* = 1, V2* = −2, and V3* = 0. We treat the problem as before, but make the substitutions V → V*, E → E*, and x → x*. The first step will be to define our initial conditions on the solutions. In this method, we will be using matrix multiplication, and it is more convenient to define both coefficients in region III, and then use the transfer matrix to determine the coefficients in region II and region I.

Let us define G = 0 as before, which allows us to ensure a positive momentum in region III for E greater than any of the potential pieces. We will also define F = 1, which is an arbitrary choice made to keep the matrix multiplication straightforward, and one that can be modified to normalize the wave function at the end. With our boundary conditions set, we will use equation (23) to compute A and B from F and G. Plotting |1/A|² and looking for poles, one finds a single pole at E* = −1.13656 (figure 7(a)). This determines the numerical values of k1, k2, and k3, and thus the numerical values of the transfer matrix at the bound state energy. With this in hand, one can use the transfer matrix at each step to find the coefficients in each region. In our case:

Equation (28)

With all the values of the coefficients known, the total wave function is also known and can be normalized and plotted in the usual way (figure 7(b)).
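A compact numerical version of this example, again our own sketch building on the step_matrix helper and assuming the dimensionless convention E* = E/(ħ²/2ma²) so that k* = sqrt(E* − V*), might look as follows; the pole it finds and the coefficients it prints should be compared against figure 7 and equation (28).

```python
import numpy as np
# Builds on step_matrix() from the sketch in section 3.1. Dimensionless convention assumed:
# E* = E / (hbar^2 / 2 m a^2), so k* = sqrt(E* - V*) and the well runs from x* = -1 to x* = +1.

V1, V2, V3 = 1.0, -2.0, 0.0

def kstar(Estar, Vstar):
    return np.sqrt(Estar - Vstar + 0j)

def well_matrix(Estar):
    """Overall transfer matrix of the uneven well of figure 6, cf. equation (23)."""
    return (step_matrix(kstar(Estar, V1), kstar(Estar, V2), -1.0)
            @ step_matrix(kstar(Estar, V2), kstar(Estar, V3), +1.0))

# Locate the bound state: for F = 1 and G = 0, A is the top-left element, and we scan
# V2 < E* < min(V1, V3) for the energy at which |A| collapses.
E_grid = np.linspace(-1.99, -0.01, 400000)
A_mag = np.array([abs(well_matrix(E)[0, 0]) for E in E_grid])
E0 = E_grid[np.argmin(A_mag)]
print(E0)   # should land near E* = -1.137, the pole quoted in the text

# With the bound-state energy in hand, propagate F = 1, G = 0 through each step to get the
# region-II and then region-I coefficients (the analogue of equation (28)).
C, D = step_matrix(kstar(E0, V2), kstar(E0, V3), +1.0) @ np.array([1.0, 0.0])
A, B = step_matrix(kstar(E0, V1), kstar(E0, V2), -1.0) @ np.array([C, D])
print("A, B, C, D =", A, B, C, D)
```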

Figure 7.

Figure 7. (a) Plot of |1/A|² used to determine the first bound state energy of a particle of mass m in an asymmetric potential well of length 2. (b) The dimensionless bound state wave function constructed using the transfer matrix method. Plotted on the left axis is the dimensionless energy E*, and the normalized dimensionless wave function Ψo is plotted on the right axis.


Although Gilmore addressed the construction of wave functions using transfer matrices, the method presented here to generate coefficients and compute wave functions echoes the straightforward process typically outlined in undergraduate textbooks, and without the introduction of, e.g., different sets of solutions to the Schrödinger equation or hyperbolic trigonometric functions [23]. As we will see, by translating this process into an intuitive code, the wave function of almost any finite piecewise square-well potential, including approximated potentials, can be easily and quickly computed and plotted using the same code by merely updating the parameters of the potential.

3.3. Extending to more than two steps

Consider the case of many step potentials of varying depths connected to each other, or a so-called 'upside-down cityscape' (figure 8(a)). The procedure outlined above can be extended to determine the energy spectrum and wave functions for this potential, where again we will use the dimensionless variables V*, E*, and x*. One simply sets the coefficients in the last region to 1 and 0 for the positive and negative momentum solutions, respectively, and then multiplies these coefficients by each step consecutively from right to left. It is possible to simply add a new matrix multiplication line for each new step in the two-step example, but this can get messy as the number of steps increases. To keep things compact, a loop can be created in Mathematica™ where the values of each potential and step location are stored and called on as the loop cycles through the steps from right to left [23–26]. From this, A can be computed, T(E*) = |F/A|² can be plotted for E* < 6, and the energy of the poles can be identified, as shown in figure 8(b). Once the bound state energies are known, they can be plugged back into the transfer matrix. The coefficients for the wave function associated with each pole can then be systematically computed, and the wave function can be plotted (figure 8(a)). As was the case for the two-step potential well, the solutions inside the well did not need to be directly addressed in this procedure.
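A generic loop of this kind is sketched below in Python (the actual cityscape of figure 8 is defined in the supplementary notebooks, so the step positions and depths used here are made up purely for illustration). It reuses step_matrix from section 3.1 and locates bound states as the energies where the top-left element of the overall matrix, i.e. A for F = 1 and G = 0, passes through zero.

```python
import numpy as np
# Uses step_matrix() from the sketch in section 3.1, in the same dimensionless units as above.

def total_matrix(Estar, edges, V):
    """Overall transfer matrix of a piecewise-constant potential.
    edges: step positions x*_0 < x*_1 < ...; V: the len(edges) + 1 region values."""
    k = [np.sqrt(Estar - v + 0j) for v in V]
    N = np.eye(2, dtype=complex)
    for i, x0 in enumerate(edges):            # accumulate the product from left to right
        N = N @ step_matrix(k[i], k[i + 1], x0)
    return N

def bound_states(edges, V, E_min, E_max, n_grid=100000):
    """Energies where A (top-left element, with F = 1 and G = 0) passes through zero.
    Below the outer thresholds A can be taken real, so a sign change flags each zero."""
    E_grid = np.linspace(E_min, E_max, n_grid)
    A = np.array([total_matrix(E, edges, V)[0, 0].real for E in E_grid])
    flips = np.where(np.sign(A[:-1]) != np.sign(A[1:]))[0]
    return [0.5 * (E_grid[i] + E_grid[i + 1]) for i in flips]

# A made-up 'upside-down cityscape' (NOT the potential plotted in figure 8, which is defined
# in the supplementary notebooks): zero outside, five interior steps of varying depth.
edges = [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]
V     = [0.0, -3.0, -6.0, -4.5, -7.0, -2.0, 0.0]
print(bound_states(edges, V, -6.99, -0.01))
```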

Figure 8.

Figure 8. (a) Wave functions for the first three bound states of the potential, in dimensionless units. (b) T(E*) plot for a range of particle energies, E*, inside the potential, where the four bound states have energies that coincide with the poles.


The power of this approach is further revealed in the ability to discretize curved potentials as a series of steps and then apply the above procedure to quickly calculate a numeric approximation of the energy spectrum and wave functions for the potential. Some examples of known curved potentials would be the harmonic oscillator and the Morse potential. These potentials go to infinity at one or both ends, but it has been shown that they can be well-approximated by truncating them and observing, e.g., the lowest states of the harmonic potential or potentials with a Gaussian shape [14, 23].

We have chosen the Pöschl–Teller potential to further illustrate the benefits of the transmission coefficient approach because it is a finite potential commonly seen in chemistry and classical mechanics whose solutions exist in closed form, thus facilitating easy validation of our method [32]. In its symmetric form, the Pöschl–Teller potential is:

Equation (29)

where l is a real parameter that is greater than zero. For integer values of l, the bound state wave functions of this potential are the Legendre functions:

Equation (30)

where l = 1, 2, 3, ... and μ = 1, 2, ... l − 1, l. Note: the general solution to the Pöschl–Teller potential for generic l and μ is a linear combination of Legendre functions (${P}_{l}^{\mu }\left(\mathrm{tanh}\enspace \left(x/a\right)\right)$ and ${Q}_{l}^{\mu }\left(\mathrm{tanh}\enspace \left(x/a\right)\right)$). When l is an integer and μ takes certain discrete values, the ${P}_{l}^{\mu }$ solutions give normalizable bound states. The energy eigenvalues of the symmetric Pöschl–Teller potential have the form:

Equation (31)

Choosing l = 1, the potential was discretized as a series of square steps starting with two steps and increasing to 20 steps. The discretized potentials and the corresponding wave functions are shown in figures 9(a)–(c). The analytic expression for the wave function of this potential is:

Equation (32)

Figure 9.

Figure 9. (a)–(c) The discretized symmetric Pöschl–Teller potential using an increasing number of steps, plotted over the 'actual' Pöschl–Teller potential along with the dimensionless ground state wave function. Plotted on the left axis is the dimensionless energy E*, and the dimensionless wave function Ψo(x*) is plotted on the right axis. (d) The estimated dimensionless ground state energy is plotted for each number of steps used to approximate the potential. (e) The fractional uncertainty between the theoretical ground state value and the estimated value as a function of step number.


As can be seen from figures 9(a)–(c), the wave functions computed from the discretized potential are in excellent qualitative agreement with the analytic expression [32], which further demonstrates this method's utility. The value of the dimensionless ground state energy (after dividing E0 by ħ²/2ma²) should be −1, according to equation (31). Figure 9(d) shows the dimensionless ground state energy as a function of the number of steps used to approximate the potential, and after six steps there is a clear trend toward −1. The fractional uncertainty of the estimated ground state energy compared to the theoretical value is shown in figure 9(e) and is ∼0.02 with as few as twelve steps.
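The discretization itself takes only a few more lines. The sketch below is ours: it samples the dimensionless Pöschl–Teller potential at the midpoint of each step, which is one possible choice and not necessarily the one used for figure 9, and it reuses the multi-step helpers from the previous sketch.

```python
import numpy as np
# Uses step_matrix(), total_matrix() and bound_states() from the sketches above,
# in dimensionless units with hbar^2 / 2 m a^2 = 1.

def discretized_poschl_teller(n_steps, l=1, x_max=4.0):
    """Approximate V*(x*) = -l(l+1) sech^2(x*) by n_steps equal-width steps on [-x_max, x_max],
    sampling the potential at each step's midpoint (one possible choice)."""
    edges = np.linspace(-x_max, x_max, n_steps + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    V = [0.0] + [-l * (l + 1) / np.cosh(m)**2 for m in mids] + [0.0]
    return list(edges), V

for n in (6, 10, 14, 20):
    edges, V = discretized_poschl_teller(n)
    E = bound_states(edges, V, -1.99, -0.01)
    print(n, "steps ->", [round(e, 3) for e in E])
# The single bound state should drift toward the exact value E* = -1 as n grows (cf. figure 9(d)).
```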

It is well known that the number of bound states increases with the depth of a potential, and one might expect that anomalous bound states could arise depending on the level of discretization. However, only one bound state is found in all cases, regardless of the number of steps used. It turns out to be true in general that the number of bound states remains accurate regardless of the number of steps used [23]; however, the value for the energy of that state becomes more accurate as the number of steps increases. The potential goes to zero asymptotically, and the width of the plotted area was chosen such that the outermost steps were at zero and sufficiently followed the actual potential. Analogous to increasing the number of steps, in this example the energy eigenvalue does not become significantly more accurate by extending the width beyond x* = ±4.

Next, we will increase the depth of the Pöschl–Teller potential by changing the value of l to observe where new bound states occur, as in section 2.2. To explore this, let us discretize the Pöschl–Teller potential using a seven-step potential where the widths of each step are the same, and the depths are changed to roughly follow the exact potential at each l (figure 10(a)). With the placement and potential value of each step set, T(E*) for different values of l can easily be computed and plotted, which is shown for two values of l in figure 10(b). If one plots T(E*) for the exact form of the Pöschl–Teller potential, it can be observed that new bound states form at the top of the well only at integer values of l. As seen in figure 10(b), this result is also observed in the discretized model. Resonance peaks approach and fall into the well only when the step depths correspond to approximately integer values of l.

Figure 10.

Figure 10. (a) The discretized symmetric Pöschl–Teller potential for two different l values plotted over the 'actual' Pöschl–Teller potential for these values, where the dimensionless energy E* is plotted on the left axis. (b) A plot of T(E*) for two values of l; the resonance peaks fall into the well, forming a new bound state, only at approximately integer values of l.


It should be pointed out that T(E* > 0) for the exact Pöschl–Teller potential does not contain oscillating resonances due to the shape of the potential. However, even with just seven steps, the transmission resonances only oscillate by ∼3%, and as more steps are added, the resonance oscillations diminish in amplitude. Lastly, as the l values increase above l = 3, the width of the Pöschl–Teller potential increases significantly along with the depth, and so it becomes necessary to vary the location and depth of the steps to approximate the potential accurately.

4. Final thoughts

The method of transfer matrices described above is similarly effective in classical wave mechanics. This approach is commonly utilized, for example, in understanding the transmission and reflection properties of electromagnetic waves incident on stacks of thin films with varying indices of refraction [19, 20]. In this classical situation, the index of refraction influences the wavevector in a similar way as the value of the potential does in our quantum system. By invoking boundary conditions at the interfaces between the thin-film layers, one can find similar transfer matrices for each boundary. By multiplying these matrices, one obtains the overall transfer matrix for the entire stack, from which the overall transmission coefficient of the stack can be determined.
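For normal incidence on non-magnetic layers, the electric field satisfies matching conditions of the same form as Ψ, with k = nω/c playing the role of the wavenumber, so the very same step_matrix helper can be reused. The example below is our own illustration (a single quarter-wave antireflection layer), not one taken from references [19, 20].

```python
import numpy as np
# Re-using step_matrix() from section 3.1 for normal incidence on non-magnetic thin films,
# where the field obeys E'' + (n*omega/c)^2 E = 0 and E, E' are continuous at each interface.

c = 2.998e8
wavelength = 550e-9
omega = 2 * np.pi * c / wavelength

def k_opt(n):
    return n * omega / c

# Example (ours): a single quarter-wave layer of index 1.38 on glass (n = 1.5), in air.
n_air, n_film, n_glass = 1.0, 1.38, 1.5
d = wavelength / (4 * n_film)                      # quarter-wave physical thickness
M = step_matrix(k_opt(n_air), k_opt(n_film), 0.0) @ step_matrix(k_opt(n_film), k_opt(n_glass), d)

r = M[1, 0] / M[0, 0]                              # reflection amplitude for light incident from the left
print("R with coating:", abs(r)**2)                # roughly 1-2 %, versus ~4 % for bare glass
print("R bare glass  :", abs((n_air - n_glass) / (n_air + n_glass))**2)
```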

This unified treatment of the scattering and bound states of a particle incident on the potential has several pedagogical benefits. It circumvents the need to solve transcendental equations directly and allows students to explore the parameters of the well, such as depth and width, and quickly re-evaluate the energy eigenvalues of the bound states rather than having to re-extract them from multiple variations of transcendental equations. In addition, students can also plot the solution of the Schrödinger equation for any of these situations and explore the bound state wave functions and scattering resonances simultaneously.

In their own courses, the authors assign both the derivation of the transmission coefficient for the symmetric well and the determination of its bound state energies, using the traditional method of transcendental equations, as a homework assignment. In that assignment, the students are also asked to use Mathematica™ to plot the transmission coefficient that they derived for energies above and below the well, and then comment on the results based upon their solution for the bound state energies. This behavior is discussed in class following this assignment. As mentioned earlier, several students do associate the poles with energy eigenvalues because they exist at energies inside the well, are sharp, and have a spacing that resembles the energy spectrum of the infinite square well; however, few are able to give a satisfactory explanation for why this would occur. In the next homework assignment, the students are asked to plot the wave functions for the symmetric potential well and 'play' with the precision of the bound state energies to explore its effect on the wave functions. This is later extended in class by the instructor to the cityscape example. Finally, after the typical solution to the quantum harmonic oscillator is covered, the use of the transfer matrix method to estimate curvilinear potentials is covered, and examples are worked through in class. Where time and student proficiency with Mathematica™ or similar programs permit, the authors encourage students to create programs to generate the bound state energies and wave functions for the cityscape and Pöschl–Teller potentials as a homework assignment.

Applying this approach after the typical treatment of the finite square-well potential, where the scattering states and bound states are treated separately, leaves the students with a better understanding of how to solve the Schrödinger equation and how to select relevant solutions from the infinite family of solutions that exist for a particular potential. An important takeaway is that solutions of physical relevance always exist within the continuum of mathematically valid solutions; however, the same properties that make them physically relevant also make them fundamentally different in some way and allow them to be revealed from amongst the other solutions. This intuition will serve the students well as they tackle more advanced potentials, such as the Coulomb potential of the hydrogen atom, where, for example, they must select physically meaningful solutions to Legendre's equation, which involves a similar level of reasoning.


Supplementary data: Mathematica files and movie file.
