
# An Introduction to Laplace Transforms and Fourier Series


Springer Undergraduate Mathematics Series. Phil Dyke, *An Introduction to Laplace Transforms and Fourier Series*, Second Edition. Laplace transforms continue to be a very important tool for the engineer, physicist and applied mathematician; they are also now useful in finance and economics.




The first term is the complementary function, called the transient response by engineers since it dies away for large times, and the final two terms make up the particular solution, rather misleadingly called the steady state response by engineers since it persists.

Of course there is nothing steady about it! After a "long time" has elapsed, the response is harmonic at the same frequency as the forcing frequency. The "long time" is in practice quite short, as is apparent from the graph of the output x(t) displayed in Figure 3. However, the amplitude and phase of the resulting oscillations are different. It is the fact that the response frequency is the same as the forcing frequency that is important in practical applications.

There is very little more to be said in terms of mathematics about the solution of second order differential equations with constant coefficients. The solutions are oscillatory, decaying, a mixture of the two, oscillatory and growing, or simply growing exponentially. The forcing excites the response, and if the forcing is at the same frequency as the natural frequency of the differential equation, resonance occurs.

This leads to enhanced amplitudes at these frequencies. If there is no damping, then resonance leads to an infinite amplitude response. Further details about the properties of the solution of second order differential equations with constant coefficients can be found in specialist books on differential equations and would be out of place here.
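The resonance behaviour just described can be seen numerically. The sketch below is not from the text: it assumes the standard damped forced oscillator x'' + 2ζω₀x' + ω₀²x = F sin ωt with illustrative values ω₀ = 2, ζ = 0.1, F = 1, and evaluates the well known steady state amplitude formula.

```python
import numpy as np

def steady_state_amplitude(w, w0=2.0, zeta=0.1, F=1.0):
    """Steady-state amplitude of x'' + 2*zeta*w0*x' + w0**2 * x = F*sin(w*t).

    Parameter values are illustrative assumptions, not taken from the text.
    """
    with np.errstate(divide='ignore'):
        return F / np.sqrt((w0**2 - w**2)**2 + (2*zeta*w0*w)**2)

ws = np.linspace(0.1, 4.0, 2000)
amps = steady_state_amplitude(ws)
peak_w = ws[np.argmax(amps)]   # the peak sits close to the natural frequency w0 = 2

# With no damping (zeta = 0), forcing exactly at w = w0 gives an infinite response.
undamped = steady_state_amplitude(np.float64(2.0), zeta=0.0)
```

With damping present the amplitude peak is large but finite; setting ζ = 0 reproduces the infinite amplitude response mentioned above.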

What follows are examples where the power of the Laplace Transform technique is clearly demonstrated in terms of solving practical engineering problems. It is at the very applied end of applied mathematics. We return to a purer style in Chapter 4. In the following example, a problem in electrical circuits is solved. As mentioned in the preamble to Example 3., resistors have resistance R measured in ohms, capacitors have capacitance C measured in farads, and inductors have inductance L measured in henrys.

A current j flows through the circuit, and the current is related to the charge q by j = dq/dt. The voltage drops across the components are as follows:

1. Ohm's law, whereby the voltage drop across a resistor is Rj.
2. The voltage drop across an inductor is L dj/dt.
3. The voltage drop across a capacitor is q/C.

The forcing function (input) on the right hand side is supplied by a voltage source. Here is a typical example.
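Summing these voltage drops for a series circuit gives a second order ODE for the charge, L d²q/dt² + R dq/dt + q/C = E(t). As a sketch of the kind of solution the Laplace Transform method produces, SymPy can solve such an equation directly; the component values L = 1, R = 3, C = 1/2 and constant e.m.f. E = 2 are assumptions for illustration, not the book's example.

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
q = sp.Function('q')

# Assumed illustrative circuit: L = 1, R = 3, C = 1/2, E = 2, so the ODE is
# q'' + 3 q' + 2 q = 2, with the circuit initially dead: q(0) = 0, q'(0) = 0.
ode = sp.Eq(q(t).diff(t, 2) + 3*q(t).diff(t) + 2*q(t), 2)
sol = sp.dsolve(ode, q(t), ics={q(0): 0, q(t).diff(t).subs(t, 0): 0})

# The charge tends to the steady value C*E = 1 as t -> infinity,
# the exponential (transient) terms dying away.
steady = sp.limit(sol.rhs, t, sp.oo)
```

The solution is q(t) = 1 − 2e^(−t) + e^(−2t): a steady charge plus decaying transients, exactly the structure discussed above.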

Imposing the initial conditions gives an equation for q̄(s), the Laplace Transform of q(t). To invert, the choice is between partial fractions and convolution; the former is easier. The resulting solution is displayed in Figure 3. It can be seen that the oscillations are completely swamped by the exponential decay term.

In fact, the current is the derivative of this; its variation with time is shown in Figure 3. This is obviously not typical, as demonstrated by the next example. Here, we have a sinusoidal voltage source which might be thought of as mimicking the production of an alternating current. Solution: The differential equation is derived as before, except for the different right hand side.

The equation is the same second order equation for q(t), now with a sin 3t forcing term on the right hand side. The choice is to either use partial fractions or convolution to invert; this time we use convolution, and this operation by its very nature recreates sin 3t under an integral sign, "convoluted" with the complementary function of the differential equation. Recognising the two standard forms, what this solution tells the electrical engineer is that the response quickly becomes sinusoidal at the same frequency as the forcing function but with smaller amplitude and different phase.
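The convolution step can be checked symbolically. In the sketch below the complementary function is taken to be e^(−t), a hypothetical stand-in rather than the book's actual one, convoluted with the forcing sin 3t; the result matches the known inverse transform of the product of the two transforms 1/(s+1) and 3/(s²+9).

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)

# Convolution theorem: the inverse transform of F(s)G(s) is
# (f*g)(t) = integral_0^t f(t - tau) g(tau) dtau.
# Here f = exp(-t) (transform 1/(s+1)) is an assumed complementary function,
# and g = sin 3t (transform 3/(s^2+9)) is the forcing.
f = sp.exp(-t)
g = sp.sin(3*t)
conv = sp.simplify(sp.integrate(f.subs(t, t - tau) * g.subs(t, tau), (tau, 0, t)))

# The forcing sin 3t reappears, "convoluted" with the decaying exponential:
expected = (sp.sin(3*t) - 3*sp.cos(3*t))/10 + sp.Rational(3, 10)*sp.exp(-t)
```

The result is a sinusoid at the forcing frequency (with altered amplitude and phase) plus a transient that dies away, mirroring the discussion above.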

This is backed up by glancing at Figure 3. The behaviour of this solution is very similar to that of the mechanical engineering example, Example 3. A mathematical treatment enables analogies to be drawn between seemingly disparate branches of engineering. The differential equations we solve are all linear, so a pair of linear differential equations will convert into a pair of simultaneous linear algebraic equations familiar from school.

Of course, these equations will contain s, the transform variable as a parameter. These expressions in s can get quite complicated. This is particularly so if the forcing functions on the right-hand side lead to algebraically involved functions of s. Comments on the ability or otherwise of inverting these expressions remain the same as for a single differential equation. They are more complicated, but still routine. Let us start with a straightforward example.

Solving for x and y is indeed messy but routine. We do one step to group the terms together; a partial fraction routine has also been used. The last two terms on the right-hand side of the expression for both x and y resemble the forcing terms, whilst the first two are in a sense the "complementary function" for the system.

The motion is quite a complex one and is displayed in Figure 3. (The accompanying figures show the forces due to (a) a damper and (b) a spring, and a simple mechanical system.) Having looked at the application of Laplace Transforms to electrical circuits, let us now apply them to mechanical systems. Again it is emphasised that there is no new mathematics here; it is, however, new applied mathematics.

In mechanical systems, we use Newton's second law to determine the motion of a mass which is subject to a number of forces. The kind of system best suited to Laplace Transforms is the mass-spring-damper system. The components of the system that also act on the mass m are a spring and a damper. Both of these give rise to changes in displacement according to the following rules (see Figure 3.).

A damper produces a force proportional to the net speed of the mass but always opposing the motion, i.e. proportional to the velocity dx/dt but of opposite sign. A spring produces a force which is proportional to displacement. Here, springs will be well behaved and assumed to obey Hooke's Law.

This force is −kx, where k is a constant sometimes called the stiffness by mechanical engineers. To put flesh on these bones, let us solve a typical mass-spring-damper problem.
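Newton's second law then gives m x'' = −c x' − kx for free motion. A minimal symbolic sketch, with assumed illustrative values m = 1, damping coefficient c = 2 and stiffness k = 5 (not the book's example), shows the underdamped behaviour: an oscillation inside a decaying envelope.

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
x = sp.Function('x')

# Assumed values: m = 1, c = 2, k = 5, released from x = 1 at rest, so
# x'' + 2 x' + 5 x = 0,  x(0) = 1,  x'(0) = 0.
sol = sp.dsolve(sp.Eq(x(t).diff(t, 2) + 2*x(t).diff(t) + 5*x(t), 0),
                x(t), ics={x(0): 1, x(t).diff(t).subs(t, 0): 0})

# Underdamped: oscillation at frequency 2 inside a decaying envelope exp(-t).
expected = sp.exp(-t)*(sp.cos(2*t) + sp.sin(2*t)/2)
```

The exponential factor is the transient decay and the trigonometric factor the oscillation, the two generic behaviours described earlier for second order equations.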

Choosing to consider two masses gives us the opportunity to look at an application of simultaneous differential equations. The mechanics (thankfully for most) is now over, and we take the Laplace Transform of both equations; the right hand side involves the initial conditions. The solution is displayed in Figure 3. This introduces the concept of normal modes, which are outside the scope of this text but very important to mechanical engineers, as well as anyone else interested in the behaviour of oscillating systems.

In this section we extend this exploration to problems that involve differential equations. The second shift theorem, Theorem 2., plays a role here. The properties of the δ function as required in this text are outlined in Section 2.

Let us use these properties to solve an engineering problem. This next example is quite extensive; more of a case study. It involves concepts usually found in mechanics texts, although it is certainly possible to solve the differential equation that is obtained abstractly and without recourse to mechanics. Nevertheless this example can certainly be omitted from a first reading, and discarded entirely by those with no interest in applications to engineering.

The governing equation for the displacement y(x) of a loaded beam involves a constant k called the flexural rigidity (the product of Young's Modulus and the second moment of area of the cross-section, but this is not important here) and W(x), the transverse force per unit length along the beam. The layout, the beam and its load W(x), is indicated in Figure 3. Use Laplace Transforms in x to solve this problem and discuss the case of a point load. Solution: There are several aspects to this problem that need a comment here. The mathematics comes down to solving an ordinary differential equation which is fourth order but easy enough to solve.

In fact, only the fourth derivative of y(x) is present, so in normal circumstances one might expect direct integration four times to be possible. That it is not is due principally to the form W(x) usually takes. There is also the fact that the beam is of finite length l. In order to use Laplace Transforms, the domain is extended so that x ∈ [0, ∞) and the Heaviside Step Function is utilised.
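For the special case of a uniform load W(x) = w, direct integration does work, and it is a useful check on what the transform method should deliver. The sketch below (an illustration, not the book's worked example) solves k y'''' = w for a cantilever clamped at x = 0 and free at x = l, recovering the classical tip deflection w l⁴/(8k).

```python
import sympy as sp

x, w, k, l = sp.symbols('x w k l', positive=True)
c = sp.symbols('c0:4')

# General solution of k y'''' = w (uniform load): quartic plus a cubic polynomial.
y = w*x**4/(24*k) + c[3]*x**3 + c[2]*x**2 + c[1]*x + c[0]

# Cantilever boundary conditions (assumed for this sketch):
# clamped at x = 0 (y = y' = 0), free at x = l (y'' = y''' = 0).
eqs = [y.subs(x, 0),
       y.diff(x).subs(x, 0),
       y.diff(x, 2).subs(x, l),
       y.diff(x, 3).subs(x, l)]
y = y.subs(sp.solve(eqs, c))

tip = sp.simplify(y.subs(x, l))   # classical cantilever result: w l^4 / (8 k)
```

Four boundary conditions fix the four constants of integration, exactly as in the procedure described below for the transform approach.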

To progress in a step by step fashion let us consider the cantilever problem first where the beam is held at one end. Even here there are conditions imposed at the free end. However, we can take Laplace Transforms in the usual way to eliminate the x derivatives.

The transform is ȳ(s) = ∫₀^∞ e^(−sx) y(x) dx, where (remember) we have extended the domain to [0, ∞). In transformed coordinates the equation for the beam becomes algebraic in ȳ. It is at this point that the engineer would be happy, but the mathematician should be pausing for thought! The beam may be long, but it is not infinite. This being the case, is it legitimate to define the Laplace Transform in x as has been done here?

What needs to be done is some tidying up using Heaviside's Step Function. In general, the convolution theorem is particularly useful here, as W(x) may take the form of data (from a strain gauge perhaps) or have a stochastic character.

This enables the four constants of integration to be found. The following procedure is recommended. (The accompanying figure shows y(x) for a beam of length l = 3.) This is in fact the result that would have been obtained by differentiating the expression for y(x) twice, ignoring derivatives of [1 − H(x − l)].

This provides the general solution to the problem, in terms of integrals, for y(x)[1 − H(x − l)]. It is now possible to insert any loading function into this expression and calculate the displacement caused. This however is not a mechanics text, therefore it is quite likely that you are not familiar with enough of these laws to follow the derivation. From a mathematical point of view, the interesting point here is the presence of the Dirac δ function on the right hand side, which means that integrals have to be handled with some care.

For this reason, and in order to present a different way of solving the problem (but still using Laplace Transforms), we go back to the fourth order ordinary differential equation for y(x) and take Laplace Transforms. This solution is illustrated in Figure 3., again for a beam of length l = 3.

The Laplace Transform proves very useful in solving many types of integral equation, especially when the integral takes the form of a convolution. Some typical integral equations, and a more general form, follow; in these equations the function K appearing under the integral sign is called the kernel of the integral equation.

The general theory of how to solve integral equations is outside the scope of this text, and we shall content ourselves with solving a few special types particularly suited to solution using Laplace Transforms. The second type is called a Fredholm integral equation of the third kind.

In integral equations, x is the independent variable, so a and b can depend on it. The Fredholm integral equation covers the case where a and b are constant; in the cases where a or b (or both) depend on x, the integral equation is called a Volterra integral equation. The following example illustrates this, the final expression being the solution of the integral equation.
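Since the book's worked example is not reproduced here, the sketch below uses a hypothetical stand-in of the same convolution type: y(x) = x + ∫₀ˣ (x − t) y(t) dt. Taking transforms, ȳ = 1/s² + ȳ/s², so ȳ = 1/(s² − 1) and y = sinh x; SymPy confirms both the inversion and that the result satisfies the original equation.

```python
import sympy as sp

x, t, s, Y = sp.symbols('x t s Y', positive=True)

# Volterra equation of convolution type (illustrative, not the book's example):
#   y(x) = x + integral_0^x (x - t) y(t) dt.
# Transforming: Y = 1/s^2 + Y/s^2, since the kernel (x - t) transforms to 1/s^2.
ybar = sp.solve(sp.Eq(Y, 1/s**2 + Y/s**2), Y)[0]     # = 1/(s^2 - 1)
y = sp.inverse_laplace_transform(ybar, s, x)          # = sinh(x)

# Check: substituting sinh back into the integral equation leaves no residual.
residual = sp.simplify(y - (x + sp.integrate((x - t)*sp.sinh(t), (t, 0, x))))
```

The convolution under the integral sign becomes a simple product of transforms, which is precisely why Laplace Transforms suit this class of equation.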

The solution of integral equations of these types usually involves advanced methods including complex variable methods.

These can only be understood after the methods of Chapter 7 have been introduced and are the subject of more advanced texts, e.g. Hochstadt.

Exercises

1. Use the convolution theorem to establish 3.
2. Solve the following differential equations by using Laplace Transforms.
3. Solve the following pairs of simultaneous differential equations by using Laplace Transforms.
4. Demonstrate the phenomenon of resonance by solving the equation given.
5. Assuming that the air resistance is proportional to speed, the motion of a particle in air is governed by the equation given; deduce the terminal speed of the particle.

H(x) is the Heaviside Unit Step Function. To understand why Fourier series are so useful, one would need to define an inner product space and show that trigonometric functions are an example of one.

It is the properties of the inner product space, coupled with the analytically familiar properties of the sine and cosine functions that give Fourier series their usefulness and power. Some familiarity with set theory, vector and linear spaces would be useful.

These are topics in the first stages of most mathematical degrees, but if they are new, the text by Whitelaw will prove useful.

The basic assumption behind Fourier series is that any given function can be expressed in terms of a series of sine and cosine functions, and that once found the series is unique.

Stated coldly with no preliminaries this sounds preposterous, but to those familiar with the theory of linear spaces it is not. All that is required is that the sine and cosine functions form a basis for the linear space of functions to which the given function belongs. Some details are given in Appendix C. Those who have a background knowledge of linear algebra sufficient to absorb this appendix should be able to understand the following two theorems, which are essential to Fourier series.

They are given without proof and may be ignored by those willing to accept the results that depend on them. The first result is Bessel's inequality. It is conveniently stated as a theorem (Theorem 4.): the inequality ∑ₙ |⟨a, eₙ⟩|² ≤ ‖a‖² holds. An important consequence of Bessel's inequality is the Riemann–Lebesgue lemma. This is also stated as a theorem; it in fact follows directly from Bessel's inequality, as the nth term of the series in Bessel's inequality must tend to zero as n tends to infinity. Although some familiarity with analysis is certainly a prerequisite here, there is merit in emphasising the two concepts of pointwise convergence and uniform convergence.

It will be out of place to go into proofs, but the difference is particularly important to the study of Fourier series as we shall see later.

Here are the two definitions (Definition 4.). It is the difference, and not the similarity, of these two definitions that is important. All uniformly convergent sequences are pointwise convergent, but not vice versa. This is because the N in the definition of pointwise convergence depends on x; in the definition of uniform convergence it does not, which makes uniform convergence a global rather than a local property.

The N in the definition of uniform convergence will do for any x in [a, b]. Armed with these definitions, and assuming a familiarity with linear spaces, we will eventually go ahead and find the Fourier series for a few well known functions. We need a few more preliminaries before we can do this.
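The distinction can be made concrete with the classical example fₙ(x) = xⁿ on [0, 1), which is not from the text but illustrates exactly the point about N depending on x. Each fixed x gives values tending to 0, yet the supremum over the interval stays near 1 for every n, so the convergence is pointwise but not uniform.

```python
import numpy as np

# f_n(x) = x**n on [0, 1): pointwise limit is 0, but the sup over the interval
# does not shrink, because points near 1 need ever larger N.
def sup_on_grid(n, grid=np.linspace(0.0, 1.0, 2001)[:-1]):
    """Largest value of x**n over a fine grid on [0, 1)."""
    return float((grid**n).max())

pointwise_at_half = [0.5**n for n in (5, 20, 80)]   # shrinks rapidly toward 0
sup_errors = [sup_on_grid(n) for n in (5, 20, 80)]  # stays close to 1
```

The pointwise values at x = 1/2 collapse to zero while the worst-case error over the interval barely moves: a local property failing to be a global one.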

We have also emphasised that the theory of linear spaces can be used to show that it is possible to represent any periodic function to any desired degree of accuracy, provided the function is periodic and piecewise continuous (see Appendix C for some details).

To start, it is easiest to focus on functions that are defined in the closed interval [−π, π]. These functions will be piecewise continuous and they will possess one sided limits at −π and π. So, using mathematical notation, we have f : [−π, π] → ℝ. The restriction to this interval will be lifted later, but periodicity will always be essential. It also turns out that the points at which f is discontinuous need not be points at which f is defined uniquely. As an example of what is meant, see Figure 4.

It is, however, still difficult to prove rigorously (see Figure 4.). At other points, including the end points, the theorem gives the useful result that at points of discontinuity the value of the Fourier series for f takes the mean of the one sided limits of f itself at the point of discontinuity. Given that the Fourier series is a continuous function (assuming the series to be uniformly convergent) representing f at this point of discontinuity, this is the best that we can expect.

Dirichlet's theorem is not therefore surprising. The formal proof of the theorem can be found in graduate texts such as Pinkus and Zafrany and depends on careful application of the Riemann-Lebesgue lemma and Bessel's inequality. We now state the basic theorem that enables piecewise continuous functions to be able to be expressed as Fourier series.

The linear space notation is that used in Appendix C, to which you are referred for more details. Theorem 4. Proof: First we have to establish that ⟨f, g⟩ is indeed an inner product over the space of all piecewise continuous functions on the interval [−π, π].

The integral certainly exists. As f and g are piecewise continuous, so is the product fg, and hence it is Riemann integrable. There are no surprises. Time spent on this is time well spent, as orthonormality lies behind most of the important properties of Fourier series.

For this, we do not use short cuts. Hence the theorem is firmly established. It is in fact also true that this sequence forms a basis (an orthonormal basis) for the space of piecewise continuous functions in the interval [−π, π].

This, and other aspects of the theory of linear spaces (an outline of which is given in Appendix C), ensures that an arbitrary element of the linear space of piecewise continuous functions can be expressed as a linear combination of the elements of this sequence, i.e. as a Fourier series.

At points of discontinuity, the left hand side is the mean of the two one sided limits, as dictated by Dirichlet's theorem. At points where the function is continuous, the right-hand side converges to f(x) and the tilde means equals. The authors of engineering texts are happy to start with Equation 4. This is the standard expansion of f in terms of the orthonormal basis and is the Fourier series for f.

Invoking the linear space theory therefore helps us understand how it is possible to express any function, piecewise continuous in [−π, π], as the series expansion 4. Unfortunately books differ as to where the factor goes. Moving it elsewhere is not done here, as it contravenes the definition of orthonormality, which is offensive to pure mathematicians everywhere.

The upshot of this combination is the "standard" Fourier series, which is adopted from here on. There is good news for those who perhaps are a little impatient with all this theory: it is not at all necessary to understand linear space theory in order to calculate Fourier series.

The earlier theory gives the framework in which Fourier series operate as well as enabling us to give decisive answers to key questions that can arise in awkward or controversial cases, for example if the existence or uniqueness of a particular Fourier series is in question.

The first example is not controversial (Example 4.). Here is a slightly more involved example.


Solution: This problem is best tackled by using the power of complex numbers. We start with the two standard formulae expressing cosine and sine in terms of complex exponentials. Let us take this opportunity to make use of this series to find the values of some infinite series. The most straightforward way of generalising to Fourier series of any period is to effect the transformation x → πx/l, where l is assigned by us. Thus if x ∈ [−l, l], then πx/l ∈ [−π, π]. Here is just one example. However, here we give formal definitions and, more importantly, see how the identification of oddness or evenness in functions literally halves the amount of work required in finding the Fourier series.

Well known even functions are x² and cos x; well known odd functions are x, sin x and tan x. An even function of x, plotted on the (x, y) plane, is symmetric about the y axis. An odd function of x drawn on the same axes is anti-symmetric (see Figure 4.). The important consequence of the essential properties of these functions is that the Fourier series of an even function has to consist entirely of even functions and therefore has no sine terms.

Similarly, the Fourier series of an odd function must consist entirely of odd functions, i.e. sine terms. We have already had one example of this: the function x is odd, and the Fourier series found after Example 4. contains only sine terms.
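This halving of the work can be checked symbolically for f(x) = x on [−π, π]. The sketch below uses SymPy (not part of the text) with the "standard" coefficient conventions aₙ = (1/π)∫ f cos nx dx and bₙ = (1/π)∫ f sin nx dx: the cosine coefficients vanish identically because the integrand is odd.

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)

f = x   # an odd function on [-pi, pi]

# "Standard" Fourier coefficients on [-pi, pi]:
a_n = sp.integrate(f*sp.cos(n*x), (x, -sp.pi, sp.pi))/sp.pi   # odd integrand -> 0
b_n = sp.simplify(sp.integrate(f*sp.sin(n*x), (x, -sp.pi, sp.pi))/sp.pi)
# b_n = 2*(-1)**(n+1)/n, so x ~ 2(sin x - sin 2x / 2 + sin 3x / 3 - ...)
```

Only the sine coefficients survive, exactly as the parity argument predicts, and they carry the familiar alternating 1/n decay.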

Example 4. We shall utilise the properties of odd and even functions from time to time, usually in order to simplify matters and reduce the algebra. Another tool that helps in this respect is the complex form of the Fourier series, which is derived next.

If these equations are inserted into Equation 4., the complex form follows. More importantly perhaps, it enables the step to Fourier Transforms to be made (Chapter 6), which not only unites this chapter and its subject, Fourier Series, with the earlier parts of the book on Laplace Transforms, but leads naturally to applications in the field of signal processing, which is of great interest to many electrical engineers.
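The complex coefficients cₙ = (1/2π)∫ f e^(−inx) dx can likewise be computed symbolically. The sketch below (again using SymPy, with f(x) = x as an illustrative choice) confirms the standard relation cₙ = (aₙ − i bₙ)/2: for this odd function cₙ = i(−1)ⁿ/n, purely imaginary because all the aₙ vanish.

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)

f = x

# Complex Fourier coefficient on [-pi, pi]: c_n = (1/(2*pi)) * integral of f e^{-inx}.
c_n = sp.integrate(f*sp.exp(-sp.I*n*x), (x, -sp.pi, sp.pi))/(2*sp.pi)
# Expected: c_n = I*(-1)**n / n, i.e. (a_n - I*b_n)/2 with a_n = 0, b_n = 2(-1)^{n+1}/n.
```

One formula replaces the separate sine and cosine integrals, which is one reason the complex form is the natural bridge to the Fourier Transform.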

Solution: We could go ahead and find the Fourier series in the usual way. However it is far easier to use the complex form, but in a tailor-made way, as follows.

In Example 4., it is therefore legal (see Section 4.). From a practical point of view, it is useful to know just how many terms of a Fourier series need to be calculated before a reasonable approximation to the periodic function is obtained.

Problems arise where there are rapid changes of gradient (at corners), and in trying to approximate a vertical line via trigonometric series, which brings us back to Dirichlet's theorem. The overshoots at corners (Gibbs' phenomenon) are among these problems. Here we concentrate on finding the series itself, and now move on to some refinements. This is entirely natural, at least for the applied mathematician! Half range series are, as the name implies, series defined over half of the normal range.

That is, for standard trigonometric Fourier series, the function f(x) is defined only in [0, π] instead of [−π, π]. The value that f(x) takes in the other half of the interval, [−π, 0], is free to be defined. We are not defining the same function as two different Fourier series, for f(x) is different, at least over half the range (see Figure 4.).

We are now ready to derive the half range series in detail. First of all, let us determine the cosine series; we evaluate this carefully using integration by parts and show the details. The sequences 1/√2, cos x, cos 2x, … and sin x, sin 2x, … each remain orthogonal over the half range, so half range series are thus legitimate. Intuitively, it is the differentiation of Fourier series that poses more problems than integration. This is because differentiating cos nx or sin nx with respect to x gives −n sin nx or n cos nx, which for large n are both larger in magnitude than the original terms.
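A half range cosine series can be checked symbolically. The sketch below (SymPy, with f(x) = x on [0, π] as an illustrative choice, not necessarily the book's example) computes aₙ = (2/π)∫₀^π x cos nx dx and evaluates a partial sum; the even coefficients vanish and the odd ones decay like 1/n².

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)

# Half range cosine series of f(x) = x on [0, pi] (even extension is |x|):
a0 = sp.integrate(x, (x, 0, sp.pi))*2/sp.pi                       # = pi
a_n = sp.simplify(sp.integrate(x*sp.cos(n*x), (x, 0, sp.pi))*2/sp.pi)
# a_n = 2((-1)^n - 1)/(pi n^2): zero for even n, -4/(pi n^2) for odd n.

# A modest partial sum already approximates f well away from the corners:
partial = a0/2 + sum(a_n.subs(n, k)*sp.cos(k*x) for k in range(1, 40))
err = sp.N(sp.Abs(partial.subs(x, 1) - 1))
```

The 1/n² decay of the coefficients is what makes this cosine series converge comfortably, in contrast to the sine (odd extension) series for the same function.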

For those familiar with numerical analysis this comes as no surprise as numerical differentiation always needs more care than numerical integration which by comparison is safe. The following theorem covers the differentiation of Fourier series. The integration of a Fourier series poses less of a problem and can virtually always take place. A minor problem arises because the result is not necessarily another Fourier series.

A term linear in x is produced by integrating the constant term whenever this is not zero. Formally, the following theorem covers the integration of Fourier series. It is not proved either, although a related more general result is derived a little later as a precursor to Parseval's theorem. The three Fourier series themselves can be derived using Equation 4. We state without proof the following facts about these three series. The series for x² is uniformly convergent.

Neither the series for x nor that for x³ is uniformly convergent. All the series are pointwise convergent. It is therefore legal to differentiate the series for x² but not either of the other two.

All the series can be integrated. Let us perform the operations and verify these claims. It is certainly true that the term by term differentiation of the series for x² gives a convergent series for 2x. Integrating a Fourier series term by term leads to the generation of an arbitrary constant. This can only be evaluated by the insertion of a particular value of x.

To see how this works, let us integrate the series for x² term by term. The result is x³/3 = π²x/3 + 4∑ₙ(−1)ⁿ sin(nx)/n³, the arbitrary constant vanishing on setting x = 0. This integration of Fourier series is not always productive.

Integrating the series for x term by term is not useful as there is no easy way of evaluating the arbitrary constant that is generated unless one happens to know the value of some obscure series.

Note also that blindly (and illegally) differentiating the series for x³ or x term by term gives nonsense in both cases. Engineers need to take note of this! Let us now derive a more general result involving the integration of Fourier series.

Suppose F(t) is piecewise differentiable in the interval (−π, π) and therefore continuous on the interval [−π, π]. We then set ourselves the task of determining the Fourier series for G(x). In fact we alluded to this in Example 4. Here is an example (Example 4.) where the ability to integrate a Fourier series term by term proves particularly useful. This is a useful result for mathematicians, but perhaps its most helpful attribute lies in its interpretation. The left hand side represents the mean square value of f(t) once it is divided by 2π.

It can therefore be thought of in terms of energy if f(t) represents a signal. What Parseval's theorem states, therefore, is that the energy of a signal expressed as a waveform is proportional to the sum of the squares of its Fourier coefficients.
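Parseval's theorem can be verified directly for f(x) = x on [−π, π], whose only nonzero coefficients are bₙ = 2(−1)^(n+1)/n, so bₙ² = 4/n². The sketch below (SymPy, illustrative rather than from the text) checks that (1/π)∫ f² dx equals the sum of the squared coefficients; along the way it recovers ∑ 1/n² = π²/6, a mathematical consequence of exactly the kind mentioned below.

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)

# Parseval for f(x) = x on [-pi, pi]: b_n = 2(-1)^{n+1}/n, so b_n**2 = 4/n**2.
lhs = sp.integrate(x**2, (x, -sp.pi, sp.pi))/sp.pi      # "energy": 2*pi**2/3
rhs = sp.summation(4/n**2, (n, 1, sp.oo))               # 4 * zeta(2) = 2*pi**2/3
```

Both sides equal 2π²/3: the mean square of the signal matches the sum of the squares of its Fourier coefficients, and rearranging gives the classical value π²/6 for ∑ 1/n².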

In Chapter 6, when Fourier Transforms are discussed, Parseval's theorem re-emerges in this practical context, perhaps in a more recognisable form. For now, let us content ourselves with a mathematical consequence of the theorem. The first two exercises depend more on knowledge of Appendix C and may be left if desired.

1. Hence find the values of the four series.
2. Determine the two Fourier half-range series for the function f(t) defined in Exercise 9, and sketch the graphs of the function in both cases over the range given.
3. Obtain the first five terms of the complex Fourier series for V(t).

In Chapter 4, Fourier series were introduced, and the important property that any reasonable function can be expressed as a Fourier series was derived. In this chapter, these ideas are brought together, and the solution of certain types of partial differential equation using both Laplace Transforms and Fourier Series is explored.

The study of the solution of partial differential equations (abbreviated PDEs) is a vast topic that it is neither possible nor appropriate to cover in a single chapter. There are many excellent texts (Sneddon and Williams, to name but two) that have become standard.

Here we shall only be interested in certain types of PDE that are amenable to solution by Laplace Transform. Of course, to start with we will have to assume you know something about partial derivatives!


If a function depends on more than one variable, then it is in general possible to differentiate it with respect to one of them provided all the others are held constant while doing so. Thus, for example, a function of three variables f(x, y, z), if differentiable in all three, will have three derivatives written ∂f/∂x, ∂f/∂y and ∂f/∂z. The three definitions are straightforward and, hopefully, familiar. If all this is deeply unfamiliar, mysterious and a little terrifying, then a week or two with an elementary text on partial differentiation is recommended.

It is an easy task to perform. Also, it is easy to deduce that all the normal rules of differentiation apply as long as it is remembered which variables are held constant and which is the one with respect to which the function is being differentiated. One example makes all this clear (Example 5.). Solution (a): The partial derivatives are as follows. This is a direct extension of the "function of a function" rule for single variable differentiation.

There are other new features, such as the Jacobian. We shall not pursue these here; instead the interested reader is referred to specialist texts such as Weinberger or Zauderer. Thus we will eventually concentrate on second order PDEs of a particular type.

However, in order to place these in context, we need to quickly review (or introduce, for those readers new to this subject) the three different generic types of second order PDE. The general second order PDE can be written in terms of coefficients a₁, b₁, c₁, d₁, e₁, f₁ and g₁, which are suitably well behaved functions of x and y. However, this is not a convenient form of the PDE for φ.

By judicious use of Taylor's theorem and simple co-ordinate transformations it can be shown (e.g. Williams, Chapter 3) that there are three basic types of linear second order partial differential equation. These standard types of PDE are termed hyperbolic, parabolic and elliptic, following geometric analogies, and are referred to as canonical forms.

This notation is very useful when writing large complicated expressions that involve partial derivatives. Laplace Transforms are useful in solving parabolic and some hyperbolic PDEs. They are not in general useful for solving elliptic PDEs.

The commonest hyperbolic equation is the one dimensional wave equation. This takes the form ∂²u/∂t² = c² ∂²u/∂x², where c is a constant called the celerity or wave speed. This equation can be used to describe waves travelling along a string, in which case u will represent the displacement of the string from equilibrium, x is distance along the string's equilibrium position, and t is time.

As anyone who is familiar with stringed instruments will know, u takes the form of a wave.

The derivation of this equation is not straightforward, but rests on the assumption that the displacement of the string from equilibrium is small. This means that x is virtually the distance along the string. If we expand f and g as Fourier series over the interval [0, L] in which the string exists (for example between the bridge and the top machine head end of the fingerboard in a guitar), then it is immediate that u can be thought of as an infinite superposition of sinusoidal waves. If the boundary conditions are appropriate to a musical instrument, i.e. the string is fixed at both ends, only these sinusoidal modes are present.
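Each term of that superposition can be checked against the wave equation symbolically. The sketch below (SymPy; the mode shape sin(nπx/L) cos(nπct/L) is the standard standing wave, assumed here rather than quoted from the text) verifies that every mode satisfies u_tt = c²u_xx and vanishes at both fixed ends.

```python
import sympy as sp

x, t, c, L = sp.symbols('x t c L', positive=True)
n = sp.symbols('n', positive=True, integer=True)

# A standing-wave mode of the one dimensional wave equation u_tt = c^2 u_xx:
u = sp.sin(n*sp.pi*x/L)*sp.cos(n*sp.pi*c*t/L)

residual = sp.simplify(u.diff(t, 2) - c**2*u.diff(x, 2))   # should vanish
ends = (u.subs(x, 0), sp.simplify(u.subs(x, L)))            # fixed ends: both zero
```

Since the equation is linear, any superposition of such modes is again a solution, which is precisely why the Fourier expansions of f and g determine u.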

Although it is possible to use the Laplace Transform to solve such wave problems, this is rarely done, as there are more natural methods and procedures that utilise the wave-like properties of the solutions but are outside the scope of this text. What we are talking about here is the method of characteristics; see e.g. Williams, Chapter 3. There is one particularly widely occurring elliptic partial differential equation which is mentioned here but cannot in general be solved using Laplace Transform techniques.

Laplace's equation occurs naturally in the fields of hydrodynamics, electromagnetic theory and elasticity when steady state problems are being solved in two dimensions. Examples include the analysis of standing water waves, the distribution of heat over a flat surface very far from the source a long time after the source has been switched on, and the vibrations of a membrane.

Many of these problems are approximations to parabolic or wave problems that can be solved using Laplace Transforms. There are books devoted to the solutions of Laplace's equation, and the only reason its solution is mentioned here is because the properties associated with harmonic functions are useful in providing checks to solutions of parabolic or hyperbolic equations in some limiting cases.

Let us without further ado go on to discuss parabolic equations. The most widely occurring parabolic equation is called the heat conduction equation. The thermal conductivity (or thermal diffusivity) of the bar is a positive constant that has been labelled κ. One scenario is that the bar is cold (at room temperature, say) and that heat has been applied to a point on the bar. The solution to this equation then describes the manner in which heat is subsequently distributed along the bar.
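The characteristic behaviour of the heat conduction equation u_t = κ u_xx can be seen in its separated solutions. The sketch below (SymPy; the mode e^(−κn²t) sin nx is the standard separated solution, used here as an illustration) verifies one such solution and exhibits the feature that makes the equation parabolic in character: the nth Fourier mode decays like e^(−κn²t), so sharp features (high n) are smoothed out fastest.

```python
import sympy as sp

x, t, kappa = sp.symbols('x t kappa', positive=True)
n = sp.symbols('n', positive=True, integer=True)

# A separated solution of the heat conduction equation u_t = kappa * u_xx:
# each spatial mode sin(nx) decays exponentially, higher modes fastest.
u = sp.exp(-kappa*n**2*t)*sp.sin(n*x)

residual = sp.simplify(u.diff(t) - kappa*u.diff(x, 2))   # should vanish
```

A general initial temperature distribution, expanded as a Fourier series, therefore evolves by each coefficient being multiplied by its own decaying exponential.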

Another possibility is that a severe form of heat, perhaps using a blowtorch, is applied to one point of the bar for a very short time and then withdrawn. The solution of the heat conduction equation then shows how this heat gets conducted away from the site of the flame. A third possibility is that the rod is melting, and the equation is describing the way that the interface between the melted and unmelted rod is travelling away from the heat source that is causing the melting. Solving the heat conduction equation would predict the subsequent heat distribution, including the speed of travel of this interface.