Lec 21 | MIT 6.450 Principles of Digital Communications I, Fall 2006


FEMALE VOICE: The following
content is provided under a creative commons license. Your support will help MIT
OpenCourseWare continue to offer high quality educational
resources for free. To make a donation or to view
additional materials from hundreds of MIT courses, visit
MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: We’ve just started
talking about wireless communication. We spent a lot of time talking about how you communicate on wire lines, where essentially the only problem is white
Gaussian noise. Namely, you transmit a signal,
noise gets added to it, you receive the sum of the
transmitted signal plus noise and all of the going from
base band to pass band. All of that stuff is all messy
analytically, but essentially all that’s happening is that
you’re going from — you take a signal, you move up
to pass band, you add noise, you move back to base band,
in which case you get the original transmitted signal
plus the noise moved back to base band. So what we wound up
with there was a relatively simple situation. Wireless communication
is not so simple. As you all know if you use a cell phone, and I'm sure most of you do, the quality varies because of something called fading — what we want to understand is
where this fading comes from, how it arises and some of the
things you might do about it. One of the things you should
recognize right at the beginning — and we won’t
spend much time talking about it — is if you happen to be trying
to use a cellular phone and there’s a big wall, which is
perfectly reflecting right in front of you and the base
station you’re trying to communicate with is on the
other side of that wall, you’re not going
to get through. In other words, there are some
situations where no matter how you design a cellular phone,
you just can’t get any communication. It’s part of life. What we want to do, however, is
to make sure that when you can get communication,
you will get it as well as possible. You will prolong the period over which you're still communicating while the channel's getting
worse and worse — and part of the way to do that
is to understand what it is in the physical mechanisms that’s
making the problem difficult. OK, so to start out with this,
we’ll start out kind of easy. We’ll assume an input, which is
just a cosine of 2 pi ft at some fixed antenna — and this is radiating outwards
and the electric field — anywhere in free space, if you
go a long distance away. If you go a very short distance
from the antenna, if you study electromagnetism,
you know that all sorts of crazy things are going on. When you get very far away,
there’s something called the far field and essentially what
happens is that all of these disturbances close in just sort
of disappear and very far away what’s happening is that
the field strength — and this is true for the
magnetic field also, which is just in — if the electric field is
this way, the magnetic field is that way. They're both propagating outwards and they're going down as 1 over r. Just as when we deal with linear systems we never talk about
voltage and current anymore. This is why we use a square
root of minus 1 at this point — because after you learn about
voltage and current, you don’t have to talk about
them anymore. You just deal with one of them
for the function that exists someplace and the other
one just follows along from the impedance. Same thing happens for
electric fields and magnetic fields. You don’t have to bother about
both of them, unless you’re really trying to solve Maxwell’s
field equations, which is a very challenging
endeavor. I admire any of you
who can do this. I used to be able to do it many,
many years ago and I’ve given up on it because I decided
that’s for younger people than me. This far field has a very
simple kind of behavior, because what happens is it has
to go down as 1 over r. How do you know it has to
go down as 1 over r? Namely, the distance away from
this radiating antenna. If you look at a sphere, which
is very, very far away from the radiating antenna, all you
find is this field that’s radiating outwards and you look
at how much energy is radiating outwards. The energy which is radiating
outwards, there’s no place to lose it because we’re
transmitting just an open space now — or at least that’s
what we’re imagining at this point. So we’re transmitting
out in open space. Energy doesn’t get beaten up
any place so it just keeps going out until it disappears
out of the outer edges of the known universe. It travels out there with
the speed of light. That’s what Maxwell’s laws
say if you solve them. Since this sphere has an area
which is proportional to r squared, the only thing that can
be happening in this wave, which is propagating outward
spherically, is it has to be going down as 1 over r, because
that’s the only way for the power to balance out. You can’t be creating power out
in this free space and you can’t be losing power. Whatever you’re sending is just
radiating outwards, so we have to have this 1 over r
dependence any time you’re dealing with free space. The other thing that’s happening
is we have an antenna pattern. The antenna pattern is a
function of two angles. We’ll think of theta as being
the angle around this way and psi as being the
angle this way. If you like to think
of angles in some other way, be my guest. The only thing is, when we’re
radiating outwards through this sphere, if the antenna is
directional, it’s going to be radiating more in some
directions than it is in other directions and this simply
takes that into account. I’m not going to pay any
attention to this at all. It just exists and it’s there. For people who design antennas,
that’s an important issue: How do you make
antennas which are directional, which will shine
their energy in one direction rather than another? We’re not going to go
into that at all. We’re just representing the
fact that it’s there. This is just a factor which
says, how much loss do you have in the antenna and how much
of the power that you’re radiating goes in each one of
these directions and how does it depend on the frequency that
you’re transmitting at? Antennas are sometimes
designed to be rather frequency dependent,
so they’re tuned to a certain frequency. They work very well at that
frequency and on other frequencies, they
just go to pot. The other part of this equation
is that if I send a signal, as it radiates outward,
it’s going to be radiating outward at
the speed of light. Therefore, whatever I receive is
going to be delayed by the distance that I am away divided
by the speed of light. So this equation is something
you don’t really have to know any electromagnetics
to derive. If you want to find out what
this term is, yes, you’d need some electromagnetics. All this is saying is the power
has to be going down as 1 over r and you have a
propagation delay, which has to be going as r divided
by the speed of light. Now, if we look at what
happens when we put a receiving antenna out at some
distance r away from the transmitting antenna and in this
direction theta and psi, that receiving antenna is
going to distort this electromagnetic wave locally
around the receiving antenna, but it’s not going to distort
the whole thing. In other words, its power
is radiating outwards. It doesn’t know anything about
this little receiving antenna until it gets close to it, then
the electromagnetic wave gets distorted somewhat. The only thing that’s happening
because of this receiving antenna is that
there’s some added antenna pattern due to the receiving
antenna — namely, some added attenuation which multiplies
by the attenuation in the source antenna. You have both the source antenna
pattern, the receiving antenna pattern. We put the two together. We call that alpha and we don’t
bother about it anymore except recognizing that
it might change with frequency also. Then we have this propagation
delay between transmitting antenna and receiving antenna. If
you’re transmitting in free space from one antenna to
another antenna, this is what happens, with some arbitrary
pattern here for the two antennas, which depends on
whether they radiate spherically or whether
they radiate in some sort of direction. That’s the received wave form. That said, if you look at it,
at the received field — this is supposed to be
valid for any — oh, sorry. Yes, that would help. This equation is supposed to be
valid for any frequency we want to transmit at, at
any time, and if we’re transmitting two signals, both
together, the response is going to be the sum of the
response to one signal and a response to the other signal. In other words, Maxwell’s laws
are linear and therefore the response that you get when you
solve Maxwell’s laws — I’ve never solved Maxwell’s
laws for anything this complicated and probably
none of you had either. You might have, but anyway, they
are linear and therefore, in fact, what’s going on is that
this gives you a system function which says what the
response is to any given input that you might want
to transmit. It says the response — at frequency f to a sinusoidal
input, which is what we were assuming before. We were assuming that we
transmitted cosine 2 pi ft. The system function is just this
antenna pattern times e to the minus 2 pi i times f
times the distance away divided by c divided by r. Namely, this takes into account
the propagation delay. The only thing it doesn't
take into account is what the input is. The received field is then the real part of the system function times what we already assumed we were going to be transmitting, namely the real part of the system function times e to the 2 pi ift.
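A small sketch of this relation in Python (the antenna gain alpha, distance r, and frequency below are assumed illustrative values, not numbers from the lecture):

import numpy as np

c = 3e8          # speed of light, m/s
alpha = 1.0      # combined antenna pattern / gain factor (assumed)
r = 2000.0       # distance from the transmitting antenna, m (assumed)
f = 1e9          # transmitted frequency, Hz (assumed)

def system_function(f):
    # h_hat(f) = alpha * exp(-2*pi*i*f*r/c) / r: 1/r attenuation plus propagation delay
    return alpha * np.exp(-2j * np.pi * f * r / c) / r

def received_field(t):
    # E_r(t) = Re{ h_hat(f) * exp(2*pi*i*f*t) }
    return np.real(system_function(f) * np.exp(2j * np.pi * f * t))

print(received_field(0.0))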
So at this point, what we're doing is taking into account the fact that the solution to Maxwell's equations is going to be linear and therefore we can just add up what happens for each frequency of input. What you notice from looking at
that is that when we have a fixed transmitting antenna, a
fixed receiving antenna and free space between them, we are
right back to the problem that we started with. Namely, white Gaussian noise on
a channel, because nothing is varying with time. There isn’t any fading. Nothing interesting
is going on. This is sort of like the case
of microwave towers. Microwave towers are set up and
they have nice directional antennas, nice horns which are
blowing at each other. Nothing changes except every
once in a while there is a rainstorm or something and the
communication goes to pot. Usually, you’re just
transmitting as if it was white Gaussian noise and you
usually view microwave towers sending microwave as being
almost equivalent to wire line communication. So there’s nothing very
interesting there. Now if the receiving antenna
starts to move — at this point, we are
transmitting from a fixed sending antenna. We have a receiving antenna,
which for example, is in a car where somebody’s running along,
driving a car with their feet and talking on
two cellular phones at the same time — and the car’s driving along
at 100 miles an hour and something’s going to
happen soon, but it hasn’t happened yet. At this point what we’re
interested in is not the response of some fixed place r,
but what we’re interested in is the electromagnetic
field in the absence of receiver at this point,
which is moving. We’re interested in the
electromagnetic field at a point r0 plus vt, where v is
the velocity of this vehicle. We’ll assume for the time being
that the vehicle is going directly away from the
transmitting antenna. If it’s going at some angle,
it just changes these equations a little bit. And we want the electric field there before we put the car in; the car changes all of the field patterns, but just changes them in a local way again. Just like when we put the
receiving end antenna in a fixed location, it changed all
the local field equations, but it didn't change anything
globally. Again, what we have
when we put in the receiving antenna — is now the electric field
at a point r, which is varying with time. There’s a time dependence
with this now. It's 1 over r0 plus vt, times the real part of this
antenna pattern — which we’ll assume remains fixed — times e
to the 2 pi if times t minus r 0 plus vt over c. This is for a velocity away
from the antenna. That’s just what this same
equation says, but we can now interpret this nicely if we take
the vt over c and combine it with the ft here and then we
get something which looks like this antenna
pattern again — e to the 2 pi i f times 1
minus v over c times t. This v over c here is
just this term here. It’s coming down to there. Nothing mysterious has happened
here, but what you see is this well known
phenomenon called Doppler shift. If you throw some screaming
person over a cliff, what you'll hear coming back to you is a scream at a frequency lower than the frequency the person is actually transmitting to you. You're all familiar with this. You're familiar with having
planes fly overhead and when you hear them coming towards
you, you hear a higher pitched sound. When it passes by you — this
is a nicer example than throwing somebody over
a cliff, obviously — and you then hear a lower
frequency sound as the plane starts moving away from you. This Doppler shift is a well known phenomenon as far as sound is concerned. The same phenomenon exists with
electromagnetic radiation. And there’s nothing more
to it than just this — it’s just that as you are
transmitting from here to a point which is moving away, it
keeps taking longer for the electromagnetic wave to get out
to there than it takes to get here, so if you look at the
wave fronts going along, the peak of the wave as it
travels along, the peak of the wave takes a little longer to
get out here than it took to get here, which means that
from the viewpoint of the receiver, it looks like the
receive frequency is much smaller than it was when
it was actually being transmitted. We get this thing called
the Doppler shift. Just to get some idea of the
magnitude of this Doppler shift, what you’re interested in
is the speed of the vehicle divided by the speed of light. That’s the relative
change in the frequency that you observe. The situation here is quite
different than it is in sound. Sound travels rather slowly. Light travels pretty fast and
therefore, you need a really rapidly speeding vehicle to make
this be any appreciable fraction of one. So it looks like this is
a very small effect. The trouble is, what you are
multiplying this effect by is the carrier frequency,
which can be up in the gigahertz range. To look at it another way, we
are looking at situations where the wavelength is small
fractions of the meter. What this equation says is that
any time this receiving vehicle moves by one wavelength,
namely a small fraction of a meter, the crest
of this wave goes from maximum down to minimum back up
to maximum again. In other words, in a quarter
wavelength, it will go from maximum down to zero. What you are observing at this
carrier frequency is very, very different and it keeps
changing rather rapidly. All these other terms are
just junk, of course. This f times r0 over c — all that is, is just a fixed phase difference, so we don't care about that. This is just some fixed term. This quantity here is changing with t also, but if we're thinking of a distance away of several kilometers, the amount of time for it to change appreciably, to become an appreciable fraction of one, is seconds or minutes. The amount of time for this to go through a wavelength change is milliseconds.
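To put rough numbers on this (a quick Python sketch using the 60 kilometers per hour and 1 gigahertz figures that come up elsewhere in the lecture; the exact values are only illustrative):

c = 3e8                   # speed of light, m/s
f_c = 1e9                 # carrier frequency, Hz
v = 60e3 / 3600           # vehicle speed, m/s (60 km/h)

doppler_shift = f_c * v / c                        # about 56 Hz
wavelength = c / f_c                               # 0.3 m
time_per_quarter_wavelength = (wavelength / 4) / v

print(f"relative shift v/c    : {v / c:.2e}")
print(f"Doppler shift         : {doppler_shift:.1f} Hz")
print(f"time to move lambda/4 : {time_per_quarter_wavelength * 1e3:.1f} ms")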
So despite the fact that you see this sitting in an important place down there, this is not important. Everything that goes on as far as fading is concerned is tied up with that term there. This is important. So we now have a system
which is linear. We still have the linear field
equation, but it’s not time invariant anymore. It’s changing with time. The response is changing
with time. You send an exponential; what do you receive? If you have a linear time
invariant system, when you send an exponential of frequency
f, you receive an exponential with frequency f. The only thing that a linear
time invariant system can do is change the phase of that
signal and change the amplitude of it. Can’t do anything more
complicated than that. That’s why we love to study it;
because it’s so simple. Now we have something more. We have a system that can also
change the frequency of what’s getting received. This small change down here — and of course, if obstacles get
in the way or something then there’s this huge shadowing
difference and all those important things,
but you can’t do anything about that. You can do something about
this, which is why we’re focusing on that. Let’s go to the next example. Incidentally, that example
is no problem at all for communication. I’ll show you why
in a little bit. You can get around that problem
very, very easily and I’ll show you why. This is a problem you can’t
get around so easily. Here we have a vehicle, which
is travelling, say, at 60 kilometers an hour. Person’s talking on his two
cell phones, has his eyes closed because something
surprising is happening, there’s this big reflecting wall
right in front of him and he doesn’t see it at all, so
he’s speeding into this wall. We’re going to analyze
this problem right before he hits the wall. We have two paths here. We have one path which is the
path from here out to the vehicle, which has a
length, r of t — this is the distance away from
the sending antenna to the receiving antenna. We have another path which has
a length d, then it gets reflected and this distances
is d minus r of t. The total length — and you’re adding up this length
with this length — is 2d minus r of t. The reason is — do I have
an extra picture there? No, I didn’t make my extra
pretty picture. The reason is that one way to
deal with electromagnetic radiation is when you see a
wall, the thing that happens is that you get a reflection
which is coming back this way. The reflection has a strength
which is equal to the radiation that you would get if
there weren't any wall — except, of course, that
you’ve changed directions. In other words, this wall has
generated a new plane wave which is going backwards, which
is just enough to make the electric field strength on
this wall equal to zero, because we’re assuming a
perfectly reflecting wall. You can satisfy Maxwell’s
equations by having this incoming electromagnetic wave. You would like to have an
outgoing electromagnetic wave, but you can’t do that because
there’s no way for the wave to get through it. The only way you can do it is to
generate a new wave, which is moving backwards, which
cancels out the incoming wave right at this point. We really have a path here of
length 2d minus r of t and as a result of that, the electric
wave has two components. One is the component we were
dealing with before where there’s this Doppler shift
because this is moving away from the sending antenna. The other term — in fact, we are moving closer
to the wall and the distance in this path is getting shorter
and shorter as doom approaches. Here we have a positive
Doppler shift. Here we have a negative Doppler
shift and we have these junk terms
in both places. One is f r0 over c. One is f times 2d minus r0, over c. Here we have r0 plus vt. Here we have 2d minus r0 minus vt. As we said before, this term and
this term are not changing very rapidly. It makes it a little easier to
analyze this if we say, let’s suppose that this is
equal to this. In other words, we’d like to
look at this right before the car strikes the wall. That also is where this
approximation is best because for those of you who have
studied electromagnetism, you know that if a plane wave
impinges on a wall, funny things happen. If you look very far away from
the wall, you will find this electromagnetic wave, which
looks like a plane wave if the wall is distant from
the source. So this electromagnetic wave
coming in, there’s this wall in here and what happens to the
electromagnetic radiation is outside of the wall that’s
going to go out past the wall — and because of Maxwell’s
equations, it just sort of gathers together beyond
the wall and it sort of comes together. What you find is a disturbance,
which is just around the wall. Far away from the wall, you get
the same electromagnetic radiation that you had before
and close to the wall you have this disturbance. If you look at the
situation — here it is. If you look at what happens here
and the wall is not big enough, if the wall is very
small, this reflection is really going to look like what
happens when you have an electromagnetic wave hitting
the wall and the wall then re-radiates an electromagnetic
wave, which very far away, this wall just looks like
a point source. What you have then, instead of a 1 over 2d minus r attenuation, is a 1 over d attenuation multiplied by a 1 over d minus r attenuation. If you didn't get all
of that, fine. Doesn’t make any difference. The point that I’m trying to
make is that this analysis is really limited to the case where the wall is very large, not where the wall is rather small, because otherwise you won't have just this plane wave radiation effect. What we wind up with is these two
terms instead of one term. If I throw away all of the phase
terms and I assume that the denominators are equal — I’m going through this for
some sort of reason — I wind up with two sinusoids:
e to the blah and e to blah plus. When I take the real part of the
sum of two sinusoids, and I look in all of the high school
books I can think of about elementary geometry and
playing around with sine waves, what I find is that this
collapses into 2 alpha times the sine of 2 pi
ftv over c times the sine of 2 pi ft. In other words, it collapses
into a sinusoidal term, which is the major part of this term,
ft, and the major part of this term. In other words, I can cancel
out the terms here that are the same as the terms here. When I cancel out those
same terms, that’s the term that comes out. When I look at the other terms,
I get an e to the minus 2 pi ifvt over c and an e to the plus 2 pi ifvt over c. When we look at those two terms, even I know how to deal with that. It looks like either a cosine
term or a sine term, depending on whether the signs are the same or the signs are different. What we have at that point is
this sinusoid is really a sinusoid at the carrier that
we’re transmitting at. This term here is really
something which expands and contracts slowly. So it’s a beat, which says that
if you’re transmitting from this source at this
receiver, what you’re hearing is something which contracts,
expands, then contracts, then expands again. There’s nothing you can do about
that problem either. There’s just no energy there
part of the time. This sine term here is running
along, sort of changing from maximum to minimum at a few
milliseconds time period. If you have this vehicle
traveling at 60 kilometers per hour — you just work out the numbers
there with the velocity of light and all of that stuff. You find that you really can’t
communicate over your cell phone in that situation, because of these beat frequencies. It's too slow to ignore and ride over, and it's too fast to be able to get all of your data transmitted before it happens. It sort of is a catastrophe.
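Here is a small simulation of that beat, again with the 60 kilometers per hour vehicle and a 1 gigahertz carrier, and with a unit antenna gain assumed: the envelope is 2 alpha times the sine of 2 pi times the Doppler shift times t, so it nulls out every 1 over twice the Doppler shift seconds.

import numpy as np

c, f_c = 3e8, 1e9
v = 60e3 / 3600                 # 60 km/h in m/s
alpha = 1.0                     # combined path gain (assumed)

doppler = f_c * v / c           # per-path Doppler shift, about 56 Hz
t = np.linspace(0.0, 0.05, 50_001)                   # 50 ms of time
envelope = np.abs(2 * alpha * np.sin(2 * np.pi * doppler * t))

null_spacing = 1 / (2 * doppler)                     # time between deep fades
print(f"Doppler shift ~ {doppler:.1f} Hz, deep fade every ~ {null_spacing * 1e3:.1f} ms")
print(f"minimum of the envelope over 50 ms: {envelope.min():.4f}")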
So that's what happens because of Doppler shift. You get a response which is periodically fading at the Doppler frequency. This is called multipath fading or fast fading. It's called fast fading because
it happens so fast. It’s called multipath fading
because it happens because of multiple paths, which each
have lengths which are changing relative
to each other. Let’s go back and look at the
thing we had before, where we just had a moving antenna. Here you have a Doppler
shift also. Why doesn’t it bother you? Here you have this Doppler shift
and you’re transmitting, let’s say, a gigahertz and what
the receiver is getting is the gigahertz minus — perhaps a kilohertz
or something. So why isn’t that a problem? If I demodulate at the carrier
frequency, I’m sort of in bad luck because I have a
signal then which is changing very rapidly. I have something which looks
like a time varying system. But what’s going to happen? If I use the same kind of
frequency recovery system that we talked about earlier, that
frequency recovery system has all the time in the world to
track that frequency which is one gigahertz minus
a kilohertz. It can track it perfectly. So what happens is we start out with a signal. We move it up in frequency
by one gigahertz. The Doppler shift moves it
down by a kilohertz. We track that frequency, then
we move it down again by a gigahertz minus a kilohertz and
everything works fine and nobody even knows that there’s
any Doppler shift there. So the problem is not
Doppler shift. The problem is multiple Doppler
shifts which are at different speeds relative
to each other. That’s an important point and we
will come back to it as we move along. If you put all of those phases
that we neglected in the analysis that we just went
through, this is the equation that arises. I write this down not because
it’s important, but because the notes — this is
in lecture 20 — have an error. In the sine term, it fails to put in an i, which should be there; this is the correct term, and the one in the notes is off by ninety degrees. If you write this down, you will then see what's going on. Equation 7 in the notes has an e to the 2 pi ft minus fd over c, which is not
the right thing. As we said, the fading is due
to Doppler spread between different paths. The single Doppler shift does
not bother us at all. If you have a vehicle which
is traveling away from the sending antenna and you have
some kind of reflector, which is not something you’re running
into, but which is a reflector above, a reflector
below or something like that, the thing which is going to
happen is that both of those paths then are going to have
roughly the same Doppler shift in them and if they both have
roughly the same Doppler shift, it’s not going to be this
kind of beat cancellation that we have here, which
is something you really can’t get rid of. The other thing that this points
out again is that this variation is going to be in
terms of minutes or seconds and anything you’re doing to
track the signal is going to be adequate for that until you
get to the point where there just isn’t enough energy anymore
and then of course you have to move to a different
base station or something else. I want to go through one more example
of electromagnetism because it’s so surprising — at
least, it was surprising to me when I found this out. If you have a sending antenna
— think of this as a base station, which is high
up at about maybe 15 meters or something. It’s sending to some receive
antenna, which is at some height above the ground — usually quite a bit smaller. Suppose there is some more or
less partly reflecting plane, like a road. Here we have a vehicle which is
travelling along a road and we have a sending antenna, which
is also close to the road, which is sending the
signal so we have two paths, one which is the direct path
from sending antenna to receive antenna. The other is a reflecting path
which goes down here and comes back up again. The rather surprising thing
here, the thing that I couldn’t believe when I saw it,
because it contradicted all of my intuition, is that
when r gets quite big, the difference between these two
path lengths goes to 0. Is that surprising
to anybody else? Anybody awake enough
to be surprised? This difference in path length
here really goes down as 1 over r. That’s an easy geometric
problem to solve. You just write down what this
length is and the sum of these two lengths and you will find
when you do it that the difference is proportional
to 1 over r. What happens then is that as
r gets big, these two path lengths get closer and
closer together. As they get closer and closer
together, eventually they’re much closer than one wavelength
to each other. When they’re much closer than
one wavelength to each other, the thing that happens is that
we get a reflection in the electric field here, so. This field and this field
are going to be canceling each other. They’ll be canceling each other,
except for this phase difference which is proportional
to 1 over r, and the phase difference which is
proportional to 1 over r is the only thing that gives us
any power here at all. As we move further and further
away, that phase difference goes down as 1 over r, which
means what’s happening is the overall electric field that you
receive here, instead of going down as 1 over r, which
it would in free space, is going down as 1 over
r squared. Since it's going down as 1 over r squared, it means that the power that we're receiving is proportional to 1 over r to the 4th instead of 1 over r squared.
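A quick geometric check of that claim (the 15 meter base station height is the one mentioned above; the 1.5 meter receive antenna height is an assumed value): the exact path length difference tracks 2 times h_s times h_r over r as r grows.

import numpy as np

h_s, h_r = 15.0, 1.5     # transmit and receive antenna heights, m (receive height assumed)

for r in (100.0, 1_000.0, 10_000.0):
    direct = np.hypot(r, h_s - h_r)       # line-of-sight path length
    reflected = np.hypot(r, h_s + h_r)    # ground-bounce path length, via the image antenna
    diff = reflected - direct
    print(f"r = {r:8.0f} m   exact difference = {diff * 100:7.3f} cm   "
          f"2*h_s*h_r/r = {2 * h_s * h_r / r * 100:7.3f} cm")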
The analysis of this is not particularly important. Why it is that surfaces such as macadam reflect so well is not particularly important to us either. The point is that when you look
at all the problems of electromagnetic radiation, in
actual situations, you find some situations which behave
like this, you find some situations which behave
like this nice plane wave and free space. You find other things where in
fact the radiation goes down as 1 over r to the 6th, rather than 1 over r squared, so we wind up with each path having its own particular kind of attenuation and its own path length
and it’s kind of hard to figure out what
all of those are. Then you go on and say that
things are even worse than this because sometimes you’re
communicating through a wall which is only partly absorbing
and if you’re transmitting through a wall which is partly
absorbing, then the attenuation is exponential
in the width of the wall. You have a lot of different
paths, all of which are very messy electromagnetic radiation
problems and all of which have attenuations which
range from 1 over r squared to 1 over r to the 6th, sometimes with an
exponential thrown in for good measure, which says that if
you really try to find the electromagnetic field at a
wireless cell phone, you’re in very deep trouble. It’s certainly not something
that your cellular phone is going to solve for you and it’s
certainly not something that you’re going to have
solved ahead of time and program into your cellular phone
or into the base station or anything else. The question is, what
do we do about this? One thing is, we’re not going to
study these electromagnetic phenomena any further because
it’s a losing game. If the thing that you’re
interested in is finding out where to place base stations,
all of this kind of analysis is useful and you can go much
further with it and you should go much further with it. If your interest is in, how do
you build third or fourth or fifth or 20th generation
wireless systems? All the electromagnetics that
you study is only going to give you gross ideas of what
kind of phenomena you have to deal with. We already have some idea of the
kind of phenomena we have to deal with. We have to deal with paths
which have different attenuations on them, which
have different propagation delays on them, and all of
these multiple paths are things that we have to somehow
deal with without analyzing them in detail. The question is, how
do we do this? Everything that we’ve done so
far is called ray tracing. In fact, even with all the
complexity that we’re dealing with now, we have highly
oversimplified it. Each of these paths that we have
is going to give rise to an attenuation. We’ll now call the attenuation
beta sub j of t and a propagation delay, which we'll call tau sub j of t. The propagation delay is just what we get by assuming that a plane wave is going from source to destination; we add up the distance from source to reflector to destination and that gives us this propagation delay. These are going to vary with time, but in our ray tracing approximation, we've assumed
that they’re independent of frequency. I originally assumed that the
antenna pattern was a function of frequency. We didn’t want to say
anything about that. So we have a total of
capital J paths. If we put in an input of cosine 2 pi ft, what's going to come out is an electromagnetic radiation which is a sum of these different attenuation factors times e to the 2 pi if times the quantity t minus this propagation delay. Everything you can do with ray tracing is included in this formula.
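As a sketch of what that formula looks like in code for the reflecting wall example (the distances, speed, and antenna factor are assumed placeholder values, and the extra phase change of pi at the reflector, discussed a little later, is left out): each path contributes an attenuation beta j of t and a delay tau j of t, and the response to cosine 2 pi ft is the sum of the attenuated, delayed cosines.

import numpy as np

c = 3e8
v = 60e3 / 3600            # vehicle speed, m/s
r0, d = 1_000.0, 1_500.0   # initial distance and wall distance, m (assumed)
alpha = 1.0                # antenna pattern factor (assumed)

betas = [lambda t: alpha / (r0 + v * t),            # direct path attenuation
         lambda t: alpha / (2 * d - r0 - v * t)]    # reflected path attenuation
taus = [lambda t: (r0 + v * t) / c,                 # direct path delay
        lambda t: (2 * d - r0 - v * t) / c]         # reflected path delay

def received_field(f, t):
    # sum over paths of beta_j(t) * cos(2*pi*f*(t - tau_j(t)))
    return sum(b(t) * np.cos(2 * np.pi * f * (t - tau(t)))
               for b, tau in zip(betas, taus))

print(received_field(1e9, 0.001))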
What you might be able to do in a wireless system is, by looking at the received wave form and knowing things about the transmitted wave form, figure out what these attenuation factors are and what these propagation
delay factors are. Just like when we tried to do
frequency recovery, we can find out what the transmitted
frequency was. We can play the same sorts of
games here, but they’re harder and we’ll talk about
that later. If you ever hear the term rake
receiver, a rake receiver is a receiver that in fact measures
all this stuff and responds to it and we’ll talk about that
probably next Monday. If we want to look at the reflecting wall just as an example of what this formula means: beta 1 of t, namely for the direct path. We have an attenuation, which
is the magnitude of the antenna patterns divided
by r0 plus vt. For the attenuation on the
return path from the wall, it’s the same alpha. Why is it the same alpha? Because we assumed it was the
same alpha to make things simple for ourselves
— divided by 2d minus r0 minus vt. If you look at these propagation delay terms, the propagation delay terms are r0
plus vt divided by c and this gives us the Doppler shift that
we’re interested in here. We’re also going to have an
extra term here, which is really caused by the phase
change at the transmitting antenna and the phase change at
the receiving antenna and a phase change at a reflector,
if there’s any there. We have the same sort of term
at both of these places. The reason that I talk about
that is if you look at the electromagnetic wave that you
received for this reflecting wall problem that we’ve talked
about a good deal, the second term is there with a negative
sign rather than a plus sign. Here, everything is put
in with plus signs. You can create negative signs
by phase changes of pi. So the assumption is, we put
in a phase change of pi as part of this term here. So all of these terms can be
expressed in this general form here. As we said before, you only
have two choices with a cellular system. You cannot solve the
electromagnetic field problem at the cell phone. The person using the cell phone
is not going to do it. The cell phone is not
going to do it. The base station is not going to
do it and you’re not going to store all those changes
because these radiations change remarkably within a
period of just small fractions of one meter. You have a coverage here, which
in fact, is at least area coverage of one kilometer
times one kilometer. Then the reflectors are going
to be moving also, so you can’t deal with them
very easily either. It’s a hopeless problem,
to try to solve the electromagnetic problem and
store it some place. Electromagnetism helps us to
limit the range and likelihood of choices, but it doesn’t
help in actual detection. So we’re now going to deal with
the kind of thing we just talked about, which is this sort
of general expression for electric field in terms of
attenuation factors and phase changes, as opposed to anything which is much more detailed. We're going to define a channel
system function as just this sum of these
attenuation terms times phase change terms. The reason that we’re doing this
is that if we put in an input — e to the 2 pi ift,
then what we get is this system function here — times x
of t, which is e to the 2 pi ift, so we get this
quantity here. Here’s the minus
2 pi if tau — that’s that term coming down
there and here is the e to the 2 pi ift coming down here. So all of this term and this
term are both included in this system response term. This is linear also, so we know
what the response is to an exponential that we put in. I’m cheating you a little
bit here by going from real to complex. And the notes do that a little
more carefully, but we ought to be used to that now. If I put in an input, x hat of
f e to the 2 pi ift df and integrate it. In other words, if I put in an
arbitrary input, which I represent in terms of its
Fourier transform, I now know what the response is to
every x-hat of f. It’s given by this
response here. So I just integrate over that
and I find that the response y of t to an arbitrary input now
is the integral of x-hat of f, h-hat of ft, e to the
2 pi ift df — namely, the same game
that we always play. Namely, that’s just the system
analysis way of looking at arbitrary systems in terms of
their Fourier transforms. So the output y of t is just
this integral here. Important point: When you look
at this, this says this is the same as any old linear time
invariant system. This is not a linear time
invariant system. If you try to take the Fourier
transform of this to get y-hat of f, you’re not going to get
x-hat of f times h-hat of f. Namely, this is not
equal to that. Why isn’t it equal to it? First reason is, try to take the
Fourier transform of this and see what you get over here
and when you deal with the fact that there’s a t in here,
you will find that there’s nothing you can do. You’ll find you’re stuck. So you can’t derive this
equation here. The next argument is that this quantity
here is not a function of t. This quantity here is
a function of t. You can't have an equality between
something which is not a function of t and something
else which is a function of t. It just can’t happen. The final argument is, if you
look at what’s happening here, when you put in a single
frequency — when you put in x of t equals e to the 2 pi
ift, what comes out? In terms of this reflecting wall
example, the thing that came out was not one sinusoid,
but two sinusoids. One sinusoid a little bit above
the carrier, the other sinusoid a little bit
below the carrier. In other words, because of
Doppler shifts, when you put in a sinusoid, what comes out
is not a sinusoid, but a modulated sinusoid. It’s something spread out over
a region of frequencies. So one of your favorite tools
for dealing with linear time invariant systems is
no longer adequate. Doesn't work. Just make a mark of that, because every time you see a problem like this, every time I see it, the first thing I try to do is go through that most familiar and most favorite form of linear system analysis, which is that the Fourier transform of a convolution is the same as multiplication in the frequency domain. You cannot do that anymore. However, convolution still works,
so that’s the next thing we want to look at. So the thing that we have is
the output of the system is now going to be this integral
over frequency — x-hat of f times the frequency
response function for the linear but time varying
system, times e to the 2 pi ift. This is what we derived on the
last page, on the last slide. It is the thing which just
automatically happens here. If I let h of tau and t be the
inverse Fourier transform of h-hat of ft — and here what I’m doing is I’m
regarding t as a parameter. So this is a function of tau now
and this is a function of tau for a given t, I can take
the Fourier transform of this. Take the Fourier transform of
any old thing at all, so long as it’s L2. I will sort of half-pretend
it’s L2. We’ll worry about that later. So this is the Fourier
transform of this. So then, the thing that happens
is that y of t is going to be equal to this
quantity here, except in place of the system function, h and f
of t, for a particular value of t, I’m going to put in this
inverse Fourier transform. So it'll be h of tau and t, e to the minus 2 pi if tau, d tau. This integral here is just
h-hat of f and t. If I take this quantity here
and I move this term inside and I interchange orders
of integration — incidentally, when we’re dealing
with wireless, we’re going to forget about all of the
nice things that we know about L2 functions. There’s just too much new stuff
that’s going on here to worry about that. So what you want to do is just
take Fourier transforms at will, interchange orders of integration, interchange everything you want to, and simply forget about all the mathematical problems
that might arise. After you understand this, at
that point, go back and straighten out the mathematical
issues. This in fact is the way we deal
with any problem, or the way you should deal
with any problem. You don’t bring the mathematics
in unless it’s going to help you solve
the problem. You don’t bring it in to
frustrate yourselves. So the thing we’re going to do
now is to interchange these orders of integration. We’re going to integrate over
tau on the outside and f on the inside. So we’re going to bring the
function of tau outside. This is an integral in tau. The function of f is going to go
on the inside. We have e to the 2 pi ift — that's this quantity here. We have this term there. When we look at this, we see
something very nice because this is in fact just the
Fourier transform of x of t minus tau. When we take that out, what we
get is the integral of x of t minus tau times h of
tau and t, d tau. In other words, this is time
varying convolution. Nice, simple equation,
makes a lot of sense. It says you have this impulse
response here. h of tau and t now can be interpreted as the
response at time t to an impulse tau seconds earlier. If you have a system which is
changing very, very very, slowly, then this is essentially
just a function of tau and it’s the usual impulse
response that you’re familiar with — namely, this convolution
equation gives you the response at time t to an
impulse tau seconds earlier. Now we just have something which
says this is a linear time varying filter, and in all the cases we're interested in, this linear time varying filter changes its impulse response very slowly
as time changes. Relatively fast change with tau, relatively slow change with t. So this is very similar to linear time invariant convolution. The channel behaves like a slowly time varying filter, and that's the bottom line of this.
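In discrete time that convolution looks like the following sketch (the two-tap channel and its slow drift are an invented toy example, not anything from the lecture): h[k, n] plays the role of h of tau and t, the response at sample n to an impulse k samples earlier.

import numpy as np

def time_varying_convolve(x, h):
    # y[n] = sum over k of h[k, n] * x[n - k]
    K, N = h.shape
    y = np.zeros(N)
    for n in range(N):
        for k in range(min(K, n + 1)):
            y[n] += h[k, n] * x[n - k]
    return y

rng = np.random.default_rng(0)
N, K = 2_000, 8
x = rng.standard_normal(N)

h = np.zeros((K, N))
h[0, :] = 1.0                                               # fixed direct tap
h[3, :] = 0.5 * np.cos(2 * np.pi * np.arange(N) / 10_000)   # a tap that drifts slowly with n
y = time_varying_convolve(x, h)
print(y[:5])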
For these ray tracing models we were looking at, the system function is a sum of attenuation terms times phase change terms. If we take the inverse Fourier
transform of this — remember, we’re taking the
inverse Fourier transform on f and putting in a tau. So we’re taking this inverse
Fourier transform for a particular t. We have a function
of f and of t. We take the inverse Fourier
transform with respect to the f and get a tau here. This then becomes this
quantity here. How do I interpret that? If I look at a single term
here, what is it? A single term here is just
an attenuation factor, a constant, times a sinusoid. At a particular value of
t, this is just the constant here also. So for a particular t, all
I have is a sinusoid. What’s the inverse Fourier
transform of a sinusoid? I told you all along that it
doesn’t exist, but for the time being we will assume that
it's what you learned early on, that the inverse Fourier
transform of the sinusoid is an impulse. So here we are with
our impulse there. This says that the response at
time t to an impulse at tau is going to be zero unless tau
is equal to one of these propagation delay terms. In other words, I have a system
where I’m putting in an input and this input comes in. The response to the input is
a number of different path delays and at each path delay,
what I’m going to get out of the system is just a delayed
and attenuated version of what I put in. Namely, the system isn’t a
function of frequency at all. That’s what I get through
using ray tracing. I mean, it’s one of
the consequences of using ray tracing. So what I wind up with is a
system function which is a string of impulses and the
output then, the convolution of this with that — you’re probably better at
working with impulses than I am — and y of t is just
the sum of these attenuation terms times x of t at these
various delays. So what we’re doing is we’re
putting in an arbitrary input. What’s coming out is that
attenuated input coming out at various different times,
due to these various different paths. I get various paths that are
delaying the input by different amounts and out
that delayed input comes at various times. This is a nice sanity check
because if you think about it, that’s exactly what ought
to come out of here. On the other hand, you ought to
wonder about this impulsive impulse response, because that
clearly doesn’t make any sense physically. So what’s going on here? The thing that’s going on is
that when we started, we said, if we’re putting in a narrow
band input, we don’t care about the frequency response
because the frequency response on these different paths cannot
change very quickly and therefore, we’re just going
to have a fixed frequency response term. Then we’ve worked with that
thing which is not a function of frequency and then finally
we get down here, where in fact, what we’re doing is
looking at the output. Due to a bunch of delayed input
terms, if this input term here — if x of t is in fact the
narrow band term — I guess the way to see that is to look at — where do I look at it? I want to look at this
expression here. If I have a narrow
band or maybe — I guess this one
is better here. Let’s look at this expression. If my input is in fact narrow
band, it’s only going to be non zero over a small range
of frequencies. If it’s only non zero over a
small range of frequencies, I don’t care what this is, except
over that small range of frequencies. All of this gets filtered out. In other words, this filters
out this, opposite of the usual case. So I don’t care about what this
is at different frequency ranges and therefore, we simply
have the consequences of this, which is something that
you see in linear system theory all the time. We’ve sort of ruled out impulses
and sine waves because they don’t carry
information, but in terms of looking at things as
intermediate points and going through filters and things like
that, they're perfectly fine. So here, for simplicity, we
assume that these channel filters really do
not have any — don’t respond to frequency
changes, whereas in fact they do, and all we’re doing is
modelling them in certain frequency bands, which, when
we get all done, is what really gets rid of all our
problems, due to this sort of input because this input is
smooth now and therefore, the output is smooth also. The next thing I want to spend
a little bit of time on and we’ll come back to it next time
is, how do you deal with all of this at baseband? I should warn you here that the
notes don’t do a terribly good job of this. They have all of the results
that you need. They don’t seem to put
them in a very nice, well organized fashion. I’m not sure there is a
nice, well organized fashion to put them in. But anyway there’s a lot of
stuff going on when you try to move this down from pass
band down to baseband. I will try to change the notes a
little bit to make it clear, but I’m not sure that I can. The kind of system that we’re
looking at now is our usual QAM type system, which can
be generalized somewhat. We have a binary input
coming in. We have a baseband encoder. That baseband encoder is
creating baseband signals, which are being added together
to give us a baseband complex input to the channel. This is being frequency
modulated up to some function, x of t, which is just the real
part of u of t times e to the 2 pi i fc t as usual. So we now have a real part of
this signal here, modulated up by the carrier frequency. This is going through what we’ll
now regard as a time varying channel filter. We talked a little bit about
channel filters before when we were talking about Nyquist
theory, because we said in general, you want to take your
input, you want to pass it through a pulse p of t. That goes through another
filter, which is some h of t and that goes into another
filter, which is at the receiver, which is q of t, and you want the convolution of all
of those to satisfy the Nyquist criteria. So we’re back with that in
spades because now this is varying with t also. So this goes through
this channel filter, up at pass band. The channel filter up at pass
band, we’ve seen that one of the things that it can be viewed
as doing is putting Doppler shift into this input. So what comes in at some
frequency f is now going to be coming through here at some
slightly different frequency. We then add white noise to it. We then get y of t out, which
has now been shifted around and smudged in frequency
a little bit. We go through a frequency
demodulation. We demodulate down by this carrier frequency. We get down to v of t, which
is now a baseband complex function again, which is
supposed to be the same as this except for the white
Gaussian noise and except for the fact we’ve gone through this
filtering operation here. Then we want to do base band
detection at this point. What we would like to do and
what will make life a little easier for ourselves because
we all got sick of this business of looking at filters
at baseband and also looking at filters at pass band, we
would like to be able to take some baseband equivalent
of this filter here. So what we’re going to do
is look at the baseband equivalent. The system function at baseband
corresponding to this system at carrier frequency
will just be this response moved down by f sub c. In other words, when you take
what comes out of here, you multiply it, and you shift it
down in frequency by f sub c, what happens is that this gets shifted down in frequency by f sub c and this channel filter gets shifted down in frequency by f sub c and therefore, what
happens is the effect of passing y of t through
a base band filter — h-hat of f plus fc and t or
0 for f, less than or equal to minus fc. So far, this is all kind of
straightforward and not too mysterious. So you wind up then in — and this is pure analogy to
what we did before — the output is then going to be
the integral of this Fourier transform of the input. The system function for the
baseband filter times e to the 2 pi ift df. Same equation as we had before,
but before we did it at pass band and now we’re
doing it at baseband. For the ray tracing model that
we looked at, this function here down at baseband is going
to be the same as it was at pass band, except in place of
the 2 pi if tau j of t, we now have 2 pi i times f plus the carrier frequency, times tau j of t. So when we take the inverse
Fourier transform of this, we’re given parameter t. What we’re going to wind up with
is this quantity here. This is the same as we had
before, with the difference that now we’re stuck with this
carrier frequency times propagation delay, which is occurring at time t. We still have the same delta
functions here and the output v of t is the same time variant convolution as we had before. I’m doing this much too
fast for you to follow this in real time. What I’m saying is, this is
exactly the same thing as we did before and the only thing
new that happens is now this carrier frequency term is coming
in here on this term. If we’re using the ray tracing
model then, v of t is equal to this quantity here. The reason I write this down is
that this shows you quite simply what’s going on in terms
of the Doppler shift, because these terms here, these
delay terms are changing with Doppler shift and now we’re
going to see exactly what they do to us. So we’re going to represent
the propagation delay on each path. That's tau j of t, which is tau j at time zero minus the Doppler shift times t divided by the carrier frequency. This is just the Doppler shift, which is a frequency term, and the propagation delay is changing linearly with t. The Doppler shift is a shift in
frequency and, as I think we saw before, you need a 1 over
f to compensate for this Doppler shift. Namely, this is in
terms of hertz. This is in terms of hertz. This is in terms of time. This is in terms of time. So you need this frequency here
to be dimensionally right at any rate. If we're using narrow band communication, then v of t is just going to be this expression here, which is just the last equation with the Doppler shift put into it.
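Written out as a sketch (the path gains, initial delays, and Doppler shifts below are all assumed numbers), the channel gain a narrow band input sees at baseband is the sum over paths of beta j times e to the minus 2 pi i f c tau j of t, and with tau j of t equal to its value at time zero minus D j t over f c, each term just rotates at its own Doppler rate:

import numpy as np

f_c = 1e9                             # carrier frequency, Hz
beta = np.array([1.0, 0.6])           # per-path gains (assumed)
tau0 = np.array([3.3e-6, 10.0e-6])    # per-path delays at t = 0, s (assumed)
D = np.array([+56.0, -56.0])          # per-path Doppler shifts, Hz (assumed)

def baseband_gain(t):
    tau = tau0 - D * t / f_c          # tau_j(t) = tau_j(0) - D_j * t / f_c
    return np.sum(beta * np.exp(-2j * np.pi * f_c * tau))

for t in (0.0, 0.002, 0.004, 0.009):
    print(f"t = {t * 1e3:4.1f} ms   |gain| = {abs(baseband_gain(t)):.3f}")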
So all of this says that when we're now looking at things at base band instead of at pass band, what has happened is that this term has been added. Before, we just had these delay
terms here and we had these attenuation terms. Now what’s happening is because
we are modulating up with carrier frequency f sub c, we then get a Doppler shift that moves us down a little bit. We're then moving down by f sub c. What we wind up with is
something which is off by that Doppler shift. If we only had a single
term, we wouldn't be demodulating by f sub c. We'd be demodulating by
something which would get rid of this term for us. Then all we’d have is just
some arbitrary phase term here, because this
isn’t important. This is just a phase term. This is the term which
is important. The reason for going through
all of this — and I’m going to come back
to this next time — is that when you use frequency
recovery at the receiver and you always use frequency
recovery at the receiver, what’s going to happen is you’re
not going to shift down by f sub c. You're going to shift down by f sub c plus some average value of Doppler shift. If you have a bunch of terms,
each with different Doppler shifts in them, when you try to
measure how much Doppler — when you try to measure what
the received carrier is, you’re going to be frustrated
by all of these different Doppler shifts. You’re going to come up with
some average value in your frequency circuit, which means
that in place of this quantity, you will get something
which replaces each of these D sub j’s by not the
actual Doppler shift, but by how far this Doppler shift
is away from the main Doppler shift. So these terms here, which are
the things that make this system function in here change
with time, are in fact changing according to how far
away each of these Doppler shifts are from the main Doppler
shift, which means that the amount of time this
system function in here is going to remain stable depends
on how much the Doppler shifts vary from each other on
these different paths. In other words, the important
thing is the Doppler spread between the biggest Doppler
shifts and the smallest Doppler shifts. That Doppler spread is going to
determine how long you’ve got something that looks like a
linear time invariant system function which says that every
once in awhile, if you’re trying to measure what’s going
on at the channel, at intervals of time approximately
one over two times the Doppler spread, you’re
going to have to change those measurements. So whether you can make cellular
telephony work or not depends on whether you can make
measurements at a speed which is equal to one over two
times the Doppler spread. I’m going to do more about
that next time. I don’t expect you to
understand it now. Maybe after you read about it
and we talk about it more, because in fact that’s — if you look at this question of
Doppler spread and what its effect is on how long a system
looks like it’s time invariant, this is one of
the key parameters to understanding how any
kind of wireless system is going to work.
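To close with a rough number (the same assumed conditions as in the earlier sketches, a 1 gigahertz carrier and a 60 kilometer per hour vehicle with paths toward and away from it):

c, f_c = 3e8, 1e9
v = 60e3 / 3600                      # 60 km/h in m/s

doppler_spread = 2 * f_c * v / c     # largest minus smallest per-path Doppler shift, Hz
coherence_interval = 1 / (2 * doppler_spread)

print(f"Doppler spread : {doppler_spread:.0f} Hz")
print(f"channel measurements need updating about every {coherence_interval * 1e3:.1f} ms")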
