designates my notes. / designates important.
Heavy focus on signals and systems theory over “practical” electronics. I would not suggest this as a first course if you are interested in EE/EECS. If you are looking for communications theory, this is perfect.
IMHO the 2007 MIT 6.002 Circuits and Electronics is a better place to start. Further, the 2011 MIT 6.01SC was also more appealing to me as it mixes the theory with a more hands on approach (python programming and robots). The next course in this Rice U series, ELEC242 I think, looks good but is not available online.
CONNEXIONS, Rice University, Houston, Texas
pdf page numbers
A Mathematical Theory of Communication by Claude Shannon (1948)
a signal is merely a function. Analog signals are continuous-valued; digital signals are discrete-valued. The independent variable of the signal could be time (speech, for example), space (images), or the integers (denoting the sequencing of letters and numbers in the football score)
multiplying a complex number by j rotates the number’s position by 90 degrees.
polar form for complex numbers.
z = a + jb = r∠θ
r = |z| = √(a^2 + b^2)
a = r cos (θ)
b = r sin (θ)
θ = arctan(b/a)
z = re^(jθ)
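Quick numerical check of the polar-form relations and the rotate-by-90° property of j (my own sketch, using Python's cmath; the example value of z is arbitrary):

    import cmath, math

    z = 3 + 4j                                       # a = 3, b = 4 (example values)
    r, theta = abs(z), cmath.phase(z)                # r = |z|, theta = arctan(b/a)
    print(r)                                         # 5.0 = sqrt(3^2 + 4^2)
    print(r * math.cos(theta), r * math.sin(theta))  # recovers a and b
    print(r * cmath.exp(1j * theta))                 # r*e^(j*theta) gives back z
    print(cmath.phase(1j * z) - cmath.phase(z))      # ~pi/2: multiplying by j rotates 90 degrees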
T = 1/f
f = frequency, the number of times per second we go around the circle.
T = period, how long for one complex exponential cycle
These two decompositions are mathematically equivalent to each other.
A*cos(2πft + φ) = Re[Ae^(jφ)e^(j2πft)]
A*sin(2πft + φ) = Im[Ae^(jφ)e^(j2πft)]
s(t) = e^(−t/τ)
One of the fundamental results of signal theory (Section 5.3) will detail conditions under which an analog signal can be converted into a discrete-time one and retrieved without error.
A discrete-time signal is represented symbolically as s (n), where n = {…,−1,0,1,…}.
The most important signal is, of course, the complex exponential sequence. s(n) = e^(j2πfn)
Discrete-time sinusoids have the obvious form s(n) = A*cos(2πfn + φ).
all signals consist of a sequence of delayed and scaled unit samples.
we can decompose any signal as a sum of unit samples delayed to the appropriate location and scaled by the signal value.
s(n) = sum_{m=−∞}^{∞} s(m)δ(n − m)
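My own sketch confirming the unit-sample decomposition numerically (the example signal is arbitrary):

    import numpy as np

    def delta(n):
        # unit sample: 1 at n == 0, 0 elsewhere
        return np.where(n == 0, 1.0, 0.0)

    n = np.arange(-5, 6)
    s = np.cos(2 * np.pi * 0.1 * n)                          # any example signal
    rebuilt = sum(s_m * delta(n - m) for m, s_m in zip(n, s))
    print(np.allclose(s, rebuilt))                           # True: s(n) = sum_m s(m) delta(n-m)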
Another interesting aspect of discrete-time signals is that their values do not need to be real numbers. We do have real-valued discrete-time signals like the sinusoid, but we also have signals that denote the sequence of characters typed on the keyboard. Such characters certainly aren’t real numbers,
the possible values form a set, formally called the alphabet (not limited to the characters of any actual language's alphabet)
discrete-time systems are ultimately constructed from digital circuits, which consist entirely of analog circuit elements.
Signals are manipulated by systems. Mathematically, we represent what a system does by the notation y(t) = S[x(t)], with x representing the input signal and y the output signal.
A system’s input is analogous to an independent variable and its output the dependent variable. For the mathematically inclined, a system is a functional: a function of a function (signals are functions).
Systems can be linked together like Linux pipes (a cascade).
Feedback configuration: y(t) = S1[e(t)], where the signal e(t) equals the input signal minus a second system's output computed from y(t): e(t) = x(t) − S2[y(t)].
Derivative system y(t) = dx/dt(t)
Integrator: y(t) = integral_{−∞}^{t} x(α) dα
the value of all signals at t = −∞ equals zero.
Linear systems: doubling the input should double the output; a quick test, though not a conclusive one.
linear systems output 0 on input 0
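The doubling test above is necessary but not sufficient. A fuller (still not conclusive) numerical check of superposition, with example systems of my own choosing:

    import numpy as np

    def looks_linear(S, trials=100, n=50):
        # checks S(a*x1 + b*x2) == a*S(x1) + b*S(x2) on random inputs;
        # passing suggests (but does not prove) linearity
        rng = np.random.default_rng(0)
        for _ in range(trials):
            x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
            a, b = rng.standard_normal(2)
            if not np.allclose(S(a * x1 + b * x2), a * S(x1) + b * S(x2)):
                return False
        return True

    print(looks_linear(lambda x: 3 * x))   # True  (amplifier)
    print(looks_linear(np.cumsum))         # True  (running sum / integrator)
    print(looks_linear(lambda x: x ** 2))  # False (squarer)
    print(looks_linear(lambda x: x + 1))   # False (fails zero-in, zero-out)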
“They’re [linear systems] the only systems we thoroughly understand!”
Systems that don’t change their input-output relation with time are said to be time-invariant.
The collection of linear, time-invariant systems are the most thoroughly understood systems
electric circuits are, for the most part, linear and time-invariant. Nonlinear ones abound, but characterizing them so that you can predict their behavior for any input remains an unsolved problem.
When we say that “electrons flow through a conductor,” what we mean is that the conductor’s constituent atoms freely give up electrons from their outer shells. “Flow” thus means that electrons hop from atom to atom driven along by the applied electric potential.
A missing electron, however, is a virtual positive charge. Electrical engineers call these holes, and in some materials, particularly certain semiconductors, current flow is actually due to holes.
Current flow also occurs in nerve cells found in your brain. Here, neurons “communicate” using propagating voltage pulses that rely on the flow of positive ions (potassium and sodium primarily, and to some degree calcium) across the neuron’s outer wall.
For every circuit element we define a voltage and a current. The element has a v-i relation defined by the element’s physical properties. In defining the v-i relation, we have the convention that positive current flows from positive to negative voltage drop.
p(t) = v(t)i(t). Power is measured in watts.
A positive value for power indicates that at time t the circuit element is consuming power; a negative value means it is producing power.
as in all areas of physics and chemistry, power is the rate at which energy is consumed or produced. Consequently, energy is the integral of power.
E(t) = integral_{−∞}^{t} p(α) dα
positive energy corresponds to consumed energy and negative energy corresponds to energy production.
The resistor, capacitor, and inductor impose linear relationships between voltage and current.
p(t) = Ri^2(t) = v^2(t)/R Instantaneous power consumption of a resistor.
As the resistance approaches infinity, we have what is known as an open circuit: No current flows but a non-zero voltage can appear across the open circuit. As the resistance becomes zero, the voltage goes to zero for a non-zero current flow. This situation corresponds to a short circuit. A superconductor physically realizes a short circuit.
The capacitor stores charge and the relationship between the charge stored and the resultant voltage is q = Cv. The constant of proportionality, the capacitance, has units of farads (F),
As current is the rate of change of charge, the v-i relation can be expressed in differential or integral form: i(t) = C dv(t)/dt, or v(t) = (1/C) integral_{−∞}^{t} i(α) dα.
voltage source’s v-i relation is v = vs regardless of what the current might be. As for the current source, i = −is regardless of the voltage. Another name for a constant-valued voltage source is a battery,
If a sinusoidal voltage is placed across a physical resistor, the current will not be exactly proportional to it as frequency becomes high, say above 1 MHz. At very high frequencies, the way the resistor is constructed introduces inductance and capacitance effects. Thus, the smart engineer must be aware of the frequency ranges over which his ideal models match reality well.
Kirchhoff’s Current Law (KCL): At every node, the sum of all currents entering a node must equal zero.
Kirchhoff’s Voltage Law (KVL): The voltage law says that the sum of voltages around every closed loop in the circuit must equal zero.
RT = R1 || (R2 || R3 + R4)
RT = (R1*R2*R3 + R1*R2*R4 + R1*R3*R4) / (R1*R2 + R1*R3 + R2*R3 + R2*R4 + R3*R4)
A simple check for accuracy is the units: Each component of the numerator should have the same units (here Ω^3 ) as well as in the denominator (Ω^2). The entire expression is to have units of resistance; thus, the ratio of the numerator’s and denominator’s units should be ohms.
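Sanity check (my own helper, arbitrary resistor values) that the || shorthand and the expanded formula agree:

    def parallel(*rs):
        # equivalent resistance of resistors in parallel
        return 1.0 / sum(1.0 / r for r in rs)

    R1, R2, R3, R4 = 10.0, 22.0, 47.0, 33.0      # arbitrary example values, ohms
    shorthand = parallel(R1, parallel(R2, R3) + R4)
    expanded = (R1*R2*R3 + R1*R2*R4 + R1*R3*R4) / (R1*R2 + R1*R3 + R2*R3 + R2*R4 + R3*R4)
    print(shorthand, expanded)                   # both ~8.275 ohms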
In system theory, systems can be cascaded without changing the input-output relation of intermediate systems. In cascading circuits, this ideal is rarely true unless the circuits are so designed.
for series combinations, voltage and resistance are the key quantities, while for parallel combinations current and conductance are more important. In series combinations, the currents through each element are the same; in parallel ones, the voltages are the same.
Because complex amplitudes for voltage and current also obey Kirchhoff’s laws, we can solve circuits using voltage and current divider and the series and parallel combination rules by considering the elements to be impedances.
The entire point of using impedances is to get rid of time and concentrate on frequency.
Even though it’s not, pretend the source is a complex exponential. We do this because the impedance approach simplifies finding how input and output are related. If it were a voltage source having voltage vin = p(t) (a pulse), still let vin = V_in·e^(j2πft). We’ll learn how to “get the pulse back” later.
With a source equaling a complex exponential, all variables in a linear circuit will also be complex exponentials having the same frequency. The circuit’s only remaining “mystery” is what each variable’s complex amplitude might be. To find these, we consider the source to be a complex number (Vin here) and the elements to be impedances.
We can now use the series and parallel combination rules to find how the complex amplitude of any variable relates to the source’s complex amplitude.
Transfer function: H(f) = V_out/V_in
Implicit in using the transfer function is that the input is a complex exponential, and the output is also a complex exponential having the same frequency.
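A concrete (assumed) example of the impedance approach: a series resistor with the output taken across a shunt capacitor, so the voltage divider gives H(f) = 1/(1 + j2πfRC). Sketch with made-up component values:

    import numpy as np

    R, C = 1e3, 1e-6                        # assumed values: 1 kOhm, 1 uF
    for f in (10.0, 100.0, 1e3, 1e4, 1e5):  # a few frequencies, Hz
        Zc = 1.0 / (1j * 2 * np.pi * f * C) # capacitor impedance
        H = Zc / (R + Zc)                   # voltage divider: V_out / V_in
        print(f, abs(H), np.degrees(np.angle(H)))

The magnitude falls off and the phase approaches −90° above the cutoff 1/(2πRC) ≈ 159 Hz.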
The node method begins by finding all nodes–places where circuit elements attach to each other–in the circuit. We call one of the nodes the reference node; the choice of reference node is arbitrary, but it is usually chosen to be a point of symmetry or the “bottom” node. For the remaining nodes, we define node voltages e_n that represent the voltage between the node and the reference. These node voltages constitute the only unknowns; all we need is a sufficient number of equations to solve for them. In our example, we have two node voltages. The very act of defining node voltages is equivalent to using all the KVL equations at your disposal.
In some cases, a node voltage corresponds exactly to the voltage across a voltage source. In such cases, the node voltage is specified by the source and is NOT an unknown.
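The node equations are linear in the node voltages, so they can be solved mechanically. Sketch for a made-up two-node circuit (vin through R1 to node 1, R2 from node 1 to the reference, R3 between nodes 1 and 2, R4 from node 2 to the reference):

    import numpy as np

    vin = 5.0
    R1, R2, R3, R4 = 1e3, 2e3, 3e3, 4e3     # made-up values, ohms

    # KCL at each non-reference node, written as G @ e = i
    G = np.array([[1/R1 + 1/R2 + 1/R3, -1/R3],
                  [-1/R3,              1/R3 + 1/R4]])
    i = np.array([vin / R1, 0.0])
    e1, e2 = np.linalg.solve(G, i)
    print(e1, e2)                           # node voltages relative to the reference node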
i(t) = I_0·(e^((q/kT)·v(t)) − 1)
q represents the charge of a single electron in coulombs, k is Boltzmann’s constant, and T is the diode’s temperature in K. At room temperature, the ratio kT/q = 25 mV. The constant I_0 is the leakage current, and is usually very small
diode’s schematic symbol looks like an arrowhead; the direction of current flow corresponds to the direction the arrowhead points.
Because of the diode’s nonlinear nature, we cannot use impedances nor series/parallel combination rules to analyze circuits containing them.
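Quick evaluation of the exponential v-i relation (I_0 and the voltage sweep are made-up values):

    import numpy as np

    kT_over_q = 0.025                       # 25 mV at room temperature
    I0 = 1e-12                              # assumed leakage current, amperes
    for v in np.linspace(-0.2, 0.6, 5):
        i = I0 * (np.exp(v / kT_over_q) - 1)
        print(f"v = {v:5.2f} V   i = {i:.3e} A")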
all signals can be expressed as a superposition of sinusoids
Let s(t) be a periodic signal with period T. We want to show that periodic signals, even those that have constant-valued segments like a square wave, can be expressed as sum of harmonically related sine waves: sinusoids having frequencies that are integer multiples of the fundamental frequency. Because the signal has period T, the fundamental frequency is 1/T.
A signal’s Fourier series spectrum c_k has interesting properties.
Property 4.1: (conjugate symmetry) If s(t) is real, c_k = c∗_−k (real-valued periodic signals have conjugate-symmetric spectra).
Property 4.2: If s(−t) = s(t), which says the signal has even symmetry about the origin, c_−k = c_k.
Property 4.3: If s(−t) = −s(t), which says the signal has odd symmetry, c_−k = −c_k.
Property 4.4: The spectral coefficients for a periodic signal delayed by τ, s(t − τ), are c_k·e^(−j2πkτ/T), where c_k denotes the spectrum of s(t). Delaying a signal by τ seconds results in a spectrum having a linear phase shift of −2πkτ/T in comparison to the spectrum of the un-delayed signal.
The complex Fourier series and the sine-cosine series are identical, each representing a signal’s spectrum. The Fourier coefficients, a_k and b_k, express the real and imaginary parts respectively of the spectrum while the coefficients c_k of the complex Fourier series express the spectrum as a magnitude and phase.
Equating the classic Fourier series (4.11) to the complex Fourier series (4.1), an extra factor of two and complex conjugate become necessary to relate the Fourier coefficients in each.
For k ≥ 1: c_k = (a_k − j·b_k)/2, equivalently a_k + j·b_k = 2·c_k*, with c_0 = a_0.
A new definition of equality is mean-square equality: Two signals are said to be equal in the mean square if rms(s1 − s2) = 0
The Fourier series value “at” the discontinuity is the average of the values on either side of the jump.
To encode information we can use the Fourier coefficients.
Assume we have N letters to encode: {a_1, …, a_N}. One simple encoding rule could be to make a single Fourier coefficient non-zero and all others zero for each letter. For example, if a_n occurs, we make c_n = 1 and c_k = 0, k != n. In this way, the nth harmonic of the frequency 1/T is used to represent a letter. Note that the bandwidth (the range of frequencies required for the encoding) equals N/T. Another possibility is to consider the binary representation of the letter’s index. For example, if the letter a_13 occurs, converting 13 to its base-2 representation, we have 13 = 1101. We can use the pattern of zeros and ones to represent directly which Fourier coefficients we “turn on” (set equal to one) and which we “turn off.”
Because the Fourier series represents a periodic signal as a linear combination of complex exponentials, we can exploit the superposition property. Furthermore, we found for linear circuits that their output to a complex exponential input is just the frequency response evaluated at the signal’s frequency times the complex exponential. Said mathematically, if x(t) = e^(j2πkt/T), then the output y(t) = H(k/T)·e^(j2πkt/T) because f = k/T. Thus, if x(t) is periodic and thereby has a Fourier series, a linear circuit’s output to this signal will be the superposition of the outputs to each component.
Thus, the output has a Fourier series, which means that it too is periodic. Its Fourier coefficients equal c_k·H(k/T). To obtain the spectrum of the output, we simply multiply the input spectrum by the frequency response. The circuit modifies the magnitude and phase of each Fourier coefficient. Note especially that while the Fourier coefficients do not depend on the signal’s period, the circuit’s transfer function does depend on frequency, which means that the circuit’s output will differ as the period varies.
we have calculated the output of a circuit to a periodic input without writing, much less solving, the differential equation governing the circuit’s behavior. Furthermore, we made these calculations entirely in the frequency domain. Using Fourier series, we can calculate how any linear circuit will respond to a periodic input.
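Sketch of this recipe with an assumed RC lowpass H(f) = 1/(1 + j2πfRC) driven by a square wave of period T: compute the input's c_k numerically, multiply each by H(k/T), and resynthesize the output (all component values are my own choices):

    import numpy as np

    T, RC = 1e-3, 2e-4                      # assumed period and RC time constant
    N = 1000
    t = np.arange(N) * T / N
    s = np.where(t < T / 2, 1.0, -1.0)      # square-wave input over one period

    K = 50                                  # harmonics kept
    k = np.arange(-K, K + 1)
    # input coefficients c_k = (1/T) * integral over one period of s(t) e^(-j2*pi*k*t/T) dt
    ck = np.array([np.mean(s * np.exp(-2j * np.pi * kk * t / T)) for kk in k])
    H = 1.0 / (1 + 2j * np.pi * (k / T) * RC)   # frequency response at f = k/T
    ck_out = ck * H                             # output spectrum
    y = np.real(np.sum(ck_out[:, None] * np.exp(2j * np.pi * k[:, None] * t / T), axis=0))
    print(y[:5])                            # edges are now exponentially smoothed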
S(f) is the Fourier transform of s(t) (the Fourier transform is symbolically denoted by the uppercase version of the signal’s symbol) and is defined for any signal for which the integral converges.
The quantity sin(t)/t has a special name, the sinc (pronounced “sink”) function, and is denoted by sinc(t)
The Fourier transform relates a signal’s time and frequency domain representations to each other. The direct Fourier transform (or simply the Fourier transform) calculates a signal’s frequency domain representation from its time-domain variant (4.34). The inverse Fourier transform (4.35) finds the time-domain representation from the frequency domain. Rather than explicitly writing the required integral, we often symbolically express these transform calculations as F(s) and F^−1(S), respectively.
the mathematical relationships between the time domain and frequency domain versions of the same signal are termed transforms.
We express Fourier transform pairs as s(t) <=> S(f).
the original amplitude value cannot be recovered without error.
signal-to-noise ratio, which equals the ratio of the signal power and the quantization error power.
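Rough numerical look at quantization error (B and the test signal are my own choices): quantize a sinusoid to 2^B levels and compare signal power to error power.

    import numpy as np

    B = 8                                   # bits per sample (my choice)
    t = np.linspace(0, 1, 10000, endpoint=False)
    s = np.sin(2 * np.pi * 5 * t)           # full-scale test sinusoid

    step = 2.0 / 2**B                       # quantizer spans [-1, 1)
    sq = np.clip(np.floor(s / step) * step + step / 2, -1, 1 - step / 2)
    err = s - sq
    print(10 * np.log10(np.mean(s**2) / np.mean(err**2)))   # roughly 6 dB per bit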
each element of the symbolic-valued signal s(n) takes on one of the values {a_1, …, a_K} which comprise the alphabet A.
5.6 Discrete-Time Fourier Transform (DTFT)
5.9 Fast Fourier Transform (FFT)
The computational advantage of the FFT comes from recognizing the periodic nature of the discrete Fourier transform. The FFT simply reuses the computations made in the half-length transforms and combines them through additions and the multiplication by e^(−j2πk/N), which is not periodic over N/2, to rewrite the length-N DFT. Figure 5.12 (Length-8 DFT decomposition) illustrates this decomposition.
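Minimal radix-2 decimation-in-time FFT sketch of my own (assumes the length is a power of two), showing the reuse of the two half-length transforms and the twiddle factors e^(−j2πk/N):

    import numpy as np

    def fft_recursive(x):
        # radix-2 decimation-in-time FFT; len(x) must be a power of two
        N = len(x)
        if N == 1:
            return np.asarray(x, dtype=complex)
        even = fft_recursive(x[0::2])          # half-length DFT of even-indexed samples
        odd = fft_recursive(x[1::2])           # half-length DFT of odd-indexed samples
        k = np.arange(N // 2)
        twiddle = np.exp(-2j * np.pi * k / N)  # not periodic over N/2
        return np.concatenate([even + twiddle * odd,
                               even - twiddle * odd])

    x = np.random.default_rng(0).standard_normal(8)
    print(np.allclose(fft_recursive(x), np.fft.fft(x)))   # True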
Exercise 5.22 Answer from page 200: In discrete-time signal processing, an amplifier amounts to a multiplication, a very easy operation to perform.
linear, shift-invariant systems: slightly different terminology going from analog to digital; “shift” (rather than “time”) emphasizes that the index takes integer values only.
Here, the output signal y(n) is related to its past values y(n − l), l = {1, …, p}, and to the current and past values of the input signal x(n). The system’s characteristics are determined by the choices for the number of coefficients p and q and the coefficients’ values {a_1, …, a_p} and {b_0, b_1, …, b_q}.
Aside: There is an asymmetry in the coefficients: where is a0 ? This coefficient would multiply the y (n) term in (5.42). We have essentially divided the equation by it, which does not change the input-output relationship. We have thus created the convention that a0 is always one.
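A direct time-domain implementation of the difference equation as I read it, y(n) = a_1·y(n−1) + … + a_p·y(n−p) + b_0·x(n) + … + b_q·x(n−q) with a_0 = 1 implicit (the coefficient values below are made up):

    def difference_equation(x, a, b):
        # y(n) = sum_l a[l-1]*y(n-l) + sum_m b[m]*x(n-m); a_0 = 1 is implicit
        p, q = len(a), len(b) - 1
        y = [0.0] * len(x)
        for n in range(len(x)):
            acc = sum(a[l - 1] * y[n - l] for l in range(1, p + 1) if n - l >= 0)
            acc += sum(b[m] * x[n - m] for m in range(q + 1) if n - m >= 0)
            y[n] = acc
        return y

    # unit-sample response of the first-order system y(n) = 0.5*y(n-1) + x(n)
    x = [1.0] + [0.0] * 7
    print(difference_equation(x, a=[0.5], b=[1.0]))   # 1, 0.5, 0.25, ...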
a unit-sample input, which has X(e^(j2πf)) = 1, results in the output’s Fourier transform equaling the system’s transfer function.
In the time-domain, the output for a unit-sample input is known as the system’s unit-sample response, and is denoted by h(n). Combining the frequency-domain and time-domain interpretations of a linear, shift-invariant system’s unit-sample response, we have that h(n) and the transfer function are Fourier transform pairs in terms of the discrete-time Fourier transform.
(sampling in one domain, be it time or frequency, can result in aliasing in the other) unless we sample fast enough. Here, the duration of the unit-sample response determines the minimal sampling rate that prevents aliasing.
For IIR systems, we cannot use the DFT to find the system’s unit-sample response: aliasing of the unit-sample response will always occur. Consequently, we can only implement an IIR filter accurately in the time domain with the system’s difference equation. Frequency-domain implementations are restricted to FIR filters.
Transmitted signal amplitude does decay exponentially along the transmission line. Note that in the high-frequency regime the space constant is small, which means the signal attenuates over a short distance.
Wireless channels exploit the prediction made by Maxwell’s equations that electromagnetic fields propagate in free space like light. When a voltage is applied to an antenna, it creates an electromagnetic field that propagates in all directions (although antenna geometry affects how much power flows in any given direction) and induces electric currents in the receiver’s antenna.
The fundamental equation relating frequency and wavelength for a propagating wave is λf = c.
Thus, wavelength and frequency are inversely related: High frequency corresponds to small wavelengths. For example, a 1 MHz electromagnetic field has a wavelength of 300 m. Antennas having a size or distance from the ground comparable to the wavelength radiate fields most efficiently. Consequently, the lower the frequency the bigger the antenna must be. Because most information signals are baseband signals, having spectral energy at low frequencies, they must be modulated to higher frequencies to be transmitted over wireless channels.
The maximum distance along the earth’s surface that can be reached by a single ionospheric reflection is 2R·arccos(R/(R + h_i)), which ranges between 2,010 and 3,000 km when we substitute minimum and maximum ionospheric altitudes (80-180 km).
delays of 6.8-10ms for a single reflection.
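Back-of-the-envelope check of those numbers (Earth radius taken as roughly 6370 km; a grazing single-hop geometry is assumed):

    import math

    R = 6370e3                 # Earth radius, m (approximate)
    c = 3e8                    # speed of light, m/s

    for h in (80e3, 180e3):    # ionosphere heights used in the notes
        ground_range = 2 * R * math.acos(R / (R + h))
        path_length = 2 * math.sqrt((R + h)**2 - R**2)   # up and back down, grazing
        print(f"h = {h/1e3:3.0f} km  range = {ground_range/1e3:4.0f} km  "
              f"delay = {1e3 * path_length / c:4.1f} ms")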
s_0(t) = A·p_T(t), s_1(t) = −A·p_T(t)
This way of representing a bit stream—changing the bit changes the sign of the transmitted signal—is known as binary phase shift keying and abbreviated BPSK.
The datarate R of a digital communication system is how frequently an information bit is transmitted. In this example it equals the reciprocal of the bit interval: R = 1/T. Thus, for a 1 Mbps (megabit per second) transmission, we must have T = 1μs.
The first and third harmonics contain that fraction of the total power, meaning that the effective bandwidth of our baseband signal is 3/(2T) or, expressing this quantity in terms of the datarate, 3R/2. Thus, a digital communications signal requires more bandwidth than the datarate: a 1 Mbps baseband system requires a bandwidth of at least 1.5 MHz. Listen carefully when someone describes the transmission bandwidth of digital communication systems: Did they say “megabits” or “megahertz?”
In frequency-shift keying (FSK), the bit affects the frequency of a carrier sinusoid.
Synchronization can occur because the transmitter begins sending with a reference bit sequence, known as the preamble. This reference bit sequence is usually the alternating sequence as shown in the square wave example and in the FSK example (Figure 6.13).
This procedure amounts to what is known in digital hardware as self-clocking signaling:
As the received signal becomes increasingly noisy, whether due to increased distance from the transmitter (smaller α) or to increased noise in the channel (larger N_0), the probability the receiver makes an error approaches 1/2.
As the signal-to-noise ratio increases, performance gains (smaller probability of error p_e) can be easily obtained. At a signal-to-noise ratio of 12 dB, the probability the receiver makes an error equals 10^(−8). In words, one out of one hundred million bits will, on average, be in error.
Once the signal-to-noise ratio exceeds about 5 dB, the error probability decreases dramatically. A 1 dB improvement in signal-to-noise ratio can yield a factor-of-ten smaller p_e.
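Check of the 12 dB figure, assuming the matched-filter BPSK result p_e = Q(√(2·SNR)) (Q written here with scipy's complementary error function):

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        # tail probability of the standard Gaussian
        return 0.5 * erfc(x / np.sqrt(2))

    for snr_db in (5, 8, 12):
        snr = 10 ** (snr_db / 10)
        print(snr_db, Q(np.sqrt(2 * snr)))   # ~6e-3, ~2e-4, ~1e-8

The 12 dB row reproduces the one-in-a-hundred-million figure.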
Shannon’s Source Coding Theorem (6.52) has additional applications in data compression.
Lossy and lossless
Create a vertical table for the symbols, the best ordering being in decreasing order of probability.
Form a binary tree to the right of the table. A binary tree always has two branches at each node. Build the tree by merging the two lowest probability symbols at each level, making the probability of the node equal to the sum of the merged nodes’ probabilities. If more than two nodes/symbols share the lowest probability at a given level, pick any two; your choice won’t affect B (A).
At each node, label each of the emanating branches with a binary number. The bit sequence obtained from passing from the tree’s root to the symbol is its Huffman code.
Huffman showed that his (maximally efficient) code had the prefix property: No code for a symbol began another symbol’s code. Once you have the prefix property, the bitstream is partially self-synchronizing: Once the receiver knows where the bitstream starts, we can assign a unique and correct symbol sequence to the bitstream.
Otherwise you need a “comma” or some separator between each symbol.
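Compact sketch of the merging procedure above (arbitrary example probabilities; uses Python's heapq):

    import heapq

    def huffman_code(probs):
        # probs: dict of symbol -> probability; returns symbol -> bit string
        # heap entries: (probability, tiebreak, {symbol: code-so-far})
        heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)   # two lowest-probability nodes
            p2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in c1.items()}
            merged.update({s: "1" + code for s, code in c2.items()})
            heapq.heappush(heap, (p1 + p2, count, merged))
            count += 1
        return heap[0][2]

    code = huffman_code({"a1": 0.5, "a2": 0.25, "a3": 0.125, "a4": 0.125})
    print(code)   # e.g. a1 -> 0, a2 -> 10, a3 -> 110, a4 -> 111 (labels may differ)

For this example the average code length is 1.75 bits/symbol, equal to the entropy, since the probabilities are powers of 1/2.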
FIG 6.20