Phasor


Encyclopedia


An example of a series RLC circuit and its corresponding phasor diagram

In physics and engineering, a phase vector ("phasor") is a representation of a sine wave whose amplitude (A), phase (θ), and frequency (ω) are time-invariant. It is a special case of a more general concept called analytic representation. Phasors separate the dependencies on these three parameters into three independent factors, thereby simplifying certain kinds of calculations. In particular, the frequency factor, which also encodes the time dependence of the sine wave, is often common to all the components of a linear combination of sine waves. Using phasors, it can be factored out, leaving just the static amplitude and phase information to be combined algebraically (rather than trigonometrically). Similarly, linear differential equations reduce to algebraic ones. The term phasor therefore often refers to just those two remaining factors. In older texts, a phasor is also referred to as a sinor.

Definition

Euler's formula indicates that sine waves can be represented mathematically as the sum of two complex-valued functions:

$A\cdot \cos(\omega t + \theta) = A/2\cdot e^{i(\omega t + \theta)} + A/2\cdot e^{-i(\omega t + \theta)},$    [1]

or as the real part of one of the functions:

\begin{align} A\cdot \cos(\omega t + \theta) &= \operatorname{Re} \left\{ A\cdot e^{i(\omega t + \theta)}\right\} \\ &= \operatorname{Re} \left\{ A e^{i\theta} \cdot e^{i\omega t}\right\}. \end{align}

As indicated above, phasor can refer to either  $A e^{i\theta} e^{i\omega t}\,$ or just the complex constant,  $A e^{i\theta}\,$  . In the latter case, it is understood to be a shorthand notation, encoding the amplitude and phase of an underlying sinusoid.

An even more compact shorthand is angle notation:  $A \angle \theta.\,$
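These definitions map directly onto ordinary complex arithmetic. A minimal sketch in Python's standard library (the values A = 2, θ = π/6, ω = 100 are illustrative, not taken from the article):

```python
import cmath
import math

A, theta = 2.0, math.pi / 6        # amplitude and phase of A*cos(wt + theta)
omega = 100.0                      # angular frequency (rad/s)

# The shorthand phasor A∠θ is just the complex constant A*e^{iθ}
phasor = cmath.rect(A, theta)

# Reinserting the time factor e^{iωt} and taking the real part
# recovers the sinusoid at any instant t
def sinusoid(t):
    return (phasor * cmath.exp(1j * omega * t)).real
```

Here `cmath.rect(A, theta)` builds A·e^{iθ}, and `abs` and `cmath.phase` recover the amplitude and phase from the phasor.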

A phasor can be seen as a rotating vector

The sine wave can be understood as the projection on the real axis of a rotating vector on the complex plane. The modulus of this vector is the amplitude of the oscillations, while its argument is the total phase ωt + θ. The phase constant θ represents the angle that the complex vector forms with the real axis at t = 0.

Phasor arithmetic

Multiplication by a constant (scalar)

Multiplication of the phasor  $A e^{i\theta} e^{i\omega t}\,$ by a complex constant,  $B e^{i\phi}\,$  produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid:

\begin{align} \operatorname{Re}\{(A e^{i\theta} \cdot B e^{i\phi})\cdot e^{i\omega t} \} &= \operatorname{Re}\{(AB e^{i(\theta+\phi)})\cdot e^{i\omega t} \} \\ &= AB \cos(\omega t +(\theta+\phi)) \end{align}

In electronics, $B e^{i\phi}\,$  would represent an impedance, which is independent of time. In particular it is not the shorthand notation for another phasor. Multiplying a phasor current by an impedance produces a phasor voltage. But the product of two phasors (or squaring a phasor) would represent the product of two sine waves, which is a non-linear operation that produces new frequency components. Phasor notation can only represent systems with one frequency, such as a linear system stimulated by a sinusoid.
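As a quick numerical sketch of this rule (the magnitudes and angles below are illustrative, not from the text): multiplying a phasor by a complex constant multiplies the magnitudes and adds the phases.

```python
import cmath
import math

I = cmath.rect(3.0, -math.pi / 4)   # phasor current, 3∠(-45°)
Z = cmath.rect(5.0, math.pi / 6)    # impedance, 5∠30° (time-independent)

# Phasor voltage: amplitude 3*5 = 15, phase -45° + 30° = -15°
V = Z * I
```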

Differentiation and integration

The time derivative or integral of a phasor produces another phasor[2]. For example:

\begin{align} \operatorname{Re}\left\{\frac{d}{dt}(A e^{i\theta} \cdot e^{i\omega t})\right\} &= \operatorname{Re}\{A e^{i\theta} \cdot i\omega e^{i\omega t}\} \\ &= \operatorname{Re}\{A e^{i\theta} \cdot e^{i\pi/2} \omega e^{i\omega t}\} \\ &= \operatorname{Re}\{\omega A e^{i(\theta + \pi/2)} \cdot e^{i\omega t}\} \\ &= \omega A\cdot \cos(\omega t + \theta + \pi/2) \end{align}

Therefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant, $i \omega = (e^{i\pi/2} \cdot \omega).\,$  Similarly, integrating a phasor corresponds to multiplication by $\frac{1}{i\omega} = \frac{e^{-i\pi/2}}{\omega}.\,$  The time-dependent factor,  $e^{i\omega t}\,$,  is unaffected. When we solve a linear differential equation with phasor arithmetic, we are merely factoring  $e^{i\omega t}\,$  out of all terms of the equation, and reinserting it into the answer. For example, consider the following differential equation for the voltage across the capacitor in an RC circuit:
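The derivative rule is easy to verify numerically. A sketch (with arbitrary illustrative values for A, θ, ω) comparing the phasor rule against a finite-difference derivative of the sinusoid:

```python
import cmath
import math

A, theta = 1.5, 0.3
omega = 2 * math.pi * 50
phasor = A * cmath.exp(1j * theta)

def x(t):                      # the sinusoid Re{phasor * e^{iωt}}
    return (phasor * cmath.exp(1j * omega * t)).real

def dx_phasor(t):              # phasor rule: differentiation multiplies by iω
    return (1j * omega * phasor * cmath.exp(1j * omega * t)).real

def dx_numeric(t, h=1e-7):     # central finite difference, for comparison
    return (x(t + h) - x(t - h)) / (2 * h)
```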

$\frac{d\ v_C(t)}{dt} + \frac{1}{RC}v_C(t) = \frac{1}{RC}v_S(t)$

When the voltage source in this circuit is sinusoidal:

$v_S(t) = V_P\cdot \cos(\omega t + \theta),\,$

we may substitute:

\begin{align} v_S(t) &= \operatorname{Re} \{V_s \cdot e^{i\omega t}\} \\ v_C(t) &= \operatorname{Re} \{V_c \cdot e^{i\omega t}\}, \end{align}

where phasor  $V_s = V_P e^{i\theta},\,$  and phasor $V_c\,$ is the unknown quantity to be determined.

In the phasor shorthand notation, the differential equation reduces to[3]:

$i \omega V_c + \frac{1}{RC} V_c = \frac{1}{RC}V_s$

Solving for the phasor capacitor voltage gives:

$V_c = \frac{1}{1 + i \omega RC} \cdot (V_s) = \frac{1-i\omega R C}{1+(\omega R C)^2} \cdot (V_P e^{i\theta})\,$

As we have seen, the factor multiplying $V_s\,$ represents the change in amplitude and phase of $v_C(t)\,$ relative to $V_P\,$ and $\theta.\,$

In polar coordinate form, it is:

$\frac{1}{\sqrt{1 + (\omega RC)^2}}\cdot e^{-i \phi(\omega)},\,$    where  $\phi(\omega) = \arctan(\omega RC).\,$

Therefore:

$v_C(t) = \frac{1}{\sqrt{1 + (\omega RC)^2}}\cdot V_P \cos(\omega t + \theta- \phi(\omega))$
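The whole RC calculation above can be checked numerically. A sketch with illustrative component and source values (R = 1 kΩ, C = 1 µF, a 5 V source at 500 Hz; none of these numbers come from the article):

```python
import cmath
import math

R, C = 1e3, 1e-6                    # illustrative: 1 kΩ, 1 µF
omega = 2 * math.pi * 500           # 500 Hz drive
V_P, theta = 5.0, 0.0

Vs = V_P * cmath.exp(1j * theta)    # source phasor
Vc = Vs / (1 + 1j * omega * R * C)  # phasor solution derived above

# Amplitude and phase lag predicted by the closed-form result
amp = V_P / math.sqrt(1 + (omega * R * C) ** 2)
phi = math.atan(omega * R * C)

def v_C(t):                         # reinsert e^{iωt}, take the real part
    return (Vc * cmath.exp(1j * omega * t)).real
```

The phasor solution also satisfies the algebraic form of the differential equation, iωVc + Vc/RC = Vs/RC, which is what the derivation promised.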

The sum of phasors as addition of rotating vectors

The sum of multiple phasors produces another phasor. That is because the sum of sine waves with the same frequency is also a sine wave with that frequency:

\begin{align} A_1 \cos(\omega t + \theta_1) + A_2 \cos(\omega t + \theta_2) &= \operatorname{Re} \{A_1 e^{i\theta_1}e^{i\omega t}\} + \operatorname{Re} \{A_2 e^{i\theta_2}e^{i\omega t}\} \\ &= \operatorname{Re} \{A_1 e^{i\theta_1}e^{i\omega t} + A_2 e^{i\theta_2}e^{i\omega t}\} \\ &= \operatorname{Re} \{(A_1 e^{i\theta_1} + A_2 e^{i\theta_2})e^{i\omega t}\} \\ &= \operatorname{Re} \{(A_3 e^{i\theta_3})e^{i\omega t}\} \\ &= A_3 \cos(\omega t + \theta_3), \end{align}

where:

$A_3^2 = (A_1 \cos{\theta_1}+A_2 \cos{\theta_2})^2 + (A_1 \sin{\theta_1}+A_2 \sin{\theta_2})^2,$
$\theta_3 = \arctan{\left(\frac{A_1 \sin{\theta_1} + A_2 \sin{\theta_2}}{A_1 \cos{\theta_1} + A_2 \cos{\theta_2}}\right)},$

or, via the law of cosines on the complex plane (or the trigonometric identity for angle differences):

$A_3^2 = A_1^2 + A_2^2 - 2 A_1 A_2 \cos(180^\circ - \Delta\theta) = A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\Delta\theta),$

where Δθ = θ1 − θ2. A key point is that A3 and θ3 do not depend on ω or t, which is what makes phasor notation possible. The time and frequency dependence can be suppressed and re-inserted into the outcome as long as the only operations used in between are ones that produce another phasor. In angle notation, the operation shown above is written:

$A_1 \angle \theta_1 + A_2 \angle \theta_2 = A_3 \angle \theta_3.$

Another way to view addition is that two vectors with coordinates [A1 cos(ωt+θ1), A1 sin(ωt+θ1)] and [A2 cos(ωt+θ2), A2 sin(ωt+θ2)] are added vectorially to produce a resultant vector with coordinates [A3 cos(ωt+θ3), A3 sin(ωt+θ3)]. (see animation)
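A numerical sketch of this addition (the amplitudes and phases below are illustrative):

```python
import cmath
import math

A1, th1 = 2.0, 0.5
A2, th2 = 3.0, 2.0

# Add the static phasors; the sum encodes A3 and θ3
p3 = A1 * cmath.exp(1j * th1) + A2 * cmath.exp(1j * th2)
A3, th3 = abs(p3), cmath.phase(p3)

# Law-of-cosines form for the resulting amplitude
A3_loc = math.sqrt(A1**2 + A2**2 + 2 * A1 * A2 * math.cos(th1 - th2))
```

Note that `cmath.phase` uses atan2, so it lands in the correct quadrant even where the bare arctan formula above would be off by π.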

Phasor diagram of three waves in perfect destructive interference

In physics, this sort of addition occurs when sine waves "interfere" with each other, constructively or destructively. The static vector concept provides useful insight into questions like: "What phase difference would be required between three identical waves for perfect cancellation?" In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. Clearly, the shape that satisfies these conditions is an equilateral triangle, so the angle between each phasor and the next is 120° (2π/3 radians), or one third of a wavelength, λ/3. So the phase difference between each wave must also be 120°, as is the case in three-phase power.

In other words, what this shows is:

$\cos(\omega t) + \cos(\omega t + 2\pi/3) + \cos(\omega t +4\pi/3) = 0.\,$
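This cancellation is easy to confirm numerically, both for the static phasors and for the waves at any instant:

```python
import cmath
import math

# The three static phasors, 120° (2π/3 rad) apart, sum to zero...
phasors = [cmath.exp(1j * k * 2 * math.pi / 3) for k in range(3)]

# ...and therefore the three cosine waves sum to zero at every instant t
def three_wave_sum(omega, t):
    return sum(math.cos(omega * t + k * 2 * math.pi / 3) for k in range(3))
```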

In the example of three waves, the phase difference between the first and the last wave was 240 degrees, while for two waves destructive interference happens at 180 degrees. In the limit of many waves, the phasors must form a circle for destructive interference, so that the first phasor is nearly parallel with the last. This means that for many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength λ. This is why in single slit diffraction, the first minimum occurs when light from the far edge travels a full wavelength further than the light from the near edge.

Phasor diagrams

Electrical engineers, electronics engineers, electronic engineering technicians and aircraft engineers all use phasor diagrams to visualize complex constants and variables (phasors). Like vectors, arrows drawn on graph paper or computer displays represent phasors. Cartesian and polar representations each have advantages.

Circuit laws

With phasors, the techniques for solving DC circuits can be applied to solve AC circuits. A list of the basic laws is given below.

• Ohm's law for resistors: a resistor has no time delays, and therefore doesn't change the phase of a signal, so V = IR remains valid.
• Ohm's law for resistors, inductors, and capacitors: V = IZ, where Z is the complex impedance.
• In an AC circuit we have real power (P), which represents the average power into the circuit, and reactive power (Q), which indicates power flowing back and forth. We can also define the complex power S = P + iQ and the apparent power, which is the magnitude of S. The power law for an AC circuit expressed in phasors is then S = VI* (where I* is the complex conjugate of I).
• Kirchhoff's circuit laws work with phasors in complex form.

Given this, we can apply the techniques of analysis of resistive circuits with phasors to analyze single-frequency AC circuits containing resistors, capacitors, and inductors. Multiple-frequency linear AC circuits and AC circuits with different waveforms can be analyzed to find voltages and currents by transforming all waveforms to sine wave components with magnitude and phase, then analyzing each frequency separately, as allowed by the superposition theorem.
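A sketch of that procedure for a single-frequency series RLC circuit (all component and source values are illustrative, chosen only to exercise the laws above):

```python
import cmath
import math

omega = 1000.0                       # rad/s
R, L, C = 50.0, 0.1, 2e-5            # illustrative component values
Vs = 10.0 + 0j                       # 10∠0° source phasor

# Series combination of the three complex impedances
Z = R + 1j * omega * L + 1 / (1j * omega * C)

I = Vs / Z                           # Ohm's law in phasor form: V = IZ
V_R = I * R                          # phasor voltage across each element
V_L = I * (1j * omega * L)
V_C = I / (1j * omega * C)

S = Vs * I.conjugate()               # complex power S = VI* = P + iQ
```

With these values Z = 50 + 50i, so I = 0.1 − 0.1i, and the element voltages sum back to the source voltage, as Kirchhoff's voltage law requires.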

Power engineering

In analysis of three phase AC power systems, usually a set of phasors is defined as the three complex cube roots of unity, graphically represented as unit magnitudes at angles of 0, 120 and 240 degrees. By treating polyphase AC circuit quantities as phasors, balanced circuits can be simplified and unbalanced circuits can be treated as an algebraic combination of symmetrical circuits. This approach greatly simplifies the work required in electrical calculations of voltage drop, power flow, and short-circuit currents. In the context of power systems analysis, the phase angle is often given in degrees, and the magnitude in rms value rather than the peak amplitude of the sinusoid.
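A sketch of that phasor set in Python; the operator a = 1∠120° (a standard name in power engineering) generates the three cube roots of unity, and a balanced set built from them sums to zero (the 120 V magnitude is illustrative):

```python
import cmath
import math

a = cmath.exp(1j * 2 * math.pi / 3)   # 1∠120°
roots = [1, a, a**2]                  # 1∠0°, 1∠120°, 1∠240°

# A balanced three-phase set: equal magnitudes, 120° apart
V = 120.0                             # illustrative rms magnitude
balanced = [V * r for r in roots]
```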

The technique of synchrophasors uses digital instruments to measure the phasors representing transmission system voltages at widespread points in a transmission network. Small changes in the phasors are sensitive indicators of power flow and system stability.

Footnotes

1. ^
• i is the imaginary unit ($i^2 = -1$).
• In electrical engineering texts, the imaginary unit is often symbolized by j.
• The frequency of the wave, in Hz, is given by ω / 2π.
2. ^ This results from:  $\frac{d}{dt}(e^{i \omega t}) = i \omega e^{i \omega t}$ which means that the complex exponential is the eigenfunction of the derivative operation.
3. ^ Proof:

$\frac{d\ \operatorname{Re} \{V_c \cdot e^{i\omega t}\}}{dt} + \frac{1}{RC}\operatorname{Re} \{V_c \cdot e^{i\omega t}\} = \frac{1}{RC}\operatorname{Re} \{V_s \cdot e^{i\omega t}\}$

(Eq.1)

Since this must hold for all $t\,$, specifically:  $t-\frac{\pi}{2\omega },\,$  it follows that:

$\frac{d\ \operatorname{Im} \{V_c \cdot e^{i\omega t}\}}{dt} + \frac{1}{RC}\operatorname{Im} \{V_c \cdot e^{i\omega t}\} = \frac{1}{RC}\operatorname{Im} \{V_s \cdot e^{i\omega t}\}$

(Eq.2)

It is also readily seen that:

$\frac{d\ \operatorname{Re} \{V_c \cdot e^{i\omega t}\}}{dt} = \operatorname{Re} \left\{ \frac{d\left( V_c \cdot e^{i\omega t}\right)}{dt} \right\} = \operatorname{Re} \left\{ i\omega V_c \cdot e^{i\omega t} \right\}$
$\frac{d\ \operatorname{Im} \{V_c \cdot e^{i\omega t}\}}{dt} = \operatorname{Im} \left\{ \frac{d\left( V_c \cdot e^{i\omega t}\right)}{dt} \right\} = \operatorname{Im} \left\{ i\omega V_c \cdot e^{i\omega t} \right\}$

Substituting these into  Eq.1 and  Eq.2, multiplying  Eq.2 by $i,\,$  and adding both equations gives:

$i\omega V_c \cdot e^{i\omega t} + \frac{1}{RC}V_c \cdot e^{i\omega t} = \frac{1}{RC}V_s \cdot e^{i\omega t}$
$\left(i\omega V_c + \frac{1}{RC}V_c\right) \cdot e^{i\omega t} = \frac{1}{RC}V_s \cdot e^{i\omega t}$
$i\omega V_c + \frac{1}{RC}V_c = \frac{1}{RC}V_s \quad\quad(QED)$

References

• Douglas C. Giancoli (1989). Physics for Scientists and Engineers. Prentice Hall. ISBN 0-13-666322-2.


Study guide

Up to date as of January 14, 2010

From Wikiversity

A phasor is a constant complex number representing the complex amplitude (magnitude and phase) of a sinusoidal function of time. (In older texts, a phasor is alternatively referred to as a sinor.) It is usually expressed in exponential form. Phasors are used in engineering to simplify computations involving sinusoids, where they can often reduce a differential equation problem to an algebraic one.

Introduction

A sinusoid (or sine waveform) is defined to be a function of the form (the reason for using cosine rather than sine will become apparent later)

$y=A\cos{(\omega t+\phi)}\,\!$

where

• y is the quantity that is varying with time
• φ is a constant (in radians) known as the phase or phase angle of the sinusoid
• A is a constant known as the amplitude of the sinusoid. It is the peak value of the function.
• ω is the angular frequency given by ω = 2πf where f is frequency.
• t is time.

This can be expressed as

$y=\Re \Big(A\big(\cos{(\omega{}t+\phi)}+j\sin{(\omega t+\phi)}\big)\Big)\,\!$

where

• j is the imaginary unit $\sqrt{-1}$. Note that i is not used in electrical engineering, as it is commonly used to represent time-varying current.
• $\Re (z)$ gives the real part of the complex number z

Equivalently, by Euler's formula,

$y=\Re(Ae^{j(\omega{}t+\phi)})\,\!$
$y=\Re(Ae^{j\phi}e^{j\omega{}t})\,\!$

Y, the phasor representation of this sinusoid, is defined as follows:

$Y = Ae^{j \phi}\,$

such that

$y=\Re(Ye^{j\omega{}t})\,\!$

Thus, the phasor Y is the constant complex number that encodes the amplitude and phase of the sinusoid. To simplify the notation, phasors are often written in angle notation:

$Y = A \angle \phi \,$

Within Electrical Engineering, the phase angle is commonly specified in degrees rather than radians and the magnitude will often be the rms value rather than a peak value of the sinusoid.

The overarching conceptual motive behind phasor calculus is that it is generally far more convenient to manipulate complex numbers than to manipulate literal trigonometric functions. Noting that a trigonometric function can be represented as the real component of a complex quantity, it is efficacious to perform the required mathematical operations upon the complex quantity and, at the very end, take its real component to produce the desired answer. This is quite similar to the concept underlying complex potential in such fields as electromagnetic theory, where—instead of manipulating a real quantity, u—it is often more convenient to derive its harmonic conjugate, v, and then operate upon the complex quantity u + jv, again recovering the real component of the complex "result" as the last stage of computation to generate the true result.

Phasor Calculus

When sinusoids are represented as phasors, differential equations become algebraic equations. This result follows from the fact that the complex exponential is the eigenfunction of the derivative operation:

$\frac{d}{dt}(e^{j \omega t}) = j \omega e^{j \omega t}$

That is, only the complex amplitude is changed by the derivative operation. Taking the real part of both sides of the above equation gives the familiar result:

$\frac{d}{dt} \cos{\omega t} = - \omega \sin{\omega t}\,$

Thus, a time derivative of a sinusoid becomes, in the phasor representation, multiplication by the complex frequency. Similarly, integrating a phasor corresponds to division by the complex frequency.

As an example, consider the following differential equation for the voltage across the capacitor in an RC circuit:

$\frac{dv_C}{dt} + \frac{1}{RC}v_C = \frac{1}{RC}v_S$

When the voltage source in this circuit is sinusoidal:

$v_S(t) = V_P \cos(\omega t + \phi)\,$

the differential equation (in phasor form) becomes:

$j \omega V_c + \frac{1}{RC} V_c = \frac{1}{RC}V_s$

where

$V_s = V_P e^{j \phi}\,$

Solving for the phasor capacitor voltage gives:

$V_c = \frac{1}{1 + j \omega RC} V_s$

To convert the phasor capacitor voltage back to a sinusoid, we need to express all complex numbers in polar form:

$V_c = \frac{1}{\sqrt{1 + (\omega RC)^2}}e^{j \theta(\omega)} V_s$

where

$\theta(\omega) = -\arctan(\omega RC)\,$

Then

$v_C(t) = \frac{1}{\sqrt{1 + (\omega RC)^2}} V_P \cos(\omega t + \phi + \theta(\omega))$

Circuit laws

With phasors, the techniques for solving DC circuits can be applied to solve AC circuits. A list of the basic laws is given below.

• Ohm's law for resistors: a resistor has no time delays, and therefore doesn't change the phase of a signal, so V = IR remains valid.
• Ohm's law for resistors, inductors, and capacitors: V = IZ, where Z is the complex impedance.
• In an AC circuit we have real power (P), which represents the average power into the circuit, and reactive power (Q), which indicates power flowing back and forth. We can also define the complex power S = P + jQ and the apparent power, which is the magnitude of S. The power law for an AC circuit expressed in phasors is then S = VI* (where I* is the complex conjugate of I).
• Kirchhoff's circuit laws work with phasors in complex form.

Given this, we can apply the techniques of analysis of resistive circuits with phasors to analyse single-frequency AC circuits containing resistors, capacitors, and inductors. Multiple-frequency linear AC circuits and AC circuits with different waveforms can be analysed to find voltages and currents by transforming all waveforms to sine wave components with magnitude, frequency, and phase, then analysing each frequency separately. However, this method does not work for power, as power is based on voltage times current.

Phasor transform

The phasor transform or phasor representation allows transformation from complex form to trigonometric form:

$V_m e^{j \phi } = \mathcal{P} \{ V_m \cos( \omega t + \phi ) \}$

where the notation $\mathcal{P} \{ \}$ is read "the phasor transform of ____."

The phasor transform transfers the sinusoidal function from the time domain to the complex-number domain or frequency domain.

Inverse phasor transform

The inverse phasor transform $\mathcal{P}^{-1}$ allows one to move back from the phasor domain to the time domain.

$V_m \cos( \omega t + \phi ) = \mathcal{P}^{-1} \{ V_m e^{j \phi } \} = \Re \{ V_m e^{j \phi } e^{j \omega t } \}$
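The two transforms are one-liners in code. A sketch (the function names here are my own, not standard library or textbook names):

```python
import cmath
import math

def phasor_transform(V_m, phi):
    """P{V_m cos(ωt + φ)}: the phasor V_m e^{jφ}."""
    return V_m * cmath.exp(1j * phi)

def inverse_phasor_transform(V, omega, t):
    """P^{-1}{V}: Re{V e^{jωt}}, evaluated at time t."""
    return (V * cmath.exp(1j * omega * t)).real
```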

Phasor arithmetic

 Wikibooks Circuit Theory has a page on the topic of Phasor Arithmetic.

As with other complex quantities, the exponential (polar) form $Ae^{j\phi}$ simplifies multiplication and division, while the Cartesian (rectangular) form a + jb simplifies addition and subtraction.

Power engineering

In analysis of three phase AC power systems, usually a set of phasors is defined as the three complex cube roots of unity, graphically represented as unit magnitudes at angles of 0, 120 and 240 degrees. By treating polyphase AC circuit quantities as phasors, balanced circuits can be simplified and unbalanced circuits can be treated as an algebraic combination of symmetrical circuits. This approach greatly simplifies the work required in electrical calculations of voltage drop, power flow, and short-circuit currents.

Interference

Phasors are most commonly used to visually solve problems of the type: "several waves of similar frequency but different phases and amplitudes interfere at a point; what is the resulting intensity?" To solve this problem, draw one phasor for each of the waves and perform vector addition on them. The length of the resultant vector is the amplitude of the resulting wave, and it can be squared to find the intensity. Note that, while the sum of several sine waves is not necessarily another sine wave, the sum of several sine waves of the same frequency is, allowing the resultant phase to be read as the angle of the resultant phasor.
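A sketch of this recipe (the amplitudes and phases below are illustrative): vector-add the phasors, then square the resultant length for the intensity.

```python
import cmath
import math

def resultant(waves):
    """Vector-add phasors (A_k, φ_k) of equal-frequency waves -> (A, φ)."""
    total = sum(A * cmath.exp(1j * phi) for A, phi in waves)
    return abs(total), cmath.phase(total)

# Two equal waves in phase: amplitudes add, so intensity quadruples
A_con, _ = resultant([(1.0, 0.0), (1.0, 0.0)])

# Two equal waves 180° apart: complete destructive interference
A_des, _ = resultant([(1.0, 0.0), (1.0, math.pi)])
```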

An interesting, related use of phasors arises in the inverted question, "What phase difference do I need between three identical waves for perfect cancellation?" In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. Clearly, the shape that satisfies these conditions is an equilateral triangle. The angle between each phasor and the next is 120 degrees, or one third of a wavelength λ/3, so the phase difference between each wave must also be 120 degrees. The problem is solved for four phasors with a square, and so forth.

Three waves in perfect destructive interference

In the example of three waves, the phase difference between the first and the last wave was 240 degrees, while for two waves destructive interference happens at 180 degrees. In the limit of many waves, the phasors must form a circle for destructive interference, so that the first phasor is nearly parallel with the last. This means that for many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength λ. This is why in single slit diffraction, the first minimum occurs when light from the far edge travels a full wavelength further than the light from the near edge.

Simple harmonic oscillator

A phasor can be used to model the behavior of a particle in simple harmonic motion: the y value of the phasor corresponds to the particle's current displacement, and one should imagine the phasor rotating around the origin as the object oscillates. The maximum displacement is given by the phasor's length A. If the period of oscillation (i.e. of rotation) is T, the tip of the phasor travels the circumference 2πA in time T, so it moves with speed 2πA/T = ωA, which is the maximum speed of the oscillator. The maximum speed occurs when there is no displacement, i.e. when the phasor lies entirely in the x direction; more generally, the particle's velocity is given by ωx = 2πx/T, where x is the current x value of the phasor.
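A numerical sketch of this picture, with an illustrative amplitude and period: displacement is the phasor's y component, and the velocity equals ω times its x component.

```python
import math

# Illustrative oscillator: amplitude A (m) and period T (s)
A, T = 0.05, 2.0
omega = 2 * math.pi / T

def y(t):                  # displacement: y component of the rotating phasor
    return A * math.sin(omega * t)

def x(t):                  # x component of the rotating phasor
    return A * math.cos(omega * t)

def velocity(t):           # dy/dt, which equals omega times the x component
    return A * omega * math.cos(omega * t)

v_max = 2 * math.pi * A / T           # tip speed of the phasor, = omega * A
```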

 Wikibooks Circuit Theory has a page on the topic of Phasors.
• Phase angle
• Frequency domain
• Symmetrical components

References

• Douglas C. Giancoli (1989). Physics for Scientists and Engineers. Prentice Hall. ISBN 0-13-666322-2.

Simple English

A phasor is a tool in mathematics. It is used to show numbers in a different coordinate system. Certain electronic components have models that can be described more easily by the use of phasors. Inductors add a +90° "phase", while capacitors add a −90° "phase". Both elements work on the imaginary axis. Resistors have 0° "phase", and are considered real.