Satellite Communications

The Technology & Economic Forum is a venue to discuss issues pertaining to Technological and Economic developments in India.
SSSalvi
BRFite
Posts: 785
Joined: 23 Jan 2007 19:35
Location: Hyderabad

Satellite Communications

Post by SSSalvi »

Hello BRFites,

There is a thread covering the 'Indian Space program', but general subjects like informative communiques on satellite communications, orbits etc. appear to be off-topic there.

Hence this thread has been started.

As a starting point, let us explain TLEs.
( e.g. the TLEs mentioned in the topic of the PSLV C18 launch / Megha-Tropiques / Jugnu / SRMSAT. )

The status of a satellite can be completely described if we specify ( at an instant of the orbit called the Epoch Time ):

1. the shape of the orbit,
2. how the orbit is positioned around the Earth and finally,
3. how the satellite is situated in that orbit.

Let's study the TLE for Cartosat-2, which was launched in 2007. We find the TLE from this place.

There it lists Cartosat 2 TLE as follows:

CARTOSAT-2 (IRS-P7)
1 29710U 07001B 11289.87789931 .00001006 00000-0 14407-3 0 2742
2 29710 97.8976 348.0802 0002117 70.9435 289.1994 14.78689525257248

What is that????

We find that the satellite has the designation 29710U: 29710 is its NORAD catalogue number and U means Unclassified.
It was launched in 2007, in the first launch of the year, and it was the second object in that launch. This info is in its international designator 07001B, decoded as 07 = 2007, 001 = 1st launch of 2007 [ launch date 10th Jan 2007 ] and B = the second object in the following list of objects launched by PSLV C7:

2007-001A 29709 LAPAN-TUBSAT INDO 2007-01-10
2007-001B 29710 CARTOSAT-2 (IRS-P7) IND 2007-01-10
2007-001C 29711 SRE-1 IND 2007-01-10 SRILR 2007-01-22 D
2007-001D 29712 PEHUENSAT 1 (PO-63) ARGN 2007-01-10

The extra description against SRE-1 ( it was India's re-entry capsule experiment ) indicates that it came down from orbit on 2007-01-22.

A description of how the things are coded is found here.

TLE stands for Two-Line Elements .. a basic set of input parameters put together such that one can ( not easily :wink: ) calculate the location of the satellite at any time in the future. It is a standard format, so one can copy it and use it as input to most orbit calculation programs.

The parameters can be decoded using the link below:

https://docs.google.com/spreadsheet/ccc ... c&hl=en_US

and after using that link we get the following list ( the values below are read out of the two TLE lines quoted above ):

Sat Name: CARTOSAT-2
CAT No. 29710
DRAG 0.00001006
BSTAR 14407-3
Incl 97.8976
RA 348.0802
Ecc 0.0002117
Ap 70.9435
MA 289.1994
Mean Motion 14.78689525
Element Set 274
Rev No. 25724
Epoch Time 11289.8779
SMA ~7012 km
Ht abv Eq ~634 km
Period ~5843 s ( about 97.4 min )
Epoch Year ------------------> 2011
Epoch Day of year---------> 289
EpochTime ------------------> 21:04:10
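To see how these fields sit at fixed column positions in the two lines, here is a minimal decoder for the Cartosat-2 TLE quoted above (a sketch in stdlib Python only; the column numbers follow the standard NORAD two-line element format, and the Earth constants are the usual textbook values):

```python
import math

# The Cartosat-2 TLE quoted above, written out in its standard fixed columns
LINE1 = "1 29710U 07001B   11289.87789931  .00001006  00000-0  14407-3 0  2742"
LINE2 = "2 29710  97.8976 348.0802 0002117  70.9435 289.1994 14.78689525257248"

catalog   = int(LINE1[2:7])             # NORAD catalogue number
intl_des  = LINE1[9:17].strip()         # international designator: 07001B
epoch     = float(LINE1[18:32])         # YYDDD.fraction-of-day

incl      = float(LINE2[8:16])          # inclination, degrees
raan      = float(LINE2[17:25])         # RA of ascending node, degrees
ecc       = float("0." + LINE2[26:33])  # eccentricity (decimal point implied)
arg_per   = float(LINE2[34:42])         # argument of perigee, degrees
mean_anom = float(LINE2[43:51])         # mean anomaly, degrees
mean_mot  = float(LINE2[52:63])         # mean motion, revolutions per day
rev_num   = int(LINE2[63:68])           # revolution number at epoch

# Quantities derived from the mean motion
MU = 398600.4418                        # Earth's GM, km^3/s^2
period_s = 86400.0 / mean_mot           # orbital period, seconds
n_rad = mean_mot * 2 * math.pi / 86400.0
sma = (MU / n_rad ** 2) ** (1.0 / 3.0)  # semi-major axis, km
height = sma - 6378.14                  # approx height above the equator, km
```

Running this gives an inclination of 97.8976 deg, eccentricity 0.0002117, a period of about 97.4 minutes and a height of roughly 634 km, consistent with a near-circular polar imaging orbit.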

The figure below shows the parameters graphically.



Image

Basically these numbers tell the status of the satellite in its orbit around the Earth. The orbit always lies in a flat plane called the orbital plane, and the Earth is at one of its foci.

The satellite can be put into any of the following orbits ( classified by the angle between the orbital plane and the equator ):
Equatorial: the orbit is over the equator, so the angle i ( = inclination ) in the 2nd figure above is 0 deg.
Polar: the orbit passes near the poles, so the angle i is about 90 deg.
Inclined orbit: anything in between the above two .. there are further subcategories here: prograde, retrograde, and a very special one called Molniya, which has an inclination of 63.4 deg.

We will restrict ourselves to elliptical orbits.

Epoch time is the time at which the given numbers are measured. ( Together these numbers are also called a State Vector because they depict the state of the satellite in its dynamic path at that epoch time. )
In a TLE the epoch time is given as YYDDD.TTTT, where YY is the year, DDD the day of the year counted from 1st Jan, and TTTT the fraction of the day at which these parameters apply.

The orbit within its orbital plane can itself have different shapes: the ellipse may be thin, fat, circular etc. ( unscientific words used only for explanation ). This shape is denoted by the Eccentricity. Eccentricity e is a constant defining the shape of the orbit ( 0 = circular, between 0 and 1 = elliptical ).

The Apogee and Perigee are the points of the ellipse which are farthest from and nearest to the central body. The straight line between apogee and perigee necessarily passes through the central body, and that distance is called the Major axis.
The axis perpendicular to the major axis in the orbital plane, passing through the centre of the major axis, is the Minor axis.
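To make the geometry concrete, a tiny sketch using the semi-major axis and eccentricity derived from the Cartosat-2 TLE quoted earlier (approximate values; the 6378.14 km equatorial radius is the usual textbook figure):

```python
# Apogee and perigee distances from semi-major axis a and eccentricity e
a = 7012.0      # semi-major axis, km (approx, from the Cartosat-2 TLE)
e = 0.0002117   # eccentricity (nearly circular)

r_apogee  = a * (1 + e)   # farthest distance from Earth's centre, km
r_perigee = a * (1 - e)   # nearest distance from Earth's centre, km

# Heights above the surface, taking a 6378.14 km equatorial radius
h_apogee  = r_apogee - 6378.14
h_perigee = r_perigee - 6378.14
```

For a near-circular orbit like this the two heights differ by only about 3 km; for a highly eccentric Molniya-type orbit they differ by tens of thousands of km.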

The satellite can be anywhere in the orbital plane. Its position is fixed by the small omega sign, the Argument of Perigee ( the angle from the ascending node to the perigee ), together with the anomaly ( the angle from the perigee to the satellite at the epoch ).

The ellipse of the orbit is not stationary; it slowly rotates around the main body.
Last edited by SSSalvi on 17 Oct 2011 13:49, edited 2 times in total.
member_20015
BRFite -Trainee
Posts: 2
Joined: 11 Aug 2016 06:14

Re: Satellite communications

Post by member_20015 »

Thank you Sir for such an informative thread ... much appreciated.
member_20015
BRFite -Trainee
Posts: 2
Joined: 11 Aug 2016 06:14

Re: Satellite communications

Post by member_20015 »

A brief introduction to celestial mechanics and Keplerian orbit transfers would also be very helpful :)
SSSalvi
BRFite
Posts: 785
Joined: 23 Jan 2007 19:35
Location: Hyderabad

Re: Satellite communications

Post by SSSalvi »

A Practical example with an actual satellite graphic is shown here.


The first figure depicts the Apogee, Perigee, Major and Minor axes, Ascending and Descending nodes, True Anomaly and Argument of Perigee.
( We said TRUE anomaly whereas the list given above has MEAN anomaly. No, it is not a typo .. TRUE and MEAN anomalies ARE different. It is an involved subject and so will be covered separately at the end of this post. )

Image

The satellite orbit crosses the equator at two points: once while travelling from North to South and a second time from South to North. These points are called NODEs. The first case ( N>>>S ) is defined as the DESCENDING node because the satellite is travelling downwards. The S>>>N node is called the ASCENDING node. ( In the figure the ascending node is in front while the descending node is behind the Earth and hence can't be seen. ) Inclination is defined as the angle between the orbital plane and the equator, measured at the ascending node from the eastward direction.


Image


Figure below shows:


Right Ascension of the ascending node: the angle between the Vernal Equinox ( also called the First Point of Aries, one of the two points where the ecliptic crosses the celestial equator ) and the ascending node.
This Vernal Equinox is shown in the first figure. BTW I said 'which was earlier in the Aries constellation'. Yes, due to precession it moves by about a degree every 70 years, so right now it is in the Pisces constellation!! Hence the name First Point of Aries: that is where it was when the Babylonians named it. Awesome!!


Image

Image

The figure below shows the True Anomaly ( the one we can physically depict on the orbit ) and the Mean Anomaly, which is the corresponding angle on a fictitious circle drawn around the orbit with the major axis as the diameter of the enclosing circle.

Image
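The two anomalies can also be related numerically: the mean anomaly is converted to the true anomaly by first solving Kepler's equation M = E - e*sin(E) for the eccentric anomaly E. A minimal sketch (angles in radians, a simple Newton iteration, not flight-grade code):

```python
import math

def true_anomaly(mean_anom, ecc):
    """Mean anomaly (radians) -> true anomaly (radians) for eccentricity ecc."""
    E = mean_anom                        # initial guess for eccentric anomaly
    for _ in range(10):                  # Newton-Raphson on M = E - e*sin(E)
        E -= (E - ecc * math.sin(E) - mean_anom) / (1 - ecc * math.cos(E))
    # eccentric anomaly -> true anomaly
    return 2 * math.atan2(math.sqrt(1 + ecc) * math.sin(E / 2),
                          math.sqrt(1 - ecc) * math.cos(E / 2))
```

For a circular orbit ( e = 0 ) the two anomalies coincide; the more eccentric the orbit, the further the true anomaly runs ahead of the mean anomaly on the perigee-to-apogee half of the orbit.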
SSSalvi
BRFite
Posts: 785
Joined: 23 Jan 2007 19:35
Location: Hyderabad

Fundas of Communications .. Types of signals

Post by SSSalvi »

SORRY ... SORRRRRY .... SOOOORRRRRRYYYY

Just when I started posting on this subject, I got busy with some assignment and could not concentrate on pursuing this thread. Really sorry for that.

We restart now with Basics of Communication Theory in a non technical language.
Americans would have called it "Communications for Dummies" but I will call it Essential Communications for the Uninitiated. ( Hindustani dumb nahi hota hai [ an Indian is no dummy ]. )

Before entering the communications sphere we first get ourselves conversant with different types of signals using everyday examples.

Our UPS battery charger has an indicator lamp showing charging status. Same indicator conveys different meanings by different colors.

Image

The charger lamp changes its colour from RED >> YELLOW >> GREEN.

The colour of light coming out of the bulb and entering the eye changes to convey the information.
Here the light is a medium that is used to carry info and colour is the parameter ( signal ) which is changed to convey three states : 1. No Charge, 2. Charging and 3. Charge complete.


Note that although the battery voltage increases continuously from say 10V to 14V , ( all through 10.1>10.2>10.3 ..... 13.8>13.9>14.0 ) the information is conveyed in only three levels.

Had we connected this voltage to the good old multimeter, the needle would have moved slowly across the scale from 10V to 14V. Such a continuously varying parameter is called an ANALOG signal. But the lamp shows only three distinct stages out of this continuous voltage variation. Such a signal is called a DIGITAL signal, and because this one is divided into three stages it is called a TRI-STATE signal.

A two-state signal ( e.g. if we had only the RED and GREEN states in the above example ) is called a BINARY signal and is the most common in communications. The two states are called ON-OFF or 1-0 or ONE-ZERO or True-False or High-Low etc., depending on the user's choice and context, although all these pairs mean the same thing, viz. the presence or absence of a parameter. Note that 1 does not mean 1 volt or 1 kg or 1 metre .. it just means the signal is above the ON threshold. ( E.g. in our battery charger any voltage less than 12V is defined as the RED state, any voltage between 12.1V and 13V as the YELLOW state, and any voltage above 13V as the GREEN state. )
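The charger-lamp logic above can be sketched in a couple of lines (the voltage thresholds are the illustrative ones from the text, not any real charger's specification):

```python
def lamp_colour(voltage):
    """Reduce a continuously varying battery voltage to three discrete states."""
    if voltage <= 12.0:
        return "RED"      # not charged
    elif voltage <= 13.0:
        return "YELLOW"   # charging
    else:
        return "GREEN"    # charge complete
```

The analog quantity (the voltage) can take any value between 10 V and 14 V, but the signal out of this function takes only three.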

Thus there are two types of signals, Analog and Digital, and based on which method is used for communication we have a Digital Communication System or an Analog Communication System.

Analog signals were the natural choice till a few years back because of their simplicity ( and we never cribbed about disadvantages like the noisy TV picture when we were far from the TV tower, for the simple reason that there was no other choice and we had to 'live with it' ). But as technology advanced and it became possible to make components that process digital signals, the digital era began to take over, and now we hardly see analog systems in use. ( And we again live with new problems like 'arre tumhara sound cut ho raha hai; phirse call karta hoon' [ 'hey, your voice is breaking up; I'll call again' ] or 'ye picture me colour ke rectangular spots aa rahe hai' [ 'rectangular colour spots are appearing in the picture' ], because the DTH operator says 'digital me to aisa hota hi hai' [ 'that's just how digital is' ]. )

So we have simple analog systems, which have the problem of noisiness that makes the picture hazy, whereas in digital communication ( as its name implies ) we have either a fully clear image or a total loss of image if the signal is weak. No intermediate quality.

Having seen the two basic types of signals, we now see how these signals are transported from one place to another.

We deliberately used the word 'transport' because the task for any communication system is to reproduce the exact replica of what was at the source. It could be a picture captured by camera or a sound produced by a musical instrument or a scientific value like Speed of train and so on. The list is endless.

What would we do if a suitcase is to be sent from Kashmir to Kanyakumari?
We will place this suitcase in a truck and this truck will carry the item from source to destination travelling on the surface of the road.

In a communication system we write previous statement as: this signal is modulated on a carrier and this carrier is transmitted over a medium.

So we have following process for a communication system to work: ( And now we have to stick to technical jargon )

at the transmitter end we make the information ride the carrier. This process of putting information on a carrier is called Modulation.

We transmit the modulated carrier through space or a conductive wire or optical fibre.

Then, at the receiving end we extract information from carrier by the process called 'demodulation' and, bingo, our message has reached the destination.
But rarely does the delivery take place without some damage ( i.e. some distortion of the information ) by the time it reaches the destination.

The carrier will generally be a sinusoidal waveform specified in a mathematical form as
A*sin(ωt+ɸ) , where:

A, the amplitude, is the peak deviation from its center position.
ω, the angular frequency, specifies how many oscillations occur in a unit time interval, in radians per second
ɸ, the phase, specifies where in its cycle the oscillation begins at t = 0.

By changing any one of the parameters A, ω or ɸ it is possible to convey intelligence from source to destination.

Image

This 'change' is called Modulation and accordingly this way of communication is defined as Amplitude Modulation, Frequency Modulation or Phase Modulation respectively.

Amplitude and Frequency modulation are shown graphically in the image on the left. The first waveform is the 'unmodulated' carrier.

The second waveform is the signal ( called 'intelligence' in technical parlance ) which is to be modulated on the carrier.

If we use Amplitude Modulation then the amplitude of the carrier changes as per the amplitude of signal as shown in 3rd figure.

If a Frequency modulation is used then the carrier with modulated intelligence looks as shown in adjoining figure wherein the frequency ( indicated by distance between adjacent cycles ) changes as per the intelligence.
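The AM and FM waveforms just described can be sketched sample by sample. A minimal illustration (the 100 Hz carrier and 5 Hz intelligence below are arbitrary numbers chosen only to keep the maths readable, not values from any real system):

```python
import math

FC = 100.0      # carrier frequency, Hz (illustrative)
FSIG = 5.0      # intelligence (modulating signal) frequency, Hz (illustrative)

def am_sample(t, depth=0.5):
    """Amplitude modulation: the carrier amplitude follows the intelligence."""
    m = math.sin(2 * math.pi * FSIG * t)              # the intelligence
    return (1 + depth * m) * math.sin(2 * math.pi * FC * t)

def fm_sample(t, dev=20.0):
    """Frequency modulation: the instantaneous frequency follows the intelligence."""
    # phase = carrier phase + 2*pi*deviation * integral of the intelligence
    integ_m = -math.cos(2 * math.pi * FSIG * t) / (2 * math.pi * FSIG)
    return math.sin(2 * math.pi * FC * t + 2 * math.pi * dev * integ_m)
```

Note the defining difference: the AM envelope swells and shrinks with the intelligence, while the FM amplitude stays constant and only the spacing of the cycles changes.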

We have saved phase modulation description for a future post.

Notice that the modulating signal in above case was a continuously varying signal taking any value from positive to negative extreme or an analog signal.

Had it been a digital signal then the amplitude/frequency/phase would not change continuously but would take only two state values depending on whether the input is 1 or 0.

Image



Having covered the basic components of a communication system, we will next cover the first aspect of any communication system, viz. the ever-present enemy of good communication ... Mr. NOISE.
SriKumar
BRF Oldie
Posts: 2246
Joined: 27 Feb 2006 07:22
Location: sarvatra

Re: Satellite communications

Post by SriKumar »

Great post, SSSalvi. Please continue with the explanations. If you could also cover how de-modulation is done, for analog and digital. I am also interested in how the informational signal (=intelligence) is different for different inputs like audio (voice, instruments), pictures and numbers; and are they all modulated in the same manner onto a carrier signal.
SSSalvi
BRFite
Posts: 785
Joined: 23 Jan 2007 19:35
Location: Hyderabad

Fundas of Communications .. Introduction to Noise effects

Post by SSSalvi »

Before we proceed further, we translate my last post into the 'technical', dry, robotic language used at airports and railway stations. ( 'Yatriyon se nivedan hai ..' [ 'Passengers are requested ..' ] to '... ke liye hame khed hai.' [ '... we regret the inconvenience.' ] All are pronounced in the same tone. )

Communication is the process of transmission of Intelligence( also called as signal ) from source to destination using a medium.
This transmission is achieved by Modulating the signal on a carrier at source and demodulating the carrier at destination. The carrier has a form A sin(ωt+ φ ). Any of the 3 variables viz A, ω and φ can be changed in proportion to the signal and resulting modulations are defined as Amplitude Modulation ( AM ),Frequency Modulation ( FM ) and Phase Modulation ( PM ) respectively.
The Signal can be Analog ( Wherein the signal can take any value between its +ve and –ve maximums ) or Digital ( wherein the signal takes just two values, a HIGH or a LOW .. no intermediate amplitudes ). Resulting analog modulations have the nomenclature ( AM,FM,PM ) given above whereas if the signal is digital then the nomenclature changes to Keying and they are called as Amplitude Shift Keying ( ASK ), Frequency Shift Keying ( FSK ) and Phase Shift Keying ( PSK ).
PSK is the most used modulation type in modern communication systems. Analog modulations are almost not used in new designs.

With that we end our robotic bulletin and begin a new episode on the very important concept of NOISE in communication systems. Tighten your seat belts!

Let's begin with the simplest communication system that we encounter in everyday life, that of hearing a sound: simple lip-to-ear communication.

How do we hear? A very primitive question.

Image

The sound falling on the ear drum transfers the vibrations through a chain of bones to the Cochlea within which these mechanical vibrations get converted into sensory nervous signals which we call as "Hearing the sound".

Let's go a little deeper into this process. The sound vibration energy received through the air creates vibrations in the eardrum. But for the eardrum to vibrate, some minimum quantum of energy is required. If the energy falling on the eardrum is less than that quantum, then the sound, even though it IS present, will not be heard because the drum will not vibrate. This minimum required power is the threshold of hearing, and if one has to whisper into the ear, the whisper should be above this threshold level. So in the figure below, sound level B can't be heard but A will be hearable because it is above the threshold.

Image

But real life is not so straightforward. In the example above we assumed that the surroundings are totally noise free ( like the middle of the night in a very quiet location, away from noises like the railway, a tractor, bore-well digging, a vacuum cleaner, a pressure cooker etc. ). Now take a situation where the sound level is above the threshold, so it would be heard clearly in a noise-free room, but what happens if there is surrounding noise ( say the rumbling of a digging machine nearby )? The ear receives both the noise and the sound, but the power of the noise is more than that of the sound, so the eardrum vibrates according to the noise with only a feeble vibration due to the sound. In short, the useful sound will not be heard clearly even though it is above the threshold. To state it mathematically: the higher the S/N ratio, the clearer the sound quality.

Also notice one more phenomenon: you can hear a whisper if the speaker brings his mouth near your ear; if the mouth is not very near, the whisper is not heard, you have to speak louder, and as you move further away you have to talk louder still. This is because the sound intensity decreases as it travels a greater distance. ( In fact it decreases as the square of the distance, i.e. as 1/d^2, hence it is called the inverse square law. )
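The inverse square law in the paragraph above can be stated as a one-liner (distances in any consistent unit; the reference distance is arbitrary):

```python
def relative_intensity(d, d_ref=1.0):
    """Intensity at distance d relative to the intensity at d_ref (1/d^2 law)."""
    return (d_ref / d) ** 2
```

Doubling the distance quarters the intensity; going ten times farther leaves only one hundredth of it.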

Now think of a situation where a person has to address a large gathering spread over a large area. The speaker has to shout so much that his sound level stays above the threshold of hearing up to the last person. But there is a limit to how loud a human voice can be. So we use a Microphone-Amplifier-Loudspeaker arrangement for a large crowd.

Here again we note the following two possible arrangements:

Image


In the first method there are a few high-power speakers. These emit a large volume of sound because it has to be audible up to the last row, far away from the speakers. But this is deafening to the persons seated next to a speaker. This leaves a large percentage of the crowd unhappy ( red faced .. so red in the diagram ): half, near the speakers, are unhappy because of the large volume they experience, while the last few rows are also unhappy because they can't hear clearly. A few, of course, are cool green happy.

In the second arrangement there are many low-power speakers. This distributed power emission gives everyone a pleasant volume of sound because, even though the speakers operate at low volume, there is one near every person in the audience.

Moral of the story: to avoid noise pollution, put many low-power transmitters spread over the area.

Hey, Does it sound familiar? Does it ring a bell .. I mean a cell phone ring?

Yes, that precisely is the arrangement used in Cell Phone technology.

There is a safe power level of radio transmission that can be tolerated by humans. So you can't use one big transmitter at a central place in the town. Instead there are several small towers operating at safe RF power levels, each serving about a 5-6 km radius. With this arrangement each tower has to transmit only a small power and ( more importantly ) even the cellphone does not have to transmit a large power .. just enough to reach a tower less than 6 km away. If the towers were farther apart, the cellphone held against the ear would have to transmit a larger power, which could be harmful to the user.

And since each of this 5-6 kms circle coverage area is called a CELL, the name Cellular Phone Technology !!

= = = = = =

Now we translate the above narration in technical parlance applying communications terminology:

1. In a communication system the receiver has a minimum detectable level known as Receiving Threshold. For the receiver to detect a signal it is necessary that the signal level at the front end is above this threshold.

2. Receivers are rarely noise free. The noise could be inherent to the front end or could be ambient noise in the vicinity of the receiver. For the receiver to detect the signal, the signal has to exceed the noise level by a minimum margin called the Threshold Signal-to-Noise Ratio ( S/N Ratio or SNR ). If the SNR is below this threshold, the detected signal will be noisy. The higher the SNR, the better the detected signal quality. ( This statement is true only up to a point; beyond a certain value the signal quality remains constant even if the SNR is increased. )

3. The signal level decreases as the distance between transmitter and receiver increases. It follows the inverse square law: the signal level falls in proportion to the square of the distance between transmitter and receiver.

4. The kind of signal we have illustrated above is an Analog signal .. we will see the Digital signal in a subsequent post.

5. We have also seen a simple transmitter and receiver system wherein the actual sound is transmitted directly. Such systems were in use until not very long ago. The simple manual telephone exchanges, where one had to call an operator to be connected to the other party in a small premises, used this type of direct connection between two users, and the signal looked like this: words separated by gaps of silence, as shown in the first portion of the image below.
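Points 1 and 2 above can be sketched together in a few lines; the 10 dB detection threshold below is an illustrative number chosen for the example, not a standard figure:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio expressed in decibels."""
    return 10.0 * math.log10(signal_power / noise_power)

def detectable(signal_power, noise_power, threshold_db=10.0):
    """True if the signal clears the (illustrative) threshold SNR."""
    return snr_db(signal_power, noise_power) >= threshold_db
```

A signal 100 times stronger than the noise sits at +20 dB and is comfortably detected; a signal only twice the noise ( about +3 dB ) is not.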

Image

Such systems are rarely used nowadays. Almost always a modulation is involved, wherein the sound is modulated on a carrier ( like the 2nd part of the image above, where there is a continuous carrier but its characteristics change over time ), and the carrier is transmitted, received by a carrier receiver, and demodulated to retrieve the original signal.
Last edited by SSSalvi on 22 May 2012 16:51, edited 1 time in total.
Murugan
BRF Oldie
Posts: 4191
Joined: 03 Oct 2002 11:31
Location: Smoking Piskobidis

Re: Satellite communications

Post by Murugan »

SSSalvi-ji

Badhiya ! Sooper !!
SSSalvi
BRFite
Posts: 785
Joined: 23 Jan 2007 19:35
Location: Hyderabad

A Digital Explosion

Post by SSSalvi »

That's it. This is the last post in the light-hearted format. I don't want to spend this precious BRF space on amusing stuff. The purists must be cursing me, asking why I am writing such light material and wasting space. So from the next post we will change to a serious note and dry technical literature.

But that is NEXT post ji .. not now.

Now recall from the earlier post that there are two types of signals used for communication, viz. Analog, where the signal can continuously take any value between +Vmax and -Vmax, and Digital, where the signal takes only two values, either TRUE or FALSE.
A question then comes to mind .. if only YES or NO can be transmitted by digital systems, what is the use? It would seem useless for sending a TV signal, because the TV signal has 1V amplitude and varies continuously as per the picture info.
No, it is not useless. It IS possible to send TV. How? .. read on.
Before that, suppose I want to tell my wife where I am going, but without using the words North/South etc. because a secret service guy is listening to my conversation. Then I use 1 to indicate North and 0 to indicate South. That leaves the East/West directions uncovered. So we decide that our communication will always contain a pair of 'those ones and zeroes': the first will tell N/S and the second E/W, so for N, S, E and W I will use 00, 10, 01 and 11 respectively.

We continue for some days and my wife says 'no no .. I want more details of where you are going'. I, being a genius, say 'OK' and tell her I will use another two of 'those ones and zeroes' for NE/NW/SE/SW: the cardinal directions keep their two-bit code padded with 00, and the diagonals join the two two-bit codes together:

N 0000 NE 0001
S 1000 NW 0011
E 0100 SE 1001
W 1100 SW 1011

And .. as soon as I uttered these values my wife laughed and said 'YOU ARE A FOOL' ( a regular sentence .. no harm ), 'that's why you use four bits. Why not use just 3 bits, like this:'

N 000 NE 100
S 001 NW 101
E 010 SE 110
W 011 SW 111

Well, that's the story, which shows how efficient digital representations are. For eight directions we used just 3 'bits' and not 4.

And what happens if I use 4 bits? I can define as many as 16 entities

0 - 0000, 1 - 0001, 2 - 0010, 3 - 0011,
4 - 0100, 5 - 0101, 6 - 0110, 7 - 0111
8 - 1000, 9 - 1001, 10 - 1010, 11 - 1011,
12 - 1100, 13 - 1101, 14 - 1110, 15 - 1111

( A question always asked .. but aren't there only 15, not 16? Well, that big fat '0' in the first place is also a value, making 0 to 15 sixteen distinct values. )

[ A simple way to find the decimal value of a string of 1s and 0s is to add the place values of all the 1s .. a bit confusing as a statement, so let me explain practically:
From the set above we see that a 1 in the rightmost position has a value of 1, a 1 in the 2nd position from the right has a value of 2, a 1 in the 3rd place has a value of 4, and so on: 4th place = 8, 5th = 16, 6th = 32 .. etc.
Now just add the place value of each 1,
e.g. 11010 = 16 ( 5th bit from right ) + 8 ( 4th bit from right ) + 2 ( 2nd bit from right ) = 26 .. bingo.
Purists will say 11010 = 1*2^4 + 1*2^3 + 0*2^2 + 1*2^1 + 0*2^0 = 16+8+0+2+0 = 26.
Well, it is left to you whether you want the purist way or ( RK Laxman's ) Common Man way. ]
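The Common Man rule above is exactly what this little sketch does (and Python's built-in base-2 conversion agrees with it):

```python
def bits_to_decimal(bits):
    """Add the place value of every 1: the rightmost place is worth 1, then 2, 4, 8..."""
    total = 0
    for place, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** place
    return total
```

For the worked example, bits_to_decimal("11010") gives 16 + 8 + 2 = 26, the same answer as int("11010", 2).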

OK, we understand that with just 3 bits you can represent 8 values, and with 4 bits, 16. So what? How is it useful for communication?
Suppose I took Rs 1000 and want to report how much money is left with me. How do I do it? I divide 1000 into 16 parts ( so that each step represents 1000/16 = Rs 62.5 ). Now the representation becomes:
0000 = Rs 0, 0001 = Rs 62.5, 0010 = Rs 125, and so on.

Note that here we can't represent Rs 100 exactly, so you would have to transmit either
0001 ( = Rs 62.5 ) or 0010 ( = Rs 125 ) to represent Rs 100: an error of 100-62.5 = 37.5 or 125-100 = 25 rupees lost in rounding off, which can't be ignored.

We can REDUCE that error ( you can't make it zero ) by increasing the number of bits. If you increase the number of bits to 8 then you get the following ( same logic as above ):
Each step = 1000/256 = Rs 3.90625.
00011001 = 25 = 25 * 3.90625 = Rs 97.65625 and 00011010 = 26 * 3.90625 = Rs 101.5625.
The rounding-off error is now under Rs 2, surely much better than the earlier rounding-off error!
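The rupee example generalises to any full-scale value and any number of bits. A minimal sketch of the quantization step (round-to-nearest; the Rs 1000 full scale is the example's own number):

```python
def quantize(value, n_bits, full_scale=1000.0):
    """Snap value to the nearest of 2**n_bits levels spanning 0..full_scale."""
    step = full_scale / (2 ** n_bits)   # rupees per code, e.g. 62.5 for 4 bits
    code = round(value / step)          # the integer actually transmitted
    return code * step                  # what the receiver reconstructs

err4 = abs(quantize(100, 4) - 100)      # 4-bit step 62.5   -> error Rs 25
err8 = abs(quantize(100, 8) - 100)      # 8-bit step 3.90625 -> error Rs 1.5625
```

Every extra bit halves the step size, and with it the worst-case rounding-off error.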

What I said for the rupee is true for any analog signal. We can digitize a TV signal or voice or voltage or temperature etc. ( i.e. connect a normal analog signal to an 'analog to digital converter' [ADC] which continuously converts the instantaneous voltage to a digital value ). In this way we get a digital representation of almost any analog signal. But why all this Dravidi Pranayam ( the roundabout route )? Why convert a good analog signal into digital format?
The reason is that the whole of our new technology is digital. Our cellphones use digital modulation ( hence the 'arre tumhari awaj cut ho rahi hai' [ 'hey, your voice is breaking up' ] ), our satellite TV has become digital ( resulting in 'picture freeze ho gayi' [ 'the picture has frozen' ] ), our cameras have become digital .. the list is endless. But why this migration to digital?

It is a sort of chicken-and-egg ( pahele anda ya pahele murgi ) question.

Several developments and new technologies have made it possible, and they are now forcing other things also to shift to the digital domain.

Till a few years back, the simple task of converting an analog signal to 8-bit digital format ( an Analog to Digital Converter [ADC] ) involved interconnecting an amplifier, a sample-and-hold circuit made up of another op-amp with a capacitor and an analog switch, an analog comparator ... there is a big list. And after assembling all that stuff, when you switched it on you would find it either oscillating or stuck to one power supply rail. What I mean is that it was not an easy task, and so the use of these circuits was more or less restricted to the lab, or to project reports for an engineering degree. But then electronic manufacturing technologies evolved, and it became possible to integrate many circuit blocks so that a single device could perform many tasks. For example, a single component could be built which not only converted an analog voltage to digital format, as mentioned above, but also had a built-in USB interface for connecting the ADC to a computer.

The computer itself got transformed into a laptop ( the first computer I operated occupied a full 20ft x 20ft room, and I had to 'talk' to it using a row of on-off switches on which digital values, like the 001101010011 in our example at the beginning, were set; and that fellow gave its answer using a row of lamps which were on or off ). And hardly any user does the programming now, because software technology has also developed and the task of 'talking to' the machine is done through the keyboard using operating systems. In fact even the floppy disks which were a must a few years ago have disappeared, and the USB stick has taken over.

And another development of course is the concept of Internet. Recall how we used to go to the Head Post Office to send a telegram which would be delivered in a day or two. Compare it to realtime talks that we do today with each other.

And because now all other things have become digital, you can’t use analog anymore if you have to connect to these systems. That is the reason why ( without you knowing it ) everything is digitized. We talk using some voice messenger from our home computer to a relative in US. And we never know that the first thing that the computer does is it digitises our sound. And then sends that ‘data’ ( just like any other piece of information ) through the net to other computer which again receives it as passively as it would receive any other file and convert that digital ‘data’ into sound using a speaker.

That is the technological evolution into the digital domain. Simultaneously, the theory of digital technology has also developed, from Shannon's treatise on digital principles to practical digital communication systems. Certain manipulations are possible in the digital domain which are just not possible in analog. E.g. using certain algorithmic techniques one can send, over the same communication channel, several times the data that the raw bandwidth would suggest. ( It is now possible to use a 72 MHz bandwidth transponder to send a 370 Mbps data stream .. such a thing is just not possible using analog technology. )

In the simplest case we can double the data being sent on a channel by using QPSK modulation instead of BPSK.
We will cover these topics of BPSK/QPSK etc in the next post.
pgbhat
BRF Oldie
Posts: 4163
Joined: 16 Dec 2008 21:47
Location: Hayden's Ferry

Re: Satellite communications

Post by pgbhat »

Quite frankly, all this is quite refreshing. It is a good read and helps me recollect stuff. :)
SSSalvi
BRFite
Posts: 785
Joined: 23 Jan 2007 19:35
Location: Hyderabad

Transit of Venus in front of the Solar Disk on 6th Jun 2012

Post by SSSalvi »

Details of this event have been posted at

http://forums.bharat-rakshak.com/viewto ... &start=880 and also at

http://sssalvi.blogspot.in/2012/06/ecli ... -2012.html

( The Table is clearer in the post in this link )
SSSalvi
BRFite
Posts: 785
Joined: 23 Jan 2007 19:35
Location: Hyderabad

Fundamentals on Modulation

Post by SSSalvi »

Any communication over a longer distance involves modulation. So just before entering the satellite-specific subjects we will run through Modulation ( now in a more formal way ).

BASIC COMPONENTS OF COMMUNICATION:

As explained in the earlier post, three components are always involved in a communication:

i. An Information or Data or Message which is to be sent. In technical parlance it is called the Baseband ( BB ).

ii. A Carrier which is modulated by changing one of its components ( Amplitude, Frequency or Phase ) based on the value of the Baseband.

iii. And a Medium through which the modulated carrier will travel. It could be a copper wire, free space, or a fibre cable through which light is passed.

iv. We also saw that the Baseband can be Analog or Digital, and that Digital is nothing but converting the analog baseband into a digital value by ( this is a new word but again old wine in a new bottle ) Quantization. ( Remember the 1000 Rs converted to 010101011000 format, where we said there will be a Rounding Off error .. well, that conversion is the process of Quantization and that error is the Quantization error. ) An important property of BB in the digital case is that it will be generated at a fixed frequency called Data Rate. The change of state from 1 to 0 or 0 to 1 is called a Transition.
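The Quantization idea above can be put into a few lines of Python. This is only an illustrative uniform ("mid-rise") quantizer sketch, not any particular codec; the function name and the 8-bit / ±1 V full-scale choice are my own assumptions for the example.

```python
import math

def quantize(x, n_bits, full_scale=1.0):
    """Uniformly quantize an analog sample x in [-full_scale, +full_scale)
    to an n-bit integer code; return (code, reconstructed value, error)."""
    levels = 2 ** n_bits
    step = 2 * full_scale / levels                     # quantization step size
    code = int(math.floor((x + full_scale) / step))    # which level x falls in
    code = max(0, min(levels - 1, code))               # clamp to valid codes
    reconstructed = -full_scale + (code + 0.5) * step  # centre of that level
    return code, reconstructed, x - reconstructed

# the Rounding Off (quantization) error is bounded by half a step
code, approx, err = quantize(0.3, 8)
assert abs(err) <= (2 * 1.0 / 2 ** 8) / 2
```

With more bits the step shrinks, so the quantization error shrinks too — which is exactly why higher bit depths sound or look "cleaner".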

Some terms associated with digital signals are graphically shown here and are self explanatory:

Image
From now on we will concentrate only on Digital signals, because Analog modulation is rarely in use nowadays. Even within Digital modulation our main focus will be on Phase Modulation .. again because that is the one used in most high-efficiency systems. What is high efficiency? We will see that shortly, but before that we must understand what Bandwidth is, and we will try to understand that right away.

SPECTRUM AND BANDWIDTH:

Recall what a carrier is. It is a sine wave of a certain frequency. Suppose that this frequency is 70 MHz and the amplitude is 1 V. How will it appear on a graph having Frequency as the X axis and Amplitude as the Y axis? It will look like the drawing below. Ideally it should look like a single thin line.


Image

This representation of Amplitude versus Frequency is called the Frequency Spectrum. A practical frequency spectrum, as seen on an instrument called a Spectrum Analyzer, is shown in the photograph.

Image

Although it apparently looks like a clean vertical line, if we expand it greatly we see a very small jitter in frequency.

Image

It is not a single line because in practice there is never a rock-stable frequency. It always jumps around in phase and frequency ( a 70 MHz oscillator will give 70.00001 MHz, 70.000024 MHz, 69.999998 MHz, 70.00000 MHz, 70.000042 MHz and so on at different instants ). Naturally most of these wanderings will be near the exact frequency, but some stray frequencies will also show their presence on the spectrum display, causing the thickness instead of a clean line.
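For the curious, the "single line" spectrum of a pure sine wave can be checked numerically. This sketch uses a naive DFT for clarity ( a real Spectrum Analyzer does this in hardware, and practical code would use an FFT library; the sample count of 64 and the 8-cycle tone are arbitrary choices for the example ):

```python
import math, cmath

def dft_magnitude(samples):
    """Naive DFT magnitude spectrum (O(n^2), fine for a short illustration)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

n, cycles = 64, 8   # 8 full cycles of a 1 V sine in the analysis window
wave = [math.sin(2 * math.pi * cycles * t / n) for t in range(n)]
spectrum = dft_magnitude(wave)

# all the energy sits in one spectral "line" at bin 8
peak = max(range(n // 2), key=lambda k: spectrum[k])
assert peak == cycles
```

A real oscillator's jitter would smear a little energy into the neighbouring bins, thickening that line exactly as described above.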

That was about a clean sine wave. What does the spectrum of a square wave look like? To answer that, let's see how a square wave can be generated using sine waves. ( Please ignore the imperfections due to the hand-drawn figure. )

Image

In this diagram two sine waves ( Red and Green, shown in the top portion ) with frequencies f and 3f are added. ( i.e. if f = 1 kHz then 3f = 3 kHz, or if f = 1 MHz then 3f = 3 MHz .. what is important is that they are in the ratio 1:3, i.e. the second frequency is 3 times the first one. Such frequencies are called harmonics .. Red is the 3rd harmonic of Green. ) The addition results in a near-square wave, as shown in the bottom portion.
If we add the next odd harmonic ( i.e. 5f ) then we get a waveform which is still nearer to a square wave, as shown in the bottom-right portion of the figure.

Adding more odd harmonics ( 7f, 9f .. ) will bring the output waveform still nearer to a perfect square wave.

Image

A professionally drawn image with higher harmonics is reproduced here on the left, from a very good article ( http://www.skm-eleksys.com/2010/10/four ... rical.html ).

Conversely, we can say that a square wave is made up of many odd harmonics of a sine wave, the lowest frequency ( f ) being equal to the square wave frequency. So its spectrum should show all the odd harmonics that it is made up of.

The actual spectrum of a square wave is on the right. Notice the several thin vertical lines, each representing a sine wave. ( The small components near the bottom are due to imperfections in the square wave used for making the spectrum display, and we can ignore them. )

Image

Just for academic interest: in the above description we took odd harmonics with amplitudes falling off as 1/n. What happens if we change the recipe? Keeping the odd harmonics but letting the amplitudes fall off as 1/n² ( with alternating signs ) gives a Triangular wave, while taking all harmonics ( odd and even, falling off as 1/n ) gives a Sawtooth wave.

Such studies are the subject of Fourier Analysis and Harmonic Analysis. We will not go into those details here.
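The "add odd harmonics" construction above can be tried numerically. This is a small sketch of the standard square-wave Fourier series, (4/π)·Σ sin(kt)/k over odd k; the function name is my own:

```python
import math

def square_approx(t, n_harmonics):
    """Partial Fourier series of a +1/-1 square wave:
    (4/pi) * sum over the first n odd harmonics of sin(k*t)/k."""
    return (4 / math.pi) * sum(math.sin(k * t) / k
                               for k in range(1, 2 * n_harmonics, 2))

# with one harmonic we just get the fundamental sine, amplitude 4/pi
assert abs(square_approx(math.pi / 2, 1) - 4 / math.pi) < 1e-9

# adding many odd harmonics pushes the flat top of the wave towards 1
assert abs(square_approx(math.pi / 2, 500) - 1.0) < 0.01
```

Sampling `square_approx` over a full period for increasing `n_harmonics` reproduces the hand-drawn figures above: the more odd harmonics, the squarer the wave.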

Decibel, dB, dBm, dBW .. etc Explained

Although in the previous figure the amplitude was shown as 1 Volt for clarity, a Spectrum Analyzer almost always shows the Y axis in dBm, that is, 'the amount of RF power delivered to the load, referred to 1 mW'. Please do not faint, I will explain.
In electrical systems voltage is a measure of amplitude. ( Like we say 230 V is the voltage of the household supply in India. ) When this voltage is connected to a 100 W bulb, 230 V is applied across the filament of the lamp and causes a current to flow through the filament, which heats it up and makes it glow. The filament thus presents a load dissipating V^2/R = 100 W. ( R is the resistance of the bulb's filament. )
The same thing happens in Radio Frequency communications. The RF signal from the transmitter ( technically called the SOURCE ) is connected to a LOAD ( the equivalent of the lamp in the electrical circuit ). The RF energy generated in the source is transferred to the load.
Now here comes the difference. In the electrical circuit we don't bother much about the capacities of the Source and the Load, because the source has almost infinite capacity ( compared to the single lamp that we have connected ). But in RF connections it is not so simple. RF energy is generated by a highly complex oscillator and must always be transferred in total to the load ( we will see later that if the two are mismatched there is a possibility of burning them or causing an arc ), and this happens when the source and the load are perfectly matched.
Say the source generates 1 W of power and has an internal impedance ( impedance is similar to resistance but its value changes with frequency ) of 50 Ohms. This power is connected to a load which is also 50 Ohms. One more concept now needs to be explained: the line connecting two devices in an RF circuit is called a transmission line, and it also has an impedance. The connection between source and load is therefore made through a line which also has an impedance of 50 Ohms.

Image

That is, maximum power gets transferred from source to load when the two are matched.
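The matching claim can be checked with a two-line circuit calculation. This sketch treats the source and load as plain resistances ( a simplification of real complex impedances ) and sweeps the load value:

```python
def delivered_power(v_source, r_source, r_load):
    """Power dissipated in the load of a simple source + load circuit:
    current I = V/(Rs+Rl), load power P = I^2 * Rl."""
    i = v_source / (r_source + r_load)
    return i ** 2 * r_load

# sweep load resistance 1..200 ohm: delivered power peaks at exactly 50 ohm,
# i.e. when the load matches the 50 ohm source
best = max(range(1, 201), key=lambda r: delivered_power(2.0, 50.0, float(r)))
assert best == 50
```

Any mismatch — higher or lower than 50 Ohms — delivers less power to the load, and in RF the undelivered energy is reflected back, which is what causes the heating and arcing mentioned above.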
Now we said 1 Watt of power. In RF circuits, Watts or similar units are rarely used.
We use what is called the decibel system. It is a logarithmic system to show the ratio between two powers, i.e. 10*log( ratio ).


By definition, a decibel is 10*log( power ratio ); since power is proportional to the square of voltage, this works out to 20*log( voltage ratio ) when the impedances are equal.

E.g. suppose there is an amplifier which amplifies the signal voltage 25 times. We represent the voltage gain in dB as

Image


Instead, suppose the power ratio was 25; then we get

Image

That explains how relative ratios are shown in dB. But we must also be able to define some absolute levels. ( A ratio of 25 can mean 1 W amplified to 25 W, or 100 W amplified to 2500 W .. it is only a ratio.
We said earlier that the carrier amplitude is 1 V. Now this is an absolute value. You can't treat 0.2 V as 1 V or 20 V as 1 V. It is an absolute value, 1 Volt. )
In such cases we have to define the ratio with respect to some fixed ( absolute ) reference level.
E.g. 5 V is 5 times 1 V, so it is computed as 20*log( 5/1 ), and since it is defined w.r.t. a 1 V reference level it is called dBV. So 5 V = 13.98 dBV.
Some other reference decibels that we regularly come across are:
Image
A common approximation used for quick calculation: a power ratio of 2 ( double or half the power ) = 3 dB ( actually it is 3.010 ). Some more quickies: 10 times the power = 10 dB, 2 times the voltage = 6 dB, 10 times the voltage = 20 dB.
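These dB conversions are easy to wrap into a few helper functions; a sketch ( function names are my own ):

```python
import math

def db_power(ratio):
    """Power ratio expressed in dB: 10*log10(ratio)."""
    return 10 * math.log10(ratio)

def db_voltage(ratio):
    """Voltage ratio in dB (equal impedances assumed, so power goes as V^2)."""
    return 20 * math.log10(ratio)

def dbm(power_watts):
    """Absolute power in dBm, i.e. referred to 1 mW."""
    return 10 * math.log10(power_watts / 1e-3)

assert round(db_voltage(25), 2) == 27.96   # the 25x voltage-gain example
assert round(db_power(25), 2) == 13.98     # the 25x power-ratio example
assert round(db_power(2), 3) == 3.01       # 'double the power = 3 dB' quickie
assert round(dbm(1), 1) == 30.0            # 1 W source = 30 dBm
```

The quickies follow directly: doubling the power twice ( 4x ) is 3 + 3 = 6 dB, and in the log domain gains along a chain simply add instead of multiplying.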

PSK, BPSK, QPSK Modulation systems

In an earlier post we have seen that the carrier has the form A*sin(ωt+ɸ), where ɸ, the phase, specifies where in its cycle the oscillation is at t = 0. The adjacent figure shows four possible waveforms shifted in phase by 90 deg from one another. The thick line on the left is the start-time indicator ( T=0 ); one can see how the four waveforms start at different phases at T=0.
Image
This type of modulation, where the Phase of the carrier is changed as per the digital value of the Baseband, is called Phase Shift Keying ( PSK ). It offers various possibilities and is therefore most preferred where high data rates are involved. The two most common PSK schemes are Binary Phase Shift Keying ( BPSK ) and Quadrature Phase Shift Keying ( QPSK ). We will go into some detail about these two.

How can we generate this BPSK modulation? The figure on the right symbolically shows a BPSK modulator.
Image

The phase of the carrier is changed as per the input value of the Baseband, resulting in two types of outputs:
Image

A combined graph of the Baseband and the Modulated carrier is shown below for the baseband data stream '01101110100'.

Image

Notice that whenever the BB changes its state from 0 to 1 or 1 to 0, the carrier changes phase by 180° ( see expanded view ).
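The 0/180° switching can be sketched in code. This toy baseband-to-BPSK mapper is illustrative only — the samples-per-bit and carrier-cycles-per-bit values are arbitrary choices, and the 0→phase 0 / 1→phase 180° assignment is one common convention:

```python
import math

def bpsk_waveform(bits, samples_per_bit=16, carrier_cycles_per_bit=2):
    """Map each baseband bit to a carrier segment:
    bit 0 -> carrier phase 0, bit 1 -> carrier phase 180 degrees."""
    out = []
    for bit in bits:
        phase = math.pi if bit == 1 else 0.0
        for s in range(samples_per_bit):
            t = s / samples_per_bit     # time within this bit period
            out.append(math.sin(2 * math.pi * carrier_cycles_per_bit * t + phase))
    return out

wave = bpsk_waveform([0, 1, 1, 0, 1])
# at every 0<->1 transition the carrier flips sign (a 180 degree phase jump),
# e.g. the same sample position in a 0-bit and a 1-bit segment are negatives:
assert abs(wave[1] + wave[17]) < 1e-9
```

Plotting `wave` against the bit stream reproduces the combo graph above, with the phase flips visible at every transition.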

Instead of just 0 and 180 we can have 4 waveforms spaced at 0, 90, 180 and 270 degrees. But there is a small problem. In BPSK we had 0 or 1 as the two states of the Baseband, but for QPSK we require 4 states. How do we do that? We now use four states, each consisting of a pair of bits, viz 00, 01, 10 and 11. Each pair is called a symbol, and for each symbol there is a distinct state of the output carrier, as plotted in the next diagram.
Image


For generating these symbol codes an encoder is used, which splits the incoming serial data into two streams, I and Q ( abbreviations for In-Phase and Quadrature ), which are applied to the two halves of the modulator. At the receiving end a special decoder combines the demodulated I and Q data streams back into a single serial data output which is a replica of the original data. The encoder actually serves another very important function, that of a security key. The encoding is done using some algorithm, and one can keep one's data secure by keeping that algorithm secret. A corresponding decoding key is required at the receiving end, and the data can be reproduced only by using that key.

We can imagine a QPSK modulator to be made up of two identical BPSK modulators fed with quadrature-phased carriers. One gives 0/180 phase outputs and the other gives 90/270 phase outputs; these are connected to an adder which combines the two BPSK signals.

Image
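The bit-pair-to-phase idea can be sketched as a simple lookup. The particular pairing of symbols to phases below is an illustrative Gray-coded choice ( adjacent phases differ in only one bit ), not a mandated standard mapping:

```python
# hypothetical Gray mapping of bit pairs (symbols) to carrier phases in degrees
SYMBOL_PHASE = {(0, 0): 0.0, (0, 1): 90.0, (1, 1): 180.0, (1, 0): 270.0}

def qpsk_phases(bits):
    """Split a serial bit stream into I/Q pairs (symbols) and return the
    carrier phase transmitted for each symbol."""
    assert len(bits) % 2 == 0, "QPSK consumes two bits per symbol"
    return [SYMBOL_PHASE[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# 8 bits -> only 4 phase states on air: QPSK carries two bits per symbol,
# which is how it doubles the data rate of BPSK in the same bandwidth
phases = qpsk_phases([0, 1, 1, 0, 1, 1, 0, 0])
assert phases == [90.0, 270.0, 180.0, 0.0]
```

Gray coding is the usual choice because a receiver that mistakes a phase for its nearest neighbour then corrupts only one of the two bits.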


In our next dose we will cover demodulators and some other communications topics.
SriKumar
BRF Oldie
Posts: 2246
Joined: 27 Feb 2006 07:22
Location: sarvatra

Re: Fundamentals on Modulation

Post by SriKumar »

Interesting post. I have a few questions, if you don't mind.
SSSalvi wrote: An important property of BB in digital case is that it will be generated at a fixed frequency called Data Rate.
By frequency, I assume you mean the 'pulse rate' at which the 'digits' are generated. I assume it is not the frequency of the carrier sine wave ( as in freq. = velocity/wavelength )?
SPECTRUM AND BANDWIDTH:
This representation of Amplitude versus Frequency is called as Frequency Spectrum. A practical Frequency spectrum as seen on an instrument called Spectrum Analyzer is shown in photograph.
Could you also define 'bandwidth', for completeness. If it was defined, I missed it.
That was about a clean sinewave. What does the spectrum of a square wave look like?
As I understand it, to modulate information onto a carrier wave, all we are changing is the phase. So why do we need a square waveform, which takes extra effort in generating multiple sine waves and superposing them? Would it not be simpler to use just a simple sine wave and modify that alone?
PSK, BPSK, QPSK Modulation systems: In BPSK we had 0 or 1 as the two states of Baseband but for QPSK we require 4 states.
I did not understand this line. Are not the states automatically guaranteed when we use 2 signals as inputs? Taking a step back, as I understand it, using BPSK means that we have one carrier signal modulated into 2 states: 0 and 1. If we use QPSK, we are using 2 input signals, each with 2 states, and we get 4 states right there. The advantage of QPSK is that twice the info can be conveyed *because* two input signals are being used? Is this a fair summary?
We can imagine a QPSK modulator to have been made up of two identical BPSK modulators fed with quadrature phased carriers for modulation. This results in one giving a 0/180 phase outputs and the other giving 90/270 phase outputs which are connected to an adder which combines both the BPSKs.
Along these lines, could we use 2 signals, with phases broken at every 45 degrees?
General question about detecting phase change (machines vs. humans): The human ear can detect a change in frequency (pitch) and amplitude (loudness). Can it detect a change in phase? Your posts are quite detailed and this is appreciated. Links posted in the post are also appreciated.
SSSalvi
BRFite
Posts: 785
Joined: 23 Jan 2007 19:35
Location: Hyderabad

Re: Satellite communications

Post by SSSalvi »

^^^
Thanks and appreciate reading carefully.
1. It was an oversight to call the BB a frequency. BB is always a Rate. ( Not so fortunate .. the edit tab was still available when I read your comments, and I corrected the sentence like this:

" An important property of BB in digital case is that it will be generated at a fixed rate called Data Rate. Why do we not use the word frequency although the data has a fixed bit width? It is because by frequency we understand a repeating phenomenon at a particular fixed interval. But that is not the case here .. data will be like 01110010011100000010100110101.. so there is no repetitive pattern at a fixed interval still the new data 1 or 0 comes at a fixed interval ( bit rate ). So it is called Data Rate and not Data Frequency. "

But by the time I tried to post it the system did not allow me to do so. So the error remains in the post, but my corrective sentence is reproduced above. )
2. What the modulated output looks like was not discussed, hence Bandwidth was also not touched upon. Will cover in the next post.
3. The sine wave and square wave spectra were shown just to highlight the difference between the two. It has nothing to do with using a square wave as the carrier. The carrier is always a sine wave whose phase is modified. A square wave is used as the BB because of the advantages it has ( like what you have pointed out: doubling the quantum of data that can be sent using the same bandwidth ).
4. Using two independent data streams for QPSK is possible provided both streams are generated with the same clock, because the two streams have to be synchronous. ( Otherwise phase modulation of the carrier at exactly 0, 90, 180, 270 will not happen. )
Secondly, the data input is a single input. That single input is encoded into two parts, I and Q, using an encoder, for feeding the two BPSK modulators. In fact the encoder takes two sequential bits at a time and encodes them. One more important reason for encoding: suppose there is a long stream of 1s or 0s; then the modulator output will remain static. There are rules which do not allow such conditions ( again the reason is technical .. such fixed carriers can cause interference to other users ). The encoders use XOR circuitry on consecutive bits of the input stream, which ensures that you do not get long runs of repetitive 1s or 0s in either the I or Q outputs. Such circuits generate what is called a Pseudo Random Sequence, which acts like the Energy Dispersal function used in analog modulation. But all these topics are to be covered later.
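The pseudo-random-sequence idea can be sketched with an additive scrambler: the data is XORed with the output of a linear feedback shift register ( LFSR ). The 15-bit register and tap positions below are an illustrative choice in the spirit of the energy-dispersal scramblers used in practice, not any specific standard's exact arrangement:

```python
def lfsr_scramble(bits, taps=(15, 14), state=0x7FFF):
    """Additive scrambler: XOR the data with a pseudo-random bit sequence
    generated by a 15-bit LFSR. Running the same function again with the
    same initial state descrambles (XOR with the same sequence twice)."""
    out = []
    for b in bits:
        # feedback bit = XOR of the tapped register stages
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        out.append(b ^ fb)                     # scrambled output bit
        state = ((state << 1) | fb) & 0x7FFF   # shift register forward
    return out

# even a long run of 0s comes out with both 1s and 0s present,
# so the modulator output never sits static on one phase
scrambled = lfsr_scramble([0] * 32)
assert 0 < sum(scrambled) < 32
```

Because the keystream depends only on the register state and not on the data, applying the same scrambler at the receiver recovers the original stream exactly.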
5. The 45-degree phase shift occurs in the Carrier, not the Data, so the question of 'using 2 signals ..' does not arise.

One of the most complex algorithms was used by Mr. GOD for human ears. With two ears you should only be able to discriminate LEFT or RIGHT, yet you can identify a sound source from any direction. This is achieved using the reflections from the various undulations of the outer ear and the phase difference in the sound signal between the two ears.

And remember there is a specific algorithm for each individual because everybody's outer ear is different.

Most of the solutions to my complex problems have come from asking the question: is there a similar problem in nature, and how did nature solve it?
SSridhar
Forum Moderator
Posts: 25109
Joined: 05 May 2001 11:31
Location: Chennai

Re: Satellite communications

Post by SSridhar »

johneeG, I am deleting your post as being irrelevant for this thread.
johneeG
BRF Oldie
Posts: 3473
Joined: 01 Jun 2009 12:47

Re: Satellite communications

Post by johneeG »

SSridhar wrote:johneeG, I am deleting your post as being irrelevant for this thread.
Saar,
I can understand your deleting the post. My intention in posting it was that I hoped someone more knowledgeable would point out the flaws in what they are saying. Personally, I didn't know what to make of it. The alternate theory of using radio waves sounded convincing to me. Yet, I cannot reconcile the idea that satellites are fake. May I request you to move that post to some appropriate thread instead of deleting it.

Maybe you could let the post stay just for some more time to allow it to be refuted. Then you can delete it. Thanks.