What Does Magnitude Mean In Astronomy

What does magnitude mean in astronomy?

In the night sky, some stars appear bright while others are dim. An object's distance is a critical variable in how bright it appears to us.

The brightness of a celestial body as seen from Earth is expressed as its apparent magnitude. Its absolute magnitude, by contrast, is the brightness it would have at a standardized distance of 10 parsecs, or 32.6 light-years.
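The 10-parsec convention can be written as a formula, the distance modulus: m − M = 5 log10(d / 10), where m is apparent magnitude, M absolute magnitude, and d the distance in parsecs. A minimal sketch (the Sun's figures here are approximate):

```python
import math

def absolute_magnitude(apparent_mag, distance_parsecs):
    """Convert apparent magnitude to absolute magnitude using the
    distance modulus: M = m - 5 * log10(d / 10 pc)."""
    return apparent_mag - 5 * math.log10(distance_parsecs / 10)

# The Sun: apparent magnitude about -26.74 at a distance of 1 AU
# (roughly 4.848e-6 parsecs). Its absolute magnitude comes out near +4.83,
# i.e. the Sun would be a faint naked-eye star from 10 parsecs away.
print(round(absolute_magnitude(-26.74, 4.848e-6), 2))  # 4.83
```

Rearranging the same relation also lets you recover distance when both magnitudes are known, which is the basis of "standard candle" distance measurements.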

Measuring luminosity, the total amount of energy a body emits, is another way to calibrate celestial bodies. The magnitude scale itself, by comparison, is a rudimentary system.

Today, we have more advanced tools at our disposal that make these measurements far more precise and accurate.

The concept of apparent magnitude and absolute magnitude

With Earth as the reference point, apparent magnitude alone cannot tell us how luminous stars really are: what you measure depends on your location as the observer.

Different locations yield different measurements. A nearby star of modest luminosity can appear quite bright, while a far more luminous star at a greater distance may appear faint. The idea is simple, but it is central to how astronomical magnitude is measured.

What interests astronomers is the intrinsic brightness. That makes it essential to adopt a convention that removes the effect of distance on measured intensity.

The solution is the absolute magnitude scale, which serves as the principal basis for comparing different stars.

The total energy output of a star, measured in watts, is known as its luminosity.

For instance, the luminosity of the Sun is about 4 × 10^26 watts. Alpha Centauri A, part of the nearest star system to Earth, is roughly 1.5 times as luminous as the Sun.

The history behind astronomical magnitude

The stellar magnitude scale is the legacy of an ancient convention, one that seemed perfectly sensible at the time.

The story begins with the Greek astronomer Hipparchus, whose ranking of stars was straightforward.

He assumed that the brightest stars were also the biggest and the faintest the smallest, so he ranked them from first magnitude (brightest) down to sixth (faintest).

His work remained fundamental for many centuries, until Galileo Galilei brought a change to this basic scheme in the 17th century.

The telescope took the astronomical world by storm, revealing stars fainter than the sixth magnitude. As telescopes advanced, astronomers developed a more robust basis for the concept of stellar magnitude.

By the 19th century, the entire stellar magnitude scale was due for redefinition. Even after such a long evolution, many questions remained unanswered.

Norman R. Pogson, an English astronomer, proposed in 1856 that a difference of five magnitudes be defined as a brightness ratio of exactly 100. The rule was simple and convenient, which led to its speedy adoption.

What makes absolute magnitude a limited concept?

A difference of 5 on the magnitude scale corresponds to a factor of exactly 100 in brightness, so each single magnitude step is a factor of about 2.512. This is the essential relation for measuring the actual luminosity of celestial objects.
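Pogson's rule translates directly into code: a magnitude difference Δm corresponds to a brightness ratio of 100^(Δm/5). A small sketch:

```python
def brightness_ratio(delta_mag):
    """Pogson's rule: a magnitude difference of delta_mag corresponds to
    a brightness ratio of 100 ** (delta_mag / 5), about 2.512 per step."""
    return 100 ** (delta_mag / 5)

print(brightness_ratio(5))            # 100.0 (five magnitudes = exactly 100x)
print(round(brightness_ratio(1), 3))  # 2.512 (one magnitude step)
```

Note that smaller magnitude numbers mean brighter objects, so a star of magnitude 1 is about 2.512 times brighter than a star of magnitude 2.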

You cannot state how intrinsically bright a star is without absolute magnitude. There are, however, more technical complications in practice.

Astronomical instruments are a significant source of such inconsistencies: a magnitude is only meaningful for a stated range of wavelengths of light.

The emission from stars can range from high-energy X-rays to low-energy infrared radiation. Depending on which band is observed, a star can appear more or less bright.

To overcome this problem, you need to specify the wavelength range being used. How your instrument responds across that range, its sensitivity, is another limiting factor.

With technological advancements, telescope mirrors have improved by a large margin, and measurements made today are considerably more accurate than those from a decade ago.

But there is a subtle paradox here: the brightest stars are among the least studied by astronomers, in part because they saturate sensitive modern detectors.

A recent effort to record their brightness is encouraging: a constellation of satellites called BRITE, short for BRIght Target Explorer, which monitors the brightness variations of the brightest stars.

Variability of the stars

Most stars that we know of have a fairly stable brightness; more than 100,000 such stars appear in astronomical archives today.

Variable stars are classed as either extrinsic or intrinsic. The tricky part of this classification is that the brightness of all stars changes with time to some degree.

To put things in perspective, Polaris, the North Star, appears to have brightened noticeably since ancient observations were recorded. Polaris belongs to a family of pulsating stars called Cepheid variables.

These stars are characterized by high luminosity and short, regular pulsations. The period of this variation helps astronomers deduce the distance between Earth and the star in question.
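The Cepheid route to distance can be sketched in a few lines: the pulsation period gives the star's absolute magnitude via a period-luminosity relation, and the distance modulus then yields the distance. The coefficients below are an illustrative approximation of such a relation, not a definitive calibration, and real measurements also need corrections (for example, for interstellar extinction):

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    """Sketch of Cepheid distance estimation.
    Step 1: absolute magnitude from an assumed period-luminosity relation
            M = -2.43 * (log10(P) - 1) - 4.05  (illustrative coefficients).
    Step 2: distance from the distance modulus m - M = 5 * log10(d / 10 pc)."""
    abs_mag = -2.43 * (math.log10(period_days) - 1) - 4.05
    return 10 ** ((apparent_mag - abs_mag + 5) / 5)

# A hypothetical Cepheid pulsing with a 10-day period, seen at
# apparent magnitude 10, comes out at roughly 6,500 parsecs.
print(round(cepheid_distance_pc(10, 10)))
```

The key point is that the period is distance-independent, so it anchors the intrinsic brightness; comparing that with the observed brightness is what reveals the distance.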

Other intrinsic variables include cataclysmic and eruptive stars, while extrinsic variables include rotating stars and eclipsing binaries.

Recommended Reading:


  1. Sky and Telescope
  2. Love the Night Sky
  3. e-education.psu.edu