A stellar color may be numerically defined by subtracting a star's magnitude in one filter from its magnitude in another, for example U–B or B–V. Because magnitudes are logarithmic, these differences measure the ratio of ultraviolet to blue light and of blue to visual light, respectively. The magnitude scale also runs backwards: a large positive B–V means the B magnitude exceeds the V magnitude, so the star is fainter in blue than in visual light and therefore appears red. Thus numerically large positive values of U–B and B–V (roughly 1.6–2.0) indicate red (cool) stars, while values near 0.0 or negative indicate blue (hot) stars. Photometric colors are often used in place of surface temperatures or spectral types for stars.
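The link between a color index and a flux ratio follows from the logarithmic magnitude scale. A minimal sketch, using the standard Pogson relation (the example color values are illustrative, not from the text):

```python
def flux_ratio(color_index: float) -> float:
    """Flux ratio F1/F2 implied by a color index m1 - m2.

    From the Pogson relation m1 - m2 = -2.5 * log10(F1 / F2),
    it follows that F1 / F2 = 10 ** (-0.4 * (m1 - m2)).
    For B - V, this gives the ratio of blue to visual flux.
    """
    return 10.0 ** (-0.4 * color_index)

# Illustrative values: a red star with B - V around +1.6 emits only
# about 23% as much blue light as visual light, while a hot star with
# a negative B - V emits more blue light than visual light.
print(f"{flux_ratio(1.6):.2f}")   # red star: blue flux well below visual
print(f"{flux_ratio(-0.3):.2f}")  # blue star: blue flux exceeds visual
```

Note that a color index of exactly 0.0 corresponds to equal flux in the two bands, which is why values near zero mark the transition from red to blue stars on this scale.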