The effectiveness standard most of us associate with sunscreens — the SPF, or sun protection factor — measures only how well they block UVB rays. That’s because scientists used to believe UVB rays acted alone in causing skin cancer.
UVB rays cause sunburn. So to measure how well a sunscreen blocks UVB rays, testers observe a group of people wearing the sunscreen and a group not wearing it, and time how long each group can stay in the sun before starting to turn red. The SPF rating is the ratio of the first group’s average time to the second group’s. For example, if you use a sunscreen with SPF 15, it should theoretically take about 15 times longer for you to start to burn than it would without the sunscreen. Of course, the SPF assumes that you apply enough sunscreen and that it doesn’t rub or rinse off.
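The ratio described above can be sketched in a few lines of code. This is a minimal illustration with made-up numbers, not an actual testing protocol:

```python
def spf_rating(protected_minutes, unprotected_minutes):
    """SPF is the ratio of the average time-to-redness with
    sunscreen to the average time-to-redness without it."""
    return protected_minutes / unprotected_minutes

# Hypothetical averages: testers start to redden after 10 minutes
# unprotected, but after 150 minutes with the sunscreen applied.
print(spf_rating(150, 10))  # 15.0

# Read the other way: an SPF 15 product should in theory let
# someone who burns in 10 minutes stay out about 15 * 10 minutes.
print(15 * 10)  # 150
```

In practice the relationship is only approximate, since real-world application thickness, sweating, and water exposure all reduce the effective protection.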
But scientists now know that UVB rays are not the only cancer causers. UVA rays can do dirty work of their own. So a rating that measures only how well a sunscreen blocks UVB rays can’t tell the whole story.
Historically, there has been no universally accepted UVA measure, but the U.S. Food and Drug Administration has established a “broad spectrum” test meant to determine whether a sunscreen provides UVA protection proportional to its UVB protection. By “proportional,” the FDA means that if one sunscreen’s UVB blockage is better than another’s, its UVA blockage must be better too, and to the same degree.
Starting in December, new FDA rules are scheduled to go into effect that rely on the broad spectrum test. Sunscreens that pass will carry a Broad Spectrum SPF rating. If the Broad Spectrum SPF rating is 15 or higher, the product can claim to reduce the risk of skin cancer and early skin aging for anyone who uses it as directed and together with other sun protection steps.
Sunscreens that don’t pass the broad spectrum test or that earn a rating of less than 15 will carry a “skin cancer/skin aging alert” saying they have only been shown to help prevent sunburn, not skin cancer or early aging.