The last time you saw your credit score, it was probably a number between 300 and 850. And while there are exceptions to that range, all general-use credit scoring models are scaled so that every score is a three-digit number. Have you ever wondered why scoring models were designed that way?
Credit scoring models can be scaled any way the developer chooses. They can have ranges of 1-10, 0-100, or 1-1,000. A model could even use an alphabetical range, A to Z for example. It has become an industry standard that higher scores indicate better creditworthiness, or lower risk for the lender, but no law or regulation requires that, either. Everything about a scoring model's range is at the developer's discretion, but developers have learned that some degree of standardization is important if they want anyone to use their scoring models.
Credit score models are scaled to three digits for two reasons. The first is, frankly, legacy: credit scores have always been three-digit numbers, so automated application-processing systems evolved to read three-digit values, and credit scores therefore do not typically exceed 999.
The second reason credit score models are built to be three digits is flexibility: it gives the model developer room to set the meaning and precision of each score value. Behind each numeric credit score is a mathematical quantity known as probability of default (PD), the statistical likelihood that a consumer will default, meaning go 90 days or more past due on a loan payment, within the next 24 months. PD is expressed as a percentage ranging from zero to 100. Credit scoring models use statistical analysis to identify patterns of credit behavior in each consumer's credit file that correlate with probability of default, and then rank-order consumers into credit-score bands according to statistical risk.
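The article doesn't specify how a model turns a probability of default into a number on a 300-850 scale, but a common textbook approach is log-odds scaling, where each doubling of the odds of repayment adds a fixed number of points. The sketch below uses that approach with entirely hypothetical anchor values (base score, base odds, and points-to-double-odds); no bureau's actual parameters are shown.

```python
import math

def pd_to_score(pd_default, base_score=600, base_odds=50, pdo=20,
                lo=300, hi=850):
    """Map a probability of default (0-1) onto a 300-850 score.

    Illustrative log-odds scaling only: the anchors (base_score,
    base_odds) and points-to-double-odds (pdo) are hypothetical.
    """
    # Odds of repaying rather than defaulting.
    odds = (1 - pd_default) / pd_default
    # Points added to the score for each doubling of the odds.
    factor = pdo / math.log(2)
    # Offset chosen so that base_odds maps to base_score.
    offset = base_score - factor * math.log(base_odds)
    # Clamp to the model's published range.
    return round(min(hi, max(lo, offset + factor * math.log(odds))))
```

Under this scaling, a consumer with a 1% PD lands well above one with a 20% PD, preserving the rank-ordering the article describes: lower statistical risk, higher score.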
Lenders use reference tables known as performance charts to interpret credit scores. Performance charts characterize the relationship between credit-score intervals and corresponding PD values. Credit score bands at the top of the range (e.g., 830 to 850) have low PDs, indicating that consumers with high scores are unlikely to default. Credit score bands at the bottom of the range (e.g., 300 to 320) have high PDs, indicating that consumers with low scores have a high likelihood of defaulting. The number of units between the largest and smallest possible score in a range determines the precision of the score and the performance chart.
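A performance chart is essentially a lookup table from score band to observed PD. The band boundaries and PD values below are made up for illustration; real charts are built from a lender's or bureau's own outcome data.

```python
# Hypothetical performance chart: each row maps a score band
# (inclusive bounds) to the probability of default observed for
# consumers in that band. Values are illustrative only.
PERFORMANCE_CHART = [
    (300, 579, 0.38),   # lowest bands: high PD
    (580, 669, 0.18),
    (670, 739, 0.08),
    (740, 799, 0.03),
    (800, 850, 0.01),   # top bands: very low PD
]

def pd_for_score(score):
    """Return the PD associated with the band containing `score`."""
    for low, high, pd_default in PERFORMANCE_CHART:
        if low <= score <= high:
            return pd_default
    raise ValueError(f"score {score} is outside the 300-850 range")
```

For example, `pd_for_score(845)` returns a far lower PD than `pd_for_score(310)`, which is exactly the rank-ordering property lenders rely on when pricing risk.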
It turns out that the 550 increments in the familiar 300-850 scale are just about ideal for characterizing the country's 300 million credit-eligible consumers. A scale with fewer increments, say 1-10 or even 1-100, would lack sufficient granularity for meaningful interpretation, while a scale with significantly more increments, say 1-1,000, would be overkill, offering more granularity and complexity than lenders need.
Lenders want the right amount of meaningful precision: it lets them price risk more accurately, reduces their costs, and can translate into lower fees and interest rates for consumers.