The RSCI Guy
As I've mentioned about 37 times on here (OK, not on here, but 36 times on ALionEye and now one time on ALionEyeBoard), I am an RSCI junkie. Ever since I discovered the Recruiting Services Consensus Index, it's the only high school basketball ranking system I use. Jerry Meyer moved Mychael Henry up to #21 nationally? Let's see where it shakes out when the next RSCI comes out. Because if the other ranking services don't do the same, Henry's RSCI number will settle back down to where everyone else has him ranked.
The concept is simple (and brilliant). Take all of the different top-100 lists, assign a 1-100 point system to them, and find a consensus Top 100. I'm so addicted to it now that I get crazy nervous when our players are right on the edge of either making it or missing it. The guy who runs the website updates it once in the fall (after the all-important rising-senior summer) and then once again when the final rankings come out in the spring. So when incoming freshman Austin Colbert was ranked #98 in the fall rankings, I was biting my nails hoping he'd stay in the top 100 when the final rankings came out.
Why? No idea. It's loosely based on the fact that the more top-100 players we have, the more basketball games we'll win. But in the grand scheme of things, it really doesn't matter if Colbert is #98 or #102 (he did fall out of the final rankings). What matters is that we brought in a player who was right on the fringe of the top 100 and should be a solid player for us down the road. Oh, and I can also win more internet arguments if we have more RSCI players.
Given all of that, imagine my glee to get an email from the guy who runs the RSCI. His name is Jeff Crume, and he emailed to thank me for frequently linking the site. I emailed back and demanded an interview. He emailed back and was all "hey, man, I'll do the interview, but why so demanding?" and I was like "YOU LET AUSTIN COLBERT FALL TO #102".
None of that happened, of course. Well, he did email me to thank me for linking the site, and I asked for an interview, and he graciously said yes. So here it is. Unmasking The RSCI Guy.
OK, first question: what led you to start the RSCI in 1998?
I've been interested in college basketball recruiting for ages. One of the perplexing problems for me has always been that recruiting rankings could vary so wildly on certain players, so it seemed to me that what would be really great is if you could get all the best experts in a room and force them to come to a consensus. That way you limit the effect of the outliers and get a more accurate prediction of a player's capabilities.
A lot of people would simply say that you should take the average of all the rankings, but it's not that simple. Averages work fine if every player is ranked by every source that you consider. That works for most players at the top end of the scale, but by the time you hit the middle of the top 100 you will find that some players are not considered top-100 by all the scouts. That means you either take the average of only the sources that do have him in the top 100, which totally ignores the fact that he might really be considered #200 by the "missing" source, or you arbitrarily assign him a #100 ranking, which again runs afoul of his true ranking. In other words, there's no way to do it with averages without skewing the results.
Then one day I saw in one of The Sporting News annual college basketball issues that they had come up with a system that assigned points in reverse order of ranking, where #1 got 100 points all the way down to #100, who got 1 point. This way, if a player was out of the top 100 for a source he got a zero, and the guys who got higher rankings from more sources ended up at the top -- which is what you want.
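The scoring scheme he's describing is simple enough to sketch in a few lines of Python. This is just a toy illustration of the idea, not Crume's actual code, and the player names and lists are made up:

```python
def consensus(rankings, top_n=100):
    """Combine several top-N lists into one consensus ordering.

    rankings: a list of dicts, one per source, mapping player name -> rank.
    Rank #1 earns top_n points, rank #top_n earns 1 point, and a player
    missing from a source's list earns 0 points from that source.
    """
    points = {}
    for source in rankings:
        for player, rank in source.items():
            if 1 <= rank <= top_n:
                points[player] = points.get(player, 0) + (top_n + 1 - rank)
    # Highest total points first -> the consensus order
    return sorted(points.items(), key=lambda kv: -kv[1])

# Three toy "top-100" lists: Smith is near the top everywhere,
# while Jones is left off one source's list entirely (so: 0 points there)
lists = [
    {"Smith": 1, "Jones": 40},
    {"Smith": 1, "Jones": 35},
    {"Smith": 2},
]
print(consensus(lists))  # [('Smith', 299), ('Jones', 127)]
```

Note how the zero for a missing player penalizes him naturally, instead of forcing the arbitrary "pretend he's #100" fudge that a plain average would require.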
That got me to thinking... this system works mathematically, but the problem is that no one was updating it throughout the season. There was just a single snapshot based upon summer rankings, which can't account for what a player accomplishes in his senior year nor, even more importantly, what he does in the postseason all-star games.
So, I decided to create RSCI. I wanted these rankings for myself and figured that if I had gone to all the trouble to compile them, why not let others enjoy them as well. I've been doing it ever since.
What leads to your decision on which experts to use? I know you dropped Clark Francis one year (I believe it was because he didn't rank prep school kids or something, which always threw off your rankings). How much has that evolved in the 16 years of the RSCI?
The sources have definitely changed over the years. Sometimes this is due to the fact that some simply cease to exist, which is always a risk for any long term venture like RSCI especially when it is based on a fairly transient medium like the Internet. In some cases, the scouts doing the rankings have moved to another organization and, if they were good, I've tried to follow them.
As you mentioned, the sources that don't include 5th year or prep school players, like Clark Francis with Hoop Scoop, have always been problematic given the way the RSCI results are compiled. In fact, I'd guess that the most frequent complaint I got for years was the fact that some of the top 5th year guys had artificially low RSCI rankings because of this. I've heard all sorts of approaches to adjust for this but none really work so, in the end, it was just better to leave those sources out and let the RSCI system remain completely objective rather than introduce a subjective special case into the formula that would surely invite yet more criticism.
In at least one other case, I dropped a source because their rankings were coming out so much later than all the others that it was impacting my ability to get out results in a timely manner. I tried communicating with this source on a number of occasions but never got a response so I just had to make the decision to move on. Finally, in a few cases when it seemed that a source was consistently far outside the mainstream thought, it raised questions as to their reliability. The beauty of the RSCI system is that no single source can skew things too much but I just felt that there were a couple of cases where there were sources that really weren't very reliable so I dropped them.
I've always taken note of your "+/-" column, because it seems to capture late risers very well. Rajon Rondo, for example, jumped 27 spots on your final list back in 2004. Danny Green, James Harden, Andre Iguodala - all players that came on late in all the rankings.
What was your reasoning for adding that column?
Exactly. Trends are important in my opinion. A guy who started off the season as a top-10 player but ended it as a top-50 one might be someone everyone will need to adjust their expectations for, because his weaknesses might have been exposed, the hype is losing steam, or the rest of the class has caught up and surpassed him. By the same token, late bloomers may be just beginning to show you what they are capable of, so you might want to keep an eye on them. There can be some anomalies with this statistic, though. If a guy got injured, chose not to attend some of the summer league events, or reclassified, a sudden change in this number might not mean anything. Still, I feel my job through RSCI is to put the raw, unvarnished data out there and let the fans decide for themselves what it all means. That keeps my bias out of the equation and creates great fuel for message board debates.
I know your stuff is discussed on all of the Illini message boards that I visit. In fact, in bios on the official Illini website, they refer to your site. For example, Tracy Abrams' bio says "Ranked No. 69 on the recruiting services consensus index, including No. 58 by Rivals.com and No. 62 by Prep Stars..." Do you see that a lot? Is your site visited by fans from coast to coast?
Yes, in fact, I see that sort of stuff all the time and get a big kick out of it. This whole project started as just a guy and a very amateurish-looking web site but has grown beyond my wildest dreams. I've had major newspapers and sports magazines make use of the rankings. Lots of other schools' message boards debate endlessly the merits of the rankings. I've even seen a few official school web sites which include RSCI rankings in the bios for players as well as coaches. Then, of course, there are the knockoffs. I've seen a few sites try to emulate RSCI. They say imitation is the sincerest form of flattery, so I guess I should be flattered.
I'm curious as to your opinion on the rankings themselves. Fifteen years ago, in my estimation, you were much more likely to see a kid ranked #21 on one list and #96 on another. These days, if one guy has a kid ranked much higher than the others, it seems like he'll adjust his ranking to fall more in line with them. Illinois has had a couple of kids like this. Mychael Henry was at one time #21 on Rivals and ranked as a five-star, but nobody followed suit, so they dropped him to #40 or so (presumably to align better with his average ranking in the 60s). Have you seen this trend, or is it just my imagination? And does that trend help or hurt your numbers?
I think you're right. I suspect, but have no hard facts to support, that one factor could be that players are getting seen more often by more scouts now, so it's harder to fly below the radar as well as easier to get exposed. Also, there has been a good deal of attrition in the ranks of the evaluators themselves, which, if Darwin applies, could result in the survival of the fittest, most accurate scouts. Finally, the fact that more and more of these rankings are easily accessible online by fans means they are also accessible by scouts, who now have a better opportunity to look over each other's shoulders and normalize their results. RSCI might even be indirectly contributing in this area.
OK, last question. How would you group players on your site? I've always had this argument with people on various message boards. We generally came up with this:
Top-35 on the RSCI is almost guaranteed to be an impact player. There's a falloff after that, but players ranked maybe 35-75 are your bread and butter: guys who will stick around rather than declare early for the draft, but who are skilled well beyond the unranked kids. And then, after 75, it's kind of a crapshoot. Lots of busts in the 75-and-up category, which might suggest that kids ranked maybe 75-150 are all generally in the same category.
Having put together this list for 15 years, do you have any "groupings" that you see like that?
I think there is definitely value in grouping players within the rankings. I tend to group players by top 5, top 10, top 25, top 50 and top 100. The fact that each successive grouping is basically twice the size of the previous one accounts for the lesser degree of precision in the rankings as you move down. In other words, I would expect that, in general, there is more difference between the #1 and #11 players than there would be between the #81 and #91 players.
Before the one-and-done rule, a top-5 player was likely headed straight to the NBA. Now he's someone who should be able to be a star almost from day one in college. Top-10s are capable of being All-Americans in their first year (although clearly not all will be). Top-25s should be stars, if not in year one then in year two. Top-50s may or may not be stars but should be impact players, and likely will be there for three or four years. Top-100s still could be impact players, but it may take them a couple of years before they show it consistently. So the ranking can tend to predict not only what the ceiling is for a player but also when you can expect him to impact the game.