I'm curious, in compiling all these lists, how the web page weights reviews written in 1960 to reviews written last week.
They were written by different people for a different audience with different standards. Also, when newer sources like Pitchfork review older music, the older albums get little 20/20-hindsight bumps that the critics are too cautious to give to newer albums.
I'm curious how you can possibly fairly compare reviews of albums released 40 years ago with reviews of new albums on the same scale.
Also, about Metacritic ratings: they have the flaw of being overly influenced by single outliers. An album will sit at an 80 or so, a single publication comes along that really doesn't like it, and suddenly it's at a 72. Does acclaimedmusic.net have some kind of system to control for outliers? (If someone came along tomorrow and said "Pet Sounds is the WORST ALBUM EVER! 0/10!", would it suddenly fall like 50 places?)
Yeah, it kind of bothers me that some of these new songs can get so high. Songs like "Crazy" and "Take Me Out" topped most critics' lists for their years (which are assumed to be just as strong as, say, 1965), but appeared on hardly any all-time lists... The strength of music is just so much weaker nowadays.
Eh. I think most of the very best rock music came from the 60s, but the average quality isn't any lower. Most of the crap from the 60s has just been dismissed and forgotten by now.
Also, radio music is weaker now, because it's based on vocals and breast size instead of instrumentals (which are now mostly shallow dance loops). But radio music is hardly representative of 'music'.
Sorry for not responding at once.
A critic list's weight is, among many other things, a logarithmic function of the number of years between the album/song release and the critic list. Hence, a list from 2007 has a much higher weight than a list from 2002 for an album from 2001, but for an album from 1967 the weight of those two lists is essentially the same.
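The site's actual formula isn't published, so here is just a rough sketch of how a logarithmic age weight with that behavior could look (the function name and constants are my own assumptions, not the real implementation):

```python
import math

def list_weight(release_year, list_year):
    """Hypothetical age weight for a critic list, for one album.

    Grows logarithmically with the gap between the album's release
    and the list's publication: a 1-year gap vs. a 6-year gap differ
    a lot, but a 35-year gap vs. a 40-year gap are nearly identical.
    """
    gap = max(list_year - release_year, 1)  # clamp so log() stays defined
    return math.log(1 + gap)

# Album from 2001: the 2007 list counts for much more than the 2002 list.
# Album from 1967: the two lists weigh essentially the same.
```

For a 2001 album, `list_weight(2001, 2007)` is nearly triple `list_weight(2001, 2002)`, while for a 1967 album the same two lists differ by only a few percent, which matches the behavior described above.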
I agree about the metascore flaw. Instead of calculating an average score, I match every album against every other album. For example, 'Pet Sounds' might be ahead of 'Revolver' in 52% of their "meetings". The AM score is a function of all meeting scores for all pairs of albums. Using this system, it wouldn't matter much if someone came along tomorrow and said "Pet Sounds is the WORST ALBUM EVER! 0/10!" or "Pet Sounds is just above average, 6/10", as long as only a few other albums at AM get a score between 0/10 and 6/10.
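A minimal sketch of that pairwise "meeting" idea (the function name, input format, and details are my own assumptions; the real AM formula combines these percentages with list weights and other factors):

```python
from itertools import combinations

def meeting_scores(critic_lists):
    """For each pair of albums, return the fraction of shared list
    appearances in which the first album (alphabetically) ranked
    above the second. Hypothetical sketch, not the actual AM code.

    critic_lists: iterable of ordered album-name sequences,
    best-ranked first.
    """
    wins = {}      # pair -> times first album ranked higher
    meetings = {}  # pair -> times both albums appeared on a list
    for ranking in critic_lists:
        pos = {album: i for i, album in enumerate(ranking)}
        for a, b in combinations(sorted(pos), 2):
            meetings[(a, b)] = meetings.get((a, b), 0) + 1
            if pos[a] < pos[b]:  # lower index = higher rank
                wins[(a, b)] = wins.get((a, b), 0) + 1
    return {pair: wins.get(pair, 0) / n for pair, n in meetings.items()}
```

Because each list only contributes win/loss outcomes, one extreme rating shifts each pairwise percentage by a tiny amount instead of dragging down an average, which is why a single "0/10" troll review would barely move Pet Sounds.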