1. Subjective perception of frequencies
The human ear isn’t equally sensitive to all sound frequencies. Thanks to evolution, we are far more sensitive to the upper-mid part of the spectrum. According to archeologists, there were no dance clubs in ancient times, so it was much more important for a person to hear a predator’s roar and run away in time than to feel the power of pumping bass… That was a joke. In any case, if we play two tones at the same sound pressure level of, say, 65 dB, but the first source has a frequency of 70 Hz and the second a frequency of 1000 Hz, the second sound will definitely seem louder to our ear, and any online mastering and mixing engineer knows that. Measuring instruments will show that both sources produced the same sound pressure, but our ear has a different opinion. The human ear is historically tuned for voice perception, and audio engineers have exploited this characteristic for a long time: they pay maximum attention to how the vocal sits in the song, since it is the focal point for the human ear.
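This frequency-dependent sensitivity is what A-weighting (standardized in IEC 61672) approximates. As a rough sketch, the snippet below estimates how loud the two example tones from the paragraph above would be perceived; the `a_weighting_db` function and the exact numbers are illustrative, not part of the original article:

```python
import math

def a_weighting_db(f):
    """Approximate A-weighting gain in dB at frequency f (Hz), per IEC 61672."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    # The +2.0 dB offset normalizes the curve so that A(1000 Hz) is ~0 dB.
    return 20.0 * math.log10(ra) + 2.0

spl = 65.0  # same measured sound pressure level for both tones
for freq in (70.0, 1000.0):
    print(f"{freq:6.0f} Hz: {spl} dB SPL is roughly {spl + a_weighting_db(freq):.1f} dB(A)")
```

Running this shows the 70 Hz tone losing on the order of 20–25 dB of perceived level relative to the 1000 Hz tone, which is exactly why the 1000 Hz source "seems louder" even though a meter reads the same SPL for both.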
Besides the bright upper harmonics of the vocal, the upper-mid area also contains many of the mix’s musical components (the bass drum’s snap, the hi-hats’ body, the snare’s punch and more). But let’s get back to hip-hop. Its foundation consists of a rich, low-sounding kick drum and a sub bass, which occupy a lot of space in the mix. Yet, as we remember, what really makes a song seem loud is the opposite side of the spectrum. As a professional engineer, you have to strike a balance between the richness of the low end (as the style requires) and the need to make the track as loud as possible.
You may also want to learn how to better mix vocals into an instrumental.