@Peerless lol
You could explain it like this.
When Gnutella was new, searches were flooded through every participating PC. The problem was that as the network grew, dial-up users and others on low-bandwidth connections couldn't handle the rising traffic, or the overhead got so high that the actual file sharing suffered from it. So, what do you do?
The solution is Ultrapeers: systems with a fast Internet connection and good system performance. The idea is that these fast computers handle most of the search traffic, so the slow users are shielded from it. Only searches that match files shared on a slow PC, or requests coming from such a slow node, use the slow connection. Once a query reaches an Ultrapeer, the connections between that Ultrapeer and other Ultrapeers are very fast. These Ultrapeers then ask only the matching slow PCs, so most of the search traffic that doesn't concern you is outsourced to the Ultrapeers. A rough sketch of that routing decision is below.
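Just to make the idea concrete, here's a minimal Python sketch of that routing decision. It's my own simplification, not the real protocol: real Gnutella uses the Query Routing Protocol, where leaves send hashed keyword tables to their Ultrapeer; the `Ultrapeer`/`Leaf` class names and the plain keyword matching here are made up for illustration.

```python
class Leaf:
    def __init__(self, name, shared_keywords):
        self.name = name
        self.shared_keywords = set(shared_keywords)

    def handle_query(self, keyword):
        # Only reached when the Ultrapeer already thinks we might match,
        # so our slow connection carries almost no useless traffic.
        if keyword in self.shared_keywords:
            print(f"{self.name}: I share something matching '{keyword}'")


class Ultrapeer:
    def __init__(self):
        self.leaves = []     # hundreds of slow "leaf" connections
        self.neighbors = []  # fast links to other Ultrapeers

    def attach_leaf(self, leaf):
        self.leaves.append(leaf)

    def route_query(self, keyword, hops_left=1):
        # Ask only the leaves whose shared files could match;
        # all the other slow nodes never see this query at all.
        for leaf in self.leaves:
            if keyword in leaf.shared_keywords:
                leaf.handle_query(keyword)
        # Flood the query over the fast Ultrapeer-to-Ultrapeer links.
        if hops_left > 0:
            for peer in self.neighbors:
                peer.route_query(keyword, hops_left - 1)
```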
Every Ultrapeer holds hundreds of connections to slow "leaves", so imagine you are a slow node: you just send your request to an Ultrapeer, and that Ultrapeer forwards it to 16 others (no problem, their connections are fast). Now only the slow PCs connected to those Ultrapeers that match a criterion marking them as potential sharers get queried. That way, the slow nodes that aren't expected to have the file, and would otherwise just have to forward the request, are spared. A tiny example of that flow follows.
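And here's how you'd wire the sketch above together. Again, purely illustrative (the node names are invented): a real Gnutella flood is bounded by TTLs and deduplicated by message GUIDs, while here a single hop stands in for that.

```python
# Two Ultrapeers, three slow leaves:
up1, up2 = Ultrapeer(), Ultrapeer()
up1.neighbors.append(up2)

up1.attach_leaf(Leaf("dialup-alice", ["beatles"]))
up2.attach_leaf(Leaf("dialup-bob", ["jazz"]))
up2.attach_leaf(Leaf("dialup-carol", ["techno"]))

# Alice's search enters the network at her Ultrapeer. Only Bob,
# whose shared files match "jazz", is contacted -- Carol's slow
# connection is never touched.
up1.route_query("jazz")
# -> dialup-bob: I share something matching 'jazz'
```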
All in all, you can say it's about sparing slow PCs from high bandwidth use and moving the load to systems that can handle it much better. It's specialization.