Seeing as I have just studied this topic, I feel I should contribute a bit.
Multicast in general requires two steps. The first is the negotiation, whereby one computer advertises itself as a multicast source and N other computers "subscribe" to it (see
http://en.wikipedia.org/wiki/Internet_G ... t_Protocol). Setting this up carries a non-trivial overhead, and each time a sink computer joins or leaves there is further overhead while the routing tree is re-evaluated. The second step is actually sending the data, which is easy once the routing tree has been established.
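To make the "subscribe" step concrete: at the application level it is just a socket option, and setting it is what triggers the IGMP membership report. A minimal Python sketch (the group address and port here are made-up examples):

```python
# Minimal multicast "sink" sketch. Calling IP_ADD_MEMBERSHIP is what makes
# the OS emit an IGMP membership report, i.e. the subscription step above.
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical administratively-scoped group address
PORT = 5007           # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# 4s4s = group address + local interface (INADDR_ANY = let the OS choose).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    # Hosts without a multicast-capable interface may refuse the join.
    pass

# data, addr = sock.recvfrom(1024)  # would now receive packets sent to the group
```

Leaving the group (or the process exiting) triggers the corresponding IGMP leave, which is what forces the routing tree to be re-evaluated.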
ISPs generally prevent multicast sources inside their network because it causes billing problems with peered ISPs (think: I send a single multicast packet out of my ISP, and it gets multiplied 10 times in the next ISP. The next ISP bills my ISP for 10x the traffic, but my ISP can't charge me for 10x the traffic, as I only sent one packet). Increasingly, however, ISPs are allowing multicast sinks inside their network, as it lets them charge extra for the privilege.
Most multicast data is like UDP: it's stateless, with no flow or error control mechanisms. If a packet gets dropped at any point, the sink has no way of informing the source. This isn't a problem with video/audio streams, where you can be sure the next packet will be along soon, and missing a few milliseconds of the stream doesn't affect the end viewer much. They are going to ignore it and keep watching.
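To make the "no way of informing the source" point concrete: a sink can *detect* loss from sequence numbers, but with plain multicast that is all it can do. A toy sketch (the function name is mine):

```python
def detect_gaps(received_seqs):
    """Return the sequence numbers missing from a sorted list of received ones.

    With plain (UDP-style) multicast the sink can detect loss like this, but
    there is no back-channel to request a retransmission, so a video player
    would simply skip over the missing frames.
    """
    gaps = []
    expected = received_seqs[0]
    for seq in received_seqs:
        while expected < seq:   # everything between the last packet and this
            gaps.append(expected)  # one was lost in transit
            expected += 1
        expected = seq + 1
    return gaps

print(detect_gaps([1, 2, 4, 5, 8]))  # [3, 6, 7]
```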
There is a protocol called Pragmatic General Multicast (PGM) which adds TCP-like error control, but it is not an official standard yet and is unlikely to be supported in modern routers. The error control relies on routers 'playing nice' and adds a significant overhead to the source and sink computers in question (the overhead grows linearly with the number of packets lost).
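The NAK-based recovery PGM describes can be sketched roughly like this (a toy model of the idea, not the real wire protocol; the class and method names are mine):

```python
# Toy model of PGM-style NAK recovery: the source buffers what it has sent,
# the sink sends a negative acknowledgement (NAK) per missing sequence
# number, and the source answers with a repair packet if it still has it.
# One round trip per lost packet is where the linear overhead comes from.
class Source:
    def __init__(self, packets):
        self.buffer = dict(enumerate(packets))  # retained for repairs

    def handle_nak(self, seq):
        return self.buffer.get(seq)  # repair data, or None if aged out


class Sink:
    def __init__(self):
        self.received = {}

    def deliver(self, seq, data):
        self.received[seq] = data

    def missing(self, highest_seen):
        return [s for s in range(highest_seen + 1) if s not in self.received]


source = Source([b"p0", b"p1", b"p2", b"p3"])
sink = Sink()
for seq in (0, 1, 3):          # packet 2 lost in transit
    sink.deliver(seq, source.buffer[seq])
for seq in sink.missing(3):    # one NAK per gap
    sink.deliver(seq, source.handle_nak(seq))
```

In the real protocol the intermediate routers are supposed to suppress duplicate NAKs from many sinks, which is what "playing nice" means above.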
Overall, it is my opinion that multicast is not a viable option for DC, even if ISPs fully implemented and allowed it. It is unlikely that more than 2 or 3 clients will need exactly the same bytes from the source at exactly the same time. With conventional TCP-based sharing, each transfer can expand to saturate the link. With multicast, however, the speed is bounded by the slowest link to any of the sink computers, limiting its effectiveness at transmitting (and you really, really don't want to be doing error control, because of the overhead).