Whatever happened to ZLIF?

Site Announcements
Locked
iceman50
Junior Member
Posts: 26
Joined: 10 Jun 2010, 15:10

Whatever happened to ZLIF?

Post by iceman50 » 16 May 2011, 07:36

You know, last night while speaking with FlipFlop he brought up the ADC extension ZLIF, so, me being curious by nature, I began looking into it, and no more than 15 minutes later I had DiCe!++ (my DC++ "mod", if you will, although it's taken on its own identity in my eyes) fully supporting ZLIF.

I began testing in the FlexHub test hub and wouldn't you know it, it worked like a charm. So with all the talk by the DC++ devs about how much more bandwidth-efficient ADC is, why was ZLIF never implemented? It literally took no more than 15 lines or so of code to add to my client. All this talk about saving bandwidth with ADC, and you leave out one of the biggest bandwidth savers? I mean seriously guys, what is up with that (and where the hell is the support in ADCH++?)? If you want people to catch on to a protocol that is so much better than NMDC, then why do you set yourself up for failure by bypassing an extremely useful extension like this? Where's the love for ADC?

For those who do not know what ZLIF is and how awesome it really is, let me give you a basic breakdown. The client announces support by sending ADZLIF in its HSUP on connection. When the hub wants to send data (say, the BINFs of everyone in the hub), it sends the ZON command to the client, and from that ZON onwards everything is compressed. As we all know, compressed data consumes less bandwidth, because it is essentially smaller than the plaintext that would have been sent otherwise (an easy way to think of it is a trash compactor, which crunches all the garbage down to a size much smaller than the original... yes, I know compression is much more complex than that, but you get the idea).

Obviously compression has its disadvantages, mainly CPU usage, but for reference, DC++ added zlib compression for client-to-client transfers back in 2003, so my point is that the CPU usage is so minuscule by today's standards that the benefit far outweighs the drawback.

Anyway, when the hub wants to stop sending compressed data, it sends the ZOF command and communications go back to "normal" (in the sense that the data is uncompressed). The benefit of all this is that it saves bandwidth, usually a lot of it, which is great for hub owners.
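To make the ZON/ZOF dance concrete, here's a rough Python sketch (the framing and helper names are my own simplification, not protocol-exact - in the real extension the ZOF travels inside the compressed stream, which is what this mimics):

```python
import zlib

def hub_send(commands):
    """Hub side (sketch): announce compressed mode with ZON, then push
    the commands - and the terminating ZOF - through one deflate stream."""
    comp = zlib.compressobj()
    payload = "".join(commands).encode() + b"IZOF\n"
    return b"IZON\n" + comp.compress(payload) + comp.flush()

def client_recv(wire):
    """Client side (sketch): once ZON arrives, inflate everything that
    follows and interpret the decompressed bytes as commands until ZOF."""
    head, _, rest = wire.partition(b"IZON\n")
    decomp = zlib.decompressobj()
    plain = decomp.decompress(rest)
    commands = []
    for line in plain.split(b"\n"):
        if line == b"IZOF":
            break  # hub switched back to plaintext
        if line:
            commands.append(line.decode() + "\n")
    return commands

cmds = ["BINF AAAB NIiceman50\n", "BINF AAAC NIQuicksilver\n"]
assert client_recv(hub_send(cmds)) == cmds
```

The round trip at the end shows the point: the wire bytes between ZON and the end of the deflate stream are opaque, but the client gets the exact same commands back out.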

Well, that's pretty much the gist of it; for a more detailed explanation, check the specification under "ZLIB-FULL".

Quicksilver
Member
Posts: 56
Joined: 17 Aug 2009, 21:32

Re: Whatever happened to ZLIF?

Post by Quicksilver » 16 May 2011, 11:22

Two criticisms I have of ZLIF, and why it's not as simple to implement in every client:

1. It breaks layers. By using ZLIF you interact tightly with the layering of the communication stack.
I.e., if we have the following layers:

tcp - encryption - decompression - presentation (char encoding) - commands

then commands interact with something below the presentation layer.

But that's one reason why it may be problematic with some implementations... e.g., with non-blocking IO this becomes harder to handle than with blocking IO.

2. ZLIF is not completely specified. zlib is not zlib... in that there is more than one RFC describing this kind of compression.
In particular, the way data is flushed is not clear in the specification of ZLIF.

iceman50
Junior Member
Posts: 26
Joined: 10 Jun 2010, 15:10

Re: Whatever happened to ZLIF?

Post by iceman50 » 16 May 2011, 12:40

Quicksilver wrote:2. ZLIF is not completely specified. ZLib is not ZLib... as in there is more than one RFC describing this kind of compression.
Well, from reading the wiki word for word, and maybe it's just me, but this looked awfully straightforward to me...
ADC Extensions wiki wrote:must start decompressing the incoming stream of data with zlib before interpreting it
...

But on your first criticism... well, most clients out there are mods of DC++, which already had the majority of the work done, so I won't accept that as an excuse (although in your case with jucy it's understandable, since you don't use any kind of DC++ core whatsoever).

Quicksilver
Member
Posts: 56
Joined: 17 Aug 2009, 21:32

Re: Whatever happened to ZLIF?

Post by Quicksilver » 17 May 2011, 22:23

Yes, in that respect I am one of the few depending on the specification being correct... otherwise I have to reverse-engineer DC++.

zlib knows several different flushing modes;
a good website explaining them is: http://www.bolet.org/~pornin/deflate-flush-fr.html

Being a bit more precise in the spec could help there.

I just find ZLIF ugly to implement, as it forces me to break my layering and gets me closer to a big-ball-of-mud architecture.
If you would just compress everything or nothing, at least the layering wouldn't be a problem... though that would of course cost the hub more memory per user.


Also there is little incentive: Jucy is so seldom in hubs that it doesn't make a difference.

arnetheduck
Newbie
Posts: 8
Joined: 17 Mar 2009, 13:37

Re: Whatever happened to ZLIF?

Post by arnetheduck » 21 May 2011, 10:03

Thanks for the pointer to the flush modes - I remember toying with them but never came to any conclusion.

As far as I can tell, we don't really *need* to specify an explicit flush mode - we need to specify that the sender must flush in such a way that the receiver can decode the full message, and that can be done with a sync, partial or full flush (much as is described for TLS/OpenSSL). That said, I'd probably go for the sync flush if anything... and we don't really need to flush at the end of each command either - if we're sending 10 commands together, only the last needs to be flushed.
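A quick Python sketch of why the flush matters (the command text is made up; zlib's buffering is the point):

```python
import zlib

comp = zlib.compressobj()
decomp = zlib.decompressobj()

msg = b"BINF AAAB NIiceman50\n"

# Feed one command into the deflate stream. zlib is free to buffer it
# internally, so the receiver may get nothing decodable back yet.
buffered = decomp.decompress(comp.compress(msg))

# Z_SYNC_FLUSH pushes everything buffered so far onto the wire on a
# byte boundary, so the receiver can now decode the whole command -
# without terminating the stream the way a final flush() would.
buffered += decomp.decompress(comp.flush(zlib.Z_SYNC_FLUSH))

assert buffered == msg
```

Without that flush call, a short command can sit in the compressor's buffer indefinitely and the peer simply stalls - which is exactly the ambiguity the spec should pin down.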

I'm not sure how you want things to work without breaking layers? What we're doing is renegotiating the transport layer after a connection has been established - as HTTP does (for compression) and as the SSL spec recommends (though we don't do that, to better resemble HTTPS)... it's a clean cut - we turn it on and off, but that's all - implementation-wise you just apply an extra filter in the byte-stream layer, or remove it...
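A minimal sketch of that "extra filter in the byte-stream layer" idea in Python (all names here are hypothetical, just to illustrate the shape):

```python
import zlib

class InflateFilter:
    """One removable stage in the read pipeline: inflates whatever
    bytes pass through it."""
    def __init__(self):
        self._z = zlib.decompressobj()

    def __call__(self, data: bytes) -> bytes:
        return self._z.decompress(data)

class ReadPipeline:
    """Incoming bytes pass through each filter in order before the
    command parser ever sees them."""
    def __init__(self):
        self.filters = []

    def feed(self, data: bytes) -> bytes:
        for f in self.filters:
            data = f(data)
        return data

pipe = ReadPipeline()

# Hub-side stand-in: produce some compressed wire bytes.
c = zlib.compressobj()
wire = c.compress(b"BINF AAAB NIiceman50\n") + c.flush()

pipe.filters.append(InflateFilter())   # on ZON: insert the filter
plain = pipe.feed(wire)                # the parser sees plaintext again
pipe.filters.pop()                     # on ZOF: remove it
```

The command layer never touches zlib; ZON/ZOF only insert or remove a stage in the pipeline, which is the "clean cut" being argued for.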

I would tend to agree that we should not name zlib in the spec - but it's an easy way out, and other implementations (than C/C++) maintain zlib compatibility AFAIK - if you want to put down more precise, vendor-independent language, I'm all for it.

