
Re: Segmented downloading : Why the big deal?

Posted: 03 Jan 2011, 20:08
by arnetheduck
Ah, a misunderstanding then: I was suggesting writing a separate benchmark app in any language (in the end, they all hit the OS), not actually modifying DC++...

Re: Segmented downloading : Why the big deal?

Posted: 07 Jan 2011, 07:53
by Quicksilver
I would not have chosen a separate program.

Especially as I am still not sure how all of this interacts with the cache on the HDD.
I don't know whether we could properly map the behaviour of DC++ onto a simple test program.

Re: Segmented downloading : Why the big deal?

Posted: 07 Jan 2011, 18:10
by Flow84
arnetheduck wrote:Ah, a misunderstanding then: I was suggesting writing a separate benchmark app in any language (in the end, they all hit the OS), not actually modifying DC++...
I have developed a test app.
Please test and give me feedback.
It is written in C# and should give you accurate speed figures and so on.

http://files.flowertwig.org/FlowerBench.7z

/Flow84

Re: Segmented downloading : Why the big deal?

Posted: 09 Jan 2011, 16:26
by Quicksilver
What does the program do? It's not self-explanatory!

Re: Segmented downloading : Why the big deal?

Posted: 23 Jan 2011, 02:01
by Flow84
Quicksilver wrote:What does the program do? It's not self-explanatory!
Sorry for the late answer.
This forum doesn't send out notifications...

It basically reads/writes data from/to disk and shows the speed.
You can set the buffer size, access type and more.

If I understand this thread correctly, it is what you wanted.
You can see the speed impact of different buffer sizes and other settings :)
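For anyone who cannot run the C# binary, a benchmark of this kind is small enough to sketch directly. Below is a minimal, illustrative C++ version of the write side (the function and file names are made up, not taken from FlowerBench): it times how long it takes to push a fixed amount of data to disk through buffers of a chosen size.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Write `totalBytes` to `path` in chunks of `bufSize` and return MB/s.
// Assumes totalBytes is a multiple of bufSize; error handling trimmed.
static double writeSpeed(const char* path, size_t bufSize, size_t totalBytes) {
    std::vector<char> buf(bufSize, 'x');
    FILE* f = std::fopen(path, "wb");
    if (!f) return -1.0;
    auto start = std::chrono::steady_clock::now();
    for (size_t done = 0; done < totalBytes; done += bufSize)
        std::fwrite(buf.data(), 1, bufSize, f);
    std::fflush(f);
    std::fclose(f);
    double secs = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
    return (totalBytes / (1024.0 * 1024.0)) / secs;
}

// Example: compare a 4 KiB buffer against a 512 KiB buffer on the same data,
// e.g. writeSpeed("bench.tmp", 4 * 1024, 64 * 1024 * 1024) vs
//      writeSpeed("bench.tmp", 512 * 1024, 64 * 1024 * 1024).
```

Note that stdio buffering and the OS page cache will both smooth out small-buffer results, which is exactly the caching effect Quicksilver raised earlier in the thread.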

Re: Segmented downloading : Why the big deal?

Posted: 24 Jan 2011, 15:00
by Big Muscle
I reimplemented the shared file stream in StrongDC++: http://strongdc.svn.sf.net/viewvc/stron ... te#dirlist

However, one question comes to mind. The shared file stream fixes the problem where the file handle is destroyed at the end of each segment and a new one is created at the beginning of the next segment. But now it needs to lock a critical section around each read/write to ensure that changing the file position and the file operation itself are atomic. So the question is: doesn't such a critical section bring more overhead than separate file handles?
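For readers following along, the pattern in question looks roughly like this. This is an illustrative sketch, not StrongDC++'s actual code: all segments share one handle, opened once, and every positioned read/write takes a lock so that the seek and the I/O stay atomic with respect to other segments.

```cpp
#include <cstdio>
#include <mutex>
#include <string>

// Sketch of a shared file stream: one handle for all segments, one lock
// serializing positioned I/O (illustrative names, not StrongDC++'s code).
class SharedFileStream {
public:
    explicit SharedFileStream(const std::string& path)
        : f_(std::fopen(path.c_str(), "r+b")) {}
    ~SharedFileStream() { if (f_) std::fclose(f_); }
    bool ok() const { return f_ != nullptr; }

    // Seek and read under one lock so two segments cannot interleave.
    size_t readAt(long offset, void* buf, size_t len) {
        std::lock_guard<std::mutex> lock(cs_);  // the "critical section"
        std::fseek(f_, offset, SEEK_SET);
        return std::fread(buf, 1, len, f_);
    }

    // Same for writes: one lock acquisition per segment write.
    size_t writeAt(long offset, const void* buf, size_t len) {
        std::lock_guard<std::mutex> lock(cs_);
        std::fseek(f_, offset, SEEK_SET);
        size_t n = std::fwrite(buf, 1, len, f_);
        std::fflush(f_);
        return n;
    }

private:
    FILE* f_;
    std::mutex cs_;
};
```

On the overhead question itself: an uncontended lock/unlock of a critical section costs on the order of nanoseconds, far below the cost of the disk read/write it guards, so contention between simultaneously active segments is the real variable. On POSIX systems, pread/pwrite take an explicit offset and do not touch the shared file position, which would remove the need for the lock around positioning entirely.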