Segmented downloading : Why the big deal?
-
- Newbie
- Posts: 8
- Joined: 17 Mar 2009, 13:37
Re: Segmented downloading : Why the big deal?
Ah, misunderstanding then, I was suggesting doing a separate benchmark app in any language (in the end, they all hit the os), not actually modifying dc++...
-
- Member
- Posts: 56
- Joined: 17 Aug 2009, 21:32
Re: Segmented downloading : Why the big deal?
I would not have chosen a different program.
Especially as I am still not sure how this all interacts with the cache on the hdd.
I don't know if we could properly map the behaviour of DC++ to a simple program.
-
- Newbie
- Posts: 6
- Joined: 18 Oct 2008, 11:05
Re: Segmented downloading : Why the big deal?
arnetheduck wrote:Ah, misunderstanding then, I was suggesting doing a separate benchmark app in any language (in the end, they all hit the os), not actually modifying dc++...
I have developed a test app.
Please test and give me feedback.
It is written in C# and should give you accurate speeds and so on.
http://files.flowertwig.org/FlowerBench.7z
/Flow84
-
- Member
- Posts: 56
- Joined: 17 Aug 2009, 21:32
Re: Segmented downloading : Why the big deal?
What does the program do? It's not self-explanatory!
-
- Newbie
- Posts: 6
- Joined: 18 Oct 2008, 11:05
Re: Segmented downloading : Why the big deal?
Quicksilver wrote:What does the program do? It's not self-explanatory!
Sorry for the late answer.
This forum doesn't send out notifications...
It basically reads/writes data to/from disk and shows the speed.
You can set buffer size, access type and more.
If I understand this thread right, it is what you wanted.
You can see the speed impact depending on buffer size and more.
-
- Junior Member
- Posts: 39
- Joined: 01 Jul 2008, 19:27
Re: Segmented downloading : Why the big deal?
I reimplemented shared file stream into StrongDC++ : http://strongdc.svn.sf.net/viewvc/stron ... te#dirlist
However, one question comes to my mind. The shared file stream fixes the problem where the file handle is destroyed at the end of each segment and a new one is created at the beginning of the next segment. But now it needs to lock a critical section for each read/write, to ensure that changing the file position and the file operation together are atomic. Now the question - doesn't such a critical section bring more overhead than separate file handles?