chunkPartNonces Jminer



  • Hello guys, I'm really struggling with this part of the Jminer settings.
    My configuration is an HP Pavilion G6 notebook with an i7-3632QM CPU, 16GB RAM, a Radeon HD 7670M (2GB memory) and 3x 8TB external hard drives (over USB 3.0), and I'm using Jminer.
    The question is... what value should I give chunkPartNonces in the Jminer settings? I've experimented a lot, but it isn't easy to arrive at a straight answer, and I would really appreciate your opinions on this.

    I've read everything I could possibly find regarding my question, but I'm still a little bit confused.

    Thanks guys.



  • @SmartonoseN Right now I'm running jminer on two machines, one with 16GB system RAM and one with 8GB. Oddly enough, I found that the "sweet spot" for chunkPartNonces seemed to be around 960000 on both machines. Beyond that, I didn't get much improvement, and trying much higher values like 1920000 actually got me slightly slower read speeds.

    Obviously this is a pretty unscientific test. I'd love to hear other people's results.
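
    For reference, chunkPartNonces is just one line in jminer's configuration (the properties file that ships next to the jar, typically jminer.properties). A minimal sketch, with the value simply being the one reported above rather than a universal recommendation:

        # reported "sweet spot" in this thread; tune for your own hardware
        chunkPartNonces=960000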


  • admin

    @SmartonoseN jminer reads chunkPartNonces worth of data into memory before it computes the deadlines. Without that setting it would load the whole stagger size into memory, and that's a problem with big optimized plot files and little memory.
    So if the stagger size is bigger than chunkPartNonces, the data packages are split into parts, and the first part can be computed while the second is still loading.
    So if you run into memory issues, reduce chunkPartNonces...

    @zyzzyva Well, I also use 960000 🙂

    But it is just fine tuning anyway; there is no big need to change the default if there isn't a memory issue, even though I chose a quite low default to prevent problems on machines with little memory.
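
    To make the memory point concrete, here is a rough sizing sketch in Java. It is a back-of-the-envelope estimate, not taken from jminer's source, and it assumes that roughly one 64-byte scoop per nonce is read each round, so a buffered chunk part costs about chunkPartNonces * 64 bytes; that is why lowering the value eases memory pressure.

        // Rough sizing sketch for chunkPartNonces; based on an assumption, not on jminer's source.
        public class ChunkPartSizing {

            // Assumption: roughly one 64-byte scoop per nonce is read during a mining round.
            private static final long SCOOP_SIZE_BYTES = 64;

            static long chunkPartBytes(long chunkPartNonces) {
                return chunkPartNonces * SCOOP_SIZE_BYTES;
            }

            public static void main(String[] args) {
                // chunkPartNonces values mentioned in this thread.
                long[] candidates = {160_000L, 320_000L, 960_000L, 1_920_000L};
                for (long nonces : candidates) {
                    System.out.printf("chunkPartNonces=%-8d -> ~%d MB per buffered chunk part%n",
                            nonces, chunkPartBytes(nonces) / 1_000_000L);
                }
            }
        }

    Under that assumption the values discussed here work out to roughly 10 MB (160000), 20 MB (320000), 61 MB (960000) and 122 MB (1920000) per chunk part, which fits the admin's point that the setting mainly matters on machines with little free memory.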



  • I think this is a setting folks should look into. By default mine was set at 320000 and I was already doing fairly well with 15-16 second read times, but when I tried higher values I seemed to slow down. However, when I went lower, to 160000, I actually gained a couple of extra seconds, so now I have 13-14 second read times. I think it is worth everyone playing with this setting at least once; a couple of seconds is a couple of seconds! (I should also mention my CPU usage went down despite the better read times. I totally don't understand that one, but it works.)

    I would also like to point out my FAVORITE setting...

    showDriveInfo=true

    This will show you a breakdown of each drive's specific times, so you can see how different plot sizes or drives perform... In my case, 1TB plot files beat a single 9TB plot file in read times by a couple of seconds! So now I'm going to experiment later with smaller files, perhaps 256GB, and see how they compare to the 1TB and 9TB files... It's always nice to see a breakdown of where the bottleneck is or where performance can be gained. Add the line to your config file to see each drive...
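
    For completeness, a minimal sketch of where that line goes: it sits in the same properties file as chunkPartNonces (the file name and comment are mine; the key itself is the one quoted above).

        # log a per-drive breakdown of read times, as described above
        showDriveInfo=true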