How to combine drives so one thread reads multiple plot folders
There was a discussion a while ago, which I can't find now, indicating that the optimum is to have the same number of drives as threads or GPU "computing units". Can anyone enlighten me as to how you would combine 2 drives in the config file? For example:
c:\burst\plots,d:\burst\plots is 2 threads
c:\burst\plots+d:\burst\plots is 1 thread
I tried this in jMiner but the + bombed the program. I need the proper syntax.
@rds With Blago's miner you can combine them with c:\burst\plots+d:\burst\plots , but I don't believe it's possible with jMiner.
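For reference, the combined-path syntax goes in each miner's own config file. A rough sketch of what the two configs might look like (the key names and paths here are illustrative assumptions, so check the sample config that ships with your miner):

```
; Blago's miner (miner.conf, JSON): joining two folders with '+'
; makes the miner read both with a single thread
"Paths": [ "c:\\burst\\plots+d:\\burst\\plots" ],

; jminer (jminer.properties): paths are comma-separated and each
; gets its own reader; the '+' concatenation is not supported here
plotPaths=c:/burst/plots,d:/burst/plots
```

That would be consistent with what's reported above: the comma form gives one thread per folder, and only Blago's miner accepts the `+` form.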
Digidigm last edited by
@rds Hello again, did you ever get any more responses on this?
What is the real value of "+" the drives together?
I read your question about matching the drives to GPU units.
I guess, to really understand this, how would one know their GPU computing units? I am using jMiner right now.
Thanks in advance
@Digidigm , jMiner reports "computing units" at startup. I tried concatenating the drives like haitch explained, with no discernible increase in read speed. I just run my drives one at a time now.
IncludeBeer last edited by
@rds I'm about to upgrade my personal comp with an i9-7980xe. Finally I'll have enough threads so I can run each HDD on its own thread.
@IncludeBeer , I have a server with 32 threads and 36 drives, 24 second scan time.
vaxman last edited by vaxman
I have a server with 32 threads and 36 drives, 24 second scan time.
I get about 90 MB/s with AVX2 per 2.1 GHz core (E5-2650 v3 ES).
20 cores (no HT) scan at 1.8-1.9 GB/s.
The speed difference between SSE4 and AVX2 is about 2x (dcct, mddct).
One might translate that to scanning about 185 GB of plotfile per second, per GHz, per core with AVX2, or half that with SSE4, on Intel CPUs up to about four years old.
If you have to buy hardware rather than rely on what you already have, AND you have lots of plotted space, jMiner (OpenCL on GPU) is most likely the most economical path.
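Those figures can be sanity-checked with quick arithmetic. A Burst plot is divided into 4096 scoops and a miner reads only one scoop per round, so raw disk throughput multiplies by 4096 when expressed as "GB of plot scanned". A back-of-envelope sketch (the 4096 scoop factor and decimal GB are my assumptions, not stated in the thread):

```python
# Back-of-envelope check of the scan-speed figures quoted above.

SCOOPS_PER_NONCE = 4096        # a Burst round reads 1 of 4096 scoops
read_mb_s_per_core = 90        # AVX2 read speed quoted per 2.1 GHz core
core_ghz = 2.1
cores = 20

# Raw read speed normalized per GHz of one core.
mb_s_per_ghz = read_mb_s_per_core / core_ghz            # ~42.9 MB/s per GHz

# Expressed as plot space covered per second (only 1/4096 of it is read).
plot_gb_s_per_ghz = mb_s_per_ghz * SCOOPS_PER_NONCE / 1000   # ~175 GB/s per GHz

# Whole 20-core CPU, raw reads.
total_gb_s = read_mb_s_per_core * cores / 1000               # 1.8 GB/s

print(f"~{plot_gb_s_per_ghz:.0f} GB of plot per second per GHz (AVX2)")
print(f"{total_gb_s:.1f} GB/s raw reads across {cores} cores")
```

This lands in the same ballpark as the ~185 GB figure quoted above (the small gap suggests the original estimate used a slightly higher per-core read speed), and it reproduces the 1.8 GB/s total for 20 cores.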
Digidigm last edited by Digidigm
@rds I've been in another topic with you about how big your rig is. Soooooooooo jealous, but one needs to begin somewhere.