GPU Miner not recognizing all of VRAM



  • (screenshot attached: GPUvram.png)

    Having an issue with the GPU Plot Generator. I have a 2GB VRAM video card and it seems to only load 500MB. I have tested the video card through benchmarking software and it is able to load the full 2GB. If anyone has any ideas or has had similar issues, please let me know.



  • 2K Burst reward to whoever can solve this!



  • @nox The globalWorksize value in the devices.txt defines how much VRAM is used, but it's a bit convoluted. For example, a value of 4096 translates to 1GB of VRAM AND 1GB of RAM usage. I assume you have a globalWorksize value of 2048 since it's only using 512MB.

    Also, you may have to reduce your stagger size if you ask for more VRAM, because the plotter will use an equivalent amount of RAM on top of what it already uses for your stagger size (note the "CPU memory: 8GB 512MB" line in your screenshot).

    In any case, I didn't notice any significant performance increase when using 1GB vs 4GB VRAM on my GTX 1070. Once you get to about 20,000 - 30,000 nonces/minute, you've pretty much hit the wall for writing to a single hard drive. Writing to two drives in parallel is a better way to maximize write speed, in my experience.
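    The arithmetic behind those numbers can be sketched out. A minimal check, assuming the standard Burst plot format where each nonce is 4096 scoops × 64 bytes = 256 KiB, and that the plotter buffers globalWorksize nonces in VRAM plus a matching staging buffer in RAM (the helper name `buffer_mib` is mine, not from the plotter):

    ```python
    # Each Burst nonce is 4096 scoops x 64 bytes = 256 KiB.
    NONCE_BYTES = 4096 * 64  # 262,144 bytes

    def buffer_mib(global_worksize):
        """Approximate VRAM (and matching RAM) for a given globalWorksize, in MiB."""
        return global_worksize * NONCE_BYTES / (1024 * 1024)

    print(buffer_mib(2048))  # 512.0 -> the ~500MB nox is seeing
    print(buffer_mib(4096))  # 1024.0 -> 1GB of VRAM plus 1GB of RAM
    ```

    So doubling globalWorksize from 2048 to 4096 should double VRAM usage from 512MB to 1GB, at the cost of another 1GB of system RAM.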



    @sevencardz Just wondering... what's the max nonces/min you get on your 1070 in an ideal scenario? Or let's say in "buffer mode" during the first minute, writing to RAM? After that it's really nonces/min including writes!

    GTX 970: 30000 to 33000 with a bit of OC.
    but I'm thinking about a second card.......!-)

    and my son is screaming 1080!



  • @nixxda The best I've gotten on my 1070 is still only about 30,000 - 32,000 nonces/minute and that's with writing to two drives in parallel in buffered mode. I figure if I had about 16 GB more system RAM, then I could do a larger stagger size and maybe a bit more VRAM and maybe get higher throughput that way, but based on my testing and reading others' experiences in the forums, it doesn't seem worth it and I would have to buy more hardware. The GPU is churning out nonces faster than the hard drive can write them, so it seems the best way to get more throughput is to write to more drives.

    Also, my OCD took over and I went back and optimized the plots I wrote in buffered mode and started writing the rest in direct mode. :D Seems direct mode takes EVEN MORE system RAM, so I had to cut the stagger size down from 5GB to 2.5GB (still writing to two drives in parallel). It also cut my throughput down to about 18,000 - 20,000 nonces/minute, but that's still quicker than writing in buffered mode and then optimizing later.

    I'd shy away from a 1080 - the price/performance just isn't there. And for plotting, I hear the AMD cards are much better performers for the price and they have better OpenCL support. I already had the 1070 lined up for a new gaming rig, so he got drafted to write plots first. :)
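    The stagger/RAM trade-off described above can be sketched with the same 256 KiB-per-nonce assumption: a stagger of S nonces needs roughly S × 256 KiB of RAM per drive being written, and (per the observation above) direct mode needs extra headroom on top, which isn't modeled here. The helper `stagger_ram_gib` is a hypothetical name for illustration:

    ```python
    # Assumes the legacy Burst plot format: 256 KiB per nonce.
    NONCE_BYTES = 256 * 1024

    def stagger_ram_gib(stagger_nonces, drives=1):
        """Approximate RAM needed to hold the stagger buffer(s), in GiB."""
        return stagger_nonces * NONCE_BYTES * drives / 1024**3

    print(stagger_ram_gib(20480))     # 5.0 -> a single 5GB stagger
    print(stagger_ram_gib(10240, 2))  # 5.0 -> two drives at 2.5GB each
    ```

    Which matches the numbers above: cutting the stagger from 20480 to 10240 nonces halves the per-drive buffer from 5GB to 2.5GB.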



  • @sevencardz said in GPU Miner not recognizing all of VRAM:

    @nox The globalWorksize value in the devices.txt defines how much VRAM is used, but it's a bit convoluted. For example, a value of 4096 translates to 1GB of VRAM AND 1GB of RAM usage. I assume you have a globalWorksize value of 2048 since it's only using 512MB. [...]

    Thanks @sevencardz sent some Burst your way



  • @nox Awesome! thanks!



  • @sevencardz 30000 n/m ?!?!?!
    Please share your system specs and config. I'm only getting 1/10 of that.

    My specs
    i7 3.5GHz / 32GB / Radeon 480 8GB

    Device.txt = 1 0 8192 64 8192
    Makeplot.bat = gpuPlotGenerator generate direct I://2072119255914509084_0_15247632_32168
    with a stagger of 32168

    Please share what you are doing different.

    (screenshot attached: Capture2.JPG)


  • admin

    @Havoc In direct mode, the plotter creates the file, then fills it in. During the first phase, which can be hours, no nonces are being plotted. The number you're getting will continue to increase all the way up to the end of the plot.



  • Just an update: I think the source of my problems is my nVidia graphics card. Capped at 8k nonces/minute with a 750 Ti http://www.newegg.com/Product/Product.aspx?item=N82E16814487025 . Stay away from nvidia!



  • Update 2: I will be keeping my POS nvidia card so I can take part in this http://course.fast.ai/



  • @nox 8K n/m from a 750ti doesn't sound too bad, honestly. Speed of the drive might also be capping the throughput.

    @Havoc As haitch mentioned, plotting in direct mode doesn't measure nonces/minute accurately until the entire plot is done, which does make it difficult to figure out throughput. I usually do a small plot in buffered mode to get a general idea first.

    My specs:

    FX-8350 4.4GHz / 16GB / GTX 1070 8GB
    devices.txt = 0 0 4096 512 128

    :: Dual 500GB plots with 2.5GB stagger
    gpuPlotGenerator generate direct E://Burst/plots/10105409137940147041_0_1904640_10240 F://Burst/plots/10105409137940147041_19046400_1904640_10240

    :: Single 500GB plot with 5GB stagger
    gpuPlotGenerator generate direct G://Burst/plots/10105409137940147041_99041280_1904640_20480

    Also, be sure to open up Resource Monitor to see what kind of disk write speeds you're getting. I was getting 20MB/s on a Seagate 'archive' drive and so naturally my nonces/minute dropped way down. Thankfully, I realized my mistake early, returned the drive, and bought a proper NAS drive instead.
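    As a side note, the plot filenames above encode accountID_startNonce_nonceCount_stagger, so you can sanity-check a plot's size from its name. A quick sketch, again assuming the legacy 256 KiB-per-nonce format (the `plot_size_gb` helper is mine, not part of the plotter):

    ```python
    # Legacy Burst plot filename: accountID_startNonce_nonceCount_stagger
    NONCE_BYTES = 256 * 1024  # 256 KiB per nonce

    def plot_size_gb(filename):
        """Expected on-disk size of a plot file, in GB (decimal)."""
        account, start, count, stagger = filename.split("_")
        return int(count) * NONCE_BYTES / 1e9

    print(plot_size_gb("10105409137940147041_0_1904640_10240"))  # ~499 GB
    ```

    So a nonce count of 1904640 works out to roughly a 500GB plot file, matching the "Dual 500GB plots" comment above.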