My CPU and GPU plot time tests (based on 5TB drive)

  • Hi Superfriends!

    I wanted to do a CPU vs GPU plot test on my USB 3.0 Seagate 5TB Expansion drive for all of you who are curious about plot times. I figured maybe we can all share our plot times in this thread. BEFORE we get into this, I want to say that I know everyone's hardware is different, and there are so many variables that can contribute to different times. I basically wanted to share my experience with you all, and perhaps this will serve as an unofficial baseline to "guesstimate" some plot times of your own.

    Basics of my system used:

    Intel i7 3770K (stock 3.5GHz) OC'd to 4.5GHz on a single-radiator AIO water-cooled system. 4 cores with Hyperthreading = 8 total CPU threads
    16GB DDR3 Corsair Vengeance RAM
    EVGA 980 Ti 6GB SC ACX 2.0 with backplate (running at factory OC; speed ranges from 1291MHz to 1304MHz on its own self-boosting method, I tampered with nothing)
    Windows 10 64-bit

    ...and of course, the Seagate 5TB USB 3.0 External Expansion drive.

    Both plots had the same number of nonces; I used the Nonces Calculator 3.0 to get the actual nonce count to go with. I sized my drive to leave about 1.05GB of free space (just because I don't like maxing drives all the way out).
    The nonce count for my drive was 19070976, for a plot file of around 4.88TB.
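    As a sanity check on those numbers, here is a minimal sketch of the arithmetic a nonce calculator does. The 256 KiB-per-nonce figure (4096 scoops of 64 bytes) is standard for Burst plots; the function names are my own:

```python
# Each Burst nonce occupies 256 KiB on disk (4096 scoops x 64 bytes),
# so plot size follows directly from the nonce count.
NONCE_SIZE = 4096 * 64  # bytes per nonce = 262144 (256 KiB)

def plot_size_tb(nonces: int) -> float:
    """Plot file size in decimal terabytes for a given nonce count."""
    return nonces * NONCE_SIZE / 1e12

def nonces_for_bytes(free_bytes: int) -> int:
    """Largest nonce count that fits in the given number of bytes."""
    return free_bytes // NONCE_SIZE

# The nonce count used in this thread:
print(round(plot_size_tb(19070976), 2))  # 5.0 (decimal TB, ~4.55 TiB)
```

(The calculator used in the post evidently reports size in slightly different units, hence the "4.88TB" figure.)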

    TEST 01
    Test was with the AIO Wallet CPU Plotter (Xplotter_AVX.exe)
    I did manually modify the settings to utilize all 8 CPU threads and 8GB of RAM versus the default 4GB setting.

    Here are the settings I used for XPlotter_avx.exe:
    XPlotter_avx.exe -id 10911507984205051965 -sn 1100000001 -n 19070976 -t 8 -path K:\Burst\plots -mem 8G

    With the settings above, the CPU plot for my 5TB drive took roughly 5 and 1/2 days, with my system running 24hrs straight.
    (Sorry no screenshots, my bad)

    TEST 02
    Test was with the gpuPlotGenerator x64 version 4.0.3 (gpuPlotGenerator.exe)

    I re-formatted the drive prior to this test.

    Here are my devices.txt settings (I still don't know if these are the optimal settings, but they worked without crashing my system; it took a few tries to figure this out):
    0 0 1024 128 8192

    Here are my batch file settings:
    gpuPlotGenerator generate buffer K:\Burst\plots\10911507984205051965_1100000001_19070976_8192

    With the settings above, the GPU plot for my 5TB drive took roughly 20hrs, 7m, and 16sec, with my system up from start to finish.
    (I did manage to grab a screenshot of this)


    CPU plot: roughly 5 and 1/2 days
    GPU plot: roughly 20 hours

    I hope you all find this information helpful my Superfriends. Again... this is based on my hardware. Yours will undoubtedly be different which could mean slower or faster times.

    BOOM! Boom...and all that!

  • @Vneck Really good comparison you have here, thanks! That obviously took you a while. You can also use the GPU plotter to plot two drives at the same time, which should push you into the 20,000-22,000 nonces/minute range. 🙂

  • @Vneck Interesting. You might want to redo the CPU test, though, and use 6 threads instead of all 8.
    I have plotted 7.5TB with a CPU, and I think it took me around the same time as yours or less. I will need to check the timestamps on the plot files to see if I'm not mistaken.

    Will get back to this post once I get home from work 🙂

  • I think one thing is missing for this to be a complete comparison: the first plotter produces optimized plots and the second does not. So to make the comparison equal, you'd have to add the optimization time on top of the 20 hours the GPU took.

    Is what I'm saying correct??

  • @Energy It's possible to set up the GPU plotter so it does optimized plots, at least that was the case a couple of months ago; I haven't been tracking changes in the plotter software

  • @Vneck pls retest with
    gpuPlotGenerator generate direct K:\Burst\plots\10911507984205051965_1100000001_19070976_8192

  • @LithStud Yes, but this one is not optimized:
    gpuPlotGenerator generate buffer K:\Burst\plots\10911507984205051965_1100000001_19070976_8192
    Once optimized, the filename would end in something like 19070976_19070976
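    For reference, the plot filename encodes accountID_startNonce_nonceCount_stagger, so a quick way to tell the two cases apart is to check whether the last field equals the nonce count. A small sketch (the helper name is my own):

```python
# A Burst plot filename is accountID_startNonce_nonceCount_stagger;
# the plot is fully optimized when stagger equals the nonce count.
def is_optimized(plot_name: str) -> bool:
    account_id, start_nonce, nonces, stagger = plot_name.split("_")
    return nonces == stagger

print(is_optimized("10911507984205051965_1100000001_19070976_8192"))      # False
print(is_optimized("10911507984205051965_1100000001_19070976_19070976"))  # True
```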

  • @Energy @Blago just posted how to do optimized 😉

  • I actually just finished plotting my 5TB internal SATA HDD using the AIO XPlotter on just default settings, using 5 of a possible 8 cores, and I only have 4GB of system RAM (the program used around 2GB). It took 34 hours. In comparison, I had GPU-plotted the same drive a few months ago using an AMD R7 240 2GB card; it plotted in around 28-30 hours and then the optimizer took an added 127 hours..... That's when I learned that the optimizer program is VERY RAM heavy, and with my 4GB I could only direct about 1.5GB at it. Someone here suggested using a RAM cleaner, which did help keep the program from hanging when it capped RAM usage.

  • It remains unclear to me how this works. So if you use direct instead of buffer, the plots come out optimized???

  • @Energy Buffer is 3 or more times faster than Direct mode. But since you will need to optimize the plot afterwards, it will take roughly double the time overall.
    So, go Direct from the start and get your optimized plots; the overall time is less than buffer and optimize together.
    XPlotter also generates optimized plots from the start, unlike gpuPlotGenerator in buffer mode.
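    The trade-off described above is easy to see with a toy calculation. The hour figures below are made up purely for illustration (the "3x faster" ratio is the one claimed in this post):

```python
# Hypothetical illustration: buffer mode plots ~3x faster than direct,
# but its unoptimized output then needs a separate optimize pass.
def total_buffer(direct_hours: float, optimize_hours: float) -> float:
    """Buffer plot (~1/3 of the direct time) plus the optimize pass."""
    return direct_hours / 3 + optimize_hours

def total_direct(direct_hours: float) -> float:
    """Direct mode: slower plotting, but already optimized when done."""
    return direct_hours

# If direct plotting takes 60 h and optimizing takes 50 h (made-up numbers),
# direct still wins overall:
print(total_buffer(60, 50), "vs", total_direct(60))  # 70.0 vs 60
```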

  • @rnahlawi Sorry, I know this isn't the place for it, but I can't let the doubt go unanswered.

    When I generated a plot without optimizing, the filename ended in something like 19070976_8192,
    and after using the optimizer it ends in 19070976_19070976.
    What does that last number indicate???
    I want to make it clear that I did these things more on autopilot than with real understanding; it's still a bit hard for me

  • @Energy I only know it's the stagger size, which the nonce count is a multiple of. I'll leave it to an expert to explain it better than me.

  • From my understanding, the last number is the stagger. A plot file is made up of nonces, each split into 4096 scoops. Fully unoptimized (a stagger of 1), the data looks like a long string of hashes: scoops 1-4096 of one nonce, then 1-4096 of the next, etc. Bumping the stagger up writes them as groups: with a stagger of 4, the scoops of 4 nonces are grouped together and written as (1,1,1,1, 2,2,2,2, 3,3,3,3, ... up to scoop 4096), followed by the next group of 4 nonces in the same order, which speeds up read time. A fully optimized plot file is written as one continuous string of all the 1s, then all the 2s, then all the 3s, for the fastest read time.
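    That grouping can be sketched with toy numbers (4 scoops per nonce instead of the real 4096; the function is illustrative only):

```python
# Toy model of plot write order vs. stagger, shrunk to 4 scoops per nonce
# (real Burst plots use 4096). Each entry is (nonce, scoop).
SCOOPS = 4

def plot_layout(nonces: int, stagger: int):
    """Order in which (nonce, scoop) pairs are written for a given stagger."""
    order = []
    for group in range(0, nonces, stagger):    # each stagger-sized group
        for scoop in range(SCOOPS):            # scoop-by-scoop within the group
            for n in range(group, group + stagger):
                order.append((n, scoop))
    return order

# Stagger 1: each nonce written whole, scoops 0..3 back to back.
print(plot_layout(2, 1))
# Stagger == nonce count (optimized): all scoop 0s first, then all scoop 1s...
print(plot_layout(4, 4)[:4])
```

A miner only needs one scoop per nonce per block, so the optimized layout turns thousands of tiny seeks into one long sequential read.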

  • @Vneck It's a very unequal comparison! As others said, with XPlotter you've got optimized plot files and with "gpuplotter buffer" you've got unoptimized ones! But since the disk you're using is an SMR drive, "gpuplotter direct" would not have really worked anyway! Or, it would have worked but would also have taken a week!
    However, you've confirmed that XPlotter does not work very well on SMR drives!-)
    They are perfectly fine for mining! I have a few of them.

    The fastest way to plot these types of drives and get optimized plot files is probably still to plot to a "normal" PMR drive first and then copy it over, or use the plot optimizer!

  • Hi Superfriends! Thank you so much for the support, critiques, and feedback. I will take them all under consideration. I only have the 1 5TB drive, so optimizing will have to wait until I buy more drives, as I'm currently mining with this. My other drives I mine with don't have the capacity to hold/optimize another 5TB plot. Once I get more space, I will run another test and update this thread.

    Also thank you for the updated Optimized GPU settings, I will definitely try that when I get more space. Much appreciated.

    Once I build my 3rd rig (for some other GPU based cryptomining) with some AMD based video cards, I will dedicate that to plotting.

    Much love and appreciation for the advice. BOOM! Boom...and all that!

  • Minor update:

    I did a GPU "Direct" plot this time on another (same model) Seagate 5TB USB 3.0 Expansion drive, BUT I only did a 1TB plot to test the optimization. This time I used 10GB of system RAM out of the 16GB I have.

    The 1TB GPU Direct "Optimized" plot took 1 day, 12hrs, and 37mins. I don't know if something is wrong or if this is normal on an SMR drive.
    [Screenshot: 3-4-2017, 1TB test, GPU Optimized, final]

    Here is an access-time comparison of the optimized 1TB plot versus a 5TB non-optimized plot. They are on 2 separate drives, both the same model Seagate 5TB USB 3.0 external drive. Granted the 1TB will be faster, but even if you multiply the 1TB's time by 4.5 or 5, it comes out faster than the 5TB non-optimized plot.

    [Screenshot: 3-4-2017, Seagate 1TB optimized vs 5TB non-optimized]

    The GPU Optimized 1TB plot is averaging around 3.4-3.5 secs access time, versus the 35~36 secs access times of the 5TB non-optimized plot.

    In theory, the 3.5 secs (1TB, optimized) x 4.5 or 5 "COULD" end up being 15.75~17.5 secs, which is a GOOD thing. At least that's an estimate for if I did optimize the full 5TB (minus room lost during partitioning).
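    That scale-up is just linear arithmetic on the measured times (figures taken from this post; variable names are my own):

```python
# Back-of-the-envelope scaling of the measured access times from the post.
optimized_1tb = 3.5      # seconds, 1 TB optimized plot
unoptimized_5tb = 35.5   # seconds, midpoint of the 35~36 s range for 5 TB

# Naive linear scale-up of the optimized plot to 4.5-5 TB:
low, high = optimized_1tb * 4.5, optimized_1tb * 5
print(low, high)  # 15.75 17.5 -- still roughly half the unoptimized time
```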

  • @Vneck said in My CPU and GPU plot time tests (based on 5TB drive):


    From what I know, increasing RAM will not affect the speed at all, as the bottleneck is the HDD itself.
    1863~ nonces/min is slow for your 980 Ti; you should get something around 8000 at minimum in Direct mode.
    From my tests with gpuPlotGenerator, I found that disconnecting your HDMI (or whatever display port) from the plotting card and connecting to the embedded VGA helps your operation a lot. It's not science, but it's something I observed.
    Anyway, keep plotting in Direct mode even if it's deadly slow, because it's worth it for the mining phase 🙂

  • @Vneck I find it a little strange that it's taking you that long to plot a 5TB HDD with your CPU. I used my i3-6100 to plot 4 and 5TB external HDDs (Seagate/WD) using the AIO wallet CPU plotter (AVX) with 3 of 4 threads, using the default 4GB of RAM, and I was plotting in 2-3 days - 5 days sounds like a lot. I'm pretty new to Burstcoin as well, but I figured I'd just chime in and share my experiences. Not sure if 5 days is typical for the rest of you.

  • @rnahlawi Yes, very strange. When it started, the rate was at 25000~26000 nonces/min, then it dropped when I checked the next day.
    [Screenshot: 3-2-2017, 1TB GPU Optimized, 10GB RAM]