GPU plot generator v4.1.1 (Win/Linux)
root@Tesla-SuperComputer:/home/tate/gpuPlotGenerator-bin-linux-x64-4.0.2# ./gpuPlotGenerator listDevices 0
./gpuPlotGenerator: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by ./gpuPlotGenerator)
-------------------------
GPU plot generator v4.0.2
-------------------------
Author: Cryo
Bitcoin: 138gMBhCrNkbaiTCmUhP9HLU9xwn5QKZgD
Burst: BURST-YA29-QCEW-QXC3-BKXDL
----
[ERROR][-1001][CL_UNKNOWN] Unable to retrieve the OpenCL platforms number
root@Tesla-SuperComputer:/home/tate/gpuPlotGenerator-bin-linux-x64-4.0.2#
@Tate-A Okay, I installed the latest drivers and OpenCL. Now I get this:
root@Tesla-SuperComputer:/home/tate/gpuPlotGenerator-bin-linux-x64-4.0.2# ./gpuPlotGenerator listDevices 0
Inconsistency detected by ld.so: dl-version.c: 224: _dl_check_map_versions: Assertion `needed != NULL' failed!
root@Tesla-SuperComputer:/home/tate/gpuPlotGenerator-bin-linux-x64-4.0.2#
@vadirthedark I finally got my new R9 380 4GB and installed it!
So now I'm running into the same problem you had when GPU plotting my HDD.
Were you ever able to get the script working on Ubuntu?
Hello everyone. I decided to give this GPU plotting method a try. Once I got through the initial stages of prepping my system I recall there being a question of whether gpuPlotGenerator 4.0.3 supported CUDA or not. I ran the software not knowing either way but it just stopped.
Before I go ahead and troubleshoot can someone clarify this for me please?
What settings should we use to create fully optimized plots?
1 GPU and 16GB of ram
Hi, just checking out this plotter as I found out my integrated GPU will work with the plotter/miners for Burst.
I ran the setup to configure the GPU and the recommended values were not powers of 2:
I put in the recommended values, but the program appeared to store different ones; specifically, the 19 was stored as an 8:
Is this normal?
@rds Not seen that before - I'd manually set it to a power of two - either down to 4096 or up to 8192
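A quick way to check whether a value is a valid power of two (and why 8096 isn't one, while the suggested 4096 and 8192 are) is the classic single-bit test; a minimal sketch in plain Python:

```python
def is_power_of_two(n):
    """True when n is a positive power of two (exactly one bit set)."""
    return n > 0 and (n & (n - 1)) == 0

# the odd value from the setup tool vs. the suggested round numbers
print(is_power_of_two(8096))                          # False: 8096 = 2^5 * 11 * 23
print(is_power_of_two(4096), is_power_of_two(8192))   # True True
```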
@rds This setup just creates the devices.txt, I guess... you can always edit that by hand, too. But I can't tell you why the 19 was saved as an 8 instead (that seems very low for a work group size; maybe it picked a CPU device?)... if this is important to you, you could try contacting the dev via GitHub.
Has anyone else had an issue with gpuPlotGenerator where in buffer mode it will start by using pretty much 100% of my GPU (which is what I want) at around 30k-50k nonces/minute and then it drops to around 12k nonces/minute. Sometimes it will stay at the higher amount and I can get 1TB plotted in an hour or less. Other times it drops down and only utilizes my gpu around 17%. It's weird that this doesn't happen all the time and I was wondering if anyone else had this problem.
Also I have a MSI Gaming X RX 480 8GB GPU.
My devices.txt is:
0 0 8096 64 8192
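For reference, each line of devices.txt configures one device; going by the project's documentation, the five fields are platform id, device id, globalWorkSize (nonces buffered on the GPU), localWorkSize, and hashesNumber (worth double-checking against your version's README):

```
# platformId deviceId globalWorkSize localWorkSize hashesNumber
0 0 8192 64 8192
```

Note that the posted line uses 8096 for globalWorkSize, which is not a power of two; 8192 is the nearest power of two and matches the usual advice in this thread.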
As you can see there are a couple spikes here and there, but it is barely using my GPU.
The only gain I have managed so far is running gpuPlotGenerator as admin, which got me 23%-27% GPU utilization and 18k-20k nonces/minute.
Any help would be greatly appreciated!
@KB-Bountyhunter The plot generator can only generate plots as fast as your drive can write them. It starts at 30-50k because the nonces are generated in memory... once it's writing to disk, the rate may drop to your drive's write speed... you could plot to multiple drives at once to increase overall plotting speed.
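The write-speed explanation checks out numerically: each Burst nonce is 256 KiB, so a nonce rate converts directly into the MB/s the drive must sustain. A quick back-of-the-envelope check in plain Python:

```python
NONCE_SIZE = 256 * 1024  # bytes per nonce in the Burst plot format

def nonces_per_min_to_mb_s(rate):
    """Convert a nonces/minute rate into the MB/s a drive must sustain."""
    return rate * NONCE_SIZE / 60 / 1_000_000

print(nonces_per_min_to_mb_s(50_000))  # rate while buffering in RAM
print(nonces_per_min_to_mb_s(12_000))  # rate after the drop
```

50k nonces/min works out to roughly 218 MB/s, which a single HDD cannot sustain, while 12k nonces/min is about 52 MB/s, a very plausible sustained HDD write speed, which fits the drop described above.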
I've been using Xplotter for some time, but recently got a RX480 card.
One feature I loved about Xplotter was that if you set the nonce number to 0, it would calculate the biggest possible plot file and fill the drive completely.
Will this be a feature in a future update?
And maybe an argument to automatically split the file; that would also be great.
Meh, I don't know. I tried plotting with an HD 6870 but the system started to freeze.
is this project still being developed?
I'm not really good at C++ since most of my work is in Java, but I think there are still some points that could be improved. I looked at my GPU and HDD usage and found it very odd to see about 50% average GPU usage and 50% average HDD usage, because generation and writing are not done in parallel...
For example, it should be possible to add two features:
- a hybrid mode, which fills part of the HDD (like direct mode does for the full plot size) while the GPU calculates the plot, and then writes it
- reserving twice (4x?) as much system RAM and generating part after part (keeping the GPU running all the time) while the HDD writes the finished parts
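The second idea is classic double buffering: a producer (the GPU) fills one buffer while a consumer (the disk) drains another. A minimal sketch of the scheme in Python with threads and a bounded queue; `generate` and `write` here are stand-in callbacks, not the plotter's real API:

```python
import threading
import queue

def plot_pipelined(num_chunks, generate, write, buffers=2):
    """Overlap generation and writing. The queue holds at most
    `buffers` finished chunks, so RAM use stays bounded while the
    producer only waits when the disk falls `buffers` chunks behind."""
    q = queue.Queue(maxsize=buffers)

    def producer():
        for i in range(num_chunks):
            q.put(generate(i))  # blocks only when all buffers are full
        q.put(None)             # sentinel: no more chunks

    t = threading.Thread(target=producer)
    t.start()
    while True:
        chunk = q.get()
        if chunk is None:
            break
        write(chunk)            # disk drains while the producer refills
    t.join()

# toy usage: the fake generate/write just record their call order
order = []

def fake_generate(i):
    order.append(("gen", i))
    return i

def fake_write(chunk):
    order.append(("write", chunk))

plot_pipelined(3, fake_generate, fake_write)
```

With `buffers=2` this is exactly the "reserve twice as much RAM" variant; raising it to 4 gives the 4x option at the cost of more memory.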
@tco42 I think in most cases the GPU will generate much faster than the HDD's write speed, so it would still sit idle.
yeah, that's part of the problem... let's say your GPU generates 24,000 nonces/min (equals 100 MB/s) and your HDD can write 100 MB/s, so 1 TB should take about 10,000 seconds. In reality it looks like this (for example at a 4 GB stagger size):
- GPU generates 4 GB of data in 40 seconds
- HDD writes 4 GB of data in 40 seconds
- GPU generates the 2nd set of 4 GB in 40 seconds
- HDD writes the 2nd set of 4 GB in 40 seconds
-> it will take 20,080 seconds to finish
if you add enough RAM to hold a second set of data, it looks like this:
- GPU generates 4 GB of data in 40 seconds
- HDD writes 4 GB of data while the GPU generates the 2nd set, in 40 seconds
- HDD writes 4 GB of data while the GPU generates the 3rd set, in 40 seconds
-> it will only take 10,080 seconds to finish (about 100% faster)
if your GPU is faster than your HDD's write speed, you could still keep the HDD writing 100% of the time, so 48,000 nonces/min and 100 MB/s write speed would look like this:
- GPU generates 4 GB of data in 20 seconds
- HDD writes 4 GB (40s) while the GPU generates the 2nd set (20s) and waits (20s)
- HDD writes 4 GB (40s) while the GPU generates the 3rd set (20s) and waits (20s)
-> it will take 10,060 seconds to finish
a faster GPU would of course change the first example to 40s HDD and 20s GPU, which results in 15,060 seconds, so the optimized code would be about 33% faster.
you should be able to optimize the plot while it's sitting in system RAM waiting to be written, without any additional time cost.
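The numbers above can be reproduced exactly if you count 1 TB / 4 GB ≈ 251 chunks; a quick restatement of the examples in plain Python (just the arithmetic, not a benchmark):

```python
CHUNKS = 251  # ~1 TB at a 4 GB stagger size

def serial_time(gen_s, write_s):
    """Current behaviour: generate a chunk, then write it, in turn."""
    return CHUNKS * (gen_s + write_s)

def pipelined_time(gen_s, write_s):
    """Double-buffered: only the first generation is not overlapped,
    after that the slower of the two stages sets the pace."""
    return gen_s + CHUNKS * max(gen_s, write_s)

print(serial_time(40, 40))     # 20080 - the first example
print(pipelined_time(40, 40))  # 10080 - with a second buffer
print(pipelined_time(20, 40))  # 10060 - faster GPU, same HDD
print(serial_time(20, 40))     # 15060 - faster GPU, no overlap
```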
What the hell is this supposed to mean:
Install the build-essential and g++ packets. Install OpenCL (available in the manufacturer SDKs). You may have to install the opencl headers ([apt-get install opencl-headers] on Ubuntu).
Modify the [PLATFORM] variable to one of  or  depending on the target platform. Modify the [OPENCL_INCLUDE] and [OPENCL_LIB] variables of the Makefile to the correct path. Example:
OPENCL_INCLUDE = /opt/AMDAPPSDK-2.9-1/include
OPENCL_LIB = /opt/AMDAPPSDK-2.9-1/lib/x86_64
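In practice the edited Makefile section ends up looking something like this. The AMD APP SDK path is just the README's example; substitute wherever your SDK or distro package installed the OpenCL headers and library (for instance, on Ubuntu the opencl-headers and ocl-icd packages put them under /usr/include and /usr/lib/x86_64-linux-gnu, though that layout is an assumption about your setup):

```
# Makefile fragment - adjust both paths to your OpenCL install
OPENCL_INCLUDE = /opt/AMDAPPSDK-2.9-1/include
OPENCL_LIB     = /opt/AMDAPPSDK-2.9-1/lib/x86_64
```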
Hello everyone. I decided to give this GPU plotting method a try. Once I got through the initial stages of prepping my system I recall there being a question of whether gpuPlotGenerator 4.0.3 supported CUDA or not.
It is OpenCL, but the NVIDIA CUDA drivers include OpenCL support.
So I came across 4 Fermi M2090, installed CentOS-6.9, the toolchain, …, and was able to build gpuPlotGenerator-4.0.4
#bin/gpuPlotGenerator.exe listDevices 0
Id: 3
Type: GPU
Name: Tesla M2090
Vendor: NVIDIA Corporation
Version: OpenCL 1.1 CUDA
Driver version: 375.66
Max clock frequency: 1301MHz
Max compute units: 16
Global memory size: 5GB 946MB 704KB
Max memory allocation size: 1GB 492MB 688KB
Max work group size: 1024
Local memory size: 48KB
Max work-item sizes: (1024, 1024, 64)
BUT: Plotting in buffer mode just hangs at the last "buffer block" - up to that point all is good, GPUs are busy, the output file grows. Then the last piece is never written, GPUs and plotter process are idle.
This command line produces a 14 GiB file and omits the last 2 GiB:
#bin/gpuPlotGenerator.exe generate buffer /tmp/12345678901234567890_738197504_65536_8192
bin/gpuPlotGenerator.exe: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by bin/gpuPlotGenerator.exe)
-------------------------
GPU plot generator v4.0.4
-------------------------
Author: Cryo
Bitcoin: 138gMBhCrNkbaiTCmUhP9HLU9xwn5QKZgD
Burst: BURST-YA29-QCEW-QXC3-BKXDL
----
Loading platforms...
Loading devices...
Loading devices configurations...
Initializing generation devices...
  Device: Tesla M2090 (OpenCL 1.1 CUDA)
  Device memory: 512MB
  CPU memory: 512MB
Initializing generation contexts...
  Path: /tmp/12345678901234567890_738197504_65536_8192
  Nonces: 738197504 to 738263039 (16GB 0MB)
  CPU memory: 2GB 0MB
----
Devices number: 1
Plots files number: 1
Total nonces number: 65536
CPU memory: 2GB 512MB
----
Generating nonces...
Oh, and I played around with devices.txt (single and multiple GPUs, various localWorkSize + hashesNumber values), to no avail:
0 0 2048 128 8192
0 0 2048 256 8192
0 0 4096 512 8192
...
globalWorkSize (corresponding to RAM on GPU) MUST be under 4 GiB, although these M2090 have 6. -?-
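One plausible explanation for the ceiling: OpenCL only guarantees that a single buffer can be CL_DEVICE_MAX_MEM_ALLOC_SIZE bytes, and the spec allows that to be as little as a quarter of global memory. The listDevices figures quoted earlier match that minimum exactly, as a quick sanity check shows (the byte counts below are just those figures re-entered, with 1 GB = 1024 MB):

```python
# listDevices reported: 5GB 946MB 704KB global memory,
# 1GB 492MB 688KB max memory allocation size
MB = 1024 * 1024
global_mem = (5 * 1024 + 946) * MB + 704 * 1024
max_alloc  = (1 * 1024 + 492) * MB + 688 * 1024

print(max_alloc / global_mem)  # 0.25 - the minimum the spec allows
```

So each buffer tops out around 1.5 GB regardless of the card's 6 GB; whether the 4 GiB total is this limit interacting with the plotter's buffer layout or a separate 32-bit size limit in the build, I can't say for sure.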
Yes, /tmp has 32 GiB of free space.
Ideas, anyone?
@Akito you have to install the build toolchain (compiler, kernel headers, NVIDIA drivers and cuda-devel) and build it yourself.
I did this (and I'm NOT a dev type) with the help of Google within 2 hours. Add 3 hours trying to install CentOS 7 (crashes?! wtf?) and going back to CentOS 6. Did I mention my hatred for Linux?
Or you can download the binaries.