For the second part of this GRC mining miniseries I would like to talk about getting the most out of your GPU in terms of the most meaningful mining metric - GRC/day. For most other cryptocurrencies, this takes the form of downloading the most recent miner (usually as suggested by the coin's wiki) and running the program. For Gridcoin, optimising yield is a little more complicated.
Before we begin, I would like to point out that this article will not address pet projects or which science is most worthwhile. There is definitely a range of projects, from pure theory to directly applicable medical research; however, the more interesting projects tend not to have the best return on investment. A lot of BOINC veterans (myself included) get around this by spending part of our compute on the projects we believe in and support the most, and another part on making a profit. In the end, it is the philanthropic approach that sets Gridcoin apart from the other 700+ altcoins - if you were 100% profit driven, you would likely be mining something like ETH instead.
Understanding Your GPU
GPUs come in two distinct flavours - single precision focused (FP32) and double precision focused (FP64). These mean exactly what you would expect: FP32 calculations use 32-bit floating point operations, while FP64 calculations use 64-bit floating point operations. Which can your graphics card do? In all likelihood, both.
All cards can carry out FP32 calculations at some base rate, and most can then carry out FP64 calculations in lieu of FP32 ones at between 1/32 and 1/2 of that rate in GFLOPS. As a general rule of thumb, if the FP64 rate is 1/4 of the FP32 rate or better, you will want to dedicate your card to an FP64 project. Otherwise, dedicate it to an FP32 project.
To find out how your particular GPU performs, find out the model and then look up the series on Wikipedia. If you are running Windows (which most of you are), the easiest way to do this is to hit Start, type 'run', and enter 'dxdiag'.
If you are asked whether or not you would like to check if your drivers are digitally signed, choose 'no'. You will now be presented with a screen like this:
Note how this screen lists a lot of your PC's specs, such as the OS, CPU and RAM. Navigate to the second tab, marked 'Display 1' to find out what GPU your machine has installed:
In my case the GPU is an NVIDIA Quadro 600, which is old and not of much use anymore. From here on, let's pretend I had a relatively common gaming card installed - a GeForce GTX 960. This GPU comes from the GeForce 900 series, so let's look that up on Wikipedia and scroll down to the products summary:
Unless you have a large screen you may have trouble reading those numbers, but you can click the image to go straight to the Wikipedia page. The numbers we are interested in are in the processing power columns. For the GeForce GTX 960 these show 2308 GFLOPS of FP32 and 72.1 GFLOPS of FP64. Therefore, we would want to task a GeForce GTX 960 with a single precision project.
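The rule of thumb from earlier can be expressed as a short calculation. This is just an illustrative sketch (the function name is my own invention), using the GTX 960 figures from the table above:

```python
def preferred_precision(fp32_gflops, fp64_gflops):
    """Return 'FP64' if the FP64 rate is at least 1/4 of the FP32 rate,
    otherwise 'FP32', per the rule of thumb above."""
    return "FP64" if fp64_gflops >= fp32_gflops / 4 else "FP32"

# GeForce GTX 960: 2308 GFLOPS FP32, 72.1 GFLOPS FP64
ratio = 72.1 / 2308
print(round(ratio, 3))                   # 0.031 -- roughly 1/32, well below the 1/4 cutoff
print(preferred_precision(2308, 72.1))   # FP32
```

Run the same check with your own card's two GFLOPS figures from Wikipedia to decide which way to go.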
Picking The Project
Having found out whether to apply your GPU to a single or double precision project, you now need to select the specific project to crunch. For FP64, your choices are severely limited - at the time of writing, MilkyWay@Home is essentially the only option. In the future, many more FP64 projects are likely to appear on the BOINC scene, as many modern modelling applications need FP64 accuracy.
If you need to select an FP32 project, your first step is to go to the Gridcoin Whitelist and check what GPU projects are available. Then, go to the Gridcoinstats Website and sort all the whitelisted projects by number of hosts.
In general, fewer hosts means less competition and thus a higher payout for the work you do. This is because the total GRC mined each day is split evenly across all whitelisted projects. As a result, your aim is to contribute the greatest possible percentage of any one project's total compute. Pick one of the least populated GPU projects from this list and assign your GPU to it. Good choices at the time of writing are PrimeGrid and Amicable Numbers.
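A rough back-of-the-envelope model shows why host count matters so much. Since each whitelisted project receives an equal share of the daily GRC, your payout scales with your fraction of that project's total credit. The pool figure and credit numbers below are made up purely for illustration:

```python
def estimated_grc_per_day(project_daily_pool, your_credit, project_total_credit):
    """Your share of a project's daily GRC pool, assumed proportional
    to your fraction of the project's total recent credit."""
    return project_daily_pool * your_credit / project_total_credit

pool = 1000.0  # hypothetical GRC/day allotted to each whitelisted project

# Same hardware and output, two differently crowded projects:
print(estimated_grc_per_day(pool, 5000, 2_000_000))  # crowded project -> 2.5
print(estimated_grc_per_day(pool, 5000, 200_000))    # quiet project   -> 25.0
```

The same daily output earns ten times more on the project with a tenth of the competition, which is why sorting the whitelist by host count is worth the effort.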
Some projects supply both CPU and GPU jobs. Because GPUs vastly outperform CPUs, the CPU jobs in these projects pay out very little and are not worth running. Make sure you elect not to receive such jobs through the project settings on your chosen project's home page.
While we do not want to run CPU-only jobs in a GPU project, no job is ever 100% GPU-based. Jobs always require some degree of CPU co-processing, which is why you will often see this in your BOINC manager:
This co-processing is required for your GPU to do its job. As such, it is important not to fully load your CPU with another project, as this will starve the GPU of co-processing time and effectively throttle it. In your BOINC settings, under Options --> Computing preferences, reduce the % of CPUs used until your PC is no longer running at 100% CPU load:
The goal is for your CPU load to remain high, but not repeatedly cap at 100%. It should look something like this:
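A simple starting point for that percentage is to reserve one logical core per concurrent GPU task. This is just a sketch of that heuristic (the exact reservation your project needs may differ, so fine-tune by watching your CPU load as described above):

```python
import os

def boinc_cpu_percent(gpu_tasks=1, cores=None):
    """Suggest a '% of CPUs' value for BOINC's computing preferences,
    reserving one logical core per running GPU task."""
    cores = cores or os.cpu_count()
    usable = max(cores - gpu_tasks, 1)  # always leave at least 1 core for CPU work
    return 100.0 * usable / cores

# e.g. an 8-thread CPU feeding one GPU task:
print(boinc_cpu_percent(gpu_tasks=1, cores=8))  # 87.5
```

On an 8-thread machine this suggests 87.5%, i.e. seven threads for CPU projects and one free to feed the GPU.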
Further GPU Optimisation
Once you have gone through the above steps you can further optimise your GPU performance by doing one or more of the following:
- Overclocking and overvolting your GPU
- Actively electing whether to run OpenCL or CUDA jobs
- Running multiple projects concurrently on each GPU to minimise downtime
However, these are all outside the scope of this article. If you have any questions about them, feel free to leave a comment below and I would be happy to help. If you are serious about getting the most mag out of your card, I would recommend Vortac's series on GPU mining.
Good luck, and ask if you would like any more help!