Time and Work Based Model for GridCoin Reward Mechanism

in #gridcoin · 6 years ago (edited)

This post is a continuation of a discussion @ilikechocolate started in Researching a FLOP and Energy Based Model for GridCoin Reward Mechanism

Problem

There are big differences in the number of credits that various projects award their solvers. Gridcoin's current solution is to reward each project with the same amount of GRC. As a result, members of popular projects are rewarded less than members of less popular projects.

Source of the problem

Historically, the BOINC credit system is based on FLOPS. However, other factors affect application performance, such as memory references, cache usage and parallelization of workunits. GPUs may have ~100x higher peak FLOPS than CPUs, yet application efficiency can be as low as 10%-20% on GPUs, or 30%-50% on CPUs. GPU projects usually reward far more credits than CPU projects. Credit awards are not uniform between BOINC projects, and the admins of particular projects are free to adjust their reward schemes.
The situation is further complicated by the fact that there are three main types of projects: CPU only, GPU FP64 and GPU FP32.
TL;DR: There is no satisfactory normalization of credit awards between projects.

Fair solution requirements

A credit reward system should fulfil the following requirements:

  • be hardware independent: similar workunits should be rewarded with similar credit

  • cross-project congruency: the same CPU (or GPU) should earn the same amount of credits on different projects (in a properly configured system with adequate resources such as RAM)

  • CPU projects should reward an adequate number of credits compared to GPU projects – the basis could be the cost of purchasing and running the equipment

Solution

Ideally, the proper solution would be implemented within the BOINC platform itself. The proposed solution can, however, be implemented either internally or externally.
Credits would be awarded based on actual work done, measured by the time needed to complete the work on reference hardware.

  • reference hardware (RH) – proper, medium to high end hardware should be chosen for each of the three types of projects; for CPU projects it could be a Ryzen 7 processor with 16 GB RAM, for GPU FP32 an NVIDIA GTX 1060 or 1080 graphics card.
  • one hour of run time on the reference hardware would be awarded one credit on each of the eligible projects (for example, for the Ryzen 7 that would be all CPU-only projects);

To implement this solution outside of the BOINC platform, the following adjustments are needed:

  • reference project (RP) – the reference hardware would be awarded one credit for one hour of run time on the RP
  • extra cross-project normalization – BOINC-credits rewarded by project X (PX) would be multiplied by a proper factor (mPX):

mPX = hourly-credit(RP, RH) / hourly-credit(PX, RH)

Credit earned (per hour) in project X by hardware HX, writing Bc for BOINC-credit:

Credit(PX, HX) = mPX * ( Bc(PX,HX) / Bc(RP,RH) )

Credit earned between superblocks (DC):
DtS – time between superblocks in hours

DC = DtS * Credit(PX, HX)
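
A minimal sketch of these formulas in Python (the function and variable names are mine and purely illustrative; nothing here exists in Gridcoin or BOINC code):

```python
def normalization_factor(hourly_bc_rp_rh, hourly_bc_px_rh):
    """mPX: hourly BOINC-credit of the reference hardware (RH) on the
    reference project (RP) divided by its hourly BOINC-credit on project X."""
    return hourly_bc_rp_rh / hourly_bc_px_rh

def hourly_credit(m_px, bc_px_hx, bc_rp_rh):
    """Credit(PX, HX): normalized credit per hour earned by hardware HX on
    project X, relative to the reference hardware on the reference project."""
    return m_px * (bc_px_hx / bc_rp_rh)

def credit_between_superblocks(dts_hours, credit_px_hx):
    """DC: credit earned between superblocks that are DtS hours apart."""
    return dts_hours * credit_px_hx
```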

Example 1 – GPU projects

Let's assume the GTX 1060 is the reference card and it is awarded on average 17k BOINC-credits per hour on Amicable Numbers. We nominate Amicable Numbers as the reference (benchmark) project (RP). We award 1 credit for 17k BOINC-credits per hour. Now we want to find the number of credits a GTX 960 should be awarded on the Einstein@Home (e@h) project. It can achieve 7.6k BOINC-credits per hour at e@h, thus

m(e@h) = 17k / 12.2k ≈ 1.4
Credit(e@h, GTX 960) = 1.4 * ( 7.6k / 17k ) ≈ 0.62
Credit(e@h, GTX 1060) = 1.4 * ( 12.2k / 17k ) = mPX * (1 / mPX) = 1

The credit unit has been set by definition as 1 per hour of work of the reference hardware on the reference project. In fact, the calculations can be simplified and we can use RAC or TCD directly.
Provided that the RH runs 24 hours a day and achieves its maximum RAC on both projects:

mPX = RAC(RP, RH) / RAC(PX, RH)

As the total RAC for a project may vary due to the variable nature of the time workunits need to be completed, something like a 7-day average might be a better choice for mPX.

Credit(PX, HX) = mPX * ( RAC(PX,HX) / RAC(RP,RH) )

Credit(e@h, GTX 960) = 1.4 * ( 182k / 412k) = 0.62
Credit(e@h, GTX 1060) = 1.4 * ( 293k / 412k) = mPX / mPX = 1

A GTX 960 would be able to earn up to 24 * 0.62 = 14.88 credits a day, while a GTX 1060 would earn 24 * 1 = 24 credits a day.
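
For illustration, the same RAC-based calculation as plain Python arithmetic, using only the figures quoted in this example:

```python
# RAC figures quoted above: GTX 1060 on Amicable Numbers (RP) ~412k,
# GTX 1060 on e@h ~293k, GTX 960 on e@h ~182k.
m_eah = 412_000 / 293_000                    # mPX, roughly 1.4

credit_960  = m_eah * 182_000 / 412_000      # ~0.62 credits per hour
credit_1060 = m_eah * 293_000 / 412_000      # exactly 1 credit per hour

print(round(24 * credit_960, 1), 24 * credit_1060)   # ~14.9 vs 24 credits per day
```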

With the above method cross-project rewards can be normalised and similar hardware will receive similar rewards per hour of completed work.

Example 2 – CPU project

Let's assume a Xeon E6545 is the reference processor. It can earn around 430 BOINC-credits per hour at yoyo@home. As it is chosen as the reference processor, its earnings in new credits are normalized to 1 per hour. Therefore it can earn as many credits at yoyo as a GTX 1060 at any FP32 GPU project.
A Ryzen 7 1700 has 16 threads and can probably earn around 1000 BOINC-credits per hour. This allows for ~56 new credits a day. However, if the Ryzen 7 were chosen as the reference computer, it would earn 24 credits a day and the mentioned Xeon just around 10.
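
The same arithmetic for the CPU case, again only a sketch using the estimated hourly figures above:

```python
xeon_bc_per_hour  = 430    # reference: the Xeon at yoyo@home
ryzen_bc_per_hour = 1000   # rough estimate for a Ryzen 7 1700 at yoyo@home

# Xeon as reference: it earns 1 credit per hour by definition.
ryzen_per_day = 24 * ryzen_bc_per_hour / xeon_bc_per_hour   # ~56 credits a day

# Ryzen as reference instead: the Xeon drops to ~10 credits a day.
xeon_per_day = 24 * xeon_bc_per_hour / ryzen_bc_per_hour    # ~10 credits a day
```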

Closing words

The main problem with benchmarking is that the BOINC system has many variables and we want to reduce them to just one. I think that is impossible to achieve, and the closest we can get to a satisfactory solution might be to use real reference hardware for benchmarking, as proposed above.

You might also be interested in:
Counting FLOPSs in a FLOPS – part 1
Counting FLOPSs in a FLOPS – part 2

Comments

One of the issues with this is that performance on a given CPU project is not determined only by CPU performance. The primary example of this is VGTU, which has a strong dependence on memory bandwidth.

i.e. you would need to control for memory type, frequency, and number of channels (single, dual, quad).

On the contrary, the proposed solution would address this problem. It seems I cut some text off before publishing, so it might not be obvious. A graphics card or a processor cannot run by itself, but they are the crucial factors, which is why I have focused on them in the post. The point is to run the same or a very close configuration in each of the respective classes of projects. The chosen configuration's performance is the benchmark for credit normalization. The basic requirement is that the GPU or CPU is not starved of other resources. The topic needs further research.

Makes sense. My point, however, is to be careful and not assume that any of the task metrics are consistent between projects, or that project performance depends on the piece of hardware you think it does. Use a large variety of reference machines.

I've been working on making a magnitude/RAC calculator and it's become apparent that some of the projects don't even report CPU/wall time the same way as other projects.

Nice post! How are you thinking the benchmarking could happen securely? One could imagine unscrupulous individuals skewing the inter-project or inter-hardware conversion factors by running on poorly configured or extremely overclocked hardware. (Or by faking their hardware altogether, although that's a different issue.)

The situation seems to be this: we want conversion factors between hardware and projects to get a fair credit-rewarding system. This could be done in a centralized manner, within the projects themselves or by community-approved individuals; or in a decentralized manner, as you and @ilikechocolate have been exploring. While the centralized solution has its own problems, the decentralized solution certainly must demonstrate that it is robust against a minority of unscrupulous network participants.

How are you thinking the benchmarking could happen securely?

That's the tough part. Ideally, maybe the BOINC project admins and the Gridcoin Foundation would run several reference computers with known specs so 'everyone' could re-check. Thus the hardware should be quite affordable (consumer rather than prosumer grade) and popular, so it is easy to purchase.

I consider my model to be rather on the centralized, not the decentralized, side – or maybe in-between.

We could think of a decentralized model where the blockchain would reward those who are closest to max RAC... or something like that.

We are looking at how to optimise a system built on the quite broken BOINC reward system and the partial data available... It's quite hopeless to build on bad foundations. We should either have a partnership with the BOINC developers or have our own distribution of the BOINC platform.

Still, with reference computers we could maybe get as close as possible to optimal rewarding.

Got it. So your proposal is more along the centralized lines. We already put a certain amount of trust in project admins, so having them run reference hardware (which anyone could double-check) is not a huge stretch from the current implementation.

We are looking at how to optimise a system built on the quite broken BOINC reward system and the partial data available... It's quite hopeless to build on bad foundations. We should either have a partnership with the BOINC developers or have our own distribution of the BOINC platform.

I definitely agree with this for the long-term! In the short-term, hopefully we can at least find a partial solution.

You are now looking at credits per project, when in fact there are huge differences between subprojects and the tasks done. Some tasks do not stress a GPU 100%, and running parallel tasks changes the results. There are also massive differences caused by the silicon lottery and other computer settings.

The idea is good, but based on the above, I don't think we can really find a "reference" situation anywhere.

I think the current system is good for CPU- and GPU-specific projects, but I agree that it's counterproductive that CPUs have to stick to CPU projects only.

Maybe one solution would be just to separate CPU and GPU tasks in magnitude calculations project-wise, and just use the current system otherwise.

Btw: you say "As a result, members of popular projects are rewarded less than members of less popular projects." I think this is a good thing, which leads to work being split between projects, instead of everyone crunching on just one project.

huge differences between subprojects

First we need to even up the situation between projects. For yoyo I've tested 2 subprojects and in that case the rewards are consistent. Without direct control of the BOINC platform, improving subproject scoring might be impossible.

... I agree that it's counterproductive that CPUs have to stick to CPU projects only.

Did you mean it's counterproductive for CPUs to work on GPU projects?

I think this is a good thing, which leads to work being split between projects, instead of everyone crunching on just one project.

I've researched and proposed the above solution, but I'm hesitating over whether I would prefer this type or the current one. The one we have is a bit like a communist system project-wise and a capitalist system user-wise. A cleaner or surgeon working in Ukraine or India gets 10 times less per hour than a cleaner or surgeon working in the UK or Japan. This is the kind of situation we have in Gridcoin-BOINC.

BTW, I think it's much easier to win a lottery than to have all users switch to one project.

P.S. We could also use a hybrid system: let's say 10k GRC are shared equally between projects as they are now, and 40k are paid for work done, calculated as in the above model.
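
A rough sketch of how such a hybrid payout could be computed (the pool sizes are the figures mentioned above; the function and its inputs are purely hypothetical):

```python
EQUAL_POOL = 10_000   # GRC shared equally between projects, as today
WORK_POOL  = 40_000   # GRC paid in proportion to normalized credit (model above)

def hybrid_payout(n_projects, user_share_in_project, user_share_of_network_credit):
    """Combine today's equal per-project split with the work-based model."""
    equal_part = EQUAL_POOL / n_projects * user_share_in_project
    work_part = WORK_POOL * user_share_of_network_credit
    return equal_part + work_part
```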

Excellent post!

Some thoughts:

I'm not sure that taking only a single piece of reference hardware is enough. I solved this problem by taking a weighted average of all the hardware in the network, but this is definitely not the only way to do it. Since different architectures will perform differently, maybe the correct approach is to have reference hardware for each architecture family, but I'm not sure just one for the whole network will suffice.

Let's assume the GTX 1060 is the reference card and it is awarded on average 17k BOINC-credits per hour on Amicable Numbers. We nominate Amicable Numbers as the reference (benchmark) project (RP). We award 1 credit for 17k BOINC-credits per hour. Now we want to find the number of credits a GTX 960 should be awarded on the Einstein@Home (e@h) project. It can achieve 7.6k BOINC-credits per hour at e@h, thus

You didn't mention this (I don't think), but am I understanding correctly that credits for the 1060 on Einstein@home = 12k? And you're saying that 17k on Amicable Numbers = 12k on Einstein@home, and so the rest of the calculations follow from there? (and ultimately, we only need the information we already have publicly available?)

As it is chosen as the reference processor, its earnings in new credits are normalized to 1 per hour. Therefore it can earn as many credits at yoyo as GTX1060 at respective FP32 GPU.

I'm not sure I understand this fully. Can you please elaborate on the equivalence between CPUs and GPUs?

And you're saying that 17k on Amicable Numbers = 12k on Einstein@home, and so the rest of the calculations follow from there? (and ultimately, we only need the information we already have publicly available?)

Only partly, as from project data (RAC, BOINC credits) you don't know how many hours per day a card is working, plus other unknowns. Thus it would need to be averaged by the runtime reported by particular projects, assuming these are consistent between projects, i.e. counted the same way. Without control of the reference computer it is like chasing a carrot on a stick, the way WINE on Linux chases MS Windows compatibility and will never get there. I run both of these projects, so I can make quite accurate calculations.

I'm not sure I understand this fully. Can you please elaborate on the equivalence between CPUs and GPUs?

It should be: "Therefore it can earn as many credits at yoyo as a GTX 1060 at any FP32 GPU project." Equivalence between CPU and GPU: a computer based on a Ryzen 7 1700 and a GTX 1070 should earn a similar amount of credits on both a GPU FP32 project and a CPU project. How to choose the reference CPU and GPU would need further research and community agreement.

Of course, GTX 1060 is shorthand for a GTX 1060 based computer whose other components do not hinder the card's performance.

Equivalence between CPU and GPU - a computer based on Ryzen 7 1700 and GTX 1070 should earn similar amount of credits on both GPU FP32 project and CPU project.

Could you give an example?
