
Sure, I did this for another application a day ago in Python, for the time span from Jan 1st, 2018 to yesterday:

from steemdata import SteemData
import datetime as dt

s = SteemData()
# all fill_vesting_withdraw operations since Jan 1st, 2018
timerange = {"$gt": dt.datetime(2018, 1, 1, 0, 0, 0)}
ops = list(s.Operations.find({'type': 'fill_vesting_withdraw',
                              'timestamp': timerange}))
rates = []
dates = []
for r in ops:
    deposited = float(r['deposited']['amount'])  # STEEM received
    withdrawn = float(r['withdrawn']['amount'])  # VESTS withdrawn
    rate = deposited / withdrawn * 1e6           # STEEM per MVESTS
    date = r['timestamp']
    # [few filters hidden for simplicity here]
    rates.append(rate)
    dates.append(date)

[Plot: steem_per_mvest.png, STEEM per MVESTS over time]

The outliers are cases where withdraw vesting routes with less than 100% were used, so some of the Vests from one account went as STEEM or SP into another account. I think the upper limit of these values per day/hour/... should give a reasonably precise value.
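
For illustration, here is a minimal sketch of that reduction, assuming the rates and dates lists from the snippet above and that pandas is available (the variable names here are mine, not part of the original code):

import pandas as pd

# pair each per-operation rate with its timestamp and take the daily maximum;
# the maximum filters out the low outliers caused by partial withdraw routes
series = pd.Series(rates, index=pd.DatetimeIndex(dates))
steem_per_mvests_daily = series.resample('D').max()
print(steem_per_mvests_daily.tail())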

Cool, thank you.
But if I understand it right, SteemData is a MongoDB server, and therefore the data does not come directly from the blockchain.
https://steemit.com/steemdata/@furion/getting-started-with-steemdata

Yes, that's a MongoDB server. It contains the same information as the blockchain, but it's orders of magnitude faster than streaming the blocks directly. If you wanted to get it directly from the blockchain, you could think about streaming a couple of full blocks around each reward claim to catch a fill_vesting_withdraw operation close by. But your steemdb API approach is in that case probably simpler...
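
For completeness, a rough sketch of the direct-from-the-blockchain variant with steem-python; this simply streams fill_vesting_withdraw operations rather than full blocks, and it assumes the streamed ops carry the raw amount strings (e.g. "1.234 STEEM", "1234.567890 VESTS"):

from steem.blockchain import Blockchain

b = Blockchain()
# stream operations and keep only fill_vesting_withdraw
for op in b.stream(filter_by=['fill_vesting_withdraw']):
    deposited = float(op['deposited'].split()[0])  # STEEM (assumed string format)
    withdrawn = float(op['withdrawn'].split()[0])  # VESTS (assumed string format)
    if withdrawn > 0:
        rate = deposited / withdrawn * 1e6         # STEEM per MVESTS
        print(op['timestamp'], rate)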
