📈 True Profitable Cryptocurrency Trading 💰 - [RESEARCH PT 4.]

in #money · 6 years ago

bit.png


In the past articles I have shown you the Monte Carlo analysis from the perspective of the profit, but that is just one dimension. Another dimension is the initialization factor, the lag itself.

The forecasting system is adaptive and needs multiple lags to initialize itself, so I have run a 3rd simulation, this time focusing on the initialization lag count itself.
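The exact forecasting model stays confidential, but here is a minimal sketch of what such a lag-count sweep looks like in principle. The price series and the `run_strategy` body are placeholder assumptions, not the real model; only the sweep structure is the point:

```python
# Minimal sketch of the lag-count sweep. The real adaptive forecaster is not
# shown here; run_strategy is a placeholder that just has to return a profit
# figure for a given number of initialization lags.
import numpy as np

rng = np.random.default_rng(42)
prices = 1000 + np.cumsum(rng.normal(0, 1, 5000))   # stand-in price series

def run_strategy(prices, init_lags):
    """Placeholder: initialize on `init_lags` observations, trade on the rest."""
    test = prices[init_lags:]
    # ...the real adaptive model would forecast and trade here...
    return np.sum(np.diff(np.log(test)))             # dummy profit statistic

for init_lags in range(50, 2001, 50):
    print(f"init lags = {init_lags:5d}  profit = {run_strategy(prices, init_lags):+.5f}")
```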


3rd Simulation

Here is a chart showing the Profit vs. the Number of Trades; as you can see, the relationship is unquestionable:

profitvstrades.png

And this is a chart showing the same thing but sorted by the LAG, in ascending order, so the smaller the LAG the higher the profit:

sortedbylag.png

Of course I can't show confidential specifics, but as you can see, using fewer lags to initialize the system is better than using the entire dataset. So the system can adapt itself even from smaller data; it doesn't need all of it. And the heteroskedasticity, the border between the sub-distributions, shifts in smaller chunks rather than larger ones.

|   | Sim1 | Sim2 | Sim3 (This) |
|---|------|------|-------------|
| Average LN Error | 0.04664 | 0.04647 | 0.041750 |
| STDEV LN Error | 0.011848 | 0.01289 | 0.004079 |
| Average Profit | 0.01228 | 0.01203 | 0.019226 |
| STDEV Profit | 0.010563 | 0.01064 | 0.011338 |

Of course this 3rd simulation was done on the entire dataset, and here the variable was the lag initialization period itself.
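For reference, the table rows above are just the mean and sample standard deviation of the per-run LN error and per-run profit. A minimal sketch of that summary step, with dummy per-run arrays standing in for the real simulation output (I'm also assuming the LN error is an absolute log forecast error, which isn't spelled out here):

```python
# Sketch of the per-simulation summary used in the tables: average and standard
# deviation of the LN error and of the profit over all Monte Carlo runs.
import numpy as np

def summarize(label, ln_errors, profits):
    print(label)
    print(f"  Average LN Error: {np.mean(ln_errors):.6f}")
    print(f"  STDEV LN Error:   {np.std(ln_errors, ddof=1):.6f}")
    print(f"  Average Profit:   {np.mean(profits):.6f}")
    print(f"  STDEV Profit:     {np.std(profits, ddof=1):.6f}")

rng = np.random.default_rng(0)
# Dummy per-run results, roughly shaped like the Sim3 row
summarize("Sim3 (dummy data)",
          np.abs(rng.normal(0.0418, 0.004, 1000)),
          rng.normal(0.0192, 0.0113, 1000))
```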

The probability of a losing strategy in this dimension is only 4.496560%. So if we just randomly pick an (incorrect) lag, we only have about a 4.5% probability of losing in the end.

Whereas if we vary the size of the sample itself, then it's considerably higher. This doesn't mean that we should keep the sample size fixed, since then the parameter itself might vary.
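The loss probability quoted above is simply the share of Monte Carlo runs that end with a non-positive profit. A quick sketch; the profit samples here are synthetic, drawn with the Sim3 mean and standard deviation, so the result only roughly reproduces the ~4.5% figure:

```python
# Probability of a losing strategy = fraction of runs with profit <= 0.
import numpy as np

def probability_of_loss(profits):
    return np.mean(np.asarray(profits) <= 0.0)

rng = np.random.default_rng(1)
profits = rng.normal(0.019226, 0.011338, 100_000)   # synthetic, using the Sim3 row
print(f"P(loss) ~ {probability_of_loss(profits) * 100:.4f}%")   # lands near ~4.5%
```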

So instead I'll just integrate the optimal lag into the system and do a 4th simulation with those optimized values. In fact, I have to re-optimize the system for that lag itself.


4th Simulation

Actually I have done another simulation, and this time it's optimized to the current best model, so let's compare the last two LAG-based simulations:

|   | Sim3 | Sim4 |
|---|------|------|
| Average LN Error | 0.041750 | 0.04185 |
| STDEV LN Error | 0.004079 | 0.00411 |
| Average Profit | 0.019226 | 0.02515 |
| STDEV Profit | 0.011338 | 0.01377 |

I have lowered the probability of a loss outcome, based on random LAGs, from 4.496560% to 3.392724% in this latest research phase.


5th Simulation

Now let's compare the optimized output from the 4th simulation to the 1st and 2nd simulations, which were based on random chunks of random price data:

|   | Sim1 | Sim2 | Sim5 (This) |
|---|------|------|-------------|
| Average LN Error | 0.04664 | 0.04647 | 0.047638 |
| STDEV LN Error | 0.011848 | 0.01289 | 0.013280 |
| Average Profit | 0.01228 | 0.01203 | 0.014655 |
| STDEV Profit | 0.010563 | 0.01064 | 0.013013 |
| Probability of Loss | 12.25064% | 12.91035% | 13.00443% |

So we now have a probability of losing money of 13.003859%, a little bit higher than previously, but it might just be random fluctuation, since the convergence towards this optimum is undeniable:

profit.png

The profit increases with the number of trades, obviously, while the expectancy is actually the inverse of this. But nonetheless we have a convergence towards profit the more samples we have, so it's not stagnating as it would with a zero-expectancy model; the profit is converging above 0.
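One way to picture that convergence is to track the running mean of the per-trade profit and compare it against a zero-expectancy benchmark. This is only an illustrative sketch with synthetic trade results, not the simulation output itself:

```python
# Running mean of per-trade profit: with positive expectancy it settles above
# zero as trades accumulate; a zero-expectancy model just hovers around zero.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
edge = rng.normal(0.0147, 0.013, n)      # synthetic trades with a small edge
fair = rng.normal(0.0,    0.013, n)      # zero-expectancy benchmark
counts = np.arange(1, n + 1)

running_edge = np.cumsum(edge) / counts
running_fair = np.cumsum(fair) / counts
for k in (100, 1000, 5000):
    print(f"after {k:5d} trades: edge model {running_edge[k-1]:+.5f}  "
          f"zero-expectancy {running_fair[k-1]:+.5f}")
```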

But it’s actually the LN ERROR values that show this more obviously:

converge.png

It's literally spiraling towards the mean the more data we add to it. And keep in mind this is sorted by the size of the dataset, from the smallest to the largest, but the interval is fixed while the start and end points are random.

So it doesn't matter where we start or end (random sampling): if the sample size is small, then the error is big, and if it's big, then the error is small.
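The sampling scheme behind that chart can be sketched as follows: windows of a given size are cut out of the price series at random start points, and an error figure is measured on each. The `forecast_error` below is a placeholder (standard deviation of log returns), since the real LN error comes from the confidential model; the point is only that small windows give noisy, widely scattered error estimates while large windows tighten around the mean:

```python
# Random-window sampling: fixed window size, random start point, error per window.
import numpy as np

rng = np.random.default_rng(3)
prices = 1000 + np.cumsum(rng.normal(0, 1, 20_000))   # stand-in price series

def forecast_error(window):
    """Placeholder error metric standing in for the real LN error."""
    return np.std(np.diff(np.log(window)))

for size in (250, 1000, 4000, 16000):
    errors = []
    for _ in range(200):
        start = rng.integers(0, len(prices) - size)    # random start point
        errors.append(forecast_error(prices[start:start + size]))
    print(f"window = {size:6d}  mean error = {np.mean(errors):.5f}  "
          f"spread = {np.std(errors):.5f}")
```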

So this now proves beyond a shadow of a doubt that the forecasting system is adaptive, and its accuracy grows as a function of the price data it is given.



Sources:
https://pixabay.com


Upvote & ReSteem!

