Do I really need 64GB of RAM for my witness nodes?

in witness-update •  16 days ago

TL;DR: no.

32GB servers with fast storage (SSD or NVMe) are still good enough to run low-memory nodes. The trick is to keep the shared file in RAM (on a tmpfs device) and to locate SWAP on one or more fast disks. This approach was also described by @gtg in his post Steem Pressure #3, which I highly recommend reading.

From the configuration standpoint, we only need to make sure that the tmpfs and SWAP volumes have enough room to hold the shared file. The Linux kernel takes care of the rest and starts paging when required; in short, paging is the process of freeing up RAM by moving data out to disk.

My setup

By default on Linux, the size of /dev/shm (tmpfs) is half of the available RAM, so a 32GB server gets a 16GB tmpfs. During my tests the shared file already exceeded 33GB, so before running steemd I need to remount /dev/shm with a bigger size; I will use 48GB for it and 32GB for SWAP.

# mount -o remount,size=48G /dev/shm/
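If the server's existing SWAP partition is too small, a swap file on the fast disk can provide the missing room. A minimal sketch, assuming a 32G swap file at /swapfile on the SSD (the path and size are my assumptions, not from the post); run as root:

```shell
# Create a 32G swap file on the fast disk and enable it
fallocate -l 32G /swapfile        # reserve the space
chmod 600 /swapfile               # swap files must not be world-readable
mkswap /swapfile                  # format it as swap
swapon /swapfile                  # enable it

# Verify that tmpfs and SWAP now have enough room for the ~33GB shared file
df -h /dev/shm
swapon --show
```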

How long does it take to replay 91GB of blockchain?

Replay time depends heavily on the hardware, so please don't take the numbers below too literally, but I hope they give you at least some kind of reference.

I used 4 dedicated servers for my tests:

Disk configuration                         | Memory | CPU              | Replay time (s)
1xSSD / SAMSUNG MZ7LN256HMJP               | 32 GB  | Atom C2750       | 35708
2xSSD (RAID-0) / Samsung SSD 850 EVO 250GB | 64 GB  | Xeon D-1540      | 12762
2xNVMe (RAID-0) / INTEL SSDPE2MX450G7      | 32 GB  | Xeon E3-1245 v6  | 10234
4xHDD (RAID-0) / WDC WD10EZEX-00BN5A0      | 32 GB  | i7-4790          | 21456
Disk configuration                         | Disk read speed | Disk write speed | Passmark | Passmark single core
1xSSD / SAMSUNG MZ7LN256HMJP               | 271.61 MB/sec   | 2.0 MB/s         | 3805     | 582
2xSSD (RAID-0) / Samsung SSD 850 EVO 250GB | 925.36 MB/sec   | 900 kB/s         | 10573    | 1344
2xNVMe (RAID-0) / INTEL SSDPE2MX450G7      | 1.4 GB/s        | 43.2 MB/s        | 10410    | 2191
4xHDD (RAID-0) / WDC WD10EZEX-00BN5A0      | 678.43 MB/sec   | 126 kB/s         | 9998     | 2285

READ: hdparm -t / WRITE: dd if=/dev/zero of=tst.tmp bs=4k count=10k oflag=dsync
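Those two measurements can be reproduced with standard tools. A sketch (the device name /dev/sda is an assumption, adjust to your disk; hdparm needs root):

```shell
# Sequential read: timed buffered disk reads
hdparm -t /dev/sda

# Synchronous write: 10k blocks of 4k each, flushed to disk on every write
dd if=/dev/zero of=tst.tmp bs=4k count=10k oflag=dsync
rm -f tst.tmp
```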

The clear winner is the server with NVMe disks, where the full replay finished in about 3 hours, but I was also very positively surprised by the performance of the server with 4 HDD drives.

All the data below comes from the weakest server, 1xSSD / Atom C2750 ;-)

Memory, I/O and disk utilization

dstat running during replay process (1xSSD/Atom C2750)

SWAP paging (1xSSD/Atom C2750)

/dev/sda throughput (1xSSD/Atom C2750)

/dev/sda utilization (1xSSD/Atom C2750)

Beginning of the SWAP paging (1xSSD/Atom C2750)

$ free -m
              total        used        free      shared  buff/cache   available
Mem:          32094         329         305       29478       31459        1815
Swap:         30517        4288       26229

Memory usage during regular work

  • during the replay, the READ/WRITE ratio is ~3:1 (reading the 91GB blockchain / writing the 33GB shared file)
  • after the replay, paging goes back to a minimal level
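After the replay, a quick way to confirm that paging has settled is to check how much SWAP is actually in use; a small sketch reading /proc/meminfo (Linux only):

```shell
# Compute SWAP in use (kB) from /proc/meminfo: SwapTotal minus SwapFree
awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {printf "swap used: %d kB\n", t-f}' /proc/meminfo

# For a live view of paging activity, watch the si/so columns:
# vmstat 5
```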


My biggest problem was convincing myself to let the servers use SWAP, because for many years I have been learning how to optimize various systems, and SWAP paging was always something I didn't want to see... but in this case I'm going to turn a blind eye and continue using some of my 32GB nodes.

The data and graphs speak for themselves ;-)

If my contribution is valuable for you, please vote for me as witness.
May The Steem Be With You!


I have always kept shared-memory on disk. Not even an NVMe, just a regular SATA disk on RAID0. Disk usage never goes above 5%, though I'm sure there may be spikes. Never had any issues thus far.

My RAM usage has never exceeded 2 GB.

I do have a backup node with NVMe testing AppBase 0.19.4. Tested /dev/shm, couldn't think of a single benefit, so I'll stick with NVMe for now. After NVMe comes Optane. Using RAM seems a long, long way away.

Of course, it's a different story for a full node.


Yikes, I'm not sure it's reliable to run it on a regular HDD. Is that your backup node? Check your latencies; I bet you're hitting very high numbers every once in a while (i.e. it will cause missed blocks).


Short spikes are fine ;-) I hope someone running a full node will share their best practices ;-) I would love to read about the configuration, requirements and daily issues ;-)

Do you remember how long the replay takes when the shared file is on disk?



This post is yet another example that follows Betteridge's law of headlines.



I didn't know a law like this even existed... ;-) thx, I learned something new ;-)

To the question in your title, my Magic 8-Ball says:

It is decidedly so

Hi! I'm a bot, and this answer was posted automatically. Check this post out for more information.

I could use a spare 8 gigs for my Mac mini, jk.
Off topic, but since it's techie-friendly: why don't you have any community projects? It's usually the main reason for minnows to vote. I know our votes don't mean much, but that popularity may help in the end.