A few days ago, @Nexusprime released a very useful script called CPU_QuickMag that estimates the magnitude of all whitelisted projects (multiply the magnitude by 0.225, a factor that may change over time, to get the estimated GRC per day). This repo is quite a breakthrough: before it came out, GRC PoR miners had no quantitative way to estimate expected magnitudes other than testing every project manually.
However, I have found room for improvement in the initial database update used by the various scripts. UpdateDatabaseFiles.sh, which downloads the team and host statistics databases, gives no indication of whether a download has failed, nor of each download's speed. Since some database files are much larger than others, users are left waiting without knowing whether a download has failed or is simply slow.
Instead of showing only the overall percentage of completed downloads, a more verbose approach would be to surface part of wget's own output on stdout, for example behind an additional option such as -v passed at script execution. That way users can confirm a download is still progressing rather than having failed. Once all downloads finish, the wget messages could be cleared and replaced with a short summary. If a download fails, the script should detect the failure and restart that download automatically, so that users do not have to rerun the whole script and re-download every database.
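A minimal sketch of what I mean, assuming a per-file download helper (the function name, URL, and retry count below are placeholders I made up, not the actual code in UpdateDatabaseFiles.sh):

```shell
#!/bin/bash
# Hypothetical sketch: download one statistics database with a visible
# progress bar and retry automatically when wget reports a failure.
fetch_with_retry() {
    local url="$1" out="$2" max_retries="${3:-5}"
    local attempt=1
    # --show-progress keeps a progress bar visible even with -q (wget >= 1.16);
    # --tries=1 lets this loop, not wget itself, control the retry policy.
    until wget -q --show-progress --tries=1 -O "$out" "$url"; do
        local status=$?
        if [ "$attempt" -ge "$max_retries" ]; then
            echo "FAILED: $url after $max_retries attempts (wget exit $status)" >&2
            return 1
        fi
        echo "Retrying $url (attempt $attempt/$max_retries, wget exit $status)..." >&2
        attempt=$((attempt + 1))
        sleep 2
    done
    echo "OK: $out"
}

# Example (placeholder URL):
# fetch_with_retry "https://example.com/stats/team.gz" "team.gz"
```

With something like this, only the failed file is retried, and the progress bar makes a stalled or slow download immediately visible.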
Mockups / Examples
With these problems resolved, users could check the status of each download as it runs, see a download restart automatically after a network interruption, and see an estimate of the time remaining until all downloads complete.
Thank you very much.
EDIT: @Nexusprime stated that the download does not hang indefinitely, so I have fixed the related information.
Posted on Utopian.io - Rewarding Open Source Contributors