The convention most people learn is to round up when the digit being dropped is a 5 ("round half up"). However, different software may or may not follow this convention.
For example, in Python 2 the call `round(.125, 2)` returns .13, but in Python 3 the same call returns .12.
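A minimal sketch of the difference: Python 3's built-in `round` uses round-half-to-even ("banker's rounding"), so exact halves go to the nearest even digit rather than always up. (0.125 and 0.375 are exactly representable in binary, so these really are ties.)

```python
# Python 3: round-half-to-even on exact ties.
print(round(0.125, 2))  # -> 0.12 (ties to the even digit 2, not up to 3)
print(round(0.375, 2))  # -> 0.38 (8 is even, so this tie rounds up)
print(round(2.5))       # -> 2, not 3
```

In Python 2, `round` rounded halves away from zero, which is why the same call gave .13 there.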
As a result, a researcher who used a computer program to round their values may have unknowingly rounded them all down instead of up, as they likely intended.
So how can you check if this may have happened to the values you are testing?
I provide the option to decide how you want numbers rounded. If you believe rounding is an issue, you can try the different options.
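To illustrate what different rounding options mean in practice (this is only a sketch of the idea, not the tool's actual implementation), Python's `decimal` module makes the rounding mode explicit:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

def round_value(x, places, mode):
    """Round the string x to `places` decimals under an explicit rounding mode."""
    quantum = Decimal(10) ** -places  # e.g. Decimal('0.01') for 2 places
    return Decimal(x).quantize(quantum, rounding=mode)

print(round_value("0.125", 2, ROUND_HALF_UP))    # -> 0.13
print(round_value("0.125", 2, ROUND_HALF_EVEN))  # -> 0.12
```

Passing the value as a string keeps it exact; the same value rounds differently depending solely on the chosen mode.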
Another problem involves how computers store numbers. For example, the number 2.675 should round to 2.68, but the computer actually stores it as 2.6749999..., so it gets rounded down to 2.67. See this page for more details.
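You can see this storage problem directly: converting the float to `Decimal` reveals the exact binary value the machine holds, which sits just below 2.675, so no rounding mode ever sees a true half.

```python
from decimal import Decimal

# The closest 64-bit double to 2.675 is slightly below it.
print(Decimal(2.675))   # 2.67499999999999982236431605997495353221893310546875
print(round(2.675, 2))  # -> 2.67, not the 2.68 a human would expect
```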
This is an example of a floating-point error, which primarily occurs at sample sizes that are multiples of 40. The different rounding options give you the ability to account for this possibility. For the GRIMMER test there is a very small chance this type of floating-point error will produce spurious results that cannot be accounted for without a detailed investigation.