It is usually fairly clear which data type to choose for each item in a program, though there are some borderline cases among the various arithmetic data types.
When processing data which are inherently integers, such as the number of seeds which germinate in each plot, or the number of photons detected in each time interval, it is not always clear whether to use integer or real arrays to store them. Both usually occupy the same memory space, but on some machines additions and subtractions are faster on integers than on floating-point numbers. In practice, however, any savings can be swallowed up by the data type conversions that are usually necessary in subsequent processing. The main snag with integers is their limited range: with 32-bit words the largest integer is only about 2x10**9. Worse, on some machines integer overflow is not detected at all, whereas floating-point overflow nearly always produces an error message.
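The danger is easy to demonstrate. The following minimal sketch (the program name and values are illustrative) assumes 32-bit integers and a machine that does not trap integer overflow:

      PROGRAM OVFLOW
*     Both counts start within the 32-bit integer range.
      INTEGER KOUNT
      REAL RCOUNT
      KOUNT  = 2000000000
      RCOUNT = 2.0E9
*     Doubling exceeds the integer limit of about 2.1E9:
*     KOUNT typically wraps round silently to a negative
*     value, while RCOUNT is still correct at 4.0E9.
      KOUNT  = KOUNT + KOUNT
      RCOUNT = RCOUNT + RCOUNT
      WRITE(*,*) KOUNT, RCOUNT
      END

On such a machine this prints a large negative integer alongside 4.0E9, with no hint that anything has gone wrong.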
If your machine stores its real variables in 32-bit words then the precision of around 1 part in 10**7 is likely to be inadequate in some applications. This imprecision is equivalent to an error of several pence in a million pounds, or around ten milliseconds in a day. If errors of this order are significant you should consider using the double precision type instead. This will normally reduce the errors by at least another factor of 10**7. Mixing data types increases the risk of making mistakes, and it is often simpler and safer to use the double precision type instead of real throughout the program, even though this may use slightly more memory and processor time.
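The gain in precision is easy to see; a minimal sketch, assuming the common arrangement of 32-bit real and 64-bit double precision variables (the program name is illustrative):

      PROGRAM PRECIS
      REAL S
      DOUBLE PRECISION D
*     The D exponent marks a double precision constant;
*     1.0/3.0 alone is evaluated in single precision.
      S = 1.0 / 3.0
      D = 1.0D0 / 3.0D0
*     S is good to about 7 significant figures, D to about 16.
      WRITE(*,*) S
      WRITE(*,*) D
      END

Note the D0 exponent on the constants: a constant written 1.0/3.0 is a single precision value even when assigned to a double precision variable, which is exactly the kind of mistake that mixing types invites.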
Although automatic type conversions are provided for the arithmetic types in expressions, in other cases, such as procedure calls, it is essential for each actual argument to have the same data type as the corresponding dummy argument. Since program units are compiled independently, it is difficult for either the compiler or the linker to detect type mismatches in calls to external procedures.
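The following two-file sketch (all names invented) shows the kind of error that can slip through when a real actual argument is passed to an integer dummy argument:

*     First file: an external subroutine expecting an integer.
      SUBROUTINE SHOWN(N)
      INTEGER N
      WRITE(*,*) N
      END

*     Second file: the caller passes a real by mistake.
      PROGRAM BADCAL
      CALL SHOWN(3.0)
      END

Compiled separately, both files are valid, and the linker joins them without complaint; at run time the bit pattern of the real number 3.0 is simply reinterpreted as an integer, so the program prints a meaningless value (1077936128 on a machine using the common IEEE representation) rather than 3.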