You also need to check for aliases. Aliases are cases where the same physical RAM appears in multiple places in the address map. This happens when the RAM is only partially decoded (see below) or when address bits are stuck at a certain value, just like data bits can be. In that case the actual physical RAM may be repeated multiple times throughout the addresses available for RAM, giving the system the impression of more RAM than is actually present. It is also possible for blocks of addresses within the physical RAM itself to repeat, again giving the system the impression that the RAM operates correctly when in fact it does not.
In order to reduce the time needed to check the RAM, one might want to limit the check to the size of the actual physical RAM installed. Obviously, the first step would then be to find out how much RAM there is.
A simplistic but effective way to check for this is to attempt to write each long word, starting from the bottom of RAM (usually a known value), with its own address, assuming all possible RAM is present. Then read back from the start and look for the first location where the data read does not match the location's address. When that happens, we have reached the end of RAM, or a faulty area - the same thing as far as the system is concerned. However... imagine doing that with 4G of RAM.
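The write-address/read-back pass can be sketched as a simulation - this is Python modelling the idea, not real hardware code; `PHYS_WORDS`, `MAP_WORDS` and the "open bus" read value are all invented assumptions of this sketch:

```python
# Simulation of the "write each long word with its own address" size check.
# RAM is modelled as a list of long words; reads above the physical RAM
# return a fixed "open bus" value (an assumption for this sketch).

PHYS_WORDS = 1024          # long words of RAM actually installed (assumed)
MAP_WORDS = 4096           # long words the address map could hold (assumed)

phys = [0] * PHYS_WORDS

def write_long(addr, value):
    if addr < PHYS_WORDS:          # writes above real RAM go nowhere
        phys[addr] = value

def read_long(addr):
    if addr < PHYS_WORDS:
        return phys[addr]
    return 0xFFFFFFFF              # nothing decoded: bus reads back all ones

def find_ram_end():
    # Pass 1: write every location's own address into it.
    for a in range(MAP_WORDS):
        write_long(a, a)
    # Pass 2: the first mismatch is the end of RAM (or a faulty area).
    for a in range(MAP_WORDS):
        if read_long(a) != a:
            return a
    return MAP_WORDS

print(find_ram_end())   # prints 1024, the simulated RAM size in long words
```

Note that on a real machine both passes touch every long word, which is exactly why this is slow for large memories.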

Also, this sort of check looks for contiguous RAM, which is what most systems expect - but even in the QL world there is the Atari TT, which has two separate RAM areas that are not contiguous with each other. In this case, one can run separate tests for each area, knowing where to start looking for them.
Another (faster) approach is to declare that RAM has to come in certain size increments, for instance 16k or 64k. Then you can write only one (long) word of each block with its 'index' and read back the contents to check how many blocks are really there. For a 16k block this reduces the number of memory locations tested by a factor of 16384, so it's much faster. This incidentally takes care of detecting aliasing of blocks (the indexes will be screwed up on read if there is any). Then at least one block has to be tested by writing a pattern the size of the block (for instance writing a word count into each word and checking that it reads back correctly, or even better, writing a pseudo-random sequence from a given seed and checking it on read) to take care of aliasing within a single block. This solution can be extended to keep the tested RAM blocks in a table, if it is to work on systems with non-contiguous RAM.
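A sketch of the block-index variant, again as a Python simulation (the block counts and the `% len(phys)` mirroring that models partial address decoding are assumptions). One common trick, not spelled out above, is to write the indexes from the top block down, so that where blocks alias the lowest copy is written last and wins:

```python
BLOCK = 16 * 1024 // 4      # a 16k block, counted in long words
PHYS_BLOCKS = 8             # blocks actually installed (assumed)
MAP_BLOCKS = 32             # blocks the address map could hold (assumed)

phys = [0] * (PHYS_BLOCKS * BLOCK)

def write_long(addr, value):
    phys[addr % len(phys)] = value   # partial decoding: RAM repeats (aliases)

def read_long(addr):
    return phys[addr % len(phys)]

def count_blocks():
    # Write each block's index into its first long word, top block first,
    # so that where blocks alias, the lowest copy is written last and wins.
    for b in reversed(range(MAP_BLOCKS)):
        write_long(b * BLOCK, b)
    # Read back from the bottom: the first wrong index ends the real RAM,
    # and aliased blocks above it read back "screwed up" indexes.
    for b in range(MAP_BLOCKS):
        if read_long(b * BLOCK) != b:
            return b
    return MAP_BLOCKS

print(count_blocks())    # prints 8: only 8 real blocks behind 32 addresses
```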
Another, even faster approach, which can go right down to individual byte addresses (though this usually makes no sense - a single long word is the smallest size that is still logical), is to do a binary search for the end of RAM. This starts with zero as the 'top of RAM' address, and generates two addresses by setting one bit of the 'top of RAM' address low and high, starting from the highest bit (*). A check is made to find out whether the two addresses properly write and read different data (writing to one does not change the data read from the other, and the other way around).
If the test shows that these two addresses indeed hold different values independently of each other, the address bit tested is set to 1 in the 'top of RAM' value. If the test shows that the two addresses actually map to the same physical location, the bit tested is left as zero in the 'top of RAM' value. This process is repeated down to the least significant bit of the address; since we are talking about checking down to the size of a single long word, this will be address bit 2. If the bottom of RAM is offset (not 0), then all the addresses generated are offset by the same amount. Also, if either address falls within addresses that cannot be used as RAM, the alias test above is assumed to have failed.
This approach is only usable for contiguous RAM and will check the entire 4G available to a 68k CPU with as few as 120 memory accesses.
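A Python sketch of the binary search (the 1 MiB physical size, the test values and the modulo-mirrored decoding are all assumptions of this simulation). Each address bit from 31 down to 2 is tried in turn - a few writes and reads per bit over 30 bits is where a figure on the order of 120 accesses comes from:

```python
PHYS_BYTES = 1 << 20                 # 1 MiB installed (assumed)
phys = [0] * (PHYS_BYTES // 4)       # long words

def write_long(addr, value):
    phys[(addr // 4) % len(phys)] = value   # addresses above real RAM alias

def read_long(addr):
    return phys[(addr // 4) % len(phys)]

def distinct(a, b):
    # Do a and b behave as independent locations? Check interference
    # both ways, as the description above requires.
    write_long(a, 0x55555555)
    write_long(b, 0xAAAAAAAA)
    if read_long(a) != 0x55555555:   # writing b clobbered a: same location
        return False
    write_long(b, 0x55555555)
    write_long(a, 0xAAAAAAAA)
    return read_long(b) == 0x55555555  # writing a must not clobber b either

def find_ram_size():
    top = 0
    for bit in range(31, 1, -1):     # down to bit 2 = one long word
        if distinct(top, top | (1 << bit)):
            top |= 1 << bit          # the two addresses are really separate
    return top + 4                   # last good long word address + 4 bytes

print(find_ram_size())   # prints 1048576, the simulated 1 MiB
```

On real hardware an offset RAM bottom would simply be added to both generated addresses, as noted above.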
All 3 approaches would still benefit from a thorough test like the one described in the first post. The patterns to test with can actually be simplified (think 01010101, 10101010, 00110011, 11001100, 00001111, 11110000 - if you look carefully, any other combination of bits appearing in two or more places at once will upset at least one of these tests); in a system with 16 or 32-bit RAM the patterns should be expanded to 16 or 32 bits respectively. These patterns can also be used to test locations in the above 3 RAM size tests. They check for data bit duplication or stuck bits at the signal level; to test the actual RAM chip itself there is no substitute for checking each location. However, there is the matter of statistics.

This is what both the original QL and Minerva use - they extensively check for errors at the signal level (aliases, shorted or stuck bits) but do not do an exhaustive memory location test every time. Instead, they copy a part of the ROM into RAM block by block, but with a random offset. The assumption is that random errors (unlike pattern replication errors) can occur anywhere, so at some point one may hit upon such an error and report it. Whereas the regular test determines the RAM size and will not stop the machine if it passes, an error in the random test phase on Minerva will (with the large HEX numbers on screen), and after a while it tests again.
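The six patterns, expanded to 32 bits, might be applied per location like this (a Python sketch; the fault model with one data bit stuck high is invented for illustration):

```python
# The six simplified patterns from the text, expanded to 32 bits.
PATTERNS = [0x55555555, 0xAAAAAAAA,   # 01010101..., 10101010...
            0x33333333, 0xCCCCCCCC,   # 00110011..., 11001100...
            0x0F0F0F0F, 0xF0F0F0F0]   # 00001111..., 11110000...

def location_ok(write_long, read_long, addr):
    # A location passes only if every pattern reads back exactly.
    for p in PATTERNS:
        write_long(addr, p)
        if read_long(addr) != p:
            return False
    return True

# Healthy RAM model vs. one with data bit 3 stuck high (invented fault).
ram = {}
def write_ram(a, v): ram[a] = v
def read_ram(a): return ram.get(a, 0)
def read_stuck(a): return ram.get(a, 0) | 0x00000008   # bit 3 forced to 1

print(location_ok(write_ram, read_ram, 0x1000))    # True: all patterns survive
print(location_ok(write_ram, read_stuck, 0x1000))  # False: 0x55555555 reads 0x5555555D
```

Every pattern has each bit value appearing in both halves of any 2-, 4- or 8-bit group, which is why a stuck or duplicated bit always trips at least one of them.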