Proposed solution:
My present solution looks like this:
There are four classes of distance:
0-50 km, 50-100 km, 100-150 km, 150-250 km.
For each class I create a table with three columns: "zip_seek", "zip_target" and "distance".
Tables have a multi-column index on "zip_seek" and "distance".
A second index on "zip_target" exists for faster joins.
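Roughly, one of those per-class tables could be declared like this (the table name, the column types and MyISAM are just my assumptions, nothing is fixed yet):

    CREATE TABLE zip_dist_50 (
        zip_seek   CHAR(5)           NOT NULL,  -- the zip I'm searching for
        zip_target CHAR(5)           NOT NULL,  -- a zip inside this distance class
        distance   SMALLINT UNSIGNED NOT NULL,  -- distance in km
        KEY idx_seek_dist (zip_seek, distance), -- multi-column index for the lookups
        KEY idx_target (zip_target)             -- second index for joins
    ) ENGINE=MyISAM;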
In "zip_seek" I put the zip I'm searching for. Then I generate a record for every other zip belonging to this class (e.g. that is up to 50 km away to "zip_seek"). This zip is put in "zip_target". Distance in km is put in "distance", calculated from the geodata I have.
Then I physically sort the table based on my multi-column index (there's a MySQL command for this somewhere in the manual).
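If I remember right, the command is ALTER TABLE ... ORDER BY, which rewrites a MyISAM table once in the given row order (it does not keep later inserts sorted):

    ALTER TABLE zip_dist_50 ORDER BY zip_seek, distance;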
So I'm able to do very fast selects on these tables: it takes only around 5 or 6 index seeks to find the position to start reading rows from, and further sorting is unnecessary since distance is the second key part of my index, so the rows come back already sorted. Compare this to reading out and sorting 20 000 rows on every query!
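A lookup is then a single range read on the (zip_seek, distance) index. For example, everything within 30 km of zip '10115' (the values are only an example):

    SELECT zip_target, distance
    FROM zip_dist_50
    WHERE zip_seek = '10115'
      AND distance <= 30
    ORDER BY distance;  -- order comes straight from the index, no extra filesort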
If I'm right, the index size should be around 50-100 MB and the database size also a few hundred MB, so it's possible to keep all that stuff in RAM.
I haven't tried this yet, nor have I done serious calculations, but I'll post some benchmark results here soon.