That's a tough one.
Look at how GIS indexing works.
A database can't do much for you other than make a first pass at which images to consider.
Something like this might be reasonably efficient:
CREATE TABLE foo (
a SMALLINT, -- one of the values (hopefully the most selective)
b SMALLINT, -- another
c SMALLINT,
the_rest BLOB, -- the other 8*5*4*4*4-3 values, packed
PRIMARY KEY(a,b,c)
);
SELECT the_rest FROM foo
WHERE a BETWEEN ?-20 AND ?+20
  AND b BETWEEN ?-20 AND ?+20
  AND c BETWEEN ?-20 AND ?+20;
Then, in the client, parse "the_rest" (the list of the other values) and test them against your tolerances.
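A minimal sketch of that client-side post-filter, in Python. The storage format is an assumption: here the BLOB is packed as little-endian signed 16-bit integers (matching SMALLINT), and `TOLERANCE` mirrors the ±20 in the WHERE clause; adjust both to whatever you actually store.

```python
import struct

TOLERANCE = 20  # same ±20 tolerance as the SQL range tests (assumption)

def parse_rest(blob: bytes) -> list[int]:
    # Unpack the BLOB as little-endian signed 16-bit ints, one per value.
    # This packing scheme is an assumption; use whatever format you wrote.
    return list(struct.unpack("<%dh" % (len(blob) // 2), blob))

def matches(blob: bytes, target: list[int], tol: int = TOLERANCE) -> bool:
    # Final filter: every remaining value must be within `tol` of the target.
    values = parse_rest(blob)
    return len(values) == len(target) and all(
        abs(v - t) <= tol for v, t in zip(values, target)
    )
```

You would loop over the rows returned by the SELECT, keeping only those where `matches(row.the_rest, target_rest)` is true.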
The principles for coming up with this ugly approach:
* It will do a range scan based on "a" -- some help
* Then it will further filter by b and c -- still inside the "index"
* Then it fetches the_rest -- alas, random fetches into the data
If the "a" test eliminates 90% of the rows, the "b" test eliminates 90% of those, and ditto for "c", then you have to munch on only 1/1000 of the rows. That should be manageable.
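The arithmetic behind that estimate, as a quick sanity check (the 90% figures and the row count are illustrative, and the three tests are assumed to filter independently):

```python
# Each of a, b, c is assumed to pass only 10% of the rows it sees.
pass_rate = 0.10

# Surviving fraction is the product of the three pass rates: about 1/1000.
surviving = pass_rate ** 3

# On a hypothetical table of a million rows, that leaves roughly a
# thousand candidates for the client-side test of "the_rest".
rows = 1_000_000
candidates = rows * surviving
```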