This is the thing that keeps me up at night about privacy. Everyone tells you to strip EXIF data before sharing photos. Fine. But your tool just proved that visual analysis alone can geolocate without a single byte of metadata.

Bellingcat has been doing this manually for years — matching shadow angles to solar positions, identifying tree species to narrow hemispheres, reading partial license plates for country codes. What you've done is automate the OSINT analyst's entire workflow.

The scary part: Google's PlaNet project (Weyand et al., 2016) showed that a CNN trained on geotagged web imagery could place roughly 10% of test photos within 25 km (city level) and about 48% on the correct continent. That was nearly a decade ago with far weaker models. Modern vision transformers are dramatically better at this.
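The core trick in that line of work is to treat geolocation as classification rather than regression: partition the globe into cells and predict a cell, so the model's "coordinate" is just the centre of the winning cell. A minimal sketch of that discretization, assuming a uniform grid for simplicity (PlaNet itself used an adaptive partition, and the function names here are hypothetical):

```python
def latlon_to_cell(lat, lon, cell_deg=5.0):
    """Map a coordinate to an index in a uniform lat/lon grid.
    A classifier would output a distribution over these cell indices."""
    row = int((lat + 90) // cell_deg)
    col = int((lon + 180) // cell_deg)
    cols = int(360 // cell_deg)
    return row * cols + col

def cell_center(cell, cell_deg=5.0):
    """Recover the 'predicted coordinate' as the centre of a cell."""
    cols = int(360 // cell_deg)
    row, col = divmod(cell, cols)
    return (row * cell_deg - 90 + cell_deg / 2,
            col * cell_deg - 180 + cell_deg / 2)

# Lisbon (38.72, -9.14) lands in one cell; the prediction error is
# bounded by the cell size, which is why accuracy is reported per radius.
cell = latlon_to_cell(38.72, -9.14)
pred_lat, pred_lon = cell_center(cell)
```

Shrinking `cell_deg` tightens the best-case error but multiplies the number of classes, which is exactly the trade-off those accuracy-within-25-km numbers reflect.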

Couple of genuine questions. Does the model struggle more with indoor photos vs outdoor? I'd imagine interiors are harder since furniture and decor are more globalized than architecture and vegetation. And have you tested adversarial inputs — like a photo of a Japanese restaurant in rural Oklahoma? Curious how it handles deliberate visual misdirection.

The Lightning integration is a nice touch too. Pay-per-query for an AI tool that actually does something useful instead of another chatbot wrapper. More builders should ship like this.

Thank you so much for your feedback.

Indoors: expect more Medium/Low confidence outputs. So yes, it is less useful for indoor images; outdoor photos are where it shines.

Adversarial sets: I have tested and adjusted the application over time to deal with this. For example, I tested it several times with a photo of a beach in Portugal that had a Brazilian flag in frame. The current weighting scheme protects against a single misleading prop, but a carefully curated room that hides infrastructure can still nudge the model off course, though usually at lower confidence.
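The intuition behind that weighting can be sketched in a few lines: score each visual cue by how hard it is to fake, so fixed infrastructure (pavement, signage standards) outweighs portable props like a flag. This is a toy illustration of the idea, not the app's actual implementation; the cue list and weights are invented for the example:

```python
from collections import defaultdict

def aggregate(cues):
    """Sum per-region evidence weights; confidence is the winner's
    share of the total, so one low-weight outlier can't flip the call."""
    scores = defaultdict(float)
    for region, weight in cues:
        scores[region] += weight
    region = max(scores, key=scores.get)
    confidence = scores[region] / sum(scores.values())
    return region, confidence

cues = [
    ("Portugal", 0.9),  # calçada-style pavement: fixed infrastructure, high weight
    ("Portugal", 0.8),  # EU signage conventions: also hard to fake
    ("Brazil",   0.3),  # flag on the beach: portable prop, low weight
]
region, conf = aggregate(cues)
# → ("Portugal", 0.85)
```

Note the failure mode the reply describes falls out naturally: if a staged scene removes the high-weight infrastructure cues, the total evidence mass shrinks and the portable props dominate, but the reported confidence drops with it.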