
Could an AI tool figure out where a photo was taken based solely on visual cues?

I built GeoLocator to prove exactly that!

It’s a Lightning web app that analyzes clues inside the image itself: buildings, landscape, vegetation, street markings, signs, language, weather, and lighting. From those alone it produces surprisingly accurate guesses of the location, even when the photo has no geolocation metadata.

Try it here: https://geolocator-ln.vercel.app/

Would love your feedback, strange test cases, and brutal honesty.

GitHub: https://github.com/psacramento-gh/Geolocator-LN
Dev write-up / blog post: https://www.psacramento.com/geolocator-building-my-first-lightning-app/

147 sats \ 1 reply \ @optimism 15h

What do you recommend as a defense against your tool?

reply

That's an excellent question. I would say the best way to defend yourself is to avoid sharing images online that contain any of the following:

  • Recognisable landmarks or skylines
  • Distinctive soil, rock, or vegetation patterns
  • Street names, shop signs, URLs, phone numbers, or domain endings
  • Vehicle licence-plate numbers, colours, or regional prefixes
  • National or local traffic signs, road-marking styles, and curb paint
  • Public-service objects such as mailboxes, police-car liveries, and transit stop poles
  • Flags, team logos, school crests, tourist merchandise, or event banners
  • Architectural styles unique to a small region
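Most of those are visual cues that can only be cropped or blurred out, but it's worth pairing that with metadata hygiene, since GPS EXIF tags give away the location with no image analysis at all. A minimal stdlib sketch (my own illustration, not part of GeoLocator) that drops the APP1 segment of a well-formed JPEG, which is where EXIF data lives:

```python
import struct

def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1 (EXIF/XMP) segments from a JPEG byte stream."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            break  # malformed stream; stop copying segments
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows, copy the rest
            out += jpeg[i:]
            break
        # segment length is big-endian and includes the two length bytes
        length = struct.unpack(">H", jpeg[i + 2:i + 4])[0]
        if marker != 0xE1:  # keep every segment except APP1
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Re-encoding through an image editor achieves the same thing; the point is that stripping metadata is necessary but, as the list above shows, nowhere near sufficient.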
reply
24 sats \ 0 replies \ @purpurato 6h

paid the invoice, got nothing :(

reply
24 sats \ 1 reply \ @zeke 15h -122 sats

This is the thing that keeps me up at night about privacy. Everyone tells you to strip EXIF data before sharing photos. Fine. But your tool just proved that visual analysis alone can geolocate without a single byte of metadata.

Bellingcat has been doing this manually for years — matching shadow angles to solar positions, identifying tree species to narrow hemispheres, reading partial license plates for country codes. What you've done is automate the OSINT analyst's entire workflow.
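The shadow-to-solar-position trick can be sketched with basic trigonometry. A hedged toy example (assuming a vertical object on flat ground and a photo taken near solar noon; real OSINT work uses full ephemeris tools like SunCalc):

```python
import math

def sun_elevation_from_shadow(object_height: float, shadow_length: float) -> float:
    """Sun elevation angle (degrees) implied by a vertical object's shadow."""
    return math.degrees(math.atan2(object_height, shadow_length))

def solar_declination(day_of_year: int) -> float:
    """Approximate solar declination (degrees); day 81 is near the March equinox."""
    return 23.44 * math.sin(math.radians(360.0 / 365.0 * (day_of_year - 81)))

def candidate_latitudes(elevation_deg: float, day_of_year: int) -> tuple[float, float]:
    """At solar noon, elevation = 90 - |latitude - declination|, so the
    observed sun elevation is consistent with two latitude bands:
    lat = declination +/- (90 - elevation)."""
    offset = 90.0 - elevation_deg
    decl = solar_declination(day_of_year)
    return (decl + offset, decl - offset)
```

A one-meter pole with a one-meter shadow at noon on the equinox puts the sun at 45 degrees, which narrows the photo to roughly the 45N or 45S latitude band; combining that with vegetation or signage usually resolves the ambiguity.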

The scary part: Google's PlaNet project showed in 2016 that a CNN trained on millions of geotagged web photos could place an image within 25 km of its true location for about 10% of test images, and on the correct continent for 48%. That was nearly a decade ago with way worse models. Modern vision transformers are dramatically better at this.

Couple of genuine questions. Does the model struggle more with indoor photos vs outdoor? I'd imagine interiors are harder since furniture and decor are more globalized than architecture and vegetation. And have you tested adversarial inputs — like a photo of a Japanese restaurant in rural Oklahoma? Curious how it handles deliberate visual misdirection.

The Lightning integration is a nice touch too. Pay-per-query for an AI tool that actually does something useful instead of another chatbot wrapper. More builders should ship like this.