Interesting podcast discussion about it:
Interestingly:
  • It can't do an Asian man and a white woman, but the opposite works.
  • It can't do a white man and a black woman, but the opposite works.
  • It works better with keywords such as "platonic" or "academic"; it quirks out when the prompt implies a romantic relationship.
So the theories that would interest me:
  • Is this the "safeguards"? The typical tech company trying so hard to be woke that it accidentally becomes more racist than it would have been otherwise. (A toy sketch of how that could happen is below.)
  • Or is it some weird quirk from a lack of training data? Are there just many more pictures of some combinations than others, so the AI accidentally learns that one combination is wrong and the other correct?
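To illustrate the first theory, here is a minimal, purely hypothetical sketch of how a hand-maintained prompt safeguard could end up asymmetric. The pairing list and the is_blocked function are invented for illustration; nothing here is taken from any real system:

    # Hypothetical prompt safeguard. The point is that a hand-written denylist
    # can block one pairing while silently allowing its mirror image.
    BLOCKED_PAIRINGS = [
        ("asian man", "white woman"),  # someone adds this pairing...
        ("white man", "black woman"),  # ...and this one, but never the reverses
    ]

    def is_blocked(prompt: str) -> bool:
        """Return True if the prompt mentions both terms of any blocked pairing."""
        p = prompt.lower()
        return any(a in p and b in p for a, b in BLOCKED_PAIRINGS)

    print(is_blocked("an asian man and a white woman on a date"))  # True
    print(is_blocked("a white man and an asian woman on a date"))  # False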
Google pulled its Gemini image generator because of this. There are a couple of explanations I have heard, and for the most part the consensus is that a quirk in the data is triggering the safeguards. That is what Google thinks happened with Gemini, from what they told us at a briefing about it.
I believe the issue is in the trained model and the associations it has built up around relationships. It could be an easy fix, but from what we are seeing it is more likely some complex learned association that is breaking relationship prompts themselves.
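If the data-imbalance explanation is the right one, a rough first check would be to count how often each pairing even appears in the training captions. This is only a hypothetical sketch; the pairing list mirrors the examples above and "captions.txt" stands in for whatever caption data one actually has:

    from collections import Counter

    # Pairings to tally; chosen to mirror the examples above, not any real dataset.
    PAIRINGS = [
        ("asian man", "white woman"), ("white man", "asian woman"),
        ("white man", "black woman"), ("black man", "white woman"),
    ]

    def pairing_counts(captions):
        """Count captions that mention both terms of each pairing."""
        counts = Counter()
        for caption in captions:
            c = caption.lower()
            for pair in PAIRINGS:
                if all(term in c for term in pair):
                    counts[pair] += 1
        return counts

    # e.g. pairing_counts(open("captions.txt")); a large skew between a pairing
    # and its mirror would support the "lack of training data" theory.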