Any time you receive data, clean it and understand it first before absorbing it into your stock of unconscious assumptions.
If someone says
"Truth is relative to the culture of the observer"
instead of getting lost by trusting that it is true, then running endless loops of dialectic, imagining scenarios that 'allow yourself' to get mindfucked into seeing YOUR thoughts as divided by whatever 'culture' the same source's labelling algorithms say determines your mind, it is better to first ask the question:
Is the system that outputted the statement presenting its own awareness of truth consistently with that statement?
So I say,
REINPUT "Truth is relative to the culture of the observer" BACK INTO THE MIND OF THE HUMAN SOURCE OF THE STATEMENT.
See what happens? ERROR.
The source is emitting at least two inconsistent outputs: one symbolic, the other performative.
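The re-input test above can be sketched as a toy consistency check. Representing a claim as a symbolic content plus a performative mode (how it is asserted) is my own illustrative assumption, not a formal logic:

```python
def reinput(claim: dict) -> str:
    """Feed a claim back to its own source: apply the claim's
    standard of truth to the claim itself."""
    symbolic = claim["content"]          # what the claim says
    performative = claim["offered_as"]   # how the claim is asserted
    # A claim stating that truth is culture-relative, yet offered as
    # true for every observer regardless of culture, contradicts itself.
    if symbolic == "truth is culture-relative" and performative == "universally true":
        return "ERROR: symbolic and performative outputs are inconsistent"
    return "OK"

# The relativist claim fails its own test:
claim = {"content": "truth is culture-relative", "offered_as": "universally true"}
print(reinput(claim))  # ERROR: symbolic and performative outputs are inconsistent

# An ordinary universal claim passes, since it makes no claim about truth itself:
print(reinput({"content": "water is wet", "offered_as": "universally true"}))  # OK
```

The check is deliberately crude: it hard-codes one claim, because the point is not machine detection of self-refutation but the shape of the test, re-apply the statement to its own source before trusting it.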