Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Assuming the singularity and AI takeover is inevitable, do we know what we can do to make AIs safe in the future? While we may not be able to comprehend AI behaviours once they surpass us, is there any conceivable way we could at least ensure AIs has compassion, respects existing geopolitical boundaries, does not commit acts of human rights violation, etc?
Source: youtube · AI Governance · 2025-11-23T14:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
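
Each coded comment carries one label per dimension. Below is a minimal sketch of that record in Python; it is an assumption for illustration, and the label sets are limited to the values that appear in this batch (the real coding scheme may define others):

```python
from dataclasses import dataclass

# Labels observed in the raw response below; the actual coding
# scheme may define additional values per dimension.
RESPONSIBILITY = {"distributed", "none", "company", "ai_itself"}
REASONING = {"consequentialist", "unclear", "mixed", "virtue"}
POLICY = {"regulate", "none", "liability", "industry_self"}
EMOTION = {"fear", "mixed", "outrage", "approval"}


@dataclass
class CodingResult:
    id: str  # comment id, e.g. "ytc_..."
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Return True if every dimension holds a known label."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```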
Raw LLM Response
[ {"id":"ytc_UgwyCu5NQxlipPDEhWp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwx-7RFUw7J5EqBHmV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyOhDvpSa-j1-d9MmJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwgZTJQtbyJ73k7Fxp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz5B0eUpXBQYJvYMRl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgzVqOUBTPjeMkyhqK54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz2qBIE3qUCcJTW12t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyrjY3vfsESHJQC8O54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz6gOg4amhw7vnwzNV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyUZO21j01DsQ27xyZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"} ]