Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Seems like we created our own hypothetical dilemma test for ourselves. But this one isn't hypothetical. You also have to remember that if our adversary develops it first, our ai will be steps behind theirs and probably unable to stop a catastrophic disaster of their making. While there's also a chance both will produce a disaster. Or worse, one will convince the other to help.
Source: youtube · AI Governance · 2025-08-26T15:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgwqcWFpGoNKsj8F5JB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxRLUCksk7TXs9OQU54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwUeWXBOv65Ir9HlYB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzE1GPrdJvKoHj7n6p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzyksaQX2QTzoFpkrZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]