Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we don’t know how to make AI safe, why do we know how to make them unsafe? We can’t know one factor without know its opposite. We know both. Look at Nuclear bombs, safely stored but in dangerous IC-Ballistic-Missiles.
Source: YouTube · AI Governance · 2025-10-17T15:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxsovXZZY_7W69CY9B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxNNBFDCer0yn9ELlN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzfDHQyLexQxWmY_dZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzyqxvrXs3Um9mSkz94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyVUNo8VYceVY265VN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz_JhYB0d-h958CdzF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzkTMPNzJf0ZMnIpwF4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyL4VZxvuWTdbYxtQd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzBWWRT5xnjXRiTeCh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyK5RrRT4YpJWg8UR94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
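A raw batch response like the one above has to be parsed and validated before the per-comment rows can be displayed. The sketch below shows one minimal way to do that in Python, assuming the response is a JSON array of records keyed by comment id. The allowed-value sets are inferred only from the labels visible in this response; the real codebook almost certainly defines more categories, so treat `ALLOWED` as an illustrative placeholder.

```python
import json

# Allowed values per coding dimension. NOTE: these sets contain only the
# labels observed in the response above; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"resignation", "indifference", "outrage", "fear", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: dimensions},
    silently dropping records whose values fall outside ALLOWED."""
    coded = {}
    for rec in json.loads(raw):
        dims = {k: v for k, v in rec.items() if k != "id"}
        if all(dims.get(d) in ALLOWED[d] for d in ALLOWED):
            coded[rec["id"]] = dims
    return coded

# One record from the response above, used as a usage example.
raw = (
    '[{"id": "ytc_UgyL4VZxvuWTdbYxtQd4AaABAg", "responsibility": "developer",'
    ' "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}]'
)
coded = parse_batch(raw)
```

Looking up `coded["ytc_UgyL4VZxvuWTdbYxtQd4AaABAg"]` then yields the dimension values shown in the Coding Result table above. Validating against an explicit value set is what catches the common failure mode of LLM coders inventing labels outside the codebook.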