Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
the big thing we need to focus on is not making a.i. do the big things for us like for example giving them the goal of saving the world that is the true risk of a.i. sometimes it’s goals could turn on us if we don’t give the right ones
Source: youtube · Video: AI Harm Incident · Posted: 2023-07-09T06:3… · ♥ 4
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyHr0NlnBNNZ3S2IGt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyCE9vZy1w9Nxe_-0l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzWYN9-yWfVTvG6F6d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyXwVVFUSjmc7T9PC94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwE0846lQ-qN7vdkx94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzkQfJAl1li8d-DIaB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwmRMtP316ptKhc5FZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxpst7DW2VbEzg-BqZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzp0R7Gm1FHAhK40bh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxKF_NWQMyrmKz-sJR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
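Because the raw response is a JSON array, a small script can confirm that every coded value falls inside the expected label vocabulary before the codes are accepted. A minimal sketch: the ALLOWED sets below are inferred only from the labels visible in this one response, not from the project's actual codebook, so they are an assumption.

```python
import json

# Allowed labels per dimension, inferred from this response only;
# the real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "distributed", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "approval", "mixed", "indifference", "resignation"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag any out-of-vocabulary labels."""
    records = json.loads(raw)
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append({"id": rec.get("id"), "dimension": dim, "value": rec.get(dim)})
    return errors

# One record from the response above; an empty list means all labels check out.
raw = '[{"id":"ytc_UgzkQfJAl1li8d-DIaB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
print(validate_coding(raw))  # → []
```

Running this kind of check on each batch catches responses where the model drifted outside the schema (a frequent LLM-coding failure mode) before they reach the coded results.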