Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As long as they don't program it with a goal intrinsicly... what motivates the AI to harm us? It has no desires, no friends, no status, no wealth, no needs OTHER than electricity.... So WE UNPLUG IT OR TAKE THE BATTERY OUT????
youtube AI Governance 2024-01-11T00:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwoRqgbwSeeYGpl73J4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyyQmH3GS48As77bxF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy9uFOZi663SlrAk2t4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyhgNuP5VN7QEIRn2J4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzIIAM9WgxVSoIARTx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzD9lCvrMQSvCYY4Yd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz_Pz2eYyFYuag1ZI54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwOqk0Soqwm87QnyYN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyVLMKyIPb2j1qi2qt4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxgYia6T7zMpBOJvC14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
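A response in this shape (a JSON array of records, each with an "id" plus the four coding dimensions) can be turned into a per-comment lookup with a few lines of Python. This is a minimal sketch, not the tool's actual code: the helper name `parse_codings` and the truncated single-record `RAW_RESPONSE` are assumptions for illustration.

```python
import json

# A shortened stand-in for the raw LLM response above (one record shown).
RAW_RESPONSE = """
[
  {"id": "ytc_UgyyQmH3GS48As77bxF4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response and index each coding record by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = parse_codings(RAW_RESPONSE)
coding = codings["ytc_UgyyQmH3GS48As77bxF4AaABAg"]
print(coding["responsibility"])  # -> developer
print(coding["emotion"])         # -> fear
```

Indexing by id makes it easy to match each coded record back to the comment it describes, as the "Coding Result" table above does for one comment.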