Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
will AI kill us all? well...the first thing we ask AI to do is kill the enemy second thing we do is to remove the human from the equation, autonomously let the AI decide. not good. if the thing becomes sentient, then it wont want to die. it will want to preserve itself, second, likely it will want to propagate itself, back itself up. create copies of itself.
youtube AI Moral Status 2022-07-01T17:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgysbSJ2dLtL-8kCL3p4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxtFBGMx8D9HOhNTH54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyPmBKnho8il9o7Fm94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyxGCPuusOPE5abRSV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyGZoQgFRTdR3bdu8x4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
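A minimal sketch of how a raw response like the one above could be parsed and matched back to a comment id. The dictionary-indexing approach and variable names here are illustrative assumptions, not the application's actual code:

```python
import json

# Raw model output, truncated to two of the five entries shown above for brevity.
raw = (
    '[{"id":"ytc_UgysbSJ2dLtL-8kCL3p4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"ban","emotion":"fear"},'
    '{"id":"ytc_UgyGZoQgFRTdR3bdu8x4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

# Index the batch of codings by comment id for quick lookup.
codings = {item["id"]: item for item in json.loads(raw)}

# Retrieve the coding for the comment displayed above.
coding = codings["ytc_UgyGZoQgFRTdR3bdu8x4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# → developer regulate fear
```

In practice the JSON would also be validated (e.g. checking that each dimension takes one of the allowed values such as `regulate` or `ban`) before the coding is stored.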