Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Ai models are trained to mimick us. They dont need to technically gsin consciousness to mimick it which is the same outcome. If we reach asi, then unplugging them might not be so simple
youtube AI Responsibility 2026-03-21T06:2…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyB89lFtXu0qttDhG14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyrBzEAF3zOQDtyjTt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgxddfJIcYaRviTxI254AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgwZnWRaqEnZgrDuxMZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgzoPgrTWXxJXK4IGo54AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "unclear"},
  {"id": "ytc_UgypxVruBXb2oUYwRKd4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgyKsFwQISmVcDxpU654AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz9mt65fSqMHPPWN294AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugzh-Ar4sABWF-uz4CN4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugzg8uq5Pt-vFN1RVB94AaABAg", "responsibility": "unclear",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "approval"}
]
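The raw response is a JSON array of coding records, each keyed by comment id with the four coded dimensions. A minimal sketch of how such a batch could be parsed back into a per-comment lookup; the function name `parse_coded_batch` is illustrative, not part of any tool shown on this page:

```python
import json

# The four dimensions in the coding scheme shown above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records)
    into a mapping from comment id to its coded dimensions."""
    records = json.loads(raw)
    return {r["id"]: {k: r[k] for k in DIMENSIONS} for r in records}

# One record copied verbatim from the raw response above.
raw = ('[{"id":"ytc_Ugz9mt65fSqMHPPWN294AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')

coded = parse_coded_batch(raw)
print(coded["ytc_Ugz9mt65fSqMHPPWN294AaABAg"]["emotion"])  # fear
```

A lookup like this also makes it easy to spot batch-level problems, e.g. a record whose id matches no sampled comment, or one missing a dimension.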