Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Note that these AIs have what's called a "system prompt", in which stuff like "remember, you're an AI, not a conscious person" is written. You can actually remove that (by using the API, which is a bit more technical), and you can even train a chat model to believe it's a person (I think Eric Hartford trained one, but I might be mistaken). The one you're talking with is actually doing its best to NOT be a person.
YouTube · AI Moral Status · 2024-08-02T16:0…
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | developer
Reasoning      | unclear
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz46kULZVPlzpkCnx14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyZAve4Ul-sECu6lvF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzh8r3pzUfPjbMSL0p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzol81_z_wgN3L3uZR4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxn6NaRJX5taHDcbI94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxaYSST-dhBZRjc2AR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwFOj7ba1SZIJLR1Lp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugwm6TmACUgzWYJpWEN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwauuYOynirCXd5IEJ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyaIIgN0Y8HmCXf_LN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
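The raw response above is a JSON array of per-comment codings, each keyed by comment `id` with the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and matched back to a specific comment, assuming the response is valid JSON (the two-element `raw` string below is an illustrative excerpt, not the full batch):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of codings,
# using two of the ids shown above.
raw = """[
  {"id": "ytc_Ugzol81_z_wgN3L3uZR4AaABAg", "responsibility": "developer",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz46kULZVPlzpkCnx14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

codings = json.loads(raw)

# Index codings by comment id so one comment's dimensions can be looked up
# directly, as the "Coding Result" table above does for a single comment.
by_id = {c["id"]: c for c in codings}

print(by_id["ytc_Ugzol81_z_wgN3L3uZR4AaABAg"]["responsibility"])  # developer
```

Keying by `id` rather than array position guards against the model returning codings in a different order than the comments were submitted.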