Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
But if we really make artificial intelligence they could become self aware and then we could end up with something like GlaDos and that would be bad
youtube 2013-07-24T20:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxlL_28gaMwstTd8914AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyteB37JUW16D4otX94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzs0ZKA6F3tVy8PRdZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxn600cM46SM9ffmi54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzRyVQ_Lm9QwHC1xv14AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz77MDf6TtyRgXXxxx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzWOPuOLhm0rk6C9H14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzeg6IeSjSqAqUhxZp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyQSMDJ-ikyzFweloB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxmKC9CPGp_HSMeIDF4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
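The raw response is a JSON array of per-comment coding records, keyed by comment id; the Coding Result shown above for the displayed comment corresponds to the ytc_UgzWOPuOLhm0rk6C9H14AaABAg entry. A minimal sketch of recovering one comment's coding from such a batch (the ids and field names are taken from the response above; the variable names are illustrative, not from any particular pipeline):

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_Ugzs0ZKA6F3tVy8PRdZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzWOPuOLhm0rk6C9H14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}  # index the batch by comment id

# Look up the coding for the comment displayed above.
coding = by_id["ytc_UgzWOPuOLhm0rk6C9H14AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
```

Indexing by id rather than by position makes the lookup robust if the model returns records in a different order than the comments were sent.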