Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Its potential to benefit humanity is enormous... ...allowing AI to choose..."! "Allowing AI to choose..."! Anything wrong with that picture? "Artificial Intelligence"... Whose "Intelligence"? Who's determines the "evil guy", the "bad guy"? The classic moral question continues to be sidestepped... "Just because we can... ...should we?"
YouTube · 2022-01-11T05:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxTZm536eczKElv5WR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw5JZ3GYy0KGBRK_mV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxpwlN0YKQlhAlaUFl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy-mQmVwuGlWaPxYVN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyM20rwDhtQ4tU4s6x4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzB2LsxV5XlzPcBhG94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyWgBR9ehhwQ0ga7KR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwhfFBRgOd5GpRABzd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgymGxThEPGxHZ8cmq54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy3cCqEFMm4x0etts14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
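A raw batch response like the one above can be turned back into per-comment codes by parsing the JSON and indexing on the `id` field. The sketch below is a minimal, assumed illustration of that step: the dimension names match the response, but the comment ids are shortened placeholders and `parse_codes` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Shortened sample in the same shape as the raw LLM response above;
# the ids are placeholders, not real YouTube comment ids.
raw = '''[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

# The four coding dimensions seen in the response and the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_json):
    """Return {comment_id: {dimension: value}}, rejecting malformed records."""
    coded = {}
    for rec in json.loads(raw_json):
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            raise ValueError("malformed record: %r" % (rec,))
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

codes = parse_codes(raw)
print(codes["ytc_example1"]["policy"])  # regulate
```

Validating each record before indexing makes truncated or partially malformed model output fail loudly instead of silently dropping codes.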