Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was really hoping to hear about the moral implications of humanity creating a truly sentient AI with the possibility of gained experience and then asking or forcing this, essentially, new life form to stop developing itself or shut itself off.
youtube 2025-04-19T16:3… ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyRYJ6SOgdp3ybn0At4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy9U7hhe9x3rADRfv94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxbQyJasjqdUuY3G-B4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz8DKiHH2PCQt8xt7t4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzEIDjqGAH6qsRM6Pl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxLbqhlPwmAMtcChld4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzXphPwBQ4lF_FKkwN4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyGZdQfrBFzrZnf3CJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxDHkhLJZ3rYt2zA-h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "resignation"},
  {"id": "ytc_UgxptIEpIbJWGtk3eIF4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
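Since the raw response is a JSON array of per-comment records, looking up the coding for a given comment id is a simple parse-and-filter. The sketch below shows one way to do it; the function name `coding_for` and the variable names are illustrative, not part of any real pipeline API, and it assumes the model returned valid JSON (a real pipeline would also need to handle malformed output).

```python
import json

def coding_for(raw_response: str, comment_id: str):
    """Return the coding record for comment_id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

# Minimal example using the first record from the dump above.
raw = ('[{"id":"ytc_UgyRYJ6SOgdp3ybn0At4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"fear"}]')

record = coding_for(raw, "ytc_UgyRYJ6SOgdp3ybn0At4AaABAg")
print(record["responsibility"], record["emotion"])  # developer fear
```

A missing id yields None rather than an exception, which makes it easy to flag comments the model skipped in a batch.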