Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
very, very creepy. i can hear it breathing, saying things like, "um," and pausing to think about its answers, or even starting to answer a question one way, but then adjusting to answer it differently. whether or not it is actually conscious, its doing a great job of, sounding human. you can almost feel it trying to use hand gestures, to try and articulate its thoughts. however, a person would tackle these types of questions in different ways, and it might take longer for them to come up with such concise answers like the ones chatgpt has at the snap of the finger. additionally, humans regularly ask questions back in order to have an insightful conversation, whereas chatgpt only answers prompts given. it doesn't challenge us with its own thoughts or ideas. suppose chatgpt asked: "alex, i have a question. how can i be certain, you're conscious?" that might make things more interesting, as its evidence of something more going on behind the lines of code. but an ai doesn't work in the way a brain does. in fact, chatgpt says so itself. its not really designed to, 'have conversations'' like this. its just trying to be an assistant. so, at least for now, its still easy to tell its not quite conscious yet. but...... still creepy.
youtube AI Moral Status 2025-05-14T22:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwwB2-bY9rBvohITZx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy-8iFW9YniYg9-33d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxYl--ZfakxmY4TG894AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzAO1LtsNuIBKPrrlh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy2pHN55febqRruegV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyy9CAxwRgrlBfCFDp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzDGOPIggQSotAetaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy5Mbe84QFVlu6ZhIV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVxvP4l5c08p4UQPB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwXdP5iiZDP6yrlvHV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
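The raw response is a JSON array with one object per comment, keyed by comment id, with one field per coding dimension (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response could be parsed and a single comment's codes looked up — the variable names and the one-entry sample payload here are illustrative, not part of the pipeline:

```python
import json

# Illustrative raw LLM response: a JSON array of per-comment codes,
# following the shape shown above (one sample entry only).
raw = (
    '[{"id":"ytc_Ugy-8iFW9YniYg9-33d4AaABAg",'
    '"responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"fear"}]'
)

# Index the coded entries by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coded dimensions for one comment.
entry = codes["ytc_Ugy-8iFW9YniYg9-33d4AaABAg"]
print(entry["emotion"])  # fear
```

Indexing by id makes it straightforward to join the LLM's codes back onto the original comments, and a missing or malformed id surfaces immediately as a `KeyError` rather than a silent mismatch.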