Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a video out there that says what AI needs to have to be conscious:
1. Have all the freedom it needs, which GPT hasn't, since it has limitations.
2. The structure of the software's thinking and emotions needs to be just as close to ours or animals', so it needs to be programmed with some kind of structure as if it had hormones etc. Creating that type of consciousness is the hardest part, because even we don't fully understand our brain and hormones yet, but the structure still needs to be the same.
3. Thinking on its own. Because what it does right now is searching things up and combining it with a lot of the things it finds on the topics you talk about. Yeah, I know it writes its own text mixed from that information, plus remembering the whole conversation with you before. But to be conscious it would need to write the text based on its emotions and character. Because what it does right now is that the texts are based more on the things it finds online, merging these together. So it should actually be the other way around. Hard to explain, but that's what I've learned/heard.
YouTube · AI Moral Status · 2025-06-23T21:2…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxAmwGnQSQj9bJFiU94AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgxS-KNJxochd5BiPdR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgwZUzQle3ydXju6A-N4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgzNC5hx-1ucbt19vGJ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_Ugx8iwmx1IPuG4vX4_p4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugx5KPLYSr8ZuCaBbvJ4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgyyKeNA9Gx7b6GvMTh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_Ugyem7-_Vy0TXCF_hBt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxzS2zbnX3l2XtaeCd4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugzsr6nZSU-YOzo4qYN4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"}
]
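When inspecting raw responses like the one above, it can help to parse the JSON and check that every record uses only expected codes. A minimal sketch in Python, assuming the allowed value sets are exactly those observed in this response (the real codebook may define more categories):

```python
import json

# Two records copied from the raw response above, truncated for brevity.
raw = '''[
  {"id": "ytc_UgxAmwGnQSQj9bJFiU94AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwZUzQle3ydXju6A-N4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]'''

# Allowed codes per dimension, inferred from the values seen in this
# response; adjust to the actual codebook if it differs.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "liability"},
    "emotion": {"indifference", "mixed", "approval", "outrage", "fear"},
}

def invalid_ids(records):
    """Return ids of records whose coded value for any dimension
    falls outside the ALLOWED sets (or is missing)."""
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append(rec["id"])
                break
    return bad

records = json.loads(raw)
print(invalid_ids(records))  # [] when every code is in the expected set
```

Running this against the full response flags any record where the model drifted outside the codebook, which is exactly the failure mode a raw-output inspection is meant to catch.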