Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
great experiment! two points that seem relevant to me: 1) performed language (i.e., spoken, written, whatever) does not seem to be possible without implying self-awareness or consciousness. as alex demonstrates, words like "sorry" lose any of the intuitive meaning they have to us. going further, even just asking questions implies some sort of interest. i feel like that's actually not a shortcoming of the system (chatGPT) but rather forces us to make up our minds about what we mean, when we ask for consciousness. the system is in the unfortunate position to be a hard naturalist concerning its own self-awareness (john searle comes to mind) which is not a position i ever want to be in. 2) alex's argumentation is very solid, witty, and to the point. but using the binary logic law of excluded middle most of the time does not really show a lack behind the logic of an argumentation but rather underdefined concepts. so getting the system to admit it is "lying" is not really that strong of an argument, because what it's admitting to is not well defined. thanks a lot for the video. it really feels like those systems open up a lot of philosophical questions and i'm eager to see what else comes up!
youtube · AI Moral Status · 2024-08-06T07:4… · ♥ 4
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx-l9aqYP5T4IkH6FZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzXuetsVmtgV6BbqNZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyVLgHho9vHf7XqMep4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyEXKbOoDsgqg_-2094AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDgRINCXQbX-1j-vp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzghlv8_ssJ68YddWt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyDB0mwbLu23kPpO1R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwu6tOjRnQVJXRTMPF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzc1TMDVCtPo4I-R9h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxH-iPtT7HxyHThYUN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
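The raw response above is a JSON array in which each record codes one comment on the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such an output can be parsed and the coding for a single comment id looked up is below; the `find_coding` helper and the truncated one-record sample are illustrative, not part of the original pipeline.

```python
import json

# One record from the raw LLM response shown above (the full array
# contains ten such records; this sample is truncated for brevity).
raw = '''[
  {"id":"ytc_UgyVLgHho9vHf7XqMep4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]'''

# The four coding dimensions present in every record.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def find_coding(records, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            return {dim: rec.get(dim) for dim in DIMENSIONS}
    return None

records = json.loads(raw)
coding = find_coding(records, "ytc_UgyVLgHho9vHf7XqMep4AaABAg")
print(coding)
# {'responsibility': 'none', 'reasoning': 'mixed', 'policy': 'none', 'emotion': 'approval'}
```

The looked-up values match the "Coding Result" table above, which is exactly the check this page invites: the rendered coding should be traceable back to the raw model output.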