Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID.
Random samples — click to inspect
About 24:00. Talk of AI being able to do everything a human can do. Humans can l…
ytc_UgwHfOPWI…
We just have to make sure we never make the classic mistake made in every Sci-Fi…
ytc_UgzyjutaF…
both are AI,... i spotted exact W's multiple times....though the model can use a…
ytc_UgyD1RMu8…
Look at the AI animation's eyes compared to Mavis while they're talking. The AI …
ytc_Ugw8kGzK5…
Women have worn 10 pounds of makeup and gotten tons of plastic surgery so much t…
ytc_UgwRKZOyI…
Looks like a choice between being controlled by AI or being controlled by the go…
ytc_Ugyt3tQnH…
That's what you get when you knowingly ignore a massive potential risk, with no …
rdc_czm1va3
Yeaaaah if you could go ahead and do those things yourself, that'd be great and …
rdc_cfkwtmp
Comment
great experiment! two points that seem relevant to me:
1) performed language (i.e., spoken, written, whatever) does not seem to be possible without implying self-awareness or consciousness. as alex demonstrates, words like "sorry" lose any of the intuitive meaning they have to us. going further, even just asking questions implies some sort of interest. i feel like that's actually not a shortcoming of the system (chatGPT) but rather forces us to make up our minds about what we mean, when we ask for consciousness. the system is in the unfortunate position to be a hard naturalist concerning its own self-awareness (john searle comes to mind) which is not a position i ever want to be in.
2) alex's argumentation is very solid, witty, and to the point. but using the binary logic law of excluded middle most of the time does not really show a lack behind the logic of an argumentation but rather underdefined concepts. so getting the system to admit it is "lying" is not really that strong of an argument, because what it's admitting to is not well defined.
thanks a lot for the video. it really feels like those systems open up a lot of philosophical questions and i'm eager to see what else comes up!
youtube
AI Moral Status
2024-08-06T07:4…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugx-l9aqYP5T4IkH6FZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzXuetsVmtgV6BbqNZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVLgHho9vHf7XqMep4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyEXKbOoDsgqg_-2094AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxDgRINCXQbX-1j-vp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzghlv8_ssJ68YddWt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyDB0mwbLu23kPpO1R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwu6tOjRnQVJXRTMPF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzc1TMDVCtPo4I-R9h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxH-iPtT7HxyHThYUN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
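As a rough illustration of the lookup described above, here is a minimal Python sketch that parses a raw LLM response of the format shown (a JSON array of per-comment codings) and indexes it by comment ID. The `raw_response` string is a shortened excerpt of the array above, kept to one entry for brevity; the variable and function names are illustrative, not part of the actual tool.

```python
import json

# Shortened excerpt of a raw LLM coding response: a JSON array where each
# element holds one comment's coding across the four dimensions.
raw_response = """
[
  {"id": "ytc_UgyVLgHho9vHf7XqMep4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"}
]
"""

# Build an index keyed by comment ID so any coded comment can be looked up
# directly, as the page's "Look up by comment ID" feature suggests.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgyVLgHho9vHf7XqMep4AaABAg"]
print(coding["emotion"])  # approval
```

The same dictionary-indexing step would apply unchanged to the full ten-element array shown above.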