Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we don't know what makes us, us, we can't know if AI is us. We can only know what we do, but not (at least yet) what we are, or what ultimately makes us do what we do (if you consider that "to be"). So the only truth we can state is whether AI does as we do, not whether AI is what we are, or whether AI does what it does for the same reasons we do it (to be). We know AI does what it does because of the data it's trained on (fancy statistics on massive amounts of data). If we manage to prove that humans also do this kind of fancy statistics, and how we do it, then we might have proved AI consciousness. We need something else: whether, given this statistical analysis of data, AI can create or invent things like humans do. We would need to train an AI on data with zero knowledge of one human invention and then ask the AI the same questions the inventor had in mind when he got the idea. Then, if the AI manages this (creativity) and we manage to prove our brains work the same way, we could say that they are indeed conscious.
youtube AI Moral Status 2023-10-12T19:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugwtq9uE_PbBTvlNDvx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugw7z2MvnAMS6_yhSRl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},{"id":"ytc_UgxHxmnNQYAlYpvthoR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},{"id":"ytc_UgzYaw_t0avrdXvu1z54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgwZ9Z4DqMVTSDAKYyl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgxmxV8igOFsrt6ret54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgzLktdrjVoPz93e-vZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},{"id":"ytc_UgzSZlHzoZ52DC-F6at4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugz8qnf2o5l2sSS0xzB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgyXTauLDc9e84YbIOB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]