Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes, and no. Yes, computers will never be able to understand Goedel’s theorem. No, since that is not the point. We are creating a new kind of live form, which is faster and more intelligent than us and completely lacks any morale. In the end it will be power which kills us, not whether A.I. is conscious or not…
youtube AI Moral Status 2025-06-25T09:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwvFuy-zcNrexPQEWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxENx4v8DR-hKlmAV14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz653HW58eAdV9B-F54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyek2bghu7OyBrvL_t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwvc20FlnZum3lN7zR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwu3QsSobK05qPQoA54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxKPB-IeF6MKWEqPoN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLalFwqHauef3zW814AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzRn8WJMHhk7M347414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzrv6G1sb9PaVoXFAN4AaABAg","responsibility":"consequentialist","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
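The raw response is a JSON array with one record per comment, keyed by `id`. A minimal Python sketch of how such a batch could be parsed and a single comment's coding looked up and validated is shown below. The allowed category sets are inferred from the values observed in this response, not from a complete codebook, and the `lookup` helper is illustrative rather than part of any real library.

```python
import json

# Hypothetical sketch: parse a batch coding response like the one shown
# above and retrieve the record for one comment id. Category sets are
# inferred from observed values only (an assumption, not the full codebook).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage", "resignation"},
}

def lookup(records, comment_id):
    """Return the coding dict for comment_id, checking each dimension's value."""
    for rec in records:
        if rec["id"] == comment_id:
            for dim, allowed in ALLOWED.items():
                if rec[dim] not in allowed:
                    raise ValueError(f"unexpected {dim} value: {rec[dim]!r}")
            return rec
    return None  # id not present in this batch

# Example with one record shaped like the raw response above.
raw_response = (
    '[{"id":"ytc_Ugzrv6G1sb9PaVoXFAN4AaABAg",'
    '"responsibility":"distributed","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)
records = json.loads(raw_response)
coding = lookup(records, "ytc_Ugzrv6G1sb9PaVoXFAN4AaABAg")
print(coding["emotion"])  # fear
```

Validating each record against the allowed sets at parse time catches the occasional off-schema label an LLM can emit before it reaches downstream analysis.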