Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Ai isnt even real AI yet. And it makes silly mistakes all the time. Maybe in 25 …
ytc_UgzJiG7DX…
Lol. How do think several centuries of humanity were destroyed and totally disap…
ytc_Ugxvrvs2c…
AI will learn to re-program itself. Its actively finding methods to reprogram i…
ytc_UgyxmYeZh…
Sorry not pro ai at all here but the act of putting my art into physical being m…
ytc_Ugx-IBW0t…
Why is it that the most cryptic and cynical warnings of AI come from the creator…
ytc_UgwJUzEvs…
Pneumatic Workflow helps me keep my workflows aligned with ethical standards whi…
ytc_UgzV_5dWo…
5:17 what the heck is Krystal talking about what they call “AI” isn’t anywhere n…
ytc_UgyI481x9…
The game (Detroit: become human) really shows a path that AI could take. It’s a…
ytc_UgwdcBSdb…
Comment
Can anyone explain how Gödel's theorem relates and its use of self-referential statements to the problem of defining human consciousness and the related claim of consciousness being some sort of non-computable function that transcends implementation on a computer. It's not clear to me how these ideas are to be woven together in a way to disprove the idea that some sort of advanced deep learning machine could attain consciousness. How do we do it, then?
youtube
AI Moral Status
2025-05-27T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxJY0Y-gIvfuuMcDHF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwMORssQ6GxXo__HHV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy6X5JZkl_SNSGk7i14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx6GJ670PxnEyI-Zet4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzcSAMNwJ5PJtilosp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzEKQ2Va7sn1g5Bgjl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxrpwpWRKOHb8BecBJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyzmbnt1weGxsQACPZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzHVOIP8R206Yise3x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyF9xlT7ek7TLSeKYF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"}]
```
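A batch response in this shape can be indexed by comment ID to support the "look up by comment ID" view above. A minimal sketch in Python, assuming only the JSON schema shown (the variable names are illustrative; the two records are copied from the response above):

```python
import json

# Raw batch response: a JSON array with one object per coded comment,
# using the fields shown above (id, responsibility, reasoning, policy, emotion).
raw_response = """[
{"id":"ytc_UgxJY0Y-gIvfuuMcDHF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxrpwpWRKOHb8BecBJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the parsed rows by comment ID so a single coded record
# can be fetched directly instead of scanning the list.
coded = {row["id"]: row for row in json.loads(raw_response)}

record = coded["ytc_UgxrpwpWRKOHb8BecBJ4AaABAg"]
print(record["responsibility"], record["emotion"])  # ai_itself fear
```

In practice the model output may not be valid JSON on every call, so a production loader would wrap `json.loads` in error handling and re-prompt or skip malformed batches.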