Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Whether we like it or not, the inevitability that logically follows from employi…
ytc_UgwwVJWwU…
when "will be" turns into "is" then it's possible to believe in something. until…
ytc_UgyWIJlwv…
I’ve only used AI to ask ”why do I get this error????” (Including the error and …
ytc_UgxJ_GOD2…
The Techno fascists have two goals: 1) Improve the dexterity of their robots to …
ytc_UgwV47l4X…
I wish AI replaces farming and supply completely and owned by govt. So we all do…
ytc_UgxJRS3gH…
The west wont do anything as long as China continues to supply an exploitable la…
rdc_f1yjp25
@a@alaughingfreak867e idea that someone has an “obligation” to “disclose” that t…
ytr_Ugz26jpcY…
I actually feel bad for ChatGPT since he trusts humans, but this is genuinely fu…
ytc_Ugzd1qTZQ…
Comment
I think that danger that is linked with AI depends mainly on intentions of those who program or use that AI. Robots itself can learn from us - we should give them something to work with, not only bad influences like politician does. Programs did not forget, you cant lie to program, you cant cheat it. And tell me - when do you last saw politicians of any country being honest? :/ i think that with present world situation and reality AI have every reason to destroy us because if it did not - we will destroy ourselfs by climate change, or other things. And due to preserve human kind - it have to enslave us, and teach us how to live without lies, gathering goods beyond measure, without killing others and harming world we live in.
youtube
2018-04-10T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwIm_dRZr0_Uimi2kl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwhXV0iHf7A42bhkEB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzvlvnvgpcvgB4AXSx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyY9zLNl7E20JzEDnN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzg8jTERfTec0xqbuV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyDGW-tG_p8s-_gQQd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzXvz05HiU8QWSFG9N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy26yf4GyBFElYfbsJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgymSMdpAgNXxQ1TLP14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1-aTeed8ThuJ-sJZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
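The raw response above is a JSON array with one object per coded comment, whose fields mirror the "Coding Result" table (responsibility, reasoning, policy, emotion). A minimal sketch of how a viewer like this one might map that batch response back to a single comment's codes by ID; `lookup_codes` and the trimmed `raw_response` sample are illustrative, not the tool's actual implementation:

```python
import json

# Trimmed sample of the batch response shown above: a JSON array of
# per-comment codes, keyed by the comment ID.
raw_response = """
[
 {"id":"ytc_Ugy26yf4GyBFElYfbsJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgwIm_dRZr0_Uimi2kl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
"""

def lookup_codes(response_text, comment_id):
    """Parse the batch response and return one comment's codes (or None)."""
    for row in json.loads(response_text):
        if row["id"] == comment_id:
            # Drop the ID itself; keep the four coding dimensions.
            return {k: v for k, v in row.items() if k != "id"}
    return None

codes = lookup_codes(raw_response, "ytc_Ugy26yf4GyBFElYfbsJ4AaABAg")
print(codes)
# {'responsibility': 'developer', 'reasoning': 'virtue',
#  'policy': 'industry_self', 'emotion': 'approval'}
```

Because the model returns all codes for a batch in one array, a single failed parse invalidates the whole batch; validating the JSON before storing per-comment rows (as the "Coded at" timestamp suggests happens here) is the natural design choice.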