Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The scenario where AI will bring more "cheap" services and everybody will have a…
ytc_UgwWV35Rn…
Maybe we should start a boycott for Amazon to remove ai, ai will NEVER do jobs a…
ytc_Ugw_ZRA2b…
If the human in the first video was animated by real humans then the hair would …
ytc_Ugwte_C1m…
My experience with AI in the corporation so far is that 10x the output is expect…
ytc_UgxSAjqfP…
All AI talk is nonsense. Bring engineers on the table, back end developers. Ai d…
ytc_Ugy6bbe1N…
15:50 (Sorry about my english, is not my first language) That's the thing about …
ytc_UgwYp9iec…
By 2030? 😂 yea okay I’ll be back in 2030 to prove that wrong. Sure we will lose …
ytc_UgzHo-BCU…
LLMs democratically reflect human nature(it is all linear algebra and stats). If…
ytc_Ugz9zUH8u…
Comment
Absolute bonkers, this development is pretty scary, I consider 80% of humans on the net to be gullible enough to fall prey to all kinds of future AI shenanigans. Edit:6 Months later December 2025, I have to admit I was way off, in the last year things have accelerated big time and I can't tell the difference often between real and artificial myself anymore.
youtube
AI Moral Status
2025-06-04T18:2…
♥ 445
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
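The four dimensions in the table above could be checked against a fixed value set before use. A minimal sketch, assuming the allowed values are exactly those seen in the sample outputs below (the `ALLOWED` mapping and `validate_coding` helper are hypothetical names, and the value sets may be incomplete):

```python
# Hypothetical coding schema inferred from the sample raw responses;
# real codebooks may include values not observed here.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "resignation", "indifference", "mixed"},
}

def validate_coding(row: dict) -> bool:
    """Return True if every dimension holds a value from its allowed set."""
    return all(row.get(dim) in vals for dim, vals in ALLOWED.items())

# The coding shown in the table above passes the check.
print(validate_coding({"responsibility": "distributed",
                       "reasoning": "consequentialist",
                       "policy": "regulate",
                       "emotion": "fear"}))  # → True
```

A check like this catches the common failure mode where the model invents an off-schema label.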
Raw LLM Response
```json
[
{"id":"ytc_UgxsxZwcYMnkgGQ8M-F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyEEu7Feh30h3BPylN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgylGYmkIcM8JaXECJt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxsfcONkBhyJA6ypot4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzZSjT89zazee1r7_J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwBH66VWhS6bwpzioB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwmVzdOvJBNzykHFQV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwitw9AR3tAySgKRJp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwDZYwcrtnj53MKx1x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxV7TfyGbJcnMi533B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
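A raw response like the array above can be parsed with the standard `json` module and indexed by comment ID, which is how a "look up by comment ID" view can retrieve an individual coding. A minimal sketch, assuming the model returned syntactically valid JSON (only two rows of the batch are reproduced here for brevity):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_UgxsxZwcYMnkgGQ8M-F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzZSjT89zazee1r7_J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Index the batch by comment ID for O(1) lookup of a single coding.
by_id = {row["id"]: row for row in json.loads(raw)}

coding = by_id["ytc_UgzZSjT89zazee1r7_J4AaABAg"]
print(coding["policy"])  # → regulate
```

In practice the parse should be wrapped in a `try`/`except json.JSONDecodeError`, since model output is not guaranteed to be well-formed.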