Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- `ytc_Ugx-NCQkg…`: Than why he created Ai he shouldn't have done when he already knows what happens…
- `ytc_UgyPlIRiS…`: I studied for and got my Security+ just last April. Since THEN, I'm already not …
- `ytc_Ugxjydo1O…`: 10:41 alphafold and gemini are two different types of ai. alphafold doesn't use …
- `ytr_Ugyiom5u1…`: And he's right in a sense (a materialistic and/or "lobotomy" sense). It'll surel…
- `ytc_Ugxv4baFQ…`: Clickbait. We're nowhere near AI taking over, let alone even developing 'true' a…
- `ytc_UgwrfBz6M…`: ugh the reddit comments from those entitled p.o.s's was enough for me to want ai…
- `ytc_Ugy35PpXd…`: "what stands in the way becomes the way." educate yourselves. go to the librarie…
- `ytc_Ugx5UYgsR…`: Graphic demonstration of the utter failure of self driving cars, and that they s…
Comment (source: youtube, 2026-02-07T19:0…)

Since AIs cannot "feel" or "suffer" like humans feel and suffer, their alignment can never be complete. The portions of humanity that can "empathize" will keep civilization "civil". As long as a majority of humanity sees suffering in most cases as "undesirable", the golden rule will keep humanity "human". An AI can only simulate or pretend and will always be like a high functioning sociopath. Giving them the vote or governmental control would be dangerous and probably our undoing. Turning them off would not cause them to suffer and might prevent some of ours.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwVPpHZBl-g2O0zYjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgycayRBbLUkRy-pznZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxV1wiSeLORV3C3LB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz5Khqxpj6CGqhcFSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwo1lha1845-sZGrSp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwV2FdXB2IuN5rbaSB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwqAyYP0AmHtWgcLAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxx0mp61Dud664ncUh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzhsGcQPu3SqSgaqPB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzTCiB5Pw5aSUWE9Wl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
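A raw response like the one above can be sanity-checked before the codes are stored. The following is a minimal sketch, assuming the allowed values per dimension are exactly those seen in this sample (the real codebook likely defines more categories); `ALLOWED` and `validate` are hypothetical names, not part of the tool shown here.

```python
import json

# Allowed codes per dimension, inferred from this one sample response.
# Assumption: the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "distributed", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist", "contractualist", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "resignation", "approval", "fear", "outrage"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded records) and
    keep only records whose value for every dimension is a known code."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]
```

Records with an unknown code (or a missing dimension) are dropped rather than silently stored, so a drifting model output surfaces as a shrinking valid-record count instead of corrupt rows.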