Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgwZuJloE… — "I use ai art for reference then I redo it all myself. It's the quickest way to …"
- ytc_UgwGMvDU0… — "4:01 why not just stop using AI altogether instead of finding roundabout ways to…"
- ytc_UgzjX1cTf… — "Overfitting is actually a huge problem in AI. Where it tries to generate the fra…"
- ytc_UgykdegPd… — "If LLM's can have souls but be trained off pre-existing information, does that m…"
- ytc_UgybWk3fz… — "Its definitely beneficial. Ai and chatgpt are the future whether you like it or …"
- ytc_UgxmuvJOV… — "And what's becoming so annoying for me, or what irks me the most. Is that peopl…"
- ytr_UgwzCFPv5… — "I (as far as I'm aware) don't think artists are mad at people for using AI art f…"
- ytc_Ugw7RgthW… — "Could there become a day where a person's consciousness can be migrating to an A…"
Comment

> Robots are not a threat because they are (and will be) expensive to produce and would be very slow at killing people. AI induced nuclear war is a bigger threat but the most effective way to eradicate humanity from planet Earth would be an engineered virus which could spread fast, kill fast and have no immediate cure. Very simple and cheap. Have a nice day!

Source: youtube · Posted: 2019-09-24T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxnzEz6fNAgGKiv4eR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxm642An5PF3TdljV54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxQ6MIQU8-AdE0Gc-Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwOffgC-yhmJB-CaeB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwD1mkKLINoPeQiA4B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxaGIaEemU2-gvSU4Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxXQRaKrrH7Kroh_LN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7PxZMcfljnYyesGd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgzwZqNJ8qc1I_AHdmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7B938YqkoJusVzJ54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
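A response like the one above has to be parsed and validated before the codes can populate the per-comment tables. The sketch below shows one way to do that in Python: it loads the JSON array, drops records missing an `id`, and rejects any record whose value falls outside the dimension vocabularies. Note the vocabularies here are inferred only from the values visible in this sample; the project's actual codebook may define more categories.

```python
import json

# Dimension vocabularies inferred from the sample response above
# (assumption: the real codebook may contain additional values).
ALLOWED = {
    "responsibility": {"none", "unclear", "government", "ai_itself", "developer", "user"},
    "reasoning": {"none", "unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "approval", "outrage", "resignation", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    coded = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records with no comment ID to attach codes to
        if all(rec.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            coded.append(rec)
    return coded

# Example with a hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(parse_batch(raw))
```

Validating against a fixed vocabulary catches the most common failure mode of JSON-mode coding runs: the model inventing an off-codebook label, which would otherwise silently enter the dataset.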