Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- 8:35 First you ask an AI to give solutions without moral boundaries, and then ur… (ytc_UgyXxO9lg…)
- That's because they programmed it to do so so they can blame mass destruction th… (ytc_UgwQyXUBn…)
- Funniest thing is: AI is getting there fast, even Linus of Linux has started adm… (ytc_Ugwo_BpW2…)
- It seems people dont realise ai art and normal art will eventually have to coexi… (ytc_UgxkwKLsq…)
- You still need to code AI to get it started. Also someone has to supply the data… (ytr_UgxXlbU4G…)
- “Construction signs were inconsistent” and AI went haywire. Well in the real wor… (ytc_UgwqSHDhD…)
- @THEKINGOFNISSANS Targeting systems on a fighter jet aren't language models maki… (ytr_Ugw7f4Ork…)
- > *, to train an AI we should need court permission* I don't see how that'd be e… (ytr_Ugy5PoFPc…)
Comment
If they went from 2 hrs of learning to 3 hrs, I’d say it’s a great idea!
youtube · Cross-Cultural · 2025-05-26T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgycK4TnRiyWpubx5Rt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw23z0OSK8KPfJ3XcN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKnzA74rsR_hFe4dp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxfzoZRP_R3kC4Gc_h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxyMko-j4y-agXqdbB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyvFtxXS_tQFG3rwep4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzLyjUK0ftsQkN-8gJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgydGhFYbumDrl71eFF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyOBfqKCHO0jRi7Gq54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxVse9I-QIbJu8lCjx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
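The raw response is a JSON array with one object per comment, each carrying the same four dimensions shown in the Coding Result table. A minimal sketch of parsing that output and looking it up by comment ID, assuming only the schema visible above (the `index_by_id` helper is hypothetical, not part of the project's actual tooling):

```python
import json

# Two records copied from the raw LLM response above, standing in for
# the full array.
RAW_RESPONSE = """[
{"id":"ytc_UgycK4TnRiyWpubx5Rt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzLyjUK0ftsQkN-8gJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions, taken from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_by_id(raw: str) -> dict:
    """Parse the model output and build an id -> codes mapping."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        # Guard against malformed rows: every dimension must be present.
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed


codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgzLyjUK0ftsQkN-8gJ4AaABAg"]["policy"])  # → regulate
```

This mirrors the "look up by comment ID" workflow: once the array is indexed, any coded comment's dimensions can be retrieved in constant time from its `ytc_…`/`ytr_…` identifier.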