Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I think that deepfake videos are disgusting. It might be considered art if you s…" (`ytc_UgypItr3h…`)
- "we shouldn't penalise automation, we should tax all rich people more to pay for …" (`rdc_gli01dr`)
- "Going straight while attempting to slow seems like the obvious choice. Also the …" (`ytc_UgjjWOUDi…`)
- "Wow, that traced over boot is kind of blowing my mind. I'd already noticed befor…" (`ytc_UgwaxO5gD…`)
- "You don't explain why it's bad to have deepfakes, you just show that people don'…" (`ytc_UgyXRlFyv…`)
- "I have trouble picturing images in my mind, so using ai just to get a picture in…" (`ytc_UgwbZ-7r2…`)
- "AI’s just a tool. Humans are still in charge. It helps us out, but it still need…" (`ytc_UgycR16Ik…`)
- "Copilot is a big pile of sh1t. The code it produces for me is never good enough…" (`ytc_UgxLAozxY…`)
Comment

> I think there's already a lot being done to make AI safe - there are sophisticated guardrails being put in so that people can't chain innocent-sounding requests to create harmful things (like explosives etc.). This is why I'm comfortable that AI is not 100% accurate - as long as we are putting the right effort into making it safe. My caveat - I'm hoping that developers will consult more openly about the ethical considerations as they arise - or at least have the right ethical oversight of what is happening - both within companies and across the world generally.

youtube · Cross-Cultural · 2025-11-22T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgylrGLXDrkvd9Hqw314AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwBmqhSoMohwo2fYpp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy372F0NgCZwbhhGb54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzo2wktaMva4qMxy794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxy3nCri4Vlj2oGrXN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgztCjk9jzv-LCQUBox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxiNVkNHbhHesVuuPN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzj-E0217NIpaBt6gl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzZn7MWtU1QIMBuxmp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwP2CCRSE-Q9yA3ywt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
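A raw response in this shape can be parsed and indexed by comment ID so that individual codings are easy to look up. The sketch below is a minimal, hypothetical illustration: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the example above, but the function name, validation logic, and the two inline sample records are assumptions, not part of the actual pipeline.

```python
import json

# Hypothetical excerpt in the same shape as the raw LLM response above:
# a JSON array of coding records, one per comment.
RAW_RESPONSE = """
[
  {"id": "ytc_UgylrGLXDrkvd9Hqw314AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy372F0NgCZwbhhGb54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response and index the codings by comment ID,
    rejecting records that are missing any expected dimension."""
    records = json.loads(raw)
    codings = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        codings[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return codings

codings = parse_codings(RAW_RESPONSE)
print(codings["ytc_Ugy372F0NgCZwbhhGb54AaABAg"]["policy"])  # regulate
```

Indexing by ID also makes it straightforward to detect comments the model skipped: compare the dictionary's keys against the batch of IDs that was sent.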