Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up the comment ID directly or by browsing the random samples below.

Random samples
- `rdc_ntcrh7p` — "I notice that any reddit thread on topics that I'm actually really knowledge abo…"
- `ytc_Ugy4rG6gu…` — "FAKE! Nothing like rotoscoping out whomever the fighter was that actually knocke…"
- `ytc_UgzZFLG0S…` — "I can see ai being good for like medical fields spotting potential problems peop…"
- `ytc_UgxqjOFcL…` — "If we were truly intelligent we would have already long respected one another an…"
- `ytc_Ugz9mVaLe…` — "I've never seen a group of people become more hostile to the idea of self better…"
- `ytc_UgxYc138e…` — "Here an little idea if we're going to co-exist. "Human and ai are not perfect" t…"
- `ytc_UgzL9HzXD…` — "Yuval is right: Human Alignment is necessary for AI Alignment. So far in huma…"
- `ytc_Ugwxz86hj…` — "Im careful with calling ai art ugly since that mesh of nonesense was someones st…"
Comment

> In our district before my class graduated there were strict rules against AI usage on specifically writing assignments/ essays, and even introduced a new software for the teachers that could review how many times you opened a document to type on it or even review live typing to check if you copy and pasted large chunks of it. they were very open about the new changes and heavily urged us to steer away from using ai because in our early years we never needed it so why use it now and ruin the foundational knowledge we already had and try to save a little time using ai

Source: youtube · 2025-07-10T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzisUrgNg9Hk6XslEF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy3dLnDMFMhb2WXAy54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyNpYH8LWS43sR-9Yl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw1H2oA6rK5YBjLG5h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzY_vFiBGUnXDWZt4R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwYLqLdiWlQjaLEBbV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwYs7f0-NLYie-djy14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy0TmbN8D-Y_qUKbIZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwjgFUwOIXCx2CHxGN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzd3LRl_WGlElDV35N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
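The raw response is a JSON array with one record per comment, carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response might be parsed and validated before use (the allowed-value sets are inferred only from the samples on this page and may be incomplete; the function name `parse_codes` is hypothetical, not part of the tool):

```python
import json

# Allowed values per coding dimension, inferred from the samples above;
# the full codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "company"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "ban", "regulate", "industry_self", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    skipping records with a missing id or an out-of-range value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Example with a single (made-up) record:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"approval"}]')
print(parse_codes(raw)["ytc_example"]["policy"])  # regulate
```

Validating against an explicit value set catches the common failure mode where the model invents a label outside the codebook; such records are dropped rather than silently stored.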