Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "There are idiots on both sides, yes. But on the Right live all the people whose…" (`rdc_degdtfu`)
- "immediately knew it was fake from the weird focus and the way the hair strands a…" (`ytc_Ugy1I1PYd…`)
- "7:26 would you hold a Swiss army knife accountable? We barely have what would b…" (`ytc_UgwJAGJMG…`)
- "Tesla autopilot level 2 does work on highways also, but you still must pay atten…" (`ytr_UgyUotdtz…`)
- "My chats are lonely af 😭🙏 (I don’t have the guts to be weird to character ai) 😭…" (`ytc_Ugz940vQ_…`)
- "As someone who dabbles in AI art for a game that I play, yes I can empathize wit…" (`ytc_Ugwvq_ZY5…`)
- ">ride this out til retirement. I wonder what your expectations are around ag…" (`rdc_ohmz6yz`)
- "I am reminded of the situation in Expanse (a science fiction show that came out …" (`ytc_Ugw8y40-m…`)
Comment
I took a course at Harvard last year, and the way they handled their AI was that they wrote their own AI agent that was integrated into the projects we had to do. This was a computer science course so maybe it would've worked differently otherwise, but their AI was basically trained exclusively on their course material and designed to be helpful without giving away answers, and trust me I tried very hard to get it to just give away the answers and it really didn't budge. We were encouraged to use their AI if we wanted to but didn't have to, and any usage of their AI was monitored by them so they could see your chat logs. I liked it because I felt like we were able to use AI without feeling dirty about it or feeling like we were cheating, and then for our final project we were given free reign to use any other AI tools if we wanted to, but tbh by that point I felt confident enough in the subject matter to not actually want to. So idk maybe that's a solution at least in the cs field

Source: youtube, posted 2025-08-13T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwCTPqWPReNuyODJXZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxyihqoFzQUsoDy6Qd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwzauNpPi3Puv0QqFt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZQiDogWf4xp0Kdk14AaABAg","responsibility":"society","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgySAbpF4q54CDxolUd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxuoKoGvGX99xNBhgZ4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxtCpE8S_8VvKV0ctx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw3oEdFGlAeyRUh-QJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwtwspebZv1pzqLBqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgysNzR3pAb_NszulOp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
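The raw response is a JSON array with one record per comment and the same four coded dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated downstream; the helper name `parse_codings` is hypothetical, and the embedded sample reuses two records from the response above:

```python
import json

# Two records copied from the raw LLM response above, standing in for a
# full model output.
RAW_RESPONSE = """
[
  {"id": "ytc_UgwzauNpPi3Puv0QqFt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxtCpE8S_8VvKV0ctx4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

# Every record is expected to carry the comment ID plus the four dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw coding response and reject records missing a dimension."""
    records = json.loads(raw)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing: {missing}")
    return records

codings = parse_codings(RAW_RESPONSE)

# Tally one dimension across the batch, e.g. the emotion labels.
emotions = {}
for rec in codings:
    emotions[rec["emotion"]] = emotions.get(rec["emotion"], 0) + 1
print(emotions)  # counts per emotion label, e.g. {'approval': 1, 'outrage': 1}
```

A check like this catches the common failure mode where the model drops a dimension or returns prose instead of JSON, before the codings reach the results table.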