Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzdrZhT9… — "its not uncommon for woke people to think the world would be better without huma…"
- ytc_Ugjui8lyY… — "Why would an AI can only exist in a single machine? Machine is a machine, but AI…"
- ytc_UgxjV38s9… — "My fears exactly --- Politicians just censor themselves on that just as they do …"
- ytc_UgxBnLjIl… — "Tesla has been murdering people for years. There have been multiple incidents o…"
- ytc_Ugzp28C0o… — "Powerful people will force us to accept shit decisions taken by cheap software a…"
- ytc_Ugzm4sOTy… — "Glorifying your experiences over future experiences of others seems like a dumb …"
- ytc_UgxmbQrhx… — "What would full AI consciousness even look like. If I would ask AI if it’s consc…"
- ytc_Ugy_L_3_r… — "Question is question is could the AI sue the human for copywriting its song with…"
Comment

> I've seen this first hand. Whenever I'm in a group project, we'll start by discussing the assignment and tasks.
> Member1 asks "what do they mean by that/what is x and y?", then based on context and also looking at lecture slides, i will get back to them with "oh I believe its z".
> Conversation will move on, and 2 mins later member1 or member2 will say (referring to the preivous question) "okay so chatGPT says it z", and come to the exact same conclusion I did....

Platform: youtube
Posted: 2025-10-28T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxiPpVelVheSIY1oFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxlOWrIUIhTwwQ4ra54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx5Zaphh6dm45sPQi54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgybHuI2y2o3IzkndE54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxdpNaJZXy6Q6Ac08V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTSVnrbjwSXCG152l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyzfGWhZM0FjRj1gGF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyaqpYMyZNSj6LiNJN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWQR1qOcRS1nLkngF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgynXH3bAyHIaUJoUMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
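A raw batch response of this shape can be parsed and indexed by comment ID with a short sketch like the following. This is a minimal illustration, not the tool's actual code: the dimension names come from the coding table above, and the helper name `index_codings` is hypothetical.

```python
import json

# One row copied from the raw batch response above; a real response
# is a JSON array with one object per coded comment.
raw = """
[
  {"id": "ytc_UgxiPpVelVheSIY1oFh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

# Coding dimensions, as listed in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw_json: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    skipping any row that is missing one of the expected dimensions."""
    rows = json.loads(raw_json)
    return {
        row["id"]: {d: row[d] for d in DIMENSIONS}
        for row in rows
        if all(d in row for d in DIMENSIONS)
    }


codings = index_codings(raw)
print(codings["ytc_UgxiPpVelVheSIY1oFh4AaABAg"]["emotion"])  # indifference
```

Indexing by ID is what makes the "look up by comment ID" view cheap: one parse of the raw response, then constant-time retrieval per comment.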