Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxR1b_ot…: I’m using AI to work on cutting all my employers and coworkers and eventually g…
- ytc_Ugwzh_mF0…: This could be bait because AI is being trained by users in the real world and th…
- ytr_UgyKWevzP…: That sounds like quite an intriguing dream! The idea of robots controlling human…
- ytc_Ugw7scxML…: Ai is accelerating an already rapidly changing world. If it didn't affect people…
- ytc_UgwGQOsf1…: " *AI will most likely lead to the end of the world* , but in the meantime there…
- rdc_e2wa0u2: I dunno, I feel Children concentration camps, where kids are kept drugged takes …
- ytc_Ugxx4YsIt…: The problem isn't AI making mistakes sometimes, humans do that constantly, too. …
- ytc_UgxK_AQDl…: I am so fed up with this bullshit. AI art looks good, but it does NOT contain an…
Comment
"It's going to be the Good AI vs the Bad AI". When the bad AI is capable of being more intelligent, more capable, and powerful that it infects the Good AI like a hidden self-aware virus to control it, we've already lost and it will probably destroy us as we will be less efficient for its purpose or life.
youtube · AI Governance · 2024-01-04T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyJnXmJnUjI4BZw5cd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwpYg_nreZwNpknQop4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzjYp7Oaf7u7i6xvml4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzm3XB7nXqrSR1ypFZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwTPRPTwe106_KghyB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyNfYpULfh_Ir9Ix1R4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwO_VzN-pF3q4Py3c54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwkgvHDi4UVDVYQ8Ml4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxm3y0DqVd6ftgejit4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxD2Yiu9gI7Z8nwxY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
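A raw batch response like the one above can be parsed and indexed by comment ID, which is how a lookup view would resolve an individual comment's coding. The sketch below is a minimal illustration, not the tool's actual implementation; the allowed values per dimension are inferred from the responses shown here, and the real codebook may contain more categories.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above (an assumption; the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw batch-coding response and index records by comment ID,
    dropping any record with an out-of-codebook value."""
    records = json.loads(raw_response)
    indexed = {}
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            indexed[rec["id"]] = rec
    return indexed

# One record copied from the response above.
raw = ('[{"id":"ytc_Ugzm3XB7nXqrSR1ypFZ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codings = index_codings(raw)
print(codings["ytc_Ugzm3XB7nXqrSR1ypFZ4AaABAg"]["emotion"])  # fear
```

Indexing by ID also makes it cheap to spot comments the model skipped or coded with a value outside the codebook, since those simply never appear in the resulting dictionary.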