Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Can't believe why people trust the auto-pilot that much... I know it's a highly advanced AI better than human depending on situations, but it's so scary to me to let them drive automatically......

Source: youtube · Posted: 2021-07-27T22:4… · ♥ 57
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response

```json
[
  {"id": "ytc_UgyYYzw9RoIIO-wa1Zt4AaABAg", "responsibility": "none",    "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugz5isH5KpVUwUmAfVF4AaABAg", "responsibility": "user",    "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgydhPHomdHqOYN_ppp4AaABAg", "responsibility": "user",    "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzmCrbee1Fq9eKbv6p4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyf1fzBaI8cz5My1H94AaABAg", "responsibility": "company", "reasoning": "mixed",            "policy": "regulate",  "emotion": "mixed"}
]
```
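Because the model returns one JSON array per batch, each row can be checked against the codebook before it is stored. The sketch below is a minimal, hypothetical validator (the function name `validate_coded_batch` and the `ALLOWED` value sets are assumptions inferred from the sample output above; the real codebook may permit more values). It drops rows whose `id` is missing or whose dimension values fall outside the allowed sets:

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# shown above. Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "user", "company", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"unclear", "none", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def validate_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must be an object carrying a comment ID.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Every dimension must be present with an allowed value.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: the second row uses an out-of-codebook value and is rejected.
raw = """[
  {"id": "ytc_a", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_b", "responsibility": "robot", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]"""
print(len(validate_coded_batch(raw)))  # → 1
```

Rejected rows can then be queued for re-prompting or manual coding rather than silently corrupting the dataset.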