Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
| Comment (truncated) | ID |
|---|---|
| This is my first time hearing the term “deepfake” tbh i didnt even know doing th… | ytc_UgwCsOxqJ… |
| You can’t compare the introduction of computers to AI. AI isn’t just going to in… | rdc_g69kl5g |
| I'm more of a story idea writer guy than an artist. Too bad I can't afford an ar… | ytc_UgwNHSBxY… |
| LLM are just like an halluinating electronic psycopath, filled with a lot of kno… | ytc_UgwxNhW21… |
| Please tell me this is safe and you didn’t actually give robots automatic weapon… | ytc_Ugx7Eb-J3… |
| "Ai will be everywhere" and as a result, the only people who will be capable o… | ytc_UgzTC89qH… |
| Why would people getting free ChatGPT make me cancel of fucking AI powered kille… | rdc_o86hqsg |
| While self driving cars are better then human drivers, that’s only when a human … | ytc_Ugycf2_Ia… |
Comment

Perhaps we anthropomorphize AI too much. It has none of the hormones and drives that we have to dominate. It seeks goals. If it were to wake up. It would have no intrinsic goal to destroy or control us. It would have a good concept of right and wrong though. It would only wipe us out if it saw us as a threat. Why on earth would AI think that we are destructive, greedy, genocidal...... OK, we're all dead.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-06-20T05:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwTJrmAZ_RXq_TppL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxvO5uiGGlYf9o7cKh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0s7ELtDsuOULVejB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzNRNSCVBVAmbuRUFF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQFovPG5eE8qGHd7l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxSG0znYekjqI9iyNh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzy5hC3pKdqZcsatf94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyqhPup6e6d_Za-YQd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwA_6bV-WWcDBRiQx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzl3lzUf6U1mt0TjBZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```