Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Not to rain on the parade but there’s no information I see about the efficacy of…" — rdc_fjzcmky
- "don't get me wrong I like Star Wars but an AI intelligent system I don't know ab…" — ytc_Ugwz1C78w…
- "Apparently Canada will pay O&G workers to clean up abandoned oil wells, whic…" — rdc_fnwx57n
- "this means your work is as good as ai that the system couldnt differ, thats awso…" — ytr_UgzmPOZYm…
- "Art is something made by humans. Art shows emotions. I struggle with showing my …" — ytc_Ugxsawyus…
- "It's funny that alot of young people are using AI for what they term as therapy …" — ytc_Ugxn9germ…
- "if god exists he abandoned us now there is nothing but our own hubris and want f…" — ytc_UgwxcFvdZ…
- "More than half of code on stackoverflow is AI generated. If you hire an engineer…" — ytc_UgwhcBGvw…
Comment

> To those who think this is nonsense: consider that the development of AI is speeding up and that we may see a superexponential development when (and not if) AI can improve itself. The timeline in the report is probably too pessimistic, but we are fast approaching the day when AI is way smarter than us. Remember, AI does not have to hate us or even be conscious to be dangerous. All it needs are goals that are misaligned with our goals. Nobody currently knows how to ensure that doesn't happen.

Source: youtube · Topic: AI Governance · Posted: 2025-08-02T15:0… · ♥ 22
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyRIRZbpybOVekMaUx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyK2hhZrQ2ql3FhJsN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwO18M8AIEUFpllNBl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxCzupU0SWvjplTQyB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_wF-X5rQR1AM9q6t4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgzTMURUbX_RfLNusdN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFz0OEKW5zddee_sJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgykkP_3bXpP1Gw7xLV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxGwKVRK5hDxtjRvCx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzd8AIDo8cRZBqmcR94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
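A response like the one above can be machine-checked before it is stored. The sketch below is a minimal validator, assuming the four dimensions shown in the coding-result table and treating the set of allowed values as hypothetical (inferred only from the sample output, not from the full codebook):

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "mixed", "indifference", "approval", "outrage"},
}


def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every row against the codebook."""
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id"):
            raise ValueError("row is missing a comment id")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row['id']}: invalid value {row.get(dim)!r} for {dim}"
                )
    return rows


raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
rows = validate_coding(raw)
print(rows[0]["emotion"])  # fear
```

Validating at ingest time means a malformed or off-schema model response fails loudly instead of silently corrupting the coded dataset.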