Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Totally agreed that AI should be a tool to help humans not replace humans I wou…" (`ytc_UgzEAKI4S…`)
- "Jesus dude, who are you, surely you're not human, break a AI language model, tea…" (`ytc_Ugw_MAQOd…`)
- "so why creators saying AI IS not sentient...why these models ARE blackmailing wh…" (`ytc_UgxShAN_w…`)
- "First it’s that one crash out robot in China, now it’s this. Guys I think it’s h…" (`ytc_Ugyz4fIqe…`)
- "If the parents provided that info about the AIs, then they're both psychopaths b…" (`ytc_UgyLbojhE…`)
- "I'm currently pursuing a degree in data science. Professionals who poise themsel…" (`ytc_Ugwm6C5mh…`)
- "@BaconBaron52 i'm not worried FOR ai at all i'm just worried that it is going to…" (`ytr_UgykQaDni…`)
- "OK yeah good point, yes let's all let ourselves die. Close it up boys! Humanit…" (`rdc_eh4agej`)
Comment

> AI looks very smart and all the while makes big mistakes. If the human user or the AI itself was to innocently make poor decisions based on wrong conclusions.... No I dont believe AI is smarter as long as it's not 100% reliable nor autonomous and doesn't really know how to solve problems. But it is quicker which is a big issue indeed.

| Source | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2025-08-17T16:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
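Each comment is coded along four dimensions. A minimal sketch of that record shape as a Python dataclass; the allowed value sets below are inferred from the coded examples in this section and are likely incomplete, so treat them as illustrative assumptions rather than the project's actual schema:

```python
from dataclasses import dataclass

# Value sets inferred from the coded examples shown here; likely incomplete.
RESPONSIBILITY = {"user", "developer", "government", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "contractualist"}
POLICY = {"none", "unclear", "ban", "regulate", "liability"}
EMOTION = {"fear", "outrage", "approval", "disapproval", "resignation"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check each dimension against the value sets observed so far."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)

# The record rendered in the table above:
result = CodingResult("ytc_example", "ai_itself", "consequentialist",
                      "none", "resignation")
assert result.validate()
```

A validation step like this catches the occasional off-schema label an LLM coder can emit before it reaches analysis.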
Raw LLM Response
```json
[
{"id":"ytc_UgynCaj4sjCqdtdlEwt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy0FtG_xcpHdLf6OrB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz1BZv8cOGabONqmq14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwVsOOE8NOOXa-hWtN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxBZm2qzceOlKw5vI94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugxgbm5cQrGjL9Tk1ox4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzTf7vb0WrCpSAJ8eF4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyiUn4-NODoNCWqpml4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxTiCiPoatbvAO6D194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwNClq8PZmqnWu_iSt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}
]
```
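The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such an output might be parsed and indexed for the "look up by comment ID" view; the field names match the response above, while the `lookup` helper is illustrative, not part of any described pipeline:

```python
import json

# Two records copied from the raw response above, as sample input.
raw_response = """[
{"id":"ytc_UgzTf7vb0WrCpSAJ8eF4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxTiCiPoatbvAO6D194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

# Parse the array and index each coding record by its comment ID.
codes = {rec["id"]: rec for rec in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if absent."""
    return codes.get(comment_id)

record = lookup("ytc_UgxTiCiPoatbvAO6D194AaABAg")
print(record["emotion"])  # resignation
```

Indexing by ID up front makes each inspection a dictionary hit instead of a scan over the whole batch response.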