Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_UgiY0rp6X…: "i think i missed the conclusion😂 self driving cars will keep the necessary dista…"
- ytc_UgzYpeGci…: "What the AI thought the last day of the world would be like. Me: “Nuclear Armag…"
- ytc_UgyGHUYFu…: "Why do robot is ask if she or he is wiser than human, who had created them..bein…"
- ytc_UgxSJMuO8…: "They're less likely to use ai if its not profitable. Dont watch anything ai gene…"
- ytc_Ugw5My7Yc…: "I've been saying this for months. I remember 20 years people started outsourcing…"
- ytc_UgxfJySdc…: "Ai has been controlling you gay hillbilly wiggers since you were created except …"
- ytc_Ugy9Tsqjy…: "It's not that AI can't, it's that it hasn't 'learned' how to do so yet.…"
- ytc_Ugwc2E_ex…: "This man acts like he didn’t understand the repercussions of what he was doing… …"
Comment
> The last bit about preventing war using AI to police “lies” is probably the most chilling of all the things you think would prevent war. Who determines what’s a lie and who are the liars? To assume AI or any other entity would be neutral when it comes to “truth” is ridiculous, in the same way it is when we assume we know the truth. Censoring speech in any of its forms is more dangerous than anything else- assuming that truth is indeed what you’re interested in.

youtube · AI Governance · 2024-01-21T17:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzZ4tMg43ePjaeNU_B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRe3N9pItN2SQyiQF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxWUye6U5IcAeN74hN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzpl-KMoo6OhtD6uId4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzgRws4Wdvvw9WDarN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx1jFsrUHLzLuLOV2J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx_8vAbE1oR1IJjcON4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSuOyHcA6gkwZNEhp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxHuL_x8Vsy0V99vhx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyyfYEgn10jNnegfzN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
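The raw response above is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of how a downstream script might parse such a batch and sanity-check each record before storing it. The allowed-value sets below are an assumption taken from the values visible in this sample; the project's actual codebook may define additional categories.

```python
import json

# Category values observed in the sample response above (assumption:
# the real codebook may include categories not seen in this batch).
ALLOWED = {
    "responsibility": {"distributed", "developer", "ai_itself", "user", "none"},
    "reasoning": {"deontological", "virtue", "consequentialist", "contractualist", "unclear"},
    "policy": {"none", "liability", "unclear", "industry_self", "ban", "regulate"},
    "emotion": {"fear", "mixed", "outrage", "resignation", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    raising on missing fields or unexpected category values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec[dim]!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with the first record from the response above:
raw = ('[{"id":"ytc_UgzZ4tMg43ePjaeNU_B4AaABAg",'
       '"responsibility":"distributed","reasoning":"deontological",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_codings(raw)
print(coded["ytc_UgzZ4tMg43ePjaeNU_B4AaABAg"]["emotion"])  # fear
```

Validating at parse time means a malformed or off-codebook model answer fails loudly here rather than silently skewing counts in later analysis.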