Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples — click to inspect

- AI got in oceans and used old ships for material and energy to build massive man… (ytc_Ugxt5ywnJ…)
- Dave will be the reason AI kills us. A few more conversations like this and we a… (ytc_UgwxhiFvE…)
- "2 years of experience" should've stopped there. You are not even remotely close… (rdc_mjv56u7)
- 5 assumptions... Prove one of them false to escape doom. 1. AI will continue imp… (ytc_Ugy5KjZmX…)
- Nothing really. Just that the US switch from that to massive military spending, … (rdc_gkcce29)
- Yeah, water for the machines, but not for the people! It's so fucking distopial,… (ytc_Ugzf3AJEv…)
- So the AI is smart enough to literally outmaneuver the entire human race to the … (ytc_UgxSU7rdy…)
- Please give me the same money I make now just to sit around, it won't impact my … (ytc_UgwtcBvZv…)
Comment
Humans are animals, our intelligence developed as a tool to sate our instincts and survive and thrive as a species.
AI is computations in boxes. It may totally outstrip our problem solving abilities, but it has no innate instincts or motivations, other than what we simulated into it. So the real threat comes in what people in power do with these tools and if people abdicate autonomy and authority to these soulless boxes.
youtube · AI Governance · 2025-09-04T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwQ1roC2TVImh3Nt-54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwVu1gLqtdRRhNh9yN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy2JsqC_BKw36VBvWN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxcAxh6IiFRQ2yOEAt4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz_6zOAvN11xn36itl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxCXl7qYoq7gfQfUmF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwP1Tj9jtq1s_kBq6J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgysVjtJxZRek8J6IWh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx-6aZR3bulb9kvUUt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgylzdhgzAOHG46m6hJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
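The lookup-by-comment-ID view can be reproduced offline: the raw LLM response is a JSON array of per-comment codings, so parsing it and indexing by `id` gives direct access to any comment's dimensions. A minimal sketch in Python, assuming the array shape shown above (the short ids and the `index_codings` helper are illustrative, not part of the tool):

```python
import json

# Raw model output: a JSON array of per-comment codings.
# The ids below are shortened placeholders, not real comment ids.
raw_response = """
[
  {"id": "ytc_AAA", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_BBB", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index each coding row by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_AAA"]["emotion"])  # -> fear
```

In practice the raw string would come from the stored response for a batch, and a missing id would surface as a `KeyError`, which is a useful signal that the model skipped or mangled a comment.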