Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Im convinced they have coded AI to harm people. They are even convincing people …" (ytc_Ugxp5Ftsb…)
- "Wdym depressing. That's just AI becoming smart in a sentimental sense instead of…" (ytc_Ugwh16qkm…)
- "Well, we’re not anywhere near General AI and anybody says we are, have really ov…" (ytc_UgyRm5EKp…)
- "Fuck everything Sam Altman does, straight up. This man is a dollar tree Lex Luth…" (rdc_lp8lxee)
- "I’m trying to remain pretty neutral on the whole ai “art” debate, mostly siding …" (ytc_UgzOoJMt2…)
- "No. Ai companies will go broke cuz nobody wants to pay robots. If the people pre…" (ytr_UgzkcbqJm…)
- "Once they perfect Quantum computers and link it with AI ~ Well, lets just say we…" (ytc_Ugz-_OFFb…)
- "To be clear, I don’t actually mind the idea of AI taking over all jobs; I’m agai…" (ytc_UgyL3sR1y…)
Comment

> Humans don't just destroy we create, and we replenish. Most of the negative views people hold about the human race is just Malthusian propaganda. We really aren't that bad, the planet really doesn't mind us. Without us the planet would go on all the same and end in the great fire at the end of the solar system a few billion years from now. I think Ai could be dangerous for sure, but it also has limitless potential to solve so many problems and increase the wealth of humanity by making each person so much more productive. Think nuclear bomb vs nuclear energy. One is very dangerous and the other is the cleanest most, sustainable, and safe form of energy we have access to. Ai has the same fork in the road as I see it.

youtube · AI Governance · 2023-07-07T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyE1x6OMxUeY4UZfQ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOQjh1ucjR6l6mnQR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz9ErIQs3fK4r6TT7J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyqYuaUtyzZm8E2NDl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"mixed"},
{"id":"ytc_UgwumlAhuns6xTFkUL54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVPiFsxGcaXaLbTEF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx9g-Oi2xM2AHhvCD14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzV4_9BChdph2ouu_x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx4dLqwcx718oYQwWh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxBNcEcgSGfkzDIGXt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
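The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table. A minimal validation sketch before ingesting such a response — note the allowed label sets below are inferred only from the values visible on this page, so the real codebook may be larger:

```python
import json

# Allowed labels per coding dimension. Assumption: inferred from the codes
# visible on this page; the full codebook may include labels not shown here.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def validate(raw: str) -> list[tuple[str, str, str]]:
    """Parse a raw LLM response and return (comment_id, dimension, bad_value)
    for every label outside the codebook; an empty list means all codes are valid."""
    records = json.loads(raw)  # raises ValueError if the model returned non-JSON
    errors = []
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors
```

Running this over the array above yields an empty list; a hallucinated label (e.g. `"responsibility": "robot"`) would be flagged together with its comment ID so the record can be re-coded.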