Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "We're glad you're enjoying the reactions! If you're intrigued by AI and have mor…" (ytr_UgwjYs1nJ…)
- "He didn't say that. He said that's a probability if AI safety won't become a pri…" (ytr_Ugx4BqeuM…)
- "Thank you so much for this video. As an artist myself, it's really discouraging…" (ytc_UgwBl4E_Q…)
- "Look, if I use a calculator to solve a crazy hard math problem, I still did the …" (ytc_Ugz2t6hyx…)
- "Imagine this: these guys are AI figures having fun while educating us about the…" (ytc_UgzP8r2St…)
- "Semi autonomous weapons have been around since the earliest set and forget traps…" (ytc_UgxwxvqHv…)
- "Simply put, if we take the guardrails off of A.I., after reaching a certain leve…" (ytc_UgwQGXjt0…)
- "Smarter than us? So-called “smart” AI has shown clear limitations. It made two …" (ytc_UgwRkT9-8…)
Comment
The best solution is to establish an international global organization, that monitors the development of artificial intelligence and works to prevent its misuse — just as there are organizations that monitor and prevent the spread of nuclear weapons. Knowing that if artificial intelligence develops to a great extent, it will be more dangerous than nuclear weapons, because it will become like a nuclear weapon that thinks and makes decisions.
Thank you very much to the channel owner for spreading this awareness.
Source: youtube · AI Jobs · 2025-10-09T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwjt_95ONraqAi44CN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxu-di1k8t11ysE6Q94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyw2X64A92y4p11AJ94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyMcHi_8mufG-mLULl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwR6d88BR6dsCBkTWd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyE5fv-wtkau7GELNF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy5VxniIh0IzwcMh8R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxRUL1gKNLYLQo9UNp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy0BVknUnxO9GLTOyJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyNF48cz7upIoetWeJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
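A raw response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal validator, assuming the allowed label sets for each dimension are exactly the values visible in this sample (the real codebook may define more); the function name and vocabulary are illustrative, not part of the tool.

```python
import json

# Assumed label vocabulary, inferred only from the values seen in this sample.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"fear", "approval", "outrage", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse the model output and reject any out-of-vocabulary label."""
    records = json.loads(raw)  # raises JSONDecodeError on malformed output
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"fear"}]')
records = validate_codes(raw)
```

Failing loudly on an unknown label (rather than storing it) keeps the coded dataset consistent with the dimension table shown for each comment.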