Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.
- "Heres how ai will end humanity. Cant have a baby with a robot. No babies, no fut…" (ytc_UgzPBJ0Dt…)
- "Nothing would be impacted negatively in any significant way if we just slowed do…" (ytc_UgwpsjOzB…)
- "honestly i've found that the AI performs best when I treat it like talking to an…" (ytc_UgwySjAC2…)
- "I’m so sorry this happened I think ai is really is “dangerous” as what scientist…" (ytc_Ugzm549Dj…)
- "Exactly, so all the rest of it is moot. The nonsense about AI colonialism was …" (ytr_UgyvmC7mN…)
- "You've got to admit, as a group, white people have been the biggest racist assho…" (ytc_Ugwc7ROpa…)
- "1:15 BMO THERE YOU ARE and 3:28 that robot from overwatch( I don't play or watch…" (ytc_UgivGeenb…)
- "So wrong. Morality and ethics should come first. Once you've unleashed strong AI…" (ytr_UggJXPMrG…)
Comment

AGI was predicted to be here in 2025. Now it is being predicted to be here at 2027. These predictions are made by companies that earn money by predicting their "glorious" future. There is still a chance that real AGI will actually never happen. There will be alot of "fake" AGI that will steal stuff from humans for other humans. That is for sure. But a world dominating super AI that will secretly control everything? Not so sure...

youtube · AI Governance · 2025-09-05T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxOapG5hHq1Y038BQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgytlBwHVkcVU7g__lB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw0JtuCBB7hypN1NV94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLk1GmVSWgWewv0-N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzw-SnDYw55J53RU7N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyzSXfx31PY3SsjeBh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwtRaQATfImxZFPgJd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyCCwwriCBrfrtbj094AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzks8X_Eu1uvlezuVt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzFs39hrfbd8Hnq9u94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
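A raw batch response like the one above can be turned into a lookup table keyed by comment ID, which is what "look up by comment ID" needs. A minimal sketch in Python, assuming the response is a JSON array with the four dimension fields shown; the allowed-value sets below are inferred from the values visible in this page and may be incomplete:

```python
import json

# Dimension vocabularies observed in responses on this page; treat these as an
# assumed (possibly incomplete) schema, not the tool's definitive codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}, rejecting
    any record whose dimension value falls outside the assumed vocabulary."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-record response, shaped like the batch above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]')
codes = parse_coding_response(raw)
print(codes["ytc_example"]["policy"])  # liability
```

Keying by ID also makes it easy to join the codes back onto the original comments and to spot IDs the model dropped or duplicated in its response.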