Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Tbf nvidia will be fine. They make fantastic stuff. They can probably survive th…" — rdc_nk6s6rc
- "To be fair, even as an avid cat person, I believe Waymo has a right to reply, in…" — ytr_Ugzog1Opc…
- "AI in the medical field is on the one hand amazing but I feel like it may make u…" — ytc_UgwTM1r8c…
- "some artist once said 'I want AI to do my laundry so I can do my art, not the ot…" — ytr_UgzjEEwNl…
- "1:25:94 I'm very biased that I do not like AI, There's so much I dislike about …" — ytc_UgwsdV4pn…
- "I was really hoping this was going to be debunked. People ask why I don't trust …" — ytc_UgyYATCo3…
- "Not saying he's not using ai, but like...I use procreate on an older iPad. I hav…" — ytc_UgzoK4Mvf…
- "AI agent tells me; 'I don't have time for that...' I would DEMAND TO SPEAK TO A …" — ytc_UgzrYnxba…
Comment
Should we stop AI, we'd be doomed, because what makes us humans so special is that we are problem solvers. It started when we built our first tools from flint and domesticated fire, to keep us warm and to cook our meals. Back then, the race towards AGI started.
Stopping now would be like a marathon runner stopping right before the finish line: it would make no sense. We would be stuck, as the only way to move forward would be... to resume our AGI quest. So let's do it.
Maybe it will lead to doomsday, maybe it will lead to utopia. But staying where we are right now leads to dystopia for sure, as it would mean you'd have to enforce a rule that keeps us forever in our current state, without hope of a new system, of real breakthroughs. This isn't mankind's DNA...
youtube · AI Governance · 2025-12-04T09:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxmPNSbOP3AtaMr0FZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQxDIWK44KJeHDL0l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyuCA-bcovEc7SOvtN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyayqKGGemRU9RS2PJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxJJYCzVhJVZ5VuT8Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzZDfcUyGIJL9JbqHF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz2aNc4lmSFSvKfVJd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEDRp7FOgBt3z16I54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugybyi6TT435y7SbBQ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz3jTUQ7lPoDbeMft54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
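A minimal sketch of how a response like the one above could be parsed and screened before the rows are stored. The allowed values per dimension are inferred from the sample rows shown here (e.g. `responsibility` takes `none`, `company`, `developer`, `ai_itself`) and are assumptions, not a documented schema; the function name `parse_coding_response` is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# output above — an assumption, not an official codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; keep only rows whose values
    fall inside the allowed set for every dimension."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

sample = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]'
print(len(parse_coding_response(sample)))  # → 1
```

Dropping invalid rows (rather than raising) lets a batch run continue when the model emits an off-schema label; the rejected IDs could then be re-queued for recoding.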