Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Well maybe you all people need to accept, that we need good ai in future. If mor…" (ytc_UgyqCQMvK…)
- "You know this is a lie because Humans can do one thing AI can't and that's havin…" (ytc_UgywIfnJy…)
- "AI doesn't think! Really?! It just makes connections based on the information yo…" (ytc_Ugyc25_9C…)
- "a photographer is still a photographer, not a visual artist... well not in my bo…" (ytc_UgxRPndZz…)
- "In the 1970's, my father authored several books on artificial intelligence and c…" (rdc_ctian0n)
- "All technologies were developed to solve problems and or create entertainment ..…" (ytc_UgxBCTWgI…)
- "As a primarily digital artist(I do physical art sometimes) It's absolutely absur…" (ytc_Ugw67dElr…)
- "AI is continuing to advance, extremely rapidly. There's nothing you can do eithe…" (ytr_UgzP-3f_e…)
Comment

> Humans have all sorts of fear driven objectives and they think a machine will eventually behave like humans and will have existential fears and decide to take over the world . Yes if we deliberately program it this way, then... maybe. However our LLM s do not have self generated objectives, they lack long term memory and continuous learning they don't understand the real world and so on. So we are still far from AGI

Source: youtube · AI Governance · 2025-12-04T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id": "ytc_UgzkaHi4i4WbTfYN07J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxnJy42h39uZwlo5jB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxXqIvlEBfeXrykYBR4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxwdT4i8HUSmgV_svd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzm0F6DA5rfGzctsDp4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxlGobmdj7hFgQcBDF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgzC_ctZBKXWJZIxhW54AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzh8pqiexJcFvZ72FN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxXDZ0Mb03n7h9LcJp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz5sO8zIEoVkp8POOx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
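The lookup-by-comment-ID described above can be sketched as follows. This is a minimal example, assuming raw LLM responses are stored as JSON arrays in the shape shown (the function name `index_codings` and the key-validation rule are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, in the
# shape shown above (two entries copied from the example output).
raw_response = """
[
  {"id": "ytc_UgzkaHi4i4WbTfYN07J4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzm0F6DA5rfGzctsDp4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions plus the comment ID.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    skipping entries that are missing any expected key."""
    codings = {}
    for entry in json.loads(raw):
        if isinstance(entry, dict) and EXPECTED_KEYS <= entry.keys():
            codings[entry["id"]] = entry
    return codings

codings = index_codings(raw_response)
print(codings["ytc_Ugzm0F6DA5rfGzctsDp4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the inspect-by-comment-ID lookup a single dictionary access, and the key check guards against the model occasionally emitting incomplete objects.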