Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think morals are very different from the point of view of a creator of a simulation. First of all, think of it, what if WE created the simulation for ourselves to actually experience different kind of lifes, including bad ones. Since it's "just a game" morals are irrelevant. His logic is based solely of us being trapped by some other entity. Also the fact that we can create simulations does not prove at all that we live in a simulation. Us having a common belief is not a proof either. Faulty logic. I also don't agree with AI being actually intelligent. I still just see it as algorithmic learning, the only safety issue is to give executive power to something that behaves like if it was sentient but in reality is not sentient and cannot make predictable decisions. I might be wrong, but I haven't seen an AI so far that was actually intelligent. ChatGPT is definitely not it. It's a very interesting conversation indeed, but seems more like a fantasy to me. I'm open to be proven wrong
Source: youtube · Topic: AI Governance · Posted: 2025-09-09T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxBx_AOT7n0JHbMZc14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxq5fBpPrA9zIe2Y-V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyF0b4ngsBk8KJlBtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyE1ha1LUCSFazqX714AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxSIl91agQNiduXObx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFU1C_anOly4Iqac54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx2reCYoruZ_vg0CMZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwQqHr__6EDW-icyzh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzig7Q88UfHCg4x5Lt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy3SY6eFoL9CVlQG094AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
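A response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below validates each entry against a set of allowed values per dimension and indexes the valid rows by comment ID. The `SCHEMA` sets are an assumption inferred from the values visible in this response and the coding-result table; the actual codebook may allow other labels.

```python
import json

# Allowed values per coding dimension. Hypothetical: inferred from the
# values seen in this page's table and raw response, not the real codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "developer", "user"},
    "reasoning": {"none", "mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"none", "mixed", "indifference", "approval", "fear", "outrage"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM response and return {comment_id: coding} for valid rows.

    Rows missing an id, missing a dimension, or using a value outside
    SCHEMA are silently dropped.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        bad = [dim for dim, allowed in SCHEMA.items() if row.get(dim) not in allowed]
        if cid and not bad:
            coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded
```

For example, the second entry in the response above would be accepted and retrievable as `validate_codings(raw)["ytc_Ugxq5fBpPrA9zIe2Y-V4AaABAg"]`, matching the coding-result table shown for this comment.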