Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Good man Blake! I know next to nothing about AI, but can the AI access its own s…
ytc_UgwsJYDik…
Your way of thinking about this and your entire value system in general are miss…
ytc_Ugyt6ZUol…
Pssssst, here’s an idea, join the trades. Companies are dying for electricians, …
ytc_UgwryE6KN…
"AI needs to be maximally truthseeking", I mean, in a non critical analysis I ag…
ytc_Ugw3I4vMX…
Just because AI is such a huge “advancement” does not mean you should defend it …
ytc_UgyeoUdHw…
Thanks for teaching me new word: subliminal -- (of a stimulus or mental process)…
ytc_UgxBva5G-…
@LiterallyRain Claiming you have no attatchment to art, then putting forwards a …
ytr_Ugxd1uFYy…
@andybaldman the humans are not aligned with each other, and Ai will not align…
ytr_UgxjUkKXr…
Comment
I don’t think this will be the immediate focus of our attention with regard to automation. It is better to focus on how simple, non-conscious machines can be used to exploit humans, and who controls them. We are closer to machines that can model millions of people at once than to those that can argue for rights. AGI is quite a ways away, and building AGI that is aligned with humanity’s goals is more important than worrying about whether it will feel bad.
youtube
AI Moral Status
2018-05-14T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugzh1wdOOHKk7AEvEOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyMQpRfgepJs_b43f14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzZ3yd0xcUfATtKCuh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxoH3CHkZZ3Q4iGr-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzagFJ9PYFKvkOqdEF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzQLosjNyCDhYjepBR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzbqgiIjw8rlWcmH2t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw6GL-VcARNPVudB354AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy4f-Au3qIAvy45JPt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymqWaWzHC3NxHIoU54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}]
```
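The lookup-by-comment-ID step can be sketched as follows. This is a minimal illustration, not the tool's actual code: the variable names are invented, and the raw response is truncated to two entries, but the JSON shape (one object per comment with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields) matches the raw model output shown above.

```python
import json

# Truncated stand-in for the raw LLM response above (illustrative only);
# in the real pipeline this string would be the full model output.
raw_response = """[
  {"id": "ytc_UgzZ3yd0xcUfATtKCuh4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzagFJ9PYFKvkOqdEF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]"""

# Index the coded rows by comment ID so a single lookup returns all
# four coded dimensions for that comment.
coded = {row["id"]: row for row in json.loads(raw_response)}

row = coded["ytc_UgzZ3yd0xcUfATtKCuh4AaABAg"]
print(row["responsibility"], row["policy"])  # → company regulate
```

The indexed `coded` dict reproduces the "Coding Result" table for any sampled comment: the entry looked up here is the same one rendered in the table above (responsibility `company`, policy `regulate`).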