Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxJMpAwJ…: "People don't understand how language works, there's no creative thought, it's a …"
- rdc_ohzff3a: ">I don't expect it's without Silicon Valley's overlords approving. I don't t…"
- rdc_ohzinga: "AGI (the goal of literally every multi billion dollar tech/AI company) doesnt re…"
- ytc_Ugws1rnD4…: "You have such a distinct style that ai would have a hard time even getting close…"
- ytc_UgxoZkr3j…: "A day will come when AI takes over the entire world, something created by humans…"
- ytc_UgxSDoSps…: "Unlike physical Product or entity this ai or IT industry won't be able to sustai…"
- ytc_UgwZU9lw8…: "For me, I wouldn't say that it was premeditated homicide. It was an accident aft…"
- ytc_UgxvULEN5…: "AI has been controlling all of our existence since our inception. The big decept…"
Comment
Do you know the AI safety YouTuber Robert Miles? Heavily recommend him. Saw him once or twice on Computerphile, but he also has his own channel.
He also talks specifically about AGI, artificial general intelligence, not just any AI, and is very good at building intuition for how it behaves under specific parameters and why.
Things like alignment (how do we make it want what we want, even if we don't know what that is?), instrumental goals (things it will most likely always try to get, because they're helpful for any terminal goal) and so on are really well explained.
youtube
AI Moral Status
2025-11-11T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxdHrYPTPNOhO891Tt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyAXf3hrblCWY6Ugph4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw9CnVlptMpS3H921N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzr8bee1-IJvveIiIV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBrSxO5uV-z_MhcFJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0Tt4XKK1CETZR8ER4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyn8OoCn_Tv2tkN4wp4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9h-a6IYSf_NwImCF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxD1JRMbp4vkOFvQFd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugze6jqOoR7gGknhckp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
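
Responses in this shape can be checked mechanically before the codings are stored. Below is a minimal validation sketch in Python, assuming the four dimensions and the value vocabularies are exactly those seen in the table and raw response above; the real codebook may allow additional values, so `ALLOWED` is an assumption.

```python
import json

# Value vocabularies inferred from the samples shown above (assumption:
# the actual codebook may define more categories per dimension).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "company", "distributed", "developer"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or out-of-vocabulary rows."""
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id"):
            raise ValueError(f"row missing comment id: {row}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_UgxdHrYPTPNOhO891Tt4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
rows = validate_codings(raw)
print(len(rows))  # 1
```

A check like this catches the common failure mode where the model invents a category outside the codebook, so bad rows fail loudly instead of silently entering the coded dataset.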