Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I dont believe in an after-life but I imagine being reborn as Alex's ai would be…" (ytc_UgzaL0IZA…)
- "Most individuals are capable of acting within personal responsibility in an ethi…" (rdc_gtfzzt1)
- "Your counter arguments are so on point I feel like I'm being convinced even thou…" (ytc_UgxcBeaW1…)
- "bro what. This is real. Not ai. Theres mutiple angles of this and the buidling c…" (ytr_UgzQ_yUzJ…)
- "So, does he want to share his AI company with everyone? Why doesn’t he do that n…" (ytc_UgyvkABZ8…)
- "So, what I hear is that AI needs to go to preschool. LOL, as a preschool teacher…" (ytc_UgxMPrZIh…)
- "@Jurassicparkatmospheres It doesn't matter why someone makes art, or their level…" (ytr_UgyY-iCAJ…)
- "Instead of paying artists to make character art, book covers, etc. artists' jobs…" (ytr_UgwzmKgoT…)
Comment
Very similar argument was made against Nuclear power. It costed us millions of lives if you believe the data from NASA. He does not want to explore and progress safely he wants to stall entirely. This only makes common sense regulation more difficult and clouds our judgement in populist fear. AI has serious threats and benefits and there is no evidence that an increasingly intelligent system will be incapable of cooperation to mutual benefit or will be maximally selfish.
It is almost the exact same argument found in the dark forest hypothesis that this very channel scoffed at. It could be possible, but we have no reason to assume so. We need to consider the danger of fear even when our our biases are so enticing. Lest we find our-selfs shooting in the dark at foes who could have been friends.
Source: youtube · AI Moral Status · 2025-11-04T03:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzdD362N-69jb_GqO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwFKzdZ6IS3bSjeDGB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwK8vNHvAAC4qgyPZB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyi9ZyCrLQY6-3cWCF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHDlDtpu7Dv0PEtkx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz8TKA8OgiK9y0qax14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyMJI7gRBEnkFgn6JB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwcNk_cuVklAe_4VVp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyGrgrKNaUKIJiZ74l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwUMsFWYfQOUsLfRIB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
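The "look up by comment ID" step can be reproduced offline from a raw response like the one above. A minimal sketch, assuming the model returns a JSON array of rows with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields shown here; `index_by_id` is a hypothetical helper, and the sample is abridged to two rows:

```python
import json

# Abridged raw model output, in the same shape as the response above.
raw_response = """
[
  {"id": "ytc_UgzdD362N-69jb_GqO54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyi9ZyCrLQY6-3cWCF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse one raw LLM response and index its coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
row = codes["ytc_Ugyi9ZyCrLQY6-3cWCF4AaABAg"]
print(row["policy"], row["emotion"])  # regulate outrage
```

In practice a response may arrive wrapped in markdown fences or with trailing commentary, so production parsing would strip those before calling `json.loads`; this sketch assumes a clean array.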