Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What is the AI equivalent of a vice I wonder. Can they get mental health issues …" (ytc_Ugwe9SEfM…)
- "Same why do they make it hard to not chocse no ai they put it any freaking thing…" (ytc_UgxmfwNjU…)
- "AI may suggest treatment based on "guidelines" but will AI be able to negotiate …" (ytc_Ugzls5T6g…)
- "I was going to mention that interestingly enough, most of these speakers are old…" (ytr_UgzoJtHLD…)
- "I dont agree with this demo and conclussion, DAN act/response accordingly, becau…" (ytc_Ugw1P5t9c…)
- "Easy fix: Assign a priority level to each self-driving car so they can solve for…" (ytc_UgzY6lWFx…)
- "@ 3:07 not the robot clock and its robot eye turning on right as soon as he says…" (ytc_UgyT15-Q0…)
- "Legislators will not use their tools to aid in writing laws for AI. They will ho…" (ytc_UgycuOf9i…)
Comment
Just had a conversation with Manus re. AI alignment. In the video you state an 80% probability of an existential threat from AI if alignment isn't guaranteed. In _any scenario_ Manus recommends an acceptable probability of 0%. Zero. Or as darn close to zero as possible. Also if that cannot be acheived, Manus agrees that the complete shutdown of all AI investigation programmes would be the _ONLY_ acceptable option. What are we doing? Why leave these developments in the hands of private companies?
youtube
AI Moral Status
2025-06-22T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyEl767gt8vZc4QJSF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxUPFYSvDAQ4q6N9WZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwxXQ9K3bjdGSSOeZh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZyqCwq94-dU8hEfZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzyluzG8GC_OdtyPVx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfwFw4b9rhFpZcRPZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzdBqMB8M_2qjQS0st4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdlkOpjUBZWc1YpyN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz6ll_V0lllgYc-UTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxjOGDm3US0xWtRWTF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
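A response like the one above can be turned back into per-comment codes with a small parser. The sketch below is a minimal example, assuming the four dimension fields shown (`responsibility`, `reasoning`, `policy`, `emotion`); the allowed-value sets are inferred from the values visible in this response and may be incomplete relative to the actual codebook.

```python
import json

# Dimension vocabularies inferred from the response above; the real
# codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation", "mixed", "unclear"},
}

def parse_coding_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a mapping from comment ID to its dimension codes."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        # Surface out-of-vocabulary values instead of silently keeping them,
        # so malformed model output is caught before analysis.
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded
```

Raising on unknown values, rather than dropping the record, keeps the coded dataset aligned one-to-one with the comments that were sent to the model.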