Raw LLM Responses

Inspect the exact model output for each coded comment. The raw response below shows the full batch in which this comment was coded; the entry matching this comment's ID carries the values shown under Coding Result.

Comment
As humans we are not built for the exponential level of progress of the AGI. Currently our brains are not built for that level of comprehension and intelligence. I think most likely we will have to either stop the progress altogether, or keep the progress but became the AGI itself through bioengineering. But my question is: Can we build an AGI that is locked from killing us even if it wants to? I'm thinking no access to physical world - no robots, no autonomous control, only locked in a system where you can ask it questions through text/speech like it is now with LLMs? Or do you think it's inevitable that AGI will find a way to get outside to our physical world on it's own, no matter how locked or secured the systems will be?
Source: YouTube · Video: AI Moral Status · Posted: 2025-12-11T12:1… · Likes: 1
Coding Result
Dimension        Value
---------        -----
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
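
The four coded dimensions plus the timestamp map naturally onto a small record type. A minimal sketch in Python, with field names taken from the table above; the class name is illustrative, and the example values are drawn from the batch shown below:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment; field names mirror the table above."""
    responsibility: str  # e.g. "developer", "government", "company", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "mixed"
    policy: str          # e.g. "liability", "regulate", "industry_self", "none"
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference"
    coded_at: str        # ISO-8601 timestamp, e.g. "2026-04-27T06:24:53.388235"
```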
Raw LLM Response
[ {"id":"ytc_UgwoPEbqR72Y76GfReN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyzxDoTG4lCWfuSRQR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyIeD0e8JM5xYKzHb94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx2-63QYAoulf2hXUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwxIy3NCmAXrkzxbQl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw509kiPU9zlTJEqap4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgwwRUlONUOZxqeik-t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxVXB9GCNlWuILPK0F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugxhrg69guZkSLQ92KV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzxMKf7A1zNZtEXAvJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"} ]