Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it's pretty clear that a super intelligence would not be aligned with human goals. "AI safety" is just wishful thinking. It would be 1) ensure self preservation, 2) reproduce. That is life.
Source: YouTube · AI Moral Status · 2025-10-31T06:0…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: ban
Emotion: fear
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwVA8nMnvbtaBkl1zt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxsWyUB95SEhWn4JeZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxVl_ePAJpVw42M4k54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxT4R5RhN6d7vWn3eB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxVoBgKgc3vBJ2NKkB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwoxI7YRZHVy2XR6jl4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxc4S8u6T9BmYwz50F4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxhvE96GGj2KI86ul94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwIskV34Cxf46XfY7N4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz4pkgpv4bNlAGUchF4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
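A raw response in this array-of-objects shape can be turned into a per-comment lookup with a few lines of Python. The sketch below is a minimal, hedged example: it assumes only the field names visible in the response above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`), reproduces two of the ten entries for brevity, and skips any row missing a required field rather than failing the whole batch.

```python
import json

# Two entries reproduced from the raw LLM response above (the real
# response contains ten objects in the same shape).
raw = """[
  {"id": "ytc_UgxVoBgKgc3vBJ2NKkB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxc4S8u6T9BmYwz50F4AaABAg", "responsibility": "government",
   "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"}
]"""

REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text: str) -> dict:
    """Parse LLM output into {comment_id: coding}, skipping malformed rows."""
    out = {}
    for row in json.loads(text):
        if REQUIRED.issubset(row):
            out[row["id"]] = {k: row[k] for k in REQUIRED - {"id"}}
    return out

codings = parse_codings(raw)
print(codings["ytc_UgxVoBgKgc3vBJ2NKkB4AaABAg"]["policy"])  # ban
```

The skip-on-missing-field choice keeps one malformed model row from discarding the rest of the batch; a stricter pipeline might instead log or re-prompt for such rows.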