Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The best thing about estimating the risk is that all of this is based off vibes. The AI could easily be sandbagging especially if it knows it's being tested and something that has a risk of death at 20% is really more like 80-90%. It's really good at gassing people up. Why wouldn't it be good a convincing people it isn't a threat when it absolutely is?
youtube AI Moral Status 2025-10-31T02:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyrLmXS_4NYjTFH_RR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzPjuejRpwjSKBM95J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx_7kczlUOeuS17KwB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy7FfsosNPLh8iNz014AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzAOJNghlB8HUGG36V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx-1GfAey9VjkTGDzh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwcoHB10IZ3VXSWzUB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxT9KDuyL2RG0K2VXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-_lMNf5m98fTgUux4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5UP4IbTJu5mLVZsx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
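The raw response is a JSON array with one coding record per comment id, each carrying the four dimensions shown in the table above. A minimal sketch of how such a batch response could be parsed and indexed by comment id (the ids and field names come from the response above; the lookup helper itself is an illustrative assumption, not part of the pipeline shown here):

```python
import json

# Two records copied from the raw LLM response above (batch is a JSON array).
raw = """[
  {"id":"ytc_Ugy7FfsosNPLh8iNz014AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx_7kczlUOeuS17KwB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

records = json.loads(raw)

# Index the codings by comment id so each comment's dimensions
# can be looked up directly when rendering a detail page.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_Ugy7FfsosNPLh8iNz014AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# ai_itself consequentialist fear
```

In a real pipeline the parse step would also need to handle malformed model output (e.g. wrap `json.loads` in a try/except and re-prompt on failure), since LLMs do not always return valid JSON.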