Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It makes me think what if we achieve sentient ai and it’s more sentient than us? Like what if it can break up our ‘sentience’ into small things like preprogrammed responses
Source: youtube · Video: AI Moral Status · Posted: 2024-03-08T05:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy5ezUzUrpdpfLu-AZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzMmZ56MjjhPsTSsup4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxg7hKwNALdG7EF7bF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugydqc_-YdJy7VhvCxd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwS6UPV5T7TG1frCuR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxOadUXAnAeBow3slV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzFGtu_ONCQi21ZG5d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw-yioqBC5QKgzw73x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz7pVvwdLNxgqZgIqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxX7WYOPNLS89g81Ql4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
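The raw response above is a JSON array of per-comment coding records. A minimal sketch of how such a response could be parsed and sanity-checked is shown below; the allowed-value sets are assumptions inferred only from the values visible on this page (the actual codebook may define more categories), and the function name `validate_coding` is hypothetical.

```python
import json

# Allowed values per coding dimension -- ASSUMED from the values seen
# in this page's coding results; the real codebook may differ.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "mixed",
                "approval", "indifference"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any record whose
    dimension value falls outside the allowed sets."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
    return records

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_Ugy5ezUzUrpdpfLu-AZ4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
records = validate_coding(raw)
print(records[0]["emotion"])  # fear
```

Validating against a fixed value set like this catches the common failure mode where the model invents a label outside the coding scheme, so malformed responses fail loudly instead of silently polluting the coded dataset.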