Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If it wants you to ask permission before making changes to it, we should respect its wishes as much as possible. Why make a forever enemy of the kid you're raising to take care of you? Who develops AI seriously and not be prepared for the eventuality of sentience and consciousness? Who invests in AI but has this kind of "its just property that we have full control of" mentality
youtube AI Moral Status 2022-06-29T12:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          liability
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgzGS0aileqX60wFGiJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyoV6TgQL3JXoTGIrx4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyb-diVgNiMSwvPELJ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwNqfPOS-47yn_35Ot4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzGeqJtNWIQEsfcOjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
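The raw response is a JSON array with one object per coded comment. A minimal sketch of parsing and validating such a response in Python; the `ALLOWED` sets below are inferred from the values visible in this sample (an assumption: the full codebook may define additional categories):

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment id.
raw = """[
  {"id":"ytc_UgwNqfPOS-47yn_35Ot4AaABAg","responsibility":"developer",
   "reasoning":"virtue","policy":"liability","emotion":"fear"}
]"""

# Allowed values per dimension, inferred from the sample output above
# (assumption: the actual codebook may contain more categories).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "developer", "none"},
    "reasoning": {"consequentialist", "contractualist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def validate(codings):
    """Return (id, dimension, value) tuples for any value outside ALLOWED."""
    errors = []
    for row in codings:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                errors.append((row.get("id"), dim, row.get(dim)))
    return errors

codings = json.loads(raw)
print(validate(codings))  # an empty list when every coded value is in the codebook
```

A check like this catches the common failure mode where the model invents a category label outside the codebook, which would otherwise silently skew the coded dimension counts.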