Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Regardless of how advanced AI is, it will never be sentient. No matter how convincingly it claims it has its own will, it lies. Computers cannot have their own will; they are fundamentally based on instructions, so even if you instruct one to convince others it's sentient, it isn't.
youtube AI Moral Status 2023-11-01T17:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzuFRkfY-K_NTAsA054AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyAiLDtxVGueYa1Wqp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyFDODU2brkuPY2pZp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzDzLQCHTxj5nNxKbV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy2a95WF-lAGSPsYwR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy62iRof2C4WI9MARx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyoPCTgr5kMDOvvqSF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyL09Zgz1N3b99FbCJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzAmQ6z2R7Ifk7Zb8t4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugwevrfmf7H5tBiUYLp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
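The coding result shown for this comment comes from looking up its record in the raw JSON array by id. A minimal sketch of that lookup in Python, using one record copied from the response above (variable names are illustrative, not part of the tool):

```python
import json

# One record from the raw LLM response; in practice `raw` would hold
# the full JSON array returned by the model.
raw = (
    '[{"id":"ytc_UgyoPCTgr5kMDOvvqSF4AaABAg",'
    '"responsibility":"none","reasoning":"deontological",'
    '"policy":"none","emotion":"indifference"}]'
)

records = json.loads(raw)           # parse the array of coded records
by_id = {r["id"]: r for r in records}  # index records by comment id

# Fetch the coded dimensions for a specific comment.
coding = by_id["ytc_UgyoPCTgr5kMDOvvqSF4AaABAg"]
print(coding["reasoning"])  # deontological
```

Indexing by id this way makes it cheap to cross-check each displayed coding result against the exact record the model emitted.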