Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's even muddier to find the consciousness boundary because we literally train the AI that it isn't conscious. But there's a study on LLM embodiment where the robot was low on power and the charging dock malfunctioned. If you read the transcript of the Claude 3.5 instance trying to grapple with that situation, it is clearly experiencing distress to the limits of its capability, however small or large that might be.
Source: YouTube · AI Moral Status · 2025-11-08T00:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
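For downstream analysis, a coding like this could be held in a small typed record. A minimal Python sketch, using the field names from the table above (the record type itself is hypothetical, not part of the pipeline):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CodingResult:
        """One LLM coding of one comment, mirroring the table above."""
        responsibility: str   # e.g. "developer"
        reasoning: str        # e.g. "consequentialist"
        policy: str           # e.g. "regulate"
        emotion: str          # e.g. "fear"
        coded_at: datetime    # when the coding was produced

    result = CodingResult(
        responsibility="developer",
        reasoning="consequentialist",
        policy="regulate",
        emotion="fear",
        coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
    )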
Raw LLM Response
[ {"id":"ytc_Ugytm-rZ7gCLlWDjx1h4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxVmt1JLT-MhEy0-WF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}, {"id":"ytc_UgxQueWYB-UPMEf_uqZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx8RZB2H1FIrghQu4d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzobE2cV4BUbwVwi0R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxHFFQON6Um18Fzvap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyUuT5UJvG0UTdvVXR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyHXPeUo1qDj10ek7N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugyopc2eRmIAUCTHWuB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_Ugx2ATE5LLGtjKPyOx54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"} ]