Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
however, we do not know how conciousness is related to intelligence. Maybe they have to come together, as in, you can only get so smart without being concious. Also, our pain and suffering are mainly so we can survive. Thats the great human goal, survive and reproduce. In allignment research, the goal is basically to give AI another goal, as in, "please do not kill humanity". It might require conciousness to instill a goal into AI.
youtube AI Moral Status 2023-07-31T09:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw1NbVtmNu__QC1-nZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxid3MY8OsyYADfa0p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy7jnx45Wz83t80hmB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyHH1QdtB2bPCPlj314AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxZgwpIAu8Ltb1HErJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyEAy4yi_NdSQTNWUR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzwNXsjGgk2j5ASFLV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzWV5x9rXojJoYo-7x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwoosOpV_vSyMYtHoJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgySwZlJVHcoJePyZgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
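A batch response like the one above is a JSON array of per-comment records, so it can be parsed and indexed by comment id in a few lines of Python. This is only a sketch: `index_codings` is a hypothetical helper, not part of any pipeline shown here, though the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the response above.

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM coding response (JSON array of records) and
    index the records by comment id for quick lookup."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# One-record excerpt of the raw response above, for illustration.
raw = ('[{"id":"ytc_Ugy7jnx45Wz83t80hmB4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')

codings = index_codings(raw)
coding = codings["ytc_Ugy7jnx45Wz83t80hmB4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # prints: developer fear
```

In a real pipeline the lookup would typically be followed by validation (e.g. checking that each dimension's value is from the allowed code set, with `unclear` as the fallback), since raw LLM output is not guaranteed to be well-formed.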