Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The interviewer doesn't understand what Penrose is saying, but imho Penrose is still wrong. If you need to understand why something (a computed result) is true as a requirement for consciousness, then neither an AI nor the human brain would have it. Neither an artificial neural network nor the human brain works by actually proving things. It’s mostly about making good guesses based on experience and patterns.
YouTube · AI Moral Status · 2025-12-30T06:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwD9zsa3n6rDt-ABJN4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwZF0MEhRPEh-A3dI94AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwNQXZlc2IsfWy8edJ4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",    "emotion": "approval"},
  {"id": "ytc_Ugw-9Sn2vTiM7xFKaV54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzbZnATuGDy0IL9xm94AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgyxbxceS2Y3550bzQV4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugxs-PRe1M6btkZflid4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzOjMebMmU7xwEWEdd4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "ban",     "emotion": "outrage"},
  {"id": "ytc_UgxBEocINNMRhL89RSR4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyet_uBaDOhYC56FKl4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "outrage"}
]
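The Coding Result table above is the single entry in this raw batch response whose `id` matches the displayed comment. A minimal sketch of that lookup, assuming the raw response is a JSON array shaped like the one shown (the `code_for` helper name is illustrative, not part of the tool):

```python
import json

def code_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id from a raw batch response."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(f"no code for comment {comment_id!r}")

# A two-entry excerpt of the raw response shown above.
raw = '''[
  {"id": "ytc_UgwD9zsa3n6rDt-ABJN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwZF0MEhRPEh-A3dI94AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]'''

print(code_for(raw, "ytc_UgwZF0MEhRPEh-A3dI94AaABAg"))
# {'responsibility': 'none', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'indifference'}
```

Running this against the full raw response reproduces the four dimension values in the table for the comment shown.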