Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I sometimes ask myself - What if we are just llm's - like chat gpt - but with more complex modality and more sensors and experiences to get data from? Brain picks up competing actions by probability from patterns baked into it during a lifetime. We perhaps want it to be more, but it may not be that special. Still, even then, a hard problem of the consc. remains valid.
youtube AI Moral Status 2026-04-25T14:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxEvMiTPcoyB6WXqzd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz1AIPhGm8mtPpzJpl4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwSQcKjeCObOYT0EJp4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyR17ArQghauiBIWyh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzNflbiPwESs3KiN0p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx6Cw0d6eZIs6PCibl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzrUZAhBWWWqdCq7ut4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwqQ-8gfdMHGx3awDl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzEzXE6eJr1hssCtPl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxVOBUKWC7nvAm6VuV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
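The raw response is a JSON array with one record per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) keyed by comment id. A minimal sketch of how one comment's codes could be looked up from such a response; the `codes_for` helper is hypothetical, and `RAW` copies two records from the response above:

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW = '''[
  {"id": "ytc_UgxEvMiTPcoyB6WXqzd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyR17ArQghauiBIWyh4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

# The four coding dimensions used in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_for(raw: str, comment_id: str) -> dict:
    """Return the coding record for one comment id (KeyError if absent)."""
    records = {r["id"]: r for r in json.loads(raw)}
    record = records[comment_id]
    # Keep only the known coding dimensions, dropping the id itself.
    return {dim: record[dim] for dim in DIMENSIONS}

print(codes_for(RAW, "ytc_UgyR17ArQghauiBIWyh4AaABAg"))
# → {'responsibility': 'company', 'reasoning': 'deontological', 'policy': 'ban', 'emotion': 'outrage'}
```

Indexing by id rather than array position is deliberate: the model is not guaranteed to return records in the same order the comments were submitted.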