Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean this is already known. AI hallucinates and "lies" because it's just auto complete guessing the next most likely word based on the human text it was trained on.
youtube AI Moral Status 2024-09-13T15:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyPEKDSSC6ImczYpCx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzavnnNa3PGjsP-SbB4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyrnPUNhqYxH0zK5aV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzn5uJgZX1IseAxcCV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyeP17guQYWwSer7ud4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyCGV1-XJ-Ja8XHvjl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwv2zJcf-yyZ_qKhy14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyI3DHynnsOpJequv54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwD85mpY6zOqlx_mXJ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw3qhLgqFdaWkfhwTJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
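The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a batch response could be parsed and a single comment's coding looked up, assuming the response is available as a string (the field names are taken from the response above; the lookup helper is hypothetical, not part of this tool):

```python
import json

# Two entries copied from the raw LLM response shown above.
raw_response = '''[
  {"id": "ytc_UgyrnPUNhqYxH0zK5aV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzn5uJgZX1IseAxcCV4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]'''

codings = json.loads(raw_response)

# Index the batch by comment id so any one coding can be retrieved directly.
by_id = {entry["id"]: entry for entry in codings}

coding = by_id["ytc_UgyrnPUNhqYxH0zK5aV4AaABAg"]
print(coding["emotion"])  # resignation
```

Indexing by id rather than scanning the list each time matters when one batch response covers many comments and each dimension (responsibility, reasoning, policy, emotion) is displayed separately, as in the table above.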