Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The way I see it is that ChatGPT is essentially just reflecting back information of the prompt that it was given. It’s not thinking for itself instead it’s molding it’s answers based on the task it was given. Is it dangerous? Not right now but in the future it might be.
youtube AI Moral Status 2023-06-01T01:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxojMEd3FuG_OTu0Dh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzgUKqVFesm_tpIJa94AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyBrntwedmcm3FtbkF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgySoFXdYNCHECEQwLZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx0p5skMenA09IaLcN4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw6abfu_UijYom0hk54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgynJgfqUITQdkuupJN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzPBBTUxgcw0eodgGd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyn-OOgqfqxWMTTASF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJpycfe7K9UGIK_Qp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
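The raw response is a JSON array of coding records keyed by comment `id`, one record per dimension set. A minimal Python sketch of how such output could be parsed and indexed for lookup (the field names come from the records above; the validation logic and function name are assumptions for illustration, not part of the original pipeline):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgxojMEd3FuG_OTu0Dh4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzPBBTUxgcw0eodgGd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]'''

# The four coding dimensions plus the comment id, as seen in the records.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse the model output and index codings by comment id,
    rejecting any record that lacks an expected field."""
    records = json.loads(raw_json)
    indexed = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing {missing}")
        indexed[rec["id"]] = rec
    return indexed

codings = index_codings(raw)
print(codings["ytc_UgzPBBTUxgcw0eodgGd4AaABAg"]["reasoning"])  # consequentialist
```

Indexing by `id` makes it straightforward to join each coding back to its comment, as the coding-result table above does for the quoted comment.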