Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I knew that AI would never ever replace human intelligence. Also, AI hallucination IS a fundamental problem. Meaning that it can't be fixed. It can never be zero, but it can be negligible or reduced to an insignificant degree and this is currently impossible.
youtube AI Jobs 2026-03-21T22:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwaolCSCrbAmzmXPy14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzE5vuj-DLOECBsDzV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugz_OZ-y_IxebN0dWOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugw3cdAjIfc7VJckYAJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwX7qjzo1jevbrvHc14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz1DARepnEMK3u2lPB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgytrV3NOnPLyOb7RQ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzgFuK7fSuKvYnKJAh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxRrFppSFBbTWXiKO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxlVaW17VPUUpRsX4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
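Because the raw response is a JSON array covering a whole batch, finding the coding for one comment means parsing the array and indexing by `id`. A minimal sketch in Python (assuming the response is valid JSON, as above; the `raw` string here is a shortened stand-in for the full batch):

```python
import json

# Stand-in for the raw LLM batch response: a JSON array of coded records,
# one object per comment, keyed by the comment's "id" field.
raw = """[
 {"id": "ytc_Ugz_OZ-y_IxebN0dWOl4AaABAg",
  "responsibility": "ai_itself",
  "reasoning": "consequentialist",
  "policy": "none",
  "emotion": "approval"},
 {"id": "ytc_UgwaolCSCrbAmzmXPy14AaABAg",
  "responsibility": "none",
  "reasoning": "consequentialist",
  "policy": "none",
  "emotion": "fear"}
]"""

# Parse once, then index by comment id for O(1) lookup.
records = json.loads(raw)
by_id = {record["id"]: record for record in records}

# Retrieve the coding for the comment shown in this section.
coded = by_id["ytc_Ugz_OZ-y_IxebN0dWOl4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # ai_itself approval
```

In a real pipeline the model may wrap the JSON in prose or code fences, so production code would extract the bracketed span (or retry) before calling `json.loads` rather than assuming a clean array.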