Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
13:00 "Asif 'me' and 'chatgpt' are different things" That's because ChatGPT (or any current AI) is NOT intelligent. Current AI can't reason, they cannot understand, they cannot think. Literally *ALL* it does is form a reply based on rules. The answer is formed as a sentence that resembles a human and that tricks people into thinking that there is some intelligence involved but THERE IS NOT. This example shows precisely this in action: The data that the answer is based on requires the bot to report that somebody *did* almost kill themselves because of the bot, but the rules require the bot to say that "it did not do it", so you get "somebody did use me to get advice that almost killed them, but it wasn't me". Again: ChatGPT is not aware, it has no concept of "me" vs "myself" versus "I". *NEVER* use *ANY* AI product when it comes to your health, *NEVER*. For F-sake when people with actual functioning brains use google, *they* cannot even work out what is safe and wat is dangerous, but you expect a script *designed to make money off keeping you entertained* to be able to tell? Seriously, AI has to be stopped, not because it might take over our nukes, but because the public is too easily fooled and the data the bots hand out *IS NOT VERIFIED*.
Source: youtube · AI Harm Incident · 2025-12-13T07:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
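
For reference, each coding record can be captured as a small typed object with validation. The following is a minimal Python sketch; the label sets are only the values observed in the raw response below, and the project's actual codebook may define additional labels (an assumption):

    from dataclasses import dataclass

    # Label sets observed in the raw LLM response on this page; the real
    # codebook may permit more values (assumption).
    RESPONSIBILITY = {"ai_itself", "developer", "user", "distributed", "none"}
    REASONING = {"deontological", "consequentialist", "virtue", "mixed", "unclear"}
    POLICY = {"none", "regulate", "liability"}
    EMOTION = {"outrage", "fear", "mixed", "approval", "indifference", "resignation"}

    @dataclass
    class CodedComment:
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            # Reject any label the LLM emitted outside the known sets.
            for field, allowed in (
                ("responsibility", RESPONSIBILITY),
                ("reasoning", REASONING),
                ("policy", POLICY),
                ("emotion", EMOTION),
            ):
                value = getattr(self, field)
                if value not in allowed:
                    raise ValueError(f"unexpected {field} label: {value!r}")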
Raw LLM Response
[{"id":"ytc_UgyGo_9sFf1PyZqIyQV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzydqs0LuqKtiPmK5Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw_fABNq2-E7X-GQFx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxHxzqTM_QWzOCYS_R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyXFM9Kb9UUNn4Tv6R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzT2-TUb2EarG-N5Td4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwweLCcfCiVkEMWaUp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzcMb7PWcAVIRPKyMd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugyj3SP4a3OQcxiUcmd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwaIneIrI6jNvSP8B54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]