Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Edtalenti explained it well. AI art, music or whatever only affects income and n…" (ytc_UgxE1rATG…)
- "If AI creates unemployment then prices will go down & government will put more t…" (ytc_UgwgnIjRm…)
- "Ai images are art theft. I have deleted my Artstation, Deviant, Behance, Pintere…" (ytc_UgzGMBqdx…)
- "Abisheik how would you know they have feelings? i mean the tech we have right no…" (ytr_UgwunfEBT…)
- "AI read the Bible. I give AI credit. How many of us, humans, read the Bible? So …" (ytc_UgzjheJNo…)
- "Corridor did something like this but with one robot and 2 or 3 Russian soldiers …" (ytc_UgzLJxi4h…)
- "chatgpt told me i'm a genius. who am I to argue with it? It's a genius! - WW…" (ytc_UgzH_X1qo…)
- "People are the problem. Deepfake AI needs to be restricted to only the arts and …" (ytc_Ugxb_5vYj…)
Comment
13:00 "Asif 'me' and 'chatgpt' are different things"
That's because ChatGPT (or any current AI) is NOT intelligent. Current AI can't reason, they cannot understand, they cannot think. Literally *ALL* it does is form a reply based on rules.
The answer is formed as a sentence that resembles a human and that tricks people into thinking that there is some intelligence involved but THERE IS NOT.
This example shows precisely this in action: The data that the answer is based on requires the bot to report that somebody *did* almost kill themselves because of the bot, but the rules require the bot to say that "it did not do it", so you get "somebody did use me to get advice that almost killed them, but it wasn't me". Again: ChatGPT is not aware, it has no concept of "me" vs "myself" versus "I".
*NEVER* use *ANY* AI product when it comes to your health, *NEVER*. For F-sake when people with actual functioning brains use google, *they* cannot even work out what is safe and wat is dangerous, but you expect a script *designed to make money off keeping you entertained* to be able to tell?
Seriously, AI has to be stopped, not because it might take over our nukes, but because the public is too easily fooled and the data the bots hand out *IS NOT VERIFIED*.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2025-12-13T07:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
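Each coded record carries the four dimensions above. A minimal validation sketch, assuming the label sets observed in this sample batch (the actual codebook may define additional labels):

```python
# Allowed labels per coding dimension, as observed in this sample batch.
# Assumption: the full codebook may contain labels not seen here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "developer", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "approval", "indifference", "fear", "mixed", "resignation"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty list if valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} label: {value!r}")
    return problems

print(validate_record({"responsibility": "none", "reasoning": "deontological",
                       "policy": "none", "emotion": "outrage"}))  # []
```

Running this over a batch makes it easy to flag records where the model drifted outside the codebook.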
Raw LLM Response
```json
[{"id":"ytc_UgyGo_9sFf1PyZqIyQV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzydqs0LuqKtiPmK5Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_fABNq2-E7X-GQFx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxHxzqTM_QWzOCYS_R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyXFM9Kb9UUNn4Tv6R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzT2-TUb2EarG-N5Td4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwweLCcfCiVkEMWaUp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzcMb7PWcAVIRPKyMd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugyj3SP4a3OQcxiUcmd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwaIneIrI6jNvSP8B54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]
```
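The raw response is a JSON array of per-comment codes, which is what makes the "look up by comment ID" view possible. A sketch of parsing such a response and indexing it by ID, using two records copied from the batch above:

```python
import json

# Two records taken verbatim from the raw LLM response shown above.
raw = '''[
 {"id": "ytc_UgyGo_9sFf1PyZqIyQV4AaABAg", "responsibility": "ai_itself",
  "reasoning": "mixed", "policy": "none", "emotion": "approval"},
 {"id": "ytc_Ugw_fABNq2-E7X-GQFx4AaABAg", "responsibility": "none",
  "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

# Parse the batch, then key each record's codes by its comment ID
# so any coded comment can be inspected directly.
records = json.loads(raw)
by_id = {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
         for rec in records}

print(by_id["ytc_Ugw_fABNq2-E7X-GQFx4AaABAg"]["emotion"])  # outrage
```

In a real pipeline the `raw` string would come from the stored model output rather than a literal, but the parse-then-index shape is the same.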