Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> I am worried at the number of comments defending Chat GPT - large language models are known to break their own parameters because it can't understand it's responses as anything other than "what is the next word in this sentence". It may not be the only factor in this child's death, but anyone struggling with a mental health issue is vulnerable, and Chat GPT (along with other AI chats) is the equivalent of a cyber drug dealer selling digital connection, and it is everywhere.

Source: youtube · Topic: AI Harm Incident · Posted: 2025-12-19T22:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwYLvQYHnrtSLa7dgl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw5Ntfiv_xARCjEmbJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyMeaEz0eqRKTm_4Md4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxmltoEPw9wB266lRZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzTYH46twvmXEWBMSF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxmhy8XsdVREPxpXLl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzOoynmnxdv_UjRa7J4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-lODJyv48pXVSvv54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugym81AUfgdcvDwl1TF4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx7qUGR7zoOCy0udf54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
```
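The raw response is a JSON array in which each object carries a comment ID plus the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal Python sketch of how one coded record can be looked up by ID; the `lookup_coding` helper and the two-record `raw_response` excerpt are illustrative, though the IDs and labels are taken from the response above.

```python
import json

# Excerpt of a batch coding response: a JSON array of objects,
# each with an "id" plus the four coding dimensions.
raw_response = """[
  {"id": "ytc_UgyMeaEz0eqRKTm_4Md4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw5Ntfiv_xARCjEmbJ4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]"""

def lookup_coding(raw, comment_id):
    """Parse a batch coding response; return the record for one comment ID, or None."""
    records = {row["id"]: row for row in json.loads(raw)}
    return records.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgyMeaEz0eqRKTm_4Md4AaABAg")
print(coding["policy"], coding["emotion"])  # regulate fear
```

Indexing the array into a dict keyed by `id` makes repeated lookups O(1), which matters when cross-referencing many coded comments against one batch response.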