Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "ChatGPT seems to agree, encourage or go with whatever emotions you speak or type…" (ytc_UgyG6CCyv…)
- "The robot soldier's inflatable doll needs, eating, drinking, coddling, toilet, b…" (ytc_Ugz-w6IOd…)
- "From the stuff I’ve seen in this sub the last few days: New Mountain Dew logo = …" (rdc_oi3b4j9)
- "Currently the chat bot sucks better use ai customer service ai for easy tasks an…" (ytc_Ugz5Q543B…)
- "What artists would get together and post random \"arts\" that are actually just ra…" (ytc_UgzYd_C0e…)
- "If ai is intelligent at all it will figure out a way to eliminate evil, if evil …" (ytc_UgzESzavb…)
- "Ai is very inept. You can make them say anything. Not to mention humans feed the…" (ytr_UgxIWXUE3…)
- "The 2008 crisis and now this one caused s hyperinflation in job requirements. Wh…" (rdc_gkqnig1)
Comment
Blaming ChatGPT for how people use it is like blaming a knife for a crime. A knife in the hands of a criminal can harm, but in the hands of a surgeon, it can save a life. The same tool, two entirely different outcomes; the difference lies not in the blade, but in the intent and responsibility of the person wielding it.
ChatGPT is no different. It’s a tool; one capable of spreading misinformation if misused, or empowering education, creativity, and progress when used wisely. Technology doesn’t create ethics; people do.
youtube · AI Harm Incident · 2025-11-07T18:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugye3P4h1APTwGABueh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxPlUnpnEus5-Dvx-V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwaGsKrhaP7Bs-OCJd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2PVkHWftrJiiM3MN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzqyAYKW2YDgx9CZsd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxEt5HrKBNgfUpFLYx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
{"id":"ytc_Ugyr_osXO8dN0UyTPop4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"pain"},
{"id":"ytc_Ugy3ovKJb0VAZMywbM54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXGkfclmZUPwK4JkB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwFQP_76JD4R_krneF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"disapproval"}
]
```
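A batch response like the one above is only usable if every row parses and every value falls inside the coding scheme. Below is a minimal validation sketch in Python; the allowed category sets are assumptions inferred from the values visible on this page (the real codebook may contain more categories), and the helper name `validate_batch` is hypothetical.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# sample rows shown on this page, not from the project's actual codebook.
SCHEMA = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"outrage", "indifference", "disapproval", "pain"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each row against SCHEMA.

    Raises ValueError on a missing comment id or an out-of-vocabulary
    value, so a malformed batch is rejected before it enters the dataset.
    """
    rows = json.loads(raw)
    for i, row in enumerate(rows):
        if "id" not in row:
            raise ValueError(f"row {i}: missing comment id")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"row {i} ({row['id']}): {dim}={value!r} "
                    f"not in {sorted(allowed)}"
                )
    return rows

# One row taken verbatim from the response above.
raw = (
    '[{"id":"ytc_Ugy3ovKJb0VAZMywbM54AaABAg","responsibility":"user",'
    '"reasoning":"virtue","policy":"none","emotion":"indifference"}]'
)
rows = validate_batch(raw)
print(len(rows))  # 1
```

Validating at ingest time keeps coding errors (hallucinated labels, dropped keys) out of downstream analysis; a stricter variant could also check that every returned `id` matches an id that was actually sent in the batch.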