## Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking up a comment ID or by choosing one of the random samples below.

### Random samples
- "Im so glad seeing an actual artist not bash ai art and instead giving an in dept…" (ytc_Ugz2IrYwW…)
- "Cloud Computing = Storing your stuff on another companies computer AI Computing …" (ytc_UgxsJod9F…)
- "Good point! I'm not sure why ChatGPT thinks it has an "unbiased" answer. I'm pre…" (ytr_UgwN5zZYm…)
- "I don’t believe most of these can be effectively done by AI, or that consumers w…" (ytc_UgzBNCE7z…)
- "Oh! I can actually provide some perspective here as a previous wet-lab scientist…" (rdc_ohw3r03)
- "Just asked Chat-GPT who an all-powerful sentient AI would consider an enemy..."-…" (ytc_UgxbcoGwQ…)
- "I dont see why we are trusting self driving cars. Theyre man made, thus not perf…" (ytc_UgweGjhaD…)
- "The problem is how do you define intelligence? If we agree on a definition, we k…" (ytr_UgzGQwgnU…)
### Comment

> I think ChatGPT just refuses to admit any wrongdoing, it's important that you ask questions in a neutral way and force it to do a web search before anything because it can get things wrong. I've noticed in my testing that even some leading questions can trigger ChatGPT to simply tell you that it's a bad idea or that you are wrong. If what you say is obviously untrue or a conspiracy, it will definitely snaps you back to reality.

Source: youtube · AI Harm Incident · 2025-11-25T02:0…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
### Raw LLM Response

```json
[{"id":"ytc_Ugzp2mpmsVRsAXFGaZJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7lSm0qmn4EDTuqP94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz4oLQZwX55dbrACJF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy3ZiZD5j8BWYiayVV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz4Y9f9kwCyll0ra2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzH0K4PE0xSBBfHU3x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyPpodWcmxK9e-jcIF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwGoe7bGvgDjUeldSJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyLthjV-WZWUSy7G94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzUsV6QlkGIKaz2jQF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"})
```
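Note that every dimension in the coding result above is "unclear" even though the raw response contains concrete labels. A likely explanation is that the batch failed to parse: the model closed the JSON array with a stray `)` instead of `]`, and the inspected comment's ID may also be absent from this batch. A minimal sketch of a tolerant parser with an "unclear" fallback follows; `parse_batch` and `codes_for` are hypothetical names for illustration, not the tool's actual API.

```python
import json

# Fallback record used whenever a comment cannot be coded.
UNCLEAR = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}


def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Tolerates one specific model slip: a trailing ')' where the
    closing ']' of the JSON array should be.
    """
    text = raw.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    try:
        records = json.loads(text)
    except json.JSONDecodeError:
        # Unparseable batch: every lookup will fall back to UNCLEAR.
        return {}
    return {r["id"]: {k: r.get(k, "unclear") for k in UNCLEAR}
            for r in records}


def codes_for(comment_id: str, batch: dict) -> dict:
    """Return the four codes for a comment, or all-'unclear' if missing."""
    return batch.get(comment_id, dict(UNCLEAR))
```

With this repair applied, looking up an ID that appears in the batch (e.g. `ytc_UgzH0K4PE0xSBBfHU3x4AaABAg`) yields its four codes, while an ID missing from the batch, or any batch that still fails to parse, yields "unclear" on every dimension, matching the table above.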