Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgwaF8Qm2…`: "Not really Waymo fault, the cat should be indoor and not on the road. People sha…"
- `rdc_ects9ng`: "Honestly. This can be solved with something as simple as an individual email, a…"
- `ytc_Ugy73wsAn…`: "Good thing ive already graduated from college and just enjoying ChatGPT with fun…"
- `rdc_n248dip`: "I would say this may be true in some shape or form. LLMs are generally great at …"
- `ytc_UgwX0Gc0l…`: "Man you guys remember a candidate name Andrew Yang that was predicting and talki…"
- `ytc_Ugwahq6aD…`: "So, you're saying that being trained on human output doesn't make the the AI hum…"
- `ytc_Ugz0oLpJF…`: "This only deals with half the problem. Once the AI becomes good enough to compet…"
- `ytc_UgyyATDaG…`: "While we will likely lose many jobs to AI, there are still tons of jobs out ther…"
Comment

> If AI turns deadly to us, these data centers will be like its ability to think and act, except there will be thousands or tens of thousands of these things scattered everywhere. Until we know for certain that AGI superintelligence won't lead to human extinction, we should be building these things with some sort of failsafe that instantly cuts power to them or destroys them in case things go horribly wrong one day. a 50% chance of humanity being wiped out does not sound like a risk we should be taking, but there is no way to stop this AI train anymore.

youtube · AI Harm Incident · 2025-12-20T22:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxlfmxXkSCjTsG4BJt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxvGKxPZ3aheNQ5FMl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwatLl89BOygdm2Fql4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxpe54UtVBgVXaa4X94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz43uj66wtfLxJj1cl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxs_qc-VeVynXzrftR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugxl6Oc5yHHRZehLut54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgznrXTdpQ-Gvifn6d94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxesOHggsTN0AP48xB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz6FnzOKp3ZEHAzyqN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
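A raw response like the one above can be turned into the per-comment coding shown in the table by parsing the JSON and checking each row against the codebook. The sketch below is a minimal, hypothetical example: the allowed values per dimension are assumed from the labels visible on this page, not from any published codebook, and `parse_coding` is an illustrative helper name.

```python
import json

# Assumed codebook: the value sets are inferred from labels seen in this
# page's table and raw response; the real codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response and index the valid codings by comment ID.

    Rows with an unknown value in any dimension are silently dropped,
    so the caller only ever sees codings that match the codebook.
    """
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            coded[row["id"]] = row
    return coded

# Toy input with a made-up comment ID, for illustration only.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
print(parse_coding(raw)["ytc_example"]["policy"])  # liability
```

Looking up a comment ID (as the "Look up by comment ID" field does) then reduces to a dictionary access on the parsed result.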