Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples (truncated text, followed by comment ID):

- Any dude makes money by saying ahhh we’re all gonna die. Fear sells. best guess… (ytc_UgzyqxvrX…)
- When I read the title the cynic in me immediately wanted to ask ‘is it because t… (rdc_erb38zw)
- A friend has had to use ChatGPT to compile a letter for his solicitor because th… (ytc_UgwPyiHEo…)
- We can't get the government to agree on universal healthcare or ANY form of gun … (ytc_UgzOFRYAd…)
- User: tell me a joke / ChatGPT: Why don't scientists trust atoms? Because they… (ytc_UgxTG144J…)
- "Fun" fact, in the two last seconds of the video, the automatic subtitles says … (ytc_UgysLsXky…)
- IF we think someone MIGHT to AI, we will stop giving our heart to others; we don… (ytc_UgzThtIPC…)
- my normal chats are already crimes against the universe i don't need to talk to … (ytc_UgwzVD1mm…)
Comment
Self-driving cars are a bad idea because imagine this....A pregnant woman is walking and there's a tree near you and you didn't hit the brakes in time,the person behind the computer would make the decision to A.hit the pregnant woman or B.kill the driver by driving him/her into a tree.It's basically choosing your life or death if the person behind the computer is racist he or she might kill the woman or driver depending on their color,religion etc.I still don't think its a good idea to have self-driving cars because the person behind makes the choice of you dying or the other person simple as.
*This is my opinion so don't attack me,this example is made-up.*
Source: youtube · AI Harm Incident · 2018-03-31T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"ytc_UgwmPGoCY107ZuG02rp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz5sTL3jf5t-hPAX6V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyYpk1AW0oXrFbEa5d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"disapproval"},
{"id":"ytc_Ugzmpc8MIRDJI62yv214AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzi-6ZOdnkNEhTw4lh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
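The raw response above is a JSON array with one record per coded comment, which makes the "look up by comment ID" view straightforward to build: parse the array and key it by `id`. A minimal sketch follows; the function name `parse_codings` and the field-validation rule are illustrative assumptions, not part of the tool, and the codebook may allow more values per dimension than the five records above happen to show.

```python
import json

# Fields every coded record carries in the example response above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codings(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response (a JSON array of records) into a
    lookup table keyed by comment ID, checking the expected fields."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {sorted(missing)}")
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded


# Two records copied from the batch response shown above.
raw = '''[
  {"id":"ytc_UgwmPGoCY107ZuG02rp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzi-6ZOdnkNEhTw4lh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

coded = parse_codings(raw)
print(coded["ytc_Ugzi-6ZOdnkNEhTw4lh4AaABAg"]["emotion"])  # fear
```

Keying by `id` means a coded-comment detail view (like the one above) is a single dictionary lookup; a record with a missing dimension fails loudly at parse time rather than rendering a blank cell.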