Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "So the replaced clueless people with clueless AI. I THOUGHT THIS WAS SUPPOSED TO…" (`ytc_UgzvGWELq…`)
- ""lighten the load" yeah right, all gen ai is doing is taking and taking and taki…" (`ytc_UgyNDaGpE…`)
- "Ai wont need nukes , they could somehow take over internet , and restart the who…" (`ytr_UgyrKYA5p…`)
- "You create AI for companies to remove people for profits, and those are the peop…" (`ytc_UgyUt1Qlr…`)
- "On dating apps, scammers will utilize a variety of ways to scam you: 1) Glamoro…" (`ytc_UgzsdpMQ7…`)
- "Yes it would, it'd make it **A LOT** less accessible. Edit: to clarify, I mean …" (`rdc_k22a5z2`)
- "I dunno man the only real existential threat I see is from letting the military …" (`rdc_kqswznd`)
- "I'm not using that. Ai is going to take over and it is SCARY bro.…" (`ytc_Ugy2zAVIA…`)
Comment
I have three issues with this:

1. You can easily lie to a computer. For questions like "does a starving man have the right to steal?" or "if someone insults my family, they're in for trouble," the expected answers are obvious, so answering "strongly disagree" (even if you don't) would make you appear low risk. And even if they change the algorithm to compensate for the lies, what about the people who aren't lying?
2. You could memorize the answers. If this became widely used, people would research the algorithm and eventually start making answer keys. True, you could change the questions, but that would create other issues, since the algorithm becomes more accurate with use.
3. You take out the human factor. Humans with experience can sometimes be better judges of character (though not always).

This is just my opinion from what I heard in this video; don't take me for an expert.
youtube · AI Harm Incident · 2017-09-01T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxEoXJmsU2o6hW7-6l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzQ5-X22e_jw6AD6qh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzJdKeEiLauijES-xh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxqiNKoyqTcAP4sAER4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxo75JzWjtKUEJa2vx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzYjmZgqwkL0p4GOal4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UghETbDDLjdorXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjjbdybEHWzyXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugjn9WM4zF6TIHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugi75uTYKgdOH3gCoAEC","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}]
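A raw response like the one above has to be parsed and validated before the codes can be trusted, since the model can emit malformed JSON (e.g. a stray `)` where the closing `]` belongs) or out-of-codebook values. The sketch below shows one way to do that in Python; the codebook sets are inferred from the values visible in this sample response and may not match the project's real codebook.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (hypothetical codebook; the real one may differ).
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "government", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into validated records.

    Returns only records that have an "id" and whose dimension values
    all fall inside the codebook; anything else is dropped.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return []  # malformed output, e.g. a truncated or mis-closed array
    if not isinstance(records, list):
        return []
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_abc","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}]'
print(len(parse_coding_response(raw)))  # prints 1
```

Dropping invalid records rather than raising lets a batch coding run continue past one bad model response; a comment whose record is dropped can simply be re-queued, which is consistent with a "Coding Result" table falling back to `unclear` on every dimension.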