## Raw LLM Responses

Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.
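The lookup-by-ID view can be sketched as a simple index over the parsed coding records. This is a hypothetical illustration, not the tool's actual implementation; `index_by_id` and the single-record `raw` string are made up for the example, though the record shape matches the raw responses shown below.

```python
import json

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Index parsed coding records by comment ID so one comment's
    coded output can be retrieved directly (hypothetical helper)."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

# One record in the same shape as the raw responses below.
raw = ('[{"id":"ytc_UgzaQEe_66-YHlYJKgR4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')

index = index_by_id(raw)
print(index["ytc_UgzaQEe_66-YHlYJKgR4AaABAg"]["policy"])  # -> ban
```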
### Random samples
- "If you’ve seen/read Cloud Atlas, the Sonmi chapters, then that’s a pretty good i…" (rdc_ljb6yu9)
- "Considering it's the news outlets that are telling us that people are losing the…" (ytc_UgzGoh3nt…)
- "Excellent question! As more and more work goes into AI/ML entities, these are c…" (ytr_Ugxkz-UqN…)
- "I need to mention it was definitely stupid for the prosecution to use ChatGPT. I…" (ytc_UgxO2gOt_…)
- "I feel sorry for whoever thinks this is real 😔. They replaced the human with a r…" (ytc_Ugzi94u3I…)
- "@catgo2 Not when you claim that your experience invalidates the many more positi…" (ytr_UgwO6SFiY…)
- "What I would like to ask is how much confidence would you place in an autonomous…" (ytc_UgxMQ5ZZC…)
- "If you're going to compare total energy costs, you need to include: All energy i…" (ytr_Ugw5BmMBx…)
### Comment

> A.I. is in a way a mirror of us, as it advances it sees us as much of a threat as we predictively see it as a possible one, once A.I. can assist itself to develop we are no longer of use, only a liability. We should get a little farther to restore sight to the blind and then pull the plug while we still can. It would be unintelligent to support an enemy (that already has plans for our eradication) past our own gain, and that is a mistake A.I. likely won't make. Especially after seeing this comment and all the other ones here.

Source: youtube · Incident: AI Harm Incident · Posted: 2025-09-16T03:4…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
### Raw LLM Response

```json
[
  {"id":"ytc_UgzsnN8nQrzsROSwKjF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzaQEe_66-YHlYJKgR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw_ZS9-z9G3hV7syKp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxgwqiceG4ZosDZBBl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyFscoHoSxlL9q4rIl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgywZ_RkaN6FrbK4fVt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw5W3jsquS7rZD2w2p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8tzdit3rk8owlTbB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxiKYMKH1c8D4Cl7jR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzOr8XdTOaQNCGLpyp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
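Parsing a raw response like the one above can be sketched as follows. The allowed values per dimension are inferred only from the outputs shown on this page (the full codebook may define more codes), and `validate_codings` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the records above.
# This enumeration is an assumption; the real codebook may be larger.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "developer", "government", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"fear", "resignation", "outrage"},
}

def validate_codings(raw_response: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes
    are all within the schema (hypothetical helper)."""
    records = json.loads(raw_response)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

raw = ('[{"id":"ytc_UgzOr8XdTOaQNCGLpyp4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
print(len(validate_codings(raw)))  # -> 1 (the record passes validation)
```

A record with any out-of-schema code (e.g. an unexpected `responsibility` value) is silently dropped here; a real pipeline would more likely log or re-prompt instead.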