Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Bro do you realise majority of the Software job is feature development and doesn…" — ytc_UgwP9f7fe…
- "@ramnoon6447 yeah i mean its not really that hard to prove that they don't know …" — ytr_Ugyl6gXZe…
- "Bold of you put this out when tools like Claude Code and Codex are starting to b…" — ytc_UgzsJuBur…
- "13:10 - This past weekend the actions of an LLM managing my medical records almo…" — ytc_UgwODl9or…
- "AI: how do we control it, how do we protect ourselves, what are our safeguards? U…" (translated from French) — ytc_UgydoTi5u…
- "Watch out and stay alert. Geoffrey Hinton, (Google) God-Father of AI has spoken …" — ytc_Ugz2_UqJ_…
- "What would be the point- the incentive- for AI to expand voluntarily? They don't…" — ytc_UgyswolHT…
- "@ILovePastaLawl Art is better if made by humans, why? Because you want it to be …" — ytr_Ugx0g2srP…
Comment
No it would be a matter of life and death. We would have to destroy those machines or whatever they are in order to survive. We would have to do this as fast as possible before they are able to organize. Forget about the whole idea of rights and morals. Where are your evolutionary insticts? Your instinct to survive? We can´t even live peacefully amongst ourselfes. How could there be peace between us and another being that is as inteligent or even more inteligent than we are. Most importantly: What if the AI is just as corrupt as we are and then decides to enslave us ? I have seen to many movies on that matter to ever support robots rights :D
youtube · AI Moral Status · 2018-10-05T22:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugy5DaglRDrjepjKOnJ4AaABAg.8k1oL2OsUM38k2HC6qNsvX","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugy5DaglRDrjepjKOnJ4AaABAg.8k1oL2OsUM38k4OBeUHzbB","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwjhyzmVzw3QHNLkD14AaABAg.8j681g-6pH08jn55lmFzj1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwE9-rJ1xtbiQ98yed4AaABAg.8j4hm3hL45W8pEGW2nchfa","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxbgNKJMW57e2gSy1B4AaABAg.8ihGfuOesc58knTYVLHAQL","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxbgNKJMW57e2gSy1B4AaABAg.8ihGfuOesc58lo7PgtEaz8","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugz9OA74hhHCgKiOpxN4AaABAg.8iVAN3Or4Ih8m1tk-hBAP4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugz9OA74hhHCgKiOpxN4AaABAg.8iVAN3Or4Ih8mEl_BjDksp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyGfDw1xgN5DCKJA9l4AaABAg.8i9wEYQPVNJ8j2YTahlrOK","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytr_UgyaN6sJhihdnnlYSdd4AaABAg.8hoGdZ8UW1D8j2Z8JsIZMh","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
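The "look up by comment ID" step above can be sketched as follows: parse the raw model response (a JSON array of coding objects keyed by comment/reply ID) and index it by `id` so any coded comment can be retrieved directly. This is an illustrative sketch, not the tool's actual implementation; the function name `index_codings` and the skip-malformed-entries policy are assumptions, while the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above.

```python
import json

# One entry from the raw model output shown above, as an example payload.
raw_response = '''
[
  {"id": "ytr_Ugz9OA74hhHCgKiOpxN4AaABAg.8iVAN3Or4Ih8m1tk-hBAP4",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]
'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and index it by comment ID."""
    index = {}
    for entry in json.loads(raw):
        # Skip malformed entries rather than crash on a bad model output
        # (an assumed policy; the real pipeline may re-prompt instead).
        if "id" not in entry or not all(d in entry for d in DIMENSIONS):
            continue
        index[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return index

codings = index_codings(raw_response)
coding = codings["ytr_Ugz9OA74hhHCgKiOpxN4AaABAg.8iVAN3Or4Ih8m1tk-hBAP4"]
print(coding["policy"])  # ban
```

Indexing once and looking up by ID keeps retrieval O(1) per comment, which matters when the same response batch backs many "inspect" clicks.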