Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples — click to inspect

- "Brilliant questions. There are a few basic solutions that should prevent these s…" (`ytc_UgibjtNUD…`)
- "AI Bros said that all jobs would replaced by AI in 6 months at the start of 2025…" (`ytc_UgwTKSSTO…`)
- "I think the main thing that FSD just needs to solve is how it most of the time r…" (`ytc_UgyHGti__…`)
- "Sorry I’m a little uneducated on foreign policy. Putin can’t even beat Ukraine h…" (`rdc_mcqiwb0`)
- "Basically artist should be paid when their art is sampled. This would need an ev…" (`ytc_Ugy3AhzXk…`)
- "I believe that GPT-3 is at least partially conscious. A good test is asking whet…" (`ytc_UgzulgIb-…`)
- "It is not ‘more rational’ - this description betrays a fundamental misunderstand…" (`ytr_UgwyyrUO1…`)
- "Lawmakers are forgetting the most obvious threat from AI, and that is the reduct…" (`ytc_UgzfNNcLH…`)
Comment

> If AI become smarter than us and ethics are rational and optimal, won't AI learn to be more ethical than humans? Even if so, what could go wrong in the meantime?

youtube · AI Harm Incident · 2025-07-28T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzo-pWt1j_uVcAyXAl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzANO5_3D5M9MXyt3d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyTxHlv94cYpyAb6RJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyVomSsPIHEBU6D0kV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxEmaIKSqJQJuk7R-p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgylU79T1Q3Z6zuK_1N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzWpa793raxy8W5BLd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxvpNXUZ7BYyh0GWRV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxTk07Y1R66teFpEaF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwjtbIou4CeD7Nfrod4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```