Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect

- "Universal Income (beyond basic) should be the goal with AI. I rather not work if…" — `ytr_Ugxg5lseU…`
- "AGI and ASI is crap. Claude 4.5 is great at programming, DeepSeek R1 great at M…" — `ytc_UgwW7sIOA…`
- "I think humans will use AI for evil, before AI does something evil to people.…" — `ytc_UgzbpRT3e…`
- "@АлексейРыкалин-м5т I'm not hating but I want to clear up that AI art isn't real…" — `ytr_UgzVofT9R…`
- "I'm sorry for your loss, I have to reply. The system is called full self-driving…" — `ytc_UgzqXpcLV…`
- "NOT saying that the future with AI will necessarily good, in fact there´s a good…" — `ytc_UgytnYY-d…`
- "Exactly… the initial process of using AI will be to have the same people for the…" (translated from Portuguese) — `ytc_UgzVTTn2X…`
- "China is already on par with USA AI tech. What is to stop a robot takeover? Noth…" — `ytc_UgxSJskKO…`
Comment

> Think about the Poland invaded by German in one month, Japan would drive tank across north and east area of China in several month,undoubtedly, Advanced technology is always a killer for those who don't have. nuclear bomb is a just proof to stop a war, but computer hacker fighting was,is and will never stop since we come into the digital era. how about next AI era

Source: youtube · AI Harm Incident · 2018-11-11T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwxt0ZYo_8UPxhmMeF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwzrvj0NqodLMa21Ft4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx75cHjhyOWxs_QjaJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy7kqa4H6EtheQE54p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyeymu0Ei4-PrBjm5h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwMdf4OLHL9QnNF-w94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwaLE2xshherIxxKNF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2pkXyhE-U4r1TRct4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyTu7Z0jdApHPa2U7R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugzqi_jhUKJQ-9pRT1t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
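A raw response like the one above has to be parsed and validated before its codes can be looked up by comment ID. The sketch below shows one way to do that in Python; note that the allowed values for each dimension are only inferred from the examples shown on this page (the real codebook may include more categories), and `parse_coding_response` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per coding dimension — assumed from the samples above;
# the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "distributed", "company",
                       "developer", "government", "user"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "ban", "regulate", "liability",
               "industry_self"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}


def parse_coding_response(raw):
    """Parse a raw LLM coding response and index the records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the assumed codebook, so malformed model output is caught
    before it reaches the lookup UI.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded


# Example: look up one coded comment by its ID.
raw = ('[{"id":"ytc_example","responsibility":"government",'
      '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_example"]["policy"])  # liability
```

Failing loudly on an unknown code value is a deliberate choice here: silently accepting a typo from the model would corrupt the downstream dimension counts.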