Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
F-uckerburg pulling back at the end by saying "I think AI will augment the human…
ytc_UgyYlDfAt…
IMHO AI will and already does take jobs, especially programming jobs where it’s …
rdc_oahnwzc
80% of all of those stuff is probably also generated by ai in some sort. Impossi…
ytc_UgxMB2CFB…
THANK YOU SO MUCH!!!! I didn't have the words for it but you NAILED IT. I am a G…
ytc_UgwFyHZwg…
I love the idea of AI as well as to play with the technology itself, but i hate …
ytc_UgzzIMmf5…
If the family is publicly claiming that ChatGPT encouraged their son's suicide, …
ytc_UgxgoLcJD…
A bag of BS as always when it comes to AI, of course the founders are worried of…
ytc_UgwI1E2NV…
I used to use Ubereats and they implemented Chatbot to deal with any issues rela…
ytc_UgzrO5_A3…
Comment
While stories of self-driving disasters make headlines, synchronized autonomous vehicles have the potential to be far safer than human drivers. Always remember if inventors gave up every time the masses feared technological evolution, we'd still be in the Stone Age.
The deeper issue isn’t just tech capability, it’s trust. How do we build public trust when one high-profile accident creates panic, even though thousands of human-caused crashes happen daily? Simple educate the public.
youtube
AI Harm Incident
2025-04-06T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzxUA3pdw6XICXgfe54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzT6q_928opBFQ2Njh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyPARdUb-jKgdKQLRp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgynrpoPqtB1guOBvZF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw5vgUKZjDtDSs-g3J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwMR50D8Kqz6wQj2PJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzQCeHsff0340w_Rep4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz08dusJR0Wn-QFUgV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy3vxHI7ov4MvbeC_F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwrK2N-NL5F2THGzXF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]