Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "If you’re against AI art you cannot also claim to be for the working class…" (ytc_UgzPKxUn3…)
- "I'm sorry. I'm out if a person is speaking and they're telling me they're AI. Cl…" (ytc_Ugw6AuP9E…)
- "I've been using AICarma for a while; its insights have really transformed my app…" (ytc_Ugz7RRKW7…)
- "In my experience, that kind of thing indicates that a statement has been heavily…" (ytr_UgzwyJTG7…)
- "AI is only conscious in so far as a person programmed it - so if you are giving …" (ytc_Ugw0F_yP2…)
- "The satanic matrix of AI artificial machines and virtual reality systems and sim…" (ytc_UgxzuL6-F…)
- "That is the dumbest pro-AI argument I've ever heard. It's like saying using an a…" (ytc_UgwmD-xur…)
- "AI is so dumb that it cant generate wristwatch with custom analog time... it is …" (ytc_UgzWPZ3M_…)
Comment
The safety of any autonomous vehicle is measured in amount of accidents per autonomous driven kilometers against number of accidents per human driven kilometers.
Sure, machines can fail in silly ways. Sure you can ban the tech. But if they are safer as defined above, the removal of silliness is paid with the lives the autonomous vehicle was saving and now won't save.
This video was made by people who have this piece of reality in their moral blind spot.
Source: youtube · Video: AI Harm Incident · Posted: 2024-12-24T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugylawk4Wwo2HaZN6Gd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgygC_qYcmGjSMiJmA94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx3VkzIh25fPT5W7Gh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTcfQFE4lU3kA-jFl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgygOgEYmoGpSAABP_F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxo0KySK5XL5aODMIp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugym8uQepetHZvqvByt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzK4hC1-MQssaF_xpV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzftlsHe8yLGgZJjS14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzCPvU_oF2h9AS6a_d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```