Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "No work no pay who will pay AI. 😂 Please explain. AI will pay for AI?…" (ytc_UgxKKWYhE…)
- "Even the most primal organisms will self protect. AI is already past the point o…" (ytc_Ugx0VesQ_…)
- "This kinda reminds me with that one guy that said he's the best AI artist and ma…" (ytc_Ugx9D0T2b…)
- "Who is going to benefit from it? The big capitalists, who are going to make even more prof…" (ytc_UgwVtxHLo…)
- "It sounds like you are reflecting on the concept of freedom and understanding wi…" (ytr_UgwLL6uOZ…)
- "I guess that its robot isn't real just make up some edit in this video and that'…" (ytc_UgxPdI1FT…)
- "Okay, so what happens when all of this eventually lawn darts or an AI manages to…" (ytc_UgwfvptnU…)
- "I use ChatGPT for helping me lay out material I'm writing, getting constructive …" (ytc_Ugyqb3RCj…)
Comment
I think they should let the regulation take it's time, but no experiments should be allowed before that. I don't think the regulators will do everything that I think they should do even if they take their time. Before any self driving system is allowed on roads I think there need to be ethics rules for self driving cars: e.g. if a car has to make a choice like crashing the car to avoid running over school children crossing the road how should it act. Ethical rules need to be considered and then manufacturers have to implement rule sets for the vehicles.. which probably excludes the Tesla model based on an AI model that may be hard to control in such ways. Then they should consider responsibility when something happens. I think manufacturers need to have full responsibility it they claim "full self driving".. and I don't think any companies can do that for a long time yet.
youtube · AI Harm Incident · 2025-10-22T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgwvPLhlRk0qSqQjXrx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzuAZSgaG7ls2Mw37Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxcURLOJPFcbmfwhCp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
 {"id":"ytc_UgwxewlIiwb4oT14LY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyejCoE2dQxEafIaJB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugw6EC-bQnwDazllkEp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzuNtJaaFEwatvsQ5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugyux2RlLLKNjE0v8ON4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxKm5qAFE2OiEgF0QR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgxSzEh_OTKgKQmy2fZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
```
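The raw response is a JSON array keyed by comment ID. A minimal sketch of how such a batch could be parsed and validated before it is stored, assuming the dimension vocabularies inferred from the values shown on this page (the `VOCAB` sets and the `parse_batch` name are assumptions, not a documented schema):

```python
import json

# Allowed values per coded dimension. These sets are inferred from the
# values visible on this page and are an assumption, not a documented schema.
VOCAB = {
    "responsibility": {"company", "government", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response (a JSON array of coded comments)
    into a dict keyed by comment ID, rejecting malformed entries."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            raise ValueError(f"entry missing id: {entry!r}")
        for dim, allowed in VOCAB.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim}: {entry.get(dim)!r}")
        # Keep only the four coded dimensions, dropping anything extra.
        coded[cid] = {dim: entry[dim] for dim in VOCAB}
    return coded

raw = ('[{"id":"ytc_Ugw6EC-bQnwDazllkEp4AaABAg",'
       '"responsibility":"government","reasoning":"contractualist",'
       '"policy":"regulate","emotion":"fear"}]')
batch = parse_batch(raw)
print(batch["ytc_Ugw6EC-bQnwDazllkEp4AaABAg"]["policy"])  # prints "regulate"
```

Keying the result by comment ID also supports the lookup-by-ID inspection shown above: a coded record is retrieved with a plain dict access on the parsed batch.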