Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "lol AI ethics smh... i do not care if my computer becomes "alive." It is a machi…" (ytc_Ugysfs0e7…)
- "Here is the beginning of the end / Rock n roll, art, sport, love will always be ou…" (ytc_UgwbDRI-g…)
- "🤢🤢🤢🤮 Pathetic 🤢🤢🤢🤢🤮 / Ai will NEVER reach super AGI 🤏 / If it succeeded in this, the…" (ytc_UgzOO2VJ8…)
- "Vid: "Suppose you're in a self driving car that is following too closely behind …" (ytc_UgzG4obmJ…)
- "@Gub0-m6i considering they promote on twitter... where they say its ai... and ha…" (ytr_UgzeeEdpT…)
- "THEY DIDNT EVEN TRY TO MAKE THE AI LOOK BETTER? THE HAND ANIMATIONS STILL STARTE…" (ytc_UgzRGVSFB…)
- "NOT GONNA LIE!! My mum calls AI bullshit 'gorgeous' and doesn't understand why I…" (ytc_Ugzo5zVWC…)
- "Good for you! Art takes a lot of work, especially to learn! People shamelessly s…" (ytc_UgzP66MXg…)
Comment
In the box dilemma, if every vehicle is self-driving it's not a single vehicle that is gonna react at the same time. Driverless vehicles reactions have to be in a "hive" state, as they are connected by a network to the grid. As soon as your car notices the danger, every car around reacts with it to avoid that danger. The way the dilemma was presented its as if yours is the only driverless vehicle. Also, driverless vehicles might have an reaction time hundreds of times better than a human and, even if alone, if the processor is fast enough and the AI smart enough, might avoid damage that, to a human, would be impossible to avoid.
We are trying to judge the dilemma by a human's point of view and a human's capabilities. the truck itself might speed up together with the cars in front of it, so the boxes get pulled with it while your car slows down, creating a bigger distance between you and the boxes, while that happens, every vehicle to your left and right before you will probably break while every one after you speeds up, creating a gap where you can move into. All of that happening at the SAME TIME. It's not "minimizing the damage" that should be sought after, it's nullifying it. If there is a choice to be made in a situation, it's still not good enough and needs to develop more.
youtube · AI Harm Incident · 2015-12-11T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgiNpg6zN3dcEHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghvTvR7vS3DkngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgjbGUooE19fn3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggsqGMr9EidiXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UggKVhVX1FqATXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugjo9a6Pw85wEHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgieJyVJNtEyqngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugh-9Rr-OAATf3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UghZ3KaDpI3WZXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghvD0lAXFuW8HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
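For anyone who wants to work with a raw response programmatically rather than through this page, a minimal sketch in Python: it assumes the model output parses as a valid JSON array with the four coding dimensions shown in the table above (the `raw` string here is truncated to the first two records for brevity; in practice you would load the full response).

```python
import json
from collections import Counter

# A raw model response: a JSON array of coded comments, one object per
# comment ID, each with the four coding dimensions used above.
raw = """
[
  {"id": "ytc_UgiNpg6zN3dcEHgCoAEC", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UghvTvR7vS3DkngCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

records = json.loads(raw)

# Index by comment ID for direct lookup of any coded comment.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_UghvTvR7vS3DkngCoAEC"]["emotion"])  # fear

# Tally each coding dimension across the batch.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, dict(Counter(r[dim] for r in records)))
```

The same lookup-by-ID pattern backs the inspection view above; tallying the dimensions over a full batch gives a quick sanity check that the model is emitting only expected category labels.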