Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
my friend has this and still makes fantastic and creative artworks by hand
you d…
ytr_UgwWSqeqq…
Mixed in with slightly overweight, just-past-their-prime Indian men doing slow-m…
rdc_jfaiunr
Do people not know that these large language models that are short handed to be …
ytc_UgwrO7Sn9…
Why are people stupid enough to believe that lie? This idea that AI would produc…
ytc_UgxGr11Y_…
If the ai “artists” are mad at you, you know you’ve done something right :))…
ytc_Ugwa2mXH0…
AI and the digital economy are going to destroy mankind …………get popcorn and enjo…
ytc_UgwKLZv97…
AI replaces humans, so products should be cheaper, so we don’t need to work as m…
ytc_UgzfrDknv…
If you’re in the tech sector, you chat with your friends about layoffs. Silicon …
rdc_nluv04n
Comment
I must protest against defending tailgating again. Do not let this detract from the many other issues here that are real and should be taken seriously. But a huge issue is that people do not drive safely. Whether the driver is a human or a machine, if it feels it needs to make a sudden stop, it should be able to do so without the risk of being tailgated. It is the driver behind who did not keep sufficient distance from the car in front to avoid a collision. Stop victim blaming. And of course this goes for self-driving cars that tailgate too; if they do so, they are at fault.
So the pileup mentioned around 9:15 is an example of people not actually driving properly. Yes, the driverless car probably should not have stopped, but more because there was no reason to; safety-wise, it should not have mattered. I know reality is a bit different, but people need to learn to drive in a safe manner. Expect that the car in front could always make a sudden stop for some reason. As a driver, it is up to you to know how much safety margin you need, how fast your car can stop, and how fast you can react.
youtube
2024-11-15T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxgowVY79x6s9wCRK54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugweit1p7Rda9pMQTk14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyyyWoTrhg7WcoUKY54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyBh9vUcQ4hO2nbjwF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz5I6hU8JZIQBNG3R94AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz9RV9SbsudMcEX70V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyUIl4f2qfyy1BnLeN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz1PKtF23fiJhjZwUZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNPTw8mtaN48NkTgV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxB7L14PPYcECzt19p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
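For reference, a minimal sketch of how the lookup-by-comment-ID view above could be reproduced from a raw response. The raw response is a JSON array in which each element codes one comment on four dimensions; the `raw` subset, the `codes` index, and the `lookup` helper here are hypothetical illustrations, not part of the tool.

```python
import json

# A subset of the raw LLM response shown above (two of the ten entries).
raw = """[
  {"id": "ytc_UgyNPTw8mtaN48NkTgV4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxB7L14PPYcECzt19p4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

# Index the coded rows by comment ID so a single comment's dimensions
# can be retrieved directly, mirroring "Look up by comment ID" above.
codes = {entry["id"]: entry for entry in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    return codes.get(comment_id)

result = lookup("ytc_UgxB7L14PPYcECzt19p4AaABAg")
```

Under this sketch, `result` carries the same values the Coding Result table displays for that comment (responsibility `user`, reasoning `deontological`, policy `none`, emotion `outrage`).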