# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Look up by comment ID
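For anyone working against the stored data rather than this page, looking up a coded comment reduces to scanning each saved batch response for a matching `id` field. A minimal sketch, assuming batches are saved as JSON files under a `raw_responses/` directory; the directory name and file layout are assumptions, not the dashboard's actual storage:

```python
import json
from pathlib import Path

def lookup_by_comment_id(comment_id: str, store: Path = Path("raw_responses")) -> dict | None:
    """Return the coded record for comment_id, or None if no batch contains it."""
    for batch_file in sorted(store.glob("*.json")):  # one file per batch (assumed layout)
        for record in json.loads(batch_file.read_text()):
            if record.get("id") == comment_id:
                return record
    return None

print(lookup_by_comment_id("ytc_UgxcJ6ksDlOryapG9XV4AaABAg"))
```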
## Random samples — click to inspect
- ytr_UgzC87pa6…: “I think her book would be a good one to read (I haven’t read yet but will)… also…”
- ytr_UgxZFNMm0…: “Their OWNERS are so powerful. The technologies are just chatbots and upgraded au…”
- ytc_UgwZf37wT…: “Just keep AI away from humans services. It’s clear that it’s not ready. Humans a…”
- ytc_UgzC867OH…: “It won't be chatgpt of an ai take over. Wich is like really probable soon or lat…”
- ytc_Ugxa3AeCe…: “In modern German, 'ä' is pronounced basically the same as 'e'. It used to be a d…”
- ytc_UgwhBnyEY…: “humans are the worse danger because we like to kill. How's does A.I feel about t…”
- ytc_UgwvNjWth…: “Ai can be sold to us in anyway they want but ultimately it will be used by the g…”
- ytc_Ugw9zvAR2…: “Since we are conscious and we are intelligent and we call these models that we t…”
## Comment

> What is heard during this video is: "AV's are dangerous, therefore we should continue to let drivers kill themselves at a rate of about 46,000 deaths per year for safety reasons." 3% of those deaths in 2022 were children (1,129). Sorry but the moral thing to do is to solve autonomous driving right now.
>
> There's no way to make AV's better without putting them in real world scenarios. What you are seeing now is the worst state these AV's will ever be in. Yes, there's accidents, but we are slaughtering tens of thousands of people every year by not getting AV's right as soon as humanly possible.

youtube · 2024-11-17T00:2… · ♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
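Each coded comment carries four categorical dimensions (responsibility, reasoning, policy, emotion) plus a coding timestamp; the table above renders one record from the batch response shown below. A minimal sketch of a codebook check for such records; the allowed value sets are inferred only from labels visible on this page, so the project's actual codebook may define more categories:

```python
# Allowed labels inferred from values visible on this page; the real
# codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record; an empty list means valid."""
    problems = []
    if not str(record.get("id", "")).startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id: {record.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}={record.get(dim)!r} not in codebook")
    return problems

# The record behind the table above passes cleanly.
record = {"id": "ytc_UgxcJ6ksDlOryapG9XV4AaABAg",
          "responsibility": "none", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "outrage"}
assert validate(record) == []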
## Raw LLM Response

```json
[
  {"id":"ytc_Ugz_tlf1zqGDRHfwuat4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugx8sd7xjbFoEz1jXKF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw98rTG2dQPLRO0UDl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw-CNCsS0PNg8bXRtp4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxIptcZX-37hlpoEq14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyUT6TZRyfObS_vE5N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxxlCAtvSsaTd6fUbd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwy7Q0VSL0Z4bNziTd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxcJ6ksDlOryapG9XV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz3XLbWukP0Q77v_jF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
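The raw response is a bare JSON array with one object per comment in the batch; the record for ytc_UgxcJ6ksDlOryapG9XV4AaABAg is the source of the Coding Result table above. A minimal parsing sketch under those assumptions; the fence-stripping heuristic and the file path are illustrative, not the pipeline's actual code:

```python
import json
import re

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response into {comment_id: codes}."""
    # Strip a markdown code fence in case the model wrapped its JSON in one
    # (a defensive heuristic; this batch came back as a bare array).
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

raw = open("raw_responses/batch.json").read()  # hypothetical storage path
codes = parse_batch(raw)
print(codes["ytc_UgxcJ6ksDlOryapG9XV4AaABAg"])
# {'responsibility': 'none', 'reasoning': 'consequentialist',
#  'policy': 'regulate', 'emotion': 'outrage'}
```

Keying the result by comment ID makes the dashboard's two entry points, ID lookup and sample browsing, a constant-time read over the same structure.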