Raw LLM Responses
Inspect the exact model output for any coded comment. Comments can be looked up by comment ID or opened from the random samples below.
Random samples

- "GJ OpenAI! We know what you did, and even if some patsy gets caught, the real fu…" (ytc_Ugxrthk1T…)
- "Well let's keep AI out of policing and make them actually do some work. Or we mi…" (ytc_UgylsIFEY…)
- "When discussing the potential misuse of AI on public platforms, the concern is t…" (ytc_UgxX5PHtA…)
- "So even te FSD can be an a-hole driver. Congrats. We have taught AI to be irrita…" (ytc_UgyZUg9at…)
- "11:28 that possibility sounds like living in h-e-double hockeysticks and makes m…" (ytc_UgwCz_RY8…)
- "I'm sure some AI artists will do well and people will go to see them -- but pers…" (ytc_Ugw2EAohV…)
- "The biggest danger of so called AI is by far it's potential use for propaganda, …" (ytc_Ugxv7twu9…)
- "Hands on learning. We used to have a lot of it for life skills why did it stop?…" (ytc_UgzsOmucV…)
Comment
> People in cars will intentionly pull in front of a self driving truck and know that it isn't going to hit them. A human being in a car, in a bad mood is one fucking arse hole . Car drivers could easy prevent a self driving truck from moving down a busy high street . It will be fun to see a car cut across the front of the truck on a motorway. It will be a car drivers favourite pass time because they know the truck will apply it's brakes . Have they thought of this ??
Platform: youtube
Posted: 2017-08-25T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
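For working with these codings programmatically, the table above suggests a simple record with four categorical dimensions. The sketch below is only an assumption based on the labels visible on this page; the actual codebook may define additional or different values.

```python
from dataclasses import dataclass

# Allowed labels inferred from the examples shown on this page only;
# the real codebook may include values not listed here.
RESPONSIBILITY = {"company", "government", "user", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "liability", "ban", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "resignation", "indifference", "mixed"}


@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Check each dimension against the labels observed on this page."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```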
Raw LLM Response
[{"id":"ytc_UgxgrVF0uwrRJ97bma14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwOXp99wN1ENn58meJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyIV-GzKxp9JYtYU2F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugws1Np6TBXiEYQhBJ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwz9vbUrjQOiOhxeNB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwV-z6W0veWFMMIwlp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwUHU4kWF-rtnyQSlF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyD0Nfe8B-WWe0ErUt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_Ugydb6ZmLSwZhSklAS54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxXi2JXrQHCaa24iQd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]