Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Sad, but of course, totally inevitable. Good, balanced video!
Humans are appalling drivers, machines are not yet, (and of course never will be) perfect, but they’re rapidly getting to the point that they’ll be better than most humans.
Humans get tired and even fall asleep behind the wheel, distracted, drunk, drug affected, and so much more. If we wait with autonomous vehicles till they’re absolutely perfect, we may never see them. If we wait till they’re significantly better than MOST humans, (which is not far away), then lots of lives will be saved. There will still be accidents, even deaths, but in the long term, I’ll trust machines over fellow humans every time.
There will of course have been pedestrians killed somewhere around the world just while I’ve been typing this out, but of course you will not hear about that in the news. Also remember, the best driver in the world will not avoid all pedestrian accidents, (though I’m in no way casting blame here, just stating a simple fact.)
Source: youtube · Posted: 2018-03-21T01:1… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgykkJHY2zNi0eHSo_R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw-wvkKZc9KIwiNtj14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy0o7Dc_M5GWXuHKHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwPryzJFiadJAgTPu54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJapNITb9mBuq_T1Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNSoZT7mB4mHEBUfJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJnkToSlwv0ZS6Qox4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy7WP_A0S-nsWLEsTx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQVReE7RsUkUYUn494AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxWbQwzYPtwc9j8kOR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
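The raw response above is a JSON array in which each element carries a comment ID plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and indexed for lookup by comment ID (the function name `index_codings` is hypothetical, and the example reuses two entries from the response above; entries missing a dimension are skipped rather than trusted):

```python
import json

# Example batch response, copied from two entries in the raw output above.
raw_response = """
[
  {"id": "ytc_Ugy0o7Dc_M5GWXuHKHN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy7WP_A0S-nsWLEsTx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

# The four coding dimensions every entry is expected to contain.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID.

    Entries that are missing any of the four dimensions are dropped,
    so a malformed model output cannot silently produce partial rows.
    """
    codings = {}
    for entry in json.loads(raw):
        if all(dim in entry for dim in DIMENSIONS):
            codings[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codings

codings = index_codings(raw_response)
print(codings["ytc_Ugy0o7Dc_M5GWXuHKHN4AaABAg"]["emotion"])  # resignation
```

Indexing by ID mirrors the "look up by comment ID" workflow of the page: once the batch is parsed, any coded comment can be retrieved in constant time.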