Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- my chatgpt sead:If I’m answering **personally**, here’s what I’d do—and why: … (ytc_Ugy-15lAR…)
- I beleive the next great AI feature will be confidence ratings for everything it… (ytr_UgxaXO2TN…)
- I dont understand the hate. Art is still special even with everyone playing arou… (ytc_UgyXZzfns…)
- There simply has to be a manual override or way to terminate an AI like HAL 9000… (ytc_UgxDp0uql…)
- Je w Sanders, that's not the issue at all. Not even a little. The issue is the s… (ytc_UgxPWALbW…)
- Welcome into the next age, if humanity has found this out, AI already knew it, s… (ytc_Ugw7ff5KT…)
- Mr. Dore, When they talk about truck drivers they are primarily talking about l… (ytc_UgjxS0E8U…)
- 5:15 The only reason there is a double standard is because people know that in m… (ytc_Ugy-99Oa8…)
Comment
You seem to be telling us that the car did not see the pedestrian, but as far as I know that is not yet known as the accident details have not yet been released (especially the car data). There are clearly circumstances that would lead to an accident even with perfect vision and perfect software, so your assumption that the software was less than perfect is, at this moment in time, unfounded.
One interesting question I have, is to what extent self driving vehicle software is designed to handle the aftermath of a collision. Does it simply slam on the brakes on collision or does it try to find a safe place to park? I think a human driver would have to assess the situation in detail and I wonder if such assessments have been done by the self driving software designers or if they are assuming no accidents. I see lots of test runs for scenarios, but zero published test runs for post accident scenarios.
Source: youtube · 2018-03-21T12:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyBwr53QSrFwsgZ6Fx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgzFdYKkc3EPrgIw4QB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxyqDNXW822gUmjenp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwILlhd8deTJLVCDhV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzyAX9tJXO7m_YMoOZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwmTB7eP-BaIKS3dZl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyv__Up7HC5I2xbfa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxdjWT7OaG50E2OsJR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw0hEIGcJirB1lT3Ut4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw92ZW-Q11YB0JOI9Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
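The coding-result table above can be recovered from the raw response by parsing the JSON array and indexing it by comment ID. The following is a minimal sketch under that assumption; `codings_by_id` is an illustrative helper, not part of the tool, and the abbreviated array below reuses two entries from the raw response shown above.

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings,
# with one object per comment ID (IDs taken from the response above).
raw_response = """
[
  {"id": "ytc_UgyBwr53QSrFwsgZ6Fx4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugw0hEIGcJirB1lT3Ut4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

def codings_by_id(response_text: str) -> dict:
    """Index the model's JSON array of codings by comment ID."""
    return {entry["id"]: entry for entry in json.loads(response_text)}

codings = codings_by_id(raw_response)
row = codings["ytc_Ugw0hEIGcJirB1lT3Ut4AaABAg"]
print(row["responsibility"], row["emotion"])  # unclear indifference
```

A lookup like this would back the "Look up by comment ID" field: each dimension column in the result table maps directly onto one key of the matching JSON object.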