Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Right now, AI systems seem heavily subsidized. OpenAI does not charge people to …
ytr_Ugz-F8vvV…
National security or freedom? Whats it called when the FBI can make a deepfake a…
ytc_UgxL36EX1…
Then there will be global income for all.. it is okay we should not be worried a…
ytc_UgxzwGSwv…
Everyone is in a race to build the perfect AI so development will never slow dow…
ytc_Ugz1D_jh3…
I was just thinking, perhaps the events of Covid/lockdown gives some very basic …
ytc_UgxrIrCSe…
To be fair, as an artist, I don’t even call it “AI Art” because the generated im…
ytc_UgzlzySZa…
This is just the illusion of a debate.
The article frames it as if public back…
rdc_dzdzpje
Generative AI cannot think, I don't think this dude understand how AI works. It …
ytc_UgyhYU6Sp…
Comment
this unfortunate accident could have been to somebody/something close to you, which means that improvements for a more safe robust autonomous driving is required. the question becomes how safe does the system need to be? how many feet or meters will be guaranteed by the mfr around the vehicle to be safe at what speed?
youtube
2018-03-21T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyBwr53QSrFwsgZ6Fx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzFdYKkc3EPrgIw4QB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxyqDNXW822gUmjenp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwILlhd8deTJLVCDhV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzyAX9tJXO7m_YMoOZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwmTB7eP-BaIKS3dZl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyv__Up7HC5I2xbfa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdjWT7OaG50E2OsJR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw0hEIGcJirB1lT3Ut4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw92ZW-Q11YB0JOI9Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
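The raw response above is a JSON array, one object per coded comment, with four closed-vocabulary dimensions. A minimal validation sketch, assuming the codebook contains only the values visible on this page (the full codebook may define more categories, and the id prefixes `ytc_`/`ytr_`/`rdc_` are inferred from the sample IDs shown):

```python
import json

# Allowed values per dimension, inferred from the codes visible on this page.
# Assumption: the real codebook may contain additional categories.
CODEBOOK = {
    "responsibility": {"government", "company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference"},
}

# Id prefixes seen in the samples: YouTube comments/replies and Reddit comments.
ID_PREFIXES = ("ytc_", "ytr_", "rdc_")

def validate_batch(raw_response: str) -> list[dict]:
    """Parse a raw LLM response (JSON array of coded comments) and keep
    only rows whose id prefix and dimension values are well-formed."""
    rows = json.loads(raw_response)
    valid = []
    for row in rows:
        if not str(row.get("id", "")).startswith(ID_PREFIXES):
            continue  # unrecognized comment-id prefix
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid
```

A check like this would run before the rows are stored, so that a malformed or hallucinated value in the model output is dropped rather than written into the coding table.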