Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Your understanding of how ChatGPT works "day to day" is off. The model doesn't c… (ytc_UgwvBGKhw…)
- Well... obviously? If we self-extinct, every day, we truly get a day closer to i… (rdc_jfaccsi)
- I understand your concern! The balance between AI efficiency and human needs is … (ytr_UgzhBF4Iz…)
- you can make a safe a.i. and the perdiction of 2027... well there will be the gr… (ytc_UgynnrkPW…)
- Black man 99.9% likely to be involved in a shooting / ...Is shot twice / You do r… (ytc_UgxlEzEKX…)
- Humans are ALREADY being destroyed by religions! / We're moral beings, and artific… (ytc_UgwDFH75C…)
- This was great until the overcorrection went too far the other way, and terms li… (ytc_Ugxq4_Cdy…)
- I disagree. This has nothing to do with personality. "The man" is trying to save… (rdc_o45lm1l)
Comment
AI is far more ominous than the atom bomb. All these founders who developed AI and now declare someone needs to stop it, piss me off. Bloody cheeky to cry 'I'm scared of the Frankenstein monster I created; stopping it is now everyone's responsibility.' No, it is their fault and they should be held accountable. There will be no breaks because nations want to exploit AI for military purposes and will never stop or ever trust other nations halted AI development.
youtube · AI Governance · 2023-07-07T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
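
Taken together, the table above amounts to one record in a small four-dimension coding schema. A minimal sketch of how such a record might be typed in Python; the `CodedComment` name is an assumption, and the value lists note only what is visible on this page (the pipeline's codebook may define more):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedComment:
    """One coded comment; fields mirror the coding result table above."""
    comment_id: str       # e.g. "ytc_Ugw_lS5Ed..." (platform-prefixed ID)
    responsibility: str   # observed: developer, company, ai_itself, distributed, unclear
    reasoning: str        # observed: consequentialist, deontological, unclear
    policy: str           # observed: regulate, liability, none, unclear
    emotion: str          # observed: outrage, fear, mixed, indifference
    coded_at: str         # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```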
Raw LLM Response
```json
[
  {"id":"ytc_UgyYK6Pl_7tuhZ-0z4B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw_lS5Ed2T8VWsT4bZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwR4e2yVi1QTz60BTJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxfRLTioDl4jNoKKWN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzlKMXO626NIvk9jr14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy7Y0w3NnD1kv9Vm494AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzW02AiHkfiSy1TUjF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgylhP1uYsj_w84MEi14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzG1JH5Rq6nzhjSIh94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzRYFYqTtKUreLXXUJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
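
The raw response is a JSON array of per-comment codes keyed by `id`, so the by-ID lookup offered above can be implemented by parsing the payload and indexing it once. A minimal sketch, using field names and IDs taken from the response above (truncated to two records; `index_by_comment_id` is a hypothetical helper name). Note that the second record's values match the coding result table, suggesting it corresponds to the comment shown:

```python
import json

# Batch coding response, as printed above (truncated to two records here).
raw_response = '''[
{"id":"ytc_UgyYK6Pl_7tuhZ-0z4B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw_lS5Ed2T8VWsT4bZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]'''

def index_by_comment_id(payload: str) -> dict[str, dict]:
    """Parse a batch coding response and index its records by comment ID."""
    records = json.loads(payload)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
# Matches the "Policy" row of the coding result table above.
print(codes["ytc_Ugw_lS5Ed2T8VWsT4bZ4AaABAg"]["policy"])  # liability
```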