Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Founders of Open AI issued a warning. Yet people point out that only humans have morality. but it was a bunch of humans who created and persisted in the creation of every deadly weapon including Nuclear bomb ever, not just the early low yield 1 mega ton that did such horrific devastation in Hiroshima and Nagasaki. Not only that, but they saw that and then continued to develop the bomb and increase the yield. Where is the gut feeling that this was dangerous? And if it was only the scientists and developers scared of their creation but the government and the army now had taken control, why do they stay in the roles. Why don't they stop. They don't respect the public, their warnings are all part of some scheme to increase interest in their product and increase their companies value and their net worth. Every one of these guys just want to be the next Elon Musk/Bill Gates or Mark Zuckerberg.
youtube
AI Governance
2023-07-07T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgySKW176UPvripbH5x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyjY2dXlFoeIhNacMR4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxaDIhRkSCKxtWw5nB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxRbWRzLCpjC675Vs94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFrtnsGVlYL77Hf-B4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5mTfOeUxvWBJMBP14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwgWQElQQR_t2y_Uo14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwlnSezJ9FGb_BLqYh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzPX3-Gh9zoltjM77V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgycQu9Gv_dnxZkCA4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
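The raw response above is a JSON array of records, one per comment, each carrying an `id` plus the four coded dimensions. Below is a minimal sketch of how such a response could be parsed and sanity-checked before use. The allowed value sets are inferred only from the codes visible on this page; the actual codebook may define additional values, so treat `DIMENSIONS` as an assumption.

```python
import json

# Allowed codes per dimension, inferred from values seen on this page
# (assumption: the real codebook may include more values).
DIMENSIONS = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "indifference", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record's values."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Example with one record shaped like those above (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"regulate","emotion":"outrage"}]')
records = validate_response(raw)
print(len(records))  # 1
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents an off-codebook label, so bad records fail loudly instead of silently entering the coded dataset.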