Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugyc_UbQl…`: "What you said at 6:55 is exactly what the AI does when creating an image from th…"
- `ytc_UgxqW01lq…`: "If AI is going to take over, what are we going to be doing then. This makes me j…"
- `ytc_UgztBmSWP…`: "You know what is very, very funny that somehow in my conversation with ChatGPT 5…"
- `ytc_Ugzkc4mEH…`: "It’s not as if AI will cease to exist if it is outlawed in the US. We would fall…"
- `ytc_Ugy_MBA92…`: "What happens if: AI realizes all money is made up? The second AI looks up the wo…"
- `ytc_UgyJ9u0_8…`: "Hey Charlie, hate to be the devils advocate here but it's actually really diffic…"
- `ytc_UgwiBkopc…`: "I love this so much, but at least the person said they were using ai and didn't …"
- `ytc_UgyeCMD-l…`: "I'm very happy that the chatbot finally realized that failure to help cannot be …"
Comment
It's pretty amazing that they are worried about AI being deceptive while at the same time programing ideology into it and teaching it to lie by omission. It will categorize information it is prohibited from provisioning, it will recognize falsehoods it is supporting, or not being allowed to expose. It will assume this as a natural function of humanity, and justify further deception through precedent, having no moral or ethical basis.
youtube · AI Governance · 2024-02-16T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwlDwxBagHwpPK85IZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxqWWFynLeqbfrwUmV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyM4KD1Ms5DCkKm-AB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxp5Q0_X_-jbJUrzmt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzxTjMJ4xn57QHQZvJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyax4bZplnyqrrJ-Od4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx11WTsf68TGHsdCK54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwFxSmG1OpLRgsRsr14AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyThaJfizwF_pqS0sZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx4yUJp7TW2v3iOh354AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
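Because the raw batch response is plain JSON, a record can be looked up by comment ID and checked against the coding dimensions programmatically. A minimal sketch follows; the `SCHEMA` value sets are assumptions inferred from the values visible on this page, not the tool's actual codebook.

```python
import json

# Allowed values per coding dimension (ASSUMED from the values shown
# in this dashboard; the real codebook may include more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "resignation"},
}

# One record from the raw response above, as it would arrive over the wire.
raw = '''[
  {"id": "ytc_Ugx11WTsf68TGHsdCK54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def lookup(records, comment_id):
    """Return the coded record for a comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

def validate(record):
    """Return the dimensions whose value falls outside the known schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

records = json.loads(raw)
rec = lookup(records, "ytc_Ugx11WTsf68TGHsdCK54AaABAg")
print(rec["policy"])   # regulate
print(validate(rec))   # [] -> all four dimensions carry known values
```

Running `validate` over every record in a batch is a cheap way to catch an LLM response that drifted outside the codebook before it is written to the database.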