Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
First one should be real does ai also animates the white dot bettwen the eyes…
ytc_UgxUIjcAo…
why are they using AI to replicate your style- are they that lazy??? Your whole …
ytc_UgxuCOOcP…
Jesus died for AI so that means we are seeing the Spirit of God moving! https://…
ytc_UgxdomAxd…
a superintelligence AI will realize immediately the billionaire class should be …
ytc_Ugwch7WjN…
Who is held RESPONSIBLE when bad things happen? The THREE PEOPLE that started th…
ytc_Ugz3L-zd0…
The problem is there are high level development personnel within the Ai systems …
ytr_UgwKNmkfI…
I feel like AI is suppose to be used as a tool, i dont think people have the rig…
ytc_UgwOL-qEQ…
Anthropics competitors will run with the contract, noone is off the hook due to …
ytr_UgyfoiJWb…
Comment
its really not a grand thing , this is just copying whatever data it had been fed.... and what data was fed to the ai ? human data of course, so its really not surprising that the decisions that the llm generates will be heavily influenced by human logic like survival and personal gain etc. what to do to shut it down? kill switch. nothing is going rogue this whole paradigm is too over hyped
youtube
AI Governance
2025-05-28T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxgKfOaHDdN_rcNqd94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwU6jUHAwTtkwX2Af54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzsMY8cOXXACkmDZ0p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyUBASy2QNqZQdPjTx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw1NCsO0rfG5cF3MUl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyeNcbn6-8d7eBt-YR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugx3Ni37noR36ZwlllZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugxi0LEFrF4YxnhSde14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxMPZJ7fEqviJQs5Sp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYSmgK1SIqJBjVM_l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
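The raw response above is a JSON array of per-comment coding objects, and the tool supports look-up by comment ID. A minimal sketch of how that lookup could work, indexing the parsed array by `id` (the two-row sample string and the dimension names are taken from the response shown above; the variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment coding objects,
# in the format shown above (abbreviated to two rows here).
raw_response = '''[
{"id":"ytc_UgxgKfOaHDdN_rcNqd94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyeNcbn6-8d7eBt-YR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coding by its ID.
coding = codings["ytc_UgyeNcbn6-8d7eBt-YR4AaABAg"]
print(coding["responsibility"], coding["policy"])  # → developer liability
```

This matches the "Coding Result" table for the selected comment, whose `responsibility` is `developer` and `policy` is `liability`.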