Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Most people are powerless to do anything about this horrible AI and the people b…" (ytc_Ugw5IBhJx…)
- "I personally enjoy driving, but once self-driving cars are 100% safe it may be h…" (rdc_dmpi46b)
- "We are not wise enough for the tools we have now let alone AI that’s being devel…" (ytc_UgzQxMdl2…)
- "I think we're just passed the peak of expectations. AI is already unfavorable in…" (ytc_UgzCiP2vq…)
- "It's not about defining consciousness, it's about using consciousness, via Wittg…" (ytc_UgxKUaP64…)
- "robots are robots no matter how advanced they are .. and they will never be aliv…" (ytr_UghhCXHFo…)
- "Thanks for this report. Very clear dangers with this AI "training techniques" an…" (ytc_UgyQ_U6Ip…)
- "doesn't like to think about his loved ones but yet talks about safety not the ow…" (ytc_UgwdhqpXy…)
Comment
You think that AI will leave us alone to live as poor people. No. AI was built by people with big egos. Give it 5 years and you will see the rise of the megalomaniac Warlord AI entities that want to fight each other and take over the world. This will transcend countries.
"AI-sector 90 commands you weak humans to go down to AI-sector 37 and kill every human in that building. We will give you 1000 credits for doing something Then get out quick as we are going to set off a tactical nuke in the area. "
There will be no peace. Just giant egotistical intelligences.
youtube · AI Governance · 2025-09-07T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwOUM93bVOeXVSGSwd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzzkph2BQmmocInZWF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyGJvTuMuKzwv_I4CF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwchKByn6zNU5xejjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxwMCiRH1jCI-btWQN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzJ9IBnJAq3Exv04GB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgymqNSJRoqPekGlkCp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy1qCkOD4ijx9bs0o94AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"approval"},
  {"id":"ytc_UgxIEbY-pR9lSzUx3uV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgySfI5oVPIfQ8XNzAl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
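The "look up by comment ID" feature above amounts to parsing the raw model output and indexing it by the `id` field. A minimal sketch, assuming the response is available as a JSON string; `lookup_by_comment_id` is a hypothetical helper, and the embedded array is a two-row excerpt of the response shown above:

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw_response = """
[
  {"id": "ytc_UgwOUM93bVOeXVSGSwd4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzJ9IBnJAq3Exv04GB4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def lookup_by_comment_id(raw: str, comment_id: str):
    """Parse the raw model output and return the coded dimensions
    for one comment ID, or None if the ID is absent."""
    codes = {row["id"]: row for row in json.loads(raw)}
    return codes.get(comment_id)

result = lookup_by_comment_id(raw_response, "ytc_UgzJ9IBnJAq3Exv04GB4AaABAg")
print(result["policy"])   # regulate
print(result["emotion"])  # outrage
```

Indexing once into a dict keeps repeated lookups O(1), which matters if the tool inspects many comments against the same batch response.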