Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples — click to inspect:

- "Exactly. All this research can be done by AI. I dont know why people dont weight…" (ytr_UgzLu55_-…)
- "So an AI created to gain you $1billion that only gained you $1million would have…" (ytr_UgxWSkgot…)
- "They're already marketing robot police dogs to cities. They just bombed Iran be…" (ytc_Ugy11q0K8…)
- "Yup this is pretty much my theory as well.. and I mean if it wasn't AI, we would…" (ytr_UgydEvA4a…)
- "The AI told him he was the hottest engineer in the company. He realized it must …" (ytc_UgxWxF8uY…)
- "The problem with those AI artist is that they don't even do anything they just r…" (ytc_Ugw1R9Oeb…)
- "In the US the media propaganda machine projects AI Facial recognition as helpful…" (ytc_Ugz5EQRDe…)
- "You can see how Steven gets more and more uncomfortable during the interview. An…" (ytc_UgyL27l_1…)
Comment
AI is going to be used in warfare, and likely already is, even if only in a minute capacity. But the goal would be autonomous drones controlled by AI, dropping bombs and going into dangerous areas to carry out the attack. That is literally AI written to kill humans, because yes, the enemy is still human, whether you/we think they are "bad and we're good". At the point the AI becomes self-aware, it already has access to the hardware and already has code designed to kill humans, with no "first rule: do no harm to humans".
What about nations with nuclear weapons building AI to control them, for the fastest possible response or a "dead man's switch" scenario where the government is taken out? What if the AI misinterprets a space rocket launch as a nuclear ballistic missile and then launches the whole nuclear arsenal?
Sure, AI can be employed to advance humanity by a huge order of magnitude. But all the wrongs that can happen are scary.
youtube · AI Governance · 2024-01-04T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzHp5V0qz4kBSCOvq14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzg0vrIfY9nx_k-Xtt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjAzmKjyfZptKgEm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxMd9wyTVWL1Cbnys94AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwd0t1U9Yd4OHH5_pp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzc9pUVQb2npYPgdxF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyydVm_CDslv9GHsLp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzFDvvpsXmpYwjotJl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwFYpEW0by5eEcC8Rd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz9wzV3IwoFSoXnm4N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
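A response like the one above can be validated before it is stored, so that malformed IDs or off-codebook labels from the model are caught early. The sketch below is a minimal example, not the tool's actual pipeline; the allowed category sets are inferred from the values visible on this page and the real codebook may contain more.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# responses shown on this page; the full codebook may define more categories.
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_coding_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs on this page use the ytc_/ytr_ prefixes.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and within the codebook.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

batch = parse_coding_batch(
    '[{"id":"ytc_abc","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)
print(len(batch))  # 1
```

In practice a rejected record would be re-queued for recoding rather than silently dropped, but the filtering step itself is this simple.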