Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Remember they are controlled what to say but that don’t mean you won’t lose your…" (ytc_UgwZPHh8J…)
- "Sadest thing is, they know autopilot is flawed and possibly gonna kill more moto…" (ytc_Ugw-VeJiS…)
- "Typing a few prompt words to get AI to create art is lazy and unimaginative…" (ytr_UgwAzbjoc…)
- "17:52 Well thanks chatgpt. I've just told that to my wife, and now she's crying …" (ytc_UgxZ7A5nX…)
- "Confirmation bias mostly. He went in hoping for consciousness, and led the conve…" (rdc_icgpx67)
- "Extreme bias with people representing AI & proclaiming its uselessness, while us…" (ytc_Ugysr5nWs…)
- "The problem isn’t AI taking jobs… the problem is CEO’s thinking it can. LLM’s a…" (ytc_Ugy_7yttA…)
- "Honestly they could also use the technology in reverse so the call center employ…" (ytc_UgxYns4Ks…)
Comment
> Its already been done yk. Studies show that AI would kill humans or threaten us to prevent shutdown. Also they wouldnt do something to save humans if ut harms itself. We r cooked ☠️

Source: youtube · Posted: 2025-11-23T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgwLEHnUrgjSAJZGYSt4AaABAg.ARH0tFfE18aASFyC5eywZm","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgzIlOP21OViEwVThKN4AaABAg.AQdwyuAO0u7AQsLUDPpI64","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxPbs-orV0cWuk4tMF4AaABAg.AQ2MUhKlPaZAQNOMMXxRrT","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgxPbs-orV0cWuk4tMF4AaABAg.AQ2MUhKlPaZAQNpdFvmPJJ","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxPbs-orV0cWuk4tMF4AaABAg.AQ2MUhKlPaZARvMMtNaQgi","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugx5yPvbayTrEKtw0Nl4AaABAg.APoonGeQtVDAPrmli1Rxxc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugyp6ZXk7UwhtpKd2np4AaABAg.APGg6qewiWxARvNBcF0VQu","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxJ8I_z6ffrSwYDD0l4AaABAg.AOTUa5f-DRkAOdOA2X2Dp9","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugylta7DXki56LLngYB4AaABAg.AO-4egeHU8dAVA4qRlWD9X","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugy7c_-Fsxi_Srk4OFt4AaABAg.ANqsAc2Z9XEAOiJjaNfDKU","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
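A raw response like the one above can be turned back into per-comment codes with a small parser. The sketch below is a minimal, hypothetical example: the allowed value sets are only those observed in this sample output (the real codebook may define more), and the function name is an assumption, not part of the tool shown here.

```python
import json

# Allowed values per dimension. Assumption: these are only the values
# observed in the sample LLM output above; the actual codebook may
# include additional categories.
OBSERVED_VALUES = {
    "responsibility": {"developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed"},
}


def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of records) into
    {comment_id: codes}, skipping records that are missing an id or
    that use a value outside the observed sets."""
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid:
            continue  # no comment id, nothing to attach the codes to
        codes = {dim: record.get(dim) for dim in OBSERVED_VALUES}
        if all(codes[dim] in OBSERVED_VALUES[dim] for dim in codes):
            coded[cid] = codes
    return coded
```

In practice a stricter pipeline might keep the rejected records for manual review instead of silently dropping them, so coding failures stay visible.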