Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- its Elon Musk who first warn of the danger of AI. Elon Musk is the most intellig… (`ytc_UgwiF7jHN…`)
- Best way to make lawmakers listen is to make deep fakes of them doing illegal th… (`ytc_UgwFYc6VU…`)
- "Hi Soumyadeep, we are sorry to say that you got the wrong answer but in any cas… (`ytr_UgyTMv4gg…`)
- This might be why we cant find any Aliens - when they get advanced enough they f… (`ytc_UgxvQ7nZ1…`)
- Elon Musk made his AI believe LIES. Elon Musk made his AI believe that South Afr… (`ytc_UgwkCkVA4…`)
- i don't want to know what will they do to me if character ai takes over… (`ytc_UgyhrbQaH…`)
- @leiser_sa How do you knock out a robot? Well, you could always challenge it to … (`ytr_UgwAyYqxj…`)
- I don’t know. I’ve already heard a robot talk about something that makes her an… (`ytc_UgxMY4ZnQ…`)
Comment

> I feel this is being exaggerated, it might hinge the extreme sides to some extent however. Some kind of doomsday saying. A realistic scenario to AI development is that more control gets imposed eventually. Probably even on the cost of rapid development if that ever seems necessary to the expert eyes

| Source | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2025-06-16T14:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwnVSuzSOjrYdqWtdl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzkb0JgYMNNYS4Bbah4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyxSR6EJKMTP9gN_Rx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxRHJVsTqufHWovB1V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzpfELacn4dGlfkBb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw95Kev7pLCn2xahL54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxh0s_jNSrT_Ujhwb54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz04yGUE4Weo7XymBd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugw_q4uWMeHz7qZvZ3J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgydxUUgU2wIVK651ZF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
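A raw response like the array above has to be parsed and checked before its labels reach the coding table. Below is a minimal validation sketch; the allowed category sets are an assumption inferred only from the values visible in this sample, and the actual codebook may define additional labels.

```python
import json

# Category sets inferred from the sample output above (assumption:
# the real codebook may allow more labels per dimension).
ALLOWED = {
    "responsibility": {"developer", "none", "distributed", "unclear", "ai_itself"},
    "reasoning": {"deontological", "virtue", "contractualist", "consequentialist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "resignation", "approval", "indifference", "fear", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        # IDs in the sample start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

raw = '[{"id":"ytc_abc","responsibility":"developer","reasoning":"unclear","policy":"regulate","emotion":"fear"}]'
print(len(validate_coding(raw)))  # → 1
```

Rejecting out-of-vocabulary labels at parse time keeps hallucinated categories out of the downstream tallies, rather than silently storing them as new codes.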