Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by ID, or inspect one of the random samples below:
- "A CEO’s job sounds like honestly the perfect job for an AI. Any negatives a CEO …" (ytc_UgwP2k5z5…)
- "If you can't beat them, join them. There are two ways this can be achieved, one…" (ytc_UgyI0tYZ2…)
- "the ~~Chinese~~ west buying property in ~~Vancouver~~ Thailand is a travesty tha…" (rdc_dy88st1)
- "When FSD not supervised (currently on test @Austin on Robotaxi) will come to Eur…" (ytc_Ugw0WxeuU…)
- "Doctors Don't Know NOTHING. Ai will be alot better❤❤❤ Doctors ONLY work for Mone…" (ytc_Ugwt9qbVG…)
- "Yes, then the checkout stops, displays «random check», and an employee com…" (ytr_Ugy3czmRs…)
- "1:31:00 This was the original plot of The Matrix. Humanity was used as a compute…" (ytc_Ugx1Praam…)
- "There is a series on TikTok about an AI that has only 100 days to live and it's …" (ytc_UgxFVLkzu…)
Comment
> Hi, one thing is to keep a hybrid model with just a bit more level of controllability. So that it can hunt kill the other models… Is this something that is already thought of ?? A terminator 😂😅😢😮 ?? But one humans control??
> Or engineer weaknesses in their code . Quick unpicks.
> It does worry me a bit that I’ve not seen anyone on vids like this listing the different ways we have to kill AI? Is no one working on this practically. Their seems to a lots debating phase and whether or not it’s dangerous??
> Or am I talking about modern war fare? I bet pentagon etc already has some of this tech so people don’t talk about it??
Source: youtube · Topic: AI Governance · Posted: 2025-12-13T03:3…
Coding Result
| Field | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyIifZRUPO956SIa3N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCEAFHEPwOGtYPSQ94AaABAg","responsibility":"elite","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxkay22JrYHlgIz6DF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyzBFLUR7uWzcPpAhJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyzc4n19CJiA9H3huJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzVo_fJKGyjlIoKC0t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzH8kwMGzBFhuA0-7t4AaABAg","responsibility":"elite","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzSFOOUh7oBNvjfNH94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwtdSu3rB6t7-3Se0N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy77fMJ2hSuju8CVG54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```