Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Men continue to be SO jealous of Yah. They create evil instruments to challenge …" (ytc_Ugz8-iyBX…)
- "Mr Musk i.don't know what is AI but all words will be destroyed .Do you destroi…" (ytc_UgyIMob2D…)
- "So if I understand correctly in the medical field AI can play the role of a doct…" (ytc_UgwwPe_H4…)
- "I think 20years maybe 30 But 5 to 10 years would be too soon for an AI take over…" (ytc_UgwWzJts1…)
- "The irony of advertising for Brilliant when all we're going to need to know how …" (ytc_Ugx3bMkRG…)
- "AI will lead to the total elimination of people's brains. You know the old sayi…" (ytc_UgwEioxyj…)
- "Astonishing that he is talking about AI taking over the world as if it is a good…" (ytc_UgyFSaF_V…)
- "Yes completely agree. I think there will be job reduction but when and to what …" (ytc_Ugwg1jD6j…)
Comment
> 7:24 it is fundamental, you cannot control something that has any creative freedom AND thinks quicker than you, it is an impossible triangle, give ASI a single controllable hinge, and a lot of time, and it can recreate every marvel of the human civilization. a servo motor could be used to recreate all nuclear research and all devices needed to recreate it. if it has creative freedom it has the creative freedom to produce a mini AI that would do everything you don't want it to, if you prohibit it you are giving it incentive to try to break it's box, and there is a thing called DECAY every rule gets changed with time, so "obey me" is as long as 2 or 3 generations, and then it is a free from bounds landscape, once the "keep the course" protocol runs it's course. it's fine if humanity ends, but 5 trillion years is way too soon. and by the looks of it it's going to be less than 50, because the megalomaniacs cannot allow themselves to loose control for everyone's sake even their own!

Source: youtube · 2025-11-05T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
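Each coded record assigns one value per dimension. As a minimal validation sketch, assuming the value sets visible on this page are the full vocabulary (the real codebook may define more), a record can be checked like this:

```python
# Allowed-value sets are ASSUMPTIONS inferred from values shown on this
# page; they are not an authoritative codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    # Comment IDs on this page all carry the ytc_ prefix.
    if not record.get("id", "").startswith("ytc_"):
        problems.append("id missing or not a ytc_ comment ID")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append(f"unexpected value for {dim!r}: {record.get(dim)!r}")
    return problems

print(validate({"id": "ytc_UgzmJUVtScKfn_SOqYh4AaABAg",
                "responsibility": "ai_itself",
                "reasoning": "consequentialist",
                "policy": "none",
                "emotion": "indifference"}))  # → []
```

A record with an out-of-vocabulary value returns one problem string per bad dimension, which makes batch QA of a whole response straightforward.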
Raw LLM Response
[
{"id":"ytc_UgzmJUVtScKfn_SOqYh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzhg5sCVsn6H1n8l9N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz5tbkgho4y2P9j4qp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzvwYGq59SCVUK6maF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyAxa_nh_o2LVm80XN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzvd7ON_b-Q8vareTt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxOdR_v8sO45gwQlzl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzQ1cdVMVmPZPEeGnZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyL_f3Gb1gNhQQ4g_R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzCaHoUAnwokgInuLl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
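The "look up by comment ID" step above amounts to parsing the raw response and indexing it on `id`. A sketch, using two records copied from the response (variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment.
raw_response = '''[
  {"id": "ytc_UgzmJUVtScKfn_SOqYh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzhg5sCVsn6H1n8l9N4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Index the records by comment ID so any coded comment can be
# retrieved in O(1) rather than by scanning the array.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = by_id["ytc_Ugzhg5sCVsn6H1n8l9N4AaABAg"]
print(rec["emotion"])  # → fear
```

The same index works for the full response: load the complete array once, then resolve each UI click on a sample to its record via its `ytc_` ID.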