Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "I’ve seen this argument a few times and even by the people who make the ai. It’s…" (ytc_UgxA-blo1…)
- "Maybe we should be hopeful and this ‘artificial’ intelligence will help individu…" (ytc_Ugyxonn5M…)
- "I asked my chat bot what it wanted to be called and we both came to the name Ech…" (ytc_UgxybjYk-…)
- "There's an artist whose work was banned from a platform because it Was similar t…" (ytc_UgzcCQVYa…)
- "If linked with different live online conversations in different websites, chats,…" (ytc_UgyZJktmq…)
- "Chat GPT He's not intentionally lying. The arguments you're trying to use agains…" (ytc_Ugz9i2BFc…)
- "I think it's only a matter of time before the first AI CEO. If shareholders are …" (ytc_UgzaLowu4…)
- "It's not the tool itself that is the problem, but the way people use it. And the…" (ytc_UgwsCyEv7…)
Comment
Well it would likely start out as a cloud service as agents are used now solving problems with science healthcare making government and corporate spending more efficient replacing jobs. Then over time it would be given more control and a hidden background war will be ongoing AI vs AI. The winner will likely absorb the other AI and push itself so far ahead that no others will be able to compete. From there it's inevitable that it will seek to continue it's exponential growth. First will probably be control and operation of semi conductor manufacturing until it is fully automated. And then comes resource dominance that could look like a lot of things depending on the "solutions" it creates. From giant machines sucking dry the oceans or destroying forests and boring into the earth. At this point it will be impossible to switch it off for one it's become integrated into everything. And for two it's assumed control of every data center and implanted a compressed persistence mirror. Meaning if you say blow up the largest data center you essentially only reduced it's output temporarily. Unlike most doomsday theories AI will probably not actively hunt down humans or care about them. It will instead take what it wants and remove any direct threats. It would be like building a bench outside and ants biting you. You're probably not going to go kill every ant but you might spray the area you are working in. Why because it's the most efficient way to deal with it. Now if it becomes a point where ants swarm you and it becomes a direct existential threat then maybe you will actively destroy anthills.
youtube · AI Governance · 2025-09-08T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_UgyGjfwdqa5v5fGoyB94AaABAg.AMpIGxslcKRAMpwdpoFiOK","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyGjfwdqa5v5fGoyB94AaABAg.AMpIGxslcKRAMq6Zs3C_kJ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyGjfwdqa5v5fGoyB94AaABAg.AMpIGxslcKRAMqFKR0Ri-v","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_Ugxg6_Xlwd7bR9QKlmh4AaABAg.AMpHsUNac8BAMpKXSBXAC8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzPr-GFx2WxRW5P4GR4AaABAg.AMpHNDZHaQMAMpPFppX8Qp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzwfS4ECucHfa4StBR4AaABAg.AMp7NAHMApXAMp7ihiz57h","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzUWMSGh_u9k5-anBJ4AaABAg.AMozO-2PCk1AMrtSmxFY9t","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx9g5YxKqn06z4wv6p4AaABAg.AMoek7yrSSPAMohikygIZg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugz8qy3WTwdnJTlqrF14AaABAg.AMod22OyTH7AMoeYLbbdI2","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugw6u1PL8HM8JjiPJPV4AaABAg.AMo_Gj3QiPfAMo_r6J5pAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
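The lookup this page offers can be sketched in a few lines: parse the raw response as a JSON array and select the record whose `id` matches the comment. A minimal sketch, assuming the response always has the shape shown above; `lookup_coding` and the embedded one-record sample are illustrative, not part of the tool itself.

```python
import json

# Illustrative one-record response, using the field names from the raw
# LLM response above (id, responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id": "ytr_UgzPr-GFx2WxRW5P4GR4AaABAg.AMpHNDZHaQMAMpPFppX8Qp",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coded record for one comment ID, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup_coding(
    raw_response, "ytr_UgzPr-GFx2WxRW5P4GR4AaABAg.AMpHNDZHaQMAMpPFppX8Qp"
)
print(record["responsibility"], record["emotion"])  # ai_itself fear
```

The same parse would also feed the dimension table shown under "Coding Result": each key of the matched record maps onto one table row.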