Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "until the software can identify all hazards and not act irregularly leading to h…" (ytc_Ugwt2VqYl…)
- "I can 100% understand the concerns of AI, but why would regulations make any dif…" (ytc_UgzsYNwPK…)
- "Im saying this as a big AI defender , Im damn stick of ai “artists” who dont kno…" (ytc_UgxmuFPTO…)
- "To me, the eyes looked lifeless and didn't move much making it seem more like AI…" (ytc_Ugw0HCEsw…)
- "Apparently all it takes for people to like you is for you to be well spoken. Th…" (ytc_UgyAFibYA…)
- "The breadtube commie argument is "I get paid the value of three widgets an hour …" (ytc_UgzAWE7eb…)
- "ai isnt dangerous, all you need to do to stop an ai is turn off a power grid or …" (ytc_UgwFAiJQf…)
- "Well, AI doesn't buy goods or services or rent property, so... Who are they g…" (rdc_kig972r)
Comment
Can't we have a scenario wherein the AI decides to help humanity intead of deciding to destroy humanity?

5:00 is likely never to happen, humans can't hold back AI. A rather more likely outcome of superintelligent AI is that it will decide to help humans on its own volition. Even genetically. Without having to obey any human

Source: youtube | Topic: AI Governance | Date: 2025-08-14T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
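A coding result like the one above is a small fixed-field record. Below is a minimal sketch of one way to represent and sanity-check it in Python; the class name, field names, and label sets are inferred only from the values visible on this page, not from the tool's actual code, and the real label sets may be wider.

```python
from dataclasses import dataclass
from datetime import datetime

# Label sets observed in the coding results shown on this page (assumed, not exhaustive).
RESPONSIBILITY = {"none", "ai_itself", "company", "society"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"outrage", "fear", "indifference", "approval", "mixed", "resignation"}


@dataclass
class CodingResult:
    """One coded comment, mirroring the 'Coding Result' table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        # Flag any label outside the sets observed in the raw responses.
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for field, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {field} label: {value!r}")
```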
Raw LLM Response
[
{"id":"ytc_Ugw6zQ3tbOI2Y04Cbnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyyxSQz13FMxlaATM94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgydsWrUaghYf1ElErt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoKvibx8VavjvHGsd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwguGzCfJs4KwjWZKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxC7SL1jgHJUwJMkpl4AaABAg","responsibility":"society","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzpqZfwTr4Ya5Z10hN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVHGVKkjc6axcbzI14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx9VMoa4XEQAEZpGcl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzlEhDUifLS8lfcSlB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
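The raw response is a JSON array with one object per comment in the batch. A minimal sketch of how such an array could be parsed and indexed for the by-ID lookup described at the top of the page; the function name and error handling are illustrative, not the tool's own implementation.

```python
import json

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response (a JSON array of per-comment codings)
    and index it by comment id for lookup."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coding objects")
    index: dict[str, dict] = {}
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"coding object missing fields: {sorted(missing)}")
        index[row["id"]] = row
    return index


# Usage: look up the coding for the comment shown above.
# codings = index_raw_response(raw_text)
# codings["ytc_UgwguGzCfJs4KwjWZKJ4AaABAg"]
# -> {"id": "...", "responsibility": "ai_itself", "reasoning": "consequentialist",
#     "policy": "none", "emotion": "approval"}
```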