Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UghzWQ0Xu… — "This guy's not really a megalomaniac and he knows very well that his manikins ar…"
- ytc_UgxTJ9tuw… — "AGI is comparable to a computer imo / Before, we used to have circuits which were …"
- ytc_Ugx-gnos4… — "THEIR A.I. IS PURE EVIL. / I FIRMLY BELIEVE THAT THEIR A.I. WILL BE USED AS THE […"
- ytc_UgyNuk0-J… — "Let me get this straight. First I make the monster and now I am here telling eve…"
- ytc_Ugxn6S3Ys… — "How do you feel about the use of artificial intelligence in political ads? Do yo…"
- ytc_UgwG5Eydj… — "This is several years to late... The AI image databases was made many years ago …"
- ytc_Ugxb-Amoi… — "15:12 chatgpt just trying to tell him "none of this BS we're talking about matte…"
- ytc_UgwFi8eyh… — "We have to thank this mother for coming forward. This is absolutely heartbreaki…"
Comment
@41-Haiku The race with China makes perfect sense in the context of security. Also, there is absolutely no benefit in limiting ourselves, while other countries keep pushing forward. Even if a country is a decade behind (and China is not at all behind) eventually they will develop it - we saw this with North Korea and their nuclear program. One of the poorest nations managed to build a nuke. On the contrary, we need to push even harder to get an edge and in the same time keep with red-teaming. Also, there are not enough proofs that LLM will ever develop intelligence. As a matter of fact, it might be a dead-end. Languages might be way too limiting for an intelligence to develop.
youtube · AI Governance · 2025-08-27T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytr_UgxUF3KUrbPkqfgeKXN4AaABAg.AMKMBPh5FEmAMNz_nkKoWA","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugwu5JxiALw9fLe0qXp4AaABAg.AMJkp6PGCDYAMJmR9H6Fpp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzmpdqyvUOQaNEoGH14AaABAg.AMJipi46S2wAMJmssiUlxx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugw5h866M3pjxJy-o-Z4AaABAg.AMJfUiIBN6OAMKVUlGnxCO","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugy4otCoxcyQOrckVSF4AaABAg.AMJavUTexRqAMMMVRKbM3A","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxiArPGzLjLsyh6b9R4AaABAg.AMJWTdGCeUBAMJo8llroaX","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgxlEhJ6oyNpH_56Otd4AaABAg.AMJTVF9XIHcAMK96Wz8iSD","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgxlEhJ6oyNpH_56Otd4AaABAg.AMJTVF9XIHcAMKctpX9Nxt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgxlEhJ6oyNpH_56Otd4AaABAg.AMJTVF9XIHcAMKoKDG-th3","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgwfRsSLEsK_I05JcyJ4AaABAg.AMJSxNltr9nAMJtVOY2FXt","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
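The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a batch could be indexed for lookup by comment ID is shown below; the field names come from the response itself, while `lookup` and the two-entry sample are illustrative, not part of the tool.

```python
import json

# Two entries copied from the raw batch response above, as sample input.
raw_response = """
[
  {"id": "ytr_UgxlEhJ6oyNpH_56Otd4AaABAg.AMJTVF9XIHcAMKoKDG-th3",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgwfRsSLEsK_I05JcyJ4AaABAg.AMJSxNltr9nAMJtVOY2FXt",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]
"""

# Index the batch by comment ID so a single coding can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (raises KeyError if absent)."""
    return codings[comment_id]

result = lookup("ytr_UgxlEhJ6oyNpH_56Otd4AaABAg.AMJTVF9XIHcAMKoKDG-th3")
print(result["responsibility"], result["emotion"])  # government fear
```

Keying the parsed array by `id` turns the O(n) scan of the raw response into a single dictionary access, which matches how the "Look up by comment ID" view behaves.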