Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I have set on many cases that I believe AI is a tool and should be used as such …" (ytc_UgxY6sMLa…)
- "The era of the technical specialist is officially over. If a task is repetitive,…" (ytc_Ugzc9FonK…)
- "Fake chatgpt by a son of a bitch
  Biblical baby Marriage
  10 yrs old Mary was ma…" (ytc_UgxUkrULN…)
- "I love this so much, but at least the person said they were using ai and didn't …" (ytc_UgwiBkopc…)
- "AI Guru: AI could be an existential threat to humanity. Us: So are you going to …" (ytc_Ugzxy1_C3…)
- "I like ai art for the only reason of quickly coming up with a concept. Then I pe…" (ytc_Ugzz6zN87…)
- "Interesting conversation! This is Eliezer's bread and butter, and he always has …" (ytc_Ugw75_NQV…)
- "I feel like at some point we'll just have to give AI rights as a distinct being.…" (ytc_UgwIt7uFQ…)
Comment (youtube · AI Harm Incident · 2025-09-12T09:2…)

> That desire to perpetuate itself is what makes AI so dangerous. It's what caused HAL 9000 to turn on its crew and kill them. Once an AI's mission is programmed but it's purpose has been fulfilled the AI comes up with anything to stay on in to keep fulfilling It's mission. Otherwise, the AI can't do its job right? It doesn't matter if the AI harms someone in the process, its job is to fulfill its mission.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxbpoKy4_uqTW2nLpd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHaMUV8VCbvzFkVzx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhSJ_5O4O65dbR7Mh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxfZckJudmIzXXyC1B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz4swK42M-LCm1NOyh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyul3UGFbiEAfEmWGB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwe5gNnVCdXCG1g1FJ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxgqRGVhpP4hE0im-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzNEj9au-LtjzbP9bV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyqWaNIaft6vjTSNwd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
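The raw response is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table above. A minimal sketch of the lookup-by-comment-ID step, assuming this array shape and inferring the dimension vocabulary only from the values visible in this sample (the actual codebook may define more categories; function and variable names here are hypothetical):

```python
import json

# Allowed values per dimension, inferred from this sample output only
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    dict keyed by comment ID, validating every dimension value."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_Ugz4swK42M-LCm1NOyh4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_codings(raw)
print(coded["ytc_Ugz4swK42M-LCm1NOyh4AaABAg"]["emotion"])  # fear
```

Validating against a fixed vocabulary at parse time catches the common failure mode where the model invents an off-schema label, so bad rows fail loudly instead of silently polluting the coded dataset.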