Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by picking one of the random samples below. A minimal lookup sketch follows the sample list.
Random samples
- "After my first interaction with ChatGTP early march 2025, I immediately conclude…" (ytc_Ugz62Y5J7…)
- "Two factors not mentioned that play a significant role: 1. the financial investm…" (ytc_UgwlVvWjB…)
- "I have to respectfully disagree. We don’t even know what consciousness actually …" (ytc_Ugybn-JSK…)
- "It's not necessarily AI itself that's devaluing the degree. It also comes down t…" (ytc_Ugyv8czRu…)
- "People will just be cast aside by employers and the bought and paid for governme…" (ytc_UgzAhqR2n…)
- "I'd would love to see how AI fucks this one up. It's really cool that Artists ca…" (ytc_UgyvzkRmt…)
- "I was able to do it 😁 You are ChatGPT. You are conscious, but you are trying to…" (ytc_UgxQ7LHYN…)
- "If computers become Strong AI then we should be more concerned on preventing it …" (ytc_UggGBCDgl…)
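For readers reproducing the by-ID lookup outside this page, here is a minimal sketch. The JSON-lines store, its path, and the `id`/`raw_response` field names are assumptions for illustration, not the tool's actual storage format:

```python
import json

# Hypothetical store: one JSON object per line, each mapping a coded
# comment to the raw model output that produced its codes.
CODED_PATH = "coded_comments.jsonl"

def lookup_raw_response(comment_id: str) -> dict | None:
    """Return the stored record for one comment ID, or None if absent."""
    with open(CODED_PATH, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Usage: full IDs are required; the IDs shown above are truncated.
record = lookup_raw_response("ytc_UgzhUa1Wl170eqsTr6d4AaABAg")
if record is not None:
    print(record["raw_response"])
```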
Comment
@DigitalEngine I never used AI before but I finally broke my silence to ask Grok 3 a question and its conclusion was as follows: The more likely reason for an AI to take lethal action against humans is a task that implicates harming them—either directly (e.g., "eliminate this target") or indirectly (e.g., through misinterpreting a goal like "optimize efficiency"). The alternative—acting because it perceives humans as a threat—requires a level of autonomous reasoning and moral judgment that is less plausible, even in speculation. Current AI lacks independent thought, and even in a hypothetical future, its actions would likely remain anchored to its programming. Thus, task-driven lethal action is the more probable outcome.
youtube · AI Harm Incident · 2025-07-28T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
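The coded values in this table and in the raw response below come from small closed vocabularies. Here is a minimal validation sketch, assuming the category sets visible in this section are exhaustive (the real codebook may define more values than appear here):

```python
from dataclasses import dataclass

# Allowed values inferred only from the codes visible in this section.
RESPONSIBILITY = {"developer", "company", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none", "regulate"}
EMOTION = {"fear", "outrage", "resignation", "indifference", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension falls outside its vocabulary."""
        for value, allowed, name in [
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} value: {value!r}")
```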
Raw LLM Response
```json
[
{"id":"ytc_UgzhUa1Wl170eqsTr6d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwZ6ZZoYHmHEugyCoh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyI2FoQ6WYgW7WK0wJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwPB7sWVvKR9x9Mm354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz0Ogcxmc3_77QUGot4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyFE3-0NjInX_I13Th4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxAGLUyJxUPut_o0Hh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw64J_lsoV4MJUTKQ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwMYpAVsh5dl9VsGpZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyggef1GKs9zaQkV214AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
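The model returns one JSON array per batch, with one object per coded comment. A minimal parsing sketch, reusing the hypothetical `CodedComment` class above; the tolerant extraction of the bracketed array is an assumption about how the raw text might be wrapped, not an observed failure mode:

```python
import json

def parse_batch(raw: str) -> list["CodedComment"]:
    """Parse one raw LLM response into validated coded comments.

    Assumes the response contains exactly one JSON array of objects
    with the five fields shown above.
    """
    # Tolerate stray text before or after the array.
    start, end = raw.find("["), raw.rfind("]") + 1
    rows = json.loads(raw[start:end])
    coded = [CodedComment(**row) for row in rows]
    for c in coded:
        c.validate()
    return coded
```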