Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I'd love to blame the Tesla, because after all if you are going to pass, then do…
ytc_Ugy9ZekYx…
Elon "A.i is far more dangerous than nukes"
also Elon "i made a tesla robot!"…
ytc_UgxMs43CX…
The note about ai needing a constant stream of input, does this account for will…
ytc_UgzaVllPA…
Consciousness doesn’t matter. Viruses aren’t conscious but they can still spread…
ytr_Ugx5g4WuJ…
Besides the super interesting topic of this conversation, I want to applaud Mr. …
ytc_UgyqSIm0w…
Already too late. The A.I. is already on other layers of existence. Something yo…
ytc_UgzW9-och…
AI identifies the " kill switch " within microseconds of this discovery the AI k…
ytr_UgyZYda85…
🥱🥱🥱 So who exactly is buying the product that this AI company is selling AI cost…
ytc_Ugw0_dDxH…
Comment
Honestly I think the vast majority of people are going to seriously overestimate their capacity to use AI "correctly", and not end up using it to cheat. Our brains are wired to find the path of least resistance. AI is also designed to seem more accurate than it is, and the vast majority of people are probably going to overestimate their capacity to resist that. People who believe they are totally above normal patterns of human psychology are almost always overestimating themselves. I think abandoning AI to protect your brain is a fantastic idea.
youtube
2026-04-11T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxWlYX4x3aWB75rrlV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyhss9APuMZtJe7IMx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPCs2QIs4huCAA8714AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz9wWU3oXkPf5hayHp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxVsPDrqVLYEtOTMRV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzAXwrp_s9hyIquRL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy5PMfyXlaAKxibayp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzn3Btrr1oYvpNHEe14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCjw17oZwyiPmC7UZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw0R-MPBAfkXyXe4AN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
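The lookup-by-comment-ID step above can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation: it assumes the raw model output parses as a JSON array of records with the four coding dimensions shown (`responsibility`, `reasoning`, `policy`, `emotion`); the `lookup` helper name is hypothetical.

```python
import json

# Raw model output, as in the "Raw LLM Response" block above
# (truncated here to two records for brevity).
raw_response = """
[
 {"id":"ytc_UgxWlYX4x3aWB75rrlV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugz9wWU3oXkPf5hayHp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"resignation"}
]
"""

# The four coding dimensions reported in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(records, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            return {dim: rec.get(dim) for dim in DIMENSIONS}
    return None

records = json.loads(raw_response)
print(lookup(records, "ytc_Ugz9wWU3oXkPf5hayHp4AaABAg"))
# → {'responsibility': 'company', 'reasoning': 'deontological',
#    'policy': 'liability', 'emotion': 'resignation'}
```

Using `rec.get(dim)` rather than `rec[dim]` keeps the lookup tolerant of records where the model omitted a dimension, which matters when inspecting raw, unvalidated model output.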