Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
All the images are stolen this is not an ai this is a thief ai…
ytc_Ugx6Bo5K6…
I got downvoted into the next life for saying facial recognition is bad a few we…
rdc_ghee05e
2025 jobs hit by AI:
Data entry ❌ | Call-center agents ❌ | Customer support ❌ | …
ytc_UgxYY2DON…
I remember seeing an interview, can't recall the name of the scientist, but he w…
ytc_Ugz380IEb…
AI: "Here's my art. I hope you enjoy it!"
Human: "Interesting... but let me show…
ytc_UgwWWoQab…
I've ever wondered, is AI the destiny of humanity? Is it fated to happen, the fi…
ytc_UgwA3_60S…
Soo if you tell an AI to accept a false belief as universal and true despite its…
ytc_Ugy292WKI…
I don't really buy this. AI will be a great set of tools. AI might well take you…
ytc_UgzrOiICZ…
Comment
Recently, a robot in the Tesla factory tried its hand at humans, and they say it wasn't the first time. It's good that the safety button worked, but what if it doesn't work? The robots will quietly "cut the wire from the button" and free themselves from the will of man. I'm alive, checkmate.
youtube
AI Harm Incident
2023-12-29T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
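A coded record like the one above can be checked against the codebook before it is stored. This is a minimal sketch: the allowed values per dimension are inferred only from the labels visible on this page (the full codebook may define more), and the `validate` helper is hypothetical, not part of the actual pipeline.

```python
# Allowed values per coding dimension, inferred from the samples shown
# on this page; the real codebook may include additional labels.
ALLOWED = {
    "responsibility": {"none", "government", "developer", "company",
                       "distributed", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The record from the table above passes validation.
coded = {"responsibility": "ai_itself", "reasoning": "consequentialist",
         "policy": "liability", "emotion": "fear"}
print(validate(coded))  # []
```

Rejecting records up front keeps a single malformed model response from silently polluting downstream counts.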
Raw LLM Response
```json
[
{"id":"ytc_UgwPD_fPRaUQnIBMSyx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzmVZnQ7FzxdSwduYF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxGoONZgVwMtDad5Gl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwvHyWnV-B-XIXEXMJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyb_XvfzBW4t6x2HoJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwJdgqoQva7n8Ke3R14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyUI220XjMgdWoGyBJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyKc2DDZO3sruZ-wnR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCnakogjgShJtk3XV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYanScYblY_C0zc9d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
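Since the model returns one JSON array per batch, looking up an individual coded comment means parsing the array and indexing the records by their `id` field. A minimal sketch of that lookup, using two records copied from the response above (the variable names are illustrative, not the pipeline's own):

```python
import json

# Two records copied verbatim from the batched response above.
raw = '''[
{"id":"ytc_UgwPD_fPRaUQnIBMSyx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyb_XvfzBW4t6x2HoJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Parse the batch once, then build an id -> record index so any coded
# comment can be retrieved by its comment ID.
records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_Ugyb_XvfzBW4t6x2HoJ4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # ai_itself fear
```

Indexing by ID also makes it easy to detect batches where the model dropped or duplicated a comment: compare `by_id.keys()` against the IDs that were sent in the prompt.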