Raw LLM Responses
Inspect the exact model output for any coded comment. Records can be looked up by comment ID; a set of random samples is listed below for inspection.
- "The Powers That Be will use AI to completely erase history. The printed page st…" (`ytc_UgyHL-Jtf…`)
- "@oddbluedragonthing literally. i've seen people without arms paint. What excuse …" (`ytr_UgxTRzkCg…`)
- "Replacing humans with AI is the most stupid thing humans can do to humanity. Ima…" (`ytc_UgwAvly1i…`)
- "This makes the theory that AI is a vessel demons can use to interact with us not…" (`ytc_Ugwhvzk9S…`)
- "My brother is a truck driver, and right now he dismisses the idea of self drivin…" (`ytc_UghqxC5GP…`)
- "hundreds of people working tirelessly on creating a machine learning algorithm w…" (`ytc_UgxPVOvTf…`)
- "grok and claude are the best, im not using chatgpt anymore, im switching to grok…" (`ytc_UgzHt7e2L…`)
- "I do not believe we need to fear a Terminator scenario. More likely we need to b…" (`ytc_Ugx7PxZMc…`)
Comment
Even right now in fairly harmless innocuous work, AI casual use in doing basic research, reviews, analyses, evaluations type works has created in my work of policy development, a very casual trust and laziness in people giving that work to AI. Officially we don't rule out AI use but basic lack of due diligence is what's so dangerous, and is where we're choosing to come down hard on analysts. It's crazy just how easy people would stop doing real work and be ok trusting software to do the work for them. That's really scary then for those core protections when real people would drop their guards and let AI idiot biases dictate the direction of decision making. Our innate human laziness will be our pathway to demise.
youtube
2025-11-01T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz2qILgWe32VL3wkd94AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxqnfGihCMbMaNvAiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzmRcCRZCSGGflgWYd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwLs-4nfNwqEerKNYl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxrZkNkpYbhNDK3-RN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz5nsYIQ0lwfqSehGt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxMO-UCT9lQH3e85HR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwMH6ZvE_At-_xpI6p4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxf4RN3SL2NqiZ1WfR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzclUWLjI20qMVs8a14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
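The lookup-by-ID behavior described above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it parses a raw LLM response (a JSON array of coded records like the one shown), indexes it by comment ID, and checks each record against the four coding dimensions. The allowed values in `SCHEMA` are inferred from the responses visible on this page; the real codebook may include additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above (assumption: the real codebook may define more categories).
SCHEMA = {
    "responsibility": {"company", "user", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}


def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) and
    index the records by comment ID for direct lookup."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}


def invalid_dimensions(record: dict) -> list:
    """Return the dimensions whose coded value falls outside the
    expected schema; an empty list means the record is well-formed."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]


# Example: look up one record from a raw response by its comment ID.
raw = ('[{"id":"ytc_Ugz5nsYIQ0lwfqSehGt4AaABAg",'
       '"responsibility":"user","reasoning":"virtue",'
       '"policy":"industry_self","emotion":"mixed"}]')
by_id = index_by_id(raw)
rec = by_id["ytc_Ugz5nsYIQ0lwfqSehGt4AaABAg"]
print(rec["responsibility"], invalid_dimensions(rec))  # → user []
```

The validation step matters in practice because an LLM coder can drift outside the codebook; flagging out-of-schema values before they reach the results table keeps the coded dimensions comparable across comments.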