Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.
- It's fascinating how we are unable to build programs without bugs, hackers all t… (ytc_UgxRHV1vZ…)
- I see the issue, a lot of amazing and talented animators are working knowing the… (ytc_UgyH2VgQk…)
- Why is the Venn diagram of AI users and Right wingers a circle? Why even hate fu… (ytr_UgzDBeDM1…)
- What a fucking surprise, facial recognition thinks all brown people look the sam… (ytc_UgxKLhrSq…)
- The AI unit has no “understanding” in the usual sense (e.g., as explained by phi… (ytc_Ugxn0r54x…)
- "we'll never do it again" Bro with as much material as deviant art had, they do… (ytc_UgznORp0O…)
- 3:14 omg patapon art anyways ai is like hacking in video games, it makes doing t… (ytc_UgyV6RmF6…)
- Tyrannical overreach we have cameras everywhere we don't need facial recognition… (ytc_Ugx4Oco3x…)
Comment
AI safety isn’t just code — it’s culture. As people specialise into deep micro-skills and new sub-domains, education systems must evolve at the right moments so communities don’t fracture or get left behind.
We also need to treat cybersecurity as public health: threats exist (from basic exploits to sophisticated attacks) and that means better education, clearer regulation, and real-world safety design — not secrecy.
If we get this wrong, the future could look very different — underground living, new economic ecosystems, even new migrations of knowledge and labour. Let’s plan for safety, fairness and resilience now.
#AISafety #SuperMentality
youtube · AI Governance · 2025-09-16T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwGB54AwVzp0bqKxpN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxh3tIpqJrLxdaBx4x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw7MQfpnC17Xd1jnLB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwQ0R-_5PuyYqNB7CB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy6Z3zU63FrODtOoTR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzWEEHJgI3PGog2cmx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyQolMPvsu-HD6yVtl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwlqSgeeaOteBgNBPR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyhJlEu0pI9ED9EA5t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzlW7bjWGG4TIeRrIZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
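The lookup-by-ID feature described above amounts to parsing this JSON array and indexing it by comment ID. A minimal Python sketch of that step, using three records copied from the response above; the four dimension names come from the coding table, but the full set of allowed values per dimension is an assumption and may be larger than what appears in this one response:

```python
import json

# Raw model output as shown above (truncated to three of the ten records).
raw = """
[
 {"id":"ytc_UgwGB54AwVzp0bqKxpN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy6Z3zU63FrODtOoTR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgyhJlEu0pI9ED9EA5t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_response(text: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding} for O(1) lookup."""
    coded = {}
    for rec in json.loads(text):
        missing = DIMENSIONS - rec.keys()
        if "id" not in rec or missing:
            # Reject records the model returned in the wrong shape.
            raise ValueError(f"malformed record {rec!r}, missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in DIMENSIONS}
    return coded

coded = parse_response(raw)
print(coded["ytc_Ugy6Z3zU63FrODtOoTR4AaABAg"]["policy"])  # regulate
```

Indexing by ID up front keeps the per-comment inspection view cheap: each lookup is a dictionary access rather than a scan over the whole response.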