Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "O people, a parable is set forth, so listen to it. Indeed, those you invoke…" — ytc_Ugzw86FOj…
- "I've noticed my content isn't showing up in AI recommendations lately. AICarma's…" — ytc_UgxSjbtoI…
- "Humanity is always looking for a way to destroy itself we started with clubs to …" — ytc_Ugx_4tfvf…
- "as much I was a big fan of this technology if you let the best AI right now ruli…" — ytc_UgzZrQIRW…
- "I find the idea that all artists are born with the gift laughable. Obviously. It…" — ytc_UgwNFgfsE…
- "This will be amazingly successful. If my school was like this i would be. Billio…" — ytc_UgwbTZOWF…
- "@rishikeshwagh Mary's Room, the Chinese Room Thought Experiment... They all have…" — ytr_UgzmxzcrG…
- "honestly the UK should have just done what we have done in sweden a lot of the t…" — rdc_fwh05bo
Comment
It's fascinating how we are unable to build programs without bugs, hackers all the time find backdoors to systems, the newest technologies like blockchains are nothing but a wild, unregulated financial slaughterhouse and yet we claim we'd manage to control an AI or for that matter, all of the AI's...
In all our human history we had the benefit of trial an error. When something went wrong, we built it again, more resilient, better, bigger. We are a species evolving through failure and iteration.
Only that this time there will not be much room for mistakes, if any at all.
youtube · AI Governance · 2025-06-16T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwqfiE6fLDUAJnlZmp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwXCDyusY0tvfJo5Fx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxLU5jJT2jvjg0_D-V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw7rGJi3p459udONtd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx_9EGlCxkyy1tyZvd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxUQSYGwQ0gD5HyRbR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8oGV-35ppzI_du354AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugye3TwEgki8rufOhg54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxRHV1vZw5a8FPmv9t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx72cx8Y8WvqbIeV514AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
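The raw response is a flat JSON array, so a coded record can be looked up by comment ID with only the standard library. A minimal sketch, assuming the field names shown above ("id", "responsibility", "reasoning", "policy", "emotion"); the two example rows are copied from the response, everything else is illustrative:

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgwqfiE6fLDUAJnlZmp4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxRHV1vZw5a8FPmv9t4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the coded dimensions by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve one comment's coding by its ID.
coded = codes_by_id["ytc_UgxRHV1vZw5a8FPmv9t4AaABAg"]
print(coded["policy"])  # -> regulate
```

This mirrors the "Look up by comment ID" view: the dictionary maps each `id` to its full coding row, so any single dimension (here `policy`) is one key access away.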