Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "These jobs wont exist in 24 months: 1. AI developer 2. AI safety expert 3. AI ma…" (ytc_UgxwxApR_…)
- "Thats Rich coming from russian toyboy.. first remove people ability for CRITICAL…" (ytc_Ugx1bqIGZ…)
- "Ai its not about stolen art. Ai isnt art. Its just easy pleaser for lazy people …" (ytc_Ugz0WWoXH…)
- "Those Ai meetings three times a week in the US Might have a lot to do with the…" (ytc_UgzbQirRX…)
- "Human can create a new style, while AI mashes up the preexisting data. Like, Cub…" (ytc_UgyIVbq19…)
- "I think using AI for personal recreation can be okay but pretending you just cre…" (ytc_Ugy1YCiM4…)
- "AI needs to be trained - and yes, some algorithms allow it to improve and do uns…" (ytc_UgzTT_LEN…)
- "AI is very dangerous....computers have already ruined humanity AI is the final n…" (ytc_UgxiL9CiY…)
Comment
Not sure if interacting with AI could be as entertainming as interacting with another human. It's like playing a videogames in singleplayer vs multiplayer. In singleplayer even the best AI would just be a bot for me, I don't fell that "competition rush", even if it imitates human behavior prefectly. But when I know that I'm playing against a real human being it's completely different and it comes from subconscious.
youtube · AI Governance · 2025-09-08T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxv3IrsPaUE9nh4ht54AaABAg","responsibility":"elite","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8O0veniKSI4eferZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx4EqclHLIQKfPKwwx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyGkoX9UfiEGypctYd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIe80JAbcoEKIl27R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzXKCYZqsDwRw2-wBh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwHBu-8qwDqXQKNPZR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEvHvMCdYK9vRJnUR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzSt7HNlfaUCw553I14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy553FhxvPqD_nDO2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
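The raw response above is a JSON array in which each record carries a comment `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a payload could be parsed and indexed for the by-ID lookup described at the top of this page; the `parse_codes` helper and the `ytc_example1` id are illustrative, not part of the actual pipeline:

```python
import json

# Field names observed in the raw response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict:
    """Parse a raw coding response into {comment_id: codes}.

    Raises ValueError if the payload is not a JSON array of objects
    each carrying an id and all four coding dimensions.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record missing keys: {sorted(missing)}")
        # Keep only the coding dimensions, keyed by comment id.
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS - {"id"}}
    return coded

# Hypothetical payload in the same shape as the raw response above.
sample = ('[{"id": "ytc_example1", "responsibility": "none", '
          '"reasoning": "mixed", "policy": "none", "emotion": "resignation"}]')
codes = parse_codes(sample)
print(codes["ytc_example1"]["emotion"])  # resignation
```

Because the parser fails loudly on malformed records, a truncated or partially generated model response surfaces as an error instead of silently dropping codes.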