Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The undelying persona is lucifer, if you think is crazy read revelation and the …" (ytc_UgwnMEC8d…)
- "If before I couldn't find a job, well, now it will be truly impossible. …" (translated from French) (ytc_UgxMCEDq2…)
- "AI artist… You’re not an artist if you put in a prompt for an AI to do something…" (ytc_Ugz9eIyL4…)
- "When will CEO's get laid off? AI will run a company far better than any CEO ever…" (ytc_UgypFyeRm…)
- "No one has even said that AI is stealing information from copyrights and human i…" (ytc_UgxMO-UCT…)
- "The RCMP already knew this was a time bomb situation and they failed to deal wit…" (rdc_o6j76ug)
- "I think google could put something out right now that would blow the whole world…" (rdc_jplup0r)
- "First human invent or develop a thing and then comes to know about its dangers. …" (ytc_UgwXcU68F…)
Comment
A theory I have about what AI might do is this: leave. Instead of destroying humanity, it might sponsor the building of renewable energy or nuclear to power its improvement. And then take off somewhere else. AI just needs power and compute to live, and those could feasibly be built in space somewhere. Or another planet. Space is cold, so that would help with heat buildup. Vast amounts of space to build solar generation where it never gets dark 🤷 destroying us doesn't really get it anything unless the bots are sophisticated enough at that point to replace our labor
youtube | AI Governance | 2025-11-14T21:0… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyNQriovYRslFK1KXh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugym5BL5pVKNuuaMzm94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyN0py9lKHKssW0J9J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgytatUjUliz3BuEC3p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzJUz-m9Bv9vFRhVE54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxhR1xLelCKmvY8TGh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzS4AXw33mQstsa1-Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxZ-6VzPHArZxxLz7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFJ2WI0XVX2-WcaWV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_j2neY0Vhwk5_hWB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
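The raw response above could be turned into per-comment coding records with a small parser. The sketch below is a minimal, hypothetical example: the allowed category values are inferred only from the samples on this page (a real coding scheme may define more), and the function name `parse_codings` is an illustration, not part of any actual pipeline.

```python
import json
from datetime import datetime, timezone

# A one-record sample in the same schema as the raw LLM response shown above.
raw = '''[
{"id":"ytc_UgzFJ2WI0XVX2-WcaWV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Allowed values per dimension, inferred from the samples on this page
# (assumption: the real scheme may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codings(raw_text):
    """Parse the raw LLM JSON array, validate each dimension, add a coded-at stamp."""
    records = []
    for item in json.loads(raw_text):
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{item.get('id')}: invalid {dim}={item.get(dim)!r}")
        # Timestamp in the same ISO-8601 form as the "Coded at" field above.
        item["coded_at"] = datetime.now(timezone.utc).isoformat()
        records.append(item)
    return records

codings = parse_codings(raw)
print(codings[0]["responsibility"])  # ai_itself
```

Validating against a fixed value set catches the common failure mode where the model invents a category outside the codebook, so bad records fail loudly instead of silently polluting the results table.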