Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> My push back against this guy's argument is that it presupposes we are helpless against algorithms. Does the social media algorithm have an outsize effect on people? Absolutely! But it's not because we are unable to resist it. The end result of trusting computers isn't inevitable. I don't believe it can shift our world in an algorithmic way against our will. If AI doesn't destroy is in one fell nuclear or EMP swoop, we will always be able to stop AI overnight.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-10-15T19:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
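Each coded record assigns one value per dimension. As a minimal sketch, a coded result like the table above can be validated against the set of values that actually appear in this excerpt; note the allowed sets below are inferred from the visible records and are an assumption, not the project's full codebook.

```python
# Allowed values per coding dimension, inferred from the records visible in
# this excerpt (an assumption, not the complete codebook).
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "none"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "resignation", "indifference", "outrage", "fear"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose values fall outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above, as a record.
coded = {"responsibility": "user", "reasoning": "deontological",
         "policy": "none", "emotion": "resignation"}
print(validate(coded))  # → []
```

A record with an out-of-set value (e.g. a model hallucinating a new emotion label) would show up in the returned list, which is useful for flagging responses that need re-coding.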
Raw LLM Response
```json
[{"id":"ytc_UgyhusB7AZC1eVN4bcB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzLFydI3wfRNar7-op4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxRekLlkKU-uXU70i14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx844MNZVkI1Ho0V8l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz_SaTH9vRJtNNqrS54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy7-AwN5naVzFMIhal4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyAoCHKeaB4Wikdfs54AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxRAbWMwR1al9AGT_t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzsIWHq6WvXAEbFnAt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOW0iYMCstbyV00aV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"resignation"}]
```
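Since the raw response is a JSON array of per-comment records, looking up the coding for a specific comment ID reduces to parsing once and indexing by `id`. A minimal sketch, using two records excerpted from the response above:

```python
import json

# Two records excerpted from the raw LLM response shown above; the real
# response contains one record per batched comment.
raw_response = """[
 {"id": "ytc_UgyhusB7AZC1eVN4bcB4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
 {"id": "ytc_UgzLFydI3wfRNar7-op4AaABAg", "responsibility": "user",
  "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]"""

# Parse the array once, then build an id -> record index for O(1) lookup.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgzLFydI3wfRNar7-op4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # → user resignation
```

The same index supports the inspector's lookup-by-ID workflow: an unknown ID raises `KeyError`, which distinguishes "comment never coded" from "comment coded with value `none`".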