Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_Ugw94st37… — "Look I hate ai, but yeah that was on AJ. he saw what he wanted to see and nearly…"
- ytc_UgyH1ta2a… — "The government that printed unlimited and backed by nothing fiat system are the …"
- ytc_UgxXuk-VI… — "Thanks for the speech. Great now my ai knows me mind body and soul 😂…"
- ytc_Ugyd_jhSP… — "How about using the money from AI to fund childcare, public schools, health care…"
- ytc_Ugze9A2ce… — "I feel so much better seeing this. I just got back from a lecture about how an a…"
- ytc_Ugzk343dz… — "This is content intended for the unintelligent. Here… Prompt: Consider this vi…"
- ytc_Ugx9Ht_y1… — "Everyone I know was hacked this year including my enemies I now know it was an A…"
- ytc_Ugx9D0T2b… — "This kinda reminds me with that one guy that said he's the best AI artist and ma…"
Comment

> I have a lot of respect for this guy but, his is a true lack of ignorance and understanding about AI and the risk. It doesn't need to be conscious. general intelligence leads to general super intelligence. We don't know how these things work and can't predict their behaviors now. The existential risk is that we are creating things that might not even be terminators it could simply make a decision tree or coding mistake and exterminate life. Look into Roman yampolskis work. I am butchering his last name but he has the best explanation of the risks.

youtube · AI Responsibility · 2025-11-14T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[{"id":"ytc_UgwwkOrKX0I4sRA41CR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxm7KXDV7-nrm_5Z6t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx_82_QfTn6Ocubhzl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyjmKWTRi30rqyxy3J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyQ0XRkZyiufKB2oIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz0RJQYGSKSNgCHAZ14AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxXT7PwZfQGZxNsNJd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyeDXD9x-_lVhRFFkV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzCgRbX6txKqEBJvr94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyVUL5IUdPLBXoO7iJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
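Looking up a coded comment by its ID amounts to parsing the raw model output above and indexing it. A minimal sketch, assuming the raw response is a valid JSON array of objects with an `id` key as shown; `index_by_comment_id` and the shortened `raw_response` payload here are illustrative, not part of the tool:

```python
import json

# A fragment of a raw LLM response in the same shape as above
# (one full ID kept for the lookup example).
raw_response = """[
  {"id": "ytc_UgyVUL5IUdPLBXoO7iJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and map each comment ID to its dimensions."""
    codings = json.loads(raw)
    # Drop the "id" key from each row so the value holds only the coded dimensions.
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in codings}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgyVUL5IUdPLBXoO7iJ4AaABAg"]["policy"])  # -> regulate
```

Indexing once into a dict keeps every subsequent ID lookup O(1), which matters when the same response is inspected repeatedly.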