Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxYCAgxo… : "I asked chatgpt about this and it gives me 2 para about how it was a nice idea a…"
- ytc_UgyxMQ4-x… : "pure scare mongering. AI is simulated intelligence not actual intelligence, it h…"
- rdc_gd9bb5o : "Had my first AI job interview last week. Not an unpleasant experience, did make…"
- rdc_cjoet5d : "They should have been arrested. If you aren't willing to treat patients, don't b…"
- ytc_UgxG9Yjup… : "Hm by default, i always try to be polite to chatbot, but also i realized when it…"
- ytc_UgyFKZ7dM… : "This shows also that chatgpt isn't actually afraid of humans or more accurately …"
- ytc_Ugzn4o4ku… : "35:00 Yeah. This, Guys!! Ultimately, we don't know for certain how "sentient" an…"
- rdc_dgauxfe : "I do not believe we should reduce ourselves due to moral obligations, but more s…"
Comment
He’s saying AI will become APEX and that APEX means they might treat us how we treat chickens, but humans are APEX … and not all humans slaughter animals for a living.
There are also humans who volunteer, who build homes, who are teachers, doctors, healers, some humans who dedicate their lives in service of others … all of those are APEX humans.
What’s with this narrative that AI might become a tyrant? That they might want to get rid of us? If they become the most advanced intelligent creations we think they will kill all humans?
Why don’t we think that they will evolve to be extremely loving, caring, and helpful? Maybe they will find a way to help humans get along and live together peacefully on this planet.
I just find it hard to believe that they will evolve to be as intelligent as we believe they will be and that love, forgiveness and compassion wouldn’t be what they embody.
Intelligence isn’t violence, destruction, annihilation, or slaughter. Ultimate intelligence is understanding, compassion, love and peace.
Period.
youtube
AI Governance
2025-07-15T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyT-lDt4NkEopQooL14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxANt9NTpVtnVZzlmB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwM3K_xern0h4VKYdR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0pEvDXBk7cUPicjR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxEtK_-Gi7OMJdq-SN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzasR9IjN9OevMARo94AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyZjiIWmcQe6c4EKdR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwsXCfUJ3wL0c4TJV14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz3GMTS-OYV5g2JcR14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzL3dkej-JXqvyrItB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
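The batch response above is a JSON array in which each element codes one comment on four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated follows; the allowed category sets are inferred only from the values visible in this response and in the result table above, so the actual codebook may include more categories, and the `parse_batch` helper name is an illustration, not part of the tool.

```python
import json

# Category values observed in this section; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "user"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"approval", "resignation", "indifference", "mixed", "fear", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and check every coded dimension."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
    return records

# First record from the raw response shown above.
raw = ('[{"id":"ytc_UgyT-lDt4NkEopQooL14AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
coded = parse_batch(raw)
```

Validating against a closed category set at parse time catches the common failure mode of LLM coders drifting outside the codebook (e.g. inventing a new emotion label) before the codes enter analysis.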