Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Society has taught us not to value human life. We actually protect under the law…" (`ytc_UgyG5xJCo…`)
- "AI WILL CLEARLY NOT BE GOOD FOR YOU. THE KNLY PEOPLE WHO WILL BENNIFET FROM AI A…" (`ytc_UgxM9Wcw4…`)
- "Ai will one day do the sums & decide how many humans the world needs to sustain …" (`ytc_UgzKRB38L…`)
- "Someone like him must have already known that AI was bound to become smarter tha…" (`ytc_UgwxA-LjG…`)
- "AI is not real. It's still just spitting out words. It's not intelligent. It's j…" (`ytc_UgxocN20j…`)
- "When an AI can be programmed to believe it's a human but through its own autonom…" (`ytc_UggXYUtVS…`)
- "No mention of the millions of tonnes of CO2 that are going to be sent into the a…" (`ytc_UgxHnQ1lk…`)
- "Not much hope here. Go and read the book of Revelation, it tells you what will h…" (`ytc_UgwdEsW1Y…`)
Comment
AI is an abacus with all the bells and whistles and will never experience a self induced thought. If deep in the reccess of our minds lays a synthesised falicy of virtual humanity spinning in subspace chiral duality, then tech has got a long way to go. Its a tool.
youtube · AI Governance · 2025-10-20T23:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyMkp_eHRRL0dDh44p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwMyh32CmdkyYd4T9h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzyMjkxAfxT6e0NDpF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8PDrcKhxHPFH7wHp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzFIM1AWbBki8LQo2Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxL5Ytx-_8kID63QbV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyKupnQIyCgQHUPUf94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwLUsO4qF1V423BTcx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwlvxYcaKeuSqtTALl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwREmL4EJrbZp6hs314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
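The raw LLM response above is a plain JSON array with one coding record per comment, so the "look up by comment ID" view can be built by parsing the array and keying each record on its `id` field. The sketch below shows this in Python; the function name `index_by_id` is illustrative, and `raw_response` is an abridged copy (3 of the 10 records shown above).

```python
import json

# Abridged copy of the raw LLM response above (3 of the 10 records).
raw_response = """[
{"id":"ytc_UgyMkp_eHRRL0dDh44p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwMyh32CmdkyYd4T9h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwlvxYcaKeuSqtTALl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]"""

# The four coding dimensions reported in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and key each coding record by comment id."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_by_id(raw_response)

# Look up the coded comment shown above ("AI is an abacus ...").
record = codings["ytc_UgwlvxYcaKeuSqtTALl4AaABAg"]
for dim in DIMENSIONS:
    print(f"{dim}: {record[dim]}")
```

In this record the lookup reproduces the Coding Result table: `ai_itself`, `deontological`, `none`, `indifference`.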