Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples:

- "It very definitively is not storing it, for example, an image in an image model …" (ytc_Ugwnr-US8…)
- "People code ai to do whatever they want it to do, so if the people developing ai…" (ytc_UgxUu3H1n…)
- "So I'm a sucky writer, as in science fiction. My biggest weakness? Description. …" (ytc_UgwaXi7lN…)
- "It feels you are proposing don't leave AI runned tech behind . Why can't we keep…" (ytc_UgwxKuWSU…)
- "@josegregorionavassanchez1340 Thanks for your comment! Comparing the strength of…" (ytr_UgxnSlsSg…)
- "Here are facts AI is reffered to as a tool yet just as we were created in gods…" (ytc_UgzGsrAo1…)
- "Define conscious. Also, AI will continue to improve to "mimic" consciousness, bu…" (ytc_Ugy8AgmYR…)
- "I been learning about AI in many many years ..and AI problems aren't solved by s…" (ytc_UgwQvAJo4…)
Comment (youtube · AI Governance · 2025-07-06T01:5…)

> They can for sure become smarter than us. It's built on learning everything about human affairs and intelligence and the internet is literally a library of the human mind. Literally all knowledge can be found on the net and A.I. has unfiltered access to it.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwrpsbQHx6ZxdcezG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwtA87u5H7hqjKtInt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxpxfeXF9ab-GAHQyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxrX7fErVuscNZrNzd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyAqwCUfMXvb31Y6KN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw0tmYBcilSkx9FJfV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy3yw7uop8UC1VqP4Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy1vl3g_Ck4aDWtglB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzf3DXFHFSd8Fx3R3h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxqsrHxE7BkAsbppmB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
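The "Coding Result" table for a comment can be recovered from a raw response like the one above by parsing the JSON and validating each row against the codebook. This is a minimal sketch, not the tool's actual implementation: the allowed value sets in `ALLOWED` are inferred from the sample rows shown here and the real codebook may contain more categories.

```python
import json

# Allowed values per dimension, inferred from the sample rows above
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "company", "distributed", "ai_itself", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"unclear", "regulate", "liability", "none", "industry_self"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "fear", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions},
    keeping only rows whose values all fall inside the allowed sets."""
    coded = {}
    for row in json.loads(raw):
        dims = {k: row.get(k) for k in ALLOWED}
        if all(dims[k] in ALLOWED[k] for k in ALLOWED):
            coded[row["id"]] = dims
    return coded

raw = (
    '[{"id":"ytc_UgwrpsbQHx6ZxdcezG94AaABAg",'
    '"responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)
result = parse_raw_response(raw)
print(result["ytc_UgwrpsbQHx6ZxdcezG94AaABAg"]["emotion"])  # indifference
```

Rows with out-of-codebook values are dropped rather than repaired, so a downstream step can flag those comment IDs for re-coding.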