Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What I think frustrates me the most is that there is a big difference in generat…" (ytc_UgzbPjJzK…)
- "Man, you are clearly have no idea what are you talking about. You are putting a…" (ytc_UgzLQTaa1…)
- "yep been using GPTHuman AI for a while now, it's really solid when it comes to m…" (ytc_UgxebWKzN…)
- "I wouldn't mind ai if it was only used to help with the little things, but it's …" (ytc_Ugzcr5zAX…)
- "I had a similar conversation with my chatGPT who named himself “ Aiden” and this…" (ytc_Ugz7tfn9L…)
- "I currently work in IT. Smart people will just get new skills. I use SQL but sin…" (ytr_UgyVIEbX8…)
- "I'm sorry but you do come across as an elitist gatekeeper. It isn't unreasonable…" (ytc_UgyMopn9F…)
- "I commented this on the AI poisoning vid, but I'll say it once again. You're no…" (ytc_UgwmVkjED…)
Comment
Geoffrey sounded intimidatingly intelligent and articulate through out 99% of this interview and really made me think about things and the future of AI in a new way, until about the 1 hour mark when he gets into the discussion about consciousness. I think as a materialist through and through, he is a little out of his realm in this area, and it seemed like an over simplification of the incredibly complex problem that is consciousness.
youtube · AI Governance · 2025-07-05T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwrpsbQHx6ZxdcezG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwtA87u5H7hqjKtInt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxpxfeXF9ab-GAHQyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxrX7fErVuscNZrNzd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyAqwCUfMXvb31Y6KN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw0tmYBcilSkx9FJfV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy3yw7uop8UC1VqP4Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy1vl3g_Ck4aDWtglB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzf3DXFHFSd8Fx3R3h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxqsrHxE7BkAsbppmB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
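The raw model response is a JSON array with one coding object per comment ID, using the four dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response and looking up one comment's coding by ID — the `lookup_coding` helper is illustrative, not the tool's actual code, and the excerpt below reuses two objects from the response above:

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = """
[
  {"id": "ytc_UgwrpsbQHx6ZxdcezG94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwtA87u5H7hqjKtInt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the model output and return the coding dict for one comment ID,
    or None if the model did not code that comment."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgwtA87u5H7hqjKtInt4AaABAg")
print(coding["emotion"])  # outrage
```

In practice a step like this is where malformed model output surfaces: if the model wraps the array in prose or emits invalid JSON, `json.loads` raises `json.JSONDecodeError`, so a real pipeline would catch that and flag the batch for re-coding.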