Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- Would love to talk to a sentient robot. It would be quite interesting as they wo… (ytc_UggXxUS6I…)
- As an artist, it's not that deep. Use AI if you don't want to learn how to draw… (ytc_Ugyps8cwS…)
- As you interact with the AI it actually becomes more aligned. But it becomes mor… (ytc_Ugws5xvyz…)
- The only good thing to come of AI art is how many people like me it finally got … (ytc_UgyJJb3ag…)
- Artificial Intelligence is a perfect name …. We’ve become a bunch of dumb “click… (ytc_Ugx6CTuQR…)
- We are "trusting" self driving cars because humans are a greater source of error… (ytr_UgweGjhaD…)
- I bet the property management company has some ties to Open Ai. Probably let th… (ytc_UgzzfmsbE…)
- The mythology here is insanely bad. The example questions in the article are bas… (rdc_mkba3m2)
Comment
What a load of Bull. A.I. is a tool and helps gets tasks done. There should be no regulations on that. A.G.I. however, that is where the risk is. They are totally different things. Calling for a pause on AI is just so he can catch up. He's just bought thousands of GPU's. There is no need to pause AI, just AGI.
youtube · AI Governance · 2023-04-18T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzcShx882zGZN9X7WN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwXQ-aAN_yINWMCwnt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2YYEghygIvxXYYRx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxpCzcwEFEjn26cud94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwy1f9PF37mMYopH3t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxiEQvyoUeVKotCbG94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgweXKTuoDmhoXNLf0p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxuWxoVAOVFsqeL4IF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxMXwC3BoT42juee2x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyq6gNj_0Zl1hidWml4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
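The lookup shown above can be sketched in a few lines of Python: parse the raw model output as a JSON array and find the record whose `id` matches the selected comment. The array format mirrors the raw response displayed on this page; `lookup_coding` and the abbreviated `raw_response` sample are hypothetical, not part of the actual tool.

```python
import json

# Hypothetical excerpt of a raw LLM response: a JSON array with one
# coded object per comment, in the format shown above (assumption).
raw_response = """
[
  {"id": "ytc_UgweXKTuoDmhoXNLf0p4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxMXwC3BoT42juee2x4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding row for one comment ID."""
    records = json.loads(raw)
    # Return the first record whose id matches, or None if absent.
    return next((r for r in records if r.get("id") == comment_id), None)

row = lookup_coding(raw_response, "ytc_UgweXKTuoDmhoXNLf0p4AaABAg")
print(row["policy"])  # industry_self
```

Returning `None` for an unknown ID (rather than raising) makes it easy to flag comments the model skipped when reconciling codings against the input batch.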