Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I CALL THIS INADEQUATE SOFTWARE. You have a legal responsibility for accuracy. …" (ytc_UgyXvi9ys…)
- "Remember when people started using ChatGPT in its initial days to automatically …" (ytr_Ugyaon7nU…)
- "That's not what this is about. The AI teaching is cutting down 8 hours of teachi…" (ytr_UgzfzJHO9…)
- "I'm not sure how this guy was able to have a 2 hour conversation with bing when …" (ytc_UgynQBPXR…)
- "Why would any sane engineer solely rely on visual cameras for autonomous driving…" (ytc_Ugz9OWWDU…)
- "Ai has no basis in reality. It can be mislead easily. It accepts lies as easily …" (ytc_UgxPoIXou…)
- "1:30 It's only cheaper because they are promoting their product. Once they are e…" (ytc_UgxJz1EJf…)
- "Don't teach the AI robots to fight. Once one robot knows how to fight it will go…" (ytc_UgzIsCkvN…)
Comment

> People don't realize that merely by talking about it, writing about it and worrying so much about it in such detail. You're giving AI a playbook. Essentially your writing the future of AI just by talking about it. Imo we need to 100% stop playing with AI. Think about all the things we say about AI. Now look at it differently for a moment. What if it was you people were saying things about like that? How would you take it?

Source: youtube | Dataset: AI Governance | Posted: 2023-07-07T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxNNRXn0Hk5es2_Po14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwjjFh3ILNHCkjdccB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
  {"id":"ytc_UgwjvO3WZfwjKwsdusx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxXs-CbPnQsQRsaLo94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyGMUSQ0bDgZ_BOq4t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyCS4NTwQS9XKjCsOV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyk1QxAnSu7Egf4QDZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzzyzglpE1r9VXg3R54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugwz8hAsWs5qIzA9qDZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwdMLC5V5DxaNUYw1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
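A raw response like the one above is only usable once it has been parsed and checked against the coding scheme. The sketch below shows one way to do that in Python: it validates each record against a vocabulary of allowed labels per dimension and drops malformed entries. The `VOCAB` sets are inferred from the labels that appear in this dump; the project's full codebook may contain more values, so treat them as assumptions.

```python
import json

# Allowed labels per coding dimension, inferred from this dump
# (assumption -- the real codebook may define additional values).
VOCAB = {
    "responsibility": {"user", "developer", "company", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "curiosity", "approval",
                "resignation", "indifference"},
}

def validate_coded(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    coding dimension carries a label from the allowed vocabulary.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in VOCAB.items()):
            valid.append(rec)
    return valid

# Example: one valid record, one with an out-of-vocabulary emotion.
raw = json.dumps([
    {"id": "ytc_A", "responsibility": "user",
     "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
    {"id": "ytc_B", "responsibility": "user",
     "reasoning": "consequentialist", "policy": "ban", "emotion": "joy"},
])
kept = validate_coded(raw)
print([r["id"] for r in kept])  # → ['ytc_A']
```

Rejecting rather than repairing out-of-vocabulary records keeps the downstream counts honest: a record like the one coded "fear"/"ban" for this comment only enters the analysis if the model emitted exactly the labels the scheme allows.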