Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "this is EXACTLY the experience i have. it's like, 3-5 years out of date with its…" (ytc_UgzifOfq6…)
- "AI and no code frameworks are going to reduce the demand of programmers and redu…" (ytc_Ugx6hp0f-…)
- "I am very upset with our current government and realize even a flawed dictator w…" (ytc_Ugz5T85Uw…)
- "Oh yeah the training set contained some cringe Reddit thread that the chatbot qu…" (ytc_UgwjzhWBy…)
- "I really don't know why this even gets pop up? Ai phase started to die down? T…" (ytc_UgwUnEtFI…)
- "autonomous driving is such an american solution to the american problem of needi…" (ytc_Ugxg-zF-9…)
- "The timeline seems a little aggressive to me. I agree that the technology could …" (ytc_UgyThbt7n…)
- "If its faster and smarter than us, then we would automatically stop being a thre…" (ytr_UgxC3h8s0…)
Comment

> He’s jumping the gun in this argument by skipping over 100% implementation of self-driving cars which would inherently mean restricting freedom from human driving. At that point we’d be better off riding trains & subways for most of our transportation needs. Better to encourage behaviors that have a net benefit to society by improving things (public transit, infrastructure etc… ) rather than restricting freedoms

Source: youtube
Posted: 2025-12-31T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyttlqIJyXMaItYdz94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzunGc_fqpDeJ1J4A54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzlxRPoVOfcCuKUUxF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy2ug3BwjNDgqxnx_h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx1jGWClBmBsznbv4J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzR1VG-2A8dWLvzZ9t4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyRE9y8tHMRXyHIkUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx6bCjnhkl0tq0Pl0l4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwL7d1oZF512O1MROh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwqEeaF5KzOrmneWJl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
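A raw response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the allowed values per dimension are exactly those seen on this page (the real codebook may define more); the function name `parse_coding_response` is illustrative, not part of any actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the examples above.
# ASSUMPTION: the real codebook may include values not shown here.
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        # IDs on this page start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Example: the record that matches the Coding Result table above.
raw = ('[{"id":"ytc_Ugx6bCjnhkl0tq0Pl0l4AaABAg","responsibility":"government",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"mixed"}]')
records = parse_coding_response(raw)
print(records[0]["policy"])  # -> regulate
```

Validating eagerly like this means a malformed or out-of-vocabulary code fails at ingestion time rather than surfacing later as a silently wrong table cell.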