Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Sounds good on paper but any country that doesn't do that will have a major econ…" (`ytr_UgzokjBYP…`)
- "Robot 1:i ma do mAh work- / Robot 2:*throw the box of the hole* / Robot 1:WTH N…" (`ytc_Ugz2Z_J0d…`)
- ">bring 3rd worlders in / >3rd worlders do crime / >"WE NEED TO MAKE OUR CITIES SAF…" (`ytc_Ugw59x-P_…`)
- "I bet the robot did not feel any of his punches he was just acting 😂…" (`ytc_UgzvDZjJ4…`)
- "Damn modern women ain’t no shame in a man game. Women going look crazy walking r…" (`ytc_Ugz6D06ME…`)
- "Why are you teaching the robot to use guns? / WHY ARE YOU TEACHING THE ROBOT TO US…" (`ytc_Ugz8kxryd…`)
- "In the US, our budget bill has a stipulation that the states can not regulate AI…" (`ytc_UgyJ6hJZu…`)
- "I find disturbing that the Anthropic models are already pretending to have a sor…" (`ytc_UgynRxABJ…`)
Comment
I've seen some of your other posts and videos on these topics and commented on them before about not totally agreeing with things like outright banning the research and development of superintelligent AIs. However regulating AI in general seems like common sense to me. Just like how the internet eventually developed its own laws and regulations, AI should be no exception. I'd agree that a federal regulation *would* be more effective, but I'd also agree that states should realistically have the ability to regulate themselves on this too atleast to some degree. Perhaps get those together who voted against the 10 year state regulation ban to brainstorm some common sense bipartisan regulations, and leave additional regulations to the states. Government can move slow as molasses so I do agree we can do more to get ahead of things, whether I'm cautious on overregulation or not that doesn't mean we shouldn't move forward on the things everyone can agree on.
Source: youtube · Posted: 2025-11-23T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwzRvaog3XnToQjjYl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzhL-K3IJ6Fq0ci5A14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyOLtvJzW2wwyXgiD14AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgySYFAK6-sCDfd-8yd4AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugy3Vm0uqSFcZia4A5t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwhgLs0KPWmiciS_XZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw8bXmFMnMSIbBrfQd4AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwJsmeUw-_Lw4g3L4t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx4z3qF8SDBs8BFzqN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwXauxVQaDvJbzdu5N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
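A raw batch response like the one above can be checked before it is stored as a Coding Result. The following is a minimal sketch in Python; the sets of allowed values per dimension are inferred only from labels visible on this page, and the actual codebook may contain more categories, so treat `ALLOWED` as an assumption.

```python
import json

# Allowed values per dimension, inferred from the labels visible in the
# responses and table above (assumption: the real codebook may differ).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "distributed",
                       "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation",
                "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check each coded comment."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs on this page start with ytc_ (comments) or ytr_ (replies).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows

# One row from the raw response above, used as a smoke test.
raw = ('[{"id":"ytc_UgwhgLs0KPWmiciS_XZ4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
rows = validate_response(raw)
print(rows[0]["policy"])  # regulate
```

Rejecting off-schema values at this step keeps free-form LLM output from silently introducing new categories into the coded dataset.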