Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "All I can say about self driving cars is, just because we can, does not mean we …" (`ytc_UgxhAkINW…`)
- "Don’t agree with Bernie on everything but he is absolutely 110% right on this to…" (`ytc_Ugxflg6yk…`)
- "If history repeats itself then that makes us the AI from the past that has taken…" (`ytc_UgzawBXRw…`)
- "13:41 I \"learned\" to draw by constantly doodling in the corners of notes at scho…" (`ytc_UgwiDuUXZ…`)
- "@KirbsDeConzbeing \"bothered\" is a normal reaction, don't you think? We all are …" (`ytr_UgxEBC8c8…`)
- "What is absolutely wrong with these CCP people? Last time they leaked lab create…" (`ytc_UgxQ4l3x_…`)
- "Today I was watched one video from Tectone in what he reacted to video abt Ether…" (`ytc_UgyZCXmwD…`)
- "In defense of consciousness, we are building these ai essentially as perfected r…" (`ytc_Ugya3eWRv…`)
Comment

> YES but.........The people running this AI regulatory agency cannot be morons like we have running many of our agencies today. They cannot be appointed by affirmative action or some "program". They have to be A-political. They have to be extremely experienced with this technology. In other words they cannot be Democrats or Republicans. The problem is that the federal government would regulate our use if AI, but use it to their benefit to control the people. It might then turn them once they get control, because if AI is so smart, it will know that our federal government is easily corrupted.

youtube · AI Governance · 2023-04-18T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
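The four coded dimensions above each take values from a closed codebook. A minimal validation sketch, assuming the codebook contains exactly the values observed in the responses on this page (the real codebook may define more), could check a coded row before it is stored:

```python
# Allowed values per dimension, as observed in this page's responses.
# Assumption: the actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"government", "developer", "company", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval"},
}

def invalid_dimensions(row: dict) -> list:
    """Return the names of dimensions whose value is missing or off-codebook."""
    return [dim for dim, allowed in ALLOWED.items()
            if row.get(dim) not in allowed]

# The coded row shown in the table above passes cleanly.
coded = {"responsibility": "government", "reasoning": "deontological",
         "policy": "regulate", "emotion": "outrage"}
print(invalid_dimensions(coded))  # []
```

The helper name `invalid_dimensions` is illustrative, not part of the tool shown here.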
Raw LLM Response
```json
[
  {"id":"ytc_UgzA7a1lYbXq7Z4VpSB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzA018fJuq3baUoyzJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxISmjRnuwOkwtTRrR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyjUs_OEihLbBB32P54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwbjHhc6LEUzpDA-yN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyrNtpt9Kmpzmes6t94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw6FiCC_ZEFDcQcsgx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyffRqAY2vVFyRfSQN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0ifdMyXPrP7epTQh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyvldEwWb41-KmNnu54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
```
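Because the model returns one JSON array per batch, looking up a single comment's codes by its ID amounts to parsing the array and indexing it. A minimal sketch, assuming only the response shape shown above (the function name `index_by_comment_id` is illustrative):

```python
import json

# Raw LLM response in the shape shown above, truncated to two rows for brevity.
raw_response = """
[
  {"id": "ytc_UgyrNtpt9Kmpzmes6t94AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzA7a1lYbXq7Z4VpSB4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batch coding response and index its rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)
row = codes["ytc_UgyrNtpt9Kmpzmes6t94AaABAg"]
print(row["policy"], row["emotion"])  # regulate outrage
```

Keying on `id` also makes it easy to join each coded row back to the original comment record, which is exactly the lookup-by-comment-ID flow this page exposes.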