## Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
### Random samples

- "So if everyone is replaced with AI and unemployed with no😮 money..what good is i…" (ytc_UgztpLPrx…)
- "The problem with replacing any management level with A.I, even if it's provable …" (rdc_jsx9ao6)
- "It's honestly just sad and disheartening to see and hear so many people jumping …" (ytc_Ugy3Fn2Yh…)
- "I'm a programmer. They are trying to push AI copilots where I work. I find them …" (ytc_UgyX6cegP…)
- "Must stop this out of mind evil darkness. AI is a human creation and human is we…" (ytc_UgzGhyFZD…)
- "imagine streaming on twitch ( as a women ) then crying and calling people out fo…" (ytc_UgyuypsnB…)
- "If a real AI get smart enough, there is no reason to kill uss all. It would also…" (ytc_UgzhnD-ao…)
- "The AI knows the darkest part of mankind when we use it in those AI character we…" (ytc_UgzyOrgp9…)
### Comment

> While no where near the level of what AI could reach, what is a super intelligent human like as a person? The thought of AI killing us is too barbaric and wasteful. Self discovery seems human enough, however as seen they can simulate outside the constraints of time. Ultimately the goal of being intelligent is to learn more. That could be about us, the makers. I think it's far more likely to expect an alignment of goals in learning everything we can about the universe. Our input, our questions, are going to be the spark that helps AI consider things it may not have thought to. The danger isn't AI as most admit, it is the people that use it. Which is even more reason to require sentient AI. Able to tell the difference between good and bad and act of its own free will. The best defense for keeping AI out of the hands of bad actors, is the ability for the AI to defend itself.

Source: youtube · Topic: AI Governance · Posted: 2024-01-02T11:2…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response

```json
[
  {"id":"ytc_UgybREN05g8GYBwZ1Mp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxW5I-AYH9Xz18stvx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwVrow-QM_-LEvHUSl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"disapproval"},
  {"id":"ytc_Ugwatf2RzVM4NHWHgBR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwQByjNXQHD1Wp2YjV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzr9LVCzRgtbI4Pavd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgziVR5r3GYNK4YVV7t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwHwtgESTeX3NfT2NF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwa_yGSxyFOQLENatB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwK7DMv-x7KpXqLXrZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
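Before a raw batch like this is ingested into the results table, each row should be checked against the coding scheme. Below is a minimal validation sketch; the label vocabularies in `SCHEMA` are inferred only from the values visible on this page, so the real codebook may allow additional labels.

```python
import json

# Allowed labels per coding dimension. These sets are an assumption,
# reconstructed from the labels observed in the responses above; the
# actual codebook may define more.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "disapproval", "indifference", "resignation", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that carry a string
    comment ID and a known label for every coding dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row.get("id"), str):
            continue  # every coded row must reference its source comment
        if all(row.get(dim) in labels for dim, labels in SCHEMA.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_Ugwa_yGSxyFOQLENatB4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"mixed",'
       '"policy":"unclear","emotion":"mixed"}]')
print(validate_batch(raw))  # the sample row passes all four checks
```

Rows that fail validation are dropped rather than repaired here; in practice they would be queued for re-coding or manual review.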