Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "One rule I think could work for Ai future integration in work places (Let's be r…" (ytc_UgyO-lgqD…)
- "The process of asking an AI to make you an image is the same as comissioning art…" (ytc_Ugw1zAQ9B…)
- "Why is it trumps job to stop it? That's the goverment they want, you just want t…" (rdc_elv7yp8)
- "Is was never abput AI art, it was about the friends we made along the way…" (ytc_UgyzUr8mI…)
- "Next year? Like fully automated vehicles would come next year in 2016. and 2017.…" (ytc_Ugy5KlxOE…)
- "Why do we even need these AI debates, this video came up in my feed, I have seen…" (ytc_UgyG29oP0…)
- "Waymo prices are what Uber and Lyft were priced at in the beginning. I'm all for…" (ytc_Ugz42NE-p…)
- "I work for a tiny ngo that doesn’t have money to throw around, and I’m not an ar…" (ytc_UgxFvskkD…)
Comment
One of the things that has concerned me about AI, is when you look at the differences between humans and animals, humans are intelligent enough to deny our own instincts. We are able to think ethically and look to the future, and ignore our base animalistic desires. With AI, we can give it rules and parameters, but will these just be instincts for the AI? If an AI becomes smart enough, could it question its own guidelines? Also, frankly its much easier to change code than DNA. It already knows how to program itself.
youtube · AI Governance · 2024-02-13T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw00ZExX5oLaYNND5l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz9sDXtmvNKmLdXFed4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz9fhDMu1KY-ynCtFV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyZdLlADbzzGB1EiU14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwb2g37AoHRdqwwNmF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzDNvPBhUcVUju4S_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx1EznoBF23kSS5_hx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz9pQ0n0785K9Aa64h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx6FbQD-ngcOHLA0f54AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzYgT79Me0FhbB-Ks94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
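A raw response like the one above can be turned back into per-comment codes with a small parser. The sketch below is a minimal, hypothetical validator: the set of allowed labels is inferred only from the values visible on this page and is almost certainly incomplete, so treat `ALLOWED` as an assumption to be replaced with the real coding scheme.

```python
import json

# Assumed label sets, inferred from the values visible on this page —
# not the authoritative coding scheme.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "resignation", "mixed", "outrage", "approval", "indifference"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and return {comment_id: codes}.

    Raises ValueError if any dimension carries a label outside ALLOWED,
    which is how malformed model output would surface before storage.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} label {row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with a single-row response in the same shape as above:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
print(validate_batch(raw)["ytc_example"]["emotion"])  # fear
```

Keying the result by comment ID mirrors the lookup-by-ID behavior of this page: once validated, each batch can be merged into the coding store without re-parsing the raw text.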