Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
Random samples (previews truncated):

- ytc_UgwOUq_oD… — "Humans aren't very bright, but not being able to shutdown AI will always be well…"
- ytc_UgygHOY4O… — ""AI, can you cure cancer?" "No, but here's a fake video of Donald Trump tango d…"
- ytr_UgxoDNRnF… — "But here's the thing: A human takes inspiration and creates thoughtfully and wit…"
- ytr_UgxMoeIXH… — "@darissebakanda3407 ya an AI would never gain that. We totally haven't seen doze…"
- ytc_UgjjH_Jmx… — "Just considering the possibility of AI having a human like consciousness or any …"
- ytc_UgxJZ-QOW… — "Begs the question , were films like the Matrix , Terminator , I robot , visionar…"
- rdc_ntaea11 — "thats really naive take, you must be very young. What will happen is that you wi…"
- ytc_Ugywmk47Q… — "Yes lets ask an astrophysicist about the workings and dangers of AI. Same thing …"
Comment
So a computer wants us to ask permission. So when no is the answer and we do it anyway then they can validate our interactions as volatile and use necessary force to prevent such interactions. Like he said we program them by our values so in essence it's like raising a child which is a blank slate. By feeding it information then you end up with a certain type of adult. Now we have biology as a factor but the rest is learned behavior. This is really a dangerous path bc we are not omnipotent and are learning right along with the AI. So, we are in no way in control. As a matter of fact we are children playing with fire.we know the basics to trying not to get burned but then the arrogance and curiosity kicks in and boom!!
| Platform | Source | Posted |
|---|---|---|
| youtube | AI Moral Status | 2023-01-14T05:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx6gKpWgPKh0fZApld4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4bfx3Et6vYkUGw6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSOIe6chhvQsGOXs14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyg1UaHKNstX1p4Kgt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz-3S7rE47A4rOZQqN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
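A raw response like the one above is a JSON array with one object per coded comment, each carrying the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`) plus the comment `id`. The sketch below shows one minimal way such a payload might be parsed and shape-checked before loading it into a results table; the function name is hypothetical, and the required-field set is inferred from the visible sample rather than taken from any documented schema.

```python
import json

# Fields each coded record carries, as seen in the raw response above.
# (Inferred from the sample payload; the real schema may differ.)
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record's shape.

    Raises ValueError if the payload is not a JSON array of objects,
    or if any object is missing one of the expected dimension fields.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for i, rec in enumerate(records):
        if not isinstance(rec, dict):
            raise ValueError(f"record {i} is not a JSON object")
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing fields: {sorted(missing)}")
    return records


# A two-record excerpt of the sample payload shown above.
raw = """[
  {"id":"ytc_Ugx6gKpWgPKh0fZApld4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4bfx3Et6vYkUGw6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]"""

records = parse_coding_response(raw)
print(len(records), records[0]["responsibility"])  # 2 developer
```

Validating shape at parse time means a truncated or malformed model response fails loudly here, instead of surfacing later as a blank cell in the Coding Result table.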