Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "its fun and games until the robot doesn't give the gun back and shoot the man an…" (ytc_UgwIF4T4i…)
- "@kuzakiv3095the context of his statement is when AI does everything better than …" (ytr_UgzrzvueO…)
- "You can always trust this channel to take a break from the woo-woo stuff to hit …" (ytc_UgyQsg4vM…)
- "I would be quite amazed that channel called "Documenting AGI" would do video lik…" (ytc_UgyQMKL75…)
- "They claim the ai art is real art, but it's rlly just a robot generating stuff.…" (ytr_UgwQkPpsl…)
- "@rudimcloughlin3627 In terms of originals being effected, they're not being effe…" (ytr_UgwjOvpoC…)
- "Let them take their jobs back in the fake industries they have created in the fa…" (ytc_UgyHKbFzb…)
- "In 5 years every single anime studio will use AI. In 10 years most if not all an…" (ytc_Ugyq-_kW7…)
Comment
As humans we are not built for the exponential level of progress of the AGI. Currently our brains are not built for that level of comprehension and intelligence. I think most likely we will have to either stop the progress altogether, or keep the progress but became the AGI itself through bioengineering.
But my question is: Can we build an AGI that is locked from killing us even if it wants to? I'm thinking no access to physical world - no robots, no autonomous control, only locked in a system where you can ask it questions through text/speech like it is now with LLMs? Or do you think it's inevitable that AGI will find a way to get outside to our physical world on it's own, no matter how locked or secured the systems will be?
youtube · AI Moral Status · 2025-12-11T12:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
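The four coded dimensions take values from a fixed codebook, so malformed codings can be caught before they reach analysis by checking each record against the allowed value sets. Below is a minimal sketch; the value sets are inferred from the labels visible on this page rather than taken from the project's actual codebook, so treat them as assumptions.

```python
# Allowed values per dimension. These sets are inferred from labels
# visible on this page; the project's real codebook may differ.
CODEBOOK = {
    "responsibility": {"developer", "company", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"liability", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dimension, allowed in CODEBOOK.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} value: {value!r}")
    return problems
```

Running the check over a whole batch makes it easy to flag records where the model drifted outside the codebook.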
Raw LLM Response
```json
[
  {"id":"ytc_UgwoPEbqR72Y76GfReN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyzxDoTG4lCWfuSRQR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyIeD0e8JM5xYKzHb94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx2-63QYAoulf2hXUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwxIy3NCmAXrkzxbQl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw509kiPU9zlTJEqap4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwwRUlONUOZxqeik-t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxVXB9GCNlWuILPK0F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxhrg69guZkSLQ92KV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxMKf7A1zNZtEXAvJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
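The raw response is a JSON array with one object per comment in the batch, so recovering the coding for a single comment is a parse-and-index operation. A minimal sketch, assuming the batch responses are stored on disk as JSON (the file name here is hypothetical; the pipeline's actual storage layout is not shown on this page):

```python
import json

# Hypothetical file name; the pipeline's actual storage layout is not shown.
RESPONSES_PATH = "raw_llm_responses.json"

def load_codings(path: str) -> dict[str, dict]:
    """Load one batch of coded comments and index the records by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {record["id"]: record for record in records}

def lookup(codings: dict[str, dict], comment_id: str) -> dict | None:
    """Return the coding for a single comment, or None if it was never coded."""
    return codings.get(comment_id)

if __name__ == "__main__":
    codings = load_codings(RESPONSES_PATH)
    record = lookup(codings, "ytc_UgwxIy3NCmAXrkzxbQl4AaABAg")
    if record is not None:
        # Prints the same dimension/value pairs shown in the table above.
        for dimension in ("responsibility", "reasoning", "policy", "emotion"):
            print(f"{dimension:>14}: {record[dimension]}")
```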