Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
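Programmatically, the lookup is a single pass over the stored coding records. A minimal sketch, assuming the records are exported as a JSON array (like the one shown under Raw LLM Response below) to a file here called `coded_comments.json`, a hypothetical name:

```python
import json

def lookup_coding(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coding record for one comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # list of {"id": ..., "responsibility": ..., ...}
    return next((r for r in records if r.get("id") == comment_id), None)

# The ID below belongs to the comment shown under "Comment"; the call returns
# the record reproduced under "Coding Result".
coding = lookup_coding("ytc_UgyVUkr6ZObxsAJ2ihh4AaABAg")
```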
Random samples — click to inspect
- "I think ive finally worked it out why they want robots so much. I think they kno…" (ytc_Ugwx9utrv…)
- "This doesn't really align with the philosophical tone of the video, but like ser…" (ytc_UgxHDxsF8…)
- "So if we create a simulated world to mimic our world and fill it with AI, then g…" (ytc_Ugxz8U1BS…)
- "I’m so glad I’m going thousands of dollars into debt so I can go to university t…" (ytc_UgysJNHKI…)
- "Maybe AI will realize that the logical step to take is to get rid of the ultra c…" (ytc_UgwbT8Ymd…)
- "The big reveal.... YOU ARE THE AI and 'life' is you experiencing your training d…" (ytc_Ugz-OeFG8…)
- "The thing is… there are ai that are specially designed to read x-rays. Why wou…" (ytc_UgzEdDgkk…)
- "I now believe this is the actual Tom Cruise and everything I've seen of him so f…" (ytc_Ugz05I8NG…)
Comment

> I feel like pretending, alignment, conscious, intent and all these other idea's are so human that an AI will never even bother with any of it and just do whatever it has been told to do. Without a chemically based risk/reward system or any biological drives at all, an AI will never feel anything, just execute progressively more complex code inline with whatever human has its hands on the leash. The real risk is what humans will do with such a powerful tool. If history is any indication, it will be to control as many people as possible with the aim of extracting the maximum benefit from that control. You raise the interesting point that the best way to do this is by making us love the AI. The same way they made us love religion and money. Interesting times ahead.

Source: youtube
Video: AI Moral Status
Posted: 2023-08-21T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
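Each record carries four coding dimensions, each drawn from a small closed set of values. A minimal schema sketch, with the value sets inferred solely from the responses visible on this page (the project's full codebook may permit additional values):

```python
from typing import Literal, TypedDict

# Value sets inferred from the sample responses below; the real codebook may differ.
Responsibility = Literal["developer", "company", "ai_itself", "none"]
Reasoning = Literal["deontological", "consequentialist", "contractualist", "virtue", "unclear"]
Policy = Literal["regulate", "none", "unclear"]
Emotion = Literal["resignation", "outrage", "fear", "indifference", "approval", "mixed"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```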
Raw LLM Response
```json
[
  {"id":"ytc_UgxhScuUOtRFTabR0C14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzY8StKi1iYEHSuEgJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyVUkr6ZObxsAJ2ihh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwu0SKI6PvNLxswvdp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzxP2zJ3Lp0FMzXQw14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz5M3Li_xQNfbuYT0B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw0M9aUKL_PY_lQmtp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwZ7-7g4UwpKu3h1IF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy6M12eZ2hA9Aj4yB14AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz2a00CIqW6yOxiLPx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
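Model output like the above is not guaranteed to be well-formed, so a validation pass before storing is prudent. A minimal sketch, assuming the value sets from the schema above; the drop-on-failure behavior is illustrative, not necessarily this pipeline's actual policy:

```python
import json

# Allowed values mirror the schema sketch above (inferred from this page).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"resignation", "outrage", "fear", "indifference", "approval", "mixed"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return []  # malformed output; a real pipeline might re-prompt instead
    if not isinstance(records, list):
        return []
    valid = []
    for r in records:
        if not isinstance(r, dict):
            continue
        if not isinstance(r.get("id"), str) or not r["id"].startswith("ytc_"):
            continue
        if all(r.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(r)
    return valid
```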