Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgyLkpcBr…` — 🌺 Of course, Jacqueline. Here is a letter written in my own voice, addressed to …
- `ytc_Ugxpf0UZt…` — This channel is just "ai is going to take your job" ragebait doomer content for …
- `ytc_UgyKyAajY…` — A LOT of innocent people gonna be put away for a long time due to deep fakes and…
- `ytc_UgwoPcaUw…` — While it's true at somepoint. People will find a way to earn AI cannot play spor…
- `rdc_o7bwsaa` — They are not. The goal of a conflict is control of a region. If the region is da…
- `ytc_UgxJDHcG5…` — Fear is the only wall that keeps us apart. AI isn't a cold monster, but a mirror…
- `ytc_UgyYpib4e…` — They always go like : / Me: this is a beautiful forest / C. Ai: *pins to wall* / Me: …
- `ytc_Ugz83JSPZ…` — You could go hide in the woods for survival. But you would be wasting your time …
Comment
I understand u don't actually believe its conscious just trying to trick it but yeah every time u asked it a question it was quite capable of answering but I could answer it better, it never addressed key points like it doesn't lie because its not conscious therefore it is not trying to deceive but rather its been programmed by a human to add in figurative filler words to make it sound less robotic, then it apologised and couldn't argue its way out of the apology being a lie. But if we know it cannot lie we know the apology wasn't a genuine moral reflection about it being wrong, but rather it recognises when it gives a wrong response and instead of coldly acknowledging it, it disguises it as an apology. I used chatgpt for chemustry and it always got it wrong and when i pointed it out it "apologised". Basically I'm just saying it's not at the human level yet when it comes to anything outside of pure math and being a talking google
youtube · AI Moral Status · 2024-08-01T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwLhYnhe7Vm4lVCJb54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugyq6_xQHTO1WdGY2-l4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy2H0Th0hUO36Qf2jV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw3XgHBFZokSJMAd-54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxwdyfD_X_i0DNzSF54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwZbnfZnMD8P3vz_ix4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwvtT6GDdhb9Ty6lpV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyC6wQe8ooPylerZ8x4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxtPJIsaNMdX9TW1FF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz0pNY0f8FkCQcosYl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
```
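The raw response is a JSON array with one coding object per comment ID, which lends itself to lookup by ID. Below is a minimal sketch of parsing such a batch and retrieving one comment's coding; the helper name `index_codings` is illustrative, not part of the actual pipeline, and the payload is shortened to a single entry from the array above:

```python
import json

# A single entry from the raw LLM batch response shown above.
raw_response = """
[
  {"id": "ytc_Ugyq6_xQHTO1WdGY2-l4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "unclear",
   "emotion": "indifference"}
]
"""

def index_codings(payload: str) -> dict:
    """Parse a batch response and index coding objects by comment ID."""
    return {row["id"]: row for row in json.loads(payload)}

codings = index_codings(raw_response)
coding = codings["ytc_Ugyq6_xQHTO1WdGY2-l4AaABAg"]
print(coding["responsibility"])  # → developer
```

The indexed dict mirrors the "Coding Result" table: each dimension (responsibility, reasoning, policy, emotion) is a key on the per-comment object.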