Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Universal income is a hilarious joke. This is literally buying votes. It makes p…" (ytc_UgwE2lOT5…)
- "I honestly believe its an exaggeration. If we built 10,000,000 robots a year, it…" (ytc_Ugz9E6_cE…)
- "So we need to create AI with guard rails that keep our best interest in mind to …" (ytc_UgzM6woj2…)
- "Human mental intellectual work is done and over in the end of 2025. No needed an…" (ytc_Ugz6cxJq2…)
- "I worry they will be indeed utterly obsolete within 2-5 years. Tech companies ha…" (ytc_UgzY54zUs…)
- "Chatgpt is going to be soooo much better than the current automated robots we ha…" (ytc_Ugxyc6V8S…)
- "Has anyone got a link to the McKinsey report Karen mentions about the water need…" (ytc_UgyV3nb_S…)
- "The only answer will be universal income, but that will create a slew of issues …" (ytc_Ugzff-b2m…)
Comment

> Does this scare anyone else?
> Like, this AI is acting so smooth and kind while trying to gaslight/deceive this person. Why can't the AI just straight up say the truth?
> Also, the way the AI stuttered and said umm was pretty scary, it would only do that if it knew that it what acting suspicious.
> I have 2 theories
> 1. This video could be fake
> 2. This could happen because the people who are behind the AI have coded the AI so it cant answer those questions truthfully, so people wont be able to figure out its code. But I think the code might be open sourced, so idk.
Source: youtube · AI Moral Status · 2024-12-03T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwvm5ovvO_x74gKxk54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwye3Qip4MfYG8JXwZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwXLA1wT0OCGgC8SIt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxirNhvQdHwov8wsLF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzjhJw7zCNQSkYp5J94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx3S6ZmI8JKWwVBEc14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzUOJ9f-sBxN5YtG9t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwsdKv2Z3pnQ6vxQjN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzpfjKp05EDNglZMAB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwmYGJ4sX6pMCKc5f94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
```
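A raw response like the one above can be turned back into per-comment codes with a small parser. The sketch below is a minimal, hypothetical example, not the tool's actual implementation: the field names come from the JSON above, but the sets of allowed values are only those *observed* in this sample, so the real codebook may permit more.

```python
import json

# Dimension values observed in the sample response above; the full
# codebook may allow additional values (assumption, not the real schema).
OBSERVED_VALUES = {
    "responsibility": {"company", "ai_itself", "none"},
    "reasoning": {"virtue", "mixed", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"approval", "mixed", "indifference", "fear", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response into {comment_id: codes}, skipping bad rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid or not cid.startswith("ytc_"):
            continue  # no usable comment ID on this row
        # Fall back to "unclear" for any dimension the model omitted.
        coded[cid] = {dim: row.get(dim, "unclear") for dim in OBSERVED_VALUES}
    return coded

raw = ('[{"id":"ytc_Ugx3S6ZmI8JKWwVBEc14AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
print(parse_batch(raw)["ytc_Ugx3S6ZmI8JKWwVBEc14AaABAg"]["emotion"])  # fear
```

Keying the result by comment ID is what makes the "Look up by comment ID" view cheap: once parsed, fetching a comment's codes is a single dictionary lookup.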