Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "One day we will understand AI is the enemy of humanity. Too late I am sure, but…" (ytc_UgxQwWo2Z…)
- "AI will make life much worse than better. Over 40% of jobs gone (while in compar…" (ytc_Ugw5hSnEd…)
- "The worst part about AI is that the training of the "intelligence" (it' not real…" (ytc_UgxnnywrZ…)
- "So stupid how AI has to be looked at as a negative thing. Can we look at the opt…" (ytc_UgznW_XmJ…)
- "Someone said let’s integrate AI with our nuclear weapons because it will be more…" (ytc_UgxRVfaqM…)
- "They really should be including ALL of the top AIs in their study, not just GPT …" (ytc_UgzaqEDMC…)
- "We appreciate your feedback. If you have more questions or topics you'd like to …" (ytr_Ugzo6yKgX…)
- "I like Eliezer. He's grounded in his beliefs. I don't have his expertise or exte…" (ytc_UgxF1_Hmu…)
Comment
All the current AIs do is to answer questions based on data and prompts.
If you don't ask, they won't answer.
Also, AIs don't do any "scheming" in the background unless you set up some kind of feedback loop, like an AI talking to another AI. If you see weird behavior, it's likely because you have fed an AI weird data or asked weird questions.
youtube · AI Moral Status · 2025-06-05T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzncFgYW2ktZ6k3Ycx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzv-xXyuoXxCaCxWrF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzBq9OYhm-pGSntC-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwooTcF9U7u9YEbc654AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzrkr6QQ38prsM0v3Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyS4BPMxErU6wgjoA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzVzPupTPnnq0Yc4H14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwdHxOW1X7reE6XA8l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyvDjWfmQoR5rLQGat4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbMWJZmbidDT5yIMF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
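The lookup described at the top of this page — finding the exact model output for a given comment ID inside a raw response like the one above — can be sketched in a few lines of Python. The variable and function names here are illustrative, not part of any actual pipeline; only the JSON schema (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) comes from the response shown above.

```python
import json

# A fragment of a raw coding response: a JSON array of coded comments,
# one object per comment, keyed by the YouTube comment ID.
raw_response = """
[
  {"id": "ytc_UgzncFgYW2ktZ6k3Ycx4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzv-xXyuoXxCaCxWrF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM coding response and index its rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
coded = codes["ytc_UgzncFgYW2ktZ6k3Ycx4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # user indifference
```

Indexing by ID turns the per-batch JSON array into constant-time lookups, which is what a "look up by comment ID" view needs when inspecting individual codings.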