Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Same but tell it to fact check it sometimes hallucinate I think everyone is okay…
ytr_UgwLLuEH8…
Sure… 😂 Eliezer is like a human LLM, seems to make a lot of sense but has only o…
ytr_UgxvPQ92K…
Neil, thanks for covering this. Given the chatter about AI, I decided to dive de…
ytc_Ugw6uGeza…
I AS AN IT GUY will never trust AI not to kill people out on the road and I will…
ytc_UgyiHOZv-…
Angel Engine seems to me to be the right way to use AI. They guy clearly had a s…
ytc_UgwAzxq3-…
As an artist who’s had a decade of doing art by hand and loves doing it by hand,…
ytc_UgzwWoPP-…
@mayanksharma4651nah even today ai is more of a tool to enhance programming. It…
ytr_UgxI97afG…
Because it's dangerous. If one is entirely dependent on the state for one's sub…
ytr_UgzOPayqD…
Comment
I wish that these people would stop scaremongering - AI is A COMPUTER PROGRAM its built on CODE. Rules are Still rules, it still needs a task. You can make AI say anything you want. Its all subjective. If it creates something thats outside of the expected thats called hallucination. These are the limits under which AI works. Rubbish in Rubbish out. Remember its only guessing the next few words :)
youtube
AI Moral Status
2025-12-11T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwoPEbqR72Y76GfReN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyzxDoTG4lCWfuSRQR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyIeD0e8JM5xYKzHb94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx2-63QYAoulf2hXUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwxIy3NCmAXrkzxbQl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw509kiPU9zlTJEqap4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwwRUlONUOZxqeik-t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVXB9GCNlWuILPK0F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxhrg69guZkSLQ92KV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzxMKf7A1zNZtEXAvJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
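The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of parsing and validating such a response, assuming the allowed category values are exactly those seen in the sample above (hypothetical; the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (an assumption, not the tool's actual codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "distributed",
                       "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"outrage", "mixed", "approval", "fear", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed entries.

    An entry is kept when it is a dict with an "id" key and every
    coding dimension holds a value from the ALLOWED sets.
    """
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not isinstance(entry, dict) or "id" not in entry:
            continue
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(entry)
    return valid
```

Filtering rather than raising keeps one hallucinated category label from discarding the whole batch; rejected entries can be re-queued for recoding.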