Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @chrischen8580 , Yes............movies made into reality! 1st sign was the extre… (ytr_UgyKPjaJe…)
- Progress is rarely ethical. This is going to be embarrassing in a few hundred ye… (ytc_UgysodhFn…)
- @Pun116 How exactly do you think that will happen? 😅 If humans wanted to be cor… (ytr_Ugxl_C2Ou…)
- Do me a favor. Go down to my DMV for 5 minutes and hang out in the line. When … (ytc_Ugz7WtUTS…)
- Here’s my rule about believing explanations of complicated things: If, in the ex… (ytc_UgxjYcopF…)
- Even states in the video that the guy didn't actually listen to the full stateme… (ytr_UgwcybQWi…)
- Although I am against GEN AI it can be useful, it can help you imagine how your … (ytc_UgythmTdD…)
- My close friend gets distracted easily and diagnosed with austim but even then..… (ytc_UgxS6nj6R…)
Comment
I've only been playing with AI for a few days, but I discovered fairly quickly that opening a conversation with it and establishing context, rapport, and goals really gives it a unique edge. When you talk to it like a person and ask it to refine its approach, change its demeanor, or avoid specific errors, it picks up and keeps that information within the scope of that conversation.
youtube · AI Moral Status · 2025-07-26T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyYdg2BFc0BCMUThZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwUVMzbLx8o8zWoHhJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQUiak7g5jtck8re54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzMCTmRLUy3cKLMR354AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgztM6sNOho9lxsumz94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwocQGv1J3KfFQCBUF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy5OeXXDJ44T1b9_aF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKAtijM8vsTEZsDYt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzl_Hg-_xsVfymXtlh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyPw6bOL39wnfe0Os94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
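The raw response above is a JSON array with one record per coded comment, each carrying the five fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and sanity-checked follows; the function name `parse_coding_response` and the key-presence validation are illustrative assumptions, not the tool's actual implementation:

```python
import json

# First two records from the raw LLM response above (reformatted for readability).
raw = """
[
  {"id": "ytc_UgyYdg2BFc0BCMUThZF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwUVMzbLx8o8zWoHhJ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

# The five fields every coded record is expected to carry.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(text):
    """Parse a raw coding response and index the records by comment ID.

    Raises ValueError if the payload is not a list or a record is
    missing one of the expected dimensions.
    """
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
    return {rec["id"]: rec for rec in records}

coded = parse_coding_response(raw)
print(coded["ytc_UgwUVMzbLx8o8zWoHhJ4AaABAg"]["emotion"])  # outrage
```

Indexing by `id` makes the per-comment lookup shown at the top of this page ("Look up by comment ID") a constant-time dictionary access.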