Raw LLM Responses
Inspect the exact model output for any coded comment; look up a comment by its ID.
Random samples

- `ytr_UgwYuwo0j…`: "As for using AI for therapy; I don't think it's inherently a bad idea. I do it. …"
- `ytc_UgxTaY4e6…`: "This is why AI won’t last. It pushes out the one thing that’s needed above all e…"
- `ytc_UgwHI-h_K…`: "No A.I. will ever know what a strawberry tastes like. What redness looks like. A…"
- `ytr_UgzUBsfni…`: "Thanks, @changing_dunyaname! Artificial intelligence definitely dominated in thi…"
- `ytr_Ugyqvdr0f…`: "fax i wernt to Rufolf Steiner in London and its a similar concept to this school…"
- `ytc_Ugw1jdqFC…`: "What I learned from this, the teams who are making these AI apparently aren’t fa…"
- `ytc_Ugyc15Jzg…`: "The most probable scenario in my opinion is this: In a world where everything is…"
- `ytc_Ugz7EnhDn…`: "Humans will destroy humanity. If AI results in that then all the better. After w…"
Comment
I have maximum respect for companies that categorically say "do not use AI to do this or that" including things like drafting a resume.
Source: youtube · 2026-04-14T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxWYesLt1uDeAf3i0N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz_fowgep4JS6OI_9R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy8bS-I6_zHVM2lmV94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxN5LENCiTkTgZQWtp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugw3xIhGAFsOspIurPp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwQ3QoQjTNiwmuvs2J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugypc9pALiLvIbreMzZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzAh0rpjbXBMYhzpfJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx47uvN9FG9Lb6jCo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzYqYsZomhPsjm-5BB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
```
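A raw response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is illustrative only: the allowed-value sets are inferred from the values that happen to appear in this sample batch, not from a documented coding schema, so the real codebook may permit other values.

```python
import json

# Three records reproduced verbatim from the raw LLM response above.
raw = """
[
 {"id":"ytc_UgxWYesLt1uDeAf3i0N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
 {"id":"ytc_Ugz_fowgep4JS6OI_9R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugy8bS-I6_zHVM2lmV94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
"""

# Assumed value sets, inferred from this sample only (not a documented schema).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference"},
}

def parse_codings(text):
    """Parse model output, keep only records whose dimensions all hold
    known values, and index the survivors by comment ID."""
    valid = {}
    for rec in json.loads(text):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid[rec["id"]] = rec
    return valid

codings = parse_codings(raw)
print(codings["ytc_UgxWYesLt1uDeAf3i0N4AaABAg"]["policy"])  # regulate
```

Indexing by ID mirrors the "look up by comment ID" flow above: a single `dict` lookup retrieves the full coding for any inspected comment, and records the model coded with out-of-vocabulary values are dropped rather than silently stored.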