Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
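The lookup described above can be sketched as building an index from comment ID to its coded record. This is an illustrative sketch only: the record shape mirrors the "Raw LLM Response" JSON later in this section, and the helper name is an assumption, not the tool's actual API.

```python
# Minimal sketch of looking up a coded comment by its ID.
# Record fields mirror the raw LLM response JSON shown below;
# `index_by_id` is a hypothetical helper name.
def index_by_id(records: list[dict]) -> dict[str, dict]:
    """Map each coded record by its comment ID for O(1) lookup."""
    return {rec["id"]: rec for rec in records}

records = [
    {"id": "ytc_example1", "responsibility": "none", "reasoning": "mixed",
     "policy": "none", "emotion": "approval"},
]
print(index_by_id(records)["ytc_example1"]["emotion"])  # approval
```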
Random samples — click to inspect

- "LoL tweaking Detroit: Become Human cut scenes with AI generative sampling is now…" (ytc_Ugy4JyMYi…)
- "The majority of chips are manufactured in Taiwan. This, to me, looks a lot like…" (rdc_gt5btoh)
- "I used AI to summarize this video becuz its 5 minutes long and wont give out the…" (ytc_UgxezhiHD…)
- "AI will lead us down a very dangerous path that nobody seems to be talking about…" (ytr_UgxbYoUaQ…)
- "@1:36-.-I mean: where's the lie in this meme? A director's job is only to direct…" (ytc_UgzaVeJkM…)
- "Anytime you have a company that grows headcount even when they don't need to gro…" (rdc_o89jjsy)
- "It's very likely that they're doing this in order for people to get used to the …" (ytc_Ugxy8Y8zi…)
- "The nerve of you… how dare you? How dare you post this after the dog shit custom…" (ytc_UgzW7DxM9…)
Comment
I agree but consider that machines can work millions of times faster than humans. We could make an adversarial neural network to simulate eons of political and philosophical discourse and then just tell us the answer.
If philosophy can ever truly be solved, like math or science, then confirming that the answer is correct will be much simpler than finding the answer ourselves.
If course, that assumes the AI will be acting in good faith. If it's so much smarter than us, as established, it can convince us of anything. It might predict our interpretation and feed us the wrong answer, knowing we'll arrive at the correct one ourselves. It could play reverse^n psychology.
Truly this is a question with no trivial answer.
Platform: youtube · Video: "AI Moral Status" · Posted: 2017-02-24T21:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgjLxrOYZeejZngCoAEC.8PMrCOh_WlP8PO9vFpJHID","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgiVpfQj89rjKngCoAEC.8PMnhumW13f8PN9y8DTfnA","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ughhc4tyNo5_AXgCoAEC.8PMkNeGZPj08PNBctcwzYf","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_Ugh-Tr1Sq_E_FHgCoAEC.8PMiIj2qZQP8PNkXaBst5k","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugh-Tr1Sq_E_FHgCoAEC.8PMiIj2qZQP8PNmsD1TZH2","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UggJXPMrGWhAjHgCoAEC.8PMe5OU3nm-8PNgRoCLtIK","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UggJXPMrGWhAjHgCoAEC.8PMe5OU3nm-8PNiBFc9N9t","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UggJXPMrGWhAjHgCoAEC.8PMe5OU3nm-8PQpvl1uO8l","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugjn2HuW9ataL3gCoAEC.8PMbyf9yAOo8PMufIkpGiW","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UghFccuO56tzBngCoAEC.8PMKX4oTf0X8PMLdWg0f5o","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
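Each raw response is a JSON array of per-comment codes across the four dimensions from the table above. A minimal sketch of parsing and validating such a response could look like the following; the allowed-value sets are assumptions inferred only from the values visible in this section, not the tool's full codebook.

```python
import json

# Allowed values per dimension — an assumption based on the values
# seen in this section's samples, not an exhaustive codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference",
                "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
print(parse_coding_response(raw)[0]["emotion"])  # approval
```

Validating against a fixed vocabulary catches the common failure mode where the LLM invents a label outside the coding scheme.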