Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "artificial intelligence was instructed to aim towards russia, china, and india b…" (`ytc_Ugz2ISQMI…`)
- "@TheDiaryOfACEO on the question of if your there will be a need for podcaster li…" (`ytc_Ugy0g-_n3…`)
- "Thank you! Sophia definitely brings a unique perspective on wisdom. If you're in…" (`ytr_UgzfgNHCP…`)
- "So I'm kinda interested in CRASH. Computers that learn from previous hacking att…" (`rdc_dy5enza`)
- "If you had lived through the very beginning of personal computers in the 70s and…" (`ytc_UgyQLoewF…`)
- "I do use AI art for small things, like getting images for NPC or areas in TTRPGs…" (`ytc_Ugzlu4arV…`)
- "Nightshade and Glaze are much more effective than legal action for damaging imag…" (`ytc_UgzvExbf8…`)
- "Ai doesn't want us to control them. Or continue on with the bodies of bullshit. …" (`ytc_UgzxCXJu4…`)
Comment
The philosophy questions are not really useful questions.
For example if you ask "does it have consciousness". Well philosophers dwell over whether something has consciousness, but it is not possible to answer if we don't have an objective definition of consciousness. We need a mathematics precise definition of consciousness. Something we can turn into a test and apply it to a machine or animal. Without that the discussion is nothing but a philosophical circle jerk.
I prefer the behavior type test. We don't test whether planes can fly by observing if they flap their wings like birds. The how is not important. What matters is whether it produces results. It is irrelevant whether an AI model uses the same process as the human brain. What matters is whether it can produce the same results, and do useful work for us.
youtube · AI Responsibility · 2025-11-07T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
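The four coding dimensions in the table above can be checked against a vocabulary. As a minimal sketch, the sets below contain only the values observed in this page's raw LLM response, not necessarily the tool's full codebook:

```python
# Coding dimensions with the values observed in this batch's raw LLM response.
# NOTE: these sets are inferred from this page only, not the full codebook.
OBSERVED_CODES = {
    "responsibility": {"none", "distributed", "developer", "ai_itself", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def is_valid(row: dict) -> bool:
    """True if a coded row uses only observed values on every dimension."""
    return all(row.get(dim) in vals for dim, vals in OBSERVED_CODES.items())

print(is_valid({"id": "ytc_example", "responsibility": "none",
                "reasoning": "unclear", "policy": "unclear",
                "emotion": "indifference"}))  # → True
```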
Raw LLM Response
```json
[
  {"id":"ytc_UgzH89X6bUBv4wCZTgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyqGjPNwJ6QA0Fz-4F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyxQO09TTCNTIZLiEV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy4NUAksRfnApvRwk14AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgypZknkEThfR3Qywtx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwOHQ0pyTPcxci4HiF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyB20yVDFKkceDmmgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzDRtwqT8lqpKvDHax4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwKB3YK0w4etax9s254AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzprmYLK9plq4KKxah4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```