Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Just wondering but will they still have humans loading and unloading the trucks …" (ytc_UgzZ6VC7v…)
- "Totally agree with the guests about this needing to be a democratic process. Un…" (ytc_UgxLamiN3…)
- "I hope these problems are fixed very soon. If self driving cars become more main…" (ytc_UgytpCR4h…)
- "Musk is training Grok to be Mecha-Hitler and Peter Thiel is training Palantir AI…" (ytc_Ugz4JPrWl…)
- "We will still create beauty, “emotional poetry” art! We are the creators ! Never…" (ytc_UgwkdQpiJ…)
- "If he makes the art from scratch, then he could copyright it. :)) I mean, his lo…" (ytc_Ugx_MxOeB…)
- "How can AI be made safe? If humans will become superfluous to the production of …" (ytc_UgzWtTNOr…)
- "I have a moot who had someone run their art through an ai filter and send it bac…" (ytc_Ugya69Z6U…)
Comment
... basically, if any one "central" artificial intelligence (or an advanced computer algorithm with "learning" functions) is active and working, it has to be under constant surveillance - by a number of other independent "AI-s" and at the same time by a number of people, as both the responses and its behavior needs to be both studied, adjusted, modified, etc. Also the users interacting with this AI will be evaluating its preformance, reporting inappropriate behavior, wrong or unethical responses, etc. There is a lot of work to be done here.
youtube · AI Moral Status · 2023-03-01T10:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwLmnuVNiClXzuzzqZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw0hWv3xBYNklg9Rox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzvojwlDQ4wRwerGMN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwfbG7CeyFV8Fnc68J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwWprSJSBbiUbVyajx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzds0C6Yq2_fIbw0R94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzHLdJAB7t8M1BDlzx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy-7kFl9pTAq0y0PSx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzZcwU6IOP-q55NcFB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzzCFQQI2klqIt139V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]