Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by its comment ID.
Random samples

- "My bank uses AI chatbots and they're so annoying and do not work. Most of the ti…" (ytc_UgxhFmF7e…)
- "LLMs can produce an output that's not average, that's why telling them to preten…" (ytc_UgzDoruIK…)
- "Uh oh, I hope Bosch don't incorporate AI in the next dishwasher update... It's s…" (ytr_Ugx8-POh_…)
- "Arthur C. Clarke nailed it - any sufficiently advanced technology will seem like…" (ytc_UgyZhUv_F…)
- "AI is a wonderful concept IN an ideal society, like or closer to Star Trek. Unf…" (ytc_Ugx_NX4Qk…)
- "it will be worse, in the future the AI will be in photorealistic silicon robot.…" (ytr_UgzdQHoE-…)
- "Why does the AI needs to know the Skin Color of the person in any of these scena…" (ytc_UgymPSQmI…)
- "Being against abortion doesn't automatically mean that you support Trump, are re…" (rdc_euhqy0v)
Comment
> I'm not sure, if we can just assume an AI to think and feel like a human. We have no idea what kind of consciousness an AI could develop if any. How could an AI even develop something like a will? But to be fair, we don't really have a clue how the will in ourselves comes about anyway. We have no idea, what we are doing, so there is always a risk something like this could happen. I just wonder, what the hell an AI would actually do with its time, after it killed all humans.

youtube · AI Governance · 2023-07-08T10:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzN_idUQGYkfE4_tEV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzOCP1deUXUdhAfCV94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxog01iOsjUownCwEJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyrOSBTFaZxWhYpO7h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxjgPjTZbrN-MqEsl94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwt3JpSY_CwuRKAZYh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzTPFuur8Ztblxe-yp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgycObNM2xRuydKqsLV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxXtoQbvrFxOIBJHJR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyKCvZOoCe0ECNG-7p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
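The raw response is a JSON array in which each object is keyed by a comment ID, so looking up the coded dimensions for one comment amounts to parsing the array and indexing it by `id`. A minimal sketch of that lookup, using two entries copied from the batch above (building an in-memory dict is an assumption about how the tool indexes codings, not a documented implementation detail):

```python
import json

# Two entries copied from the raw LLM response above.
raw_response = """
[
  {"id":"ytc_UgzOCP1deUXUdhAfCV94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxog01iOsjUownCwEJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
"""

# Index the batch by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if absent."""
    return codings[comment_id]

print(lookup("ytc_UgzOCP1deUXUdhAfCV94AaABAg")["emotion"])  # fear
```

Note that the displayed coding table ("unclear / mixed / unclear / fear") corresponds to the second entry in the batch, which matches the selected comment's ID prefix.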