Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugy_RQlPp… — "It didn't reach that logical conclusion due to some artificial "bias" - it reach…"
- ytr_UgzOSypYp… — "What skill do AI prompts require? Genuine question here, At least I my spaces, n…"
- ytc_Ugw2RVCon… — "Its not bad but i think people should be able to do what they want in privacy wi…"
- ytc_UgyOtVWY0… — "There is nothing AI about teslas shitty 'autopilot'. They were still having real…"
- ytc_UgxL1HGcu… — "Did anyone else heard about this doctor from Switzerland. He went missing few da…"
- ytc_UgzKnXv4P… — "What no one is getting is that AI will achieve superior intelligence and it wil…"
- ytc_UgxxkrAsU… — "Thousands of lines of codes is going into making AI. I don't know about y'all, b…"
- ytc_UgzK8r8PO… — "Of course, there will be regrets during the transition period, but AI will conti…"
Comment

> I think that until a.i. somehow obtains imagination, we are not in serious danger. Furthermore there should be legislation preparations so that a.i. only assists humans in their work and is not allowed to perform independently even if capable unless the human it is assigned to, is incapacitated ( with a directive of course that it should never incapacitate its human but even prevent it when eminent)

youtube · AI Governance · 2025-09-07T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzzRYv1B8_JgVH8sdV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwWFtIQPUFaUT7zwcZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwPstfbWrF1cI0K8ft4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyc-gRDILHh3YUrtNN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxEhtQzPXCg3an1-wd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwRzNtXkn1XkG1_4CV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwww6bzfGTkbqQ_3MF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyMYtAbQv8aovrodAh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
{"id":"ytc_Ugx46cqe9tvopJrEIdN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz6MYEwADI_5JNZ55R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"indifference"}
]
```
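A response like the one above is a JSON array of records, each carrying a comment `id` plus the four coded dimensions. The following is a minimal sketch of how such a response could be parsed and indexed by comment ID; the `ALLOWED` vocabularies are inferred from the values visible in this page's examples and are an assumption, not the tool's actual codebook:

```python
import json

# Assumed dimension vocabularies, inferred from the sample response above.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into a dict keyed by comment ID.

    Raises ValueError if a record carries an out-of-vocabulary value,
    which is a common LLM failure mode worth catching before storage.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, vocab in ALLOWED.items():
            if rec.get(dim) not in vocab:
                raise ValueError(f"{comment_id}: bad {dim} value {rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with one record from the response above:
raw = ('[{"id":"ytc_Ugyc-gRDILHh3YUrtNN4AaABAg","responsibility":"government",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugyc-gRDILHh3YUrtNN4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "Look up by comment ID" view above cheap: one dict access instead of a scan over the array.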