# Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- `ytc_UgxuLeI11…`: “Humans and robots connot leave together this a warning ⚠️ robots are robots ther…”
- `ytr_UgxxCH0LF…`: “@Naglfar94 as an animator, having ai is good for some things. this will just get…”
- `ytc_UgzBQeXNu…`: “Wouldn’t it be horrifying if this was being monitored by a.i and the a.i putting…”
- `ytr_Ugx1fRcqG…`: “"used as a tool" oh aye it's being used as a tool! It's being used as intended …”
- `ytc_UgznnqNyR…`: “well , i am not a tech specialist or a developer , but with help of copilot at m…”
- `ytc_UgxD_VbGe…`: “you can tell how ai magnificently struggles with logic— yes her earring disappea…”
- `ytc_UgzVM8G_p…`: “Hi! I want to poison my art and stuff, but I draw on iPad and when I tried it di…”
- `ytc_UgyzN2eFn…`: “In the midst of all this technology, the Lord is there. He is waiting. To save.…” (translated from French)
## Comment
In my upcoming book on an AI Constitution I will clearly show that we certainly will loose control to an ASI. And therefore there is a need to develop a set of universal rules and implement them to any AI of whichever definition in order to ensure that the decisions and actions are in the individual interest of each one. This is my approach to try to shape a future which ensures the maintenance of human consciousness and free life.
Source: youtube · AI Governance · 2025-01-03T18:5… · ♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
[
{"id":"ytc_UgyuKve0p8NacQZ3jl94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxliOdOHICKNldjUnV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRscm_LqcObDfLevF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw6hcLS-ig_2l0mKih4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzgNakNI2froR_VBJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvbW-3XGJXi6zQWkp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx82TcIE1NIjP5EFhd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzghbarzZgKxERfW_R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgypFt-geNkrVXiCZKZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzgIK4NlPPKCCuuXFB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
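Because the raw response is a JSON array of records keyed by comment ID, the "look up by comment ID" step amounts to parsing the array and indexing it. A minimal sketch in Python, using two sample records copied from the response above; the function name `index_by_id` is illustrative, not part of the tool itself:

```python
import json

# Raw LLM response text as shown on the page (two sample records).
raw_response = """[
  {"id": "ytc_UgzgNakNI2froR_VBJh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgypFt-geNkrVXiCZKZ4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

def index_by_id(response_text):
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
lookup = codes["ytc_UgypFt-geNkrVXiCZKZ4AaABAg"]
print(lookup["policy"])  # regulate
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, so a production coder would wrap the parse in a try/except and flag the batch for re-coding rather than crash.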