Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "To think that humans' legislation and regulations can hinder the A.I.'s autonom…" (ytc_Ugx0pWFOo…)
- "Guys like this don't get it yet. People aren't going to listen to other people's…" (ytc_UgwGtDATM…)
- "Most LLMs learn more when they are taught the same thing in as many languages as…" (rdc_myrwfrf)
- "i ran it through an ai detector they said it was 98% accurate ai or deepfake was…" (ytr_Ugz3N5VvY…)
- "It's understandable to feel concerned about the state of the world. Sophia's per…" (ytr_UgzQrb5tY…)
- "Where is christianity written is the scriptures and books? ...As a matter of fac…" (ytc_UgwFQy5NO…)
- "I get what you are saying. We need to be more creative directors. But we should …" (ytc_UgyZJ-24_…)
- "but that's the thing people miss when talking about AI "creation tools", the YET…" (ytc_UgyCh-dAX…)
Comment
The AI by itself will do jack shit. It doesn't have agency, purpose, free will, continuity, and most importantly, 4.5 billion years of finetuning behind it. The people controlling a powerful computing tool such as an AI connected to the modern infrastructure - those are DANGEROUS. A dumbed-down, convenient-use, ultra-sophisticated console for various infrastructures - this is what AI is. A fully aware and hyperintelligent AI would be as dangerous to the ones using it as to the ones it is used upon, but this is not the case for now, nor will it be until embodied.
youtube
AI Governance
2026-01-05T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
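The coded dimensions above can be modeled as a small record type. A minimal sketch, assuming the allowed values are exactly those observed in the coded samples on this page (the full coding schema is not shown, so these value sets are an assumption):

```python
from dataclasses import dataclass

# Value sets inferred from the coded samples on this page; the
# complete schema may include more categories (assumption).
RESPONSIBILITY = {"none", "user", "developer", "company", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "regulate", "liability", "ban", "industry_self", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "resignation", "approval"}

@dataclass
class CodingResult:
    """One coded comment: id plus the four dimensions from the table."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # True only when every dimension holds a known category value.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```

For the comment shown above, `CodingResult("ytc_…", "user", "deontological", "none", "fear").validate()` would pass.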
Raw LLM Response
[
{"id":"ytc_Ugw8s6O2oYcwyDxNoUh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwJNDQOfhI_-13Xlwl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwprBFUZ1tX2uh8pUF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxxeRhg86WW8WT4nH94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRHerbx0hdt0DKqm54AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwMkgH1DBteG3oQGSV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz85dXYp0Z62Hyuwix4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwozc-ir4cXxJoLYpd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxes2qmQbPaOtvKAJp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyRWsCayJenYYAz73x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
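The raw response is a JSON array of per-comment records, which makes the "look up by comment ID" step above a simple dictionary build. A minimal sketch, assuming the array format shown (the two records here are copied from the response above; `index_by_id` is a hypothetical helper name):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_Ugwozc-ir4cXxJoLYpd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw8s6O2oYcwyDxNoUh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse a raw batch response and index the records by comment id."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

codes = index_by_id(raw)
print(codes["ytc_Ugwozc-ir4cXxJoLYpd4AaABAg"]["emotion"])  # prints "fear"
```

In practice a malformed model response would raise `json.JSONDecodeError` here, so a production version would wrap the parse in error handling before indexing.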