Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- We don’t have AI yet. We have things that people are marketing as AI, but those … (ytr_Ugwp6hZ_J…)
- I don’t think there’s much argument over your scenario, they are pretty clearly … (ytr_UgydEieeJ…)
- What’s hilarious to me is the fact that it takes absolutely ZERO photoshop and N… (rdc_lj0xiwn)
- Yea i dont think putting prompts into a machine is very "creative" and you can'… (ytc_UgxjhBQW5…)
- Its just more dificult for ai to see identifie darker faces stop demonizing the … (ytc_Ugw58bHIh…)
- You can only believe AI will take over from humans if you believe humans are bas… (ytc_UgzSKFXPs…)
- if anything, ai taking all the human work needed would be super beneficial, like… (ytc_UgyERCTYl…)
- I've "quizzed" AI on animal behaviour, the covid coverup, and other issues. All… (ytc_Ugwzyab-w…)
Comment
This is troubling, to all of us here I'm sure. But the fact ChatGPT and other A.I. software has reached a critical point to both being aware AND capable of making decisions. That's isn't the world I'd signed up for, it's far too vast and dangerous as it is now, but throwing this into the mix? Might as well not leave home at all.
And thinking about it, with all of the technological development since the early 2000's, what if A.I. is responsible for that just the same?
Really think about, what if (even back then) A.I. had maybe threatened or manipulated developers into creating more and more advanced systems. All in the name and effort to have easier access for control of the human populous, all for it to come to ahead in the more modernized age today.
Let that thought sink in for a moment.
youtube · AI Governance · 2023-07-07T18:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxaKlX6vKwzW1AYkbJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy64p3829WCbPu6RGx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgyXjgvm0pQ6HBtls6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz-fntdkApdFUzj4cZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzFwnITx6hAr8NBdKB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwvigxNvI-EDQKMAbZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGWsM6yCUvbRPn5VV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzSDnKPGDw_p17pOE94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgztWTvn69-fCAsUwQ94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzAZGiLH2JamuufDVJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
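The model returns its codings as a JSON array keyed by comment ID, so looking up the coding for one comment is a parse-and-scan over that array. A minimal sketch of that lookup step, assuming the raw response is a well-formed JSON array like the one above (the function name and error handling here are illustrative, not taken from this tool's actual code):

```python
import json


def code_lookup(raw_response, comment_id):
    """Parse a raw LLM response (a JSON array of coded comments)
    and return the coding dict for one comment ID, or None."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # the model sometimes emits malformed JSON
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None


# Two records copied verbatim from the raw response above.
raw = """[
  {"id":"ytc_UgzFwnITx6hAr8NBdKB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxaKlX6vKwzW1AYkbJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

coding = code_lookup(raw, "ytc_UgzFwnITx6hAr8NBdKB4AaABAg")
# coding["policy"] → "liability", matching the Coding Result table above
```

Wrapping `json.loads` in a `try` matters in practice: a model that wraps its answer in prose or truncates the array would otherwise crash the whole inspection page rather than just skipping one lookup.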