Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The hardest stuff is the easiet to automate. Kind of looks like if the more proc…
ytc_UgzuFS1Uo…
I thought all jobs were going by 2025? Or was it 2027? Wasn't Chat GPT 5 going t…
ytc_Ugx_7Jfe3…
It's not about what is or isn't, but about perception. Right now the perception …
ytc_Ugzwy1_mQ…
Dave, the question in regards to china isnt economic superiority. Its AI superio…
ytc_Ugxun3SgG…
Ideally yes.. However prices don't go down immediately, that impact takes time f…
ytr_UgyhlQkYP…
To say it won't eventually replace Software Engineers is hubris.
AI will get be…
rdc_mp05n52
AI training often sets an end result that is wanted and lets the machine figure …
ytr_UgzO0pXSR…
„Stable diffusion is free”
And it is still an unethical model.
„some of the sc…
ytr_Ugz0e2FaG…
Comment
appreciate the perspective on AI and definitely agree with curtailing the advances in favor of humanity but a point that these discussions miss is who is pushing humanity towards this Rubicon and what is their motivation for doing so. May want to explore population control and surveillance and visibility that this group wants over all of humanity and the desire to become as close to God as possible. No normal humans being wants this - so perhaps exploring who these people are that are deciding humanities future for them is a good start.
youtube
AI Governance
2025-09-06T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzkTJYW09Rc6SU2HtR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJoJ-wC0VFaH9elRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyf05YUWDPw8yKbCYt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwCq8PXtKwYZGX3EQ54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxn3_RoHPndW4ieif94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzxbEaBS_0zfzNRDu54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxj_gune8ya56UQgS54AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzAirvcotdR2CvKPUJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5v6rjP0kCvn_DYHJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwBaON6yNwmM-vYs114AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
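The raw response above is a JSON array of per-comment codes keyed by comment ID. A minimal sketch of the "look up by comment ID" step, assuming the exact array format shown (the two entries and the lookup ID below are copied from the response above; the variable names are illustrative):

```python
import json

# Raw LLM response in the format shown above (truncated to two entries).
raw_response = """
[
  {"id": "ytc_Ugxn3_RoHPndW4ieif94AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwBaON6yNwmM-vYs114AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]
"""

# Index the batch by comment ID so any single coded comment can be inspected.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_Ugxn3_RoHPndW4ieif94AaABAg"]
print(row["policy"], row["emotion"])  # → regulate fear
```

Indexing the whole batch once makes repeated inspections O(1) per lookup, which matters when a coding run covers thousands of comments.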