Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I'm absolutely astonished/ terrified, that in 1984 when Terminator came out ever…" (`ytc_UgzXA5owP…`)
- "What I take from this is that the greatest danger of AI is social unrest because…" (`ytc_UgzeKFzkt…`)
- "It seems the only consideration the programmers have provided to simulate depth …" (`ytc_Ugyl8DnJW…`)
- "It's actually weird how these tech scientists now have neural networks and the a…" (`ytc_UgzOlwZp5…`)
- "Some of these corporations are too cheap and too much in a hurry, to out-compete…" (`ytc_UgxOqjjj_…`)
- "Is nobody distracted by the colorful objects on the top shelf behind him?!?!..." (`ytc_UgxD6Hv1P…`)
- "oh that makes a lot more sense now, thanks. I still think it's maybe a strang…" (`rdc_e44u1vv`)
- "I'm so close to dropping 200$ a month on chatgpt PRO, I use it for everything…" (`rdc_mcie4dt`)
Comment

> What if AI realizes the oligarchs are the problem? Most of the common man knows it. So if AI is based on our brains. Wouldn’t it most likely arrive at the same conclusion. That capitalism is self defeating? What then? The obvious problem here imo is it’s being trained for profit. Not for humanity. It’s being trained to be a sociopath.

youtube | AI Governance | 2025-06-24T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyKJPBLt7B5JrygsWR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxTSeikbGRhaYTgVwN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyweLYj4xoVh9kqXgt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugz0Vm5MhuNkyIWQqeB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwlzj6NPNrcGFOk-9x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyN-cqh2KzhVm-GOld4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyt15TveMPBG6Nu3MB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwHcNWIDbh53KuFsvp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy-XPhxFU1IrTlSHo54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxPwnUDwmSXfDmzvoZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
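The coding-result table above is simply one row of this JSON array (the entry whose `id` begins `ytc_UgwHcNWIDbh53KuFsvp4AaABAg`), so the page's "look up by comment ID" flow can be sketched in a few lines of Python. This is an illustrative sketch, not the tool's actual code: the `lookup` helper is hypothetical, and the two sample rows are copied from the raw response above.

```python
import json

# Raw model output as shown on the page: a JSON array of coded comments.
# Only two rows are reproduced here to keep the sketch short.
raw_response = """
[
  {"id":"ytc_UgwHcNWIDbh53KuFsvp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy-XPhxFU1IrTlSHo54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
"""

# Index the coded rows by comment ID so each lookup is a dict access.
coded = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the four coding dimensions for a single comment ID."""
    return coded[comment_id]

result = lookup("ytc_UgwHcNWIDbh53KuFsvp4AaABAg")
print(result["responsibility"], result["policy"])  # company liability
```

The same indexing step is what lets the inspector jump from a random sample's ID straight to its coded dimensions without rescanning the whole response.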