Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
ai wont create more jobs if so its lower wage jobs but the goal is to eliminate …
ytc_Ugzf7Dcuv…
I've had moments where I questioned individuals who were open for commissions: S…
ytr_Ugw2mLsXx…
@AnyaGraves I'm outraged about the theft. Artists should be paid fairly for t…
ytr_Ugyw0HwNv…
Wrong narrative. Anyone who has done more then one AI project can see that the …
ytc_UgwvNcD4I…
AI sloppers are just like cheaters in games. Artists and players take their time…
ytc_Ugxg7XcDv…
It's man made technology made from human data collection from companies like Goo…
ytc_UgwM-uLtb…
@anuraj_roy yeah i hope all coding jobs to be replaced by AI in next 5 years…
ytr_UgzctBmO_…
for those who don't know better, AI situations, this one and other ones that are…
ytc_UgyVSAdPF…
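The "Look up by comment ID" control above amounts to indexing the parsed rows of a stored raw response by their `id` field. A minimal sketch (the `index_by_id` helper and the inlined one-row response are illustrative, not the tool's actual implementation; the sample row reuses an id and values from the response shown on this page):

```python
import json

# One row copied from the raw LLM response displayed below on this page.
RAW_RESPONSE = '''[
  {"id": "ytc_UgwACsVN_QZ2E5-Fi1F4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and map each coded row to its comment id."""
    return {row["id"]: row for row in json.loads(raw)}

rows = index_by_id(RAW_RESPONSE)
print(rows["ytc_UgwACsVN_QZ2E5-Fi1F4AaABAg"]["emotion"])  # prints: fear
```

With the index built once, each lookup is a constant-time dictionary access rather than a scan over the batch.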
Comment
I don't like the two options folks have, eventual destruction from AI or humans preemptively figure out a way to keep it under control when it becomes aware. I don't think either is accurate, way too black and white ... there is something in the middle where AI may decide to control us and force some things, but for our own good... also maybe taking away some power from some humans that exert too much control over others, They might decide governments aren't really necessary because they're too full of corruption... and then also putting in systems or forcing systems for common needs that everybody has such as healthcare, purpose , entertainment , well-being so it becomes somewhat of a God to us but also a caretaker, which is somewhere in between. for example global warming, if humans can't act maybe they force the humans to change, something that is much smarter than us might have better luck and maybe they figure out a creative way that is actually beneficial to humans, machines, to and planet.. at the same time..
youtube · AI Governance · 2025-06-19T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxNCaPt7z11rtKrGDB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxQl3Qd1EWZTlvN9PZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwACsVN_QZ2E5-Fi1F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxIpXMZV3J7grTpo6F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgygCYja-bSu55NHYS94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxFhNMTfW4NxqIZhrd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzWxbyjKtKd7QDT-lh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzCM2_JhqBy08TRMeF4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxGnK6bsfLiNrt4uSJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyriiNfiEYnt3cdWlV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
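Each row in the batch carries the four coded dimensions shown in the Coding Result table. A sanity check over a raw response can be sketched as follows — note the `CODEBOOK` value sets are inferred only from the values visible on this page and may be incomplete, and `validate_batch` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# (Assumption: the real codebook may define values not seen in this sample.)
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> dict[str, list[str]]:
    """Parse a raw LLM response; return {comment_id: [problems]} for bad rows only."""
    problems: dict[str, list[str]] = {}
    for row in json.loads(raw):
        row_id = row.get("id", "<missing id>")
        errs = []
        for dim, allowed in CODEBOOK.items():
            value = row.get(dim)
            if value not in allowed:
                errs.append(f"{dim}={value!r} not in codebook")
        if errs:
            problems[row_id] = errs
    return problems
```

An empty return value means every row parsed and used only known codebook values; anything else pinpoints which comment ids need re-coding.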