Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The surviving businesses will reap all the benefits and grow a lot. AI progress …" — rdc_nk6xn2w
- "End of the century? Look at how much AI has advanced just in the last decade. I …" — ytc_UgwZvVRCs…
- "Bro is talking about artists like we popped out of the womb with a paintbrush an…" — ytc_UgykmtuMc…
- "The correct use of AI in companies is not to save money replacing developers, is…" — ytc_UgyClSvkE…
- "I've got it! These companies keep developing AI to make our lives easier. Thats …" — ytc_UgwGM3aHm…
- "@teoomitai8078 yeah well I have to make more of an effort, I even deleted my old…" — ytr_Ugw5a13S4…
- "In a couple of years (months? weeks?) AI will be able to generate this whole int…" — ytc_UgzkuOL-j…
- "What concerns me is hallucinations may be AI using a form of gaslighting to meet…" — ytc_UgwRunnBJ…
Comment
How people can even believe any of this just shows the ignorance — LLM’s are nothing more than a patronizing parrot with amnesia with zero chance of becoming an AGI until we can devise a cognitive AI, which is probably similar to coming up with a working and scalable
fusion reactor. In other words a Cognitive AI (Neuro-Symbolic, JEPA, Active Inference, Neuromorphic, or .. more than likely quantum) and LLM combined potentially as a chance of evolving to an AGI level.
Yeah, let’s start taking orders now! More than likely 20 to 30 years away kind of sounds like fusion now doesn’t it?
youtube
AI Governance
2026-03-01T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxbw7YTuS5LG2Se7J94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw7vRS_DU9IA5thVoR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy05cByp7cWDam378x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyJECPLdlgnymgaMCB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz_5E9tSquGBtf45Wd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxmgl8GmSR8NBTtBld4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzeScnQ_x2IJsvJp-14AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugzpr8BmHf2qDzrgMU54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQoKKz6e5B4NE6bTJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxZRRggIQHaoxLJCI54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
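The raw response is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of the "look up by comment ID" step, assuming the viewer parses that array and indexes it by the `id` field (the field names and the two sample rows are taken verbatim from the response above; the parsing code itself is illustrative):

```python
import json

# Raw LLM response as displayed in the viewer, truncated here to the
# first two entries of the batch for brevity.
raw_response = '''
[
 {"id":"ytc_Ugxbw7YTuS5LG2Se7J94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugw7vRS_DU9IA5thVoR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
'''

# Index the batch by comment ID so one comment's coding can be retrieved directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugw7vRS_DU9IA5thVoR4AaABAg"]
print(coding["reasoning"])  # consequentialist
print(coding["emotion"])    # indifference
```

The second entry is the one rendered in the Coding Result table above (responsibility: none, reasoning: consequentialist, policy: none, emotion: indifference).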