Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- But to be fair every single invention thus far created more jobs than it took. I… (ytr_UgzUuCJFK…)
- Why not just ask the AI whether they categorize themselves as a being that shoul… (ytc_UgxRf2pnp…)
- People, think about it, it's always good that jobs become useless, that cars dri… (ytc_Ugxz01pJm…)
- @klarahaplova9098 Arguably, cells were (and still are) motivated only by stimulu… (ytr_Ugwbt_rTc…)
- I've seen AI "replace" jobs already. It's going poorly and it's not the AI that'… (rdc_k0ea3dt)
- Btw, I heard you can add ":before 2022" to limit the search to the time before A… (ytr_Ugw9DFZ1-…)
- And Joshua burnt Ai, and made it an heap for ever, even a desolation unto this d… (ytr_UgytaH3zA…)
- I agree / AI DONT CREATE / Its not inteliggent / Its justt running logic / Human do cre… (ytc_Ugy3vrSgg…)
Comment
I've seen a big problem with some people around me who I considered intelligent enough and might have a good sense of discernment, they are completely relying on AI's like ChatGPT, Gemini or Copilot, they are not confirming information and are losing the ability to identify the quality and veracity of the information, a lot of the information they receive is not even that accurate just depending on the person the AI responds in a more ‘personal’ way and they believe absolutely everything.
youtube | AI Governance | 2025-06-18T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
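The four coded dimensions above can be sketched as a small validation schema. The allowed value sets below are only those observed on this page (the values appearing in the raw response, plus `unclear` from the result table), so the real codebook may define more categories; `validate_coding` is a hypothetical helper, not part of the actual pipeline.

```python
# Minimal validation sketch for one coded comment.
# The value sets are inferred from this page only; the actual
# codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed",
                       "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed", "unclear"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is
    well-formed against the observed value sets."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

For example, the all-`unclear` row above passes validation, while a coding with an unknown category would be flagged.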
Raw LLM Response
[{"id":"ytc_UgzCSQjhkjp7UakVll54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-_BCMBia8hnb1U3J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJ3uQqtZv03RjsRL94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwRnMxjkVTC_vPqyBB4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwrR5vx8xbfqcloVwZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxAbnUSkUUUt472H0J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwgyK-ZZOlXreC0YJJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz2ot8rmUh9NQO3Dg94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzAsY9pJmBeKk6KX1R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxSH9sMa4yddxdVcNN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"})
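A batch response like the one above can be parsed into a lookup table keyed by comment ID. This is a sketch under two assumptions: the response is meant to be a JSON array (the output shown here ends with `)` rather than `]`, so a repair step for that specific malformation is included), and a comment whose ID is absent from the parsed batch falls back to all-`unclear`, which is one possible explanation for a result table like the one above; `parse_batch` and `lookup_coding` are hypothetical helper names.

```python
import json

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a batch coding response (a JSON array of objects with an
    'id' field) into a lookup table keyed by comment ID."""
    raw = raw.strip()
    # Repair the specific malformation seen above: a trailing ')'
    # where the closing ']' of the array should be.
    if raw.endswith(")"):
        raw = raw[:-1] + "]"
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

def lookup_coding(table: dict[str, dict], comment_id: str) -> dict:
    """Return the coding for a comment, falling back to all-'unclear'
    when the batch did not include that ID (an assumption, not a
    documented behavior of the pipeline)."""
    dims = ("responsibility", "reasoning", "policy", "emotion")
    return table.get(comment_id, {dim: "unclear" for dim in dims})
```

With this fallback, requesting an ID the model never returned yields `unclear` on every dimension, matching the shape of the result table above.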