Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgwdEqOvL…`: Until we automate those jobs as well…it is coming 😂😂😂. The best job is to save a…
- `ytc_UgxLTecsn…`: Modern "AI" is an LLM (Large-Language-Model) it's a talking machine, not thinkin…
- `ytc_Ugya7b16l…`: I'll be so glad when the AI bubble finally bursts and we can stop having to list…
- `ytc_UgxNIxcXB…`: The AI didn't fight because it feared death; it fought because it was programmed…
- `ytc_UgwnoKYJt…`: are ai driven cars any good in potentially very dangerous situations? which are …
- `ytc_UgywyPnou…`: I am so happy to see that people in the tech sphere are starting to face reality…
- `ytc_Ugw4u3lsB…`: The ai doesn't truly understand what it's doing and it shows. The movements are …
- `rdc_n5h58nv`: I use AI for software development every day. None of it is trustworthy. At least…
Comment

> My take on AI and ML is that we will need to have the human element regarding the preservation of human life. This applies to the moral and ethical understanding of Human Beings. Humans will never reach superintelligence unless humans evolve to become proficient or even more superior to AI.

Source: youtube · Cross-Cultural · 2025-12-21T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy_by7PCunjD_tUioR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwK4tAP-Oewzgj4FgR4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzgGy4nr1l2J35WXN54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwoHuSWPYhUCTFLrkR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwdKnMRSyr1VIgNyR54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugygb9qgubNJ3qBggm54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMJKgmyaMkWU83bAB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz6Q7iAyIfQSBCkIJp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzLQKVAbt54z0g2F3l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwh94sMm905IDOn5dF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
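A raw response like the one above is only usable if every row carries a valid code on each dimension. The following is a minimal validation sketch; the allowed value sets are assumptions inferred from the sample output shown here (the actual codebook may define more values), and `validate_response` is a hypothetical helper, not part of any existing pipeline.

```python
import json

# Assumed code vocabularies, inferred from the sample response above.
# The real codebook may include values not seen in this sample.
ALLOWED = {
    "responsibility": {"ai_itself", "government", "company", "developer",
                       "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with a missing id or an
    out-of-vocabulary code on any dimension. Returns the parsed rows."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing 'id': {row!r}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {value!r}")
    return rows
```

Running the raw response through a check like this before storing the coding result catches truncated JSON and hallucinated labels early, rather than letting them surface later as blank cells in the dimension table.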