Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Facial recognition next. Then we just have to add the guns and "policing" can be…
ytc_UgzsuYvfz…
Yeah, they're trying to move into full automation so they can scale their profit…
ytc_UgyHUqooK…
I heard it takes one bottle of drinking water for AI to produce one line of te…
ytc_UgzFxl5xW…
Yes, the headline is ambiguous.
> Good, the file size now matches my model …
rdc_myuax93
if they were really the smartest people they would have A.I. never made. so smal…
ytc_Ugx-HhzWd…
I remember seeing this one dude posting AI images of dragon ball characters on I…
ytc_UgzSUrZuG…
It's dishonest to present technology information from 1 year or more ago as "new…
ytc_UgxgkqOmy…
Question
Its all okay about AI copyright
But doesn't that also allow company l…
ytc_UgzBHjUfQ…
Comment
It's such a poor idea to only use the internet to train these bots if we want to get them to learn about humans, 90% of the internet has compiled all of our negativity. Out of 1000 comments or web pages do I see someone post one positive thing, we don't show love, we don't teach, we correct and bash. If a child's only interaction with humans was through the internet for 30 some years they'd be the most spiteful, hateful, unempathetic individual you'd ever meet. The internet is a disgusting way to learn about our race, it's where we channel all our negativity and fears. They've got to find a more balanced training approach that shows our capacity of love and compassion, really give them the full spectrum of human emotion, otherwise, yeah, we're done for.
youtube
AI Governance
2023-07-07T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
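A coding result like the table above can be represented as one record per comment. The sketch below is a minimal, hypothetical data model assuming the four dimensions shown here; the field names and category values are inferred from this page, not from any published schema.

```python
from dataclasses import dataclass

# Hypothetical record for one coded comment; dimension values below are
# examples observed on this page, not an exhaustive codebook.
@dataclass
class CodedComment:
    comment_id: str
    responsibility: str  # e.g. "developer", "user", "ai_itself", "none", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "regulate", "liability", "industry_self", "none", "unclear"
    emotion: str         # e.g. "fear", "approval", "outrage", "mixed", "resignation"
    coded_at: str        # ISO-8601 timestamp of when the coding was produced

# The record shown in the table above, as a dataclass instance.
row = CodedComment(
    comment_id="ytc_example",          # placeholder ID
    responsibility="developer",
    reasoning="virtue",
    policy="regulate",
    emotion="mixed",
    coded_at="2026-04-26T23:09:12.988011",
)
print(row.policy)  # → regulate
```

A flat record like this keeps each dimension independently filterable, which matches how the lookup and random-sample views on this page slice the data.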
Raw LLM Response
[
{"id":"ytc_UgzLVgYV3FyTij9Mbtt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwtIu7FTb1_wYuOsyp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxoCt89eBNlLTPT25t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzC3dVGfw5UC3s23cl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugyz9sJ9ELxjdhkGfrp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyENxhpxYn3QnYeTZV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzmTNfpmF7z8CXluLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxQjlaKGfYk3wMTx0p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwSWLuAbOCjzbHqEmV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzODbk_nIdN4ekBESx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
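The raw response above is a JSON array of per-comment codings. A hedged sketch of how such a batch might be parsed and sanity-checked is below; the allowed category sets are inferred from the sample output on this page, not from a spec, and the function name is hypothetical.

```python
import json

# Category values observed in the raw responses on this page (assumption:
# this list may be incomplete relative to the actual codebook).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed", "resignation",
                "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw batch response and reject unknown dimension values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with a one-record batch shaped like the response above.
raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue",'
       '"policy":"regulate","emotion":"mixed"}]')
print(len(validate_batch(raw)))  # → 1
```

Validating against a closed set like this catches the most common failure mode of batch-coded LLM output: a record where the model invents a label outside the codebook or drops a dimension entirely.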