Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I do not in any way believe that self regulation is viable. I in no way believe that companies should be in charge of regulation. I get it that government is not capable at this point in time to put together regulations or rules or laws, for the purpose of regulating the safety of using AI. After all, the assumption that the volume of information translates into wisdom is a false belief and is supported only by hubris. All the data used to train AI is fundamentally the sum total of the written knowledge some of which is incorrect or even evil. We cannot afford to make the assumption that good will be the only outcome.
Source: youtube · Posted: 2025-05-11T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
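A coded record like the one in the table above can be checked for completeness before it is stored. Below is a minimal sketch: the four dimension names come from the table, while the validator function itself is an assumption, not part of the actual pipeline.

```python
# Coding dimensions taken from the result table above.
REQUIRED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def is_complete(coding: dict) -> bool:
    """True if every coding dimension is present with a non-empty value."""
    return all(coding.get(d) for d in REQUIRED_DIMENSIONS)

record = {"responsibility": "company", "reasoning": "deontological",
          "policy": "regulate", "emotion": "fear"}
print(is_complete(record))  # True
```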
Raw LLM Response
```json
[
  {"id":"ytc_UgyOq9n2-zQVO34-viZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw6WuctUNvct2zVt4x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy2UFYtpgquxati5MZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw93vWgEqkcZyX16654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZ67SQGXdX8RhDbNV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwXlft9THpRo0XRekF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyKaoov30uadXqWhtB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgytC9Sy5ai3DXaQ66N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw9GPjpySi4d5lvOwt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyfPNTqIt7V7FMyLIt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
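A batch response like the one above can be parsed and indexed by comment ID for the kind of lookup this page supports. The sketch below assumes only what the JSON shows (an `id` field plus the four coding dimensions); the helper function and variable names are illustrative, not part of the real tool.

```python
import json

# Coding dimensions as they appear in each row of the raw response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_response: str) -> dict:
    """Parse a raw batch response and index each coding by comment ID."""
    rows = json.loads(raw_response)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

# One row copied from the response above.
raw = '''[
  {"id": "ytc_UgytC9Sy5ai3DXaQ66N4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]'''

coded = index_by_id(raw)
print(coded["ytc_UgytC9Sy5ai3DXaQ66N4AaABAg"]["policy"])  # regulate
```

Indexing by ID also makes it easy to join the model's coding back onto the original comment records for auditing.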