Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "dude, in this video yt gave me ads about ai service,i i dont think yt get the au…" (ytc_Ugw69tn1f…)
- "Humanity is dooming itself. Creating machines to learn. The machines then learn …" (ytc_UgyK86HVC…)
- "There is no mention of Universal Basic Income. Why must people be required to ha…" (ytc_UgyWO8Ri-…)
- "There are a lot of arrogant artists and content creators on the internet. I stil…" (ytc_UgwJvKJz7…)
- "Come join the dark side as an electrical engineer then 😈 similar salaries, will …" (ytr_Ugwo1SKEX…)
- "The federal emergency would enable Trump to act in various ways independent of c…" (rdc_edqdvfu)
- "One is not born with talent, one must learn it through practice. The person maki…" (ytc_UgzINyfaA…)
- "This guy who post this video don't know about kush😂very sad right? This is not k…" (ytr_UgyqUENRO…)
Comment
While we discuss how AI can manipulate opinions, it's worth noting that imposing fear of AI is its own form of manipulation. Don't forget that the largest form of opinion manipulation - the media - is still very much man-made. Ultimately, the best and only approach to avoid being easily manipulated is to take everything with a grain of salt rather than fearing the technology.
I get the feeling that we should understand the tools we trust and work with. However, we probably lost the ability to truly "understand" machine learning models back in the early 2000s, and I don't see how we're going to figure it out anytime soon. After all, we've already spent thousands of years and countless attempts (across many different cultures, too) trying to figure out what a human is, to no avail. I don't really see why AI development should be stopped, especially when nobody told us to stop teaching kids or asked them to stop surfing the internet.
Though, I have to agree on the greed part.
Platform: youtube
Topic: AI Governance
Timestamp: 2026-03-18T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwdLuHp9Lv9bbrTUqt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxDUQzfWgFrvvgX3Yt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjYJtJSKTKW5f1rz14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzPRgtGPab3WwonOC54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw-C_H_PK0M8wMC0md4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzfktaYhpiXBWRHFrF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgztanRKlvfNYce4oaJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyNq2CrNGly81DdfvF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzR3Xx-7YvxPSjqPYB4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1st_oYOXXSpB_wJB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```