Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Tesla removed the radar since they could not mux the data with the vision sensor…" (ytc_UgzefrXgA…)
- "No surprise there, to me it's predominantly the "sheeple" personality who has st…" (ytc_Ugy1t6IJ9…)
- "I was yesterday stuck for 5 hours chatting with chatGPT4 about a problem that ne…" (ytc_UgzNX4VRj…)
- "When they start pumping out movies using AI actors, they can also focus group EV…" (rdc_luawulx)
- "🤔 This is where things get even more interesting. For the past decade, politicia…" (ytc_UgyaHh07R…)
- "In your comment, you ignored another humans + another COUNTRIES that are trying …" (ytr_UgyMcljUo…)
- "Am I the only one that finds it funny that most of the images in this video were…" (ytc_Ugx9g-Oi2…)
- "People posting more art gor ai companies to devour and train better art generati…" (ytc_Ugwovm_PA…)
Comment
OK, full disclosure I regularly listen to Mr. Ballin strange dark and mysterious… And murder podcast so forgive me if I find it interesting that all of the questions are about how humans would use AI for good??? Have we not thought about the villains of society??? The Voldemorts, Sauron’s… or ummmmmm…Like Agent Smith or the MF Terminator??? Ummmmm… hello. Ok, but for real, I feel like the questions are asked from the perspective that humans are naturally good, and we come to this challenge with the perspective of a human… so if a bad actor asked AI, how do I commit the perfect crime? (😬😬I watch a lot of crime podcasts where people tend to search the internet this Q). Soooooo like what if someone asked AI this question, does AI told the person I don’t think it’s a good idea and I’m not gonna tell you? Or does AI tell people the actual answers? I feel like one topic regarded how emotions play into AI… do certain things cause sadness, regret, empathy, sympathy, if that’s the case… if One understands these emotions, they also have to understand the opposite emotions. So AI understanding the opposite sides what makes us think they’re gonna choose the side we think is right based on morals and cultural understandings. Does AI feel sad if somebody dies as a result of them providing a platform for committing a crime? I felt a disconnect?. Ezra continues the conversation from the human side (devil’s advocate) but Mr. Yudkowski explains from the ai POV. lol, I need a (layman’s terms) translation to truly understand the analogies. This is a case where the parents find out their kids are smarter, and what would humans do when they find out they are smarter than the rest and possibly more influence, curiosity, lack of actual human contact…I mean, think young boys risk taking, but no emotion attached to consequence???
youtube · AI Governance · 2025-10-16T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxYZZUWf1e0BmiKVjB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyj61IC9y4O1eajFIx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw3j7ix_m4O6fjeX954AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy4-TMZuxbJYngQ8Mx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw58XvKpbBYlzWFchJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyGHqp3D-7GTb5h_Id4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwzjuhdXejZ9g-vYPJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxEeUhfSG0D_ImVweV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzohPviAeIyf6Vgdm54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxjhUMpviZsQdzeHKJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
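The raw response above is a plain JSON array, one record per comment, with the same four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and looked up by comment ID, assuming the array format shown; the function name and the truncation to two entries are illustrative, not part of the tool:

```python
import json

# Two entries copied from the raw LLM response above (the real
# response contains ten); the format is a JSON array of records.
raw_response = """[
  {"id":"ytc_Ugy4-TMZuxbJYngQ8Mx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxEeUhfSG0D_ImVweV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model output and index each record by comment ID,
    defaulting any missing dimension to "unclear"."""
    records = json.loads(raw)
    return {
        rec["id"]: {d: rec.get(d, "unclear") for d in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw_response)
print(codes["ytc_Ugy4-TMZuxbJYngQ8Mx4AaABAg"]["emotion"])  # fear
```

Indexing by ID mirrors how the viewer resolves a coded comment back to the exact model output that produced its row in the result table.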