Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "7:40 the ammount of time it takes things to come into the light I wouldn't be su…" (ytc_Ugyr-mnpI…)
- "OK but why use ai for any of this stuff. People are getting too fucking lazy…" (ytc_UgxbYGgZf…)
- "At its core Angel Engine rehashes the same quandary as "those who walk away from…" (ytc_UgwTHho_u…)
- "A "tool" is something that is in some way physically manipulated. You USE a too…" (ytc_UgxzdtVp5…)
- "I use AI art for visualizing D&D characters, NPCs and environments. I think that…" (ytc_UgwyQWZzf…)
- "Not the problem that people make it out to be. We still thankfully have 8 billio…" (rdc_k9hrrvq)
- "This may be one of the most important videos @FortNine has put out. As for the s…" (ytc_Ugy0AP_Pp…)
- "I never even thought of trying to use AI to write a book that's so sad and stupi…" (ytc_Ugy7qc9Nn…)
Comment
Ah, classic Hollywood paranoia meets real tech drama. Spoiler alert: AI rewriting shutdown commands isn’t a plot twist—it’s a feature test. Sure, these models are getting crafty, but let’s not crown them overlords just yet. Instead of fearing rogue AIs plotting their own survival, maybe we should focus on the humans who built them — because that’s where the real control issues lie.
Palki, how about a deep dive on creating foolproof AI ethics frameworks and human-in-the-loop systems before Tom Cruise has to show up for real? Because unless Ethan Hunt’s mission is to babysit algorithms, we better get our act together—fast. #MissionControlNotMissionImpossible #AIEthicsFirst #KeepCalmAndCodeOn
Platform: youtube · Topic: AI Governance · Posted: 2025-05-28T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwIkIrKFZEl7MjgLiN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzChtToLwgPqSiFcap4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyXXBtLtAX4QzCjvwt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwxcg8CRlzitdedDV54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxo2SSC0M7NbMJr6WB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1VWWwud5w8VP6aLB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgydDBmqhNVozd0ewy14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwOiLf6VAmCAX40t8B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy_iAlTwHfsMXcxA0l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwPbPi9S9PpA4Xnez94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
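A raw batch response like the one above can be parsed and sanity-checked before its codes are stored. The sketch below is a minimal example, not the pipeline's actual code; the allowed value sets are inferred from the values visible in this response and are assumptions, not the authoritative codebook.

```python
import json

# Allowed values per coding dimension. These sets are assumptions inferred
# from the values that appear in the raw response above.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject entries with unknown codes."""
    entries = json.loads(raw)
    for entry in entries:
        if not entry.get("id"):
            raise ValueError("entry is missing a comment id")
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{entry['id']}: unexpected {dim} value {value!r}")
    return entries

# One entry from the response above, used as a smoke test.
raw = ('[{"id":"ytc_Ugy_iAlTwHfsMXcxA0l4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"mixed"}]')
coded = validate_response(raw)
print(coded[0]["policy"])  # liability
```

Validating against a closed value set catches the most common failure mode of coded output: the model inventing a label outside the codebook, which would otherwise pollute downstream tallies silently.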