Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples
- `ytc_Ugx_w0qyg…`: "I hate cleaning my house, toilets, mopping, cleaning up pet hair, etc. On the ot…"
- `ytr_UgykVw9y5…`: "Absolutely! Sophia's insights on balancing rationality and human oversight are s…"
- `rdc_cfl57k4`: "wtf did you merge ukraine and venezeula? make 2 seperate threads ffs, this threa…"
- `ytc_Ugy3bahUu…`: "AI has no context.1 minute in the video and AI is supportive. It is designed to …"
- `ytr_Ugx5F1jC2…`: "Fellow artist here, reason why I and many others dislike ai because it steals fr…"
- `ytc_UgygtCKVx…`: "I love ChatGPT or Claude for code. I never go bigger than a method though, but I…"
- `ytc_Ugzf_5XLy…`: "Douchebags like David Autor are just shills for big money. To compare todays sta…"
- `ytc_UgwoNZv6T…`: "If it boils down to a civil war where it’s us versus the billionaires and their …"
Comment

> Is it possible that maybe Bill Gates doesn't want regulation stifling the share price growth of Microsoft and diminishing his position as a shareholder? If there are any "good guys" in the world of AGI development, they should be focusing at least as much of their resources towards AI safety, alignment and ethical simulation as they are in product release and corporate growth. I would even argue 90% of their resource should go towards such goals, if "most" AI researchers truly believe AGI has a 10% chance of eradicating humanity. But alas, how much money is enough?...

Source: youtube · Posted: 2023-05-09T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
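A coding result like the one above can be sanity-checked before it is stored. The sketch below validates that each dimension carries a known label; the allowed sets are inferred from the values visible in this batch of responses, not from an official codebook, so treat them as assumptions.

```python
# Allowed label sets per dimension, inferred from the responses shown on this
# page (assumption: the real codebook may contain additional labels).
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def invalid_dimensions(record: dict) -> list:
    """Return the dimension names whose value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coded record from the table above.
record = {"responsibility": "developer", "reasoning": "virtue",
          "policy": "regulate", "emotion": "outrage"}
print(invalid_dimensions(record))  # [] -- all four dimensions are valid
```

A record with a typo such as `"policy": "regulated"` would come back as `["policy"]`, which makes malformed model output easy to flag.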
Raw LLM Response
```json
[
  {"id":"ytc_UgyhRtbqdhPlkxBtQaR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzLA-Ob-Z_ggQJvTWh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgySG_2san1IziVbiF94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzAiSdcg8LBhCDKrnl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzlcbWls4z1DsYsQxB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzEtjFPu7apeawicyF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgySsjHzA0HLp3y8E4l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyQ9dIV8EMIApDGgi54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzvTS93Dndjf-15YDB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz8ddAZqGxVj50Bw9Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
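Because the raw response is a JSON array keyed by comment ID, a per-comment lookup like the one this page offers reduces to parsing the array and indexing it. A minimal sketch (the two-record `raw_response` string is an abbreviated stand-in for the full array above):

```python
import json

# Abbreviated stand-in for the raw LLM response shown above: a JSON array of
# coded comments, one object per comment ID.
raw_response = """[
  {"id": "ytc_UgyhRtbqdhPlkxBtQaR4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzAiSdcg8LBhCDKrnl4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the coded records by comment ID for constant-time lookup.
codes = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = codes["ytc_UgzAiSdcg8LBhCDKrnl4AaABAg"]
print(rec["policy"])   # regulate
print(rec["emotion"])  # outrage
```

This matches the coding result table above: the selected comment's record (`policy: regulate`, `emotion: outrage`) is the fourth entry of the full array.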